Should PLM apps be jealous of Facebook?

I usually think I have nothing to add to discussions comparing PLM and social networking. Two recent facts have contradicted that opinion:

  • I had the chance to test the Facebook app for iPhone
  • I read an article showing Facebook connections as a map

I see two facts here:

  • Facebook offers a level of ergonomics in its iPhone app that is far beyond what our favorite PLM applications propose
  • Facebook fosters a level of collaboration that any PLM application owner should be jealous of

Is there a link between those facts?

Useful or full featured?

I found a recent post from Oleg with a funny picture. This picture says more than a long discussion about which is better to have:

  • a full featured PLM application
  • a useful application

Examples are:

[image of a full-featured option] or [image of a simple, useful one]

I don’t know about campers, but as a PLM user I know which one I would choose.

Unfortunately, most of the PLM applications we come across are of the first kind. Why? Some quick thoughts:

Complex business processes

Business processes are so complex that implementers simply choose to display everything to all users, lacking information about which functions each user or group of users actually needs.

IT design

PLM is mainly supported by IT departments, which either do not know the business processes or choose to mix many different business processes in the same application.

New capabilities

Introducing new functions without killing the previous, possibly obsolete, ones keeps increasing the volume. Always try to kill existing functions!

Lack of user feedback

I believe user feedback on enterprise applications is not today at the level of what can be found on the web. Today, the smallest shoe vendor in the US lets customers post comments about its products on its website. PLM vendors are far from that practice. Even if some surveys are sent once a year within a company, the feedback collected is not at a level where it can really be used.

What do you think?

Monitoring PLM data

Hello

I would like to talk today about PLM data monitoring. This is a real activity within a company, ranging from a data administrator examining dead data in order to cleanse it, to producing dashboards that show program follow-up or data usage across programs or countries. There are numerous examples where PLM data monitoring is required.

On the other hand, a PLM system has a complex data model, which may be only partially known by the monitoring user, so the monitoring activity requires dedicated tools and methodologies, and therefore time to develop, test, deliver and explain… This makes the activity difficult to sell to a manager, even though it would bring a lot of information on (a minimal sketch of such a report follows the list):

  • PLM application usage
  • PLM data usage
  • Product data trends
  • Project convergence
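
As an illustration, here is a minimal sketch, in Python, of the kind of "dead data" report a data administrator could produce; the extract file, the column names and the staleness threshold are of course hypothetical:

```python
import csv
from collections import Counter
from datetime import datetime, timedelta

# Hypothetical extract of PLM objects: one row per object with its type,
# owning program and last modification date (ISO format).
STALE_AFTER = timedelta(days=3 * 365)  # consider data "dead" after 3 years

def dead_data_report(extract_file: str, today: datetime) -> Counter:
    """Count objects per (type, program) that have not been touched for years."""
    dead = Counter()
    with open(extract_file, newline="", encoding="utf-8") as f:
        for row in csv.DictReader(f):
            last_modified = datetime.fromisoformat(row["last_modified"])
            if today - last_modified > STALE_AFTER:
                dead[(row["type"], row["program"])] += 1
    return dead

if __name__ == "__main__":
    report = dead_data_report("plm_objects.csv", datetime.now())
    for (obj_type, program), count in report.most_common():
        print(f"{obj_type:20} {program:15} {count:6} stale objects")
```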

How to solve this challenge? I guess modern search engines could help, being able to give the user more information, to discover not only the data but also the real business trend behind the data.

What do you think?

Semantic search, Classification and Data migration: the winning team

Seven years ago, I had a mission to perform data migration from one system to another. One of the major challenges was to import parts into a hierarchy of categories that had been designed in the new system. Analyzing the legacy data, we found that another hierarchy had been set up there, but it had more than 900 entries, so users had mostly picked a wrong category, making this information totally unreliable.

So we figured we could try to use the description field of the parts to classify the objects, guessing that users had used meaningful words to describe them. The method still had to be found.

So I imagined an algorithm to do so. The method was to analyze the words of the description and compare them to a dictionary, applying a multiplication factor to each word depending on its position in the description. In parallel, I built the technical dictionary by analyzing the descriptions of roughly 500,000 parts and finding the most used words.
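
Here is a minimal sketch of that idea in Python; the dictionary content, the categories and the position weight are purely illustrative, not the ones used at the time:

```python
from collections import defaultdict
from typing import Optional

# Illustrative technical dictionary: word -> category it suggests.
DICTIONARY = {
    "screw": "fasteners",
    "bolt": "fasteners",
    "bracket": "structural",
    "harness": "electrical",
    "connector": "electrical",
}

def classify(description: str, position_weight: float = 0.8) -> Optional[str]:
    """Score categories from the words of a description.

    The first word gets weight 1.0, the second 0.8, the third 0.64, etc.,
    so that words placed early in the description count more.
    """
    scores = defaultdict(float)
    weight = 1.0
    for word in description.lower().split():
        category = DICTIONARY.get(word)
        if category:
            scores[category] += weight
        weight *= position_weight
    if not scores:
        return None  # left for manual classification by an expert
    return max(scores, key=scores.get)

print(classify("BRACKET SUPPORT ENGINE"))  # -> structural
print(classify("WIDGET XYZ"))              # -> None (manual classification)
```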

I showed that more than 75% of the parts could be automatically migrated using this algorithm. For the remaining 25%, I built an application which provided the list of parts to classify and the possible categories available in the new system, and we asked experts to manually classify those remaining parts. Having done that, I enriched my dictionary with new words whose meaning I had not been able to imagine (including some funny ones…). With the new dictionary, we were able to automatically classify more than 90% of the parts.

Then we set up an automatic procedure using this algorithm to migrate data at night from the legacy system to the new one, as it had been decided that both systems would run in parallel for a given period of time. This system ran for one year, until all project data had been migrated to the new system. Then the migration system was stopped and archived. I had created a semantic search engine without knowing it.

Years later, I now have to implement a search engine based on the Exalead technology. This technology offers semantic options, and hopefully I can reuse the dictionary I built seven years ago to bring more value to this new technology.

My conclusion today is that there are several lessons I learnt from this experience:

  • semantic search can help migrate data
  • semantic search can help classify data
  • data migration activity can bring value for future activities
  • companies should pay attention to building technical dictionaries, compiling the words that users use every day

Quick pick: look at the interview of John Alpine on Deelip.com

Hello

I had the chance to read the interview of John Alpine, VP of R&D at Spatial. He says interesting things about collaboration in the CAD domain.

Do we need SBA for Product data?

I recently had the chance to work on a project to rebuild part of the information system. This project brought together several applications serving different user communities such as engineering, program, purchasing and more. One of the objectives of the project was to improve access to data across the different applications. Indeed, it is a common need for an average user to quickly find the data he is looking for, even his own data. Working with an increasing number of applications generating more and more data, users start to feel anxious simply finding the data they generated some months ago. And this is of course even truer for data generated by others.

It was strange at first to admit that most of the existing applications contained their own search engine but were not able to satisfy people; only basic search methods were used, because they are known and safe, like searching by number. This method is frustrating for most people, who do not know the numbers. The other approach is to build classical search methods based on navigation paths across the application, paths which are simply not manageable by users who do not use the application every day. A solution consists in implementing those complex search paths as specific search functions, but this does not satisfy all needs and constraints.

So we turned to a new technology, Exalead. This application came from a completely different world, the web world, trying to solve the issue of managing huge volumes of data from different sources, and providing not a perfect result like any existing search engine, but allowing the user to filter data using qualifiers found in the indexed data.
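
To make the principle concrete, here is a tiny sketch of faceted filtering; this is not Exalead itself, just an illustration with invented documents and facet names:

```python
from collections import defaultdict

# Illustrative documents coming from different applications.
documents = [
    {"id": "ENG-001", "source": "engineering", "country": "FR", "text": "pump housing casting"},
    {"id": "PUR-042", "source": "purchasing",  "country": "DE", "text": "pump housing supplier quote"},
    {"id": "PRG-007", "source": "program",     "country": "FR", "text": "pump qualification plan"},
]

def search(query, **facets):
    """Full-text match on the query, then filter on facet values (source, country...)."""
    hits = [d for d in documents if query.lower() in d["text"]]
    for facet, value in facets.items():
        hits = [d for d in hits if d.get(facet) == value]
    return hits

def facet_counts(hits, facet):
    """Show the user how the results spread over a facet, so he can refine."""
    counts = defaultdict(int)
    for d in hits:
        counts[d[facet]] += 1
    return dict(counts)

results = search("pump")
print(facet_counts(results, "source"))      # {'engineering': 1, 'purchasing': 1, 'program': 1}
print(search("pump", source="purchasing"))  # only the purchasing document
```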

This brought back to me numerous old situations where I was trying to figure out how to design data models and associated business processes, but when in the end I wanted to retrieve the data created, I was simply not able to, because the way to search for data had to be the one defined when designing the application: by number, by state,…

With that application, I could imagine searching data in a transversal way: not the way the application was designed, but the way users had actually used the application to enter data, which may be different. Whether they enter the name of a customer in the dedicated field or in the description of the object no longer matters. And we know how much time we spend defining methodologies and control methods, telling the user to enter data following very specific rules, with a dash and no space, no slash, a # between the first and second terms, and so on.

This confused me at first and made me take a step back. Of course, the Google experience came before, and for personal use we are now used to a single search field to find anything. But one objective of PLM applications is usually to provide the ability to reuse data, and search capability is a key tool for that purpose. This new way of retrieving data could be a source of innovation for product development activities, because when we think reuse, we think of reusing exactly the same data. In fact, a lot of data exists today, and while in the past it was natural to start with a brainstorming session among a group of experts to design a new product and get ideas from them, now part of the building blocks for a new product are clearly available in the cloud, private or public. The important thing now is to have the right tool to find this data, or at least similar data, and not spend too much time searching for it only to end up recreating it.

But the most exciting characteristic of Exalead was the possibility to investigate simplifying the semantic challenges that a company with many sites worldwide has to manage every day, with many languages and many different wordings. This may lead to a new trend: building CPI, Cloud-based Product Innovation.

So I see clear opportunities in implementing SBA (Search Based Applications) for product data in the future. What do you think?

SOA for PLM applications (and others?) – Is it really a good idea? Part 2

I would like to come back to the subject initiated in the first part some weeks ago, where I presented my vision of what SOA is and how it could change the way we design an application.

I would now like to take a step back and look at the entire network of applications used by a given community. CIOs have to manage that network and work with business departments to manage the data flow between these applications. It is a tough subject, especially in big organizations, and CIOs have to focus on the right way:

  • to support business processes
  • to master the cost of the whole network

Numerous methods have emerged in the past years to target those objectives; they can be summed up as BPM (Business Process Management or Modeling). From my point of view, there are three topics to address:

  • which data to export (business data and business decisions)
  • how to transport that data
  • where to transport that data

For me, the benefit of SOA is being able to forget the last topic. Why? Mastering the whole network means trying to reduce the number of connections between systems. Several ideas come up to reach that objective: defining the right data flows and eliminating the minor or bad paths, which leads to business process reengineering initiatives.

In today’s world, this kind of schema definition has a short lifetime, and the business decisions may have changed by the time the whole schema is ready to be implemented or enhanced.

So for me, CIOs should focus on the first two points:

  • to establish a communication standard over the network, to support data and decision exchanges
  • to ask application owners to publish the information that could bring value to other communities.

This leads to published data. If one or several application owners need this data, or only a part of it, it is up to them to take it or to filter it. It will not change the connection point.
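
As a minimal illustration, assume a hypothetical engineering application that publishes its released parts as JSON: the publication point stays the same whoever consumes it, and each consumer filters on its side.

```python
import json

# Hypothetical data published by the engineering application owner.
# Consumers (purchasing, manufacturing...) filter it on their side;
# the publication point does not change when a new consumer appears.
published_parts = [
    {"number": "P-1001", "state": "Released", "maturity": "serial", "weight_kg": 1.2},
    {"number": "P-1002", "state": "In Work",  "maturity": "proto",  "weight_kg": 0.4},
]

def publish():
    """Expose the published data as a JSON document (the connection point)."""
    return json.dumps(published_parts)

# A consumer, unknown to the publisher, takes only what interests it.
released = [p for p in json.loads(publish()) if p["state"] == "Released"]
print(released)
```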

System architects will then only have to define which system masters which data. That’s where the mindset has to change, because it is unusual to publish data without knowing who will use it. It may be a first pragmatic step towards killing silos, i.e. pushing people to think about which part of their data could interest other communities.

What do you think?

Paris at Whitsun – A life experience

Hello

Today’s post is about Paris. As I wished to get some pictures for my next post, I played the tourist in Paris on Whit Monday, as the weather was really nice. And I would like to share these images with you. A real experience!

SOA for PLM applications (and others?) – Is it really a good idea? Part 1

[Image: Charles Darwin]

In the IT domain, a new trend chases the previous one, because companies cannot support/analyze several trends in parallel, each of them having its own life(cycle?) and transformations, while companies/vendors/integrators are adopting/rejecting them.

The current trend is cloud computing and SaaS (Software as a Service), while the previous trends were web 2.0, SOA and process orchestration. I would like today to come back to one of them, SOA, or rather a more technical view of it, implemented through the so-called web services. It’s an old one, but I was surprised to see it ranked number one among the “Tools that count” in the last McKinsey Quarterly.

The web services concept, introduced about 10 years ago, was based on defining standards-based data exchanges between the new internet-based applications, mostly applying pre-defined XML grammars, and also defining a standard protocol to support the data flow. This approach was mostly inspired, in my opinion, by the lesson learned by the IT community over decades of implementing point-to-point relations between applications and struggling simply to maintain the different interfaces connecting a given set of applications: defining test scenarios and, each time one of the applications was modified, checking that the application network was not damaged by a localized change (in the IT landscape sense). This cost a lot of money, while the business need to maintain such connections was very real.

So web services arrived, supported by the massive adoption of a new standard for everything, the XML format. Compared with flat files, XML added the capability to describe a node-relation data model, meaning the capability to represent an N-M cardinality between objects, which is not easy to describe in a CSV format, which basically describes a 1-1 cardinality between objects (a table).
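
As a small illustration, with invented part and assembly names, the same part can appear under several assemblies in an XML document, an N-M relation that a flat CSV table can only express by duplicating rows:

```python
import xml.etree.ElementTree as ET

# Invented example: two assemblies sharing one part (an N-M relation).
assemblies = {"ASM-A": ["P-1", "P-2"], "ASM-B": ["P-2", "P-3"]}

root = ET.Element("productStructure")
for asm, parts in assemblies.items():
    asm_node = ET.SubElement(root, "assembly", number=asm)
    for p in parts:
        # The relation is carried by nesting; P-2 is referenced by both assemblies.
        ET.SubElement(asm_node, "uses", part=p)

print(ET.tostring(root, encoding="unicode"))
```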

But one underrated thing inside web services was the introduction of a standard protocol to support the data flow, meaning standardized connection points for applications and the publication of those services in a services dictionary. That was WSDL, SOAP and UDDI… It gives a developer the ability to “discover” a web service simply by sending a standardized request to the application, in order to “understand” the service prior to using it, without having to change anything in the application providing the service.

So people took the opportunity to build interfaces between systems using this new technology, simply because it was cheaper than defining a proprietary language between two applications. But I think that using web services to build point-to-point interfaces is not using the technology to its full potential. That’s the question I have today: why are web services really interesting to use, ten years after their introduction?

The definition of a web service is not to define an interface between two points, meaning a start, an end and the relation, but simply to define a point, the connector. And so, simply by using the underlying standards, other applications may or may not use the service you defined.

Let’s take the example of a number generator, which is a basic building block of our PLM applications, but also of many other applications like ERP, CRM and more. If you build an independent application whose objective is to generate numbers for business objects, you need to:

  • define the different objects for which someone may need a number
  • define the numbering scheme for those objects
  • define the way other applications will get a number for the business objects they manage

The third point is where you need to define the data exchange protocol, and where a web service can be used. Why is a web service useful in this example? Because you can build your connector alone, without the requesting application(s). Simply by using web services standards, you can simulate the requesting application(s). The day the requesting application(s) are ready to request a number, your application will be ready, and you will not have to change a single line of code, simply because you prepared the connection. The work required afterwards is to make the connector official and maintain it, not to maintain an interface with a well-known requester.
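
As a very small sketch of that principle (plain HTTP and JSON rather than SOAP, with invented numbering schemes), a standalone number generator could look like this, built and testable before any requesting application exists:

```python
import json
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.parse import parse_qs, urlparse

# Illustrative numbering schemes per object type.
SCHEMES = {"part": "P-{:06d}", "document": "DOC-{:05d}"}
counters = {name: 0 for name in SCHEMES}

class NumberGenerator(BaseHTTPRequestHandler):
    """Answers GET requests like /number?type=part with the next number for that type."""
    def do_GET(self):
        obj_type = parse_qs(urlparse(self.path).query).get("type", [""])[0]
        if obj_type not in SCHEMES:
            self.send_response(400)
            self.end_headers()
            return
        counters[obj_type] += 1
        body = json.dumps({"number": SCHEMES[obj_type].format(counters[obj_type])})
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(body.encode())

if __name__ == "__main__":
    # Any future application can request a number; nothing here has to change for it.
    HTTPServer(("localhost", 8080), NumberGenerator).serve_forever()
```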

This pushes a mindset change in IT departments: focus on connection points only, not on the full data transfer process. This capability provides great flexibility inside IT organizations, where you can focus teams on their technology domain, only making sure that they use the web standards and plan for connections with other technologies. This is SOA.

In the current world, where cost saving is a continuous challenge, few companies still have a system architect team defining a corporate business data model and maintaining complex data model diagrams that nobody understands except themselves, simply because it is now impossible to justify internally. So current organizations fall back into a list of silos, because the transversal connections between teams which previously organized the corporate IT strategy no longer exist. A new trend therefore appeared recently, to “kill silos”, proposing methods and tools to help people work together without adding new costs for that needed communication. And that’s where SOA can help, simply by asking silos to define the requests they could address to other silos. By the way, it should push people to better evaluate the services they could provide to others, compared with existing services. Taking care of who will need, and be happy with, your information, your application and its users should result in better support of the business process, and so increase its ROI.

What does it mean for our PLM applications? It means we should address an additional topic when we specify the implementation of a given business object:

  • Numbering scheme of the object
  • Properties on the object
  • Relations with other objects
  • Lifecycle of the object
  • Methods of the object
  • Reporting on the objects
  • Need to send this object to another data source?

The technical implementation should then take care of the last topic, defining the needed web services for current and future needs.
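
For instance, the specification of a business object could simply carry that additional topic explicitly (illustrative structure and field names):

```python
# Illustrative specification template for a business object,
# with the additional "publish" topic made explicit.
part_specification = {
    "object": "Part",
    "numbering_scheme": "P-{sequence:06d}",
    "properties": ["name", "description", "weight_kg"],
    "relations": ["usedInAssembly", "describedByDocument"],
    "lifecycle": ["In Work", "Released", "Obsolete"],
    "methods": ["revise", "release"],
    "reporting": ["parts per program", "release lead time"],
    # The additional topic: does this object need to be sent to another data source?
    "publish": {"needed": True, "web_service": "getReleasedParts", "trigger": "on release"},
}
```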

Because there is a rule: everything that may happen in the future will, for sure, happen. In today’s world, if we wait for someone to come to us with a request before starting to design the solution, it is simply too late.

What do you think?

Product driven, Program driven, Concept driven, and more.

Hello

Today’s post is about trying to find commonality when selecting/defining a PLM application. Or, in other words, trying to define classes of enterprises which may benefit from using the same software to build their PLM application. Vendors usually market their offers by industry domain, like automotive, A&D, recipe, etc. This segmentation is usually not good enough, since several types of businesses may exist within the same industry, and conversely the same business can exist in different industry domains. On the other hand, a lot of literature exists about company profiles, mainly to determine whether a company is engineering driven or manufacturing driven, which often means checking whether it needs an ERP or a PLM strategy. I never liked this second segmentation, simply because a company obviously usually needs both. Nevertheless, a company cannot do everything at the same time, so at some point the question has to be answered.

This topic seems trendy today, as John Stark tries to address the lack of standards in the PLM industry.

So I would like today to look at that question, but from a different angle, which is the product focus. Because a company may have a specific way of building a new product:

  • Product Driven. I call Product Driven a company which builds new products by assembling existing ones, i.e. with a large amount of reuse across programs.
  • Concept Driven. I call Concept Driven a company which has no reuse of products, but reuses concepts already developed. A concept can cover assembly methods, architecture, or manufacturing processes.
  • Program Driven. I call Program Driven a company which builds a new product for a given program, not necessarily reusing parts of an existing product, nor reusing a specific concept existing on the shelf.

For sure, this classification is quite theoretical, as a company may have several businesses oriented differently, or businesses which are a mix of the above categories. Those companies may in the end need or wish to select one or several approaches and start to homogenize practices and applications, but this is another story.

That being said, today’s game is to try to associate a dedicated PLM technology with each category above.

Do those product categories form a closed set? Do PLM technologies exist for each product category?

What do you think?
