(The original version of this article was published on LinkedIn.)
In this article, I would like to illustrate how we think PLM can exploit the digital thread and the digital twin. To do this, I first need to provide a concise definition of what these concepts mean, and I will argue that this definition relates strongly to the notion of knowledge graphs. To illustrate this, I will first report on how we solve graph-based use cases for our customers and then show how an incremental approach leads to a step-by-step coverage of the overall product lifecycle by a product lifecycle knowledge graph. Moreover, I will explain how graph-as-a-service interfaces can be used to feed business intelligence and advanced analytics. This is followed by a generalization in which the technological concepts digital thread and digital twin, defined by means of graph terminology, turn out to be major enablers for a successful implementation of PLM as a management method.
As our customers operate on a brownfield of data, our focus is on the automatic computation of graphs derived from the data stored in the given authoring systems. A project usually starts from a set of graph-based use cases that need to be implemented. Such use cases are typically driven by an information demand from one or more sets of user roles. For instance, a design and release engineer - an engineering role responsible for a technical component within the entire V-model - naturally needs data from different systems to form his or her business context: e.g., the component requirements from the requirements management system, the assemblies the component is part of from the BOM, the release status from a business warehouse, etc. The component node of the graph then represents the core of a digital twin subgraph for which this engineer is responsible. We then supply the engineer with a dashboard that helps him or her manage the components he or she is responsible for.
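To make this concrete, here is a minimal sketch of how such a business context can be assembled as a subgraph around a component node. All system names, node IDs, and relation names are invented for illustration; a production implementation would extract them from the actual authoring systems.

```python
# Sketch: assembling a design engineer's business context as a subgraph.
# Node IDs, source systems, and relation names are hypothetical.

from collections import defaultdict

class ContextGraph:
    """A minimal labeled graph: nodes carry attributes, edges a relation."""
    def __init__(self):
        self.nodes = {}                 # node_id -> attribute dict
        self.edges = defaultdict(list)  # node_id -> [(relation, target_id)]

    def add_node(self, node_id, **attrs):
        self.nodes.setdefault(node_id, {}).update(attrs)

    def add_edge(self, source, relation, target):
        self.edges[source].append((relation, target))

# Extraction step: each authoring system contributes nodes and links.
g = ContextGraph()
g.add_node("CMP-4711", kind="component", source="PDM")
g.add_node("REQ-102", kind="requirement", source="requirements management")
g.add_node("ASM-99", kind="assembly", source="BOM")
g.add_node("REL-2024-05", kind="release status", source="business warehouse")

g.add_edge("CMP-4711", "satisfies", "REQ-102")
g.add_edge("CMP-4711", "part-of", "ASM-99")
g.add_edge("CMP-4711", "has-release", "REL-2024-05")

# The component node is the core of the engineer's digital twin subgraph:
context = {rel: g.nodes[tgt] for rel, tgt in g.edges["CMP-4711"]}
```

A dashboard for the design and release engineer would then simply render `context`, regardless of which backend each fact originally came from.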
Our experience shows that the initial knowledge graph evolving from this first set of use cases is also the beginning of a journey on which our customers discover and add new graph use cases for further business roles. In many cases, this implies adding new data sources to be covered by the graph. Even completely different enterprise functions might come up with their own demands: e.g., procurement asks for alerting on engineering changes in order to reduce part scrapping, substantially save costs, and provide production with the proper supply. Important success factors for such an incremental model are:
When these requirements were met, we saw demand grow to several dozen apps on top of the existing graph within a very short time. But this is only half of the story. One could say that this alone helps customers leverage the investments made in their data infrastructure, due to the reusability potential the graph approach offers.
However, if we look at user roles in core business processes (e.g., engineering roles), they might only occasionally need certain context information from other data systems. An example could be a user sitting in front of his or her aftersales product database, focusing on a specific glass pane. He or she might need data from third-party applications such as “where used”, “release date”, “where produced”, etc. In this case, it would be cool if the needed data found the user rather than the user searching for the data. Ideally, context-based, third-party knowledge should be provided through the authoring system where it is needed. For the user, it should feel seamless, as if the added context knowledge were provided by the system itself. This requires the graph to be easily embeddable in the given authoring systems. Whether an application needs to be designed on top of the graph or as a service embedded in an authoring system may depend on the extent to which the user essentially needs cross-linked data. Yet it may also be driven by the need to simplify the IT landscape. A logical consequence of this multifaceted usage of the business context is hence to offer a graph-as-a-service interface by which any kind of application, or even machines, may profit from the business context knowledge stored in the graph. Central to this design is that the graph represents a linked data layer abstracted and decoupled from the actual data.
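A graph-as-a-service lookup can be sketched as a single query function over the linked data layer. The graph content, object IDs, and relation names below are invented for illustration; the point is that callers receive plain data and never touch the source systems.

```python
# Sketch of a graph-as-a-service lookup against the linked data layer.
# GRAPH stands in for the computed knowledge graph; its content is invented.

GRAPH = {
    "GLASS-PANE-7": {
        "where-used": ["DOOR-ASM-3", "DOOR-ASM-4"],
        "release-date": ["2023-11-02"],
        "where-produced": ["PLANT-HAMBURG"],
    },
}

def get_context(object_id, relations=None):
    """Return the cross-linked context of an object as plain data.

    Any client - a dashboard, an authoring-system plug-in, or a machine -
    can consume this without knowing which backend holds each fact.
    """
    node = GRAPH.get(object_id, {})
    if relations is None:
        return dict(node)
    return {rel: node.get(rel, []) for rel in relations}
```

Served over HTTP, this function becomes the graph-as-a-service endpoint; called from a plug-in inside the aftersales product database, the same lookup makes third-party knowledge appear as if the authoring system provided it itself.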
a digital twin is the virtual representation of a product, asset or system, which exactly mimics the physical object with current, as-built and operational data,
whereas the Digital Thread
refers to a communication and data flow framework that allows an integrated view of a product’s or asset’s data throughout its complete lifecycle.
Rainer Stark of Fraunhofer IPK defines:
A digital twin is a digital representation of an active unique product (real device, object, machine, service, or intangible asset) or unique product-service system (a system consisting of a product and a related service) that comprises its selected characteristics, properties, conditions, and behaviors by means of models, information, and data within a single or even across multiple life cycle phases.
If we translate these statements into graph language and combine them with the incremental, constructive approach to generating an enterprise knowledge graph, then we may conclude that a Digital Thread represents the graph data model of a Digital Twin. Why is that so? We might argue that establishing the graph is nothing but an attempt to reconstruct connectivity out of disconnected data resulting from a complex product lifecycle process. This posits the connectivity as already existing but not explicitly captured.
A concrete Digital Twin might then refer to any kind of material or immaterial object, depending on the insight one is interested in. In the case of airplanes or cars, it might be the entire car or airplane, a component, or a service. In the case of a pandemic, it might be the representation of the world, a country, a region, or a locality, together with disease symptoms and personal data (think of chains of infection). In terms of graphs, it may be defined by a set of semantic relations such as “part-of”, “used-in”, “belongs-to”, or any kind of rule that specifies how an object forms the center of a set of relations that make sense to the observer. Thus, there is some relativity in the definition of what a Digital Twin might be, and, correspondingly, this will affect its defining graph model. Following this logic, concrete graph use cases like the ones exemplified above exploit Digital Twins (graph instances of a Digital Thread). If the notion of a Digital Thread is rightfully seen, as defined by many authors, to reach over the full lifecycle from ideation through design (or engineering), manufacturing (or production), operation, maintenance (or service), and retirement, then any graph-based application or embedded service essentially serves the management of the product lifecycle.
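The idea that a twin is defined relative to a focal object and a chosen set of semantic relations can be sketched as a small traversal: the twin is the subgraph reachable from the focal node over exactly those relations. The edge data below are hypothetical.

```python
# Sketch: a Digital Twin as the subgraph reachable from a focal node over a
# chosen set of semantic relations. Edge triples are invented for illustration.

EDGES = [
    ("wheel", "part-of", "axle"),
    ("axle", "part-of", "car-4711"),
    ("car-4711", "belongs-to", "fleet-A"),
    ("car-4711", "sold-in", "market-EU"),  # outside this twin's relation set
]

def twin_subgraph(focus, relations, edges):
    """Collect all edges transitively connected to `focus` via `relations`."""
    keep, frontier, seen = [], {focus}, set()
    while frontier:
        node = frontier.pop()
        seen.add(node)
        for s, rel, t in edges:
            if rel in relations and node in (s, t) and (s, rel, t) not in keep:
                keep.append((s, rel, t))
                frontier.update({s, t} - seen)
    return keep
```

Choosing `{"part-of", "belongs-to"}` as the relation set yields one twin of `car-4711`; a different observer, interested in market data, would choose a different relation set and obtain a different twin of the same object - which is exactly the relativity described above.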
The evolutionary graph construction process can thus be interpreted as converging towards the product lifecycle knowledge graph. Based on such a knowledge graph, ABB’s vision of a
capability to refer to data stored in different places from one common digital twin directory enables simulation, diagnostics, prediction, and other advanced use cases
can quickly become a solution if the above-mentioned success requirements are met. The directory then becomes a full graph.
In his blog post “Digital Thread and Digital Twins - are those new names to replace PLM?”, Oleg Shilovitsky picked up the discussion of whether PLM should be replaced by such new concepts. I think this reflects that there is quite some conceptual confusion in the community about topics such as “Digital Twin”, “Digital Thread”, “Systems Engineering/MBSE”, and “PLM”. But if we have a concise definition of what a Digital Thread and a Digital Twin mean, as given above, then applying graph technology is the natural and appropriate way to implement them, and PLM simply makes use of a holistic form of data representation. This pays tribute to the originally intended scope of PLM as an integration method for people, data, processes, and business systems, and as a provider of a product information backbone. How does systems engineering fit into this framework? Very well, because it is a design methodology in which data from domains across the whole lifecycle play a decisive role. The cross-domain provision of data along the lifecycle, in turn, is the essential task of PLM and even more so of the SysLM approach. The above-mentioned evolutionary graph construction process supports the incremental construction of the holistic information backbone required by PLM. Systems engineering is often the starting point of this PLM journey. In an article on a German PLM blog (unfortunately only in German), Markus Ripping of Sartorius advises against inventing new buzzwords.
He is right, there is no need!
Michael Grieves, John Vickers: “Digital Twin: Mitigating Unpredictable, Undesirable Emergent Behavior in Complex Systems” (excerpt), https://www.researchgate.net/publication/307509727_Origins_of_the_Digital_Twin_Concept
Allan Bachan: March/April 2020, https://www.aircraftit.com/articles/digital-threads-and-twins-in-mro/
Markus Ripping: https://www.plm-blog.com/__trashed-2/
Oleg Shilovitsky: “Digital Thread and Digital Twins - are those new names to replace PLM?”, http://beyondplm.com/2020/05/07/digital-thread-and-digital-twins-are-those-new-names-to-replace-plm/
Oleg Shilovitsky: “Digital PLM - A Technology Looking For New Customers?”, https://www.linkedin.com/pulse/digital-plm-technology-looking-new-customers-oleg-shilovitsky/
Header: © 4th Life Photography – stock.adobe.com.
Infographic: © CONWEAVER