
Gartner Emerging Technologies: Knowledge Graphs Become Central Component for Data Fabric

Aug 28, 2020
Dr. Thomas Kamps
CEO CONWEAVER GmbH

(The original version of this article was published on LinkedIn.)

According to the Gartner hype cycle for emerging technologies, "ontologies and graphs" fall into the trough of disillusionment, whereas "knowledge graphs" are the essential value-creating components of the data fabric. This raises the question of how the two methodologies for knowledge representation differ, because mathematically both outcomes are graphs.

Knowledge graphs as a central component of the data fabric
Forrester Research Big Data Fabric Architecture (left) Applied to Product Lifecycle (right)

The main purpose of the data fabric is to act as the IT platform that enables company-wide access to data, supporting advanced analytics as well as data provision for the business roles along the lifecycles. The "control panel" image on the right of the header image above can be interpreted as an application of the Forrester Research Big Data Fabric reference architecture to product lifecycle management (PLM). In other words, the reference architecture can be applied to any industrial lifecycle, such as the asset lifecycle, customer lifecycle, data lifecycle, or application lifecycle, and this in turn applies across different industries.

In the case of the product lifecycle, the data fabric delivers the digital twin (all connected instances of artefacts relating to a product, a component, etc. along the lifecycle), which makes it an enabling technology for product lifecycle management (PLM). As the PLM example shows, the data fabric is a company-wide platform. Hence, knowledge graphs need to cover a broad range of data along the different lifecycles. This is only possible by applying automated techniques to compute them from the given brownfield data. It is by no means sufficient to simply translate data joins stored in authoring systems into linked data, because it is the cross-connection of the data sitting in silos that matters. This kind of connectivity is captured by industrial knowledge graphs, which can easily connect billions of objects. Such prerequisites impose strong demands on the data processing that creates the knowledge graphs: it requires configurable analytics to deal with idiosyncratic customer data. A manual approach to creating large-scale knowledge graphs is thus out of the question.

On the other hand, ontologies can be used to conceptually model specific aspects of the world, called domain ontologies (for example, a domain ontology for the power train or for infectious diseases). This kind of in-depth modelling is obviously not what is needed for the industrial-scale creation of business context by means of linked business data. In summary, industrial knowledge graphs must meet the following technical criteria to be used on an industrial scale (a minimal sketch of the cross-silo linking idea follows the list):

  1. Knowledge graphs must connect a broad range of data across the company
  2. The creation of knowledge graphs from existing data must be automated
  3. Knowledge graph engines must be able to store and process very large graphs
  4. The process of data-driven graph generation and update must be configurable and adaptable to ensure quick customer value

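To make the cross-silo linking idea concrete, here is a minimal sketch of how records exported from two systems could be turned into typed nodes and automatically derived edges of a knowledge graph. The source systems, field names, sample records, and the single join rule (a shared part number) are hypothetical illustrations, not CONWEAVER's actual engine; a real industrial graph build replaces the hard-coded rule with configurable, data-driven analytics (criteria 2 and 4 above) and scales far beyond an in-memory graph (criterion 3).

```python
# Minimal sketch: linking records from two data silos into one knowledge graph.
# All systems, fields, and sample data below are hypothetical.
import networkx as nx

# Exports from two hypothetical silos, each keyed by its own local ID.
plm_items = [
    {"id": "PLM-001", "part_no": "4711", "name": "Drive shaft"},
    {"id": "PLM-002", "part_no": "4712", "name": "Gear housing"},
]
erp_orders = [
    {"id": "ERP-900", "part_no": "4711", "supplier": "ACME"},
    {"id": "ERP-901", "part_no": "4799", "supplier": "Globex"},
]

graph = nx.MultiDiGraph()

# 1. Ingest each silo's records as typed nodes, keeping the source system.
for item in plm_items:
    graph.add_node(("plm", item["id"]), source="PLM", **item)
for order in erp_orders:
    graph.add_node(("erp", order["id"]), source="ERP", **order)

# 2. Derive cross-silo edges from shared attributes (here: the part number).
#    In practice this matching step is configurable per customer and data source.
parts_index = {item["part_no"]: ("plm", item["id"]) for item in plm_items}
for order in erp_orders:
    plm_node = parts_index.get(order["part_no"])
    if plm_node:
        graph.add_edge(("erp", order["id"]), plm_node, relation="refers_to_part")

print(graph.number_of_nodes(), "nodes,", graph.number_of_edges(), "cross-silo edge(s)")
```

The point of the sketch is that the edges are computed from the data itself rather than modelled by hand: adding a further silo means adding another ingest step and another matching rule, not redesigning an ontology.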
For more information on the potential of knowledge graphs, see "Linked Data Connectivity – Graphs are the Crux of the Biscuit" or "PLM of Tomorrow Needs Knowledge Graphs", and if you are interested in a discussion of the relationships between digital thread, digital twin and knowledge graphs, have a look at "Does conceptional confusion lead to a search for a new label for PLM?"

