Digital Twins are becoming an increasingly important engineering paradigm. Design and simulation engineers, process experts and IT departments are more intertwined than ever, yet struggle with mismatching or missing data standards, interfaces and software architectures. An integral optimization of value chains requires maximum reusability and digital interoperability, including across company boundaries.
Our MpCCI Twin Toolbox supports engineers with their digital twin systems: it provides features to incorporate rich information assets from heterogeneous sources into a common environment, to apply analytics, optimization, automatic calibration or verification & validation, and to manage data through its semantic (ontology-based) meta-data. The Toolbox is dedicated to better interoperability and smart workflows around digital twins in engineering and multiphysics applications and their connection to product lifecycle management. Generic base solutions and a growing number of engineering and analytics modules can be used as templates or customized for your project.
Within the project MAVO DigitalTPC, a Fraunhofer consortium set up a digital twin system for the manufacturing of thermoplastic composite (TPC) parts. Diverse manufacturing steps and non-destructive measurements, CAE simulations and analytics with artificial intelligence (AI) were integrated. One motivating example is to understand the influence of the manufacturing process on the heterogeneous microstructure (fibre orientations, gaps, …) of the composite and its contribution to final component failure.
We at SCAI realized this digital twin demonstrator with our Toolbox: meta-data is extracted dynamically from the distributed data storage throughout the life of the digital twin system, then interlinked and evaluated.
This way, unique correspondences between individual parts, simulations and measurements in the overall system can be explored. The final twin will enable engineers to identify resources along the overarching real and virtual process chain and to simulate production under the real, observed material and process conditions.
The major benefits for the process and materials engineers are
The Twin Toolbox provides a flexible Python-based framework
As a Python-based framework, the MpCCI Twin Toolbox can be integrated with many popular data science libraries to enable a wide range of twin solutions. Readily available processing algorithms serve as template solutions, and we offer algorithm customization as a service for your use case.
Users can extend their existing engineering workflows set up locally in their own environment. For these workflows we offer our classical multiphysics CAE solutions as well as data-driven analytics and AI methods. In these domains, we offer several services to help you get the most out of your data.
The data associated with these workflows can be stored either in the user's pre-defined storage systems or in one of our recommended data management systems best suited to the type of data. For file-formatted data, we strongly encourage storage in standard, vendor-independent formats, such as VMAP and STEP for CAE and CAD data.
For data access, we offer an interface layer with available connectors for a series of widely used data storage systems. For new or more specific systems, the connectors can be adapted. Through the use of this interface layer, various distributed data sources can be accessed from the engineering workflow.
Required datasets are transferred directly from the source to the user's system, without central storage. In addition, our parsing modules determine the semantic meaning of each dataset in the overall digital twin context, i.e. the process or workflow step it assesses, the information it contains, its validity range and other important meta-data, by translating it to the common ontology.
This builds one big net of reusable semantic meta-data that can be stored either locally on the user side or in a central location for seamless access by many distributed users and our semantic search module. We offer graphical user interfaces that abstract away the technical communication details and expose only the workflows required by the user.
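The connector idea behind the interface layer can be sketched as follows. All class and method names here are illustrative assumptions for the sake of the example, not the actual MpCCI Twin Toolbox API:

```python
from abc import ABC, abstractmethod

class DataConnector(ABC):
    """Hypothetical connector interface: a workflow sees only these two
    operations, never the storage-system details behind them."""

    @abstractmethod
    def fetch(self, dataset_id: str) -> bytes:
        """Transfer the resolved dataset directly from its source."""

    @abstractmethod
    def describe(self, dataset_id: str) -> dict:
        """Extract meta-data, to be translated to the common ontology."""

class InMemoryConnector(DataConnector):
    """Toy connector backed by a dict, standing in for a real storage system."""

    def __init__(self, store: dict):
        self._store = store

    def fetch(self, dataset_id: str) -> bytes:
        return self._store[dataset_id]["payload"]

    def describe(self, dataset_id: str) -> dict:
        return {k: v for k, v in self._store[dataset_id].items() if k != "payload"}

# Usage: adapting the Toolbox to a new storage system would mean writing
# one more subclass, while the workflow code stays unchanged.
conn = InMemoryConnector({
    "press-042": {"payload": b"raw sensor data", "step": "consolidation", "unit": "MPa"},
})
meta = conn.describe("press-042")  # -> {'step': 'consolidation', 'unit': 'MPa'}
```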
Enterprises and companies require data permeability through their production steps. This comprises integration of data sources from machines and sensors, simulation models and corresponding designs in an automated workflow of control, analysis and monitoring.
Information required for product lifecycle management is integral and hence differs from data on local physical behaviour, such as stresses. Yet, information from these various scales and sources must be compatible in order to track cause-and-effect relations and to apply machine learning techniques that extract useful information. Ontologies provide the logical definitions of the required semantic relations between data assets.
Our MpCCI Ontologies structure the data sources as participants in the digital twin and make them directly compatible with other systems that use these ontologies. They take the most important aspects of digital twins as their main perspective and address them in a modular way:
Acceptance and reuse of semantics are essential for compatibility and interoperability. As far as possible, we therefore follow publicly available definitions, industry standards and well-accepted standard literature for the major taxonomies, classifications and relations. This creates the closest parallels with existing workflows in companies and maximizes integration potential.
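As a minimal illustration of how such semantic relations link distributed assets, the plain-Python sketch below stands in for real RDF/OWL tooling; the vocabulary terms (ex:observes, ex:simulates and so on) are invented for the example and are not the actual MpCCI ontology terms:

```python
# Minimal triple store illustrating ontology-based linking of assets.
triples = set()

def add(subject, predicate, obj):
    triples.add((subject, predicate, obj))

# A measurement and a simulation described against a shared vocabulary:
add("part:0042", "rdf:type",       "ex:CompositePart")
add("ct:scan-7", "ex:observes",    "part:0042")
add("sim:run-3", "ex:simulates",   "part:0042")
add("sim:run-3", "ex:processStep", "ex:Consolidation")

def query(pattern):
    """Match triples against an (s, p, o) pattern; None acts as a wildcard."""
    return [t for t in triples
            if all(p is None or p == v for p, v in zip(pattern, t))]

# Which assets refer to part 0042, regardless of where they are stored?
linked = sorted(t[0] for t in query((None, None, "part:0042")))
# -> ['ct:scan-7', 'sim:run-3']
```

Because measurement and simulation both point at the same part, their correspondence can be discovered without any central data store, which is exactly the role the meta-data net plays in the Toolbox.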
CAE engineers in enterprises or at engineering service providers work with various software tools to simulate multiphysics processes. Each tool has different strengths and works with different native data structures. For industrial-scale CAE chains, and to make them more flexible through AI, data transfer between software tools and between models is necessary. Moreover, many applications require coupled solutions of different codes to predict the multiphysics behaviour. For these purposes, we provide specific Multiphysics CAE solutions that can handle the majority of popular codes and formats.
For core elements, we provide Python interfaces to our core MpCCI products. The main uses of the classical methods in digital twin contexts are mappings, mesh-to-mesh conversions and mesh-based comparison of quantities for validation and verification. For our code coupling environment for multiphysics simulations, a smart calibration tool is offered to minimize setup effort and to enable seamless batch simulation, also automatable from a Python interface.
In combination with the data management tools, engineers can easily integrate corresponding simulation and measurement sources into their workflow for evaluation.
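As a toy illustration of mesh-based mapping, the sketch below transfers a nodal quantity from a source mesh to a target mesh by nearest-neighbour lookup. The actual MpCCI mapping algorithms are considerably more sophisticated (e.g. shape-function based or conservative interpolation), so this is only a conceptual stand-in:

```python
import numpy as np

def map_nodal_field(src_nodes, src_values, dst_nodes):
    """For each target node, copy the value of the closest source node.
    src_nodes: (n_src, dim), src_values: (n_src,), dst_nodes: (n_dst, dim)."""
    # Pairwise squared distances between target and source nodes: (n_dst, n_src)
    d2 = ((dst_nodes[:, None, :] - src_nodes[None, :, :]) ** 2).sum(axis=2)
    return src_values[d2.argmin(axis=1)]

# Toy example: a quantity on a coarse 1D "mesh" mapped onto a finer one.
src = np.array([[0.0], [1.0], [2.0]])
val = np.array([10.0, 20.0, 30.0])            # e.g. a stress component
dst = np.array([[0.1], [0.9], [1.6], [2.0]])
mapped = map_nodal_field(src, val, dst)       # -> [10., 20., 30., 30.]
```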
If you are running digital twin projects in your enterprise, you will be aware of the importance of data analytics. Linking your data and modelling your system well is only useful if patterns can be extracted, V&V can be supported and automated, or optimizations can be accelerated. Often, the details are crucial and make enormous differences in which methods are applicable.
Our MpCCI Twin Toolbox integrates base solutions for standard problems, and our services customize them for your project
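To make the automatic-calibration idea concrete, here is a deliberately minimal sketch that fits a single model parameter to measurements in a least-squares sense. A real calibration loop would drive full CAE batch runs; the linear "simulation" and the numbers below are assumptions made only for this example:

```python
import numpy as np

def simulate(stiffness, strain):
    """Toy linear 'simulation': stress = stiffness * strain.
    A real run would call a CAE solver here."""
    return stiffness * strain

strain   = np.array([0.001, 0.002, 0.003, 0.004])
measured = np.array([0.21, 0.39, 0.61, 0.80])   # synthetic 'measurements'

# Closed-form least-squares solution for the single linear parameter:
# minimize sum((k * strain - measured)^2)  ->  k = <strain, measured> / <strain, strain>
k_opt = (strain @ measured) / (strain @ strain)

# Verify the calibrated model against the measurements.
residual = np.abs(simulate(k_opt, strain) - measured).max()
```

For several parameters or a nonlinear solver, the closed form would be replaced by an iterative optimizer; the structure of the loop (simulate, compare, update) stays the same.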
IT departments and engineering specialists require strong cooperation. Maintaining software and networks is the least of the many tasks involved. No software can replace this relationship with a generic solution. However, for digital twin projects, we can offer the building blocks that make the work of data integration a lot easier.
Fully integrated, the collection of data management tools builds up to a reference solution for a simple data space, which we continue to develop into an IDS-conforming system. The system runs from a twin manager, a local client that provides access to distributed databases through a set of adaptable connectors and organizes the interwoven meta-data. This always includes a case-dependent development of the mapping. Depending on IT requirements, the meta-data graph can be stored either locally or on a remote server.
With this approach, our reference project demonstrated the feasibility of process control and quality assurance with cognitive sensor technology to characterize components and detect defects, as well as a chain of inter-mapped multiphysics simulations to assess physical stresses, fibre orientations and further quantities in 3D.
Interoperability for enterprises goes beyond a common platform or data space. Just as ontologies require precise terms, definitions and relations, companies require data and format standards to enable sufficient data permeability.
With RDF/OWL and the SPARQL interface, ontology development follows standard languages. In addition, the core taxonomies and nomenclatures are based on industry standards. Likewise, our data connectors are based on RESTful services. For more complex data structures, our ontologies and data management solution are strongly integrated with
We continuously extend our methods towards more standards, contribute to standardization efforts in turn, and seek to maximize overlap with other interoperability initiatives such as the EMMC, PMD, ProSTEP and the VMAP community.