MpCCI Twin Toolbox

Toolbox for Smart Digital Twins in CAE

Digital Twins are becoming an increasingly important engineering paradigm. Design and simulation engineers, process experts and IT departments are more intertwined than ever, yet struggle with mismatched or missing data standards, interfaces and software architectures. Integral optimization of value chains requires maximum reusability and digital interoperability, also across company boundaries.

Our MpCCI Twin Toolbox supports engineers with their digital twin systems: it provides features to incorporate rich information assets from heterogeneous sources into a common environment, to apply analytics, optimization, automatic calibration and verification & validation, and to manage data through its semantic (ontology-based) meta-data. The Toolbox is dedicated to better interoperability and smart workflows around digital twins in engineering and multiphysics applications, and to their connection to product lifecycle management. Generic base solutions and a growing number of engineering and analytics modules can be used as templates or customized to your project.
 

User Story – a Digital Twin for Composite Part Production

Within the project MAVO DigitalTPC, a Fraunhofer consortium set up a digital twin system for the manufacturing of thermoplastic composite (TPC) parts. Diverse manufacturing steps and non-destructive measurements, CAE simulations and analytics with artificial intelligence (AI) were integrated. One motivating example is to understand the influence of the manufacturing process on the heterogeneous microstructure (fibre orientations, gaps, …) of the composite and its contribution to final component failure.

At SCAI, we realized this digital twin demonstrator with our Toolbox. Distributed data stores, from which meta-data is extracted dynamically throughout the life of the digital twin system, are interlinked and evaluated.

This way, unique correspondences between individual parts, simulations and measurements in the overall system can be explored. The final twin will enable engineers to identify resources along the overarching real and virtual process chain and to simulate production under the real, observed material and process conditions.
 

Benefits for Process and Materials Engineers

The major benefits for process and materials engineers are:

  • Improvement of predictive models through smart calibration
  • Faster optimization of individual process stages using combined CAE and surrogate models
  • Cross-process optimization: smart positioning of unfavourable tape thicknesses or fibre properties, dynamic adaptation of extrusion pressures and temperatures for defect reduction
  • Predictive maintenance of the extruder


The MpCCI Twin Toolbox

The Twin Toolbox provides a flexible Python-based framework:

  • To handle data from heterogeneous engineering domains with common ontologies,
  • To connect with typical database environments,
  • To identify and label resources with common, reusable ontology definitions,
  • To run multiphysics engineering modules on these data, and
  • To use AI-based search and analytics methods to gain deeper insight and better forecasts.

Building blocks of the MpCCI Twin Toolbox.

General Concept and Architecture of the MpCCI Twin Toolbox

As a Python-based framework, the MpCCI Twin Toolbox can be integrated with many popular data science libraries to enable many different twin solutions. Readily available processing algorithms serve as template solutions, and we offer algorithm customization as a service for your use case.

Users can extend their existing engineering workflows set up locally in their own environment. For these workflows we offer our classical multiphysics CAE solutions as well as data-driven analytics and AI methods. In these domains, we offer several services to help you get the most out of your data.

The data associated with these workflows can be stored either in the user's pre-defined storage systems or in one of our recommended data management systems that is optimal for the type of data. For file-formatted data, we strongly encourage storage in standard, vendor-independent formats, such as VMAP and STEP for CAE and CAD data.

For data access, we offer an interface layer with available connectors for a series of widely used data storage systems. For new or more specific systems, the connectors can be adapted. Through the use of this interface layer, various distributed data sources can be accessed from the engineering workflow.

Required resolved datasets are then transferred directly from the source to the user's system, without central storage. In addition, our parsing modules determine the semantic meaning of the datasets in the overall digital twin context, i.e. the process or workflow step a dataset assesses, the information it contains, its validity range and other important meta-data, by translating them to the common ontology.

This builds one big net of reusable semantic meta-data that can be stored either locally on the user side or in a central location for seamless access by many distributed users and our semantic search module. We offer graphical user interfaces to abstract away any technical communication details and expose only the workflows required by the user.
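Such an interface layer can be pictured as a small set of connector classes sharing a uniform contract. The sketch below is purely illustrative (the class and method names are invented here, not the Toolbox API) and assumes each source only needs to list and fetch its datasets:

```python
from abc import ABC, abstractmethod
from pathlib import Path


class Connector(ABC):
    """Uniform access to one data storage system."""

    @abstractmethod
    def list_resources(self) -> list[str]:
        """Return the identifiers of all datasets this source offers."""

    @abstractmethod
    def fetch(self, resource_id: str) -> bytes:
        """Transfer one resolved dataset directly to the caller."""


class FileSystemConnector(Connector):
    """Example connector for a plain directory of result files."""

    def __init__(self, root: str):
        self.root = Path(root)

    def list_resources(self) -> list[str]:
        return sorted(p.name for p in self.root.iterdir() if p.is_file())

    def fetch(self, resource_id: str) -> bytes:
        return (self.root / resource_id).read_bytes()


def gather(connectors: list[Connector]) -> dict[str, list[str]]:
    """Ask every registered source what it can provide."""
    return {type(c).__name__: c.list_resources() for c in connectors}
```

A database-backed or REST-backed source would simply implement the same two methods, so the engineering workflow never needs to know where a dataset physically lives.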

Integration:

  • Users can integrate the toolbox features seamlessly with Python
  • Multiple CAE extension features from closed-source C++ products can also be interfaced via Python
  • Network services are not provided (secure networking and permissions are subject to the user environment)
  • We can adapt to various possible setups: VPNs, closed subnets, client-side encryption
  • Extensions and licenses upon request

MpCCI Twin Toolbox supports real and virtual workflows with solutions for semantics, data management and access, CAE, AI-based analytics and UQ.

Ontologies – Semantic Concepts to Define and Organize Your Engineering Data

Enterprises require data permeability across their production steps. This comprises the integration of data sources from machines and sensors, simulation models and corresponding designs into an automated workflow of control, analysis and monitoring.

Information required for product lifecycle management is integral in nature and hence differs from data on local physical behaviour, such as stresses. Yet, information from these various scales and sources must be compatible in order to track cause-and-effect relations and to use machine learning techniques for extracting useful information. Ontologies are the logical definitions of the required semantic relations between data assets.

Our MpCCI Ontologies structure the data sources as participants in the digital twin and make them directly compatible with other systems that use these ontologies. They take the most important aspects of digital twins as their main perspectives and address these in a modular way:

  • CAD & Design: How can a product be designed and constructed, and which measures ensure the requirements are met over the entire product life?
  • Product & Component: What are the parts, components and semi-finished goods, and what are their intermediate and final properties?
  • Material: Which materials are processed, which are the alternatives and how do they behave?
  • Manufacturing: Which processes are applied to the materials and components and which machines can fulfil which task?
  • Measurement: Which measurements are performed on the processes, materials and products to examine and control the manufacturing?
  • CAE Simulation: How can real processes, components and their properties be modelled and calculated? Which numerical tools and methods can be applied?
  • Analytics: What does the collected data reveal statistically? How are uncertainties distributed? How can detections and calibrations run cross-stage and cross-source?
  • Data Storage: What type of data is handled? Where is it stored? Which system manages it? What formats are used?

Top level knowledge domains of digital twin environments: MpCCI ontologies enable description of complex twin systems.

Acceptance and reuse of semantics is essential for compatibility and interoperability. As far as possible, we therefore follow publicly available definitions, industry standards and well-accepted standard literature for the major taxonomies, classifications and relations. This creates close parallels with existing workflows in companies and maximizes integration potential.

Overview of used standards for core taxonomies and nomenclature.

Use of Multiphysics CAE in Digital Twins

CAE engineers in enterprises or at engineering service providers work with various software tools to simulate multiphysics processes. Each tool has different strengths and works with different native data structures. For industrial-scale CAE chains and their flexibilization through AI, data transfer between software tools and between models is necessary. Moreover, many applications require coupled solutions of different codes to predict the multiphysics behaviour. For these purposes, we provide specific Multiphysics CAE solutions that can handle the majority of popular codes and formats.

For core elements, we provide Python interfaces to our core MpCCI products. The main uses of the classical methods in digital twin contexts are mappings, mesh-to-mesh conversions and mesh-based comparison of quantities for validation and verification. For our code-coupling environment for multiphysics simulations, a smart calibration tool is offered to minimize setup effort and to enable seamless batch simulation, also automatable from a Python interface.

In combination with the data management tools, engineers can easily integrate corresponding simulation and measurement sources into their workflows for evaluation.
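As a minimal illustration of the mesh-to-mesh idea (not the MpCCI mapping algorithm itself, which is far more sophisticated), a nodal quantity can be transferred between two non-matching meshes by nearest-neighbour lookup; all names in this sketch are invented:

```python
import numpy as np


def map_nearest(src_nodes: np.ndarray, src_values: np.ndarray,
                tgt_nodes: np.ndarray) -> np.ndarray:
    """Map a nodal quantity from a source mesh onto a target mesh by
    nearest-neighbour lookup, the simplest possible mapping scheme."""
    # pairwise distances between every target node and every source node
    dist = np.linalg.norm(tgt_nodes[:, None, :] - src_nodes[None, :, :], axis=-1)
    # for each target node, copy the value of its closest source node
    return src_values[np.argmin(dist, axis=1)]


# coarse source mesh with a temperature field, mapped onto a finer target mesh
src = np.array([[0.0, 0.0], [1.0, 0.0]])
temps = np.array([10.0, 20.0])
tgt = np.array([[0.1, 0.0], [0.4, 0.1], [0.9, 0.0]])
mapped = map_nearest(src, temps, tgt)  # → [10.0, 10.0, 20.0]
```

Production mappings additionally need conservative interpolation, element connectivity and tolerance handling, which is exactly where dedicated tooling pays off.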

AI-Based Analytics and Semantic Search for Digital Twins

If you are running digital twin projects in your enterprise, you will be aware of the importance of data analytics. Linking your data and modelling your system well is only useful if patterns can be extracted, V&V can be supported and automated, or optimizations can be accelerated. Often, the details are crucial and determine which methods are applicable.

Our MpCCI Twin Toolbox integrates base solutions for standard problems, and our services customize them for your project:

  • Data Linking and Semantic Search with user intent estimation
  • Correlation of data and other statistical dependencies
  • Supported workflows for Uncertainty Quantification and Surrogate Modelling
  • Pre-built machine learning models as surrogates for CAE
  • Information Extraction through Deep Learning
  • VMAP Conversion
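To make the surrogate-modelling item concrete: the usual pattern is to sample an expensive CAE run at a few design points, fit a cheap approximation, and then search that approximation instead. The sketch below is a deliberately simple stand-in; the simulation function, parameter name and optimum are invented, and real workflows would use richer surrogates such as Gaussian processes:

```python
import numpy as np


def expensive_simulation(pressure: float) -> float:
    """Stand-in for a long-running CAE run; the response is invented."""
    return (pressure - 3.0) ** 2 + 1.0


# 1. sample the expensive model at a handful of design points
samples = np.linspace(0.0, 6.0, 7)
responses = np.array([expensive_simulation(p) for p in samples])

# 2. fit a cheap quadratic surrogate to those samples
surrogate = np.poly1d(np.polyfit(samples, responses, deg=2))

# 3. optimize on the surrogate instead of the expensive model
grid = np.linspace(0.0, 6.0, 601)
best_pressure = grid[np.argmin(surrogate(grid))]  # close to the true optimum at 3.0
```

Only a few expensive runs are needed up front; the dense search over 601 candidate pressures then costs essentially nothing.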

Connecting Distributed Databases

IT departments and engineering specialists require strong cooperation, and maintaining software and networks is only one of many tasks in that relationship. No software can replace this cooperation with a generic solution. However, for digital twin projects, we can offer building blocks that make the work of data integration a lot easier:

  • Connectors to various data management systems and databases
  • Semantic data management re-using common triple stores
  • Local and remote data handling
  • I/O interfaces and meta-data parsing for your digital twin
  • Ontology-based metadata structuring and data crawling

Fully integrated, the collection of data management tools builds up to a reference solution for a simple data space, which we continue to develop into an IDS-conforming system over time. The system runs from a twin manager, a local client that provides access to distributed databases through a set of adaptable connectors and organizes the interwoven meta-data. This always includes a case-dependent development of the mapping. Depending on IT requirements, the meta-data graph can be stored locally with ease or on a remote server.
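At its core, such an interwoven meta-data net is a graph of subject-predicate-object triples, and pattern matching over these triples is what a triple store and SPARQL provide. The toy example below shows the principle in plain Python; all identifiers and predicate names are invented for illustration:

```python
# A miniature semantic meta-data net as subject-predicate-object triples.
triples = [
    ("part_042", "producedBy", "press_A"),
    ("part_042", "hasMeasurement", "ct_scan_17"),
    ("ct_scan_17", "assessesStep", "consolidation"),
    ("sim_009", "models", "consolidation"),
    ("sim_009", "usesMaterialOf", "part_042"),
]


def match(s=None, p=None, o=None):
    """Return all triples matching a pattern; None is a wildcard,
    analogous to a basic SPARQL triple pattern."""
    return [t for t in triples
            if (s is None or t[0] == s)
            and (p is None or t[1] == p)
            and (o is None or t[2] == o)]


# which resources relate to the consolidation step, real or simulated?
related = match(None, None, "consolidation")
```

A real deployment would keep these triples in an RDF store queried via SPARQL, but the linking idea, one measurement and one simulation both pointing at the same process step, is exactly this.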

With this approach, we demonstrated feasibility in our reference project: process control and quality assurance with cognitive sensor technology to characterize components and detect defects, as well as a chain of inter-mapped multiphysics simulations to assess physical stresses, fibre orientations and further quantities in 3D.

Compatibility with Existing Standards

Interoperability for enterprises goes beyond a common platform or data space. Just as ontologies require precise terms, definitions and relations, companies require data and format standards to enable sufficient data permeability.

With RDF/OWL and the SPARQL interface, the ontology development follows standard languages. In addition, the core taxonomies and nomenclatures are based on industry standards. Likewise, we use data connectors based on RESTful services. For more complex data structures, our ontologies and data management solution are strongly integrated with

  • VMAP
  • STEP
  • DICONDE

We continuously extend our methods towards more standards, contribute to standardization efforts in turn, and seek to maximize overlap with other interoperability initiatives, such as the EMMC, PMD, ProSTEP and the VMAP community.