Speaker

Prof. Dr. Sean Hill

Senscience / EPFL

Title

FAIR²: Building the AI-Ready Data Foundation for the Life Sciences

Abstract

AI is reshaping discovery, development, and applied life sciences, but its progress is constrained by a familiar bottleneck: data that is not structured or documented for modern computational workflows. The FAIR principles created a strong starting point, yet today’s multimodal, high-volume, and tightly regulated data environments require a more robust foundation. FAIR² strengthens this foundation by emphasizing machine-readiness, traceability, interoperability, and documentation that supports reproducibility and regulatory expectations. Through concrete examples across preclinical, clinical, and real-world data, this keynote highlights how improved data foundations reduce risk, streamline workflows, and enable trustworthy, industry-relevant AI at scale.

About Prof. Dr. Sean Hill

Sean Hill, Ph.D., is Founder and CEO of Senscience, an AI company advancing open, connected, and reusable research data. He is also Chief Scientific Officer of Kelello, Adjunct Professor at the University of Toronto, and Titular Professor at EPFL. Previously, he was inaugural Scientific Director of the Krembil Centre for Neuroinformatics at CAMH, where he launched the BrainHealth Databank and advanced national mental health data initiatives. He co-directed the Blue Brain Project at EPFL through 2024 and held leadership roles at the International Neuroinformatics Coordinating Facility. His work focuses on neuroscience, AI, neuroinformatics, and digital infrastructure for open and responsible data-driven research.

Dr. Erik Schultes

GO FAIR Foundation

Title

FAIR-Ready AI: Inverting AI

Abstract

Much attention has been given to making data “AI-ready”. Less attention has been given to the reciprocal requirement: making AI “FAIR-ready.” The FAIR Guiding Principles were designed to “enhance the ability of machines” to find, access, interoperate with, and reuse data. A decade of investment in FAIRification methods and FAIR orchestration tools has demonstrated the feasibility of this ambition at scale. Simultaneously, AI systems have become capable of performing aspects of data curation, metadata enrichment, and interoperability assessment that were previously manual and costly. These two developments are not independent. They are two components of the same infrastructure challenge, a relationship we term FAIR-AI. AI trained on FAIR data is more trustworthy, while FAIR-ready AI can more reliably FAIRify data at scale. The result is a self-reinforcing cycle in which investment in FAIR data infrastructure yields compounding returns in AI reliability, and each improvement in AI trust lowers the cost of producing FAIR data. Without this virtuous cycle, the converse obtains: AI trained on data lacking provenance and quality metadata degrades progressively, generating outputs that further contaminate the data commons. Drawing on practical experience implementing FAIR Digital Objects for AI-assisted metabolomics workflows at Leiden University and the development of FAIR AI Attribution (FAIA), this presentation will demonstrate that the FAIR-AI virtuous cycle is operationally achievable. The requirements are architectural: convergence on open standards for AI-relevant FAIR Digital Object attributes, deployment of scalable FAIR orchestration infrastructure, and commensurate investment in data FAIRification alongside AI capability development. FAIR-AI is essential for life sciences and medicine, where data provenance and AI auditability are non-negotiable.

About Dr. Erik Schultes

Erik’s career spans three decades across interconnected domains: life science research, machine learning on biomedical knowledge graphs, and FAIR data infrastructure. These threads converge on his current focus: governance infrastructure for safe, sovereign and trustworthy AI. As co-author of the FAIR Guiding Principles, Erik spent the past decade building global consensus on data standards that enable reproducible, auditable AI. Erik architected the FAIR Implementation Profile, the framework now driving worldwide convergence on interoperable data infrastructure. At Leiden University's Metabolomics and Analytics Center, he implemented FAIR Digital Objects for high-throughput metabolomics and AI-assisted research workflows. As Editor-in-Chief of FAIR Connect (Sage Publishing) and co-chair of the FAIR Digital Object Forum, Erik helps to shape international dialogue on data infrastructure.

Prof. Dr. Nikolaus Forgó

University of Vienna

Title

Data Readiness from a Legal Perspective

Abstract

This talk gives a critical overview of recent trends in Intellectual Property, Data Protection, and AI regulation in Europe.

About Prof. Dr. Nikolaus Forgó

Nikolaus Forgó studied law in Vienna and Paris from 1986 to 1990 and then worked as a university assistant at the Faculty of Law at the University of Vienna. In 1997, he received his doctorate in law with a dissertation on legal theory. Since October 1998, he has headed the university course for information and media law at the University of Vienna, which still exists today. From 2000 to 2017, he was Professor of Legal Informatics and IT Law at the Faculty of Law at Leibniz Universität Hannover, where he headed the Institute for Legal Informatics for 10 years and also served as Data Protection Officer and CIO.

Since October 2017, he has been Professor of Technology and Intellectual Property Law at the University of Vienna and Director of the Department of Innovation and Digitalisation in Law at the same university. He is also an honorary expert member of the Austrian Data Protection Council, the Austrian AI Advisory Board, and the Regulatory Sandbox Advisory Board at the Federal Ministry of Finance.

He researches, teaches, and advises on all legal issues relating to digitalization, in particular data protection, information security and artificial intelligence.

Dr. Edeltraud Leibrock

Roland Berger GmbH

Title

All Intelligence needs Context

Abstract

Data alone does not make AI work — context does. In this keynote, we explore why semantic data integration is the critical, yet often overlooked, prerequisite for deploying AI that delivers explainable, reproducible, and reliable results in real-world business and industrial environments.
Drawing on practical experience, we show how ontologies and semantic knowledge graphs enable the meaningful integration of structured and unstructured data across organizational silos — transforming raw information into actionable, contextually grounded intelligence. We also examine how to systematically capture and leverage human expert knowledge as an irreplaceable asset in the AI value chain.
But technology alone is not enough. Truly unlocking the potential of AI in an organization requires something more fundamental: a culture of openness, trust, and cross-functional collaboration. We discuss what it takes to build that culture — and why it may be the most decisive factor of all.

About Dr. Edeltraud Leibrock

Dr. Edeltraud Leibrock is Global Managing Director for Innovation at Roland Berger and a Senior Partner in the firm’s Digital Platform. She is a recognized leader at the intersection of strategy, technology, and large-scale transformation, advising senior executives worldwide on innovation, AI, data, digitalization, and business transformation.

Over the course of her career, Dr. Leibrock has held top executive roles in the financial sector, including Chief Operating Officer on the Executive Board of one of the world’s leading development banks and Group Chief Information Officer of a major public-sector bank in Germany. She brings deep experience in leading complex organizations through technological and structural change at scale.

In addition to her executive responsibilities, she serves on multiple boards and is actively engaged in the Fraunhofer innovation ecosystem as an alumna, including Fraunhofer Alumni e.V., the Fraunhofer Technology Transfer Fund (FTTF), and the boards of two institutes specializing in mathematics and computer science.

Dr. Leibrock studied physics and biology at the University of Regensburg and holds a PhD in physics from Hamburg University of Technology.

Dr. Vitaly Sedlyarov

Boehringer Ingelheim

Title

From Data to Discovery: Enabling Innovation with FAIR OMICS Data Management

Abstract

The increasing scale and diversity of OMICS data modalities pose significant challenges for organizations aiming to generate business value, enable data reuse, and employ AI-driven analytical methods. Global OMICS Data Management (global ODM) addresses these challenges by implementing FAIR data principles within an enterprise-scale platform that transforms large, heterogeneous datasets into strategic company assets.
Global ODM is based on a hybrid IT architecture that combines a semantic graph database for rich metadata, provenance, and data lineage; a scalable large file store for raw and processed OMICS data; and high-performance databases optimized for fast access to large (often sparse) numeric matrices. This design enables efficient findability of relevant assets, seamless multimodal data integration, and rapid generation of biological insights without duplicating data or infrastructure. By making data machine-actionable and semantically connected, global ODM provides a strong foundation for AI readiness, supporting advanced analytics, model training, and reuse across projects.

Beyond technical enablement, the platform delivers clear business impact: reduced storage and processing costs through centralized governance and tiered data storage, as well as faster time to insight through standardized access and reuse. Global ODM enables Boehringer Ingelheim to move from fragmented data silos to a scalable, FAIR, and AI-ready OMICS ecosystem.

About Dr. Vitaly Sedlyarov

Vitaly Sedlyarov is the Product Owner leading the global OMICS data management platform. As a key driver of Global OMICS Data Management (Global ODM), he leads cross‑functional teams in delivering FAIR, ontology‑driven data solutions that turn complex biomedical data into trusted knowledge assets. Global ODM has achieved significant reductions in data storage costs, minimized hands‑on operational effort, and markedly shortened time‑to‑insight—enabling faster, more sustainable scientific discovery at scale.

Dr. Jens Hollunder

Bayer CropScience

Title

AI-Ready by Design: Turning FAIR Intent into Operational Readiness

Abstract

AI is increasingly embedded across the life sciences value chain, yet many initiatives stall when moving from prototype to production. A core reason is that data is often not prepared for machine interpretation: definitions vary by domain, relationships are implicit, identifiers are inconsistent, and ownership is unclear — all of which undermines trust, reproducibility, and compliance. This presentation introduces an operational framework for AI data readiness, designed to make data decision-grade for both human and machine consumption.

We define readiness through four pragmatic checks: understandable (glossaries and conceptual/logical mappings exist), accessible (clear access paths and request processes), trustworthy (agreed quality rules and monitoring), and owned (named stewards with escalation paths). We then show how these checks are accelerated by a cross-R&D semantic enrichment capability: a semantic layer that functions as a machine-interpretable contract, capturing domain concepts, controlled vocabularies/identifiers, explicit relationships, and mappings between logical and physical data structures.

About Dr. Jens Hollunder

Dr. Jens Hollunder is a Data Science Lead in R&D at Bayer Crop Science, with a background spanning multiple roles across the organization. In his current role in Field Solutions, he is advancing the digital transformation of R&D by shaping the data foundations needed for trustworthy analytics and AI-ready capabilities. His work involves data readiness, designing and evolving semantic data layers, data governance and management, and the practical implementation of FAIR principles, as well as data science modelling and operations research methods. In his role, Jens works closely with R&D domain teams, R&D IT and central IT to transform scientific and operational needs into robust, machine-interpretable data assets and decision support solutions. As a contributor to the Data-to-Decisions (D2D) program, he helps to drive operationalized data stewardship and the creation of semantic, reusable data products that connect business, science and technology, ensuring they can be leveraged sustainably at enterprise scale.

Gene Shoykhet

Bayer CropScience

Title

AI-Ready by Design: Turning FAIR Intent into Operational Readiness

Abstract

AI is increasingly embedded across the life sciences value chain, yet many initiatives stall when moving from prototype to production. A core reason is that data is often not prepared for machine interpretation: definitions vary by domain, relationships are implicit, identifiers are inconsistent, and ownership is unclear — all of which undermines trust, reproducibility, and compliance. This presentation introduces an operational framework for AI data readiness, designed to make data decision-grade for both human and machine consumption.

We define readiness through four pragmatic checks: understandable (glossaries and conceptual/logical mappings exist), accessible (clear access paths and request processes), trustworthy (agreed quality rules and monitoring), and owned (named stewards with escalation paths). We then show how these checks are accelerated by a cross-R&D semantic enrichment capability: a semantic layer that functions as a machine-interpretable contract, capturing domain concepts, controlled vocabularies/identifiers, explicit relationships, and mappings between logical and physical data structures.

About Gene Shoykhet

Gene Shoykhet is an R&D IT Data Strategy Lead at Bayer Crop Science, where he focuses on building data‑centric foundations that enable scalable, trustworthy data capabilities across the R&D landscape. His work spans AI data readiness, semantic enrichment, data governance, and the operationalization of FAIR data practices to support decision‑grade analytics and GenAI use cases. Gene plays a key role in the Data‑to‑Decisions (D2D) program, driving cross‑functional approaches to semantic layers, data stewardship, and machine‑interpretable data products that bridge business, science, and technology. He collaborates closely with R&D, IT, and governance communities to move AI initiatives from isolated pilots to sustainable, enterprise‑ready capabilities.

Dr. Peter Mattson

MLCommons Association, Google

Title

MedPerf: large, diverse AI tests that protect patient privacy and model IP

Abstract

AI is transitioning from research to deployment, and this transition is gated by our ability to build and verify highly reliable systems — especially in healthcare. The MLCommons MedPerf project is pioneering a combination of federated evaluation (sending models to data owners, thereby preserving patient privacy and avoiding complicated data licensing) and confidential compute (using trusted execution environments, TEEs, to protect model owner IP).

About Dr. Peter Mattson

Peter Mattson is the founder and President of MLCommons and a senior staff engineer at Google. He also founded and served as General Chair of the MLPerf consortium that preceded it. Previously, he founded the Programming Systems and Applications Group at Nvidia Research, was VP of software infrastructure for Stream Processors Inc (SPI), and was a managing engineer at Reservoir Labs. His research focuses on understanding machine learning models and data through quantitative metrics and analysis. Peter holds a PhD and MS from Stanford University and a BS from the University of Washington.

Prof. Dr. Jürgen Bajorath

University of Bonn, b-it, Lamarr Institute for Machine Learning and Artificial Intelligence

Title

Foundation Models and Artificial Intelligence Agents for the Life Sciences and Medicine

Abstract

Scientific foundation models are large-scale artificial intelligence (AI) systems, including large language models (LLMs) and other architectures, trained on extensive and heterogeneous domain-specific data sets. In biology, drug discovery, and medicine, these models are gaining increasing attention as tools for information synthesis, hypothesis generation, or research planning. Furthermore, open-source and proprietary LLM architectures, with or without domain-specific adaptation, frequently serve as core components of agentic AI systems, including increasingly semi-autonomous scientific AI agents and prototypic virtual laboratory frameworks, as illustrated by recent case studies. In medicine, foundation models and AI agents are expected to influence clinical practice at multiple levels such as patient data integration, workflow optimization, or decision support. However, the further development and broader adoption of early-stage agentic AI frameworks in high-risk domains such as drug discovery and medicine depend on rigorous validation, regulatory compliance, and the resolution of ethical, security, and safety challenges.

About Prof. Dr. Jürgen Bajorath

Jürgen Bajorath is Chair of Life Science Informatics at the b-it Institute of the University of Bonn and Chair of AI in the Life Sciences and Health at the Lamarr Institute for Machine Learning and Artificial Intelligence, a national AI competence center. He is also an Affiliate Professor at the University of Washington, Seattle, and an External Professor at the Nara Institute of Science and Technology in Japan. His current work focuses on the development of AI models for drug discovery and design, explainable AI, and AI systems for interdisciplinary research.

Christian Günster

WIdO – AOK Research Institute

Title

Transformer-based foundation models for longitudinal pattern recognition in routine data from statutory health insurance

Abstract

In the ClaimsBERT research project, Fraunhofer SCAI and the AOK Research Institute (WIdO) are developing a foundation model based on statutory health insurance claims data to identify early warning signs of serious illnesses and the need for long-term care. Unlike traditional scoring models developed for specific indications, this approach aims to create a generalizable foundation model that can be adapted to a wide variety of clinical endpoints.

About Christian Günster

Christian Günster is a health services researcher and holds a degree in mathematics. Since 2016, he has headed the Quality and Health Services Research Department at the AOK Research Institute (WIdO) in Berlin, and has held various leadership positions at the institute since 1990. Moreover, he contributes his expertise to several expert committees, including the Health Research Data Center at the Federal Institute for Drugs and Medical Devices (BfArM) and the Institute for Quality Assurance and Transparency in Health Care (IQTIG). He is the editor of the Health Care Report, a book series featuring empirical analyses of routine data on health care topics.

Dr. Ashar Ahmad

Grünenthal GmbH

Title

Agentic Workflow Applications in Computational and Data Science within Drug Development

Abstract

Agentic coding tools, i.e., AI systems capable of autonomously planning, executing, and updating long-running, multi-step computational tasks, have rapidly reshaped software engineering practice since their emergence in 2025. Tools such as Anthropic's Claude Code and OpenAI's Codex CLI have seen broad adoption across the technology sector, yet their uptake in pharmaceutical R&D remains limited owing to validation requirements, regulatory scrutiny, and, more broadly, the conservative nature of drug development.

This talk examines where agentic workflows are nonetheless gaining meaningful traction across the computational and statistical functions in R&D. Drawing on first-hand experience working at the interface of Data Science, Pharmacometrics, and Bioinformatics, I discuss practical examples spanning automated Pharmacometrics modelling tasks and the accelerated construction of bioinformatics workflows. These examples illuminate both the productivity gains achievable and where human oversight remains necessary.

I close by mapping the opportunity space: where agentic tools can augment analytical tasks for scientists and lower the barrier to cross-functional collaboration between these R&D functions and data science.

About Dr. Ashar Ahmad

Dr. Ashar Ahmad has a multidisciplinary background with studies in Chemical Engineering and Computer Science. Between 2014 and 2018 he worked at b-it in Prof. Dr. Holger Fröhlich's lab on Statistical Machine Learning methods and contributed to translational research projects at the University Clinic in Bonn. After receiving his PhD, he joined UCB Pharma as a Postdoctoral Scientist in the Translational Medicine department. Since 2021, he has worked as a Data Scientist (Associate Director) at Grünenthal GmbH in the Drug Development department, driving AI and Data Science use cases across various functions in Global R&D.

Prof. Dr. Stefan Feuerriegel

LMU Munich

Title

Causal machine learning for predicting treatment outcomes

Abstract

Causal machine learning (Causal ML) offers flexible, data-driven methods for predicting treatment outcomes including efficacy and toxicity, thereby supporting the assessment and safety of drugs. A key benefit of causal ML is that it allows for estimating individualized treatment effects, so that clinical decision-making can be personalized to individual patient profiles. Causal ML can be used in combination with both clinical trial data and real-world data, such as clinical registries and electronic health records, but caution is needed to avoid biased or incorrect predictions.

About Prof. Dr. Stefan Feuerriegel

Stefan Feuerriegel is a full professor and director of the Institute of AI in Management at LMU Munich, and a PI at the Munich Center for Machine Learning (MCML). Previously, he was an assistant professor at ETH Zurich. In his research, Stefan develops, implements, and evaluates Artificial Intelligence technologies to improve decision-making. His group is particularly interested in understanding the effects of treatments, so that treatment decisions can be optimized ("causal ML"). His team brings causal ML methods into practice with companies as well as medical professionals. Stefan's team publishes regularly in computer science (e.g., NeurIPS, ICML, ICLR), general science (e.g., Nature Communications), and medical outlets (e.g., Nature Medicine, Lancet Digital Health, NEJM AI).

Dr. Steve Gardener

PrecisionLife Ltd

Title

Discovering and Validating Clinically Actionable Biology in Complex Diseases

Abstract

Precision and preventative medicine are predicated on understanding the causal biological drivers of disease risk and resilience, and predicting the likely symptoms, prognoses, and drug responses of individual patients. This is especially challenging in complex diseases, which account for 80% of healthcare spending. This talk will describe a discovery-to-bedside approach integrating AI methodologies, multi-omic datasets, clinical validation, and personalized testing to optimize drug discovery and healthcare applications.

About Dr. Steve Gardener

Steve has 35 years’ experience building world-class teams, products and companies in precision medicine, drug discovery and computational biology in the UK, EU and USA.

He has developed world-leading AI-enabled genomics, analytics, drug discovery, precision medicine, and diagnostics tools for life science and healthcare, including several patented inventions. He was Global Director of Research Informatics for Astra and was involved in the early Human Genome Project. He has worked with multiple pharma companies on 30+ drug discovery and development projects.

Steve is Chair of the UK BioIndustry Association’s Data, AI and Genomics Advisory Committee, Scientific Advisor to Our Future Health, Industry Advisory Board member for UK Biobank and the Health Data Research Service, and Expert in Residence at Oxford University. He advises a range of disease charities, national biobanks and research institutes on data, AI and precision medicine strategy.

Nikita Makarov

Valinor Discovery, Helmholtz Munich

Title

LLM-based digital twins for clinical trials

Abstract

This presentation explores the use case of generative AI digital twins within pharmaceutical clinical trials, highlighting our latest advancements in LLM-based modeling. We will showcase our DT-GPT model, as well as our latest framework, TwinWeaver, and our pan-cancer foundation model, GDT.

About Nikita Makarov

Nikita Makarov is a senior scientist at Valinor Discovery, where he focuses on multimodal patient predictions. He completed his PhD specializing in generative AI and digital twins for clinical trials & real-world data, in a collaboration between Roche, Helmholtz Munich, and the Ludwig Maximilian University of Munich. During his time at Roche and Genentech, Nikita’s research centered on the continuous pre-training of large language models (LLMs) to construct patient-level foundation models and deploying custom LLMs across various portfolio projects. His contributions to the field earned him the 2024 Roche Inventor Silver Medal and the 2025 Roche IMPACT award, which he shared with his team. He continues to advance this research at Valinor, pushing the boundaries of how multi-modality can enhance predictive modeling in healthcare.

Dr. Oliver Frings

Siemens Healthineers

Title

Digital Twins in Healthcare – From Predictive Models to Active Reasoning

Abstract

Digital twins are emerging as a key enabler for precision medicine, driven by advances in healthcare AI, imaging, and computational modeling. This session explores the concept of patient twinning at Siemens Healthineers, focusing on the creation of personalized digital replicas from multimodal clinical data. By integrating data-driven and mechanistic models, these twins provide a framework to support clinical decision-making and therapy planning. Finally, the presentation situates digital twins within the broader AI-driven healthcare ecosystem, emphasizing their potential to translate research innovation into tangible clinical impact.

About Dr. Oliver Frings

Dr. Oliver Frings is the Head of Computational Modeling within the AI Germany department at Siemens Healthineers, where he specializes in the advancement of patient twinning and precision medicine. With nearly a decade of experience at the intersection of technology and healthcare, he has previously held various leadership roles in product management and clinical collaborations, focusing on oncology and clinical decision support systems. He holds a Ph.D. in Computational Biology from Stockholm University.