2025 CDISC + TMF Europe Interchange Program

Program is preliminary and subject to change.
Registration Desk Open
Lobby
Welcome Coffee
International Foyer, Floor -2
Session 1: Opening Plenary
International Ballroom, Floor -2
CDISC recognizes the importance of amplifying patient voices and provides a vital platform where experiences, challenges, and insights are not only shared but valued. It is inspiring that brilliant minds have gathered at this event, committed to advancing clinical research, optimizing the use of data, and improving lives.
Collaboration is the cornerstone of meaningful progress in research. Patients are more than subjects of research; we are partners. Our insights must guide study design, data interpretation, and regulatory discussions.
Living with a rare disease is challenging for the patient and their family. Still, rare diseases bring patient advocacy to a new level, investing in progress, cooperation, and co-creation. This leads to innovation and inclusivity in care, clinical research, and medicines but also in data standardization and data use.
Standardization and interoperability are essential—not just for efficiency but for equity, ensuring diverse patient voices are represented and data is accessible and accurate.
CDISC’s commitment to streamlined standards accelerates therapies, enhances research reliability, and promotes inclusivity.
I invite you to join me in this ongoing conversation, where together, we can bridge the gap between research and real-life experiences to drive impactful, compassionate innovation.
Morning Break
International and Geneva Foyers, Floors -1 & -2
Session 2: Track A, B, C - The European Landscape of Clinical Research and Health Care
International Ballroom, Floor -2
Are there any new activities in leveraging data and artificial intelligence (AI) to support public health in the European Union (EU)? How are ongoing data submission and standardisation activities at the European Medicines Agency (EMA) evolving?
This presentation will introduce the newly formed joint HMA-EMA Network Data Steering Group, which focuses on data that the European medicines regulatory network receives, analyses, or offers advice on. It will also cover recent updates under DARWIN EU®, the Data Analysis and Real-World Interrogation Network; EMA's clinical study data project (individual patient data in electronic structured format, e.g. CDISC SDTM); and the proof-of-concept study for Standard for Exchange of Nonclinical Data (SEND) packages in centralised procedures.
The International Council for Harmonisation’s (ICH) M11 guidelines signify a major advancement in clinical trial design by promoting standardization in protocol preparation and data exchange. The purpose of this new harmonised guideline is to introduce the clinical protocol template and the technical specification to ensure that protocols are consistently prepared and shared in a harmonised data exchange format that meets regulatory authorities’ requirements. The ICH M11 Clinical Electronic Structured Harmonised Protocol Template offers comprehensive organization of clinical protocols with standardized content, including both required and optional elements. Furthermore, the Technical Specification (TS) accepted by all ICH regulatory authorities outlines the conformance, cardinality, and technical attributes essential for the interoperable electronic exchange of protocol content. This initiative aims to create an open, non-proprietary standard that facilitates the electronic exchange of clinical protocol information, enhancing efficiency in clinical research. Nick Halsey will explore these pivotal advancements at the CDISC EU 2025 conference in Geneva.
With new legislation in the US, namely the 21st Century Cures Act (2016), the Meaningful Use Act (2016) and the CMS Interoperability and Prior Authorization Final Rule (CMS-0057-F, 2024), and the EHDS regulation (2025) in the EU, we see an increased focus on closing the data access challenges for both clinical care and research. This presentation will have a primary EU focus on EU-funded initiatives that support interoperability between the 27 member states to achieve the EHDS regulation, which came into effect on 27 March 2025.
- Nick Halsey, EMA
- Jesper Kjaer, Novo Nordisk
- Eftychia-Eirini Psarelli, EMA
Session 2D+E: The Future of TMF (TMF Track)
Europe Ballroom, Floor -2
An introduction to the TMF part of the CDISC Interchange.
An insight into the impact of ICH E6 R3 on every aspect of the trial master file - from sponsor to investigator to archive. Your opportunity to ask questions!
The TMF Reference Model has brought structure and consistency to clinical research — now, it’s evolving into something even greater: a global standard. This session will unveil the vision for V4. Learn how familiar elements like numbering and structure remain, while powerful changes like Record Groups, Record Types, and multi-level filing pave the way for seamless interoperability and regulatory alignment.
This isn’t just a model update — it’s an industry-wide transformation driven by a diverse, global community. Join us to explore what’s coming, why it matters, and how you can be part of building the future of TMF.
Lunch
International & Geneva Foyers, Floors -1 & -2
Poster Session (During Lunch Break)
Geneva Rooms, Floor -1
Session 3A: Digital Data Flow
Zurich, Floor -2
The Unified Study Definition Model (USDM) has evolved significantly since its inception in the summer of 2021. During this period, we've witnessed the introduction of ICH M11, with Version 4 of the model scheduled for release in the coming weeks. CDISC is now collaborating with HL7 through the Vulcan project to enable protocols conforming with the M11 template standard to be exchanged using HL7 FHIR resources.
This presentation will provide an update on the USDM and its latest release and detail the ongoing work with ICH and Vulcan. Additionally, we'll examine the adoption of USDM by TransCelerate member organizations and explore how M11, USDM and FHIR came together as part of the FDA's PRISM pilot.
Imagine you are asked to build a digital study in 15 minutes based on a PDF protocol. Impossible? Sounds like magic? See the impossible made possible.
This presentation will show how it can be done. Using a technology demonstrator, we will convert a PDF protocol document into a complete digital study setup using the combined power of AI, Biomedical Concepts (BCs) and the Unified Study Definition Model (USDM).
Last year, we used the CDISC Pilot study in USDM Excel format to demonstrate the vision of end-to-end automation. This time we will use a protocol in PDF to create a Schedule of Activities (SoA) in USDM using Generative AI. We will then be able to take you on a study journey, enrol subjects and enter data. By extending the USDM model with Clinical Recording Model (CRM) and its relationship to SDTM, we will show how the entered data is displayed in SDTM, without the use of mapping, and show traceability back to how the data was collected.
In addition, the same model can be used to generate the define.xml, SDTM trial design datasets and annotated CRF artefacts, and to recommend data forms from a forms library. To emphasize the importance of standards, in particular using a digitised protocol, we will perform the demonstration using several protocols, some not known to us beforehand. Please note that the technology demonstrator is not a commercial product; it is there to show and make real a potential automated future.
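To make the idea of a machine-readable protocol concrete, the minimal sketch below shows what a Schedule of Activities might look like once extracted into a structured form and pivoted into the familiar activity-by-visit grid. The field names are simplified illustrations inspired by USDM concepts (epochs, encounters, activities), not the normative USDM schema, and the visits and activities are invented.

```python
# Hypothetical, simplified schedule-of-activities structure; not the normative USDM schema.
soa = {
    "studyId": "DEMO-001",
    "encounters": [
        {"id": "V1", "name": "Screening Visit", "epoch": "Screening"},
        {"id": "V2", "name": "Week 4", "epoch": "Treatment"},
    ],
    "activities": [
        {"name": "Informed Consent", "plannedAt": ["V1"]},
        {"name": "Vital Signs", "plannedAt": ["V1", "V2"]},
    ],
}

def soa_grid(definition: dict) -> dict:
    """Pivot the structured definition into the familiar activity-by-visit grid."""
    visits = [e["id"] for e in definition["encounters"]]
    return {
        act["name"]: {v: ("X" if v in act["plannedAt"] else "") for v in visits}
        for act in definition["activities"]
    }

print(soa_grid(soa))
# {'Informed Consent': {'V1': 'X', 'V2': ''}, 'Vital Signs': {'V1': 'X', 'V2': 'X'}}
```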
The implementation of the Unified Study Definition Model (USDM) in Novo Nordisk StudyBuilder has unveiled new cross-functional insights and dependencies in study start-up and downstream data processes. This presentation will highlight the importance of balancing standards and innovation and propose ways to further enhance collaboration between USDM and the pharmaceutical industry to address burning issues. Emphasis will be placed on how to balance current standardization efforts and at the same time pave the way for future innovative studies.
Additionally, we will address the challenges faced by new stakeholders and metadata users who are accustomed to highly detailed processes and documents, and explore how the Digital Data Flow can help drive innovation in these areas.
Leveraging USDM streamlines the creation and configuration of clinical trial databases and related systems. Study build automation has been explored from different angles: for instance, Oracle has collaborated with COSA, specifically OpenStudyBuilder, and during CDISC 360 with TransCelerate and Nurocor, to execute full study build automation in Electronic Data Capture (EDC). But what happens after the initial setup is done and the study goes live? Protocol amendments have increased substantially in the last 10 years. Implementing these amendments in parallel and downstream systems is burdensome and time-consuming. It also involves heavy change management, as amendments can impact data collection, operational adjustments and even changes to clinical practices – most if not all requiring additional trial staff training.
In this presentation, we take a closer look at navigating post-go-live changes when studies are set up through automation using USDM / M11 and list the identified challenges for EDC and beyond.
Session 3B: Artificial Intelligence
Londres, Floor -2
Data Standards teams or Subject Matter Experts (SMEs) often face the same recurring challenge: addressing numerous questions about the implementation of SDTM and company-specific standards. These queries consume significant time and resources. Locating the rationale for past decisions or clarifying an approach often requires consulting multiple resources.
Enter ‘SANDY’ (Standards ANswers Do it Yourself): an AI-powered chatbot designed to alleviate this burden. While it may not replace the Data Standards team, SANDY enhances efficiency by quickly retrieving relevant information from a vast array of documents and providing direct references to the sources.
In this presentation we will share the story of SANDY's development into a Minimum Viable Product (MVP): from selecting the ideal large language model (LLM) and setting up a vector database to creating a functional, user-friendly chatbot. We'll discuss the different challenges and limitations we encountered and the innovative solutions we have implemented, together with an external partner, to overcome these hurdles. We'll also explore how we trained the model and iteratively improved response quality and accuracy. The session will conclude with a live demonstration, including pre-designed questions.
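As an illustration of the retrieval pattern behind such a chatbot, the sketch below indexes a couple of standards snippets and retrieves the most relevant one for a question. The bag-of-words "embedding" is a deliberate stand-in for the embedding model, vector database and LLM the SANDY team selected, and the snippets themselves are invented.

```python
# Minimal retrieval sketch: index small text snippets, find the best match for a question.
import math
from collections import Counter

def embed(text: str) -> Counter:
    # Stand-in embedding: bag-of-words counts. A real system would call an
    # embedding model and store dense vectors in a vector database.
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

documents = [
    "SUPPQUAL records require IDVAR and IDVARVAL to link back to the parent record.",
    "Company standard: --TESTCD values must not exceed 8 characters.",
]
index = [(doc, embed(doc)) for doc in documents]

def retrieve(question: str, top_k: int = 1) -> str:
    q = embed(question)
    ranked = sorted(index, key=lambda d: cosine(q, d[1]), reverse=True)[:top_k]
    # In the real chatbot, the retrieved text would be passed to the LLM as context,
    # together with the user question; here we simply return the sources found.
    return "\n".join(doc for doc, _ in ranked)

print(retrieve("How long can a TESTCD value be?"))
```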
Artificial Intelligence (AI) is transforming clinical trial research, and this presentation explores its integration into the CDISC Open Rules project. I will demonstrate a custom-trained GPT-powered chatbot embedded in the CDISC Open Rules editor, designed to assist users in creating and validating CDISC Open Rules efficiently—without programming expertise.
The session will cover the chatbot’s architecture, highlighting prompt engineering’s role in generating high-quality outputs. I’ll guide attendees through document preparation for accurate, context-aware results and showcase real-world examples of rule creation and validation. The AI-driven chatbot automates rule drafting, test data generation, and interactive support, reducing manual effort while enhancing efficiency.
This presentation will inspire attendees—whether data managers, statisticians, or other clinical trial professionals—to embrace AI-driven solutions in their work. By demonstrating a practical application of a GPT-powered chatbot, I’ll provide actionable insights to enhance workflows, improve efficiency, and foster innovation.
Clinical data standards have traditionally supported regulatory submissions by ensuring data quality, consistency, and interoperability. However, the evolving landscape of clinical research is shifting from purpose-specific data to reusable data, enabling the use of advanced data analytics involving, among others, Deep Learning, Big Data Analytics and NLP. This transition highlights the growing importance of standardized datasets in AI-driven clinical research.
This presentation will explore how current data standards facilitate AI applications in clinical research, including predictive modeling, drug repurposing, and clinical evidence generation. We will also discuss key challenges in data standardization for AI, such as integrating unstructured and multi-modal data. Finally, we will highlight areas requiring further development and propose the incorporation of specific extensions that enhance the metadata framework and facilitate interoperability, while ensuring that CDISC standards continue to support the growing role of AI in clinical research and regulatory submission.
This presentation showcases how we leverage pre-trained transformer-based NLP models to enhance the reuse of CDISC standards, ensuring consistency and fostering innovation in clinical trials.
As clinical trial data grows in complexity, we must manage standards for specialized datasets like biomarkers and genomics. Unlike early standardization efforts focused on safety and efficacy, today’s challenges require more sophisticated approaches. However, our current retrieval tools, reliant on pattern matching, struggle with terminology variability and fail to efficiently identify existing standards. This results in excessive manual effort or duplication of metadata, leading to inconsistencies across studies.
To address these challenges, we must rethink information retrieval and develop tools that provide seamless access to existing standards. We present an innovative approach that improves retrieval within our clinical data standards library, reducing redundancy and ensuring better metadata management.
Session 3C: Innovation Showcase

New York, Floor -2

The TCS ADD™ platform accelerates the speed-to-market for the life sciences industry across the entire clinical R&D value chain and helps make clinical trials more agile and safe. It embraces and adopts novel digital approaches to streamline data complexity and enables the next level of drug development through its preventive and augmented approach. The platform is powered by our proprietary cognitive intelligence engine, data-driven smart analytics, and IoT that provide superior business value to the life sciences industry. The digital health platform leverages the best of cloud architecture and personalized user experience design in compliance with quality guidelines and privacy regulations.
Compliance with CDISC data standards is mandatory for clinical trial submissions to the FDA, PMDA and MHRA. However, adopting CDISC standards isn't just a necessity; it's an important investment that enables more meaningful research and deeper data insights. Complying with CDISC data standards is an ongoing challenge for many organizations.
This presentation explores best practice processes to support and achieve successful implementation and compliance with regulatory requirements. We examine the concept of ‘designing studies with the end in mind,’ through early standards adoption. Essentially, how implementing industry standards from the start of a study, and designing a compliant trial upfront, rather than leaving compliance to the end, is the ultimate blueprint for best practice. We also examine the essential role of software in creating and maintaining clinical metadata standards.
In conclusion, we demonstrate through a case study how end-to-end standards implementation can be leveraged to not only achieve compliance, but also facilitate greater quality and consistency, as well as faster delivery of submission deliverables.
CDISC Dataset-JSON is an emerging standard with transformative potential, but without the right tools, its benefits remain hidden. Just as the Apollo program accelerated technological leaps, this standard can help to redefine the way we review clinical data in our industry.
This presentation introduces an open-source viewer designed to unlock Dataset-JSON's potential. It explores how features like intuitive navigation, data streaming and easy filtering can shift the paradigm: no more "needle in a haystack" struggles, just purposeful exploration of the data. Looking ahead, this presentation will discuss next-generation capabilities of the viewer — embedded validation with CDISC CORE and interactive analytics — to make data review faster, easier, and more actionable.
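As a flavour of the row-level exploration such a viewer enables, the sketch below filters an in-memory dataset laid out with "columns" and "rows" keys. That layout mirrors the general shape of Dataset-JSON but is a simplified assumption, not the published schema, and the records are invented.

```python
# Minimal sketch of viewer-style filtering over a simplified, Dataset-JSON-like structure.
sample = {
    "name": "DM",
    "columns": [{"name": "USUBJID"}, {"name": "AGE"}, {"name": "SEX"}],
    "rows": [["STUDY-001", 34, "F"], ["STUDY-002", 51, "M"]],
}

def filter_rows(dataset: dict, column: str, predicate) -> list:
    """Return rows (as dicts) where the named column satisfies the predicate."""
    names = [c["name"] for c in dataset["columns"]]
    idx = names.index(column)
    return [dict(zip(names, row)) for row in dataset["rows"] if predicate(row[idx])]

print(filter_rows(sample, "AGE", lambda v: v > 40))
# [{'USUBJID': 'STUDY-002', 'AGE': 51, 'SEX': 'M'}]
```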
Session 3D: Technology in TMF Management (TMF Track)

Munich & Paris, Floor -2

In a world where technology is changing faster than ever, it can be tough for Trial Master File (TMF) teams, who work under strict rules and oversight, to keep up and actually see the benefits of new technology. This presentation looks at practical ways to make the most of cutting-edge solutions and approaches while staying compliant. We will examine real use cases leveraging Generative AI, Voice assistants, NLP and more. Topics include:
1) Seek Out New Ways of Working and Foster a Culture of Experimentation
2) Overcoming Operational Hurdles in a Regulated Environment
3) Measuring Efficiency and Driving Continuous Improvement
This approach encourages teams to view challenges as opportunities to innovate, with the support of colleagues who are open to testing and refining new ideas.
By following these steps, TMF teams can confidently harness today's fast-moving tech to boost efficiency without breaking the rules. Attendees should leave excited about the opportunity to bring new technologies into their organizations, with a better understanding of these technologies and how they can be used, and with practical adoption techniques to ensure successful implementation in their organization.
In the digital age of Trial Master File (TMF) management, Veeva eTMF is a leading platform for ensuring compliance, accuracy, and efficiency. Despite its widespread use, variability exists in how TMF content is filed, processed, and maintained. Analyzing trends and processes from sponsors and CROs using Veeva eTMF provides insights to identify best practices, inefficiencies, and opportunities for improvement. This session will explore TMF data trends, focusing on document creation, collaboration, centralized vs. decentralized filing, AI’s impact on TMF quality, and performance metrics for different models (FSP vs. FSO) and across Reference Model zones. By analyzing TMF data, companies can gain new insights to improve decision-making, drive operational efficiency, and enhance document quality across clinical trials.
TMF metrics and KPIs have long been a core part of ensuring inspection readiness. But as clinical trials evolve, the value of TMF data goes far beyond just compliance.
In this session, we'll explore how these metrics can play a much bigger role in driving trial innovation and optimization.
We'll start by taking a look at the metrics and KPIs most commonly used to achieve inspection readiness. Things like quality, completeness, and timeliness are essential benchmarks, but they're not without their challenges. We'll highlight where organizations typically run into roadblocks and what can be done to overcome them.
From there, we'll shift gears to explore more advanced ways of using TMF data. The session will show how analyzing TMF data differently can help companies embrace a risk-based approach, reduce QC cycle times and decrease overall TMF management efforts.
Finally, we'll push the boundaries of what's traditionally expected from TMF metrics. We'll look at emerging use cases, such as how TMF data can support predictive analytics to improve trial conduct, unlocking hidden value from TMF data beyond inspection readiness.
This session will challenge attendees to think differently about TMF metrics and KPIs. By shifting the focus from compliance to innovation, organizations can improve trial oversight, enhance decision-making, and ultimately optimize how trials are conducted.
At the end of this session, participants will:
- Understand the role of TMF metrics and KPIs beyond inspection readiness and how they can be leveraged for trial innovation and operational improvements.
- Learn practical ways to use TMF data insights to drive efficiencies, improve collaboration, reduce cycle times, and proactively manage risks.
- Explore emerging use cases for TMF data in areas such as predictive analytics, risk-based monitoring, and vendor management to optimize clinical trial conduct.
The TMF comprises a variety of clinical systems, each of which is considered a TMF repository. TMF repositories are to be validated per the published EMA guideline titled "Guideline on the content, management and archiving of the clinical trial master file (paper and/or electronic)" released in December 2018. The TMF Reference Model added the Computer System Validation tab with version 3.0 in 2015. The tab supports the organization of documentation associated with validation of clinical systems utilized in a clinical trial. Since the EMA GCP Inspectors Working Group published the "Guideline on computerised systems and electronic data in clinical trials" in March 2023, the TMF RM Computer System Validation tab has become that much more visible, as it highlights the expectations of the guideline. This presentation is important for all TMF management professionals and clinical systems professionals who perform validation or assurance-related activities, so they can ensure that their companies comply with the EMA guidelines.
Session 3E: TMF Culture and Engagement (TMF Track)
Copenhague & Lisbonne, Floor -2
The unusual combination of having numerous pharmaceutical companies within impossibly close geographical proximity has sparked something very interesting in Denmark, namely intercompany networks. One of these is the Danish TMF Network.
The Danish TMF Network has grown from a handful of people asking each other about the newfangled “electronic TMF” at a TMF conference to more than 30 members from over 15 Danish pharmaceutical companies, discussing topics such as recent inspection trends, process improvements and TMF engagement amongst non-TMF’ers.
The network provides a safe space to share ideas without the fear of judgement and a place to have any questions answered, as there is more than 300 years of combined TMF experience among the members.
From the unlikely origin story to the process improvements the group has fostered within the individual companies, this presentation hopes to inspire other TMF’ers to unite across companies.
Panelists:
- Melissa De Swaef, argenx
- Georgiana Brahy, Parexel
- Liz Farrell, Agios
All clinical trial parties understand the importance of making new treatments available to patients globally. However, many overlook the significance of the TMF and its evolution from a paper archive into a valuable information resource. Maintaining an inspection-ready TMF at all times minimizes workload during inspections and provides a single source of truth for all study-related information.
TMF success goes beyond technology and process optimization. It is fundamentally shaped by the organization’s culture. A cohesive culture strategy that engages both end users and functional area leads is key to long-term TMF management success. A strong TMF culture creates awareness of its value and supports continuous inspection readiness.
This panel will explore the benefits and essential elements of a strong TMF culture, as well as how to establish it within an organization, including cascading it to external partners involved in a study.
Building an internal inspection preparation program within your TMF team can significantly enhance inspection success. This involves collaborating with various functions to understand their TMF processes and supporting an "Always Inspection Ready" (AIR) mantra with your study teams.
This presentation will explore how to partner effectively with your study manager to ensure adherence to TMF Plans, address overdue documentation issues, and identify areas for improvement. The program will also account for study-associated risks, providing a comprehensive approach to readiness.
We will discuss strategies to enable AIR, promote a positive TMF culture, perform test-drives with various functions and ensure the TMF accurately reflects the study. This proactive approach, starting well before a potential inspection, ensures your team is prepared, allowing you to pace yourself and peak at the finish line. Aim to elevate your readiness with a robust inspection preparation program.
- Vittoria Sparacio, Novartis
- Torsten Stemmler, BfArM
- Hobson Lopes, Regeneron
- James Martin, Syneos Health
Afternoon Break
International and Geneva Foyers, Floors -1 & -2
Session 4A: CDISC 360i
Zurich, Floor -2
CDISC 360i is driving the shift to a fully digital standards ecosystem, eliminating traditional silos and enhancing data interoperability. This session will provide an overview of the 360i technical roadmap, focusing on the implementation of the Unified Study Definitions Model (USDM) and advancements in biomedical concepts, rules, and data exchange standards. Learn how linked, machine-readable standards are enabling seamless data flow from study design through regulatory submission. Discover how collaboration with industry stakeholders and the adoption of automation and AI are accelerating clinical research. Join us to explore how 360i is driving greater efficiency across the clinical development lifecycle.
This presentation provides a deep dive into the design and implementation of the activity concept within OpenStudyBuilder, showcasing its role in the broader context of clinical study automation as envisioned by CDISC 360i. We will compare OpenStudyBuilder's graph-based model with CDISC Biomedical Concepts, focusing on the rationale behind design choices, including structural adaptations and enhancements. Additionally, we will discuss challenges encountered during development, highlighting lessons learned and future opportunities to refine the approach.
GSK is pioneering a CDISC-native, end-to-end (E2E) knowledge graph approach to accelerate clinical trials, aligning with CDISC 360i and open-source initiatives like Pharmaverse. Their strategy involves: 1) unifying models by linking an implementation ontology based on ODMv2 to Analyses and USDM study definition for seamless protocol design and implementation; 2) developing a reusable analysis framework separating definitions from outputs for faster study delivery; and 3) committing to E2E data capture and automation from study design to results. GSK invites industry collaboration to shape and refine CDISC 360i, fostering a more interoperable clinical data ecosystem for faster drug development.
Session 4B: CDISC Foundational
Londres, Floor -2
The Study Data Tabulation Model (SDTM) provides methods to define the complex interconnection of data as dataset/record relationships through the Related Records (RELREC) special-purpose dataset. At AstraZeneca, in the CVRM therapeutic area, we have initiated work with the RELREC dataset to standardize, simplify, guide, and possibly automate the handling of these relationships.
- What values are appropriate to ensure unique links?
- What are the shortcomings of designing linking variables based on each RAW dataset as a standalone entity?
- How can a sponsor define and maintain relationships that function across clinical studies?
This presentation will describe our approach to addressing these questions and the strategy we have adopted to maintain useful data relationships, even when one is not in full control of all the sponsor standards or study needs. Furthermore, we explore possible automation for managing related records and stress-test our RELREC implementation.
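For readers less familiar with RELREC, the sketch below builds the pair of records that links an adverse event to the concomitant medication taken to treat it. The variable names are those of the SDTM special-purpose RELREC dataset; the identifier values and the helper function are invented for illustration and do not represent AstraZeneca's implementation.

```python
# Minimal sketch of record-level RELREC entries linking an AE record to a CM record.
def relrec_record(rdomain, usubjid, idvar, idvarval, relid, studyid="STUDY-001"):
    return {
        "STUDYID": studyid, "RDOMAIN": rdomain, "USUBJID": usubjid,
        "IDVAR": idvar, "IDVARVAL": idvarval, "RELTYPE": "", "RELID": relid,
    }

relrec = [
    # Both records carry the same RELID, which is what expresses the relationship.
    relrec_record("AE", "STUDY-001-0001", "AESEQ", "2", relid="AE_CM_01"),
    relrec_record("CM", "STUDY-001-0001", "CMSEQ", "5", relid="AE_CM_01"),
]
for rec in relrec:
    print(rec)
```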
In the current regulatory landscape, harmonizing clinical trial data according to industry standards is essential at every stage from data collection to analysis, including non-CRF data which can make up to 70% of trial data. Using CDISC-based controlled terminologies is key for achieving consistent data across studies. However, one-dimensional codelists do not ensure sufficient data harmonization, especially when converting unstructured scientific documentation into standardized formats.
This presentation outlines a Value Level Metadata (VLM)-based solution for non-CRF data collection at Roche. Initially, manually created VLM reports accessed through a metadata repository are expanded both scientifically and technically and integrated into machine-readable metadata workflows. Using a generic VLM ontology and meta-programming approach, Roche aims to automate authoring tools and data transfer specifications, enhancing automation and machine readability. Challenges of maintaining extensive sets of VLM and proposed technical solutions for corner cases, along with benefits for the generation of Biomedical Concepts, are explored.
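As a simple illustration of what value-level metadata expresses, the sketch below makes the expected unit of a lab result depend on the value of LBTESTCD and checks a record against it. The structure and the check are illustrative assumptions only; they are not Roche's VLM ontology or the Define-XML encoding of VLM.

```python
# Minimal value-level metadata sketch: attributes of LBORRESU conditional on LBTESTCD.
vlm = {
    ("LB", "LBTESTCD=GLUC"): {"LBORRESU": "mmol/L", "datatype": "float"},
    ("LB", "LBTESTCD=HGB"):  {"LBORRESU": "g/L",    "datatype": "float"},
}

def check_units(record: dict) -> list:
    """Return messages when the reported unit differs from the value-level expectation."""
    key = ("LB", f"LBTESTCD={record['LBTESTCD']}")
    expected = vlm.get(key)
    if expected and record.get("LBORRESU") != expected["LBORRESU"]:
        return [f"Expected unit {expected['LBORRESU']} for {record['LBTESTCD']}, "
                f"found {record.get('LBORRESU')}"]
    return []

print(check_units({"LBTESTCD": "GLUC", "LBORRES": "5.4", "LBORRESU": "mg/dL"}))
```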
The CDISC Protocol Deviation (PD) Sub‐Team has updated the Protocol Deviation (DV) domain in SDTMIG 4.0 with a new variable DVCLASI (Classification of Protocol Deviation).
DVCLASI could be used to address the request from the draft FDA guidance "Protocol Deviations for Clinical Investigations of Drugs, Biological Products, and Devices" (FDA PD Guidance), section III.B.2, to "include a variable in the DV domain that provides the sponsor's determination of whether the protocol deviation was important."
Open topics from the PD sub‐team include:
- DVCLASI controlled terminology based on ICH E6 R3 and the draft FDA PD Guidance
- Development of the DV Codetable for DVDECOD and DVCAT
- PD levels of site, study, country
- EMA Serious Breaches
Session 4C: Academia
New York, Floor -2
Many pharmaceutical companies and Contract Research Organizations have established internal systems and processes to comply with CDISC standards in Japan. However, CDISC standards have yet to become widespread in academia. One reason is that academia does not have enough opportunities to be directly involved in the regulatory submission process, yet academic staff would still like to obtain knowledge and skills in CDISC standards. In December 2022, we established a specific team within the CDISC Japan User Group (CJUG) SDTM team, which mainly consists of clinical research support staff affiliated with academia. This team's primary purpose is to develop professionals capable of implementing CDISC standards in academia by enabling beginners without prior experience to acquire CDISC knowledge and skills through a mock clinical trial protocol and the creation of SDTM datasets. This presentation outlines specific activities and achievements in Japan, illustrating how to implement CDISC standards in academia.
Current and emerging CDISC standards have an important role in the successful long-term retention, preservation and access to clinical data. Use of CDISC standards during the retention period can support both internationally recognised digital preservation good practice and regulatory requirements such as ALCOA+ data integrity. Yet this seems to be little recognised in the community and is rarely used to justify their use. This presentation will: (a) review retention and archiving requirements, for example as described in ICH E6 (R3); (b) present the long-term retention and preservation benefits of CDISC open-standards for data and metadata; (c) show how CDISC standards align with long-term digital preservation good practice, for example from NARA and the DPC; and (d) discuss how planning for long-term retention and data management from the outset of a trial can reduce both costs and risks over the long-term when trial data and records are archived and preserved.
Research Electronic Data Capture (REDCap) is a free, user-friendly, web-based interface that requires no background knowledge or technical experience to use, designed specifically for academic, public health and non-profit institutions. The REDCap consortium consists of thousands of institutions and millions of users and studies, representing a huge pool of data that could be tapped to support clinical trials. Yet there are many challenges to academia adopting CDISC standards for research. In recognition of that, CDISC has partnered with REDCap to help bridge the gap. This review of a healthy volunteer research study delves more deeply into the REDCap and CDISC connection, as well as outlining the methods, surprises and challenges of mapping the resulting data into SDTM. In conclusion, with patience, REDCap data can be successfully mapped, but additional outreach to academia on the existence and use of standards may be beneficial.
Session 4D: Risk Based Approaches (TMF Track)
Munich & Paris, Floor -2
Regulators around the globe are encouraging Sponsors and related stakeholders to take a risk-based approach to support their clinical research work. But what does this really mean? The TMF Reference Model Steering Committee supported a Risk initiative in late 2023 to examine risk management principles and provide support to the TMF area on considerations for the industry. This session will discuss the initiative and its deliverables and identify some key messages from the white paper that is being released at this conference.
Regulators around the globe are encouraging Sponsors and related stakeholders to take a risk-based approach to support their clinical research work. But what does this look like, and how should it be documented and managed? The TMF Reference Model Risk initiative developed a tool to support risk management in the area of the TMF. This session will discuss how to use the tool and develop a plan for mitigation of risk.
- Karen Roy, CDISC / Epista
- Torsten Stemmler, BfArM
- Joanne Malia, Regeneron
- Paul Fenton Carter, Montrium
Session 4E: Fundamentals of TMF (TMF Track)
Copenhague & Lisbonne, Floor -2
The TMF is more than just a regulatory requirement—it’s a key tool for running efficient and compliant clinical trials. This presentation highlights how treating the TMF as a strategic asset, rather than just a checklist, can improve trial operations and outcomes.
Key topics include best practices for using eTMF data performance metrics to drive continuous improvement. Attendees will learn how a well-managed TMF helps meet regulatory requirements, speeds up trial timelines, and strengthens compliance—ultimately setting organizations up for long-term success.
The TMF Reference Model is known for providing standardized nomenclature and structure for our TMFs. As a well-adopted tool in the industry, it is used by many sponsors as the foundation for standardization. But what else is it, or what else can it be? Imagining the full potential of the model, it can fulfil several roles, referred to as "identities": a source of information for multiple stakeholder groups, a source of truth in managing processes, a foundation for automation and record exchange, and a facilitator of necessary cross-functional communication. All those roles/identities and their potential will be described, and pre-requisites for each role will be suggested. The aim of the topic is to provide ideas on how a complex Excel structure can be used to "bring TMF topics to life".
TMF completeness continues to be at the forefront of an inspection-ready TMF. The methodology used to assess TMF completeness needs to be carefully designed to ensure we don't miss aspects that an eTMF application won't be able to consider, and then supplemented with processes.
Understanding the eTMF application's capabilities and limitations is the first step in designing the methodology. Once it is designed, setting up efficient processes using the CIMPD framework to identify and plug the TMF completeness gaps is extremely important.
Identifying the right resources and the right time to identify the TMF completeness gaps, along with the optimal utilisation of the eTMF application features, significantly impacts the profitability of a study as well as its quality.
Interchange Evening Networking Event (MUST be Registered for the Evening Event to Attend)

Uptown Geneva

Registration Desk Open
Lobby
Welcome Coffee
International Foyer, Floor -2
Session 5A: CDISC Open Rules
Zurich, Floor -2
The integration of CDISC CORE in a statistical compute environment, such as SAS, allows analysts to take advantage of applying Conformance Rules to clinical submission domains. The process involves expressing current CDISC Conformance Rules in a common specification format, which is then loaded into the CDISC Library. Each Conformance Rule requires the development of an executable component to facilitate its application. Combining the functionalities of the Python-based CDISC CORE with the capabilities of SAS enables analysts to work with an analytics language of their choice to create the validation reports.
To deliver high-quality datasets fast in this evolving industry, it is crucial to continue to ensure compliance with relevant data conformance rules and regulatory requirements.
SGS invests in the CDISC Open Rules project, which aims to deliver executable data conformance rules for each foundational standard. We've integrated the Rule Editor and created custom rules; however, fully implementing CDISC Open Rules in-house requires significant effort. Collaboration between end users and developers is crucial for successful implementation, despite challenges like misaligned expectations and communication gaps.
We will share our experiences, detailing tools, processes, and validation procedures for implementing CDISC Open Rules. This will emphasize the importance of teamwork and open dialogue with CDISC. In addition, we will address common challenges and share our strategy to foster cross-departmental collaboration and streamline implementation.
Overall, we want to provide insights into overcoming implementation challenges and highlight the benefits of adopting CDISC Open Rules industry wide.
In November 2023, the FDA and CDISC started a three-year research collaboration agreement (RCA). The purpose of the RCA is the development and maintenance of FDA business rules as part of the CORE open-source project. The CORE volunteers create specifications that become machine-executable by writing the code in YAML and storing the rules in the CDISC Library. For the community, it means a single version of the rules that cannot be interpreted in different ways; for the FDA, it means that all stakeholders will use the same rules, independent of the application used to run them.
In this co-presentation we want to show the process and development of the rules. Furthermore, we will show the community how to implement these rules in existing software and how companies can develop their own set of (especially quality assurance) rules, and add these to an existing CORE implementation.
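To illustrate what a machine-executable conformance rule does once a specification is run by an engine, the sketch below expresses a common type of check (a start date must not be after its end date) as plain Python. This is not the CORE YAML rule format or the CORE engine itself, and the records are invented.

```python
# Minimal sketch of an executable conformance check over SDTM-like records.
def check_start_not_after_end(records, start="AESTDTC", end="AEENDTC"):
    """Flag records whose ISO 8601 start date is later than the end date."""
    issues = []
    for i, rec in enumerate(records):
        if rec.get(start) and rec.get(end) and rec[start] > rec[end]:
            issues.append({"row": i, "message": f"{start} is after {end}"})
    return issues

ae = [
    {"USUBJID": "001", "AESTDTC": "2025-01-10", "AEENDTC": "2025-01-12"},
    {"USUBJID": "002", "AESTDTC": "2025-02-05", "AEENDTC": "2025-02-01"},  # violates the rule
]
print(check_start_not_after_end(ae))
```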
Session 5B: Analysis Results Standard
Londres, Floor -2
In April 2024, CDISC published the Analysis Results Standard (ARS) to support automation, consistency, traceability, and reuse of results data. To promote the adoption and implementation of ARS, CDISC has partnered with Clymb Clinical to instantiate the first version of ARS-compliant packages in the eTFL Portal. Each of the twelve packages contains an analysis overview, design considerations, TFL preview, as well as a download containing ADaM Dataset and Metadata, ARS Metadata, Analysis Results Dataset, and display.
This presentation will provide an overview of the eTFL portal, associated assets, and future development.
Test-Driven Development (TDD) is a design strategy that aids in collaboratively developing robust software, such as R Shiny data visualization apps. TDD involves writing tests for each feature agreed upon with stakeholders before coding.
For apps generating tables, listings, and figures, it is best to separate statistical calculations (business logic) from output (service logic), minimizing code in the service part, which is harder to test. Business subcomponents produce the results datasets read by service subcomponents to create displays.
To streamline the design of results datasets and enhance collaboration between programmers and testers, we use the CDISC Analysis Results Standards (ARS). This presentation will demonstrate how ARS simplifies the TDD process, ensuring efficient and effective development.
The Analysis Results Standard (ARS) is set to transform the way Tables, Figures, and Listings (TFLs) are generated in clinical studies. Today, an excessive number of TFLs are created—many of which are never utilized in the Clinical Study Report (CSR)—leading to inefficiencies, wasted resources, and increased timelines.
With the ARS and the ARDS model established, clinical datasets (SDTM and ADaM) are first structured into Analysis Results Data Standards (ARDS) before TFLs are generated. This ensures that only the most relevant and high-impact TFLs are developed, eliminating redundancy and streamlining the reporting process. The result? Faster study execution, improved compliance, and significant cost savings.
This session will feature an exclusive case study of ARS implementation in a leading pharmaceutical company, detailing the remarkable benefits accomplished. Attendees will gain first-hand insights into how ARS is reshaping clinical reporting standards and driving a paradigm shift in the industry.
Session 5C: Real World Data
New York, Floor -2
Our industry has been mapping data from one standard to another for decades. This includes mapping of clinical trial data from collection to submission standards as well as mapping other study data, like real-world data utilized for regulatory purposes. While standards such as CDISC and OMOP exist for the data itself, there is no industry standard for documenting the mapping process and metadata. Consequently, there are many different ways of documenting, noting and storing the mapping details. In most cases this information is not computer-readable, and the actual mapping is a non-standardized, case-by-case task, translating the mapping specification into computer programs.
This presentation shows how we store the mapping information to ensure data lineage and FAIRness, and to improve the efficiency in the mapping process. We will illustrate this by examples of mapping real-world data with an open source tool we are developing.
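As a minimal sketch of the idea of machine-readable mapping metadata, the example below applies a small specification to an OMOP-style record and keeps, for each derived value, the source field it came from. The chosen target variables and the structure of the specification are illustrative assumptions, not the presenters' tool or mapping model.

```python
# Minimal sketch: a machine-readable mapping specification plus the function that applies it.
mapping_spec = [
    {"source": "person_id",           "target": "USUBJID", "transform": str},
    {"source": "year_of_birth",       "target": "BRTHDTC", "transform": lambda y: f"{y}"},
    {"source": "gender_source_value", "target": "SEX",     "transform": lambda v: v[:1].upper()},
]

def apply_mapping(source_record: dict, spec: list) -> dict:
    # Each output value records which source field it came from, preserving lineage.
    return {
        rule["target"]: {
            "value": rule["transform"](source_record[rule["source"]]),
            "derived_from": rule["source"],
        }
        for rule in spec
    }

omop_row = {"person_id": 1234, "year_of_birth": 1980, "gender_source_value": "female"}
print(apply_mapping(omop_row, mapping_spec))
```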
The use of External Control Arms (ECAs) in clinical trials is increasing, particularly for rare diseases where typical randomized controlled trials may be difficult. Recent FDA guidance emphasizes both the potential and challenges of ECAs, with emphasis on data reliability, bias mitigation, adherence to CDISC SDTM and ADaM, statistical approaches such as propensity score matching, and regulatory communication. Additionally, CDISC and PHUSE have released guidance on integrating Real-World Data (RWD) into CDISC datasets.
In this presentation, we will summarize key insights from these documents regarding the use of RWD for ECAs and showcase two case studies on integrating ECA data into CDISC-compliant datasets:
(1) constructing an ECA using natural history studies and past RCTs, and
(2) leveraging publicly available RWD and RCT sources, such as the Critical Path Alzheimer's Disease database. We will discuss data integration, conformance challenges, and regulatory engagement, offering lessons for future rare disease studies.
Interoperability remains a key challenge in the digitalisation of healthcare, keeping health care and the everyday practice of medicine separate from the so-called secondary use of health data for research and public health. Within the framework of the European Health Data Space (EHDS), the xShare project empowers citizens to share computable health data through the "Yellow Button." As an incubator of the EHDS Standards and Policy Hub, xShare drives harmonization of specifications across stakeholders. The International Patient Summary (IPS) provides a minimal clinical dataset for unscheduled care. Building on the IPS, the IPS for Research (IPS+R) introduces metadata and semantics to bridge healthcare with research and public health, enhancing data reuse potential. Advanced business use cases linked to the patient's role in clinical research are presented, along with a call for early adopters of the "Yellow Button."
Session 5D: TMF Interoperability (TMF Track)
Munich & Paris, Floor -2
Packages for Clinical Trial Applications (CTA) submissions under EU-CTR are complex and the content may come from several sources such as the Trial Master File (TMF), the Regulatory Information Management (RIM) system, and others. To ensure the quality and completeness of the submission packages, sponsors must:
- Track the compilation accurately, gathering and organizing various records.
- Conduct structured data collection.
- Implement systematic tracking of the upload process to ensure successful record transfer.
- Adhere to specific nomenclature standards for clarity and uniformity in documentation.
Putting the records together in the internal submission platform, which can be part of the RIM system, plays a critical role in compiling these packages. This can be achieved through physical data transfer or by linking. The talk will present a successful approach with linking.
Panelists:
- Anne-Nöelle Charles, GSK
- Jay Smith, TransPerfect
- Jamie Toth, BeiGene
- Jaime Chang, Biogen
A panel discussion with SMEs (pharma, CROs, Vendors) that have (partially) successfully implemented TMF Interoperability:
- Seamless Data Exchange: The different systems involved in clinical trials (e.g., eTMF, Regulatory, Safety, supply, EDC... not only eTMF and CTMS, which is pretty common!) can share data effortlessly without the need for extensive manual re-entry or reconciliation. Data flows smoothly between systems, ensuring that updates in one system are reflected in the others.
- Standardization: Data formats and structures are consistent across systems. This standardization facilitates easier data mapping, integration, and reporting.
- Real-time Access: Stakeholders, including sponsors, CROs, and regulatory authorities, can access the most current and complete TMF data in real-time, facilitating better decision-making and faster responses to issues, whether the TMF system is used (not only eTMF).
- Enhanced Collaboration: Multiple stakeholders, including internal teams and external partners, can collaborate more effectively. Document sharing, review, and approval processes are streamlined, reducing delays and improving communication.
- Comprehensive Reporting: Integrated systems provide comprehensive reporting and analytics capabilities, enabling better oversight, monitoring, and management of TMF as a whole. This includes dashboards, key performance indicators (KPIs), and other tools to track progress and identify issues. As a result, the TMF metrics (completeness, timeliness, quality) reflect the entire TMF, not only primary eTMF.
- User-friendly Interface: The interoperable system should have an intuitive and user-friendly interface that allows users to easily navigate and manage documents, workflows, and data without extensive training.
Session 5E: TMF Management (TMF Track)
Copenhague & Lisbonne, Floor -2
The transition of Trial Master Files (TMFs) in ongoing studies, often termed "rescue" studies, presents a complex process crucial for ensuring study continuity.
This process may occur due to changes in sponsor ownership or shifts between Clinical Research Organizations (CROs). Effective TMF transitions require meticulous planning and execution, encompassing several key areas. These include technical review, focusing on transfer methods and audit trail management; timeline considerations, emphasizing coordination and resource planning; comprehensive risk assessment and mitigation strategies; alignment and mapping of TMF content with current operational frameworks; and development of robust review strategies for transferred content.
Additionally, learning from past transitions and ensuring smooth completion of the transfer process are vital. By addressing these critical aspects, stakeholders can effectively navigate the complexities of rescue studies, maintaining the integrity and continuity of clinical trials during TMF transitions. This approach equips professionals with the knowledge necessary to manage these challenging scenarios successfully.
During mergers and acquisitions, the Trial Master File (TMF) is a critical deliverable for a successful transition. This presentation provides essential tools for a seamless TMF transfer.
Key stages include:
1. Blueprint: Understand TMF components' importance in acquisitions
2. Framework Construction: Learn to organize and validate TMF documents for transfer readiness
3. Connecting Pieces: Explore communication and collaboration strategies for smooth TMF handovers, ensuring a unified transition.
4. Avoiding Failures: Identify common challenges in TMF acquisition and strategies to overcome them, maintaining TMF integrity.
5. Grand Finale: Celebrate TMF integration and explore leveraging it for future success.
Whether experienced or new to acquisitions, gain actionable insights and confidence for successful TMF transfers.
Ensuring the necessary documentation is in place and up to date is not usually a programmer's favorite task, especially when what is required in the guidelines is a bit vague. In this presentation, a programmer will present key considerations for Biometrics CROs to ensure that their parts of the TMF are managed proactively and in a timely manner to maintain inspection readiness.
A cross-functional team in Cytel was brought together to create the essential records process, ensuring it was as easy as possible to implement for the teams working on the projects. We used the TMF Reference Model as the basis for our Project Specific Essential Records Filing Plan, to provide details of exactly what documents were needed for each function.
We will cover what was straightforward, what was more challenging, and what we have learned on the journey so far.
Morning Break
International & Geneva Foyers, Floors -1 & -2
Session 6A: Regulatory Submissions
Zurich, Floor -2
In a significant collaboration, seven leading vaccine companies — AstraZeneca, GlaxoSmithKline, Johnson & Johnson, Merck, Moderna, Pfizer, and Sanofi — have formed the Vaccines Industry Standards Group (VISG). Over the past two years, this initiative has focused on harmonizing interpretations of regulatory submission guidance and recurrent feedback, as well as CDISC data standards. The group recognizes that aligning the understanding of requirements — such as participant diary data collection and the submission of reactogenicity and efficacy data — accelerates time to market and benefits global health.
This unified approach could facilitate future collaboration with Health Authorities and CDISC, aiming to update the CDISC Vaccines TAUG to meet current Health Authorities' expectations, thereby ensuring clarity and consistency in submission standards.
Our collaborative model can serve as a blueprint for other therapeutic areas within the pharmaceutical industry, demonstrating how organizations can work together to streamline regulatory processes while maintaining a competitive edge in product innovation.
The integrated summary of safety (ISS) is a critical component of a submission to the FDA. For the ISS, data from different studies are pooled and harmonised to conduct the integrated analyses. Different strategies can be used to create the integrated datasets.
The ISS may be accompanied by integrated SDTM/ADaM Define-XML files and integrated Reviewer's Guides (icSDRG and iADRG) to provide additional context and information about the integrated SDTM and ADaM datasets.
Based on a use case, we’ll explain in this presentation the approach we took to create CDISC compliant integrated datasets. Furthermore, we’ll share our experiences regarding the creation of an SDTM and ADaM Define-XML for integrated datasets as well as the icSDRG and the iADRG.
The FDA's Real-Time Oncology Review (RTOR) program accelerates the review of oncology clinical trials by allowing for the early submission of top-line results and datasets. In return, RTOR requires the submission of Analysis Data Model (ADaM) datasets which closely follow data specifications provided by the FDA Oncology Center of Excellence (OCE) and Office of Oncologic Diseases (OOD) Safety Team.
This presentation explores challenges related to the implementation of these ADaM specifications, with special focus on the non-standard Adverse Events Analysis Dataset for Cytokine Release Syndrome (CRS) and Neurotoxicity (NT) – ADCRSNT. We will describe the specifications the FDA provides for ADCRSNT; the additional data that sponsors need to prepare to support the analysis of CRS and NT events; and the innovative solutions employed at AstraZeneca to develop robust standards for this uniquely challenging dataset. Our discussion will highlight the importance of a well-documented approach to ensure seamless compliance with RTOR guidance.
Session 6B: Standards in Action
Londres, Floor -2
AZ Standard Output Library (AZSOL) is AstraZeneca’s standard for Tables, Figures, and Listings (TFLs), featuring templates for consistent design of outputs. Integrated with AZSOL General Principles and Basic Layouts, it ensures clarity and comparability of study results. ADaM annotations in AZSOL provide standard traceability supporting programming activities. AZSOL streamlines regulatory submissions, data review, and automation integration, offering consistent and structured clinical data outputs.
The library is integrated with an automation tool called MOSAIC Biometrics, enabling statisticians to effectively create TFL shells while ensuring adherence to AZ standards through automatic monitoring. Maintained by the AZSOL team within the Clinical Data Standards (CDS) group, AZSOL contributes to an end-to-end standardization of data across AstraZeneca. AZSOL and MOSAIC Biometrics enable future automation in TFL delivery using ADaM annotations and machine-readable metadata.
The application of CDISC standards in Pharmacokinetic Non-Compartmental Analysis (NCA) is essential to ensure standardization and better reproducibility of the datasets reporting PK concentrations and parameters. This presentation will outline best practices for executing CDISC-compliant PK NCA analysis, offering a clear understanding of how to optimize PK analysis workflows.
A key focus of this presentation is to provide insights on planning a robust programming framework to facilitate seamless analysis reruns, particularly between interim and final analyses, while adeptly handling both Single Ascending Dose (SAD) and Multiple Ascending Dose (MAD) studies. Strategic tips on the derivation of PK-related parameters and supporting variables within the workflow hierarchy will also be tackled, along with the intricacies of programming SDTM RELREC with a special focus on scenarios involving manually derived PK parameters. Furthermore, we will explore whether ADNCA can serve as a replacement for the traditional ADPC dataset.
The power of multi-omics (proteomics, transcriptomics, genomics, metabolomics and more) in unlocking the potential of biomarkers in drug discovery and development has gained significance in clinical science and the pharma industry. This has been witnessed by the increasing uptake of multi-omics data in clinical studies in recent years.
Fitting omics data into SDTM, however, is anything but straightforward. Apart from the examples and assumptions for the Genomics Findings (GF) domain in the Study Data Tabulation Model Implementation Guide, there is limited information on how other omics data are to be reported. In this presentation, we will share Novo Nordisk's journey in overcoming technical and process hurdles and finding our way to a viable end-to-end solution, from collection to reporting, that supports analysis needs.
Session 6C: ADaM
New York, Floor -2
Estimands, a concept established in ICH E9 (R1), are increasingly used in clinical trials and required by regulatory authorities. To provide guidance on the implementation of estimands and intercurrent events (ICEs) in ADaM programming, we developed an example trial. This trial included multiple ICEs per subject and different estimands.
We also included different imputation methods to analyze these estimands. We will present the definition of estimands, including ICEs, used in this example trial and then explain the programmatic implementation in more detail, i.e. how to structure ICE datasets and what variables are needed. Furthermore, we present how to set up the estimand-related efficacy ADaM dataset with additional variables and records allowing for thorough analysis of estimands.
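A minimal sketch of one building block described above: flagging assessments that fall on or after an intercurrent event so that different estimand strategies can include, exclude or impute them. Standard ADaM names (USUBJID, PARAMCD, ADT, AVAL) are used; the ICE date lookup and the flag name are illustrative assumptions, not the example trial's actual variables.

```python
# Minimal sketch: derive a post-ICE flag on BDS-style efficacy records.
records = [
    {"USUBJID": "001", "PARAMCD": "SYSBP", "ADT": "2025-03-01", "AVAL": 142},
    {"USUBJID": "001", "PARAMCD": "SYSBP", "ADT": "2025-05-01", "AVAL": 128},
]
ice_dates = {"001": "2025-04-15"}  # e.g. start of rescue medication (invented)

for rec in records:
    ice = ice_dates.get(rec["USUBJID"])
    # "Y" when the assessment falls on or after the intercurrent event; ISO dates compare lexically.
    rec["POSTICEFL"] = "Y" if ice and rec["ADT"] >= ice else ""

print(records)
```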
ADaM datasets are essential for clinical study analyses. Their structure and derivation algorithms are often documented in ADaM specifications before programming begins. However, at this stage, key documents may still be evolving and clinical data may be unavailable, potentially leading to incomplete or inaccurate specifications.
An alternative approach without predefined specifications involves conducting a test-run analysis using actual study data, guided by a central ADaM model aligned with CDISC standards. Statistical programmers develop the datasets based on a stable SAP version while an independent validator performs parallel derivations, ensuring unbiased results. Differences are addressed through discussion and refinement. Submission-ready metadata are generated in define.xml format alongside this first test run.
Dataset structure and codelists are extracted directly from the ADaM datasets, while the developer adds study-specific derivations. By integrating dataset development, validation, and metadata generation into a single workflow, this approach supports the creation of high-quality ADaM datasets while adapting to evolving study requirements.
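A minimal sketch of the metadata-harvesting idea, under assumed thresholds and a toy dataset (this is not the presenters' implementation): variable-level attributes and candidate codelists are read directly from a finished ADaM dataset as a starting point for define.xml content.

```python
# Harvest variable-level metadata and candidate codelists from an ADaM dataset.
# Thresholds and the datatype mapping are illustrative only.
import pandas as pd

adsl = pd.DataFrame({
    "USUBJID": ["01", "02", "03"],
    "AGE":     [64, 58, 71],
    "SEX":     ["F", "M", "F"],
    "TRT01P":  ["Placebo", "Active", "Active"],
})

def variable_metadata(df: pd.DataFrame, max_codelist_size: int = 10) -> pd.DataFrame:
    rows = []
    for col in df.columns:
        is_num = pd.api.types.is_numeric_dtype(df[col])
        uniques = sorted(df[col].dropna().unique().tolist())
        rows.append({
            "variable": col,
            "datatype": "float" if is_num else "text",
            # Short character value sets are proposed as codelists; everything else is left open.
            "codelist": uniques if (not is_num and len(uniques) <= max_codelist_size) else None,
        })
    return pd.DataFrame(rows)

print(variable_metadata(adsl))
```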
The variability in data representation across companies presents significant challenges in reviewing and analyzing ADaM datasets. Standardizing pharmacokinetic (PK) data for analysis with software like Phoenix WinNonlin is crucial. The ADaM Implementation Guide (IG) for Non-compartmental Analysis (NCA) Input Data addresses this by using the Basic Data Structure (BDS) with a subclass for NCA. This guide introduces new dosage-based flags (e.g., NCAXFL, NCAwXRS, PKSUMXF, METABFL) that enhance existing BDS variables, allowing for more precise PK analysis. By detailing necessary variables and standardizing naming conventions, the guide streamlines the analysis process. While we are still exploring the complexities of the ADNCA IG, we have gained valuable insights and practical experience.
Our exploration has provided us with valuable knowledge and practical experience, which we believe can be beneficial to others navigating similar challenges. We are eager to contribute to the broader discussion and help advance the collective understanding of this IG.
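For illustration only, the sketch below prepares NCA input records by computing the time since the most recent dose and setting a per-record analysis flag; the variable names are invented stand-ins rather than the ADNCA IG's definitions.

```python
# Illustrative preparation of NCA input records: hours since the most recent dose
# plus a flag marking records to feed the NCA run. Names are hypothetical.
import pandas as pd

pc = pd.DataFrame({
    "USUBJID": ["01"] * 4,
    "PCDTC":   pd.to_datetime(["2024-01-01 08:00", "2024-01-01 10:00",
                               "2024-01-02 08:30", "2024-01-02 10:30"]),
    "AVAL":    [0.0, 12.3, 1.1, 10.8],
})
ex = pd.DataFrame({
    "USUBJID": ["01", "01"],
    "EXSTDTC": pd.to_datetime(["2024-01-01 08:00", "2024-01-02 08:00"]),
})

# For each concentration record, find the most recent dose before (or at) sampling.
merged = pd.merge_asof(
    pc.sort_values("PCDTC"), ex.sort_values("EXSTDTC"),
    left_on="PCDTC", right_on="EXSTDTC", by="USUBJID", direction="backward",
)
merged["RELHRS"] = (merged["PCDTC"] - merged["EXSTDTC"]).dt.total_seconds() / 3600.0
merged["NCA1FL"] = "Y"  # hypothetical flag marking records intended for the NCA software
print(merged[["USUBJID", "PCDTC", "AVAL", "RELHRS", "NCA1FL"]])
```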
Session 6D: AI in TMF Management (TMF Track)
Munich & Paris, Floor -2
As artificial intelligence (AI) continues to revolutionize various industries, its integration into the field of Trial Master File (TMF) management illustrates both technological advancement and the enduring value of human contribution. While AI can offer unmatched efficiency, data processing, and predictive capabilities, it is the distinct personality traits of people that make human involvement indispensable and will secure the place of the TMF geek for a long time to come. Through collected data, this presentation will explore the personality traits that make us both unique and valuable in the TMF world, and how the collaboration between AI and humans can help foster an environment where technology enhances human potential, rather than replaces it, ensuring we continue to remain victorious.
Not technology savvy? No problem: you are just a few clicks away from having your very own TMF chatbot.
Discover the possibility of using an AI-powered chatbot as a virtual assistant designed to streamline access to critical documentation and procedural information specific to your organization. By leveraging key TMF reference materials, procedural guidelines, and indexes/reference models, the chatbot provides real-time answers to user queries, reducing the need for manual searches through extensive files.
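As a toy sketch of the retrieval idea behind such an assistant (invented snippets and plain word overlap standing in for a real search index and LLM), the code below returns the best-matching TMF reference snippet for a user query.

```python
# Toy retrieval step for a TMF chatbot: score reference snippets against a query
# and return the best match, which an LLM could then rephrase as an answer.
# Document contents are invented for illustration.
from collections import Counter
import re

documents = {
    "TMF Index 01.02": "Protocol signature pages are filed under zone 1, section 01.02.",
    "SOP-TMF-007":     "Blinded documents must be classified before upload and access restricted.",
}

def tokens(text: str) -> Counter:
    return Counter(re.findall(r"[a-z0-9]+", text.lower()))

def best_match(query: str) -> tuple[str, str]:
    q = tokens(query)
    # Score each snippet by word overlap with the query (a stand-in for vector similarity).
    scored = {name: sum((tokens(text) & q).values()) for name, text in documents.items()}
    name = max(scored, key=scored.get)
    return name, documents[name]

print(best_match("Where do I file the signed protocol signature page?"))
```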
- Yen Phan, elderbrook solutions
- Martin Rother, Daquma
- Martin Hausten, Boehringer Ingelheim
Session 6E: Partnerships in TMF Management (TMF Track)
Copenhague & Lisbonne, Floor -2
Emerging clinical trial sponsors often rely on Clinical Research Organizations (CROs) or other vendors for TMF management but remain responsible for oversight and compliance. Biotech organizations tend to underestimate the importance of early TMF oversight, leading to costly remediation at trial closeout. This session highlights the distinction between TMF management and oversight while offering practical ways to align Sponsor-CRO expectations. From Request for Proposal (RFP) to Archive, attendees will learn the value of integrating TMF requirements into study risk assessments, contracts, budgets, procedures, and governance reviews to prevent compliance issues, budget renegotiations, and relationship conflicts. By proactively assessing TMF needs, sponsors and CROs can reduce risks, ensure audit readiness, and avoid last-minute resource burdens.
The electronic Trial Master File is a cornerstone of clinical trial management, serving as a centralized repository for all essential documents required to ensure regulatory compliance and good clinical practice. User security is crucial, given the diverse stakeholders involved. The presentation highlights how security profiles and permissions are tailored to user roles, enhancing operational control and compliance. Customizable training ensures users are proficient, and access management procedures minimize risks, such as automating the classification of blinded documents and restricting unauthorized sharing. Security also extends to a robust document quality control process, ensuring that integrity and accessibility of trial documents are consistently maintained. A dedicated helpdesk is available to assist users, and continuous improvement efforts help keep the eTMF inspection-ready, ensuring trial documentation remains secure and compliant.
At the end of a study, CRO-to-Sponsor eTMF transfers consumed significant time and effort from internal IT teams, clinical teams, and validation and QA departments.
This case study outlines how argenx introduced a robust eTMF Migration Factory solution that:
- reduced internal IT effort by introducing automated QC checks and verifications (a sketch of one such check appears below)
- reduced clinical team effort by using already agreed patterns and automating eTMF transfer verifications
- reduced validation and quality effort by introducing technology that could re-use and re-execute agreed and validated migration business logic.
Overall, the results enabled argenx clinical teams (and others) to save significant time and focus on core business activities.
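A minimal sketch, under assumptions, of one such automated verification: comparing document inventories and checksums between a source (CRO) export and the target (Sponsor) import. Paths and folder layout are hypothetical.

```python
# Compare two eTMF export folders: flag missing files, unexpected files, and checksum mismatches.
import hashlib
from pathlib import Path

def inventory(root: Path) -> dict[str, str]:
    """Map each file's relative path to its SHA-256 digest."""
    return {
        str(p.relative_to(root)): hashlib.sha256(p.read_bytes()).hexdigest()
        for p in root.rglob("*") if p.is_file()
    }

def compare(source: Path, target: Path) -> dict[str, list[str]]:
    src, tgt = inventory(source), inventory(target)
    return {
        "missing_in_target": sorted(set(src) - set(tgt)),
        "unexpected_in_target": sorted(set(tgt) - set(src)),
        "checksum_mismatch": sorted(k for k in set(src) & set(tgt) if src[k] != tgt[k]),
    }

# Example usage (hypothetical export folders):
# print(compare(Path("cro_export"), Path("sponsor_import")))
```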
Lunch
International & Geneva Foyers, Floors -1 & -2
Session 7, Track A & B: AC/BC - Highway to Automation
Zurich & Londres, Floor -2
CDISC Biomedical Concepts (BCs) are structured, standardized units of knowledge that can be used to enhance data consistency and facilitate automation in clinical research. They are designed to fill gaps in existing standards by adding semantics, variable relationships, and detailed metadata needed to support the development of digital workflows in clinical research. This presentation will provide an update on the progress of BC development as well as the role that BCs play in CDISC's new 360i initiative, a project aimed at transforming the way we develop and use standards within clinical research by creating connected and interoperable information that enables automation, enhances data integrity, and accelerates innovation.
CDISC Biomedical Concepts (BCs) provide standardized templates for clinical observations, but the current BC library's limited coverage hinders widespread adoption. Creating new BCs manually requires significant expertise and effort. We present an AI-powered solution combining Large Language Models (LLMs) with the NCI Thesaurus to accelerate BC creation. Our three-stage pipeline automatically extracts candidate BCs from Therapeutic Area User Guides (TAUGs), matches them against the NCI Thesaurus using vector similarity, and transforms them into draft BC definitions using the Data Element Concept template. Developed in collaboration between Lindus Health and CDISC, this system was tested using the Breast Cancer TAUG. The pipeline is therapeutic area agnostic and requires minimal effort to process other TAUGs, potentially enabling rapid expansion of the BC library while maintaining quality through expert validation. This approach advances CDISC's vision of creating more connected standards.
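To illustrate the matching stage described above, here is a hedged sketch in which candidate concept terms (as an LLM might extract them from a TAUG) are scored against NCI Thesaurus entries by vector similarity; the embedding, terms, and codes are invented stand-ins, not the actual pipeline.

```python
# Toy matching stage: compare candidate terms to NCI Thesaurus entries by cosine similarity.
# embed() is a stand-in for a real embedding model; codes and terms are illustrative.
import math
from collections import Counter

def embed(text: str) -> Counter:
    # Toy character-trigram "embedding"; a real pipeline would call an embedding model.
    t = text.lower()
    return Counter(t[i:i + 3] for i in range(len(t) - 2))

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[k] * b[k] for k in a.keys() & b.keys())
    norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

nci_entries = {              # invented (code, preferred term) pairs for illustration
    "C0000001": "Tumor Size",
    "C0000002": "Body Mass Index",
}
candidates = ["tumour size measurement", "body-mass index"]

for term in candidates:
    code, match = max(nci_entries.items(), key=lambda kv: cosine(embed(term), embed(kv[1])))
    print(f"{term!r} -> {code} ({match})")
```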
In January 2025, a CDISC working group was established to define and model Analysis Concepts, an important step toward enabling end-to-end automation within the CDISC 360i framework.
The current USDM (Unified Study Definitions Model) defines objectives, endpoints, and schedule of activities (SoA), including Biomedical Concepts (BC). While these Biomedical Concepts facilitate downstream automation of study setup and data collection, a significant gap remains in the metadata required for derived and analyzed data and their relationship to USDM-defined endpoints. As USDM currently supports eProtocol creation, a natural extension toward supporting the electronic Statistical Analysis Plan (eSAP) is required.
This presentation will provide a status update on the working group's progress in describing use cases and defining and modelling Analysis Concepts, highlighting key developments and future directions in this standardization effort.
The potential benefits of biomedical concepts have been touted for many years, and as an industry we are finally on the cusp of implementing them more widely. At GSK, we have been experimenting with how a proto-concept, typically in the form of CDISC terminology, can be used as a bridge between entities in our protocols, collection, and SDTM value-level definitions. More recently, we are actively pursuing an automation agenda that includes protocol digitisation and wide-scale deployment of fully metadata-driven analysis results and analysis output creation, which further presses the need for biomedical concepts and additionally highlights the need for the industry to align on concept models for study design and analysis. This presentation will share examples of work so far, and some of the upcoming challenges which we hope to address through industry collaboration around CDISC 360i.
- Bess LeRoy, CDISC
- Amiel Kollek, Lindus Health
- Kirsten Langendorf, data4knowledge ApS
- Warwick Benger, GSK
- Igor Klaver, GSK
Session 7C: Applied Standards Governance
New York, Floor -2
In this presentation, we introduce an innovative method for standard library navigation in clinical trials, leveraging the define.xml tool to enhance data accessibility and user interaction. Our approach redefines the traditional library structure by incorporating both structured and unstructured data into a single, user-friendly interface. This method focuses on the concept of individual standard elements, providing a comprehensive, end-to-end representation of clinical trial topics. By integrating all necessary components of the library into one cohesive system, we enable seamless navigation and immediate access to critical trial information. This advancement not only simplifies library access but also accelerates trial setup and execution, representing a significant leap forward in clinical data handling. Join us to explore how this approach can transform standard library access and improve efficiency in clinical trials.
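As a minimal sketch of how a define.xml file can seed such a navigable index (assuming a standard ODM-based define.xml; the file path is hypothetical, and production parsing would also need the def: namespace), the code below flattens dataset and variable names into searchable rows.

```python
# Read dataset and variable names from an ODM-based define.xml into a flat index.
import xml.etree.ElementTree as ET

ODM = "{http://www.cdisc.org/ns/odm/v1.3}"

def index_define(path: str) -> list[dict]:
    root = ET.parse(path).getroot()
    # ItemDef OID -> variable name lookup.
    items = {i.get("OID"): i.get("Name") for i in root.iter(f"{ODM}ItemDef")}
    rows = []
    for group in root.iter(f"{ODM}ItemGroupDef"):       # one per dataset
        for ref in group.iter(f"{ODM}ItemRef"):          # one per variable in that dataset
            rows.append({"dataset": group.get("Name"),
                         "variable": items.get(ref.get("ItemOID"))})
    return rows

# Example usage (hypothetical file):
# for row in index_define("define.xml"):
#     print(row)
```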
Beyond ensuring consistent data quality and facilitating uniform processes, standardisation is also the primary driving force of operational efficiency in data collection, review, and reporting. By leveraging CDISC standards (CDASH and SDTM), biomedical concepts, and controlled terminologies, Novo Nordisk strives to attain 100% standardisation with full data lineage, from EDC to SDTM, and to optimise operational efficiency.
In this presentation, we will share the thought process and learnings from our journey to 100% standardisation and showcase the implementation of EDC raw data to SDTM automation, showing how it helps to streamline and accelerate data review processes as well as to obtain insights from historical data.
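The sketch below shows, purely for illustration, one way a metadata-driven mapping specification can drive EDC-raw-to-SDTM conversion while preserving lineage of source columns; the dataset, columns, and rules are invented and do not describe Novo Nordisk's system.

```python
# Metadata-driven mapping sketch: a small specification table drives renaming and
# constant assignment from a raw EDC extract to an SDTM-like domain, while the
# spec itself records the lineage (source column) of each target variable.
import pandas as pd

raw_vs = pd.DataFrame({
    "SUBJECT": ["01", "01"],
    "VSTEST":  ["Systolic BP", "Diastolic BP"],
    "VSRES":   [120, 80],
})

mapping_spec = [  # target variable, source column (or None), constant value (or None)
    {"target": "USUBJID", "source": "SUBJECT", "constant": None},
    {"target": "VSTEST",  "source": "VSTEST",  "constant": None},
    {"target": "VSORRES", "source": "VSRES",   "constant": None},
    {"target": "DOMAIN",  "source": None,      "constant": "VS"},
]

def apply_mapping(raw: pd.DataFrame, spec: list[dict]) -> pd.DataFrame:
    out = pd.DataFrame(index=raw.index)
    for rule in spec:
        out[rule["target"]] = raw[rule["source"]] if rule["source"] else rule["constant"]
    return out

sdtm_vs = apply_mapping(raw_vs, mapping_spec)
print(sdtm_vs)
```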
A recent PHUSE White Paper, reporting the outcome of an industry survey, highlighted significant variability in data governance structures among sponsors and CROs, as well as challenges in governing the implementation of standards. These and other challenges are particularly pronounced in CROs, where governance must balance regulatory compliance with sponsor- and study-specific requirements and preferences.
At Cytel, we implemented a light yet effective governance structure supported by cross-functional teams. A ticketing system, inspired by IT support frameworks, enables efficient issue resolution, continuous improvement, and knowledge sharing. Subject Matter Experts provide specialized support, while dashboards track ticket trends, identifying recurring issues and misconceptions. This enables proactive updates to processes, internal guidance, templates, and standards.
This presentation will offer practical insights into the complexities of CRO data governance, showcasing how a structured yet flexible approach—backed by efficient tools and collaboration—streamlines operations, enhances quality, and ensures compliance with industry standards.
At GSK, we recognize that standardization in clinical research is essential. To simplify access to clinical data standards, we developed the Data Standards Browser (DSB), a user-friendly platform. The DSB integrates metadata from the Metadata Repository (MDR) and offers read-only access to a broader range of standards and user guidance. It eliminates the need to navigate multiple databases by providing advanced search features to quickly find necessary standards, filtering results based on specific criteria. As a GxP system, it ensures regulatory compliance and high security for sensitive information.
We used Design Thinking and Agile methodologies to deliver this project accurately and on time. The Design Thinking approach involved Empathize, Define, Ideate, Prototype, and Test stages to understand user challenges and create innovative solutions. Agile methodology allowed continuous improvement based on real-time feedback. In summary, the DSB enhances productivity and process efficiency by addressing common issues in accessing clinical data standards.
Session 7D+E: The Future of TMF (TMF Track)
Europe Ballroom, Floor -2
An overview of the CDISC ISF Reference Model, including how it was derived, what it contains, and progress to date. The need for an ISF standard will be outlined, followed by a discussion with the Regulator on the importance of creating an ISF standard. Questions from the audience will be encouraged.
In this session we will review the different initiatives and deliverables that are planned for the next three years within the TMF Reference Model. We will focus not only on v4 but also beyond v4, covering the longer-term vision and objectives of the model. We will touch on the move towards record types, alignment with ICH E6 R3, development and expansion of the metadata standard, the development of process-based models, and other initiatives that are planned.
This session will provide TMF community members with a beginner’s guide to the goals and current progress with digital data flow (DDF) and the ICH M11 digital protocol standard. We will also show how the TMF standards will intersect with DDF as we design for the digital TMF.
TMF Oversight remains a key activity for organizations to demonstrate control and clarity and to tell the story of their trial. How is this key activity affected as our clinical systems become more digital? What opportunities are there to do things better? What new risks does that introduce? This panel will discuss the future of TMF Oversight from the perspective of a Sponsor, a CRO, and a Vendor.
Panelists:
- Nick Hargaden, Moderna
- Heather Childs, PPD
- Jim Horstmann, Veeva
Afternoon Break
International & Geneva Foyers, Floors -1 & -2
Session 8: Closing Plenary
International Ballroom, Floor -2
WHO published pivotal new guidance in September 2024. Based on a global World Health Assembly resolution on Strengthening Clinical Trials, WHO provides concrete recommendations on how to reform clinical trials to better address patient needs, develop safe and effective interventions for under-represented populations, and improve efficiency in clinical trial design and approval processes. The speaker will outline the new framework and the potential role of the CDISC community.