2024 CDISC + TMF Europe Interchange Program

Program is subject to change.
All times are in Central European Time.
Session 1: Opening Plenary and Keynote Presentation
Europa 5 + 6
Over the last two decades CDISC has become the global standard required by regulatory agencies and widely used by clinical research to collect, transform, and analyze clinical trial data. While CDISC has made significant achievements in establishing the standards, what has gotten us to today will not get us to where we need to be tomorrow. We have reached the ceiling in what we can accomplish with the standards without embracing significant transformation in how we instantiate and connect our content.
We will share our vision with the CDISC community for how we can expand and connect the standards to enable automation across clinical research while ensuring we focus on the consumers of our standards. This starts with expanding on, and closing the gaps in, the journey we began with CDISC 360 to align, connect, and publish easy-to-use connected standards, and with building a framework that can make this vision a reality.
In the CDISC vision presentation, you heard our vision for how we can expand and connect the standards and enable automation across clinical research. This journey started with the CDISC 360 concept and must now be expanded, and the gaps filled, to realize our vision. The deployment of this vision will require patience, investment, and a mindset change across the CDISC Community. Based on our 10-year vision, we have identified a three-year roadmap to continue that journey.
This presentation will share an overview of what we plan to accomplish over the next three years to continue the approach to enrich the standards, enable automation, and engage our community in the transformation journey.
In 2022, the International Council for Harmonisation of Technical Requirements for Pharmaceuticals for Human Use (ICH) M11 Working Group published its draft guideline for a Clinical electronic Structured Harmonized Protocol (CeSHarP) with a template and technical specifications. Two of the core principles of the template included “defining content for electronic exchange” and “design for content reuse.” Design principles for the technical specifications were notable for identifying the need for “developing a model based on specification.” These guidelines may be the catalyst to align many ongoing efforts in an impactful way.
The ongoing collaboration between CDISC and TransCelerate using an open-source approach to develop a Unified Study Definitions Model (USDM) will issue its third release in April. The USDM is delivered as a data model (i.e., names, attributes, cardinality, relationships) with controlled terminology definitions, an implementation guide, and a reference architecture with application programming interface (API) specifications. The scope of the USDM includes study definition information found in protocol documents such as general study information—phase, therapeutic area, indication—study design, schedule of activities and assessments, workflow and eligibility, and biomedical concepts to specify protocol-required data.
In June 2023, CDISC and Vulcan, an HL7 FHIR Accelerator, jointly announced via a press release their intention to utilize the USDM to accelerate the development of the content model envisioned as part of the ICH M11 project. This effort is supported by a Joint Leadership Forum that includes CDISC, ICH, TransCelerate, and the Vulcan Accelerator.
This presentation will focus on the “So What?” by discussing the real opportunity to bring together the collective know-how and the previously established building blocks. This is a classic example of innovation through collaboration and a path to true digitalization of clinical research. It is an opportunity to go digital first by intent. The potential impact on relevant stakeholders will be presented.
Morning Break
Session 2A: Foundational Standards
Europa 5
The FDA Center for Tobacco Products’ (CTP) mission is to protect Americans from tobacco-related disease and death by regulating the manufacture, distribution, and marketing of tobacco products and by educating the public, especially young people, about tobacco products and the dangers their use poses to themselves and others. To achieve this mission, CTP performs science-based application review in addition to compliance outreach, enforcement, regulation and guidance formulation, and other product regulation activities.
This presentation will describe the culmination of work as part of the collaborative project commenced by CTP and CDISC to develop nonclinical and clinical data standards for tobacco studies to speed regulatory review and decision making. Innovative approaches to standardization developed for the Tobacco Implementation Guide (TIG) v1.0 will be highlighted with relationships between this guide and established CDISC standards. Lessons learned from this initiative and initial steps post publication will also be discussed.
Analysis results play a crucial role in the drug development process, providing essential information for regulatory submission and decision‐making. However, the current state of analysis results reporting is suboptimal, with limited standardization, lack of automation, and poor traceability. Currently, analysis results (tables, figures, and listings) are often presented in static, PDF‐based reports that are difficult to navigate and vary between sponsors. Moreover, these reports are expensive to generate and offer limited reusability. To address these issues, the CDISC Analysis Results team has developed a standard to support consistency, traceability, and reuse of results data. This presentation will provide an overview of how to get started using the new standard.
Data are at the core of evidence-based research and practice. This talk will give an overview of the use of SDTM standards for antimicrobial resistance (AMR) data and share lessons learned through this work.
Infectious Diseases Data Observatory (IDDO) is working with clinical data to facilitate sustainable and coherent data reuse. In collaboration with the GRAM2 Project, IDDO has been working on developing a new curation pathway for AMR.
The majority of individual patient data (IPD) is curated within the Microbiology (MB) and Microbiology Susceptibility (MS) domains. For aggregated data, we used a draft of the Site Information (SI) domain and developed IDDO Controlled Terminology for SIPARMCD, SIPARM, SIVAL, and SIVALU. Furthermore, we developed a way to record both HAI (hospital-acquired infection) and CAI (community-acquired infection) using a new variable, MBINFCAT.
Currently, only a limited amount of AMR data is available for reuse. IDDO is working with GRAM2 to address this gap and facilitate better-quality data globally.
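As a concrete illustration of the structures described above, the following is a minimal R sketch of MB records carrying MBINFCAT alongside aggregated SI records using the SIPARMCD/SIPARM/SIVAL/SIVALU pattern; all values, organisms, and parameter codes are invented for illustration and are not IDDO terminology.

```r
# Minimal R sketch of the structures described in the abstract. All values
# and parameter codes are invented; only the variable names (MBINFCAT,
# SIPARMCD, SIPARM, SIVAL, SIVALU) come from the abstract.
library(tibble)

# Individual patient data: Microbiology (MB) records flagged as
# community- vs hospital-acquired infection via the new MBINFCAT variable
mb <- tribble(
  ~STUDYID, ~USUBJID,  ~MBORRES,        ~MBINFCAT,
  "GRAM2",  "001-001", "E. coli",       "CAI",
  "GRAM2",  "001-002", "K. pneumoniae", "HAI"
)

# Aggregated data: Site Information (SI) records following the
# SIPARMCD / SIPARM / SIVAL / SIVALU pattern
si <- tribble(
  ~STUDYID, ~SIPARMCD,  ~SIPARM,                     ~SIVAL, ~SIVALU,
  "GRAM2",  "NISOLATE", "Number of Isolates Tested", "1200", NA,
  "GRAM2",  "NBEDS",    "Number of Hospital Beds",   "450",  NA
)
```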
Session 2B: AI in Standards
Europa 6
In the rapidly evolving landscape of clinical research, the need for efficient and accurate interpretation of regulatory documents is paramount. We will explore a novel approach to this challenge, termed ‘Regulatory Intelligence’, which leverages Large Language Models (LLMs) to extract and standardize information from such documents.
The Clinical Data Interchange Standards Consortium (CDISC) provides a comprehensive set of guidelines and standards for the collection, exchange, reporting, and archiving of clinical research data. While these standards are crucial for ensuring data quality and interoperability, their breadth and depth can be daunting, particularly for organizations with limited resources or experience.
We will discuss how LLMs can be used to assist in interpreting and applying CDISC standards. By extracting and standardizing information from regulatory documents, LLMs can guide organizations through the compliance process more efficiently and effectively. This approach not only reduces the burden of compliance but also enhances the quality and consistency of data reporting.
Clinical trial data is a valuable resource for improving trial design and accelerating research. However, much data remains locked in free-text formats across sources like clinicaltrials.gov, which has outcome data for over 60,000 completed studies. Large language models present an opportunity to unlock this data and transform it into structured, queryable information. This presentation describes an approach that uses AI to map outcome data containing numerical, categorical and free-text columns to standardized endpoint definitions like CDISC Biomedical Concepts. This creates a structured dataset, connects historical data to emerging standards and models, and enables new use cases. Researchers can search outcomes by domain or metric to find precedents to inform trial design. Data can be aggregated for meta-research and benchmarking, and predictive modeling on this harmonized data could optimize future trials. By transforming free-text outcomes into structured endpoints mapped to standards, AI can bring legacy clinical trial data back to life and accelerate research through data-driven trial design.
This session promises a deep dive into the transformative realm of generative AI applications within the highly regulated pharmaceutical landscape.
Explore the forefront of innovation as we unravel the potential of generative AI in deciphering complex Study Data Tabulation Model (SDTM) and Analysis Data Model (ADaM) datasets. Through compelling use cases, we'll showcase how these advancements don't just streamline intricate data analyses but also hold the key to unprecedented benefits for patients. The presentation goes beyond the technicalities, shedding light on the delicate balance of compliance and ethical considerations inherent in the pharmaceutical industry.
To enrich the discourse, we'll share invaluable insights gleaned from initial Proof of Concept (PoC) learnings, providing a practical and enlightening perspective. Join us in shaping a responsible course for clinical breakthroughs, where the synergy of generative AI, compliance, and ethics paves the way for a new era in pharmaceutical excellence.
Session 2C: Navigating Submission Pathways
Berlin 1-3
To incorporate the patient's voice in drug development and evaluation, FDA is developing a series of methodological guidances on Patient-Focused Drug Development (PFDD), covering the collection of patient experience data, identification of what is most important to patients, selection of fit-for-purpose clinical outcome assessments (COAs) and their use as endpoints, and the increasing roles of such data and related information in drug development and benefit-risk assessment. FDA has also issued related technical specifications guidance documents for submission requirements. The presentation will summarize the PFDD-related guidance issued over the past several years and highlight its impact on data submission and on communication with regulatory agencies.
In January 2014, the FDA issued the first version of its Technical Conformance Guide for public review. When the final version (2.0) was released in December 2014, a pivotal moment occurred: sponsors were given a two-year window to adapt their methods of creating clinical dataset packages. This adaptation was necessary to comply with the FDA's required data standards for any study commencing after December 16th, 2016.
Fast forward through approximately 30 versions, with the latest, version 5.6, released last December: the guidance has undergone significant changes. Not only has it grown from 38 to 88 pages, but its content has also changed substantially, and the requirements the FDA sets for sponsors have been impacted accordingly.
Between 2021 and 2023, the FDA released 13 versions. If you find yourself fatigued from spotting differences, fear not! This presentation will simplify things, highlighting the most substantial changes and new requirements.
To the patient, drug efficacy and safety are equally important, but this is not reflected in the traditional way of analysing safety data. Until recently, safety data in clinical trials have often been analysed only rudimentarily. Several initiatives, including from the FDA, now suggest a change, moving from describing the safety data to parameter estimation.
Based on the 2022 FDA safety analysis guidance, we developed templates and analysis datasets allowing for exposure-adjusted analyses: KM% and EAIR, their differences, hazard ratios (HR), and confidence intervals (CI).
We followed the ADaM basic data structure (BDS) for time-to-event analyses, handling each endpoint as a separate event with a binary censoring variable (event or no event). By combining several components (AE, level of AE, time period, and actual event), a unique PARAMCD was created.
All the possible combinations create a vast number of parameters, but with a systematic approach we could automate the process to create the parameters effortlessly.
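The parameter construction described above can be sketched in a few lines of R; the component codes and naming scheme below are assumptions for illustration and not the authors' actual convention.

```r
# Hedged R sketch of building unique PARAMCD values for time-to-event
# safety endpoints by combining components (AE grouping, severity level,
# time period, event type). The coding scheme and groupings are
# illustrative assumptions only.
library(dplyr)
library(tidyr)

components <- expand_grid(
  ae  = c("A1", "A2"),  # adverse event grouping (illustrative codes)
  lev = c("A", "S"),    # A = any severity, S = severe only
  per = c("T", "F"),    # T = on-treatment period, F = follow-up period
  evt = "1"             # 1 = first occurrence of the event
)

adtte_params <- components |>
  mutate(
    PARAMCD = paste0("TT", ae, lev, per, evt),  # stays within 8 characters
    PARAM   = paste0(
      "Time to first ", ifelse(lev == "S", "severe ", ""),
      "event in AE group ", ae, " (",
      ifelse(per == "T", "on-treatment", "follow-up"), ")"
    )
  )

# Each PARAMCD is then analysed in a BDS time-to-event structure with
# AVAL = time to event or censoring and CNSR = 0 (event) / 1 (censored).
```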
Session 2D: TMF Inspections (TMF Track)
Zoo 4 + 5
Dr. Torsten Stemmler obtained his doctorate in Biology (focus on Neurobiology and Psychophysics) at Bremen University, Germany, in 2011. He then moved to RWTH Aachen, where he worked as a post-doc on visual perception. He retrained as a Data Manager and developed database solutions at the University Hospital Aachen. In 2017, he started at the Federal Institute for Drugs and Medical Devices as a GCP inspector.
After five years as a GCP inspector, he has contributed within the regulatory network to the development of guidelines, qualification opinions, and scientific advice. His main contributions are on trial documentation, electronic data, computer systems, and artificial intelligence.
Artificial intelligence has captured his interest since his time as a doctoral candidate, when he was part of a team at Serre Lab that tried to distinguish cognitive states by measuring pupil size (Single-trial decoding of binocular rivalry switches from oculometric and pupil data, Journal of Vision, 2011).
• Marion Mays, Jerion Consulting
• Jamie Toth, BeiGene
• Torsten Stemmler, Federal Institute for Drugs and Medical Devices (BfArM)
Lunch & Poster Session
Europa 2 - 4
Transforming data is a critical element of any analysis, yet there is often a lack of consideration for code reproducibility, leading to cumbersome reuse and duplicated effort in future research. To address this, the Infectious Diseases Data Observatory (IDDO) has created the {iddoverse} R package, a reproducible, open-source solution for converting IDDO-customised SDTM data into analysis datasets by amalgamating results, selecting variables, pivoting data into a wide format, and providing additional insights before merging domains together, facilitating streamlined dataset creation. It also accommodates modifications to the output, allowing users to specify additional actions or features.
Subsequently, this approach saves researchers time, particularly in Low- and Middle-Income Countries, by eliminating the need for advanced programming skills or intricate knowledge of the CDISC standards. The {iddoverse} promotes data equity and the accessibility of SDTM globally, while demonstrating the feasibility of generating standardised analysis datasets using custom implementations of SDTM.
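To illustrate the pivoting step described above, here is a generic tidyverse sketch; it does not use or reproduce the {iddoverse} API, and the data are invented.

```r
# Generic tidyverse sketch of pivoting long SDTM-style lab (LB) results
# into one row per subject and visit with one column per test. For
# illustration only; it does not use or reproduce the {iddoverse} API.
library(dplyr)
library(tidyr)
library(tibble)

lb <- tribble(
  ~USUBJID, ~VISIT,     ~LBTESTCD, ~LBSTRESN,
  "S-001",  "BASELINE", "HGB",     13.2,
  "S-001",  "BASELINE", "PLAT",    250,
  "S-001",  "DAY 7",    "HGB",     12.8,
  "S-002",  "BASELINE", "HGB",     11.9
)

lb_wide <- lb %>%
  pivot_wider(
    id_cols     = c(USUBJID, VISIT),
    names_from  = LBTESTCD,
    values_from = LBSTRESN
  )

# lb_wide can then be merged with other domains (e.g., DM) to build an
# analysis-ready dataset.
```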
Collaborating with players armed with unique abilities is key to crafting an unbeatable strategy. Regarding standardized clinical trial data, cross-departmental collaboration can help break down departmental siloes while enhancing the quality of study data, leading to earlier availability of effective treatments.
Many organizations have enlisted SDTM programmers to unravel SDTM validation issues. However, some issues require tracing SDTM data back to its roots to optimize decision-making. Integrating data managers into this process can help slash time to resolution and boost overall data quality.
This poster will provide important considerations on weaving data managers into the SDTM validation process. Suggested workflows will illustrate how to level up your approach to resolving validation issues. Curating appropriate FDA validation rules along with detailed examples will showcase how these are best served by data managers. Lastly, suggested training and ideas to power-up your data managers will equip you to battle issues at the source.
Regulations and guidelines require that study data be retained and archived after the end of a study. During this time, GxP data integrity needs to be maintained, the data should remain inspection ready, and a clear Data Management Plan (DMP) should be in place. This poster presents a framework for taking a risk-based approach to long-term data integrity of clinical data. Topics include relevant regulations and guidelines, long-term data integrity risks and challenges, community resources and standards, and how digital preservation good practice provides a way to deliver ALCOA++ over multi-decade timescales. The poster includes the use of CDISC standards to help ensure data can be understood and used in the future, which forms an important part of a DMP and aligns with good practice for Long Term Digital Preservation (LTDP), where open and standards-based formats are a cornerstone.
Application Programming Interfaces (APIs) have become a fundamental component of the digital age, enabling different applications to communicate with each other; however, this potential is unknown to the majority of SAS users.
This paper explores how the SAS procedure PROC HTTP can handle a connection between two applications, covering the request procedure, response handling, and the parsing of returned data.
The author shows some of the relevant use cases discovered while learning and implementing APIs using SAS, with a focus on how to integrate the CDISC Library and Microsoft Teams in a Statistical Programming workflow.
Furthermore, a secure and advisable methodology for the implementation and management of APIs is defined through the author's best practices, aimed at preventing misconfigured security settings and exploitable vulnerabilities.
In conclusion, obtaining maximum value and achieving the full potential of APIs in a clinical data analytics environment requires a thorough understanding of them, in addition to their effective implementation and subsequent management.
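The paper itself works with SAS PROC HTTP; purely as a language-neutral illustration of the same request/response/parse pattern, here is a hedged R sketch using httr2, where the CDISC Library endpoint path and the api-key header name are assumptions to be checked against the API documentation.

```r
# R analogue of the request/response pattern described for PROC HTTP,
# using httr2. The endpoint path and the "api-key" header name are
# assumptions for illustration; check the CDISC Library API docs.
library(httr2)

resp <- request("https://library.cdisc.org/api") |>
  req_url_path_append("mdr", "products") |>   # assumed endpoint
  req_headers(
    "api-key" = Sys.getenv("CDISC_LIBRARY_API_KEY"),
    Accept    = "application/json"
  ) |>
  req_perform()

products <- resp_body_json(resp)  # parse the JSON response into R lists
```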
For Clinical Data Standards at AstraZeneca (AZ), there is a consistent, end-to-end strategy to follow CDISC standards for data collection, tabulation, analysis, and submission. Since 2015, when the CDASH standard was initially introduced at AZ, the focus has been on becoming compliant with the most critical corporate data collection standards (e.g., Exposure, Disposition, Concomitant Medications, Demographics). From that point onwards, we have been continuously monitoring CDASH releases and performing business impact assessments for their implementation.
This poster shows the major milestones of CDASH standard implementation from the AZ perspective, ensuring a consistent alignment approach across its subsequent releases.
Exploring TMF management parallels orchestrating a harmonious ensemble across diverse generations. We study how TMF culture and engagement align with each generation and our company values: Learning, Customer Focus, Accountability, Commitment, and Tenacity.
Infusing the TMF with mentorship and adaptive training, the goal becomes high-quality outcomes.
During challenges, learning from adversity and prioritizing customers ensure the TMF's success.
Collaboration, refined training, and continuous improvement redefine TMF, inspiring excellence.
Goals? Embedding company values in every TMF execution, leveraging FDA inspection insights, fostering commitment, learning, and customer-centricity. In the TMF world, understanding diverse groups is vital, especially for Gen Z. Delving into TMF culture and engagement emphasizes quick answers, teamwork, mentorship, and seamless onboarding tailored to TMF life. Embracing differences between younger generations highlights the importance of mundane TMF tasks by proposing transformative ideas which reshape perceptions. Decoding TMF culture and engagement emphasizes its crucial role in maintaining operational efficiency, complementing individual styles.
In navigating the complex landscape of ADaM metadata, our poster delves into the strategic implementation of standardized rules for assigning Origin across diverse domains.
To gauge the level of understanding regarding the proper assignment of Origin for ADaM variables, we carried out a survey within our company, the results of which identified a large variation in interpretation. Faced with this ambiguity, we have proposed the use of a metadata assignment flowchart to ensure precision and consistency throughout our ADaM metadata, saving time and effort for the user.
We believe this lack of consistency exists not just in our company but is industry wide. We invite you to provide your insights during the conference by taking part in our interactive live survey!
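The flowchart itself belongs to the poster; purely as an illustration of how such assignment rules can be encoded as metadata logic, here is a hedged R sketch with simplified, assumed criteria.

```r
# Hedged sketch of encoding Origin-assignment rules as metadata logic.
# The criteria below are simplified assumptions for illustration and do
# not reproduce the poster's flowchart.
library(dplyr)
library(tibble)

admeta <- tribble(
  ~DATASET, ~VARIABLE, ~copied_from_sdtm, ~computed,
  "ADSL",   "AGE",     TRUE,              FALSE,
  "ADSL",   "TRT01P",  FALSE,             FALSE,
  "ADAE",   "TRTEMFL", FALSE,             TRUE
)

admeta <- admeta %>%
  mutate(ORIGIN = case_when(
    copied_from_sdtm ~ "Predecessor",  # value copied unchanged from SDTM
    computed         ~ "Derived",      # value computed from other variables
    TRUE             ~ "Assigned"      # value set from metadata/specification
  ))
```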
Reactogenicity of the study vaccine is often a primary objective in Vaccine trials. How the reactogenicity data is to be represented in SDTM is described in the CDISC Vaccines Therapeutic Area User Guide v1.1. In addition, the FDA guidance “Submitting Study Datasets for Vaccines to CBER OVRR Technical Specifications Document V2.1” contains several recommendations on how reactogenicity data should be stored in SDTM. At GSK, a suite of SAS conformance checks has been developed to assess if study SDTM data is aligned with FDA CBER OVRR recommendations. The checks are designed to support the review of reactogenicity data in the study SDTM datasets and can be executed at different time points during the trial.
Inconsistencies in the SDTM mapping can be detected at study set-up and a final run may be performed before database lock to ensure the study SDTM data is adequate to be submitted to FDA CBER OVRR.
The OpenStudyBuilder is developed as an outcome of the CDISC 360 project and as a Digital Data Flow (DDF) compliant solution. This open‐source project joined the CDISC Open Source Alliance (COSA). The vision is to make metadata driven study specifications based on concept based standards to be applied in protocol development, downstream system setup and SDTM submission deliverables. It includes an MDR component for managing versioned external and
sponsor‐defined data standards. This poster will showcase the OpenStudyBuilder as a new approach to working with studies that once fully implemented will drive end‐to‐end consistency and more efficient processes ‐ all the way from protocol development and CRF design ‐ to creation of datasets, analysis, reporting, submission to health authorities and public disclosure of study information.
The relationship between Sponsor and CRO is a tale as old as time: two groups that are sometimes at odds with each other but are, in truth, both working towards the same goal, and neither can succeed alone without ensuring success together. We will cover 12 rounds of tips and tricks with the ultimate aim of making sure you and your CRO become the best of friends and set the foundation for a happy and unified relationship.
Use case: A custom domain for induced pain, going from source to SDTM to ADaM.
Early-phase clinical trials regarding analgesic effects of a novel study drug necessitate induction of pain in healthy volunteers. The Centre for Human Drug Research (CHDR) has developed PainCart, a comprehensive battery of tests to assess efficacy of analgesic compounds by administering a wide variety of pain stimuli.
As this test battery is used in clinical trials that may later be submitted to the FDA, results of PainCart tests need to be converted into SDTM and ADaM datasets. However, there are no known SDTM domains to store this pain stimuli information. Furthermore, therapeutic area user guides (TAUG) regarding pain are related to chronic pain, which is not relevant for studies where pain is induced in healthy volunteers.
To overcome this challenge, we created a custom SDTM domain, called XP. This presentation highlights the process and challenges involved in creating a custom PainCart SDTM domain, and the conversion to ADaM.
In the ever-evolving landscape of data management, the harmonization of data has emerged as a pivotal process, promising increased efficiency and time savings. The combination of metadata-driven automation and purpose-built tools not only accelerates the dataset generation process but also enhances data quality and consistency, laying the foundation for more robust analytics and insights, and for their automation.
In this presentation, we will share the challenges and learnings we encountered with the different methods used to generate SDTM data across various scenarios. We will include real-life experiences with metadata-driven automation and tool-based techniques to achieve the goal of data harmonization. Learnings from the data harmonization work focus on the annotation process, standard units, define creation, and quality-check methods, along with dataset generation. Mastery of the harmonization art is not just a technical skill but a strategic imperative for organizations navigating the complexities of the modern data landscape.
Good quality, interoperable data from historical trials fills the gap between high‐quality, small‐scale pooling on the one hand, and huge datasets full of rather messy real‐world data on the other. However, ensuring quality of data across multiple studies ‐ and monitoring its evolution ‐ can be very challenging. Submission‐oriented checks, such as those provided by Pinnacle 21, focus on compliance within a single study and are not effective for assuring the general usability of data within a consolidated database comprising many diverse studies. We have developed an approach for monitoring the cross-study data quality using CDISC standards and will discuss the advantages and disadvantages while providing insights into our own experience of processing large numbers of historical trials to provision good quality clinical data for secondary use.
Controlled terminology standards play a crucial role in standardizing study data and achieving semantic interoperability in data exchange. These standards, encompassing definitions, preferred terms, synonyms, codes, and code systems, serve as a foundation for the analysis of clinical or scientific concepts.
With CDISC releasing controlled terminology versions quarterly, staying informed about version updates and their impact on the evolving landscape of codes plays a pivotal role. Understanding this evolution becomes paramount for maintaining data accuracy and relevance.
The poster visually illustrates the evolution of controlled terminology and underscores the critical need for timely implementation in long-running trials to enhance data standardization with use case experience from different studies.
Key words: Controlled Terminology, Data Standardization, Version updates
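As a minimal illustration of the version comparison the poster describes, the following R sketch identifies added and retired terms between two codelist versions; the codes and terms are invented placeholders.

```r
# Minimal sketch of comparing two controlled terminology versions to find
# newly added and retired terms. Codes and submission values are invented
# placeholders, not real NCI/CDISC terminology.
library(dplyr)
library(tibble)

ct_2023q4 <- tribble(
  ~CODE,      ~SUBMISSION_VALUE,
  "C0000001", "TERM A",
  "C0000002", "TERM B"
)

ct_2024q1 <- tribble(
  ~CODE,      ~SUBMISSION_VALUE,
  "C0000001", "TERM A",
  "C0000003", "TERM C"
)

added_terms   <- anti_join(ct_2024q1, ct_2023q4, by = "CODE")  # new in the later version
retired_terms <- anti_join(ct_2023q4, ct_2024q1, by = "CODE")  # no longer present
```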
Metadata Repositories have been notoriously difficult to implement successfully in clinical research, despite the availability of numerous commercial offerings. Many sponsors have experienced one or more unsuccessful metadata repository implementations. Based on our experiences at Merck and CDISC, as well as input from numerous other organizations, this presentation will cover:
‐ Why are Metadata Repositories (MDRs) important to the industry?
‐ What are some key requirements of an MDR for sponsors?
‐ Why have metadata repositories been largely unsuccessful so far?
‐ What changes could vendors and sponsors make to increase the MDR success rate?
Converting raw data to SDTM format is a crucial stage in any clinical trial. Currently, SAS is the predominant language employed for this process, requiring considerable human intervention. Although automation has already been used at PHASTAR to generate SAS code, we have devised an alternative approach that generates automated R code, which substantially reduces human involvement in routine coding tasks. Our tool utilizes curated metadata containing vital information essential for executing the raw-to-SDTM derivation process. Subsequently, the tool generates a set of automated functions, facilitating the creation of SDTM datasets with minimal post-processing requirements. This approach not only streamlines a significant portion of coding tasks but also establishes a standardized data derivation process across various trials.
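As a toy illustration of the metadata-driven idea described above, the sketch below lets a small specification table drive a raw-to-SDTM mapping; the specification format and variable names are invented and do not reflect the tool itself.

```r
# Toy sketch of metadata-driven SDTM derivation: a small specification
# table drives renaming of raw variables into SDTM variables. The
# metadata format and variable names are invented for illustration.
library(dplyr)
library(tibble)

spec <- tribble(
  ~raw_var,    ~sdtm_var,
  "SUBJID",    "USUBJID",
  "VISITNAME", "VISIT",
  "SBP",       "VSORRES"
)

raw <- tribble(
  ~SUBJID, ~VISITNAME,  ~SBP,
  "001",   "SCREENING", 120,
  "002",   "SCREENING", 135
)

# Apply the mapping programmatically rather than hand-coding each study
vs <- raw %>%
  rename(!!!setNames(spec$raw_var, spec$sdtm_var)) %>%
  mutate(DOMAIN = "VS", VSTESTCD = "SYSBP")
```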
Session 3A: Digital Health Technologies
Europa 5
In this presentation, an overview of Digital Health Technologies (DHTs), including their advantages and disadvantages, will be discussed, as well as the various types used. The FDA Guidance on Digital Health Technologies for Remote Data Acquisition in Clinical Investigations, published December 2023, and the FDA Draft Guidance on Electronic Systems, Electronic Records, and Electronic Signatures in Clinical Investigations: Questions and Answers, published March 2023, will be reviewed with a focus on the impact on records and retention. Examples of patients using DHTs will also be shared.
Digital Health Technologies (DHTs) can be defined as an electronic method, system, product, or process that generates, stores, displays, processes and/or uses data within a clinical research or healthcare setting. The advantages of DHTs include the ability to collect rich high-resolution data in real-world settings outside of traditional research settings, such as clinics.
CDISC and the Digital Medicine Society (DiMe) have partnered to enhance interoperability and comparability of data across different DHTs and accelerate innovation in digital health through shared standards and common semantics. To this end, a volunteer team of diverse stakeholders is working to address opportunities for data standardization.
This presentation will provide an overview of current development work, including standards for:
• Key DHT Concepts in clinical research,
• Device Attributes that contextualize collected data,
• Digital Endpoints collected using DHTs, and
• Best Practices for using CDISC standards and DiMe resources with DHTs in clinical research.
Digital endpoints promise to address some of the most relevant challenges in clinical trial design and pipeline strategy, especially in rare diseases, where additional difficulties such as limited natural history data and sparse biomarker knowledge are faced. At argenx, a global immunology company committed to improving the lives of people suffering from severe autoimmune diseases, Digital Health Technologies will be assessed in the short term by implementing them in "use cases". The experience gained during these use cases will be documented and shared. The goal is to assess whether Digital Health Technologies can become a platform technology for adaptation and implementation across indications within the organization. Digital Health Technologies will be assessed to explore the actual disease burden, the identification of unmet needs, and the correlation between relevant endpoints and disease activity in real-world scenarios. As the focus of our presentation, we would like to share experiences from real use case(s), including the challenges we encounter when mapping these sensor data (e.g., physical activity, sleep, gait and balance, movement characteristics) into SDTM.
Session 3B: End-to-End Implementation
Europa 6
GSK has been working with a Meta Data Registry/Repository (MDR) in different forms. It supports different business processes and is aligned with industry standards and regulatory requirements.
The need for a more modern solution became apparent. Metadata driven data transformation for clinical trials as well as a transparent end-to-end data chain is needed to meet new expectations.
This resulted in a journey to deliver a practical short-term tactical solution, while in parallel working on a long-term strategic solution.
We will describe different routes, from off-the-shelf to custom-built tools. For the short-term solution, we sought a light lift-and-shift approach exploring small improvements. The design of the existing file-based process in a new multi-platform environment posed new challenges around control and traceability.
The next step is lifting our value-level metadata, currently used to generate define.xml, Value Level Definitions (VLD), and vendor specifications, up to CDISC Biomedical Concepts.
Session 3C: CDISC CORE Update & Workshop
Berlin 1-3
CDISC Conformance Rules are an integral part of the Foundational Standards and serve as the specific guidance to Industry for the correct implementation of the Standards in clinical studies. The overall goal of the CORE Initiative is to provide a governed set of unambiguous and executable Conformance Rules for each Foundational Standard, and to provide an open-source execution engine for the executable Rules which are available from the CDISC Library. This presentation will begin with a brief review of the CDISC CORE program concept: what it is and why it is important. Progress and status of development of the CDISC Conformance Rules and the CORE Engine will then be covered. The focus then shifts to uptake and implementation to date by the CDISC community, including Pharma and Biotech end-users, software vendors offering CORE solutions, and Regulatory Agencies. Finally, the CDISC Conformance Rules governance model will be discussed.
This one-hour workshop will be split into 2 parts.
In Part 1, we will guide you through using the CORE Rules Engine. The Engine is available on GitHub and can be accessed and downloaded by the entire CDISC community. It is a Command Line Interface (CLI) that allows you to run all published CORE rules on your datasets. We will demonstrate:
• Where to find the CLI
• How to download and utilize it
• The use of different commands
• How to interpret the results
In Part 2, we will show you how to write a CDISC conformance rule in the CORE Rule Editor and perform unit testing. Together we will:
• Work through a rule template and/or discuss a previously written rule
• Create test data for unit testing
• Interpret the testing output
Session 3D: Partnerships in TMF Management (TMF Track)
Zoo 4
In this presentation, I will discuss what I believe are some systematic problems with the use of the eTMF. I will look at the reasons why the eTMF is not always successful in contributing to inspection readiness, outline possible solutions, and discuss possible changes in culture and training.
The partnership between Regeneron and Phlexglobal has evolved over the last several years marked by a steadfast commitment to ensuring the quality and completeness of the Trial Master File to support the needs of the study team as well as maintain inspection readiness at all times. Grounded in shared core values and a joint investment in high quality, their journey progresses beyond routine metrics reviews for mere informational purposes. Instead, they delve deeply into impactful areas for process improvement, revealing pain points, identifying negative trends, and uncovering process gaps.
This comprehensive metrics assessment has provided key stakeholders, extending beyond Regeneron and Phlexglobal, with a transparent view of the overall health of their TMF. This visibility comes with actionable steps to enhance and correct areas as needed, accompanied by a historical perspective of the progress made. Through real-life scenarios outlined in this case study presentation, the strategic emphasis on quality and completeness coupled with baring vulnerabilities is shown to result in a markedly improved TMF landscape for all involved.
Joanne Malia of Regeneron and Janice Cassamajor of Phlexglobal will share lessons learned from this collaborative journey, offering practical use cases and recommendations to help other organizations facing the same challenges.
The TMF landscape is a complex one; an entwined maze of people, processes and technology, that we must all try to navigate in order to become TMF champions – victors of completeness, quality, timeliness and ultimately, inspections! However, hidden behind the scenes are the true ‘champions’ of TMF, the subject matter experts that fight and argue for the cause. Each one helps guide their study team through the Sponsor processes and expectations, while working towards the common goal. Having seen the evolution of the TMF Lead role in the industry, their value is not in doubt – the difficulty is proving it!
• Jason Weinstein, Regeneron
• Jacki Petty, Cencora Pharmalex
• Joanne Malia, Regeneron
• Janice Cassamajor, Cencora Pharmalex
• Dr. Max Horneck, elderbrook solutions GmbH
Session 3E: TMF Culture and Engagement (TMF Track)
Zoo 5
We in the TMF management profession are always looking for new ways to present TMF data and status, to help and train users on processes and technology, and more. Ever wondered why one approach works for one person and not another?
We know it is essential to recognize and appreciate the diversity of perspectives and experiences. We know that people approach their work and responsibilities differently. By acknowledging and respecting our differences, strengths, perspectives, and needs, we create a complete and robust TMF management process that supports the people involved in the goal of an inspection ready TMF on an ongoing basis.
This presentation will review the characteristic needs and behaviors of the five most recent US generations, relating strategies to facilitate their management of the TMF.
The 5 US generations discussed:
• The Baby Boomer Generation (born 1946-1964)
• Generation X (born 1965-1979)
• Millennials (born 1980-1994)
• Generation Z (born 1995-2012)
• Generation Alpha (born 2013-2025)
Organisations conducting clinical trials face many challenges, and TMF activities are often not prioritised or well resourced. An inspection ready TMF is a vital part of a successful clinical trial. How can you give yourself the strongest chance of success with your TMFs?
There needs to be a strong foundation: fit-for-purpose technology and effective processes. However, on their own, those are not enough. You need to nurture your organisation's TMF culture.
What approaches can you take to help improve the uptake of TMF? How can you embed the right TMF ways of working into the day-to-day fabric of how your organisation operates? How can you win over hearts and minds? This presentation will offer experience and practical advice, and will hopefully give you some ideas to take back to your organisation to help you be more successful in managing your TMFs.
In this panel, participants will learn to cultivate a proactive Trial Master File (TMF) culture, crucial for success. Emphasizing clear roles and collaboration, the session will cover training, technology integration, and success metrics. Attendees will explore adapting to regulations, leveraging digital tools, and promoting teamwork through real examples and discussions, preparing them for future TMF challenges.
• Anusha Rameshbabu, Moderna
• Mallorie Sayre, Moderna
• Melissa De Swaef, argenx
• Chris Jones, Novartis
Afternoon Break
Session 4: Regulatory
Europa 5 + 6
The FDA Technical Conformance Guide for CDISC (Clinical Data Interchange Standards Consortium) embodies pivotal standards shaping clinical research data. Helena Sviglin, a prominent figure from the FDA, will be presenting this guide at the CDISC conference. This guide facilitates adherence to regulatory requirements, ensuring consistency and accuracy in clinical trial data submissions. It outlines the technical specifications and validation criteria necessary for compliance with FDA regulations, fostering interoperability and data exchange across diverse platforms and stakeholders. Sviglin's presentation promises insights into the guide's nuances, elucidating its significance in streamlining data collection, management, and analysis processes within the pharmaceutical industry. Attendees can expect to gain a comprehensive understanding of CDISC standards and their integration into regulatory practices, empowering them to navigate the complex landscape of clinical data management with confidence and efficacy. The presentation by Sviglin serves as a cornerstone in fostering collaboration and advancing the adoption of standardized practices to enhance the quality and reliability of clinical research data.
Yuki Ando from the Pharmaceuticals and Medical Devices Agency (PMDA) will present pivotal updates concerning CDISC standards. As a regulatory authority in Japan, PMDA plays a critical role in shaping standards for clinical data interchange. Ando's presentation will provide attendees with insights into the latest developments and revisions in CDISC guidelines as endorsed by PMDA. These updates are instrumental in ensuring alignment with evolving regulatory requirements in Japan, thereby facilitating smoother drug approval processes and enhancing data quality in clinical trials. By elucidating PMDA's perspective and expectations regarding CDISC standards, Ando's presentation will equip attendees with valuable knowledge essential for compliance and successful interactions with regulatory authorities. This session serves as an invaluable opportunity for stakeholders to stay abreast of regulatory changes and foster harmonization in the adoption of CDISC standards, ultimately driving improvements in clinical research practices and drug development endeavors.
Are there any new activities in support of data submission and standardisation at the European Medicines Agency (EMA)? How are ongoing data submission and standardisation activities at EMA evolving?
In this presentation, EMA's proof-of-concept study for Standard for Exchange of Nonclinical Data (SEND) packages in centralised procedures will be introduced. Additionally, recent updates under DARWIN EU®, the Data Analysis and Real-World Interrogation Network, and EMA's clinical trials raw data project (individual patient data in electronic structured format, e.g., CDISC SDTM) will also be covered.
Session 4D: The Impact of Regulations (TMF Track)
Zoo 4
It has been a while since the publication of guidance such as the EMA Guideline on the content, management and archiving of the clinical trial master file (paper and/or electronic), EMA/INS/GCP/856758/2018, which introduced a 'risk-based' and 'risk-proportionate approach' for the TMF. Other guidelines and regulations, such as ICH GCP E6 (R2) 2016, also introduce the phrase 'risk-based' to clinical trials, which in turn could influence TMF strategies.
If, like me, you are wondering whether companies have embraced the ‘risk based’ part of these guidelines and regulations and implemented new strategies for their TMFs, let me first tell you what these publications say (it’ll be interactive and fun, honestly!) and then we’ll look at their impact on how companies manage their TMFs (the really interesting bit!).
The European Clinical Trials Regulation (EU CTR) harmonizes the processes for assessment and supervision of clinical trials throughout the European Economic Area (EEA).
EU CTR was implemented with the introduction of the Clinical Trials Information System (CTIS). This is now the single tool for interaction between sponsors and member states for submission of clinical trial applications, approval, and updates on clinical trials.
A separate submission to ethics committees (EC) is no longer required. EC evaluation is now an integral part of the assessment by the individual member states concerned. Decisions and approvals will be made by member states and communicated via CTIS.
This approach has an impact on availability and filing of records which are typically expected in Zone 04 of the TMF Reference Model to document appropriate EC supervision.
The EU CTR was implemented to "increase transparency and restore the EU's clinical research competitiveness by reducing administrative requisites and streamlining workflows", but does it have the same effect on the TMF process? I want to present how Ascendis Pharma implemented the new documentation requirements brought about by the new regulation into its budding TMF processes. The presentation will describe how processes were updated due to redactions, naming conventions, and the Clinical Trials Information System (CTIS).
At first glance, the TMF Management team did not expect that the EU CTR would affect the TMF process at Ascendis. After all, the TMF is only mentioned briefly in two articles of the regulation, articles 57 and 58, which is not a lot when you consider that the regulation is 84 pages long, and what is mentioned is nothing really all that groundbreaking.
It's the good old "Sponsor shall keep a clinical trial master file … that contains essential documents", there's some language regarding data quality, and the classic "readily available and directly accessible." These aspects aren't really new for anyone who has ever worked with the TMF. There are also requirements about archiving the Trial Master File: we now have to store the TMF for at least 25 years, and during this time it has to be readily available, accessible, and of course legible, along with other requirements for the archived TMF.
All in all, looking at the EU CTR through the eyes of an eTMF Manager, the new EU CTR looks pretty harmless. However, that is at a theoretical level: it doesn't really give many details on how to do any of these things practically, which is the same for all the regulations and directives that preceded it. This is where, in my experience, all of us run into challenges. Because what does the change under the EU CTR really mean on a practical, tangible level? Why can we not get any practical examples and guidelines when this is in fact something everyone who works with the TMF struggles with?
The presentation will bring forward actual examples from our processes as well as input from the Danish EU CTR Network and the Danish TMF Forum.
Session 4E: Technology in TMF Management (TMF Track)
Zoo 5
IQVIA has identified that there are many attributes to Intelligent Document Review that involve digitization, classification, and extraction, not only for the eTMF but also for other parts of the business, such as content flow in site folders and in regulatory filings. We will discuss these and our findings, as many areas of the business find document content of interest and important for their digitization aspirations and efficiency gains. While it starts with the ability to recognize any document flowing into your ecosystem, it quickly extends to the ability to use SaaS to auto-review the content for quality, recognize sections within the document, and run models to extract insights and next best actions, whilst maintaining trusted auditability for any regulatory agency to accept. Those SaaS tools often require a connection to CDISC as a core element to standardize unstructured content into meaningful and interpretable actions.
Potential standards such as CRISI, if correctly defined and implemented, have the potential to change the game for TMF Health and completeness. Study Management has its own data flows that include EDC, IXRS, RBQM and document-based processes in RIM and Medical Writing. An eTMF system must sit between these systems and both electronic and document processes at each Clinical Site in order to be truly effective. For the TMF documents to be as contemporaneous as possible, the archive must be aware of the multitude of milestones and events that have occurred to ensure automated workflows are triggered on time. Since these systems are often multi-vendor, the right standards that promote clean integrations between heterogenous systems can make all the difference.
In this discussion, the audience will hear about challenges with different standards, and on-the-ground experiences with integrations between clinical systems.
Maintaining data integrity throughout the trial lifecycle is challenging. In the ever-changing and fast-moving world of IT, nothing seems to last for 5 years, let alone the 25 years required by regulators for eTMF retention. The regulators mandate that the ALCOA+ principles be followed for long-term retention of clinical data, but the current regulations and guidelines say little about how to achieve this in practice.
We believe the key to success is to:
a. prepare for archiving throughout the whole data lifecycle
b. employ recognised standards such as the eTMF Reference Model
c. take a quality and risk-based approach at all stages including archiving
d. apply recognised digital preservation good practice
e. embody this in a well thought out Data Management Plan / eTMF plan
Session 5A: CORE Implementation

Europa 5

The CDISC Open Rules Engine (CORE) project combines machine-readable CDISC conformance rules with open-source software for rule execution. This initiative holds promise for a new industry standard for clinical data validation.
Here we present a Novo Nordisk perspective on the “optimal” data validation solution and our evaluation of the challenges and benefits we foresee in adopting CORE. We lay out a provisional road map for the integration of CORE in an automated SDTM generation flow on a SAS Viya platform. Furthermore, we reflect on how to take advantage of the validation tool in conduct and submission of clinical studies as well as on the tool enhancements that can bring us closer to the “optimal” solution.
The Secure Data Team, an unblinded team within SGS Data Management, handles numerous data transfers daily, and particularly during critical periods the delivery of these SDTM-compliant datasets is bound by tight turnaround times. Consequently, the team relies on tools that can convert and validate high-quality datasets efficiently.
We started enhancing our dataset validation tool and combined this with our experience and knowledge from the CORE project. We will share our journey of identifying the requirements for our new validation tool, including the transition to the cloud, and aligning them with the capabilities of CORE. We'll show that we can create our own custom rules with the Rule Developer and discuss the challenges we faced.
We are also defining our own SGS pathway for future requirements and implementations, and in collaboration with CDISC we are making significant strides towards establishing an open, single source of truth for conformance rules and consistency.
This presentation outlines the implementation of CDISC (Clinical Data Interchange Standards Consortium) Conformance Rules within the SAS Life Science Analytics Framework (LSAF) at argenx, a global immunology company dedicated to addressing severe autoimmune diseases. The CDISC Conformance Rules are crucial guidelines for industry compliance with Foundational Standards in clinical studies. The regulated environment in the life sciences sector necessitates adherence to specific rules and standards to ensure product quality, safety, and efficacy. The presentation details the challenges faced and insights gained during the CDISC CORE engine and rules implementation, emphasizing practical lessons for those seeking to adopt CDISC CORE in their studies. The argenx team emphasizes collaboration and knowledge sharing to promote widespread adoption of CDISC CORE within the industry, contributing to best practices in clinical studies data management and regulatory compliance.
Session 5B: COSA
Europa 6
The rising popularity of open-source within the pharmaceutical sector presents a multitude of opportunities. However, alongside its benefits there are uncertainties surrounding permissible usage. Did you know that source code in a paper defaults to copyright protection? Moreover, the prevalence of copy-left licenses in R raises questions about the obligation for R-based software solutions to also adopt copy-left open-source practices.
This presentation aims to demystify the legal aspects of leveraging open-source in pharmaceutical solutions while ensuring compliance with the respective open-source licenses. It will offer insights into how to appropriately utilize open-source solutions when deriving from or enhancing them into a commercial or open-source offering. Attendees will gain a comprehensive understanding of how to effectively harness open-source tools while adhering to legal frameworks, fostering innovation and maximizing the potential of open-source.
This talk explores code automation of multiple languages using a single source of metadata. In particular, we focus on SDTM automation using SAS and R code and the benefits of having both available to the study team. Achieving the same result from different languages enhances confidence in the process and offers alternatives to traditional validation methods.
There has been a tendency towards an all-or-nothing approach to open-source, with teams consumed by which route is 'best'. We will show that, with a metadata-driven approach leveraging CDISC standards, hard choices are not essential.
In partnership with CDISC DDF and COSA, Novo Nordisk is pioneering the development of StudyBuilder. The major driver is to create a seamless, reusable metadata flow to support creation of submission data as well as documents from end to end.
Everybody can agree with this vision, but how do we navigate a world of standards, authority requirements, and untapped opportunities? In practice this is complicated for several reasons:
• Linking backwards: Integrating SDTM standards with the creative and scientific writing process is neither straightforward nor easy to implement.
• Cultural differences: The organization-wide experiences around metadata usage vary greatly among different stakeholders.
• Standardization: Libraries are now evolving into broader concepts, leveraging several terminologies, dictionaries, and the use of syntax templates to broaden the use of metadata.
Our presentation aims to discuss the challenges, solutions, and benefits encountered towards establishing an end-to-end study metadata flow.
Session 5C: Analysis Results Standards Workshop
Berlin 1-3
Analysis results play a crucial role in the drug development process, providing essential information for regulatory submission and decision-making. However, the current state of analysis results reporting is suboptimal, with limited standardization, lack of automation, and poor traceability. Currently, analysis results (tables, figures, and listings) are often presented in static, PDF-based reports that are difficult to navigate and vary between sponsors. Moreover, these reports are expensive to generate and offer limited reusability. To address these issues, the CDISC Analysis Results Standard (ARS) team has developed a logical model to support consistency, traceability, and reuse of results data. This workshop will provide an in-depth overview of the ARS model and practical examples illustrating the implementation of the model using common safety displays.
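As a simplified illustration of the "results as data" idea behind the model, here is a hedged R sketch; the field names are illustrative only and are not the normative ARS schema.

```r
# Illustrative sketch of "results as data": each row captures one analysis
# result with enough metadata to trace it back to its analysis. Field
# names are simplified for illustration and are not the normative ARS schema.
library(tibble)

analysis_results <- tribble(
  ~analysis_id,  ~display_id, ~group,      ~operation,   ~value,
  "AN01_SAF_N",  "T14-1-1",   "Placebo",   "count",       86,
  "AN01_SAF_N",  "T14-1-1",   "Drug 10mg", "count",       84,
  "AN02_AE_PCT", "T14-3-1",   "Placebo",   "percentage",  45.3
)

# Because results are stored as data rather than rendered text, the same
# records can drive multiple displays and be checked programmatically.
```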
Session 5D: Risk Based Approaches (TMF Track)
Zoo 4
This presentation describes the move from a periodic check performed on a random sample of the documents (e.g., 10%) to an inspection-ready completeness check focusing on the study's significant events and critical processes. We will show how we use the functional expertise and first-hand knowledge of the various study team members to perform checks as an inspector would, enabling them to explain the story of the trial through the TMF.
The presentation will cover the concept, the benefits, and the challenges of the GSK model.
The landscape of the clinical trial industry has been changing for some time: open-source tools and languages are becoming more prevalent, giving programmers more choice than ever before. While open-source brings new and exciting development opportunities, the transition from current workflows can be problematic.
Sponsors are obliged to control Trial Master File (TMF) record quality for in-house and outsourced TMFs. The lack of industry-wide standards or specific guidance for a risk-based approach often leads to challenges. Adopting an established sampling and quality control method originating from production, such as ISO 2859-1 / ANSI/ASQ Z1.4-2008, can be supportive. The methodology enables sponsors to use consistent principles which apply independently of any process landscape and Reference Model used, for in-house and outsourced TMFs. It consists of an acceptance sampling system which includes switching rules on a continuing stream of elements undergoing QC. It provides tightened, normal, and reduced plans determining the proportion of nonconformities within a defined lot or series. Because only a few selected sample records undergo a thorough check, this risk-based method creates reviews that can be managed in a cost-effective way and allows a focus on the relevant measures for improving TMF record quality.
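A minimal R sketch of an acceptance-sampling QC decision follows; the sample size and acceptance number are placeholders, since in practice they are taken from the ISO 2859-1 / ANSI/ASQ Z1.4 tables for the chosen lot size, inspection level, and AQL.

```r
# Hedged sketch of an acceptance-sampling QC check on a lot of TMF records.
# The sample size (n) and acceptance number (Ac) are placeholders; real
# values come from the ISO 2859-1 / ANSI/ASQ Z1.4 tables for the chosen
# lot size, inspection level, and AQL.
set.seed(1)

lot <- paste0("DOC-", sprintf("%04d", 1:500))  # 500 TMF records in the lot
n   <- 50                                      # placeholder sample size
Ac  <- 3                                       # placeholder acceptance number

sampled <- sample(lot, n)

# qc_nonconforming would come from the thorough manual check of each
# sampled record; a fixed value is used here for illustration
qc_nonconforming <- 2

decision <- if (qc_nonconforming <= Ac) "accept lot" else "reject lot (tighten inspection)"
decision
```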
Determining how to manage records for different types of studies in your eTMF system is crucial for efficient trial and resource management, compliance with regulatory requirements, and ensuring the integrity of data and processes. A risk-based approach for different types of studies within your TMF system should be guided by the need to ensure data integrity, traceability, compliance, and the overall management of the clinical trial. Once an organization has decided to establish a risk-based approach to TMF processes, implementing them both in written procedures and in system configurations is key to deriving value. This session will help walk people through the process of identifying, documenting, socializing and implementing a ‘risk-based’ approach to TMF management as it pertains to how records are managed and how oversight is done in their eTMF system at their organization. It will also address technology considerations for this process.
Session 5E: TMF Essentials (TMF Track)
Zoo 5
Many people think of the TMF Reference Model as just a TMF index but it is so much more! When leveraged properly, the TMF Reference Model can act as a powerful TMF Management tool throughout the study lifecycle. Come to this session to learn about the different ways the TMF Reference Model can be used during study start-up, conduct, and closeout/archival to improve overall TMF management and ensure continued inspection readiness.
As biotech companies mature and run clinical trials, they will benefit significantly from introducing the TMF Reference Model as a framework for working with clinical trial related documents. This presentation highlights the benefits of adopting the TMF Reference Model and illustrates the pitfalls and risks of not having a structured approach to managing clinical documentation.
The risks and pitfalls include:
• Document mismanagement – documents are scattered across several systems/tools (Teams, Win.Exp., SharePoint, etc.)
• Inconsistency and lack of standardization – leading to confusion among document users, incorrect naming of documents, and inefficient document retrieval
• Regulatory non-compliance – challenges in demonstrating oversight and maintaining data integrity
• Challenging collaboration – external contributors, CROs, or potential partners will have difficulty navigating the document portfolio
By applying the TMF Reference Model all the risks and pitfalls can be mitigated, and further benefits can be achieved.
Risk-based quality checks have been a focus of a number of regulatory guidances on TMF. An overall risk-based approach is expected by regulators, as evidenced by its significant presence in the newest draft of ICH E6(R3). This presentation will discuss practical approaches to risk-based quality checks (QC) and the relationship of risk-based QC to increased overall quality of the TMF.
Morning Break
Session 6A: Digital Data Flow
Europa 5
The lifecycle of a clinical trial begins with the development of the protocol and ends with the closeout of the trial; all clinical content and data trace their origins to the clinical trial protocol.
The ICH M11 Clinical electronic Structured Harmonized Protocol (CeSHarP) initiative includes the 1) Guideline, 2) Protocol template, and 3) Technical specification to promote development of structured and unstructured protocol content, application of international clinical data standards, and interoperable/compatible electronic exchange across regulatory regions.
This presentation will focus on the collaboration of CDISC with ICH M11 to
• Create controlled terminology, code lists and content nomenclature
• Define a Content model to represent content agnostic of an exchange standard
• Determine conformance rules for the M11 model
• Define mappings between the M11 model, CDISC Standards and Artifacts
CDISC is also engaging in a joint project with the Vulcan FHIR accelerator to deliver an electronic exchange standard for the ICH M11.
Over the last two years CDISC, in collaboration with TransCelerate, has been working on the Digital Data Flow (DDF) initiative. This initiative aims to “modernize clinical trials by enabling a digital workflow to allow for the automated creation of study assets and configuration of study systems to support clinical trial execution.” The work is focused on the protocol and associated study designs and manifests itself in a new CDISC standard, the Unified Study Definitions Model (USDM), and an open-source implementation of the USDM known as the Study Definitions Repository (SDR).
Now coming to the end of the second phase, with the third phase about to commence, the DDF project delivers a new standard that allows for the digitization of study designs and the foundation of the digital protocol.
This presentation will detail:
The work performed in phases one and two.
The work planned as part of phase three.
The use cases supported by the model.
How the model/standard can enable protocol creation, automated data flow and interoperability between systems.
How the model/standard can be deployed and implemented today.
The TransCelerate Digital Data Flow (DDF) initiative has led to a major shift across the industry. As a result, clinical development executives are considering fundamental process changes given that digitalized protocol information becomes reusable and actionable.
Nurocor has been a participating vendor in DDF since the first Hackathon in 2020. At the DDF Discovery Day in 2023, Nurocor demonstrated its Digitalized Protocol capabilities as an upstream study definition platform provider that covers study design, protocol elements, objectives and endpoints, eligibility, interventions, the schedule of activities, and workflow driven integration with SDR. In 2024 “The Art of the Possible” has become reality with full customer implementations.
This presentation covers the following topics:
• Describe the rapidly changing landscape around digitalized clinical development
• Share innovative ideas to integrate standards, processes, and workflow via cloud platforms
• Highlight customer implementations along with tangible benefits and the value proposition
• Explore the future of digitalized clinical development
Session 6B: Pivoting JSON
Europa 6
The Dataset-JSON Pilot is a PHUSE / CDISC / FDA project to evaluate "Dataset-JSON as an alternative transport format for regulatory submissions". At this stage the testing has been completed and the teams are finalizing their reports on the (1) pilot findings, (2) business case, (3) strategy and future implementation, and (4) technical implementation. This presentation highlights the most significant takeaways and findings as well as anticipated next steps from the pilot.
The new CDISC Dataset-JSON, as a replacement for SAS Transport, leads to many new opportunities.
It not only allows for non-ASCII characters (important for PMDA and NMPA) and lifts all SAS Transport limitations, but also allows embedding images, and even full movies (e.g., MRI), into the datasets themselves.
It also allows us to embed the real source records (e.g., FHIR EHR records, HL7 lab messages) into the SDTM itself.
The greatest opportunity, however, is that we can finally move from file-driven exchange and submission to API-driven processes, which allows us to speed up the process considerably. It is estimated that this can lead to market authorization one to two years earlier.
These opportunities will be demonstrated during the presentation.
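To ground the discussion, here is a minimal Python sketch of a dataset represented as column metadata plus row data, in the spirit of Dataset-JSON; the attribute names are simplified for illustration and are not the normative Dataset-JSON schema.

```python
import json

# A simplified, illustrative dataset: column metadata plus row-oriented data.
# Attribute names are for illustration only, not the normative Dataset-JSON schema.
dm_dataset = {
    "name": "DM",
    "label": "Demographics",
    "columns": [
        {"name": "STUDYID", "label": "Study Identifier",           "dataType": "string"},
        {"name": "USUBJID", "label": "Unique Subject Identifier",  "dataType": "string"},
        {"name": "BRTHDTC", "label": "Date/Time of Birth",         "dataType": "date"},
        {"name": "SEX",     "label": "Sex",                        "dataType": "string"},
    ],
    "rows": [
        ["STUDY01", "STUDY01-001", "1980-03-15", "F"],
        ["STUDY01", "STUDY01-002", "1975-11-02", "M"],
    ],
}

# Unlike SAS Transport, JSON has no 8-character variable-name or 200-byte value
# limits and handles non-ASCII text natively (e.g., Japanese or Chinese site names).
payload = json.dumps(dm_dataset, ensure_ascii=False, indent=2)
print(payload)

# In an API-driven exchange, the same payload could be posted to a (hypothetical)
# review endpoint instead of being packaged and shipped as files, e.g.:
#   requests.post("https://example-agency.test/api/datasets", data=payload)
```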
The new Analysis Results Standard (ARS) extends the coverage of CDISC standards from analysis inputs (ADaM datasets) to the representation of analysis results, each associated with comprehensive Analysis Results Metadata (ARM).
We have developed a SAS tool, ARD Generator, which makes use of the ARS to automate the generation of results.
The tool ingests ARM in JSON format and maps its deep hierarchical structure to SAS datasets. It then uses a macro library to perform the specified analyses and writes results to an ARS-compliant Analysis Result Dataset (ARD).
The tool is highly modular with much re-use of code. It largely automates the analysis process, as once the relevant macros exist, all the information needed to apply them can be taken from the ARM.
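The ARD Generator itself is a SAS tool; the short Python sketch below only illustrates the general pattern described above, with simplified, hypothetical ARM field names: read analysis metadata from JSON, dispatch each analysis to a small function library, and append one row per result to an ARD-like structure.

```python
import json

# Hypothetical, simplified ARM fragment (field names invented for illustration).
arm = json.loads("""
{
  "analyses": [
    {"id": "An01", "dataset": "ADSL", "variable": "AGE",
     "groupBy": "TRT01A", "method": "Mth01_Summary_Mean"}
  ]
}
""")

# Dummy ADSL records standing in for the real analysis dataset.
adsl = [
    {"TRT01A": "Placebo", "AGE": 54}, {"TRT01A": "Placebo", "AGE": 61},
    {"TRT01A": "Drug A",  "AGE": 48}, {"TRT01A": "Drug A",  "AGE": 59},
]

def mean_by_group(records, variable, group_by):
    """Tiny stand-in for a macro library entry: mean of `variable` per treatment group."""
    groups = {}
    for rec in records:
        groups.setdefault(rec[group_by], []).append(rec[variable])
    return {g: sum(v) / len(v) for g, v in groups.items()}

METHODS = {"Mth01_Summary_Mean": mean_by_group}

# Build the Analysis Results Dataset: one row per analysis result.
ard = []
for analysis in arm["analyses"]:
    results = METHODS[analysis["method"]](adsl, analysis["variable"], analysis["groupBy"])
    for group, value in results.items():
        ard.append({"analysisId": analysis["id"], "group": group,
                    "statistic": "mean", "value": round(value, 1)})

print(ard)
```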
The presentation will illustrate the code using worked examples and discuss how the ARD Generator fits into our workflow as an academic clinical trials unit.
Session 6C: Submission Experience
Berlin 1-3
The creation of an e-submission package for an integrated summary of safety (ISS) presents various challenges. What data should the analyses be based on? How can traceability be achieved? Which guidelines need to be followed? What should the documentation look like, and what else needs to be considered?
Based on the experiences of mainanalytics, this presentation shows a path through the jungle: how an ISS can be prepared so that it is ready for submission to the FDA. References to existing guidelines and templates are also provided.
Following our recent NMPA submission experience, we would like to virtually take you to China to discover the requirements set by the Chinese authority and to share the specific strategy applied by Chiesi in terms of the regulatory data package submission. For each data package component, we will analyze the specific requirements set by NMPA and any differences from other authorities, focusing on the approach followed for the translation of the different elements into Chinese. At the end of our virtual journey, we will explore the regulatory requests received following site inspections, the main difficulties we faced and how they were overcome, and the take-home messages. Among these, we learned that information sharing through multiple internal trainings to maximize lessons learned from previous China regulatory experiences, cross-functional teamwork, and planning ahead are the key aspects of a successful submission.
The Uppsala Monitoring Centre is the WHO Collaborating Centre for International Drug Monitoring, which offers scientific, technical, and operational support to the WHO Programme for International Drug Monitoring. As well as being the custodian of the global adverse drug reaction database, VigiBase, it is also the provider of the global drug dictionary WHODrug Global. WHODrug Global contains standardized information on drug names, ingredients, strengths, dosage forms, and indication classifications. Regulatory authorities require the use of WHODrug when submitting drug information at various stages of a drug's life cycle. Learn how to retrieve the correct data from WHODrug for inclusion in the Concomitant Medications (CM) domain to be fully compliant with the CDISC SDTM standard.
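As a rough illustration of what such a retrieval and mapping step can look like, the sketch below codes a verbatim medication term against a WHODrug-style dictionary entry and populates standard SDTM CM variables; the dictionary content is invented and the lookup logic is simplified.

```python
# Minimal, hypothetical lookup: verbatim medication terms coded against
# WHODrug-style entries and populated into SDTM CM variables. The dictionary
# content below is invented for illustration, not actual WHODrug data.
WHODRUG_LOOKUP = {
    "PARACETAMOL 500MG TAB": {
        "preferred_name": "PARACETAMOL",
        "atc_text": "OTHER ANALGESICS AND ANTIPYRETICS",
        "atc_code": "N02BE",
    },
}

def map_to_cm(usubjid: str, verbatim: str, seq: int) -> dict:
    """Build an SDTM CM record from a verbatim term using the coded dictionary entry."""
    coded = WHODRUG_LOOKUP.get(verbatim.upper(), {})
    return {
        "STUDYID": "STUDY01",
        "DOMAIN": "CM",
        "USUBJID": usubjid,
        "CMSEQ": seq,
        "CMTRT": verbatim,                          # reported (verbatim) name
        "CMDECOD": coded.get("preferred_name", ""),  # standardized drug name
        "CMCLAS": coded.get("atc_text", ""),         # drug class (text)
        "CMCLASCD": coded.get("atc_code", ""),       # drug class (code)
    }

print(map_to_cm("STUDY01-001", "Paracetamol 500mg Tab", 1))
```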
Session 6D: TMF Management Through Metrics (TMF Track)
Zoo 4
The integration of Trial Master File (TMF) principles into the "Loop Diuretics and Weight Change in Heart Failure: A Meta-Analysis" study at the University of Oxford represents an innovative enhancement to research integrity and reliability. TMF's focus on detailed documentation, data integrity, and stringent quality control aligns with the needs of meta-analysis to ensure accurate assessments of loop diuretics' effects on heart failure patients. By adopting TMF's standardized documentation and quality control measures, the study aims to mitigate biases and errors, thus bolstering the credibility of its findings. Additionally, TMF's emphasis on regulatory compliance and ethical research practices ensures the careful handling of sensitive patient data. The application of TMF principles promises to improve the reproducibility, transparency, and accuracy of research outcomes, setting a new standard for methodological rigor and ethical excellence in heart failure pharmacotherapy research.
We all know that Completeness, Timeliness, and Quality are the basic necessities and the universally accepted minimum standard for TMF metrics.
But this is not just about providing three high-level indicators that, by themselves, are not actionable.
How do study teams, functional areas, and others responsible for TMF oversight know where to target their efforts to assure compliance and inspection readiness?
They need fit-for-purpose tools that enable them to dig deeper, to identify opportunities for improvement at the study level or across studies, and therefore to take informed action. How did we implement such tools at GSK, and what is our TMF performance dialogue model?
This is what we propose to share in this presentation.
The TMF Plan provides a roadmap for the processes and procedures you have defined to ensure a high-quality Trial Master File. It provides definitions of key elements such as required training, key metrics (which may be study-specific), TMF review requirements, systems of record for TMF elements, and more.
For the TMF Plan to be effective, it must be kept up to date, and it must be monitored periodically to assess adherence, understand challenges, and ensure it is supporting the goal of producing a high-quality TMF. This requires definition of a framework that combines business process and automation to monitor and assess adherence to, and effectiveness of, the plan.
Session 6E: Enhancing Quality (TMF Track)
Zoo 5
In the ever-evolving realm of clinical trials, the success of Trial Master File management hinges on adaptability and efficiency. This presentation advocates a paradigm shift from perceiving TMF as a static repository to embracing it as a dynamic wellspring of vital information. An exploration into the agile approach reveals strategies that align with the demands of contemporary clinical research and regulatory landscapes.
By redefining the TMF as a data collection rather than a document repository, this session proposes innovative processes. Imagine automated edit checks fueled by document-specific metadata and a process model seamlessly intertwining the TMF lifecycle with the clinical trial journey. Delving into key principles of agile methodologies, this transformative journey enhances confidence in Inspection Readiness. The compliance of your TMF is fortified through diverse quality checks, ranging from single document QC to holistic assessments encompassing completeness, quality, and timeliness. Embark on this expedition to reshape your processes and embrace the future of agile, data-centric TMF management.
An introduction to the CDISC ISF Reference Model Initiative, which launched on 16 April 2024. The rationale and history behind why it is needed will be shared, as well as a perspective from a site.
In the dynamic field of data science, the migration and archival of Trial Master File (TMF) data present significant challenges that can impact project timelines, budgets, and overall data integrity.
This talk delves into the complexities of transferring TMF data from Contract Research Organizations (CROs) to Sponsors or between Sponsors, offering insights from real-life case studies.
We explore strategies for preparing IT and business teams for upcoming migrations, ensuring alignment, and avoiding common pitfalls.
Key topics include the dos and don'ts of migrating active studies, aligning reference models between CROs and Sponsors, meeting data quality requirements, and establishing repeatable processes for study transfers.
The session aims to equip attendees with practical knowledge to streamline TMF data migrations, ensuring consistency, completeness, and harmonization with existing master data records.
Lunch
Europa 2 - 4
Session 7A: Digital Data Flow
Europa 5
In the realm of research and academia, the diversity of study protocols presents a significant challenge for seamless collaboration and data interoperability. Researchers often encounter difficulties in comprehending and integrating information from disparate protocols, hindering the progress of scientific inquiry. This proposal outlines a POC project aimed at creating a Unified Study Definitions Model (USDM) representation by translating diverse human-readable protocols into a standardized format. The envisaged solution combines the versatility of Excel spreadsheets and the efficiency of Natural Language Processing (NLP) techniques to bridge the gap between heterogeneous study designs. The primary objective of this project is to enhance the clarity, accessibility, and interoperability of research protocols by developing a standardized, machine-readable representation through the USDM.
Methodology:
1. Protocol Translation: Employing NLP algorithms, we will develop a system to automatically translate human-readable study protocols into a structured format. This involves extracting key information such as study objectives, methodologies, inclusion/exclusion criteria, and outcomes.
2. Excel Integration: Excel's ubiquity and user-friendly interface make it an ideal platform for implementing the Unified Study Definitions Model. The translated protocols will be integrated into Excel templates, providing a familiar environment for researchers to input and manipulate data. The templates will adhere to a standardized structure defined by the USDM, ensuring consistency across diverse research domains. A simplified sketch of the extraction step follows below.
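The sketch below is only a toy stand-in for the proposed pipeline, assuming rule-based extraction in place of full NLP models and a plain CSV in place of the USDM-aligned Excel template; the element names and regex patterns are invented for illustration.

```python
import csv
import re

# Toy protocol text standing in for a human-readable protocol document.
protocol_text = """
Primary Objective: To evaluate the efficacy of Drug X versus placebo.
Inclusion Criteria: Adults aged 18 to 75 years with confirmed diagnosis.
Exclusion Criteria: Prior exposure to Drug X.
"""

# Rule-based patterns stand in here for NLP models; a real system would use
# trained extraction models rather than regexes.
PATTERNS = {
    "objective": r"Primary Objective:\s*(.+)",
    "inclusion": r"Inclusion Criteria:\s*(.+)",
    "exclusion": r"Exclusion Criteria:\s*(.+)",
}

rows = []
for element, pattern in PATTERNS.items():
    match = re.search(pattern, protocol_text)
    if match:
        rows.append({"element": element, "text": match.group(1).strip()})

# Written to CSV for simplicity; in the proposed POC this would populate the
# standardized, USDM-structured Excel template instead.
with open("usdm_template.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=["element", "text"])
    writer.writeheader()
    writer.writerows(rows)

print(rows)
```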
Expected Outcomes of the USDM:
1. Improved Collaboration: The Unified Study Definitions Model will promote collaboration by providing a standardized framework for communicating and sharing study protocols, thus fostering a more efficient and transparent research environment.
2. Enhanced Data Interoperability: The standardized structure of the USDM will enable seamless integration and comparison of data across studies, promoting meta-analyses and systematic reviews.
3. Time and Resource Efficiency: By automating the translation of protocols and utilizing a familiar platform like Excel, researchers will save time and resources that would otherwise be spent on manual data extraction and interpretation.
4. Increased Reproducibility: The transparency and standardization introduced by the USDM will enhance the reproducibility of studies, contributing to the overall reliability of scientific research.
Clinical trial protocols play a pivotal role in ensuring the success of studies. Currently, the reliance on English documents rather than standardized data introduces the risk of errors. Common issues include gaps in functionality, misconfigured CRFs, and scheduling problems. Efforts such as USDM, ICH M11, and DDF seek to address this challenge by emphasizing study configuration as the basis for all trial artifacts. However, implementing this shift may seem unfamiliar to protocol writers and requires immediate feedback to validate its effectiveness.
This presentation will illustrate a system designed to address these challenges. It will demonstrate how technology leverages industry standards to create real-time protocols and study configurations, aiming to eliminate misconfiguration and enhance collaboration within study teams.
The CDISC Unified Study Definitions Model (USDM), Biomedical Concepts, and the implementation of end-to-end study data automation is hard to envision from model diagrams and conceptual drawings. To make the USDM vision tangible, we will demonstrate:
1. Create a digital study protocol including the study design and the schedule of assessments (SoA).
2. Drive data capture artefacts from the SoA.
3. Load data: bulk loads, human-entered data, and data from EHR sources (FHIR).
4. Demonstrate the automated generation of SDTM.
5. Show how submission-ready artefacts (aCRFs and define.xml) can be generated.
6. Touch upon how data anonymisation to allow for data sharing (FAIR data) can be accommodated.
The focus is on what is possible, showing how the USDM, BCs, and SDTM can be brought together in a seamless manner, allowing a move away from a siloed, process-focused way of working to a data-centric world of seamlessly integrated standards. A small sketch of the schedule-driven idea follows below.
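As a rough illustration of how a machine-readable study design can drive downstream artefacts, the toy example below derives a per-visit list of data capture forms from a schedule of activities; the attribute names are simplified and are not the normative USDM classes.

```python
# A toy, USDM-flavoured study definition (attribute names simplified, not the
# normative USDM classes) used to show how a schedule of activities could drive
# downstream data capture artefacts.
study_design = {
    "encounters": ["Screening", "Baseline", "Week 4"],
    "activities": {
        "Vital Signs": {"encounters": ["Screening", "Baseline", "Week 4"],
                        "biomedicalConcepts": ["SYSBP", "DIABP", "PULSE"]},
        "ECG":         {"encounters": ["Baseline", "Week 4"],
                        "biomedicalConcepts": ["EGHR", "QTCF"]},
    },
}

def crf_schedule(design: dict) -> dict:
    """Derive, per visit, which data capture forms (and which concepts) are required."""
    schedule = {enc: [] for enc in design["encounters"]}
    for activity, spec in design["activities"].items():
        for enc in spec["encounters"]:
            schedule[enc].append({"form": activity, "concepts": spec["biomedicalConcepts"]})
    return schedule

for visit, forms in crf_schedule(study_design).items():
    print(visit, "->", [f["form"] for f in forms])
```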
The Clinical Data Interchange Standards Consortium (CDISC) developed the Unified Study Definitions Model (USDM) to standardize the exchange, submission, and archiving of clinical research data. The USDM is flexible and can accommodate various study designs, data types, and collection methods.
The OpenStudyBuilder is an open-source tool that simplifies the creation and management of clinical study metadata. By generating metadata in the USDM format, the OpenStudyBuilder makes it easier to organize and document study data. The standardized format ensures consistency across study sites and facilitates data analysis and comparison.
We will demonstrate how metadata can be converted to USDM format for a defined study. Using USDM in conjunction with other CDISC standards in the OpenStudyBuilder ensures data collection, analysis, and reporting are consistent, improving the quality and efficiency of clinical research.
- Rob DiCicco, TransCelerate BioPharma
- Peter Van Reusel, CDISC
- Dave Iberson-Hurst, CDISC
- Frederik Malfait, Nurocor
- Jasmine Kestemont, Innovion
Session 7B: ADaM
Europa 6
The use of Hierarchical Composite Endpoints (HCEs) in clinical trials is steadily increasing, but their integration into Analysis Data Model (ADaM) is currently hindered by limited coverage in CDISC standards. Although existing implementation guides provide insights into incorporating HCEs into analysis datasets, a lack of specific rules and guidelines tailored to HCEs remains.
A shorter introductory segment examines the significance of HCEs in clinical trials and what types of analyses have been and will be used within the scope of HCEs. The main topic of the presentation is the implementation of HCEs in ADaM. It offers a contextual overview of their relevance, examines current implementations in analysis data, and emphasizes the need for tailored rules, visualized through a few examples. Additionally, the discussion encompasses how future CDISC updates could simplify and endorse the utilization of HCEs in ADaM.
The Addendum on Estimands and Sensitivity Analysis in Clinical Trials (ICH E9 (R1)) has been or is in the process of being adopted by Health Authorities. All clinical studies will be expected to implement the estimands framework. It covers the important statistical considerations for implementation and includes some discussions on trial design and conduct; however, the technical implementation in the data flow, including regulatory submissions deliverables (SDTM, ADaM, cSDRG, ADRG), was not in scope. Consequently, current data standards needed to be evaluated to ensure sufficient support. The PHUSE Optimizing the Use of Data Standards working group started a project titled “Implementation of Estimands (ICH E9 (R1)) using Data Standards” in 2022 to address that gap and publish a white paper containing best practices to align approaches across implementers. This presentation will introduce the key outcomes from the white paper, with a focus on ADaM implementation.
Reporting the Exposure-Adjusted Incidence Rate (EAIR) can be part of an Integrated Summary of Safety (ISS) submission, since the regulatory agency can be interested in the duration of drug exposure of subjects until a certain adverse event (AE) occurred. The programming, however, can be a challenging task, especially since the subject population in an ISS is large and various AE categories must be reported. The adverse events analysis dataset (ADAE) can be used directly to report results in the table via a macro, or alternatively a time-to-event analysis dataset (ADTTE) can additionally be created to facilitate easy reporting of the EAIR. This paper will discuss how to set up ADAE and ADTTE to support the EAIR analysis and why this approach is more efficient. Furthermore, some challenges related to integrated database development and reporting will be discussed, alongside tips and best practices to overcome them effectively.
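For orientation, EAIR is typically computed as the number of subjects with the event divided by the total exposure time at risk, often expressed per 100 patient-years. The minimal sketch below computes it from ADTTE-style records, assuming common ADaM variable conventions (AVAL as time at risk in days, CNSR = 0 for an event); the exact derivation rules are sponsor-specific.

```python
# ADTTE-style records: AVAL is time at risk in days; CNSR = 0 means the event occurred.
adtte = [
    {"USUBJID": "001", "AVAL": 120, "CNSR": 0},  # event after 120 days
    {"USUBJID": "002", "AVAL": 365, "CNSR": 1},  # censored at study end
    {"USUBJID": "003", "AVAL": 200, "CNSR": 0},
    {"USUBJID": "004", "AVAL": 310, "CNSR": 1},
]

events = sum(1 for rec in adtte if rec["CNSR"] == 0)
patient_years = sum(rec["AVAL"] for rec in adtte) / 365.25

# Exposure-adjusted incidence rate per 100 patient-years.
eair = 100 * events / patient_years
print(f"{events} events / {patient_years:.2f} patient-years -> EAIR = {eair:.1f} per 100 PY")
```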
With this presentation we would like to present ADaM from a unique and innovative perspective, guiding the audience through practical examples and catering to both novice and advanced users. The presentation begins with a straightforward illustration, visually walking the audience through hypothetical Statistical Analysis Plan (SAP) outputs, assisting with the identification of the most suitable ADaM class/structure and the selection of the right derived variables for analysis-ready and traceable ADaM datasets.
Building upon this foundation, the session delves into more complex scenarios, showcasing techniques such as the creation of phantom records for intricate baseline identification and handling complex study endpoints. We'll also explore approaches for identifying analysis periods in studies with complex designs, such as cross-over and multi-treatment therapy.
Novice users will gain a structured introduction to ADaM principles, understanding fundamental concepts and best practices. Concurrently, advanced users will delve into optimizing ADaM implementation for varied study designs, gaining valuable insights.
Session 7C: Implementation Challenges
Berlin 1-3
Platform trials represent a class of master protocol design that evaluates multiple targeted therapies in a single disease setting. In such trials, each specific study treatment is described in a separate module. This modular approach involves one study with different modules, each having its own treatment schedule, visit windows, and reference timepoints. Each module is considered a sub-study, analyzed separately to achieve the common final objectives and endpoints.
These studies present a unique challenge as the data diversity grows over time. Each module starts at different times, introducing new code lists or conditions that were not initially considered in the corresponding analysis variables. Therefore, it is crucial to define SDTM and ADaM structures in a flexible and robust setting capable of covering all possible data scenarios from the study's onset.
In this presentation we will share our experiences implementing SDTM and ADaM during one modular study.
Data collection of adjudicated events and findings has been challenging due to the limited guidance from standards organizations and regulatory agencies, leading to differences in data collection and reporting approaches over the years.
The presentation will cover key points in collecting findings and events adjudication data and the inputs used to create the mapping. The company reviewed its historical process for mapping findings adjudication data in the lead-up to the new solution. Using the structure and process described in the PHUSE paper Best Practices for Submission of Event Adjudication Data (version dated 18-Oct-2019), the team examined the adjudication findings to improve data collection and reporting. The collection of adjudication findings maps to the FACE domain and required the creation of a custom domain, XC (Adjudication Findings), involving cross-functional collaboration between internal and external parties to ensure full traceability from end to end.
The FDA, in the Study Data Technical Conformance Guide (TCG) V5.4 and V5.6, has requested that sponsors report laboratory test results in SI units in the LB domain and in conventional units in a custom domain with the code LC, structured identically to LB.
In this paper we will review the two-domain approach to submitting standard laboratory units and the challenges a sponsor may face, including different sort orders between the domains caused by variable changes across the different standard unit types.
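The sketch below is a simplified illustration of that two-domain idea, assuming a single source of standardized results in SI units that is duplicated into an identically structured conventional-unit dataset; the conversion factors and variable handling shown are illustrative only, not prescribed by the TCG.

```python
# Illustrative conversion table: SI unit -> (conventional unit, multiplier).
CONVENTIONAL = {
    ("GLUC", "mmol/L"): ("mg/dL", 18.0182),
    ("CREAT", "umol/L"): ("mg/dL", 1 / 88.42),
}

# Source standardized results in SI units.
source = [
    {"USUBJID": "001", "LBTESTCD": "GLUC",  "LBSTRESN": 5.6,  "LBSTRESU": "mmol/L"},
    {"USUBJID": "001", "LBTESTCD": "CREAT", "LBSTRESN": 80.0, "LBSTRESU": "umol/L"},
]

lb, lc = [], []
for rec in source:
    lb.append({**rec, "DOMAIN": "LB"})                 # SI units in LB
    unit, factor = CONVENTIONAL[(rec["LBTESTCD"], rec["LBSTRESU"])]
    lc.append({**rec, "DOMAIN": "LC",                  # identical structure, conventional units
               "LBSTRESN": round(rec["LBSTRESN"] * factor, 2),
               "LBSTRESU": unit})

# Note: if sort keys include the standardized result or unit, the two domains can
# end up in different record orders, which is one of the challenges discussed.
print(lb)
print(lc)
```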
Navigating the Labyrinth of Controlled Terminology: Challenges and Opportunities in Integrating and Managing Diverse Terminologies for Broad Use of Metadata
The use of controlled terminology (CT) is gradually expanding to cater for protocol metadata and to facilitate broad use of metadata. However, the organizations that develop CT vary in their recommendations. This abstract aims to highlight some concrete examples of the challenges related to the use of different CT versions across organizations:
• The same C code in CDISC CT has multiple submission values.
• A given term in one CT version is mapped to several terms in another CT version.
• Organizations use the words ‘study’ and ‘trial’ interchangeably.
Such examples complicate the creation of a CT management system that should be able to include and integrate different CT versions to deliver ‘one source of truth’ for broad use of metadata. As the different organizations do not develop CT in the same direction, a single, publicly available, and clear model for ‘one source of truth’ would be of even greater value. A small sketch of what such version-aware reconciliation could look like follows below.
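The structure below is purely illustrative, assuming invented concept codes, submission values, and version dates; it only sketches how a registry keyed by concept code, with per-version submission values, could resolve terms from different CT versions to one agreed value.

```python
# Illustrative (invented) registry: each concept code carries its submission value
# per CT version plus organization-specific synonyms, so a lookup can resolve an
# incoming term to a single agreed value.
CT_REGISTRY = {
    "C12345": {  # hypothetical concept code
        "submission_values": {"2022-12-16": "TRIAL PHASE", "2023-09-29": "STUDY PHASE"},
        "synonyms": ["Trial Phase", "Study Phase"],
        "preferred": "STUDY PHASE",
    },
}

def resolve(term: str, version: str | None = None) -> str | None:
    """Map an incoming term to the preferred value, or to a specific CT version's value."""
    for entry in CT_REGISTRY.values():
        if term in entry["synonyms"] or term in entry["submission_values"].values():
            if version and version in entry["submission_values"]:
                return entry["submission_values"][version]
            return entry["preferred"]
    return None

print(resolve("Trial Phase"))                        # -> STUDY PHASE
print(resolve("STUDY PHASE", version="2022-12-16"))  # -> TRIAL PHASE
```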
Session 7 D&E: Data-Driven TMF (TMF Track)
Zoo 4+5
The Digital Data Flow (DDF) initiative combined with ICH M11 CeSHarP promise to revolutionise the way that clinical systems interoperate. In this session we will discuss how DDF, ICH M11 and other CDISC standardization initiatives could help the TMF more effectively tell the story of the trial while also being able to demonstrate compliance with the protocol, SOPs, and regulatory requirements: How DDF would drive more effective management of the TMF as a whole, and ultimately better inspection readiness by connecting the eTMF to the rich insights contained in other clinical systems. ICH M11 also promises to improve our inspection readiness using comprehensive and actionable study design information and more effective evaluation of completeness against the study protocol. Finally, the TMF Reference Model as part of the CDISC family of standards provides possibilities for information exchange and a more data driven approach to TMF management and inspection readiness.
- Aaron Grant, Phlexglobal
- John Blunden, NNIT
- Martin Hausten, Boehringer Ingelheim
- Virendra Alate, ICON
The TMF Reference Model Exchange Mechanism Standard (EMS) serves as an emerging standard for TMF interchange between organizations. When refreshing the standard, it is important to bring a holistic view of the benefits and challenges the industry currently faces when moving TMF content.
This conference panel brings together key stakeholders for success including representatives from a Contract Research Organization, sponsor organization, vendor, and migration specialist, to delve into the transformative advantages offered by the TMF Reference Model Exchange Mechanism.
Attendees can expect a comprehensive discussion that explores the benefits of the EMS, such as simplified export, faster transfer times, and import efficiencies. However, as with any standard, barriers to adoption will also be shared from the different perspectives of the stakeholders in the group. By bringing together diverse expertise, this panel aims to provide a holistic understanding of how this innovative approach positively impacts the entire TMF, fostering collaboration and improving quality for all.