Program of events is subject to change.

1 Apr 2020

Session 1: Opening Plenary and Welcome Address

9:00 - 10:00
9:00 - 9:15

Opening Remarks & Welcome Address

Joerg Dillert, E3C Chair
9:15 - 10:00

Keynote Presentation: IMI PIONEER - Prostate cancer diagnosis and treatment enhancement through the power of big data in Europe

Prof. James N'Dow, University of Aberdeen
Prof. James N'Dow is the coordinator of PIONEER, a €12M IMI-funded Prostate Cancer Big Data for Better Outcomes (BD4BO) research program. Prostate cancer is the second most common cancer in men and accounts for 9% of all cancer deaths in men. Currently, it is hard to predict which patients will respond best to different treatments, and which patients can be managed safely without undergoing treatment. The aim of PIONEER is to use big data to address key knowledge gaps related to the screening, diagnosis and treatment of prostate cancer patients. To do this, IMI PIONEER will standardise and integrate existing ‘big data’ from sources such as clinical trials and electronic health records. The project will draw on these data to identify ways to improve prostate cancer outcomes and health system efficiency. The project will apply FAIR (‘findable, accessible, interoperable, and reusable’) principles to the data. Ultimately, the project results should feed back into clinical centres so that patients can benefit from the best possible care.

Session 2: Medical Devices

Session Chair: Silvia Faini, LivaNova
10:00 - 11:10
10:00 - 10:20

CDISC Standards for Medical Devices: Historical Perspective and Current Status

Carey Smoak, S-Cubed
Work on SDTM domains for medical devices began in 2006. Seven SDTM domains were published in 2012 to accommodate medical device data, and minor updates to these domains were published in 2018. These seven SDTM domains are intended for use by medical device companies in getting their products approved/cleared by regulatory authorities, and for use by pharmaceutical companies to represent ancillary device data. As evidenced by the Therapeutic Area User Guides, pharmaceutical companies are using these seven SDTM domains for ancillary devices. However, adoption of these domains by medical device companies seems to be happening rather slowly. The Center for Devices and Radiological Health (CDRH) recently published a document titled ‘Providing Regulatory Submissions for Medical Devices in Electronic Format.’ In 1999, the FDA published a similar document for pharmaceutical products, which was the beginning of the development of CDISC standards for the pharmaceutical industry. While CDRH has not stated that it is moving towards requiring CDISC standards for medical device submissions, the publication of this document is a step in the right direction. Officially, CDRH accepts data in any format, including CDISC-conforming data.
10:20 - 10:40

When Medical Devices Unleash SDTM Power

Roxane Debrus, Terumo Europe NV and Silvia Faini, LivaNova
As per the CDISC SDTMIG-MD, seven additional SDTM domains have been created specifically for studies using a medical device. Based on the existing publications, most of the cases where these domains are implemented are drug studies in which a medical device is used: the medical devices either deliver the investigational drug (e.g. insulin pumps) or assess/measure the impact/effect of the drug (e.g. diagnostic devices such as an implanted glucometer). But how can they be used in medical device studies where no drugs are involved, and where the main objective is the assessment of the safety and/or efficacy of the device itself? Even though it is not yet mandatory to deliver a CDISC SDTM compliant database for FDA or PMDA submissions, some medical device companies have started designing the databases of their new studies based on the CDISC standards. LivaNova and Terumo are two global leading device companies that took this initiative. By combining experiences, two real business cases can be presented: one from LivaNova with a study on implantable heart valves, and a second from Terumo with a study on an implantable cardiac/peripheral stent. The presentation will focus on the challenges faced and the decisions taken during the mapping and implementation of the CDISC standards. This will hopefully broaden the scope of these seven additional SDTM domains and bring to light the possibilities they hold.
10:40 - 11:00

SDTM and Activity Trackers: Early Experiences and Challenges

Martin Gram and Gianluca Mortari, Novo Nordisk
Wearable technologies play a growing role in the drug development industry, and there is large potential to pursue novel endpoints that have clinical relevance within many different disease areas. This type of data is unlike traditional clinical data in terms of both data volume and data transparency. At the same time, standards for physical activity data are relatively immature. This presentation will share some of the experiences and learnings we have gathered while preparing our first generation of SDTM data based on an FDA class II wrist-worn activity tracker. We will discuss:
- How to define valid and reliable endpoints for physical activity
- How to handle sleep and non-wear
- Data structure, choice of device domains, sponsor-defined domains and associated variables
- The impact of data volume on system control and data generation
- Whether SDTM is fit for purpose when dealing with data from wearable devices
11:00 - 11:10

Q&A

Session 3: CDISC Foundational

Session Chair: Angelo Tinazzi, Cytel
11:10 - 12:20
11:10 - 11:30

Deconstructing SDTM - Finding the Hidden Gems

Johannes Ulander, S-Cubed
The CDISC SDTM standard has been with us for many years now, and thousands of SDTM datasets, aCRFs and define-xml files are created and submitted every year. Since producing SDTM deliverables is now an everyday task, the maturity and experience in the industry must have grown. Yet according to FDA there are quality issues and variability between submissions, which makes the Clinical Study Data Reviewer's Guide (cSDRG) the most important delivery, as it is there to explain where we are not able to conform to the standard! You would expect quality to improve and cSDRGs to become smaller as we get better at adopting the standard, yet this does not seem to happen. So what is the problem with SDTM? This presentation will deconstruct SDTM to increase your understanding of the standard and help you produce better quality SDTM datasets, aCRFs and define-xml files. (Maybe even how you will be able to automate it with the click of a button.)
11:30 - 11:50

Efficacy ADaM Datasets Applicable to All Solid Tumor Studies

Letizia Nidiaci, C. Rossi and S. Scartoni, Menarini Ricerche S.p.A.
In recent years, oncology has become an important area of interest in drug development. Oncology trials are very heterogeneous and complex, and the ADaM guidelines cover the entire scenario only partially. This work intends to give an overview of how to manage efficacy data for solid tumors. The typical primary efficacy endpoints are Overall Survival (OS), Objective Response Rate (ORR), Time to Progression (TTP) and/or Progression Free Survival (PFS), and the standard way to evaluate tumor status is the RECIST criteria (Response Evaluation Criteria in Solid Tumors). CDISC provides clear specifications for the creation of tumor datasets in the SDTM domains such as Tumor Identification (TU), Tumor Results (TR) and Response (RS). CDISC, in agreement with ADaM methodology, provides explicit indications on the structure of the main ADaM datasets by tumor type. In studies where the protocol allows enrollment of patients with different types of tumors, such as first-in-human studies, basket trials and master protocols, no specific guideline is yet available. Our aim is to focus on the creation of the ADaM datasets for the primary efficacy endpoints (ADTR, ADRS and ADORS), derived from tumor assessments performed according to the RECIST criteria and organized in the specific SDTM datasets. We defined a standard procedure to create these ADaM datasets based on the RECIST guideline, independently of the type of tumor. This methodology can organize efficacy parameters detected through the RECIST criteria into ADaM datasets for all solid tumor studies, whether they encompass a single tumor type or multiple tumor types.
11:50 - 12:10

Building a Team for Legacy Data Conversion

Lieke Gijsbers, OCS Life Sciences and Jasmine Kestemont, Innovion
This presentation highlights the experiences and lessons learned from a legacy data conversion project. The aim of the project was to prepare a study portfolio for submission to the U.S. Food and Drug Administration (FDA), and it encompassed the conversion of more than 20 clinical trials to SDTM and ADaM standards. Data in these trials were collected by various CROs in various formats, most of which were nowhere near CDISC(-like) formats, and some were collected over a decade ago. The focus of the presentation will be on the project strategy and approaches taken to conduct the conversion in an efficient and consistent way, and on the challenge of keeping the team motivated during scale-up and over the full duration of the project.
12:10 - 12:20

Q&A

Lunch Break

12:20 - 12:50

Session 4: Submission Experience

Session Chair: Simon Lundberg, AstraZeneca
12:50 - 14:00
12:50 - 13:10

NMPA Submission (Data Submission in China)

Sujit Khune and Marianne Caramés, Novo Nordisk
The regulatory environment in China has been evolving, including naming changes. Reforms are building smoother processes for innovative drug development in terms of adopting global standards, increasing review and approval transparency, and accelerating review and approval of new drugs. The National Medical Products Administration (NMPA) has refined previous regulations to clearly define the requirements for clinical trial operation, multi-regional clinical trial design, biostatistics principles, electronic data capture, data management and statistical analysis reporting, and on-site inspection. NMPA has also released guidelines for drug development covering communication during drug development and technical evaluation, electronic common technical document implementation, and post-approval safety surveillance. Chinese regulators have also released the 'Application Requirements for Clinical Trial Dataset and Related Materials in eCTD', in which they encourage the use of CDISC standards. This presentation is based on NMPA's latest developments in data standards and Novo Nordisk's latest submission experience with NMPA.
13:10 - 13:30

Re-mastering the Define-XML and its Brother, the "Reviewer Guide"

Angelo Tinazzi, Cytel Inc
Define-xml and the reviewer guide are two required pieces of all electronic data submissions that include SDTM and ADaM packages, for both FDA and PMDA. They are key for the reviewer to understand how you organized the data you collected and how you analyzed them. A good-quality define.xml and reviewer guide are your business card with the agency. With this presentation I would like to discuss what you have to do to make sure your define.xml and reviewer guide are of good quality. Examples include, but are not limited to, the following topics:
- When do I need to assign a code-list to a variable or to value level metadata?
- Use of subset code-lists
- When do I need to create value level metadata?
- Good vs. bad computational algorithm descriptions
- When is it time to put details in the reviewer guide (instead of the define-xml)?
- Consistency between the SDTM define-xml and the ADaM define-xml
Examples of good metadata to describe complex situations will also be discussed, as well as common misuses.
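As an illustration of how such define.xml checks can be automated, here is a minimal Python sketch, assuming a Define-XML 2.0 file using the standard ODM namespace, that lists the variables with no codelist assigned; treat the element paths as illustrative rather than authoritative.

    # Minimal sketch: list Define-XML 2.0 variables lacking a codelist reference.
    # The namespace URI follows the published ODM/Define-XML schema.
    import xml.etree.ElementTree as ET

    ODM = "http://www.cdisc.org/ns/odm/v1.3"
    NS = {"odm": ODM}

    def variables_without_codelist(define_path):
        """Return (OID, Name) for every ItemDef that has no CodeListRef child."""
        root = ET.parse(define_path).getroot()
        return [(item.get("OID"), item.get("Name"))
                for item in root.iter(f"{{{ODM}}}ItemDef")
                if item.find("odm:CodeListRef", NS) is None]

    for oid, name in variables_without_codelist("define.xml"):
        print(f"{oid}: {name} has no codelist assigned")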
13:30 - 13:50

Journey to PMDA Submission (From Legacy Data to SDTM 3.2)

Charlotte Dhont and Oana Pasalau-Cioaba, UCB
A decade of data, ranging from old paper CRF data, to data collected when standards were not yet mandatory, to data tabulated into old SDTM versions, is assessed within one project. How do you ensure oversight of all this data, which needs to be processed, and at the same time keep pace with the regulatory authorities' new requirements? The data journey turned out to be a rollercoaster, as a wide range of questions emerged throughout the course of the project: How do the standards apply to this data? To what extent is the data compliant with the current sponsor standards? How is consistency ensured throughout the project? What are the newest requirements that the regulatory agency is enforcing on the submission? The only constant in the process was CHANGE. Change dictated the operating strategy: divide and conquer. Retrieval of archived data from different filing systems (some of which were already retired from use), standard unit conversion, Pinnacle 21 validation rule compliance, the downstream impact of SDTM mapping and the upstream impact of ADaM review were some of the weak links identified in the process, and they served as starting points for focused discussions with subject matter experts. As soon as the data needs were defined, answers flowed from validated channels, and hands-on common knowledge of the standards translated into re-assembly of the data in the required format. All of this data comes together in a regulatory submission at one moment in time; good preparation and effective communication pave the way, and a roadmap illustrates the journey from legacy data to ready-for-submission data.
13:50 - 14:00

Q&A

Session 5: Metadata

Session Chair: Malathi Hari, Larix
14:00 - 15:10
14:00 - 14:20

Lean Protocol: The Case for Standards Driven Digital Protocols

Frederik Malfait, Nurocor
Clinical trials are science-driven experiments that require significant planning and operational execution. Study protocols capture the planning, but today that information is mostly contained in paper documents and not readily actionable. TransCelerate's Common Protocol Template harmonizes protocol content, but the working product is still paper-based. On the other hand, clinical data standards have addressed how clinical data is collected, tabulated, and reported, but they have taken a fragmented approach and are disconnected from the study protocol. In this presentation, we make the case that a comprehensive solution for end-to-end process automation in clinical research must depend on the capability to use study protocols as part of a fully digital solution, driven from the outset by integrated data standards. We discuss the challenges of creating digital protocols, show why data standards must connect with the protocol, and make the case that clinical data standards must be applied much earlier in the clinical development cycle. Finally, we describe how technology platforms play an essential role in supporting a robust digital solution. Digital protocols driven by integrated data standards enable the automation of clinical trial operations and allow more granular control of clinical development processes, even before the protocol is final. Thus, a Lean Protocol (TM) process promotes the front-loading of activities, leading to a significant reduction in cycle times.
14:20 - 14:40

A Perfect Summer: Working Towards Better Terminology

Dave Iberson-Hurst, Assero
In 2013 I produced an Excel spreadsheet, powered by Visual Basic macros, that detailed all the changes across the CDISC terminology since the first version was issued in 2007. By 2015 I was convinced there had to be a better way. This presentation will cover the work undertaken since 2015, and in particular in the summer of 2019, to process and merge all terminology issued by CDISC since the first version was posted, and the challenges faced in undertaking the work. The presentation will detail:
- how we have moved from close alignment with the CDISC Excel sheets to a linked data/graph approach
- the approach taken in handling errors, duplicates and discrepancies
- the management of terminology versions
- the handling of CDISC's change instructions
- the recreation of the source material
The presentation will also discuss moving forward from using the spreadsheets as the source material to using the CDISC Library API to provide more automated updates. The presentation will conclude with lessons learned and how this work can be, and is being, shared with the user community.
14:40 - 15:00

Beyond Biomedical Concepts: How Study Management Concepts Fill the Gaps

Dr. Philippe Verplancke, XClinical GmbH
15:00 - 15:10

Q&A

Session 6: CDISC Foundational, Cont'd

Session Chair: Éanna Kiely, Clinbuild
15:10 - 16:20
15:10 - 15:30

Development and Implementation of the LOINC-SDTM Mapping Table

Ward Puttemans, UCB and Erin Muhlbradt, NIH
On behalf of the Lab Controlled Terminology team, we will present the work that has been done to develop the LOINC-SDTM mapping table. We will clarify the requirements and what it means practically for users. Next, we will talk about the actual development of the mapping table: how we developed the structure, populated the file, and the steps that were taken towards its publication. In the last part of the presentation we will talk about the actual implementation at sponsor level, assessing how the mapping table influences a sponsor's existing internal lab controlled terminology, changes to validation software, and the impact on upcoming trials. We will show how the development of the mapping table is a great example of cooperation between sponsors, CROs and government agencies.
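To give a flavour of the implementation step at sponsor level, here is a hypothetical Python/pandas sketch joining incoming lab records (keyed on LOINC code) to a simplified mapping table to populate LBTESTCD/LBTEST; the published mapping file has its own layout, so the column names here are placeholders.

    # Hypothetical sketch: derive SDTM LB test identifiers from LOINC codes
    # using a simplified mapping table. Column names are placeholders.
    import pandas as pd

    # Simplified excerpt of a LOINC-to-SDTM mapping table.
    mapping = pd.DataFrame({
        "LOINC":    ["2345-7", "718-7"],
        "LBTESTCD": ["GLUC",   "HGB"],
        "LBTEST":   ["Glucose", "Hemoglobin"],
    })

    # Incoming lab records keyed on LOINC code.
    lab = pd.DataFrame({
        "USUBJID": ["001", "001"],
        "LBLOINC": ["2345-7", "718-7"],
        "LBORRES": ["5.4", "13.9"],
    })

    lb = lab.merge(mapping, left_on="LBLOINC", right_on="LOINC", how="left")
    unmapped = lb[lb["LBTESTCD"].isna()]  # codes to review before submission
    print(lb[["USUBJID", "LBLOINC", "LBTESTCD", "LBTEST", "LBORRES"]])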
15:30 - 15:50

What's Up with LOINC and UCUM? From EHR Records to LB Dataset in Just a Few Minutes

Jozef Aerts, XML4Pharma
The FDA has mandated the use of LOINC coding for lab tests in submissions, and recently endorsed UCUM for representing units. What does this mean for CDISC-standards based submissions? The presentation will start with a live demo, retrieving electronic health records from a very large EHR system and generating complete SDTM-LB datasets "on the fly", including standardizing from US conventional units to SI units, all without any use of conversion tables. This is possible by combining the LOINC codes with UCUM notation for the units, and by using a RESTful web service for conventional-to-SI (and vice versa) unit conversion that we recently developed and that will soon be deployed by the National Library of Medicine (NLM), to which we donated the source code. The demo application also uses a web service that implements the LOINC to CDISC-LB mapping developed by the CDISC-CT Lab team. The demo shows that in the near future there will be no way around allowing the use of UCUM notation in SDTM and SEND, and slowly fading out the use of the CDISC "UNIT" codelist. Also, we might be close to the day that the FDA mandates the use of UCUM, so we had better be prepared.
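As a hedged illustration of the kind of web-service call such a demo relies on, the Python sketch below queries a conversion service for the SI equivalent of a conventional-unit result; the URL, parameter names and response shape are placeholders, as the actual service interface is not specified here.

    # Hypothetical sketch of calling a RESTful unit-conversion service like
    # the one described above. Endpoint and parameters are placeholders.
    import requests

    def convert_to_si(loinc_code, value, ucum_unit,
                      base_url="https://example.org/convert"):
        """Ask the conversion service for the SI equivalent of a lab result."""
        resp = requests.get(base_url, params={
            "loinc": loinc_code,  # identifies the test, hence the target SI unit
            "value": value,
            "unit": ucum_unit,    # source unit in UCUM notation, e.g. mg/dL
        }, timeout=10)
        resp.raise_for_status()
        body = resp.json()        # assumed shape: {"value": ..., "unit": ...}
        return body["value"], body["unit"]

    # e.g. glucose 95 mg/dL -> roughly 5.3 mmol/L
    # print(convert_to_si("2345-7", 95, "mg/dL"))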
15:50 - 16:10

Use Case for Multiple Enrollments

Wafaa Jebert, Ichnos Sciences
Multiple screenings and enrollments are considered a challenging topic in data preparation. It is complicated to represent multiple screening data without breaking a few rules in SDTM/ADaM, and it is even more tedious to represent multiple enrollment data when multiple enrollment is not allowed per protocol. The presentation will be about a specific case encountered in one of the pivotal studies for an FDA submission. It will highlight the use case encountered and how the issue was handled in SDTM/ADaM, the submission package and the statistical analysis. The use case is the following:
- A subject was randomized twice sequentially: the subject finished the treatment period, then was randomized again and treated.
- Another subject was randomized twice in the same period at two different centers.
In this trial, multiple enrollments were not allowed by the protocol, so both cases are protocol deviations. The issue, the different solutions considered, and the process adopted by the cross-functional team will be detailed in the presentation, and the impact on SDTM, ADaM and the statistical analysis will be discussed. The presentation will also discuss the latest guidance issued by the FDA on handling multiple enrollments, with its pros and cons.
16:10 - 16:20

Q&A

Session 7: First Day Closing Remarks

Session Chair: Joerg Dillert, E3C Chair
16:20 - 16:35
16:20 - 16:35

Updates from the CDISC Board to the Community

Jonathan Zung, CDISC Board Chair
2 Apr 2020

Session 8: Regulatory Session, Part I

Session Chair: Nick De Donder, Business & Decision Life Sciences
9:00 - 10:45
9:00 - 9:30

Use of CDISC SDTM in Large-scale IPD Meta-Analyses

Dr. Christina Reith, Nuffield Department of Population Health, University of Oxford
9:30 - 10:00

Big Data Task Force

Sofia Zastavnik, HMA/EMA
10:00 - 10:30

Regulatory Presentation

Dr. Yuki Ando, PMDA
10:30 - 10:45

Q&A

The Q&A will include the speakers listed above, as well as Frank Pétavy and Gianmario Candore of the EMA.

Session 9: TAUG

Session Chair: Andrea Rauch, Boehringer-Ingelheim
10:45 - 11:55
10:45 - 11:05

Let Food be thy Medicine and Medicine be thy Food

Fatima Kassim and Louella Schoemacher, OCS Life Sciences
Dear diary, this morning I ate two sandwiches with jam. I drank one glass of grapefruit juice and one glass of water to take my medication. The person who wrote this diary is exposed both to nutrition and to medication. However, the ways these data would be collected and standardised in clinical trials are very different. In the pharmaceutical industry, rules are very strict, and all submissions to the FDA, EMA and PMDA are either required or advised to follow CDISC standards to be accepted. In nutrition research, however, rules are often not as strictly implemented, and the data collected can be so innately different that standardisation is one tough cookie. In September 2019, the Nutrition TAUG version 1.0 was published. This user guide describes how to use CDISC standards to represent data pertaining to nutrition studies. There is an inherent difference between the way nutritional data and pharmaceutical data are collected, handled and reviewed within the life sciences industry and by regulatory authorities. This presentation will focus on how these differences come into play when trying to standardise data across the industry.
11:05 - 11:25

SDTM: Let's read outside the Bible

Sandra Latorre, Business & Decision Life Sciences
Over the past years, the development of the CDISC models has accelerated. The SDTM model and the SDTMIG are the basis for CDISC users who want to create SDTM datasets. Despite the amount of information present in these documents, we still struggle from time to time to find the right way to map some data. We believe that in order to enhance our SDTM expertise, it is useful to gain knowledge from other standards, mainly CDASH and the TAUGs. Indeed, CDISC is actively collaborating with other partners on the development of Therapeutic Area Data Standards. The Therapeutic Area User Guides (TAUGs) are intended to give guidance on how data collected for a specific therapeutic area should be mapped to SDTM. However, some interesting generic information can be extracted from these TAUGs that can be used for diseases other than the one in scope. For example, the Nutrition TAUG shows how to handle various types of diary data and how they could end up in SDTM, even when we are working on a non-nutrition related disease. As for the CDASH Model, although it is known to provide a general framework for collecting information on CRFs, it also provides information on how to map collected variables to SDTM. For example, it explains the mapping of short physical examination data to the Procedures (PR) domain, which is not obvious solely from reading the SDTMIG.
11:25 - 11:45

Application of CDISC Data Standards in Cardiovascular Data

Shilpakala Vasudevan, Ephicacy
The Cardiovascular TAUG (TAUG-CV) is designed to accommodate a number of common cardiovascular endpoints, including myocardial infarction, ischemic attack, stroke and heart failure. This presentation will show, through examples, how the CDISC standards are implemented for some of these endpoints.
11:45 - 11:55

Q&A

Lunch Break

11:55 - 12:25

Session 10: Tech-Enabled Standards

Session Chair: Johannes Ulander, S-Cubed
12:25 - 13:55
12:25 - 12:45

CDISC Library Try-out: From Implementation to Evaluation of the API

Roman Radelicki, SGS Life Sciences
CDISC Library promises linked data and a REST API to deliver CDISC standards metadata to software applications that automate standards-based processes. With my background as a programmer, my curiosity was immediately triggered during the presentation on the CDISC 360 project: a REST API (Application Programming Interface), linked metadata and biomedical concepts! In this presentation, I want to share the journey of implementing the CDISC Library in one of our in-house developed tools. To test and demonstrate the implementation, we used our TS (Trial Summary Information) dataset creation tool. This tool was the ideal case-study candidate because it is a standalone tool that makes use of the CDISC SDTMIG and CDISC CT data without a major impact on other tools, allowing quick deployment of the CDISC Library. The TS dataset creation tool enables the Clinical Data Manager to create the TS dataset for a trial, necessary for submission to the authorities. The Clinical Data Manager is able to add trial summary parameters based on the CDISC controlled terminology. In this presentation, I will explain the tool and cover the API calls in more detail. My thoughts and findings on the use of CDISC Library within the tool will be shared, and I will compare the CDISC Library to the former way we handled the CDISC SHARE data. Finally, I will evaluate what might still be missing and future opportunities for improvement.
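For readers who want to experiment themselves, a minimal Python sketch of a CDISC Library API call follows; the endpoint path and the api-key header reflect the public CDISC Library documentation, but verify both against the current docs before relying on them.

    # Minimal sketch of a CDISC Library API call, assuming an API key and the
    # endpoint layout from the public documentation.
    import os
    import requests

    BASE = "https://library.cdisc.org/api"

    def get_sdtmig_datasets(version="3-3"):
        """Fetch the dataset list for an SDTMIG version from CDISC Library."""
        resp = requests.get(
            f"{BASE}/mdr/sdtmig/{version}/datasets",
            headers={
                "api-key": os.environ["CDISC_LIBRARY_API_KEY"],  # assumed header name
                "Accept": "application/json",
            },
            timeout=30,
        )
        resp.raise_for_status()
        return resp.json()

    # datasets = get_sdtmig_datasets()
    # The response is assumed to carry a "_links" structure listing datasets.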
12:45 - 13:05

Creating and Maintaining Open Source: Why and How?

Katja Glass, Katja Glass Consulting
What kinds of open source solutions and free tools are available to support CDISC processes? Why is open source very successful in other industries, but not in pharma? What can we do to enable more open source solutions, and how? How do we find what is available? These questions will be clarified and discussed. See which opportunities open source can bring us and get a better understanding of how open source could work.
13:05 - 13:25

CDISC 360 Metadata-Driven Data Transformation Engine for Automation and Transparency

Gregory Steffens, AbbVie
This presentation describes the metadata design and data transformation engine (DTE) software approach being used in the CDISC 360 project. The objectives of automation and transparency of data flow are attained with a robust metadata design and accompanying software. The metadata design is unchanged across all data states and standards; it can be used to store any data standard, any study or integrated data specification, and any data flow. The software is likewise unchanged across data states and studies and "knows" how to perform the data transformations and derivations needed to accomplish the data flows described in the metadata. The scope of application of this metadata-driven DTE includes a wide variety of relational data flows beyond current CDISC, such as a DTS (data transfer specification) and a DRP (data review plan). The user describes what data flow to implement, and the DTE implements the code to perform it, so much less study-level programming is necessary.

Implementing such a design, of metadata and software, delivers automation and transparency today. Furthermore, it establishes a foundation for future development, in technologies such as AI and machine learning, to manage metadata-resident standards and project data specifications. This revolutionary change in the way we specify and perform data transformations and reporting, with an evolutionary implementation, is described. The evolution from inaccessible, document-based data standards to an industry-level standard of metadata design for storing transparent standards and data specifications is taking its next step now. It will become clear that this more robust metadata content requires the publication format to go beyond the simple define file, which still has the feel of its origins in the old PDF document format and follows the XML schema design rather than optimizing for human-friendly presentation formats.

Industry-standard software and metadata must replace the study-by-study coding process we use today. Study programming is opaque, time consuming and of uneven quality. The days of the opaque CTR method of coding (copy-tweak-run) are being replaced by a transparent DTE, intelligent enough to implement any data flow described by metadata, separating the description of the data flow, for machine and human, from the mechanical code syntax that implements it. The revolution towards less project-specific programming and more metadata management requires the kind of evolutionary steps being demonstrated in the CDISC 360 project. Our data standards must evolve to include these industry-level standards of metadata design and shared DTE software in order to deliver transparency, speed, quality and data privacy in data analyses. CDISC can extend its scope of industry-level standards to metadata and software in order to lead the way to the future.
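A deliberately tiny Python sketch of the metadata-driven idea, not the actual DTE, is shown below: the data flow is described as metadata, and one generic engine applies any such description, so no study-specific code is written.

    # Tiny illustration of the metadata-driven idea (not the actual DTE):
    # the data flow is metadata, and a generic engine applies it.
    import pandas as pd

    # Metadata describing a data flow: target variable <- source column + rule.
    spec = [
        {"target": "USUBJID", "source": "subj_id",  "rule": "copy"},
        {"target": "AESTDTC", "source": "start_dt", "rule": "iso8601"},
        {"target": "AETERM",  "source": "ae_text",  "rule": "upcase"},
    ]

    RULES = {
        "copy":    lambda s: s,
        "upcase":  lambda s: s.str.upper(),
        "iso8601": lambda s: pd.to_datetime(s).dt.strftime("%Y-%m-%d"),
    }

    def transform(raw, spec):
        """Generic engine: apply each metadata row; knows nothing of the study."""
        return pd.DataFrame({m["target"]: RULES[m["rule"]](raw[m["source"]])
                             for m in spec})

    raw = pd.DataFrame({"subj_id": ["001"], "start_dt": ["12 Jan 2020"],
                        "ae_text": ["headache"]})
    print(transform(raw, spec))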
13:25 - 13:45

A Vision of the Invisible: A New SDTM Validation Tool

Anamaria Calai, Cmed Clinical Services
This presentation will describe an SDTM Validation Tool created by Cmed to expand on the checks of the Pinnacle 21 Validator Tool. Limitations of the checks performed by Pinnacle 21, and individual client interpretations of the CDISC SDTM standards, are countered by applying an additional layer of validation to secure the highest quality deliverables. The Cmed Validation Tool is not intended to replace the Pinnacle 21 Validator Tool or other similar tools, but rather to enhance their power. The creation of the Validation Tool was based on an assessment of the manual checks that would remain after a programmatic QC and validation, as well as of possible issues that could be caused by programming and data collection/entry. Three areas were identified where the validation could be easily automated and would have the biggest impact on quality and efficiency. The Validation Tool uses individually programmed checks for each domain and implements cross-domain checks to provide additional value. The validation follows a very dynamic, non-study-specific approach. An important aspect of the Validation Tool is the comparison with the raw data, making it possible to detect common programming/specification issues. Moreover, using the Validation Tool to automate self-checks during programming, as opposed to running it on the final datasets, reduces the time and effort needed to QC the datasets, and the resulting re-work. The overall scope of the Validation Tool is an in-house SAS macro that detects SDTM inconsistencies and deviations from the CDISC standards, and minimizes the time and effort required for QC/validation. The Validation Tool's strengths are proactivity, efficiency, specificity, traceability and transparency.
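The tool itself is a SAS macro; as a language-neutral illustration of the kind of cross-domain check described, the Python sketch below flags AE records whose subject does not exist in DM, a classic SDTM consistency rule.

    # Illustrative cross-domain check (the actual tool is a SAS macro):
    # every USUBJID in AE must exist in DM.
    import pandas as pd

    def check_ae_subjects_in_dm(ae, dm):
        """Return AE records whose USUBJID has no match in DM."""
        return ae[~ae["USUBJID"].isin(dm["USUBJID"])]

    dm = pd.DataFrame({"USUBJID": ["STUDY-001", "STUDY-002"]})
    ae = pd.DataFrame({"USUBJID": ["STUDY-001", "STUDY-003"],
                       "AETERM":  ["NAUSEA", "RASH"]})

    issues = check_ae_subjects_in_dm(ae, dm)
    if not issues.empty:
        print("AE subjects missing from DM:")
        print(issues)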
13:45 - 13:55

Q&A

Session 11: RWD

Session Chair: Stijn Rogiers, SAS
13:55 - 15:05
13:55 - 14:15

The IMI EHDEN Project - Cultivating Real-World Data Across Europe

Maxim Moinat, The Hyve; Nigel Hughes, Janssen
The European Health Data & Evidence Network (EHDEN) project, funded via the Innovative Medicines Initiative (IMI), is the largest of its kind in Europe working in the domain of RWD/RWE. It is a public-private partnership consortium of 22 partners, running from 2018 to 2024 and led by Erasmus Medical Center (EMC) and Janssen, working to create an open science community symbiotic with the global Observational Health Data Sciences and Informatics (OHDSI) framework, to facilitate and accelerate observational/RWD-based research at scale without impinging on quality. At its core is the standardisation of RWD via the Observational Medical Outcomes Partnership (OMOP) common data model (CDM), standardised analytics, and a sustainable research community for the coming decades. Between EHDEN and OHDSI, several use cases are being developed on the boundary between clinical trials and observational data, and we look forward to discussing these with the CDISC community.
14:15 - 14:35

Janssen Autism Knowledge Engine (JAKE) System in Autism Spectrum Disorder

Sarah Bonneux, Janssen
Autism spectrum disorder (ASD) is a developmental disability that can cause significant social, communication and behavioral challenges. Given the high incidence of ASD, the significant unmet medical need, and the long-term associated morbidity, there are multiple facets of the disorder that could benefit from novel treatments. The JAKE system is an exploratory integrated system of tools and technologies designed to optimize the collection of behavior and biosensor data for research purposes in clinical ASD trials. It consists of various components. My JAKE is an interface to an autism personal healthcare record, with tools and technologies tailored to individuals with ASD and their caregivers/parents and healthcare providers. JAKE Sense is an experimental workbench that contains selected biosensors to assess physiological characteristics and behavior related to core symptoms of ASD. My JAKE data is translated into a multiple-CSV-file structure described in the Analytics Data Elements (ADE), which is utilized as the specification document for SDTM mapping. Currently, the JAKE system is exploratory, but it may potentially lead to detecting changes in response to treatment and be utilized as an endpoint for interventions in ASD.
14:35 - 14:55

Mapping EHR Data to EDC - A Case Study

Suchitra Ramaswamy, Zifo RnD Solutions
In any clinical trial, the patient data collected as part of the trial is first stored in the hospital's EHR/EMR system, followed by entry of the protocol-mandated data into eCRFs configured in an Electronic Data Capture (EDC) system, leading to double data entry and concerns about data cleanliness. With some of the leading EHR/EMR providers becoming HL7 FHIR standards compliant, coupled with the industry-wide adoption of the ODM standard for clinical data transfer, the way has been paved for integrating EDC systems with EHR systems, thereby eliminating the need for transcribing and re-entering data into multiple systems. Utilizing the true potential of both standards, we created a system that talks between an EDC and an EHR system to retrieve the required clinical data from the EHR for a particular subject. With increasing industry adoption of the FHIR standards at last, there is some hope of leaving the doctors and the nurses to do what they do best: taking care of the patients.
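As a sketch of the EHR-to-EDC direction described above, the Python snippet below pulls laboratory Observations for one patient over the standard FHIR REST search API; the server URL is a placeholder, and resource shapes follow FHIR R4.

    # Minimal sketch: pull lab Observations for one patient over FHIR REST.
    # The server URL is a placeholder; resource shapes follow FHIR R4.
    import requests

    FHIR_BASE = "https://ehr.example.org/fhir"  # placeholder endpoint

    def fetch_observations(patient_id):
        """Search Observations for a patient; yield (test, value, unit)."""
        resp = requests.get(
            f"{FHIR_BASE}/Observation",
            params={"subject": f"Patient/{patient_id}",
                    "category": "laboratory"},
            headers={"Accept": "application/fhir+json"},
            timeout=30,
        )
        resp.raise_for_status()
        bundle = resp.json()
        for entry in bundle.get("entry", []):
            obs = entry["resource"]
            coding = obs["code"]["coding"][0]   # typically a LOINC coding
            qty = obs.get("valueQuantity", {})
            yield coding.get("display"), qty.get("value"), qty.get("unit")

    # for test, value, unit in fetch_observations("12345"):
    #     print(test, value, unit)  # ready to map onto ODM ItemData for the EDC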
14:55 - 15:05

Q&A

Session 12: Regulatory Session, Part II

Session Chair: Sujit Khune, Novo Nordisk
15:05 - 16:50
15:05 - 15:35

Collaboration and a Public Health Crisis

Helena Sviglin, FDA-CDER
15:35 - 16:05

Helpful Tips for Review-Ready NDA/BLA Submissions to FDA CDER

Dr. Matilde Kam, FDA-CDER
16:05 - 16:35

Safety Review: Approach & Tools

Dr. Alan Shapiro, FDA-CDER
16:35 - 16:50

Q&A

Session 13: Closing Plenary

Session Chair: Joerg Dillert, E3C Chair
16:50 - 17:15
16:50 - 17:05

State of the CDISC Union

David R. Bobbitt, CDISC President and CEO
17:05 - 17:15

Closing Remarks

Peter Van Reusel, CDISC