Program of events is subject to change.

8 May 2019

Session 1: Opening Plenary & Keynote Address

Joerg Dillert, Oracle, E3C Chair
Pearl I, II & III (Ground Floor)
09:00 – 10:45

Welcome

Joerg Dillert, Oracle, E3C Chair

Welcome Address from the CDISC Board Chair

Dr. Douglas Peddicord, Association of Clinical Research Organizations, CDISC Board Chair

Keynote Presentation: Towards the Internet of FAIR Data

Dr. Luiz Olavo Bonino, Leiden University Medical Centre

Dr. Luiz Olavo Bonino is the International Technology Coordinator of the GO FAIR International Support and Coordination Office and Associate Professor of the BioSemantics group at the Leiden University Medical Centre in Leiden, the Netherlands. His background is in ontology-driven conceptual modelling, semantic interoperability, service-oriented computing, requirements engineering and context-aware computing. In the last 5 years Luiz has led the development of a number of technologies and tools to support making, publishing, indexing, searching, evaluating and annotating FAIR (meta)data.

State of the CDISC Union - Bringing Clarity to Data

David R. Bobbitt, CDISC President and CEO

CDISC Standards Update

Bess LeRoy, CDISC

CDISC standard development teams accomplish a tremendous amount of work over the course of a year, including new and updated foundational standards, therapeutic area user guides, controlled terminology, and data exchange standards. This presentation will provide an update on what has been accomplished over the past year, what development teams are working on currently, and a look toward the future of standards development.

Session 2: Second Plenary - CDISC 360

David R. Bobbitt, CDISC President and CEO
Pearl I, II & III (Ground Floor)
11:15 – 13:00

Evolution of the CDISC Standards

Peter Van Reusel, CDISC

Over the past 20 years, CDISC and the CDISC community have established a globally accepted standard for clinical studies. The CDISC standards have become the de facto standard among clinical trial sponsors, service providers and regulatory agencies.

These standards have proven their value and use over time, but it is generally acknowledged that the implementation of data standards by various companies did not deliver the return on investment that was initially expected. The current CDISC foundational standards are normative; they describe the structure and business rules of the datasets and metadata. The actual data concepts, however, are not standardized in a meaningful way.

CDISC and the CDISC community are embarking on an exciting journey to explore how biomedical concepts can be represented as linked, informative standards that also describe the relationships between the concepts and the current foundational standards. This presentation will discuss the scope and approach of the CDISC 360 project.

CDISC 360 Automation Using the CDISC Library and ODM 2.0 APIs

Sam Hume, CDISC

The CDISC 360 project seeks to demonstrate end-to-end automation across the clinical research data lifecycle using standards metadata enhanced with a conceptual layer. CDISC 360 includes participants developing proof-of-concept tools to apply the standards metadata to drive automation across the three use cases identified in the project scope. CDISC, while not developing tools in support of end-to-end automation, will provide access to the CDISC Library and ODM REST APIs to support software tool development.

This presentation discusses the CDISC 360 project from an architectural perspective, highlighting the roles of the CDISC Library and ODM APIs. The presentation will provide a brief overview of the technical objectives of CDISC 360 to baseline the audience. It will show how the conceptual layer in the CDISC Library model supports automation and reduces implementation inconsistencies. Most of the presentation will demonstrate how the CDISC Library and ODM APIs contribute to process automation.

ODM 2.0 complements the CDISC Library in support of CDISC 360. Currently under development, ODM 2.0 represents a major update to the ODM 1.3.2 standard. ODM 2.0 includes a REST API to support more modern and dynamic modes of data exchange, and it includes support for JSON. This presentation will highlight the differences in the CDISC Library and ODM API roles and technical implementations. While it will be necessary to use the CDISC Library API to develop automation solutions as part of CDISC 360, ODM API usage will be optional.
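
To give a flavour of the kind of call involved, the following minimal Python sketch queries a CDISC Library-style REST endpoint for SDTMIG dataset metadata. The exact path, response fields and authentication header are assumptions to be checked against the published API documentation; this is an illustrative client, not a definitive one.

    import requests

    BASE_URL = "https://library.cdisc.org/api"      # documented base URL
    HEADERS = {
        "api-key": "<your-api-key>",                # placeholder credential
        "Accept": "application/json",
    }

    def get_sdtmig_dataset(version: str, dataset: str) -> dict:
        """Fetch metadata for one SDTMIG dataset, e.g. version='3-3', dataset='DM'."""
        url = f"{BASE_URL}/mdr/sdtmig/{version}/datasets/{dataset}"
        response = requests.get(url, headers=HEADERS, timeout=30)
        response.raise_for_status()
        return response.json()

    dm = get_sdtmig_dataset("3-3", "DM")
    # A hypermedia response typically embeds links to related resources,
    # so a client can walk from a dataset to its variables and codelists.
    print(dm.get("label"), "-", len(dm.get("datasetVariables", [])), "variables")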

Semantic Technology and Linked Data for Clinical Research Standards

Frederik Malfait, Nurocor Inc.

The dissemination of CDISC standards for clinical research has gone through significant changes in the last few years, moving from publications in hard-to-process PDF documents to machine-readable download formats, and finally to the availability of the CDISC Library, which implements a modern hypermedia API built on the foundations of linked data. It is expected that the results of CDISC 360 will continue this trend and represent standards more precisely, based on semantically more meaningful context.


In this session we provide background on better ways to represent and disseminate information in those cases where relationships and connected information play an increasingly large role. We show how the CDISC Library has moved away from table-based representations and instead harnesses the possibilities of knowledge graphs to better reflect the many relationships within the CDISC standards. We also show how a hypermedia API fits naturally with this approach. Finally, we discuss how regulators, standards organizations, sponsor companies (large and small), and technology vendors can and must embrace new technology in a common ecosystem to make the application of data standards and initiatives like CDISC 360 a success.
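
As a small illustration of the shift from tables to graphs, the following Python sketch (using rdflib, with an invented example vocabulary) shows how relationships between standards elements can be stored as triples and traversed with a SPARQL query rather than joined as flat tables.

    from rdflib import Graph, Literal, Namespace

    EX = Namespace("http://example.org/cdisc/")     # invented vocabulary

    g = Graph()
    g.add((EX.LBTESTCD, EX.partOfDomain, EX.LB))
    g.add((EX.LBTESTCD, EX.hasLabel, Literal("Lab Test Short Name")))
    g.add((EX.LB, EX.definedIn, EX.SDTMIG_3_3))

    # SPARQL follows relationships through the graph instead of joining tables.
    rows = g.query("""
        PREFIX ex: <http://example.org/cdisc/>
        SELECT ?var ?standard WHERE {
            ?var ex:partOfDomain ?domain .
            ?domain ex:definedIn ?standard .
        }
    """)
    for var, standard in rows:
        print(var, "is defined (via its domain) in", standard)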

Session 3, Track A: SEND

Nick De Donder, Business & Decision Life Sciences
Pearl I (Ground Floor)
14:00 – 15:30

Preparing for SEND

Seppe Diels, SGS Life Science

According to the FDA Data Standards Catalog, submitting data for nonclinical studies in the Standard for Exchange of Nonclinical Data (SEND) electronic format has been supported since 2011. It became a requirement from December 2016 onwards.

This requirement for SENDIG v3.0 will end in 2019 and make way for the upgraded version, SENDIG v3.1, which has been supported since August 2017.
In this paper I will explain what it takes for an organization with broad knowledge of SDTM to be able to handle nonclinical data and provide SEND datasets for submission. To understand SEND datasets, it is also important to focus on the origin of the standard. Although it shares its model with its clinical counterpart SDTM, it also shows differences in its implementation. Another difference between clinical and nonclinical studies that I would like to stress is the way in which data are captured. Besides the origin of SEND and how it compares to SDTM, I also want to highlight some additions in the updated implementation guide, since SENDIG v3.1 has been available since July 2016. Finally, I will talk about the validation and submission of SEND datasets. The FDA has published business and validation rules, which also include SEND-specific rules. Are we able to use Pinnacle 21 validation for both SENDIG v3.0 and SENDIG v3.1? And what is expected once the datasets are ready for submission to the Regulatory Authorities? I will answer these questions, while also explaining what to expect from other regulators, like the European Medicines Agency (EMA) and the Japanese Pharmaceuticals and Medical Devices Agency (PMDA).

Submitting Tumor.xpt Data Set from Legacy Application to FDA with Confidence

Naira Khatchatrian, CRL

How confident are you that your tumor.xpt data set is complete and accurate for a study to be submitted to the FDA?


Tumor carcinogenicity studies should include the FDA-mandated electronic dataset of tumor findings (tumor.xpt and define files) to allow for a complete review and statistical analysis of the tumor data.

In order to provide a tumor.xpt data set in SENDIG format for FDA submissions from our legacy application, some manual intervention and verification were needed. This process took considerable time and resources.

We want to share our experience of how we reduced the time and effort required to generate a tumor.xpt data set from the legacy system by using an automated tool developed in-house, replacing all manual steps and incorporating business rules provided by our pathologists and the FDA.

In addition to business rules, we included notifications of possible discrepancies, allowing users to review the data and rerun the script if necessary.

The implementation of the automated tool significantly reduced the risk of errors within very large files by eliminating manual intervention. It also reduced the amount of time for generating tumor.xpt data sets from an average of almost 30 hours to just one hour. 
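
The following Python sketch illustrates, in highly simplified form, the shape of such a tool: apply business rules, surface notifications for review, and only then write the transport file. The column names, rules and pyreadstat-based XPT writer call are illustrative assumptions, not the actual in-house implementation.

    import pandas as pd
    import pyreadstat

    def check_rules(df: pd.DataFrame) -> list:
        """Return human-readable notifications instead of failing silently."""
        issues = []
        if df["ANIMALNO"].isna().any():
            issues.append("Missing animal number in one or more rows")
        if (df["DEATHDY"] < df["TUMORDY"]).any():
            issues.append("Tumor detection day is after death day")
        return issues

    legacy = pd.read_csv("legacy_tumor_export.csv")   # hypothetical extract
    notifications = check_rules(legacy)
    for note in notifications:
        print("REVIEW:", note)        # users review the data, then rerun

    if not notifications:
        # pyreadstat writes SAS transport files; verify argument names
        # against the version of the library you use.
        pyreadstat.write_xport(legacy, "tumor.xpt", table_name="TUMOR",
                               file_format_version=5)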

SEND Presentation

Gitte Frausing, Data Standards Decisions ApS

The purpose of the presentation is to provide the audience with an update on recent advances in the CDISC SEND team, and thereby what will be next in line for industry and regulatory implementation.

The presentation will include a brief overview of the CDISC standards development process and the current roadmap for the progression of CDISC SEND team activities.
As part of individual team activities, the proof-of-concept pilots between the FDA and the CDISC SEND team will be discussed and, if available, any FDA outcomes included (currently not yet released). Emphasis will be given to initiatives that may impact the ongoing implementation of SEND v3.1, such as Safety Pharmacology sub-team initiatives.
Other topics of relevance for the industry, with a focus on 2019 activities, will be covered. This includes, but is not limited to, a Fit-for-Use pilot with the FDA for SEND v3.1 and the release of CDISC SEND conformance rules.

The CDISC SEND team will meet in April 2019, and any newly starting activities with a call for participation will also be included.
Finally, the presentation will wrap up with questions from the audience on SEND team activities and progress.

Session 3, Track B: Data Governance

Andrea Rauch, Boehringer Ingelheim
Pearl II (Ground Floor)
14:00 – 15:30

How to Manage Changes to CDISC Standards

Shannon Bellaire Danielsen and Mikkel Traun, Novo Nordisk

At Novo Nordisk we have worked on optimizing and standardizing the data collection and reporting in our operational data systems. When external data standard definitions are updated (e.g. CDISC standards), a number of challenges occur in managing these updates in our systems. This presentation will share some of our methods and processes for adapting to changes in the following data standards:

• New versions of CDISC CT (e.g., addition of codes and codelists)
• New versions of SDTM implementation guides (e.g., addition of new domains)
• Updated or new therapeutic area user guides
• New regulatory guidelines

Before a new data standard is implemented in our systems, all potential issues must be identified, their impact assessed, and handled. This is key to ensuring that trials use the applicable standards when submitting data to authorities and to supporting data integration.
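
As a small illustration of the first step, the following Python sketch (with invented example data) diffs two controlled terminology versions so that each added or retired term can be impact-assessed before implementation.

    def diff_ct(old: dict, new: dict) -> None:
        """Print what a new CT release adds or retires, codelist by codelist."""
        for codelist in sorted(old.keys() | new.keys()):
            if codelist not in old:
                print("NEW CODELIST:", codelist)
            elif codelist not in new:
                print("RETIRED CODELIST:", codelist)
            added = new.get(codelist, set()) - old.get(codelist, set())
            removed = old.get(codelist, set()) - new.get(codelist, set())
            if added:
                print(codelist, "added terms:", sorted(added))
            if removed:
                print(codelist, "removed terms:", sorted(removed))

    ct_2018 = {"ROUTE": {"ORAL", "TOPICAL"}}
    ct_2019 = {"ROUTE": {"ORAL", "TOPICAL", "SUBLINGUAL"}, "DOSFRM": {"TABLET"}}
    diff_ct(ct_2018, ct_2019)
    # -> NEW CODELIST: DOSFRM; ROUTE added terms: ['SUBLINGUAL']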

Transition a Data Standards Library from a Simple Version to a Metadata Repository: Lessons Learned

Sandra Latorre, Business & Decision Life Sciences

In this project, we created a CDASH and SDTM library for a sponsor specialized in metabolic diseases. It was the first data standards library for this sponsor and we started with the simplest possible format:

  • MS Word documents for the CRF templates 
  • MS Excel files for the CDASH and SDTM metadata
  • A User Guide to describe and provide support for the use of the library

A governance process was established, including versioning and change requests.
The library had already been in use for several months when it was decided to move the library content to a tool that would allow us, among other things, to strengthen the governance process with change logs and to build study-specific CRF books as well as study-specific Define.xml based on a selection of library metadata.
The transition process turned out to be more complex than expected. Although we went through a testing period, it was difficult to fully anticipate the impact of this transition without mastering the usage of the Metadata Repository (MDR). In this presentation we will give examples of challenges we encountered, such as how to map Excel metadata to MDR metadata whose structure is very different, and the impact on library-related SAS programs and on versioning.
Despite the challenges, the transition to the MDR allows better control over library content and therefore increases quality.
In this presentation, we will share our experience of transitioning from a library of data standards initially designed as a combination of MS Word and Excel documents to a Metadata Repository that offers more functionality. It will help companies that are considering either implementing an MDR for the first time or switching to a different MDR.
Though what is valid for one MDR might be different for another, we can still use these lessons learned to present some generic points to anticipate during such a transition to make it as smooth as possible.

CDISC Glossary Implementation in the Company

Guido Claes, Janssen Pharmaceutical Companies

The CDISC Glossary is an asset made available by CDISC to health care companies to ensure common understanding of basic terms in the health care industry, mainly focusing on terms related to clinical study data shared with authorities. The Glossary contains 700+ business-critical terms with definitions, and it also contains a list of related common acronyms and abbreviations.

The CDISC Glossary might still be unknown to many collaborators in pharmaceutical companies, and even experts in data management and medical writing might not be fully aware of the power of referring to the Glossary for a company's operations.

Within Clinical Development, many functional groups in large companies have glossaries, but each often focuses on the needs within that function. This creates a risk of disconnects between groups and a lack of consistency. Lack of consistency leads to different interpretations and to difficulties when terms are applied to data and to the digital use of data.

At Janssen, an initiative on Information Governance was launched in 2012. At that time it was clear that there was no common terminology in Janssen Clinical Development and a lack of ownership of terms and definitions. The quality of information retrieval to answer critical questions internally in the company and to respond to questions coming from authorities was compromised. The initiative led to a shared electronic environment (Collibra Data Governance Center) to manage terms, and the agreed terms were made available to a broad user community in Clinical Development.

On the other hand, terms are also included in a company's controlled documents, where they differ and thus cause unintended semantic distinctions. Other functions outside of Clinical Development (Quality, Regulatory Affairs, Medical Safety, ...) also have an interest in the same terms related to clinical research and in other terms specific to their business area.

A major difficulty has been getting agreement among stakeholders at Janssen. The chosen strategy was to refer to an external authoritative source whenever possible and to review existing company glossaries.

The CDISC Glossary should therefore serve well as a critical reference, and because of the global nature of CDISC, the Glossary is well positioned to overcome regional and functional differences for these terms.

In the presentation, the opportunities and difficulties of implementing a common language will be discussed, based on ongoing experience within the Janssen Pharmaceutical Companies of Johnson & Johnson.

Session 3, Track C: Use Cases

Malathi Hari, Larix
Pearl III (Ground Floor)
14:00 – 15:30

Building the Plane As You Fly: Transitioning from Functional Area Driven Standards to a Beginning-to-End (B2E) Organizational Approach

Lauren Shinaberry, AbbVie

Like many biopharma companies, AbbVie has been working with clinical data standards for many years, starting with standard case report forms, then adding tabulation datasets, analysis datasets and, most recently, tables, figures and listings. Historically, the development of these standards was done by independent teams focused on one area of the clinical data flow at a time. Mirroring CDISC's own evolution from related-but-stand-alone models to today's B2E emphasis, AbbVie has made it a priority to apply a more holistic view of standards use and development within the Data Sciences and Statistics organization. This new holistic approach is intended to provide infrastructure and processes for efficient development of standards that meet cross-functional, multi-regional business and regulatory needs.

This includes:

  • Alignment across all standards development activities from collection (including RWE/eSource) through reporting
  • Buy-in from all stakeholder communities on conformance to AbbVie's implementation of CDISC standards
  • Continual improvement of standards in use based on metrics
  • Leveraging technology to reduce effort in the standards development process as well as supporting the user community within the organization

In this presentation, you will learn how the project to implement these changes at AbbVie was influenced by CDISC's own approach to standards development over the years. Discussion points will include: what governance structure was chosen and why, change management during the rollout, how technology can reduce the burden, how ongoing standards development and study activities continued during the transition, considerations for selecting metrics to assess the success of the standards development activities, and other lessons learned.

Pacemaker Guy: De-Mystifying a Business Use Case for SDTM and Medical Device Domains

Carey Smoak, S-cubed

Medical Device standards can be applied to even the most complicated Medical Device clinical research studies. There are many papers written on how to map certain kinds of data, like exposure data or lab data, but not much on how to incorporate Medical Device data with the core SDTM standards. Considerations were made for simple and complex data points when mapping to the SDTM standards. We learned that, just like in biologics, you need to plan for the unexpected even with Medical Device studies. We take you through a subject experience by showing the mappings of the data, but we also illustrate the procedure(s) and how to visually map the data. The goal is to leave the participant/reader with a curiosity to map their own Medical Device data to standards sooner than currently expected. The more the Medical Device industry uses the standards, the more we can influence the regulatory agencies and their tools for Medical Device domains.

Associated Persons Domains and Associated Possibilities

Leah van der Meer, Louella Schoemacher, OCS Life Sciences

Whether it is a subject's parent, an elderly person's caretaker or a patient's donor, characteristics of non-subjects, or associated persons, might influence study outcomes. Hence, there is a need to process data about associated persons when converting study data to SDTM. CDISC provides an implementation guide for standardising associated persons data into SDTM format. However, associated persons data are not commonly collected, and the mapping of these data can be challenging: What is the relation to the subject? What is the purpose of mapping these data? Which variables are expected? And what about specific situations such as parent data for twin subjects or a donor with multiple receiving subjects? This presentation describes the basics of mapping associated persons data and shows how associated persons domains differ from 'standard' domains. It also provides real-life examples of mapping associated persons data into an associated persons domain.

Session 4, Track A: Submissions

Simon Lundberg, AstraZeneca
Pearl I (Ground Floor)
16:00 – 18:00

Smoother Submissions - What Is Really Involved in the Preparation of an eSubmission Dataset Package?

Kelly Mewes and Artur Krupa, Roche Products Ltd

As CDISC standards are becoming the established standard for delivery of clinical study data to Regulatory Health Authorities (e.g. FDA, PMDA), is this simply a case of creating SDTM and ADaM datasets, or is there more to this than meets the eye?

This presentation focuses on what is really involved when creating and submitting a complete clinical study dataset package to the FDA, and the post-submission activities. It will cover the different components that we typically deliver, the internal stakeholder interactions that may need to occur within your organisation or with the Health Authorities before and after a submission, some common challenges faced whilst preparing eSubmission packages, and recommendations to make the production smoother.

First BLA Submission as Small Biotech Company: Fail or Flourish

Nico Van Hecke and Elke Vansnick, Ablynx

After many years of hard labour to develop our first biological, we faced one of the final but daunting endeavours: the submission to the FDA. The challenge we faced was streamlining all clinical trial data coming from several CROs with varying approaches to data standards. As a small biotech company, a rookie in the matter, who would have considered the approach of creating our own SDTM library?

Once it was decided to go with an experienced CRO, this vendor provided the SDTM-mapped version of our legacy data together with the SDTM specifications. How does one ensure the quality of the CRO deliverables? How does one find one's way through all the documentation and tools: the SDTM manual, SDTM IG, CDISC CT, FDA SDTCG, Define.xml, SDRG Completion Guidelines, Pinnacle 21, value-level metadata and where clauses? CDISC recommendations sometimes differ from the FDA requirements. Eventually the metadata should match the annotated CRF, SDTM and SDRG. Furthermore, data from the clinical trials within the submission package need to be converted to allow pooling and integrated analyses.

Tackling these hurdles while meeting tight timelines was an enriching exercise. Do's and don'ts, the importance of consistency, and having data standards implemented from the start make life easier. An overload of guidance documentation does not provide an answer to all questions. Nevertheless, we succeeded in providing a final submission package, which should meet the FDA requirements and subsequently may lead to a successful approval and launch on the US market.

Understanding the Technical Conformance Guide

Johannes Ulander, A3 Informatics

The importance of the technical conformance guides is increasing, and they are getting more detailed with each new release. It all began with the CDER Common Data Standards Issues Document, released in May 2011, which was created because CDER, which had accepted SDTM datasets since 2004, had observed significant variability in submissions containing standardized electronic clinical data.

This document mainly contained high-level advice, such as planning for SDTM prior to study conduct to avoid a legacy migration process, which makes it more complicated to adhere to the SDTM Implementation Guide, as well as a very detailed request for adding EPOCH, ELEMENT and ETCD to all subject-level observations. The latter request was updated in the next version of the document six months later with a disclaimer that this is not a requirement, as it is associated with implementation challenges for sponsors (and ELEMENT and ETCD were removed).

Now we all know this document by its new name, the Technical Conformance Guide, and it exists at the FDA as well as at the PMDA. With each new release more detail is added, and the latest FDA Technical Conformance Guide even adds terminology to use and new domains that do not yet exist, essentially making them standards in their own right.

This presentation will present an analysis of some of the more detailed suggestions in the FDA Technical Conformance Guides and their implementation challenges, covering:
- Introducing new terminology
- Logically skipped items in the QS domain
- Requiring the DV domain
- Adding Treatment Emergent supplemental qualifier to AE
- Managing multiple enrolments in DM

Roundtable Discussion

Speakers and Invited Guests

Session 4, Track B: Transcending Murphy’s Law

Silvia Faini, LivaNova
Pearl II (Ground Floor)
16:00 – 18:00

The Archeology of Legacy Data Conversions

Jasmine Kestemont, Innovion

Though submission of trials in CDISC SDTM format is not mandatory for studies started prior to 17 December 2016, sponsors are increasingly deciding to convert all or part of their legacy non-CDISC standard trials.

While there are many benefits, this often turns out to be a challenging task, sometimes more of an art than a science. What do you do with studies that were set up over a decade ago with no documentation other than a study report? What about data that was manipulated with no clear rationale?

This presentation aims to dig into some use cases, explaining the decision process for converting data, or potentially the decision not to touch the data at all. Examples will include structural and content issues.

The «CDISC Stupidario» (the CDISC Nonsense)

Angelo Tinazzi, Cytel, Inc

In my professional career I have been exposed to several studies requiring the use of CDISC standards, either as programming lead or as CDISC SME. In this capacity I have seen several define.xml files and reviewer's guides that demonstrate how differently individual users and companies approach the same mapping issue (SDTM) or the same analysis "modelling" (ADaM). I have also observed wide variations in the level of detail provided, for example, in a reviewer's guide or in a computational algorithm used to describe a derivation.

Efficacy and safety of your drug are what matter, but lack of traceability, or poor or insufficient documentation might trigger questions and concerns. While this might not impact the overall final outcome of your submission, approval could be delayed if the reviewer starts questioning what you have done by requesting changes, or new deliverables to clarify aspects that were not sufficiently clear in your submission.

In the eighties in Italy, the term «stupidario» became the subject of a book in which the author collected real cases of medical nonsense. Since then, the term has commonly been used to refer to "a collection of situations in which people demonstrated their «stupidity» on a topic, while considering themselves experts on the same".

«Stupidario» can be translated as «nonsense», so with this presentation I would like to go through the main CDISC «nonsense» I have seen in the CDISC packages I have reviewed; this can range from «nonsense» questions to complete misunderstandings of the CDISC IGs.

Define.xml Review: Failing to Plan is Planning to Fail

Frank Senk and Robin Mann, GCE Solutions

Imagine you are in the market to purchase a well-publicized book by a renowned author. You pick up the book at a nearby bookstore; it has a nice outer cover with a beautifully written synopsis on the back. You start going through the table of contents to get a better idea of the contents of the book. Wait a minute, what is this? You notice that the table of contents is riddled with spelling mistakes and incorrect section titles with wrong page information. How would you feel? Would you purchase the book, or would you now have doubts about its contents?

This is surely the same sort of feeling that Regulatory Authority reviewers have when they are provided with an incorrect Define.xml.

Define.xml is the table of contents of the submission package and the most useful document describing the structure and content of the data submitted. A properly created and well-defined Define.xml document can improve the efficiency of the regulatory review process, making the regulatory reviewers and the submission team happy, whereas a poorly created one will hamper a speedy review and prompt a lot of cross-questions.

Creating Define.xml is a daunting task, and developers are prone to making mistakes in the document. To avoid potential issues, a proper review of Define.xml is required before the package is handed over to Regulatory Authorities. This paper discusses an approach for foolproof planning and execution of the Define.xml review. This step-by-step approach can be very handy for detecting even the minutest of errors.

CDISC Myths and Truths: Creating Clarity

Amy Palmer and Alana St. Clair, CDISC

This presentation will focus on some common questions and misconceptions regarding the implementation of CDISC standards. Specific topics will include:

  • Why aren't prior labs, medications, and procedures considered medical history according to the SDTMIG?
  • Can I use SDTM variables in my SDTMIG domain datasets?
  • SDTM variables classified as "generally not used" in a domain
  • LOINC and the CDISC SDTM LB domain
  • Representing race and ethnicity

Other topics will be considered based on feedback from the CAC/membership survey for the implementation challenges workshop (planned for the Friday after the Interchange). If appropriate, some of these topics could be incorporated into this Interchange presentation.

Session 4, Track C: End-to-End

Éanna Kiely, ClinBuild
Pearl III (Ground Floor)
16:00 – 18:00

Our Journey: Innovative Approach to Planning and Designing Trials in Database

Djenan Ganic, Christina Nowack, intilaris LifeSciences and Bayer AG

Standardization of protocol elements, the introduction of an end-to-end process for medical standards, and the utilization of TransCelerate's Common Protocol Template (CPT) form a unified platform for long-term benefits in the design and planning of clinical projects and studies at Bayer. Our journey towards such a platform and its expected benefits is neither easy nor straightforward. Instead, it is a winding road with lots of challenges and learnings.

During this process, we defined a Structured Study Definition (SSD) model for capturing background information about the project and the protocol elements of studies planned within the project, independent of the way they may be presented in the Clinical Development Plan (CDP) or Clinical Study Protocols (CSPs). At the same time, the model contains enough detail to be able to support downstream processes. Protocol information should be available early in the preparation phase of the CSP and require no additional interpretation by the downstream process. This will be assured by the integration of the developed SSD model with a Metadata Repository (MDR). The downstream processes are supported by the provision of the study design in CDISC Trial Design format.

In this presentation, we will show our first experience in utilizing the structured approach to study planning and design, to standardize protocols and effectively share the relevant protocol data with downstream processes. Furthermore, we discuss the challenges we encounter in this process and share the lessons learned for the benefit of the wider community.

One Model to Rule Them All

Mikkel Traun, Rasmus Stenholt and Vicky Poulsen, Novo Nordisk A/S

One Model to Rule Them All… and in the Trial Bind Them.
Multiple operational data model versions and associated implementation guides, frequent updates of controlled terminologies, and the ever-evolving conformance guides and rules make it a challenging task to govern and implement a single data standard. To govern both SDTM and ADaM corporate standards and ensure end-to-end implementation seems an unachievable goal to set.

We will share Novo Nordisk's Master Model approach to data standards governance, how to overcome the hurdles of end-to-end implementation across the organisation, and how to navigate the conformance requirements for submission.

MDR Requirements for Study Build and Implications for the CDISC 360 Project

Philippe Verplancke, XClinical GmbH

Study builders need to know which code lists, variables and forms are already standardized in CDISC and which need to be created as new elements. This involves searching an MDR system to show which standard metadata are available for a specific CRF question in a specific therapeutic area. Semantic linking between variables and other metadata elements enables an MDR to show a list of relevant search results, and linking similar concepts helps data managers avoid redundancies. An MDR should include curation functions to allow senior staff to deprecate or even delete redundant metadata.

A robust version control mechanism uses unique identifiers and version numbers on all metadata elements. This ensures that newer versions of standard packages include a list of elements that were updated with incremented version numbers, while other elements of the new standard simply refer to the pre-existing metadata with the same identifiers and the same version number.

The implications for the CDISC 360 project are: 1) the API for EDC systems connecting to the CDISC Library should include a (semantic) search function; 2) every element of metadata delivered by the CDISC Library should have a unique identifier and a version number that indicates which metadata elements have been updated inside any new CDISC standard update; 3) metadata elements downloaded from the CDISC Library should be CRF-ready, i.e. they should include the correct datatype, a correct link to a code list using the ODM standard and, if applicable, a correct link to a measurement unit using the ODM standard.
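
The versioning behaviour described above can be pictured with a short Python sketch; the element structure shown is an assumption for illustration, not a prescribed MDR design.

    from dataclasses import dataclass, replace

    @dataclass(frozen=True)
    class MetadataElement:
        identifier: str    # stable unique identifier across releases
        version: int       # incremented only when the element itself changes
        label: str
        datatype: str

    def next_release(current: dict, changes: dict) -> dict:
        """Carry unchanged elements forward as-is; bump versions of changed ones."""
        release = {}
        for ident, element in current.items():
            if ident in changes:
                release[ident] = replace(element, version=element.version + 1,
                                         **changes[ident])
            else:
                release[ident] = element   # same identifier, same version
        return release

    v1 = {"VSORRES": MetadataElement("VSORRES", 1, "Result or Finding", "text")}
    v2 = next_release(v1, {"VSORRES": {"label": "Result in Original Units"}})
    print(v2["VSORRES"].version)   # -> 2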

Towards a Biomedical Concept Library: Creating and Sharing Biomedical Concepts

Kirsten Langendorf, A3 Informatics

There has been much talk over the last few years about end-to-end (E2E), beginning-to-end (B2E) and ways in which we, as an industry, can achieve these aims. One solution is the use of Biomedical Concepts (BCs) across the life-cycle to build operational artefacts based upon a set of common definitions. To achieve the desired outcome, we need the actual BCs covering the desired content, for both the safety domains and therapeutic areas. This presentation will report on work focused on investigating whether it is possible to create BCs from existing content like define.xml. We will demonstrate how a semi-automated process using tooling can create BCs. The presentation will show how those that are familiar with CDISC standards can use the process to create new BCs. This presentation will detail:

  • The approach taken
  • The process used in creating a BC
  • The tooling used and the sources of information required, including define.xml files
  • The technology employed within the tooling
  • The use of templates to ensure consistency of the content created
  • The resulting outputs
  • How this work has been extended to the CDISC TAUGs, i.e. how defining BCs can improve the definitions of TAUGs and make them more useful to the community
  • The current state of the work
  • How the results can be used even by those not using BCs but who will find the more complete definitions useful
  • How the results can and will be shared with the community

The presentation will conclude with the status of the work, next steps and potential impact.
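
As an illustration of the starting point for such a semi-automated process, the following Python sketch harvests candidate BC ingredients (name, datatype, codelist link) from the ItemDef entries of a Define-XML file. The namespace and attributes follow Define-XML/ODM conventions, but this is an illustrative sketch rather than the tooling described in the presentation.

    import xml.etree.ElementTree as ET

    ODM = "http://www.cdisc.org/ns/odm/v1.3"      # ODM/Define-XML namespace

    def harvest_itemdefs(path: str) -> list:
        """Collect name, datatype and codelist link from every ItemDef."""
        root = ET.parse(path).getroot()
        items = []
        for item in root.iter(f"{{{ODM}}}ItemDef"):
            clref = item.find(f"{{{ODM}}}CodeListRef")
            items.append({
                "oid": item.get("OID"),
                "name": item.get("Name"),
                "datatype": item.get("DataType"),
                "codelist": clref.get("CodeListOID") if clref is not None else None,
            })
        return items

    # Grouping these raw definitions by topic variable would be the
    # semi-automated step that turns them into draft Biomedical Concepts.
    for entry in harvest_itemdefs("define.xml"):
        print(entry)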

Evening Networking Event

19:00 – 22:00

9 May 2019

Session 5, Track A: Foundational

Angelo Tinazzi, Cytel
Pearl II & III (Ground Floor)
09:00 – 10:30

Upgrading Your Library from CDASH v1.1 to CDASH v2.0

Sandra Latorre, Business & Decision Life Sciences

The CDASH standard, although not required by regulatory agencies, is an important foundational standard from CDISC. It addresses the need for standardization from collection through to the production of SDTM datasets, and many sponsors have now implemented it in their CDISC libraries.

We will present the enhancements in CDASH and the approach we are taking to update a sponsor's library accordingly.

After the release of CDASH version 1.1, the standard was not updated for several years while SDTM evolved faster, and this resulted in difficulties when implementing a CDASH standard that was not aligned with the latest SDTM version. Many domains present in SDTMIG 3.2, such as HO and DD, were not represented in CDASH 1.1.

The new CDASH version contains not only content updates; its structure has also been revised. Similarly to SDTM, the new CDASH standard is published as a combination of a 'Model' that provides generic rules and an 'Implementation Guide' that provides general guidance for implementing the model, domain-specific metadata, and CRF examples.

As a result, upgrading a sponsor’s library to the latest CDASH standard should follow a strong governance process, which can be summarized in the following steps: 

  • gap analysis 
  • impact analysis based on sponsor needs (e.g. metadata repository system in use)
  • implementation decision
  • implementation 

We will present the challenges of the gap analysis, from high level changes to metadata attributes.

Finally, we will explore what to expect in the future.

Handling Subjects with Multiple Enrollments and Screenings in SDTM: Are We There Yet?

Éanna Kiely, Syneos/ClinBuild

The FDA Technical Conformance Guide (TCG) v4.2 from October 2018 provides guidance on handling subjects with multiple enrollments and multiple screenings in the Demographics domain (section 4.1.1.3, SDTM Domain Specifications, DM Domain (Demographics)). SDTMIG 3.3 provides guidance on managing multiple informed consents in its Disposition domain examples.
In this presentation we will step through worked examples for each of the different types outlined in the TCG for the DM domain and provide DS domain examples for each. We will assess whether we have enough information to proceed or whether the sponsor should communicate with the relevant regulatory review division for further guidance.

As a new member of the SDTM Multiple Subjects Instances (MSI) team I will be bringing some topics and points for consideration from that group to the audience. The MSI team is investigating how to incorporate the FDA’s TCG position into SDTM.

Analysis Result Metadata: Which Details to Include and What Is Missing

Rob Wartenhorst, GSK Vaccines

Analysis results metadata (ARM) is already mandatory for PMDA submissions and might become mandatory for FDA submissions in the near future. You could treat ARM as a burden and complete it after the analysis is finished, or have it serve the two purposes of traceability and program automation and thus populate it during TFL development. This paper discusses the latter implementation: the details that need to be available in the ARM to drive programs, which outputs to use it for, and what is missing in the ARM.

Session 5, Track B: RWE / Observational

Jozef Aerts, XML4Pharma
Pearl I (Ground Floor)
09:00 – 10:30

Novel SDTM Implementation to Maximise Benefits of Sharing Legacy Data

Kalynn Kennon, Sam Strudwick, Infectious Diseases Data Observatory (IDDO), University of Oxford

The Infectious Diseases Data Observatory (IDDO) accelerates the development of better treatments for poverty-related diseases by generating research evidence through data re-use. We work with researchers based in disease-endemic countries to aggregate, standardise and analyse existing health data to answer questions that can only be addressed through collaborative analysis. The IDDO data-sharing platform accepts clinical, laboratory, observational, and epidemiological data on poverty-related and emerging diseases, regardless of the format or origin of the data. To pool such disparate data into a single searchable repository, we must transform all data to one standard, and we identified SDTM as the model of choice for our data repository.

SDTM is an incredibly powerful tool. As the popularity of data sharing for meta-analysis and secondary analyses increases, this presents both a unique challenge and a unique opportunity for the CDISC standards. Issues affecting the standardisation of legacy data for new analyses generally stem from the fixed nature of the original data collection. With the focus on the inclusion of real-world data (RWD) in analyses, the adaptability of data standards is being examined to accommodate novel implementations. By embracing the core of the SDTM model and drawing a balance within the CDISC environment, we hope to capture as much RWD as possible, whilst still adhering to the SDTM rules and the original intent of the standard.

This flexible implementation of the SDTM model will also increase the usability of the standard for researchers in disease-endemic countries, as well as provide a bridge to a more accessible format for the data being produced and shared by these researchers. This project remains a work in progress and will continue to utilise CDISC to advance an accessible SDTM concept that has the power to flexibly transform all varieties of interrelated health data.

CDISC Standards - A Bridge between EHR and EDC Systems

Prathima Surabhi, Nicolas Griffon, Christel Daniel, Karen Fanouillere, Laurent Luttenauer, Mats Sundgren, Nadir Ammour, AstraZeneca

Electronic Health Records to Electronic Data Capture, EHR2EDC(*), is a new consortium project that provides a novel platform for clinical trial sponsors to extract data needed for a trial directly from diverse EHRs. The project aims to facilitate the automatic extraction of subject data, allowing the investigators to select the data to be injected into the sponsor's EDC (electronic data capture) system. The service platform will ensure reliable exchange and re-use of EHR data for multiple research initiatives while being fully compliant with regulatory requirements (e.g. GxP, GDPR). In order to use EHR data to populate an EDC system, there is a need for a standard that can translate data from EHR to EDC. The target standard of choice is CDISC, as it is approved by major regulatory agencies and widely accepted across the healthcare industry. Structured EHR data are mostly captured using local terminologies, sometimes inspired by HL7. The methods used to identify the most common data elements used in clinical research, confirm their availability in EHRs, and map these data elements to CDISC standards, using an HL7 FHIR-based pivot data model to enable the translation of EHR data into CDISC data points, will be discussed.


[*]: The EHR2EDC consortium is an EIT Health-funded project and a collaborative effort driven by industry partners, hospital partners, an SME and a not-for-profit organisation. https://www.eithealth.eu/ehr2edc
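
The translation step at the heart of such a platform can be pictured with a short Python sketch: mapping a FHIR Observation to an SDTM-style vital-signs record. The LOINC-to-VSTESTCD lookup here is an invented example, not the consortium's actual pivot model.

    # Invented example mapping; a real pivot model would be far richer.
    LOINC_TO_VSTESTCD = {"8480-6": "SYSBP", "8462-4": "DIABP"}

    def fhir_observation_to_vs(obs: dict, usubjid: str) -> dict:
        """Turn one FHIR Observation into an SDTM-style VS record."""
        loinc = obs["code"]["coding"][0]["code"]
        qty = obs["valueQuantity"]
        return {
            "USUBJID": usubjid,
            "VSTESTCD": LOINC_TO_VSTESTCD[loinc],
            "VSORRES": str(qty["value"]),
            "VSORRESU": qty.get("unit", ""),
            "VSDTC": obs.get("effectiveDateTime", ""),
        }

    observation = {
        "resourceType": "Observation",
        "code": {"coding": [{"system": "http://loinc.org", "code": "8480-6"}]},
        "valueQuantity": {"value": 120, "unit": "mmHg"},
        "effectiveDateTime": "2019-05-08T09:30:00Z",
    }
    print(fhir_observation_to_vs(observation, "STUDY1-0001"))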

Considerations for Using CDISC Standards in Observational Studies

Bess LeRoy, Jon Neville, CDISC

Historically, CDISC standards have primarily been used for regulatory submissions of clinical trials data in support of approval to market medical products. However, recent expansion of CDISC standards through therapeutic area user guide (TAUG) development and an increase in CDISC visibility has led to the recognition of the value of data standards in other areas of medical research as well. The existing biomedical conceptual content of CDISC standards, described mostly in TAUGs, is study type-agnostic and aligns well with analogous concepts examined from limited comparisons of data collected in observational studies. Despite this alignment, there is still confusion about the suitability of CDISC for this application. By seeking broader input on the unmet needs of the research community and examining more use cases, CDISC aims to develop a considerations document to address issues in implementing standards in these types of studies. Implications for post-marketing surveillance will also be considered.

Session 6, Track A: Rules

Sujit Khune, Novo Nordisk A/S
Pearl II & III (Ground Floor)
11:00 – 12:00

Open Rules for CDISC, Web Services, and the CDISC Library API (Formerly CDISC SHARE API)

Jozef Aerts, XML4Pharma

A new initiative, "Open Rules for CDISC Standards" (ORCS) for defining and executing validation rules is presented. Its reference XQuery implementation makes the rules human-readable and machine-executable. As they are vendor- and programming-language-neutral, they can be implemented in software (in any modern computer language) by anyone. They can however also be automatically transformed into source code for languages such as R, SAS or Java.

The idea is that in future, such "open rules" are published together with the standard itself, in a machine-readable way, e.g. as part of the new "CDISC Library".

This also allows sponsors, CROs and other service providers to extend the set of CDISC rules with their own, and to execute these together with the CDISC rules.

Many of these rules require information from the standards themselves. This information can be obtained by queries to the new CDISC Library using RESTful web services, embedded in the rule definition itself. Other information is retrieved starting from the define.xml, which is the "sponsor's truth" about the submission.

At the moment, we are also discussing the use of the "Open Rules" with the FDA and the PMDA. 

"Open Rules for CDISC Standards" allows CDISC to get control back of its validation rules, and to provide the CDISC community with rules that are completely open, have no wiggle room, are as well human-readable as well as machine-executable, can be deployed by any modern software,  and have a reference implementation that can be used for free by anyone.

Raising the Bar for Data Standardization: How to Ensure Your Submission Data Supports the Automated Review Process at FDA and PMDA

Sergiy Sirichenko, Max Kanevsky, Pinnacle 21

CDISC standards are required for submitting study data to FDA and PMDA, as they enable the use of standards-based review and analysis tools, automating and speeding up the review process. Until recently, regulatory review was a manual process with limited need for data standardization, which is why its enforcement had been minimal. The automation of regulatory review is a game-changing event with an immense impact on how industry should approach data standardization and submission preparation. In this presentation, we will show examples of specific data standardization issues and how they affect automation at FDA and PMDA. We'll also review the current state of study data validation at FDA and PMDA, comparing and contrasting requirements, processes, and enforcement.

Session 6, Track B: RWE / Observational

Stijn Rogiers, SAS
Pearl I (Ground Floor)
11:00 – 12:00

Real-World Data: From Observational Research to Clinical Trials

Sonia Araujo, IQVIA

The last 20 years have seen the creation of several common data models (CDMs) in clinical trials and observational research. Those CDMs are rarely considered beyond their stated purpose, or evaluated as to whether they are "fit for purpose" for both interventional and non-interventional research.


OMOP is a CDM that standardizes the representation of longitudinal patient data for observational research. It can interoperate with other CDMs, and is well placed for dealing with the high data volume of large EHR systems, administrative claims datasets and network observational studies.


Real-world data captured in an OMOP dataset can and ought to also be used for clinical trial purposes. Such data can allow sponsors, CROs, investigators and other stakeholders to: ascertain trial feasibility; recruit trial participants, investigators and sites; augment the type and amount of data captured in a clinical trial; or identify extra indications for marketed drugs. 


CDISC’s CDASH standard provides a way to collect data at source across studies and sponsors, providing traceability into the SDTM. Both the FDA and PMDA use SDTM as a data submission standard. Hence, a path of data interoperability between OMOP and CDASH (and thus SDTM) should provide a robust use case for having real-world data add value to clinical trials, aiding regulatory processes and patient safety. 


This presentation will:

  • introduce OMOP and current observational research use cases
  • discuss how real-world data in OMOP format can add value to clinical trials
  • illustrate how OMOP and CDISC standards can integrate to provide maximum value.

Necessity of Observational Research CDISC Standard for RWD and Public Health Research

Satoshi Ueno, National Institute of Public Health

Recently, real-world data (RWD) has attracted attention, and the CDISC standards are being considered for the utilization of research data. For registries, the "Registry Model Common Data Elements" were proposed by the Global Rare Diseases Registry Data Repository (GRDR) in the US, and the "Minimum Data Set for Rare Disease Registries" was proposed by the European Union Committee of Experts on Rare Diseases (EUCERD) in the EU. However, CDISC has no observational research standard, and no specific method has been published yet. In addition, there is no standard concept in public health research, where CDISC and its standards are not widespread.

In this presentation, we consider which CDISC standards are necessary for observational research using RWD and for public health research data.

In Japan, medical information is collected using Japanese domestic standards, so it is necessary to convert the collected data into analysis data using international standards. The concept of "data collection according to the situation in each country" recommended by CDISC and CDASH contributes to reliability at the time of data collection. Since public health research is a valuable information source for promoting medical research, including epidemiological research, it is important for researchers to interpret the collected data in the same way. To promote easy-to-use RWD, it is highly meaningful to use the CDISC standards from data collection onwards, and the spread of the CDISC standards is indispensable for using data in the "same language".

Session 7: Regulatory Presentations

Joerg Dillert, Oracle, E3C Chair
Pearl I, II & III (Ground Floor)
13:00 – 14:15

PMDA Presentation

Dr. Yuki Ando, PMDA

Dr. Ando received a master's degree in Engineering from Tokyo Science University and a PhD in Health Science from Osaka University. In 1997, she joined the Pharmaceuticals and Medical Devices Evaluation Center (PMDEC), which was established that year and subsequently transformed into the current PMDA. Currently she is responsible for biostatistics review and consultation in the new drug and device review offices at PMDA. Additionally, she leads the business part of the Advanced Review with Electronic Data Promotion Group, which is responsible for the use of patient-level electronic study data submitted with new drug applications in Japan. She is responsible for promoting CDISC implementation and the use of submitted electronic data in new drug reviews at PMDA.

EMA Presentation

Dr. Alison Cave, European Medicines Agency

Dr. Cave joined the European Medicines Agency in January 2016 as a Principal Scientific Administrator in the Pharmacovigilance and Epidemiology Department, where she leads on developing mechanisms to increase capacity in the use of real-world data in medicines regulation. She also co-chairs the HMA-EMA Joint Big Data Taskforce, which is exploring the regulatory challenges presented by Big Data. She holds a BA Honours degree and PhD from the University of London and has over 20 years of academic research experience in the cardiovascular field. Prior to joining the EMA, she was Head of Cellular, Developmental and Physiological Sciences at the Wellcome Trust and, prior to this, an Expert Scientific Assessor at the UK Medicines and Healthcare products Regulatory Agency.

Session 8: Regulatory Presentations (cont'd)

Joerg Dillert, Oracle, E3C Chair
Pearl I, II & III (Ground Floor)
14:45 – 17:30

CDER Data Standards Updates from Office of Strategic Programs, FDA

Helena Sviglin and Dr. Gideon Scott Gordon, Office of Strategic Programs, Center for Drug Evaluation and Research, FDA

During this presentation, Dr. Gideon Scott Gordon will discuss CDER developments around Real World Evidence (RWE). Helena Sviglin will provide the latest updates to the Study Data Technical Conformance Guide, published in March 2019. Q&A with both presenters will follow their presentations.

Perspectives in the Use of Study Data for Regulatory Reviews

Dr. Lilliam Rosario, Dr. Matthew Whittaker, Dr. Alan Shapiro, Helena Sviglin, and Dr. Weiya Zhang, Office of Computational Science, Center for Drug Evaluation and Research, FDA

Do you ever wonder what pharmacology/toxicology, clinical, or statistical reviewers are looking for during the regulatory review process in the Center for Drug Evaluation and Research? Reviewers present their perspectives on the use of study data and analytics for drug and biologics reviews. The topics of this presentation include (1) the use of standardized SEND study data in nonclinical regulatory review, (2) the use of standardized study data and analytical tools for clinical safety evaluation, and (3) study data traceability, transformation, mapping, consistency, and challenges for clinical reviews.

Winning Poster Presentation & Closing Remarks

Pearl I, II & III (Ground Floor)
17:30 – 17:45