2024 China Interchange Program
Session 1: Opening Plenary
Ballroom A
Session 2: Second Plenary
Grand Ballroom A
Session 3A: Efficient Start & SDTM
Grand Ballroom A
A real-world evidence (RWE) study provides a broad picture of how a treatment is used and how it performs under real-world conditions. Compared with randomized clinical trials, RWE studies may exhibit baseline covariate imbalances due to their observational nature, so using treatment-balancing techniques to handle this selection bias is essential for valid results. The propensity score is a technique that estimates the effect of a treatment (exposure) by accounting for the covariates that predict receiving that treatment (exposure).
This presentation summarizes an example of how propensity score analysis was implemented in an RWE study, including the derivation of a CDISC ADaM-based analysis dataset, ADPS (ADaM for Propensity Score Analysis), and the subsequent analysis and reporting steps.
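To make the balancing idea concrete, here is a minimal, hypothetical sketch of inverse-probability-of-treatment weighting (IPTW) driven by propensity scores. The subjects, scores, and outcomes below are invented for illustration; the abstract does not disclose the actual ADPS derivation or the model used in the presentation.

```python
# Hypothetical IPTW illustration: weight each subject by the inverse
# probability of the treatment they actually received, so the weighted
# cohorts are balanced on the covariates that drove treatment assignment.

def iptw_weight(treated: bool, ps: float, p_treated: float) -> float:
    """Stabilized IPTW weight: P(T=t) / P(T=t | X)."""
    return p_treated / ps if treated else (1 - p_treated) / (1 - ps)

def weighted_mean(values, weights):
    return sum(v * w for v, w in zip(values, weights)) / sum(weights)

# (treated, propensity score, outcome) -- illustrative values only
subjects = [
    (True, 0.8, 1.0), (True, 0.6, 0.0), (True, 0.7, 1.0),
    (False, 0.4, 0.0), (False, 0.3, 1.0), (False, 0.2, 0.0),
]
p_treated = sum(t for t, _, _ in subjects) / len(subjects)  # marginal P(T=1)

weights = [iptw_weight(t, ps, p_treated) for t, ps, _ in subjects]
trt = [(y, w) for (t, _, y), w in zip(subjects, weights) if t]
ctl = [(y, w) for (t, _, y), w in zip(subjects, weights) if not t]

# Weighted difference in outcome means, a simple estimate of treatment effect
effect = weighted_mean(*zip(*trt)) - weighted_mean(*zip(*ctl))
```

In an ADaM-style ADPS dataset, the propensity score and the resulting weight would typically be carried as analysis variables on one record per subject, preserving traceability back to the covariates used in the model.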
Because SDTM datasets are the basis for statistical analysis, it is very important to complete them quickly and to a high standard of quality. Based on the SDTM standard and our past project experience, we have compiled a series of checklists and developed an SDTM Datasets Review Tool to make up for the deficiencies of Pinnacle 21 Community. The tool verifies the consistency between SDTM datasets and related files (aCRF annotations, raw data, and SDTM program code), which can greatly shorten review time, improve the quality of SDTM datasets, and minimize rework at later stages.
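As a flavor of the kind of cross-check such a review tool performs, here is a small sketch comparing the variable list in an SDTM specification against the columns actually present in a dataset. The spec and dataset contents are invented; the actual tool described in the abstract covers many more checks (aCRF annotations, raw data, program code).

```python
# Minimal spec-vs-dataset consistency check: report variables the spec
# promises but the dataset lacks, and dataset columns the spec never defined.

def check_variables(spec_vars, dataset_cols):
    """Return (missing-from-dataset, unexpected-extra) variable lists."""
    spec, cols = set(spec_vars), set(dataset_cols)
    return sorted(spec - cols), sorted(cols - spec)

# Illustrative DM spec and dataset columns, not from a real study
spec_dm = ["STUDYID", "DOMAIN", "USUBJID", "AGE", "SEX", "ARMCD", "ARM"]
data_dm = ["STUDYID", "DOMAIN", "USUBJID", "AGE", "SEX", "ARMCD", "BRTHDTC"]

missing, extra = check_variables(spec_dm, data_dm)
# missing -> ["ARM"]; extra -> ["BRTHDTC"]
```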
Session 3B: End to End Implementation
Grand Ballroom B
Session 4A: Global Regulatory Update & AI
Grand Ballroom A
For the aCRF, the current common practice is to manually add annotations to the PDF-format CRF, which is time-consuming and labor-intensive. To address these limitations, we have developed a new approach using NLP and ML, resulting in the following advancements:
• Automated Mapping: Automatically maps raw data to SDTM, improving accuracy and efficiency.
• Dynamic Annotation Generation: Automatically generates annotations in the PDF based on the mapping relationship.
• Streamlined Workflow: Integrates annotation, SDTM specification, and programming tasks to automate SDTM production.
This tool simplifies the workflow for aCRF authors, improves accuracy and efficiency, and significantly aids in the smooth progression of projects.
Session 4B: Safety Implementation
Grand Ballroom B
The FDA released a draft version of the "Standard Safety Tables and Figures - Integrated Guideline", along with the Excel file for FDA Medical Queries (FMQ) in August 2022. The goal of this guideline is to provide a set of standard tables and figures for safety analysis to be used for regulatory review. This paper presents a comprehensive strategy for increasing awareness and utilization of the FDA Medical Query and Standard Safety Tables and Figures. The strategy consists of five phases, each with specific objectives and tasks. It is based on a thorough gap analysis comparing the current standards at our company with the FDA guidelines.
This presentation will share the ADALGFMQ data structure and specification. Additionally, it will introduce a carefully designed accompanying program that emphasizes standardization, modularization, and ease of use and maintenance.
Safety signal detection is crucial in pharmacovigilance for identifying potential adverse drug reactions and improving patient safety. The FDA Medical Query (FMQ) methodology categorizes related preferred terms (PTs) into standardized groups, and more than 30% of the mockups in the FDA "Standard Safety Tables and Figures - Integrated Guideline" are FMQ-related analyses. This presentation explores four methodologies for constructing an FMQ analysis dataset, each with its own advantages and challenges, to facilitate a comprehensive understanding of safety signals. The pros and cons of each approach will be discussed, emphasizing the significance of traceability, data management, and regulatory compliance. The implementation of Method 4 will be highlighted, underscoring the importance of clear definitions, robust data integration processes, and comprehensive user training. Attendees will gain valuable insights into best practices for FMQ analysis dataset creation, enabling informed decisions in designing their own data analysis strategies.
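The core operation behind any FMQ analysis dataset, regardless of which construction method is chosen, is expanding each adverse event PT into the FMQ group(s) it belongs to. The sketch below illustrates that expansion with an invented mapping table; it is not the actual FDA FMQ list, and the variable names are assumptions rather than a published standard. Keeping unmapped PTs as explicit records (rather than dropping them) reflects the traceability concern the abstract emphasizes.

```python
# Hypothetical FMQ lookup: each PT maps to zero or more (FMQ name, scope)
# pairs, loosely following the FMQ broad/narrow grouping idea.
FMQ_MAP = {
    "Nausea": [("Nausea and vomiting", "Narrow")],
    "Vomiting": [("Nausea and vomiting", "Narrow")],
    "Dizziness": [("Dizziness", "Narrow"), ("Syncope", "Broad")],
}

def expand_fmq(ae_records):
    """Emit one row per (AE record, matching FMQ). PTs with no FMQ match are
    kept with FMQNAM=None so no event is silently lost (traceability)."""
    out = []
    for subj, pt in ae_records:
        for fmq, scope in FMQ_MAP.get(pt, [(None, None)]):
            out.append({"USUBJID": subj, "AEDECOD": pt,
                        "FMQNAM": fmq, "FMQSC": scope})
    return out

rows = expand_fmq([("001", "Dizziness"), ("002", "Rash")])
# "Dizziness" expands to two rows (Narrow + Broad); "Rash" keeps one
# row with FMQNAM=None
```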
Session 5: CDISC Standards Updates & Submission Preparation
Grand Ballroom A
Preparing SDTM and ADaM data packages for submissions can be a daunting task. With all of the different guidance documentation and checks for files and data that are needed for submission, it's easy to get overwhelmed. In this presentation, we'll discuss documentation to reference for submissions to both FDA and PMDA. A list of items to review and cross-checks to be performed, both within and between study data as well as the files included in the submission package, will be provided to ensure a seamless, reviewer-friendly submission. Attendees will be able to leverage this knowledge to enhance their organization's own submission process.
Session 6A: Preparation for Data Submission 2
Grand Ballroom A
In today's pharmaceutical clinical trials, where data compliance and efficiency are increasingly emphasized, effectively managing and optimizing the Pinnacle 21 validation report process is a challenge for every statistical programmer. This paper introduces a personalized Pinnacle 21 validation report management platform developed using R, SAS, and .NET. The platform not only provides report archiving and version comparison but also offers features such as issue-resolution recommendations, issue-resolution tracking, localized queries of issue data, and multi-project solution summaries. Through this platform, we can significantly improve the efficiency and accuracy of validation report management, ensure data compliance, reduce manual intervention, and achieve process automation. This paper will discuss the platform's design concept, technical implementation, and practical application in detail.
With the increase in complex trial designs, the need to implement and harmonize the end-to-end CDISC standards (CDASH, SDTM, ADaM) within a platform trial framework has grown. By implementing these end-to-end standards, data can be easily cleaned, exchanged, and analyzed when basket/umbrella trials are conducted simultaneously, enabling collaboration and data integration. The aim is to reduce communication costs, avoid redundant work, and pool project data easily.
The presentation will start by identifying the key elements that need to be harmonized for data collection, data tabulation, and data analysis. The elements discussed will include developing project standards and controlled terminology for CRF design, the project data cleaning plan, the project SDTM specification, maintaining the project statistical analysis plan, the project table of contents, and project display templates and output delivery layouts for different milestones, all to ensure consistent and standardized representation of data across trials.
Session 6B: Cross Function
Grand Ballroom B
This paper is a comprehensive summary of the authors' real project experience with master protocols while serving as study lead programmers. Its scope includes the standard definition of master protocols from the FDA and the public literature, a real case of master protocol design, the master protocol team operating model, and the corresponding project management. Challenges and solutions are discussed mainly from the perspectives of different programmer roles, such as study programmer (SP), sub-protocol lead programmer (pLP), analysis lead programmer (ALP), and study lead programmer (SLP). The primary purpose of this paper is to provide a reference example for master protocol operation and implementation and to share experience, so that our wider industry can benefit from this summary.
The FDA's release of the Standard Safety Tables and Figures - Integrated Guideline (ST&F) in August 2022 introduced default screening analyses for Drug-Induced Liver Injury (DILI). These analyses include the hepatocellular DILI screening plot, the cholestatic DILI screening plot, and a comparison of patients with maximal treatment-emergent liver test abnormalities.
A population pharmacokinetic (popPK) dataset is a requirement for drug submission. Its purpose is to support population PK/PD modeling and simulation through an accurate and regulatory-compliant NONMEM® dataset. Based on FDA requirements and the newly released CDISC ADaM popPK Implementation Guide v1.0, we provide a clear understanding of a NONMEM input dataset from a SAS® programming perspective.
This paper aims to serve as a reference document for NONMEM programming concepts for popPK analysis in clinical trials. It covers the data flowchart, dataset structure and variables, missing-data imputation rules, and other essential considerations and challenges.
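As background for readers unfamiliar with the format, the sketch below illustrates the typical row layout of a NONMEM input dataset: dosing records (EVID=1, DV missing) interleaved with observation records (EVID=0). The column set and values are simplified illustrations, not the ADaM popPK IG specification, and a real dataset would carry many more columns (covariates, flags, imputation indicators).

```python
# Build a toy NONMEM-style input table. Conventions shown: EVID=1 marks a
# dose record (DV missing, MDV=1); EVID=0 marks a concentration observation.
HEADER = ["ID", "TIME", "AMT", "DV", "EVID", "MDV"]

def dose_row(subj, time, amt):
    return [subj, time, amt, ".", 1, 1]      # dose: DV missing, MDV=1

def obs_row(subj, time, conc):
    return [subj, time, ".", conc, 0, 0]     # observation: AMT missing

rows = [
    dose_row(1, 0.0, 100),
    obs_row(1, 1.0, 4.2),
    obs_row(1, 4.0, 2.1),
]

# Serialize as comma-separated text, the plain format NONMEM reads
lines = [",".join(HEADER)] + [",".join(str(v) for v in r) for r in rows]
```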
Session 7A: ADaM Implementation
Grand Ballroom A
The RECIST criteria are widely used for efficacy evaluation in solid tumor clinical trials. Primary endpoints such as ORR and PFS are usually evaluated based on either investigator-reported or independent review committee assessments. We present ADRECIST, an ADaM-standard dataset derived purely by a programming algorithm applying the same RECIST criteria. This presentation will introduce the rationale for ADRECIST, its derivation and structure, and its applications and benefits. The construction of ADRECIST involves meticulous attention to detail and strict adherence to the RECIST criteria, ensuring the accuracy and reliability of the data, and its structure is designed to facilitate the analysis of treatment outcomes. The dataset can assist in reviewing investigator and independent review reports for clinical validation of response accuracy. With ADRECIST, researchers and clinicians can make informed decisions and drive advancements in the field of oncology.
Laboratory data play a crucial role in clinical trials; their results can be used to determine enrollment and to evaluate drug safety and even efficacy. However, because of the wide range of laboratory data sources and the large volume of data, converting laboratory data into deliverable LB and ADLB datasets has always been a challenge for sponsors and their CRO partners, and these are also key datasets of concern during agency evaluations. The speaker will share problems encountered during the creation of LB and ADLB datasets and summarize lessons learned, including but not limited to the following issues:
1. FDA pilot OCE/OOD detailed requirements for the ADLB dataset
(1) The handling and display of denominators in summary tables of abnormal laboratory test results, as described in the guidance;
(2) Discussion on EVLLBFL variable;
(3) Discussion on AVALU variable;
(4) APHASE vs APERIOD;
(5) OCE/OOD requirements for laboratory tests graded by CTCAE in both the increased and decreased directions, along with some industry practice cases;
(6) Discussion on whether SHIFTy variables need to be added to the ADLB dataset;
2. Discussion on the placement of CTCAE toxicity grade related variables (in SDTM or ADaM)
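Several of the points above (the SHIFTy question, CTCAE grading in both directions) revolve around one derivation: comparing a subject's baseline grade to the worst post-baseline grade. The sketch below is a minimal, hypothetical version of that logic; records and grades are invented, and real ADLB logic (analysis flags, visit windows, separate increased/decreased grading) is considerably richer.

```python
# Toy baseline-vs-worst-post-baseline derivation per subject and lab test.
# Input records: (USUBJID, PARAMCD, baseline flag "Y"/"", CTCAE grade).

def worst_shift(records):
    """Return {(USUBJID, PARAMCD): (baseline grade, worst post-baseline grade)}."""
    base, worst = {}, {}
    for subj, param, ablfl, grade in records:
        key = (subj, param)
        if ablfl == "Y":
            base[key] = grade                       # baseline record
        else:
            worst[key] = max(worst.get(key, 0), grade)  # keep the worst grade
    return {k: (base.get(k), worst.get(k)) for k in base.keys() | worst.keys()}

shifts = worst_shift([
    ("001", "ALT", "Y", 0),  # grade 0 at baseline
    ("001", "ALT", "", 2),   # worsened to grade 2 post-baseline
    ("001", "ALT", "", 1),
])
# shifts[("001", "ALT")] -> (0, 2)
```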
Session 7B: SDTM Implementation
Grand Ballroom B
The SDTM annotated CRF (aCRF) is a mandatory submission document; it visually documents how raw data are mapped from the CRF to SDTM. But how is the annotation on the CRF best formatted? What are the regulatory expectations and industry recommendations for CRF mapping? With the release of MSG version 2.0 in March 2021, we can find best-practice answers from a CDISC standpoint. Based on MSG v2.0, SDTM IG v3.4, the ICH eCTD Specification v3.2.2, and the Portable Document Format specification, this paper investigates the do's and don'ts of CRF mapping and PDF settings with several examples; a detailed summary of common points of confusion in CRF mapping is also presented at the end.
At the time of submission, there may be situations where data were intended to be collected but were not. As guided by the Study Data Tabulation Model Metadata Submission Guidelines (SDTM-MSG), it is not necessary to re-annotate the acrf.pdf to indicate that no data were collected. Studies following different versions of Define-XML have different processes. If a study follows Define-XML v2.1, an empty dataset is kept in the define.xml document using the "HasNoData" attribute, and a note is added to the cSDRG to describe the situation. If the study follows Define-XML v2.0, the dataset is instead removed. For a variable with no data collected at the time of submission, regardless of the Define-XML version, the variable is kept and a similar note is added to the corresponding section of the cSDRG.
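For readers who have not worked with Define-XML v2.1, the fragment below sketches how an empty dataset might be flagged. It is a simplified, illustrative fragment: the OIDs and names are invented, and several attributes and child elements a conformant define.xml requires are omitted for brevity.

```xml
<!-- Simplified Define-XML v2.1 fragment (illustrative OIDs/names).
     def:HasNoData="Yes" marks a dataset that was planned but collected
     no data, so it stays in the metadata without a physical file. -->
<ItemGroupDef OID="IG.PE" Name="PE" Domain="PE" Repeating="Yes"
              IsReferenceData="No" Purpose="Tabulation"
              def:Structure="One record per finding per subject"
              def:HasNoData="Yes">
  <Description>
    <TranslatedText xml:lang="en">Physical Examination</TranslatedText>
  </Description>
</ItemGroupDef>
```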
Session 8: CDISC, AI & Future
Grand Ballroom A
- Victor Wu C3C, 迪时咨询
- Zhijun (Stanley) Wei, Novartis
- Shijia Wang, Tigermed-MacroStat
- Michael Lai, Innovent
- Geng Li, 广东省中医院