Assistant Professor Published in Esteemed Clinical and Translational Science Journal

After two and a half years of hard work, FSU School of Information Assistant Professor Zhe He’s research was selected for publication in Clinical and Translational Science. The journal is published on behalf of the American Society for Clinical Pharmacology and Therapeutics and has an impact factor of 3.989, meaning it is cited frequently and is highly regarded in the field.

Dr. He’s published paper, “Clinical Trial Generalizability Assessment in the Big Data Era: A Review,” is a major result of his work on a National Institutes of Health grant titled “Systematic Analysis of Clinical Study Generalizability Assessment Methods with Informatics.”

Read below for a synopsis of “Clinical Trial Generalizability Assessment in the Big Data Era: A Review”:

Clinical studies, especially randomized controlled trials, are essential for generating evidence for clinical practice. However, generalizability is a long‐standing concern when applying trial results to real‐world patients. Generalizability assessment is important; nevertheless, it is not consistently practiced. In this project, Dr. He, his team at FSU, and collaborators at the University of Florida performed a systematic review to understand the practice of generalizability assessment. They identified 187 relevant papers from more than 3,500 papers and systematically organized these studies in a taxonomy with three dimensions: (1) data availability, (2) result outputs, and (3) populations of interest. They further reported disease areas, underrepresented subgroups, and types of data used to profile target populations.

They observed an increasing trend of generalizability assessments, but fewer than 30% of studies reported positive generalizability results. With the wide adoption of electronic health record systems, rich real‐world patient databases are increasingly available for generalizability assessment. However, software tools and packages for generalizability assessment are still lacking and not readily available.

Further, because a priori generalizability can be assessed using only study design information (primarily eligibility criteria), it gives investigators a golden opportunity to adjust the study design before the trial starts. Nevertheless, fewer than 40% of studies in their review assessed a priori generalizability. Changes in research culture and regulatory policy are also needed to shift the practice of trial design (e.g., relaxing restrictive eligibility criteria) toward better trial generalizability.