Leveraging the Power of GenAI in Real-World Evidence Studies


DATE

May 13, 2024

AUTHOR

Dragan | Co-Founder & CTO

Introduction

Generative Artificial Intelligence (GenAI) has emerged as a transformative force across industries, including healthcare and pharmaceutical research. This technology offers the capability to create new content based on machine learning models. In this blog post, we will explore the potential of GenAI in Real-World Evidence (RWE) studies. We will look at the benefits, challenges and promising use cases of GenAI in RWE studies, shedding light on how it can revolutionize the way clinical trials are conducted.

Additionally, we will discuss best practices for effectively utilizing GenAI in RWE studies. By leveraging GenAI, researchers and pharmaceutical companies can gain valuable insights, optimize processes and accelerate drug development. Ultimately, this leads to improved patient outcomes.

1. What is GenAI and why is it popular in the medical field?

GenAI, or generative artificial intelligence, refers to a type of AI technology capable of generating new content, such as text, images, code or other forms of digital media. It is based on machine learning models, particularly those known as deep learning networks, which learn from vast amounts of existing data to create new outputs that resemble the training data in structure and theme. GenAI’s popularity has surged across many industries, including the medical field, owing to its capacity for improving diagnostics, tailoring treatment plans and optimizing research processes.

It is commonly known that the healthcare system is under immense pressure. Recent PwC research on medical cost trends projects a 7% increase in healthcare costs for 2024, citing workforce shortages and inflation as the two main drivers.

On the other hand, research from Accenture suggests that AI could augment 40% of healthcare providers’ working hours and that 98% of executives around the world agree that GenAI and foundation models will play a crucial role in their organization’s strategy in the next 3-5 years. Along with that, Forbes estimates GenAI could save the U.S. medical sector approximately $200-300 billion each year. In this regard, GenAI is emerging as a crucial technology that can help maintain efficient patient care without increasing operational costs.

The development of new pharmaceutical products is a lengthy and expensive journey, with the entire process from ideation to launch taking 12-15 years and costing more than $1 billion.

GenAI has the potential to accelerate certain aspects of this complex process, thereby streamlining drug development. In the pharmaceutical sector, GenAI is applied to analyze and learn from unstructured data, such as patient records and medical images, to generate new, similar content. It can simulate complex biological processes or drug interactions, accelerating the pace of medical research and the development of new treatments.

The Boston Consulting Group (BCG) highlights GenAI’s transformative role in biopharma, particularly in reducing drug R&D timelines. Its potential spans the entire value chain, from development to commercialization.

However, implementing GenAI in this complex, regulation-intensive industry presents significant challenges. It therefore demands a solid combination of technology and strategy, tailored to the unique needs of each life science process. In the next sections, we will address some of the most promising use cases, the challenges around them, as well as best practices for implementation.

2. Promising use cases of GenAI in RWE studies

RWE studies are increasingly leveraging the power of GenAI to revolutionize various aspects of healthcare and pharmaceutical research. In this section, we will focus on what we believe are the three most promising use cases:

Use case 1: Data summarization

One of the most significant challenges in RWE studies is dealing with vast amounts of complex data collected from diverse sources. GenAI offers innovative solutions for data summarization, enabling researchers to condense large datasets into concise and meaningful insights. Even on a smaller scale, e.g. a few hundred pages of patient information, summarizing the information for the physician just before or during the patient visit can significantly improve efficiency. Given the current workforce shortage, medical professionals often lack the time to review all patient information in preparation for the next consultation. How promising this use case is becomes evident in the partnership between Microsoft, OpenAI and Epic, which, among other things, aims to enhance clinician productivity by summarizing patient status directly in the EHR during a visit.
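To make the idea concrete, here is a minimal sketch of record summarization. It uses simple frequency-based extractive scoring rather than an LLM, so it only illustrates the principle of condensing a note to its most informative sentences; the function name `summarize` and the stopword list are our own, not part of any product mentioned above.

```python
import re
from collections import Counter

# Tiny illustrative stopword list; real pipelines use a proper NLP toolkit.
STOPWORDS = {"the", "a", "an", "and", "of", "in", "to", "was", "is", "on", "for", "with"}

def summarize(text: str, max_sentences: int = 2) -> str:
    """Keep the sentences whose (non-stopword) words occur most often in the note."""
    sentences = [s.strip() for s in re.split(r"(?<=[.!?])\s+", text.strip()) if s.strip()]
    words = [w for w in re.findall(r"[a-z']+", text.lower()) if w not in STOPWORDS]
    freq = Counter(words)

    def score(sentence: str) -> int:
        return sum(freq[w] for w in re.findall(r"[a-z']+", sentence.lower())
                   if w not in STOPWORDS)

    ranked = sorted(sentences, key=score, reverse=True)[:max_sentences]
    # Emit the selected sentences in their original order for readability.
    return " ".join(s for s in sentences if s in ranked)
```

An abstractive LLM summary would paraphrase rather than select sentences, but the goal is the same: surface the clinically relevant content and drop the noise.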

Use case 2: Automated generation of Clinical Study Reports (CSRs)

GenAI in RWE studies also plays a crucial role in automating the generation of Clinical Study Reports (CSRs), which are essential documents summarizing the results and findings of clinical trials. Traditional CSR generation is time-consuming and labor-intensive, requiring manual data extraction and report writing. GenAI streamlines this process by automatically generating comprehensive CSRs from raw study data, ensuring accuracy, consistency and compliance with regulatory standards. This not only accelerates the pace of research but also enhances the quality and reliability of study reports. In a recent report, BCG predicts that automating medical document generation in clinical development (e.g. protocols, clinical study reports and regulatory submissions) can reduce medical-writing time by as much as 30%.
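The core of automated report drafting is turning structured study data into narrative text. The sketch below shows the deterministic, template-based end of that spectrum with a hypothetical study record (the field names and the `render_disposition` helper are invented for illustration); a GenAI system would additionally draft free-text narrative around such numbers, which is then verified by a medical writer.

```python
from string import Template

# Hypothetical, minimal study record; real CSRs follow the ICH E3 structure
# and draw on far richer datasets.
study = {
    "title": "Observational Study of Treatment X",
    "n_enrolled": 412,
    "n_completed": 398,
    "primary_endpoint": "symptom score at week 12",
}

SECTION = Template(
    "Study: $title\n"
    "Disposition: $n_completed of $n_enrolled participants completed the study "
    "($pct% completion).\n"
    "Primary endpoint: $primary_endpoint."
)

def render_disposition(record: dict) -> str:
    """Fill the disposition section template from a structured study record."""
    pct = round(100 * record["n_completed"] / record["n_enrolled"], 1)
    return SECTION.substitute(record, pct=pct)
```

Keeping the numbers template-driven while letting the model draft only the surrounding prose is one way to limit hallucination risk in generated reports.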

Use case 3: GenAI-powered chatbots for enhanced and efficient communication 24/7

In RWE studies, GenAI-powered chatbots are emerging as valuable tools for enhancing communication and support for patients and study staff. Trained on study information materials, chatbots can provide accurate and up-to-date answers to frequently asked questions, offer guidance on study procedures and assist with data collection. This not only improves the patient experience but also reduces the burden on study staff, allowing them to focus on other study tasks.
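At its simplest, answering FAQs from study materials is a retrieval problem: match the incoming question to the closest known question and return its vetted answer. The sketch below uses plain word overlap as the matching signal (the `best_answer` function and the FAQ entries are our own illustration; production chatbots use embeddings and an LLM to phrase the reply).

```python
import re

def tokens(text: str) -> set[str]:
    """Lowercase word set, ignoring punctuation."""
    return set(re.findall(r"[a-z']+", text.lower()))

def best_answer(question: str, faq: dict[str, str]) -> str:
    """Return the stored answer whose question shares the most words with the query."""
    q = tokens(question)
    match = max(faq, key=lambda stored: len(q & tokens(stored)))
    if not q & tokens(match):
        # No overlap at all: escalate rather than guess.
        return "Please contact the study team."
    return faq[match]
```

The important design point survives even in this toy version: answers come only from curated study materials, and anything the bot cannot match is escalated to a human.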

Additionally, chatbots can be designed to provide personalized recommendations and reminders to patients around the clock, promoting adherence to study protocols and improving overall study outcomes. For instance, Sophia, a chatbot created by Novo Nordisk, is accessible to patients with diabetes through their online portal. Sophia addresses patients’ questions about nutrition, educates them about diabetes and supports them in adopting a healthy lifestyle.

After observing increased traffic on the company’s website between 11 p.m. and 1 a.m., the Danish pharmaceutical corporation decided to deploy a chatbot so that website visitors can now seek advice outside of regular business hours. According to Amy West, Novo Nordisk’s Head of US Digital Transformation & Innovation, the chatbot engaged in 11,000 conversations and responded to 27,000 questions within six months.

3. Common challenges for using GenAI in RWE studies

As you can imagine, employing GenAI in RWE studies and clinical research presents a variety of challenges that intersect ethical, legal and technical domains. At the forefront of these challenges are privacy and security concerns. Given the sensitive nature of medical data, ensuring robust safeguards to protect patient privacy is of paramount importance. GenAI models trained on medical data run the risk of inadvertently disclosing personally identifiable information if not properly anonymized or secured. Moreover, as these models become more sophisticated, the potential for malicious actors to exploit vulnerabilities in AI systems increases, posing significant security risks to patient data.
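To illustrate what "properly anonymized" means in practice, here is a deliberately minimal de-identification sketch that masks a few identifier patterns before text reaches a model. The pattern set is illustrative only; real de-identification (e.g. under the HIPAA Safe Harbor method) covers many more identifier classes and relies on validated tooling.

```python
import re

# Illustrative identifier patterns; production scrubbers handle names,
# addresses, emails, and many more formats.
PATTERNS = {
    "DATE": re.compile(r"\b\d{1,2}/\d{1,2}/\d{2,4}\b"),
    "PHONE": re.compile(r"\b\d{3}[-.]\d{3}[-.]\d{4}\b"),
    "MRN": re.compile(r"\bMRN[:\s]*\d+\b"),
}

def scrub(text: str) -> str:
    """Replace each matched identifier with a bracketed placeholder."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text
```

Masking identifiers before any prompt leaves the organization reduces, but does not eliminate, disclosure risk, which is why access controls and audit trails remain necessary alongside it.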

Another crucial aspect advocated by the Stanford Center for Artificial Intelligence and Medical Imaging is the transparency and accountability surrounding AI-generated interventions in patient care. Patients have the right to know if AI played a role in their diagnosis or treatment. However, the opaque nature of AI algorithms and the complexity of their decision-making processes can obscure transparency. Patients may feel uneasy about entrusting their health to algorithms they don’t fully understand, raising questions about informed consent and the right to be informed about the components of their medical care.

Liability and insurance present further challenges in the integration of GenAI into clinical research and practice. Determining accountability in cases where AI systems make errors or produce unexpected outcomes can be legally complex. Who bears responsibility when an AI-driven diagnosis is incorrect or when an AI-generated treatment leads to adverse effects? Moreover, the evolving landscape of liability insurance may struggle to keep pace with the unique risks posed by AI in healthcare, potentially leaving stakeholders underinsured or exposed to financial liability.

Another significant challenge is model hallucination, a phenomenon where GenAI solutions generate outputs that are not supported by research and medical evidence. In clinical research, relying on AI models that produce hallucinated or erroneous results can lead to incorrect conclusions, compromising the integrity of studies and potentially endangering patient safety. Mitigating model hallucination requires ongoing validation and refinement of AI algorithms using real-world clinical data and expert oversight to ensure that generated outputs align with medical knowledge and best practices.

4. Best practices for utilizing GenAI in a medical context

Utilizing GenAI in RWE studies and in the medical context requires adherence to best practices that prioritize data privacy, specialized model selection and ethical considerations. 

One effective strategy to mitigate concerns regarding data security and privacy is locally hosting open-source GenAI models such as Meta’s LLaMA (Large Language Model Meta AI). By keeping sensitive medical data within the confines of local servers or healthcare facilities, organizations can minimize the risk of unauthorized access or data breaches, since prompts and patient records never leave their own infrastructure. A complementary approach is federated learning, in which AI models are trained collaboratively across multiple institutions without sharing raw data. This preserves patient privacy while still benefiting from collective insights.

To enhance the relevance and accuracy of AI-driven medical interventions, healthcare providers can select models specifically designed for medical purposes, such as Google’s Med-PaLM, a large language model optimized for providing high-quality answers to medical questions. According to a Nature publication from July 2023, Med-PaLM became the first AI system to surpass the pass mark (over 60%) on U.S. Medical Licensing Examination (USMLE)-style questions.

By leveraging models tailored to medical applications, healthcare providers can ensure that AI systems are attuned to the nuances of medical data and capable of delivering clinically meaningful insights. 

*Source: Google’s Med-PaLM website, reflecting early exploration of Med-PaLM’s capabilities

Incorporating patient consent into the AI-driven decision-making process is crucial to upholding ethical standards and respecting patient autonomy. Obtaining explicit consent from patients before utilizing AI in their care fosters transparency and empowers individuals to make informed decisions about their treatment. 

Moreover, implementing the “four-eyes principle” adds a layer of scrutiny to AI-generated decisions: the output of one Large Language Model (LLM) is verified against that of a second, independent model, minimizing the risk of errors arising from hallucinations or biases. This approach not only enhances the reliability of AI-driven recommendations but also reinforces accountability and trust in the decision-making process.
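A minimal sketch of the four-eyes principle: run the same prompt through two independent models and flag disagreement for human review. The `four_eyes` function and the stub models are our own illustration; a real system would compare answers semantically (e.g. via embeddings or a judge model) rather than by normalized string equality.

```python
from typing import Callable, Tuple

def four_eyes(primary: Callable[[str], str],
              reviewer: Callable[[str], str],
              prompt: str) -> Tuple[str, bool]:
    """Run the same prompt through two independent models.

    Returns the primary model's answer and a flag indicating whether the
    reviewer model agreed; disagreements should be escalated to a human.
    """
    a = primary(prompt)
    b = reviewer(prompt)
    agrees = a.strip().lower() == b.strip().lower()
    return a, agrees
```

Only answers both models agree on proceed automatically; everything else goes to a clinician, which is what turns the second model into a genuine safeguard rather than a rubber stamp.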

Conclusion

In conclusion, the integration of generative AI (GenAI) into real-world evidence (RWE) studies holds tremendous promise for improving healthcare research and decision-making. GenAI’s ability to efficiently process vast amounts of data and extract meaningful insights can significantly enhance the quality and impact of RWE studies. 

Climedo is uniquely positioned to support clients in conducting best-in-class RWE studies with the support of AI, and helping them make the best of new technologies. Our team of highly skilled professionals has the expertise and experience necessary to guide you through every step of the RWE study process, from study design and data collection to analysis and interpretation. By partnering with Climedo, you can leverage the full potential of new technologies such as GenAI, ultimately improving patient outcomes and enabling better healthcare.

Learn more in a personalized demo

Dragan | Co-Founder & CTO


Climedo

Digital health entrepreneur. Passionate about clean UX and travelling to exotic countries. Creates products with love at Climedo Health.

