Collaborative Intelligence: How Human Expertise and AI Synergize in Healthcare

by Barry P Chaiken, MD

Artificial intelligence (AI) holds immense potential to revolutionize healthcare by improving patient outcomes, reducing costs, and optimizing clinical processes. However, realizing this promise requires a thoughtful approach to developing AI models, particularly regarding the quality and curation of training data. As healthcare executives navigate the rapidly evolving landscape of AI, understanding the crucial role of human expertise in this process is essential when evaluating and selecting the AI applications appropriate for their organizations.

Training data lies at the heart of any successful AI implementation in healthcare. These datasets, often comprising patient records, clinical notes, and medical images, form the foundation upon which AI models learn to make predictions and recommendations. However, raw healthcare data is often complex, heterogeneous, and potentially biased. Without proper curation and annotation by subject matter experts, AI models risk perpetuating inaccuracies and biases that could negatively impact patient care.

This is where the expertise of healthcare professionals becomes invaluable. Clinicians, researchers, and other domain experts possess the knowledge and experience to evaluate training data’s relevance, quality, and representativeness. By carefully selecting, cleaning, and annotating datasets, these experts ensure that AI models learn from accurate, unbiased, and clinically meaningful information.

Leverage Human Expertise

Organizations can leverage human expertise to curate data for AI training in various ways. One approach is establishing multidisciplinary guideline development groups with representatives from diverse clinical specialties, geographic regions, and patient populations. These groups can collaborate to define the scope and purpose of the AI model, identify relevant data sources, and establish criteria for data inclusion and exclusion.

The process of annotation, which involves labeling and categorizing data to provide context and meaning for AI models, remains a critical aspect of dataset creation. Human annotators, often subject matter experts (SMEs), employ techniques such as named entity recognition to identify key clinical concepts, sentiment analysis to capture the emotional tone of text, and dependency parsing to understand the relationships between words. These tasks require an expert understanding of medical terminology, clinical workflows, administrative workflows, and the nuances of patient care – knowledge that only experienced healthcare professionals can provide.
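To make the annotation step concrete, the sketch below shows one common way named entity annotations are represented for model training: character-span labels over a clinical note, assigned by a clinician annotator. The note, spans, and label names here are invented for illustration, not drawn from any real dataset or annotation standard.

```python
# Illustrative only: a clinical sentence with expert-assigned
# character-span entity labels, in the style of NER training data.

note = "Patient reports chest pain; started metoprolol 25 mg twice daily."

# Each annotation: (start, end, label), where [start:end] slices the note.
annotations = [
    (16, 26, "SYMPTOM"),      # "chest pain"
    (36, 46, "MEDICATION"),   # "metoprolol"
    (47, 52, "DOSAGE"),       # "25 mg"
]

for start, end, label in annotations:
    print(f"{label:12s} -> {note[start:end]!r}")
```

The value the SME adds is precisely in choosing these spans and labels correctly – deciding, for example, whether "chest pain" is a symptom or a diagnosis – which is why annotation guidelines and clinical training matter.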

Organizations can recruit content experts through various channels, including professional networks and partnerships with academic institutions, or outsource the work to existing clinical content development organizations. Ensuring annotators have the necessary domain expertise and are trained in the organization’s specific annotation guidelines and quality control processes is essential.

Ongoing Feedback

Moreover, human experts’ involvement extends beyond the initial data preparation phase. As AI models are developed and refined, ongoing feedback from clinicians and other stakeholders is essential to validate the model’s outputs and ensure their alignment with clinical best practices. This iterative process of evaluation and refinement, guided by human expertise, is crucial to building safe, effective, and trustworthy AI systems.

Reinforcement learning from human feedback (RLHF) represents a promising approach to incorporating human expertise in AI development. RLHF is an innovative technique that involves training AI models through interactions with human evaluators who provide feedback on the model’s outputs. This approach allows the model to learn from the knowledge and judgment of domain experts, ensuring that the AI system aligns with clinical best practices, patient safety, and ethical considerations.

Applying RLHF

RLHF is applied in various ways to enhance the quality and relevance of healthcare AI models. For example, clinicians provide feedback on the model’s diagnostic or treatment recommendations, helping the AI system learn to make more accurate and context-appropriate decisions for clinician users. Similarly, patient representatives can offer insights into the usability and acceptability of patient-centric AI-powered tools, ensuring that they meet the needs and preferences of the patient end-users.

Reward modeling is one valuable technique within RLHF. Human evaluators rank or rate the model’s outputs based on their quality or appropriateness. The model then learns to predict these rewards and adjusts its behavior to maximize the predicted rewards. This approach enables the AI system to learn complex tasks, such as providing empathetic patient communication or adapting to individual patient needs, which are difficult to define with simple objective functions.
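A toy sketch of this idea: clinician evaluators rate draft patient messages, a small model learns to predict those ratings from simple features of each draft, and new candidate outputs are then scored by predicted reward. The features, ratings, and linear model below are all invented for illustration – real reward models are neural networks trained on far richer representations.

```python
import numpy as np

# Each row: simple features of a draft patient message
# [reading-ease score, empathy-phrase count, contains-jargon flag]
features = np.array([
    [0.9, 3, 0],   # clear, empathetic, no jargon
    [0.4, 0, 1],   # dense, terse, jargon-heavy
    [0.7, 1, 0],
    [0.5, 0, 1],
], dtype=float)

# Ratings (1-5) assigned by clinician evaluators for each draft.
ratings = np.array([5.0, 1.0, 4.0, 2.0])

# Fit a linear reward model by least squares (with a bias term).
X = np.hstack([features, np.ones((len(features), 1))])
weights, *_ = np.linalg.lstsq(X, ratings, rcond=None)

def predicted_reward(f):
    return np.append(f, 1.0) @ weights

# Score two new candidate drafts and prefer the higher-reward one.
candidate_a = np.array([0.8, 2, 0])
candidate_b = np.array([0.3, 0, 1])
best = "A" if predicted_reward(candidate_a) > predicted_reward(candidate_b) else "B"
print(best)
```

The key point survives the simplification: the reward model encodes human judgment once, so it can then score many candidate outputs without asking a clinician to review each one.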

Comparative ranking is another RLHF technique in which human evaluators rank multiple AI-generated outputs based on their relative quality or suitability. The model then learns to produce outputs consistently ranked higher by the evaluators. This method helps the AI system understand the nuances and contextual factors influencing clinical decision-making, leading to more refined and appropriate recommendations.
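The objective behind comparative ranking can be sketched with a Bradley-Terry preference model, a standard formulation in the RLHF literature: the probability that evaluators prefer output A over output B depends on the difference between the model's scalar scores, and training minimizes the negative log of that probability for each human-ranked pair. The scores below are invented for illustration.

```python
import math

def preference_probability(score_preferred, score_rejected):
    """P(preferred beats rejected) under a Bradley-Terry model."""
    return 1.0 / (1.0 + math.exp(-(score_preferred - score_rejected)))

def pairwise_loss(score_preferred, score_rejected):
    """Low when the model already ranks the preferred output higher."""
    return -math.log(preference_probability(score_preferred, score_rejected))

# Model scores for two drafts of a clinical recommendation, where
# evaluators ranked draft A above draft B.
score_a, score_b = 2.0, 0.5

print(round(preference_probability(score_a, score_b), 3))
print(round(pairwise_loss(score_a, score_b), 3))
```

Because only the score difference matters, evaluators never need to assign absolute numbers – they simply rank outputs, which is an easier and more reliable judgment for busy clinicians to make.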

Development Team Diversity

Healthcare executives must also consider the importance of diversity and inclusivity when assembling teams to work on AI projects. As with the guideline development groups described earlier, AI project teams benefit from members drawn from varied clinical specialties, geographic regions, and patient populations. This diversity of perspectives helps ensure that AI models are trained on data that reflects the broad spectrum of patient needs, experiences, and cultural differences.

Transparency and collaboration are equally critical in the development of AI for healthcare. Organizations should establish clear processes for documenting data sources, annotation techniques, and model performance metrics. Sharing this information with regulatory bodies, healthcare providers, and patients helps build trust in AI systems and facilitates safe and effective deployment in clinical settings.

Learning From CPG Processes

Furthermore, healthcare organizations can draw valuable lessons from the established processes used to develop clinical practice guidelines (CPGs). CPGs are systematically developed statements, created by SME teams, that provide recommendations for clinical care based on the best available evidence. The development of CPGs involves rigorous evidence synthesis, multidisciplinary expert input, and stakeholder consultation. Applying similar principles to the development of AI models helps ensure that they are grounded in the best available evidence and aligned with clinical best practices.

As the healthcare industry continues to embrace AI, executives must prioritize the role of human expertise in every stage of the development process. By investing in multidisciplinary teams, fostering collaboration, and ensuring transparency, organizations can harness the power of AI to transform patient care while mitigating potential risks and biases. The path forward requires a thoughtful, human-centered approach that recognizes the indispensable value of clinical knowledge and experience in shaping the future of healthcare AI.

The development of AI for healthcare is not purely technical; it is deeply human. The expertise of clinicians, researchers, and other subject matter experts is essential to curating high-quality training data, guiding the development of AI models, and ensuring their safe and effective deployment in clinical settings. By prioritizing human expertise, collaboration, and innovative approaches like reinforcement learning from human feedback, healthcare organizations can unlock the transformative potential of AI to improve patient outcomes, optimize clinical processes, and drive innovation in healthcare delivery. As healthcare executives navigate this exciting frontier, keeping human expertise at the center of AI development will be the key to success.



I look forward to your thoughts, so please submit your comments in this post and subscribe to my bi-weekly newsletter Future-Primed Healthcare on LinkedIn.

