Responsibly Balance AI Innovation and Efficiency

by Barry P Chaiken, MD

As artificial intelligence (AI) continues evolving rapidly, healthcare leaders grapple with the potential benefits and risks of integrating these powerful tools into patient care. While AI holds immense promise for transforming healthcare delivery, improving outcomes, and reducing costs, we must approach its implementation with caution and foresight to ensure patient safety, outcomes, and privacy remain paramount.

A group of current and former OpenAI employees, deeply concerned about the potential harm caused by AI, have taken a stand. They recently published an open letter advocating for greater transparency and protections for whistleblowers in the AI industry. They claim that OpenAI prioritizes profits and growth over safety in its race to build artificial general intelligence (AGI). The group further alleges that OpenAI uses aggressive tactics, including restrictive nondisparagement agreements, to silence workers from raising concerns.

Critical Governance Needed

The recent revelations underscore the critical need for robust governance and oversight of AI development and deployment in healthcare. Daniel Kokotajlo, a former researcher in OpenAI’s governance division and one of the group’s organizers, cautions, “OpenAI is recklessly racing to be the first to build AGI, and they are putting a priority on profits and growth.” His critique highlights the need for a more balanced approach that prioritizes safety and ethical considerations.

Kokotajlo, who joined OpenAI in 2022 as a governance researcher, has become increasingly concerned about the timeline for AGI development. He initially predicted that AGI might arrive in 2050, but after seeing how quickly AI improved, he shortened his timeline to a 50 percent chance of AGI arriving by 2027. More alarmingly, he believes that the probability that advanced AI will destroy or catastrophically harm humanity is 70 percent.

Despite safety protocols in place at OpenAI, Kokotajlo observed that they rarely seemed to slow down the company’s progress. He became so worried that he suggested to Sam Altman, OpenAI’s CEO, that the company should “pivot to safety” and allocate more resources to guarding against AI’s risks rather than charging ahead to improve its models. Although Altman claimed to agree, Kokotajlo noted that nothing much changed, leading him to quit in April.

Will AI Increase Productivity?

As Baily, Brynjolfsson, and Korinek argue in their 2023 paper, AI, as a general-purpose technology, could impact a wide array of industries, prompting investments in new skills, transforming business processes, and altering the nature of work.

They contend that large language models are emerging as powerful tools that make workers more productive and increase the rate of innovation, laying the foundation for a significant acceleration in economic growth. They foresee AI generating a virtuous circle, with productivity gains at its center, potentially leading to an 18-percent increase in aggregate productivity and output over a decade or two.

However, there are also valid concerns about AI’s impact on employment. Acemoglu, Autor, and Johnson warn that AI directly threatens high-skill jobs, including those in healthcare, as it can potentially attain human parity in a wide range of cognitive tasks (e.g., radiology).

The authors make the case that there is no guarantee that the transformative capabilities of AI will be used for the betterment of work or workers, and that the bias of the tax code, the private sector generally, and the technology sector specifically leans toward automation over augmentation. They argue that redirecting AI development onto a human-complementary path requires changes in the direction of technological innovation and in corporate norms and behavior, backed by regulations and incentives set at the federal level.

Rework Processes and Workflows

If companies focus on using AI merely to enhance worker efficiency or to replace workers, they make the same error many provider organizations initially made when implementing electronic medical records (EMRs). By failing to rework processes and workflows, and instead simply digitizing manual, paper-based workflows, organizations saw decreased efficiency and no improvement in clinical and administrative outcomes. To effectively leverage AI to increase efficiencies and create new, higher-quality products and services, companies must think creatively about how they use the power of AI, recognizing that the focus needs to be on inventing the new rather than mimicking the old.

Recent developments in AI have sparked both excitement and concern among healthcare professionals. These tools have the potential to revolutionize clinical decision support, streamline administrative tasks, and enhance patient engagement. However, as with any disruptive technology, significant risks must be managed carefully.

As I reviewed in a previous article, “Protect Patients from AI-Driven Healthcare Misinformation,” misinformation and unforced errors remain primary concerns surrounding AI use in healthcare. AI models are only as good as the data they are trained on, and biases or inaccuracies in that data can lead to flawed outputs. In a clinical setting, where decisions can have life-or-death consequences, relying too heavily on AI-generated information without proper validation and oversight could seriously harm patients.

Striking the Right Balance

AI will also impact clinical roles and responsibilities as it reshapes the healthcare workforce. If AI tools enable less skilled clinicians to take on tasks traditionally performed by highly trained physicians, there is a risk that the quality of care could suffer. While AI has the potential to augment and support clinicians, it should not be seen as a replacement for human expertise and judgment. Striking the right balance between leveraging AI’s capabilities and maintaining appropriate human oversight is essential. Workflows must be designed to avoid automation bias, where clinicians habitually accept AI recommendations without reviewing them critically.

To mitigate these risks and realize the full potential of AI in healthcare, we need a proactive and collaborative approach to governance and regulation. Healthcare organizations must establish clear policies and procedures for AI implementation, including rigorous testing and validation processes, ongoing monitoring of AI-assisted clinical outcomes, and mechanisms for identifying and addressing unintended consequences.

Collaboration between healthcare providers, AI developers, policymakers, and patient advocates is critical to ensure AI tools are designed and deployed in a way that prioritizes patient safety, privacy, and equity. By engaging all stakeholders in the development process and establishing clear guidelines and standards for AI in healthcare, we can create a framework for responsible innovation that maximizes benefits while minimizing risks.

AI has revealed vast new vistas and previously unrecognized vulnerabilities, and the process has only just begun. As we embark on this journey, we must remain vigilant, proactive, and committed to AI’s ethical and responsible deployment in healthcare, always keeping the best interests of patients at the forefront of our efforts.

Sources:

Will A.I. Be a Creator or a Destroyer of Worlds?, Thomas B. Edsall, The New York Times, June 5, 2024

Machines of mind: The case for an AI-powered productivity boom, Martin Neil Baily, Erik Brynjolfsson, and Anton Korinek, Brookings, May 10, 2023

Policy Insight 123: Can We Have Pro-Worker AI? Choosing a Path of Machines in Service of Minds, Daron Acemoglu, David Autor, and Simon Johnson, Centre for Economic Policy Research, October 4, 2023

Protect Patients from AI-Driven Healthcare Misinformation, DocsNetwork.com, March 20, 2024

GPTs are GPTs: An Early Look at the Labor Market Impact Potential of Large Language Models, Tyna Eloundou, Sam Manning, Pamela Mishkin, and Daniel Rock, arXiv, 2023


I look forward to your thoughts, so please submit your comments in this post and subscribe to my bi-weekly newsletter Future-Primed Healthcare on LinkedIn.
