WHO Report Weighs Benefits (And Risks) of AI in Healthcare

Posted May 2, 2024 under: Digital Innovation

The World Health Organization’s recent report explores the potential applications, benefits, and risks of AI language models in healthcare, covering diagnosis, patient care, administrative tasks, education, and research.

WHO's analysis suggests AI could transform healthcare, within safe boundaries.

Artificial intelligence (AI) has been increasingly used in healthcare to optimize workflows, evaluate data, and personalize treatments for progressive conditions like dementia and cancer. Now, advanced large language models (LLMs) like OpenAI’s ChatGPT have sparked a new era in which generative AI can engage doctors and patients in natural dialogue on open-ended health topics spanning diagnosis, treatments, case analysis, and more.

A recent World Health Organization (WHO) report outlines five major areas where AI LLMs could be applied in medicine and public health: diagnosis, patient care, administrative tasks, medical education, and research. However, the report also cautions about significant risks of bias, inequity, privacy violations, and transparency issues with AI – concerns echoed by experts and civil society groups.

Read below for the five applications of AI covered in the report, along with their associated risks.

1. Diagnosis

The report suggests that while physicians already use online resources to assist in determining a diagnosis, LLMs can catch insights humans might miss. AI’s ability to analyze medical history details across millions of data points, spanning decades of treatment plans and outcomes, may uncover patterns that improve early detection of conditions a human may have overlooked.

Additionally, AI chatbots fielding basic medication requests and other patient queries may free up physician availability for patients with complex cases that require in-person visits. The report cites one U.S. study in which an AI chatbot was able to provide quality responses to standardized medical questions posed on an online forum.

The report concludes that AI may be helpful in “answering standardized ‘curb-side consult’ questions” and providing “information and responses on the initial presentation of a patient or to summarize laboratory test results.”

For the applications above, transparency is vital – what drives AI suggestions should be clearly explained and contextualized. Neglecting transparency measures in patient diagnosis or blindly following LLM recommendations may harm patients and erode physicians’ trust in AI.

2. Patient-Centered Applications

The WHO analysis also examined AI applications designed to assist patient health management between appointments. Conversational AI chatbots and LLM-driven programs can help patients better understand their conditions, self-triage symptoms, explore treatment options, and ease self-care between appointments – empowering individuals to take charge of their health while reducing physician workload amid ongoing staffing shortages.

According to the report, “LMM-powered chatbots, with increasingly diverse forms of data, could serve as highly personalized, broadly focused virtual health assistants” to patients, addressing physical and mental health challenges whenever necessary.

The report also warns of critical risks: these conversational tools may offer incorrect or incomplete medical advice, placing patients in danger. Furthermore, consumer-grade applications that gather vast volumes of personal health data require strict governance to avoid information leaks that expose patients to privacy violations or discrimination.

3. Administrative Tasks

Documentation and reporting burdens have contributed to soaring physician burnout and early retirement, fueling staffing losses and care access challenges. According to the report, LLMs may significantly lessen these burdens by automating administrative tasks. AI has been shown to successfully automate notetaking, billing, insurance authorizations, prescriptions, and more.

One physician reported that using AI to record and submit details freed up two hours of their day. They stated, “AI has allowed me, as a physician, to be 100 percent present for my patients.”

Despite this increased efficiency, using AI to automate the above tasks carries an element of risk. Serious medical errors may result from minor mistakes, oversimplifications, or “hallucinations” (incorrect AI outputs), particularly regarding prescriptions.

4. Medical Education

AI shows significant potential to improve clinical education by adapting simulations to individual learning needs while connecting students globally. LLM-powered programs could recreate endless variations of symptoms and complications that trainees may rarely see in person, helping them practice response strategies before facing such situations in practice.

However, the report adds that AI in healthcare must be used as a supplemental aid – not as a shortcut to bypass foundational learning. Physical experience, physician oversight, and peer learning should always factor heavily in training new clinicians.

5. Research

LLMs show significant potential to expedite nearly every phase of biopharmaceutical research – if thoughtfully implemented. By rapidly sorting large datasets and analyzing intake criteria against genomic records, AI can quickly match patients to relevant drug trials, a process that can take years with traditional recruiting.

Ongoing AI assessment may also enable more flexible trial oversight as real-world conditions change. As trial conditions or participant demographics shift, automated monitoring can detect subtle warning signs or safety concerns early in a clinical trial, minimizing late-stage costs or delays.

However, AI systems could widen health disparities if their research disproportionately helps privileged groups over marginalized ones due to biased algorithms. Sustainable progress requires balancing advanced technology with equal accessibility so all communities can participate in and benefit from medical discoveries.

Conclusions

AI in healthcare carries vast potential. However, safe and equitable progress hinges on matching innovation with oversight and accountability. Further research is necessary to fully grasp this technology and ensure organizations and individuals are held responsible for its use.

You can access the WHO report here.

Resources

LinkedIn Pulse: https://www.linkedin.com/pulse/ai-healthcare-already-here-before-chatgpt-sm-hasan-ul-bari-md-mph

The World Health Organization: https://www.who.int/publications/i/item/9789240084759; https://www.who.int/news/item/18-01-2024-who-releases-ai-ethics-and-governance-guidance-for-large-multi-modal-models

Center for Democracy & Technology: https://cdt.org/ai-machine-learning/

The American Medical Association: https://www.ama-assn.org/practice-management/sustainability/physician-shortage-crisis-here-and-so-are-bipartisan-fixes

Medical Group Management Association: https://www.mgma.com/mgma-stats/burnout-driven-physician-resignations-and-early-retirements-rising-amid-staffing-challenges

Beckers Hospital Review: https://www.beckershospitalreview.com/healthcare-information-technology/who-releases-ai-ethics-guidance.htm

OpenAI: https://openai.com/blog/chatgpt
