By Akanksha Karwar, MPH

Contributor & Researcher: Aliya Jain 

With artificial intelligence (AI) seemingly permeating almost every industry, it is no surprise that this buzz is starting to impact healthcare. Already, AI is being implemented in medical diagnostics, patient monitoring, and learning healthcare systems. AI algorithms produce probability analyses by combing through large data sets and identifying patterns. Healthcare providers use these predictive models to make informed decisions about patients and policies. But we are just starting to scratch the surface of AI in healthcare. 

Getting started: AI implementation for prior authorizations 

There is potential for the implementation of AI in prior authorization, which plays an impactful role in access to and quality of care. Prior authorization (PA) is a health plan resource utilization management process that requires healthcare providers to get approval from insurance payors before service delivery. This process is complex, often leading to delays in care or abandonment of care altogether. In a recent AMA survey, 42% of healthcare providers reported frequently experiencing such delays. By leveraging AI ethically, PA processes can be streamlined and care can be delivered in a more timely manner.  

The prior authorization process is particularly arduous and time-consuming for many reasons. Its manual nature adds an administrative burden: extensive paperwork and multiple stakeholders are often required for approval. Healthcare providers must ensure their authorization requests align with insurance companies' evolving clinical guidelines and criteria. Assessing medical necessity for such requests can also be complex and subjective. When any link in this chain breaks, the result is delayed care delivery and poorer health outcomes. 

How AI can help streamline prior authorizations 

However, AI may be the knight in shining armor that streamlines claims assessments and improves care delivery. For example, claims assessors must manually parse through significant amounts of clinical information to determine medical necessity. AI can help by extracting the most crucial information from the provided clinical documentation and matching that to the payer's Medical Necessity Rules to determine whether the patient's disposition aligns with their criteria.  
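To make this concrete, here is a deliberately simplified sketch of matching extracted clinical facts against a payer's medical-necessity criteria. The service name, rule structure, and fact labels are all hypothetical illustrations; real rules engines and the extraction step itself are far more nuanced.

```python
# Hypothetical payer rule set: each service lists the clinical facts that
# must be documented for the request to meet medical-necessity criteria.
MEDICAL_NECESSITY_RULES = {
    "skilled_nursing_facility": {
        "requires": {"recent_hospitalization", "daily_skilled_care_need"},
    },
}

def meets_criteria(service: str, extracted_facts: set[str]) -> bool:
    """True only if every required fact appears in the extracted documentation."""
    rule = MEDICAL_NECESSITY_RULES.get(service)
    if rule is None:
        return False  # unknown service: defer to human review
    return rule["requires"] <= extracted_facts  # subset check

# Facts an extraction model might pull from the clinical documentation.
facts = {"recent_hospitalization", "daily_skilled_care_need", "age_over_65"}
print(meets_criteria("skilled_nursing_facility", facts))  # → True
```

The key design point is that the check is conservative: anything that does not clearly satisfy the criteria, including an unrecognized service, falls back to human review rather than an automated denial.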

AI can also read, sort, and scan decision letters. By training AI algorithms to pick up certain keywords and phrases, we can automatically assign the faxed letter to the patient and identify whether authorization has been approved or denied or if additional clinical information is required. This allows healthcare providers more time to focus their attention on delivering care rather than sorting through faxes and scrutinizing wordy decision letters.  
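As an illustration of the keyword-based triage described above (not any vendor's actual system), a minimal classifier over the text extracted from a faxed decision letter might look like this; the keyword lists are invented for the example, and production systems would use trained NLP models with far richer signals.

```python
# Hypothetical keyword rules for triaging prior-authorization decision letters.
STATUS_KEYWORDS = {
    "approved": ["approved", "authorization granted", "certified"],
    "denied": ["denied", "not medically necessary", "adverse determination"],
    "needs_info": ["additional information", "insufficient documentation"],
}

def triage_decision_letter(text: str) -> str:
    """Return a coarse status label for an OCR'd decision letter."""
    lowered = text.lower()
    for status, phrases in STATUS_KEYWORDS.items():
        if any(phrase in lowered for phrase in phrases):
            return status
    return "unclassified"  # ambiguous letters are routed to a human

letter = "We have reviewed your request. Additional information is required."
print(triage_decision_letter(letter))  # → needs_info
```

Matching the letter to the right patient would work the same way in spirit, keying on identifiers such as member ID or case number found in the extracted text.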

This can help decrease your case management team's reliance on manual processes. One 2022 McKinsey & Company analysis identified that a 50-75% reduction in this manual work is possible with AI. By reducing this extremely time-consuming and subjective work through the integration of AI, PAs can be approved faster, care can be delivered sooner, and patient outcomes can improve. 

Ensuring ethics are applied to AI in healthcare (and prior authorizations) 

There are, however, some ethical nuances that must be considered. AI algorithms are trained on historical data with inherent disparities and biases, so we must constantly account for and correct these issues, not perpetuate them.  

How can we inject transparency and fairness into this process? Collaboration between AI and the people using AI is critical. We can first achieve this by ensuring that AI-driven decision-making is explainable and interpretable to healthcare providers and patients. This helps break down the mysticism behind AI, increases trust between all stakeholders, and allows healthcare providers to identify potential biases quickly. Multiple studies in this area have identified that regular auditing, testing, and other mitigation mechanisms should be embedded into AI systems, so we are constantly checking to see if fairness and transparency are at the forefront of this process.   

Addressing concerns about AI in healthcare  

Applying AI requires care, review, and assessment, just as any technology does. One recent news story involved UnitedHealth using AI to evaluate claims for post-acute care (PAC). A class action lawsuit was filed against them after two elderly patients had their claims denied for transfer to skilled nursing facilities. According to the lawyer representing the plaintiffs, 90% of AI-driven denials that are appealed get reversed.  

Humana was recently under fire for similar reasons. However, in this case, plaintiffs questioned both the AI algorithms and the humans using them. The lawsuit claimed that Humana employees were pressured to "keep up with the model's targets" or faced consequences.   

This is a two-fold issue. First, denial statuses were assigned via AI-driven decision-making without further input from clinicians or other experts. Second, because so many denials were reversed on appeal, it is clear that fairness was not at the forefront of the decision-making process. Instead, AI was used unethically to cut costs at patients' expense. 

However, we can learn from these setbacks and feed the lessons back into continuous improvement. These cases make it clear that ethical compliance and human-centric approaches are necessary to deliver faster care to people who need it, so we must use AI as a tool, not as a replacement for human judgment.  

Positively impacting patients with AI in healthcare 

Fierce Healthcare recently covered work by the Health Care Service Corporation (HCSC) in this area, noting that the company has been able to process PAs 1,400 times faster than before with the help of AI. Notably, no denials are issued by the system; PAs are either auto-approved when critical criteria are met or pushed forward to a clinician for further determination.  
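The routing policy described above, auto-approve only when every criterion is met, otherwise escalate and never auto-deny, can be sketched as follows. The criteria fields are hypothetical stand-ins, not HCSC's actual rules.

```python
from dataclasses import dataclass

@dataclass
class PARequest:
    # Hypothetical criteria; real checks are payer- and service-specific.
    diagnosis_documented: bool
    service_on_approved_list: bool
    provider_in_network: bool

def route_request(req: PARequest) -> str:
    """Auto-approve only when every criterion is met; never auto-deny."""
    if (req.diagnosis_documented
            and req.service_on_approved_list
            and req.provider_in_network):
        return "auto-approved"
    # No AI-issued denials: everything else goes to a clinician.
    return "escalate to clinician"

print(route_request(PARequest(True, True, True)))   # → auto-approved
print(route_request(PARequest(True, False, True)))  # → escalate to clinician
```

Structuring the logic this way means the AI can only ever speed care up: the worst case for a patient is the same human review they would have received anyway.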

Deciding the medical necessity of a PA can radically change patients' health outcomes; it deserves human eyes and sensitivity. Instead of making the determination on its own, AI can offer suggestions that a human expert reviews.  


Bio: Akanksha Karwar, MPH has held a variety of roles in the healthcare industry since 2009. Akanksha started as a Research Assistant - South Asian Total Health Initiative at Robert Wood Johnson University Hospital in 2009. In 2012, they took on the role of Project Management Assistant at Dartmouth-Hitchcock. In 2013, they were a Program Evaluation and Quality Improvement Intern at Centre Hospitalier et Universitaire de Kigali. In 2014, they were an Analyst at Huron. From 2016 to 2019, they held a variety of roles at GE Healthcare, including Senior Consultant, Consulting Manager, and Senior Director. In 2019, they became the Chief Operating Officer at Aidin.

Akanksha Karwar, MPH obtained their Master's degree in Public Health from Dartmouth College in 2013. Akanksha completed their Bachelor of Science in Biological Sciences and Cultural Anthropology from Rutgers University in 2011. In 2019, they obtained a Yoga Instructor Certification - Level I from the Laughing Lotus Yoga Center, as well as a Lean Six Sigma Green Belt (ICGB) from Rutgers University.