CMS Clarifies AI Use in Medicare Advantage: A Payer’s Guide

Several large insurers have been named in lawsuits alleging they use artificial intelligence (AI) and similar technology to decline coverage. As AI proliferates and improves, many in the Medicare Advantage (MA) ecosystem are concerned that coverage decisions may increasingly be the domain of AI. 

For many, that leads to the question: Can MA plans use algorithms or AI to make coverage decisions? The Centers for Medicare and Medicaid Services (CMS) addressed that question in a Frequently Asked Questions document released on Feb. 6, 2024.

Background

Last spring, in its Contract Year 2024 Policy and Technical Changes to the Medicare Advantage Program, Medicare Prescription Drug Benefit Program, Medicare Cost Plan Program, and Programs of All-Inclusive Care for the Elderly rule, CMS changed the coverage criteria MA plans must use for basic benefits. Those changes prompted CMS to answer the following question in a Frequently Asked Questions document:

Do the new rules on clinical coverage criteria for basic Medicare benefits mean that MA organizations cannot use algorithms or artificial intelligence to make coverage decisions?  

New Rules on Clinical Coverage

To understand the question, let's review the key points of the new rules on clinical coverage criteria:

MA plans must use coverage criteria that are no more restrictive than Traditional Medicare's:

  • They must follow National Coverage Determinations (NCDs), Local Coverage Determinations (LCDs), and general coverage/benefit conditions in Traditional Medicare regulations.
  • They can’t deny coverage based on internal criteria unless the criteria appear in NCDs or LCDs or meet specific requirements (§422.101(b)(6)).

Specific requirements for coverage criteria development:

  • When no established criteria exist, MA plans can create internal criteria based on widely used treatment guidelines/clinical literature (§422.101(b)(6)).
  • These internal criteria must be made publicly available, including the sources and rationale behind them (§422.101(b)(6)).

Medical necessity determinations:

  • MA plans must consider the coverage criteria defined in §422.101(b) and (c) (§422.101(c)(1)(i)(A)).
  • They must consider whether the item/service is reasonable and necessary under the Act (§422.101(c)(1)(i)(B)).
  • They must consider the enrollee’s medical history, physician recommendations, and clinical notes (§422.101(c)(1)(i)(C)).
  • The medical director must be involved where appropriate (§422.101(c)(1)(i)(D)).

Utilization management:

  • MA plans can still use prior authorization, referrals, and financial incentives within networks if they comply with coverage criteria rules.
  • They can’t limit when or how benefits are furnished if Traditional Medicare allows those benefits in different settings or from different provider types.

These changes aim to ensure MA enrollees have the same access to basic benefits as enrollees in Traditional Medicare.

Can Health Plans Use AI to Make Coverage Decisions? CMS’ Guidance

In its response, CMS indicated that MA plans can use algorithms or software tools to assist in making coverage decisions, but those tools must comply with all applicable rules governing coverage determinations.

For example, when deciding based on medical necessity, the plan must base the decision on the individual patient’s circumstances rather than on a larger data set, relying on patient-specific information such as physician recommendations, clinical notes, and the patient’s medical history. In practice, health plans may use AI to help surface relevant patient information, but they can’t use a model built on other patients’ data to determine medical necessity.

Additionally, plans can only deny coverage for basic benefits based on specific, established criteria. As a result, plans can use AI to verify that a claim complies with explicit coverage criteria, but they can’t use it to alter those criteria over time.
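One way to respect this rule is to treat published coverage criteria as fixed inputs to a deterministic check rather than something a model can learn or adjust. The sketch below is purely illustrative: the criterion codes, the LCD citation, and the claim fields are all invented for the example and don’t correspond to any real CMS schema.

```python
from dataclasses import dataclass

@dataclass
class CoverageCriterion:
    # A published, fixed coverage rule (e.g., from an NCD/LCD, or
    # publicly posted internal criteria under §422.101(b)(6)).
    code: str
    description: str
    source: str  # citation for the publicly available criteria

def check_claim(claim: dict, criteria: list[CoverageCriterion]) -> list[str]:
    """Return the codes of criteria the claim fails to document.

    The criteria are fixed inputs; nothing here learns or drifts,
    so the rules applied are exactly the ones published.
    """
    documented = set(claim.get("documented_criteria", []))
    return [c.code for c in criteria if c.code not in documented]

# Hypothetical example: a mobility-equipment claim checked against
# two invented criteria.
criteria = [
    CoverageCriterion("MOB-01", "Mobility limitation documented", "LCD (example)"),
    CoverageCriterion("MOB-02", "Home assessment on file", "LCD (example)"),
]
gaps = check_claim({"documented_criteria": ["MOB-01"]}, criteria)
print(gaps)  # → ['MOB-02']
```

Because the criteria list is versioned, auditable data rather than model weights, reviewers can always trace a flagged claim back to a published rule.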

Finally, plans must ensure their AI models are unbiased. MA plans must comply with non-discrimination requirements, and plan sponsors must ensure their use of AI doesn’t worsen existing bias or introduce new bias.
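A minimal bias audit might start by comparing denial rates across demographic groups. The following sketch is hypothetical: the record format, group labels, and sample data are invented, and a real audit would use the plan’s actual decision data and its own statistical thresholds.

```python
from collections import defaultdict

def denial_rates_by_group(decisions: list[dict]) -> dict[str, float]:
    """Compute the denial rate per demographic group.

    Each decision record is assumed to carry a 'group' label and a
    boolean 'denied' flag (an invented format for this example).
    """
    totals = defaultdict(int)
    denials = defaultdict(int)
    for d in decisions:
        totals[d["group"]] += 1
        denials[d["group"]] += int(d["denied"])
    return {g: denials[g] / totals[g] for g in totals}

def max_disparity(rates: dict[str, float]) -> float:
    """Largest gap in denial rates between any two groups."""
    return max(rates.values()) - min(rates.values())

# Hypothetical audit data; any alerting threshold on the disparity
# is a plan's own policy choice, not a CMS-specified number.
records = [
    {"group": "A", "denied": True}, {"group": "A", "denied": False},
    {"group": "B", "denied": False}, {"group": "B", "denied": False},
]
rates = denial_rates_by_group(records)
print(max_disparity(rates))  # → 0.5
```

Raw rate gaps are only a first-pass signal; a production audit would also control for clinical case mix before concluding a model is biased.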

Potentially Problematic Uses of AI

UnitedHealth Group faced a class action lawsuit in November 2023 over its use of the nH Predict AI Model, which predicts how much care a patient should require. According to the lawsuit, the “nH Predict AI Model compares a patient’s diagnosis, age, living situation, and physical function to similar patients in a database of six million patients.” The lawsuit claims UnitedHealth Group allows little deviation from the model’s predictions and doesn’t account for a patient’s individual needs; it also alleges that more than 90% of these claim denials are reversed upon internal review.

Humana faced a similar lawsuit in December 2023 over its use of the same nH Predict AI Model. Measured against CMS’ Frequently Asked Questions response, the practices alleged in these lawsuits would likely not comply: plans can use AI to predict the length of care a patient may need, but they can’t use that prediction alone to terminate or disallow care.

How Health Plans Can Comply

Here are a few tips to ensure your AI applications don’t run afoul of regulations:

  • Conduct a thorough review of any AI tools currently used for coverage decisions.
  • Ensure alignment with CMS guidance, addressing any identified gaps or non-compliance issues.
  • Implement robust testing and validation processes for AI algorithms.
  • Develop clear internal policies and procedures for using AI responsibly and ethically.
  • Maintain detailed documentation of AI decision-making processes and data sources.
  • Be prepared for potential CMS audits and inquiries regarding AI use.
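The documentation tip above can be made concrete with a simple audit-trail record for each AI-assisted decision. This is an illustrative sketch only, not a CMS-mandated schema; every field name and value below is an assumption for the example.

```python
import json
from datetime import datetime, timezone

def log_ai_assisted_decision(claim_id: str, model_name: str,
                             data_sources: list[str],
                             reviewer: str, outcome: str) -> str:
    """Build a JSON audit-trail entry for an AI-assisted coverage decision.

    Capturing the tool used, the patient-specific sources consulted,
    and the human reviewer supports later audits and CMS inquiries.
    """
    record = {
        "claim_id": claim_id,
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "ai_tool": model_name,                       # which tool assisted
        "patient_specific_sources": data_sources,    # e.g., clinical notes
        "human_reviewer": reviewer,                  # final call is a person's
        "outcome": outcome,
    }
    return json.dumps(record)

# Hypothetical example entry.
entry = log_ai_assisted_decision(
    "CLM-1001", "doc-retrieval-v2",
    ["physician recommendation", "clinical notes"],
    "medical_director", "approved",
)
print("CLM-1001" in entry)  # → True
```

Writing these entries to append-only storage, keyed by claim ID, makes it straightforward to answer "what did the AI contribute, and who decided?" for any determination.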

Certifi’s health insurance billing and payment solutions help Medicare Advantage payers improve member satisfaction while reducing administrative costs.

