AI In Health Insurance: Takeaways from HHS’ Trustworthy AI Playbook

Last year we wrote about how the US Department of Health and Human Services (HHS) released a new artificial intelligence (AI) strategy in conjunction with hiring a new AI officer. In September of 2021, HHS furthered its AI efforts by releasing a Trustworthy AI Playbook. HHS designed the playbook to help the agency design, develop, acquire, and use AI in a manner the public trusts, while ensuring those AI solutions protect values like privacy and civil rights and comply with applicable laws.

Though written specifically for HHS, the Trustworthy AI Playbook is worth reviewing by other organizations investing, or planning to invest, in AI, including health insurers. There are plenty of AI use cases in health insurance: member service chatbots and claims processing are among the areas where organizations can apply AI to streamline processes. But without a framework that addresses bias, privacy, and other key concerns, health insurers face risks that may damage their brand.

With that in mind, here’s a look at HHS’ Trustworthy AI Playbook and how to apply it within a health insurance environment.

What is the Trustworthy AI Playbook?

As mentioned, HHS created the Trustworthy AI Playbook to help the agency design, develop, acquire, and use AI while adhering to ethical principles and regulatory guidelines. Specifically, Executive Order 13960 set out guidelines for federal agencies to follow when considering AI.

HHS designed the playbook to:

  1. Describe AI building blocks, like machine learning, natural language processing, and speech recognition
  2. Create principles the agency will use to achieve ethical, effective AI deployments
  3. Define the AI lifecycle and considerations for each of those lifecycle stages
  4. Express regulatory and non-regulatory considerations

As a result, it serves as a guide to applying ethical principles at each stage of AI development. The agency hopes leadership will leverage the playbook to create specific AI development policies and to evaluate the risks of any new AI investments. It also hopes program and project managers will use the playbook to ensure the projects they oversee incorporate ethical principles at every step of the project lifecycle while identifying and mitigating potential risks. If both happen, the efficacy and success rates of AI projects should improve.

How does it define AI?

The playbook defines both AI and AI building blocks. HHS defines AI as any solution that:

  • Performs tasks under varying and unpredictable circumstances without human oversight, or learns from experience and improves performance when exposed to data sets
  • Uses computer software, physical hardware, or other technology to solve tasks that require human-like perception, thinking, planning, learning, communication, or physical action
  • Thinks or acts like a human, including through the use of cognitive architectures or neural networks
  • Relies on a set of techniques, including machine learning, to approximate a cognitive task
  • Is designed to act rationally, using intelligent software or an embodied robot to achieve goals through perception, planning, reasoning, learning, communicating, decision-making, and acting

That’s a lot. But if a solution meets any of those definitions, HHS recommends the organization apply the Trustworthy AI Playbook principles.

What does it consider the AI building blocks?

In addition to defining AI, the playbook describes common AI building blocks: AI methods, solutions, and use cases. AI methods include:

  • Machine learning — enables computers to learn without being explicitly programmed (a brief sketch follows this list)
  • Natural language processing — enables machines to understand natural language as spoken and written by humans
  • Speech recognition — systems that interpret speech and translate it into text or commands
  • Computer vision — algorithms that perform tasks such as recognition, scene categorization, scene understanding, and human motion recognition
  • Intelligent automation — automation techniques that streamline and scale decision-making
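
To make the machine learning building block concrete, here's a minimal sketch of a model that learns to flag health insurance claims for manual review from labeled examples rather than hand-coded rules. It assumes scikit-learn is installed, and the feature names and data are hypothetical illustrations, not anything prescribed by the playbook.

```python
# A minimal machine learning sketch: learn to flag claims for manual review
# from labeled examples instead of explicit rules. All features and data
# below are hypothetical.
from sklearn.linear_model import LogisticRegression

# Each row: [claim_amount_usd, num_procedure_codes, days_to_submission]
X_train = [
    [120.0, 1, 5],
    [4500.0, 7, 60],
    [80.0, 1, 3],
    [9900.0, 12, 90],
]
y_train = [0, 1, 0, 1]  # 1 = flag for manual review, 0 = auto-process

model = LogisticRegression()
model.fit(X_train, y_train)

# Score a new, unseen claim
print(model.predict([[5200.0, 8, 45]]))  # e.g. [1] -> route to a reviewer
```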

What are the six core principles of Trustworthy AI?

For HHS, there are six core principles that, when applied across all lifecycle stages of a project, can lead to ethical and effective AI. They are:

  • Fair/Impartial — To ensure fairness, HHS believes AI products should include stakeholder checks.
  • Transparent/Explainable — To ensure transparency, all aspects of the AI application should be open to inspection. Plus, individuals should understand how the organization uses their data and how it makes AI-driven decisions.
  • Responsible/Accountable — To ensure accountability, the organization should document its governance structure and assign responsibility for all parts of the solution.
  • Robust/Reliable — To ensure reliability, systems should be able to learn from people and other systems and produce accurate outputs.
  • Privacy — To ensure privacy, the organization should not use data beyond the original intent. The data owner should approve the use of that data.
  • Safe/Secure — To ensure security, the organization should protect systems from threats that may cause physical or digital harm to any entity.

The playbook also devotes several pages to each of the six core principles, describing what each looks like in action and listing key questions to ask to ensure HHS properly considers the principle.

What’s the AI lifecycle?

The playbook also describes the four stages of the AI lifecycle. They are:

  1. Initiation and Concept — In the first phase, the organization reviews the initial concept, determines the solution’s feasibility, and decides whether to move forward.
  2. Research and Design — At this stage, the decision has been made to move forward and project planning, requirements gathering, and data/algorithm selection are performed.
  3. Develop, Train and Deploy — Development is performed and completed, performance is evaluated, and the solution is deployed and verified.
  4. Operate and Maintain — The solution has been deployed and continuous performance monitoring and stakeholder feedback are solicited. ROI can be tracked and measured and the solution iterated or decommissioned as needed.

How do those AI playbook pieces — principles and lifecycle stages — work together?

This is where the rubber meets the road. Organizations waste effort if they fail to apply the defined principles at every stage of the AI development lifecycle. For HHS, that means every model undergoes a review at each lifecycle stage to confirm the principles are being followed and any risks are understood.
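
One way to operationalize that review is a simple stage gate: a project cannot advance until every principle has a recorded outcome for the current stage. The sketch below is a hypothetical illustration, not an HHS process; only the stage and principle names come from the playbook.

```python
# A hypothetical stage-gate check: a project advances only if every
# Trustworthy AI principle has a recorded "pass" for the current stage.
STAGES = [
    "Initiation and Concept",
    "Research and Design",
    "Develop, Train and Deploy",
    "Operate and Maintain",
]
PRINCIPLES = [
    "Fair/Impartial",
    "Transparent/Explainable",
    "Responsible/Accountable",
    "Robust/Reliable",
    "Privacy",
    "Safe/Secure",
]

def stage_gate_passed(reviews: dict, stage: str) -> bool:
    """Return True only if every principle was reviewed and passed for the stage."""
    stage_reviews = reviews.get(stage, {})
    return all(stage_reviews.get(p) == "pass" for p in PRINCIPLES)

# Example: a chatbot project has completed its Research and Design reviews
reviews = {"Research and Design": {p: "pass" for p in PRINCIPLES}}
print(stage_gate_passed(reviews, "Research and Design"))       # True
print(stage_gate_passed(reviews, "Develop, Train and Deploy"))  # False
```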

For example, let’s assume you’re building a use case for a chatbot. During the Research and Design phase, you’d generate a use case overview. Then, you walk through each of the principles, identifying any actions to take. In our chatbot example, you may determine what kind of data is available to the bot to ensure you meet the privacy principle. Will it include personally identifiable information (PII)? How will you secure that information? Are there any risks to individuals using the AI solution?
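
As an illustration of one privacy-principle action you might take, here is a minimal sketch of redacting obvious identifiers from a chat message before it is logged or passed downstream. The patterns and member ID format are hypothetical, and real PII detection would need a far more complete approach.

```python
# A hedged sketch of PII redaction for chatbot logs. The regex patterns and
# member ID format are illustrative only, not a complete PII strategy.
import re

PII_PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "member_id": re.compile(r"\bM\d{9}\b"),  # hypothetical member ID format
    "phone": re.compile(r"\b\d{3}[-.]\d{3}[-.]\d{4}\b"),
}

def redact_pii(message: str) -> str:
    """Replace likely PII in a chat message with placeholder tags."""
    for label, pattern in PII_PATTERNS.items():
        message = pattern.sub(f"[{label.upper()} REDACTED]", message)
    return message

print(redact_pii("My member ID is M123456789 and my SSN is 123-45-6789."))
# -> "My member ID is [MEMBER_ID REDACTED] and my SSN is [SSN REDACTED]."
```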

At the Develop, Train and Deploy stage, for the Robust/Reliable principle you may develop metrics that help you determine whether the bot reliably responds to members (one such metric is sketched below). Or you may prepare training data, or validate that the dataset is large enough and representative of the activities the bot will perform. During the Operate and Maintain phase, to align with the Transparent/Explainable principle, you may establish a change management process, which may include a change control group, evaluating the impact of any changes, documenting any changes, etc. So at each stage of the process, you’re reflecting on your defined AI principles.
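
For instance, one possible Robust/Reliable metric is a containment rate: the share of member questions the bot resolved without escalating to a human agent. The log format and the 80% target in this sketch are hypothetical assumptions, not something the playbook specifies.

```python
# A minimal sketch of a reliability metric for the chatbot example.
# Log format and the 80% target are hypothetical.
def containment_rate(interactions: list[dict]) -> float:
    """Fraction of interactions the bot handled without human escalation."""
    if not interactions:
        return 0.0
    contained = sum(1 for i in interactions if not i["escalated_to_agent"])
    return contained / len(interactions)

logged = [
    {"member_question": "What is my deductible?", "escalated_to_agent": False},
    {"member_question": "Why was claim 123 denied?", "escalated_to_agent": True},
    {"member_question": "When is my premium due?", "escalated_to_agent": False},
]

rate = containment_rate(logged)
print(f"Containment rate: {rate:.0%}")  # 67%
if rate < 0.80:  # hypothetical reliability target
    print("Below target -- review failing intents before the next release.")
```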

What can payers learn about using AI in health insurance from the HHS Trustworthy AI Playbook?

Many of the playbook items described above fit nicely into the development of AI use cases within a health insurance environment. You don’t necessarily need to follow every principle outlined by HHS, and you can certainly add your own. But the playbook’s model of tying principles to each stage of the AI solution lifecycle helps uncover tasks and questions that identify and mitigate risk and improve the overall efficacy of the AI solution.

Certifi’s health insurance premium billing and payment solutions help healthcare payers improve member satisfaction while reducing administrative costs.
