The Hidden Cost of HCC Coding Systems That Can’t Explain Themselves

The Explainability Crisis

A growing number of Medicare Advantage plans use AI-assisted coding technology to process medical charts and identify HCC diagnoses. The efficiency gains are real. The problem is that most of these systems can’t explain how they reached their recommendations. They scan a clinical note, flag potential codes, and deliver a list. What they don’t deliver is the documented reasoning that connects each recommendation to specific clinical evidence in the note.

That gap didn’t matter when enforcement was light. It matters enormously now. The OIG’s February 2026 Industry-wide Compliance Program Guidance warned that AI should serve as a “medical coder support tool” with humans making final determinations. CMS is scaling its own audit workforce to approximately 2,000 certified coders and using AI to flag suspicious patterns. When the agency’s AI identifies a questionable code and asks the plan to justify it, “our system recommended it” is not a defensible answer.

The Aetna DOJ settlement ($117.7 million, March 2026) and the Kaiser settlement ($556 million) both involved coding programs in which code generation outpaced the documentation needed to show those codes were valid. The technology worked fast. The evidence trail was thin.

What Explainable AI Actually Produces

An explainable system doesn’t just flag a diagnosis. It shows its work. When it identifies a potential HCC in a clinical note, it maps the recommendation to specific sentences in the documentation. It identifies which MEAT criteria (Monitoring, Evaluation, Assessment, Treatment) are satisfied by those sentences and which are absent. It produces a structured evidence package that a coder can validate and an auditor can follow.
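To make that concrete, here is a minimal sketch of what such an evidence package might look like as a data structure. All names here (HCCRecommendation, EvidenceSpan, MEATCriterion) are illustrative, not drawn from any particular vendor's product:

```python
from dataclasses import dataclass
from enum import Enum


class MEATCriterion(Enum):
    MONITORING = "Monitoring"
    EVALUATION = "Evaluation"
    ASSESSMENT = "Assessment"
    TREATMENT = "Treatment"


@dataclass
class EvidenceSpan:
    """One sentence in the clinical note that supports the recommendation."""
    note_id: str
    start_char: int                 # character offsets into the source note
    end_char: int
    text: str
    criteria: list[MEATCriterion]   # MEAT elements this sentence satisfies


@dataclass
class HCCRecommendation:
    icd10_code: str                 # e.g. "E11.22"
    hcc_category: str               # e.g. "HCC 37" under the V28 model
    evidence: list[EvidenceSpan]    # the mapped clinical evidence
    reasoning: str                  # plain-language chain from evidence to code

    def satisfied_criteria(self) -> set[MEATCriterion]:
        return {c for span in self.evidence for c in span.criteria}

    def missing_criteria(self) -> set[MEATCriterion]:
        return set(MEATCriterion) - self.satisfied_criteria()
```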

This output changes the coder’s workflow. Instead of receiving a list of potential codes and deciding whether they look right, the coder receives a documented assessment with evidence attached. The validation step takes less time because the hard work of locating and mapping evidence has already been done. And when the code gets submitted, the evidence trail exists because the system built it as its primary output, not as an afterthought.
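Continuing the sketch above, a recommendation arriving in the coder's queue might look like the following. The clinical text, character offsets, and the E11.22-to-HCC 37 mapping are illustrative examples, not output from a real system:

```python
rec = HCCRecommendation(
    icd10_code="E11.22",
    hcc_category="HCC 37",
    evidence=[
        EvidenceSpan(
            note_id="note-2026-01-14",
            start_char=412, end_char=463,
            text="A1c 8.9%, eGFR 48; diabetic CKD stage 3, worsening.",
            criteria=[MEATCriterion.EVALUATION, MEATCriterion.ASSESSMENT],
        ),
        EvidenceSpan(
            note_id="note-2026-01-14",
            start_char=611, end_char=665,
            text="Continue lisinopril; recheck renal panel in 3 months.",
            criteria=[MEATCriterion.TREATMENT, MEATCriterion.MONITORING],
        ),
    ],
    reasoning=(
        "Documented evaluation and active management of type 2 diabetes "
        "with diabetic chronic kidney disease."
    ),
)

# The coder confirms the mapped spans rather than re-reading the full chart.
assert rec.satisfied_criteria()   # at least one MEAT element is documented
print(rec.missing_criteria())     # any elements still absent from the note
```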

The difference shows up in audit outcomes. A code supported by an evidence trail that maps to specific clinical language and specific MEAT elements is defensible by design. A code supported by nothing more than an AI flag and a coder’s judgment call is a liability waiting for an audit to expose it.

The Governance Question CIOs Are Asking

Health plan CIOs are increasingly concerned about ungoverned AI making clinical coding decisions. When a system recommends a code without transparent reasoning, the plan has no way to audit the AI’s decision-making process. Compliance teams can’t verify that the system is applying MEAT criteria correctly. Quality assurance teams can’t identify systematic errors in the AI’s logic. The technology operates as an opaque recommendation engine, and the plan trusts it without the ability to verify.
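As a sketch of what auditing the AI's decision-making can look like in practice: once recommendations carry structured evidence (the illustrative HCCRecommendation above), a QA team can sweep a batch for systematic gaps in how MEAT criteria are being applied. These function names are hypothetical:

```python
from collections import Counter


def meat_gap_report(recs: list[HCCRecommendation]) -> Counter:
    """Count how often each MEAT element is missing, per HCC category."""
    gaps: Counter = Counter()
    for rec in recs:
        for criterion in rec.missing_criteria():
            gaps[(rec.hcc_category, criterion.value)] += 1
    return gaps


def unsupported(recs: list[HCCRecommendation]) -> list[HCCRecommendation]:
    """Recommendations with no mapped evidence at all: immediate review."""
    return [rec for rec in recs if not rec.evidence]
```

A report that shows, say, Treatment evidence chronically missing for one HCC category points to a systematic flaw in the model's logic rather than a one-off coder error, which is exactly the kind of finding an opaque system makes invisible.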

That trust is misplaced in the current environment. CMS-HCC V28’s coefficient restructuring changed which diagnoses generate the most value. An AI trained on pre-V28 patterns may prioritize codes that no longer carry significant weight while underweighting codes that now matter more. Without transparency into the system’s reasoning, the plan can’t detect this misalignment until audit findings or revenue anomalies surface it.
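One hedged example of how transparency enables that detection: if the plan keeps the published V28 coefficients alongside the system's output, a simple check can flag when recommendation volume concentrates in low-weight categories. V28_WEIGHTS below is a hypothetical stand-in to be populated from the CMS model files:

```python
from collections import Counter

# Hypothetical coefficient table; populate from the published CMS-HCC V28
# model files, not from these placeholder comments.
V28_WEIGHTS: dict[str, float] = {
    # "HCC 37": 0.302,   # illustrative value only
}


def low_value_share(recs: list[HCCRecommendation],
                    weight_floor: float = 0.1) -> float:
    """Fraction of recommendations whose V28 coefficient sits below the floor."""
    counts = Counter(rec.hcc_category for rec in recs)
    low = sum(n for cat, n in counts.items()
              if V28_WEIGHTS.get(cat, 0.0) < weight_floor)
    return low / max(1, sum(counts.values()))
```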

Explainability isn’t a nice-to-have feature. It’s an AI governance requirement. Plans that can audit their coding AI’s reasoning are plans that can catch problems before regulators do.

The Standard for 2026

Any HCC coding software deployed in 2026 must produce three things for every recommendation: the specific clinical evidence that supports the diagnosis, the MEAT criteria that evidence satisfies, and the reasoning chain that connects evidence to recommendation. Systems that deliver recommendations without these three elements are creating audit exposure at scale. The enforcement environment has made explainability a compliance requirement, not a product differentiator, and plans still using opaque systems are accepting risk they can no longer justify.
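That three-element standard translates naturally into a pre-submission gate. Assuming the illustrative structures above, a minimal sketch might hold back any recommendation missing an element rather than letting it reach submission:

```python
def is_submittable(rec: HCCRecommendation) -> tuple[bool, list[str]]:
    """Return (ok, problems); any problem holds the code for human review."""
    problems: list[str] = []
    if not rec.evidence:
        problems.append("no clinical evidence mapped to the note")
    elif not rec.satisfied_criteria():
        problems.append("mapped evidence satisfies no MEAT criteria")
    if not rec.reasoning.strip():
        problems.append("no reasoning chain from evidence to code")
    return (not problems, problems)
```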
