When Americans talk about the high cost of healthcare, the conversation usually lands on drug prices or insurance premiums. What's rarely mentioned is that nearly a third of the money flowing through the system goes to administrative overhead, much of it caused by billing and compliance problems.
Something as mundane as a mismatched address in a provider directory can trigger an insurance denial, and each rejected claim costs $25 to $100 in labor to fix. For a mid-sized practice, that can mean tens of thousands of dollars every month just to collect money they've already earned.
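The arithmetic behind that estimate is simple. Here is a minimal sketch using the $25 to $100 per-claim rework cost above; the claim volume and denial rate are hypothetical, chosen to represent a mid-sized practice:

```python
# Rough model of monthly rework cost from denied claims.
# Claim volume and denial rate are hypothetical; the $25-$100
# per-claim rework cost range comes from the figures above.
claims_per_month = 4_000        # hypothetical mid-sized practice
denial_rate = 0.10              # hypothetical: 1 in 10 claims denied
rework_cost_low, rework_cost_high = 25, 100

denied = claims_per_month * denial_rate
low = denied * rework_cost_low
high = denied * rework_cost_high
print(f"{denied:.0f} denied claims -> ${low:,.0f} to ${high:,.0f}/month in rework")
# -> 400 denied claims -> $10,000 to $40,000/month in rework
```

Even at the low end of the cost range, a modest denial rate translates into five figures of monthly labor spent re-collecting money already earned.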
Charles Wong believes this is solvable, but not with the “replace the humans” approach that dominates a lot of tech discussions. Wong, Enterprise Product Manager at Headway and associate editor for Frontiers in Emerging Technology, has built his career designing operational solutions for messy, human-centered systems. And his central belief is that healthcare's biggest administrative challenges can be solved by pairing AI with staff in smarter ways.
Early in his tenure at the company, Wong faced a brand-new challenge: rebuilding Headway's revenue cycle management function, a dry-sounding domain with big financial stakes. In one case, he traced significant, preventable annual losses to billing issues driven by data errors. Traditionally, the fix would be brute force: hire more people to comb through claims manually and submit corrections. It worked, but only up to a point. Many organizations simply accepted a certain level of loss as the cost of doing business.
Instead, Wong focused on rebuilding the compliance infrastructure itself. His team spent weeks with billing staff, identifying patterns in claim denials and mapping where mismatches with insurers’ records were most common. From there, they built a system to automate the worst of the busywork, pulling provider data from insurance directories and other sources, and flagging discrepancies. A dashboard ranked them by financial impact, so the company’s Payer Partnerships Team could focus on what would recoup the most money.
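In rough outline, a system like that compares internal provider records against a payer's directory, flags mismatched fields, and sorts the results by the dollar value of claims each mismatch is blocking. The sketch below is purely illustrative, not Headway's implementation; all field names and figures are hypothetical:

```python
# Hypothetical sketch of directory-discrepancy flagging and ranking.
# Schema and data are illustrative, not any company's actual system.
from dataclasses import dataclass

@dataclass
class Discrepancy:
    provider_id: str
    field: str
    internal: str
    payer: str
    blocked_dollars: float  # value of claims held up by this mismatch

def flag_discrepancies(internal_records, payer_directory, blocked_claims):
    """Compare each provider's internal record to the payer directory
    and attach the dollar value of claims the mismatch is blocking."""
    found = []
    for pid, record in internal_records.items():
        payer_record = payer_directory.get(pid, {})
        for field, value in record.items():
            payer_value = payer_record.get(field)
            if payer_value is not None and payer_value != value:
                found.append(Discrepancy(pid, field, value, payer_value,
                                         blocked_claims.get(pid, 0.0)))
    # Rank by financial impact so staff work the costliest issues first.
    return sorted(found, key=lambda d: d.blocked_dollars, reverse=True)

internal = {"p1": {"address": "12 Main St", "npi": "111"},
            "p2": {"address": "9 Oak Ave", "npi": "222"}}
directory = {"p1": {"address": "12 Main Street", "npi": "111"},
             "p2": {"address": "9 Oak Ave", "npi": "223"}}
blocked = {"p1": 18_500.0, "p2": 42_000.0}

for d in flag_discrepancies(internal, directory, blocked):
    print(f"{d.provider_id}: {d.field} mismatch (${d.blocked_dollars:,.0f} blocked)")
```

The ranking step is the part that changes behavior: instead of working denials in arrival order, staff start with the mismatches costing the most money.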
Humans were still at the center. The system flagged issues and suggested solutions, but payer-facing teams decided how to act and monitored the edge cases. Within a year, claim losses dropped by over 80%, and resolution times improved for both providers and insurers.
“We were able to take the time and effort spent firefighting,” Wong says, “and put it toward solving root causes.”
Wong is quick to point out that automation alone wouldn't have achieved those results. Revenue cycle management work is full of gray areas. Rules are constantly changing, and two nearly identical cases might require different handling. Machines can scan thousands of records in minutes, but they can’t negotiate with an insurer or spot when a rule has been interpreted differently in practice than on paper.
“AI can tell you where the anomaly is,” Wong says. “But you still need people to interpret the context and, especially in healthcare, have conversations with partners.”
This echoes a broader industry consensus: AI delivers the most value when guided by human expertise, especially in fields like healthcare. Machines excel at speed and scale; people manage ambiguity and preserve trust.
The human–AI pairing works beyond billing, too. Wong’s team took on a different kind of failure point: getting patients from primary care to behavioral health.
Even when a doctor identifies a need for therapy or psychiatry, finding an in-network provider with availability is often a dead end. Many patients simply give up. Wong's team started small. Every time a primary care provider referred someone for mental health care, they would step in to verify insurance, locate a qualified provider nearby, and book the appointment. The referring doctor got status updates, and the patient got help faster.
Once they proved the model worked, they built an intelligent, AI-driven platform to handle the workflow, effectively automating much of what his team had been doing manually. Now, doctors make referrals directly from their electronic health record, the system processes them automatically, and coordinators only step in for unusual cases. The early, human-driven phase made physicians comfortable and set the quality standard AI could later scale.
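The division of labor described here, automation for routine referrals and coordinators for edge cases, can be sketched as a simple confidence-threshold router. Everything in this example (the match-confidence score, the 0.8 threshold, the function names) is a hypothetical illustration, not Headway's actual workflow:

```python
# Hypothetical sketch of human-in-the-loop referral routing.
# The match-confidence score and 0.8 threshold are illustrative.
def route_referral(referral, find_provider, book, escalate, threshold=0.8):
    """Auto-book routine referrals; send unusual ones to a coordinator."""
    provider, confidence = find_provider(referral)
    if provider is not None and confidence >= threshold:
        return book(referral, provider)        # routine: handled by the system
    return escalate(referral)                  # edge case: human coordinator

# Toy stand-ins for the matching, booking, and escalation steps.
def find_provider(ref):
    # Pretend matching: common specialties match well, unusual ones poorly.
    if ref["specialty"] == "psychiatry":
        return "Dr. Lee", 0.93
    return None, 0.0

booked, escalated = [], []
route_referral({"patient": "A", "specialty": "psychiatry"},
               find_provider,
               lambda r, p: booked.append((r["patient"], p)),
               escalated.append)
route_referral({"patient": "B", "specialty": "neuropsych eval"},
               find_provider,
               lambda r, p: booked.append((r["patient"], p)),
               escalated.append)
print(booked)     # patient A auto-booked
print(escalated)  # patient B routed to a coordinator
```

The point of the threshold is the same one Wong makes about the early, manual phase: the humans define what "handled well" means, and the system only takes over the cases it can match to that standard.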
"Sometimes it's necessary to start by doing something that doesn't scale," Wong explains. "You learn what has the highest leverage, and then you build the tech to support it."
Whether it's billing or patient care, the lesson is the same: automation succeeds when it starts with a deep understanding of the human process it's meant to support.
And it's not unique to healthcare. In finance and biotech, AI tools scan regulations and flag compliance risks, but compliance officers make the judgment calls and handle the problematic cases. On social platforms, machine learning surfaces questionable content, but human moderators make the hardest decisions. In each case, machines handle the volume, humans handle the gray areas.
“If you treat AI as a magic box and take humans out of the equation, you’re just waiting for the problems that humans were solving to surface again,” Wong says. “Let machines process data and identify patterns. Let people focus on relationships and nuance.”
Regulators are starting to reinforce this balance. The latest Office of Management and Budget guidance on AI in federal programs emphasizes that AI-first should never mean AI-only. In healthcare, that's especially important because trust is currency. Patients trust providers to put their needs first, and providers trust insurers to pay fairly. For a system under constant pressure to do more with less, maintaining that trust means humans and machines working side by side.