The Waiting Room Has a New Address
Picture this: It is 10 p.m. on a Tuesday. An employee has a nagging cough that has lasted two weeks, a mild rash on their forearm, and a low-grade fever. Their doctor's office is closed. The telehealth line has a 45-minute wait. So they do what millions of Americans do every single night: they open a browser tab, type their symptoms into an AI chatbot, and start reading.
For many, that conversation does not stop there. It continues through a WebMD rabbit hole, a Reddit thread, three AI-generated summaries, and maybe, eventually, a self-diagnosis they have already half-convinced themselves is correct. By the time they finally call their provider, they are either more anxious than they need to be, or they have decided not to call at all.
This is not a fringe behavior anymore. It is the new normal, and it is accelerating fast.
A nationally representative Gallup survey conducted in late 2025 found that roughly one in four U.S. adults had used an AI tool or chatbot for health information or advice in the past 30 days. Of those users, 59% said they turned to AI before visiting a doctor, and 56% used it to research information after an appointment. A separate KFF Tracking Poll reinforced those findings, noting that younger adults and lower-income individuals are especially likely to use AI when they face cost or access barriers. Meanwhile, a study from Microsoft and Carnegie Mellon University found that as people increasingly rely on generative AI for information and decision-making, they tend to engage in less independent critical thinking, a pattern that carries particular implications when the decisions involve health.
And the technology itself is moving fast. What started as basic symptom checkers has matured into conversational tools that can interpret lab values, parse medical literature, and produce nuanced guidance on complex chronic conditions. The tools available today are meaningfully more capable than those available 18 months ago. The regulatory frameworks, employer benefit strategies, and provider communication models built around older health information ecosystems are all scrambling to keep up.
The question is no longer whether your employees are using AI for health decisions. They almost certainly are. The question is: what does that mean for you?
Why the Shift Is Happening
Access Is the First Problem
The United States has a significant and growing shortage of primary care providers. According to a 2024 report from the Association of American Medical Colleges (AAMC), the country could face a physician shortfall of up to 86,000 by 2036, including a shortage of as many as 40,400 primary care physicians. In practical terms, that means longer wait times, shorter appointments, and an overloaded system that often leaves patients feeling like they are being managed rather than heard.
For employees at small and mid-sized businesses, this problem is often more acute. Large enterprise employers may offer on-site clinics or direct primary care arrangements as part of a robust benefits package. Smaller employers typically do not have those options, which means their workers are navigating an already strained healthcare system with fewer shortcuts.
AI fills the gap, at least in perception. It is available at midnight. It does not put you on hold. It does not cost a copay. And increasingly, it can engage in a reasonably sophisticated back-and-forth that at least feels like it is addressing the specific situation at hand.
Trust Is the Second Problem
There is something deeper at work here beyond access and convenience. For a growing segment of the population, trust in traditional healthcare institutions has eroded. Not dramatically, in most cases, but subtly: patients who felt dismissed during a rushed appointment. Employees who received a bill that made no sense after a visit that resolved nothing. People who asked their doctor a straightforward question and felt like they got a scripted, liability-conscious answer that did not help them actually understand what was happening.
AI, for all its limitations, is perceived by many users as neutral, non-judgmental, and patient. It will answer follow-up questions without sighing. It will not make you feel foolish for asking. The Gallup data reflect this: while only about one-third of AI health users said they "strongly" or "somewhat" trust the accuracy of the information, they keep using it anyway, which says something important about the perceived value of access and tone even when trust in accuracy is incomplete.
Cost Is the Third Problem
Even with employer-sponsored coverage, out-of-pocket costs remain a significant barrier to care. High-deductible health plans have become common across the employer market, which means employees are often responsible for hundreds or thousands of dollars before their benefits kick in meaningfully. Faced with a $200 urgent care visit or a free conversation with an AI tool, many people choose the latter, especially for concerns they are not sure rise to the level of "worth the copay."
This cost-avoidance behavior is not irrational. It is a logical response to a system where seeking care carries real financial risk. The KFF poll specifically found that younger adults and lower-income individuals are disproportionately likely to turn to AI because they cannot afford a provider visit or are struggling to access care. The problem is that cost-driven delays can result in far higher costs, in both health and financial terms, down the road.
What This Means for Employers
The Hidden Cost of Delayed and Misguided Care
Employers who sponsor health benefits have a direct financial stake in how their employees make care decisions. When an employee delays or skips care because an AI suggested they "monitor the situation," they are not just making a personal health choice. They are setting in motion a chain of events that could result in a more expensive, more complex medical situation later.
A urinary tract infection that gets managed through AI guidance for two weeks before a provider is finally consulted can turn into a kidney infection. A concerning mole that gets described to a chatbot and then set aside can simply go unexamined. For self-insured employers in particular, these downstream costs hit the claims experience directly. For fully insured groups, patterns of delayed care can affect renewals, absenteeism, presenteeism, and long-term workforce health.
Benefits Utilization Gets Murkier
Employers invest real money in their benefits programs, and most want employees to actually use them. AI-driven health consultations sit entirely outside the benefits ecosystem. They do not generate a claim. They do not trigger a care management referral. They do not appear in any utilization report. They are invisible to the employer.
That invisibility has consequences. If a meaningful portion of your workforce is routing health questions through AI tools rather than through their PCP, telehealth benefit, or nurse hotline, your utilization data becomes an incomplete picture of what is actually happening. Plan design decisions made on that data may not reflect reality.
Mental Health Adds Another Layer
The AI-for-health trend is especially pronounced in mental health. Many employees who are struggling with anxiety, depression, or other behavioral health concerns find it easier to describe what they are experiencing to an AI than to call a therapist or navigate a mental health benefit they do not fully understand. This is understandable, and in some cases, AI tools can serve a genuine supportive function. But they are not a replacement for clinical mental health care, and the line between "helpful resource" and "substitute for treatment" can blur quickly.
Employers who have invested in EAP programs, mental health benefits, or behavioral health platforms need to be aware that those resources may be underutilized, not because employees do not need them, but because employees are going elsewhere first and often staying there.
The goal is not to compete with AI tools. It is to make sure employees know what they have access to, trust the benefits they are offered, and understand when professional care is the right next step.
What the Broker's Role Looks Like Now
For brokers advising small and mid-sized employers, the AI-for-health trend is both a challenge and an opportunity.
The challenge: employers are increasingly asking questions their brokers need to be ready to answer. What AI tools are accurate enough to be considered useful supplements to care? Should the company communicate anything to employees about AI health tools? Are there liability considerations when an employer-sponsored wellness platform incorporates AI functionality? How does AI-driven self-diagnosis affect plan utilization, and what does that mean for renewal?
The opportunity: brokers who understand this shift can help clients build a more informed, coherent approach to employee health decision-making. That might mean recommending telehealth benefits that are genuinely easy to access and can compete on convenience. It might mean helping clients improve their benefits communication so employees actually know what they have before they default to a chatbot. It might mean identifying carriers or platforms that are thoughtfully integrating AI in ways that complement clinical care rather than replacing it.
In a market where many employers feel like their benefits package is a commodity, the broker who brings perspective on how employees are actually making health decisions stands out. This is exactly the kind of advisory conversation that moves the relationship beyond renewal transactions.
What It Means for Insurance Carriers
A Disruption to a Carefully Balanced Ecosystem
Insurance pricing is built on assumptions: about how a population will use care, how often, for what conditions, and at what point in the progression of those conditions. Carriers in the fully insured market absorb the financial risk of those assumptions and price accordingly. When employee behavior shifts in ways that alter how and when care gets used, the assumptions that underpin that pricing shift with it.
AI-driven health guidance introduces a variable that is genuinely difficult to price for. It sits outside the care delivery system entirely, which means it has the potential to change utilization patterns in ways that are not immediately legible as a pricing signal. Employees who defer or avoid care do not generate the kind of activity that factors predictably into risk models. What does eventually factor in is the cost of that deferred care when it surfaces, often later and at a higher acuity than it would have been had the employee engaged with the system earlier.
Services Carriers Built Are Being Bypassed
Carriers have invested significantly in nurse hotlines, telehealth networks, care management programs, and condition management services, all designed to help members navigate care more effectively and keep utilization in check. These are not just member benefits; they are cost management tools that help justify and stabilize pricing over time.
When employees bypass those services in favor of a free AI chatbot, carriers lose the ability to influence the care pathway at exactly the moment it matters most. A nurse hotline can triage a symptom and steer a member toward the right level of care. A care management program can catch a worsening chronic condition before it becomes a hospitalization. AI tools operating outside the carrier's ecosystem cannot do either of those things, and in many cases, they actively reduce the likelihood that the member will engage with carrier-sponsored resources at all.
For brokers advising fully insured clients, this dynamic is worth raising proactively. Carriers that see sustained underutilization of care management programs alongside unexpected claims volatility will eventually price for it. Understanding the behavioral forces driving that pattern, including AI-driven care avoidance, positions the broker as a strategic partner rather than just a renewal facilitator.
In a fully insured market, the consequences of delayed care are not just a member health issue. They are a carrier risk issue, and ultimately a premium issue for employers who renew in that environment.
What the Future May Hold
AI Will Get Better. The Stakes Will Get Higher.
The AI tools that employees are using today are already significantly more sophisticated than what was available 18 months ago. The tools available 18 months from now will be more capable still. This matters because as AI health guidance improves, so does the temptation to rely on it exclusively, even as the decisions it is asked to support grow more complex.
Regulatory attention is increasing on multiple fronts. The FDA has been actively developing frameworks for AI-enabled clinical decision support, including 2025 draft guidance on AI in medical device lifecycle management. As of mid-2025, the FDA had authorized more than 1,250 AI-enabled medical devices, and regulatory conversations about consumer-facing AI health tools are intensifying. Some states are also beginning to explore liability questions around health-related AI guidance. These conversations will shape how AI health tools are built, marketed, and used in the years ahead.
Employers and Brokers Have a Window to Act
The current moment is one where employer action actually matters. The habits employees are building around AI and healthcare are still being formed. The benefit structures, communication strategies, and vendor partnerships that employers establish now will shape how their workforce navigates health decisions for years to come.
That means a few practical things worth thinking through. First, take a hard look at the accessibility of existing benefits. If your telehealth benefit requires three steps and a 45-minute wait, it is not competing effectively with the convenience of an AI tool. If your benefits communication happens only during open enrollment, employees are not going to remember what they have access to at 10 p.m. on a Tuesday.
Second, do not assume that AI health tools are inherently bad. Some are built responsibly, integrate appropriately with clinical care, and provide genuine value as decision support. The question is whether those tools are part of a coherent health strategy or whether employees are cobbling together their own approach from whatever happens to surface first on a search.
Third, think about what a well-informed employee looks like. Not one who never uses AI for health information; that ship has sailed. But one who knows when AI is appropriate and when it is not. One who understands their benefits well enough to use the right resource for the right situation. That kind of health literacy is something employers and brokers can actively support through better communication, smarter plan design, and year-round engagement.
The AI doctor is not going away. The question is whether it becomes a complement to good benefits strategy or a substitute for it.
The Bottom Line
The shift toward AI-driven health guidance is real, it is accelerating, and it is reshaping the way employees interact with their health benefits. For small and mid-sized employers, the stakes are direct: delayed care, underused benefits, and workforce health outcomes that may not surface on any report until the situation is already expensive. For carriers in the fully insured market, the stakes are equally real: invisible utilization gaps, bypassed care management tools, and delayed-care costs that hit the claims ledger without warning.
For brokers, this is an opportunity to be the advisor who sees around corners. Who understands not just what benefits clients are buying, but how employees are actually navigating the healthcare system between open enrollments, and what that means for carriers, pricing, and long-term plan sustainability.
The technology will keep evolving. The fundamentals of access, trust, cost, and communication will not. Getting those right, and helping employer clients and carrier partners do the same, is still the most durable competitive advantage in this market.