Artificial Intelligence
April 6, 2025

Avoiding the AI Bullseye: How Digital Health Companies Can Navigate FTC Scrutiny and Shifting Federal Policy

“Wait—the FTC is watching us too?”

That’s the kind of wake-up call you don’t want to receive the hard way. You’re heads-down building cutting-edge tools—AI for triage, diagnostics, patient engagement—when suddenly, regulators start poking around.

The Federal Trade Commission (FTC) has made it crystal clear: there is no AI exemption from the law. If your product touches consumers, patients, or sensitive data—and let’s face it, most health tech does—you’re now on their radar.

To make matters even more interesting? The White House just replaced its entire AI policy playbook. And while the administration has shifted gears, the bottom line hasn't: the bar for responsible AI is still high, and it keeps getting higher.

If you're only focused on FDA regulations, you're missing the broader legal landscape. Between evolving federal policy and the FTC’s aggressive enforcement stance, now is the time to build compliance into your product strategy—not scramble for it later.

New Administration, New AI Guidance—Same High Stakes

Let’s zoom out for a moment.

For the past few years, digital health companies have been aligning with guidance rooted in the Biden administration’s 2023 AI Executive Order—especially around responsible use, agency governance, and oversight. But as of January 23, 2025, that executive order is gone.

In its place? Two new memos from the Trump administration’s Office of Management and Budget (OMB), issued April 3, 2025, setting fresh guardrails on how federal agencies use and procure AI.

Here’s the quick breakdown:

  • M-25-21 (AI use): Emphasizes innovation, governance, and public trust—echoing Biden-era principles—but rebrands risk categories (e.g., “rights-impacting” → “high-impact”).
  • M-25-22 (AI acquisition): Keeps focus on performance, accountability, and risk mitigation, while doubling down on American-made AI and accelerating AI procurement timelines.

Despite the change in administration, many foundational structures remain, including:

  • Chief AI Officers and internal governance councils
  • Risk assessments for “high-impact” AI systems
  • Requirements to terminate non-compliant uses
  • Ongoing agency inventories of AI deployment

One thing both administrations agree on? Healthcare AI, biometric tools, and medical devices are squarely in the “high-impact” category—and will face extra scrutiny.

I've written more about the Trump EO and how it reshapes federal AI strategy here. Expect ripple effects throughout the healthcare AI ecosystem.

Why AI Compliance Isn’t Optional in Healthcare

AI in healthcare isn’t just cool tech—it impacts real lives. Whether your system helps route patients, detects early signs of disease, or makes coverage decisions, the margin for error is small and the stakes are huge.

And the FTC is turning up the heat on companies deploying high-impact AI.

While many founders keep their regulatory radar tuned to the FDA (and rightly so), the FTC is quietly—but aggressively—making moves. Think false advertising, deceptive practices, discriminatory AI... the FTC has jurisdiction over all of it, and they’re not shy about flexing.

Here’s what they’ve been up to in 2025 already:

3 FTC Cases Every Digital Health Founder Should Know

1. Evolv Technologies

The "AI-powered" scanner that couldn’t spot weapons

Used in: schools, hospitals, and other sensitive settings
Issue: Claimed the AI was better than traditional scanners—turns out, it wasn’t
Outcome: FTC crackdown, banned claims, customer cancellation rights

Takeaway: Overpromising = legal liability. In healthcare, a false sense of security isn’t just risky—it’s dangerous.

2. DoNotPay

Marketed as a robot lawyer. Delivered... not much.

Claim: AI could replace human legal services
Reality: No validation. Misled users.
Outcome: Fines + ban on unsubstantiated claims

Takeaway: If your AI acts like a clinician—or even sounds like one—you better have the evidence (and disclosures) to back it up.

3. IntelliVision

Claimed its facial recognition was bias-free. It wasn’t tested.

Claim: No racial or gender bias
Reality: No testing across diverse populations
Outcome: FTC action for deceptive claims

Takeaway: Don’t skip bias audits. And definitely don’t market as bias-free unless you have receipts.

Your AI Compliance Roadmap: NIST’s Still-Standing Framework

Amid all this regulatory reshuffling, one framework has stayed untouched and trusted across administrations: the NIST AI Risk Management Framework (AI RMF).

While the White House’s AI governance memos now shape how agencies buy and use AI, NIST AI RMF remains the gold standard for everyone else—especially companies building AI in sensitive sectors like healthcare.

Here’s the breakdown:

1. MAP – Know Your AI’s Environment

  • Who will use this AI tool, and how?
  • What are the risks—clinical, ethical, reputational?
  • Does it intersect with HIPAA, the FTC Act, or civil rights laws?

Tip: Build use case documentation into your development process from day one.
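One lightweight way to do that is to keep a structured use-case record in the codebase itself, so it gets versioned and reviewed like any other artifact. Here is a minimal Python sketch; the class name and fields are illustrative assumptions, not a required schema:

```python
from dataclasses import dataclass, field

@dataclass
class AIUseCaseRecord:
    """Lightweight use-case documentation kept next to the model code (fields are illustrative)."""
    system_name: str
    intended_users: list            # e.g., ["triage nurses", "patients via portal"]
    intended_use: str               # what the tool is for, in plain language
    out_of_scope_uses: list         # uses the team explicitly does not support
    risk_notes: dict = field(default_factory=dict)        # clinical / ethical / reputational risks
    legal_touchpoints: list = field(default_factory=list)  # e.g., ["HIPAA", "FTC Act Section 5"]

# Hypothetical example record for a triage assistant
triage_tool = AIUseCaseRecord(
    system_name="symptom-triage-assistant",
    intended_users=["triage nurses"],
    intended_use="Suggest an urgency level for incoming patient messages; a nurse makes the final call.",
    out_of_scope_uses=["autonomous diagnosis", "coverage or billing decisions"],
    risk_notes={"clinical": "missed escalation of urgent symptoms"},
    legal_touchpoints=["HIPAA", "FTC Act Section 5"],
)
print(triage_tool)
```

A record like this also gives counsel and auditors something concrete to review when the legal questions above come up.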

2. MEASURE – Test Before You Boast

  • Run bias and accuracy tests across demographic groups
  • Track false positives/negatives
  • Stress test your AI for real-world edge cases

Tip: Your validation data should reflect the patients you actually serve—not just the ones who are easy to model.
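To make this concrete, a per-group audit can start as something very simple: tally outcomes by demographic group and compare error rates. The sketch below is a minimal Python example; the metric choices and data format are assumptions, not a prescribed methodology.

```python
from collections import defaultdict

def per_group_metrics(records):
    """Compute accuracy, false-positive rate, and false-negative rate per demographic group.

    `records` is an iterable of (group, y_true, y_pred) tuples with binary labels (0/1).
    """
    counts = defaultdict(lambda: {"tp": 0, "fp": 0, "tn": 0, "fn": 0})
    for group, y_true, y_pred in records:
        c = counts[group]
        if y_true and y_pred:
            c["tp"] += 1
        elif not y_true and y_pred:
            c["fp"] += 1
        elif not y_true and not y_pred:
            c["tn"] += 1
        else:
            c["fn"] += 1

    metrics = {}
    for group, c in counts.items():
        total = sum(c.values())
        negatives = c["fp"] + c["tn"]
        positives = c["tp"] + c["fn"]
        metrics[group] = {
            "n": total,
            "accuracy": (c["tp"] + c["tn"]) / total,
            "false_positive_rate": c["fp"] / negatives if negatives else None,
            "false_negative_rate": c["fn"] / positives if positives else None,
        }
    return metrics

# Toy data for illustration only; in practice, compare groups and investigate meaningful gaps.
results = per_group_metrics([
    ("group_a", 1, 1), ("group_a", 0, 0), ("group_a", 1, 0),
    ("group_b", 1, 1), ("group_b", 0, 1), ("group_b", 0, 0),
])
for group, m in results.items():
    print(group, m)
```

However you slice the metrics, the point is the same: keep the results, because those are the "receipts" regulators and enforcement actions keep asking about.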

3. MANAGE – Reduce Risk Like a Pro

  • Stand up internal governance (think: AI review board)
  • Document SOPs for retraining, updating, consent
  • Set privacy and explainability standards

Tip: Appoint an “AI product owner” who owns risk—not just engineering.

4. GOVERN – Make It Everyone’s Problem

  • Create an auditable record of decisions
  • Assign accountability (yes, even at the board level)
  • Monitor downstream use if you’re selling your AI to others

Tip: Build your governance playbook like investors and buyers will read it—because they will.
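For the auditable record in particular, even a simple append-only log goes a long way. Below is a minimal Python sketch; the file name, fields, and hashing approach are illustrative assumptions, not a compliance standard.

```python
import hashlib
import json
import time
from pathlib import Path

AUDIT_LOG = Path("ai_decision_audit.jsonl")  # append-only log file (illustrative location)

def record_decision(model_version: str, input_summary: dict, output: dict, reviewer: str) -> dict:
    """Append one auditable AI decision record as a JSON line.

    Hashing the input summary lets you show what the model saw without storing raw PHI in the log.
    """
    entry = {
        "timestamp": time.strftime("%Y-%m-%dT%H:%M:%SZ", time.gmtime()),
        "model_version": model_version,
        "input_hash": hashlib.sha256(json.dumps(input_summary, sort_keys=True).encode()).hexdigest(),
        "output": output,
        "accountable_reviewer": reviewer,
    }
    with AUDIT_LOG.open("a") as f:
        f.write(json.dumps(entry) + "\n")
    return entry

# Hypothetical usage for a triage recommendation
record_decision(
    model_version="triage-model-2.3.1",
    input_summary={"age_band": "40-49", "chief_complaint": "chest pain"},
    output={"recommendation": "escalate", "confidence": 0.87},
    reviewer="clinical-governance@yourcompany.example",
)
```

The design choice that matters here is accountability: every entry names a model version and a responsible human, which is exactly the trail boards, buyers, and regulators will ask for.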

AI Compliance Cheat Sheet for Digital Health

Overhyped AI Claims: Back every claim with clinical or technical validation.

Data Privacy & Consent: Be clear, transparent, and document informed consent.

Bias & Fairness: Run fairness audits and keep the receipts.

Transparency: Say what the AI does, how it was trained, and what it can’t do.

Replacing Clinicians: Don’t imply parity unless tested. Keep humans in the loop.

Data Governance: Track, secure, and document all data and model handling.

Final Thoughts: Don’t Just Comply—Lead

Yes, the AI policy winds are shifting in D.C.—but the need for trusted, responsible healthcare AI remains constant.

Regulators aren’t waiting for new AI laws—they’re applying existing ones now. For digital health companies, that means your AI risk management can’t be a compliance afterthought.

Instead, use frameworks like NIST AI RMF and lessons from real enforcement actions to:

  • Earn trust with providers, payers, and patients
  • Stay off regulators’ radar (or at least, on the good side)
  • Build AI that’s ethical, scalable, and sustainable

Want to future-proof your AI strategy?

At Elevare Law, we help digital health companies design compliance into their growth plans, from MVP to enterprise scale.

Ready to elevate your vision? Let's talk.

Let’s build smarter, safer, and more trusted healthcare AI—together.