
AI Slop × Health
Misinformation × Elderly

Ecosystem Map — Part 2
Gordon Cheng • Week 11 • Inclusive Design, SVA
Illustration: SearchStax, "AI Slop and Information Discovery" (2024)

When AI Becomes the Doctor:
Health Misinformation × Elderly

AI-generated health misinformation is flooding social media. Elderly Americans (65+) are the most vulnerable and the least protected. We focus on healthcare, where the consequences can be irreversible.

🏥 Healthcare

Fake health advice, AI "doctor" accounts, fraudulent drug promotion. Our focus.

💰 Finance

AI investment scams, fake advice targeting retirement savings.

📱 Daily Life

Fake news, deepfake voice calls, emotional manipulation.

The Scale of the Problem

74%
of 50+ distrust
AI health info
7%
of 65+ know what
a "deepfake" is
44%
of TikTok health videos
are non-factual
0
platforms have
elderly protections
64%
of 65+ use YouTube
for health info
57%
of 65+ use
Facebook daily
7×
more likely to share
fake news (vs 18–29)
90+
fact-check orgs
dropped by Meta

Michigan Medicine • Nature Aging • Pew 2025 • Princeton / NYU • Meta Transparency

Ecosystem Map

AI Health Misinformation × Elderly

[Ecosystem map diagram: three feedback loops connect the 65+ user to the platforms]
  • R1 Reinforce (+): elderly user gives attention & trust → platforms return content & validation → algorithm recommends more ↻
  • R2 Reinforce (+): user shares misinfo → social proof among peers ↻
  • B1 Balance (−): family & doctor checks, infrequent and with declining trust ⚠ BROKEN
Nodes: 👴 65+ Elderly User · 📺 YouTube (64% of 65+) · 📘 Facebook (57% of 65+) · 🎵 TikTok (~7% of 65+) · 💬 WhatsApp (blind spot) · 🤖 Algorithm · 🎭 Bad Actors (AI content + ad $) · 💊 Supplements · 👨‍👩‍👧 Family · 👨‍⚕️ Doctor · 👥 Peers · 🏛️ Regulators · 📋 WHO / CDC · 📰 News Media

Value Exchange

Elderly → System
  • Attention & time
  • Trust (unverified)
  • Behavioral data & money
  • Spreading misinfo to peers

System → Elderly
  • Social connection & comfort
  • Health "info" (uncontrolled)
  • Entertainment
  • Belief validation → R+ loop

⚠ Broken
  • Verification — severed
  • Family check — declining

Power Mapping

High Power / Low Interest

📱 Social Media Platforms
Control algorithms. Goal: ad revenue. Retreating from moderation.
🎭 AI Content Farms
Generate medical content at scale for profit.

High Power / High Interest

🏛️ Regulators (FTC / FDA)
Authority exists but enforcement lags: AI-generated "info" is not regulated as advertising.
🤖 Algorithms
Invisible but decisive. Optimize for engagement, not truth.

Low Power / Low Interest

📰 News Media / 📋 WHO / CDC
Guidelines exist but carry no enforcement; their content rarely reaches elderly users.

Low Power / High Interest

👴 Elderly Users
Most affected, least voice. 74% distrust AI health info, yet only 7% can identify a deepfake.
👨‍👩‍👧 Family & 👨‍⚕️ Doctors
Influence declining. Quarterly visits can’t compete with daily algorithms.

System Boundaries

🤖

AI Generation

LLMs, deepfakes,
content farms

📡

Distribution

YouTube, Facebook,
WhatsApp algorithms

👴

Consumption

PRIMARY FOCUS
How elderly encounter
& trust AI health content

🧠

Decision Making

Self-medication,
purchasing, sharing

⚠️

Consequences

Health harm,
eroded trust

🏛️

Regulation

FTC / FDA / FCC
policy & enforcement

← ————— Our System Boundary ————— →

Platform Landscape

📺 YouTube

Best protections
Health info panels
AI disclosure required
Raised removal threshold
64%
of 65+ use

📘 Facebook

Biggest backslide
Ended 90+ org fact-checking
Community Notes instead
57%
of 65+ use

🎵 TikTok

Most misinfo-dense
25% = misinformation
AI labels on 1.3B+ videos
~7%
of 65+ (growing)

𝕏 X (Twitter)

All policies gone
COVID policy ended
Misinfo report removed
low
65+ adoption

💬 WhatsApp

The blind spot
End-to-end encryption = zero moderation
Family groups amplify
~30%
of 50–64 (growing)
Policy direction: LESS moderation → elderly more exposed than ever.