
China Regulates AI Digital Humans: What's Changing

Quick Answer

China published comprehensive draft rules on April 3, 2026 targeting AI “digital humans” — virtual AI entities that simulate human personality and engage in emotional interaction. This is the first major regulatory framework for AI companions and avatars globally.

Last verified: April 2026

Key Requirements

| Rule | Details |
| --- | --- |
| Mandatory labeling | All digital human content must carry prominent “digital human” labels |
| Minor protection | No virtual intimate relationships for users under 18 |
| Anti-addiction | Ban on addictive AI services designed for children |
| Data consent | Strict safeguards on personal data collection |
| Provider registration | AI digital human providers must register with regulators |
| Real identity | Services must verify user identity for age-gating |

What’s Covered

The rules apply to AI that:

  • Simulates human personality traits
  • Engages in emotional interaction through text, images, audio, or video
  • Provides products or services to the public in mainland China

This includes:

  • AI companions (Grok companions, Character.AI, etc.)
  • Virtual influencers (AI-generated social media personalities)
  • Customer service avatars
  • AI chatbot personalities
  • Digital streamers (virtual hosts on platforms like Douyin)

Why This Matters Globally

First-Mover Regulation

China is the first major economy to propose comprehensive rules specifically for AI digital humans. Europe’s AI Act imposes general transparency obligations on AI systems that interact with people, but it contains no dedicated digital-human rules.

Impact on Global Companies

Any AI company operating in China must comply — this affects:

  • OpenAI/ChatGPT (if operating in China)
  • Grok companions (xAI)
  • Character.AI and similar services
  • Enterprise AI avatar products

Template for Other Countries

China’s AI regulations often become templates for other Asian markets and influence European and US policy discussions.

The Child Safety Focus

The strongest provisions protect minors:

  • No virtual intimate relationships for under-18 users
  • No addictive AI services designed for children
  • Age verification required
  • Parental controls mandated

This directly responds to growing concerns about children forming emotional attachments to AI chatbots — a trend that’s accelerated globally in 2025-2026.

Timeline

  • April 3, 2026 — Draft rules published
  • May 6, 2026 — Comment period ends
  • Mid-2026 — Final rules expected
