AI, Minors, and the Global Patchwork Problem: Why Regulation Still Can’t Keep Up
Every time a new AI model lands, somebody in government wakes up and decides this is the moment they’re going to “rein in AI.” Press conferences happen. Task forces spawn. White papers multiply.
Meanwhile, kids and other vulnerable people are already living inside these systems — being profiled by them, targeted by them, nudged by them — long before the grown-ups finish their first consultation round.
Right now, five key jurisdictions are making five very different bets about how dangerous AI can be and what to do about it:
European Union
United States (with Colorado now leading)
South Korea
Japan
China
If you build games, platforms, tools, or anything remotely “AI-driven” that touches minors, this isn’t background noise. This is your risk surface.
Let’s map it without the hype.
The EU: The Only Region That Explicitly Outlaws Exploiting Kids’ Vulnerabilities
The EU is the only one that has gone all-in and said, in black-letter law, that some AI uses are so toxic they’re just illegal. Full stop.
Under the EU AI Act, certain systems are banned outright. That includes AI that:
Uses subliminal techniques to materially distort behavior and cause harm.
Exploits vulnerabilities based on age, disability, or social/economic situation in a way that materially distorts behavior and risks significant harm.
That “age” piece is not a metaphor. It’s aimed squarely at children and other people the law treats as structurally vulnerable.
Then you layer on:
GDPR, which treats children’s data as needing “specific protection,” especially around consent and profiling.
The Digital Services Act (DSA), which:
Forces platforms accessible to minors to implement “a high level of privacy, safety and security” for them.
Bans targeted advertising based on profiling when the platform knows with reasonable certainty the user is a minor.
So in the EU, an AI-powered system that profiles kids, manipulates their behavior, and feeds them targeted content isn’t just an ethical problem — it can be a prohibited practice.
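To make that concrete for builders: here’s a minimal sketch, in Python, of what a DSA-style gate on profiling-based ad targeting could look like. The UserSignals structure, the field names, and the 0.5 threshold are all illustrative assumptions, not anything the DSA itself specifies.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class UserSignals:
    """Hypothetical bundle of age signals a platform might hold about a user."""
    declared_age: Optional[int]        # self-reported age, if any
    verified_minor: Optional[bool]     # result of an age-assurance check, if one was run
    inferred_minor_score: float        # 0.0-1.0 model estimate that the user is a minor

def profiling_ads_allowed(user: UserSignals, minor_threshold: float = 0.5) -> bool:
    """Return False whenever the platform knows with reasonable certainty the user is a minor.

    Mirrors the DSA rule described above: profiling-based ad targeting is off the table
    for known minors. The threshold and signal sources are assumptions, not a standard.
    """
    if user.verified_minor:
        return False
    if user.declared_age is not None and user.declared_age < 18:
        return False
    if user.inferred_minor_score >= minor_threshold:
        # If our own systems think this is probably a child, fail safe.
        return False
    return True

if __name__ == "__main__":
    teen = UserSignals(declared_age=15, verified_minor=None, inferred_minor_score=0.2)
    adult = UserSignals(declared_age=34, verified_minor=False, inferred_minor_score=0.05)
    print(profiling_ads_allowed(teen))   # False
    print(profiling_ads_allowed(adult))  # True
```

The design choice that matters is the fail-safe direction: when your own signals suggest the user is probably a minor, the profiling path turns off rather than defaulting on.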
You can argue about how fast enforcement will move, but you can’t argue the structure: Europe is treating AI aimed at vulnerable people like a regulated product, not a fun gadget.
The U.S.: No Federal AI Act, but Colorado Just Broke the “We Have Nothing” Narrative
At the federal level, the U.S. still does not have a comprehensive AI statute. That part is true.
What we do have is a messy ecosystem of older laws and emerging state/local rules that now includes some actual AI-specific regulation:
The Old Guard: COPPA, FTC, and Friends
COPPA protects kids under 13 online. It forces notice and verifiable parental consent when you collect personal information from children.
That hits AI in two ways:
Training data pulled from services “directed to children”
Kids’ data used for personalization, targeting, or profiling
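A hedged sketch of how that plays out in a data pipeline, assuming a hypothetical DataSubject record and a single consent flag (real verifiable-consent flows are considerably messier):

```python
from dataclasses import dataclass

@dataclass
class DataSubject:
    """Hypothetical record of what we know about a user whose data we want to use."""
    age: int
    verifiable_parental_consent: bool  # outcome of a COPPA-style consent flow

def may_use_for_training_or_profiling(subject: DataSubject) -> bool:
    """COPPA-style gate: under-13 data needs verifiable parental consent first.

    The age cutoff comes from COPPA; everything else (field names, the idea of a
    single boolean consent flag) is a simplifying assumption for illustration.
    """
    if subject.age < 13:
        return subject.verifiable_parental_consent
    return True  # other regimes (state privacy laws, GDPR, etc.) may still apply

# Usage: filter a dataset before it ever reaches a training or personalization pipeline.
users = [
    DataSubject(age=11, verifiable_parental_consent=False),
    DataSubject(age=11, verifiable_parental_consent=True),
    DataSubject(age=29, verifiable_parental_consent=False),
]
eligible = [u for u in users if may_use_for_training_or_profiling(u)]
print(len(eligible))  # 2
```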
The FTC Act (Section 5) bans unfair and deceptive practices. The FTC has already used that to:
Go after companies for lying about training data practices
Force “algorithmic disgorgement” — making companies delete models trained on unlawfully collected data
A handful of state privacy laws (like California’s) treat teens’ data more strictly when you’re “selling” or sharing it.
But that’s all retrofitted — laws built for Web 1.0 being used to chase Web 3.5.
Colorado: The First U.S. State to Actually Regulate AI Like AI
Then Colorado showed up.
In 2024, the state passed what’s now known as the Colorado Artificial Intelligence Act (SB 24-205). It:
Regulates “high-risk AI systems” that make, or are a substantial factor in making, “consequential decisions” about people (think: employment, credit, housing, health, education, essential services).
Defines “algorithmic discrimination” to include unlawful differential treatment or impact based on age and other protected traits.
Imposes “reasonable care” obligations on both developers and deployers of high-risk AI, including:
documented risk management,
impact assessments,
transparency to consumers when AI is used in consequential decisions.
It doesn’t target “kids” specifically, but because age is explicitly included, minors are pulled into the protected category.
Colorado’s law kicks in February 2026 and is the first U.S. statute that actually looks like a cousin of the EU AI Act: risk-based, AI-specific, and structured around systemic harm — not just privacy.
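If you want to see what those “reasonable care” obligations turn into on the ground, here’s a rough sketch of the kind of paper trail a deployer might keep per high-risk system. Every field name here is an illustrative assumption, not statutory language from SB 24-205.

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class HighRiskDeploymentRecord:
    """Hypothetical paper trail a deployer might keep for one high-risk AI system.

    The point is that risk management, impact assessments, and consumer notice
    become concrete artifacts you can produce on request, not vibes.
    """
    system_name: str
    consequential_decision: str          # e.g. "tenant screening", "loan pre-approval"
    impact_assessment_completed: date
    known_risks_of_algorithmic_discrimination: list = field(default_factory=list)
    consumer_notice_text: str = ""

    def ready_to_deploy(self) -> bool:
        """Crude readiness check: risks enumerated and a consumer notice drafted."""
        return bool(self.known_risks_of_algorithmic_discrimination) and bool(self.consumer_notice_text)

record = HighRiskDeploymentRecord(
    system_name="tutor-matching-ranker",
    consequential_decision="access to an educational service",
    impact_assessment_completed=date(2026, 1, 15),
    known_risks_of_algorithmic_discrimination=["age-correlated ranking features"],
    consumer_notice_text="An automated system is a substantial factor in this decision.",
)
print(record.ready_to_deploy())  # True
```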
Local Experiments: NYC and Friends
On top of that, you’ve got local rules like New York City’s Local Law 144, which regulates automated tools used in hiring. It’s not “kids-focused,” but it does:
Force bias audits of automated employment decision tools.
Require notice to job applicants when AI is used.
Again, not a minors law — but part of the same ecosystem that’s starting to treat algorithmic decision-making as something you don’t just unleash and hope for the best.
Where This Actually Leaves the U.S.
So the accurate picture is:
No comprehensive federal AI statute.
A growing patchwork of sector- and state-level rules (privacy, employment, kids’ safety).
Colorado as the first real risk-based AI law that clearly touches vulnerable groups (including minors) through its age-based discrimination provisions.
The floor is still low. But there is, finally, a floor.
South Korea: Strong Privacy Law + New AI Framework, Still Light on Child-Specific AI Rules
South Korea is doing two things at once:
Running one of the strictest personal-data regimes in the world.
Ramping up a brand-new AI Basic Act that will take effect in 2026.
Privacy and Minors: PIPA Still Does the Heavy Lifting
Under Korea’s Personal Information Protection Act (PIPA):
Personal information of children under 14 can only be processed with consent from a legal representative.
Controllers have duties around clarity, necessity, and data minimization.
Regulators and guidance emphasize special care for children.
That hits AI directly wherever training, profiling, biometric analysis, or recommender systems involve minors’ personal data.
There’s also the Youth Protection Act, which goes after harmful content and requires “youth protection measures” from service providers — relevant for AI-driven recommendations and content curation.
AI Basic Act: Comprehensive, but Not Child-Specific
In late 2024, South Korea passed the Basic Act on the Development of Artificial Intelligence and the Establishment of Trust (often called the AI Basic Act), which:
Creates a national framework for “high-impact” AI, including risk management and trust requirements.
Sets up governance structures and principles for safe, reliable AI.
Includes obligations for certain high-impact and generative AI operators.
What it does not do (at least in its initial text) is set out the kind of explicit, minors-focused bans you see in the EU AI Act or in China’s generative AI measures.
So if you’re building AI in Korea that touches minors, your hard constraints still primarily come from:
PIPA (data and consent), and
Youth content laws, with the AI Basic Act sitting on top as a general governance layer.
Japan: Heavy on Guidance, Light on Binding AI Rules for Kids
Japan is running almost entirely on soft law when it comes to AI and minors.
Data Protection: APPI, With Children Handled Mostly in Guidance
Japan’s Act on the Protection of Personal Information (APPI):
Protects personal data across the board.
Does not contain child-specific statutory provisions the way GDPR, COPPA, or PIPL do.
However, the Personal Information Protection Commission (PPC) has issued guidelines that:
Treat minors (generally under 18) as needing parental responsibility to consent in many cases.
Encourage extra care with children’s personal information.
So the kid-specific protections come in via guidance, not primary legislation.
AI and Education: Policy, Not Law
Japan’s education ministry has also issued guidelines for using generative AI in schools — talking about risks, safeguards, and how to integrate AI into learning environments. These are useful in practice, but they’re not AI liability statutes. They’re “please do this right,” not “you’re in violation of Article X.”
Bottom line: Japan is taking AI risks seriously on a policy level, including for students, but hasn’t built a binding, minors-focused AI regime yet. If something goes wrong with AI and children, you’re still mostly in the land of general data protection and tort.
China: Explicit AI Rules That Name Minors Directly
China is one of the few jurisdictions where AI rules actually say “minors” in the text and then attach enforceable obligations to that.
PIPL: Minors’ Data as “Sensitive”
Under China’s Personal Information Protection Law (PIPL):
All personal information of minors under 14 is classified as “sensitive personal information.”
Processing that data requires:
A specific purpose,
Strict necessity, and
Guardian consent, plus special handling rules.
That alone makes AI training and personalization involving kids’ data legally high-risk.
Algorithm and Generative AI Rules: Anti-Addiction and Content Controls
Then you layer on:
Recommendation Algorithm Provisions, which bar providers from using algorithms to induce minors into online addiction or push harmful content at them.
Interim Measures for the Management of Generative AI Services (2023), which:
Require providers to take effective measures to prevent minors from over-relying on or becoming addicted to generative AI services.
Require safeguards against content that harms minors’ physical or mental health.
Combine those and you get something very few jurisdictions have: binding, AI-specific duties that explicitly call out minors and addiction-like risk patterns.
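The Interim Measures leave the exact mechanism to providers, so what an “effective measure” against over-reliance looks like in code is up to you. A minimal sketch, with invented limits and field names, might be a usage policy like this:

```python
from dataclasses import dataclass
from datetime import timedelta

@dataclass
class SessionPolicy:
    """Hypothetical anti-over-reliance policy for minor accounts.

    The specific limits are invented for illustration; the law requires effective
    measures against minors over-relying on the service, not these exact numbers.
    """
    daily_cap: timedelta = timedelta(hours=1)
    nudge_after: timedelta = timedelta(minutes=30)

def next_action(is_minor: bool, usage_today: timedelta,
                policy: SessionPolicy = SessionPolicy()) -> str:
    """Decide whether to continue, nudge a break, or end the session for a minor."""
    if not is_minor:
        return "continue"
    if usage_today >= policy.daily_cap:
        return "end_session"        # hard stop once the daily cap is hit
    if usage_today >= policy.nudge_after:
        return "show_break_prompt"  # soft intervention before the cap
    return "continue"

print(next_action(is_minor=True, usage_today=timedelta(minutes=45)))  # show_break_prompt
print(next_action(is_minor=True, usage_today=timedelta(minutes=70)))  # end_session
```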
The Global Problem in One Sentence
AI products are global.
AI law is not.
Here’s what that fragmentation looks like in practice:
A system that’s legal in Japan might be banned in the EU because it exploits vulnerabilities of children or uses profiling in ways the AI Act and DSA don’t tolerate.
A tool that passes U.S. federal scrutiny might still violate Colorado’s AI Act once it goes live, if it causes age-based algorithmic discrimination in “consequential decisions.”
A generative AI feature that flies in most Western markets could trigger enforcement in China if it doesn’t include anti-addiction controls for minors.
The people least equipped to navigate that mess — kids, teens, and other vulnerable groups — are the ones most likely to be targeted by personalization, gamified design, and recommendation engines.
We’ve watched this cycle play out with social media, loot boxes, “engagement” design, and mobile monetization. AI is just the next iteration — only this time, it’s stitched directly into decision-making systems around education, health, credit, and work.
So Where Does That Leave Anyone Building with AI?
Short version:
EU: Act like you’re designing medical-grade systems whenever your AI touches minors or vulnerable people. That’s roughly the expectation.
U.S.:
Federally: privacy + consumer protection + anti-discrimination law, used creatively.
Locally: Colorado and NYC are early warning shots of what real AI regulation will look like. Ignore them and you’re betting the whole stack on “no one cares yet.”
South Korea:
PIPA + Youth Protection + the new AI Basic Act = serious obligations, but still more privacy-centric than minors-specific in the AI layer.
Japan:
Guidance-heavy, law-light. You can do a lot here technically, but you’re still exposed via general privacy and tort principles if you screw up.
China:
If your AI system touches minors, the law already treats that as a high-risk zone and expects you to behave like it.
The common mistake is to build to the weakest regime and treat everything else as “localization.” That made sense when we were just talking about UI strings and billing flows.
It doesn’t scale when the thing you’re localizing is who gets manipulated, who gets denied opportunities, and whose developing brain gets optimized for engagement instead of agency.
If AI is going to sit between an entire generation and their world — mediating what they see, how they’re categorized, and which doors open or close — then “wait until regulators catch up” isn’t a strategy. It’s an abdication.
The regulators are already moving. They’re just not moving in sync.
Your job, if you’re building in this space, is to notice that before the enforcement letters do.
*Articles and insights are for educational purposes only and do not constitute legal advice.