
AI and Children's Privacy: What Data Do AI Tools Collect?
Version 2.4 — Updated April 2026 | Reviewed by Felix Zhao
By KidsAiTools Editorial Team
What You Do Not See Happening Behind the Screen
When your child uses an AI tool, a conversation happens that you cannot see. Your child types a question or speaks a command. The AI responds. But between those two events, data is collected, transmitted, and often stored. Understanding what data AI tools collect, and what they do with it, is essential for any parent today.
This guide covers the specifics: what major AI tools actually collect, what the law says, and a practical checklist you can use before letting your child try any new AI tool.
What Major AI Tools Actually Collect
ChatGPT (OpenAI)
ChatGPT collects and stores every conversation by default. This includes:
- Every prompt your child types
- The AI's responses
- Device information (browser type, operating system, IP address)
- Usage patterns (how often, how long, what features are used)
OpenAI's privacy policy states that conversations may be used to improve their models unless you specifically opt out. This means a personal story your child shares with ChatGPT could theoretically influence future AI training. You can disable chat history in settings, which prevents conversations from being used for training, but OpenAI may still retain them for 30 days for safety monitoring.
Important: ChatGPT's terms of service require users to be at least 13 years old (18 in some jurisdictions). Children under 13 should not be using it without parental setup and supervision.
Character.AI
Character.AI collects similar data to ChatGPT, plus additional information specific to its social features:
- All messages exchanged with AI characters
- Character creation data (if your child creates characters)
- Social interactions (follows, favorites)
- Device and usage data
Because Character.AI encourages extended, emotional conversations with AI personas, the data collected can be deeply personal. Children may share feelings, experiences, and personal details they would not share in a search engine query.
Khan Academy / Khanmigo
Khan Academy collects:
- Account information (name, email, grade level)
- Learning progress and performance data
- Interactions with Khanmigo AI tutor
- Time spent on different topics
Khan Academy is a nonprofit with a strong privacy track record. It does not sell personal data and is transparent about what it collects. Khanmigo conversations are used to improve the educational experience but are handled more carefully than conversations on commercial AI tools.
Google (Gemini, formerly Bard)
Google collects extensive data across its ecosystem:
- Search and conversation history
- Location data (if enabled)
- Device information
- Voice data (if using voice features)
- Activity across all Google services linked to the same account
Google offers Family Link for parental controls, which provides some data limitation for children's accounts. However, the sheer breadth of Google's data collection makes it important to configure privacy settings carefully.
What the Law Says: COPPA Explained
The Children's Online Privacy Protection Act (COPPA) is the primary U.S. law protecting children's data. Here is what it requires:
Applies to children under 13. Any online service directed to children under 13, or that knowingly collects data from children under 13, must comply with COPPA.
Requires parental consent. Companies must obtain verifiable parental consent before collecting personal information from children under 13.
Limits data collection. Companies can only collect data that is reasonably necessary for the child to participate in the activity.
Gives parents control. Parents have the right to review their child's data, request deletion, and refuse further collection.
The problem: Many AI tools technically prohibit children under 13 from creating accounts, which they argue exempts them from COPPA compliance. But in practice, millions of children under 13 use these tools, often without parental knowledge. The legal gray area is significant, and regulators are actively working to close these gaps.
International protections: The EU's GDPR sets the age of digital consent between 13 and 16 (varies by country) and provides stronger data protection rights. The UK's Age Appropriate Design Code specifically requires that digital services used by children prioritize children's best interests in how they handle data.
Your 5-Point Privacy Checklist
Before letting your child use any AI tool, run through this checklist:
1. Read the Age Requirement
Check the terms of service for the minimum age. If your child is below the stated age, the tool is not designed for them. Look for a family or kids version instead.
2. Check the Privacy Settings
Before your child starts using the tool, explore every privacy and data setting. Key things to look for:
- Can you disable conversation history or data collection?
- Can you delete stored data?
- Is there a parental control or family mode?
- Can you opt out of data being used for AI training?
3. Review What Data Is Collected
Read the privacy policy (or a summary of it). Specifically look for:
- What types of data are collected (conversations, location, voice, images)
- How long data is retained
- Whether data is shared with third parties
- Whether data is used for advertising or model training
4. Set Ground Rules with Your Child
Have a clear conversation about what is and is not appropriate to share with AI:
- Never share your full name, address, school name, or phone number
- Never share photos of yourself or friends
- Never share passwords or account information
- If the AI asks personal questions, tell a parent
5. Monitor and Review Regularly
Privacy is not a one-time setup. Schedule a monthly check-in:
- Review your child's AI conversation history (where available)
- Check for any changes in the tool's privacy policy
- Ask your child about their AI interactions
- Delete stored data you are not comfortable with
Practical Privacy Actions You Can Take Today
For ChatGPT: Go to Settings, then Data Controls, and turn off the model-training toggle (labeled "Improve the model for everyone" in current versions, "Chat history and training" in older ones). This prevents conversations from being used to train models.
For Google services: Set up a supervised Google account using Family Link. Configure activity controls to auto-delete history every 3 months.
For Character.AI: Review your child's conversations periodically. The app's settings allow some restrictions but are less granular than other platforms.
For all tools: Use your child's device settings to limit app permissions. Disable microphone, camera, and location access for AI apps unless specifically needed.
The Bigger Picture
Children growing up today will share more data with AI systems than any previous generation shared with any technology. The patterns being set now (what children consider normal to share, and what companies consider acceptable to collect) will shape privacy norms for decades.
Teaching your child to be thoughtful about data privacy is not about fear. It is about empowerment. A child who understands that their words have value, that their data belongs to them, and that they have the right to control who sees it, is a child who will make better decisions about privacy for their entire life.
You would not let a stranger listen to every conversation your child has. Apply the same standard to AI.
Real-World Safety Scenarios and How to Handle Them
Scenario: Your child shows you something disturbing an AI generated
What happened: A 10-year-old asked ChatGPT about World War II for a history project. The AI provided accurate historical information but included graphic descriptions of violence that upset the child.
What to do:
- Thank the child for telling you (this preserves future disclosure)
- Acknowledge that the content was upsetting — don't dismiss their feelings
- Explain that AI doesn't know how old the user is unless told
- Together, add custom instructions: "The user is 10 years old. Use age-appropriate language."
- Report the response using the thumbs-down button (helps improve AI safety)
Scenario: Your child's essay sounds too polished
What happened: Your 12-year-old submits a perfectly structured essay with vocabulary they've never used. You suspect AI wrote it.
What to do:
- Don't accuse directly — ask them to explain their main argument
- If they can't explain it, have a calm conversation about the difference between AI-assisted learning and AI-generated submissions
- Establish the "explain it to me" rule: if you can't explain it without the screen, you didn't learn it
- Work with the teacher to align home and school AI policies
Scenario: Your child prefers talking to AI over friends
What happened: Your 13-year-old spends 2+ hours daily chatting with Character.AI and declining social invitations.
What to do:
- This is a yellow flag, not a red flag — investigate the underlying need
- Ask: "What does the AI give you that friends don't?" (Often: consistency, no judgment, availability)
- Set time limits on AI chat (not as punishment but as balance)
- Facilitate real-world social activities that meet the same needs
- If withdrawal persists for 2+ weeks, consult a school counselor
Building a Family AI Safety Culture
Safety isn't a one-time setup — it's an ongoing family practice:
Weekly: 3-minute check-in at dinner — "What's the most interesting thing you did with AI this week?"
Monthly: Review and adjust AI tool permissions and time limits based on your child's growing maturity.
Quarterly: Update family AI rules. What was appropriate for a 10-year-old may be too restrictive for a newly-turned-11-year-old.
Annually: Review which tools your child uses. Remove unused ones (they still have data access). Add age-appropriate new ones.
The goal is raising a child who doesn't need parental controls — because they've internalized good judgment about AI use.
Read our complete AI safety guide collection. Browse COPPA-compliant tools.
Ready to try this with your child?
Knowing the risks is half the work — the other half is putting your child in front of tools that were built with those risks in mind. These five are the ones we use with our own kids first, before recommending any third-party platform.
| Your child's goal | Try this | Why it works |
|---|---|---|
| Build 3D creations hands-on | 🧱 3D Block Adventure | Browser-based 3D building with 15 AI-guided levels. Ages 4-12, no downloads. |
| Play an AI game right now | 🎨 Wendy Guess My Drawing | A 60-second drawing game where the AI tries to guess. Ages 5-12, zero setup. |
| Learn AI over 7 structured days | 🏕️ 7-Day AI Camp | Day 1 is free. 15 minutes a day covering art, story, music, and safety. |
| Create art, stories, or music | 🎨 AI Creative Studio | Built-in safety filters. Three free creations a day without signing up. |
| Pick the right AI tool for your child | 🛠️ 55+ Kid-Safe AI Tools | Filter by age, subject, safety rating, and price. Every tool parent-tested. |
All five start free, run in the browser, and never ask for a credit card up front.
📋 Editorial Statement
Written by the KidsAiTools Editorial Team and reviewed by Felix Zhao. Our guides are written from a parent-builder perspective and focus on AI literacy, age fit, pricing transparency, and practical family use. We do not currently claim named external expert review or a child-test panel. We may earn commissions through referral links, which does not influence our reviews.
If you find any errors, please contact support@kidsaitools.com. We will verify and correct as soon as we can.
Last verified: April 22, 2026