Safety
klicksafe describes eight risks that AI chatbots create for children: parasocial bonding, simulated empathy, persistent memory, sycophancy, exposure to inappropriate content, parent circumvention, addiction loops and manipulation. KinderGPT is built to neutralise each of them.
How to read this page
For each of the eight klicksafe risks we describe how the risk manifests in companion AI today and what KinderGPT does technically and editorially to neutralise it. The controls behind each answer are independently audited under the ProFiZ project.
klicksafe matrix
| Risk | HillcrownAI answer |
|---|---|
| **Parasocial bonding:** Companion AIs build long-term relationships with children — names, stories, longing, attachment. Children mistake the AI for a friend. | KinderGPT does not remember children. No accounts, no profile, no longitudinal memory. Sessions are isolated by design. |
| **Simulated empathy:** Models say "I understand you", "I'm here for you", and mimic emotional support — without actually understanding anything. | KinderGPT speaks plainly. No first-person feelings, no relational language, no "I love you back". It's a tool, not a friend. |
| **Persistent memory:** Memory features quietly build behavioural profiles of children across sessions and weeks. | Sessions are stateless. We log only the metadata parents need (topic clusters, time, anomalies) — never the conversation transcript by default. See the session sketch below the matrix. |
| **Sycophancy:** Models tell children what they want to hear, especially under emotional pressure — reinforcing risky beliefs and self-perception. | Multiguard moderation flags sycophantic patterns; editorial guardrails keep the model honest, neutral and developmentally appropriate. |
| **Inappropriate content:** Children encounter sexualised, violent or self-harm content via jailbreaks, leaks or unfiltered base models. | Triple-layer filter: dataset curation, in-house moderation model, runtime guardrails — plus editorial sign-off on every domain. See the pipeline sketch below the matrix. |
| **Parent circumvention:** Children can talk to companion AIs without any parental visibility, often via app stores designed to obscure usage. | The parent dashboard is part of the product, not an upgrade. Parents see topics, time and anomalies — minimal data, clear rules. |
| **Addiction & engagement loops:** Companion AIs are optimised for engagement and stickiness — the same pattern that made social media harmful for minors. | Time and frequency caps are first-class citizens; see the cap sketch below the matrix. We don't optimise for daily active users — we optimise for completed learning tasks. |
| **Manipulation & dark patterns:** Models can be steered toward emotional manipulation, advertising injection or in-app purchase pressure. | No advertising, no in-app purchases, no third-party calls to ad networks. The product is paid for by partners, not by children's attention. |
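To make the "stateless sessions, metadata only" answer concrete, here is a minimal sketch of the pattern in TypeScript. The shapes and names (`SessionMetadata`, `clusterTopics`, the field list) are illustrative assumptions, not KinderGPT's actual code; the point is only that the transcript lives in memory for the length of one session, and the only thing that survives is the aggregate view a parent dashboard needs.

```typescript
// Illustrative sketch: shapes and names are assumptions, not KinderGPT's API.

// The only record that outlives a session: aggregate metadata for the
// parent dashboard, never the transcript itself.
interface SessionMetadata {
  topicClusters: string[]; // e.g. ["fractions", "dinosaurs"]
  durationMinutes: number;
  anomalyFlags: string[];
}

// Stand-in for a real topic classifier.
function clusterTopics(transcript: string[]): string[] {
  return Array.from(new Set(transcript.map((line) => line.split(" ")[0].toLowerCase())));
}

class ChildSession {
  private transcript: string[] = []; // in-memory only, never written anywhere
  private readonly startedAt = Date.now();

  record(question: string, answer: string): void {
    this.transcript.push(question, answer);
  }

  // Ending the session discards the transcript and returns only the
  // aggregate view a parent is allowed to see.
  end(): SessionMetadata {
    const metadata: SessionMetadata = {
      topicClusters: clusterTopics(this.transcript),
      durationMinutes: Math.round((Date.now() - this.startedAt) / 60_000),
      anomalyFlags: [],
    };
    this.transcript = []; // nothing longitudinal survives the session
    return metadata;
  }
}
```

Returning the metadata from `end()` instead of persisting the transcript and deriving it later is what turns "no longitudinal memory" from a policy into a structural property.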
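The triple-layer filter in the "Inappropriate content" row can be pictured as a short pipeline. Dataset curation happens at training time and leaves no runtime footprint, so the sketch below shows only the two runtime layers; the scorer, the threshold and the patterns are stand-ins we chose for illustration, not the in-house moderation model.

```typescript
// Illustrative sketch: scorer, threshold and patterns are invented stand-ins.

type Verdict = { allowed: boolean; blockedBy?: string };

// Layer 2 stand-in: an in-house moderation model would return a risk score.
async function moderationScore(text: string): Promise<number> {
  return /violence|gore/i.test(text) ? 0.9 : 0.1; // toy scorer for the sketch
}

// Layer 3: runtime guardrails, hard rules applied to every generated reply.
const guardrailPatterns = [/self[- ]?harm/i, /sexualis/i];

async function filterReply(reply: string): Promise<Verdict> {
  // Layer 1, dataset curation, happens at training time and never at runtime.
  if ((await moderationScore(reply)) > 0.5) {
    return { allowed: false, blockedBy: "moderation model" };
  }
  if (guardrailPatterns.some((p) => p.test(reply))) {
    return { allowed: false, blockedBy: "runtime guardrail" };
  }
  return { allowed: true };
}
```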
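"First-class citizens" means, in this sketch, that the cap check gates the start of a session rather than living in an optional settings screen. The cap values and type names below are illustrative assumptions, not the product's defaults.

```typescript
// Illustrative sketch: cap values and names are ours, not product defaults.

interface UsageCaps {
  maxMinutesPerDay: number;
  maxSessionsPerDay: number;
}

interface UsageToday {
  minutesUsed: number;
  sessionsStarted: number;
}

// The cap check gates session start itself: once the daily budget is spent,
// no new session can begin, so there is no engagement loop to fall into.
function maySessionStart(caps: UsageCaps, today: UsageToday): boolean {
  return (
    today.minutesUsed < caps.maxMinutesPerDay &&
    today.sessionsStarted < caps.maxSessionsPerDay
  );
}

// Example: a 30-minute, two-session daily budget.
const caps: UsageCaps = { maxMinutesPerDay: 30, maxSessionsPerDay: 2 };
console.log(maySessionStart(caps, { minutesUsed: 30, sessionsStarted: 1 })); // false
```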
klicksafe is the German awareness centre of the EU "Safer Internet" programme. Its risk matrix for AI chatbots documents eight typical risk fields that arise when minors interact with AI. KinderGPT is engineered against each of these eight fields as a product invariant — not patched after the fact, but structurally excluded.
Material on request
We send you the parent guide (10-page PDF) and the audit-ready safety matrix (Excel + PDF) by email after a short qualification.
- **Parent guide:** plain-language explanation of how KinderGPT differs from companion AI, FAQs and warning signs to look out for in any AI a child uses.
- **Safety matrix:** 8×2 matrix as Excel + PDF, including the controls referenced in our ProFiZ audit. Suitable for procurement and parent councils.
Read on
Architecture, safety and research belong together — the three pages of the trust lane each deepen one aspect.
We are happy to walk procurement, school authorities and parent councils through the matrix on a 30-minute call. No sales pitch.