
AI Transparency

Which AI models HillcrownAI uses, where the training and content data come from, how we moderate, and which parental controls and protection mechanisms keep children safe, openly documented in line with the EU AI Act and the German Youth Protection Act (JuSchG).

As of: April 2026

Models and provenance

KinderGPT and Sesame Studio rely exclusively on open-source foundation models that are operated by HillcrownAI on European infrastructure in Germany. No requests are forwarded to OpenAI, Anthropic, Google or other proprietary US providers.

Language model (text)

Open-source foundation model, fine-tuned for German-language children's conversations, curriculum context and age-appropriate behaviour. Hosted in HillcrownAI's own data centres in Germany. The specific model selection is reviewed with each release cycle and may evolve along with the open-source ecosystem; it is disclosed to procurement bodies and auditors on request.

Image model (Sesame Studio)

Open-source diffusion model, complemented by HillcrownAI's own LoRA fine-tunings for GDPR-compliant brand and character image generation free of third-party rights risks. As with the language model, the specific model selection is disclosed on request.

Moderation and safety stack

HillcrownAI's own multiguard pipeline with classifier-based pre- and post-filtering, JuSchG-compliant categorisation and a pedagogically trained escalation tier for sensitive topics. Moderation models are not trained on children's conversation data.
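A pipeline of this shape can be sketched as follows. This is purely illustrative: the function names, categories and string responses are hypothetical stand-ins, not HillcrownAI's actual implementation, and the keyword check stands in for a real moderation model.

```python
from dataclasses import dataclass
from typing import Callable

# Hypothetical JuSchG-style categories (illustrative, not the real taxonomy).
BLOCKED = {"sexualised_content"}
ESCALATE = {"self_harm", "bullying"}

@dataclass
class Verdict:
    category: str   # classifier label, e.g. "ok" or "self_harm"
    allowed: bool   # may the text pass this stage?
    escalate: bool  # route to the pedagogical escalation tier?

def classify(text: str) -> Verdict:
    # Stand-in classifier: a real pipeline would call a moderation model here.
    label = "self_harm" if "hurt myself" in text.lower() else "ok"
    return Verdict(label, label not in BLOCKED, label in ESCALATE)

def moderated_reply(prompt: str, generate: Callable[[str], str]) -> str:
    pre = classify(prompt)                 # pre-filter on the child's input
    if pre.escalate:
        return "de-escalating response with counselling referral"
    if not pre.allowed:
        return "age-appropriate refusal"
    answer = generate(prompt)
    post = classify(answer)                # post-filter on the model output
    if not post.allowed or post.escalate:
        return "age-appropriate refusal"
    return answer
```

The key structural point the sketch shows is that both the child's input and the model's output pass through the same classifier, and that escalation categories bypass generation entirely.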

Training and content data

HillcrownAI strictly separates foundation-model pre-training (open source, documented by the respective provider), our own fine-tuning (curated and licensed data) and inference data (children's conversations — not used for training).
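The three-way separation can be made explicit in code. A minimal sketch, assuming a hypothetical training pipeline guard (the names are illustrative, not HillcrownAI's codebase):

```python
from enum import Enum

class DataClass(Enum):
    PRETRAINING = "pretraining"   # open source, documented by the provider
    FINE_TUNING = "fine_tuning"   # curated, licensed partner content
    INFERENCE = "inference"       # children's conversations

# Only explicitly allowed classes may ever enter a training batch;
# inference data is excluded by construction, not by convention.
TRAINABLE = {DataClass.FINE_TUNING}

def assert_trainable(batch_class: DataClass) -> None:
    if batch_class not in TRAINABLE:
        raise PermissionError(f"{batch_class.value} data must not be trained on")
```

Modelling the rule as an allow-list rather than a block-list means any newly introduced data class is untrainable until someone deliberately adds it.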

Pre-training data of the foundation models

Open-source data sets of the respective foundation models. Provider model cards and training disclosures are linked from our technical documentation and updated with every model upgrade.

HillcrownAI's own fine-tuning data

Licensed content from our publishing, education and research partners and curriculum context produced in-house. Every training batch is reviewed for source provenance, age-appropriateness and JuSchG compliance before it enters a model update.

Children's inference data

Conversations that children have with KinderGPT are not used to train the model. In anonymous use, no conversation data is permanently stored; it remains only within the current session. If parents activate a parent account on the Basis or Premium tier, conversations are surfaced in the parent console for the legal duty of care and deleted within the retention windows defined in the privacy policy.
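The retention logic described above can be sketched as a simple expiry check. The concrete windows below are invented placeholders; the authoritative values are those defined in the privacy policy.

```python
from datetime import datetime, timedelta, timezone

# Hypothetical retention windows (placeholders, see the privacy policy).
RETENTION = {
    "anonymous": timedelta(0),     # session only, never persisted
    "basis": timedelta(days=30),
    "premium": timedelta(days=90),
}

def is_expired(tier: str, stored_at: datetime, now: datetime) -> bool:
    """True once a stored conversation has outlived its retention window."""
    window = RETENTION[tier]
    if window == timedelta(0):
        return True                # anonymous data is never retained at all
    return now - stored_at > window
```

A zero-length window for anonymous use encodes the stated default directly: such data counts as expired the moment it exists outside the live session.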

Parental and oversight mechanisms

Parents and legal guardians sit as the legal and operational layer above every use of KinderGPT. Because the app is installed via the parents' app-store account, consent is structurally ensured before any use. By default KinderGPT can be used anonymously; a parent account with console, topic filters and history export is optional on the Basis and Premium tiers.

Active parental consent (Art. 8 GDPR)

KinderGPT is listed in the app stores with an age rating of 4+. Installation is performed via the Apple or Google account of a legal guardian — children can only install the app after the parents' active approval. Parental consent is therefore structurally ensured before any use.

Anonymous use as default

After installation KinderGPT can be used without an account. No names, e-mail addresses or place of residence are requested. Sessions are isolated, no profiling takes place, and no identification is built up across uses.

Optional parent account (Basis and Premium tier)

Parents who want to use the parent console, topic filters or history export on the Basis or Premium tier can create an account. The account requires only an e-mail address for authentication: no names, no address data, no demographic profiles. Data-minimal in line with Art. 5(1)(c) GDPR (data minimisation).

Parent console

Parents with an active parent account see the full conversation history, topic clusters, escalations and moderation interventions, and can disable individual topics, time windows or features.

Escalation and support chain

On sensitive topics (self-harm, bullying, sexualised content), KinderGPT responds in a pedagogically de-escalating tone and points children to established counselling services in an age-appropriate form.

AI Act mapping and governance

HillcrownAI is built around the obligation regime of the EU AI Act: the transparency obligations for general-purpose AI (GPAI) have applied since 2 August 2025, and the high-risk obligations relevant to child-facing applications apply EU-wide from 2 August 2026.

Risk classification

KinderGPT is classified as an AI system intended for children with elevated risk; all requirements from the chapters on high-risk systems and GPAI transparency are implemented in full.

Documentation and traceability

HillcrownAI maintains complete model and data documentation, a risk-management file and a change log for all production models and moderation rules.

Human oversight

Escalations, moderation hits and unclear cases are handled by a pedagogically trained operations team with clear response deadlines — no full automation on critical topics.

Contact for oversight and procurement

Procurement bodies, supervisory authorities, school and data-protection officers, research partners and parent boards receive our complete model, data and moderation documentation on request — including AI Act mapping, JuSchG risk assessment and parent-flow documentation. Write to partner@hillcrownai.com.
