Most AI EdTech products built in 2024 and 2025 added AI features first, then layered “compliance” on top — usually as a page on the marketing site. We did it the other way round. Before the first AI call could run in CurioPilot, the audit-log layer it runs through had to exist. We called it TraceLayer, and it was the first feature we shipped.
This is why.
What I kept hearing in school discovery calls
In every conversation with a school admin or DPO over the last 18 months, the same question came up — sometimes politely, sometimes not — within the first 15 minutes:
“If a parent comes to me next semester and asks what your AI said to their child, what can I show them?”
Most products in our space have no answer. The AI call happened; the prompt was assembled; the response was returned; the database stored the resulting activity. The intermediate steps — what model was chosen, what was redacted before the prompt left the tenant, who approved the output before it reached the student — none of that was captured. So the answer to the parent was “we don’t have that information.”
That answer is fine for an internal tool. It is unacceptable for a K-12 platform. So we made the answer impossible to give: every AI call in CurioPilot is logged before it can run.
The five gates every AI call passes through
Before any prompt leaves a tenant, TraceLayer runs five checks:
- Consent verified. Is there a current, valid consent record for this child and this AI feature? If not, the call is rejected at the gate. There is no path for an AI feature to run without consent; the function literally won’t compile if you try (see the sketch after this list).
- Redaction applied. Names, emails, school IDs, birthdates, and addresses are stripped from the prompt. The prompt that reaches the model provider is pseudonymous.
- Model selected. Which model, which provider, which fallback chain. Logged with the request.
- Output reviewed. For high-stakes outputs (grading comments, intervention recommendations, parent-facing content), a teacher review is queued. Where review is required, the student never sees AI output that hasn’t been signed off.
- Decision recorded. An immutable entry is written to the per-tenant audit-log shard with everything above plus a signed hash.
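Here is a minimal sketch of what the five gates can look like in code, assuming a TypeScript/Node service. Every name in it (ConsentToken, checkConsent, runGatedCall, the stores, the redaction rules) is illustrative, not our production implementation.

```ts
// Minimal sketch of the five gates; all names and rules are illustrative.
import { createHmac } from "node:crypto";

const SIGNING_KEY = process.env.TRACE_SIGNING_KEY ?? "dev-only-key";

// Gate 1: consent. ConsentToken is branded with a private symbol, so the
// only way to obtain one is checkConsent(). A call site that skips the
// consent check has nothing to pass to runGatedCall() and fails to compile.
const consentBrand: unique symbol = Symbol("consent");
type ConsentToken = {
  readonly [consentBrand]: true;
  readonly childId: string;
  readonly feature: string;
};

const consentStore = new Set<string>(["child-123:math-hints"]); // placeholder

function checkConsent(childId: string, feature: string): ConsentToken | null {
  if (!consentStore.has(`${childId}:${feature}`)) return null; // rejected at the gate
  return { [consentBrand]: true, childId, feature };
}

// Gate 2: redaction. Strip direct identifiers before the prompt leaves the tenant.
function redact(prompt: string): { redacted: string; rulesApplied: string[] } {
  const rules: Array<[string, RegExp]> = [
    ["email", /[\w.+-]+@[\w-]+\.[\w.]+/g],
    ["birthdate", /\b\d{4}-\d{2}-\d{2}\b/g],
  ];
  let redacted = prompt;
  const rulesApplied: string[] = [];
  for (const [name, re] of rules) {
    if (redacted.match(re)) rulesApplied.push(name);
    redacted = redacted.replace(re, `[${name}]`);
  }
  return { redacted, rulesApplied };
}

function pseudonymize(childId: string): string {
  return "p-" + createHmac("sha256", SIGNING_KEY).update(childId).digest("hex").slice(0, 8);
}

const auditShard: Array<Record<string, unknown>> = []; // stand-in for the per-tenant shard

// Gates 3–5: record the model choice, flag high-stakes output for teacher
// review, and append a signed entry before the call is allowed to run.
function runGatedCall(token: ConsentToken, prompt: string, highStakes: boolean) {
  const { redacted, rulesApplied } = redact(prompt);
  const entry = {
    childPseudonym: pseudonymize(token.childId),
    feature: token.feature,
    rulesApplied, // Gate 2, logged
    model: { provider: "example-provider", name: "example-model", fallback: ["example-fallback"] }, // Gate 3
    reviewRequired: highStakes, // Gate 4: teacher sign-off queued if true
    timestamp: new Date().toISOString(),
  };
  const signature = createHmac("sha256", SIGNING_KEY)
    .update(JSON.stringify(entry))
    .digest("hex");
  auditShard.push({ ...entry, signature }); // Gate 5: append-only, signed
  return { redactedPrompt: redacted }; // only now may the model call proceed
}

// Usage: no ConsentToken, no call.
const token = checkConsent("child-123", "math-hints");
if (token) {
  runGatedCall(token, "Help alex@example.com with fractions (born 2014-03-02)", true);
}
```

The branded ConsentToken is what backs the “won’t compile” claim: runGatedCall demands a token that only checkConsent can mint, so skipping the consent check is a type error, not a code-review catch.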
Every entry is searchable, exportable, and persistent. When a parent asks “why did the AI recommend this?” the school admin can answer in 30 seconds.
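Mechanically, that 30-second answer is a filter over the per-tenant shard. A sketch, with hypothetical field names:

```ts
// Hypothetical query over one tenant's audit-log shard; field names are illustrative.
interface SignedEntry {
  tenantId: string;
  childPseudonym: string;
  feature: string;
  timestamp: string; // ISO 8601, so string comparison orders correctly
  signature: string;
  detail: Record<string, unknown>;
}

const shard: SignedEntry[] = []; // stand-in for the per-tenant shard

// "Why did the AI recommend this?" becomes a filter by child, feature, and window.
function findEntries(q: {
  tenantId: string;
  childPseudonym: string;
  feature?: string;
  from: string;
  to: string;
}): SignedEntry[] {
  return shard.filter(
    (e) =>
      e.tenantId === q.tenantId &&
      e.childPseudonym === q.childPseudonym &&
      (!q.feature || e.feature === q.feature) &&
      e.timestamp >= q.from &&
      e.timestamp <= q.to
  );
}

const recent = findEntries({
  tenantId: "district-42",
  childPseudonym: "p-7f3a9c1b",
  feature: "intervention-recommendation",
  from: "2025-03-01T00:00:00Z",
  to: "2025-03-31T23:59:59Z",
});
```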
Why it had to be first, not last
Two reasons.
The first is technical. If you ship AI features before the audit infrastructure, you ship features that can’t be audit-logged retroactively. The right shape for the audit log depends on what you’re logging. Trying to bolt it on after the fact means you log only the easy things (request-response pairs) and miss the parts that matter: consent state, redaction inputs, review decisions. We’ve seen products try this. They end up with audit logs that don’t answer the question the auditor is asking.
The second is cultural. Once a team ships AI features without an audit layer, “just enable AI for this customer without a consent check, we’ll backfill it later” becomes a perfectly reasonable thing for an engineer to type into a Slack channel. By then the compliance posture has already degraded. Building the gate first changes what is even possible to ask.
What it cost us
Six weeks. The first AI feature in CurioPilot shipped six weeks later than it would have without TraceLayer. That is the entire cost. Every feature since then has shipped through TraceLayer with no additional delay; the gate was already there.
We don’t think that cost scales linearly with how long you wait. An AI EdTech product that retrofits a real audit layer onto an existing surface is looking at six months, not six weeks. Easier to do it first.
What this means for buyers
If you’re evaluating an AI EdTech product, ask for a sample audit-log entry (their equivalent of a TraceLayer entry): not a marketing screenshot, an actual exported log line. If the vendor can’t produce one, the audit story isn’t real.
Ours is at /tracelayer. Browse a sample audit log; the export schema is documented in the same place.
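For a sense of shape, a hypothetical exported entry might look like the following; the field names here are invented for illustration, and the schema documented at /tracelayer is the authoritative one.

```json
{
  "entry_id": "01J9X6T2K4",
  "timestamp": "2025-03-11T14:22:07Z",
  "tenant_id": "district-42",
  "child_pseudonym": "p-7f3a9c1b",
  "feature": "intervention-recommendation",
  "consent_record_id": "consent-88412",
  "redaction_rules_applied": ["name", "email", "birthdate"],
  "model": {
    "provider": "example-provider",
    "name": "example-model",
    "fallback_chain": ["example-fallback"]
  },
  "review": {
    "required": true,
    "reviewer_id": "teacher-207",
    "decision": "approved",
    "decided_at": "2025-03-11T15:01:44Z"
  },
  "signature": "hmac-sha256:9b2f41c0"
}
```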
— Moiz, founder, CurioPilot