Every AI decision is logged. Every redaction is recorded.
TraceLayer is the canonical audit spine of CurioPilot. Every AI call writes to ai_decision_logs with the student, model, and redaction report. School admins query it. Parents see plain-language summaries. DPOs use it for evidence.
Most EdTech AI is a black box.
When a parent asks “why did the platform recommend my child read this passage?”, most platforms answer with marketing copy, not a real reason.
When a regulator asks “show us every AI decision affecting student X over the last 90 days”, most platforms have to dig through five log files in three formats.
When a DPO needs to verify that consent gates are actually enforced, most platforms can only show that consent is in the database — not that it was checked at every entry point.
TraceLayer was built to answer all three questions in one query.
Five phases, one audit spine.
G1 — Lockdown
Every AI entry point goes through a consent gate. Every payload sent to AI passes through redactForAi() — a TypeScript brand the compiler refuses to bypass. CI fails if a new AI route doesn't gate consent.
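That gate can be sketched as a TypeScript branded type. Everything here except the name redactForAi() is illustrative (Redacted, PII_KEYS, callModel are assumed names, not the real API); it shows the mechanism, not our implementation:

```typescript
declare const redacted: unique symbol;

// A value carries this brand only if it was produced by redactForAi().
type Redacted<T> = T & { readonly [redacted]: true };

// Fields treated as PII in this sketch.
const PII_KEYS = ["studentName", "studentEmail"];

function redactForAi<T extends Record<string, unknown>>(payload: T): Redacted<T> {
  const copy: Record<string, unknown> = { ...payload };
  for (const key of PII_KEYS) {
    if (key in copy) copy[key] = "[redacted]";
  }
  return copy as Redacted<T>;
}

// The model client only accepts branded payloads, so any call site
// that skipped redactForAi() fails to compile.
function callModel(payload: Redacted<Record<string, unknown>>): string {
  return `sent ${Object.keys(payload).length} fields`;
}

callModel(redactForAi({ studentName: "Ada", answer: "3/4" })); // OK
// callModel({ studentName: "Ada", answer: "3/4" });           // compile error
```

Because the brand is a type-level symbol, there is no runtime object a caller could forge; the only way to obtain a `Redacted<T>` is to go through the redaction function.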
G2 — Audit completeness
Every silent profile mutation — Bloom's promotions, misconception extractions, recommendations, risk-flag crossings — emits a canonical audit row. One correlationId chains the entire submission cascade.
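A minimal sketch of how one correlationId chains a cascade, assuming the field names from the example rows shown further down; emitCascade() itself is a hypothetical helper, not our internal API:

```typescript
import { randomUUID } from "node:crypto";

// Minimal shape of a canonical audit row (subset of the real schema).
interface AiDecisionLog {
  kind: "flow_completed" | "flow_skipped_no_consent" | "flow_failed";
  flowName: string;
  studentId: string;
  summary: string;
  correlationId: string; // shared by every row in one submission cascade
  timestamp: string;
}

// One correlationId is minted per submission and threaded through every
// downstream decision, so the whole cascade is retrievable with one query.
function emitCascade(flowNames: string[], studentId: string): AiDecisionLog[] {
  const correlationId = randomUUID();
  return flowNames.map((flowName) => ({
    kind: "flow_completed" as const,
    flowName,
    studentId,
    summary: `${flowName} completed`,
    correlationId,
    timestamp: new Date().toISOString(),
  }));
}
```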
G3 — Super-admin TraceLayer UI
A queryable dashboard at /super-admin/tracelayer. Filter by student, activity, flow, kind, time range. Click any correlationId to see the full cascade.
G4 — School-admin UI + Parent “Why this?”
Tenant-scoped TraceLayer for school admins. Parent-facing “Why this?” card on activity detail showing redaction count, model used, and a plain-language reason.
G5 — Compliance attestations
Build-time PII attestation script scans every AI route, generates docs/compliance/pii-flow-attestation.md. CI gate fails if a new AI route is ungated. DSAR export now includes the TraceLayer spine.
Real entries. Real redactions.
Two example rows from the audit spine — one a successful AI call, one a flow that the platform refused to run because consent wasn’t granted.
```json
{
  "kind": "flow_completed",
  "flowName": "gradeShortAnswerPrompt",
  "studentId": "[anonymised]",
  "activityId": "act-equivalent-fractions-007",
  "assignmentId": "asg-2026-04-15-section5a-007",
  "summary": "Short-answer graded by AI rubric",
  "model": { "provider": "googleai", "model": "gemini-2.5-flash" },
  "usage": { "inputTokens": 412, "outputTokens": 48, "totalTokens": 460 },
  "latencyMs": 1340,
  "redactionReport": { "paths": ["studentName", "studentEmail"], "redactedCount": 2 },
  "correlationId": "c187b3a4-…-da7f1712",
  "timestamp": "2026-04-15T14:32:11.000Z"
}
```

```json
{
  "kind": "flow_skipped_no_consent",
  "flowName": "writingCoachAnalysis",
  "studentId": "[anonymised]",
  "activityId": null,
  "summary": "Skipped because student is under 13 and COPPA consent has not been granted",
  "redactionReport": { "paths": [], "redactedCount": 0 },
  "correlationId": "c187b3a4-…-fc0921bb",
  "timestamp": "2026-04-15T14:35:42.000Z"
}
```

- redactionReport.paths — names + emails redacted before transmission
- model.model — which provider answered: Gemini, GPT, Claude
- flow_skipped_no_consent — even non-events are logged
- correlationId — click to see the entire cascade
Different views for different roles.
Super-admin
For platform ops + investigation
- Cross-tenant view
- Every kind of decision
- Filter by tenant, student, flow, kind, model, time
- Cost analytics + token usage
School admin
For day-to-day compliance
- Tenant-scoped view
- Same view, scoped to your school
- Pull DSAR evidence in seconds
- Audit log retention: 7 years
Parent
For trust + conversation
- Per-child, per-activity view
- Plain-language summary
- “Why this?” card on activities
- Redaction count + model used — no raw model output
What TraceLayer stands for, by regulation.
- Article 15 (right of access): TraceLayer rows are included in the DSAR export.
- Article 17 (right to erasure): TraceLayer rows for the deleted subject are purged on request.
- Article 22 (automated decision-making): every AI decision is logged with rationale.
Verifiable parental consent for under-13s is enforced at the gate. Consent grants and revocations are audit-logged with timestamp + actor.
A records-review endpoint exports all student records on demand, including the TraceLayer cascade. Audit log retention: 7 years.
DPA enforcement at every AI entry point. The school's DPA acceptance is recorded with version + signatory.
Same posture as GDPR; export and erasure flows are jurisdiction-neutral.
An attestation that fails the build, not a marketing claim.
The PII boundary isn’t a runtime check — it’s compile-time. Every AI route in our codebase must call redactForAi() before any payload reaches a model.
Our build script scans the codebase, generates an attestation report, and fails CI if any new route is ungated. The report is committed to the repo; your IT team can request it.
```console
$ npm run tracelayer:attest:check
✓ TraceLayer PII attestation: clean.
  52 sites · 46 gated · 6 allow-listed · 0 ungated
```
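The core of such a check can be sketched as a pure function over route sources. This is an illustrative assumption, not our script: the regex-based detection, isGated(), and attest() are all hypothetical (real detection would likely be AST-based):

```typescript
// A route counts as gated if its source calls redactForAi()
// (simplified: a real scanner would parse the AST, not grep).
function isGated(source: string): boolean {
  return /\bredactForAi\s*\(/.test(source);
}

interface Attestation {
  sites: number;
  gated: number;
  allowListed: number;
  ungated: string[]; // CI fails when this is non-empty
}

// routes: map of route path -> source text; allowList: routes exempt by review.
function attest(routes: Record<string, string>, allowList: Set<string>): Attestation {
  const paths = Object.keys(routes);
  const gated = paths.filter((p) => isGated(routes[p]));
  const rest = paths.filter((p) => !isGated(routes[p]));
  const allowListed = rest.filter((p) => allowList.has(p));
  const ungated = rest.filter((p) => !allowList.has(p));
  return { sites: paths.length, gated: gated.length, allowListed: allowListed.length, ungated };
}
```

In CI, a non-empty ungated list exits non-zero and the build fails; that is what turns the attestation from a claim into a gate.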
TraceLayer, in detail.
How long are TraceLayer entries kept?
90 days for ai_decision_logs (Firestore TTL). 7 years for audit_logs. Both retention windows are configurable on the Campus plan.
What if AI is down?
TraceLayer logs the failure as a flow_failed decision with the error reason. Cascade fallbacks try the next provider; if all fail, the user sees a graceful error and the failure is logged.
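The fallback cascade can be sketched like this; Provider, callWithFallback(), and the logDecision callback are illustrative names under assumed semantics, not the real client:

```typescript
interface Provider {
  name: string;
  call: (prompt: string) => Promise<string>;
}

// Try each provider in order. Every failure is logged as a flow_failed
// decision; null means all providers failed and the caller should show
// a graceful error (that terminal failure is logged too, upstream).
async function callWithFallback(
  providers: Provider[],
  prompt: string,
  logDecision: (row: { kind: "flow_failed"; provider: string; error: string }) => void,
): Promise<string | null> {
  for (const provider of providers) {
    try {
      return await provider.call(prompt);
    } catch (err) {
      logDecision({ kind: "flow_failed", provider: provider.name, error: String(err) });
    }
  }
  return null;
}
```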
Can a school export TraceLayer data?
Yes. The school admin can export their tenant's full TraceLayer history in JSON or CSV. Per-student exports are part of the GDPR Article 15 flow; bulk exports are admin-only.
Do you store the raw AI prompts?
We store the redacted input (after the PII boundary) and the model output. We don't store the system prompts (platform IP), but we log which prompt version was used, so you can reconstruct what the AI saw.
What about model-side logging by the AI provider?
We use enterprise tiers from Google, OpenAI, and Anthropic, all of which contractually exclude our data from model training. Provider-side logs are governed by their data-processing agreements, linked in our DPA.
How do parents see TraceLayer?
Parents see a “Why this?” card on every AI-recommended activity, not the raw audit log. The card explains the recommendation in plain language and surfaces the redaction count + model used. They can drill into the full TraceLayer for their child via DSAR.
Can we customise what gets logged?
No. The schema is fixed by design — predictability is the point. Custom fields would defeat the audit-trail guarantee. Your tenant owns its TraceLayer rows; you can export and post-process.
See TraceLayer live in 30 minutes.
We’ll set up a demo with you, your IT, and your DPO. We’ll show TraceLayer scoped to a real school, run a live data export, and walk through your DPA.
Or read the compliance overview first.