Personalisation by Bloom’s level — not by vibes.
Most “personalised” ed-tech sorts students into bins and calls it a day. CurioPilot builds a per-subject mastery model — the Learning Twin — and calibrates every activity, hint, and recommendation against it.
The bin model is broken.
Most platforms put a student into a bucket — “Grade 5 maths” — and serve them the average activity for that bucket. That works for the median student. Everyone else gets bored or lost.
Real personalisation has to model what the student actually understands at the topic level (not the grade level), what misconceptions they’re carrying, and what reading level the content needs to land at. Then it has to calibrate every next activity against that model.
That’s the Learning Twin. It’s not a marketing layer; it’s the substrate the rest of the AI features run on.
The AI loop, in one diagram.
Five nodes. The student does work, the AI extracts misconceptions, the Twin updates, recommendations recalculate, the next activity calibrates. Every node logs to TraceLayer.
- 01 Student takes activity: submission + hint requests captured.
- 02 AI extracts misconceptions: Bloom's classifier + misconception detector run on the submission.
- 03 Subject profile updates: per-topic mastery + Lexile + accommodation context refresh.
- 04 Recommendations recalculate: the next 3 topics surface with a "why" tooltip.
- 05 Next activity calibrates: difficulty, format, and reading level pinned to the student's zone of proximal development (ZPD).
Closes the loop → back to step 1 with a calibrated next activity. Every iteration writes to TraceLayer.
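As a rough illustration, one iteration of the five-node loop can be sketched as a state-update cycle. This is a hypothetical sketch, not CurioPilot's actual API: every name here (`LearningTwin`, `run_loop_iteration`, the `classify`/`detect`/`recommend`/`calibrate`/`log` hooks) is illustrative.

```python
from dataclasses import dataclass, field

@dataclass
class LearningTwin:
    mastery: dict = field(default_factory=dict)         # topic -> Bloom's level (1-6)
    misconceptions: list = field(default_factory=list)  # tagged error categories

def run_loop_iteration(twin, submission, classify, detect, recommend, calibrate, log):
    """One pass through the loop: extract -> update -> recommend -> calibrate."""
    log("submission", submission)                # 01: work + hint requests captured
    found = detect(submission)                   # 02: misconception detector
    level = classify(submission)                 # 02: Bloom's classifier
    twin.mastery[submission["topic"]] = level    # 03: subject profile updates
    twin.misconceptions.extend(found)
    next_topics = recommend(twin)[:3]            # 04: top-3 recommendations
    activity = calibrate(twin, next_topics[0])   # 05: next activity pinned to ZPD
    log("decision", {"topics": next_topics, "activity": activity})
    return activity                              # closes the loop: back to step 1
```

Each `log` call stands in for a TraceLayer write, which is what makes every decision in the loop reviewable after the fact.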
What runs on the Twin.
Learning Twin
Per-child, per-subject mastery model. Bloom's-level progression bars per topic, misconception list grouped by category, recommended next topics with “why” tooltips. Updated as the student submits work; teachers and parents see the same source of truth.
AI Quiz Generator
Multiple-choice, fill-in-the-blank, and short-answer questions generated on any topic, with explanations grounded in curriculum standards. All 12 question types supported across the Free and Premium tiers.
Spark — hint-giver
Subject-scoped, Bloom's-aware. Won't answer a maths question with a history fact. Won't hand the answer to your child. Every Spark exchange is logged in TraceLayer so parents and teachers can review.
AI Writing Coach
Long-form essay grading and feedback against a rubric. Detailed strengths / improvements breakdown plus a numeric score. Teacher-overridable, always — every AI-graded essay is reviewable before the score is shared.
Personalised Learning Paths
Step-by-step learning journeys generated from the Learning Twin. Updated as the student progresses; teachers can edit, accept, or reject AI-suggested next steps. Available on Starter+ tier.
Spark, the Parent Guide, and the Teacher Coach.
One architecture, three audiences. All three pass through the same consent gates. All three log to TraceLayer. Misconceptions detected during chat feed back into the Twin.
Spark
Hint-giver, never answer-giver. Subject-scoped, Bloom's-aware. Will walk through reasoning step by step when a student is genuinely stuck — but won't write the essay or do the maths.
Parent Guide
Plain-English chat for parents. "What did Maya struggle with this week?" "Why is this activity recommended?" One-click actions: schedule a check-in, switch to a calmer topic, change the consent state.
Teacher Coach
Friday digest with six action cards ranked by impact. Drafts reteach lessons, parent updates, conference briefs. Teacher edits + approves; nothing leaves the platform without a human signing off.
What our AI doesn’t do.
Every “won’t” below is enforced in code, not just policy. Most are checked at build time: code that violates them refuses to compile.
- Send student names, emails, or birthdays to AI providers — the redactor refuses to compile if you try.
- Train models on your students' data. (Enterprise tier contracts forbid this on the provider side too.)
- Replace teacher judgement on grading. Every AI score is overridable; the teacher's score wins.
- Personalise without consent. The DPA + COPPA gates run before any prompt is assembled.
- Make decisions about students that aren't reviewable. Every AI decision lives in TraceLayer, forever.
- Use third-party trackers in the AI surface — no behavioural ad pixels, no cross-site fingerprinting.
- Provide AI features without redaction. The PII boundary refuses to compile a route otherwise.
Each item above is enforced by TraceLayer + the build-time PII attestation. Read the full compliance posture at /compliance.
How the personalisation works.
How does the Learning Twin update?
Every submission a student makes — quiz answers, essay drafts, Spark conversations — runs through a Bloom's classifier and a misconception detector. The Twin updates within seconds. Nothing is shared off-platform.
What's a misconception, in your model?
A misconception is a category of error, not a single wrong answer. "Treats numerator and denominator independently" is one. "Reads 0.4 as zero point four without place value" is another. We tag misconceptions so teachers can target them rather than chasing every individual mistake.
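A minimal sketch of what category-level tagging looks like, under stated assumptions: the rule table, tag names, and field names below are illustrative, not the actual detector, which the page describes only at this level of detail.

```python
MISCONCEPTION_RULES = [
    # (tag, predicate over one graded answer)
    ("independent-numerator-denominator",
     lambda a: a["topic"] == "fractions" and a["error"] == "added parts separately"),
    ("decimal-no-place-value",
     lambda a: a["topic"] == "decimals" and a["error"] == "read digits aloud"),
]

def tag_misconceptions(answers):
    """Group individual mistakes under category tags for targeted reteaching."""
    tags = {}
    for a in answers:
        for tag, matches in MISCONCEPTION_RULES:
            if matches(a):
                tags.setdefault(tag, []).append(a["id"])
    return tags
```

The payoff is in the output shape: a teacher sees two categories to reteach rather than four separate wrong answers to chase.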
Does Spark just give the answer when a student presses?
No. Spark is configured to give hints, ask the next question, and surface the working — not the answer. If a student is genuinely stuck after multiple hints, Spark will walk through the reasoning step by step. The teacher can review every Spark exchange in TraceLayer.
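The escalation policy described above can be pictured as a hint ladder. This is a sketch under stated assumptions; the rung names and the `next_hint` helper are hypothetical, and the only property it demonstrates is the one the page promises: the ladder tops out at worked reasoning, never the answer.

```python
HINT_LADDER = ["nudge", "guiding question", "show the working", "step-by-step reasoning"]

def next_hint(previous_hints):
    """Escalate one rung per request; cap at worked reasoning, never the answer."""
    rung = min(len(previous_hints), len(HINT_LADDER) - 1)
    return HINT_LADDER[rung]
```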
How is reading-comprehension difficulty calibrated?
The passage's Lexile measure is matched against the student's current Lexile range from the Learning Twin. Premium Family unlocks fully Lexile-adjusted text; Free Family uses the basic three-tier difficulty.
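A sketch of that matching step, with assumed field names (`lexile`) and an assumed tie-break rule; the real selection logic is not specified on this page.

```python
def select_passage(passages, student_range):
    """Pick the passage whose Lexile measure best fits the student's range."""
    low, high = student_range
    in_range = [p for p in passages if low <= p["lexile"] <= high]
    if in_range:
        # Prefer the top of the range: challenging but within reach (ZPD).
        return max(in_range, key=lambda p: p["lexile"])
    # Fall back to the passage closest to the range if none lands inside it.
    return min(passages, key=lambda p: min(abs(p["lexile"] - low),
                                           abs(p["lexile"] - high)))
```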
See the Learning Twin live.
A 30-minute walkthrough with you, your IT team, and your DPO. We’ll build a Twin from a sample student dataset and show you how it updates in real time.