April 16, 2026 · 5 min read

Best Practices for Building AI Tutoring Agents

AI tutors that actually teach — not just answer questions — need to remember what each student knows and where they struggle, and adapt in real time. That requires per-student persistent state.

1. Maintain per-student learning state

A tutor that doesn't remember what the student learned yesterday is just a chatbot. Store skill levels, completed exercises, common mistakes, and learning velocity per student. Use this to adapt difficulty and choose what to teach next.

const progress = ctx.db.get("progress") || {
  level: 1,
  masteredTopics: [],
  struggles: [],
  sessionsCompleted: 0,
};

// Adapt the exercise based on their history
if (progress.struggles.includes("recursion")) {
  // Offer a simpler recursion problem with more scaffolding
}

2. Use full conversation history as context

Don't just pass the last 5 messages to the LLM. Search across the student's entire history to find relevant past interactions. If they asked about "for loops" last week and now ask about "iterating over arrays," the tutor should connect the concepts.

// Search past conversations for relevant context
const pastInteractions = ctx.search.query(currentQuestion);
// Include relevant past Q&A in the LLM prompt
// "Last week you learned about for loops. Arrays use the same concept..."

3. Let students run code in isolation

For coding tutors, students need to write and run code. Each student needs their own sandbox — their own filesystem, their own shell, their own installed packages. If a student writes an infinite loop, it should only affect their environment.

The tutor agent should run the student's code, check the output against expected results, and give feedback — all within the student's isolated environment.

4. Track mistakes, not just answers

When a student gets something wrong, store the mistake pattern — not just "wrong answer." Was it a syntax error? A logic error? A conceptual misunderstanding? Over time, these patterns reveal the student's actual knowledge gaps.

Use this data to generate targeted exercises. A student who repeatedly confuses == and === needs a different exercise than one who struggles with async/await.
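A minimal sketch of this idea: bucket each failed attempt into a category and tally it per topic. The categories and heuristics below are illustrative placeholders, not a real error taxonomy.

```javascript
// Classify a failed attempt: syntax error, runtime error, or logic error
// (ran cleanly but produced the wrong output). Returns null if correct.
function classifyMistake(errorMessage, passedTests) {
  if (/SyntaxError/.test(errorMessage)) return "syntax";
  if (errorMessage) return "runtime";
  if (!passedTests) return "logic";
  return null;
}

// Tally mistakes per topic + category so patterns surface over time.
function recordMistake(progress, topic, category) {
  if (!category) return progress;
  const key = `${topic}:${category}`;
  progress.mistakes[key] = (progress.mistakes[key] || 0) + 1;
  return progress;
}

const progress = { mistakes: {} };
recordMistake(progress, "recursion", classifyMistake("SyntaxError: ...", false));
recordMistake(progress, "recursion", classifyMistake("", false));
```

Once the tallies exist, "three logic errors on recursion this week" is a far more useful signal for exercise selection than "three wrong answers."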

5. Make sessions resumable

Students study in short bursts — 10 minutes on the bus, 30 minutes at night, 5 minutes during lunch. Each session must pick up exactly where the last one ended. No "let's start from the beginning."

Auto-pause the environment when the student leaves. Resume in under a second when they return. All state — files, progress, conversation — must survive the pause.
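The state side of this can be sketched as one serializable snapshot per student, so resuming is just a load. In production the filesystem and running environment would be paused by the platform rather than serialized to JSON; the field names here are hypothetical.

```javascript
// Capture everything a session needs into one serializable object.
function snapshotSession(session) {
  return JSON.stringify({
    progress: session.progress,
    openExercise: session.openExercise,
    conversation: session.conversation,
  });
}

// Resuming is a single load: the student lands exactly where they left off.
function resumeSession(snapshot) {
  return JSON.parse(snapshot);
}

const saved = snapshotSession({
  progress: { level: 3 },
  openExercise: "fizzbuzz",
  conversation: [{ role: "user", content: "hint please" }],
});
const session = resumeSession(saved);
```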

6. Generate exercises, don't hardcode them

A bank of 50 exercises runs out fast. Instead, generate exercises dynamically based on the student's current level and knowledge gaps. The LLM creates the exercise, the sandbox runs the student's solution, the agent checks correctness.
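The selection step can be sketched like this: prioritize recorded struggles, fall back to unlocked-but-unmastered topics, and build the generation prompt from the student's record. The prompt wording and field names are illustrative assumptions.

```javascript
// Pick the next topic: struggles first, then unfinished unlocked topics.
function nextExerciseTopic(progress) {
  if (progress.struggles.length > 0) return progress.struggles[0];
  return progress.unlocked.find((t) => !progress.masteredTopics.includes(t));
}

// Build the LLM prompt for dynamic exercise generation.
function buildExercisePrompt(progress) {
  const topic = nextExerciseTopic(progress);
  return (
    `Generate one coding exercise on "${topic}" for a level-${progress.level} student. ` +
    `Include starter code and the expected output.`
  );
}

const prompt = buildExercisePrompt({
  level: 2,
  struggles: ["recursion"],
  unlocked: ["arrays", "recursion"],
  masteredTopics: ["arrays"],
});
```

The LLM's generated exercise then flows straight into the sandbox-and-check loop from section 3, so correctness is verified by execution, not by the model.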

7. Separate learning tracks

Store the curriculum and student's position in it as structured data — not in conversation history. This way you can show progress bars, unlock new topics, and let the student jump between subjects without losing context.

ctx.db.set("curriculum", {
  current: "arrays",
  completed: ["variables", "loops", "conditionals"],
  unlocked: ["arrays", "functions"],
  locked: ["objects", "classes", "async"],
});

Build with OnCell

Per-student environments with persistent state, code execution, and 200ms resume.