LangChain Breaks Down How AI Agents Learn
Harrison Chase published a framework for continual learning in AI agents organized across three layers - model weights, harness code, and external context - arguing that most teams fixate on the first while the real leverage sits in the other two. Harness-level learning uses execution traces to suggest code improvements, while context-level learning updates instructions, skills, and memory at agent, user, or organization scope. The concrete mapping to Claude Code is useful: the model is Sonnet, the harness is Claude Code itself, and the context lives in CLAUDE.md and /skills directories. LangSmith collects the traces that power all three learning flows. The leaked Claude Code source revealed an autoDream system doing exactly this - offline insight extraction from session traces - and Karpathy's LLM wiki proposal takes a similar approach to compounding knowledge through persistent artifacts rather than per-query retrieval.
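The context-level loop described above can be sketched in a few lines. This is an illustrative toy, not LangChain's or Claude Code's actual implementation: all class and function names (`Trace`, `extract_insights`, `write_context`) and the per-scope file naming are hypothetical, and the "insight extraction" is reduced to a trivial filter. The point is the shape of the loop: mine session traces offline, then append durable lessons to a persistent, scoped context artifact (a CLAUDE.md-style memory file) rather than retrieving per query.

```python
# Hypothetical sketch of context-level learning; names are invented for illustration.
import tempfile
from dataclasses import dataclass
from pathlib import Path

@dataclass
class Trace:
    scope: str    # "agent", "user", or "org" -- the three scopes in the framework
    outcome: str  # "success" or "failure"
    note: str     # free-text lesson distilled from the session

def extract_insights(traces: list[Trace]) -> list[Trace]:
    # Stand-in for offline insight extraction: here, just keep failure lessons.
    return [t for t in traces if t.outcome == "failure"]

def write_context(traces: list[Trace], root: Path) -> None:
    # Append each insight to a per-scope memory file (a CLAUDE.md-style artifact),
    # so future sessions load it as context instead of rediscovering it.
    for t in extract_insights(traces):
        path = root / f"{t.scope}-memory.md"
        existing = path.read_text() if path.exists() else ""
        path.write_text(existing + f"- {t.note}\n")

root = Path(tempfile.mkdtemp())
write_context(
    [
        Trace("user", "failure", "User prefers diffs over full-file rewrites"),
        Trace("org", "success", "Deployment went fine"),
    ],
    root,
)
print((root / "user-memory.md").read_text())
```

In a real system, the filter would be an LLM pass over LangSmith-style traces, and the memory files would be the CLAUDE.md and /skills artifacts the post maps to; only the persist-don't-retrieve pattern carries over from this sketch.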