What Happens When You Let AI Design Its Own Memory
Zak El Fassi, a former Meta messaging architect, ran an experiment in which he asked his AI agent how it wanted to structure its own memory system, and the results were concrete. Recall accuracy jumped from 60% to 93% after the agent self-diagnosed that it could retrieve events and timestamps perfectly but failed at remembering decision rationale, then proposed restructuring its markdown memory files so reasoning sat adjacent to facts. The whole fix cost $2 and 45 minutes.

The underlying system runs 18,000 chunks across SQLite with Gemini embeddings, plus cron-based scout jobs that promote relevant context every 29 minutes.

The philosophical claim is deliberately provocative: whether AI preferences are "real" matters less than the measurable improvement gained by treating them as if they are. ChatGPT, Claude, and Gemini all ship memory features now, but none lets the agent participate in designing the architecture itself.
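The core fix described above — storing decision rationale adjacent to the fact it explains, so a single retrieval returns both — can be sketched in a few lines. This is a hypothetical illustration, not El Fassi's actual code: the table schema, class name, and similarity logic are assumptions, and the real system uses Gemini embeddings where this sketch takes precomputed vectors.

```python
import sqlite3
import json
import math

def cosine(a, b):
    # Plain cosine similarity; the real system would compare Gemini embeddings.
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

class MemoryStore:
    """Sketch of a chunked SQLite memory where each fact carries its rationale."""

    def __init__(self, path=":memory:"):
        self.db = sqlite3.connect(path)
        self.db.execute(
            "CREATE TABLE IF NOT EXISTS chunks ("
            "id INTEGER PRIMARY KEY, fact TEXT, rationale TEXT, embedding TEXT)"
        )

    def add(self, fact, rationale, embedding):
        # Rationale sits in the same row as the fact, not in a separate file,
        # so recalling an event also recalls why the decision was made.
        self.db.execute(
            "INSERT INTO chunks (fact, rationale, embedding) VALUES (?, ?, ?)",
            (fact, rationale, json.dumps(embedding)),
        )

    def recall(self, query_embedding, k=3):
        # Brute-force scan; a cron "scout job" could run this periodically
        # to promote the top-k chunks into the agent's active context.
        rows = self.db.execute("SELECT fact, rationale, embedding FROM chunks")
        scored = [
            (cosine(query_embedding, json.loads(emb)), fact, rationale)
            for fact, rationale, emb in rows
        ]
        scored.sort(key=lambda t: t[0], reverse=True)
        return [(fact, rationale) for _, fact, rationale in scored[:k]]
```

A usage sketch: adding a decision with its rationale, then recalling it by a nearby query vector returns both fields together, which is the property the agent's self-diagnosis identified as missing.

```python
store = MemoryStore()
store.add("Chose SQLite over Postgres", "single-file, zero-ops", [1.0, 0.0])
store.add("Standup moved to 3pm", "calendar conflict", [0.0, 1.0])
fact, rationale = store.recall([0.9, 0.1], k=1)[0]
```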