LLM Personalization Changes Less Than You Think
Joshua Budman tested the same query across three ChatGPT accounts - logged out, his own, and his wife's - and found that while the raw text differed completely, the substance barely moved. Same show recommendations, same list structure, same categorical framing. He builds a case that LLM personalization operates on a "shared core, variable margin" model: the underlying answer archetype stays fixed because identical model weights, overlapping retrieval neighborhoods, and entropy-minimizing decoding all push outputs toward the same center of gravity. The practical takeaway matters for anyone thinking about brand visibility in AI search: optimizing for LLM answers isn't about chasing individual response variations, it's about getting into the semantic core that persists across all of them. A useful corrective to both the "every answer is unique" panic and the "it's just SEO again" dismissal that have dominated the GEO (generative engine optimization) conversation.
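To see why entropy-minimizing decoding pulls personalized outputs toward the same answer, here's a minimal Python sketch - not from Budman's post, and the logit values and per-user shift are invented for illustration. Temperature scaling sharpens the next-token distribution, so a small personalization nudge to the logits rarely changes which token wins:

```python
import numpy as np

def softmax(logits, temperature=1.0):
    """Convert logits to probabilities, sharpened as temperature drops."""
    z = np.asarray(logits, dtype=float) / temperature
    z -= z.max()  # subtract max for numerical stability
    p = np.exp(z)
    return p / p.sum()

# Hypothetical next-token logits at one decoding step (same model weights).
base = np.array([4.0, 3.2, 2.5, 1.0])
# A small per-user perturbation standing in for personalization context.
personalized = base + np.array([0.0, 0.3, -0.2, 0.1])

for temp in (1.0, 0.5):
    p_base = softmax(base, temp)
    p_user = softmax(personalized, temp)
    print(f"T={temp}: argmax base={p_base.argmax()} user={p_user.argmax()}, "
          f"top-prob base={p_base.max():.2f} user={p_user.max():.2f}")
```

In this toy setup both distributions pick the same top token at T=1.0, and at T=0.5 both concentrate most of their mass on it - the "shared core" in miniature: personalization shifts the surface probabilities, not the argmax.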