OpenAI Publishes Child Safety Blueprint for AI
OpenAI released a Child Safety Blueprint developed with NCMEC, the Attorney General Alliance, and the attorneys general of North Carolina and Utah, outlining how AI labs should handle detection, reporting, and prevention of AI-generated child sexual exploitation material. The timing tracks with an Internet Watch Foundation report documenting more than 8,000 cases of AI-generated CSAM in the first half of 2025 alone, a 14 percent year-over-year increase.

The blueprint calls for legislative updates that explicitly cover AI-generated abuse material and for tighter integration of safeguards into model pipelines. OpenAI frames this as a proactive, cross-sector effort, but pressure from state attorneys general and high-profile incidents involving minors and AI chatbots made some form of public response unavoidable.