aifeed.dev the frontpage of AI

Meta-Harness Automates LLM System Optimization

arxiv.org | ksl

Researchers from Stanford and UW-Madison introduced Meta-Harness, a system that automatically optimizes the code wrapping LLMs (the harness that handles how information is stored, retrieved, and presented to the model) rather than the model weights themselves. On text classification it gained 7.7 points over the state of the art while cutting context tokens by 75%. A single discovered harness improved accuracy on 200 IMO-level math problems by 4.7 points across five different models it was never optimized for, suggesting that harness improvements transfer across models. The approach uses an agentic proposer that searches over harness source code, guided by scores and execution traces exposed through a filesystem. The timing is notable: the recent Claude Code leak showed how much performance comes from harness engineering rather than raw model capability, and this paper formalizes that intuition into an automated optimization loop.
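To make the "automated optimization loop" concrete, here is a minimal sketch of the idea: a proposer mutates the harness, each candidate is scored on a task, and improvements are kept. Everything here is illustrative, not from the paper — the "harness" is reduced to a single config knob (a context-truncation limit), the "model" is a toy that answers correctly only if the answer survives truncation, and the proposer is a random perturbation rather than an LLM agent.

```python
import random

def run_harness(config, examples):
    """Score a harness config on a toy retrieval task: the harness
    keeps only the first `context_limit` tokens of each document."""
    correct = 0
    for doc, answer in examples:
        context = doc.split()[: config["context_limit"]]
        # Toy "model": answers correctly iff the answer survives truncation.
        if answer in context:
            correct += 1
    return correct / len(examples)

def propose(config, rng):
    """Stand-in for the agentic proposer: perturb the harness config."""
    new = dict(config)
    new["context_limit"] = max(1, config["context_limit"] + rng.choice([-2, -1, 1, 2]))
    return new

def optimize(examples, steps=200, seed=0):
    """Greedy outer loop: propose, score, keep strict improvements."""
    rng = random.Random(seed)
    best = {"context_limit": 1}
    best_score = run_harness(best, examples)
    for _ in range(steps):
        cand = propose(best, rng)
        score = run_harness(cand, examples)
        if score > best_score:
            best, best_score = cand, score
    return best, best_score
```

The real system searches over harness source code and reads execution traces from a filesystem rather than tweaking one numeric knob, but the control flow (propose, evaluate, keep the best) is the same shape.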
