"Code search limits coding capabilities. Moving it out to a specialized model wins."
TLDR - WarpGrep v2 raises SWE-Bench Pro scores (+2.1 Opus, +3.7 MiniMax) while using ~17% fewer input tokens and 13% fewer output tokens, running 12% faster, and costing 18% less. The median WarpGrep codebase search takes 5s end to end, compared to 75s for the Claude Code Explore subagent.
In everyday use on production repos these numbers improve across the board, with runs roughly 40% faster.
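The thesis in the opening line is architectural: instead of the main model burning turns on its own grep/read loops, it delegates exploration to a single search tool backed by a specialized retrieval model. Below is a minimal sketch of what that delegation could look like; the tool name, schema, and endpoint are hypothetical illustrations, not WarpGrep's actual API:

```python
# Hypothetical sketch: the main agent advertises one opaque search tool
# instead of iterating grep/read itself. All names and the endpoint are
# invented for illustration.
import requests

WARPGREP_TOOL = {
    "name": "warpgrep_search",  # hypothetical tool name
    "description": "Search the codebase with a specialized retrieval model. "
                   "Returns ranked file paths and relevant snippets.",
    "input_schema": {  # Anthropic-style tool schema; adapt for other agents
        "type": "object",
        "properties": {
            "query": {
                "type": "string",
                "description": "Natural-language or symbol-level search query",
            }
        },
        "required": ["query"],
    },
}

def warpgrep_search(query: str, repo_root: str) -> list[dict]:
    """One round trip to the search model replaces the many grep/read
    turns the main model would otherwise spend exploring the repo."""
    resp = requests.post(
        "https://search.example.com/v2/query",  # hypothetical endpoint
        json={"query": query, "repo": repo_root},
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()["results"]  # e.g. [{"path": ..., "snippet": ...}]
```

Breaking the TLDR numbers down: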
Input tokens: ~17% fewer (Opus, Codex, MiniMax, etc.)
Output tokens: 13% fewer (a result of fewer turns)
Turns (MiniMax):
Avg turns per task: 157 → 135
Runtime (Opus):
Opus 4.6: 1483s
Opus 4.6 + WarpGrep: 1305s
178s saved (12% faster)
Cost (Opus):
Opus 4.6: $3.06
Opus 4.6 + WarpGrep: $2.51
$0.55 saved (18% cheaper)
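The headline percentages fall straight out of the before/after figures; a quick check of the arithmetic, with values copied from the breakdown above:

```python
# Recompute the savings percentages from the raw before/after numbers.
pairs = {
    "avg turns per task (MiniMax)": (157, 135),
    "runtime in seconds (Opus)": (1483, 1305),
    "cost in dollars (Opus)": (3.06, 2.51),
}
for metric, (before, after) in pairs.items():
    saved = before - after
    print(f"{metric}: {before} -> {after}, saved {saved:g} ({saved / before:.1%})")
# avg turns per task (MiniMax): 157 -> 135, saved 22 (14.0%)
# runtime in seconds (Opus): 1483 -> 1305, saved 178 (12.0%)
# cost in dollars (Opus): 3.06 -> 2.51, saved 0.55 (18.0%)
```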