Agentic Coding + Evals
Recommended: Specialist scouts evaluate AI-written code, a judge dedupes findings, repair agents compete, and a receipt captures the proof.
Hackathon strategy
This page keeps the internal lane choice out of the demo surface. The product pitch is Scout, the patch tournament for AI-written code; the build strategy combines Agentic Coding with Building Evals.
Pathway choices
Use this for planning and judging narrative, not as the first screen of the app.
Agentic Coding + Evals
Recommended: Specialist scouts evaluate AI-written code, a judge dedupes findings, repair agents compete, and a receipt captures the proof.
Multimodal Intake
Available: Voice or screen-recorded bug reports can feed the same evaluation pipeline after the core loop is stable.
Domain Agents
Available: The chosen domain is agentic engineering itself, review infrastructure for code produced by coding agents.
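The recommended lane's loop (scouts find issues, a judge dedupes them, repair agents compete, a receipt records the outcome) can be sketched as plain data flow. This is an illustrative sketch only; the type and function names (`Finding`, `dedupe`, `receipt`) are assumptions, not Scout's actual code.

```typescript
// Hedged sketch of the Scout loop; all names here are illustrative.
type Finding = { file: string; rule: string; detail: string };
type Patch = { findingKey: string; author: string; passesTests: boolean };

// Judge step: dedupe scout findings by (file, rule) so repair agents
// compete on one canonical finding each.
function dedupe(findings: Finding[]): Finding[] {
  const seen = new Map<string, Finding>();
  for (const f of findings) {
    const key = `${f.file}:${f.rule}`;
    if (!seen.has(key)) seen.set(key, f);
  }
  return [...seen.values()];
}

// Tournament step: among competing patches for a finding, keep the first
// one that passes tests; the receipt pairs each finding with its winner.
function receipt(findings: Finding[], patches: Patch[]) {
  return dedupe(findings).map((f) => {
    const key = `${f.file}:${f.rule}`;
    const winner =
      patches.find((p) => p.findingKey === key && p.passesTests) ?? null;
    return { finding: f, winner };
  });
}
```

A finding with no passing patch simply carries a null winner, so the receipt still documents what the scouts found.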
Seeded demo mode is deterministic, which keeps recordings reliable. Live repo review calls the OpenAI API from /api/review, and live patch generation calls it from /api/fix. Once hackathon credits are available, set OPENAI_API_KEY and optionally OPENAI_MODEL. To maximize prompt-cache hits, Scout keeps static prompt rules ahead of dynamic repo context and surfaces cached-token usage whenever the live model response reports it.
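The prompt ordering described above can be sketched as follows. This is a minimal sketch, not Scout's actual route code: `buildReviewMessages` and `cachedTokens` are hypothetical names, though the `usage.prompt_tokens_details.cached_tokens` field is what OpenAI chat responses report when part of the prompt was served from cache.

```typescript
// Hedged sketch: how a route like /api/review might order prompt content
// so identical static prefixes can hit the provider's prompt cache.
type ChatMessage = { role: "system" | "user"; content: string };

// Static rules go first (cacheable prefix); the per-request repo
// context follows as the user turn.
function buildReviewMessages(
  staticRules: string,
  repoContext: string
): ChatMessage[] {
  return [
    { role: "system", content: staticRules },
    { role: "user", content: repoContext },
  ];
}

// Read the cached-token count out of an OpenAI-style usage object,
// defaulting to 0 when the response does not report it.
function cachedTokens(usage: {
  prompt_tokens_details?: { cached_tokens?: number };
}): number {
  return usage.prompt_tokens_details?.cached_tokens ?? 0;
}
```

Keeping the static portion byte-identical across requests is what makes the cached prefix reusable; any per-repo text belongs after it.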