Coding with AI Agents
Has this ever happened to you?
- Hello agent please make me an entire project from scratch
- Ok, let me think...
- Agent one-shots entire project from scratch
- Here you go! 🔥
- Holy s*** this is incredible
- Prompt for new feature
- What a great idea! 🧠 Let me build that...
- Agent writes a lot more code
- Oh cool it added new code, nice
- New prompt, Agent adds more code
- Ok this is a lot of code, getting kinda bloated
- Prompt to clean things up
- Ah, now I see! 💎 We should be concise...
- Agent writes new code in the process of cleaning up
- Hm it introduced a bug, let's fix that...
- Prompt to fix the code
- You spotted a crucial point! 🎯 Here we go...
- Agent writes new code in the process of fixing the bug
- OK, that should fix the issue! 🤖
- Hm it introduced a new bug...let's try again later.
The Problems
Agents cannot know the unspoken goals, constraints, and preferences specific to you.
Agents accumulate a tangle of clutter and assumptions after sessions of unexamined vibe-coding.
Agents tend to be overconfident, insufficiently doubtful of their own actions and the text in front of them.
Agents cannot foresee the full consequences over time of their edits to your codebase.
Solution Criteria
Cerebella should help you notice when the actions of your agents deviate from your goals.
Cerebella should help you properly scrutinize agents, because they really can't be trusted to do everything on their own.
Cerebella should run in the background on your machine, tracking your agents' work project-by-project.
Cerebella should use simple tools, not more agents, to steer your agents away from avoidable errors.
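To make the last two criteria concrete, here is a minimal sketch of what "simple tools, not more agents" could look like: a small per-project watcher that polls the git diff and flags bloat or scope creep with plain, deterministic thresholds. The thresholds, function names, and polling loop below are illustrative assumptions, not Cerebella's actual implementation.

```python
"""Hypothetical sketch of a per-project diff watcher (not Cerebella's real code)."""
import subprocess
import time
from pathlib import Path

# Assumed thresholds: flag the session once the working tree drifts this far from HEAD.
MAX_ADDED_LINES = 400    # total new lines before we suspect bloat
MAX_TOUCHED_FILES = 15   # total files changed before we suspect scope creep


def diff_stats(repo: Path) -> tuple[int, int]:
    """Return (added_lines, touched_files) for uncommitted changes vs HEAD."""
    out = subprocess.run(
        ["git", "-C", str(repo), "diff", "HEAD", "--numstat"],
        capture_output=True, text=True, check=True,
    ).stdout
    added, files = 0, 0
    for line in out.splitlines():
        a, _deleted, _path = line.split("\t", 2)
        files += 1
        if a != "-":  # "-" marks binary files in --numstat output
            added += int(a)
    return added, files


def check_once(repo: Path) -> list[str]:
    """Run the simple, deterministic checks and return human-readable warnings."""
    added, files = diff_stats(repo)
    warnings = []
    if added > MAX_ADDED_LINES:
        warnings.append(f"{added} lines added since HEAD -- review before prompting again")
    if files > MAX_TOUCHED_FILES:
        warnings.append(f"{files} files touched since HEAD -- the agent may be drifting off-task")
    return warnings


if __name__ == "__main__":
    repo = Path(".")  # one watcher per project, run from the project root
    while True:
        for warning in check_once(repo):
            print(f"[watcher] {warning}")
        time.sleep(60)  # poll once a minute; cheap enough to leave running
```

Because the checks are deterministic, the watchdog itself can't hallucinate or pile on more code; it just tells you when it's time to stop prompting and look.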