• hey LLM make me an entire project from scratch
  • Ok, let me think...
  • LLM one-shots entire project from scratch
  • Here you go! 🔥
  • holy s*** this is incredible
  • prompt for new feature
  • What a great idea! 🧠 Let me build that...
  • LLM writes a lot more code
  • oh cool it added new code, nice
  • new prompt, LLM adds more code
  • ok this is a lot of code, gettin kinda bloated
  • prompt to clean things up
  • Ah, now I see! 💎 We should be concise...
  • LLM writes new code in the process of cleaning up
  • uh it introduced a bug, let's fix that...
  • prompt to fix the code
  • You spotted a crucial point! 🎯 Here we go...
  • LLM writes new code in the process of fixing
  • OK, that should fix the issue! 🤖
  • ...it introduced a new bug. eh, let's go outside instead.

Install:
> npm install -g cerebella
Run:
> cerebella

this is how coding with AI should feel

LLMs cannot know the unspoken goals, constraints, and preferences specific to your projects.

LLMs accumulate clutter and tangled assumptions in your codebase when vibe-coding goes unexamined.

LLMs tend to be overconfident, insufficiently doubtful of their own actions and the text in front of them.

LLMs cannot foresee the full consequences over time of their edits to your codebase.

Cerebella is a runtime engine that notices when LLM interventions deviate from your goals.

Cerebella ensures LLM output is properly scrutinized, something LLMs really suck at doing themselves.

Cerebella's project-specific workflows run on-device, since local compute often sits idle.

Cerebella's runtime loop uses fast and reliable statistics, not LLMs, to steer LLMs away from avoidable errors.
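To make the idea concrete, here is a minimal sketch of what a statistics-based check like this could look like: flag an LLM edit when its churn is a statistical outlier against the project's recent history. Everything here is illustrative, assumed for the example; it is not Cerebella's actual API or internals.

```python
# Hypothetical outlier gate for LLM edits: a plain z-score test on diff
# churn, no LLM involved. Names and thresholds are illustrative only.
from statistics import mean, stdev

def is_outlier_edit(recent_churn: list[int], new_churn: int,
                    z_cutoff: float = 3.0) -> bool:
    """Return True when new_churn deviates from recent history by more
    than z_cutoff standard deviations."""
    if len(recent_churn) < 2:
        return False  # not enough history to judge
    mu, sigma = mean(recent_churn), stdev(recent_churn)
    if sigma == 0:
        return new_churn != mu  # flat history: any deviation stands out
    return abs(new_churn - mu) / sigma > z_cutoff

# A 400-line edit stands out against a history of ~20-line edits:
history = [18, 25, 22, 30, 19, 24, 21, 27]
print(is_outlier_edit(history, 400))  # True
print(is_outlier_edit(history, 26))   # False
```

The appeal of a check like this is exactly what the line above claims: it is fast, deterministic, and cheap enough to run on every edit, unlike asking another LLM to review the first one.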