Have you ever sat on an idea for 8 years? Telling yourself "I'll build it someday," but always putting it off because it's too hard, too tedious, too risky? That's exactly where Lalit Maganti, a senior engineer on Google's Perfetto team, found himself.
Then in late 2025, something shifted: he started using AI coding agents and thought, "Maybe this time I can actually do it." The result: 250 hours and 3 months later, he shipped syntaqlite, a set of developer tools for SQLite. But here's what makes this story genuinely interesting: it's not a success story. It's a record of failure and restart.
What Is It?
Lalit Maganti works on Perfetto, a performance analysis tool at Google. Perfetto has a SQLite-based query language, and users kept asking for formatters, linters, and editor extensions. The problem? Every existing SQLite tool out there was either unreliable, slow, or inflexible.
"So build one from scratch," you might think. But building a SQLite parser from the ground up is brutally hard — there's no formal spec for the language, and the source code is incredibly dense C. As a side project, it was just too much. Over 400 grammar rules to define, tests to write for each one, bugs to fix... For 8 years, the idea stayed in "I want to, but I can't" mode.
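To give a sense of the tedium, here is a hypothetical sketch (not syntaqlite's actual code, whose implementation language isn't stated) of what a single grammar rule looks like in a hand-written recursive-descent parser. SQLite's language needs over 400 of these, each with its own edge cases and tests:

```python
# Hypothetical sketch: one grammar rule of a recursive-descent SQL parser.
# A full SQLite parser needs 400+ functions shaped like this one.
import re

# Crude tokenizer for the fragment: numbers, words, commas.
TOKEN_RE = re.compile(r"\s*(?:(\d+)|(\w+)|(,))")

def tokenize(sql):
    """Split a SQL fragment into number, word, and comma tokens."""
    return [next(g for g in m.groups() if g is not None)
            for m in TOKEN_RE.finditer(sql)]

def parse_limit_clause(tokens, pos):
    """Parse 'LIMIT number [OFFSET number]' starting at tokens[pos].

    Returns (node, next_pos) or raises SyntaxError -- the contract
    every rule in the parser has to follow.
    """
    if pos >= len(tokens) or tokens[pos].upper() != "LIMIT":
        raise SyntaxError("expected LIMIT")
    if pos + 1 >= len(tokens) or not tokens[pos + 1].isdigit():
        raise SyntaxError("expected a number after LIMIT")
    node = {"limit": int(tokens[pos + 1]), "offset": None}
    pos += 2
    if pos < len(tokens) and tokens[pos].upper() == "OFFSET":
        if pos + 1 >= len(tokens) or not tokens[pos + 1].isdigit():
            raise SyntaxError("expected a number after OFFSET")
        node["offset"] = int(tokens[pos + 1])
        pos += 2
    return node, pos
```

Even this toy rule needs failure paths, position bookkeeping, and tests; multiply that by hundreds of rules with no formal spec to check against, and the 8-year hesitation makes sense.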
Why this matters even if you don't code
This looks like a coding story, but the core is universal. It's about finally starting a project that was too hard and too tedious to tackle alone — until AI lowered the barrier. A marketer's automation system, a designer's design system, a PM's data pipeline — don't you have a project like this too?
What Changes?
Lalit's 3 months break down into two distinct phases. The gap between them is the real takeaway of this story.
Phase 1: Vibe Coding (January) — Failure
Over the 2025 Christmas break, he decided to run the maximalist experiment: "Can I vibe-code the entire thing using Claude Code on the Max plan ($200/month)?" He delegated almost everything (technical decisions, implementation, the lot) and played the role of a "semi-technical manager."
The result? Functionally, it worked. A parser, formatter, and web playground all materialized. Over 500 tests were generated. But when he reviewed the codebase in detail at the end of January — it was complete spaghetti.
Functions scattered across random files. Single files ballooning to thousands of lines. A Python extraction pipeline he couldn't even understand himself. The approach was validated, but the code could never support real users.
He threw everything away and started from scratch.
Phase 2: Agentic Engineering (Feb–Mar) — Success
What changed in the second attempt was his role:
| | Phase 1: Vibe Coding | Phase 2: Agentic Engineering |
|---|---|---|
| Human's role | Semi-technical manager | Designer + reviewer |
| Design decisions | Delegated to AI | Owned by human |
| Code review | Minimal | Every change reviewed |
| Refactoring | "I'll deal with it later" | After every batch |
| AI's role | Everything | Implementation + research + grunt work |
| Outcome | Spaghetti code → scrapped | Shippable product |
This is precisely what Simon Willison describes as agentic engineering. If Andrej Karpathy's "vibe coding" means forgetting the code exists, agentic engineering means AI writes the code, but the human designs, verifies, and steers.
Getting Started
Prototype with vibe coding. Never ship it.
Using AI to validate "is this approach even feasible?" is a great first step. Lalit says his vibe-coding month proved the approach was viable. But the moment you try to ship that code, it falls apart. Prototype ≠ Production.
Design must stay with the human
AI excels at "implement this function with these parameters" but is terrible at "will this API feel good to users?" Architecture, public APIs, user experience — these are domains with no objectively verifiable right answer, and that's exactly where AI falls short.
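The difference is verifiability. A hypothetical illustration (the names below are invented, not from syntaqlite): "uppercase SQL keywords, leave identifiers and literals alone" has an objectively checkable spec an agent can implement and test against, while "should the formatter's public API accept strings or token streams?" has no test that can settle it.

```python
# Hypothetical example of a well-specified, verifiable task -- the kind
# AI agents handle well. The spec is checkable by tests.
KEYWORDS = {"select", "from", "where", "limit", "offset"}

def normalize_keywords(tokens):
    """Uppercase recognized keywords; pass everything else through."""
    return [t.upper() if t.lower() in KEYWORDS else t for t in tokens]

assert normalize_keywords(["select", "name", "from", "users"]) == \
    ["SELECT", "name", "FROM", "users"]
```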
Refactor relentlessly
When AI generates code at industrial scale, you have to refactor at industrial scale too. Skip it, and things spiral out of control immediately. Lalit says he asked "is this ugly?" after every large batch. Let the human spot the big abstractions AI can't see, then hand the execution back to AI.
Use AI as a research assistant
The highest ROI Lalit got from AI wasn't code generation — it was research. Learning an unfamiliar algorithm that would take a day or two, compressed into a one-hour conversation. Picking up the VS Code extension API in an hour instead of a full day. AI dramatically lowers the entry barrier to unfamiliar domains.
Watch out for the "slot machine" trap
"Just one more prompt" — the most dangerous pitfall in AI coding. When you're tired, prompts get vague, output gets worse, you try again, you get more tired. In those moments, turning off AI and writing it yourself is actually faster. Energy management applies to AI usage too.
The uncomfortable truth from METR's research
According to METR's 2025 RCT study, experienced open-source developers were actually 19% slower when using AI tools. The kicker? They believed they were 20% faster. This lines up perfectly with Lalit's experience — AI creates the feeling of speed, but design debt and loss of understanding are invisible costs.
Deep Dive Resources
Eight years of wanting, three months of building with AI — Lalit Maganti's full retrospective, backed by project journal entries and commit history.
Agentic Engineering Patterns — Simon Willison's guide to working with AI coding agents. Clearly distinguishes vibe coding from agentic engineering.
METR: Measuring AI Impact on Developer Productivity — The RCT study showing AI tools made experienced developers 19% slower. Data on the gap between perceived and actual speed.
Karpathy's Vibe Coding vs Agentic Engineering — Even the inventor of vibe coding acknowledges its limits. Fun vs. production.