Santiago Valdarrama (@svpino) dropped one line on LinkedIn: "I'm writing less code and spending more time managing agents." That single sentence shook the developer community. Writing less code — as a developer?

3-Second Summary
- Less hands-on coding
- Delegating work to agents
- Planning, supervising, verifying become core work
- Developer = tech lead of an agent team

What is this about?

As of March 2026, the developer's daily routine is quietly changing. According to an NYT Magazine article that interviewed 70+ developers, San Francisco startup founder Manu Ebert spends all day talking with multiple Claude Code agents. One implements new features, another tests, and a third oversees the whole thing. Feature requests that used to take a full day now get done in 30 minutes.

The point isn't simply "AI writes the code." It's that developers are spending less time typing code and more time explaining what they want to agents and verifying the results. The question itself is shifting from "how do I implement this" to "what do I want."

This isn't just an early adopter story. Microsoft revealed that 30% of its production code is AI-generated, and 73% of developers use AI tools on a weekly basis. GitHub's CEO even said his next title would be "code creative director."

A tool called Tonkotsu, mentioned by Santiago, symbolizes this shift. Built by former Facebook engineers, this desktop app turns developers into "managers of a coding agent team." It provides a single interface for planning tasks (Plan), delegating to multiple agents simultaneously (Delegate), and reviewing the resulting diffs (Verify).

The essence of the paradigm shift

Developers who used to "write" code in IDEs or terminals are now moving to "assigning" work to multiple AI agents and "reviewing" the results. It's not the tools that changed — the very definition of what development means is changing.

What's actually changing?

AI coding agents in 2026 have already converged into three archetypes: CLI agents (Claude Code, Codex CLI), IDE-native agents (Cursor, Windsurf), and cloud engineering agents (Devin, GitHub Coding Agents). Despite different interfaces, they share a common architecture of memory files, tool use, long-running execution, and sub-agent orchestration.
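That shared architecture can be sketched in a few lines of Python. This is an illustrative stand-in, not any vendor's real API: the `Agent` class, its `spawn` and `run` methods, and the toy `edit_file` tool are all hypothetical names chosen to mirror the four common parts (memory files, tool use, long-running execution, sub-agent orchestration).

```python
# Hypothetical sketch of the architecture CLI, IDE, and cloud agents share.
# None of these names come from a real product.
from dataclasses import dataclass, field

@dataclass
class Agent:
    memory: str                               # e.g. contents of a CLAUDE.md / AGENTS.md
    tools: dict = field(default_factory=dict) # tool name -> callable (tool use)
    log: list = field(default_factory=list)

    def spawn(self, brief: str) -> "Agent":
        # Sub-agent orchestration: a child inherits the project memory
        # plus a narrower brief of its own.
        return Agent(memory=self.memory + "\n" + brief, tools=self.tools)

    def run(self, task: str) -> str:
        # Long-running execution loop, collapsed to a single pass here;
        # a real agent iterates plan -> tool call -> observe until done.
        result = task
        for name, tool in self.tools.items():
            self.log.append(f"{name}: {task}")
            result = tool(task)
        return result

lead = Agent(memory="Project uses pytest; never touch migrations/",
             tools={"edit_file": lambda t: f"patched ({t})"})
tester = lead.spawn("You only write tests.")
print(lead.run("add retry logic"))  # -> patched (add retry logic)
```

The design point the sketch illustrates: the interfaces differ (terminal, IDE, cloud), but the loop underneath is the same object.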

|  | Traditional Development | Agent Management Approach |
| --- | --- | --- |
| Core activity | Writing code directly | Communicating intent to agents + verifying results |
| Parallelism | 1 developer = 1 task | 1 developer = N agents running simultaneously |
| Key skill | Programming language proficiency | Context design + architectural judgment |
| Productivity metric | Lines of code written | Number of verified, merged PRs |
| Team structure | 1 senior + 5 juniors pyramid | 1 orchestrator + N agents hub-and-spoke |

Toss developer Sehun Jung explains this shift through "abstraction." Delegating work to AI is essentially the act of abstracting that work. It's declaring "I'm no longer going to worry about this part." But bad abstraction is worse than no abstraction at all. Telling an agent "just figure it out" is guaranteed to fail.
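One way to make Jung's point concrete: a delegation is an abstraction with a contract, and an empty contract is "just figure it out." The sketch below is hypothetical; `TaskSpec` and `delegate` are illustrative names, not from any real tool.

```python
# Hypothetical sketch: delegation as abstraction with an explicit contract.
from dataclasses import dataclass

@dataclass
class TaskSpec:
    goal: str          # what you want, not how to build it
    constraints: list  # what the agent must not break
    done_when: str     # a verifiable acceptance criterion

def delegate(spec: TaskSpec) -> str:
    # A bad abstraction is worse than none: refuse an empty contract
    # instead of handing the agent "just figure it out."
    if not spec.constraints or not spec.done_when:
        raise ValueError("underspecified task: this is 'just figure it out'")
    return (f"AGENT TASK: {spec.goal}\n"
            f"CONSTRAINTS: {'; '.join(spec.constraints)}\n"
            f"DONE WHEN: {spec.done_when}")

prompt = delegate(TaskSpec(
    goal="Add rate limiting to /api/login",
    constraints=["keep existing auth middleware", "no new dependencies"],
    done_when="429 after 5 failed attempts; all existing tests pass"))
```

The guard clause is the whole idea: declaring "I'm no longer going to worry about this part" only works if the boundary of "this part" is written down.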

HackerEarth's 2025 hiring data puts numbers on this change. Companies dramatically increased the weight they give to programming ability (54x), problem-solving (39x), and data visualization (35x) when evaluating developers. They're no longer asking "do you know this syntax" but "can you solve this problem."

But ask the same question at Google and you get a different answer. The number Sundar Pichai shared was a 10% engineering speed improvement. Next to startups claiming "10–100x faster," the gap is stark. In a large company sitting on billions of lines of existing code, one bad change can take down a service for millions of users.

The essentials: How to get started

  1. Start by clarifying "what you want"
    Don't tell agents "build this feature." Instead, deliver structured requirements, constraints, and expected outcomes. If you organize project context in a CLAUDE.md or AGENTS.md, you won't need to re-explain every time.
  2. Build verification loops first
    Don't read agent-written code line by line. Set up automated tests and make sure PRs only open after passing CI. The key is "a system where agents verify their own results."
  3. Gradually expand delegation scope, one task at a time
    This is the approach Toss developer Sehun Jung suggests. After completing any task, ask yourself "how could I delegate this to AI?" Next time a similar task comes up, experiment with delegating to an agent, even if it's slightly less efficient.
  4. Experiment with running agents in parallel
    Use tools like Tonkotsu to run multiple agents simultaneously. One on new features, one on tests, one on documentation. The key experience is one developer managing multiple work streams like a team lead.
  5. Make your codebase AI-readable
    Clean up dead code, half-finished migrations, and competing patterns. Agents work probabilistically, so contradictory signals in the codebase lead to weird code generation.
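Steps 2 and 4 above fit together in one small workflow: fan tasks out to agents in parallel, then gate every result on an automated check before it counts as done. A minimal sketch, assuming stand-in functions; `run_agent` and `verify` are placeholders for a real agent call and a real test/CI run, not actual APIs.

```python
# Hypothetical sketch of "1 developer = N agents" with a verification gate.
from concurrent.futures import ThreadPoolExecutor

def run_agent(task: str) -> str:
    # Placeholder for a real CLI or cloud agent invocation
    # (Claude Code, Devin, etc.) returning a proposed diff.
    return f"diff for: {task}"

def verify(diff: str) -> bool:
    # Placeholder for the automated loop from step 2: run the tests,
    # require CI green before a PR opens. Unverified output never merges.
    return diff.startswith("diff for:")

tasks = ["implement feature", "write tests", "update docs"]
with ThreadPoolExecutor(max_workers=len(tasks)) as pool:
    diffs = list(pool.map(run_agent, tasks))   # step 4: parallel delegation

merged = [d for d in diffs if verify(d)]       # step 2: verification gate
print(f"{len(merged)}/{len(tasks)} work streams verified")
```

The shape matters more than the details: the developer's job in this loop is writing `tasks` and `verify`, not the diffs.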

What isn't going away

Senior developer value is actually going up. Research shows 30% of AI-generated code contains security vulnerabilities, and code churn has doubled compared to pre-AI. The value of people who can judge whether agent-produced code is "actually right," and who can face the system alone when abstraction breaks — that value is growing.