In August 2025, Mark Zuckerberg offered 24-year-old PhD dropout Matt Deitke a $250M, four-year package. When Deitke passed on the initial $125M offer, Zuckerberg doubled it. Around the same time, Meta was throwing $100M signing bonuses at OpenAI employees.

That's the priciest scene in the talent war. The more interesting story, though, is what happened at Anthropic. CEO Dario Amodei watched his own employees turn down $100M offers one after another, and didn't even try to match the compensation. And Anthropic came out ahead: 80% two-year retention for new hires, beating Meta (64%), OpenAI (67%), and DeepMind (78%).

TL;DR
$100M–$250M signing bonuses → Meta on an aggressive buying spree → some people wavered, but the core didn't → mission alignment and pay equity are the real assets

Just How Much Were We Talking?

It started in June 2025, when Sam Altman went on a podcast and said it out loud: "Mark Zuckerberg is throwing $100 million signing bonuses at our people." A lot of people assumed it was an exaggeration — then the actual cases started coming out within a month.

The most talked-about case was Trapit Bansal, a key contributor to OpenAI's o1 reasoning model. On June 26th, he left OpenAI for Meta's Superintelligence Lab. OpenAI understood immediately what it meant to lose one of the key people behind o1. The same week, three more OpenAI researchers — Lucas Beyer, Alexander Kolesnikov, and Xiaohua Zhai — made the same move.

Then came Matt Deitke: 24 years old, an AI PhD dropout, in the middle of launching his own company. Meta offered $125M. He passed. They came back with $250M over four years. Spread out or not, that's NBA superstar money.

The overall market moved with it. According to a Forbes analysis from January 2026, the average OpenAI employee's stock compensation hit $1.5M by end of 2025. Senior LLM engineer packages climbed to $400K–$900K, and AI VP/Head-level roles pushed into $700K–$2M+ territory.

So Who Actually Won?

On the surface, it looks like Meta won. The data says otherwise.

| Metric | Meta | Anthropic | OpenAI | DeepMind |
|---|---|---|---|---|
| 2-Year Retention | 64% | 80% | 67% | 78% |
| $100M Matching Policy | Offering side | No matching (by principle) | Partial matching | Partial matching |
| Key Talent Lost to Competitors | N/A (offering side) | "Not a single one left" | At least 7 went to Meta | 3+ went to Meta |
| OpenAI Engineer Destination Preference | N/A | 8x preferred over DeepMind | N/A | Baseline |

Amodei put it directly: "A far smaller fraction of Anthropic people were swayed by those offers than at other companies — and it wasn't for lack of trying." People turned down $100M. Some refused to even take calls with Zuckerberg. That's the key point.

What's more interesting is Amodei's deliberate no-match decision. When an employee walks in with an outside offer, the normal playbook is to counter. Anthropic didn't. Here's why:

  1. Breaking the level system destroys trust
    Pay one person 10x and the fairness perception of everyone sitting next to them breaks instantly. Amodei's words: "The fact that Mark threw a dart and it happened to land on your name doesn't mean you deserve 10x more than the person sitting next to you."
  2. Responding to outside pressure sends the wrong signal to everyone
    "When you respond individually to one person's offer, that becomes a signal to everyone." The whole company ends up hostage to the highest outside bidder.
  3. Loyalty bought with money loses to bigger money next time
    Loyalty bought with money will always lose to a bigger number. Only mission-driven loyalty survives the next bidding war.

Heads Up: This isn't the right answer for every company. Anthropic has strong mission alignment, and their models had just beaten OpenAI on certain benchmarks for the first time — motivation comes naturally there. If your organization doesn't have that kind of mission pull and you adopt the same no-match policy, it just looks cheap.

The Key Points: Surviving an Expensive Talent Market

  1. Don't use outside offers as a trigger for a counter
    The moment you match one person, you've sent 100 colleagues the message: "bring in an outside offer and you'll get more." Matching is a signal that ties your entire comp system to whatever the market will pay on a given day.
  2. Build a level-based comp system first
    Same level = same band. Negotiation ends at the leveling decision during hiring — and you make it explicit that there's no renegotiating after that.
  3. Bet on areas where mission can partly substitute for market rate
    A moral mission like "AI safety," or a technical one like "we're building the best model in the world." Without that, a no-match policy is just stinginess.
  4. Use team quality as a recruiting card at the offer stage
    Top 1% talent cares more about who they're working with than what they're being paid. The names on the "people you'll work with" list can be a stronger card than any signing bonus.
  5. Track retention as a KPI
    Prioritize retention over hiring. Anthropic's 80% is 13 points above OpenAI and 16 points above Meta. Spending the same budget on keeping people creates more value than spending it on poaching them.
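Tracking retention as a KPI means defining it precisely: of everyone hired at least two years ago, what share stayed a full two years? A minimal sketch of that calculation, using entirely hypothetical hire records (the dates and records below are illustrative, not data from the article):

```python
from datetime import date

# Hypothetical hire records: (hire_date, exit_date or None if still employed).
# These are made-up examples, not figures from any company cited above.
hires = [
    (date(2023, 1, 10), None),               # still employed
    (date(2023, 3, 5), date(2024, 2, 1)),    # left within two years
    (date(2023, 6, 20), None),
    (date(2022, 9, 1), date(2025, 3, 1)),    # left, but after two years
    (date(2023, 2, 14), date(2023, 11, 30)), # left within the first year
]

def two_year_retention(hires, as_of):
    """Share of the eligible cohort (hired 2+ years before `as_of`)
    who stayed at least two full years."""
    cohort = [(h, x) for h, x in hires
              if (as_of - h).days >= 730]  # only hires old enough to measure
    if not cohort:
        return None
    kept = sum(1 for h, x in cohort
               if x is None or (x - h).days >= 730)  # stayed 2+ years
    return kept / len(cohort)

print(two_year_retention(hires, date(2025, 7, 1)))  # 3 of 5 kept -> 0.6
```

The key design point is restricting the denominator to hires old enough to have had a chance at two-year tenure; mixing in recent hires inflates the number and hides attrition.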

FAQ

Meta actually landed big names like Bansal, Beyer, and Deitke — so why do you say they lost?

The key is rate, not raw count. Amodei himself said a far smaller fraction of Anthropic people were swayed than at other companies — and retention data backs that up (Anthropic 80% vs. Meta 64%). Landing a few superstars is a different game from keeping your entire senior cohort intact. Meta had some success at the former; it's falling behind on the latter.

What happens if a company with weak mission appeal adopts a no-match policy?

That's a misapplication of the Anthropic model. Amodei's policy only works when (1) the mission is genuinely compelling to employees and (2) the company is already paying above-market baseline. Without both, refusing to match just accelerates attrition. The priority order is: clarify the mission, raise the baseline comp — then consider a no-match policy.

Do numbers like $100M have any real relevance outside Silicon Valley? It feels like a different world.

Focus on the mechanism, not the number. The moment you match an outside offer, (1) colleagues' perception of fairness breaks and (2) the lesson spreads organization-wide: "bring in an outside bid and you'll get more." That mechanism works the same whether it's $100M or $100K. You see the exact same pattern at any tech company where a senior engineer gets a 30% raise by walking in with a competing offer — six months later, they do it again.

Then how do you adjust compensation when the market moves?

Through a full band refresh, not individual counters. You use market data to recalibrate level-by-level bands on a quarterly or biannual basis, and apply the update to everyone in that level. Anthropic isn't freezing pay — it's just not reacting to individual outside offers. Systemic raises and individual matching are completely different moves.
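The distinction between a systemic raise and an individual match can be made concrete. A minimal sketch, with hypothetical levels, names, and numbers (nothing here reflects Anthropic's actual bands): market data rescales each level's band, and the update applies to everyone at that level at once.

```python
# Hypothetical level bands in USD; illustrative numbers only.
bands = {  # level -> (band_min, band_max)
    "L4": (250_000, 350_000),
    "L5": (350_000, 500_000),
}

employees = [
    {"name": "a", "level": "L4", "comp": 300_000},
    {"name": "b", "level": "L5", "comp": 360_000},
]

def refresh_bands(bands, market_multiplier):
    """Quarterly/biannual recalibration: scale every band by the
    observed market movement, level by level, all at once."""
    return {lvl: (round(lo * market_multiplier), round(hi * market_multiplier))
            for lvl, (lo, hi) in bands.items()}

def apply_refresh(employees, new_bands):
    """Bring anyone below their level's new floor up to it.
    No individual negotiation; the band decides."""
    for e in employees:
        lo, hi = new_bands[e["level"]]
        e["comp"] = min(max(e["comp"], lo), hi)
    return employees

new_bands = refresh_bands(bands, 1.10)  # market moved ~10% this cycle
apply_refresh(employees, new_bands)
# "a" stays at 300,000 (still within band); "b" rises to the new
# L5 floor of 385,000 -- without ever bringing in an outside offer.
```

The point of the structure: comp moves because the *band* moved, never because one person negotiated, which is exactly the "systemic raises, not individual matching" distinction.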

Deep Dive Resources

Anthropic CEO Dario Amodei on the no-match policy A Fortune piece based on the original Big Technology Podcast interview. The full logic is there: "the fastest way to destroy your company culture is to match outside offers." fortune.com

The details behind Meta poaching OpenAI's o1 key contributor What position Trapit Bansal held, and how three more OpenAI researchers left the same week — the first report from June, when the talent war really started heating up. techcrunch.com

The podcast where Sam Altman first disclosed the $100M signing bonuses The June 18th moment, as covered by CNBC. The first time the price tag on this talent war went public. cnbc.com

Full AI compensation market data for 2026 Forbes January analysis. Market bands by role, compensation for emerging titles (LLM engineer, prompt architect, etc.), and the $1.5M average stock comp per OpenAI employee. forbes.com