Where do you get your data on enterprise AI adoption? Most reports are survey-based. "Is your company using AI?" — "Yes." That kind of data tells you nothing useful. Here's the thing — a16z did something different this time. They went straight to the source: actual contract and revenue data from Fortune 500 and Global 2000 companies.

Key Numbers
- 29% of Fortune 500 are paying AI startup customers
- Coding is the #1 use case by a wide margin
- Support and search are #2 and #3
- Tech, legal, and healthcare lead by industry
- Model performance up 20 percentage points in 4 months

What Is It?

This is a report published in April 2026 by a16z partner Kimberly Tan. The key difference from other enterprise AI research is straightforward.

Other research: Self-reported surveys asking "Is your company using AI?" They measure sentiment, not reality.
The a16z report: Actual contract data from AI startups, public revenue figures, and data gathered from thousands of enterprise and startup meetings.

The conclusions are noticeably different. MIT claimed 95% of AI pilots fail; a16z's data points in the other direction.

- 29% of Fortune 500 are paying AI startup customers
- ~19% of Global 2000 are paying AI startup customers
- 3 years from ChatGPT's launch to reaching this milestone
- $37B enterprise gen AI spend in 2025 (Menlo Ventures)

To put that in context: Fortune 500 companies aren't early adopters. The typical path is startup-to-startup first, then years before landing the first big enterprise contract. AI flipped that script. In just three years, nearly a third of the Fortune 500 became paying customers of AI startups.

Other data points in the same direction. According to Menlo Ventures, enterprise generative AI spend tripled from $11.5B in 2024 to $37B in 2025. And in NVIDIA's survey, 64% of companies said they're using AI in actual production.

What Changes?

The money is concentrating in three clear areas.

Use Cases: Coding > Support > Search

| Use Case | Characteristics | Why It Works |
|---|---|---|
| Coding | Bigger than all other use cases combined | Data-rich, verifiable output, engineers are early adopters, 10–20x productivity gains |
| Support | SOP-driven, quantitatively measurable | Limited intent scope, escalation path built in, low change management cost since it replaces BPO |
| Search | Internal search + industry-specific search | ChatGPT is itself a search tool; Glean, Harvey, and OpenEvidence are all growing fast |

a16z's explanation for coding's dominance: code is data-rich (there's a massive amount of high-quality code online), text-based so models can parse it easily, and syntactically precise so results can be verified immediately. Cursor's explosive growth, and the rapid rise of Claude Code and Codex, back this up.

And coding tools don't need to be 100% perfect to deliver value. Partial automation — generating boilerplate, finding bugs — already saves significant time. The human-in-the-loop workflow where developers review the output is natural, which keeps enterprise adoption friction low.

Support at #2 Is the Interesting Part

Support sits at the opposite end of the spectrum from coding. Coding gets the most investment attention; support is the most overlooked. But it's a near-perfect fit for AI. Intent is limited ("I want a refund"), SOPs are clear, and ROI is immediately provable through CSAT and resolution rates. Since most companies already outsource support to BPOs, switching to AI doesn't require a big internal change management effort. And there's a natural escalation path — "let me connect you with a manager" — which makes the pilot risk minimal.
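To make that ROI case concrete, here's a minimal sketch of the pilot math in Python. All figures (ticket volume, deflection rate, per-ticket costs) are illustrative assumptions, not numbers from the a16z report:

```python
# Illustrative support-pilot ROI estimate. Every figure below is a
# hypothetical placeholder, not data from the a16z report.

def pilot_savings(monthly_tickets: int,
                  deflection_rate: float,    # share of tickets AI resolves end-to-end
                  cost_per_bpo_ticket: float,
                  cost_per_ai_ticket: float) -> float:
    """Monthly savings from routing deflected tickets to AI instead of a BPO."""
    deflected = monthly_tickets * deflection_rate
    return deflected * (cost_per_bpo_ticket - cost_per_ai_ticket)

# Example: 50,000 tickets/month, AI fully resolves 40%,
# BPO costs $5.00/ticket vs. $0.50/ticket for the AI agent.
savings = pilot_savings(50_000, 0.40, 5.00, 0.50)
print(f"${savings:,.0f}/month")  # prints "$90,000/month"
```

The point of the sketch is that every input is already measured in a typical support org, which is exactly why the ROI case is easy to make.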

Industries: Tech (Expected), Legal and Healthcare (Surprising)

Tech leading the pack isn't surprising — 27% of ChatGPT's business users come from tech. What's interesting is legal and healthcare.

Legal has historically been a slow software market — long sales cycles, few tech-friendly buyers. Traditional software offered lawyers limited value. But AI directly tackles core legal work: processing large volumes of text, reasoning, summarizing, and drafting. Harvey hitting $200M ARR within three years of founding is the proof.

Healthcare follows a similar pattern. EHR systems dominate the market and have made it hard for new software to break in. But AI found a niche that doesn't require replacing EHRs — medical scribing, clinical search, back-office automation. Companies like Abridge and Ambience Healthcare are growing quickly.

Areas Where the Model Is Improving Fast (But Adoption Hasn't Caught Up Yet)

This is actually what a16z finds most interesting. Based on OpenAI's GDPval benchmark, model performance in certain areas is improving dramatically.

1. Accounting & Audit: GDPval score up ~20 percentage points in just 4 months. No major standalone AI startup has moved here yet; the space is wide open.
2. Investigation & Detective Work: ~30 percentage point improvement in 4 months. AI capability for unstructured-data analysis is climbing fast.
3. Spreadsheets & Finance Workflows: Anthropic is building a finance-specific Claude, and computer-use agents can operate on top of legacy systems.
4. Long-Horizon Tasks: the METR benchmark shows the length of tasks agents can complete autonomously increasing rapidly, enabling complex automation well beyond simple tasks.

Other research paints a similar picture. McKinsey found that nearly every company is using AI, but two-thirds haven't started scaling yet. Deloitte projected the share of companies with 40%+ of projects in production would double within six months. And ISG found that the proportion of use cases reaching production doubled year-over-year to 31% in 2025.

The bottom line: AI adoption is real and accelerating. But it's not happening uniformly — it's concentrated in specific use cases and specific industries, and that's the key insight.

Getting Started

Here's a framework distilled from the a16z report and other research, for anyone evaluating enterprise AI adoption.

  1. Start with "Verifiable" Areas
    The common thread in areas where AI works best: text-based, repetitive work, natural human-in-the-loop, and verifiable results. That's exactly why coding, support, and search rank 1–2–3. Find the tasks in your organization that match these conditions first.
  2. Try Support First — It Has the Lowest Pilot Risk
    You've got clear SOPs, ROI is immediately measurable through CSAT and resolution rates, and if something goes wrong, you escalate to a human. Coding tools tend to get adopted bottom-up by engineering. Support requires an executive decision — but the ROI case is easy to make.
  3. Track Model Performance Trends
    What AI can't do well today may look very different in six months — accounting and audit improved by 20 percentage points in just four months. Track benchmarks like GDPval and METR regularly to catch the moment AI becomes viable for your domain.
  4. Don't Underestimate Partial Automation
    Even 50% automation frees people to focus on the other 50%. Coding tools deliver significant productivity gains even when they mostly handle boilerplate and bug-finding. Aiming for 100% automation tends to fail; partial automation is already a proven strategy.
  5. If You're Building — Target Gaps Where the Model Is Ready But Nobody's Moved Yet
    This is a16z's core advice for builders. Look for areas where model capability is improving fast on benchmarks but revenue momentum hasn't started. Many of today's successful AI startups built their infrastructure and customer relationships before the model was good enough — that's what gave them the lead.
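Step 3 above can be operationalized with a simple tracker: record benchmark scores over time and flag the domains whose gains cross a threshold. A minimal sketch in Python — the domain names and scores below are made-up placeholders, not real GDPval results:

```python
# Flag domains whose benchmark score is improving fast.
# All scores here are hypothetical placeholders, not real GDPval data.

def fast_movers(history: dict[str, list[float]],
                min_gain_pp: float = 15.0) -> list[str]:
    """Return domains whose score rose by at least min_gain_pp percentage
    points between the first and last recorded measurement."""
    return [domain for domain, scores in history.items()
            if scores[-1] - scores[0] >= min_gain_pp]

history = {
    "accounting_audit": [35.0, 55.0],   # +20 pp over the window
    "investigation":    [25.0, 55.0],   # +30 pp
    "copywriting":      [60.0, 64.0],   # +4 pp
}
print(fast_movers(history))  # prints ['accounting_audit', 'investigation']
```

The threshold is arbitrary; the useful habit is re-running the check every benchmark release so you notice the moment a domain crosses from "not yet" to "viable."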

Remember the Gap Between Surveys and Real Revenue Data

The McKinsey, Deloitte, and NVIDIA surveys are self-reported, so "using AI" covers a wide range. The a16z report uses actual contracts and revenue, so it's more conservative. Reading both types of data together gives you the most accurate picture.