Issue No. 13 · Developer Edition

The AI Enthusiast.

Nine pages on what actually shifted this week in frontier AI: premium coding, enterprise depth, mass distribution, and the governed runtime that agents are about to run on.


Anthropic tightened the premium workflow story, OpenAI showed what enterprise depth looks like in the numbers, Google kept shipping into enormous reach, and ServiceNow turned MCP into governed enterprise action.

Irv Cassio · Weekly Briefing · May 8, 2026
Inside this issue · 11 pages

Contents

Page 03 · Market · All users · Developers

Anthropic looks like the premium lane.

This was the first week where Anthropic's story felt less like "better model" and more like "better business." Opus 4.7 sits at the top of the stack, pricing stayed flat, and the company paired product momentum with a blunt message: we are financing more compute because demand is already here.

Newsletter angle
Claude got stronger without getting more expensive, and Anthropic is using that wedge to own premium coding and enterprise workflows.
$30B
Run-rate revenue now surpassed
1,000+
Customers spending $1M+ annually
$5 / $25
Opus 4.7 input / output per 1M
5GW
AWS compute commitment over ten years

Anthropic said on April 6 that run-rate revenue had moved past $30 billion and that customers spending more than $1 million annually doubled in under two months. On April 20 it added an AWS deal worth more than $100 billion over ten years, securing up to 5 gigawatts of capacity and nearly 1 gigawatt before the end of 2026. That is not experimentation behavior. That is "we know where the demand is coming from" behavior.

Opus 4.7 reinforces the same point. Anthropic kept pricing at $5 per million input tokens and $25 per million output tokens, then positioned the model around longer-running coding, vision, and finance-heavy work instead of general chatbot theater. The signal is clean: premium workflow spend beats breadth if the outputs matter enough.
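At those list prices, per-task cost is easy to reason about. A minimal sketch of the arithmetic; the token counts are invented for illustration, only the $5 / $25 per-million rates come from the briefing:

```python
# Opus 4.7 list pricing cited above: $5 / $25 per million tokens.
INPUT_PER_M = 5.00
OUTPUT_PER_M = 25.00

def task_cost(input_tokens: int, output_tokens: int) -> float:
    """Estimate one task's cost in dollars at list pricing."""
    return (input_tokens / 1_000_000) * INPUT_PER_M \
         + (output_tokens / 1_000_000) * OUTPUT_PER_M

# Hypothetical long-running coding task: 200K tokens in, 30K tokens out.
cost = task_cost(input_tokens=200_000, output_tokens=30_000)
print(f"${cost:.2f}")  # → $1.75
```

The point of the exercise: at flat pricing, a heavy agentic session still lands in single-digit dollars, which is why "premium workflow spend" can beat breadth.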

Page 04 · Dev · Developers

Claude Code stops babysitting.

The real UX shift is not a benchmark. It is Claude Code becoming harder to interrupt. Auto Mode, Routines, Ultrareview, and better usage visibility move the product from "assistant in a terminal" toward "agentic runtime with opinionated controls."

Newsletter angle
Anthropic made the press-y-to-continue problem materially smaller, and that changes how much real work developers will let the agent own.
Auto Mode
Safer actions run without the approval tax
Anthropic removed the separate enable flag for Max subscribers on Opus 4.7, which turns permission prompts into a policy problem instead of a keyboard problem.
Routines
PRs and schedules can fire cloud agents
Create an async code worker once, attach repos and connectors, then let GitHub events or schedules trigger it without your machine running.
Ultrareview
Cloud review becomes a first-class path
Anthropic is explicitly productizing multi-agent review, not treating it as an advanced trick for power users.
Usage Breakdown
Spend finally becomes inspectable
Parallel sessions, subagents, cache misses, and long context now show up as visible contributors instead of invisible cost leaks.

The more important strategic read is that Anthropic is teaching developers to think in terms of background work, guarded execution, and system-level observability. Once that mental model sticks, model quality still matters, but product stickiness matters more.

Page 05 · Enterprise · All users · Developers

Wall Street gets agent templates.

Anthropic's May 5 move was not a new chatbot. It was a pack of opinionated finance workflows, Microsoft 365 reach, and a services wrapper designed to get the work actually deployed.

Newsletter angle
Anthropic is no longer selling generic intelligence to financial firms; it is selling pre-shaped workflows, governed data access, and a path to production.
Finance agents
10 templates out of the box

Pitchbooks, KYC, earnings review, diligence, and other finance workflows ship as reference architectures with skills, connectors, and subagents.

Microsoft 365
Claude across Excel, PowerPoint, Word, and Outlook

The add-ins story matters because it puts Claude inside the actual documents and communications that drive high-value enterprise work.

Services wrapper
Blackstone, Hellman & Friedman, and Goldman Sachs

Anthropic also launched a new enterprise AI services company, which is the missing layer for firms that want outcomes more than they want tools.

This is the part of the market that looks boring until it compounds. Once a finance team has a preferred diligence stack, governed data access, and review flows wrapped around it, the switching cost is not "which model is better next quarter?" It is "do we want to re-platform our workflow?" That is a much stronger position.

Page 06 · Market · All users · Developers

OpenAI turns depth into a moat.

The most useful OpenAI post this week was not about a model. It was about behavior: what frontier firms actually do differently once AI stops being just a chat tab and starts becoming part of the operating system for work.

Newsletter angle
OpenAI is widening the moat from both ends: deep enterprise usage at the top, and ever-larger consumer distribution and monetization at the bottom.
900M
ChatGPT weekly users
9M
Paying business users
3M
Codex weekly active users
>40%
Revenue now from enterprise
B2B Signals
Frontier firms use 3.5x as much intelligence per worker
OpenAI says the gap is increasingly about task depth, not simple message count, and that frontier firms send 16x as many Codex messages per worker as typical firms.
Ads platform
Monetization gets broader, not just deeper
OpenAI expanded ChatGPT ads with partner buying, a beta Ads Manager, CPC bidding, and new measurement. The business is clearly learning how to monetize intent as well as subscriptions.

The read here is straightforward. OpenAI has enough consumer gravity to reduce rollout friction inside companies, enough developer depth to make Codex a real wedge, and now a clearer theory for how to monetize both attention and work. That is a harder business to attack than a pure model vendor.

Page 07 · Platform · All users · Developers

Google bets on reach and speed.

Google's story is still easy to underrate because the launches feel less theatrical. But the underlying position keeps getting stronger: Gemini 3 Flash is rolling into mass distribution, the API is already processing over 1 trillion tokens per day, and the app reach is enormous before I/O even starts.

Newsletter angle
Google does not need to win the hype cycle every week if it keeps winning the cost-performance and distribution cycle every quarter.
750M
Gemini app monthly active users
8M
Paid Gemini Enterprise seats
1T / day
Gemini API tokens processed
May 19-20
Google I/O 2026

If Anthropic is trying to own premium workflow quality and OpenAI is trying to own the operating layer for work, Google is trying to make Gemini feel native to every context where people already are. That is a different, but very serious, way to win.

Page 08 · Infra · Developers

ServiceNow opens the governed runtime.

The interesting MCP story this week was not another hobby server. It was ServiceNow turning its system of action into a generally available MCP surface, then wrapping that surface in identity, permissions, audit trails, and enterprise metering.

Newsletter angle
The most valuable agent market may not be the agent shell. It may be the governed runtime that every serious agent has to route through to touch work.
Action Fabric
MCP Server is generally available now
ServiceNow says any agent, whether built on Claude, Copilot, or a custom stack, can call governed enterprise actions headlessly through the MCP Server.
AI Control Tower
Governance becomes part of the execution layer
Identity verification, permission scoping, enterprise audit trails, session management, and consumption metering are built into the runtime story.
Platform gravity
100 billion workflows already run on the platform each year
That matters because agents are not just reading data. They are trying to trigger real work with approvals, workflows, and business rules attached.

The sharpest implication is that MCP's enterprise future looks less like a messy open registry and more like a layered market: open protocol on top, governed registries and runtime control underneath. The protocol won mindshare. The runtime fight is just starting.
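Mechanically, "any agent can call governed actions headlessly" means speaking the open MCP wire format. A minimal sketch of a `tools/call` request envelope; the JSON-RPC shape follows the public MCP specification, while the tool name and arguments here are hypothetical stand-ins for whatever governed actions a platform like ServiceNow actually exposes:

```python
import json

def mcp_tools_call(request_id: int, tool: str, arguments: dict) -> str:
    """Build a JSON-RPC 2.0 `tools/call` request per the open MCP spec."""
    return json.dumps({
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "tools/call",
        "params": {"name": tool, "arguments": arguments},
    })

# Hypothetical governed action; field names are illustrative only.
payload = mcp_tools_call(1, "create_incident", {
    "short_description": "Expense approval stuck in workflow",
    "urgency": "2",
})
print(payload)
```

Everything the runtime fight is about, identity, permission scoping, audit, and metering, happens around this envelope on the server side, which is exactly why the protocol layer could stay open while the execution layer gets governed.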

Page 09 · Frontier · All users

Frontier map: same race, three different businesses.

The better comparison is no longer just "which model is smartest?" It is who owns premium workflow spend, who owns broad distribution, who owns developer depth, and who can finance the next compute wall.

Newsletter angle
Anthropic, OpenAI, and Google are not playing the same game anymore. They are competing on different mixes of quality, reach, monetization, and infrastructure leverage.
Anthropic
Premium workflow
$30B
Run-rate revenue
1,000+
$1M+ annual customers
300K+
Business customers
5GW
AWS compute commitment
Best at
Premium coding, long-running agents, finance-heavy enterprise work
Moat
Quality plus stronger workflow fit inside serious developer and knowledge-work loops
Risk
Still smaller consumer reach, so it must keep winning where output quality is visibly worth paying for
OpenAI
Operating layer
900M
Weekly ChatGPT users
9M
Paying business users
3M
Codex weekly actives
>40%
Revenue now from enterprise
Best at
Bridging consumer familiarity into enterprise adoption and delegated work
Moat
Usage gravity across consumer, workplace, API, coding, and now ad monetization
Risk
Broad product surface means execution discipline matters as much as model quality
Google
Distribution machine
750M
Gemini app MAU
8M
Paid enterprise seats
1T / day
API tokens processed
4.4M
Developers using Gemini models
Best at
Cost-performance, mass reach, and weaving Gemini into products people already use
Moat
Search, Android, Workspace, Cloud, and AI Studio form a distribution stack nobody else has
Risk
Needs the narrative to match the reach, or the market keeps underrating what is already live
Consumer reach
OpenAI leads, Google is close behind.

ChatGPT's weekly habit is still the cleanest consumer wedge. Google's strength is that Gemini arrives attached to products with much larger daily gravity.

Anthropic
OpenAI
Google
Premium workflow spend
Anthropic has the strongest premium-enterprise signal right now.

Finance templates, M365, coding reputation, and premium model positioning are all reinforcing the same lane.

Anthropic
OpenAI
Google
Developer depth
OpenAI and Anthropic are shaping the workflow; Google is shaping the substrate.

Codex and Claude Code are changing hands-on developer behavior. Google is making sure Gemini appears in every developer surface that matters.

Anthropic
OpenAI
Google
Compute posture
Anthropic looks most aggressive, Google looks most integrated, OpenAI still looks biggest on raw user demand.

The compute story now matters because each company's product strategy is beginning to mirror the shape of the infrastructure behind it.

Anthropic
OpenAI
Google
Page 10 · Policy · All users

What to watch before next Friday.

The week closed with the market more clearly segmented than it looked seven days ago. The next turn is likely to be less about another isolated launch and more about whether distribution, governance, or premium workflow adoption sets the story heading into mid-May.

Newsletter angle
The next issue is likely to be about narrative capture: can Google own I/O, can Anthropic keep the premium lane, and can OpenAI keep translating consumer gravity into enterprise depth?
May 19-20
Google I/O 2026

Gemini is already in mass distribution. I/O is where Google gets a shot at changing the market conversation, not just the product menu.

June 30
Colorado AI Act deadline

The Colorado General Assembly moved the effective date of the transparency requirements to June 30, 2026. Governance stories will start getting more operational very quickly.

Any day now
More governed runtimes

ServiceNow will not be the last platform to wrap MCP in enterprise identity, authorization, and observability. Expect the runtime layer to get crowded.

One more practical note: the coverage window here is May 2, 2026 through May 8, 2026. The most unstable part of the next week is not model release timing. It is whether usage, governance, and distribution numbers keep moving fast enough to redraw how people rank the three frontier providers.

Page 11 · Colophon

That is the briefing. See you next Friday.

Built from the May 8 roundup, then tightened with current official sources from Anthropic, OpenAI, Google, ServiceNow, and Colorado's legislature. Semantic-first article in reading mode, deck-style pacing in present mode.

Method
Source pass and synthesis
Started from the existing roundup, checked the last published issue, replaced shaky claims where better primary-source numbers were available, and rebuilt the provider infographic around current disclosed metrics.
Format
Newsletter first, presentation second
The page is readable as semantic HTML for the site and can switch into deck mode for the Friday session without losing structure or SEO value.
Brand
The AI Enthusiast · Issue No. 13
Developer edition, with the audience tags and category tags preserved from the newsletter workflow rather than hiding them in the source document.

Quick stats

The numbers most likely to survive into next week's conversation.

Metric · Value
Anthropic run-rate revenue · $30B+
Anthropic customers spending more than $1M annually · 1,000+
Anthropic business customers · 300K+
OpenAI paying business users · 9M
OpenAI Codex weekly active users · 3M
OpenAI enterprise share of revenue · >40%
ChatGPT weekly users · 900M
Gemini app monthly active users · 750M
Paid Gemini Enterprise seats · 8M
Gemini API throughput · 1T tokens/day
ServiceNow workflows running yearly · 100B+
Colorado AI transparency requirements effective · 2026-06-30

Audience distribution

Audience tag: Topics
All users: 07 Google distribution · 09 Frontier map · 10 Watchlist and policy
Developers: 04 Claude Code runtime · 08 ServiceNow and governed MCP
All users + Developers: 03 Anthropic premium lane · 05 Finance agents · 06 OpenAI depth and moat