The AI Enthusiast.
Anthropic tightened the premium workflow story, OpenAI showed what enterprise depth looks like in the numbers, Google kept shipping into enormous reach, and ServiceNow turned MCP into governed enterprise action.
Nine pages on what actually shifted this week in frontier AI: premium coding, enterprise depth, mass distribution, and the governed runtime that agents are about to run on.
This was the first week where Anthropic's story felt less like "better model" and more like "better business." Opus 4.7 sits at the top of the stack, pricing stayed flat, and the company paired product momentum with a blunt message: we are financing more compute because demand is already here.
Anthropic said on April 6 that run-rate revenue had moved past $30 billion and that customers spending more than $1 million annually doubled in under two months. On April 20 it added an AWS deal worth more than $100 billion over ten years, securing up to 5 gigawatts of capacity and nearly 1 gigawatt before the end of 2026. That is not experimentation behavior. That is "we know where the demand is coming from" behavior.
Opus 4.7 reinforces the same point. Anthropic kept pricing at $5 per million input tokens and $25 per million output tokens, then positioned the model around longer-running coding, vision, and finance-heavy work instead of general chatbot theater. The signal is clean: premium workflow spend beats breadth if the outputs matter enough.
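At those list prices, the premium-workflow math is easy to check. A minimal sketch in Python, using the announced per-token rates; the session sizes are illustrative assumptions, not Anthropic figures:

```python
# Opus 4.7 list pricing: $5 per million input tokens, $25 per million
# output tokens. Session token counts below are invented for illustration.

PRICE_IN = 5.00 / 1_000_000    # dollars per input token
PRICE_OUT = 25.00 / 1_000_000  # dollars per output token

def session_cost(input_tokens: int, output_tokens: int) -> float:
    """Dollar cost of one session at list prices."""
    return input_tokens * PRICE_IN + output_tokens * PRICE_OUT

# A long-running coding session: large context in, moderate code out.
cost = session_cost(input_tokens=400_000, output_tokens=60_000)
print(f"${cost:.2f}")  # 400k * $5/M + 60k * $25/M = $2.00 + $1.50 = $3.50
```

A few dollars per heavy session is the scale at which "premium workflow spend beats breadth" pencils out: the price is trivial next to the value of the work if the outputs hold up.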
The real UX shift is not a benchmark. It is Claude Code becoming harder to interrupt. Auto Mode, routines, ultrareview, and better usage visibility move the product from "assistant in a terminal" toward "agentic runtime with opinionated controls."
The more important strategic read is that Anthropic is teaching developers to think in terms of background work, guarded execution, and system-level observability. Once that mental model sticks, model quality still matters, but product stickiness matters more.
Anthropic's May 5 move was not a new chatbot. It was a pack of opinionated finance workflows, Microsoft 365 reach, and a services wrapper designed to get the work actually deployed.
Pitchbooks, KYC, earnings review, diligence, and other finance workflows ship as reference architectures with skills, connectors, and subagents.
The add-ins story matters because it puts Claude inside the actual documents and communications that drive high-value enterprise work.
Anthropic also launched a new enterprise AI services company, which is the missing layer for firms that want outcomes more than they want tools.
This is the part of the market that looks boring until it compounds. Once a finance team has a preferred diligence stack, governed data access, and review flows wrapped around it, the switching cost is not "which model is better next quarter?" It is "do we want to re-platform our workflow?" That is a much stronger position.
The most useful OpenAI post this week was not about a model. It was about behavior: what frontier firms actually do differently once AI stops being just a chat tab and starts becoming part of the operating system for work.
The read here is straightforward. OpenAI has enough consumer gravity to reduce rollout friction inside companies, enough developer depth to make Codex a real wedge, and now a clearer theory for how to monetize both attention and work. That is a harder business to attack than a pure model vendor.
Google's story is still easy to underrate because the launches feel less theatrical. But the underlying position keeps getting stronger: Gemini 3 Flash is rolling into mass distribution, the API is already processing over 1 trillion tokens per day, and the app reach is enormous before I/O even starts.
If Anthropic is trying to own premium workflow quality and OpenAI is trying to own the operating layer for work, Google is trying to make Gemini feel native to every context where people already are. That is a different, but very serious, way to win.
The interesting MCP story this week was not another hobby server. It was ServiceNow turning its system of action into a generally available MCP surface, then wrapping that surface in identity, permissions, audit trails, and enterprise metering.
The sharpest implication is that MCP's enterprise future looks less like a messy open registry and more like a layered market: open protocol on top, governed registries and runtime control underneath. The protocol won mindshare. The runtime fight is just starting.
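The layered picture, open protocol on top, governed runtime underneath, can be sketched in a few lines. This is a hypothetical wrapper, not ServiceNow's API or the official MCP SDK; every name here (the permission table, `governed_call`, the audit record shape) is invented for illustration:

```python
# Hypothetical sketch of a "governed" tool surface: an open tool registry
# wrapped in identity, role-based permissions, and an audit trail.
# All names are invented; this is not ServiceNow's API or the MCP SDK.
import json
import time

PERMISSIONS = {  # role -> tools that role may invoke
    "analyst": {"read_incident"},
    "admin": {"read_incident", "close_incident"},
}

AUDIT_LOG = []  # a real runtime would use durable, append-only storage


def governed_call(user: str, role: str, tool: str, args: dict, tools: dict):
    """Run a tool only if the caller's role permits it; audit either way."""
    allowed = tool in PERMISSIONS.get(role, set())
    AUDIT_LOG.append(json.dumps({
        "ts": time.time(), "user": user, "role": role,
        "tool": tool, "allowed": allowed,
    }))
    if not allowed:
        raise PermissionError(f"{role} may not call {tool}")
    return tools[tool](**args)


# Demo tool registry and calls.
tools = {
    "read_incident": lambda incident_id: {"id": incident_id, "state": "open"},
    "close_incident": lambda incident_id: {"id": incident_id, "state": "closed"},
}

print(governed_call("dana", "analyst", "read_incident", {"incident_id": "INC001"}, tools))
try:
    governed_call("dana", "analyst", "close_incident", {"incident_id": "INC001"}, tools)
except PermissionError as e:
    print("blocked:", e)
```

The design point is that the protocol layer stays generic while everything enterprise-specific, identity, authorization, metering, audit, lives in the wrapper. That is the runtime fight the section describes.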
The better comparison is no longer just "which model is smartest?" It is who owns premium workflow spend, who owns broad distribution, who owns developer depth, and who can finance the next compute wall.
ChatGPT's weekly habit is still the cleanest consumer wedge. Google's strength is that Gemini arrives attached to products with much larger daily gravity.
For Anthropic, finance templates, Microsoft 365 add-ins, coding reputation, and premium model positioning are all reinforcing the same lane.
Codex and Claude Code are changing hands-on developer behavior. Google is making sure Gemini appears in every developer surface that matters.
The compute story now matters because each company's product strategy is beginning to mirror the shape of the infrastructure behind it.
This issue closed with the market more clearly segmented than it looked a week ago. The next turn is likely to be less about another isolated launch and more about whether distribution, governance, or premium workflow adoption sets the story heading into mid-May.
Gemini is already in mass distribution. I/O is where Google gets a shot at changing the market conversation, not just the product menu.
The Colorado General Assembly moved the effective date of the transparency requirements to June 30, 2026. Governance stories will start getting more operational very quickly.
ServiceNow will not be the last platform to wrap MCP in enterprise identity, authorization, and observability. Expect the runtime layer to get crowded.
One more practical note: the coverage window here is May 2, 2026 through May 8, 2026. The most unstable part of the next week is not model release timing. It is whether usage, governance, and distribution numbers keep moving fast enough to redraw how people rank the three frontier providers.
Built from the May 8 roundup, then tightened with current official sources from Anthropic, OpenAI, Google, ServiceNow, and Colorado's legislature. Semantic-first article in reading mode, deck-style pacing in present mode.
The numbers most likely to survive into next week's conversation.
| Metric | Value |
|---|---|
| Anthropic run-rate revenue | $30B+ |
| Anthropic customers spending more than $1M annually | 1,000+ |
| Anthropic business customers | 300K+ |
| OpenAI paying business users | 9M |
| OpenAI Codex weekly active users | 3M |
| OpenAI enterprise share of revenue | >40% |
| ChatGPT weekly users | 900M |
| Gemini app monthly active users | 750M |
| Paid Gemini Enterprise seats | 8M |
| Gemini API throughput | 1T tokens/day |
| ServiceNow workflows executed per year | 100B+ |
| Colorado AI transparency requirements effective | 2026-06-30 |

| Audience tag | Topics |
|---|---|
| All users | 07 Google distribution · 09 Frontier map · 10 Watchlist and policy |
| Developers | 04 Claude Code runtime · 08 ServiceNow and governed MCP |
| All users + Developers | 03 Anthropic premium lane · 05 Finance agents · 06 OpenAI depth and moat |