What is happening now is more serious and interesting. AI is moving from “look what this can do” to “who controls where this goes next.” Over the past two weeks, the biggest developments have not just been about smarter systems. They’ve been about power, infrastructure, distribution, and limits. OpenAI is pushing harder into professional work, Google is focusing on scale and efficiency, Microsoft is turning copilots into doers, Meta is racing to secure its own computing resources, and Anthropic has become central to a much larger debate about whether AI companies can still set ethical limits once governments and defense agencies get involved.

The clearest signal came from OpenAI’s release of GPT-5.4 on March 5. The company is presenting it less like a flashy upgrade and more like a serious work engine. It has stronger reasoning, better coding, and improved performance across spreadsheets, documents, and presentations. It also includes native computer-use capabilities for longer, more complex workflows. OpenAI claims that GPT-5.4 reduces factual errors compared to GPT-5.2. This is important because the market no longer cares solely about intelligence. It wants reliability. It wants usable output. It wants less cleanup after the model is finished.

This shift is important for creators as well. AI products are moving away from one-off generation and toward real production support. Not just “write me something,” but “finish the job.” As models get better at creating presentations and spreadsheets and at handling multi-step tasks, they stop feeling like toys and start looking like real tools for work. That is the true meaning of this phase: AI is becoming operational.

Ad Break!

First sponsor today: I Hate IT Here

The free newsletter making HR less lonely

The best HR advice comes from people who’ve been in the trenches.

That’s what this newsletter delivers.

I Hate IT Here is your insider’s guide to surviving and thriving in HR, from someone who’s been there. It’s not about theory or buzzwords — it’s about practical, real-world advice for navigating everything from tricky managers to messy policies.

Every newsletter is written by Hebba Youssef — a Chief People Officer who’s seen it all and is here to share what actually works (and what doesn’t). We’re talking real talk, real strategies, and real support — all with a side of humor to keep you sane.

Because HR shouldn’t feel like a thankless job. And you shouldn’t feel alone in it.

Less is more!

Google is taking a different approach. Its newly announced Gemini 3.1 Flash-Lite is focused on speed, scale, and cost-efficiency rather than just prestige. Google claims it outperforms Gemini 2.5 Flash in both speed and quality, clearly targeting high-volume use cases where cost is as important as intelligence. This is a smart move. The AI market is reaching a stage where not every winner will be the most powerful model. Some winners will be the cheapest useful model, the fastest deployable model, or the model that is already integrated into the tools people use daily.

This directly relates to Microsoft. Its new Copilot Tasks feature shows where consumer and workplace AI are headed next. Microsoft describes it as a system that can plan and complete multi-step tasks, then report back. This may sound simple on paper, but it marks a significant shift in product direction. The assistant model is gradually giving way to the agent model. The question is no longer if AI can provide answers. The question is whether it can take action. Moreover, the Financial Times reports that Microsoft is adding Anthropic models to Copilot workplace tools. This suggests that the platform layer may become more important than any single lab. The future may belong less to one perfect model and more to whoever can best coordinate many of them.

Meta is making a different kind of investment: owning more of the technology that powers the intelligence. This week, the company stated it is developing and deploying four new generations of MTIA chips over the next two years to support ranking, recommendations, and generative AI tasks. This is a significant move. The largest AI companies no longer want to rely solely on the same external computing resources. Model quality still matters, but the infrastructure question is becoming equally strategic: who can train, serve, and scale AI at an affordable rate that lasts over time? Meta’s answer is becoming clear: build more of the stack in-house.

However, the most critical story may be Anthropic’s conflict with the Pentagon. Reuters reports that the company sued to block a Pentagon blacklisting after it refused to remove guardrails limiting the use of its technology for things like autonomous weapons and domestic surveillance. A later report indicated that the Pentagon might allow limited exemptions beyond a six-month phase-out for rare national-security cases. This highlights how deeply Anthropic’s tools may already be integrated into government operations. This is no longer just a theoretical discussion about AI ethics. It is a real test of whether a private AI company can still refuse once state power comes into play.

Ad Break!

Second sponsor today: The Code

Learn how to code faster with AI in 5 mins a day

You're spending 40 hours a week writing code that AI could do in 10.

While you're grinding through pull requests, 200k+ engineers at OpenAI, Google & Meta are using AI to ship faster.

How?

The Code newsletter teaches them exactly which AI tools to use and how to use them.

Here's what you get:

  • AI coding techniques used by top engineers at top companies in just 5 mins a day

  • Tools and workflows that cut your coding time in half

  • Tech insights that keep you 6 months ahead

Sign up and get access to the Ultimate Claude Code guide to ship 5X faster.

Life is beautiful

The implications go beyond Anthropic. Reuters notes that legal experts believe the company may have a strong case, while the Associated Press points out that the standoff has garnered support from parts of the wider AI community. Simultaneously, Microsoft is reportedly integrating Anthropic models into its workplace products. This means that the market is sending one message while politics sends another. Commercial demand is driving wider adoption. Government pressure is pushing for compliance and control. This tension may define the next phase of AI more than performance benchmarks will.

There is also a geopolitical angle tightening underneath all of this. Reuters reported on March 13 that the U.S. Commerce Department withdrew a planned rule on AI chip exports. This may sound technical, but it is not a minor policy change. Export rules shape where advanced AI infrastructure can be built, who gets access to top-tier chips, and how influence in AI spreads across countries and alliances. When governments change positions on chip exports, they are not just changing trade policy. They are reshaping the competitive landscape.

Ad Break!

Third sponsor today: The Flyover

The News Source 2.3 Million Americans Trust More Than CNN

Tired of spin? The Flyover delivers fast, fact-focused news across politics, business, sports, and more — free every morning. No agenda. No paywall. Join 2.3 million readers who trust us to start their day right.

Let’s continue

So what should creators, builders, and operators take away from this?

The models are indeed improving quickly. That part is true. But the more significant story now is that AI is settling into its true battlegrounds: workflow, distribution, chips, policy, and control. OpenAI is focusing on the professional layer. Google is tackling the efficiency layer. Microsoft is building the orchestration layer. Meta is strengthening the infrastructure layer. Anthropic is pushing the industry to face the governance layer.

That is why this moment feels different.
We are no longer witnessing a race of AI demos.
We are observing the formation of the AI landscape.

BMX

If you’d like to get in touch with me, here’s my X handle
If you want to support my work, you can Buy me a coffee here 
