This Week in Cloud — December 11, 2025
Welcome back to The Cloud Cover, your essential guide to navigating the dynamic world of cloud for Solutions Architects, engineers, and IT leaders. This week, the AI drumbeat gets even louder, hybrid cloud strategies solidify, and providers are racing to embed intelligence into every layer of the stack. Let's dive in.
⚡ AI Divergence: Four Clouds, Four Different Paths
For the last two years, the major cloud providers seemed to be running the same race: who can partner with the hottest AI lab and ship a chatbot the fastest? But as the dust settles on AWS re:Invent, Microsoft Ignite, and the final announcements of 2025, a stark reality is emerging. The "Big Four" may be diverging in their philosophies.
AWS has gone all-in on vertical integration. With updates to the Nova model family and Trainium3 silicon, Amazon is signaling that it wants to own the entire stack—from the sand to the synapse. They are betting that enterprises want an "Apple-like" experience where the chips, the models (Nova), and the orchestration (AgentCore) just work better because they were built by the same vendor.
Meanwhile, Microsoft Azure and Google Cloud represent opposite ends of the software spectrum. Azure has aggressively pivoted to a "Model Garden" strategy, bringing in Mistral Large 3 and Anthropic’s Claude to sit alongside OpenAI, acting as a neutral platform for whichever model wins the day. Google, conversely, is doubling down on its "DeepMind DNA," trying to win on raw model intelligence with Gemini 3 Deep Think and developer-focused agents like Jules.
Then there is Oracle. Their strategy appears simple: sell the fastest raw infrastructure to whoever needs it, betting that physics—not model weights—is the ultimate constraint. Is there room for all four? Probably. But the days of "feature parity" may be drawing to a close; choosing a cloud in 2026 will mean choosing a philosophy, not just a price point.
🔍 The Rundown
Vertical Integration Play: While we covered the bulk of re:Invent last week, the strategic implication of Nova 2 and Nova Forge is clear: AWS wants you to build on their models, tuned on your data, running on their metal.
Silicon Refresh: The new Graviton5 (192 cores) and Trainium3 chips are now the default path for cost optimization, with AWS claiming 4x better energy efficiency for ML training over the previous generation.
Database Savings: For those managing massive RDS or Aurora fleets, the new Database Savings Plans offer a committed-use discount (up to 35%) similar to what compute has had for years.
Security Headwinds: December's Patch Tuesday addressed 57 flaws, including a critical zero-day in the Cloud Files driver that allows privilege escalation. Separately, a "Cross Prompt Injection" vulnerability was found in GitHub Copilot for JetBrains, potentially turning your IDE into an attack vector.
Expanding the Garden: Azure integrated Mistral Large 3 into its Foundry, adding a sparse Mixture-of-Experts model that excels at multilingual reasoning to its roster.
Gov Cloud Wobble: Azure Government regions suffered a significant outage on Dec 8 due to a backend misconfiguration, knocking out services like Backup and Databricks for federal users.
Deep Thinking: GCP launched Gemini 3 Deep Think, a reasoning model that explores multiple hypothesis paths before answering. It reportedly scores 41% on "Humanity's Last Exam," a benchmark designed to stump current AI.
Smarter Storage: A new feature called Object Contexts allows developers to attach semantic metadata to files in Cloud Storage, making it easier for AI models to "understand" your unstructured data without heavy lifting.
Flexible Clusters: OCI announced support for mixed-node clusters in the Oracle Kubernetes Engine (OKE), allowing you to run VMs, bare metal, and serverless nodes in a single cluster to optimize spend.
APAC Connectivity: Digital Realty added OCI FastConnect support in Singapore, improving low-latency hybrid cloud options for the region.
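The "up to 35%" figure in the Database Savings Plans item above is easy to put in context with some quick arithmetic. The fleet spend below is an assumed number purely for illustration, not anything AWS has published:

```python
# Hypothetical illustration of a committed-use discount.
# The monthly on-demand spend here is an assumed figure; the 35% is
# the maximum discount cited in the announcement ("up to 35%").
on_demand_monthly = 40_000.0   # assumed on-demand RDS/Aurora spend, USD
max_discount = 0.35            # "up to 35%" per the Savings Plans news

# Effective monthly rate under the maximum committed-use discount
committed_monthly = on_demand_monthly * (1 - max_discount)

# Annualized savings versus staying fully on-demand
annual_savings = (on_demand_monthly - committed_monthly) * 12

print(f"Committed rate: ${committed_monthly:,.0f}/mo")
print(f"Annual savings at max discount: ${annual_savings:,.0f}")
```

Real-world savings depend on commitment term, payment option, and how well the committed amount matches actual usage, so treat the headline number as a ceiling, not an expectation.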
🧐 Best Thing I Saw This Week…
Legendary investor Howard Marks recently wrote a fascinating analysis of what has been going on in the AI space—who the winners could be, whether there is a bubble, and how all of it may end.
Below is one compelling excerpt, but the essay is thought-provoking throughout; I'd highly recommend the full piece to anyone interested in the space. (Linked below)
The key realization seems to be that if people remained patient, prudent, analytical, and value-insistent, novel technologies would take many years and perhaps decades to be built out. Instead, the hysteria of the bubble causes the process to be compressed into a very short period – with some of the money going into life-changing investment in the winners but a lot of it being incinerated.
📈 Trending Now: Oracle’s Capacity Anomaly
Oracle’s Q2 earnings report, released yesterday, is an interesting study in supply versus demand. On paper, the news looked mixed: revenue came in at $16.1 billion, missing analyst expectations and sending the stock tumbling in early after-hours trading. But look closer and the story isn't about failing to find customers—it's about failing to build data centers fast enough to house them.
The metric that matters here is Remaining Performance Obligations (RPO)—essentially the backlog of signed contracts. Oracle’s RPO exploded by 438% year-over-year to $523 billion. This massive divergence between recognized cloud revenue ($4.1B for the quarter) and backlog suggests a severe "capacity anomaly." Customers (like OpenAI) are signing multi-billion-dollar deals to reserve GPUs years in advance, treating compute power like futures contracts. Oracle's challenge now is purely physical: pouring concrete and racking GPUs fast enough to convert that paper backlog into cash. Only time will tell if they can do it.
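The backlog math above is worth sanity-checking with a few lines. The inputs are the figures as quoted in this issue, and the "quarters of coverage" ratio is a rough illustration of the demand/supply gap, since real RPO converts to revenue over multi-year contract terms:

```python
# Back-of-the-envelope check on Oracle's reported RPO growth.
# Figures as cited in this newsletter; all amounts in billions of USD.
rpo_now = 523.0          # Remaining Performance Obligations (backlog)
growth_yoy = 4.38        # 438% year-over-year growth

# Implied RPO one year ago: rpo_now = rpo_prior * (1 + growth_yoy)
rpo_prior = rpo_now / (1 + growth_yoy)
print(f"Implied prior-year RPO: ~${rpo_prior:.0f}B")

# Rough ratio of backlog to one quarter's recognized cloud revenue.
cloud_rev_quarter = 4.1  # cloud revenue recognized this quarter
coverage = rpo_now / cloud_rev_quarter
print(f"Backlog is ~{coverage:.0f}x the quarter's recognized cloud revenue")
```

Even allowing for multi-year recognition, a backlog more than a hundred times one quarter's cloud revenue is exactly the kind of gap the "capacity anomaly" framing describes.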
📅 Event Radar
Dec 14: Hear about new features first!
Dec 15 - 16: Learn to build AI apps with Foundry
Dec 17: An interesting multicloud session
👋 Until Next Week
It's interesting to see the major players drift apart in strategy. For years, the playbook was "copy the leader." Now, AWS is trying to be the leader in efficiency, Google in pure intelligence, Azure in platform diversity, and Oracle in raw scale. Is one of these approaches “right,” or is there room for everyone to win?
Do you enjoy these emails? Your friends and colleagues might, too! Help us grow the cloud community by sharing the newsletter with others.
