FUTURECRAFT
On the night of 30 April, four of the world's largest companies confirmed they will spend roughly $650 billion on AI infrastructure this year. Microsoft alone raised its 2026 capex forecast to $190 billion, up sixty-one percent. Amazon held its $200 billion line. Meta guided $125 to $145 billion. Alphabet, $175 to $185 billion. The market did not reward all of them. Alphabet climbed nearly ten percent. Meta fell almost nine. Microsoft slipped four. The split told you what investors are now actually pricing: not capex itself, but proof that the spend is producing AI revenue.
The 2026 tech market is being rebuilt around a compute layer that most of the world, India included, will not own. It will rent it. As AI shifts from training to inference, the rent stops being a one-time project cost and becomes a permanent operating expense.
The capex wall
Hyperscaler capex for the big five is forecast above $600 billion in 2026, a thirty-six percent jump over 2025, with roughly three-quarters tied directly to AI servers, GPUs, and data centres. Big Tech has issued around $100 billion in bonds this year alone to fund the buildout. Aggregate capex now exceeds the free cash flow left over after buybacks. These companies are leveraging up at a scale historically associated with telecom or oil majors, not software firms.
Nvidia's full-year fiscal 2026 revenue closed at $215.9 billion, up sixty-five percent. Jensen Huang has stopped pretending these are GPUs and started calling them "AI factories." The framing is not marketing. A factory does not get bought once. It gets paid for, in tokens, every time it runs.
The layoffs reveal the same shift. Over 92,000 tech workers have been cut in the first four months of 2026. Meta cut 8,000. Amazon has cut at least 30,000 since October. Microsoft is offering buyouts to seven percent of its US staff, the first such programme in the company's fifty-one-year history. The capital is moving from headcount to silicon.
Why inference is the part that matters
Training is what you do once to build a model. Inference is what runs every time the model is used. Deloitte forecasts that compute spending on inference will overtake training in 2026. Analysts at J.Gold Associates put the figure higher, estimating eighty to eighty-five percent of AI workloads will be inference within one to two years.
Nvidia's new Vera Rubin platform was designed explicitly around this shift. Huang's pitch to cloud providers is no longer raw FLOPS. It is a ten-fold reduction in cost per inference token compared to the Blackwell generation. Once a customer commits an application to inference at scale, the meter does not stop running for the life of that product.
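A rough back-of-envelope sketch shows why the distinction bites. Every figure below, the training cost, the token price, the daily volume, is an illustrative assumption, not a reported number from any company:

    # Back-of-envelope sketch: training as a one-time cost, inference as a flow.
    # All figures are illustrative assumptions, not reported numbers.
    TRAINING_RUN_USD = 100_000_000       # assumed one-time cost of a large training run
    PRICE_PER_M_TOKENS = 2.00            # assumed rented price, USD per million inference tokens
    TOKENS_PER_DAY = 200_000_000_000     # assumed tokens served daily by a popular product

    annual_inference_usd = TOKENS_PER_DAY / 1_000_000 * PRICE_PER_M_TOKENS * 365

    print(f"Training, paid once:    ${TRAINING_RUN_USD:,.0f}")
    print(f"Inference, paid yearly: ${annual_inference_usd:,.0f}")
    # Under these assumptions the yearly inference bill, about $146 million,
    # passes the one-time training cost within the first year and never stops.

The exact numbers are arguable; the shape is not. Training is a fixed cost, inference is a flow, and the flow scales with every new user.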
“Training was the down payment. Inference is the mortgage. And the bank does not live in Bengaluru.”
Building a competitive frontier model is one kind of dependency. Running an inference workload, every day, for every user, on rented infrastructure is a deeper kind. The first is solvable with capital. The second is structural.
Winners, losers, and the new investor test
The April earnings split makes the new test explicit. Alphabet was rewarded because Google Cloud revenue grew sixty-three percent in the quarter to $20 billion, with backlog nearly doubling to over $460 billion. The capex was visibly converting to paying customers. Meta was punished because its Reality Labs history makes investors sceptical that the AI spend will pay back. Microsoft was punished mildly because its $25 billion capex jump came with a $25 billion hit from higher component costs, suggesting margin pressure rather than demand acceleration.
The losers in this cycle are not bankrupt companies. They are firms whose margins are thinned by a customer base that increasingly buys infrastructure as a bundle, including in cybersecurity, where Cloudflare, CrowdStrike, and Palo Alto Networks lost between seven and fourteen percent in a single April session. The winners are the ones converting AI spending into recurring inference revenue inside a stack they own end-to-end.
The Indian ripple
In December 2025, Microsoft committed $17.5 billion to Indian cloud and AI infrastructure over four years, the largest investment any technology company has ever made in Asia. Amazon followed with a $35 billion commitment through 2030. Google had already pledged $15 billion to a gigawatt-scale AI hub in Andhra Pradesh. The combined private commitment crossed $67 billion in roughly ninety days.
In the same window, the Union Budget for FY27 allocated ₹1,000 crore to the IndiaAI Mission, half the previous year's ₹2,000 crore allocation against a five-year programme outlay of ₹10,372 crore. The mission has empanelled around 38,000 GPUs and is targeting 100,000 by the end of 2026. Yotta's private cluster alone will host 20,736 Nvidia Blackwell Ultra GPUs by August.
The arithmetic is brutal. India's public AI compute is being built on a public outlay roughly one percent the size of what a single American hyperscaler is committing to Indian soil. The compute that Indian startups, ministries, and researchers will actually run on, in volume, will be Microsoft's Hyderabad region, Amazon's expanded AWS footprint, and Google's Visakhapatnam campus. The IndiaAI Mission can fund grants. It cannot reset the terms of a $17.5 billion buildout already underway. Sovereignty over the building is not the same as sovereignty over the meter.
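The conversion behind that one-percent figure is worth spelling out. The exchange rate below is an assumption for illustration, not a figure from the budget documents:

    # Sketch of the one-percent comparison; the exchange rate is an assumption.
    OUTLAY_CRORE = 1_000                 # FY27 IndiaAI Mission allocation, in crore rupees
    RUPEES_PER_CRORE = 10_000_000        # 1 crore = 10 million rupees
    INR_PER_USD = 87.0                   # assumed exchange rate

    outlay_usd = OUTLAY_CRORE * RUPEES_PER_CRORE / INR_PER_USD
    microsoft_usd = 17_500_000_000       # Microsoft's four-year India commitment

    print(f"IndiaAI FY27 outlay: ${outlay_usd / 1e6:,.0f} million")
    print(f"Share of one hyperscaler commitment: {outlay_usd / microsoft_usd:.1%}")
    # Roughly $115 million against $17.5 billion, about 0.7 percent.

At any plausible exchange rate, the ratio stays under one percent.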
Bottom Line
The 2026 tech market is consolidating around a compute layer that most of the world will rent for decades. The contest among hyperscalers is real, but it sits at the top of a stack that is itself widening the gap between those who own AI infrastructure and those who use it.
For Indian readers, the question is not whether AI will reach the country. It already has, on terms set largely in Redmond and Seattle. The question is whether the next budget cycle treats compute as the strategic asset it now demonstrably is, or continues to fund it at one percent of what foreign companies are committing on the same soil. Inference compounds. Dependency, once paid for in monthly invoices, is harder to walk back than any model checkpoint.
(The Author studies Computer Science and Artificial Intelligence at Rutgers University, New Jersey, USA. He is interested in emerging technologies and innovation, and can be reached on LinkedIn at @arssh-kumar14)