
ARSSH KUMAR

FUTURECRAFT  |  TECHNOLOGY & MARKETS

In August, a data centre in Noida is scheduled to switch on 20,736 Nvidia Blackwell Ultra GPUs. The operator, Yotta, calls it one of Asia’s largest AI superclusters. The framing around it is sovereignty: Indian soil, Indian compute, Indian control over the machines that will train the country’s models.

Look at how the thing is paid for, and a different picture appears. Yotta’s chief executive has described the model plainly. This round of GPUs earns the revenue that funds the next round of GPUs, which earns the revenue that funds the round after that. That is not a one-time build. It is a treadmill, and the pace is set by how fast the chips underneath lose their value.

India has not just bought the compute. It has bought the clock.

The asset that ages in dog years

A data centre is built to last twenty-five years. The GPUs inside it are not. Nvidia has compressed its release cycle to roughly twelve months: Hopper, then Blackwell, then Rubin, each generation a sharp jump in performance per watt. In a business where power is the largest running cost, a chip two generations old is not just slower. It is expensive to operate for the work it does.

This is where the accounting gets uncomfortable. Hyperscalers depreciate these chips over five to six years. Critics, including investor Michael Burry and valuation specialist Aswath Damodaran, argue the real economic life is closer to two or three. The gap is not academic. If the pessimists are right, the world’s AI balance sheets are carrying hardware at values the market would no longer pay, and the correction arrives as write-downs.
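The size of that gap can be sketched with simple straight-line depreciation. The fleet cost, both lifespans, and the currency below are illustrative assumptions for the sake of the arithmetic, not figures from any operator's accounts:

```python
# Illustrative straight-line depreciation on a hypothetical $10B GPU fleet.
# The purchase price and both lifespans are assumed for illustration only.

def book_value(cost: float, life_years: int, age_years: int) -> float:
    """Straight-line carrying value after age_years, floored at zero."""
    return max(cost - cost / life_years * age_years, 0.0)

cost = 10_000_000_000  # hypothetical fleet cost, USD

for age in range(1, 7):
    optimistic = book_value(cost, 6, age)   # hyperscaler-style six-year life
    pessimistic = book_value(cost, 3, age)  # critics' roughly three-year life
    gap = optimistic - pessimistic          # carrying value the market may not pay
    print(f"year {age}: book ${optimistic/1e9:.1f}B "
          f"vs economic ${pessimistic/1e9:.1f}B "
          f"-> overstatement ${gap/1e9:.1f}B")
```

If the shorter life is the true economic one, the difference between the two curves is the write-down waiting on the balance sheet: at year three, the books still carry half the original cost against hardware the pessimists say is fully spent.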

For India, the question is narrower and sharper. The country is wiring a national strategic programme to an asset class whose useful life nobody can yet agree on.

How India is paying for it

Yotta has already put more than 1.5 billion dollars into infrastructure and committed another 2 billion to chips. Some of that is conventional debt; the company raised 40 billion rupees for its data centre expansion. Some is equity, through a pre-IPO round. The rest comes from a route worth pausing on.

Yotta’s parent, Nidar Infrastructure, is going public on the Nasdaq through a merger with Cartica Acquisition Corp, a US-listed shell company. A SPAC, in plain terms, is a way to reach public markets faster and with lighter scrutiny than a traditional listing.

The structure does something specific. The operating asset sits in Noida. The demand assumption that justifies it, that India will need ever more compute at the price Nvidia charges, also sits in India. But the refinancing risk, the obligation to keep raising money against chips that are losing value, gets placed with retail investors in New York. If the depreciation pessimists are proven right, the first losses land on a shareholder base an ocean away from the asset.

That is a clever piece of financial engineering. It is not the same thing as sovereignty.

The state is selling subsidised access to an asset whose value the same state cannot control. That is not insulation from the global AI economy. It is exposure to it, with a government stamp.

The subsidy underwrites the clock

The public side of India’s AI push runs through the IndiaAI Mission, funded at 10,372 crore rupees. Part of that money subsidises compute: startups and researchers can rent a GPU-hour through the IndiaAI portal for around 65 rupees, well below what the hardware costs to run. More than 58,000 GPUs now sit in the national pool.

The intention is sound. Cheap compute lowers the barrier for a small Indian team to build something. But look at what the subsidy is actually doing. It is buying down the cost of using hardware that is depreciating on the same fast clock as everything else. The state is not standing outside the treadmill handing out water. It is on the belt, paying part of the fare.

If GPU values hold, this is simply industrial policy doing its job. If they fall the way the sceptics expect, the IndiaAI Mission has committed public money to an asset losing value faster than the budget cycle can absorb, and the startups it subsidised are building on a cost base that was never real.
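A back-of-envelope sketch shows how quickly that commitment could compound. Only the 65-rupee portal price and the roughly 58,000-GPU pool come from the figures above; the unsubsidised rate and the utilisation level below are hypothetical placeholders, not published numbers:

```python
# Back-of-envelope subsidy arithmetic with loudly assumed inputs.
SUBSIDISED_RATE = 65        # Rs per GPU-hour on the IndiaAI portal
ASSUMED_MARKET_RATE = 200   # Rs per GPU-hour -- hypothetical unsubsidised cost
GPU_POOL = 58_000           # GPUs in the national pool
UTILISATION = 0.5           # assumed fraction of hours actually rented out

subsidy_per_hour = ASSUMED_MARKET_RATE - SUBSIDISED_RATE
hours_per_year = GPU_POOL * UTILISATION * 24 * 365
annual_subsidy = subsidy_per_hour * hours_per_year

# 1 crore = 10 million rupees
print(f"Implied subsidy: Rs {annual_subsidy/1e7:,.0f} crore per year")
```

Under those assumed inputs, the implied subsidy runs to roughly 3,400 crore rupees a year, a large slice of the mission's entire 10,372 crore budget, and it recurs annually for as long as the pricing gap does.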

The honest counter

There is a serious argument on the other side, and it deserves a fair hearing. Older GPUs do not become useless. They cascade downward, from training frontier models to running inference, the cheaper everyday work of answering queries. CoreWeave, a major US operator, has re-leased returned Hopper chips at close to their original rates. India, with hundreds of millions of users and inference demand concentrated near its cities, is arguably the ideal place for that second life. A chip retired from the frontier could earn for years serving Indian users.

That argument holds, but only on one condition. The cascade works if there is genuine downstream demand to absorb the older hardware. A subsidised market makes that demand harder to read. When compute is sold below cost, you cannot easily tell whether startups are using it because the economics work or because the price is artificial. The subsidy that makes the cascade look healthy is also the thing obscuring whether it is.

The bottom line

This is not a prediction that the AI boom will collapse. It may not. The depreciation hawks may be wrong, and India’s inference demand may be deep enough to give every retired GPU a long second life.

The point is narrower. India has tied a national programme, a flagship listed operator, and a pool of public money to an asset with a contested useful life and a financing structure that has been routed offshore.

Each of those choices is defensible alone. Together, they mean the downstream consequence of a GPU correction does not stay in a balance sheet in Texas or a shell company in New York. It lands on Indian startups that built on subsidised compute, on an Indian operator whose expansion depends on the next raise, and on a public mission that spent its budget renting time on a clock it does not control.

The supercluster will switch on in August. The treadmill is already running.

(The author studies Computer Science and Artificial Intelligence at Rutgers University, New Jersey, USA. He is interested in emerging technologies and innovation, and can be reached on LinkedIn at @arssh-kumar14)

By RK NEWS
