Goldman Sachs’ baseline forecast of $7.6 trillion in artificial intelligence (AI) capital spending ultimately depends on how long AI‑specific silicon remains useful.

A recent Goldman Sachs report shifts the debate from whether AI demand exists to which supply-side factors will determine the actual cost of the build-out. The report projects $7.6 trillion in AI capital expenditure as a baseline but emphasizes that this figure is highly sensitive to “swing variables,” including the useful life of AI silicon.

Chip longevity is seen as the most critical of these variables: accelerators are typically depreciated over four to six years, but rapid innovation could render them obsolete in three, compressing depreciation schedules and driving up the annualized cost of compute. Conversely, a “tiered model” in which older chips are redeployed for less demanding workloads, such as inference, could stabilize costs.
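To make the sensitivity concrete, here is a minimal illustrative sketch (not from the report, and using a hypothetical $1 billion fleet purchase) of how the assumed useful life drives the annual hardware charge under straight-line depreciation:

```python
# Illustrative sketch only: all dollar figures are hypothetical assumptions,
# not figures from the Goldman Sachs report.

def annualized_cost(capex: float, useful_life_years: float) -> float:
    """Straight-line annual depreciation charge for a hardware purchase."""
    return capex / useful_life_years

fleet_capex = 1_000_000_000  # hypothetical $1B accelerator purchase

for life in (6, 4, 3):
    charge = annualized_cost(fleet_capex, life)
    print(f"{life}-year useful life: ${charge:,.0f} per year")
```

Under these toy numbers, shortening the useful life from six years to three doubles the annual depreciation charge, which is why obsolescence assumptions swing the aggregate capex forecast so sharply.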

Data center complexity and the elasticity of compute demand are other variables likely to affect how much capital is spent on AI infrastructure over the next five years. Shortages of power-grid capacity, specialized labor, and electrical equipment are also expected to prolong the build-out.