In sprawling stretches of farmland and industrial parks, supersized buildings packed with racks of computers are springing up to fuel the AI race. These engineering marvels are a new species of infrastructure: supercomputers designed to train and run large language models at mind-bending scale, complete with their own specialized chips, cooling systems, and even energy supplies.
Hyperscale AI data centers bundle hundreds of thousands of specialized computer chips called graphics processing units (GPUs), such as Nvidia’s H100s, into synchronized clusters that work like one giant supercomputer. These chips excel at processing massive amounts of data in parallel. Hundreds of thousands of miles of fiber-optic cables connect the chips like a nervous system, letting them communicate at lightning speed. Enormous storage systems continuously feed data to the chips as the facilities hum and whir around the clock.
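To make that coordination concrete, here is a minimal sketch, not drawn from the article, of how training frameworks make many GPUs behave like a single machine. It uses PyTorch's DistributedDataParallel as an assumed example: each process trains on its own slice of the data, and gradients are averaged across every GPU after each step over the cluster's network fabric. The model, layer sizes, and hyperparameters are purely illustrative.

```python
# Sketch: data-parallel training, where N GPUs act like one big computer.
import torch
import torch.distributed as dist
from torch.nn.parallel import DistributedDataParallel as DDP

def train(rank: int, world_size: int):
    # One process per GPU; the NCCL backend handles GPU-to-GPU communication,
    # which in a data center rides over the fiber-optic fabric described above.
    dist.init_process_group("nccl", rank=rank, world_size=world_size)
    torch.cuda.set_device(rank)

    model = torch.nn.Linear(4096, 4096).cuda(rank)   # stand-in for one LLM layer
    model = DDP(model, device_ids=[rank])            # wraps the model with gradient all-reduce
    optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)

    for _ in range(10):
        x = torch.randn(32, 4096, device=rank)       # this GPU's shard of the batch
        loss = model(x).pow(2).mean()                 # toy loss for illustration
        loss.backward()                               # gradients are synchronized here
        optimizer.step()
        optimizer.zero_grad()

    dist.destroy_process_group()
```

In a real hyperscale deployment the same pattern is launched across hundreds of thousands of GPUs, typically combined with other parallelism schemes, so that every chip works on its portion of the problem while staying in lockstep with the rest.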


