Research shared by the UK data centre operator Pulsant shows that organisations running artificial intelligence systems are facing mounting pressure over where those systems are hosted. Banks, hospitals and manufacturers all report higher costs and heavier infrastructure demands as AI use grows.
Analysts quoted by Pulsant forecast that global data centre demand could reach between 171 and 219 gigawatts by 2030, up from about 60 gigawatts today. AI-ready facilities could account for around 70% of that demand, according to the same estimates.
AI systems use far more electricity than everyday business software. Training models requires dedicated graphics processors running continuously for weeks. Large data sets move constantly across networks, raising energy use and cooling needs.
Even after training ends, live AI systems rely on consistent processing speed. Delays can affect fraud checks, medical analysis or production forecasts. These pressures have made infrastructure decisions a board-level issue rather than a technical detail.
Is The Public Cloud Losing Its Appeal For Long Term AI Use?
Public cloud platforms remain popular at the start of AI projects. Teams can access powerful processors quickly and test different model designs without buying equipment. This suits trials and short-term work.
Problems emerge once AI systems have to run day and night. Renting advanced processors for long periods drives spending sharply higher. Data leaving cloud platforms also triggers egress charges, which inflate monthly bills. Pulsant notes that finance teams often react once these costs come into view.
Capacity has also become an issue because demand for high-end processors has at times exceeded supply on shared cloud platforms. This has left organisations waiting for resources during busy periods.
Banks, insurers and healthcare groups handle sensitive records. Cloud platforms meet strong digital security standards, but data location and audit control sit outside the customer’s direct control, which unsettles compliance teams.
What Makes Colocation More Attractive For AI Systems?
Colocation allows organisations to install their own hardware in specialist data centres built for heavy power draw and advanced cooling. These facilities keep dense processor clusters running around the clock.
Pulsant cited survey findings that explain this move: 54% of IT leaders ranked high-density power and cooling as the top requirement for hosting AI workloads. Direct links to public cloud platforms followed at 51%, and support for high-performance computing infrastructure came next at 49%.
This set-up allows demanding training work to run on stable equipment while retaining cloud access when extra capacity is needed. Connectivity between private systems and public platforms plays a big role in that model.
Cost control also differs. Cloud hosting avoids upfront purchases, but long-running AI workloads create unpredictable monthly charges. Colocation requires early investment, but operating costs stay steady, which finance teams can map more easily.
Stephen Spittal, Technology Director at Pulsant, says: “AI puts far more strain on infrastructure than traditional IT. Once you move past the pilot stage, the demand for power, cooling, and connectivity is constant.
“Colocation gives organisations the capacity to run those workloads without interruption in sites specifically designed to be efficient – and the confidence that performance will hold up as projects grow.
“AI is moving to the Edge – inference needs to move closer to consumers. Our latest research indicates 87% of UK businesses plan to migrate partially or fully from public cloud in the next two years.”