AI Data Centres: Why They Use More Power and Need More People
09 Apr, 2026, 12:33
Key Takeaways:
- AI data centres concentrate vast amounts of computing into fewer physical spaces vs traditional facilities, placing greater strain on power and cooling infrastructure.
- Energy availability is increasingly dictating where and how new data centres are planned, built, and operated.
- The workforce profile for an AI data centre differs markedly from traditional ones, with new demand for power, cooling, commissioning, and controls expertise across the full asset lifecycle.
- NES Fircroft supports data centre projects globally with specialist permanent, contract, and project staffing solutions across energy, power, construction, and operations.
Artificial intelligence (AI) is changing what data centres are built to handle, how they’re designed, and how much energy and infrastructure they require. Traditional cloud computing was built mainly around efficiency at scale, but AI workloads place greater emphasis on raw compute performance, often packed into much denser physical environments. For these AI data centres, the challenge is no longer the data storage or network throughput that constrained earlier generations, but electricity capacity, heat, and the people required to manage both.
For operators, investors, and construction partners, understanding why AI data centres require more power, and why they need broader technical teams, is essential to the success of future projects.
What is an AI data centre?
Unlike traditional enterprise or cloud-managed data centres, AI-led facilities are designed around accelerated computing. This means dense clusters of graphics processing units (GPUs) or tensor processing units (TPUs), connected by high-bandwidth fabrics and supplied with far more power per rack than legacy environments.
Modern GPUs already consume between 700W and 1,200W per chip, and published roadmaps suggest future designs will go higher still. At the rack densities those chips produce, conventional air cooling becomes unviable. Because AI workloads concentrate so much demand into a smaller number of racks, everything from facility layout to power distribution and operating models starts to look very different from traditional data centres.
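To see how quickly those chip-level figures add up at rack scale, here is a rough back-of-envelope sketch in Python. The GPU count, per-chip wattage, and overhead factor are illustrative assumptions for the sketch, not vendor specifications:

```python
# Illustrative only: rough per-rack power estimate from the per-chip figures above.
# GPU count, wattage, and overhead are assumptions, not vendor specifications.

def rack_power_kw(gpus_per_rack: int, watts_per_gpu: float, overhead: float = 1.3) -> float:
    """Estimate rack power in kW; `overhead` roughly accounts for CPUs,
    networking, and power-conversion losses on top of the GPU load."""
    return gpus_per_rack * watts_per_gpu * overhead / 1000

# 72 GPUs at 1,000 W each with 30% overhead already exceeds
# what conventional air cooling can handle.
print(round(rack_power_kw(72, 1000), 1))  # 93.6 (kW)
```

Even conservative inputs land a single rack at roughly the load of dozens of legacy racks, which is why power distribution and cooling dominate AI facility design.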
This shift changes staffing requirements as well. Operators need engineering and technical specialists who understand not only IT loads, but also high-voltage power systems, advanced cooling architectures, and tightly coupled mechanical and electrical controls.

Why AI Consumes So Much Power
The power profile of an AI data centre reflects both the nature of the workloads and their deployment. Training large AI models puts hardware under constant strain: systems are pushed hard, often for days or weeks at a time, and frequently run close to their thermal limits. Research cited by Allianz shows that this kind of training setup can use several times more electricity than typical enterprise workloads, and a single ChatGPT query can use up to 10x the power of a traditional web search. Inference workloads, while less demanding individually, can scale across large clusters when deployed at commercial or global scale.
This has two immediate consequences. First, total electricity consumption rises sharply, even in facilities designed with strong efficiency targets. Second, demand becomes less forgiving of interruptions or power instability, increasing reliance on robust grid connections, onsite generation, and energy storage.

Data Centre Types & Classifications – And How AI Impacts Each
AI workloads aren’t confined to a single type of build. They are reshaping multiple data centre types and classifications, each with distinct infrastructure and workforce implications.
Enterprise data centres
Often constrained by existing buildings and power connections, enterprise environments face the greatest challenge in accommodating AI. Retrofitting for higher rack densities and liquid cooling typically requires specialist electrical, mechanical, and commissioning expertise.
Colocation data centres
Colocation providers have become a natural home for AI due to access to land, grid capacity, and carrier-dense data centre connectivity. Many now offer AI-ready halls with bespoke power and cooling, increasing demand for design engineering and site-based technical staff.
Hyperscale data centres
Hyperscale operators absorb a significant share of AI demand, and are the largest growth area to date, using purpose-built campuses to support extreme power densities. These projects rely on large, multidisciplinary construction and commissioning teams, with contract staff brought in to support peak phases of delivery.
Edge and modular data centres
AI inference workloads are renewing interest in edge and modular data centre builds, particularly in use cases where low latency matters. While these sites operate at a smaller scale, power and cooling constraints still apply, and the reduced footprint means engineering decisions and controls integration have to be right the first time.
Cloud-based data centres
Cloud operators are increasingly blending AI-specific infrastructure into existing cloud estates rather than isolating it in standalone facilities. This raises expectations for operational teams who must manage mixed workloads, varied cooling systems, and complex energy contracts and power-procurement arrangements.
Across all types, AI increases the need for experienced engineering talent at both design and operational stages, a trend changing data centre recruitment and hiring strategies globally. If you’re interested in seeing where capacity is being built and expanded right now, take a look at some of the most active data centre projects underway:
Suggested Reads: Top Data Centre Projects in APAC and Top Data Centre Projects in the US & Canada
From Site Selection to Operations
AI affects every stage of a data centre project; decisions made early on shape what’s possible later, particularly as power and cooling requirements continue to rise:
- Site selection and early design: Power availability has become the primary constraint in site selection. Grid capacity, substation proximity, and planning conditions now outweigh many traditional considerations. As a result, electrical engineers, grid specialists, and energy consultants are involved much earlier to de-risk feasibility.
- Construction and fit-out: AI-led projects typically have larger electrical scopes, more complex cooling installations, and tighter coordination between multiple trades. Commissioning engineers and control specialists play a central role at this stage, validating performance under extreme loads.
- Operations and maintenance: Once live, AI data centres operate closer to physical limits than traditional facilities. This places more pressure on operations staff, who must manage power quality, thermal performance, and uptime simultaneously. Many operators rely on contract engineers to cover commissioning overlap and early operational ramp-up.
Power, Microgrids, and the Grid Constraint Problem
AI data centres already consume an estimated 415TWh globally, with demand expected to more than double by 2030, but in many regions, the limiting factor is access to electricity.
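A quick way to put that doubling claim in perspective is to work out the annual growth rate it implies. The baseline year and time horizon below are assumptions for illustration only:

```python
# Back-of-envelope: the compound annual growth rate implied by
# "more than double by 2030". Baseline year is an assumption.

baseline_twh = 415        # estimated consumption today (figure from the article)
target_twh = 2 * baseline_twh
years = 4                 # assumed horizon: 2026 -> 2030

cagr = (target_twh / baseline_twh) ** (1 / years) - 1
print(f"{cagr:.1%}")      # 18.9%
```

Sustaining growth near 20% a year is far beyond what most grids were planned for, which is why electricity access, rather than land or capital, is becoming the binding constraint.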
In response, developers are integrating onsite generation, battery storage, and hybrid grid solutions into base designs to supplement or stabilise grid supply. Microgrids and long-term power purchase agreements (PPAs) are becoming the standard for new builds. They can reduce exposure to grid congestion and improve resilience, but they also add another layer of technical complexity to a project. Running onsite generation and storage brings responsibilities that sit closer to the energy sector than to traditional IT operations, from regulatory compliance through to maintenance and asset management.
Cooling and Sustainability Pressures
Once rack densities push above roughly 70kW, liquid cooling stops being optional. Air cooling, which served the industry well for years, struggles to cope as rack loads rise, and many new AI deployments are already operating at 100-120kW per rack or more, forcing changes in mechanical and electrical design.
Methods such as direct-to-chip and immersion cooling provide a more effective way to transport heat away, but they also impact how facilities are operated and maintained. Liquid cooling affects everything from pipework layouts and heat-rejection systems to maintenance procedures and even safety protocols. Sustainability considerations, including heat reuse and planning requirements, add further complexity, expanding the range of skills needed for modern sites, although these depend on local infrastructure and regulation.
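The density thresholds discussed above can be sketched as a simple decision rule. The exact cut-offs below are assumptions drawn loosely from the rough figures in this section, not an engineering standard:

```python
# A minimal sketch of cooling selection by rack density. The cut-off values
# are illustrative assumptions, not an engineering standard.

def cooling_approach(rack_kw: float) -> str:
    """Map a rack's power density (kW) to a plausible cooling approach."""
    if rack_kw < 20:
        return "conventional air cooling"
    if rack_kw < 70:
        return "enhanced air / rear-door heat exchangers"
    return "liquid cooling (direct-to-chip or immersion)"

# A typical 100-120kW AI rack falls well into liquid-cooling territory.
print(cooling_approach(110))  # liquid cooling (direct-to-chip or immersion)
```

In practice the choice also depends on site constraints, vendor support, and retrofit feasibility, which is part of why these projects need broader mechanical and controls expertise.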

Workforce Demand and the Growing Skills Gap
Automation and remote monitoring have reduced headcount in some areas of traditional data centre operations, leading to the assumption that newer facilities will follow suit, but AI data centres challenge that. While certain tasks are increasingly automated, the overall technical burden of these facilities is higher, and that applies to operations alone; construction still requires a large workforce.
More than half of data centre operators report difficulty in recruiting and retaining suitably skilled talent, particularly in electrical, mechanical, and operational roles. The World Economic Forum has also identified workforce availability as a growing constraint on data centre expansion, alongside power and planning.
Demand for specialist talent tends to peak at several points across the project life cycle:
- Power and electrical engineering during design, construction, and grid integration
- Commissioning and controls expertise to validate complex, high‑density systems
- Operations and maintenance specialists capable of managing liquid‑cooled and energy‑intensive environments
These requirements persist long after handover, with many of the same skills needed well into steady-state operations, meaning there’s a need for sustained access to experienced technical professionals, both short- and long-term.
Why You Need to Partner With a Specialist Data Centre Recruitment Firm
As AI changes the build and scale of data centres, the way projects are resourced is evolving with it. Operators are relying more on flexible teams that can work across design, construction, commissioning, and operations, and the right recruitment partner makes this possible.
NES Fircroft brings decades of experience across data centres, power, energy, and construction, supported by a global delivery model. With teams operating across Asia, Australasia, the Americas, the Middle East, and Europe, we support you with tailored permanent, contract, and project staffing solutions, deploying qualified professionals in line with local market conditions and project requirements. This is underpinned by our award-winning payroll and compliance teams, enabling you to mobilise talent and start projects anywhere in the world with confidence that all legal and employment regulations are met.
To discuss how we can support your next AI data centre project, contact our specialist teams today.
FAQs
Why do AI data centre projects require more specialist staff than traditional builds?
AI data centres operate at much higher power densities and closer to physical limits, which increases complexity across the full lifecycle. This drives demand for specialists in power, cooling, commissioning, and controls, rather than generalist data centre roles.
When do staffing demands peak for a data centre project?
Staffing demand tends to rise at multiple points, often during early design and grid integration, again during construction and commissioning, and then through early operations as systems stabilise and performance is validated under live loads.
What skills are most in demand on AI data centre builds?
Demand is strongest for electrical and power engineers, commissioning and controls specialists, and operations staff with experience in high‑density or liquid‑cooled environments.
How do specialist data centre recruitment agencies support AI‑led projects?
Specialist recruitment partners, such as NES Fircroft, help operators access experienced talent across permanent, contract, and project‑based roles. This enables teams to scale during peak delivery phases, mobilise internationally where required, and remain compliant with local labour regulations.

