The 2020s will be defined as the decade society went ‘all in’ on AI. Businesses are deploying chatbots to boost productivity, health services are adopting AI‑driven diagnostics, and customers are experiencing faster, more personalised digital services. With UK business investment in AI expected to rise by 40% in the next two years, its influence on everyday operations will only intensify.
Yet beneath the excitement lies an overlooked reality: AI’s significant environmental footprint. Few organisations grasp its impact across the value chain, from the extraction of rare earth minerals for semiconductors to the energy required to train large language models. Then there are the vast volumes of water consumed by data centres, a demand projected to reach 4.2–6.6 billion cubic metres of global water withdrawal by 2027, more than half the UK’s entire annual usage. Against this backdrop, AI growth and environmental commitments may seem at odds, but they don’t have to be.
AI growth must have guardrails
The environmental impacts of AI, if left unmanaged, could have far‑reaching consequences. Data centres already account for around 1.5% of global electricity consumption (roughly 415 TWh a year), and the International Energy Agency reports that demand has risen by 12% annually over the past five years. With accelerated computing for AI driving most of this growth, consumption is projected to more than double to 945 TWh by 2030. Higher electricity demand often increases reliance on fossil‑fuelled grids during peak periods, while large‑scale data centres can strain local and national networks, prompting additional power infrastructure that carries its own environmental footprint.
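As a rough sanity check on these figures, the annual growth rate implied by the two data points can be worked out directly. This sketch assumes ~415 TWh as a 2025 baseline and ~945 TWh as the 2030 projection; the exact base year is an assumption, not stated in the IEA figures quoted above:

```python
# Back-of-the-envelope check of the data-centre electricity figures.
# Assumed: ~415 TWh baseline in 2025, ~945 TWh projected for 2030.
baseline_twh = 415.0   # estimated consumption today
projected_twh = 945.0  # projected consumption by 2030
years = 5              # assumed 2025 -> 2030 horizon

# Implied compound annual growth rate (CAGR)
cagr = (projected_twh / baseline_twh) ** (1 / years) - 1
print(f"Implied annual growth: {cagr:.1%}")  # roughly 18% a year
```

Notably, the implied rate is even steeper than the 12% annual growth reported for the past five years, which underlines how much of the projected increase is attributed to accelerated computing for AI.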
Unchecked AI growth comes with a steep environmental cost and delaying action risks locking in unsustainable practices that will be hard to reverse.
Water usage compounds the issue. The millions of litres of freshwater required to cool data centres can intensify drought risks and reduce agricultural productivity. At the same time, the extraction of rare earth minerals needed for AI hardware places pressure on fragile ecosystems, contributing to biodiversity loss and record levels of toxic e‑waste, as highlighted in the United Nations’ 2024 Global E‑waste Monitor.
It all starts from within
For organisations, the real challenge is striking the right balance between reducing the environmental footprint of AI and using AI to advance sustainability goals. While AI can accelerate climate action and strengthen decision‑making, these benefits risk being overshadowed if the footprint of AI systems themselves isn’t measured or managed. Most businesses now set ambitious sustainability targets and must comply with tightening climate regulations, yet many still struggle to square these commitments with the drive for greater scalability and productivity through AI.
This tension often stems from not knowing where to begin internally. Understanding the true impact of AI, aligning it with corporate sustainability strategies, and embedding responsible practices across teams can feel daunting without a comprehensive plan. That’s why a structured approach matters. Methods such as STAR provide a practical framework to help organisations make informed decisions about how AI is deployed, measured, and improved:
- Strategise: Identify where AI can actively drive sustainability and where its environmental impact must be mitigated. For example, set a specific goal such as “Ensure 50% of AI workloads run on renewable-powered infrastructure by 2027”, while using AI to optimise energy forecasting or reduce supply chain emissions.
- Tally: Measure impacts using tools such as Hugging Face’s AI Energy Score, which enables AI developers to evaluate and compare the energy consumption of AI models. These tools help quantify trade‑offs and identify hotspots for improvement.
- Act: Apply best practice across the AI lifecycle, from model selection to data centre location. For instance, shifting workloads to regions with renewable‑heavy grids can significantly cut emissions.
- Review: Continuously monitor, report, and refine. Incorporate sustainability metrics into ESG or annual impact reports and align to global standards.
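The reasoning behind the “Act” step can be made concrete with a simple estimate: a workload’s emissions are roughly its energy use multiplied by the carbon intensity of the grid it runs on. The sketch below uses hypothetical placeholder figures (the energy volume and both grid intensities are illustrative assumptions, not measured data):

```python
# Illustrative sketch: the same AI workload run on two grids with
# different carbon intensity. All figures below are hypothetical
# placeholders chosen for illustration, not real measurements.

def workload_emissions_kg(energy_kwh: float, grid_intensity_g_per_kwh: float) -> float:
    """Estimate CO2e emissions (kg) as energy used x grid carbon intensity."""
    return energy_kwh * grid_intensity_g_per_kwh / 1000.0

energy_kwh = 12_000.0  # hypothetical monthly energy use of an AI workload

fossil_heavy = workload_emissions_kg(energy_kwh, 450.0)    # e.g. coal/gas-heavy grid
renewable_heavy = workload_emissions_kg(energy_kwh, 40.0)  # e.g. hydro/wind-heavy grid

print(f"Fossil-heavy grid:    {fossil_heavy:,.0f} kg CO2e")
print(f"Renewable-heavy grid: {renewable_heavy:,.0f} kg CO2e")
print(f"Reduction: {1 - renewable_heavy / fossil_heavy:.0%}")
```

Even with crude numbers, the comparison shows why data centre location is one of the highest-leverage choices in the “Act” step: the workload itself is unchanged, yet the emissions differ by an order of magnitude.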
Driving collective action
While internal decisions lay the foundation, meaningful progress requires external collaboration to make AI genuinely sustainable. No organisation can tackle AI’s environmental impact alone, so partnerships between government, industry, and academia are critical. As a first step, engaging with institutions such as Climate Change AI or The Alan Turing Institute’s Environment and Sustainability Interest Group can help organisations align with global sustainability priorities and follow recommended guidance.
On a more granular level, organisations must embed a collaborative approach to AI sustainability that extends beyond internal teams to suppliers and stakeholders. This means aligning with emerging standards and integrating sustainability into every stage of AI adoption, from procurement to deployment. For example, vendor selection could prioritise partners operating renewable-powered data centres or offering low-carbon AI models. Collaboration also enables access to innovation, such as pooling resources for greener computing or adopting industry-wide benchmarks for energy and water usage. Ultimately, these partnerships can ensure that a focus on sustainability goes from being a compliance exercise to providing a competitive advantage, positioning organisations as leaders in responsible AI.
Delaying action risks progress
The environmental cost of unchecked AI growth is steep, and delay risks entrenching practices that will be hard to reverse. Sustainability must therefore be treated as a shared responsibility between governments, technology providers, and end users, not an optional extra. As regulatory pressure and public scrutiny intensify, transparent reporting on AI’s energy, water, and material impact will become a business imperative. Organisations that act now to embed sustainability into their AI strategies won’t just meet compliance requirements; they’ll lead the market in shaping responsible innovation.
Andrew Grigg
Andrew Grigg is Head of Sustainable AI at Sopra Steria Next UK and is a member of the Government Digital Sustainability Alliance’s AI working group which is defining sustainability standards to be adopted cross-government.