Dear AI, Can You Be Green?

Illustration by Tori O'Campo
Reducing AI’s huge environmental impact
By Tanner Sherlock
September 18, 2025

OpenAI and Donald Trump announced “Stargate” in January of this year, a private project that aims to invest $500 billion in AI infrastructure by 2029. Apple announced a similar investment, also targeting the $500 billion mark, in the manufacturing of hardware and construction of data centers in the US over the next four years. Google has plans to spend $75 billion on AI infrastructure in 2025 alone.

Although the technology has been controversial since its initial surge in December 2022, the AI boom, spearheaded in the popular consciousness by generative AI models developed by tech giants such as Meta, Google, and OpenAI, has shown little evidence of slowing down. By the end of January 2023 — less than two full months after its public release — ChatGPT reached over 50 million weekly users. By April 2025, that number ballooned to 800 million weekly users.

AI research has been ongoing for decades, and only a subset of it has focused on the kinds of widely used large language models that most people think of when they hear “AI.” Other applications include research such as a study conducted at Stanford, which explores how AI can simulate clinical trials for medical drugs, accelerating the process by which they’re made available to the public. Companies such as Space Intelligence are also pairing AI with satellite imagery to map areas affected by mass deforestation. So, regardless of whether you view the continuing development of AI technology as good or evil, it’s here to stay.

Which poses a significant problem for our environment.

It’s estimated that GPT-4 — OpenAI’s most powerful large language model until it released GPT-5 this past August — cost over $100 million and consumed more than 50 gigawatt-hours of electricity during its training. For context, this is enough to power the city of San Francisco for three days. A 2025 study calculated that the hardware manufacturing, model development and complete training of a series of generative AI models released around 878,000 pounds of carbon emissions, roughly as much as 98 homes in the United States emit over a single year.

ChatGPT users consume about as much energy in twenty days as the city of San Francisco does in one.

Once a generative AI model is made available to the general public, energy demands largely depend on the parameter count of the model (the number of variables learned by a model during its training that affect how it generates responses), the complexity of prompts given by users and other such elements of the response-generation process. In other words, the more complicated the prompt, the more energy is used to generate a response. According to researchers at MIT, an average response from Llama 3.1 405B, a model with over 400 billion parameters, uses the same amount of energy as a microwave does for eight seconds.

That may not sound like a lot, but when a model like ChatGPT receives over 2.5 billion prompts per day, and GPT-4 is estimated to use over 1.7 trillion parameters, that energy consumption balloons quickly. Using an average electricity consumption figure offered by OpenAI CEO Sam Altman, global ChatGPT users consume about as much energy in twenty days as the city of San Francisco does in one.
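The twenty-day comparison can be checked with back-of-the-envelope arithmetic, assuming Altman’s publicized figure of roughly 0.34 watt-hours per query and deriving San Francisco’s daily consumption from the training statistic above (50 gigawatt-hours powering the city for three days):

```python
# Back-of-the-envelope check of the twenty-day claim. The per-query
# figure is Altman's publicized average; the SF figure is derived
# from the article's training statistic, not a utility record.
WH_PER_QUERY = 0.34          # watt-hours per ChatGPT query (Altman)
QUERIES_PER_DAY = 2.5e9      # reported daily prompt volume

SF_DAILY_WH = 50e9 / 3       # 50 GWh powered SF for three days

chatgpt_daily_wh = WH_PER_QUERY * QUERIES_PER_DAY   # ~0.85 GWh/day
days_to_match_sf = SF_DAILY_WH / chatgpt_daily_wh   # ~19.6 days
print(round(days_to_match_sf))  # → 20
```

The numbers line up: global ChatGPT usage over roughly twenty days equals one day of powering San Francisco.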

Electricity consumption and thus emissions aren’t AI’s only environmental stressors. Third-party research has estimated that GPT-3 consumed about 500mL of water for every 10 to 50 queries. OpenAI claims that an average ChatGPT query uses about 0.3mL of water. Based on OpenAI’s claimed 2.5 billion daily prompts, that places ChatGPT’s water usage at around 804,400 liters per day, about 0.2% of the city of Los Angeles’ daily water use. Some researchers have estimated that the training of OLMo, a series of large language models made by AI research non-profit Ai2, consumed roughly 2.769 million liters of water, about as much as a single American uses over the course of 24.5 years.
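The daily water figure follows directly from OpenAI’s per-query claim, as a quick sketch shows (assuming roughly 0.32 mL per query, the unrounded version of Altman’s stated figure, and 2.5 billion daily prompts):

```python
# Sanity check on ChatGPT's estimated daily water use. Both inputs
# are OpenAI's own claimed figures, not independent measurements.
ML_PER_QUERY = 0.32          # ~0.000085 gallons per query, in mL
QUERIES_PER_DAY = 2.5e9      # reported daily prompt volume

liters_per_day = ML_PER_QUERY * QUERIES_PER_DAY / 1000  # mL → L
print(f"{liters_per_day:,.0f} L/day")  # → 800,000 L/day
```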

Data like this is useful, but it’s a bit of a guessing game. While plenty of industries are required to provide information on their environmental footprint, AI projects aren’t, and thus companies tend to be tight-lipped on the subject. That’s a problem for a lot of reasons, not the least of which is that it’s difficult to give actionable, specific solutions and critiques to a problem when you’re missing such information.

What seems apparent, though, is that this level of compounding resource extraction and carbon emitting isn’t sustainable long-term for what is arguably the most significant developing technology on the planet. Optimistic supporters of AI might point to the societal progress suggested by the incorporation of AI technology into all sorts of industries, but that progress will be for naught if it irreparably damages our environment. Luckily, there’s an immense amount of work being done by developers, hobbyists, policymakers and more to build a green future for AI.

***

The development of informational tools to counterbalance the sparse amount of developer-released data has been foundational to the green-AI movement. The AI Energy Score project, for example, is “an initiative to establish comparable energy-efficient ratings for AI models.”

The project was created by a team at Hugging Face, a French-American company that develops computational tools for building software using machine learning. Founded in 2016, much of their work has focused on creating a more socially-conscious ecosystem for AI development, with a particular focus on the environmental impact of AI and the democratization of AI code and tools. Their AI Energy Score ratings are standardized on a scale from one to five, with a public leaderboard feature that allows anyone to view and compare a model’s rating against other models that have been reviewed and scored. Submissions are open and simple to make, but so far, only open source models have been submitted for testing.

Researchers at Hugging Face also contributed to Code Carbon, a tool that estimates the CO2 emissions produced by running a given piece of software (including AI) and lays out how developers can optimize their code or alter their cloud infrastructure to reduce emissions. ML CO2 Impact is a similar tool, targeted at AI models, that calculates the carbon emissions of a GPU (the primary computing component that powers AI), while Green Algorithms is a project from the University of Cambridge that collects tools and information on making computer science as a whole more environmentally friendly.
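The core idea behind these estimators is simple: multiply measured energy use by the carbon intensity of the local power grid. The sketch below illustrates that principle only; the function name and numbers are hypothetical, not the actual API of any of the tools above.

```python
# Illustrative sketch of the estimation principle behind tools like
# Code Carbon: energy consumed times the grid's carbon intensity.
def estimate_co2_kg(energy_kwh: float, grid_g_co2_per_kwh: float) -> float:
    """Convert energy use into estimated CO2 emissions in kilograms."""
    return energy_kwh * grid_g_co2_per_kwh / 1000  # grams → kilograms

# A GPU drawing 300 W for one hour on a ~400 gCO2/kWh grid:
print(estimate_co2_kg(0.3, 400))  # → 0.12 kg CO2
```

The real tools add value by measuring the energy term directly from hardware counters and looking up regional grid intensity automatically, but the underlying arithmetic is this simple.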

Tools like these provide vital information for anyone who interfaces with AI at any level, and while they’re mostly used by smaller and hobbyist developers, larger companies are making strides in energy efficiency, too. OpenAI has released two large language models, GPT-OSS (20B) and GPT-OSS (120B), that are notably energy efficient compared to models of a similar size. Most of the difference between models, as far as energy consumption goes, comes down to performance optimization strategies that reduce the computation needed to generate each response.

Google has also bragged about the energy efficiency of its LLM project, Gemini: an average response consumes about as much energy as a microwave running for two seconds. That kind of energy efficiency is important coming from a major tech corporation, because it proves that large-scale AI models can be optimized to consume less energy (and thus emit less carbon) without diminishing performance.

Most of that energy efficiency comes down to software optimization techniques employed by developers. A technique called pruning, for example, is deployed during an AI’s training stage and involves developers finding and removing certain parameters that add nothing to the model’s overall performance. It’s essentially trimming fat, without affecting generated responses. Another technique, quantization, reduces the amount of computer memory taken up by a single parameter, improving energy efficiency by up to 51 percent without affecting performance. The value of these strategies to developers (and the companies that pay them) is straightforward: performance efficiency theoretically means a better user experience (thus more users, thus more money), and it usually translates to improved energy efficiency, which lowers costs and also happens to be better for the environment.
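On a toy list of weights, the two techniques can be sketched in a few lines. This is a minimal illustration of the ideas, not production code; the threshold and scale values are made up, and real systems operate on large tensors rather than Python lists.

```python
# Toy sketches of the two optimization techniques described above.

def prune(weights, threshold=0.05):
    """Magnitude pruning: zero out weights too small to matter."""
    return [0.0 if abs(w) < threshold else w for w in weights]

def quantize(weights, scale=127):
    """Quantization: map floats in [-1, 1] to compact 8-bit integers."""
    return [round(w * scale) for w in weights]

weights = [0.8, -0.02, 0.4, 0.01, -0.6]
print(prune(weights))     # → [0.8, 0.0, 0.4, 0.0, -0.6]
print(quantize(weights))  # → [102, -3, 51, 1, -76]
```

Pruned weights can be skipped entirely at inference time, and 8-bit integers take a quarter of the memory of 32-bit floats, which is where the energy savings come from.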

On the hardware side, there’s work being done in the tinyML space, a subfield of machine learning that focuses on optimizing AI models to run on small, low-power devices like microcontrollers, usually resulting in a hefty reduction in power consumption compared to AI models used on higher-power machines. Some devices are reported to use a thousand times less power than their larger counterparts. That, along with the lower cooling requirements of smaller devices, is a serious win for AI’s energy and water consumption. And while the field is still in its infancy, backing from industry organizations such as the Edge AI Foundation, combined with notable support from hobbyists and open source developers, paints a tentatively hopeful portrait.

 ***

So far, though, water efficiency is lagging behind energy optimization. The good news is that optimizing software for energy efficiency also reduces the amount of water used for cooling, since the heat produced by a piece of hardware is proportional to the energy used to power it. But in the deserts outside urban areas where data centers are growing like weeds (much of Arizona and New Mexico, for example), any water that isn’t used to directly support life is a drastic resource drain.

Google and Microsoft have publicly pledged to make their data centers either consume zero water or even become water positive over the next few decades, but haven’t been forthcoming on how. Nor are there binding water-use limits for this sector. Still, advancements in cooling technology, such as a new method proposed by researchers at UC San Diego that uses a fiber membrane to assist in pulling out liquids used in the cooling process, are coming online regularly, and the cost benefits they offer are likely to push companies to adopt and implement them sooner rather than later.

The water-use problem can be further mitigated by the growing interest in building data centers in areas that have an abundance of water or require less cooling due to the local climate, though whether or not that interest will be offset by the short-term financial (and convenience) benefits gained from building data centers on unused land near high-population desert cities is yet to be seen.

The Apple Data Center in Mesa, Arizona. Courtesy of Jim Todd

 

Policy is another tool to lighten AI’s environmental impact, though efforts in the United States have been lackluster. The Artificial Intelligence Environmental Impacts Act of 2024, for example, was introduced to the House and Senate on February 1st, 2024. If passed, the bill would require the EPA to carry out a study shedding light on the effect America’s AI companies are having on the environment. It would also require the National Institute of Standards and Technology to “develop a voluntary reporting system for the reporting of the environmental impacts” of AI. Given that America is currently a leading figure in the global “AI race,” information like this would fundamentally shape the environmental trajectory of AI as a technology. Perhaps unsurprisingly, though, the bill has seen no major progress since being introduced, likely because the current administration is investing in AI while rolling back all manner of environmental protections, including the elimination of the EPA’s Office of Research and Development.

From the gutting of tax credits for green energy sources to his signing of several executive orders that promote America’s coal and oil industries, the Trump administration has been eager to reduce the country’s investment in renewable energy to promote the fossil fuel industry. Much of the justifying rhetoric has focused on securing America’s “economic prosperity and national security,” arguing that an investment in fossil fuels creates jobs and reduces electricity costs for consumers. Yet renewable energy is becoming the most affordable form of power in most parts of the world, and a report from the United Nations points out that the number of jobs created by the research, development and construction of renewable energy already outnumbers jobs in the fossil fuel industry.

The EU, meanwhile, is making faster and more concrete strides towards addressing AI’s environmental impact. The European Green Deal, a set of proposals to reduce net greenhouse emissions in the EU, has laid out legally binding targets for carbon neutrality by 2050, and AI technology used in energy management, environmental monitoring, and more is required to comply. A related initiative would establish an EU-wide program to rate the sustainability of EU data centers as part of the Energy Efficiency Directive.

Environmental protection is also a core value of the EU AI Act, which stipulates that providers of general-purpose AI models should share the energy consumption of their models and of “other resources” used during an AI system’s life cycle. It also encourages the adoption of voluntary codes of conduct that include measures to evaluate and minimize the environmental impact of AI systems. Adoption of those codes is voluntary and not legally enforceable, but it’s a step in the right direction.

Regulation on AI technology has been a hotly debated topic, but the reality of the situation is that companies aren’t incentivized to be transparent, nor environmentally friendly; they’re just incentivized to act like they are. While that might steer them in the right direction every so often, it banks on good faith and can result in greenwashing rather than greening.

***

It isn’t enough to put the onus of greening AI solely on the shoulders of corporations and policymakers. Consumer AI usage has yet to adapt to the still-emerging statistics behind AI’s environmental impact, but the research points to a few specific habits that would lighten the consumer footprint.

Selective model usage is one of the biggest topics being discussed in that regard. The current trend when using AI is to select one of the giant models offered by major tech corporations and use it for every task an LLM can generally perform. That ethos likely arises from a mix of misunderstanding the technology and user-acquisition advertising from those corporations, but there are a huge number of models in existence, and many of them are designed to be particularly good at a specific set of tasks, rather than generally alright at a wider variety. AlphaFold, for example, is a model used to predict the 3D structure of a protein from its amino acid sequence. LeNet has been used by the U.S. Postal Service to recognize handwritten digits. Even the popular LLMs are better suited to some tasks than others, and choosing the right one for a given task will likely mean writing simpler and fewer prompts, which, combined with a particular model’s efficiency at a specific task, reduces the computational power needed to generate a response.

Use AI when necessary or especially beneficial, not as a default.

The other side of the coin is that users need to be discerning about when they use AI at all. The fact that generative AI can produce text, images and audio has been treated as a license to use it whenever a person needs to write, create an image or produce audio. Aside from ethical quandaries regarding job replacement, disinformation, etc., that mindset leads to an overuse of AI tools that unnecessarily depletes Earth’s health while accomplishing little besides possibly saving a bit of time.

Ask a friend to help you check an email instead of using AI to do it; look up a tutorial and open an image editor to adjust a photo’s color yourself. This isn’t to say that those AI tools don’t have their place; it just means that, if users want to limit their environmental impact, they shouldn’t treat AI as an omnipotent tool for fixing or improving every problem or task at hand. It comes with a significant, if not immediately visible, cost. Use AI when necessary or especially beneficial, not as a default. That sort of environmentally conscious mindset creates immediate, if small, change, which can help offset the grander, drawn-out reforms coming from governments and corporations.

Meanwhile, Amazon, Google, Meta and Microsoft are all investing heavily in nuclear power, with an eye on generating the electricity needed to power their AI. Most of the PR from the companies focuses on the money the projects will inject into local economies, the jobs the plants will supposedly generate, etc., but those projects won’t be ready to produce power for at least five to ten years.

Researchers studying AI argue that extrapolating the energy-use data of today isn’t an effective method of predicting the energy demands of tomorrow, so massive investment in large-scale, non-carbon-emitting energy production makes a lot of sense, especially if you aren’t sure just how much energy you’ll be needing in twenty years. But paying more money to expand energy production is an incomplete solution to a complex problem. More holistic solutions are coming from work being done to lighten AI’s environmental impact more directly: tools to diagnose the problem, techniques to improve energy efficiency and efforts that are fundamentally rethinking the technology’s foundations to offer a more complete solution in support of a more sustainable future.

As attention on the issue rises, so too will the focus on solutions, along with the number of researchers working on them. Hopefully, it’ll be enough to keep up.

Help us sustain independent journalism...

Our team is working hard every day to bring you compelling, carefully-crafted pieces that shed light on the pressing issues of our time. We rely on caring supporters like you to help us sustain our mission. Your support ensures that we can continue to provide deeply-reported, independent, ad-free journalism without fear, favor or pandering. Support us today and make a lasting investment in the future.

Support the Magazine >>

Tanner Sherlock
Having just graduated with a Bachelor of Arts in Comparative Media Studies, Tanner Sherlock likes to consider himself a burgeoning writer and narrative designer. Outside the Collective, his work tends to focus on new media and emerging entertainment, but what he really cares about is finding and exploring stories that have an emotional and catalyzing impact on an audience. Working as an associate editor at Red Canary Magazine helps fulfill that goal for him. Tanner’s grey-furred feline writing buddy, Ash, actually does all the work, though.


Red Canary Magazine is a nonprofit based in Portland, Oregon.

We publish deeply reported journalism focusing on environmental, sustainability and social justice issues. Our goal is to bring you difference-making work that provokes discussions, inspires reflection and speaks to the times with stories that prove timeless.

PUBLISHER
Tracy McCartney

EDITOR-IN-CHIEF
Joe Donnelly

MANAGING EDITOR
Tori O’Campo

CONTENT CREATOR
Sam Slovick

ART DIRECTOR
Nancy Hope

CONTRIBUTING EDITORS
Erin Aubry Kaplan
Karen Romero
Tony Barnstone

ASSOCIATE EDITOR
Tanner Sherlock
