A cheat sheet for why using ChatGPT is not bad for the environment

by edward on 4/29/2025, 7:34 PM with 50 comments

by otterley on 4/29/2025, 8:31 PM

Can you please link to the primary source material? https://andymasley.substack.com/p/a-cheat-sheet-for-conversa...

If you all flag this article, dang will probably get around to fixing it.

by krunck on 4/29/2025, 9:01 PM

Isn't the real metric of concern the absolute amount of CO2 generated, since that is what actually impacts the environment? The fact that each person's AI queries contribute only a small amount to CO2 production doesn't make the sum of all that CO2 production go away.
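To make the aggregation point concrete, here is a rough sketch; the query volume, per-query energy, and grid carbon intensity below are all illustrative assumptions, not measured values:

```python
# Many small per-query footprints still sum to an absolute total.
# All inputs are illustrative assumptions, not measured values.
queries_per_day = 1_000_000_000    # assumed global daily query volume
wh_per_query = 3.0                 # assumed energy per query (Wh)
grid_gco2_per_kwh = 400.0          # assumed grid intensity (gCO2 per kWh)

kwh_per_day = queries_per_day * wh_per_query / 1000
tonnes_co2_per_day = kwh_per_day * grid_gco2_per_kwh / 1e6

print(f"{kwh_per_day:,.0f} kWh/day -> {tonnes_co2_per_day:,.0f} tCO2/day")
# ~3,000,000 kWh/day -> ~1,200 tCO2/day under these assumptions
```

Whether that total is large or small is exactly the kind of absolute comparison the per-query framing skips over.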

by mac-chaffee on 4/29/2025, 9:01 PM

I agree, but there's a lot of nuance to the next question, "well, what IS bad for the environment?", and to tech's role in answering it.

I've been unsatisfied with how people in tech address that complex subject so I wrote about it here: https://www.macchaffee.com/blog/2025/tech-and-the-climate-cr...

by devmor on 4/29/2025, 8:33 PM

Like every other "rebuttal" to this argument, this chooses to pretend that the complaint is about the power usage of making API calls, instead of the power usage of training models.

It's like if I said I was concerned about factory farming impacts and you showed me a video of meat packaging at a grocery store, claiming it alleviates my concerns.

by hugmynutus on 4/29/2025, 9:09 PM

I find this unconvincing. The actual discussion of LLM generation is very lacking.

The original link [1] cites a discussion of the cost per query of GPT-4o at 0.3whr [2]. When you read the document [2] itself, you see that 0.3whr is the lower bound and 40whr is the upper bound. The paper [2] is actually pretty solid; I recommend it. It uses public metrics from other LLM APIs to derive a likely distribution of context sizes for the average GPT-4o query, which is a reasonable approach given that the data isn't public. It then factors in GPU power per FLOP, average utilization during inference, and cloud/renting overhead. It admits this likely has non-trivial error bars, concluding the average is between 1-4whr per query.
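For anyone curious about the shape of that estimate, here is a crude sketch of the same style of calculation; every number below is a placeholder assumption of mine, not a figure taken from [2]:

```python
# Crude per-query energy estimate: FLOPs for the query, accelerator
# throughput and power, achieved utilization, and hosting overhead.
# Every value is an assumed placeholder, not a figure from [2].
params = 200e9               # assumed active parameters
tokens = 500                 # assumed tokens processed per query
flops = 2 * params * tokens  # ~2 FLOPs per parameter per token

gpu_flops_per_s = 1e15       # assumed peak accelerator throughput
gpu_watts = 700              # assumed accelerator board power
utilization = 0.1            # assumed fraction of peak achieved
overhead = 1.5               # assumed datacenter/hosting overhead

seconds = flops / (gpu_flops_per_s * utilization)
wh = seconds * gpu_watts * overhead / 3600
print(f"~{wh:.1f} Wh per query under these assumptions")  # ~0.6 Wh
```

Small changes to the assumed context length or utilization move the result across most of the 0.3-40whr range, which is why the error bars matter.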

This is disappointing to me, as the original link [1] brings in this source [2] to disprove the 3whr "myth" created by another paper [3], yet that 3whr figure lies squarely within the error bars the new source [2] arrives at.

Links:

1. https://simonwillison.net/2025/Apr/29/chatgpt-is-not-bad-for...

2. https://epoch.ai/gradient-updates/how-much-energy-does-chatg...

3. https://www.sciencedirect.com/science/article/pii/S254243512...

Edit: whr not w/hr

by reyqn on 4/29/2025, 9:05 PM

So the whole cheat sheet is basically "it's not bad because there are worse things"?

You can try explaining why it's not "that" bad for the environment, but the planet is still worse off than it was before ChatGPT existed.

Let's carry on inventing new ways to spend energy, but it's okay because we spend even more energy on other stuff.

It's kinda sad how the world saw climate change, said it was bad, but in the end decided to do nothing about it.

by etchalon on 4/29/2025, 8:28 PM

The "cheat sheet" seems to address the environmental impact of using ChatGPT, not the environmental impact of training the model.

by hydragit on 5/4/2025, 6:01 PM

What about the human cost? https://futurism.com/the-byte/ai-gig-slave-labor

by malvim on 4/29/2025, 9:01 PM

And what about all the development and testing of models? What about all the OTHER companies that can't wait to get a piece of this cake and are training and scraping the internet like there's no tomorrow? And all the companies that are integrating LLMs into their daily workflows with tons of API calls every day?

Come on…

by amos-burton on 4/29/2025, 9:25 PM

https://ourworldindata.org/electricity-mix

> ...Globally, coal, followed by gas, is the largest source of electricity production....

As long as this is the case, we can hardly even have the debate about the impact of these new technologies on climate alone.

Let me kindly remind you that we are well past the point of this being a single problem; we are dealing with planetary boundaries, and there are nine of them. Another reminder: CO2 pollution alone is a direct product of GDP, and there is no plan in sight for how competing countries would share proportional GDP cuts to reduce emissions. So even if we wanted to do something, we have not even started on the serious business.

Why AI? Because we are screwed. We failed on humanism, we failed on climate; we can't fail this one too, or we would just kick ourselves out of the real game.

It's kind of a megalomaniac idea, but I prefer that to your pathetic bullshit. So even though you are fucking cringe, go Elon.

Fire in the hole!

by aabhay on 4/29/2025, 9:01 PM

I think the better argument is about the direction of change versus the current magnitude.

If we are to believe that models will get bigger, use more tokens, and work for longer, this calculation can easily become very skewed in the other direction.

Consider an agentic system that runs continuously for 6 hours; it could plausibly process billions of tokens. In this hypothetical world, that could more than equal a transatlantic flight.
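A rough back-of-envelope version of that scenario, where both the per-token energy and the per-passenger flight figure are assumptions of mine rather than sourced numbers:

```python
# Does a multi-billion-token agentic run rival a flight? All inputs are
# assumed placeholders for illustration only.
wh_per_1k_tokens = 1.0              # assumed inference energy per 1,000 tokens
flight_kwh_per_passenger = 1_000.0  # assumed per-seat share of a transatlantic flight

tokens = 2e9                        # "billions of tokens" over a long agentic run
agent_kwh = tokens / 1000 * wh_per_1k_tokens / 1000

print(f"agent run: {agent_kwh:,.0f} kWh vs flight seat: {flight_kwh_per_passenger:,.0f} kWh")
# 2,000 kWh vs 1,000 kWh -> the same order of magnitude, under these assumptions
```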

Now compare this with non-AI work, like a CRUD app. Serving millions of queries in that same period would consume a tiny fraction of what ChatGPT consumes.

Rather than being a "win" for AI, the fact that we're only 3 or 4 orders of magnitude away from this being a problem is already grounds for concern.

by roschdal on 4/29/2025, 8:48 PM

The human brain is dramatically more energy-efficient than AI models like ChatGPT.

Human brain: Uses about 20 watts of power.

ChatGPT (GPT-4): Serving a single query can draw hundreds of watts of power when accounting for the entire datacenter infrastructure (some estimates suggest 500–1000 watts per query on average, depending on model size and setup).

If we assume:

20 watts for the human brain thinking continuously,

1000 watts for ChatGPT processing one complex query,

then the human brain is about 50x as energy-efficient as ChatGPT per task (roughly 4,900% more efficient), assuming equal cognitive complexity (which is debatable, but fine for a ballpark comparison).
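As a quick check of the arithmetic, taking the two assumed power figures above at face value:

```python
# Ratio of the two assumed power figures above; both are rough.
brain_watts = 20
chatgpt_watts = 1000  # assumed whole-infrastructure draw while serving one query

ratio = chatgpt_watts / brain_watts
print(f"{ratio:.0f}x the brain's draw, i.e. {(ratio - 1) * 100:.0f}% more")  # 50x, 4900% more
```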