Big Tech Says Generative AI Will Save the Planet. It Doesn’t Offer Much Proof

A few years ago, Ketan Joshi read a statistic about artificial intelligence and climate change that caught his eye. In late 2023, Google began claiming that AI could help cut global greenhouse gas emissions by between five and 10 percent by 2030. This claim was spread in an op-ed coauthored by its chief sustainability officer, and subsequently quoted across the press and in some academic papers.

Joshi, an energy researcher, was shocked by the massive numbers Google was touting—especially AI’s purported ability to effectively cut the equivalent of what the European Union emits each year. “I found [the emissions claim] really compelling because there’s very few things that can do that,” he says.

He decided to track down its source. That five to 10 percent number, Joshi found, was drawn from a paper published by Google and BCG, a consulting group, which in turn drew from a 2021 analysis by BCG, which simply cited the company’s “experience with clients” as a basis for estimating massive emissions reductions from AI—a source Joshi called “flimsy.” The analysis was published a year before the introduction of ChatGPT kicked off a race to build out the energy-intensive infrastructure that, tech companies claim, is needed to power the AI revolution.

A few months after it first stood behind the five to 10 percent estimate, in its 2023 sustainability report, Google quietly admitted that the AI buildout was significantly driving up its corporate emissions. Yet it has continued to tout the numbers provided to it by BCG, most recently last year in a memo to European policymakers.

One of the most powerful tech companies in the world using this metric to make “policy recommendations to one of the biggest regions in the world—I thought that was remarkable,” says Joshi. “That instance was what got me immediately very interested in the structure of this claim and the evidence behind it.”

“We stand by our methodology, which is grounded in the best available science,” Google spokesperson Mara Harris told WIRED in an email in response to several questions about the five to 10 percent statistic. “And we’re transparent in sharing the principles and methodology that guide it.” Harris included a link to the company’s methodology on calculating emissions reductions from Google products and partnerships, but did not elaborate on how, exactly, the company applied these standards to the BCG numbers. (BCG did not respond to WIRED’s questions.)

Tech companies are locked in a battle to develop AI as fast as possible—one with potentially massive implications for climate change. In the US, the world’s biggest data center market, energy pressure from this buildout has resulted in coal plants staying open and hundreds of gigawatts of new gas power in line to be added to the grid, with nearly 100 gigawatts of that capacity earmarked solely for data centers.

Tech executives have said over and over again that this energy and data center buildout will be worth it, given the possibilities that AI presents for the planet. At New York City’s annual Climate Week event last year, the Bezos Earth Fund, Jeff Bezos’s sustainability-focused nonprofit, hosted a series of conversations on how “AI will be an environmental force for good.” In late 2024, former Google CEO Eric Schmidt said that since the world wouldn’t hit its climate goals, it’s more important to focus on what AI can do. (“I’d rather bet on AI solving the problem, than constraining it and having the problem,” he said.) OpenAI’s CEO Sam Altman has promised that AI will “fix” the climate.

But a lot of these claims, it turns out, have very little—if any—actual proof behind them.

Joshi is the author of a new report, released Monday with support from several environmental organizations, that attempts to quantify some of the most high-profile claims made about how AI will save the planet. The report examines claims made by tech companies, energy associations, and others about how “AI will serve as a net climate benefit.” Joshi’s analysis finds that just a quarter of those claims were backed up by academic research, while more than a third did not publicly cite any evidence at all.

“People make assertions about the kind of societal impacts of AI and the effects on the energy system—those assertions often lack rigor,” says Jon Koomey, an energy and technology researcher who was not involved in Joshi’s report. “It’s important not to take self-interested claims at face value. Some of those claims may be true, but you have to be very careful. I think there’s a lot of people who make these statements without much support.”

Another important topic the report explores is what kind of AI, exactly, tech companies are talking about when they talk about AI saving the planet. Many types of AI are less energy-intensive than the generative, consumer-focused models that have dominated headlines in recent years, which require massive amounts of compute—and power—to train and operate. Machine learning has been a staple of many scientific disciplines for decades. But it’s large-scale generative AI—especially tools like ChatGPT, Claude, and Google Gemini—that is the public focus of much of tech companies’ infrastructure buildout. Joshi’s analysis found that nearly all of the claims he examined conflated more traditional, less energy-intensive forms of AI with the consumer-focused generative AI that is driving much of the buildout of data centers.

David Rolnick is an assistant professor of computer science at McGill University and the chair of Climate Change AI, a nonprofit that advocates for machine learning to tackle climate problems. He’s less concerned than Joshi about where Big Tech companies get their numbers on AI’s impact on the climate, given how difficult it is, he says, to quantitatively prove impact in this field. But for Rolnick, the distinction between what types of AI tech companies are touting as essential is a key part of this conversation.

“My problem with claims being made by big tech companies around AI and climate change is not that they’re not fully quantified, but that they’re relying on hypothetical AI that does not exist now, in some cases,” he says. “I think the amount of speculation on what might happen in the future with generative AI is grotesque.”

Rolnick points out that from techniques to increase efficiency on the grid, to models that can help discover new species, deep learning is already in use in a myriad of sectors around the world, helping to cut emissions and fight climate change right now. “That’s different, however, from ‘At some point in the future, this might be useful,’” he says. What’s more, “there is a mismatch between the technology that is being worked on by big tech companies and the technologies that are actually powering the benefits that they claim to espouse.” Some companies may tout examples of algorithms that, for instance, help better detect floods, using them as examples of AI for good to advertise their large language models—despite the fact that the algorithms helping with flood prediction are not the same type of AI as a consumer-facing chatbot.

“The narrative that we need big AI models—and quasi-infinite amounts of energy—tries to sell us the idea that this is the only kind of AI we need, and the only future that’s possible,” says AI and sustainability researcher Sasha Luccioni. “But there are so many different, smaller and more efficient models that can be deployed for a fraction of the cost, both to people and the planet.”

In a separate piece of research also published Monday, Luccioni and Yacine Jernite, head of sustainability at AI company Hugging Face, looked at the costs of training a wide variety of AI models, finding that massive proprietary models trained with access to vast amounts of data and energy aren’t the only option for powerful AI solutions. Often, smaller models perform just as well as the more expensive ones in many AI applications.

“The only companies that can compete in this bigger-is-better AI race are the ones with the deepest pockets, who have hoovered up our data—consensually or not—over the last decades, and continue to do so,” she says. “Now they are selling this data back to us by convincing us that we need these mammoth models, the planet be damned.”

A key part of the problem with measuring the impact of AI on climate, experts tell WIRED, is that the public lacks some of the most basic information needed to understand AI’s capabilities and impact. We’re still working with back-of-the-napkin estimates of how much energy AI—let alone generative AI—uses in data centers. Google only last year released estimates of how much energy its AI prompts use; other companies are still lagging behind, or not releasing key environmental information about their models. And while generative AI is being shoved into much of our consumer experience, we’re still waiting for concrete examples of how large-scale generative AI could do a better job at tackling climate issues than less energy-intensive models.

Joshi thinks the solution is simple: companies driving for more AI development should disclose more about the climate cost.

“If [tech companies] are worried that people are overstating or exaggerating the climate impacts of generative AI, then there should be nothing stopping them from saying, ‘Well, okay, our energy growth this year was six terawatt-hours, and two of them were for generative AI,’” he says. “That’s information that we push for more disclosure of in the report. I think that would ultimately be a very good thing for them.”
