New Study Shows AI Might Consume as Much Energy as a Country

Headlines This Week

A new report from Politico suggests a “billionaire-backed network” of Silicon Valley-linked AI “advisors” is working to control the regulatory agenda in Washington, D.C. Boy, I sure wish stuff like that were illegal instead of just business as usual.

Despite apparently having lobbying money aplenty, the AI industry is still struggling to monetize some of its platforms. The Wall Street Journal reports that some major AI platforms, like Microsoft’s GitHub Copilot, have been hemorrhaging money.

Last but not least: a newly introduced draft bill would institute protections for musical artists against AI. We spoke with a music industry executive about what the policy could mean for his business.


The Top Story: AI’s Growing Environmental Impact

One of the ongoing problems connected to the AI industry is just how much power it takes to run its systems. The pursuit of enough juice to run the high-octane algorithms behind apps like Bard, Bing, and ChatGPT is causing concern even among AI’s biggest proponents. Now, a new study shows the electrical resources needed for the growing AI industry could make its environmental impact even worse than previously thought.

The study, authored by Alex de Vries, a PhD candidate at the VU Amsterdam School of Business and Economics, claims that the AI industry’s energy needs may soon match those of a small country. “Given the expected production in the coming few years, by 2027 newly manufactured AI devices will be responsible for as much electricity consumption as my home country, the Netherlands,” de Vries told Insider this week. “This is also in the same range as the electricity consumption of countries like Sweden or Argentina.”


While the rate of electricity consumption in the tech industry has remained relatively steady for years, de Vries says that the advent of the AI chatbot wars between tech giants like Microsoft, Google, and OpenAI may have spawned a new era. De Vries’ study looks specifically at the electricity consumption of the AI sector during what’s called the “inference phase” of AI production. While most environmental impact studies have so far focused on the amount of energy it takes to train large language models like GPT-4, less attention has been paid to “inferencing,” the process by which a trained model produces new output in response to prompts. This phase of energy consumption can be massive and sometimes accounts for a majority of the energy expended over the AI life cycle, de Vries writes. Given how much AI’s energy needs could balloon over the next several years, the scholar argues that developers should not just work on optimizing AI but also “critically consider the necessity of using AI in the first place.”
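To get a feel for how projections like this are built, here’s a minimal back-of-envelope sketch. Every input below is an illustrative assumption on our part, not a figure from de Vries’ paper; the point is simply that fleet size times power draw times hours of operation adds up to country-scale numbers fast.

```python
# Back-of-envelope estimate of annual AI server electricity use.
# All inputs are illustrative assumptions, not figures from the study.

servers = 1_500_000        # hypothetical fleet of AI servers in operation
power_kw = 6.5             # hypothetical power draw per server, in kilowatts
utilization = 0.75         # hypothetical average utilization rate
hours_per_year = 24 * 365  # 8,760 hours

# kilowatt-hours -> terawatt-hours: divide by one billion
annual_twh = servers * power_kw * utilization * hours_per_year / 1e9
print(f"Estimated annual consumption: {annual_twh:.0f} TWh")
# Prints roughly 64 TWh with these inputs; for scale, the Netherlands
# consumes on the order of 110 TWh of electricity per year.
```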


De Vries’ study, like a number of other prominent environmental impact studies published recently, raises the question: in an age of climate change and accelerating environmental distress, can generative AI really be justified? I mean, do we really need ChatGPT, automated email-writing, and AI stickers? Or are we killing the environment needlessly for too little in return?

At best, these platforms offer increased convenience for consumers and some cost savings for corporations, but, for now, that’s about it. Much of the mystification around AI has helped hide the fact that generative AI technologies aren’t particularly revolutionary in many cases, and in some cases aren’t even new. Sure, there are some novel scientific applications of AI that, once refined, could have a major impact. But it would be hard to argue that those applications are the ones getting the lion’s share of attention or resources. Mostly, it’s stuff like ChatGPT that hogs the spotlight (and venture capital).

When viewed through the lens of the technology’s massive environmental impact, it seems hard to justify what is little more than an over-hyped form of content automation. Better deepfakes aren’t worth killing the planet over.

The Interview: Mitch Glazier on the Music Industry’s Fraught Relationship to AI


This week we spoke with Mitch Glazier, the Chairman and CEO of the Recording Industry Association of America, which lobbies on behalf of the music industry. RIAA is supporting a newly introduced piece of legislation that seeks to institute legal protections against the use of AI to replicate the visual or audio likenesses of artists. In recent months, there’s been an explosion of AI “deepfake” content that blatantly rips off well-known musicians and celebrities. The Nurture Originals, Foster Art, and Keep Entertainment Safe (NO FAKES) Act, which is currently a discussion draft bill, would enshrine a number of legal protections for artists who don’t want their visual or audio likenesses used in unauthorized deepfakes. This interview has been edited for brevity and clarity.


What role did RIAA have in connection to this bill and what do you hope this bill achieves?

We were one of several groups that were asked to give input so that they could develop the bill. SAG-AFTRA was also heavily involved. The idea was to create federal legislation because all of the protections for name, image, likeness and voice are in the states right now—and they’re all different.

What is it about AI that makes it a threat to your industry (the music business)? What’s the problem with this technology?

The problem really isn’t the technology. AI can be licensed for a variety of great purposes and uses. The problem is when AI is made based on a particular artist’s voice or image without consent, credit, and compensation. Basically, what the bill says is, ‘The technology’s great, and if an artist wants to license the use of their essence—their voice, their likeness—that’s their choice. But what you can’t do is clone their voice without their permission.’ It’s just a basic right that exists for all other types of property—and that should exist here. It should be up to the artist to decide whether or not they want to allow you to use their image or their voice.

So it seems like this legislation actually opens the door for more AI music, but the idea is that, if a particular artist is involved in an AI or deepfake production, they would have to get credit and, presumably, get paid, right?

Well, the bill says “authorized,” so the artist would have to authorize it. If they want to negotiate compensation, they can. Some may not want to. But the key is that “authorization” part. The artist has to say, “Yes, you can use my voice, my image.” If they don’t say yes, you can’t do it.


What are your hopes for the passage of the bill? It seems like there are some pretty prominent Senators supporting it.

We’re very optimistic. This issue goes beyond the music and entertainment industry, and it goes beyond the issue of art. It relates to every individual’s right to control their own essence, for lack of a better term. We’ve been encouraged by great bipartisan support right off the bat. To get two Democrats and two Republicans—including the chairman of the Intellectual Property Subcommittee, Sen. Chris Coons (D-Delaware)—to put out this discussion draft, and to do it in a way that invites people to respond and come into the conversation, is a great sign.

