Responding to the climate impact of generative AI | MIT News



In part 2 of a two-part series on the environmental impact of generative artificial intelligence, MIT News explores some of the ways experts are working to reduce the technology’s carbon footprint.

Energy demand for generative AI is expected to continue increasing dramatically over the next decade.

For example, an April 2025 report from the International Energy Agency predicts that global electricity demand from data centers, which house the computing infrastructure to train and deploy AI models, will more than double by 2030. While not all operations performed in a data center are AI-related, that total is slightly more than the entire energy consumption of Japan.

Moreover, an August 2025 analysis from Goldman Sachs Research predicts that about 60 percent of the increasing electricity demand from data centers will be met by burning fossil fuels, increasing global carbon emissions by about 220 million tons. By comparison, a gas-powered car driven for 5,000 miles produces about one ton of carbon dioxide.

While these statistics are staggering, researchers at MIT and around the world are studying innovations and interventions to mitigate AI’s ballooning carbon footprint, from boosting the efficiency of algorithms to rethinking the design of data centers.

Considering carbon emissions

Talk of reducing generative AI’s carbon footprint typically centers on “operational carbon,” the emissions produced by the powerful processors, known as GPUs, inside data centers. It often ignores “embodied carbon,” the emissions created by building the data center in the first place, says Vijay Gadepally, a senior scientist at MIT Lincoln Laboratory who leads research projects in the Lincoln Laboratory Supercomputing Center.

Constructing and retrofitting a data center, which is built from large amounts of steel and concrete and filled with air conditioning units, computing hardware, and miles of cable, consumes a huge amount of carbon. In fact, the environmental impact of building data centers is one reason companies like Meta and Google are exploring more sustainable building materials. (Cost is another factor.)

Plus, data centers are enormous buildings. The world’s largest, the China Telecom-Inner Mongolia Information Park, covers roughly 10 million square feet, and data centers typically have about 10 to 50 times the energy density of a normal office building.

“The operational side is only part of the story. Some things we are working on to reduce operational emissions may lend themselves to reducing embodied carbon, too, but we need to do more on that front in the future,” he says.

Reducing operational carbon emissions

When it comes to reducing the operational carbon emissions of AI data centers, there are many parallels with home energy-saving measures. For one, we can simply turn down the lights.

“Even if you have the worst light bulbs in your house from an efficiency standpoint, turning them off or dimming them will always use less energy than leaving them running at full blast,” Gadepally says.

In the same fashion, research from the Supercomputing Center has shown that “power capping” the GPUs in a data center, so they draw less electricity, has minimal impact on the performance of AI models, while also making the hardware easier to cool.
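One way to apply such a cap in practice is through NVIDIA’s management library. Below is a minimal sketch using the `pynvml` Python bindings; the 250-watt target is an illustrative value, not a figure from the Lincoln Laboratory work, and changing power limits typically requires administrator privileges. (The same effect is available from the command line with `nvidia-smi -pl <watts>`.)

```python
# Minimal sketch: capping GPU power draw through NVIDIA's NVML bindings.
# The 250 W target is illustrative only; the right cap depends on the
# hardware and workload. Requires `pip install nvidia-ml-py` and
# administrator privileges to change power limits.
from pynvml import (
    nvmlInit,
    nvmlShutdown,
    nvmlDeviceGetCount,
    nvmlDeviceGetHandleByIndex,
    nvmlDeviceGetPowerManagementLimitConstraints,
    nvmlDeviceSetPowerManagementLimit,
)

TARGET_WATTS = 250  # hypothetical cap; NVML works in milliwatts

nvmlInit()
try:
    for i in range(nvmlDeviceGetCount()):
        handle = nvmlDeviceGetHandleByIndex(i)
        # Query the legal range so we never request an unsupported limit.
        min_mw, max_mw = nvmlDeviceGetPowerManagementLimitConstraints(handle)
        cap_mw = max(min_mw, min(TARGET_WATTS * 1000, max_mw))
        nvmlDeviceSetPowerManagementLimit(handle, cap_mw)
finally:
    nvmlShutdown()
```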

Another strategy is to use less energy-intensive computing hardware.

Demanding generative AI workloads, such as training new reasoning models like GPT-5, typically require many GPUs working simultaneously. The Goldman Sachs analysis estimates that a state-of-the-art system could soon have as many as 576 connected GPUs operating at once.

But engineers can sometimes achieve similar results by reducing the precision of the computing hardware, perhaps by switching to less powerful processors that have been tuned to handle a specific AI workload.
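To illustrate what reduced precision looks like in software, here is a minimal PyTorch sketch that runs a model’s inference in 16-bit floating point instead of 32-bit; the model and input are placeholders invented for the example:

```python
# Minimal sketch: running inference at reduced numerical precision in
# PyTorch. The model and input are placeholders invented for the example;
# requires a CUDA-capable GPU.
import torch

model = torch.nn.Sequential(
    torch.nn.Linear(512, 512),
    torch.nn.ReLU(),
    torch.nn.Linear(512, 10),
).cuda()

x = torch.randn(32, 512, device="cuda")

# autocast runs eligible operations in float16 instead of float32,
# cutting memory traffic and arithmetic cost for those operations.
with torch.no_grad(), torch.autocast(device_type="cuda", dtype=torch.float16):
    logits = model(x)

print(logits.dtype)  # torch.float16 for autocast-eligible outputs
```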

There are also measures that boost the efficiency of training power-hungry deep-learning models before they are deployed.

Gadepally’s group found that about half the electricity used to train an AI model is spent getting the last 2 or 3 percentage points of accuracy. Stopping the training process early can save much of that energy.

“There are situations where 70 percent accuracy may be good enough for a particular application, like a recommender system for e-commerce,” he says.
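In code, this kind of early exit is just a check in the training loop: stop as soon as validation accuracy clears a “good enough” bar. Here is a minimal sketch with toy data standing in for a real workload; the 70 percent threshold echoes the example above, and everything else is invented for illustration:

```python
# Minimal sketch: stop training once validation accuracy is "good
# enough," rather than spending half the energy budget chasing the
# last few percentage points. Toy data stands in for a real workload.
import torch

TARGET_ACCURACY = 0.70  # the "good enough" bar from the example above

# Toy separable data: label is 1 when the features sum to a positive value.
X = torch.randn(2000, 20)
y = (X.sum(dim=1) > 0).long()
X_train, y_train, X_val, y_val = X[:1600], y[:1600], X[1600:], y[1600:]

model = torch.nn.Linear(20, 2)
opt = torch.optim.SGD(model.parameters(), lr=0.1)
loss_fn = torch.nn.CrossEntropyLoss()

for epoch in range(100):
    opt.zero_grad()
    loss = loss_fn(model(X_train), y_train)
    loss.backward()
    opt.step()

    with torch.no_grad():
        accuracy = (model(X_val).argmax(dim=1) == y_val).float().mean().item()
    if accuracy >= TARGET_ACCURACY:
        # Every epoch skipped here is energy that is never consumed.
        print(f"Stopping at epoch {epoch}: accuracy {accuracy:.2%}")
        break
```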

Researchers can also take advantage of efficiency-boosting measures.

For instance, a postdoc in the Supercomputing Center realized that a group might run a thousand simulations during the training process to select the two or three best AI models for a project.

By building a tool that allowed them to avoid about 80 percent of those wasted computing cycles, the researchers dramatically reduced the energy demands of training with no reduction in model accuracy, Gadepally says.
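The article doesn’t detail how the tool works, but one standard technique for eliminating most of those wasted cycles is successive halving: train all candidates briefly, keep only the top fraction, and repeat. Here is a minimal sketch with simulated candidates; `score_after` is a stand-in for partially training a candidate and checking validation accuracy:

```python
# Minimal sketch of one standard way to cut wasted training runs:
# successive halving. Train all candidates briefly, keep the top 20
# percent, give survivors more budget, and repeat. `score_after` is a
# stand-in for partially training a candidate and checking validation
# accuracy; the candidates themselves are simulated.
import random

def score_after(candidate, steps):
    # Placeholder: quality is observable early, but noisily; the noise
    # shrinks as a candidate gets more training steps.
    return candidate["quality"] + random.gauss(0, 0.05 / steps)

candidates = [{"id": i, "quality": random.random()} for i in range(1000)]

budget = 1
while len(candidates) > 3:
    scores = {c["id"]: score_after(c, budget) for c in candidates}
    candidates.sort(key=lambda c: scores[c["id"]], reverse=True)
    # The roughly 80 percent of runs dropped here are computing cycles,
    # and energy, that never get spent.
    candidates = candidates[: max(3, len(candidates) // 5)]
    budget *= 2  # survivors train longer in the next round

print("Finalists:", [c["id"] for c in candidates])
```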

Leveraging efficiency improvements

Constant innovation in computing hardware, such as denser arrays of transistors on semiconductor chips, is still enabling dramatic improvements in the energy efficiency of AI models.

Even though energy efficiency improvements have slowed for most chips since about 2005, the amount of computation GPUs can perform per unit of energy has been improving by 50 to 60 percent each year, says Neil Thompson, director of the FutureTech Research Project at MIT’s Computer Science and Artificial Intelligence Laboratory and a principal investigator at MIT’s Initiative on the Digital Economy.

“The most famous ‘Moore’s Law’ trend of getting more and more transistors onto a chip still matters for many of these AI systems, since running operations in parallel is very valuable for improving efficiency,” Thompson says.

Even more significant, his group’s research indicates that the biggest efficiency gains come from new model architectures that can solve complex problems faster.

Thompson coined the term “negaflop” to describe this effect. In the same way a “negawatt” represents electricity saved through energy-saving measures, a “negaflop” is a computing operation that doesn’t need to be performed because of algorithmic improvements.

These include “pruning” away unnecessary components of a neural network or employing compression techniques that enable users to do more with less computation.
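As a concrete illustration of pruning, PyTorch ships utilities that zero out the lowest-magnitude weights in a layer; the layer size and the 50 percent sparsity below are arbitrary choices for the example:

```python
# Minimal sketch: magnitude pruning with PyTorch's built-in utilities.
# The layer size and the 50 percent sparsity are arbitrary example values.
import torch
import torch.nn.utils.prune as prune

layer = torch.nn.Linear(512, 512)

# Zero out the 50 percent of weights with the smallest absolute values.
prune.l1_unstructured(layer, name="weight", amount=0.5)

# Make the pruning permanent (drops the mask bookkeeping).
prune.remove(layer, "weight")

sparsity = (layer.weight == 0).float().mean().item()
print(f"Share of zeroed weights: {sparsity:.0%}")
```

Note that zeroed weights translate into skipped operations, and hence negaflops, only when the runtime or hardware can exploit the sparsity.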

“If you need to use a really powerful model today to complete your task, in just a few years you might be able to use a significantly smaller model to do the same thing, which would carry much less of an environmental burden. Making these models more efficient is the single most important thing you can do to reduce the environmental costs of AI,” Thompson says.

Maximizing energy savings

Reducing the overall energy use of AI algorithms and computing hardware cuts greenhouse gas emissions. But not all energy is the same, Gadepally adds.

“The carbon emissions of one kilowatt-hour vary quite significantly, even just over the course of a day, as well as over the month and the year,” he says.

Engineers can take advantage of these variations by leveraging the flexibility of AI workloads and data center operations to maximize emissions reductions. For instance, some generative AI workloads don’t need to be performed in their entirety at the same time.

Splitting computing operations so that some are performed later, when more of the electricity fed into the grid comes from renewable sources like solar and wind, can go a long way toward reducing a data center’s carbon footprint, says Deepjyoti Deka, a research scientist at the MIT Energy Initiative.
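The idea can be made concrete with a small scheduling sketch: given an hourly forecast of grid carbon intensity, a deferrable job simply waits for the cleanest window. The forecast numbers and job length below are invented for illustration; real values would come from a grid operator or a carbon-intensity data service:

```python
# Minimal sketch: carbon-aware scheduling of a deferrable AI workload.
# The hourly carbon-intensity forecast (gCO2 per kWh) is made up for
# illustration; real values would come from a grid operator or a
# carbon-intensity data service.
FORECAST = [
    480, 470, 450, 430, 400, 350,  # overnight: fossil-heavy generation
    280, 210, 160, 130, 120, 125,  # midday: solar pushes intensity down
    140, 180, 240, 310, 380, 430,  # evening ramp
    460, 475, 485, 490, 488, 482,
]

JOB_HOURS = 3  # a deferrable job needing three contiguous hours

def cleanest_window(forecast, hours):
    """Return the start hour that minimizes average carbon intensity."""
    best_start = min(
        range(len(forecast) - hours + 1),
        key=lambda s: sum(forecast[s : s + hours]),
    )
    avg = sum(forecast[best_start : best_start + hours]) / hours
    return best_start, avg

start, avg = cleanest_window(FORECAST, JOB_HOURS)
print(f"Run the job at hour {start}; average intensity {avg:.0f} gCO2/kWh")
```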

Deka and his team are also studying “smarter” data centers where the AI workloads of multiple companies using the same computing equipment are flexibly adjusted to improve energy efficiency.

“By looking at the system as a whole, our hope is to minimize energy use as well as dependence on fossil fuels, while still maintaining reliability standards for AI companies and users,” Deka says.

He and others at MITEI are building a flexibility model of a data center that considers the differing energy demands of training a deep-learning model versus deploying that model. Their hope is to uncover the best strategies for scheduling and streamlining computing operations to improve energy efficiency.

Researchers are also exploring the use of long-duration energy storage units at data centers.

With these systems in place, a data center can use energy that was generated by renewable sources and stored for periods of high demand, or avoid the use of diesel backup generators if there are fluctuations in the grid.

“Long-duration energy storage could be a game-changer here because we can design operations that really change the emissions mix of the system to rely more on renewable energy,” Deka says.

In addition, researchers at MIT and Princeton University are developing a software tool for power sector investment planning, called GenX, which could be used to help companies determine where to locate a data center to minimize environmental impacts and costs.

Location can have a major impact on reducing a data center’s carbon footprint. For example, Meta operates a data center in Luleå, a city on the coast of northern Sweden, where cooler temperatures reduce the amount of electricity needed to cool computing hardware.

Thinking further outside the box (way further), some governments are even exploring the construction of data centers on the moon, where they could potentially be operated with nearly all renewable energy.

AI-based solutions

Currently, the expansion of renewable energy generation here on Earth is not keeping pace with the rapid growth of AI, which is one major roadblock to reducing the technology’s carbon footprint, says Jennifer Turliuk MBA ’25, a former Sloan Fellow and former practice leader of climate and energy AI at the Martin Trust Center for MIT Entrepreneurship.

The local, state and federal review process required for new renewable energy projects can take years.

Researchers at MIT and elsewhere are investigating the use of AI to speed up the process of connecting new renewable energy systems to power grids.

For example, a generative AI model could streamline interconnection studies that determine how a new project will affect the power grid.

And when it comes to accelerating the development and implementation of clean energy technologies, AI could play a major role.

“Machine learning is great for solving complex situations, and the electrical grid is said to be one of the largest and most complex machines in the world,” Turliuk adds.

For example, AI could help optimize the prediction of solar and wind energy generation, or identify ideal locations for new facilities.

It could also be used to perform predictive maintenance and fault detection for solar panels and other green energy infrastructure, or to monitor the capacity of transmission wires to maximize efficiency.

By helping researchers gather and analyze huge amounts of data, AI could also inform targeted policy interventions aimed at getting the biggest “bang for the buck” from areas such as renewable energy, Turliuk says.

To help policymakers, scientists, and companies consider the multifaceted costs and benefits of AI systems, she and her collaborators developed the Net Climate Impact Score.

The score is a framework for determining the net climate impact of AI projects, considering emissions and other environmental costs along with potential environmental benefits down the line.

At the end of the day, the most effective solutions will likely come from collaborations among companies, regulators, and researchers, with academia leading the way, Turliuk adds.

“Every day counts. We are on a path where the effects of climate change won’t be fully known until it is too late to do anything about them. This is a once-in-a-lifetime opportunity to innovate and make AI systems less carbon-intensive,” she says.


