Would you agree that we are approaching a tipping point in the history of our planet? It sounds dramatic, doesn’t it? And unfortunately, it is as serious as it sounds.
Over the years there have been many warnings about the dangers looming over our cherished planet, but few tangible actions have been taken to tackle them. Here is one way to describe the situation:
> Let's try to make a plan to save our children's Earth
> How can we change our minds and stop this madness
> Today is our last chance to keep our planet safe
> …
> Power and money will not follow us to the grave
> Hey Mr. President
> How will you save your loved ones from this now
> Tell me now
> "Yes, we can", "Yes, we can" …
> The situation is serious, everywhere on the planet
> We need to raise all voices, we need to change all minds
> Temperature's rising, winters are getting longer
> Who cares? Well, I do. Maybe you do too
> It's not for me or you
> It's for all children, our children
>
> excerpt from “Mr. President” (Mademoiselle Zhivago) by Lara Fabian
“Okay, what does this have to do with Artificial Intelligence?”, you might ask. And you are right in the sense that the connection is not obvious, so let us try to peel back the layers of this onion.
The year 2012 is as important for Artificial Intelligence (AI) as 1893 was for electricity. In 1893, Nikola Tesla demonstrated alternating current (AC) electricity at the World’s Columbian Exposition in Chicago. “So what happened in 2012 in AI?” For the first time in decades, convolutional neural networks (CNNs) showed their might in deep learning. Their power was demonstrated at the famous ImageNet competition, where participants competed to classify visual objects as accurately as possible.
But deep learning is not the only field that has seen breakthroughs in recent years. The year 2016 marked another significant milestone in the history of AI, as another branch, reinforcement learning, took a huge leap: DeepMind’s AlphaGo beat Lee Sedol at the game of Go. The initial algorithm was then successfully adapted to other games (AlphaZero and MuZero) and to scientific applications (AlphaFold).
Then came 2017 with yet another breakthrough in deep learning, this time in the subfield of Natural Language Processing (NLP). A group of scientists, mostly Google researchers, published the famous “Attention Is All You Need” paper, showing that attention mechanisms alone make it possible to build models – sometimes at gargantuan scale – with remarkably general capabilities.
“Okay, why is this short survey of AI breakthroughs so important?” The reason is simple: we hear only about the successes of AI, without realising the problems they bring along. Companies publicise their successful trials, and only when it is lucrative to do so. What they do not show is how many failures it takes to produce those successes. In fact, we do not have even a rough idea of the ratio of failed to successful trials. We only see the great achievements on shiny covers of news magazines!
You might ask next, “Why do I care if Google, Meta, or Nvidia perform many unsuccessful internal trials? Isn’t it their business? It does not affect me!” Let me challenge this thinking and show you the real problem that stems from it. When a deep-learning or reinforcement-learning experiment fails, it is not only a computational cost and a waste of resources for the company; it is also an environmental cost borne by all of us living on this planet.
Let’s look at the numbers. We have learned from OpenAI’s blog post “AI and Compute” that the amount of computing capacity required for the largest AI training runs has increased by 300,000 times since 2012. Additionally, it has been estimated that training the GPT-3 model consumed energy equivalent to 5,000-67,000 gallons of gasoline, even assuming perfect energy efficiency. Other calculations show that even models smaller than GPT-3 consume on average 9,216 kilowatt-hours (kWh) of electricity per day, which is nearly as much as the average yearly household electricity consumption in the US. A University of Massachusetts Amherst study estimates that training a single NLP model can generate close to 300,000 kg of carbon emissions, which is “nearly five times the lifetime emissions of the average American car (and that includes the manufacturing of the car itself)”.
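To make the scale of these figures concrete, here is a back-of-envelope sketch of how electricity use translates into carbon emissions. The grid intensity of 0.4 kg of CO2 per kWh and the 30-day run length are illustrative assumptions (real values vary widely by region and energy mix); only the 9,216 kWh/day figure comes from the text above.

```python
# Back-of-envelope estimate of training-run emissions from electricity use.
# ASSUMPTION: 0.4 kg CO2 per kWh is an illustrative grid carbon intensity;
# real values depend heavily on the local energy mix.

def training_emissions_kg(kwh_per_day: float, days: float,
                          grid_kg_co2_per_kwh: float = 0.4) -> float:
    """Estimated CO2 (kg) emitted by a training run lasting `days` days."""
    return kwh_per_day * days * grid_kg_co2_per_kwh

# A model drawing 9,216 kWh/day, trained for a hypothetical 30 days:
print(training_emissions_kg(9216, 30))  # roughly 110,000 kg of CO2
```

Even under these rough assumptions, a single month-long run lands in the same order of magnitude as the UMass Amherst estimate quoted above.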
“Okay, what’s the deal?” The deal is that the intensive use of GPUs to train machine learning models produces substantial CO2 emissions. CO2 and other greenhouse gases, such as methane, nitrous oxide, and chlorofluorocarbons, trap heat in the Earth’s atmosphere. This causes the temperature of the Earth to rise, creating imbalances in the Earth’s ecosystem that lead to many problems, such as sea-level rise and air pollution, to name a few. For this reason, there have been many formal and informal initiatives to hold countries (and, consequently, the big private companies within them) accountable for their carbon footprint.
“However, isn’t this a problem of the energy industry not being decarbonised? Why blame the AI/IT industry?” The short answer is: not entirely! An important nuance is that this huge demand for electricity puts great pressure on the electricity industry and undermines the efforts toward decarbonising (and therefore greening) the electricity market that many advanced economies are trying to achieve. Transitioning to a green economy is not as simple as pouring money in and hoping for immediate results. It requires careful planning, a gradual phase-out of dirty polluters, and incremental changes and upgrades across the whole energy infrastructure.
You would now likely agree that indirect CO2 abatement should be a priority for big tech companies: training big neural networks such as GPT-3, DALL-E 2, and Gopher, or running large-scale reinforcement learning trials, dampens our collective efforts toward a greener environment, energy decarbonisation, and climate change mitigation.
And it is not clear whether cloud technology providers are making a proportionate effort. The big three (AWS, Azure, and GCP) are not shy about opening more and more data centres designed to provision and run heavy-load applications. They are adapting, albeit slowly, with new compute-optimised and memory-optimised service offerings. However, those approaches alone are not enough to cut computation and electricity use to reach the green future we envision. The good news is that at TurinTech we believe one way AI can contribute to CO2 reduction is through multiobjective optimisation of machine learning tasks. Through optimisation, we can reduce the computational resources a task needs, and hence its power demand. Among the objectives we provide are green metrics, which can be set to carbon emitted during training, carbon emitted during prediction, electricity consumed during training, and electricity consumed during prediction.
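To illustrate the idea of multiobjective optimisation (this is a minimal sketch of the general technique, not TurinTech's actual API; the model names and numbers are made up): instead of picking the single most accurate model, we keep every model that is not beaten on both accuracy and energy at once, the so-called Pareto front.

```python
# Sketch of multiobjective model selection on (accuracy, energy).
# Higher accuracy and lower energy are both "better"; a model is dominated
# if some other model is at least as good on both and strictly better on one.

def pareto_front(models):
    """models: list of (name, accuracy, kwh). Returns the non-dominated subset."""
    front = []
    for name, acc, kwh in models:
        dominated = any(
            a >= acc and k <= kwh and (a > acc or k < kwh)
            for _, a, k in models
        )
        if not dominated:
            front.append((name, acc, kwh))
    return front

# Hypothetical candidate models from a tuning run:
candidates = [
    ("large_net", 0.95, 120.0),
    ("small_net", 0.93, 15.0),
    ("wasteful_net", 0.93, 40.0),  # same accuracy as small_net, more energy
]
print(pareto_front(candidates))  # large_net and small_net survive
```

The point of the exercise: `wasteful_net` is discarded automatically, and the final choice between the survivors becomes an explicit accuracy-versus-energy trade-off rather than a default to the biggest model.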
However, it would be unfair to mention only AI’s adverse impact on climate change without mentioning its potential contribution to climate change mitigation. There are so many applications of AI in energy and climate studies that this article alone would not be enough to describe even the tip of the iceberg of all the potential avenues. However, a few ongoing projects and potential applications ought to be mentioned.
AI is already helping to measure sea-level rise, and there is promising research on predicting future ozone levels. It has been observed that “stratospheric ozone concentrations have reduced to near-zero values over Antarctica”. The ozone layer is our planet’s protective shield against the sun’s harmful, potentially deadly ultraviolet rays, and its preservation poses a conundrum for climate change mitigation: whenever we replace harmful ozone-depleting substances, the replacements often turn out to be greenhouse gases with their own harmful impact on the climate.
“So, is there light at the end of the tunnel?” Yes, there is. Companies should use AI to advance their sustainability goals and green initiatives, and should pay attention to metrics such as carbon emissions and electricity consumption when developing and deploying their models. There are already a few promising implementations by pioneers. Alaska Airlines, for example, is successfully decreasing its carbon footprint by using AI technology to guide flights, and more companies are expected to follow in its footsteps in the coming years.
Electricity generation sources are diverse: power plants, wind turbines, solar panels, geothermal, biomass, and more. These sources generate lots of data, and AI can be used, for example, to predict the failure of a plant or a wind turbine, or to measure the degradation of a solar panel’s conversion rate in real time. It can even be used to measure the collective impact of Carbon Capture and Storage (CCS) and heat-pump deployment on the whole energy system, not to mention comparing the efficacy of competing decarbonisation strategies. Experiments can be designed to understand the impact of each strategy with specific proportions of CCS, geothermal, and renewable technologies. Imagine also an AI experiment that takes advantage of generative adversarial network architectures or reinforcement learning simulations when designing energy-policy strategies for the green energy transition.
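As a small taste of the solar-panel example above, here is a sketch of estimating a panel's degradation rate from periodic efficiency readings with an ordinary least-squares fit. The monthly readings and the 0.05-point-per-month decline are synthetic, noise-free data invented for illustration.

```python
# Illustrative sketch: estimate a solar panel's efficiency degradation rate
# from monthly readings via ordinary least squares.
# ASSUMPTION: the data below is synthetic; real readings would be noisy.

def slope(xs, ys):
    """Least-squares slope of ys regressed on xs."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    den = sum((x - mx) ** 2 for x in xs)
    return num / den

months = list(range(12))
# Efficiency starting at 20% and declining 0.05 percentage points per month:
efficiency = [20.0 - 0.05 * m for m in months]

rate = slope(months, efficiency)
print(f"estimated degradation: {rate:.3f} points/month")
```

In practice the same trend estimate, fed with live sensor data, is what lets an operator flag a panel (or a turbine, or a plant) whose decline is steeper than its fleet's baseline.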
Finally, “Okay, why would I invest in green infrastructure? Are there any investment benefits from green finance?” Yes, there are. AI can be used to make investment decisions in energy-generating technologies, especially renewables. Identifying which individual technologies will be important drivers of profit may be a relatively easy problem to solve; determining which portfolio will also meet all your ESG (environmental, social, and corporate governance) standards is much harder.
The key takeaway of this article is twofold. The need for Green AI makes it urgent to start:
- building and using green AI solutions
- adopting and applying AI for green initiatives
evoML is an AI optimisation platform that enables companies to do both. As an added bonus, you will get the optimal model and code ready for deployment, optimised to achieve maximum efficiency and speed. Now it’s the right time to go green with smart and efficient AI!
About the Author
Dr Vitali Avagyan | Research
Dr Vitali Avagyan is an experienced data scientist with a passion for software engineering and cloud technologies. He creates content on productivity and AI; follow him on LinkedIn for his regular tips.