By: David Rolnick & Tami Vasanthakumaran
15 Oct, 2021
Artificial intelligence (AI) can be a powerful tool in helping society mitigate climate change and adapt to its effects. AI can help distill raw data into useful information – for example, by using satellite imagery to automatically monitor greenhouse gas emissions or land use, or parsing corporate financial reports to identify climate-related disclosures. AI can help optimize complicated systems – for example, reducing the energy required for heating and cooling buildings, or improving the efficiency of freight transportation networks. AI can improve forecasting – for example, predicting supply and demand to help balance electrical grids, or forecasting agricultural productivity. Finally, AI can accelerate the process of scientific modeling and discovery – for example by speeding up the search for new materials like those in batteries and photovoltaic cells.
AI, however, is not a silver bullet in addressing climate change. It is useful when well-matched to specific bottlenecks in policy, energy, land use, or other areas. Many of the most impactful application areas are relatively prosaic – detecting failures in railroad systems doesn’t receive as much press as self-driving cars, for example, but from a climate perspective is likely more beneficial. As in other areas of applied AI, domain-specific knowledge is generally essential, as there are always constraints and contextual information that aren’t captured in a raw dataset. It is important at the start of any project to consider how the methodology will ultimately be deployed to ensure that it is actually useful and that any deployment considerations are built into the design. Collaboration between experts in AI and the relevant application area is necessary to avoid pitfalls and ensure a pathway to impact.
Equity considerations go hand-in-hand with impact – including who is empowered to build solutions, what problems are prioritized, and how these problems are worked on. Empowering a diverse and global set of stakeholders to shape AI’s applications in climate change is essential to ensure that technologies are owned by the people affected by them, rather than reinforcing existing power imbalances across countries and institutions. Related to the question of who is the question of what is being worked on, since problem priorities often reflect existing inequities within AI and technology. For example, AI to fight wildfires (a key problem in North America, Europe, and Australia) tends to receive more attention and funding than AI to fight locusts (which affect East Africa, the Middle East, and India), even though both problems are exacerbated by climate change. Finally, how projects are worked on is also important. Data imbalances between regions, or between communities within a region, can mean that AI solutions are only applicable to a subset of the population, or that algorithms are most effective within data-rich areas. Ideally, AI-for-climate would serve to improve equity, but this will take active work at both a high level (policy) and a low level (project design and management).
While AI can be used explicitly with the goal of fighting climate change, it is also worth noting that every application of AI affects the climate (and society more broadly) in positive and negative ways. “AI for Good” should ideally mean more than just adding beneficial applications on top of business as usual. Some applications of AI are definitely making climate change worse – for example, AI-based advertising systems that increase consumption, and AI algorithms to accelerate the discovery and extraction of fossil fuels. Often, implicit choices made by engineers have the potential to change the impact of a new technology. For example, if we design autonomous vehicles with a focus on personal cars, then driving will become easier, people may drive more, and global carbon emissions could increase (even if each mile driven becomes somewhat more efficient). On the other hand, orienting AV technology towards vehicle sharing and public transportation could decrease carbon emissions. Our work as technologists affects climate change, whether we like it or not – the choices we make both implicitly and explicitly are meaningful.
More resources, including readings, webinars, events and opportunities can be found through the Climate Change AI initiative.
David Rolnick is Assistant Professor and Canada CIFAR AI Chair at McGill University and Mila – Quebec AI Institute, and is a Co-founder and Chair of Climate Change AI.
AI research and development that lacks representation and diversity runs the risk of being biased, unfair, and inequitable. While AI research can support the development of novel solutions for addressing the climate crisis, researchers, companies and decision-makers need to consider how AI tools have the ability to harm as well as help.
After the 2004 Boxing Day Tsunami, I raised funds in my neighbourhood and, four years after the disaster, travelled to an affected village in India to donate them to victims. Witnessing firsthand the conditions people lived in, even years later, I realized how significantly marginalized communities are impacted by natural disasters and the effects of climate change. Climate change stands to increase inequality around the globe, but responsible AI research has the potential to address that. To advocate for this, I’ve outlined the ABCs of incorporating mindful EDI into AI research.
Cultural integration and intersectionality go hand in hand in apprehending global problems and understanding how they affect our most vulnerable populations. Stanford University’s Women in Data Science initiative practices responsible AI and reduces bias by ensuring women and BIPOC representation in research. Programs such as the CIFAR OSMO AI4Good Lab are dedicated to ensuring diverse perspectives feed into the future of AI technologies for the benefit of society. Diversity is essential to a multifaceted understanding of climate issues and the democratization of AI.
Data is the lifeblood of AI technologies. Data that is unbiased and representative of the population has the potential to support robust algorithms that can deliver evidence-based social impact. Data should capture a complete picture of the problem without cropping out all that colonialism once identified as insignificant. Marginalized communities, such as Indigenous Peoples, and their knowledge of the environment need to be included in building AI-based solutions for climate.
AI tools such as those used to track locust swarms and signal aerial pesticide sprayers are addressing the economic burden and food security issues faced by farmers in Africa. True change comes from AI research that takes into consideration outcomes that have a positive impact on all members of society. This involves applying novel AI solutions that address the unique issues different cultures and geographic regions face, while accounting for their capacity to implement these tools.
Let’s implement EDI into AI climate research by apprehending, building and changing the way we do research to move forward positive solutions that will have a lasting impact.
Tami Vasanthakumaran is a youth ambassador with Plan International Canada’s Girls Belong Here program which advocates for gender equality around the world. Dr. Vasanthakumaran is enrolled in Harvard Medical School’s Global Clinical Scholars Research Training program.
CIFAR is a registered charitable organization supported by the governments of Canada, Alberta and Quebec, as well as foundations, individuals, corporations and Canadian and international partner organizations.