By: CIFAR
3 Aug, 2022
Against the backdrop of increasing use of artificial intelligence (AI) technologies in everyday life and growing private investment in the area, more researchers are entering the field of AI than ever before. The increasing relevance of AI has come with a wider awareness of its potential harmful real-world impacts, including on the environment, marginalized communities, and society at large.
How can the AI research community better anticipate the downstream consequences of AI research? And how can AI researchers mitigate potential negative impacts of their work such as inappropriate applications, unintended and malicious use, accidents, and societal harms?
In early 2022, CIFAR, Partnership on AI, and the Ada Lovelace Institute brought together recent machine learning (ML) conference organizers and AI ethics experts to consider what conference organizers can do to encourage submitting authors to reflect on the potential downstream impacts of their AI research.
“AI has amazing potential for doing a lot of good in our world. But it also carries tremendous potential for harm, if not conducted responsibly,” says Elissa Strome, Executive Director of the Pan-Canadian AI Strategy at CIFAR. “In an academic environment of ‘publish or perish’ and ‘fast science,’ the AI research community must systematize the practice of pausing to meaningfully consider the ethical implications of research prior to implementation and spread. As the central hubs of academic knowledge-sharing, conferences are a really smart place to start. Alongside our international collaborators Partnership on AI and the Ada Lovelace Institute, CIFAR is pleased to be sharing the fantastic conversations and tools developed through our workshop, which we hope conference organizers worldwide can adapt in their own activities to help spread the practice of responsible AI.”
For more information, contact:
Gagan Gill
Program Manager, AI & Society, CIFAR