In recent years, discussion of the ethics of artificial intelligence (AI) has accelerated, with a host of guidelines and principles produced by academic institutions and international organizations. While often comprehensive and thought-provoking, many of these documents do not address how ethical thinking can be practically integrated into AI research and training environments. Fostering a culture of ethical AI among researchers and research institutions will be critical to translating these frameworks into actionable decisions. Ultimately, this change in culture can shape the way AI research and development is conducted.
Building on the success of the May 2019 roundtable on Ethical AI, CIFAR hosted a virtual roundtable on February 10, 2021, to facilitate discussion of how to create and sustain a culture of ethics in AI research and training environments, the needs and challenges of ethical thinking and practice in AI research institutions, and the steps AI research leaders need to take to support and cultivate such a culture. Attendees, who included AI research leaders (among them members of CIFAR’s Learning in Machines & Brains program and Canada CIFAR AI Chairs) as well as experts in technology ethics and governance and in organizational culture, shared best practices, examples of successful initiatives, and gaps in current processes. This report highlights key insights and next steps from the roundtable discussion.
Impacted Stakeholders
- Academic and industrial researchers in AI and other areas of computer science
- Experts in law, policy, ethics and other social sciences who are studying AI research and applications
- Educators and administrators in university departments of engineering, computer science and mathematics and in AI research institutes
- Academic and professional organizations of engineers, computer scientists and AI researchers
- Policymakers, government regulators and international organizations developing guidelines and frameworks for ethical AI
Key Insights
- There are now more than 160 declarations and guidelines on ethical AI, many touching on similar high-level principles such as democracy or privacy. However, most are largely Eurocentric and offer little practical guidance on how AI researchers should incorporate the principles into their research agendas or practice. For such guidelines to be actionable in influencing the ethics of AI research globally, they must suggest specific tools, policies, metrics and enforcement mechanisms. Structures for international consultation are also needed so that the perspectives of the broader citizenry, particularly vulnerable communities, can be heard and their values taken into account in the interest-balancing decisions of ethics.
- A simple framework could help effect cultural change in the conduct of AI research:
- A clear, collective understanding of the current state of the field and where researchers want it to go, as well as why such changes are important to researchers as individuals, to their lab or institution, and to the broader community;
- Role models for the changes, particularly from senior or established researchers;
- Tools and training to allow young researchers to embed an ethical mindset into their practice; and
- Alignment of the incentives of the research system, such as funding grants and the publication process, with the intended goals.
- AI is an inherently interdisciplinary field, and work on the ethical and societal impact of AI requires bringing computer scientists and mathematicians together with other experts in law, ethics and the social sciences, as well as areas of downstream application (such as health or finance).
- It is particularly important that AI researchers not see ethical issues as problems for ethicists to “solve” for them, but as something they must continually engage with in a joint effort.
- By collaborating with experts or practitioners in specific fields of application, AI researchers can gain a better understanding of the context in which datasets are collected and the resulting limitations and biases, so that machine learning models can be trained to deal with such data properly. They can also learn how models are actually being used in the field and how that use affects different population groups or communities (see the illustrative audit sketch after this list).
- AI researchers can benefit from actively engaging or collaborating with the communities who are affected by their research (including those who are involved in “mechanical turk” annotation of data) to better understand the ethical and societal implications of their work. Doing so would also increase the field’s transparency and build trust and legitimacy with broader society.
- Leaders in AI research can apply their strong spirit of scientific curiosity to the ethical domain. Though they may initially feel reluctant or vulnerable engaging with questions or experts outside their own areas of expertise, they can treat the process as one of hypothesis testing: if a given incentive structure or tool is put in place, how will it affect the research community? By leading the charge in advocating for and implementing such changes, senior researchers also give young researchers the space and opportunity to conduct their work within ethical frameworks.
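To make the dataset point above concrete, the following is a minimal, illustrative sketch (not a method discussed at the roundtable) of the kind of subgroup audit a researcher might run before training. It assumes a pandas DataFrame with hypothetical "group" and "label" columns standing in for whatever a real dataset uses:

```python
# Illustrative sketch only: audit subgroup representation and label rates
# in a dataset before training. Column names ("group", "label") are
# hypothetical placeholders.
import pandas as pd

def subgroup_audit(df: pd.DataFrame, group_col: str = "group",
                   label_col: str = "label") -> pd.DataFrame:
    """Report size, share of the dataset, and positive-label rate per subgroup."""
    audit = df.groupby(group_col).agg(
        n=(label_col, "size"),               # subgroup size
        positive_rate=(label_col, "mean"),   # fraction with a positive label
    )
    audit["share"] = audit["n"] / len(df)    # representation in the dataset
    return audit.sort_values("share", ascending=False)

# Toy example:
df = pd.DataFrame({
    "group": ["a", "a", "a", "b", "b", "c"],
    "label": [1, 0, 1, 0, 0, 1],
})
print(subgroup_audit(df))
```

Skewed shares or divergent positive rates in such a report do not resolve any ethical question by themselves; they flag where collaboration with domain experts and affected communities, as described above, is most needed.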
Priorities and Next Steps
- Focusing on incentives rather than only on risks may help AI researchers better engage with ethical issues. Giving researchers, particularly those in training, opportunities to think about why they are doing their research and how it can make an impact on causes that they care about, such as healthcare or climate change, may incentivize them to more broadly consider potential positive and negative impacts of their work. Case studies can be used early on in the training of AI researchers to help them analyze intended and unintended consequences of research and anticipate different scenarios for their own work.
- AI researchers are already implementing a variety of practices to foster a culture of ethics in their labs, institutions and conferences, and these practices could be adopted even more widely:
- Many AI research groups or institutions are actively embedding experts in law, ethics and the social sciences in their organizations. Other measures such as joint undergraduate courses/degrees and multidisciplinary journal clubs, reading rooms, departmental seminars or conferences can further foster interdisciplinary thinking and collaboration.
- Some AI institutes have established mechanisms modelled after bioethical IRBs (institutional review boards) to evaluate grant proposals for their ethical and societal implications. Similarly, ethical review processes are starting to be implemented in some international AI conferences such as NeurIPS. To maximize the impact of these measures, AI researchers will need to have access to tools, training and best practices in order to properly prepare their submissions.
- Some undergraduate and graduate computer science programs have begun mandating ethics courses or ethics components in AI / deep learning courses. It may be further possible to require ethics components in graduate student committee meetings or theses, or implement ethics checklists that students can use at the beginning of their projects followed by periodic check-ins. However, because each graduate student is primarily responsible to their own advisor, such actions will require broad buy-in and coordination among faculty members. University administration or departments will also need to provide resources and funding for some of these measures to be sustainable.
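As one hypothetical illustration of the checklist idea above (the roundtable did not prescribe a format), a project-start ethics checklist with periodic check-ins could be represented as simply as:

```python
# Illustrative sketch only: a possible structure for a project-start ethics
# checklist reviewed at periodic check-ins. All items and field names are
# hypothetical, not a prescribed standard.
from dataclasses import dataclass, field

@dataclass
class ChecklistItem:
    question: str
    answered: bool = False
    notes: str = ""

@dataclass
class EthicsChecklist:
    project: str
    items: list[ChecklistItem] = field(default_factory=list)

    def open_items(self) -> list[str]:
        """Questions still unanswered, to revisit at the next check-in."""
        return [item.question for item in self.items if not item.answered]

checklist = EthicsChecklist(
    project="example-thesis-project",
    items=[
        ChecklistItem("Who could be harmed if the model fails or is misused?"),
        ChecklistItem("How were the data collected, and with what consent?"),
        ChecklistItem("Which subgroups are under-represented in the data?"),
    ],
)
print(checklist.open_items())
```

The value of such a tool lies less in the data structure than in the faculty-wide buy-in and recurring review meetings the passage above describes.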
Roundtable Participants
- Stefan Bauer, Research Group Leader, Max Planck Institute for Intelligent Systems / Azrieli Global Scholar, Learning in Machines & Brains program, CIFAR
- Yoshua Bengio, Professor and Canada Research Chair in Statistical Learning Algorithms, Université de Montréal / Scientific Director and Canada CIFAR AI Chair, Mila / Scientific Director, IVADO / Co-director, Learning in Machines & Brains program, CIFAR
- Léon Bottou, Scientist, Facebook AI Research / Fellow, Learning in Machines & Brains program, CIFAR
- Jack Clark, Steering Committee Co-chair, AI Index / former Policy Director, OpenAI
- Allison Cohen, Applied Projects Lead, AI for Humanity, Mila
- Rebecca Finlay, Acting Executive Director, Partnership on AI
- Alona Fyshe, Assistant Professor, University of Alberta / Fellow and Canada CIFAR AI Chair, Amii / Fellow, Learning in Machines & Brains program, CIFAR
- Surya Ganguli, Associate Director, Stanford Institute for Human-Centered Artificial Intelligence, and Associate Professor, Stanford University / Fellow, Learning in Machines & Brains program, CIFAR
- Marzyeh Ghassemi, Assistant Professor and Canada Research Chair in Machine Learning for Health, University of Toronto / Faculty Member and Canada CIFAR AI Chair, Vector Institute / Azrieli Global Scholar, Learning in Machines & Brains program, CIFAR
- Raia Hadsell, Director of Robotics, DeepMind / Advisor, Learning in Machines & Brains program, CIFAR
- Will Hawkins, Research Associate, DeepMind
- Aapo Hyvärinen, Professor, University of Helsinki / Fellow, Learning in Machines & Brains program, CIFAR
- Konrad Körding, Professor, University of Pennsylvania / Fellow, Learning in Machines & Brains program, CIFAR
- Simon Lacoste-Julien, Associate Professor, Université de Montréal / Core Academic Member and Canada CIFAR AI Chair, Mila / VP Lab Director, Samsung SAIT AI Lab Montreal / Associate Fellow, Learning in Machines & Brains program, CIFAR
- Yann LeCun, VP and Chief AI Scientist, Facebook AI Research / Professor, New York University / Co-director, Learning in Machines & Brains program, CIFAR
- Sasha Luccioni, Postdoctoral Researcher, AI for Humanity, Mila
- Jason Millar, Assistant Professor and Canada Research Chair in the Ethical Engineering of Robotics and Artificial Intelligence, University of Ottawa
- Joelle Pineau, Associate Professor, McGill University / Core Academic Member and Canada CIFAR AI Chair, Mila / Co-Managing Director, Facebook AI Research / Advisor, Learning in Machines & Brains program, CIFAR
- Valerie Pisano, President and CEO, Mila
- Benjamin Prud’homme, Executive Director, AI for Humanity, Mila
- Sarah Rispin Sedlak, Senior Fellow, Duke Initiative for Science & Society, Duke University
- Bernhard Schölkopf, Director, Max Planck Institute for Intelligent Systems / Affiliated Professor, ETH Zurich / Fellow, Learning in Machines & Brains program, CIFAR
- Graham Taylor, Associate Professor and Canada Research Chair in Machine Learning, University of Guelph / Faculty Member and Canada CIFAR AI Chair, Vector Institute / former Azrieli Global Scholar, Learning in Machines & Brains program, CIFAR
- Richard Zemel, Professor and NSERC Industrial Research Chair in Machine Learning, University of Toronto / Research Director and Canada CIFAR AI Chair, Vector Institute / Associate Fellow, Learning in Machines & Brains program, CIFAR
- Joel Zylberberg, Assistant Professor and Canada Research Chair in Computational Neuroscience, York University / Faculty Affiliate, Vector Institute / Associate Fellow, Learning in Machines & Brains program, CIFAR
Further Reading
CIFAR resources:
A focus on ethics in AI research (research brief)
Ethical AI: A Discussion (event brief)
Accountability in AI: Promoting Greater Social Trust (theme paper for G7 multi-stakeholder conference on Artificial Intelligence: Enabling the Responsible Adoption of AI)
Other resources:
UNESCO: First draft of the Recommendation on the Ethics of Artificial Intelligence
Montréal Declaration for a Responsible Development of Artificial Intelligence
Global Partnership on AI – Working group on responsible AI
ACM Conference on Fairness, Accountability, and Transparency (ACM FAccT)
A Guide to Writing the NeurIPS Impact Statement
Undergraduate certificate on Digital Intelligence, Duke University
Embedded EthiCS course modules, Harvard University
AI for Humanity, Mila
Ethics Review Board Statement for Human-Centered Artificial Intelligence Seed Grants, Stanford University
For more information, contact
Fiona Cunningham
Director, Innovation
CIFAR
fiona.cunningham@cifar.ca