The creation and deployment of increasingly sophisticated artificial intelligence (AI) algorithms have immense societal implications. While AI technologies are contributing to new products and services across industries that bring many potential benefits, there are also questions and challenges related to the ethical design and usage of AI. Addressing these questions will require drawing on expertise from across disciplines in the physical and social sciences.
On May 4, 2019, CIFAR convened a roundtable of Fellows from its Learning in Machines & Brains program and other invited experts in AI, ethics, law, public policy and design to discuss the ethical dimensions of AI. The discussion focused on three main themes: the ethical design of machine learning (ML), bias in ML systems, and fake content. Through brief presentations and facilitated discussions, participants sought to define, beyond the creation of guidelines or recommendations, tangible actions to address ethical issues surrounding AI research and applications.
- Academic and industrial researchers in AI and other areas of computer science
- Experts in law, policy and other fields studying the impact of AI in society
- Companies developing AI applications
- Policymakers and government regulators (in economics, communications, law and other areas of policy)
- Educators and administrators in engineering, computer science and mathematics departments
- Organizations of professional engineers
- AI is already enabling many beneficial applications, including medical, environmental, educational and humanitarian ones. However, there are also growing concerns about certain applications in military, surveillance, and manipulative advertising contexts. These issues require nuanced discussion of what AI can actually do; wild speculation about either positive or negative potential impacts should be avoided.
- Technology is not culturally neutral. Developers of technological systems carry assumptions based on the values and ideals of the culture in which they are situated. These assumptions can create biases in algorithms that systematically disadvantage specific communities, often racial or other minorities.
- There is often an incorrect assumption that “ethical people” will create ethical technology. However, ethics cannot be reduced to a single issue, and there is no single solution or formula. A demographically diverse group must be at the table to represent multiple ethical perspectives and identify issues that would otherwise be missed, and an ongoing process is needed to identify and refine thinking about the ethical issues associated with AI.
- Generative models in ML are being used to create “deepfakes” that are increasingly realistic and difficult to detect. These faking technologies may significantly erode public trust in the media and the democratic process.
- The exponential growth in computational power is driving technological change faster than policy infrastructure can be developed. Standards and measurement initiatives may help create an “early warning” system that facilitates better government decision making.
Priorities and Next Steps
- AI researchers should break down the conceptual wall between fundamental and applied AI research so that ethics can be incorporated end to end.
- An environment should be cultivated where staff or junior researchers are not afraid to speak out about ethical issues in their company, government or other organization.
- AI researchers should seize opportunities for local empowerment by promoting ethics and inclusion within their communities: advocating for better inclusivity in their department or conference, considering the diversity of the students whom they train, or socializing their students to think about the ethical implications of their work.
- Update ethics education for engineers and researchers. There are examples of engineering schools that fully integrate ethics into their curricula. Other departments of engineering and computer science, as well as professional associations, can play a role in establishing this practice more widely throughout the field of AI and ML.
- Interdisciplinary workshops or funding opportunities should be set up to develop a mutual dialogue around AI ethics between computer scientists/engineers and social science researchers.
- It may be instructive to look at how other fields, e.g., genomics, approach the ethical implications of their research and applications.
Participants
- Pieter Abbeel, University of California, Berkeley / CIFAR
- Foteini Agrafioti, Borealis AI / RBC
- Yoshua Bengio, Université de Montréal / Mila / CIFAR
- Jack Clark, OpenAI
- Aaron Courville, Université de Montréal / Mila / CIFAR
- Nando de Freitas, DeepMind / Oxford University / CIFAR
- Chelsea Finn, Google / Stanford University / CIFAR
- Michael Froomkin, University of Miami School of Law
- Timnit Gebru, Black in AI / Google
- Zaid Harchaoui, University of Washington / CIFAR
- Simon Lacoste-Julien, Université de Montréal / Mila / CIFAR
- Hugo Larochelle, Google / Université de Sherbrooke / CIFAR
- Yann LeCun, Facebook / New York University / CIFAR
- Honglak Lee, Google / University of Michigan / CIFAR
- Jason Edward Lewis, Concordia University
- Christopher Manning, Stanford University / CIFAR
- Jason Millar, University of Ottawa
- Margaret Mitchell, Google
- Osonde Osoba, RAND
- Blake Richards, University of Toronto / CIFAR
- Saeed Saremi, University of California, Berkeley
- Graham Taylor, University of Guelph / CIFAR
- Pascal Vincent, Université de Montréal / Mila / CIFAR
- Joel Zylberberg, York University / CIFAR
Further Reading
CIFAR’s program in AI & Society
Accountability in AI: Promoting Greater Social Trust (theme paper for G7 multi-stakeholder conference on Artificial Intelligence: Enabling the Responsible Adoption of AI)
A machine learning system generates captions for images from scratch (research brief)
Understanding machine behaviour (interview with Hugo Larochelle, associate director of the Learning in Machines & Brains program, and Matthew Jackson, fellow in the Institutions, Organizations & Growth program)
For more information, contact Fiona Cunningham, Director, Innovation.