Recent advances in artificial intelligence (AI) and machine learning (ML) have opened the door to applications that could create significant positive impacts on healthcare, education, commerce, environmental protection, and a variety of other areas. Yet there is strong and legitimate concern that, improperly deployed, AI could compromise privacy, facilitate the spread of mis- and disinformation, and reproduce and exacerbate the biases, discrimination and inequities that already exist in our society. A plethora of ethical principles and statements have been drawn up to provide guidance and aspiration for the field, but ongoing work is needed to turn these principles into action and to influence how AI research and development is conducted.
CIFAR’s program in Learning in Machines and Brains brings together leading researchers from around the world who are conducting foundational work that has shaped, and will continue to shape, the fields of AI and ML. Many of them have also been engaging in broader conversations about ethical AI and the technology’s impact on society. Actions taken by AI research leaders, including CIFAR fellows, at all levels and venues — from their own laboratories, to their institutions, to international conferences — could have a cascading effect through the entire discipline and enable a critical transition towards greater consideration, implementation and cultivation of ethics in AI research and training environments.
Recent efforts by CIFAR fellows, scholars and advisors in the Learning in Machines and Brains program to tackle issues of ethical AI include:
In a recent commentary, Yoshua Bengio and Alexandra Luccioni highlighted the ethical concerns raised by AI applications, including bias that could be introduced by the data used to train algorithms, programmed in by a system’s creators, or arise from the way problems are framed. The authors argue both for individual researchers to be more reflective in their work and for collective societal change through norms, laws and regulations. They propose four questions that AI/ML researchers should ask themselves when conducting their work: How will the technology be used? Who will benefit or suffer from it? How large and what kind of social impact will it have? And how does one’s work fit with one’s values?
In 2019, CIFAR co-sponsored a Summer Institute on AI and Society. Summarizing the discussions at the Institute, Alona Fyshe and the other co-organizers emphasized the need for more discussion of the large-scale, medium-term (rather than either present-day or singularity-related) implications of AI, and the importance of AI researchers, both junior and senior, engaging in sustained, intensive interdisciplinary collaboration across the sciences and humanities to bring in the broad perspectives needed to understand and tackle these issues.
Healthcare is one of the major application areas where much hope is pinned on advances in AI, but there is also significant concern that AI could perpetuate existing inequities. Marzyeh Ghassemi, Anna Goldenberg (fellow in CIFAR’s Child & Brain Development program) and their colleagues laid out a roadmap for AI researchers to responsibly research and deploy ML in healthcare. The roadmap calls on researchers to choose the right problems, engaging early with healthcare stakeholders and patients to better understand the data and formulate clinically meaningful questions, rather than simply tackling problems for which annotated data happen to be available; to develop useful solutions by scrutinizing when and why the data were collected and how representative they are; to consider ethical implications by working with ethicists and social scientists to understand biases in the data and the appropriateness of the problem statement; to rigorously evaluate the model, for example with clinically relevant evaluation measures and qualitative analyses that can surface potential bias and confounding; and to report results thoughtfully by sharing code and stating the contexts in which the model is valid.
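To make the evaluation step concrete, the sketch below shows one way to report clinically relevant metrics per patient subgroup rather than as a single pooled number that can mask uneven performance. This is a minimal illustration, not the authors’ published pipeline: the data, the subgroup labels and all variable names are synthetic assumptions.

```python
# Minimal sketch: evaluating a clinical risk model separately for each
# patient subgroup, in the spirit of bias-aware, clinically relevant
# evaluation. All data and group labels below are synthetic.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import recall_score, roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 4000
X = rng.normal(size=(n, 5))                # synthetic clinical features
group = rng.integers(0, 2, size=n)         # two hypothetical demographic subgroups
# Synthetic outcome whose relationship to the features differs slightly by
# group, so a single pooled model may perform unevenly across subgroups.
logits = X[:, 0] + 0.5 * X[:, 1] + 0.8 * group * X[:, 2]
y = (logits + rng.normal(scale=1.0, size=n) > 0).astype(int)

X_tr, X_te, y_tr, y_te, g_tr, g_te = train_test_split(
    X, y, group, test_size=0.3, random_state=0
)

model = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
pred = model.predict(X_te)
prob = model.predict_proba(X_te)[:, 1]

# Report sensitivity and AUROC per subgroup, not just in aggregate.
for g in np.unique(g_te):
    mask = g_te == g
    print(
        f"group {g}: n={mask.sum():4d} "
        f"sensitivity={recall_score(y_te[mask], pred[mask]):.3f} "
        f"AUROC={roc_auc_score(y_te[mask], prob[mask]):.3f}"
    )
```

A gap between subgroups in such a report is exactly the kind of signal the roadmap suggests investigating with ethicists, clinicians and the people represented in the data before deployment.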
AI researchers can also contribute to ethical AI by developing technical solutions that tackle bias and other issues in data and algorithms. Recent work by Richard Zemel and colleagues on such techniques includes fairness-aware causal modelling, which helps ML models learn how past decision-making leads to biases in datasets and in turn make fairer and more accurate decisions; a constrained matching approach that lets recommender systems create fairer outcomes by optimizing long-term social welfare; ML algorithms that learn to identify the origins of bias in word embeddings and the subsets of training documents that can be removed to reduce that bias; and a learning-to-defer framework, in which an ML model learns to defer decisions it lacks sufficient information to make responsibly to a downstream (even if potentially biased) agent, in a way that maximizes the accuracy and fairness of the overall decision-making pipeline.
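The deferral idea can be illustrated with a toy sketch. In place of the learned deferral objective described by Madras and colleagues, the example below uses a simple confidence threshold to decide when the model abstains and passes the case to a downstream decision-maker, simulated here as an imperfect oracle; all data, thresholds and accuracy figures are hypothetical.

```python
# Toy illustration of deferring uncertain decisions to a downstream agent.
# This confidence-threshold rule is a simplified stand-in for a learned
# deferral policy; the data and the downstream agent are synthetic.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
X = rng.normal(size=(3000, 4))
y = (X[:, 0] + 0.7 * X[:, 1] + rng.normal(scale=1.5, size=3000) > 0).astype(int)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=1)

model = LogisticRegression().fit(X_tr, y_tr)
prob = model.predict_proba(X_te)[:, 1]

THRESHOLD = 0.35          # defer when the predicted probability is near 0.5
defer = np.abs(prob - 0.5) < THRESHOLD

# Downstream decision-maker (e.g., a human expert), modelled as an
# imperfect oracle that is right 85% of the time.
downstream = np.where(rng.random(len(y_te)) < 0.85, y_te, 1 - y_te)

# Final decision: the model answers confident cases, the downstream agent
# handles deferred ones.
final = np.where(defer, downstream, (prob >= 0.5).astype(int))
print(f"deferred on {defer.mean():.1%} of cases, "
      f"overall accuracy {(final == y_te).mean():.3f}")
```

In the published framework the deferral rule is itself learned, jointly with the predictor, to optimize the accuracy and fairness of the whole pipeline rather than being a fixed threshold as in this sketch.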
International conferences can play a key role in shaping the norms and practices of the field to promote more ethical and responsible AI/ML research. The organizing committee of the 2019 NeurIPS conference, including Joelle Pineau and Hugo Larochelle, implemented a reproducibility program, which included a code submission policy, a community-wide reproducibility challenge for accepted papers, and a reproducibility checklist for the submission/review process. In 2020, NeurIPS organizers including Raia Hadsell began requiring submissions to include a broader impact statement, appointed an ethics advisor, and instituted an ethics review process for submitted papers.
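As a small illustration of the practices such a program encourages, the sketch below fixes random seeds and records the exact run configuration so that reported results can be re-created from released code. The configuration fields are hypothetical and are not the official NeurIPS checklist items.

```python
# Minimal sketch of two reproducibility practices: fixing sources of
# randomness and saving the run configuration alongside the results.
# The fields below are illustrative, not an official checklist.
import json
import random

import numpy as np

def set_seed(seed: int) -> None:
    """Fix all sources of randomness used by this (toy) experiment."""
    random.seed(seed)
    np.random.seed(seed)

config = {
    "seed": 42,
    "learning_rate": 1e-3,
    "batch_size": 64,
    "num_epochs": 10,
    "dataset": "synthetic-demo",
}

set_seed(config["seed"])
# ... run the experiment here ...

# Save the configuration so reviewers and readers can reproduce the run.
with open("run_config.json", "w") as f:
    json.dump(config, f, indent=2)
```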
References
Brunet M-E et al. 2019. Understanding the origins of bias in word embeddings. In Proceedings of the 36th International Conference on Machine Learning, June 9-15, 2019, Long Beach, CA, USA, PMLR 97:803-811.
Lin H-T et al. 2020. What we learned from NeurIPS 2020 reviewing process. [blog] Neural Information Processing Systems Conference. Available at https://neuripsconf.medium.com/what-we-learned-from-neurips-2020-reviewing-process-e24549eea38f
Luccioni A and Bengio Y. 2020. On the morality of artificial intelligence. IEEE Technol. Soc. Mag. 39:16-25.
Madras D et al. 2018. Predict responsibly: Improving fairness and accuracy by learning to defer. In Advances in Neural Information Processing Systems 31: NeurIPS 2018, December 3-8, 2018, Montréal, QC, Canada (pp.6147-6157).
Madras D et al. 2019. Fairness through causal awareness: Learning causal latent-variable models for biased data. In FAT* ’19: Proceedings of the Conference on Fairness, Accountability, and Transparency, January 29–31, 2019, Atlanta, GA, USA (pp. 349-358).
McCoy LG et al. 2020. Ensuring machine learning for healthcare works for all. BMJ Health Care Inform. 27:e100237.
Mladenov M et al. 2020. Optimizing long-term social welfare in recommender systems: A constrained matching approach. In Proceedings of the 37th International Conference on Machine Learning, July 13-18, 2020, Virtual, PMLR 119:6987-6998.
Parson E et al. 2019. Artificial Intelligence’s Societal Impacts, Governance, and Ethics: Introduction to the 2019 Summer Institute on AI and Society and its Rapid Outputs. UCLA: The Program on Understanding Law, Science, and Evidence (PULSE). Retrieved from https://escholarship.org/uc/item/2gp9314r
Pineau J et al. 2020. Improving reproducibility in machine learning research (A report from the NeurIPS 2019 Reproducibility Program). arXiv preprint.
Wiens J et al. 2019. Do no harm: A roadmap for responsible machine learning for health care. Nat. Med. 25:1337.