By: CIFAR
December 6, 2018
We gratefully acknowledge the support of CIFAR and the Ministry of Internal Affairs and Communications of the Government of Japan as well as the many stakeholders in Canada and Japan who provided feedback and advice on this paper. Please note that the ideas expressed herein are either those of the authors or were provided through the stakeholder consultation. They are not those of the governments of Canada or Japan.
This paper was developed at the request of the Government of Canada to support the G7 Multi-stakeholder Conference on Artificial Intelligence: Enabling the Responsible Adoption of AI on December 6, 2018. Co-leads from Canada and Japan developed this paper on accountability, the intent of which is to provide a starting point for discussions on the topic of Accountability in AI: Promoting Greater Social Trust at the conference. This paper and the discussion build on work that started at the 2016 Takamatsu ICT Ministerial Meeting and led, most recently, to the Charlevoix Common Vision for the Future of Artificial Intelligence.
This paper is organized into two sections. The first provides information on work to date in this domain and sets out various concepts and distinctions worth noting when thinking about accountability and trust in AI. The second section reports on the consultation process and discusses potential actions for different stakeholder groups for the future.
Seven questions, organized under three broad headings, are proposed for framing the discussions at the conference:
With the development and proliferation of AI systems, there is an urgent need to address questions of accountability. However, there is a “lack of consensus among the broader community regarding what a ‘solutions toolkit’ would look like.” This paper surveys the topic of accountability in AI and its link to trust, proposes some key definitions and distinctions, and provides some considerations for future discussions and potential actions among G7 members, other countries, and stakeholders worldwide.
Key Term: Artificial Intelligence (AI)
“[AI is] about making computers that can help us that can do the things that humans can do but our current computers can’t” – Yoshua Bengio
“The field of computer science dedicated to solving cognitive problems commonly associated with human intelligence, such as learning, problem solving, and pattern recognition.” – Amazon
“It is the science and engineering of making intelligent machines, especially intelligent computer programs. It is related to the similar task of using computers to understand human intelligence, but AI does not have to confine itself to methods that are biologically observable.” – John McCarthy
The term artificial intelligence (AI) encompasses a broad range of technologies and approaches. Two general approaches to AI are worth distinguishing. One approach uses predefined models to accomplish goals; the other relies on machine learning to train a system to accomplish goals. Two well-known techniques in machine learning are, at a very high level, deep learning, which uses very large artificial neural networks, and reinforcement learning, which trains a system through rewards and punishments. The intent of this paper is to discuss accountability as it applies broadly to AI, while recognizing that certain ethical issues that have become associated with AI, most notably explainability, relate most directly to deep learning.
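To make the distinction more concrete, the following is a minimal illustrative sketch in Python, assuming only the NumPy library. The toy task, network size and learning rate are arbitrary choices for illustration, not a reference implementation of either technique.

```python
# A minimal sketch, not a reference implementation: a tiny neural network
# (deep learning in miniature) learns XOR from labelled examples.
import numpy as np

rng = np.random.default_rng(0)

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)  # inputs
y = np.array([[0], [1], [1], [0]], dtype=float)              # labelled answers

W1 = rng.normal(size=(2, 8))   # input -> hidden weights
b1 = np.zeros(8)
W2 = rng.normal(size=(8, 1))   # hidden -> output weights
b2 = np.zeros(1)
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

for _ in range(10000):                      # gradient descent on squared error
    h = sigmoid(X @ W1 + b1)                # hidden activations
    out = sigmoid(h @ W2 + b2)              # network predictions
    d_out = (out - y) * out * (1 - out)     # output-layer error signal
    d_h = (d_out @ W2.T) * h * (1 - h)      # backpropagated hidden error
    W2 -= 0.5 * h.T @ d_out
    b2 -= 0.5 * d_out.sum(axis=0)
    W1 -= 0.5 * X.T @ d_h
    b1 -= 0.5 * d_h.sum(axis=0)

print(out.round(2))  # predictions should move toward [0, 1, 1, 0]

# Reinforcement learning, by contrast, is not given labelled answers (y);
# the system acts, receives a reward or punishment, and adjusts its policy
# to maximise the reward it accumulates over time.
```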
AI research has advanced rapidly in the past decade. Success in the lab has led to the proliferation of AI-based systems in certain sectors of society. Because of its ability to operate on massive data sets with speed, precision and accuracy that outpace human capacities, AI is beginning to be applied, or is being contemplated, in healthcare, transportation, law and order, defense, finance, and virtually every other sector of the economy to support, and in some cases substitute for, human analysis and decision-making. These capabilities position AI to deliver great benefits to society.
As with any new technology, we are learning that deploying AI beyond the lab might create risks for individuals and societies, raising concerns about accountability. A few illustrative examples follow. AI that is trained on biased data sets can entrench and proliferate those biases in its outputs, leading to discriminatory applications. In practice, many deep learning systems function largely as “black boxes,” and so their behaviour can be difficult to interpret and explain, raising concerns over explainability, transparency, and human control. Moreover, AI systems may have multiple components (code, sensors, data assets, etc.), any of which may malfunction, further complicating how accountability is determined. Finally, because humans often perceive AI as “superior” in its abilities, they can over-trust it. These examples are not exhaustive. As we learn more about AI and its unique characteristics, the list of potential harms is evolving. An understanding of these potential harms is beginning to be incorporated into governmental thinking on AI. Indeed, systematic research into the ethical implications of AI is progressing steadily both inside and outside of academia. Some of that emerging research focuses specifically on helping policy makers and engineers anticipate and address ethical issues related to AI, including accountability.
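The bias example above can be made concrete with a small sketch. The following Python code, assuming NumPy and scikit-learn are available, fabricates a synthetic data set in which historical decisions were biased against one group, trains a simple classifier on those decisions, and then audits the model’s outputs by group. All names, thresholds and data are invented for illustration only.

```python
# A hypothetical sketch of how biased training data can surface in a model's
# outputs, and how a simple audit can detect it. Data and names are invented.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(42)
n = 5000

# Synthetic "historical" data: a protected attribute (0/1) and a skill score.
group = rng.integers(0, 2, size=n)
skill = rng.normal(loc=0.0, scale=1.0, size=n)

# Historical decisions were biased: group 1 needed a higher skill to be approved.
historical_approval = (skill > np.where(group == 1, 0.8, 0.0)).astype(int)

# Train a model on the biased historical decisions.
X = np.column_stack([skill, group])
model = LogisticRegression().fit(X, historical_approval)

# Audit: compare the approval rates the model produces for each group.
preds = model.predict(X)
for g in (0, 1):
    rate = preds[group == g].mean()
    print(f"group {g}: predicted approval rate = {rate:.2f}")

# The gap between the two rates shows the historical bias reproduced in the
# model's outputs; without such an audit, a black-box system would simply
# carry the bias forward.
```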
Anticipating and addressing these potential risks is urgent. These systems are often opaque and complex, and their potential impact is broad. Coupled with their potential use in critical, high-stakes decision contexts (e.g. judicial reasoning, healthcare, warfare, financial transactions), the stakes are significant. For example, a routine software update to a traffic routing algorithm controlling an automated and connected mobility system could quickly redistribute risks among millions of people within the system. Determining who ought to face greater risks within a mobility system is a weighty task with broad implications. The process by which we ought to make that decision, as well as the responsibility for that decision and its systemic consequences, may exceed the capabilities of existing regimes (torts, consumer protection, etc.).
Though there is clearly potential to do harm by deploying AI in some contexts, we should be measured in our concern. In many cases, the negative societal impacts of status quo (i.e. non-algorithmic) systems are not interrogated as intensely as AI systems. In other words, it is important to understand the risks posed by AI as well as the risks posed by the status quo.
As is commonly the case, the pace of technical innovation is outpacing our policy responses with respect to accountability. Failing to establish clear guidance related to accountability could undermine trust among both experts and the public, potentially limiting the benefits of AI. At the same time, it is important to note that the goal cannot simply be to increase levels of trust in AI, because we can over- or under-trust an automated system. We under-trust when inaccurate assumptions (i.e. fears, misinformation) about AI prevent us from trusting it, potentially depriving us of the benefits it might produce. On the flip side, we over-trust a system when, for example, we mistakenly believe (and trust) that it is capable of performing tasks that it is not. The unfortunate accidents caused by autonomous vehicles can be seen as cases of over-trust: in each case the human driver falsely believed that the automated system controlling the vehicle could perform at a level it could not. Thus, our aim could be to encourage appropriate levels of trust in AI, with accountability regimes taking the nuances of over- and under-trust into account.
Finally, with the progress of AI networking, where AI systems are connected to other systems over the Internet or other information and communication networks, it will become more difficult to identify both the causes of issues and where the responsibility for them lies. In order to foster trust in AI, it will be important to build on a set of shared principles that clarify the roles and responsibilities of each stakeholder in the network, including developers, service providers and end users, in the research, development and use of AI.
Broadly speaking, accountability is the foundation of trust in society. Accountability is about a clear acknowledgement and assumption of responsibility and “answerability” for actions, decisions, products and policies. Currently, three “senses” of accountability related to AI exist in the literature, each pointing to a different locus for action. In the first sense, accountability is a feature of the AI system itself. Building explainability into AI systems would partially address accountability in this sense. The second sense of accountability focuses on determining which individuals or groups are accountable for the impact of algorithms or AI. In this sense, accountability is somewhat narrowly associated with determining who is most responsible for what effect within the sociotechnical system. Finally, and perhaps most broadly, accountability is seen as a feature of the broader sociotechnical system that develops, procures, deploys and uses AI. For example, AI Now proposes an Algorithmic Impact Assessment framework (similar to a Privacy Impact Assessment) as a means of building accountability into the broader sociotechnical system in which AI is deployed, only part of which would include responsibility determinations. Along similar lines, the World Wide Web (WWW) Foundation identifies principles of algorithmic accountability, including: fairness, explainability, auditability, responsibility, and accuracy.
All three senses of accountability are being actively researched and developed.
The WWW Foundation describes a “critical” distinction between “algorithmic accountability—the responsibility of algorithm designers to provide evidence of potential or realised harms,” and “algorithmic justice—the ability to provide redress from harms.” Their reason for making this distinction is the worry that focusing on redress as a means of addressing accountability distracts from a critical opportunity available to algorithm designers and engineers to anticipate harms before the AI is deployed. While taking this advice to heart, one must also be careful not to place too much emphasis on the responsibility of algorithm designers to anticipate harms, which could distract from a broader approach for addressing accountability in AI.
The above trust and accountability considerations point to a useful distinction between trusting a system and the trustworthiness of a system. Trusting a system appropriately means having a justified level of trust in it, that is, having just the right amount of trust. The trustworthiness of a system, in turn, can be defined as the extent to which the system can reliably perform or fulfill its designated purpose as expected. For example, news stories shared on social media platforms are frequently trusted at a level higher than they should be, because some news stories are inaccurate and therefore not trustworthy. Conversely, scientific publications are often trusted less than their trustworthiness would justify, and thus are under-trusted. As a final example, people who trust flying in airplanes are trusting appropriately, because by all measures air travel is a very trustworthy mode of transportation. When it comes to AI, various factors can cause people to not trust otherwise trustworthy AI. Developing robust accountability regimes for AI systems, including the broader sociotechnical systems that surround them, would promote appropriate trust in AI among experts and the public.
Transparency is often mentioned in discussions of AI accountability because it allows for greater scrutiny of an AI system. However, accountability does not necessarily increase or improve simply by increasing transparency. In the absence of robust processes, principles, and frameworks, transparency alone is not sufficient to ensure greater accountability.
Another challenge for accountable AI is that AI is portable across borders. It is developed and deployed in multiple jurisdictions, and in ways that cross international and cultural boundaries. The distribution and movement of digital assets is difficult to constrain. This complicates trust when, for example, AI developed with one set of cultural assumptions embedded in it is deployed in a “foreign” cultural context, where trust-building norms differ. It also complicates individual jurisdictional responses, since an AI might or might not be built to respect local laws and cultural norms. The difficulties of dealing with cross-jurisdictional issues are not new; they characterize a number of issues in the digital age, privacy being chief among them. As we have seen with the recent European General Data Protection Regulation (GDPR), cross-jurisdictional solutions require multi-stakeholder input and would benefit from multi-lateral coordination. This coordination could help ensure not only that an AI is functioning within the legal constraints of multiple jurisdictions, but also that it is functioning safely and in a trustworthy manner.
Finally, more research is needed to better support decision-making related to:
Various international policies, programs, centres, and activities have been launched to address the development of robust and global AI accountability. Some of the gaps in knowledge they are tackling include how to:
The following are some examples of work underway in standards and principles development, as well as individual jurisdictional approaches.
Governments and other multi-stakeholder groups at the national, regional and municipal levels are declaring principles that will guide various aspects of AI development, procurement and use. Additionally, a number of private organizations have introduced principles-based frameworks for the responsible adoption of AI. These include Google, SAP, and Microsoft.
Japan
The Conference of Advisory Experts of Japan’s Ministry of Internal Affairs and Communications has drafted AI R&D Principles to promote the societal and economic benefits of AI while mitigating risks such as lack of transparency and loss of control. The Conference’s overarching vision is that of a Wisdom Network Society:
“…a society where, as a result of the progress of AI networking, humans live in harmony with AI networks, and data/information/knowledge are freely and safely created, distributed, and linked to form a wisdom network, encouraging collaborations beyond space among people, things, and events in various fields and consequently enabling creative and vibrant developments.”
The principles for realizing this vision include collaboration, transparency, controllability, safety, security, privacy, ethics, user assistance, and accountability.
Building on that work, the Conference has introduced Draft AI Utilization Principles, which put forward principles under the three pillars of promoting benefits, mitigating harms, and building trust:
In May 2018, the Cabinet Office of Japan began discussions toward the formulation of social principles for human-centric AI, which will serve as basic principles for the better social implementation and sharing of AI. The AI Social Principles will be finalized in March 2019.
Canada
The Montreal Declaration on the Responsible Development of AI, the result of a multi-stakeholder engagement process spearheaded by the Université de Montréal, seeks to outline “a series of ethical guidelines for the development of AI.” The first draft identifies seven key values to keep in mind when developing AI: “well-being, autonomy, justice, privacy, knowledge, democracy and accountability.”
Several organizations (professional and otherwise) are working towards developing standards for the ethical development and use of AI.
Institute of Electrical and Electronics Engineers (IEEE)
In 2016, IEEE, the world’s largest professional engineering organization, established the Global Initiative on Ethics of Autonomous and Intelligent Systems. The Initiative’s guiding document, now in its second version, describes the ongoing work of various standards working groups that have since been established to address a number of sub-domains, including:
International Organization for Standardization (ISO)
ISO has recently created a new technical subcommittee in the area of AI (SC 42), which is working to develop foundational standards as well as to address issues related to safety and trustworthiness. SC 42 has created study groups on computational approaches and characteristics, trustworthiness, and use cases and applications.
These initiatives offer promising starting points and have the potential to contribute positive and significant results within their individual mandates; much can be learned and transferred from them. However, more work is needed to develop a fully articulated, robust and global AI accountability regime.
Below are examples of formal strategies undertaken by individual jurisdictions that may serve as precedents for other regions:
The Government of Canada is working towards releasing the first version of its Directive on Automated Decision-Making, which, in its current draft, sets out several requirements for AI development and use. These include requirements for performing Algorithmic Impact Assessments, transparency and explainability, quality assurance, human intervention, and recourse and reporting.
True to its name, the GDPR is a regulatory initiative that sets out general data protection rules aimed at protecting individuals’ privacy within the EU. In addition to outlining rules concerning individual consent to data use, Articles 13-15 in particular set out what has been referred to as a “right to explanation” when algorithmic decision-making occurs. That is, individuals have a right to request information explaining the algorithmic logic used to render a decision when a system uses their personal data. Some have argued that this poses a barrier to AI innovation, both in terms of the direct costs associated with manual reviews of algorithmic decisions and in terms of limiting the potential performance of AI, whereas others see the GDPR as a move towards improving AI accountability.
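As an illustration only, the following Python sketch shows one simple way a per-decision account of “the algorithmic logic” might be produced for a linear scoring model. The feature names, weights and applicant record are hypothetical, and real GDPR compliance involves far more than a printout of this kind.

```python
# A hypothetical sketch of a per-decision explanation for a linear scoring
# model. Feature names, weights and the applicant record are invented.
import numpy as np

feature_names = ["income", "years_at_job", "existing_debt"]
weights = np.array([0.6, 0.3, -0.9])     # model coefficients (illustrative)
bias = -0.2

applicant = np.array([1.2, 0.5, 1.8])    # standardised feature values
score = float(weights @ applicant + bias)
decision = "approved" if score > 0 else "declined"

# Per-feature contributions to this particular decision.
contributions = weights * applicant
order = np.argsort(-np.abs(contributions))

print(f"Decision: {decision} (score = {score:.2f})")
for i in order:
    print(f"  {feature_names[i]:>15}: contribution {contributions[i]:+.2f}")

# The output lists which features pushed the score up or down, giving the
# individual a concrete account of the logic applied to their data.
```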
Billed as the first of its kind in the US, this nascent task force promises to “[recommend] a process for reviewing government automated decision systems, more commonly known as algorithms.” Their focus will be on ensuring that algorithms are “used appropriately and align with the goal of making New York City a fairer and more equitable place for all its residents.”
Building on this brief overview of AI, accountability and activities underway, the following section is intended to catalyse discussion at the December 6th conference and potential actions for the future. We begin with a short and non-exhaustive list of roles for potential stakeholder groups, as well as some suggested discussion topics and potential G7 leadership opportunities to be considered at the conference.
Due to the complexity and intersectionality of issues related to AI and accountability, it will be critical that inclusive opportunities are created for diverse stakeholder groups to come together to move this work forward for the benefit of people worldwide. A number of different stakeholders could be engaged to provide role-specific input on the development and maintenance of robust global AI accountability regimes. Some examples are provided below.
Potential Roles:
An early draft of this paper was placed online for public consultation. We received feedback from a number of individuals in Canada and Japan. Much of their feedback has been incorporated into the paper, but we have also attempted to present a summary below. Please note that these have been condensed or reworded, and are intended to represent the views of those consulted, not necessarily the views of the authors.
Building on the discussion initiated by the 2016 G7 ICT Ministerial Meeting in Takamatsu, G7 members have undertaken studies on the potential social, economic, ethical, and legal issues raised by AI, as well as AI’s socio-economic impact.
The G7 also recognises the need for further information sharing and discussion to deepen the understanding of the multi-faceted opportunities and challenges brought by AI. There are a number of potential roles that the G7 and other multi-lateral groups could play in the promotion of greater accountability in the AI sector. Some examples are provided below:
The G7 Multi-stakeholder Conference on December 6th offers an opportunity for diverse stakeholder groups to come together to discuss some of these issues and to explore opportunities to move forward on the creation of robust and globally accountable AI. To begin to frame this discussion, we propose seven questions for discussion in the following themes:
We are appreciative of the opportunity to provide this overview of AI and accountability in an effort to stimulate robust discussion at the December 6th conference. As the development of AI applications expands and accelerates, it is urgent and important for stakeholders to come together from all sectors, and across borders, to better understand what accountability means in an AI-enabled world and the implications for societal trust. We hope this survey of AI accountability is a useful resource for conference participants and others, stimulating future discussions and potential actions among G7 members, other countries and stakeholders worldwide.