By: Kathleen Sandusky
24 Apr, 2024
As any parent of an iPad-obsessed tween can attest, AI technologies have become deeply enmeshed in our children’s lives and imaginations. Examples of the many impacts include “For You” algorithms that keep kids’ eyes glued to TikTok; school board debates on whether generative AI is a help or a hindrance in the classroom; and the frightening spectre of sophisticated child-luring technologies that could use AI to mimic children online.
Responsible AI and Children: Insights, Implications, and Best Practices, a new CIFAR AI Insights report for policymakers, argues that while there is no time to waste in regulating AI, policymakers must not skip over some crucial steps to ensure children’s rights are protected.
The authors note that although new regulations and guidelines for ensuring responsible AI are rapidly emerging in Canada and globally, children have been largely omitted from these broader policy discussions or mentioned only briefly as users vulnerable to harm. Moreover, existing regulations such as Canada’s Personal Information Protection and Electronic Documents Act (PIPEDA) tend to focus on privacy-driven, consumer-based strategies that overlook the wider range of current and potential AI impacts on children, both good and bad.
“We propose looking at existing international guidance for a wider-ranging, rights-based approach,” says Sara M. Grimes, one of the report’s authors, who is a professor and Bell University Labs Chair in Human-Computer Interaction at the University of Toronto. “If we continue to narrowly focus on privacy and consumer protection in AI regulations and omit considerations of children’s many other rights in the process, Canada risks failing to address the wider scope of children’s lived experiences and needs as they interact with AI technologies, which they will certainly continue to do.”
Among the wider rights of children in digital domains that may be underserved by strictly privacy-oriented frameworks, say the authors, are the rights to access information, to play and participate in cultural life, and to be free from discrimination, commercial exploitation, and abuse.
While much work is needed to understand and address these gaps, the authors say policymakers can fortunately build on useful guidance already available to the global community. This guidance includes the United Nations’ “General Comment 25” update to the Convention on the Rights of the Child, which outlines how nations can uphold the rights of children in the digital environment.
Also central to the new policy report is the importance of including children’s perspectives in the drafting and consideration of AI regulation, a process that is too often overlooked. To this end, the report includes a framework for developing AI with and for children, with suggestions for hands-on activities that can help to guide conversations with children about AI.
The report concludes with a list of six key takeaways that policymakers should bear in mind as they tackle the daunting task of regulating this fast-moving technology for a vulnerable yet highly diverse population.
“Given the speed with which AI is advancing, policymakers may feel overwhelmed in trying to protect children,” says Grimes. “But the good news is that we can work together as a global community. We already have extensive research from multiple fields about the social, ethical, and developmental impacts of data-centric technologies on diverse groups of children and adolescents. By stimulating conversations and hands-on applications of these strategies for policymakers, and including children in the development of regulations, we hope to ensure that all children are protected from potential hazards of AI technologies, and at the same time, able to access the potential benefits.”
Responsible AI and Children: Insights, Implications, and Best Practices was published today by CIFAR. The co-authors are Sara M. Grimes, Professor and Bell University Labs Chair in Human-Computer Interaction at the University of Toronto; Alissa N. Antle, Professor at Simon Fraser University; Valerie Steeves, Professor at the University of Ottawa and Co-Lead of The eQuality Project; and Natalie Coulter, Associate Professor and Director of the Institute for Digital Literacies at York University.
For more information, contact:
Gagan Gill
Program Manager, AI & Society, CIFAR
About CIFAR AI Insights
CIFAR AI Insights is a series of accessible policy briefs in which cross-disciplinary experts discuss the practical societal and political implications of AI and emerging technologies. The briefs are designed to develop Canada’s thought leadership on issues of importance to policymakers, researchers, regulators, and others seeking to engage with and address the societal impacts of AI.
About the Pan-Canadian AI Strategy at CIFAR
The Pan-Canadian Artificial Intelligence Strategy at CIFAR drives cutting-edge research, trains the next generation of diverse AI leaders, and fosters cross-sectoral collaboration for innovation, commercialization and responsible AI adoption. Our three National AI Institutes – Amii in Edmonton, Mila in Montréal, and the Vector Institute in Toronto – are the vibrant central hubs of Canada’s thriving AI ecosystem. Funded by the Government of Canada, we’re building a dynamic, representative, and rich community of world-leading researchers who are creating transformative, responsible AI solutions for people and the planet.