
AI in 2063

By: Kathleen Sandusky
6 Jun, 2023
Illustration: A pink, purple, and green illustration of a future city. Text in various languages shows on screens, people relax in apartment-style housing, walk their dogs, and meet to talk. Another man is arrested by two police officers and taken to a police station.

It’s 2063. How do we use AI in our daily lives?

AI is already part of our daily lives, in ways previous generations couldn’t have imagined. As CIFAR wraps up our 40th anniversary, we asked three Canada CIFAR AI Chairs about where they believe AI will lead us in the next four decades.

ChatGPT introduction

ChatGPT prompt:
Write an introduction to a series of interviews with three people who hold the position of Canada CIFAR AI Chair. The interviews are about their predictions for the future of artificial intelligence in 40 years.

ChatGPT output:
Welcome to our series of interviews with three Canada CIFAR AI Chairs, where we ask them about their vision for the future of artificial intelligence. These experts have dedicated their careers to advancing the field of AI, and have a unique perspective on where the technology is headed. Over the course of these interviews, we will delve into their thoughts on the potential impacts of AI on various industries, as well as the ethical considerations that will need to be addressed as the technology continues to evolve. Join us as we explore the exciting and potentially transformative possibilities of AI, 40 years into the future.
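For readers curious how a prompt like this is sent programmatically rather than through the ChatGPT interface, here is a minimal sketch using OpenAI's Python client. The model name is an assumption for illustration, and the generated text will differ from the output reproduced above.

```python
# Minimal sketch: sending the article's prompt through OpenAI's Python client.
# The model name below is an assumption for illustration; any available chat
# model can be substituted, and the output will differ from the article's.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

prompt = (
    "Write an introduction to a series of interviews with three people who "
    "hold the position of Canada CIFAR AI Chair. The interviews are about "
    "their predictions for the future of artificial intelligence in 40 years."
)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # assumption: stand-in for whichever chat model is available
    messages=[{"role": "user", "content": prompt}],
)

print(response.choices[0].message.content)
```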

JACKIE CHI KIT CHEUNG
Canada CIFAR AI Chair, Mila; Associate Professor, McGill University

It’s 2063. Will we be moving through a borderless world, communicating across languages with universal translators attached to our ears?

Language is a very complex phenomenon. It’s not just understanding that a string of words is put together in a sentence — there’s a lot more contextual processing required. Even from a young age, humans are amazingly skilled at taking that string of words and extracting a huge amount of information, like the people and objects being described, their backgrounds, relationships, and goals, and even the speaker’s attitudes and emotions. Our brains are capable of detecting all of these really abstract things at once. But these more subtle contexts aren’t easy for machines to extract using current techniques. When you consider the differences across languages, that’s even more challenging. There are considerations like the way different languages are structured or what they require to be explicit — for example, the way some languages use or don’t use pronouns. So these are really interesting challenges that we’re looking at now with natural language processing.

In 40 years, I do think we’ll be at the point where we have reasonable accuracy with instant translation, in terms of understanding the general intent. But even the best systems will always be at risk of making mistakes, and sometimes those mistakes won’t be obvious, especially as AI becomes more sophisticated. Some of those mistakes may even seem convincing. They won’t be the silly mistakes we laugh at with technologies like Alexa or Siri today — they’ll be harder to notice.

Jackie Chi Kit Cheung (photo by Dominic Blewett)

"Even the best systems will always be at risk of making mistakes, and sometimes those mistakes won’t be obvious, especially as AI becomes more sophisticated." - Jackie Chi Kit Cheung

So I can see us using AI in 40 years to rapidly translate regular everyday conversation, and even if a little bit of the fidelity gets lost we’ll understand the gist. But if it’s for something really high stakes, like peace negotiations at the UN, then we’ll always need human oversight and scrutiny to avoid potentially disastrous outcomes.

What I’m really hoping for in 40 years is a much more widespread adoption of natural language processing technologies in our daily lives. They’ll be so widespread that we won’t even notice them.

There are lots of small, administrative, mundane tasks that we do every day that cost us a lot of time, like all of the emails we send or the notes we take to remember things. I'd like to see AI assistants that let us skip over those tasks so that we can focus on more interesting and useful work, like advancing human achievement in science and art.

Nidhi Hegde (photo by Katie Astaire)

NIDHI HEGDE
Canada CIFAR AI Chair, Amii; Associate Professor, University of Alberta

It’s 2063. Will we live in a more just and fair society, supported by AI? And will we finally understand the privacy agreements we sign for new technologies?

It’s hard to imagine a future where we’re all going to be very AI fluent and know exactly what’s going on when we sign up for services, because these technologies are really complex and keep evolving. I don’t see that changing. We have these long, unreadable agreements now because it’s easy, but it’s not fair or reasonable.

I’d like to see these responsibilities pushed back to the developers. They should have to ask: is every piece of this data really necessary for what I need to do? Am I making sure that the output of this model is still private? Some people argue this will stifle innovation, that we should just let AI develop in the best way that it can, but you have to ask — in the best way that it can for whom?

"Ultimately, privacy and fairness should be seen as part of product development, recognizing that they do not stifle innovation: they add to innovation" - Nidhi Hegde

That’s what fairness in AI boils down to: what is the real outcome of a machine learning or AI system, and do all groups benefit equally?

Maybe there’s a job you apply for, or an admissions process in a university or a mortgage risk assessment that has rejected you, and you don’t know why. This invisible force is preventing you from attaining your potential, and you don’t even know how. That’s unfair. There are so many examples of these significant harms, such as predictive policing using biased algorithms or criminal risk tools that keep certain groups of people in jail longer. These effects compound over time, creating generations of harm.

Ultimately, privacy and fairness should be seen as part of product development, recognizing that they do not stifle innovation: they add to innovation. I want to see the AI community take it up as a challenge to make sure that you can get the optimal outputs that you’re hoping for from AI, but still keep the data and outputs private. And I would like to see AI products regulated. We’ve done this in the past with consumer protection agencies that review drugs and toys and other products before they go out into the market, and we see that as necessary to prevent harm. We need to recognize that bias and lack of fairness in AI also cause real harm.

I’d like to see AI tools in the future that improve the fairness of AI itself. We already have GANs, or generative adversarial networks, in which two models challenge each other to continually refine and improve, and they’re getting better all the time. So we can imagine an adversarial system for fairness, with an AI algorithm that checks and challenges bias. That would be an AI solution to an AI problem.
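A toy sketch of that idea, in the spirit of adversarial debiasing: a main model learns its task while a second, adversarial model tries to recover a protected attribute from the main model's outputs, and the main model is penalized whenever the adversary succeeds. Everything below (the library choice, network sizes, and synthetic data) is an assumption for illustration, not a description of any particular CIFAR system.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

# Synthetic data: features x, a task label y, and a protected attribute a
# (e.g. a demographic group) that the model's decisions should not leak.
n, d = 1024, 16
x = torch.randn(n, d)
a = torch.randint(0, 2, (n, 1)).float()        # protected attribute
y = ((x[:, :1] + 0.5 * a) > 0).float()         # task label, correlated with a

predictor = nn.Sequential(nn.Linear(d, 32), nn.ReLU(), nn.Linear(32, 1))
adversary = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1))

opt_pred = torch.optim.Adam(predictor.parameters(), lr=1e-3)
opt_adv = torch.optim.Adam(adversary.parameters(), lr=1e-3)
bce = nn.BCEWithLogitsLoss()
lam = 1.0  # weight of the fairness penalty

for step in range(2000):
    # 1) The adversary tries to recover the protected attribute from the
    #    predictor's output; the better it does, the more bias is leaking.
    opt_adv.zero_grad()
    adv_loss = bce(adversary(predictor(x).detach()), a)
    adv_loss.backward()
    opt_adv.step()

    # 2) The predictor learns its task while fooling the adversary,
    #    i.e. it is rewarded when the adversary's loss goes up
    #    (the "checks and challenges bias" part).
    opt_pred.zero_grad()
    logits = predictor(x)
    loss = bce(logits, y) - lam * bce(adversary(logits), a)
    loss.backward()
    opt_pred.step()
```

Real systems would be more careful about the update schedule and about what the adversary gets to see (predictions, scores, or internal representations), but the alternating "check and challenge" loop is the core of the idea.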

In 40 years, I hope we’re building fairness into new AI as a matter of course.

Jeff Clune (photo by Kryani Kanavaros)

JEFF CLUNE
Canada CIFAR AI Chair, Vector Institute; Associate Professor, University of British Columbia

It’s 2063. Will human workers be replaced by robots with artificial general intelligence? Will artists be replaced by technologies like DALL-E? Writers by GPT? Will even AI researchers be made redundant?

We know from history that as technology gets better, some jobs disappear. But they are usually jobs that are low-paying, less interesting or dangerous. And we adapt, because new jobs are also created. Nobody today complains that there aren’t jobs for professional blacksmiths, or drivers and street-sweepers for horse-drawn carriages. Today we have jobs like search engine optimization experts, machine learning scientists or GPU CUDA kernel programmers — jobs you could never have imagined at the time of blacksmiths. So as some jobs go away, others emerge. That does not diminish the real pain that can be caused by such transitions, but typically over time we adapt and thrive with technological progress.

There is reason to believe that this time might be very different, with AI having the potential to actually become as good as humans at many, if not most, economically valuable tasks. It is possible that this time far more jobs will disappear than will be created.

"But the challenge is that we need to get the distribution of these technologies — and by extension, wealth — correct, so that it’s not concentrated in the hands of a few" - Jeff Clune

Across the board there are going to be major economic implications. What happens when we achieve artificial general intelligence, meaning AI that can do almost everything as well as humans? Could we get to a point where we have, effectively, infinite wealth and GDP with all the hard work being done by AI? What does that look like? Maybe we’ll have more time for leisure and pursuing our interests. I like rock climbing, maybe I’ll get to do more of that while AI does my work for me.

But the challenge is that we need to get the distribution of these technologies — and by extension, wealth — correct, so that it’s not concentrated in the hands of a few. We can hope that everything will become cheaper and we will have a more plentiful world. This is, of course, putting aside the many important existential risks and downsides. But if we get it right, then we might all be much better off, with better health care, better education, cleaner water… the list goes on.

[OpenAI CEO] Sam Altman has written about a future when we’ll have Moore’s Law for everything, with technology cutting the cost of everything we need to live by half every couple of years. I think that is an interesting way to think about it. As everything gets more affordable, maybe we’ll all have more of what we want, as long as we share, as long as we get it right and ensure that powerful AI is safe and benefits everybody. That is a tall order for society, but with the potential for an immense payoff.
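As a back-of-the-envelope check on what that halving would compound to by 2063, here is a quick calculation; the two-year halving period is the premise quoted above, not a forecast.

```python
# "Moore's Law for everything": if costs halve every ~2 years, how cheap is
# everything after 40 years? The 2-year halving period is the article's
# premise, not a prediction.
halving_period_years = 2
horizon_years = 40  # roughly 2023 -> 2063

relative_cost = 0.5 ** (horizon_years / halving_period_years)
print(f"Cost relative to today after {horizon_years} years: {relative_cost:.2e}")
# -> about 9.5e-07, i.e. roughly one millionth of today's cost
```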

In short, the stakes are very high for society making sure that the development of artificial general intelligence goes well.
