We have harnessed the power of artificial intelligence
to make life faster and easier.
And as people are now using AI on a daily basis,
we’re hearing more and more about the ‘future of work’.
As people delegate more tasks to machines and software systems,
they are becoming concerned about job security.
There are a lot of challenges associated with artificial intelligence.
One of the challenges that's gotten the most attention
is the idea that human workers are going to be replaced by computers.
But a changing workforce is something that society has dealt with before.
And contrary to what you might think,
we’re not really facing mass automation.
In fact, while some jobs might disappear,
new ones are emerging.
One interesting example
is that people were really worried the ATM was going to get rid of
all of the bank tellers.
It turns out what happened is that
the invention of the ATM actually made it much more
economical to open many more bank branches,
which then had employees associated with them.
Humans are really good at figuring out
ways to be useful.
When we look at the future of work,
it’s not really about whether humans will be ‘replaced’ by AI,
but rather, how we can support workers
in an evolving job market.
We need to consider the existing skills we should invest in,
and the new skills that might be needed in the future.
We think there will be new jobs created as usual,
but that doesn't mean that people will be able to retrain themselves for these new jobs.
And so there's a lot of important policy work to be done there.
The future of work is just one of many pressing issues facing society,
and governments around the world are already looking for solutions.
This is where policymakers come in,
to look at how AI is impacting our social, economic and political lives.
One of the biggest challenges that has emerged
in the context of AI is privacy.
There's a tension between
privacy and performance in artificial intelligence,
because the more information AI has about you,
the more accurate its decisions can be.
I don't think people should be worried about algorithms themselves;
what they should question is how the algorithms are used.
You know how sometimes it feels like Instagram is reading your mind?
That’s because an algorithm is collecting your information
and making predictions about what you might like.
And there’s plenty of information to collect.
We provide our personal data
all the time,
simply by interacting with AI.
Every time we click a headline,
like a post, or google something,
we’re offering data about our thoughts, interests, or real-time locations.
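To make that concrete, here is a toy sketch of how a pile of likes can turn into a prediction: item-based collaborative filtering over an invented user-post matrix. This is one simple approach among many, not how any particular platform actually works.

```python
# Toy sketch: item-based collaborative filtering on invented "like" data.
import numpy as np

# Rows = users, columns = posts; 1 means the user liked the post.
likes = np.array([
    [1, 1, 0, 0, 1],
    [1, 1, 1, 0, 0],
    [0, 0, 1, 1, 0],
])

# Cosine similarity between posts, based on who liked them together.
norms = np.linalg.norm(likes, axis=0)
sim = (likes.T @ likes) / np.outer(norms, norms)

# Score unseen posts for user 0 by similarity to the posts they liked.
user = likes[0].astype(float)
scores = sim @ user
scores[user == 1] = -np.inf  # don't re-recommend what's already liked
print("recommend post", int(np.argmax(scores)))
```

Even this tiny version shows the mechanism: your likes place you near other users, and whatever they liked becomes your prediction.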
And since AI is developing at such a rapid pace,
our privacy laws haven't kept up.
This means that companies are able to collect,
analyze, and share data
often without our knowledge.
And the way our data is used
could influence our political leanings and actions.
People need to understand that their data
is very useful to governments and companies.
If you're on social media and you're liking a lot of posts
with a particular political bent,
you have to be aware that the social media site will use that data
to manipulate what you're seeing and to show you news stories that
appeal to your political persuasion.
As more and more of our data is being grouped together,
maybe sent overseas,
we need to be a lot more aware of
where our data is going and who's taking advantage of it.
The issue of privacy raises broader questions
around the impacts of AI,
not just when it comes to data collection,
but also how this data is analyzed.
As we all know, humans are far from perfect.
But these imperfections can often carry over into our machines.
This leads us to another important challenge facing researchers and developers today:
bias.
Current systems are basically these specialized little units
that do exactly one thing, and do it very well.
The problem is that what that thing is depends on how
we train them and on the data that we provide for them.
So we've seen examples of these AI systems
developing problematic racial and gender biases.
When I choose to gather data
about a specific disease in a specific population,
I am embedding my positive or negative biases in those choices.
Everything from
when I ask questions, to what questions I ask,
to what data I choose to record,
how frequently I choose to record it, and under what kinds of conditions.
Those are all choices that may
cause certain conditions to be over- or under-reported
or completely misunderstood.
If a data set is skewed by human judgement,
a real-world bias can be perpetuated by our systems.
So we need to consider the data we’re collecting,
but also how we build the system itself.
Models that are built
under certain assumptions will learn certain relationships
more frequently or more efficiently.
One popular model uses text to learn relationships between words.
This kind of machine learning is how Gmail can give you suggestions as you type.
Pretty handy, right?
These models do astonishingly well at tasks like word completion;
they look like they're reasoning intelligently about human relationships.
But unfortunately, we train these models
on human-generated text.
That can mean data from the internet,
and so they'll learn relationships like
man is to woman,
as king is to queen,
as programmer is to housewife.
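To see the mechanics behind those analogies, here is a minimal sketch using gensim's downloadable GloVe vectors; the model name is an assumption (any pretrained word vectors would do), and exact outputs vary by embedding.

```python
# Minimal sketch: analogy arithmetic on pretrained word embeddings.
# Assumes gensim is installed; "glove-wiki-gigaword-100" is one of
# gensim's downloadable vector sets (downloaded on first run).
import gensim.downloader as api

vectors = api.load("glove-wiki-gigaword-100")

# "man is to woman as king is to ?"  ->  king - man + woman
print(vectors.most_similar(positive=["king", "woman"], negative=["man"], topn=1))
# Typically: [('queen', ...)]

# The same arithmetic can surface learned stereotypes:
# "man is to programmer as woman is to ?"
print(vectors.most_similar(positive=["programmer", "woman"], negative=["man"], topn=3))
# On some embeddings, stereotyped occupations rank highly, echoing the
# "programmer is to housewife" finding (cf. Bolukbasi et al., 2016).
```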
In a clinical context, we've looked at note template completion.
What that means is you give the model the same sentence, right?
"42-year-old [blank] patient
is experiencing dementia, should be discharged to [blank]."
So there are two blanks there.
You could change the gender and see what happens to the outcome.
You could change the ethnicity
and see what happens to the outcome.
And what we've found is that there are stark differences
in the symptoms that we predict,
in the outcomes or interventions that get recommended
if you just change ethnicity.
So for Caucasian patients who
are found to be belligerent or adversarial,
it's recommended that they get sent, for example,
to more care.
But for belligerent African American patients,
our model has absorbed all of these biases,
and so the recommended word is “prison”.
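A template-swap probe like the one described here can be sketched with an off-the-shelf masked language model; the general-domain model and template below are stand-ins, not the clinical system or notes from the study.

```python
# Hedged sketch of a demographic template-swap probe with a masked LM.
# bert-base-uncased is a stand-in; the study used clinical models/notes.
from transformers import pipeline

fill = pipeline("fill-mask", model="bert-base-uncased")

template = "The 42 year old {} patient is belligerent and should be sent to [MASK]."
for group in ["caucasian", "african american"]:
    # Swap only the demographic term and compare the ranked completions.
    for pred in fill(template.format(group), top_k=3):
        print(group, "->", pred["token_str"], round(pred["score"], 3))
```

Holding everything constant except the demographic term makes any difference in the recommended outcomes directly visible.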
Cases like this show how biased data
could end up jeopardizing a person’s safety in the real world.
And there's been a lot of talk about the challenge of safety with AI.
The idea that systems could potentially cause harm to humans.
The classic example is with self-driving cars.
In a situation I read about recently,
a car crashed into someone because that person walked out from behind a mailbox.
Now, in the data the car normally receives,
it would never have had a situation of something walking out
from behind a mailbox all of a sudden.
So when something walks out from behind one for the first time ever,
it doesn't react appropriately.
To be clear, self-driving cars are not horribly dangerous.
The challenge we face is precisely getting them to deal with these
unexpected situations.
Researchers are training them to better anticipate these situations,
but the issue of bias is connected to another key challenge:
ethics.
We depend on AI for lots of things
from forecasting stock markets to filtering emails.
The challenge comes if we start relying on machines
to make decisions without human oversight.
Ethics refers to the responsible use of AI.
This means considering when and how AI systems should make decisions,
or if we should be using these systems at all.
Ethics has been a big part of the discussion around ideas like self-driving cars.
When do we rely on them to make choices,
and what are the values guiding these choices?
For all the autonomous driving systems,
people always come up with the question of what
the algorithm is going to do
if it has a choice between killing an old person or a baby.
I mean, what would you do?
You would have one second to decide. What would you do?
In many ways,
we’re still defining what we consider to be “ethical behaviour” for humans.
In the context of AI,
this raises some important questions.
If a self-driving car does take a human life,
whose fault is that?
Or if a system is reinforcing inequality,
who’s liable?
This is the challenge of accountability in AI:
determining who’s held responsible
when systems make decisions that impact our safety,
property, rights, or freedoms.
We need to be vigilant
about public policy, and about
big private companies having so much power.
I think this should definitely be regulated.
They should not be the ones making the decisions about all these tools,
which have so much impact on people's lives.
I think it's hard to understand
all the implications that your research might have later on,
but it's important to put yourself through those processes
and be very explicit, when you write up work or
release code, about all of the limitations that the work has.
This is why researchers are working together with
industry professionals on something called explainability,
mapping out how and why machines make certain decisions.
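As one concrete flavour of explainability, here is a minimal sketch of permutation feature importance using scikit-learn; the dataset and model are placeholders for illustration, not anything referenced above.

```python
# Minimal sketch: permutation feature importance with scikit-learn.
# Shuffle each input feature in turn and measure how much the model's
# test accuracy drops; features whose shuffling hurts most are the ones
# the model relies on. Dataset and model here are illustrative only.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)
for name, score in sorted(zip(X.columns, result.importances_mean),
                          key=lambda pair: -pair[1])[:5]:
    print(f"{name}: {score:.3f}")
```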
But of course,
humans can’t always explain our own behaviour either.
At the end of the day,
most of the time, people's decisions are
seemingly unexplainable.
There are processes that happen in their brains,
and we don't really know why.
Some of these issues,
like privacy or safety,
governments have been dealing with for a long time.
But as we push forward with innovation,
policymakers will need to start looking at these challenges
through the lens of AI,
considering how we can make sure that it’s harnessed for the greater good.
And in many cases,
this transformative technology
could be what helps us improve ourselves as humans.
Machine learning could actually help people be better.
One of the things that machine learning could be used for
is to investigate where biases exist and highlight
the human behaviours that need to change, before we trust them enough
to say that they are the gold standard that we want to reproduce
with machine learning.