By: Jon Farrow
26 Apr, 2019
Machine behaviour is the interdisciplinary study of AI systems as a new class of actors with unique behavioural patterns and ecology. (Credit: Scalable Cooperation group/MIT Media Lab under a CC BY 4.0 license)
CIFAR Fellows Hugo Larochelle and Matthew Jackson argue for a new scientific discipline to study the broad effects of AI
An interdisciplinary group of researchers wants to create a new scientific discipline called machine behaviour to understand how artificial intelligence will affect society, culture, the economy and politics.
In a review article in Nature, the authors argue that the study of AI algorithms needs to expand beyond computer science departments to include perspectives from across the physical and social sciences.
Two CIFAR fellows are co-authors of the paper. Hugo Larochelle is a Canada CIFAR AI Chair, the associate director of the Learning in Machines & Brains program, a research scientist at Google Brain and a member of Mila.
Matthew Jackson is a professor of economics at Stanford University, a fellow in the CIFAR Institutions, Organizations & Growth program, a specialist in human networks, and the author of a new book, The Human Network, which examines the effects of social hierarchies and interactions.
The interviews below have been condensed and edited for clarity and brevity.
How did you get involved in this paper?
Hugo Larochelle: The lead author, Iyad Rahwan [of the MIT Media Lab], reached out to me. He wanted to unite various perspectives on AI and the interaction of machines and society. And he wanted me to contribute to the technical computer science and machine learning components of the paper.
I have always found Iyad to be an interesting researcher doing things that are quite different from what I’m used to as a more traditional machine learning and deep learning person. And that’s partly why I’m at CIFAR. This idea of having many researchers from different backgrounds coming together and trying to see whether there are things they can learn from each other appeals to me.
Matthew Jackson: I’m involved because I study networks, and I’m trying to understand how different structures of human interactions affect people’s behaviours. That now has a large algorithmic dimension because of social media platforms. I also study inequality, mobility and other structures in human societies that are being exacerbated by human-machine interactions.
The paper has a list of example questions that fall into the domain of machine behaviour. If you had to pick one, which do you find most interesting?
Larochelle: The questions that are most interesting to me are the ones about conversational robots, because that is a much more direct interaction between a person and a machine. On one hand, technologically speaking, the bots aren’t as good as they could be. But there’s also a deeper question about how we feel about conversations that involve machines. Do conversations with machines feel different or similar? What are the implications of that? I think there are some really interesting questions there that we haven’t quite explored yet.
Jackson: The questions about human networks and online dating are right in my wheelhouse in terms of the kinds of topics I study.
One tendency of people is to associate with people who are similar to themselves. This is known as “homophily” in the literature. This happens on the basis of gender, religion, age, profession, ethnicity, anything. People tend to clump together with, be friends with, and talk to people who are similar to themselves.
Technology like online dating makes it increasingly possible to be more selective. Now you can see all the attributes of a person and pick exactly who you want. And if you think about the way that friends are suggested on LinkedIn or Facebook, the algorithm is trying to pick people that you’re going to want to connect to. This decreases the serendipitous random friendships that people make and increases the ones that are targeted and more similar to you.
There are positives and negatives to this. On the one hand, these are people you want to connect to because they have common interests and common beliefs, but it also increases segregation in society and amplifies echo chamber effects.
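To make the dynamic Jackson describes concrete, here is a minimal sketch, in Python, of a purely similarity-based friend recommender. This is a hypothetical illustration, not the actual logic of LinkedIn or Facebook: the attribute names and the overlap-counting score are assumptions chosen only to show how optimizing for similarity reinforces homophily.

```python
# Hypothetical sketch of homophily-driven friend recommendation (not any
# platform's real algorithm): score candidates by how many attributes they
# share with the user, then suggest the most similar ones.

def shared_attributes(user: dict, candidate: dict) -> int:
    """Count the attributes (age group, profession, city, ...) two people share."""
    return sum(1 for key in user if candidate.get(key) == user[key])

def recommend_friends(user: dict, candidates: list, k: int = 2) -> list:
    """Return the k candidates most similar to the user."""
    return sorted(candidates, key=lambda c: shared_attributes(user, c), reverse=True)[:k]

user = {"age_group": "30s", "profession": "teacher", "city": "Toronto"}
candidates = [
    {"name": "A", "age_group": "30s", "profession": "teacher", "city": "Montreal"},
    {"name": "B", "age_group": "50s", "profession": "engineer", "city": "Toronto"},
    {"name": "C", "age_group": "30s", "profession": "teacher", "city": "Toronto"},
]

# Optimizing purely for similarity surfaces near-clones of the user first:
# the echo-chamber dynamic described above.
print([c["name"] for c in recommend_friends(user, candidates)])  # ['C', 'A']
```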
How has contributing to this paper impacted your work?
Jackson: I was already thinking about some of these questions. It’s pretty hard to be working on social networks now and not realize that they’re changing very fundamentally because of new technologies.
But talking to Iyad and the other co-authors has broadened my views quite a bit. He’s been very active in trying to understand the morality that’s employed in some of the algorithms, like questions about who to prioritize when you program a self-driving car. Every time a company writes down an algorithm to change your newsfeed, or to make new suggestions of who you should be friends with, it’s taking a moral and ethical stand.
Larochelle: I think it’s opened my mind a little more to how people are thinking about AI and what questions they are interested in answering. This is why I participated in this paper: I feel my role as a research scientist in AI and machine learning is to provide whatever knowledge I have to guide that conversation.
Why is studying the societal effects of AI algorithms so difficult?
Jackson: One thing is that a lot of the algorithms are privately owned, and you don’t get access to them. As social scientists, we don’t get to go inside the black box. So you have to study them indirectly. And most companies don’t want to be scraped and checked, or have researchers make fake profiles. There’s a whole series of constraints on what we get to see and how we get to see it.
Even trickier is to figure out what the implications are. Anytime you’re doing human subjects research in general, you have to be very careful about how you do it. And now we’re doing it on larger and larger scales, and with harder and harder things to observe, and more impact if you go wrong. So that’s a big challenge for the research, certainly.
How common is it for machine learning specialists to be interested in the societal implications of their algorithms?
Larochelle: I think there is definitely an increased awareness in machine learning now of responsible ways that we should be using technology. At Google, for instance, we published the Responsible AI Practices, which is our attempt to guide people towards the mindset that technology can be used for both good and bad.
As research scientists, I think some of our responsibility is to study the technology we create, in this case machine learning and AI, and to characterize it in the best way we can. This means that we should identify the things it does well, and the things that it does less well.
There’s a lot of research in the machine learning community around responsible AI looking at how to make systems more interpretable, which might help us have a better understanding of how these methods work and how they behave. There are also efforts to produce model cards and data sheets to be more transparent about how machine learning models and systems are created.
I think that puts us in a good position to be able to have a dialogue that’s still technically grounded with people who have backgrounds other than computer science.
Do you have any closing thoughts?
Jackson: When you’re a researcher, a lot of times you’re studying questions that people have been studying for a long time and then boom, a big one pops up that is really new. That’s exciting. And this feels like one of those new questions.
This is really an interesting time to be a researcher. I think people in the future will point back and say, “Look, the early 2000s, that’s when people suddenly really became aware of the interplay that people have with AI.”
Larochelle: We should always look for new ways of approaching problems. And that’s very much why I’m involved in CIFAR. It’s hard, but it’s important that we keep trying to involve people of different backgrounds.