All right, good evening everyone, and a very warm welcome. Thank you so much for joining us. My name is Olivia O'Sullivan and I am the director of Chatham House's UK in the World programme.

We're gathering here a week before the UK hosts its AI Safety Summit at Bletchley Park, but we're also gathering in a year that has been possibly the biggest ever for the development of AI as a technology. As many people in the audience will know, large language models can now pass legal exams, and AI systems can detect the structure of proteins in ways that have evaded scientists for years. This could mark a turning point not just in the history of the technology but in the history of our human relationship with technology. The potential benefits are massive, but so are the risks: as the UK government has said, of potentially synthesising new bioweapons, of new, highly effective disinformation, or even, some argue, of AI systems themselves becoming difficult for humans to supervise or control.

Meanwhile, the world is geopolitically fractured into different blocs. On this, the US is a leader in innovation and digital technology; China is an industrial powerhouse developing its own advanced AI industry; and the EU is the world's largest regulator and one of the biggest consumer markets for digital technologies in the world. But most of the world doesn't fit into those blocs, and nor does the UK. Within that context, the UK is seeking to play a role in the governance of this technology, hosting the world's first AI Safety Summit at Bletchley Park next week, bringing together over 100 different representatives, including from the US, the EU and potentially China, from the private sector, experts and civil society. The focus of the summit is on frontier risks: specifically, risks arising from the development of the most advanced AI models.

Now, as an international affairs think tank, Chatham House is not a technological research centre, but we see technology as fundamentally geopolitical. We want to track the latest technological developments that could reshape the global order, and we see AI in that frame. Colleagues in programmes like International Law and International Security, and in our Digital Society Initiative, are doing great work on this issue; I encourage you to check it out. We in the UK in the World programme are particularly interested in how the UK is seeking to position itself as a leader in AI governance, and in exploring what this might look like.

So today's discussion is going to focus on, firstly, what are frontier risks from AI, why do they matter, why should we care; secondly, what does a successful summit look like in the opinion of our panel; and thirdly, what sort of institutions and processes do we want to see come out of the summit in order to govern these kinds of frontier risks.
We're delighted to have a really strong panel to speak to you about this today, with a couple of our panellists joining online and others here in person.

Firstly, we have Professor Yoshua Bengio, who is recognised worldwide as one of the leading experts in artificial intelligence, known for his pioneering work in deep learning. He is a full professor at the University of Montreal and a winner of the Turing Award, the Nobel Prize of computing; in 2022 he became the most cited computer scientist in the world. Francine Bennett is the interim director of the Ada Lovelace Institute; prior to that she worked at a biotech company which uses AI to find treatments for rare diseases. Zoe Kleinman is technology editor at the BBC, a leading technology journalist with more than ten years of broadcasting experience; she brings tech stories to global audiences across BBC News, the World Service and Radio 4's Today programme. Jean Innes is the CEO of the Alan Turing Institute; she has worked across the public, private and non-profit sectors, using data science and AI to solve real-world challenges. Last but not least, it is a big panel today, Katie O'Donovan is director of public policy at Google UK; she is responsible for engagement with the UK government and also works on Google's approach to responsible innovation.

Before I kick off and put questions to each of the panel, a brief word on how today will work; apologies if you're an old hand at these events. Today's discussion is on the record and is being recorded. Please do feel free to tweet about it using the hashtag #CHEvents and the handle @ChathamHouse. I'm going to turn to each speaker on our panel and ask them a question, and then we'll have a brief discussion within the panel, but I will open up to audience questions as quickly as possible after that, so do be thinking about the questions you would like to ask. If you're here in person, when we get to that point, please raise your hand and a microphone will come to you; if you're online, please submit questions using the Q&A box that will appear at the bottom of your Zoom feed. I think that's all the housekeeping I need to do. Once again, a really warm welcome; we're very grateful to have you all here, and we're very grateful to our panel for addressing this issue at such a timely moment.

I'm going to start, if that's all right, with Professor Bengio. Professor Bengio, I'm going to sneak in two questions to you. Firstly, as I've said, the UK summit is focusing on risks from frontier AI. Can you tell us a bit about what a frontier risk looks like in practice, and why ordinary people should care? And then, on the basis of that, what would you like to see come out of the UK's summit? Over to you.
The risks are pretty broad, and we can talk about examples, but I won't be able to cover all the things that can go wrong. You already mentioned a couple. The programming abilities of AI systems are growing rapidly; right now they're not as strong as the best programmers, but when that happens, which could be any time in the coming few years, there are clear risks in terms of cybersecurity. That's an example of a particular kind of risk; more broadly, it is misuse: bad actors, terrorists, using these AI systems for purposes that could really harm society. You mentioned bioweapons; this is another area where there's a lot of concern. There was a recent paper on the use of AI for designing toxic chemicals that could be chemical weapons, and it's actually very easy to do using current AI systems; you don't even need to look into the future.

Then there are systemic risks to our society: maybe destabilising our financial systems, democracy, job markets. For democracy, you can think of the misinformation and disinformation that already exist but could be amplified with AI tools. We now have AI systems that can manipulate language well enough to pass for humans, so they could be used to scale up the troll armies of various entities; they could be used to fake, even better, with video and speech, what politicians don't say or don't do. So there are many, many such misuse and systemic risks that need to be better understood.

And then, as you mentioned, there's the issue of loss of control. Actually, one thing that connects many of these risks is that we don't know right now how to build an AI system that will behave as intended, where 'intended' means not just some particular task but also acting in a way that's aligned with our values, norms, ethics, laws and so on. We don't know how to do that, and we don't see this as something we're going to fix next year. And yet these systems are being developed very quickly; there's a lot of competition, maybe a race to the bottom where safety isn't the priority, where the priority is winning that competition, and maybe this is going to transform into a competition between countries. So there are geopolitical questions as well, and this is very worrisome for a lot of people, including myself and many experts in the field.

Given those challenges, what would you like to see come out of the UK summit? It sounds like it's going to be very difficult to create a global governance system that will manage or govern those risks, so what would you like to see?
Well, we're going to need to start with small steps that can be implemented quickly. International treaties and agreements take a lot more time than national efforts and regulation, and even national regulation can be very bulky. I'm thinking, for example, about the EU AI Act, which is nice, it's moving in the right direction, but it took too many years to build, and it's not yet really adapted to the situation with frontier AI systems. So what we need in general are very high-level principles in those laws that a regulator could then quickly adapt, and a regulator with enough power. Think about the FAA and the FDA in the US: they can react quickly to something that goes bad, a bad chemical, a problem with a plane, and so on. There will be new misuses or dangers or things we didn't foresee, and we need regulators to be able to adapt quickly.

There are also simple things that can be done quickly, like registration of the largest models and of the computational capabilities that are necessary for training these systems. We're now talking about billion-dollar costs for training the next generation of systems; there are not many companies that can do it, and we need to make sure we track what they're doing, 'we' being society, democratic governance, our governments, so that we create a licensing and registration regime under which approval could be pulled if a system is not safe. As the regulator gets to understand better, because of the progress of science, how to evaluate potential harms and decide the thresholds of what is and is not acceptable, that regulatory body can become more sophisticated. But clearly we'll need governments to take ownership: to develop their internal capabilities to do that regulation and that research, to figure out how we should regulate, and to make sure, more broadly, that these systems are under democratic oversight, not just from the country's government but more broadly: civil society, academics with expertise who are neutral, independent auditors, the international community. We need to make sure that developing countries, maybe through the UN, have a voice in how these systems are developed. There's a lot that needs to be done, but we should start small and not wait until we've built a very complicated global governance system before we start doing things.

Thanks very much for that, Professor Bengio. I'm now going to turn to Francine from the Ada Lovelace Institute. Professor Bengio framed some really challenging risks there, which we need somehow to gain democratic oversight of. At an event last week on this topic at Chatham House, a participant said it's difficult to govern AI at the speed of democracy, let alone at the speed of multilateral governance. I know the Ada Lovelace Institute has published some thoughts on what it would like to see from the summit. Can you tell us what you would see as a successful outcome, and do you think the summit is focusing on the right things?
Yeah, that's a really good question. To take the second question first: we're really happy to see engagement and interest in regulating AI and regulating technology better. The Ada Lovelace Institute's mission is to make data and AI work for people in society, and obviously a big part of that is it being safe. But we would say that the focus of the summit is actually a bit too narrow in that regard. Professor Bengio made a very good case for the frontier risks, and we would say frontier models, those most advanced models, are part of what we want to think about in terms of safety. But if we only think about that, we risk forgetting about the broader set of risks and safety issues near at hand: the algorithms that are part of our everyday lives already, and that, with increasingly capable models, will become increasingly part of our lives. We don't have to think about catastrophic risks in order to need to think about risks and harms, and about a better life for ourselves now. So I think you only get good outcomes by thinking about the broadest range of benefits and risks, not just focusing on the outer edge; focusing only on the frontier risks means not getting on with the national regulation and the near-at-hand things we know we need to do.

We know, for example, that our regulators probably aren't capable right now of regulating the uses of algorithms within their scope. We should get on with that; we know how to fix it; we've got some institutions, but they need more capacity. And we know it would be helpful to understand more about how AI and algorithmic systems are used in society, and to have more of a vision of what a positive future would look like, to work towards it and try to build that shared vision. So actually, I think Professor Bengio and I would probably say a lot of the same things about the outcomes we would want, even though we come from slightly different framings of the risks we would pay attention to.

That's really useful to understand. Are there things the summit can achieve, you were getting at this already, of the type Professor Bengio was talking about: independent auditors, more democratic oversight, regulatory control of these private labs? It's only a few private labs that right now have the capacity to develop these really powerful models. Do you think outcomes from the summit can valuably govern those risks as well as the more everyday risks you talked about, or do those processes need to be distinct?

I think they shouldn't be distinct; they're very intertwined, actually. By understanding much more about these models, you understand both the outer (oh, I'm getting a thumbs up, fantastic), the sort of outer catastrophic risk, but also the day-to-day: how are we going to use this tomorrow, how do we want it to be managed tomorrow, what path do we want to be on as a society to make these tools work for us, in whatever sense we mean 'working for us'?

Thanks. I'm pleased we've got some agreement on the panel already; I might try to get you to disagree with each other later. But, Jean, I'm going to turn to you. A lot of the challenges here seem to be around how we get governments to work with the private sector and with civil society to manage these risks. What do you think are some good ways for all of those actors to work together, and what are some best practices that governments can encourage from both private and civil society actors in this area?
Can I throw another risk on the table? I worry about us worrying so much about risk that we don't use these technologies. The Alan Turing Institute is fundamentally optimistic about what these technologies can bring to society, but we need to manage the risks in order to unlock those benefits.

You asked about the role of government working with the private sector and with civil society. There's a very lively debate about risks, which can get polarised, but it's not either/or: we have to address all of them, and the thing is, that's quite hard, and so you need to bring all the voices to the table. Big tech; the startup scene, and we've got a very vibrant startup scene in the UK; civil society, and we're here at Chatham House, which has a fine tradition of debating how we should run ourselves as a society. It's when you bring those parties together that you start navigating what is, essentially, properly hard but really worth it, and we need to get on with it.

Which brings me to the other point, about pace, which you mentioned in a previous question. Having worked in government, I think there is a fundamental difference in pace between the old world, where you regulate, review in three to five years and then consider your next steps, and what is needed now: we need to move a lot faster than that, and that's why I'm really pleased about the focus the government has brought to this complicated set of questions.

Thanks very much, Jean. Zoe, I'm going to turn to you now. Jean and some of the other panellists have talked about the value of bringing in lots of different voices. One of the maybe slightly controversial things the UK government has sought to do with the AI summit is involve China in various ways; this isn't just about our domestic civil society, private sector and government relationships, it's also a geopolitical question. The UK hosting this summit suggests that we, as the UK, are looking to play a global role in governing the risks from AI. Do you think that's realistic, or is AI governance going to be dominated by actors like China or the US, who have more capacity to develop these systems and are arguably locked in quite significant geopolitical tension with each other? How do you see that playing out?
Well, I think it's very easy to have a bit of a downer on the UK government's ambitions here, and to think: we're small, we can't compete, why are we even doing this? But actually, I think it's very much being driven by the Prime Minister, Rishi Sunak, who is obsessed with AI; people who know him will say that he is obsessed with it. And I think he's absolutely right to be, because it is coming down the track at all of us very fast, and, to take the thumbs up from Professor Bengio, the argument is: if it's coming down the track at you anyway, you might as well try to be involved in attempting to harness it and make sure that it's coming at you in the right way.

I think the UK is being ambitious, absolutely, but it is a player here, it does have a presence here; we have a lot of R&D here. We're not big; we don't have the deep pockets of big tech, we don't have the enormous infrastructure of big tech, there is no UK Amazon Web Services, for example. However, what we do have is innovation. We've got brains, we've got talent here, and I keep hearing this over and over again: we are certainly not in the same league, but we are at the table, and I think it's a very admirable attempt to place us, in this race, in a position where we can play the part of a sort of arbiter.

Lots of people are saying to me that it shouldn't really be geopolitical at all. We've got all of these different AI acts and regulations flying in from different territories, and what we should really have, people say, is a UN-style regulator. Mustafa Suleyman, the co-founder of DeepMind, has described the need for a kind of climate-change-type body; he has compared the risks of AI to the risks of climate change and said they need to be managed in that way. It's a global thing; it's not a UK thing or a US thing.

You mentioned China being a controversial guest. I don't actually think it's controversial at all; I think it would be mad to leave China out, because China is a massive, massive player here, and traditionally far more secretive than the West. I think the danger we really face with the summit next week is more the other way: what if China doesn't come? What if all we've got is a cosy room full of big tech mates who all know each other anyway, and are all talking about this anyway? We're not going to get the diversity of thought that's really going to bring about change unless we invite other people, people we may have a difficult relationship with, but who, in this particular scenario, are all trying to do the same thing. I think it's really important that we hear from them, hear what their thoughts are and hear what they are doing. In a way, and it won't happen, but we should invite Russia, we should invite North Korea; we need to know what those guys are doing, right? We don't know, and they're not going to tell us. But if this is truly going to be a global conversation, these are also big players, and we mustn't forget that, and we mustn't make it too cosy, I think.

Thanks very much for that, Zoe. I heard some murmurs from the audience at some of those suggestions, so do hold your thoughts for questions, and I know some other members of the panel want to come in on the geopolitics of this. But first I'm going to turn to Katie. The geopolitics of this is certainly controversial, but arguably the relationship between the state and the private sector, especially big tech, has been controversial over the decades in terms of how we formulate regulation, and there are people who warn of regulatory capture if big tech voices are too closely involved in the conversation about how we govern these risks. On the other hand, as several members of the panel have said, these really powerful frontier AI models are mainly being developed by just a few private labs, because of the type of computing capacity and investment you need. So can you tell us what you think a constructive relationship between government and the private sector on these risks looks like? What would you like to see happen?
Well, boringly, I'd like to echo a lot of the comments from other panellists. I think the UK government is to be praised for having the summit; they've done so on a quick turnaround, in a really vibrant international environment where lots of different organisations are thinking about what they should be doing on AI. And I think it is really important, and it's been reflected in some of the comments so far, that they're bringing together not just the technologists. I do think it's important that the tech companies are there, because at the moment this is where the technology is being created and where a lot of the expertise sits, but if you have that alongside civil society, academia and the other governments, I think that's the right framework for thinking about how we respond really nimbly and rapidly to things that are moving quickly. So I don't know how you would have a successful, meaningful summit without the companies there, and I think that provides a really important anchor point. But there are lots of other people there, and lots of other challenges; there are G7, OECD and UN processes that will also play a part in this. I think it's obvious that the UK and the US governments have worked closely together, which I think is helpful, but the US government themselves have their own White House principles that have led on to this summit.

This question of whether the summit itself looks just at the frontier risks or at the broader risks is interesting, and I think there are arguments to be made on both sides. But where we think about some of the broader risks, I do think UK regulators are already considering how to address them. For example, there's already AI used in many products that we all use as consumers, whether it's Google Maps or different kinds of chatbots, and the UK regulators are stepping up: with some platforms they've already opened engagement, and I think you see from Ofcom, the CMA and others a real appetite to get stuck in on the AI that we're already using. There's a really important body in the UK which is little known, I'll say: the DRCF, the Digital Regulation Cooperation Forum, which brings together the main UK regulators to think about how they have the capacity, the expertise and the speed to look at not just AI but wider tech regulation. And I think that's where the UK has real potential as well: not just to engage internationally, but to set some of those standards and work programmes.

That's really useful, thank you, and thanks to everyone on our panel for those responses to the opening questions. Professor Bengio, I know you wanted to come in particularly on the geopolitics of this: the question of whether tensions between the US and China, or simply the dominance of the US and China in this area, are going to undermine any attempts at global governance. So I'll come to you for a response to that, and then I'll come to the rest of the panel for reactions to some of the other answers. Over to you, Professor Bengio.
Yeah, I think it's great that the UK is taking leadership here, rather than leaving it all in the hands of the US, as far as Western nations are concerned. It is going to be a lot healthier for geopolitical stability and governance if we end up with multilateral agreements involving not just the two powers of AI but a broader set of countries. And there are reasons for this. If all of the decision-making is happening in those two countries, the smaller voices are not going to be heard. There's also something I call the single-point-of-failure problem, which can threaten our democracies and which also comes up in scenario-making about loss of control. We want to make sure that, in the future, the powers that control AI are diverse, because if all the power is, say, in the worst case, concentrated in one country like the US, and let's say there's a change of government to a populist government that wishes to use the technology for its own political benefit, or even military benefit, we could all lose. If instead, say, the US agrees with a number of other countries on some principles for how the power we're bringing into the world is going to be managed, in a way that's aligned with, let's say, the UN Declaration of Human Rights, something on which there's broad agreement, for example that AI is going to be used for peaceful purposes, or at most for defending against attacks but not for attacking other countries, then there are things that are going to be easier to agree upon if the circle is larger. And that's very important.

Thanks, Professor Bengio. I'm going to come to audience questions shortly, but I do just want to turn to the panel first. Francine, Jean, you've worked on these issues for a long time. That was a spirited defence of the idea of multilateral governance, of international regulation of big tech, but this has been a challenge not just for AI but for social media and other ways technology has shaped our societies and lives. Speaking from a Chatham House perspective, it's good to hear a vote of confidence in multilateral governance and regulation, but it hasn't been easy to develop that kind of governance of big tech so far. So, to either of you: what are your thoughts on the challenges in the past and the prospects for success in the future?
I can offer a very initial thought on that, which is: obviously, this is hard. It's complicated, it's a messy technology with multiple uses, and we're trying to work out what to do. One really, really important thing this time, which we got very wrong last time, I would say, is having a global voice and a public voice in the conversation in a serious way. To Professor Bengio's point, this is going to look really different from different countries: if you ask somebody from Nigeria how these technologies are going to play out, I think their perspective on the harms and benefits is going to be very different, and I don't think we can reach a stable agreement on what these technologies should look like by only bringing the technological powers into the room. We actually need a lot more public voice in this to make it successful. And we need good domestic regulation. To your point about the DRCF: we're all going around asking what our new institutions and new rules should be, and that is great, but we also have some existing institutions and existing rules, and if we get those really right, that's a very solid building block for getting towards the brand new, I'd say.
Could I give an example of why the global conversation is so important? The Lloyd's Register Foundation did a World Risk Poll, and it asked people: do you think AI overall is going to be of benefit to you and your community over the next 20 years? About two in five thought yes, so edging positive, but still not great. Interestingly, though, it was very, very clear that the countries more involved in developing these technologies were much more optimistic about them.

And just to bring that to life for a moment: with the large language models, a huge amount of excellent technical work went into stopping them from producing content that is troubling. But the way that was done was by having the data labelled by people, outsourced to other countries, where individuals had to look at some extremely troubling content in order to build the safety tools that mean we can all use ChatGPT without facing this stuff. That's just an example of how, by bringing the technology to the table, you can trace the supply chain of the realities of using these technologies. I'm afraid it just means it's difficult, but it does mean that an inclusive conversation is incredibly important, because we need to retain society's trust on this.

And if I may cite some work we actually did in partnership with the Ada Lovelace Institute: we went out to the British public, I think 4,000 representative individuals, and asked them, what would you like to see to help you trust this stuff? They said, number one, we'd like laws and regulations that basically make it safe. Good point; we're on it. And number two, they said, we would like to be able to appeal a decision made by an algorithm; we want to be able to go to an individual. I find that quite clarifying: a really human, relatable response to what can feel like quite an abstract technological problem. These are the sorts of things we need to build.

That's really useful. I think that desire for a human in the decision-making system is really interesting, and it seems to be quite common. But of course we also have to think about the humans who are part of making the technology safe: as you say, even social media content moderation relies on quite poorly paid and difficult work in the global South. So thinking about how we make things safe, and about all the ways we involve humans in AI systems, is a really useful point. Would any of our panel like to make any other points? Otherwise I will open up for questions. Katie, you just wanted to come in very quickly?
Yes, on the multilateral approach, which I think is really welcome. The points about who's in the room, whose voices are heard, who's shaping those decisions, whose impression of the potential of AI counts, are really relevant to the summit and to the broader conversations. But it's worth being specific about the different roles. When the world is agreeing what principles we want to govern frontier AI, that's absolutely right for multilateral organisations and for a really inclusive conversation. When you're thinking about evaluating the risks, and whether there is a shared vision, sorry, not shared vision, a shared understanding of the research and a shared area of concern, again, that's something you wouldn't want to limit to a narrow number of countries. But when you then look at product deployment and use, there I think it is worth thinking really carefully about who has the expertise and the resources to do that, and about how we do it in a way that absolutely guards against the risks but helps people realise the potential. Because if you're looking at AI that has the potential to help with drug discovery or with different disease treatments, then we need a way to bring that potential to the right communities at the right time.

Very useful, thank you. I'm going to suggest that we open up for questions now. There are a few online, but let's start in the room, so please do raise your hand. Could I encourage people to ask questions rather than make comments, and, if you're comfortable doing so, please introduce yourself and say where you're from. I'm going to suggest we go to the person here on the end. I'll take a few at a time, so after you ask your question I'll go to the gentleman in the tie. Over to you.
Good evening, I'm Alexi Drew. I work as a technology policy adviser at the International Committee of the Red Cross. My question is: are we currently able, technically or politically, or perhaps both, to track, measure or record the potential harm that might be caused by AI systems? The point here is that in order to really measure pros and cons, surely we need to be certain that we can gather the data that allows us to do so in the first place.

Great, thank you. I'll take a couple more and then put those to the panel. Yes, go straight to the gentleman here.

Thank you. I'm Christopher Townsend, just a private citizen here. We regulate banks because we don't want them to fail; we regulate aircraft because we don't want them to fall out of the sky; we regulate medicines because we don't want people to abuse pills. I want to understand what the big risk is that we're regulating against, from the consumer point of view, if the panel have any thoughts on that. Thank you.

Great, thank you. I'm going to take one more; let's go to the lady in the shirt here. Oh, thank you, doing our job for us.

Hello, I'm Laura Turner. I'm a master's student in cyber policy and strategy. I would like to ask: what are your general thoughts on compute monitoring, and do you think it's achievable to get further progress in this area at the coming AI Safety Summit?

Great, thanks, Laura. Let's take those together.
So: are we able to track or record the harms we're talking about? Then the question from the gentleman on what the big harm is, how would you explain it in the way we say we regulate planes so they don't fall out of the sky? And then the question from Laura on monitoring compute. Just to make sure I've got it clear: the computing capacity required for the really powerful AI models is massive, so what is the state of play, and what kind of progress can we make on monitoring access to, and development of, that input to these models? I'm going to open that up, because I think all of our panel may well have responses. Let me go first to Zoe and then to Professor Bengio. Zoe.
Thank you. I'll answer the question about what we should be worried about, because I sometimes feel this discussion gets quite dystopian and sci-fi quite quickly. Lots of powerful men, generally, will tell you that it's killer robots we need to worry about, and existential threat, which obviously is part of the story. But I feel there are many more immediate and, if you like, mundane harms that we need to worry about before the killer robots turn up. Those are things like: what will you do when an AI tool is making decisions about you and it makes what you think is the wrong decision? How do you challenge it? Where do you go? How do you seek redress? Somebody told me today that her son is in trouble at school because they think ChatGPT wrote his essay. He insists that it didn't, but the onus is on him to try to prove that he didn't use it, and how is he going to do that? This is a 14-year-old boy, right? This is difficult, and low-level, but also immediate, unpleasantness that I think we're going to face initially.

The other thing I think is even more worrying, well, a challenge, is how dramatically I think it's going to change work, particularly the sort of admin, office-based work that millions of people do. I had a demo last week of Microsoft's Copilot, which is essentially ChatGPT tech put into the Microsoft Office apps, and I watched it draft emails replying to email chains I hadn't read; it summarised meetings I hadn't been to; it wrote a PowerPoint presentation for me in 43 seconds, based on a document it had drafted earlier about the fictional product the demo was about. It was an absolute game changer; it was very impressive to watch. And I had so much feedback from people when we ran the story last week saying, this is going to save me so much money. One lady messaged me and said, this is going to save my business. Which is great in some ways, but if your job is to do those PowerPoints, what are you going to be doing instead? Microsoft will say this is taking the drudgery out of work, this is going to get rid of all the boring stuff you don't really want to be doing anyway, and that's great, but what if there's nothing left for you to do? So I think another immediate and everyday harm is to the jobs market. Now, new jobs will emerge, that's what they say, and it did happen with the internet: 20 or 30 years ago, if I'd said 'search engine optimisation' to you, it would have meant nothing to anybody, and now it's a whole profession. So we know that's coming, but in the short term I think it's going to be a very bumpy ride for a lot of people.

Thanks very much, though that's worrying for us all. Professor Bengio, can I bring you in to respond to those questions? You had your hand up, I think.
Yes, I'll try to respond quickly to all three questions. Regarding whether we are able, technically or politically, to track the potential harms: at least for a lot of the harms, those having to do with making sure that AI does things aligned with our norms and morals and so on, the answer is a big no, and that's one of the big reasons why, at the summit, we're going to try to encourage countries to invest in R&D to improve this technical ability. We also don't have the governance tools to do this, so we need to do a lot more work there.

Regarding the big risks: I already talked about them a bit earlier, but besides the things Zoe talked about, which are very important and which people are going to feel in their lives, I would say the big short-term risks, in the coming years, are potential national security risks, with terrorists and bad actors using these systems. How do we make sure we reduce the chances of that happening? That is a big question.

And finally, on compute monitoring: I think that's one of the most important tools governments could put in place in order to increase safety, because right now it takes huge amounts of compute, which we should be able to track quite easily. There are very few companies, something like three companies in the world, that can build the required chips, and right now fewer than a handful of companies are really able to train those systems, and you need these large amounts of compute. This is for the frontier AI risks, which I admit is only part of the picture, but there are things we can do on compute monitoring, and that should be part of the agenda for regulation.

Can I ask you, Professor Bengio: would you favour restricting access to that level of compute, if some kind of governance regime could be agreed?

Well, that's the point of any kind of licensing or registration: governments can pull the rights to use something that is not properly designed in terms of safety, just as we do for any other product. So the government needs to know, needs to monitor, but also needs to be able to say: no, this transaction of 20,000 GPUs is something we want more scrutiny over before we allow it, for example.
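To make that registration-and-scrutiny idea concrete, here is a minimal sketch of the kind of threshold check a regulator might run. The compute threshold, the transaction-size trigger and all names are illustrative assumptions invented for this example, not any actual or proposed regime.

```python
# Illustrative sketch only: a toy version of the registration and
# hardware-scrutiny checks described above. Thresholds and field names
# are hypothetical, not drawn from any real regulatory regime.

from dataclasses import dataclass

REGISTRATION_THRESHOLD_FLOP = 1e26   # hypothetical training-compute trigger
GPU_TRANSFER_REVIEW_SIZE = 20_000    # flag transfers of this many accelerators


@dataclass
class TrainingRun:
    developer: str
    estimated_flop: float            # total training compute, in FLOP
    registered: bool


def requires_registration(run: TrainingRun) -> bool:
    """A run at or above the compute threshold must be on the register."""
    return run.estimated_flop >= REGISTRATION_THRESHOLD_FLOP


def flag_hardware_transfer(num_gpus: int) -> bool:
    """Large GPU transactions get extra scrutiny before approval."""
    return num_gpus >= GPU_TRANSFER_REVIEW_SIZE


run = TrainingRun("ExampleLab", estimated_flop=3e26, registered=False)
if requires_registration(run) and not run.registered:
    print(f"{run.developer}: unregistered frontier-scale training run")
if flag_hardware_transfer(20_000):
    print("GPU transfer requires review before approval")
```

The point of the sketch is simply that, because frontier-scale training is so compute-hungry, a small number of bright-line numerical triggers could in principle carry a lot of the monitoring burden.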
Thanks very much. Francine, did you want to come in?

Can I pick up on the question about the big risk? I'd actually push back a bit on the framing that there is one big risk with each of those technologies; in each of those cases I think it's already more complicated. Take an example like cars. We have cars, and we know how to regulate cars. You think, okay, we don't want cars to crash, but actually we want a bunch of other things as well: we want them to have low emissions, we want them to have good seat belts, we have this whole ecosystem of roads they are allowed to drive on and places they are not allowed to drive, we have parking rules about where and when you can park, and you get in trouble if you do it wrong. There's this whole complicated ecosystem of things, just for a relatively simple, relatively single-use tool. And it's kind of the same for AI: we want lots of the benefits from AI, and we want to mitigate all the harms we can think of, and that means quite a complicated ecosystem of things. You can still talk about them as interlinked, but there's not one rule and one harm that we're thinking about.

And that goes on to your question about measuring and monitoring harms, in the Red Cross context, for example. In the same way that you can't count all the harms of cars in one metric, you can't count all the harms of AI in one metric that you might want to track. But you can have theories of the different types of harms you think are important and work out how to track them. And I think starting to build more norms about how we measure these things, how we evaluate them pre-deployment, and how we monitor post-deployment to know whether things are working or not, is one actually potentially really good outcome: the summit could kick off that conversation about what that looks like longer term for us.
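As a hedged illustration of that "theories of different types of harms" idea: one simple way to structure such tracking is a typed incident record that distinguishes harms caught in pre-deployment evaluation from those reported post-deployment. The categories and field names below are invented for illustration, not an established taxonomy or real reporting system.

```python
# Toy sketch of harm tracking across categories, in the spirit of
# "evaluate pre-deployment, monitor post-deployment". Categories and
# fields are illustrative assumptions, not any standard taxonomy.

from dataclasses import dataclass, field
from enum import Enum


class HarmType(Enum):
    MISUSE = "misuse"        # e.g. cyber, bio
    SYSTEMIC = "systemic"    # e.g. labour markets, elections
    EVERYDAY = "everyday"    # e.g. wrongful algorithmic decisions


@dataclass
class HarmRecord:
    system: str
    harm_type: HarmType
    description: str
    pre_deployment: bool     # found in evaluation vs. observed in the wild


@dataclass
class HarmRegister:
    records: list[HarmRecord] = field(default_factory=list)

    def report(self, record: HarmRecord) -> None:
        self.records.append(record)

    def post_deployment_count(self, harm_type: HarmType) -> int:
        """Monitoring signal: harms of one type observed after release."""
        return sum(1 for r in self.records
                   if r.harm_type == harm_type and not r.pre_deployment)


register = HarmRegister()
register.report(HarmRecord("chatbot-x", HarmType.EVERYDAY,
                           "wrong benefits decision, no appeal route",
                           pre_deployment=False))
print(register.post_deployment_count(HarmType.EVERYDAY))  # -> 1
```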
Thanks very much. I'm going to put a couple of questions that have come in from people online to the panel. David Stakes is asking about the role of the UN in this. Some people have suggested that a good outcome from the summit might be something like the Intergovernmental Panel on Climate Change, or a similar UN body, to do precisely that role of tracking and communicating risks. So David is asking: would it be beneficial if the UN took charge with a new agency, as the existing multilateral body that arguably has more legitimacy and inclusivity than anyone else? And another question from online, a nice blunt one, from Antonello Guerrera: could AI kill democracy? So, who would like to take either of those?
Can I take a run at pulling together a few threads? The collection of harms is really interesting, because that's about looking at the evidence, collecting information and knowing what you're trying to deal with, and we have made a small start on this. Some work done by the Turing Institute, the National Physical Laboratory and the British Standards Institution has produced something called the AI Standards Hub. It sounds very worthy, and it is: it's a place where you can collect information, including about regulatory approaches. And there's also something called the Online Harms Observatory, where you try to gather information so that you can look at what you're dealing with now. That is a grassroots-up and actually very well regarded international effort, and lots of other countries are interested in it, but it is a grassroots effort to start putting information into the system so that we know what we're dealing with.

Picking up the point about whether the UN is the answer: the UN is actually doing good work on this; it has brought together several conferences and panels. And I think it's just not either/or; there isn't a clean answer to this, but there will need to be multilateral thinking. AI doesn't recognise borders; there is just this simple truth about AI, that it doesn't recognise geographical borders, and so you do have to think multilaterally, and the UN is, I think, already doing some excellent work in that space. So it's about bringing it together: it's a complicated topic, and then you bring the actors together, which puts together all the elements you need to minimise the risks and maximise the benefits.

Thanks very much, Jean. Professor Bengio, you had your hand up for this one; do you want to come in?
Yeah, well, I'll just say plus one to an IPCC-like organisation for AI harms and risks. Regarding democracy, I think there's a very important train of thought here that I'd like to share. AI systems are going to be more and more powerful in the future; they'll have more and more capability, and any powerful tool can be used by humans, organisations, countries, companies to accrue more power. The more powerful the tool, the more we might end up with an excessive concentration of power, and you have to remember that democracy is the antithesis of concentration of power: it means sharing power. Even our market system requires avoiding too much concentration of power. So if we're not careful, we might end up with some organisations having too much dominance, either economically, politically or militarily, or all three. It's not going to happen in one day, but I think we need to think of powerful tools like the AI of the future as something that fundamentally threatens democracy, and that means, correspondingly, that democracies need to put the right protections in place against these concentrations of power.

Thanks very much. You wanted to come in, Katie?
Yeah, I think about that question from a slightly different perspective, and for me it draws out some of the tendencies we sometimes see in the conversation about AI risks and frontier AI. It's been touched on, I think Jean mentioned it, or maybe it was Zoe actually: the tendency to think about the existential, the very, very scary. The question is a challenging one, and it makes you stop in your tracks and say, gosh, could AI spell the end of democracy? That's an arresting proposition. But actually, I sometimes think that when we frame things in such abstract terms, we forget our own agency, and we forget the expertise and the history that institutions have, and that we all have, around dealing with these issues. I've worked at Google long enough to have seen the Brexit referendum and the multiple general elections we've had through those years, and all of those presented challenges that involved technology. Those are challenges that we, as a company which allowed political ads on our systems, needed to respond to, checking that we had, firstly, the right rules in place, the right transparency, and the right engagement with the Electoral Commission. As we approach elections that will perhaps involve AI, we look at whether we have the right rules in place for adverts that might have been generated with AI, or indeed at building technology that lets you understand when a voice or an image has been artificially created. I think those are really practical tools. But we also, in the UK, have really robust electoral law, and that's something we shouldn't forget: when you're talking about the practical application of new technology, it still exists within a framework we've lived with for hundreds of years. If you put out a piece of election material, you have to say who it's published by; that's accountable to the Electoral Commission, and that's the same whether it's a leaflet coming through your door, a tweet or something else. So the risks shouldn't be minimised, and forums like this or the summit next week are really, really important, but we also need to remember the institutions, the historic perspectives, and the democratic and societal efforts that we can combine to make that framework right, rather than thinking only in existential terms.

Thanks very much for that. Jean, and then I'll come back to the room. I would just say, I do think something's changed.
This isn't about leaflets through doors: it's personalised, very bespoke, beta-tested on vast volumes of citizens, and I think we probably do need to recognise that and gear up to respond. We have the institutions in place, and we have, I think, learnings and expertise, we certainly have institutions and learnings, but the pace of change is just extraordinary. And on the question about risk, moving from elections to cybersecurity: I think I am already experiencing more phishing attacks, and I think the volume of cyber attacks is up. So there is something we have to face up to about the pace of change and how it interacts with some of our systems. We haven't got time to fact-check everything, and there are just some new realities that we have to come together and recognise.

Absolutely, I'm all for coming together and recognising them, but I think with agency and experience and a bit of optimism that we have perhaps some of the right tools to solve them. Or maybe I am just fundamentally optimistic. And I suppose the other thing is, there's never zero risk; we live in a world of risk. Antimicrobial resistance is something that concerns me, I'm a chemist by training. So we live in a world of risk, but we do have to evolve and respond.

Yeah, absolutely, I think I agree with that. I'm going to come back to the room for questions, and then I'll try to sneak in a few more from people online. I'm going to go to the woman in the glasses at the back there first, and then I'll take a couple more questions. But please, ask your question first.
Hi, I'm Meg Davis, professor at the University of Warwick. Thank you, wonderful discussion. I'm wondering if you could reflect a bit on the role of the private sector in framing this discussion, and in shaping legislation to protect its own interests. In particular, there's been some criticism about the use of 'frontier AI' as a framing device for this summit, coming from OpenAI, which has spoken out of both sides of its mouth in terms of regulation and what it's willing to subject itself to. So what are your reflections on the role of the private sector in all this? Thanks.

Great, thank you. I'll take a couple more. I'll go to the gentleman in the red top there.

Thank you. I'm M Kama, a member. My question is: AI has been used for many years in tools like iPhone photo recognition and Google. Can you give some good examples of what AI has already been doing to help make our lives better, to cheer people up?

Okay, cheer us up a bit. Can I take one from Joyce here?

Thank you very much. Joyce Hakmeh, from Chatham House. Looking beyond the summit: there have been some announcements that at the summit there would be, or there would be the announcement of, the establishment of an AI Safety Institute, and potentially a global AI research body. So I'm interested in the panellists' views on what unique added value these bodies can bring, and how they can complement existing initiatives, whether at the multilateral or regional level.

Thanks very much, Joyce. I'm going to sneak in a question from online, as I'm conscious of time; it's similar to the first question that was asked. Chris Middleton, online, mentions that the White House's AI Bill of Rights, a voluntary code, was introduced a while back; since then we've seen a shift to growing calls for more muscular regulation, even among IT leaders. So what has changed? Why have we gone from a conversation about voluntary codes to calls for regulation, including from IT leaders and the private sector?

So: the role of the private sector in framing the discussion, possibly to its own benefit, and responses to that, if that's an accurate summary of your question; cheer us up a bit, please, with some examples of AI helping us; what is the added value of a new or additional international body; and then perhaps some reflections on why we have shifted from voluntary frameworks to calls for regulation, even from IT leaders. Zoe, you've got your hand up, so why don't you come in and respond to whichever of those you prefer?
Oh, sorry, I put my hand down. I just wanted to say something quickly about the push for private companies to be involved. This is really not uncommon; we see it all the time. They want to be involved because, if it's going to affect them, they want to shape it, and you can bet your bottom dollar that there is a hell of a lot of lobbying going on constantly from all of these firms. I think they've looked at the lessons of the past; somebody mentioned before how disastrously the whole social media thing went, when they said, we don't need regulation, we're fine, we can do it ourselves, and then failed repeatedly. So I think everybody wants to avoid that situation. And in a way it also takes responsibility away from them, doesn't it? Because they can then say, well, do you know what, we followed your rules, we've done everything you said; there's less of an onus on accountability, perhaps, if the rules are set for these big companies. That said, they are the companies building these tools, so it's absolutely right that they need to be part of the discussion and be at the table, because they know what they're capable of building, what they are building, and how quickly it's going to change. And I think that is the big fear with any kind of regulation of tech: that it cannot keep up with the pace of change. This is evolving so rapidly; ChatGPT, which is the poster child, isn't it, because I think for so many people it was the first time they'd knowingly interacted with AI, hasn't even been out for a year. We are moving so fast in this world, and when you think about regulation and how traditionally slowly that has moved, the Online Safety Bill here in the UK has been years in the making, it's only just about to come in now, and arguably, when the conversation began, we were in a totally different landscape. So I think all regulators are going to struggle with that, and they're cautious about it.

Thanks very much, Zoe.
Professor Bengio, let me bring you in now.

Thank you. I agree with Zoe. I would add that there's something that's been discussed that I think is very interesting; for example, it was discussed at the US Senate hearings where I was a witness. It's the idea of strengthening the liability exposure of the companies, so that they would have an incentive to invest a lot more in protecting the public against all the ways that AI could be harmful. So yes, governments need to invest, and companies need to invest, but how do you force a company to invest, say, 30 per cent of their effort in safety? I don't think we can quantify that, but we can scare them into doing it using laws, and I think that's something we should do.

On whether an AI Safety Institute, or something like an IPCC for AI safety, would complement existing initiatives: I think it would. There are a number of things happening at the multilateral level, but they don't include much in the way of safety components. For two years I've been leading one of the working groups of the Global Partnership on AI, which now has thirty-ish countries, I think, including some from the global South, and we've looked at a number of issues around AI, but safety hasn't been on the radar screen of any organisation yet. The OECD has also done quite a lot in terms of governance and regulation, and UNESCO as well; there are lots of initiatives. But one way or another, we need to make sure we bring to decision-makers the kind of summary of the science that matters regarding these bigger risks. We already have some studies on current harms, on discrimination and so on, but we need to complement that with the question of safety, which the media has been talking about a lot, with all the open letters and so on over the last six months, but not so much with the kind of evidence-based scientific evaluation that decision-makers need to be able to refer to, like the IPCC provides.
Thanks very much, Professor. We're bumping up against time, so I'm going to come to the panel in the room now. Perhaps I could encourage final closing comments; maybe we can help out the gentleman in red with some positives from AI to close.

So, just to share two very specific examples. People often talk about health: the reading of eye scans, or the prediction of heart problems or cancers. I know, because I was associated with a team that did it, that machine learning and data science were used to materially improve our ability to respond during the pandemic, to make sure we had an understanding of where the pressure points in hospitals were and where the demand was, so that we got what was necessary to the right places. So that is a very real and very valuable contribution. And if I may, my other point of optimism: we have very highly stressed public services at the moment, so if you think about some of these tools reducing admin burdens on hardworking doctors, taking the pressure off those clinicians and letting them spend more time with patients, that's where one starts to feel more optimistic about what these things can do. Or at least I do.

Thanks very much. Katie, and then Fran, very quick closing remarks.
I think my reason to be optimistic, my favourite use of AI, is Google Translate, and not just for the kind of French or Spanish to English, but because now, working with local universities, we can take a dialect from Uganda and make the whole internet, and therefore some of the world's knowledge, accessible to people in their local dialects. I think that's absolutely game-changing.

Fran, you get the final word.

Great. Well, I've got a pacemaker, so I'm actually very glad about, certainly, machine learning; I don't know if it counts as AI, but it's keeping me alive right now, so I'm very glad about that. And more broadly, that illustrates that these technological advances can be very useful when deployed in the right way, and safety-tested very well, I hope, to do good things for us. So what I want us all to figure out together is how we design and manage these tools through their life cycle to do the good things for us, and not the bad things.

Thanks very much. Right, we're a few minutes over time, so I'm not going to try to sum up. All that's left to say is a really warm thank you to our panel, online and in person. I hope that was enlightening and interesting, in this context, a week before the summit, but also in this year in which we have seen such unbelievable advancements in this technology. Thank you all so much for coming, and thank you for your wonderful questions; I'm sorry I didn't get to them all. A round of applause for our panel.