‘Securing our AI labs’ should be top priority: AI security expert | Canada Tonight


Host: Pope Francis warned world leaders at the G7 Summit in Italy today that humans must not lose control of artificial intelligence. Francis said AI must not be allowed to get the upper hand over humans, and he urged close oversight of the technology. At 87 years old, Francis made history as the first pontiff to address a G7 summit. For more on this, we're joined now by Jeremie Harris. He is the CEO and co-founder of Gladstone AI, a company helping people understand AI trends and advances. He joins us tonight from the nation's capital, Ottawa. Thanks for being here.

Harris: Oh, thanks for having me.

Host: Listen, with the G7 creating an environment where major world leaders can discuss these regulations together, is it possible for everyone to be on the same page here? And what could happen if they're not?

Harris: I think it's a real challenge across the world, but in the G7 there's a unique opportunity to find some areas of collaboration, in particular in the domain of export controls: this idea of preventing nation-state adversaries from accessing controlled technologies. Unfortunately, AI is one of those technologies that is increasing the destructive footprint of malicious actors, of nation states that want to target our infrastructure and other things. So this is really an area where I think there's a lot of room for collaboration, collaboration on supply chain controls across the board. We're already starting to see indications of that: the United States joining with the UK, and more and more with the Netherlands and Taiwan, setting up infrastructure to monitor the flow of advanced semiconductors, which obviously are the key input to the whole AI supply chain. So I think that's one really key area of collaboration for those countries. I also think that setting norms and standards around the responsible use of this technology is really important. That's something we've seen a lot of discussion about on the Hill in the US, and hopefully we'll start to see more action on it as countries figure out what they think the biggest threats are in this whole picture, because not everybody agrees: is it the weaponization of these systems, or is it something else?

Host: Just to pick up on that: the Pope warned against AI weapons, but AI drones are already here. If some countries regulate AI weapons and some, like China or Russia, don't, what do you think the danger is? What could happen?

Harris: I think it's a giant risk, quite frankly, and I don't think that regulating the deployment of AI weapons is necessarily going to be the most tractable solution. Ultimately, this is why I go back to that whole issue of controlling the supply chain, preventing some of our adversaries from getting their hands on the critical inputs to this technology, meaning the advanced semiconductor technologies that are needed to build the most advanced systems. But there's another thing we've got to do. The investigation that my team and I conducted over the last year, working hand in hand with a team from the US State Department, into the security and safety situation at the world's top AI labs, really turned up a lot of whistleblower reports. In one instance, there was a running joke at a frontier AI lab that they were, and I quote, a leading nation-state actor's top AI lab, because all their stuff is getting stolen all the time. So there's this really important domain we've got to focus on too, which is securing our own AI infrastructure at home, securing our own AI labs from foreign penetration and foreign espionage. That's something that just hasn't gotten the kind of attention it really needs. There's this illusion we currently have that the West is ahead of the game in AI, but that's not actually true as long as foreign adversaries are able to hack our leading labs' systems and steal their IP with something close to impunity, which seems to be the case right now.

Host: Jeremie, let's talk about that report you mentioned. You published this report, commissioned by the US State Department, a few months ago, stating that AI could pose an extinction-level threat to humans. What do you mean by that? And do you think there's been progress on that front since the report was published?

Harris: The two categories of risk that we looked at were identified not just by us but by members of the US interagency. We're talking here about national security agencies, the Department of Defense, experts in weapons of mass destruction, and they looked at these as weapon-of-mass-destruction-level risks as well. The first was the weaponization of these systems, for example using AI systems to help design bioweapons or carry out autonomous cyberattacks on critical infrastructure. That's one bucket. The other was the loss of control of these systems, and of course we've seen two out of the three founders of the field of modern AI come out and warn about that. We've got whistleblowers at all the frontier labs essentially coming out now and saying, hey, this is a real issue and we're not doing enough to address this risk class. Look, there have been a bunch of high-profile departures recently; the news has been full of folks saying, I'm leaving my frontier lab. OpenAI has had a lot of these, right? Scarlett Johansson isn't the only big story coming out of OpenAI, not the only scandal. They recently lost their head of AI safety and alignment, Jan Leike, who took to Twitter and said, look, I'm leaving because not enough is being done. At this point, though, one of the things that makes me really optimistic is the fact that this is being taken more and more seriously by policymakers, especially in the United States. We're starting to see people really wake up to this and set up hearings that are starting to probe at the important questions: what are we doing to secure our labs? What are we doing to make sure we can maintain realistic control over our AI systems in the future? What kind of regulations do we need? What kind of oversight do we need?

Host: I talked to Geoffrey Hinton, who's known as the godfather of AI, here on the show a couple of months ago, and he was somewhat pessimistic about this; he had a little bit of optimism. But do you think that we're too far down the rabbit hole, so to speak, to turn the clock back on this?

Harris: It's a really good question. First of all, I'm a lot more optimistic than I was four years ago, when we started working in this space really intensely. One of the key things that's happened is that we've had technical advances that are increasing our ability to control these systems. We're not yet at the point where we can plausibly claim to know how to control so-called superintelligent systems, systems that are broadly more intelligent than human beings. We're not there yet, but we are making meaningful progress, and frankly it's an open question at this point whether we'll hit that stage before we actually build the technology, build the capabilities. That is a risk, but that risk in my mind has gotten a little bit lower as we've seen the safety side catch up a little bit, which is exciting. I also think there are a lot of possibilities in terms of regulation that get us as much of the value as we can here, because that's really what we want, right? There's tremendous value here, and we don't want to just get rid of that with overregulation. For example, we could introduce safety-first regulation and oversight in this space, where instead of having developers, as they currently do, build the system and then just discover in retrospect, oh shoot, it can automate cyberattacks (as, by the way, just happened with OpenAI's AI system GPT-4: we discovered just seven weeks ago that it could automate a wide range of different cyberattacks, without having known that despite its having been deployed for some time), we've got to flip that paradigm around and figure out ways to carve out a system's capabilities, define them, and only then build toward capabilities within safety parameters that we've set ahead of time.

Host: So for the average person watching all of this, and concerned about all of this, is there anything the average person can do? Or are we just bystanders?

Harris: I think ultimately this is going to be a main election issue across the world, including in Canada. The idea of regulations for AI, and in particular regulations that target the extreme and potentially catastrophic risks that come from these systems, is absolutely going to enter the mainstream, and we will see it featured in platforms. It's already being done in the US, and I think Canada's not going to be far behind. We may not have the world's top labs here, but we have some. We have Cohere, for example, which is an important developer; they're sort of lagging behind companies like OpenAI, but you can imagine that if the US brings in regulations that do the trick, other countries will have to follow suit, and if we domestically have certain capabilities, we need to be good actors on the world stage as well and do likewise. So I think that's going to be an important step: to be aware of the space, to make sure that we're informed voters, able to weigh in on these issues in a way that reflects the risks at hand and what we know about them.

Host: Jeremie, fascinating conversation, and it's an ongoing conversation. I hope you come back and join us again.

Harris: My pleasure.

Host: That is Jeremie Harris, the CEO and co-founder of Gladstone AI, joining us from Ottawa.

Jeremie Harris, CEO and co-founder of Gladstone AI, says that securing Western AI labs is necessary to protect the technology from being 'stolen' by foreign actors. His comments come after Pope Francis raised his concerns about the technology on Friday at the G7 summit.


