Innovation Files: Where Tech Meets Public Policy
‘Regulation by Outrage’ Is a Detriment to Emerging Technologies, With Patrick Grady
Policy regarding new technologies can be reactionary, confused, and focused on the wrong things. Rob and Jackie sat down with Patrick Grady, former policy analyst at ITIF’s Center for Data Innovation, to discuss what the European Union’s policymaking process can teach us about regulating emerging tech.
Mentioned:
- Proposal for a Regulation of the European Parliament and of the Council Laying Down Harmonised Rules on Artificial Intelligence (European Commission, April 2021).
Related"
- Patrick Grady, “The AI Act Should Be Technology-Neutral” (Center for Data Innovation, February 2023).
- Ashley Johnson, “Restoring US Leadership on Digital Policy” (ITIF, July 2023).
Rob Atkinson: Welcome to Innovation Files. I'm Rob Atkinson, founder and president of the Information Technology and Innovation Foundation.
Jackie Whisman: And I'm Jackie Whisman, head of development at ITIF, which I'm proud to say is the world's top-ranked think tank for science and technology policy.
Rob Atkinson: This podcast is about the kinds of issues we cover at ITIF from the broad economics of innovation to specific policy and regulatory questions about new technologies. If you're into this stuff, please be sure to subscribe and to rate us. Today we're going to talk about how the EU or the European Union regulates the tech industry, particularly emerging technologies like artificial intelligence or AI.
Jackie Whisman: Our guest is Patrick Grady, who's a policy analyst at ITIF's Center for Data Innovation. He focuses on AI and content moderation, and he's based in Brussels, so he has a great perch to watch all these issues unfold. Thanks for being here, Patrick.
Patrick Grady: Thank you both for having me.
Jackie Whisman: Generally speaking, how does the EU approach regulating technologies like AI and how does it differ from what we see here in the US?
Patrick Grady: Well, first, quite broadly: in the US, I think you would pick out certain capabilities of a technology, whereas in the EU, we really target the technology itself. On the one hand, critics would argue this is quite a precautionary approach, but on the other hand, it's why the EU has its reputation for being the first mover on regulation. If I take the AI Act as an example, which is obviously what we're going to talk about, the EU focused on the technology itself, artificial intelligence; that's the name of the bill, and it's really going to please a lot of people that there's a bill called this. Whereas in the US, you're focusing on what the technology actually does, so in a similar proposal, you'd be focusing on the automated decision-making. The different approach in the EU is that we really focus on the technology itself.
Jackie Whisman: Why don't we talk specifically about the proposed law in Europe that you mentioned, the AI Act? What's the state of play as you see it?
Patrick Grady: Well, in terms of a timeline, quickly: the proposal was in 2021, and here we are two years later, and it looks like it's finally being wrapped up. This bill should pass by the end of the year, and it might be enforced by 2025. There are still some big battlegrounds in the bill. I would say the two biggest ones are, first, how do you regulate general-purpose artificial intelligence, so things like GPT-4, and the other big battleground is around facial recognition. The bill actually bans the use of facial recognition by public authorities, but now there are arguments over whether there should be exemptions for law enforcement, which was originally the case. As you might imagine, the member countries (the Council is the institution that represents them) want to keep this exemption; national authorities see it as a way of securing themselves against crime. But the Members of Parliament see this as a loophole. So facial recognition is still a huge battleground.
Rob Atkinson: So Patrick, on facial recognition or FR, there are two main critiques of it in the US: one is that it'll be used in a way that would violate civil liberties among the population, and the second is that it inherently has bias. There's been a claim that it's biased against dark-skinned people. In the latter case, it's pretty clear that there are FR technologies on the market that have zero bias. The National Institute of Standards and Technology showed that about a year and a half or two years ago: they let all these companies submit their algorithms and their systems for testing, and they found that the top 20 out of, say, 80 or 90 had zero bias.
Patrick Grady: Yeah. And it's a bit of a misconception really, I think, because there were a lot of scandals in the last five, even going on 10 years with how some of these systems reproduce bias. But what's happened since is actually a lot of work in this field to re-engineer the systems; there are even algorithms now that can detect bias based on certain characteristics and make sure it doesn't influence the output. So with something like facial recognition, I think it's more important to focus on the institutions themselves and the kind of processes that police forces, for instance, are going through rather than the technology itself.
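[Editor's note: To make the bias testing Patrick and Rob describe concrete, here is a minimal sketch of how one might measure disparity in a face-matching system's false match rate across demographic groups, in the spirit of NIST's FRVT-style evaluations. The function names and data layout are hypothetical illustrations, not NIST's or any vendor's actual API.]

```python
# Hypothetical sketch: measuring demographic disparity in a face-matching
# system's false match rate. `match_score` stands in for whatever
# verification function a vendor exposes; higher scores mean "same person."

from collections import defaultdict

def false_match_rate_by_group(impostor_pairs, match_score, threshold):
    """impostor_pairs: iterable of (img_a, img_b, group) where the two
    images show DIFFERENT people; group is a demographic label.
    Returns the per-group rate at which different people are wrongly
    declared a match at the given decision threshold."""
    trials = defaultdict(int)
    false_matches = defaultdict(int)
    for img_a, img_b, group in impostor_pairs:
        trials[group] += 1
        if match_score(img_a, img_b) >= threshold:  # wrongly accepted pair
            false_matches[group] += 1
    return {g: false_matches[g] / trials[g] for g in trials}

# A system is roughly unbiased on this metric when rates are close across
# groups, e.g. the max/min ratio is near 1.0:
# rates = false_match_rate_by_group(pairs, model.score, threshold=0.8)
# disparity = max(rates.values()) / min(rates.values())
```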
Rob Atkinson: Yeah, sure. I guess my ultimate question here is, when you look at the application and the worry about civil liberties, I'd be worried about that if I were in China. I'm not worried about that in the EU or the US; we have laws. You can design a bill around facial recognition that has certain parameters around it. For example, you can't use it to identify people who are not suspects. So in other words, I can't walk down the street in Brussels and it says, hey, Rob Atkinson's walking down the street. Nobody wants that, nobody should have that. But you certainly want to be able to say, hey, there's an abducted child and we have their image, so we're going to have all the cameras looking for that person. I want that. I would love to have that. Or hey, there's somebody who just robbed a bank, let's see if we can identify that person.
Why do you think the European Union, the Commission and the Parliament have not understood that? Or do they understand it and just don't care and they just don't like technology? I mean, what's going on? This is a beneficial technology. I don't understand.
Patrick Grady: Well, look, I want to give these policymakers the benefit of the doubt, because they really do want to ensure that, alongside innovation, we're also protecting the civil rights of individuals with some of these applications like facial recognition, and another one I would say is social scoring. Really, what's happened here is that as a result of sci-fi, or some examples from more authoritarian countries, we assume the worst about our own member states, and we assume that if something can happen in, for example, China, then it could happen here in the EU. The challenge is the EU does have a broad spectrum of countries with different levels of democratic policymaking at the moment, so there is a bit of diversity and you want to create a minimum standard. But I hear your point: we're not the Chinas of the world, and this is not how it's currently being used. It would make more sense to me to regulate the current risks than any sort of imaginary risks.
Rob Atkinson: Yeah, I mean, look, if we're worried about the abuse of technology, then we should ban steel bars because you can put people in jail. My God, we have the ability... Hungary or whatever is going to put everybody in jail. You go to Hungary for a visit, they'll put you in jail, so let's ban steel. Or they're going to shoot you because they have steel guns with bullets, so let's ban bullets for the police. Really, at the end of the day, frankly, to me it's a nonsensical notion. It's saying that somehow the technology itself is bad. There are lots of technologies out there that governments could use, if they wanted to, to significantly abuse our civil liberties, but they don't. And they don't because we have a wide array of protections: the free press, the courts, legislators who can pass laws, and just the fact that most governments and police forces are trying to do the right thing.
So I still don't get it. I really feel like this is a case of the Europeans' precautionary principle run amok, and they're going to suffer from it. They're going to suffer in terms of not developing a facial recognition industry, and they're going to suffer because there's going to be a lot more crime that doesn't get solved.
Patrick Grady: Yeah, I completely hear your point. And we probably also underestimate how much crime is already being tackled through these technologies, and that's why what's being proposed is so concerning to the member states right now, because it might be a disaster for them to take away something that is already providing so much utility for law enforcement.
Rob Atkinson: Switching over to AI (although facial recognition is usually based on some AI algorithm), one of the big differences, as you alluded to, is that they're trying to regulate the technology, while in the US, we're regulating the application. So for example, the Food and Drug Administration just came out with a set of rules saying that if you're going to use an AI algorithm in a medical device, it has to be regulated. Of course it does, because medical devices, like drugs, are already regulated. If you're going to use an AI algorithm in credit reporting, it has to be regulated because of the Fair Credit Reporting Act.
If you're going to use it in cars, it has to be regulated because we regulate brakes and steering wheels and everything else in cars, but we don't regulate search engines. Why in God's name would we regulate ChatGPT in Bing or ChatGPT in Google? Is it beyond belief that the EU would want to... First of all, how do you do it? What are you looking for? Oh my God, it gave me a bad result. Let's sue Google. I got a bad result. So am I exaggerating or what's going on there?
Patrick Grady: No. Well, that is a little bit of what's going on, but it's a horizontal regulation, so it has to cover the use of AI, in principle, everywhere. But as you suggest, there are different use cases that are more dangerous or risky than others. So one of the ways the EU tried to navigate this is they proposed so-called high-risk use cases, and this is really the meat of the entire argument behind the bill, because this is where the high restrictions are in place and where there's a lot of debate; there are two sorts of problems with this approach. They picked out eight categories: if you use AI for public services, for biometric identification, for employment, for education, or for migration and asylum services, you are subject to these really high requirements.
But the problem, and this is something we've all learned over the last two years, is just how prevalent AI is. We're finding that many of these use cases are much broader than we originally thought they were going to be. And there are also a lot of different risk profiles within these categories. For example, the risks of a government using AI to give welfare or social security to its citizens are plainly different than if it were used in an educational setting, and yet these are treated as exactly the same.
Rob Atkinson: So here's another one I don't get: why would the government need to be told to regulate itself? Why does the city of Paris or the French government need the EU to tell it to develop algorithms that are not biased against French citizens? I never would've thought of that. Thank you, Brussels. It is a brand-new idea not to have a bad algorithm. I just don't get that one.
Patrick Grady: Well, to be clear, the EU is a supranational body, and it's really the Commission proposing the initiative. And to your point, yes, the member states probably don't want many of these requirements. But it's almost a kind of insurance: they want to create a minimum standard that will apply even after other governments come to power. Government is the only body capable of regulating itself, so to that extent it has to do it.
Rob Atkinson: I don't agree with that. I mean, first of all, if the French government wants to discriminate against Muslims, let's just say, hey, we don't like Muslims, we're going to discriminate against them, or if they're just dumb and they want to do bad things to a certain class or demographic, they can do that. First of all, why would they do that? But secondly, why does Brussels need to be telling them not to do that? I just think it's one of those things like, why would you bother with that? Let this technology flourish. Let these governments do what they need to do. I can see it with private companies maybe, but governments are governments. Why does the EU think it's the one with the moral standing here? If it weren't for the EU, boy, those Germans or those French are going to do awful things.
Patrick Grady: One of the reasons is that there have been some very public scandals with the use of AI, and the lesson doesn't necessarily need to be that it should be regulated everywhere, but it motivates a desire to harmonize rules, essentially, which is the principle of the EU.
Rob Atkinson: Yeah, again, though, I don't really get that. When you talk about the scandals, one of them is the Dutch example, and what drives me crazy about these AI harms is this: with any technology like this, rolled out in a million different applications, you'd be surprised if there weren't some cases where it went wrong. What you really should be worried about is the institutions or organizations that roll it out and say, oh yeah, it didn't go right, we don't care, let's just keep going. The Dutch government stopped, and they learned, and developers learn. The idea that somehow you have to be micromanaging governments, I just don't get that. They're not stupid.
Patrick Grady: Yeah, to your point about governments learning, there was actually another Dutch case quite recently, but I still think the lessons weren't quite right. When policymakers speak about AI, they're often just talking about machine learning, really. So when we heard about many of these scandals (there's also a really famous one in Australia), we were assuming something like ChatGPT, a rogue, really powerful model. With many of these scandals, it's just a simple spreadsheet, and all of the decisions about how much the inputs should weigh and how they should affect, for example, in that case, the benefits people got, were made by human beings. And one of the risks here is that we end up putting red tape on a technology while there's never actually any accountability for those who have complete control over the rules.
Rob Atkinson: Yeah, that's a good point.
Jackie Whisman: How have policymakers in the EU reacted to the latest buzz about ChatGPT and other large language models?
Patrick Grady: Well, listen, Jackie, I should say that as a policymaker in the EU, your number one job is to stay relevant and be seen to be making a big difference. So when there's a lot of hype, especially in the media around ChatGPT, you have technologists claiming this is AGI, claims that we're on the brink of a disinformation apocalypse. Just in the last week, there was a letter circulating to try and stop the development of AI. So in this context, we suddenly see a ChatGPT amendment pop up in the AI Act. It was called a "text-to-text generator," but it's quite clear where the motivation came from. And really, this is disappointing because it betrays the entire approach of the bill, the risk-based approach. It also betrays all of the work done so far on general-purpose models.
The amendment would just put ChatGPT in its own category. In the draft it was actually labeled "other," which is a worrying sign of how ad hoc it was. But if this amendment went through, it would be treated as high-risk regardless of how it was used. So if you are using ChatGPT to write a birthday card, it's treated as equally risky as if you were using it to reply to an asylum application, for example. Now, there was a lot of backlash to this amendment, and it probably won't survive; we'll maybe see some version of it in a footnote in the bill. But it's an indication that there was this regulation by outrage, as I said, which is a bit concerning.
Rob Atkinson: I want to just follow up on that because again, it goes back to the use. I gave a talk in Canada, up in Ottawa recently, to the Internet Society, and I opened up my talk with: imagine a technology that could be biased, where you wouldn't get a loan, or where your drug test would be faulty, or where a company might make a bad investment. We need to regulate that, right? Well, I'm talking about spreadsheets, and we don't regulate spreadsheets, because if you make mistakes on spreadsheets, it's your fault. If you buy a bad spreadsheet from a bad spreadsheet company, that's your fault.
What I don't understand about, how would you regulate... My son's getting married, and so I'm using ChatGPT to write the wedding speech I'm giving. I'm lazy and I want high productivity, so I don't want to waste time writing a speech. Actually, I'm doing it for fun. But I use ChatGPT to write a speech. It's an incredible speech. It's really beautiful. You'd cry if you heard it. It's amazing. But then I want to use ChatGPT to decide whether I'm going to give Jackie a raise and it tells me, no, I shouldn't because Jackie has blonde hair. How could you possibly regulate that differently? I just don't get it.
Patrick Grady: Well, it would be the responsibility of the developer, which, as you say, would be impossible. I think OpenAI have actually been quite clear about this: don't trust it with decisions like giving Jackie a raise. If you want to use it to write an invitation or a speech, then go ahead. But always know that there are some cases where it's not appropriate to use it.
Jackie Whisman: How often have you written reports for the Center for Data Innovation with ChatGPT?
Patrick Grady: Well, that's obviously-
Rob Atkinson: Come on, be honest, be honest.
Jackie Whisman: Daniel won't listen.
Patrick Grady: No. What I find interesting about it is using it for ideation, so you almost have a conversation with it to try and tease out a topic. What's also interesting is if you say, write it in the style of ITIF, you actually get something that replicates our style in some way. But obviously, there's a wicked talent that you can't replace.
Rob Atkinson: Well, as head of ITIF, that's one of the things we're working on: seeing if we can outsource most of our analysts to ChatGPT. So we'll let you know if that happens. In the meantime, I wouldn't worry too much now. I think maybe by ChatGPT-5, I'd start to worry.
Patrick Grady: Actually, a fun way to use it is the image generators. I think the analysts are having a quiet, unspoken competition to try and find the best cover photo for our articles. But yeah, DALL-E has been a fantastic help in that regard.
Rob Atkinson: Yeah, or PowerPoints or whatever. I hope the EU regulates DALL-E, because imagine using an image generator... Well, I did one on me, Rob Atkinson and technology, and it had this weird image of me that I felt quite offended by, and I think I should be able to sue somebody. It really pissed me off.
Patrick Grady: Well, you're lucky it knows who you are. I think that's a privilege.
Jackie Whisman: Well, in the couple minutes we have left, I wanted to get your thoughts on what this policymaking process can teach us about regulating emerging tech. I know Rob's thoughts but we want yours.
Patrick Grady: I think it's to be neutral to the technology itself. If you want regulation to survive, it can't be confounded by the latest innovations that pop up, as we've found with the AI Act. Non-discrimination laws are a fantastic example of tech-neutral regulation, so that's one point. The second point, which has come up a few times, is that you really want to tackle the sectors where the technology is used and not necessarily the technology itself. So I encourage regulators who haven't yet got this far in the process to consider these lessons from the EU.
Rob Atkinson: We will watch with interest to see what the EU does, and hopefully, they will listen to you.
Patrick Grady: We'll share this with them widely. Thank you.
Rob Atkinson: Thank you, Patrick.
Jackie Whisman: And that's it for this week. If you liked it, please be sure to rate us and subscribe. Feel free to email show ideas or questions to podcast@itif.org. You can find the show notes and sign up for our weekly email newsletter on our website, itif.org. And follow us on Twitter, Facebook, and LinkedIn @ITIFdc.
Rob Atkinson: And we have more episodes with great guests lined up. We hope you'll continue to tune in.