Innovation Files: Where Tech Meets Public Policy

Navigating Deepfakes While Promoting Innovation, With Ryan Long

Information Technology and Innovation Foundation (ITIF) — The Leading Think Tank for Science and Tech Policy

Episode 91

The past few years have seen a remarkable rise in the quality and quantity of deepfakes. Rob and Jackie discussed that rise with Ryan Long, Vice-Chair of the Licensing and Technology Transactions Group in the California Lawyers Association's Intellectual Property Section, and explored how to harness this technology responsibly while preventing abuse.


Auto-Transcript

Rob Atkinson: Welcome to Innovation Files. I'm Rob Atkinson, founder and President of the Information Technology and Innovation Foundation.

Jackie Whisman: And I'm Jackie Whisman. I head development at ITIF, which I'm proud to say is the world's top ranked think tank for science and technology policy.

Rob Atkinson: This podcast is about the kinds of issues we cover at ITIF, from the broad economics of innovation to specific policy and regulatory questions about new technologies. If you like this kind of stuff, please be sure to subscribe and rate us.

Jackie Whisman: Today we're talking to Ryan Long, who's Vice-Chair of the Licensing and Technology Transactions Group in the California Lawyers Association's Intellectual Property Section. His law practice focuses on helping clients create and implement effective strategies to litigate or negotiate sophisticated tech and media transactions. He's also a Stanford Law School Non-Resident Fellow, and he's based in Los Angeles. We're happy to have you, Ryan. Welcome.

Ryan Long: Thanks for having me.

Jackie Whisman: Today we wanted to talk about deepfakes, a subject that you've written and talked a lot about. The past few years have seen a remarkable rise in the quality and quantity of deepfakes. And politicians tend to get especially nervous about them as elections approach, so maybe it's good timing. Can you describe what deepfake technology is?

Ryan Long: Sure. I like to think of them as digital counterfeits. If you think of a regular counterfeit, whether of a dollar bill or a painting, it's a fake copy of an original. But there are also original counterfeits: synthetic media that's made up from scratch. Say you take footage of Michael Jordan dunking a basketball. One kind of deepfake is a modification of something he actually did. Another kind is a counterfeit, but not in the traditional sense: it's an original video of him doing something he never did.

So regular deepfakes are basically counterfeits, modifications of images that are already out there, and those are the most common. But some deepfakes are originals in the sense that they're original frauds: they use somebody's image to show them doing something they never did. Those are the second type. And the technology varies, but generally it's AI-produced. These images are crunched and modified, and then they can be mass-produced by bots across the internet.

Rob Atkinson: Why wouldn't a movie coming out of Hollywood, showing something that's not real, count as, quote, "a deepfake"? It doesn't exist, but it looks real.

Ryan Long: That's a good question. There are different types of deepfakes. There are malicious deepfakes, which are basically trying to mislead people, and that's fraud. The cost of deepfakes annually, I have an estimate here, runs between $243,000 and $35 million in individual cases. And when you take deepfakes and fold them into other types of fraud, the annual number gets much bigger.

So to answer your question: traditionally you have malicious deepfakes, which are used in fraud. They can be a bot, they can be an image. It can also be somebody who calls a CEO and says, "Hey, this is your son calling. Please, I need money, send it to this account." That could be a voice fake. So that's one type, the malicious kind.

There are other types of deepfakes that are fun. The Onion and the Babylon Bee are satire sites; they make fun of things, they make up fake images, and they qualify them as satire. So they're deepfakes, but they're also satire. And actually there's a third set, though I like to separate them simply into malicious and non-malicious. In the second category you have deepfakes for humor, but you also have deepfakes for educational purposes. There are images of historical figures giving speeches they never gave, or hypothetical situations where you have an image of a historical figure and how they would respond to something.

So to answer your question: movies are just fiction. If you take historical fiction and make something up based on history, it's fake in the sense that it's not real, but it's fiction, and people understand it's for entertainment purposes.

Rob Atkinson: Makes sense.

Jackie Whisman: But then it gets pretty tricky from a regulatory perspective if you're trying to curb the dissemination of malicious deepfakes. How do you make sure that you're not discouraging technology that supports an industry like the film industry or content creators while you're doing that?

Ryan Long: That's a good question. Another good question. What I've seen in copyright is that if you have a technology that can basically only be used for illegal purposes, then the software maker can be liable. It's like weaponry: if certain weaponry can only be used for illegal purposes, that's one thing. If you have weaponry that can be used for both legal and illegal purposes, that's another thing.

So in the software context, if I have a client that's created software that can only be used to hack into accounts, circumvent copyright protections, and steal copyrighted works, that's one thing. If they have software that can be used to circumvent but also for lawful purposes, like creating interoperability between old hardware and new hardware, which is lawful under the Copyright Act, then that's okay.

So to answer your question, I believe the answer is: what is the software designed to do, and what ramifications does it have on the market it's supposed to enter or serve?

Rob Atkinson: Well, it's funny you mention that, because I was one of the first folks in Washington to do something opposing Napster, way back when. Napster, which was a music piracy site, ended up getting shut down. Their argument was, "Well, not everything on Napster is stolen or infringing." It got shut down because when they were in court and you looked at the documents, it was pretty clear the founders intended it to be used for copyright violation.

And so later versions didn't make that mistake. Sometimes they were able to get away with it and say, "Well, there's legitimate peer-to-peer usage on this. Some of it's bad, some of it's good." But I think your point is a really good one. A lot of these technologies are going to be open source and easy to use, and they might be used for things like, "Hey, I want to send Jackie a birthday image of myself next to Donald Trump or Joe Biden." That would be a deepfake, but it wouldn't be malicious, and the company that sold me that service couldn't be prosecuted. So is there any way around that, or do you basically just have to prosecute the people doing the bad ones and not the good ones?

Ryan Long: Well, Napster is interesting because Napster was the forerunner of digital music moving off of offline sources like CDs and record players. There were peer-to-peer arrangements on Napster, and yes, in that case there was a lot of copyright infringement. At the same time, there were a lot of pro-competitive benefits to Napster, and now we see the technology being used by all these streaming services. It really revolutionized music.

So there were some really positive things there. And I believe the founder of Napster wanted to talk to the recording industry, and they basically just wanted to prosecute him, which alienated a lot of the technology community. Now we see music priced differently. So one has to be careful, in my view at least, about how you approach these technologies, and not just use a shotgun on a fly.

With deepfakes, it depends on the circumstances. There are technologies that allow voice fakes, and they actually came out around 2006, where you could take somebody's voice and mimic it just by recording it. Now, what are the lawful purposes of that? You can use it to prank people for jokes, but there are also unlawful uses. So if you use it unlawfully, to commit bank fraud say, then you should be liable for whatever federal crime you commit. Should the software maker be liable if you misuse the product, when they've said, "Don't use this product to do X, Y and Z"? That's another question.

If most of their users, say 90% of their users, are using it for unlawful purposes, then that's one consideration. But if it's only a small percentage, or a relatively small percentage, then that's another consideration.

Rob Atkinson: Yeah, I think we might differ on Napster. I hated Napster, because Sean Parker knew it was a piracy vehicle. And to be fair to your point, eventually the music industry decided to move into digital and streaming, and now you can do that with Apple or Spotify or whatever. But your point raises an interesting legal question: what if there were a service where it looked like 95% of the people were using it illegally, and then the owner made sure his grandmother and his cousin were using it legally? Is there a legal standard for what level of illegality it has to be, or is it "we know it when we see it"? How does a court decide that?

Ryan Long: If they see that the software is primarily being used for illegal purposes and that the owner knows about it, there's a quote from the case law: there's no ostrich-in-the-sand defense. If you're creating technology that can primarily be used to infringe, you can't just say, "Well, I didn't know about it." You have to be aware of what your customers are doing with your technology. In terms of a per se rule, I'm not aware of one. So I believe it comes down to what the software is being used to do. If it's primarily being used for illegal purposes, that's one thing; if not, that's another. And deepfakes have a lot of applications. I've worked on cases involving bots and traffic: there are advertising companies that are paid per click, and some folks use bots to artificially boost their clicks, which isn't proper under the usual contractual language. Those are "deepfakes" in the sense that bots are creating traffic that isn't real traffic.

Another instance is email spoofing, where people get hacked and somebody comes in saying, "Hey, I'm so-and-so," and they're not who they say they are. They use an email address that usually differs slightly from the original one. That's a subset of a deepfake, if you will: a counterfeit identity. So my original point stands: I consider deepfakes to be digital counterfeits. That's a good umbrella, and there are a lot of different manifestations of it.
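
To illustrate the lookalike-address pattern Ryan describes, here is a minimal sketch (not from the episode) of how one might flag a sender address that nearly matches, but doesn't exactly match, a known contact. The contact list, similarity threshold, and addresses are all illustrative assumptions:

```python
# Illustrative sketch: flag sender addresses that nearly match, but don't
# exactly match, a known contact -- the pattern Ryan describes, where a
# spoofed address "usually differs slightly from the original one."
# Contacts, threshold, and addresses are made-up examples.
from difflib import SequenceMatcher

KNOWN_CONTACTS = {"ceo@examplebank.com", "counsel@examplebank.com"}

def looks_spoofed(sender: str, threshold: float = 0.85) -> bool:
    """True if `sender` closely resembles a known contact without matching it."""
    sender = sender.strip().lower()
    if sender in KNOWN_CONTACTS:
        return False  # exact match: the genuine contact
    return any(
        SequenceMatcher(None, sender, contact).ratio() >= threshold
        for contact in KNOWN_CONTACTS
    )

print(looks_spoofed("ceo@examp1ebank.com"))  # True: 'l' swapped for '1'
print(looks_spoofed("ceo@examplebank.com"))  # False: the real address
print(looks_spoofed("news@unrelated.com"))   # False: nothing like a contact
```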

I'm not saying I was a fan of Napster or anything; I'm just saying the technology was very forward-thinking. And these technologies have some positive applications. With deepfakes, you could have a history class with a deepfake, if you will, of a historical figure giving a speech. That could be very educational.

Rob Atkinson: Yeah, I think we actually 100% agree on that. I wasn't trying to imply that we should shut down any software that can do these cool things. It's just that in the Napster case, it was pretty clear they intended it to let people have music without paying, whereas being able to manipulate images or voice has all sorts of interesting uses, whether it's parody or the others we've talked about.

I'm just curious, Ryan, have you looked at all at what some companies like Adobe are doing? They have this Content Authenticity Initiative, which is about embedding digital signatures so you can see when something is trusted and not fake. Any thoughts on that? Have you looked at that at all? In other words, is there a technology solution that would at least let people know that something's not fake? It's hard to do it the other way and prove everything fake is fake, but at least you can say, "Well, we know this isn't fake."

Ryan Long: There are technologies, depending on the content. There's no universal technology that can authenticate everything. There's signature technology, there's voice technology. I know there's software for, say, a bank CEO who's worried about callers not being who they claim to be; there is software that can authenticate people. There are different solutions for different problems. And to answer your question, there is software that can authenticate signatures. You have to register, and there's only so much you can do, because the big problem online is that I can claim to be somebody, using somebody else's driver's license and everything, while not being the person I say I am.

In person, it's much harder to do that. If I meet somebody in person, they can wear a mask, they can camouflage who they are, but it's very difficult. If they claim to be somebody I know, I can pick up on whether they're not the person they say they are much more easily. Online, it's harder. So with signatures and authentication, there's blockchain technology that can help authenticate information and make it less susceptible to manipulation. There's different software for different problems. There's also software that helps with data: cleaning data for privacy issues, checking accuracy of data, things like that.
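
As a rough illustration of the signature-based authentication Rob and Ryan are discussing, here is a minimal sketch of the underlying idea: a publisher signs a hash of the media, and anyone with the public key can verify the file hasn't been altered since signing. This is a simplification for illustration, not Adobe's actual Content Authenticity Initiative / C2PA format, and it assumes the third-party `cryptography` package:

```python
# Minimal sketch of the idea behind signature-based content authentication.
# Not the actual Content Authenticity Initiative / C2PA format -- just the
# core mechanism: the publisher signs a hash of the media bytes, and anyone
# holding the public key can check the file hasn't been altered.
# Requires the third-party `cryptography` package (pip install cryptography).
import hashlib
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# Publisher side: sign the SHA-256 digest of the media.
private_key = Ed25519PrivateKey.generate()
public_key = private_key.public_key()

media = b"...raw bytes of the photo or video..."  # placeholder content
signature = private_key.sign(hashlib.sha256(media).digest())

# Viewer side: recompute the digest and verify the signature.
def is_authentic(media_bytes: bytes, sig: bytes) -> bool:
    try:
        public_key.verify(sig, hashlib.sha256(media_bytes).digest())
        return True
    except InvalidSignature:
        return False

print(is_authentic(media, signature))                # True: untouched file
print(is_authentic(media + b"tampered", signature))  # False: altered file
```

Note that, as Rob says, this only proves the positive case: a valid signature shows a file is untouched since signing, but an unsigned file isn't necessarily fake.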

Rob Atkinson: Sure.

Jackie Whisman: There's been a lot of state legislative action on deepfakes in the last few years, and our colleague Daniel Castro, who works on this issue a lot more than we do, would say that some of them may take the right approach: they make it unlawful to distribute deepfakes with malicious intent, and they create recourse for those in their state who have been negatively affected by bad actors. But our concern is really that lawmakers carefully craft these laws so as not to erode free speech rights or undermine legitimate uses of the technology, which I was talking about earlier. Is there a particular piece or pieces of legislation that you think get this right?

Ryan Long: There is legislation in different states that criminalizes deepfakes for political purposes within a certain window of time. To answer your question: California has a law, no deepfakes of a candidate within 60 days of an election. Texas has one too, but without the sixty-day limitation. And similar statutes have been proposed in Washington, Maine, and Maryland.

So in terms of regulation, in my view at least, it should just follow the First Amendment. If it's opinion, if it's satire, that's one thing. If you're making defamatory statements about a candidate, that's another. Same with a product: if you're saying, "Nike products do X, Y, and Z," and they don't, that's commercial speech, but it's defamatory. So I think if you follow the contours of the First Amendment, the commercial speech and political speech contours, that would be my thought.

Jackie Whisman: Is the political context really the most malicious way you could use a deepfake, though? Without bashing the technology itself, it does seem like that's a little narrow and niche, and frankly a little self-serving of the people writing the legislation.

Ryan Long: What do you mean?

Jackie Whisman: It just seems like, and I don't know how to do it, I guess that's why I'm looking to people like Daniel and you, but it doesn't seem like the only malicious deepfakes would be in a political context. There are other ways you could use them, like your example of the CEO, or a fake kidnapping or extortion attempt. So A, political deepfakes aren't the only malicious ones we'd need to protect against, and B, writing laws that really just limit the timeframe of something doesn't fix the problem.

Ryan Long: I haven't seen these laws; I just know they've been proposed. I agree with you that deepfakes aren't limited to the political context. My view is, it's like trespass. You have physical trespass, where the law hasn't changed: if you're going on my property without authorization, it's trespass. Whether you're trespassing with trees or dogs or whatever, it's still trespass. And in the digital sphere there's actually digital trespass: if I have proprietary information behind a firewall and you jump the firewall and view that proprietary information, even if you don't take anything, there is case law recognizing that as digital trespass.

So to answer your question, I think the same holds for defamation. If you're defaming somebody and you're using a digital deepfake, it's still defamation. If you're trying to commit fraud and you're using a digital deepfake, it's just another vehicle for the fraudulent scheme.

Rob Atkinson: I guess the logic there, Jackie, when you think about it, Ryan, to your point, is that you could bring a case, you might even win the case, but if your opponent ran a deepfake three weeks before the election showing you as a drunk, by the time you can respond, the election's over. You've lost. You may win your case later, but you're not in office. That may be the logic there.

But I guess the other part of that is: how do you define what a deepfake is? I've seen things on The Onion showing Trump doing something or Biden doing something, labeled as satire. Are you not allowed to do that two weeks before the election? That seems like a restriction of free speech.

Ryan Long: Yeah, I haven't read these laws; I'm assuming they don't prohibit satire, but I don't know the answer to that. Deepfakes are traditionally synthetic media, generally created by AI. So if you just draw a cartoon of somebody doing something they didn't do, that's not really a deepfake. There's a quote about truth: "A lie gets around the world before the truth has its pants on." And to answer your question, I think that's the danger of deepfakes: they move fast. They can be created by bots. And in the commercial context, they can really hurt your brand. If you're out there operating, people can put up stuff about your brand that's completely false, and before you know it, it's accepted as accurate and it can injure your brand. By the time you go to court and get a judgment, you can have a lot of customers leaving your company.

So that's the big difference. In the old days there was word of mouth: somebody says, "Oh, they're selling bad coffee beans and they're moldy," then I say that to somebody, and they repeat it. That's linear defamation, linear word of mouth. This is exponential. To me, that's the difference in degree. And with what you mentioned about before an election, somebody putting up a deepfake of somebody doing something they've never done, there's all this case law regarding defamation and political speech, which is its own body of law. To me, it would just fall into that rubric.

Rob Atkinson: That's really interesting. Do you think, though — there were concerns with past technologies, photography, television, that they would be used to manipulate people, and we figured it out. My sense is, first of all, I don't see a complete technology solution. It's possible we end up in a place where you look at something and it's simply not authenticated: maybe it's real, maybe it's not. In other words, you don't get a clean false negative or false positive.

And I would love to hear your reaction to this: do you think it's one of those things where we're just going to have to get used to it and develop a little bit better bullshit detectors? As in, "That really looks like a deepfake; I don't believe that XYZ company's cars fall off a cliff every day," or whatever it might be. Or is it just going to get worse and worse?

Ryan Long: You raise a good point. I think having a meter is good. I think having a sense of humor is important, because some of this stuff is so absurd that you look at it and think, "This is complete bullshit" — though nowadays it's, "Oh, you can't say that." So I think having that instinct is a very good thing. There's a book called The Age of AI, written by Eric Schmidt of Google, Daniel Huttenlocher, a professor at MIT, and Henry Kissinger. They basically say we've gotten to a point where we're getting our information in a cave, which is this screen, and we're not conferring in person as much. By getting information digitally, we get a false view of the world, and it creates a lot of blind spots.

So when you get information digitally, it's easy to manipulate. In person, if I meet you in a coffee shop and we start talking, I can say, "Well, I met him. I know who he is. He wouldn't do that. This is his character." There are all these in-person human traits we have, intuition, the way you vibe with somebody, that you can't get online. So I think a bullshit meter is good, and conferring offline is good. There's only so much you can do technologically to safeguard your information from being manipulated. But it also helps to have a radar for, "Hey, would this make sense?"

There's actually an article in The Atlantic about a Russian officer stationed outside Moscow, and supposedly a bunch of missiles were coming in from America. The radar said these missiles are coming, and he thought, "Wow, this is the start of a war." I read the article and found it fascinating, because he reasoned, "If they were going to send missiles, they wouldn't send just three or whatever it is." So he went to his superiors and said, "There's something wrong with this satellite. There's something wrong with this technology."

And so to answer your question, I think that's the bullshit meter: "Hey, this doesn't make sense." My view, at least, and I think they talk about this in the book, is to have a healthy sense of common sense and critical thinking, and not just jump to believing things you see online. And also to lengthen the time between when we get information and when we react to it. Taking that time allows us to decipher whether something is complete rubbish or not.

Rob Atkinson: Yeah, we've already learned that. There was that kid, I guess from that Covington high school, who was with his buddies down at the National Mall, and supposedly he was saying racist things to a Native American man. But it turned out the story was very different and more complicated. The initial version went around the entire country and really hurt him. He brought a lawsuit, and he won.

It seems to me we're just going to have to develop a bit more of a "let's not react right away" instinct with these things, and give it a little time to see if it's actually real. And that can happen with somebody posting a real video without the whole context, or somebody posting a fake video or fake audio. So some of it is going to have to be us, as a society, becoming a little more mature and getting better BS detectors, as you said.

Ryan Long: Well, I think it's also a question of our tolerance for partial truths and things like this. There's a cultural aspect to this: how much do you encourage this stuff, and how much do you discourage and punish it? If people are using deepfakes for criminal purposes, saying, "If you don't pay this ransom, we're going to release this video of you doing X, Y and Z," and I never did anything in the video but they can make it look like I did, that's basically as good as using a weapon to get money out of somebody. To me, that's the same as a regular crime.

I think the thing with Napster is that there was infringement going on, and the fact that technology is being used to effectuate the infringement doesn't make it any less infringing. So the problem, in my view at least, is that sometimes because technology is involved, people go softer on the rules that apply to what's going on.

Rob Atkinson: That's a great point. And with that, Ryan, we're going to have to wrap up. But it reminds me of that great Star Trek episode where Kirk is accused of pressing a button, and the camera footage showed he did it, but he says, "I didn't do it." Then he plays chess against the ship's computer and beats it, proving the footage was faulty and not a real image of him doing it. So Hollywood imagined an early deepfake a long time ago, and there are severe consequences if we don't get this right.

Ryan Long: Well, hooray for Captain Kirk. That's a good refresher; I've got to watch that episode.

Rob Atkinson: It's a great episode, as many of them are. Ryan, thank you for being here. This was really great.

Ryan Long: Thanks for having me. Have a great weekend.

Jackie Whisman: And that’s it for this week. If you liked it, please be sure to rate us and subscribe. Feel free to email show ideas or questions to podcast@itif.org. You can find the show notes and sign up for our weekly email newsletter on our website itif.org. And follow us on Twitter, Facebook, and LinkedIn @ITIFdc.

Rob Atkinson: And we have more episodes of great guests lined up. Most of them will be real, some might be fake, but we hope you'll continue to tune in.

Jackie Whisman: Talk to you soon.
