Innovation Files: Where Tech Meets Public Policy
Should Section 230 Cover Algorithms? What’s at Stake in Gonzalez v. Google, With Ashley Johnson
Google doesn’t create terrorist propaganda videos, doesn’t allow them on YouTube, and takes them down as fast as it can when extremist groups post them anyway. But a question now before the Supreme Court is whether Section 230 of the Communications Decency Act protects Google and other platform operators from liability if their algorithms end up spreading harmful content. To parse the potential ramifications, Rob and Jackie sat down with Senior Policy Analyst Ashley Johnson, one of ITIF’s resident experts on Internet policy issues such as privacy, security, and platform regulation.
Mentioned
- Robert D. Atkinson, “A Policymaker’s Guide to the ‘Techlash’—What It Is and Why It’s a Threat to Growth and Progress” (ITIF, October 2019).
Related
- Ashley Johnson, “If the Supreme Court Limits Section 230, It Will Change the Way the Internet Functions” (ITIF, February 2023).
- Ashley Johnson, “Section 230 Still Isn’t the Solution to Conservative Claims of Social Media Censorship” (ITIF, December 2022).
Rob Atkinson: Welcome to Innovation Files. I’m Rob Atkinson, founder and president of the Information Technology and Innovation Foundation.
Jackie Whisman: And I’m Jackie Whisman. I head development at ITIF, which I’m proud to say is the world’s top-ranked think tank for science and technology policy.
Rob Atkinson: This podcast is about the kinds of issues we cover at ITIF, from the broad economics of innovation to specific policy and regulatory questions about new technologies. If you really like this, and I assume you do since you’re listening, be sure to subscribe and also give us a rating. Don’t give us a low rating. If you’re going to give us a low rating, don’t give us a rating. If you’re going to give us a high rating, you can do that. That’d be really helpful. Today we’re going to talk about Section 230 and the case of Gonzalez v. Google. If you don’t know what this is all about, you’re going to learn real quickly. It’s a really interesting and important issue, and a really critical case.
Jackie Whisman: Our guest today is the expert. She’s Ashley Johnson, a senior policy analyst at ITIF. She researches and writes about Internet policy issues such as privacy, security, and platform regulation. Welcome back, Ashley.
Ashley Johnson: Thanks for having me back again. I guess this means I did pretty well the first time.
Jackie Whisman: Yes, you passed the test.
Ashley Johnson: Excellent.
Jackie Whisman: But we’re talking about the same thing, so maybe we should start off with a refresher. What is Section 230 and why does it matter?
Ashley Johnson: So Section 230 of the Communications Decency Act is a law that was passed in 1996, and it deals with a concept called online intermediary liability, which sounds like very obscure legal jargon, but what it really deals with is who is responsible for the speech that goes up on the Internet. An online intermediary is any service or individual that hosts or shares content that they themselves did not create; somebody else created it. These days, it’s usually the users of the website who created it. This law that we have in the U.S. governing online intermediary liability draws the line on how much legal responsibility those websites and services have for the content their users post. And Section 230 says that, basically, they’re not responsible for the content their users created. If you boil it down to its simplest terms, the people who created the content are responsible for that content. It’s a pretty straightforward legal concept.
But it has created a lot of discussion and debate, especially in the past few years. The rise of social media has really revolutionized the way people share and communicate with each other online. We’re able to disseminate our viewpoints, opinions, and thoughts much more easily than we ever have been in the course of human history. That’s a really good thing for political discourse and for people finding niche communities. It can also be a really bad thing, because people can use it to share content that’s violent, illegal, or in some other way, quote, unquote, “harmful,” which is in the eye of the beholder, and which I’ll touch on later. But that is where a lot of the controversy comes from.
Section 230 has two main provisions, and together they ensure that online intermediaries, the services and websites that people share their opinions, thoughts, and other viewpoints on, are not liable when they fail to remove something that somebody might find objectionable or believe is harmful. They’re also not liable when they do remove something that they believe is objectionable or harmful and doesn’t fit their idea of what should be allowed on their platforms. And this is a very broad protection. It’s broader than the protections that exist in many other countries. Because of that, it has enabled a lot of innovation in the United States, where so many social media and other tech companies have flourished. It has also meant that the debate here about content moderation and how it should be done is very complicated and has been going on for a while. And it might come to a head very soon with the events of this year.
Jackie Whisman: And if Section 230 is so important for the modern Internet, which you’ve written about a ton, why do some people want to change or get rid of it?
Ashley Johnson: It’s definitely related to the phenomenon that you’ve written about a lot, Rob: the techlash, the backlash against the tech industry, and especially against companies that people would classify as big tech, which is a loose umbrella that people can sometimes put whatever company they don’t like into. And you’ve fact-checked a lot of the common criticisms associated with the techlash. A lot of them are related to this debate as well, which is not surprising; they’re related to, I would argue, almost every tech policy debate going on right now. There’s the concern over political polarization online and the creation of filter bubbles, the spread of extremism and hate speech online, the potential for harm to children online, alleged bias against conservative viewpoints online, and the alleged downfall of the news industry because of online news and social media. All of these ideas and many more are wrapped into people’s concerns around Section 230.
The way I make sense of all of that, and distill the main arguments against 230 down to one main point, is this: there are two main groups of people who want to fundamentally change or even get rid of Section 230. Unsurprisingly, those groups are, for the most part, split along party lines. Democrats, on one side, are more likely to blame Section 230 for the proliferation of content that they believe is harmful, which includes mis- and disinformation, hate speech targeting minority groups, radicalization and extremism, content that’s harmful to minors, and content promoting or depicting violence, to give a few examples. They believe that online services, and especially social media platforms, are not doing enough to filter and remove these types of content. So they want to change Section 230 to strong-arm these companies and services into removing more content.
And then on the flip side, you have Republicans, who are more likely to blame Section 230 for bias against conservative viewpoints, again especially focusing on mainstream social media platforms. They see prominent conservative figures getting their posts removed or their accounts suspended or banned, and they feel like social media platforms are encroaching on their free speech. And so their solution is the opposite of the Democrats’ solution. They want social media platforms to remove less content. They want social media platforms to be more of a haven for free speech, and that does sometimes include controversial content, and they think that platforms should be liable when they remove these controversial but legal forms of content.
Rob Atkinson: So Ashley, is this maybe a simple way to understand it? You talked about 230 having two components: one, it exempts the platforms from liability if they take something down, and two, it exempts them from liability if they don’t take something down, essentially. And it seems to me what the Democrats are saying is, “Well, let’s get rid of one of those, the one where we’re going to hold you liable for not taking something down.” And it seems like the Republicans are saying, “No, no, we’ll keep that one. We don’t like the other one,” which is that if you take something down, you’ll be liable in court. Imagine if we just picked one of those, where would we be? Can you talk a little bit about that?
Ashley Johnson: Yeah, absolutely. They’re two very different images of what the Internet should look like, and taken to the extreme, I don’t think most users, regardless of their political leaning, would like where either would most likely lead us. So if we get rid of the first provision in Section 230, the one that protects platforms when they fail to remove something that is potentially harmful, we’re most likely going to see online services, social media platforms but also many, many other services that rely on third-party or user-generated content and so rely on Section 230, we’re going to see them-
Rob Atkinson: Sorry to interrupt, but this would be things like, for example, TripAdvisor. I’m going to...
Ashley Johnson: Yeah, TripAdvisor.
Rob Atkinson: ... Florence, and somebody said, “Oh, I don’t like this restaurant.”
Ashley Johnson: Yeah, totally. TripAdvisor, any other review website, websites like Wikipedia that rely on user editing and contributions, any website that has a comment section. That covers so many sites, even smaller blogs and forums that allow visitors to comment. That’s all third-party content, and so it’s all protected by Section 230. So in this world where platforms are liable if they fail to take things down, we would see them becoming a lot stricter with their content moderation and removing anything that would potentially get them into legal trouble. Even if it’s not illegal content, someone could still sue them over it.
Plenty of nuisance lawsuits happen in this country all the time, and they’re very expensive. Even if those suits don’t have very strong legal standing, platforms are still going to want to avoid them, because that’s a big expense. So we might see platforms getting rid of any content that could potentially be controversial. The biggest downside of that is that a lot of it would include political content. Currently the Internet is, I would argue, the most important forum for political discourse in the modern world, and we really don’t want to lose that. It’s where a lot of underrepresented or minority groups have been able to find a voice. It’s where just about any average person can find a political voice.
Rob Atkinson: And then what about the opposite if we did the other one instead?
Ashley Johnson: On the flip side, we have another rather radical view of what the Internet might become if this approach were taken to the extreme. If platforms are liable for removing speech that’s legal but potentially harmful or objectionable to some people in some way, we’d see more of a Wild West version of the Internet. There are some people who want that to happen. But I think there are a lot of people who value having rules and terms and conditions on the websites and online services they use. They value being able to avoid super inflammatory content, potentially hateful content. They don’t want to see these types of things. They would rather platforms be able to set rules that clean up their feeds and make sure they’re not flooded with spam, harassment, and all the sorts of things most people would rather avoid but that are still legal forms of speech.
Rob Atkinson: So I just want to dig into that a little bit more. On TikTok, for example, or I think Instagram’s the same way, you can post a picture of yourself, if you’re a woman, in a very skimpy bikini, but you can’t post yourself topless or nude, and they will take down things like that because it’s not a porn site. Would this change make them liable if they take down some of that kind of pornographic or lewd, if you will, content? And would it make those platforms less family friendly, or am I wrong on that?
Ashley Johnson: I’m not sure exactly where the law would fall on that, because they do allow users as young as age 13 on their platforms. I think because of that, they would still be able to filter any content that is [inaudible 00:13:02]. So I think they would still be able to filter that kind of content, but the change would still apply to a lot of other content. I mentioned spam. Spam is perfectly legal, it’s very annoying, and I would argue that almost everyone hates it. That is the kind of content that platforms get rid of a ton of, and I would argue it’s important to the modern Internet that they get rid of it.
Rob Atkinson: Well, I’ve decided that my phone is basically a spam machine now, because I’d say literally 80% of the calls I get are marketing calls, even though I’m on the Federal Trade Commission’s do-not-call list. I have no idea what that does. So this is all super interesting, but there’s a case now before the Supreme Court that is going to tell us where this is headed, and that’s the Gonzalez v. Google case. Can you just tell us a little bit about that? What’s that case all about and what’s at stake?
Ashley Johnson: Absolutely. So for a long time we’ve seen, as I mentioned, these two political parties approaching Section 230 in two very different ways. It has, for the most part, led to a stalemate. We haven’t seen any major changes to Section 230 in the past few years, and it seemed like it was going to stay that way for at least the next few years, except perhaps on areas where Democrats and Republicans can reach an agreement, like possibly child safety; on just about every other form of online speech, they’re diametrically opposed. And then the Supreme Court announced that it was going to hear this case, Gonzalez v. Google, in which the family of a victim of a 2015 terrorist attack is suing Google, alleging that YouTube’s algorithmic recommendations led users to ISIS recruitment videos. ISIS was the group that carried out that particular terrorist attack.
And so they’re arguing that Google is partially responsible for the attack because its algorithmic recommendations on YouTube enabled ISIS to recruit members. According to Section 230, as it’s traditionally been interpreted by lower courts, Google isn’t liable for the ISIS recruitment videos on YouTube, since Google played no part in creating them. That’s how Google has successfully defended itself in the lower courts as this case has made its way up. But this is the first time the Supreme Court is hearing any case related to Section 230, so the justices don’t necessarily have to follow the legal precedent set by lower courts. They can decide the case entirely on how they view its merits.
And so the plaintiff’s argument that they’re bringing to the Supreme Court is that Section 230 doesn’t apply to algorithmic recommendation of content. Google might not be liable for the content of the ISIS recruitment videos themselves, but it should be liable, the plaintiffs argue, for recommending that content to users. And if the Supreme Court accepts this argument, it would be extremely significant. Because, as I alluded to, it would break with decades of precedent of how Section 230 has been interpreted by the courts.
Jackie Whisman: Why is so much of the criticism of Section 230 and social media focused on algorithms?
Ashley Johnson: I would say that a lot of people agree with the basic Section 230 principle that I outlined at the beginning of this discussion: that the person responsible for certain content, content that’s illegal or that harms someone, is the person who creates that content. That’s a simple principle, and I think it makes sense to most people. But a lot of those people who are critical of Section 230, or would like to see it changed, see algorithmic recommendations as a gray area. YouTube didn’t create the video that you’re watching, but maybe its algorithm recommended it to you, and you might never have found it unless that algorithm recommended it. So people view YouTube, or any other service that uses algorithms, which is most of them, as playing a less passive role than simply hosting content.
Underneath this more logical argument, I would say there’s also the reality of how many people view algorithms, which is very negatively. Most people don’t understand how they work, and there’s been a lack of transparency from a lot of online services that use algorithms. In some discussions and debates, on the extreme end of things, it seems like the algorithm is this mysterious dark force that knows everything about you and manipulates your actions. Again, this is just a very negative view that a lot of people have of algorithms, and many are nostalgic for the old days of the Internet, when content was mostly displayed in chronological order. But given the way today’s Internet functions, that wouldn’t be a very useful way of ranking content; there’s just so much of it these days that we do need algorithms. But there’s skepticism, and there’s also been a lot of fearmongering.
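[Editor’s note: To picture the contrast Ashley describes, here is a minimal, purely illustrative sketch in Python. The Post fields and the scoring weights are invented for illustration and are not any platform’s actual ranking formula; they simply show the difference between a chronological feed and an engagement-ranked one.]

```python
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass
class Post:
    author: str
    text: str
    created_at: datetime
    likes: int
    shares: int

def chronological_feed(posts):
    # The "old Internet" approach: newest content first, no judgment about relevance.
    return sorted(posts, key=lambda p: p.created_at, reverse=True)

def ranked_feed(posts, now=None):
    # A toy engagement-based ranking: recent, widely shared posts float to the top.
    # The weights below are arbitrary placeholders, not a real platform's algorithm.
    now = now or datetime.now(timezone.utc)
    def score(p):
        age_hours = (now - p.created_at).total_seconds() / 3600
        return (p.likes + 2 * p.shares) / (1 + age_hours)
    return sorted(posts, key=score, reverse=True)
```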
Rob Atkinson: So I love basketball. I’m 6’7”, I played basketball, and I watch these really good basketball videos on YouTube. I just click on them, and it keeps feeding me basketball videos, which is great, because if it randomly started feeding me hockey videos, I’d be like, “What the heck, man? I don’t like hockey. I like basketball.” So I love those algorithms, because they’ve basically figured out what I like and what I enjoy from what I continue to click on.
But I want to ask you a question: the algorithm would work both ways, potentially, on 230, because one of the things I know some platforms do, rather than ban something that’s really, really bad... I mean, there’s stuff that’s really bad that crosses a line, and then there’s stuff that’s bad but doesn’t cross the line. They just won’t promote it. So it’s still there, but you’re not likely to see it. If you get rid of 230, their ability to do that would potentially go away, so you might see objectionable stuff much more than you would otherwise. It’s the same idea as take down versus keep up, except with the algorithm it’s promote versus demote. And both can be useful.
Ashley Johnson: Absolutely. A lot of tech companies, trade associations, tech policy think tanks, and nonprofits have filed amicus briefs with the Supreme Court in this case. And one of them, Meta’s amicus brief, actually touches on something related to that, which is that a lot of platforms use algorithms for content moderation reasons, not just to feed you content that you might like, which, as you mentioned, is also a good thing. I love getting fed content that I like as opposed to content that’s completely irrelevant to my interests.
Rob Atkinson: Are you going to tell us what content you like?
Ashley Johnson: I like to watch a lot of baking and cooking videos. I don’t always bake and cook at the level that they’re baking and cooking, but it’s aspirational for me. Maybe one day I’ll be at that level, but in the meantime it all looks really delicious. So that’s what I’m into.
Rob Atkinson: Wonderful.
Jackie Whisman: And the cats on the Internet are a lot funnier than my cat.
Ashley Johnson: Yeah.
Jackie Whisman: All the cat videos I am fed are pretty great.
Ashley Johnson: Exactly. And so that is obviously a good use of algorithms. But another good use is that platforms use them to screen even for illegal content, not just harmful content they might want to demote instead of promote, but illegal content like the kind of terrorist content that’s at issue in this case. Meta made the point in its amicus brief that it uses algorithms to screen for terrorist content and remove it. So algorithms do a lot more than, I think, the average person realizes.
Rob Atkinson: First of all, there’s so much content out there that you can’t have purely human moderation. It’s impossible. Companies use algorithms to flag things that then go to a human content moderator for the final yes or no, but there probably aren’t enough humans on the planet to review everything, and we’d all go crazy if we had to. So the question then becomes, does it matter if... And secondly, algorithms aren’t magic. You can make an ISIS video without using the word ISIS. Maybe it’s just a video of somebody shooting a gun, and a sportsman or sportsperson who likes hunting and shooting gets fed that video. So I guess, does it make any difference whether the platform is trying its best or doesn’t care, in terms of how we should think about the Supreme Court case, the Gonzalez case? If you’re trying your best and the algorithm makes a mistake, it’s like, “Well, why should you be liable for that? You’ve tried your best.”
Ashley Johnson: That’s something that a lot of less well-funded organizations are worried about. I know Reddit and the Wikimedia Foundation have both filed amicus briefs, and Yelp as well, which is a website we don’t really think of as being big tech. It most likely doesn’t have the resources that a lot of larger, more established companies do. There have been amicus briefs filed by these, quote, unquote, “smaller companies” and organizations, and they are very concerned because, if something happens to Section 230, they don’t have as many content moderation resources as their more well-funded counterparts. They can’t afford to hire as many human moderators or to develop content moderation algorithms that are as sophisticated. So they’re very worried that intention won’t matter in the legal cases they’ll have to deal with if Section 230 gets gutted.
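[Editor’s note: As a rough illustration of the flag-then-review workflow Rob described above, here is a hedged Python sketch. The thresholds and function name are hypothetical placeholders, not any company’s actual system; the point is that automated scoring routes only uncertain items to human moderators.]

```python
def route_content(item_id: str, violation_score: float) -> str:
    """Toy triage logic: the 0.95 / 0.6 thresholds are made-up placeholders."""
    if violation_score >= 0.95:
        return f"remove:{item_id}"        # high-confidence violation, taken down automatically
    if violation_score >= 0.6:
        return f"human_review:{item_id}"  # uncertain, queued for a human moderator's final yes/no
    return f"keep:{item_id}"              # likely fine, stays up

# Example: a classifier that misses coded language might score a terrorist video low,
# which is Rob's point that algorithms are not magic.
print(route_content("video123", 0.42))  # -> keep:video123
```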
Jackie Whisman: We’re running out of time, but I want to ask: if the Supreme Court does rule in Google’s favor, will anything change? Is Section 230 still at risk?
Ashley Johnson: I don’t think the debate will end, even if the Supreme Court doesn’t change anything about how the law currently works. I’ll be able to breathe a little easier for the time being, and so will a lot of other Section 230 enthusiasts, but we’re still going to have these two political parties with split visions of the future of the Internet, and I honestly don’t know how we’re going to be able to reconcile that. I’m hoping there’s room for thoughtful and deliberate debate, and action, on areas where Democrats and Republicans can find common ground. We’ve seen some promising efforts at bipartisan compromise in Internet policy recently and in the past. So I think there’s room for compromise here too, and that is what I will be hoping for if the Supreme Court doesn’t end up changing Section 230 in a drastic way.
Rob Atkinson: Ashley, you’re a little bit too modest because you wrote an excellent paper on how to think about moderating political speech, and right now the companies don’t have a lot of guidance. What are they supposed to do? And there are some solutions you’ve proposed in that paper and we’ll put that up on the webpage when we’re finished.
Ashley Johnson: Yeah, absolutely. There are definitely areas where Republicans and Democrats can come together, I believe.
Rob Atkinson: Hey, this was really great, Ashley. Thank you so much.
Ashley Johnson: Thank you for having me.
Jackie Whisman: And that’s it for this week. If you liked it, please be sure to rate us and subscribe. Feel free to email show ideas or questions to podcast@itif.org. You can find the show notes and sign up for our weekly email newsletter on our website, itif.org. And follow us on Twitter, Facebook, and LinkedIn @ITIFdc.
Rob Atkinson: We have more episodes and great guests lined up. We hope you’ll continue to tune in.