
Innovation Files: Where Tech Meets Public Policy
The Case for Smarter AI Regulation, With Matt Perault
Regulating how AI is used—not how it's built—is the only way to protect innovation and give small startups a fair shot. Rob and Jackie sit down with Matt Perault, Head of Artificial Intelligence Policy at Andreessen Horowitz, to discuss the significant burden regulatory frameworks can place on smaller tech companies and the critical role of government in AI regulation.
Mentioned
- Ezra Klein and Derek Thompson, Abundance, (Simon and Schuster, March 2025).
- Robert D. Atkinson and Meghan Ostertag, “Congress Should Fully Fund NSF’s TIP Directorate to Make America More Competitive Versus China,” (ITIF, June 2025).
- Robert D. Atkinson, “2025 Bromley Memorial Lecture: US Science Policy at a Crossroads,” (ITIF, March 2025).
Related
- Matt Perault, “A Policy Blueprint for US Investment in AI Talent and Infrastructure,” (Andreessen Horowitz, March 2025).
Auto-Transcript
Rob Atkinson: Welcome to Innovation Files. I'm Rob Atkinson, founder and president of the Information Technology and Innovation Foundation.
Jackie Whisman: And I'm Jackie Whisman. I head development at ITIF, which I'm proud to say is the world's top-ranked think tank for science and technology policy.
Rob Atkinson: And this podcast is about all sorts of different technology issues and their relationship to policy. So, if you're into this stuff, be sure to subscribe and rate us.
Jackie Whisman: Our guest is Matt Perault, Head of Artificial Intelligence Policy at Andreessen Horowitz. He oversees the firm's policy strategy on AI and helps portfolio companies navigate the AI policy landscape. Matt recently released new policy proposals on behalf of the firm that outline how broad AI disclosure mandates impact small startups. That's mostly what we're going to talk about today, but I think we're going to expand it a bit so that we get a better understanding of your work. Thanks for being here, Matt.
Matt Perault: Thanks for having me on.
Rob Atkinson: If you live under a rock and you don't know what Andreessen Horowitz is, it's one of the leading Silicon Valley venture capital firms, really doing amazing work. They support startups, but also think about startup policy, which I think distinguishes them from a lot of other VC firms that kind of have their head focused only on the Valley. I really enjoy reading Marc Andreessen's prolific posts and other social media; very insightful. And Matt, really glad to have you here.
Matt Perault: Thanks so much. It's a good place to start. The focus for us really is on figuring out a policy agenda for "little tech". As you described, some people see a mismatch between our firm being large and the companies in our portfolio, and how we describe our work. We are a large venture capital firm. So, I think there are people who say, "How can this large firm be a voice for little tech?" From our perspective, the challenge in terms of the political economy of little tech is that there aren't that many voices who support small startups. You guys obviously do, and there are others out there. Engine is a wonderful organization. Obviously, Y Combinator does this work too. But there aren't that many voices for little tech firms.
So, what we see repeatedly, I think, are policy frameworks designed to regulate various different parts of the tech ecosystem, including AI, in ways that we think are disproportionately harmful for little tech. Sometimes they advantage large tech companies. Sometimes they disadvantage them, but disadvantage them less than they would disadvantage smaller companies. Our goal as a firm is to try to ensure that there's a level playing field so that small companies, which are already going to struggle to compete with larger tech platforms, are able to compete as effectively as they possibly can.
Rob Atkinson: It's such an important issue because smaller firms don't have the money or the time; they're just trying to meet payroll and do their work. They really don't have the resources to come to Washington to advocate. We certainly see that in other countries that have put in place policy regimes purportedly meant to rein in large tech, but that end up causing so much collateral damage. We see it in Europe, where recent studies from good, objective, independent European academics have shown how harmful some of these policies have been to startups. That kind of gets lost a lot of times in the dialogue.
Matt Perault: I think that's exactly right. I assume you're referring primarily to the work that's been done around privacy regulation and GDPR?
Rob Atkinson: Yep.
Matt Perault: I heard a presentation a couple of weeks ago from a competition law scholar who said there are many studies showing that as a result of GDPR in Europe, there's been a decrease in VC investment and a decrease in startup formation. The way that he framed it is there's literally no study on the other side that has suggested that GDPR has actually led to more VC investment in Europe and more startup formation.
Europe has kept going with other laws that we would expect—and who knows if this will be right or not—but we would expect would have similar impacts on market concentration, like the Digital Services Act, which is focused on content moderation. And certainly, I think the EU AI Act that we're concerned about in terms of the impact that it would have in Europe, but also concerned about the way that model might get adopted into various different regimes in the United States at the federal level.
In your last comment, you talked about the ability of startups to fly to Washington, D.C., but as we know, policy isn't just made in Washington, D.C. It's made in 50 state capitals. In our experience, founders struggle to find the time to do that and to ensure that they can advocate effectively for their interests. They certainly struggle to travel to 50 state capitals and try to ensure that state regulation isn't making it harder for them to compete.
Actually, I think that's been one of the most interesting and also most challenging components of my job so far: figuring out how to be as effective a resource as I possibly can be to the little tech companies in our portfolio. There are several reasons it's quite challenging. The main one is that I'm the head of AI policy, but most of the firms that we work with might not have a head of policy, let alone someone who's focused just on AI regulation. Some of them don't have a general counsel; in fact, many of them don't.
So, the companies I've ended up doing the most direct work with so far are the companies that have more built-out teams, so they have someone who's closer to an equivalent of me. I think that makes structural sense, but I'm not sure it makes sense from a long-term policy perspective. Companies that are really small might not have a general counsel or a head of policy. It's conceivable that they're going to be impacted by whatever regulatory environment is created for AI, but not only do they not have a voice in the policy conversation, it's even challenging to figure out how to staff a call with one of their investors. I think that just structurally makes it really challenging and puts them at a significant disadvantage in ensuring that the regulatory regime we get in an area like AI is one that's workable for them.
Rob Atkinson: Absolutely.
Jackie Whisman: One thing that you proposed recently is a policy framework that starts with the principle: "regulate use, not development," which really resonates with us. Can you unpack why that distinction is so important, especially for the smaller tech companies and the broader tech ecosystem?
Matt Perault: Let me start with the second part of that—regulating development. I haven't done this, and I don't know if you guys have, but if you looked at all the different AI proposals and classified them based on whether they regulate use or development, my guess is that 80-plus percent regulate development, maybe higher than that, 90-plus percent. Development-focused regulations are regulations that impose various requirements on the development process.
I was going to say "burdens," and I don't want to necessarily use a term that sort of suggests that it's necessarily negative, but from our perspective, if you're introducing requirements on the development process, you are burdening it. It essentially functions as a tax on AI development itself.
And if AI development were problematic in most or all cases, then we might want that; we might want to tax the math and science of creating and building AI. But AI development is not necessarily problematic. And actually, if you asked at a very high level, "Is it the policy agenda of 50 state governments to have more innovative, more aggressive AI development or not?" I think most policymakers would say, "We want more". But the concern, I think, is "we want more of this, but we do want to address harmful use cases".
There are cases where AI could be used in a harmful way. The odd thing about focusing a policy agenda on regulating development is that it doesn't really target harmful use. The idea behind regulating development is: people are building this technology, and as they build it, harmful things will come from it; so what we can do is make it harder to develop the technology in ways that might be problematic, which will therefore cause less harm. It's conceivable that would be the case. But as I'm describing, you don't just dampen the harmful development, you dampen the beneficial development as well.
So, I think the question is, is there a way to target harmful use without getting in the way of the productive development that we would want to see? From our standpoint, that is a tool and a perspective that's available to policymakers. And that means focusing on enforcement of existing law where AI would be used in a harmful way. The former chair of the FTC made the point that we agree with: that there's not an exception to existing law for AI. So, if a person or a company uses AI in a way that violates consumer protection law, that would be a violation of consumer protection law. The same goes for civil rights law. The same goes for antitrust law. The same goes for criminal law. And I think importantly, there's federal criminal law, but the bulk of criminal laws are at the state level.
So states, I think, have an important role to play in regulating harmful use as well. From our perspective, the primary initial thrust for lawmakers should be focusing on ensuring that existing law can be enforced against harmful uses. I think we actually want to make sure that enforcement agencies at the state and federal level are well-equipped to do that enforcement. And that means having the personnel that they need, it means having the financial resources that they need. I think importantly, it means having the technical understanding that they need in order to enforce existing law.
Of course, it's conceivable also that existing law wouldn't be sufficient, and if existing law is not sufficient, then we think it's important for policymakers to think about how to address marginal risk that's created by AI, but focusing the new law on harmful uses that are related to AI. So to the extent AI is a contributor to problems in deepfakes or fraud or other kinds of harmful activity, punish those harmful uses. Don't make it harder to develop the underlying technology.
Rob Atkinson: You mentioned they regulate development, and a lot of proposals want to do that, and in a lot of ways it's just bizarre. Can you imagine NHTSA at DOT regulating how cars are made and they say, "We're sorry. You can't use arc welding; you have to use gas welding"?
Matt Perault: Right.
Rob Atkinson: No, what they say is, "The joint has to be this strong". You don't worry about how the metal is put together. And the reason for that is because, A: the government can never know enough about that. And B: maybe there's a third way of doing welding that's even better. And AI to me is such a dynamic field. Every day it changes. So, I don't see how the government could ever figure out the right way to do development.
Matt Perault: I think it's challenging. And again, initially I thought of that as a way to just leave space for development. That's what I thought the primary thrust of that direction of travel would be. But the more I thought about it, the more I thought it's not just about leaving space for development. It's also that if you're serious about harmful use of AI, you wouldn't regulate development. Regulating development, I think, is problematic from our perspective for two reasons. One, it makes it harder for little tech companies to build AI tools. And two, it's not the most efficient, direct, or helpful mechanism for protecting consumers, nor the one most likely to succeed.
Rob Atkinson: Yeah, it'd be like saying we're going to regulate traffic safety by regulating welding rather than having speed limits.
Jackie Whisman: That's right. I was just going to ask on that point, like, what are the most critical things the government needs to get right when it comes to federal AI policy?
Matt Perault: So, I think the primary focus for the federal government needs to be making clear that there's not an exception to existing law for AI. And then I think it's important to ensure that federal and also state enforcers have the resources they need to make that real, to ensure that you can enforce existing law when it comes to harmful uses related to AI. Then there are, I think, a lot of structural things that the government can do to help make it easier for little tech companies to compete.
And the position of our firm is not that little tech companies deserve a special handout. The perspective is instead, let's try not to make it more difficult for them to compete in an area where I think competition is already really challenging. Even in a world where there's a level playing field, AI is an industry with high barriers to entry. You have significant data needs, you have significant talent needs. You need the engineers in order to build the AI tools; you need access to compute and energy in order to build an AI model. And so that on its own is challenging.
There are other things I think the government can do that help to level the playing field a little bit there. So, we suggest, for instance, the creation of a national AI competitiveness institute that would provide access to data and compute resources for researchers and government entities, and also potentially for startups.
Rob Atkinson: This is very much attuned to our thinking. We've actually been writing about AI policy for over a decade, and from the very beginning, we said, "Don't regulate inside the engine, regulate how it's used". If you want it to be used in a certain way, then that's how the law should state it. And related to that is the role of government as an enabler. That's one of our big criticisms of the Europeans: they see this as a Gulliver situation, where you've got to tie the monster down, as opposed to your point that there are already existing laws out there. If you want to use AI to give somebody a loan, and it has a bias against a protected class —
Matt Perault: Yep.
Rob Atkinson: —you are going to get prosecuted. The law says you can't do that. You can't do it personally. You can't do it with a spreadsheet. You can't do it with AI. The $64,000 question as we speak today, which is June 25th: Congress is debating in the Big Beautiful Bill a 10-year moratorium on state-level AI regulation. And some people love it. Some people hate it. The motivation for it is particularly for small companies, but also for us. I think of us, having to comply with 50 different privacy rules as a not very big think tank. So there the spirit is to say, "Wait a minute, let's not have 50 different rules for a startup to comply with". What are your thoughts on that?
Matt Perault: We wrote a piece a couple of months ago thinking through the federal role and the state role in AI governance. I think sometimes when people talk about preemption, that can sound like—and sometimes people actually literally say—it basically means the federal government should act and states should not, and that's not the way we think about it.
We think that it's important for states and the federal government to each play the role that the Constitution assigns to them in AI governance. And in general, I think that means the federal government should play a role in governing the national AI market. That's core to Congress's competency, to regulate interstate commerce. But that doesn't mean states should do nothing.
I think it's important that states actually police harmful conduct within their borders. There are some areas where it's clear that states have primary competency, such as criminal law; the overwhelming majority of the body of criminal law is at the state level. And it's important that if AI is used in ways that violate state criminal law, people are held to account. I think that's important not just in those individual use cases, but for the overall long-term health of the technology.
One thing that I didn't know coming into a venture capital firm is how long-term our interests are. Our funds have a 10-year life cycle, which means we're aiming for this industry to be successful over a long period of time. It doesn't work from our point of view if there's some sort of short-term boom and then people start to be really scared of the technology, they experience a lot of harms associated with it, it's not a good user experience, and then the market crashes. That's not going to work in terms of our fund performance.
I think states being active in policing harmful conduct within their jurisdictions is important. But as you're describing for a small think tank, it's certainly true for most tech companies, and particularly the smaller ones, that trying to navigate a 50-state patchwork is really challenging. Sometimes people think about it as an abstraction. It's not an abstraction in AI. There are over a thousand bills introduced, I think, in this set of state sessions. A significant number of them have been modeled in some form on the EU model for governing AI. There are things that are closer to the EU model and things that are further away from it, but many include a lot of the same elements.
That model requires, I think, a pretty significant investment of compliance resources. You have to do things like make determinations about whether you're engaged in high-risk, consequential decision-making, or whether the model that you're developing poses an unreasonable risk of critical harm. You have to make these kinds of assessments, and then, in many of the bills that have been introduced, engage in a fairly robust compliance process.
So, you have to do impact assessments. Sometimes you need a third-party body to help you with the impact assessments, to engage an auditing function to ensure that you've done your impact assessments correctly. That's burdensome for little tech, and it's disproportionately burdensome for little tech.
If that were allowed to continue, it would make these markets more concentrated and it would make it harder for little tech to compete. I think that's clearly a problem, and I think it's great that Congress is taking some action to address it. There are many different ways to do it. And the main thing people miss about the moratorium is that it actually does include a fairly broad exception: the exception is for AI laws at the state level that are generally applicable.
That would mean, for instance, if there is a violation of an unfair and deceptive trade practices law at the state level where AI was used as part of that violation, an attorney general could enforce that law against a perpetrator. And I think that is a really important exception that brings this model a little bit closer to the way that we outlined it in the piece about the roles of the state government and federal governments.
Jackie Whisman: On that kind of same topic, government procurement is also a huge topic here, especially for the smaller tech companies trying to break in. What changes do you think would make it easier for government to access the best systems and also level the playing field for companies of all sizes?
Matt Perault: We've talked a little bit about that in response to the White House's call for comments on a national AI action plan. The basic idea, I think, is making sure that little tech companies are able to compete alongside larger, more established players in the procurement process. Primarily that means streamlining the process so that companies that can devote only limited resources to the administrative and bureaucratic overhead needed to compete in the procurement process are still able to compete.
Rob Atkinson: Matt, my son did a Silicon Valley startup, not funded by you guys, I have to say. It was another firm. Began with the letters KP. And he had a great time. He was doing computer software stuff, and it was just fantastic experience, and he ended up deciding he wanted to get his PhD. So he applied for and received an NSF fellowship to do machine learning. One of the problems now is that, again, I don't say this in a partisan way, but because of the funding cuts at NSF, they've cut their PhD fellowships, I believe, in half.
This is hard stuff and it's still early, and we need people, we need PhDs to be able to do this. So, I'm just curious, how do you see that? I mean, we certainly rely on and should on foreign talent, but there's this backlash against that. What are your thoughts on that, and particularly how it relates to startups? Maybe their attraction is, "Hey, we're more flexible, we're more innovative, you've got a little more freedom, and maybe you've got options". But they just certainly can't pay as much initially on average as say, a larger company can.
Matt Perault: I think it's an important question to examine, and it's something that we've been concerned about as well. I thought it was encouraging that the NSF put out a request for comments related to AI research and development. So, what would be the right way for the NSF to think about a research agenda for AI? And we submitted comments in that process, and we haven't yet seen what the NSF is going to develop in response to it. But I'm hopeful that they will be responsive to some of the concerns that you're expressing now. And I think there are a whole bunch of components of it.
Obviously, ensuring that there's sufficient talent is important. We suggest ensuring that there is opportunity for research and development really at the frontier, focused on foundational and disruptive AI research. There's been lots of discussions about open-source tools and the role that open-source tools can play in innovation and competition. We think it's important that researchers can focus on that.
There's also a lot of stuff on AI governance that's interesting as a potential research agenda. I was actually frustrated in my academic work. I was previously at UNC and before that at Duke. It felt to me like there were relatively few academics doing empirical research on various governance regimes and trying to evaluate their effectiveness. Like we talked about earlier with GDPR, it would just be so helpful if there were more studies looking in a detailed way at GDPR's impact in Europe. And my hope would be that there would be similar ones for other governance regimes in European tech policy too, like the Digital Markets Act and the Digital Services Act, so that we can learn some things about how those different regimes play out in practice.
Hopefully that would mean the next time they're implemented, they'll be implemented more effectively, or that Europe would consider shifts to those regimes if they learn things about how they could be implemented better. And so, I think NSF could play a role in funding that kind of research. And then there were two other components of our comments as well. One was focused on the use of AI in government itself, which I think is really important, figuring out how to use AI in a way that creates government efficiencies.
And in particular, I think it enables the government to deliver services more effectively to people. And then the final thing is thinking about improvements in research design. I recently read, as I think many people have, Ezra Klein and Derek Thompson's book Abundance, and there's a really interesting chapter there thinking not just about research on its own terms, but about research on research. How do we think about a research agenda that's likely to be more effective? And how do we apply research methods to the process of research itself so that government funding yields stronger, more effective research? And I think it would be helpful if NSF were focused on that as well.
Rob Atkinson: Yeah, NSF has a program that they put in place, but also Congress in the CHIPS and Science Act called TIP—Technology Innovation Partnerships. We actually have a report coming out on it, and it's a good program that tries to do some of that.
Just a quick comment before we wrap up. I was talking to a colleague of mine who's head of a CS department in the US, and one of the things he was lamenting was that salaries are so high now and the supply is limited. A lot of faculty end up looking, as perhaps you have done, to the private sector. His point was that we need longer-term commitments to AI research centers for faculty, so they'll know that for five years they're going to get a good amount of money and be able to train their PhD students. I think that's something we should be thinking of. And you can speak to this, Matt: a lot of academics want to stay in academia because the work is so interesting, but if the funding is so insecure, et cetera, et cetera.
Matt Perault: Yeah, I have mixed feelings about this. Part of what you're describing feels to me like the kinds of things that you and I worry about when we think about antitrust law, generally, about incumbents and complacency. I'm not sure that guaranteed locked-in funding, independent of performance and essentially shielding institutions from competition, is a positive thing.
Rob Atkinson: Sorry, I was arguing you would have to go through a competition to win this. So for five years, or three years or four years, you would be able to have a program and then it would be rebid.
Matt Perault: I guess there are different mechanisms for it. My experience in academia suggested that there are really important, meaningful reforms that need to happen in academia to create stronger incentives to produce research that is oriented around real-world impacts, and specifically more focused on how to create good tech policy.
I just think there is actually a relatively small amount of academic research that's focused on that question. And traditionally there's been a trade-off for people pursuing academic careers where they know the salaries are not going to be competitive with the private sector, but they're going to have a significant amount of freedom to do research and writing and thinking. People self-select based in part on whether that trade-off seems appealing to them. That still seems generally right.
When you go into an organization, you are operating on behalf of the organization and there's a decrease in freedom that happens as a result of that. And I think researchers who really want to pursue completely independent research—there's some of that certainly that happens at tech companies, but primarily that's going to happen in academia.
I think that still continues to be the primary incentive. If there are ways to spur more interesting, dynamic research, and part of that is providing some protection to academics who engage in it, that makes sense to me, but I'm wary that the current conversation about academic reforms is somewhat more focused on protecting a prior business model.
Rob Atkinson: A hundred percent. That's something we've been pushing. I actually just gave the Allan Bromley Memorial Science Policy Lecture, which said exactly that. Matt, we are out of time. This is great. We could do this for another 30 minutes, and as we were talking about earlier, we could do a podcast just on biking.
Matt Perault: I am looking forward to that one.
Rob Atkinson: This is great. Thank you for being with us.
Matt Perault: Thanks a lot.
Jackie Whisman: And that’s it for this week. If you liked it, please be sure to rate us and subscribe. Feel free to email show ideas or questions to podcast@itif.org. You can find the show notes and sign up for our weekly email newsletter on our website itif.org. And follow us on X, Facebook, and LinkedIn @ITIFdc.
Rob Atkinson: We have more episodes and great guests lined up and we hope you'll continue to tune in.
Jackie Whisman: Talk to you soon.