a16z VC Martin Casado explains why so many AI regulations are so wrong

The problem with most attempts at regulating AI so far is that lawmakers are focusing on some mythical future AI experience, instead of truly understanding the new risks AI actually introduces.

So argued Andreessen Horowitz general partner Martin Casado to a standing-room crowd at TechCrunch Disrupt 2024 last week. Casado, who leads a16z’s $1.25 billion infrastructure practice, has invested in AI startups such as World Labs, Cursor, Ideogram, and Braintrust.

“Transformative technologies and regulation has been this ongoing discourse for decades, right? So the thing with all the AI discourse is it seems to have kind of come out of nowhere,” he told the crowd. “They’re kind of trying to conjure net-new regulations without drawing from those lessons.” 

For instance, he said, “Have you actually seen the definitions for AI in these policies? Like, we can’t even define it.” 

Casado was among a sea of Silicon Valley voices who rejoiced when California Gov. Gavin Newsom vetoed the state’s attempted AI governance law, SB 1047. The bill would have required a so-called kill switch in super-large AI models, a mechanism to shut them down. Opponents argued that it was so poorly worded that, instead of saving us from an imaginary future AI monster, it would simply have confused and stymied California’s hot AI development scene.

“I routinely hear founders balk at moving here because of what it signals about California’s attitude on AI — that we prefer bad legislation based on sci-fi concerns rather than tangible risks,” he posted on X a couple of weeks before the bill was vetoed.

While this particular state law is dead, the fact that it existed still bothers Casado. He is concerned that more bills constructed the same way could materialize if politicians decide to pander to the general population’s fears of AI rather than govern what the technology is actually doing.

He understands AI tech better than most. Before joining the storied VC firm, Casado founded two other companies, including a networking infrastructure company, Nicira, that he sold to VMware for $1.26 billion a bit over a decade ago. Before that, Casado was a computer security expert at Lawrence Livermore National Lab.

He says that many proposed AI regulations neither came from nor were supported by many of the people who understand AI tech best, including academics and the commercial sector building AI products.

“You have to have a notion of marginal risk that’s different. Like, how is AI today different than someone using Google? How is AI today different than someone just using the internet? If we have a model for how it’s different, you’ve got some notion of marginal risk, and then you can apply policies that address that marginal risk,” he said.

“I think we’re a little bit early before we start to glom [onto] a bunch of regulation to really understand what we’re going to regulate,” he argues.

The counterargument — and one several people in the audience brought up — was that the world didn’t really see the types of harms that the internet or social media could do before those harms were upon us. When Google and Facebook were launched, no one knew they would dominate online advertising or collect so much data on individuals. No one understood things like cyberbullying or echo chambers when social media was young.

Advocates of AI regulation now often point to these past circumstances and say those technologies should have been regulated early on. 

Casado’s response?

“There is a robust regulatory regime that exists in place today that’s been developed over 30 years,” he said, arguing that it is well-equipped to construct new policies for AI and other tech. It’s true that, at the federal level alone, regulatory bodies include everything from the Federal Communications Commission to the House Committee on Science, Space, and Technology. When TechCrunch asked Casado on Wednesday after the election if he stands by this opinion — that AI regulation should follow the path already hammered out by existing regulatory bodies — he said he did.

But he also believes that AI shouldn’t be targeted because of issues with other technologies. The technologies that caused the issues should be targeted instead.

“If we got it wrong in social media, you can’t fix it by putting it on AI,” he said. “The AI regulation people, they’re like, ‘Oh, we got it wrong in like social, therefore we’ll get it right in AI,’ which is a nonsensical statement. Let’s go fix it in social.”
