Can legislation combat the rise of non-consensual deepfake porn?

Deepfake videos are spreading rapidly, powered by advanced open-source artificial intelligence models. MIT researchers have found that many of these videos are non-consensual porn targeting celebrities like Taylor Swift. Now, though, even high school and middle school students, predominantly girls, are being targeted. UCLA professor John Villasenor joins The Excerpt to examine legal and technological efforts to stem this surge of illegal content. We discuss the challenges of detecting AI-generated images, the importance of international cooperation, and offer practical advice for parents to protect their children from cyber sexual violence.

Press play on the player below to listen to the podcast and follow along with the transcript. This transcript was automatically generated and then edited for clarity in its current form. There may be some differences between the audio and the text.


Dana Taylor:

Hello and welcome to The Excerpt. I'm Dana Taylor. Today is Wednesday, October 30, 2024, and this is a special episode of The Excerpt.

Deepfake videos are nothing new. What is new is their prevalence. Advanced open-source AI models are now available to anyone, anywhere. According to researchers at MIT, the vast majority of these videos are non-consensual porn. Big celebrities like Taylor Swift are popular targets, but so are high school and middle school students, nearly all of them girls.

Unfortunately, there is no way to put the AI-generated deepfake genie back in the bottle. Is there a way to combat this wave of harmful, and in many places illegal, content? To help me unravel this complex and rapidly evolving story, I'm joined now by John Villasenor, professor of electrical engineering, law, public policy and management at UCLA. John, thank you for joining me on The Excerpt.

John Villasenor:

Thanks so much for having me.

Dana Taylor:

Let's dive right in, starting with how various government bodies are trying to combat the spread of non-consensual AI-generated porn. In California, where the vast majority of AI-focused companies operate, Governor Gavin Newsom has signed 18 bills to help regulate the use of AI, focusing specifically on AI-generated child sexual abuse images. In August, the San Francisco City Attorney filed a lawsuit targeting 16 separate websites that let users create their own porn. A lot of laws are being passed to address the problem. Is it enough, and will it work?

John Villasenor:

It's still early days. Legally, there's still a lot that people talk about doing but haven't actually done. And there are really several questions here. There's a technology question: will it work in that respect? And then there's sort of the legal question.

I think I'll start with the technology question. As you say, it's true that technology has made it easier to create deepfake videos, and these videos can be used for benign purposes, like making a documentary with a very realistic depiction of Abraham Lincoln. They can also be used for terrible purposes, like some of the ones you mentioned. One of the challenges is that it can be difficult to find the people who create this content, and even when it is illegal, it can actually be really difficult to get a handle on it.

And on the legislative side, of course, these bills have a very important goal, which is to reduce the very, very problematic uses of this technology. But one of the challenges will be that they are subject to court challenges, not because people are against the goal of addressing a particular use of these technologies, but because sometimes, and I haven't gone through all the details of all of these laws, with technology regulation there's a risk of writing a law that solves the problem you're trying to solve but also causes some sort of collateral damage to other things, and so it could be open to some legal challenges.

For example, with deepfakes involving political information, I know the anti-deepfake law addressing those has already been subject to a legal challenge.

Dana Taylor:

International cooperation to rein in bad actors in the AI deepfake space is clearly an important aspect of this fight. Will existing crime-fighting, technology-focused alliances be effective here, or do we need new infrastructure and new agreements?

John Villasenor:

There is plenty of infrastructure for combating international crime; it has a history going back decades or more. I think the challenge is just the technology itself, right? Someone can post something on the internet, and it may not be obvious where it was created or where that person is.

So, for example, you could have a person in one country who makes a video and puts it on a server in another country, while the person depicted in the video is in a third country. There may be three countries involved, and it can be difficult to figure out who is behind it. Also, this can turn into a bit of a game of whack-a-mole, right? If it's removed from one server, someone can put it on a different server in a different country.

And that can be very difficult to keep track of. Especially at the volume you're likely to see, you can go after one of these videos, but if there are hundreds or thousands of them, all the alliances in the world won't necessarily be enough to keep up at the pace you'd actually want.

So I think the long-term solution has to be automated technologies, used and hopefully run by the people operating the servers where this content is hosted. I don't think any reputable social media company wants this kind of content on its site, so it's within their control to develop technologies that can detect and automatically filter out some of these things. And I think that will go a long way toward mitigation.

Dana Taylor:

This podcast has a large millennial audience; many of them may have young children and be justifiably concerned about deepfake porn. John, how can parents protect their children from cyber sexual violence, or can they?

John Villasenor:

There's no perfect measure, but I definitely think it's a good thing for everyone, especially young people these days, to know how to use the internet responsibly and be careful about the kinds of images they share online. And of course, it goes without saying that you don't want people sharing sexually explicit images of themselves on the internet, but even with images that don't cross the line into being overtly explicit but come close enough to it that it wouldn't be that hard to alter them, it's worth being aware of that kind of risk.

But I think there's a broader change, and maybe this is naive, but hopefully as there's more education about the harms of this type of content... I mean, there are some bad actors, and there will always be bad actors, but I think some people, with some education, would be less likely to participate in the creation of such videos... or the dissemination of such videos. Again, this isn't a perfect solution, but it could be part of the solution. So there's education on one hand, awareness on another, and third, the companies themselves having better automated tools to detect these things. While none of it is perfect, I think there can be real progress with those three things coming together.

Dana Taylor:

WIRED magazine recently published an article about how deepfake detection tools, including those that use AI, are failing in many cases as AI-generated videos become increasingly sophisticated. Is this an endlessly repeating game of whack-a-mole, as you put it?

John Villasenor:

Yeah, no, that's a great point. You're right, it's kind of an arms race, and the defense is always a few steps behind the offense, right? In other words, say you build a detection tool that's good at detecting today's deepfakes; tomorrow someone will have a new deepfake creation technology that's even better and can fool the existing detection technology. So you update your detection technology to catch the new deepfake technology, but then the deepfake technology evolves again.

So with these detection technologies you will always be a few steps behind. That doesn't mean they're not worth investing in, because again, if you can detect 85 or 90% of this content, that's much better than detecting none of it, right? So it's still a good idea to have these detection technologies available. But it's also important to be realistic and understand that they will never be perfect; they will always be a little behind.

And there's another risk on the other side, which is the trade-off between false negatives and false positives. There's the possibility that some of these detection technologies will mistakenly flag content as a deepfake when it isn't actually one, which is clearly something you want to avoid. I'm thinking more of a political context here: you don't want a real video of a real speech by a politician to be flagged as a deepfake. That's another kind of trade-off that people building detection technology need to be very careful about.

Dana Taylor:

As you said, deepfake videos are not only about child sexual abuse and revenge porn; they have also infiltrated the political world. And as they say, you can't unsee a video that influences how voters view a candidate. Are there any tools that can help here?

John Villasenor:

I think they're the same types of tools. A deepfake detection tool should also be able to detect a deepfake of a politician, so the same tools are useful. The challenge in the political context is that deepfakes can do their damage very quickly.

Let's say someone makes a deepfake showing a politician saying something they never actually said. It may take days for a detection system to step in, identify it as a deepfake, and get it taken down. But if 500,000 people have seen it by then, maybe only 50,000 of them will ever read that it was actually a deepfake. So you still end up with 450,000 people who saw it, never heard it was a deepfake, and may believe it's real. That's one of the challenges deepfakes pose in the political context.

Dana Taylor:

John, you’ve covered many different aspects of artificial intelligence, including issues related to law and public policy. Where do you think the debate is heading on how to rein in deepfake porn?

John Villasenor:

I think the conversations are more mature and advanced now than they were a year ago. Unfortunately, because this has been happening so much more in the last year or two, there's a lot more awareness about it. One result of that awareness is that legislators, policymakers, parents, and young people are much more aware than they were a year or so ago that this phenomenon exists. I'd like to think that will yield some good results in terms of better detection technologies, better awareness among policymakers, and hopefully a dramatic reduction in the amount of this content that gets out. But with technology, I've learned not to predict the future, because it's very difficult to predict where technologies will go. So I don't know.

Dana Taylor:

Thanks so much for joining The Excerpt, John.

John Villasenor:

Thank you.

Dana Taylor:

Thank you to our senior producers Shannon Rae Green and Kaely Monahan for their production assistance. Our executive producer is Laura Beatty. Let us know what you thought of this episode by sending a note to [email protected]. Thanks for listening. Taylor Wilson will be back with another episode of The Excerpt tomorrow morning.

This article first appeared on USA TODAY: Can legislation combat the rise of deepfake porn? | The Excerpt