What legal questions are raised when someone shares an altered image? What if the White House shares it?

Some image alteration is protected under free speech. But the issue gets more complicated when the farce is harder to spot and when due process rights are in question.

By Laura Rice | February 11, 2026 8:49 am

The Trump administration often uses manipulated photos to make a point.

Take the image of President Trump walking alongside a penguin. Another image involving the Obamas and a racist trope has generated far more attention.

But one you may not have heard of happened a few weeks back. It was an image of a female protester being arrested in Minnesota.

Unless you compare the manipulated photo posted by the White House account on X with the real image, which was shared by Homeland Security Secretary Kristi Noem, you might not have known that the image was altered at all. That’s what The New York Times discovered when it ran both photos through an AI-detection system.

To help us understand what this situation raises in terms of larger issues, Texas Standard turned to Kevin Frazier, inaugural director of the A.I. Innovation and Law Program at the UT School of Law. Listen to the interview in the player above or read the transcript below.

This transcript has been edited lightly for clarity:

Texas Standard: We should talk about that manipulated image for those who haven’t seen it. This was of an ICE protester being arrested in Minnesota. How would you describe the difference between the manipulated image and the original?

Kevin Frazier: So in the original image, we see a woman who is fairly composed and fairly stoic going through a building and doesn’t have a sort of emotional salience to the photo itself.

But the altered image instead suggests someone who is in emotional distress and crying. And so there’s definitely a difference in the emotional tenor of these two photos.

It seems like there are many issues, ethical issues, legal issues that are raised by image manipulation, depending on what’s being manipulated, how it’s used, where it’s published, who’s putting it out there, all those sorts of things.

With an image like this, where do you begin thinking about what lines were crossed here?

Image manipulation is a subset of free speech and that, in and of itself, is always going to lend itself to what any good law professor would say is a “it depends” analysis. As you mentioned, there’s so many contextual factors that we have to pay attention [to] here.

So, first and foremost, one of the questions we’ll have to raise is what extent of manipulation did we actually see? Listeners will probably know that when they are using their own phones and uploading photos to social media, they’ll see little buttons where they can improve their own image or alter it in subtle ways. And so even drawing the distinction as to what qualifies as actual image manipulation, the kind that warrants further inquiry, is a threshold issue that’s very difficult to parse out.

But in terms of this specific image, what we really have to be attentive to here is the fact that this is an image of someone who was arrested and perhaps going to be subject to criminal sanction by the government.

And so this is obviously a highly sensitive matter in which somebody’s due process rights, for example, may be implicated, because potential jurors or even the judge themself may have seen this image and may, as a result, be biased.

We’re basically talking about, what, libel? I guess slander would be the verbal defamation, but we’re talking about the publication of something that is obviously false because we’ve seen the original image.

We know that this was published on X. This could cause damage to the subject’s reputation, no?

Yes, most certainly. And I think, to your point, we have to pay attention to the fact that we’ve seen common law protections of an individual’s reputation exist since arguably the dawn of the law itself.

Today, what we have to ask about is: Does the image, for example, depict a real person? Is it a real event? Is there some degree of undisclosed manipulation? And, as you raised, what degree of reputational harm are we seeing?

The president is saying this was parody, it’s obviously farce. And I think that, generally speaking, with legal defenses to libel claims, opinions, parody, fair comment on matters of public concern, those are all considered protected by the First Amendment, no?

When we’re talking about parody or satire or, in particular, political speech, that’s often where we see a particularly wide berth being afforded by the First Amendment. And so, when we talk about manipulated images, we can think all the way back, for example, to cartoon depictions of candidates, where we start to have questions about what is proper speech in that domain.

Now I’ll flag that in other contexts – for example, in California – there was a law passed there that tried to really limit the promulgation of AI-manipulated images right in the lead-up to an election. And that law has been challenged in court as an infringement on free speech rights, because delineating the context in which someone may alter an image and share that image before an election raises a lot of thorny questions that we’re not sure should be litigated, for example, by courts, but maybe something that has to be hashed out by you and me and by voters in a state-by-state fashion.

Are we talking about AI in a way changing or altering libel law as you see it, or are we talking about image manipulation, when it comes to defamation cases, not really having that much of an impact?

So what I would suspect is that we’re going to see different courts take different approaches to these questions.

And one of the underlying trends that I think will be particularly important to pay attention to is the extent to which we see AI labs themselves – think OpenAI, think Anthropic or Google – make tools available to help identify when an image has been manipulated.

If you as a user, for example, take a screenshot or otherwise try to remove some degree – some indicator, what’s often referred to as a watermark on that image, that it was AI-generated – then we could start to see courts approach this question with a little bit more nuance because that would show that a user took additional steps to try to shield the fact that they had generated that AI image.

I think a lot of listeners may be thinking, “All right, I want to know the bottom line: Who’s going to be liable for putting a manipulated image out there?” And what I hear you saying is a lot depends on the context.

It’s very much going to be contextual.

Some would say, well look, no president has put out something that is as clearly or patently false as something like this particular image before. That this is certainly new.

Would the president or the White House be held accountable for libel or defamation in a case like this as you see it?

So as I see it and, unfortunately, I’ll give you the boring law professor answer, which is that it indeed does depend, particularly on how this image is ultimately used or not used in relation to the potential criminal sanctions on this individual.

What we also are going to have to pay attention to is the fact that this image was initially released on social media and not introduced, for example, as the government’s assertion of valid or authenticated evidence. And so there, the argument that it was indeed parody, or not intended as an accurate depiction, perhaps has more credence.

Whether or not a court will buy that, of course, is an open question. And whether or not this image is even implicated in criminal proceedings is also an open question at this point.
