
Grok fakes are digital assault. Make it a crime

By Noah Feldman, Bloomberg View

Published February 16, 2026


The horrifying episode in which Elon Musk's Grok chatbot generated and posted millions of sexualized images of real people, including women and children, has a clear lesson: It should be illegal to use anyone's photograph to create a fake image intended to depict that person.

Last summer, Congress passed the Take It Down Act, which prohibits posting deepfakes that depict people engaged in intimate sexual acts. Now Congress should expand the act to cover any misappropriation of a person's likeness.

This can be accomplished in a manner consistent with the First Amendment. There is a long-standing common-law right of individuals to control the commercial use of their image and to prevent others from using it for gain. That right should provide a basis for outlawing the kind of image appropriation that occurred on Grok.

It will be necessary to have exceptions for political commentary and satire. But fake images should not count as newsworthy for First Amendment purposes. Moreover, the benefits of protecting people from the misappropriation of their images outweigh the risks of chilling the lawful publication of news images.

What makes the Grok situation so immediately upsetting is that it permitted anyone to produce salacious images of anyone at any time. Widely available AI technology makes it easy to do so. It is also clear that market pressures alone will not suffice to stop the practice. Even if Grok has made changes that make it more difficult to produce such images, publicly available, open-source AI can, in principle, be used to achieve the same result.

This state of affairs can't be right, and the law must find a way to protect against it. The violation is connected to privacy: it certainly feels like an invasion of privacy to be depicted naked in an image that looks like a photograph. There is also a common-law right of privacy that could serve as a basis for a new law.

But the separate common-law right to control your own image and prevent its misappropriation is an even closer fit to the modern wrong. If I am in a bathing suit and someone takes a photograph of me, then perhaps it isn't a violation of my privacy to post it. But if you take a photograph of me fully clothed and then transpose my face onto a picture of someone who is naked, that is literally a misappropriation of my face. So long as you're doing that for your benefit, not for mine, you've infringed on my right to ownership of my image.

The right against the misappropriation of my image shouldn't be limited to nude or otherwise sexualized images of me, however. Just as there is a common-law right against somebody taking my image without my consent and using it to promote their own product or otherwise make money, there should also be a legal right against somebody using my image without my consent for their own purposes. AI-generated images of made-up people would not be protected under this legal principle. The protection would, however, extend to any actually identifiable person: the person whose image was taken.

To ensure that free speech is protected under such a law, the statute should exclude constitutionally protected speech. Editing a news photograph in a way that doesn't misrepresent the original image shouldn't be illegal, because a ban might chill the editorial process.

The Supreme Court has also protected parodies under the First Amendment, treating them as exempt from intellectual property restrictions such as copyright law. Parody images based on photographs of real people should therefore also be protected, to the extent that they aren't intended to mislead the viewer but merely to comment on public figures and matters of public concern. The law should allow you to make all the memes you want using images of any politician or other public figure. Similarly, cartoon figures would be fully protected. The law should be restricted to the appropriation of photographic images or to AI-generated images that are indistinguishable from photographs.

I am a strong defender of First Amendment rights. AI-generated words and images deserve the same protection as any other forms of speech or expression. But the First Amendment has never been understood to protect the misappropriation of one's image by another. Your image is your property. And if you can't stop that image from being taken without your consent and transformed into something you don't want, you don't really own it.


Noah Feldman, a Bloomberg View columnist, is a professor of constitutional and international law at Harvard University and the author of six books, most recently "Cool War: The Future of Global Competition."
