The introduction of new media technologies is typically accompanied by new concerns about privacy. The first US legal theories of privacy emerged in response to new developments in photography in the late 19th century. Photography allowed something novel: images of people could be created and circulated without their consent more quickly and easily than before. Today, digital photos and videos are even easier to create and distribute, and more recently, it has become possible to create realistic synthetic images of people without their knowledge or consent. New AI apps enable users to create nude or sexual images of a person that are realistic and recognizable but completely fake.
Synthetic Sexual Images Violate Privacy
While apps that create synthetic sexual images have legitimate uses, some people use them in ways that cause harm. For instance, boys at a high school in New Jersey created and distributed synthetic nude images of girls at their school without their consent.
Whether the images are real or synthetic, the effects of nonconsensual sexual imagery can be as severe as those of in-person sexual violence. Scholars have described this kind of harm as radically disrupting victims’ lives, with profound psychosocial consequences.
What makes the nonconsensual distribution of sexual images so powerful as a weapon is the victim-blaming and social stigma that attach to many forms of sexual expression. Whether the image was originally created consensually or not, and whether it is real or synthetic, many victims report blame from peers, family, responders, employers, and others. The harm of this kind of sexual shaming exacerbates existing inequalities: women of color, LGBTQ and gender-nonconforming people, immigrant women, and Indigenous women are generally more likely to experience image-based sexual abuse, as are younger single women and adolescent girls.
Criminalizing Image-Based Abuse
Policymakers have turned to criminal law to address the problem of image-based abuse. Most US states have passed laws criminalizing image-based sexual abuse; more recently, some states have begun adding synthetic images to these statutes. Like other sexual violence laws, image-based abuse laws that apply to adults are based on consent; what distinguishes legal from illegal sexual acts is the consent of the people involved.
However, unlike laws about in-person sexual violence, many image-based abuse laws also require that the perpetrator intended to harass the victim or to realize financial gain. For example, Colorado’s image-based abuse law defines a violation in terms of the depicted person’s lack of consent or “reasonable expectation” of privacy, but it adds two further requirements for a conviction: an intent to “harass, intimidate, or coerce” the person in the image and proof of that person’s “serious emotional distress.” These laws are much narrower than the ones against in-person sexual violence. As such, they do not establish any general right to sexual privacy or autonomy.
As with in-person sexual violence, only a tiny minority of people who experience image-based abuse ever see a criminal conviction of the person who violated their privacy.
And as with in-person sexual violence, many who report to the authorities experience secondary victimization when officials fail to respond or even blame the victims who report. Poor police response, including a failure to treat victims with dignity and respect, is a global problem that researchers have documented in Australia, Canada, New Zealand, Israel, the UK, and the US.
Alternative Approaches to Image-Based Abuse
If criminal laws against image-based abuse are barely applied, what else can be done to prevent and address it?
Some scholars have suggested stronger regulations for technology companies. These companies are generally shielded from liability for what their users post by Section 230 of the Communications Decency Act, enacted as part of the Telecommunications Act of 1996. Social media companies like Meta rely on user-generated content to capture users’ attention, which they then monetize through targeted ads. The prevailing attitude of free speech above all – with exceptions for some sexual speech on some platforms – combined with weak regulation, means that these companies profit from whatever content their users post, including content that is abusive or violates privacy.
Currently, sexual images of minors that are reported to the National Center for Missing and Exploited Children are blocked from most platforms; a similar system exists for adults’ reported nonconsensual sexual images, though fewer platforms participate and the image must first be found to violate policy. Both systems, however, are reactive: they cannot stop an image from being created or shared in the first place, only limit its further spread after it has been reported.
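In practice, these blocking systems match new uploads against fingerprints (hashes) of images that have already been reported. The sketch below is a minimal illustration of that hash-list approach; the function names and the use of a plain cryptographic hash are simplifying assumptions, since production systems rely on perceptual hashes (such as PhotoDNA) that still match a copy after it has been resized or re-encoded.

```python
import hashlib

# Hypothetical in-memory list of fingerprints of previously reported images.
# Real systems distribute perceptual hashes rather than SHA-256 digests.
reported_hashes: set[str] = set()

def fingerprint(image_bytes: bytes) -> str:
    """Return a fingerprint for an image. A cryptographic hash matches only
    byte-identical copies; perceptual hashes also catch edited copies."""
    return hashlib.sha256(image_bytes).hexdigest()

def report_image(image_bytes: bytes) -> None:
    """Called after a victim reports an image: add its fingerprint to the block list."""
    reported_hashes.add(fingerprint(image_bytes))

def should_block_upload(image_bytes: bytes) -> bool:
    """Called at upload time: block only images whose fingerprint has been reported."""
    return fingerprint(image_bytes) in reported_hashes

if __name__ == "__main__":
    new_image = b"bytes of a newly generated synthetic image"
    print(should_block_upload(new_image))  # False: nothing to match against yet
    report_image(new_image)                # a victim reports the image
    print(should_block_upload(new_image))  # True: further uploads are blocked
```

The limitation noted above follows directly from this design: a newly created image has no fingerprint on any list, so its first creation and distribution cannot be caught; matching helps only after someone has already been harmed and has reported the image.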
Balancing Privacy and Speech Rights in the Digital Age
In the case of sexual images, perhaps the value of personal privacy should outweigh unfettered free expression interests. Why should any platform allow a sexual image to be posted without first ensuring that all the people depicted consent to it?
Such standards are used and generally enforced in the legal pornography industry.
As a starting point, policymakers could explore ways to compel both the AI apps that generate synthetic nude images and the social media platforms that host nonconsensual sexual images to verify consent before a sexual image of a recognizable person can be created or distributed. Free speech enthusiasts may balk, but sexual expression could still be well protected if that expression is verifiably consensual.
To be fair, this kind of policy could limit or delay the distribution of images for which consent was in fact given but cannot be confirmed. The interest in preventing the harm of a nonconsensual image being distributed, however, may outweigh the costs of delaying posting until consent can be verified. Newsworthiness may be another common objection. But even if reporting on the fact of a politician’s affair is vital to the functioning of democracy, the public’s desire to see the politician’s sexual images does not outweigh that person’s right to some sexual privacy, even as a public figure.
The most important objection might be: how could consent be verified while still preserving anonymity? Any policies requiring consent for the production or distribution of sexual images should not rely solely on official identity documents but should include other techniques, such as automated anonymous liveness and likeness checks. These mechanisms can ensure that marginalized populations, including sex workers and trans people, can still verify consent and enjoy free expression without revealing their identities to the platform. With such a system, someone who wants to create or share images of themselves could, for example, be required to pass this kind of anonymous check to verify that they are indeed the person in the images; likewise, such a request could be sent to a partner for their consent.
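To make the idea concrete, here is a rough sketch of what such a flow could look like. It describes no existing product: the liveness and likeness checks are hypothetical stand-ins for the automated techniques mentioned above, and the point is only that a platform could record that a recognizable, living person consented without ever learning that person’s legal identity.

```python
import secrets
from dataclasses import dataclass

@dataclass
class ConsentRecord:
    # The platform keeps only an opaque token and a yes/no answer,
    # never a name or an identity document.
    token: str
    consented: bool

def liveness_check(selfie_video: bytes) -> bool:
    """Hypothetical stand-in: confirm a live person is completing the check in
    real time, rather than a replayed photo or another synthetic image."""
    raise NotImplementedError("stand-in for an automated liveness check")

def likeness_check(selfie_video: bytes, depicted_image: bytes) -> bool:
    """Hypothetical stand-in: confirm the live person is the person depicted in
    the image (for example, by comparing face embeddings) without identifying them."""
    raise NotImplementedError("stand-in for an automated likeness check")

def record_consent(depicted_image: bytes, selfie_video: bytes) -> ConsentRecord | None:
    """Run the anonymous checks; on success, store consent under a random token."""
    if not liveness_check(selfie_video):
        return None
    if not likeness_check(selfie_video, depicted_image):
        return None
    return ConsentRecord(token=secrets.token_hex(16), consented=True)

def may_distribute(consent_records: list[ConsentRecord]) -> bool:
    """Allow distribution only when every recognizable person depicted has a
    positive consent record. A partner's consent would arrive the same way,
    via a check request completed on their own device."""
    return bool(consent_records) and all(r.consented for r in consent_records)
```

A design along these lines keeps verification and identity decoupled: the platform learns that consent was given for a particular image, but not by whom.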
The issue of minors poses additional challenges to anonymity. Though a government identity document can be used to verify a person’s age, doing so anonymously would require privacy-preserving decentralized identity management systems that need more development and wider adoption.
Relying on consent alone, without any age restrictions, poses risks because adults could use the technology to create images of minors, who are especially susceptible to coerced consent. Nor can images of minors simply be banned outright: it can be difficult to distinguish minors from adults without a corresponding identity document, and exceptions would be needed for content produced by a minor and shared consensually with close-in-age peers, to avoid infringing on minors’ free speech rights and sexual autonomy.
Corporate Accountability for Image-Based Abuse
There are no simple solutions to the complex problem of image-based abuse. The issue will never be fully solved by automated tools, because there’s no way to determine, from the content of an image alone, whether creating or distributing it would violate privacy. The only way to ascertain consent is to ask the person depicted, and interpreting the answer requires accounting for contextual factors, such as age differences or extortion, that can make genuine consent impossible. Though consent is an imperfect standard, it is still the most reasonable one for policies about sexual violence, including image-based abuse.
Given the failures of criminal law and the difficulty of creating the cultural changes needed to address the roots of gendered and sexual violence, for the time being, the most practical way to prevent and redress this harm might be to target technology companies’ immunity from liability.
While apps and platforms currently benefit from frictionless free expression, it’s worth considering that personal privacy and sexual autonomy – and the prevention of serious sexual violations – may warrant consent-based restrictions on the creation and distribution of both real and synthetic sexual images.