What ‘nude’ means now
A new report on Telegram's image-abuse networks is serious and important. So is the word being used to describe it.
The women in the Telegram channels don’t know they’re there. Their photographs — pulled from Instagram, TikTok, Snapchat — were uploaded by someone else, traded for cash, paired with threats, used as raw material for AI tools that stripped or replaced or fabricated. Tens of thousands of them, according to a report released this week by the research nonprofit AI Forensics. The word the report uses for what was done to them is “nude.”
The word is accurate, technically. The images are nude. They are also non-consensual, fabricated, weaponized, and in some cases illegal. They are components of coordinated harassment campaigns. They have ended careers, destroyed relationships, and according to researchers who study this ecosystem, driven women from public life entirely. “Nude” is the least interesting thing about them. It is also, somehow, the word that keeps appearing in the headlines.
I want to be careful here, because the easy version of what I’m about to say is wrong. The easy version goes: the coverage is conflating nudity with harm, which is unfair to nudists. That’s not my argument. The harm in those Telegram channels is real and it is serious, and the nudity—or what passes for nudity in AI-generated images of women who never consented to have their photographs used this way, let alone to be undressed by software—is genuinely part of the harm. When someone’s likeness is fabricated into a sexual image and distributed without their knowledge, the violation isn’t merely abstract. It is intimate and it is specific and it is, in the fullest sense of the word, bodily.
What I’m interested in is something subtler. Not whether the word “nude” is being misused in these stories, but what happens to the word itself—and everything it touches—as it absorbs this meaning by accumulation.
AI Forensics spent six weeks this winter studying 16 Telegram groups operating in Spain and Italy, analyzing nearly 2.8 million messages. What they found was not a fringe subculture but what they describe as a thriving economy: over 18,000 references to surveillance tools appearing alongside more than 82,000 abusive images and videos, with users trading spyware marketed as “parental controls” alongside photographs of women they knew personally—partners, former partners, sisters, colleagues. The most extreme content depicted children in incest and rape scenarios. The Wired headline, which was more direct than the AFP wire version most outlets ran, called it what it was: men sharing non-consensual images of women and girls, buying spyware, and engaging in doxing.
According to the report, most of the users sharing non-consensual sexual content are young, heterosexual men, and the women depicted are their partners, former partners, or acquaintances. This is not, in other words, a story about strangers. It is a story about the intimate surveillance of people who are known, by people who know them, using tools that have become startlingly accessible. In 2023, creating a convincing deepfake required a powerful GPU and coding knowledge. By 2026, it requires only a Telegram account.
The policy response has been swift, at least by European standards. Three weeks ago, the European Parliament voted 569 to 45 to back a ban on “nudifier” systems—AI applications that create or manipulate images to depict identifiable real people in sexually explicit or intimate ways without their consent. The vote was framed, across virtually every outlet that covered it, as a win for women’s rights and child protection. “Every day, women across the EU are targeted by deep-nude AI tools that strip them of their dignity, intimidate them online and make them vulnerable to blackmail and abuse,” said Dutch Green MEP Kim van Sparrentak after the vote.
She is right. And the ban, as written, is carefully scoped: it targets systems that create non-consensual intimate images of identifiable real people, with a carve-out for platforms that have effective safeguards preventing such misuse. It is, as these things go, a reasonable piece of legislation aimed at a genuine harm.
And yet.
The word “nudifier” is doing something interesting in that legislation. It is naming an application by what the application produces—a nude image—rather than by what makes the application harmful, which is the non-consent. This is not a trivial distinction. It is, in fact, the distinction the legislation itself tries to encode in its carve-out clause. But in the public conversation surrounding the vote—in the headlines, in the MEP statements, in the advocacy framing—“nude” and “harmful” have become functionally synonymous. The nudity is the harm. The harm is the nudity.
I want to add here that I write about nudity for a living. I publish a daily newsletter called Planet Nude that covers nonsexual nudity and nude expression through history, culture, politics, and art—not sex or pornography, a distinction I have now made so many times it has lost all meaning to me personally. I have been doing this for three years. If you’re reading this, you most likely know all this already; you’re probably a subscriber. Anyway, I mention it not to establish credentials but because it gives me a particular vantage point on what happens when “nude” becomes a dirty word again. Because it does, periodically. And the consequences are not abstract.
Platform moderation is perhaps the most immediate example. The Guardian’s January investigation identified at least 150 Telegram channels distributing AI-generated sexual images of women, and noted that channels shut down by the platform would reopen under near-identical names within hours. Telegram’s moderation, in other words, fails the people it most needs to protect. But platform moderation, when it does engage, tends to operate on keywords and image-detection heuristics rather than context. “Nude” trips the filter. Whether the image is a fabricated sexual assault or a photograph taken on a clothing-optional beach with the full knowledge and consent of everyone in it does not always register as a meaningful difference to an algorithm.
This is not a hypothetical. Naturist organizations, figure photographers, body-positive health advocates, and artists working in the tradition of the nude have all had content removed from major platforms in recent years—not because their content was harmful, but because it was nude, and “nude” had been defined, by accumulating association, as inherently problematic. Instagram’s nudity policies, notoriously applied with spectacular inconsistency, have resulted in the removal of mastectomy photographs, breastfeeding images, classical art, and documentary photography. The content moderation systems that were ostensibly built to protect women have, with some regularity, been turned against women’s bodies instead.
Legislation follows the same logic, slowly. When the EU Parliament’s nudifier ban moves through negotiations with the Council, its careful carve-outs may or may not survive intact. What will survive, reliably, is the cultural association that has been built around the word itself. Nudity, in the discourse that produced the legislation, is something that is done to people. It is a vector of harm. It is the thing that needs to be banned. In Arizona, the American Association for Nude Recreation (AANR) is currently opposing HB 2133, a bill targeting non-consensual image sharing whose language is broad enough to create liability for naturist clubs and members posting consensual photographs. The bill passed the House in February and is advancing through the Senate. Nobody writing that bill set out to criminalize nudism. That’s the point.
AI Forensics researcher Silvia Semenzin, speaking about the Telegram findings, made a point that the headlines largely missed: “We tend to forget that most victims are ordinary women. The majority of this violence is directed towards people who the perpetrators know.” This is important for several reasons, the most obvious being that it reframes the harm away from stranger danger and toward intimate partner abuse—which has different causes, different patterns, and different remedies than platform-level content moderation can provide.
But it also says something else. The harm in those Telegram channels is not the nudity. The harm is the violation of trust, the weaponization of intimacy, the use of a woman’s image against her by someone who has access to it precisely because she trusted him. The nudity is the instrument. The harm is something older and more specific and, in many ways, harder to legislate away.
This matters because we are in the middle of writing the rules. The Telegram report will inform platform policy discussions. The EU nudifier ban will move through negotiations. The US Take It Down Act, signed into law last year, requires tech companies to remove non-consensual sexually explicit images within two days of notification. All of this is happening now, and the language being used to describe the problem will shape the solutions that emerge from it.
If the problem is “nude images,” the solution is to restrict nude images. If the problem is non-consensual violation of a person’s body and dignity, the solution looks different—and leaves considerably more room for the bodies and dignities of people who have chosen, freely and deliberately, to live without clothes.
I am not arguing that nudism is equivalent to the harm documented in AI Forensics’ report. I am not arguing that naturists are the real victims here, or that the concerns of the nude beach volleyball community should weigh against those of women whose images are being traded like commodities in Telegram channels. That would be obscene.
What I am arguing is that language matters, and that the language being used to describe this harm is imprecise in ways that have consequences beyond the immediate policy debate. Every time “nude” absorbs another connotation of violation, it becomes a little harder to talk about the body in any other register. Every time the headlines reach for the most viscerally alarming word rather than the most accurate one—and “non-consensual” is both more accurate and more alarming, if you think about it—the word “nude” does a little more work it was never meant to do.
The women in those Telegram channels deserve better than the coverage they received this week. They deserve reporting that names what was done to them with precision—not because precision is a journalistic virtue in the abstract, but because precision is what produces legislation and platform policy that actually protects them, rather than legislation and platform policy that protects everyone from nudity.
Those are not the same thing. They have never been the same thing. 🪐



