Technology

Trauma of AI misuse: AI tools are being used to generate sexual images, with women and children becoming victims

One of the most alarming threats invading the digital space today is the misuse of generative AI — artificial intelligence that creates new images from simple text prompts — to produce non-consensual explicit imagery. At the receiving end recently was Chhattisgarh-based Tanvi Vij, whose one casual photo post on social media turned into a harrowing experience.

The 33-year-old Chandigarh-born writer was on vacation in Puducherry when she ran into YouTuber and stand-up comedian Kunal Kamra and clicked a photograph with him. Back home in Raipur, she posted the picture on her X account, tagging Kamra. What followed shattered her sense of safety online.

Anonymous accounts began appearing under her post, issuing prompts to X’s generative AI tool @Grok. What started with requests to remove the comedian from the image soon turned into instructions to alter Tanvi’s clothes, change camera angles and sexualise her body. To her horror, she discovered that one account had generated near-nude, full-length images of her, despite her having posted a cropped image.

“I was full of fear and helplessness. I was looking at a picture of myself almost naked on the Internet. I panicked and immediately deleted all my images on X and other social media platforms. I even locked my digital profile,” she told The Tribune.

“They were generating pornographic content in my comments,” she says. “It was happening publicly and there was nothing I was able to do about it. I felt so vulnerable.”

Tanvi turned to her father, a retired IPS officer who had served as a Special DGP, for support. “For half an hour, I just cried. He held me and said it’s fine. We need to report this.”

She took screenshots of the comments and images and filed an FIR with the Raipur police, besides reporting the posts to X and writing to the platform’s grievance officer. After more than five days, she received a notification saying that the flagged account had been blocked for violating platform rules.

“For five to six days, those pictures were just up there for everyone to see. I haven’t received any reply from the grievance officer so far,” shares Tanvi.

It took much courage to confront the terrifying reality, but the experience has changed how Tanvi views online spaces. “When you see yourself like that on the Internet, you feel ashamed, as if somebody has taken off your clothes. But to all the women who have been victims of such cowardly acts, I want to say it is not your shame. Such perpetrators need to be called out and reported.”

The misuse of generative AI to produce sexually explicit content has become rampant, particularly on platforms like X, where such images can be publicly posted. A 24-hour analysis conducted on January 5-6 by social media and deepfake researcher Genevieve Oh found that the @Grok account was generating an average of 6,700 sexually suggestive or nudified images per hour. The findings were published in Bloomberg News.

Similarly, an analysis by PhD researcher Nana Nwachukwu of Trinity College Dublin found that nearly three-quarters of the X posts she collected contained requests for non-consensual images of real women or minors, with clothes removed or added.

Globally, the issue is gaining traction. The US Senate has passed a Bill allowing victims of non-consensual deepfakes to sue tech firms. The UK has demanded accountability from the platform, while Indonesia and Malaysia have banned X’s chatbot @Grok.

In India, X recently admitted to lapses after the Ministry of Electronics and Information Technology flagged accounts posting explicit, non-consensual content. More than 3,500 posts were removed and 600 accounts deleted. The accounts were exposed after Shiv Sena (UBT) Rajya Sabha MP Priyanka Chaturvedi took up the issue of @Grok misuse with the Centre.

While acknowledging swift government action, Chaturvedi insists it is not enough. Talking to The Tribune, she says, “Blocking accounts is not where it ends. We need accountability. Tech firms must build guardrails. Accounts that misbehave should be withdrawn, withheld and not reinstated ever again. When you deny such people spaces, they realise that they have crossed the boundaries of acceptability.”

“Till now, most tech platforms, under the guise of innovation, have largely escaped from taking responsibility. AI tools like @Grok have been created to empower citizens, but these are being abused. Generative AI is based on data inputs. When a problematic prompt is given, the tool should not respond. Even when it responds, it should clearly warn users that they’re indulging in criminal behaviour and are liable for punishment or action can be taken against them by the government,” says the MP.

Chaturvedi sees Tanvi’s case as part of a continuum. “We saw this with the Sulli Deals and Bulli Bai app cases in 2021-22, with the online mock auction of Muslim women. Young people felt they could get away with it under anonymity.”

What alarms her most about @Grok is the scale and automation. “From images, it could lead to videos. Once it’s out there, it’s up for download, sharing — everything — without the woman’s consent,” she says.

The problem lies not with generative AI itself, but with the lack of governance, stresses cybercrime investigator Ritesh Bhatia, who has been warning about such a possibility since as early as 2018.

“The artificially intelligent machine can’t be held responsible for commands given to it. But it is 100 per cent the responsibility of Elon Musk to ensure that his platform is not being misused to create objectionable content. If the platform is able to ‘undress’ someone online, it is because the command to do so has been incorporated into the system. The tech platform’s governance team needs to check this,” he says, adding that even though in cyberspace there are no borders, every country has its own ethical and legal framework which social media platforms must respect.

Generative AI is increasingly becoming a tool for abuse, says V Neeraja, Special DGP, Cyber Crime, Punjab. “Most women complainants only want the content removed and do not want to register an FIR. Women facing online harassment should immediately report it on the National Cyber Crime Reporting Portal (NCRP) or at the nearest cybercrime police station. They should also seek content removal directly on platforms or report it at www.StopNCII.org. Social media platforms are generally cooperative. As per IT rules, it can take up to 36 hours to remove such content — the sooner it is reported, the faster action can be taken to remove objectionable content,” says the IPS officer.

Once the complaint is made, all existing images on any social media platform are identified and reported. “On the NCRP site, www.cybercrime.gov.in, all cybercrimes can be reported. National Helpline 1930 is for reporting financial cyber frauds only. In Punjab, we have added an IVR (interactive voice response) facility on 1930 to guide on how to report cybercrime against women and children,” adds Neeraja.

Most of the time, women are hesitant to report, fearing their identity will be revealed. “It’s a long-drawn process and they’re not sure how society would react. By the time the content is taken down, it’s been downloaded multiple times. Every share is a further humiliation. Until we, Indian women, seek accountability, there will be men out there who continue to exploit that silence and misuse these platforms, believing they can get away with it,” says Chaturvedi.

“If women are hesitant to come forward and file an FIR, the initiative should come from the police authorities. Justice has to be seen to be done,” she adds.

Following the global public outcry over the past fortnight, Elon Musk’s X has announced that it has stopped its AI-powered chatbot @Grok, including for premium subscribers, from editing pictures of real people to show them in revealing clothes like bikinis. This admission, however, doesn’t change the fact that AI tools can generate non-consensual sexual imagery at scale, and that every hour, thousands of women and children are possibly becoming victims of it.

The vital question, thus, is: who bears responsibility for safety online?

No one should be asking women to stop posting pictures, argues Bhatia. “That’s like asking people to wear a fire suit before entering a theatre.” The onus, he says, lies solely and squarely with tech platforms and law enforcement agencies to make the Internet a safer place.

Quote/Unquote

Need to fix accountability

“Hold the tech firms accountable. Tell them that we work under certain laws and they need to operate within that framework, or their access to our markets would be limited. India needs to leverage the market that we give to such firms to operate in our country, to ensure that they follow the prevailing law mechanism here.”

— Priyanka Chaturvedi, Rajya Sabha MP

Big security lapse

“What about the people who prompt the platform to post such content? Just because the platform has that capability, are you going to misuse it? This is a big security lapse. Government needs to take action. The platform is accessed everywhere. But no one would think of doing such a thing in, say, Saudi Arabia, where laws are very stringent.”

—  Ritesh Bhatia, Cybercrime investigator
