When technology fails women: Online abuse and Nigeria’s digital weak points


Grok has become a powerful tool for amplifying online gender-based violence (OGBV).

Originally published on Global Voices Advox

A screenshot from X (formerly Twitter) showing some X users using the AI chatbot Grok to sexualize women.

By Victory Brown

This post is part of Global Voices’ April 2026 Spotlight series, “Human perspectives on AI.” This series will offer insight into how AI is being used in global majority countries, how its use and implementation are affecting individual communities, what this AI experiment might mean for future generations, and more. You can support this coverage by donating here.

As a Nigerian who has spent years on X (formerly Twitter), I’ve seen a lot. I’ve witnessed trends come and go, policies shift, and communities build and dissolve. For a long time, I considered myself a “conscious” internet user. I curated my timeline carefully, avoided unnecessary engagement, muted triggering keywords, and accepted the uncomfortable truth that the internet, especially for women, was never designed with our safety in mind.

My work at Superbloom (a design non-profit and studio) — particularly on human-centered design projects and the tech policy design lab playbook on online gender-based violence — was my first real exposure to the scale and intensity of violence occurring online. I came to see how these forms of violence persist: victims remain scared and vulnerable, while perpetrators are rarely held accountable. A temporary ban is often the extent of the response, and they soon return with a new account and a new victim.

Social media, once a place for connection, community-building, and entrepreneurship, has become a battleground and hostile environment, with women often bearing the brunt of unprovoked abuse. Furthermore, according to UN estimates, only 40 percent of countries have legislation protecting women and girls from online abuse, leaving much of the global population exposed.
As a user experience (UX) designer, I have learned how design decisions, sometimes invisible to users, shape real-world outcomes. Whether it is a platform’s UX design or unseen aspects like data retention and user privacy, design choices can either protect or expose users. I began to understand that good design is not just intuitive; it protects, it is transparent, and it is accessible. Yet I have noticed that most social platforms like X have allowed subtle, passive-aggressive content to proliferate. I have experienced this both personally and as an observer.

This is especially the case with Grok, an AI assistant developed by Elon Musk’s xAI and built directly into X. It enables users to generate text and edit images from simple prompts, including transforming photos. Because it is embedded within a highly networked platform, Grok does not operate in isolation: its outputs can be instantly shared, amplified, and weaponized, raising new concerns about how AI-generated content circulates in already unsafe online environments.

Before Grok, the internet was already hostile. Women were attacked for their bodies, their religion, their political opinions, their accents, their identities, and more. Being outspoken, visible, or simply existing online as a woman — especially a Black Nigerian woman — often came with consequences. Studies show that while women all over the world often face online abuse, Black women face more online harassment than white women. Harassment became normalized, dogpiling was entertainment, and abuse was framed as “free speech.” These acts, though they may appear subtle on the surface, often cause immeasurable harm to victims.

Globally, research has consistently shown that women experience disproportionately high levels of online harassment, including but not limited to misogynistic abuse, cyberstalking, and coordinated harassment campaigns. Nigerian data suggests these patterns are even more pronounced locally.
For example, ActionAid Nigeria reports that about 45 percent of Nigerian women have experienced cyberstalking, while broader studies on online harms indicate that women and girls are among the most frequently targeted victims of digital violence. But after the introduction and widespread use of Grok, something shifted. What was once human-driven harassment began to feel systematized. Although Grok did not invent online gender-based violence, it has become a powerful tool for amplifying it at speed and scale.

Structural conditions that can inform AI harm

There are structural vulnerabilities embedded in Nigeria’s digital and institutional landscape that not only shape how AI is adopted but also amplify the harms that poorly governed systems can inflict. It is hardly news that our legal system is weak in fulfilling its duties under the rule of law. In Nigeria, the structural conditions shaping digital life do more than define access to technology. They actively shape who gets harmed, how quickly harm spreads, and how difficult it is to seek redress. When poorly governed systems are introduced into this landscape, existing gender-based vulnerabilities are not only exposed but also magnified.

Nigeria’s regulatory environment for AI remains fragmented; current policies and frameworks addressing digital issues are spread across multiple agencies, creating confusion rather than clarity about who is responsible for which risks and how safeguards should be enforced.

One clear example of this dynamic is the recent global controversy around Grok. Designed to generate and edit images from simple user prompts, Grok is being rapidly misused to produce millions of non-consensual, sexualized images, including images of women and minors.
A series of investigations and reports has documented how users on X openly used Grok’s image-editing features to manipulate photos of women and girls, including transforming ordinary photos into revealing or sexualized versions without consent. Even after platform policy updates, journalists and researchers reported that the tool could, in some cases, still generate sexualized imagery, highlighting persistent moderation and enforcement gaps. In a country like mine, where online harms are already pervasive and platform accountability remains weak, such AI tools are amplifying and accelerating existing patterns of abuse.

The 2024 “State of Online Harms in Nigeria” report by Gatefield reveals that women were targeted in 58 percent of online abuse cases in the country, with X and Facebook identified as the primary platforms where abuse occurs (34 percent and 29 percent of reported cases, respectively). Yet only 24 percent of Nigerians find X responsive to complaints about online harm.

Recent research and projections also suggest the problem may grow significantly as generative AI tools become more widespread. A new report published by Gatefield in February 2026, “Industrialized Harm: The Scale of AI-Facilitated Violence in Nigeria,” estimates that 70 million Nigerian women and girls could be exposed to AI-facilitated online abuse annually by 2030, with 30 million directly targeted.

This gap between harm and accountability creates fertile ground for AI-enabled abuse to flourish. When platforms fail to enforce policies effectively, generative AI tools lower the barrier to producing and distributing exploitative content, compounding an already severe gendered digital safety crisis. In an environment where women are disproportionately targeted and reporting mechanisms are widely viewed as ineffective, AI does not introduce a new problem. Instead, it magnifies an existing one.
Without effective moderation and enforcement on platforms like X, AI tools become accelerants of gender-based harm, lowering the barrier to producing and circulating exploitative content. Under X’s monetization policies, in which viral creators can earn money for clicks and engagement, perpetrators stand to profit from their actions.

This is where algorithms matter. Platforms are not neutral spaces. Algorithms decide what is surfaced, what trends, and what is rewarded with visibility, potentially at the expense of those most vulnerable. When AI-generated harassment is treated as “engaging content,” it gets amplified, and women, especially women from the global majority, become collateral damage.

How to design against harm instead of reacting to it

At Superbloom, through our Tech Policy Design Lab work with Tope Ogundipe of TechSocietal, we developed a Gendered Privacy Evaluation Framework to help answer that question. Grounded in human rights principles and participatory research, the framework offers tech companies a practical way to assess whether their tools — especially AI systems — are reinforcing or reducing gendered harm. It pushes platforms to move beyond surface-level safety features and examine deeper systems: governance commitments, staff training on technology-facilitated gender-based violence, accessible grievance mechanisms, meaningful opt-in consent, data minimization, encryption, and genuine stakeholder engagement with women’s rights groups.

AI can scale already existing inequalities. Applying a gendered privacy lens before and after deployment ensures that innovation does not come at the cost of women’s safety. The tools to design more responsibly already exist. The real question is whether platforms are willing to use them.
