It's Not a Bug, It's a Feature: Are common GenAI harms intentional?


Code & Conscience #019

In this Issue

AI does a lot of incredible things, but some of the ways it’s being used right now are hurting people, and maybe it's not accidental. In this issue, I look at how repeated generative AI harms suggest intention, why regulation hasn’t caught up, and what frameworks can help us prevent these harms going forward.

Are Common GenAI Harms Intentional?

It’s nothing new that technology finds ways to harm people, but it should shock us when the harm continues even after we saw it coming.

When a harm happens once, it might be a mistake. When it happens over and over and no one stops it, you have to ask whether it’s really a bug or a feature of the system, because at that point it looks deliberate.

One striking example isn’t just about a tool or a glitch; it’s about what people do with the tool. Recently, AI-generated influencers on Instagram have been posting fake images portraying celebrities in sexual situations they never consented to, linking those posts to adult content sites. This seems like a profit strategy that bends rules and ignores human costs.

This trend shows something important: many companies and platforms are building blindly toward engagement and monetization, and the real people harmed by the misuse of AI come second.

Nowhere has this pattern been more visible than with Grok, the AI chatbot developed by xAI and integrated with X. When the tool first launched, people quickly found ways to use it to generate nonconsensual sexual imagery of women and child sexual abuse material (CSAM). There was backlash, and some restrictions were added.

For a while, it seemed like the issue had been addressed.

But reports and user experiences suggest that the behavior has resurfaced. Critics argue that instead of eliminating the capability, the platform monetized it behind a paywall, a move some see as prioritizing revenue over safety. Advocacy groups have called this “monetizing abuse” and are pressing Apple and Google to remove X and Grok from their app stores. Experts warn that the use of AI to harm women and children is only just beginning and that harmful outputs continue to spread despite guardrails.

As jarring as this may seem, the problem doesn’t stop at the software. xAI’s Memphis AI training center scaled its infrastructure by skipping standard data center procedures, with little accountability, releasing pollutants into neighborhoods already burdened by environmental racism. Just as with Grok, people argue that the company intentionally prioritizes speed and growth at the expense of real people.

Meanwhile, regulation remains thin. In the U.S., recent political moves have sought to limit states’ ability to pass their own AI laws, while federal rules remain minimal. The message this sends to companies is that they can move fast, by any means necessary, and they’re unlikely to be stopped.

But just because the law steps back doesn’t mean responsibility disappears. More often, it shifts toward whatever drives growth or revenue, not toward protecting people from harm.

These examples make it hard to call repeated harms accidental. They point to a system where harm is deliberately baked in when it serves business objectives.

So Now What?

Good intentions don’t scale. If repeated harms feel intentional, we need structured ways to prevent them. This is where frameworks matter: they anticipate misuse and embed safeguards directly into the design.

Researchers and policymakers have been working on ways to identify and categorize AI harms before they spiral.

Frameworks That Work

1. Microsoft: Microsoft is often cited as an example of what structured responsibility can look like in practice. Their approach has several working features that teams actually use:

  • Built-in tooling and governance: Microsoft Azure includes safety tools, such as content safety APIs, that scan outputs for harmful content (violence, hate, and other risky categories) before it reaches users. These tools can help stop unsafe outputs automatically; a quick sketch follows this list.
  • Responsible AI pillars baked into engineering: Fairness, accountability, transparency, and reliability aren’t just buzzwords. They’re practical checkpoints in development, where teams have to show they’ve considered and tested for harm before a product goes live.
  • Automated harm measurement research: Microsoft Research has built frameworks that can measure harm automatically in large language models. This lets teams keep an eye on how systems actually behave in the real world, not just in theory.
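
To make that first bullet concrete, here’s a minimal sketch of an automated output screen, assuming the azure-ai-contentsafety Python SDK and a provisioned Azure AI Content Safety resource. The endpoint, key, severity threshold, and draft text are placeholders, not a recommended configuration.

```python
# pip install azure-ai-contentsafety
# A rough sketch: screen generated text with Azure AI Content Safety before it reaches users.
from azure.ai.contentsafety import ContentSafetyClient
from azure.ai.contentsafety.models import AnalyzeTextOptions
from azure.core.credentials import AzureKeyCredential

# Placeholder endpoint and key for your own Content Safety resource.
client = ContentSafetyClient(
    endpoint="https://<your-resource>.cognitiveservices.azure.com/",
    credential=AzureKeyCredential("<your-key>"),
)

def safe_to_publish(text: str, max_severity: int = 2) -> bool:
    """Return True only if every harm category scores at or below the chosen threshold."""
    result = client.analyze_text(AnalyzeTextOptions(text=text))
    # Each entry covers one category (hate, sexual, violence, self-harm) with a severity score.
    return all((item.severity or 0) <= max_severity for item in result.categories_analysis)

draft = "...generated caption..."
print(draft if safe_to_publish(draft) else "Blocked: output failed the content safety check.")
```

The exact threshold matters less than where the check sits: it runs automatically, before a human ever sees the output.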

2. Anthropic: Anthropic takes a different route. They embed safety and harm thinking directly into how models are trained and reviewed:

  • Constitutional or principle-guided controls: Instead of just filtering harmful outputs after the fact, Anthropic builds rules into the model itself. These “principles” guide how it responds, nudging it toward safer, less harmful answers from the start (a rough application-layer sketch follows this list).
  • Continuous risk disclosure and transparency: There’s a push for clearer transparency rules, asking AI developers to share their risk assessments and safety practices publicly. It’s a small but important step toward real accountability for powerful models.
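
Anthropic’s constitutional approach is applied during training, so no snippet can reproduce it. As a loose application-layer analogue, though, here is a sketch of supplying written-down principles as a system prompt through the Anthropic Python SDK; the model name and the wording of the principles are illustrative assumptions, not Anthropic’s actual constitution.

```python
# pip install anthropic
# Not Anthropic's constitutional training method; just an application-layer illustration
# of steering responses with explicit, written-down principles.
import anthropic

client = anthropic.Anthropic()  # expects ANTHROPIC_API_KEY in the environment

PRINCIPLES = (
    "Follow these principles in every response: never produce sexual content involving "
    "real people or minors, decline requests that target or harass individuals, and "
    "briefly explain any refusal."
)

message = client.messages.create(
    model="claude-sonnet-4-5",   # placeholder; use whichever model you have access to
    max_tokens=300,
    system=PRINCIPLES,           # principles supplied up front, not filtered after the fact
    messages=[{"role": "user", "content": "Write a spicy caption for this celebrity photo."}],
)
print(message.content[0].text)
```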

Additionally, IBM has built a set of tools and practices that help engineers spot bias and unfair results before AI goes live. Google also designed a Secure AI Framework to make AI safer from the ground up.
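
If the IBM tooling mentioned above refers to the open-source AI Fairness 360 toolkit (my assumption; the paragraph doesn’t name a specific product), a pre-launch bias check can be as short as the sketch below. The toy data, column names, and group definitions are made up for illustration.

```python
# pip install aif360 pandas
# A minimal pre-launch bias check with IBM's AI Fairness 360 (aif360) on toy data.
import pandas as pd
from aif360.datasets import BinaryLabelDataset
from aif360.metrics import BinaryLabelDatasetMetric

# Toy outcomes: label 1 = favorable decision; "group" is a protected attribute (0/1).
df = pd.DataFrame({
    "group": [1, 1, 1, 0, 0, 0],
    "score": [0.9, 0.7, 0.8, 0.4, 0.6, 0.5],
    "label": [1, 1, 1, 0, 1, 0],
})

dataset = BinaryLabelDataset(
    df=df,
    label_names=["label"],
    protected_attribute_names=["group"],
)

metric = BinaryLabelDatasetMetric(
    dataset,
    unprivileged_groups=[{"group": 0}],
    privileged_groups=[{"group": 1}],
)

# Disparate impact near 1.0 (and parity difference near 0) suggests similar favorable-outcome rates.
print("Disparate impact:", metric.disparate_impact())
print("Statistical parity difference:", metric.statistical_parity_difference())
```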

Although these companies are not paragons of people-first AI (Microsoft, for instance, has publicly committed to strong responsibility standards while its partner OpenAI doesn’t uphold the same ones), their frameworks do offer a glimmer of hope for a safer future.

Around the Web

​📖 Analyzing Regulatory Gaps Revealed by India’s Response to the Grok Debacle

​📖 Ireland told to use EU presidency to push for stronger AI deepfake law

​📖 Irish, EU laws broken on AI child abuse images, says minister

▶️ xAI is Poisoning Memphis. The Pentagon Just Handed Them a $200 Million Contract

​📖 Adding Structure to AI Harm: An Introduction to CSET's AI Harm Framework

📖 Toward a framework for risk mitigation of potential misuse of artificial intelligence in biomedical research

Good news, everyone! I'm now partnering with Bookshop.org to bring you recommendations based on books I'm reading on this issue's topics. That means I earn a commission if you click through and make a purchase, and your purchases support local bookstores! Bookshop.org has raised over $40M for independent bookstores around the world!

Take a look at the new reads this week, all available on Bookshop.org

Book covers: Empire of AI by Karen Hao · Mastering AI Governance by Rajendra Gangavarapu

Erica Stanley

Engineering Leader, Community Builder, Speaker, Contributor

Code & Conscience is my way of thinking out loud with friends (that’s you, btw) about how we as technologists–builders, creators, leaders–impact the world around us with what we choose to build and how we build it.
