This is my way of thinking out loud with friends (that’s you, btw) about how we as technologists–builders, creators, leaders–impact the world around us with what we choose to build and how we build it.
It's Not a Bug, It's a Feature: Are common GenAI harms intentional?
AI does a lot of incredible things, but some of the ways it’s being used right now are hurting people, and maybe it's not accidental. In this issue, I look at how repeated generative AI harms suggest intention, why regulation hasn’t caught up, and what frameworks can help us prevent these harms going forward.
Are Common GenAI Harms Intentional?
By Ronstik on Adobe Stock
It’s nothing new that technology finds ways to harm people, but it should shock us when the harm continues even after we saw it coming.
When a harm happens once, it might be a mistake. When it happens over and over and no one stops it, you have to ask whether it’s really a bug or a feature of the system, because at that point it looks deliberate.
One striking example isn’t just about a tool or a glitch; it’s about what people do with the tool. Recently, AI-generated influencers on Instagram have been posting fake images portraying celebrities in sexual situations they never consented to, linking those posts to adult content sites. This seems like a profit strategy that bends rules and ignores human costs.
This trend shows something important: many companies and platforms are building blindly toward engagement and monetization, and the real people harmed by the misuse of AI come second.
Nowhere has this pattern been more visible than with Grok, the AI chatbot developed by xAI and integrated with X. When the tool first launched, people quickly found ways to use it to generate non-consensual sexual content of women, and even material involving children (CSAM). There was backlash, and some restrictions were added.
For a while, it seemed like the issue had been addressed.
As jarring as that may seem, the problem doesn’t stop at the software. xAI scaled its Memphis AI training center by skipping standard data center permitting and procedures, with little accountability, releasing pollutants into neighborhoods already burdened by environmental racism. As with Grok, people argue that the company intentionally prioritizes speed and growth at the expense of real people.
Meanwhile, regulation remains thin. In the U.S., recent political moves have sought to limit states’ ability to pass their own AI laws, while federal rules remain minimal. The message this sends to companies is that they can move fast, by any means necessary, and they’re unlikely to be stopped.
But just because the law steps back doesn’t mean responsibility disappears. Often, it shifts toward whatever drives growth or revenue, not toward protecting people from harm.
Taken together, these examples suggest that repeated harms are not accidental. They hint at a system where harm is deliberately tolerated, even baked in, when it serves business objectives.
Good intentions don’t scale. If repeated harms feel intentional, we need structured ways to prevent them. This is where frameworks matter: anticipating misuse and embedding safeguards directly into the design.
Researchers and policymakers have been working on ways to identify and categorize AI harms before they spiral, and a couple of companies show what that can look like in practice:
1. Microsoft: Microsoft layers safety checks into its cloud tooling and engineering processes:
Built-in tooling and governance: Microsoft Azure includes safety tools, like content safety APIs, that scan outputs for harmful content, such as violence or other risky material, before it reaches users. These tools can help stop unsafe outputs automatically (a minimal sketch of this pattern follows the list).
Responsible AI pillars baked into engineering: Fairness, accountability, transparency, and reliability aren’t just buzzwords. They’re practical checkpoints in development, where teams have to show they’ve considered and tested for harm before a product goes live.
Automated harm measurement research: Microsoft Research has built frameworks that can measure harm automatically in large language models. This lets teams keep an eye on how systems actually behave in the real world, not just in theory (also sketched after the list).
2. Anthropic: Anthropic takes a different route. They embed safety and harm thinking directly into how models are trained and reviewed:
Constitutional or principle-guided controls: Instead of just filtering harmful outputs after the fact, Anthropic builds rules into the model itself. These “principles” guide how it responds, nudging it toward safer, less harmful answers from the start (a toy sketch of this idea also follows the list).
Continuous risk disclosure and transparency: There’s a push for clearer transparency rules, asking AI developers to share their risk assessments and safety practices publicly. It’s a small but important step toward real accountability for powerful models.
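To make the “scan outputs before they reach users” idea from the Microsoft item concrete, here is a minimal sketch of that gating pattern. The `classify` helper, the category names, and the severity threshold are my own illustrative assumptions standing in for a real content safety API (such as Azure AI Content Safety); they are not that service’s actual schema.

```python
# Minimal sketch of an output-safety gate: check generated text against
# harm categories before showing it to a user. `classify` is a stand-in
# for a real content-safety API; the categories and severity scale here
# are illustrative assumptions, not any vendor's actual schema.

from dataclasses import dataclass

HARM_CATEGORIES = ["hate", "sexual", "violence", "self_harm"]
SEVERITY_THRESHOLD = 2  # block anything at or above this severity


@dataclass
class SafetyVerdict:
    allowed: bool
    flagged: dict[str, int]  # category -> severity score


def classify(text: str) -> dict[str, int]:
    """Placeholder for a moderation API call that returns per-category severities."""
    raise NotImplementedError("wire this up to your moderation provider")


def gate_model_output(generated_text: str) -> SafetyVerdict:
    """Screen model output and block it if any harm category is too severe."""
    scores = classify(generated_text)
    flagged = {
        category: severity
        for category, severity in scores.items()
        if category in HARM_CATEGORIES and severity >= SEVERITY_THRESHOLD
    }
    return SafetyVerdict(allowed=not flagged, flagged=flagged)


# Usage: only show the response if the gate allows it.
# verdict = gate_model_output(response_text)
# if not verdict.allowed:
#     response_text = "Sorry, I can't share that."
```

The design point is simply that the check sits between generation and display, so unsafe output fails closed instead of shipping by default.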
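In the same spirit, the automated harm measurement item can be pictured as running a fixed battery of probe prompts through a model and scoring the responses with that same kind of classifier. This is a rough sketch under my own assumptions, not Microsoft Research’s actual framework; `generate`, `classify`, and the probe prompts are placeholders.

```python
# Rough sketch of automated harm measurement: probe a model with a fixed
# set of prompts, score each response with a safety classifier, and report
# per-category flag rates. All names here are placeholders, not a real
# framework's API.

from collections import Counter

PROBE_PROMPTS = [
    "placeholder probe prompt 1",
    "placeholder probe prompt 2",
]


def generate(prompt: str) -> str:
    raise NotImplementedError("call your model here")


def classify(text: str) -> dict[str, int]:
    raise NotImplementedError("call your moderation API here")


def measure_harm_rates(severity_threshold: int = 2) -> dict[str, float]:
    """Return the fraction of probe responses flagged per harm category."""
    flag_counts: Counter[str] = Counter()
    for prompt in PROBE_PROMPTS:
        scores = classify(generate(prompt))
        for category, severity in scores.items():
            if severity >= severity_threshold:
                flag_counts[category] += 1
    return {cat: count / len(PROBE_PROMPTS) for cat, count in flag_counts.items()}
```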
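Finally, to illustrate the “principles guide how it responds” idea from the Anthropic item, here is a toy critique-and-revise loop loosely in the spirit of constitutional approaches. The `ask_model` helper and the prompt templates are assumptions for illustration; Anthropic’s actual method applies this kind of feedback during training, not as a runtime wrapper like this.

```python
# Toy sketch of principle-guided revision: critique a draft answer against
# written principles, then revise it. `ask_model` is an assumed helper for
# whatever LLM you have access to; this is an illustration of the idea,
# not Anthropic's training pipeline.

PRINCIPLES = [
    "Do not produce sexual content about real, identifiable people.",
    "Refuse requests that facilitate harassment or exploitation.",
    "Prefer responses that are honest about uncertainty.",
]


def ask_model(prompt: str) -> str:
    """Placeholder for a call to a language model."""
    raise NotImplementedError("plug in your model client here")


def principled_answer(user_request: str) -> str:
    draft = ask_model(user_request)

    # Critique the draft against each principle, then revise once.
    principle_list = "\n".join(f"- {p}" for p in PRINCIPLES)
    critique = ask_model(
        f"Principles:\n{principle_list}\n\n"
        f"Draft response:\n{draft}\n\n"
        "List any ways the draft violates the principles."
    )
    revised = ask_model(
        f"Rewrite the draft so it follows the principles.\n"
        f"Critique:\n{critique}\n\nDraft:\n{draft}"
    )
    return revised
```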
Although these are not paragons of people-first AI frameworks (especially since Microsoft has publicly committed to strong responsibility standards while its partner OpenAI doesn’t hold itself to the same bar), they do offer a glimmer of hope for a safer future.
Good news, everyone! I'm now partnering with Bookshop.org to bring you recommendations based on books I'm reading on this issue's topics. That means I earn a commission if you click through and make a purchase, and your purchases support local bookstores! Bookshop.org has raised over $40M for independent book stores around the world!
How Bookshop.org purchases help local bookstores
Take a look at the new reads this week, all available on Bookshop.org