All the AI, None of the Dystopia?


Code & Conscience #014

In this Issue

Is it possible to keep developing AI without harming the environment and society? I think it is, and I share how. Keep reading!

All the AI, None of the Dystopia?

Change is the only constant in life, but it’s hard. It requires discomfort, a shift from the norm, and a battle with resistance. At the start of the year, you probably vowed to change your sedentary lifestyle. Leave your workstation more often, go for runs, maybe even sign up at the gym. Then, after your muscles ached in protest, you went straight back to your comfort zone. Does this sound like you? Don’t worry, I’m not here to judge, lol!

The point I’m trying to make is, change is difficult! Not just in our personal lives, but in everything. Even the shifts from handwritten letters to email to mobile phones (that we’re all now addicted to) were heavily resisted. Some scholars even warned that the advent of telephones would turn us into heaps of jelly. Well, it’s been well over a century since then, and I’ve yet to spot a human jello!

Today, we're seeing this play out again with AI. There are plenty of fortune tellers saying it’ll usher in a better world, and doomsday prophets claiming it’ll bring about the end of ours. Either side could be right, but what we build with AI, and how we build it, will determine the answer.

Accepting that AI has already brought irreversible change leads us to my question of the day: How do we build without the dystopia?

Building Harm-less AI

There must be a way to build sustainably, and here’s how I think we can do that:

1. Decentralized AI: Decentralization involves transferring control and decision-making from a single authority to multiple entities. A classic example is blockchain technology, where the control of transactions isn’t determined by one central authority (like a bank) but by a network of computers operating transparently and securely.

In AI, decentralization looks like having smaller data centers located closer to where data is generated, rather than massive centralized ones operating in silos as we have today. This new approach would reshape energy consumption patterns and potentially reduce reliance on large-scale, energy-intensive infrastructure.

One of the most practical applications of this is federated learning. Instead of transmitting vast volumes of raw data to a central location for training, models are trained locally on devices, and only the model updates are aggregated centrally.
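To make the idea concrete, here’s a minimal sketch of federated averaging (FedAvg) using a toy linear model and plain Python lists as weights. This is an illustration of the concept, not production code; real frameworks like Flower or TensorFlow Federated layer secure aggregation, client sampling, and privacy protections on top of this same loop.

```python
# A toy FedAvg round: each "device" trains on its own private data,
# and only the resulting model weights travel to the server.

def local_update(weights, local_data, lr=0.1):
    """On-device training: one gradient step of least squares on local data."""
    grads = [0.0] * len(weights)
    for x, y in local_data:
        pred = sum(w * xi for w, xi in zip(weights, x))
        err = pred - y
        for i, xi in enumerate(x):
            grads[i] += 2 * err * xi / len(local_data)
    return [w - lr * g for w, g in zip(weights, grads)]

def federated_round(global_weights, device_datasets):
    """Each device trains locally; the server only averages the updates."""
    local_models = [local_update(global_weights, d) for d in device_datasets]
    # Aggregation step — raw data never leaves the devices.
    return [sum(ws) / len(ws) for ws in zip(*local_models)]

# Three "devices", each holding a private sample of the rule y = 2x.
devices = [[([1.0], 2.0)], [([2.0], 4.0)], [([3.0], 6.0)]]
weights = [0.0]
for _ in range(50):
    weights = federated_round(weights, devices)
print(round(weights[0], 2))  # converges toward 2.0
```

Notice that the server never sees any `(x, y)` pair, yet the shared model still learns the pattern spread across all three devices — that’s the trust and data-transfer win in one picture.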

The benefits of Decentralized AI include reduced energy consumption for data transfer, increased trust, and the empowerment of smaller, globally distributed innovators — rather than AI's promise and capabilities being controlled by a select few billionaires and big tech companies.

2. Low-Resource Language Models: These are AI models designed for languages that have limited digital resources, such as text corpora, annotated datasets, and speech data. They can address the imbalance caused by the overrepresentation of dominant languages like English and Chinese in existing AI systems.

Small Language Models (SLMs), on the other hand, challenge the conventional belief that effectiveness in AI requires billions of parameters. Instead, they are built to perform efficiently with limited data and computational resources.

The development of African multilingual SLMs like InkubaLM (a 0.4-billion-parameter model) shows this important shift. InkubaLM was trained on five African languages: Swahili, Hausa, Yoruba, isiZulu, and isiXhosa.

SLMs are particularly valuable in resource-constrained environments where access to high-end infrastructure is limited. Models like InkubaLM are easier to refine, fine-tune, and deploy cost-effectively on modest hardware, enabling offline accessibility in areas with unreliable or expensive internet connectivity.
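A quick back-of-the-envelope calculation (my own arithmetic, not figures from the InkubaLM paper) shows why a 0.4-billion-parameter model is a different deployment story from a frontier-scale one: the memory needed just to hold the weights shrinks with both parameter count and numeric precision.

```python
# Approximate memory to hold a model's weights at common precisions.
# Excludes activations and KV cache, so treat these as floor estimates.

BYTES_PER_PARAM = {"fp32": 4, "fp16": 2, "int8": 1, "int4": 0.5}

def weight_memory_gb(num_params, precision):
    """Rough RAM/VRAM needed for the weights alone, in gibibytes."""
    return num_params * BYTES_PER_PARAM[precision] / 1024**3

params = 0.4e9  # InkubaLM-0.4B
for p in ("fp32", "fp16", "int8", "int4"):
    print(f"{p}: {weight_memory_gb(params, p):.2f} GB")
```

At full fp32 precision the weights fit in roughly 1.5 GB, and an int4-quantized copy in well under half a gigabyte — comfortably within an ordinary laptop or even a phone, which is exactly what makes offline, on-device deployment plausible.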

While the carbon footprint of training large language models is enormous, the InkubaLM-0.4B training run produced an estimated 53.76 kg of CO₂-equivalent emissions — orders of magnitude less than what large models require. By minimizing model size and computational demands, SLMs promote sustainability and innovation in regions with limited infrastructure.

Moreover, open-source SLMs empower communities to adapt AI systems for local languages, cultures, and contexts. This fosters digital sovereignty and technological autonomy, reducing dependence on closed, foreign-controlled platforms.

In conclusion, building AI without the dystopia is entirely possible, and we have the tools to move in the direction of an "ustopia". Decentralized AI, Low-Resource Language Models, and Small Language Models show us the way. All that’s left is for us to implement them at scale.

Around the Web

▶️ Why Build an AI that's Smarter than Humans? by Joseph Gordon-Levitt

▶️ Open Small AI Toolbox: For Performance, Privacy, and the Planet by Erica Stanley

▶️ Scaling Down to Scale Up: Small Language Models for Large Scale Educational Impact by Anindita Banerjee and Abhijit Roy

▶️ The Ultimate Guide to Local AI and AI Agents by Cole Medin

​📖 How Can Average People Contribute to AI Safety? by Stephen McAleese

​📌 Attend Small Data SF Conference

Good news, everyone! I'm now partnering with Bookshop.org to bring you recommendations based on books I'm reading on this issue's topics. That means I earn a commission if you click through and make a purchase, and your purchases support local bookstores! Bookshop.org has raised over $40M for independent bookstores around the world!

Take a look at the new reads this week, all available on Bookshop.org.

📖 The Developer's Playbook for Large Language Model Security: Building Secure AI Applications by Steve Wilson

📖 Small Language Models in Production by Talia Graham

Erica Stanley

Engineering Leader, Community Builder, Speaker, Contributor

Code & Conscience is my way of thinking out loud with friends (that’s you, btw) about how we as technologists–builders, creators, leaders–impact the world around us with what we choose to build and how we build it.
