All the AI, None of the Dystopia?


Code & Conscience #014

In this Issue

Is it possible to keep developing AI without harming the environment or society? I think it is, and in this issue I share how. Keep reading!

All the AI, None of the Dystopia?

Change is the only constant in life, but it’s hard. It requires discomfort, a shift from the norm, and a battle with resistance. At the start of the year, you probably vowed to change your sedentary lifestyle: leave your workstation more often, go for runs, maybe even sign up at the gym. Then, after your muscles ached in protest, you went straight back to your comfort zone. Does this sound like you? Don’t worry, I’m not here to judge, lol!

The point I’m trying to make is that change is difficult! Not just in our personal lives, but in everything. Even the shifts from handwritten letters to email to the mobile phones we’re all now addicted to were heavily resisted. Some scholars even warned that the advent of the telephone would turn us into heaps of jelly. Well, it’s been nearly 150 years since then, and I’ve yet to spot a human jello!

Today, we're seeing this play out again with AI. There are plenty of fortune tellers saying it’ll usher in a better world, and doomsday prophets claiming it’ll end ours. Either side could be right, but what we build with AI, and how we build it, will determine the answer.

Accepting that AI has already brought irreversible change leads us to my question of the day: How do we build without the dystopia?

Building Harm-less AI

There must be a way to build sustainably, and here’s how I think we can do that:

1. Decentralized AI: Decentralization involves transferring control and decision-making from a single authority to multiple entities. A classic example is blockchain technology, where the control of transactions isn’t determined by one central authority (like a bank) but by a network of computers operating transparently and securely.

In AI, decentralization looks like having smaller data centers located closer to where data is generated, rather than massive centralized ones operating in silos as we have today. This new approach would reshape energy consumption patterns and potentially reduce reliance on large-scale, energy-intensive infrastructure.

One of the most practical applications of this is federated learning. Instead of transmitting vast volumes of raw data to a central location for training, models are trained locally on devices, and only the model updates are aggregated centrally.
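To make the idea concrete, here’s a minimal sketch of the federated averaging step at the heart of federated learning. It’s illustrative only: real systems aggregate neural network weights with frameworks built for this, but the core idea is just a weighted average of locally trained parameters, with no raw data ever leaving the device.

```python
# A toy sketch of federated averaging (FedAvg). Each "client" trains a
# model locally on its own data; only the resulting weights travel to
# the server. Weights are plain lists of floats here for simplicity.

def fed_avg(client_weights, client_sizes):
    """Average locally trained weights, weighted by each client's
    dataset size, so clients with more data count proportionally more."""
    total = sum(client_sizes)
    n_params = len(client_weights[0])
    aggregated = []
    for i in range(n_params):
        # Weighted average of parameter i across all clients
        aggregated.append(
            sum(w[i] * size for w, size in zip(client_weights, client_sizes)) / total
        )
    return aggregated

# Three devices trained locally; only their weights reach the server.
updates = [[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]]
sizes = [10, 10, 20]  # the third device has twice as much local data
print(fed_avg(updates, sizes))  # [3.5, 4.5]
```

The server then sends the aggregated weights back to the devices for the next round of local training, which is what lets the global model improve without the raw data ever being centralized.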

The benefits of Decentralized AI include reduced energy consumption for data transfer, increased trust, and the empowerment of smaller, globally distributed innovators — rather than AI's promise and capabilities being controlled by a select few billionaires and big tech companies.

2. Low-Resource Language Models: These are AI models designed for languages that have limited digital resources, such as text corpora, annotated datasets, and speech data. They can address the imbalance caused by the overrepresentation of dominant languages like English and Chinese in existing AI systems.

Small Language Models (SLMs), a closely related approach, challenge the conventional belief that effectiveness in AI requires billions of parameters. Instead, they are built to perform efficiently with limited data and computational resources.

The development of African multilingual SLMs like InkubaLM (a 0.4-billion-parameter model) illustrates this shift. InkubaLM was trained on five African languages: Swahili, Hausa, Yoruba, isiZulu, and isiXhosa.

SLMs are particularly valuable in resource-constrained environments where access to high-end infrastructure is limited. Models like InkubaLM are easier to refine, fine-tune, and deploy cost-effectively on modest hardware, enabling offline accessibility in areas with unreliable or expensive internet connectivity.
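A quick back-of-envelope calculation shows why a 0.4B-parameter model fits on modest hardware. The figures below are rough weight-storage estimates (they ignore activations and runtime overhead), not official InkubaLM numbers:

```python
# Approximate weight-storage footprint of a 0.4B-parameter model at
# different numeric precisions. Illustrative estimates only.

def model_memory_gb(n_params, bits_per_param):
    """Gigabytes needed just to hold the model weights."""
    return n_params * bits_per_param / 8 / 1e9

n = 0.4e9  # 0.4 billion parameters, like InkubaLM
for label, bits in [("fp32", 32), ("fp16", 16), ("int8", 8), ("int4", 4)]:
    print(f"{label}: ~{model_memory_gb(n, bits):.2f} GB")
# fp32: ~1.60 GB
# fp16: ~0.80 GB
# int8: ~0.40 GB
# int4: ~0.20 GB
```

At half precision, the weights fit comfortably in under a gigabyte, which is why a model this size can run offline on an ordinary laptop or even a phone, while a 70B-parameter model at the same precision would need well over a hundred gigabytes.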

While the carbon footprint of large language models is enormous, the InkubaLM-0.4B training run produced an estimated 53.76 kg of CO₂-equivalent emissions, a tiny fraction of what large models emit. By minimizing model size and computational demands, SLMs promote sustainability and innovation in regions with limited infrastructure.

Moreover, open-source SLMs empower communities to adapt AI systems for local languages, cultures, and contexts. This fosters digital sovereignty and technological autonomy, reducing dependence on closed, foreign-controlled platforms.

In conclusion, building AI without the dystopia is very much possible, and we have the tools to move in the direction of an "ustopia". Decentralized AI, Low-Resource Language Models, and Small Language Models show us the way. All that’s left is for us to implement them at scale.

Around the Web

▶️ Why Build an AI that's Smarter than Humans? by Joseph Gordon-Levitt

▶️ Open Small AI Toolbox: For Performance, Privacy, and the Planet by Erica Stanley

▶️ Scaling Down to Scale Up: Small Language Models for Large Scale Educational Impact by Anindita Banerjee and Abhijit Roy

▶️ The Ultimate Guide to Local AI and AI Agents by Cole Medin

​📖 How Can Average People Contribute to AI Safety? by Stephen McAleese

​📌 Attend Small Data SF Conference

Good news, everyone! I'm now partnering with Bookshop.org to bring you recommendations based on books I'm reading on this issue's topics. That means I earn a commission if you click through and make a purchase, and your purchases support local bookstores! Bookshop.org has raised over $40M for independent bookstores around the world!

Take a look at the new reads this week, all available on Bookshop.org.

📖 The Developer's Playbook for Large Language Model Security: Building Secure AI Applications by Steve Wilson

📖 Small Language Models in Production by Talia Graham

Erica Stanley

Engineering Leader, Community Builder, Speaker, Contributor

Code & Conscience is my way of thinking out loud with friends (that’s you, btw) about how we as technologists–builders, creators, leaders–impact the world around us with what we choose to build and how we build it.


Read more from Code & Conscience