This is my way of thinking out loud with friends (that’s you, btw) about how we as technologists–builders, creators, leaders–impact the world around us with what we choose to build and how we build it.
Tech moves very fast, especially with AI running the show! But should we trade some of that speed for caution? Is the "move fast and break things" philosophy still relevant, or is it time to rethink it?
Move Fast and Break...Society?
Credit: Xinfang on Adobe Stock
"Move fast and break things". We've all heard this tech philosophy that prioritizes rapid development and deployment over cautious, deliberate innovation. The idea was to get a product out quickly and fix issues later. While this approach helped companies like Meta (formerly Facebook) quickly dominate the market, its negative societal impacts have become too harmful and more widespread.
I'll be honest, this philosophy has helped shape the world we live in. Many products would never have made it to the limelight if we had waited for perfection before releasing them to the public. However, a focus on speed without a counterbalance of responsibility can cause real damage. In fact, I dare say it already is.
A similar phrase that reflects this philosophy is "If there is nothing wrong with your MVP, you’ve launched too late." But it is becoming glaringly clear that a reckless, high-speed approach to innovation is unsustainable and, frankly, dangerous. In today's world, where an adolescent can whip up an app in a few minutes with no safety or ethical measures, we must do better, and here’s why:
It Fosters a Lack of Accountability: The absence of a clear line of accountability for negative outcomes is snowballing into a big problem in tech. When something goes wrong (a data breach, an algorithmic error, or a platform that spreads hate speech), the companies responsible often face no meaningful consequences. Products are unveiled fast, but regulation is slow and consequences are nonexistent.
It's Unsuitable for High-Stakes Systems: What may be a minor bug in a social media app can be catastrophic in a medical app. The "move fast and break things" ethos is completely inappropriate for technologies that have a direct impact on people's lives and livelihoods.
It Deepens Social Inequality: AI is creating massive workforce displacement without giving people time to adapt. According to a Brookings Institution commentary, current worker retraining programs are struggling to keep pace, leaving many behind and increasing the risk of a widening societal divide. The focus on speed can end up prioritizing a small group of creators while leaving the rest of the world to deal with the consequences.
It Puts Widely Used Tools at Risk: This mindset might be acceptable for an early-stage startup, but it is dangerous when applied to systems that millions, or even billions, of people depend on. The constant pressure to push out new features can leave those systems in a state of perpetual instability.
Good news, everyone! I'm now partnering with Bookshop.org to bring you recommendations based on books I'm reading on this issue's topics. That means I earn a commission if you click through and make a purchase, and your purchases support local bookstores! Bookshop.org has raised over $40M for independent bookstores around the world!
How Bookshop.org purchases help local bookstores
Take a look at the new reads this week, all available on Bookshop.org