This is my way of thinking out loud with friends (that’s you, btw) about how we as technologists–builders, creators, leaders–impact the world around us with what we choose to build and how we build it.
In this issue, I talk about how AI is now influencing foreign policy. From the public fallout between Anthropic and the U.S. Government, to Israel using AI to identify 37,000 targets in Gaza, to data centers becoming important non-human targets in the global power struggle. I also share practical ways technologists can navigate the unique challenges AI brings.
The Uncanny Valley of AI Foreign Policy
By Alex on Adobe Stock
If you’ve been following the news lately, I'm sure you've noticed how technology has become a more active participant in foreign policy.
This isn't entirely new, of course. Guns, missiles, and nuclear weapons are all technological innovations. Even personally, I remember, roughly 20 years ago in the research phase of my career, turning down or declining to apply for defense grants where my work could be used in war. However, we are experiencing something unique with AI.
For many people, AI’s involvement in foreign policy only recently gained attention with the public fallout between Anthropic and the U.S. Government. The Pentagon demanded unrestricted access to Claude (Anthropic's AI model) for use in fully autonomous weapons and mass domestic surveillance. Anthropic vehemently declined. The government crashed out in response, labeling Anthropic a supply-chain risk to national security.
But sadly, this isn’t the first time AI has taken center stage in decisions like this. Israel has used AI to identify 37,000 targets in Gaza, and sources claim that it was granted permission to kill civilians in pursuit of low-ranking militants. We are now living in perilous times where our leaders have allowed AI to determine who lives or dies, and which places live in peace or war!
On the other side of foreign policy, we’re seeing governments pushing back against the big tech companies themselves. Earlier this year, French authorities started looking into issues connected to X (Twitter) and its AI system Grok, with police even raiding offices tied to the investigation.
Last month, Canada’s privacy commissioner, Philippe Dufresne, said he is expanding an investigation into X after reports that Grok was being used to create and spread explicit images of people without their consent.
Another side of tech’s influence in foreign policy that doesn't often make headlines is hardware: chips and data centers. In December 2025, the U.S. initiated Pax Silica, an alliance to set a political and material perimeter around the production and distribution of semiconductors, critical minerals, and AI. World powers gathered to make rules on the future of compute. It’s little wonder Iran targeted data centers in retaliation for U.S. strikes; data centers themselves have become important instruments of global power.
Leadership Lessons: Navigating the New Era of Silicon Statecraft
When our products begin to play a role in global politics, the impact of our decisions reaches far beyond tech and business; they also carry ethical and political weight. So how do technologists navigate this?
1. Cement Your Values Before They're Tested

Although Anthropic should maybe not be the poster child of AI safety (because of the skeletons hiding in their closet), its preemptive approach of saying no before the crisis hit is worth commending. More tech leaders should set guardrails that cement their values before challenges come to test them.
2. Stakeholder Mapping is Now a Governance Tool
The old saying "the customer is always right in matters of taste" meant you don't argue with what someone wants. But today, your customers (e.g., governments, military bodies, powerful companies, and individuals) are no longer just consumers; they are de facto stakeholders who can redirect your product.
So before you onboard a highly influential new customer, or an investor turned external stakeholder, ask yourself and your team: How could their influence drive our development? What happens if that influence clashes with our values? Are they good for us long term?
In my opinion, this is one of the ways Anthropic was pulled further from its initial values. Failing to anticipate how an external stakeholder like the U.S. government, especially the U.S. government under the Trump administration, would eventually try to use its product created the perfect storm for the supply-chain-risk labeling.
This is a lesson most entrepreneurs learn early in their journeys. You have to choose your customers wisely because “all money isn’t good money”, especially when it comes with unacceptable risks.
3. Prepare for "The Off-boarding Paradox"
The more powerful your technology becomes, the harder it is to remove a client who is misusing it. When a powerful institution embeds your technology into critical operations, off-boarding them might become a herculean task. Before a powerful client gets too deep into your infrastructure, ask: What does off-boarding look like if this goes wrong?
4. Accountability for Human Rights Violations
“A computer can never be held accountable, therefore a computer must never make a management decision.”
– IBM Training Manual, 1979
Nearly half a century ago, IBM warned us about humans, machines, and the ownership of management decisions. War scenarios, where many lives may hang in the balance, are the ultimate management decisions. As AI becomes a tool of foreign policy, the humans who deploy it remain the only ones who can be held accountable. That responsibility cannot be outsourced to an algorithm, and tech leaders will be answerable for the consequences.
5. Communication is Your Most Potent Defense
The platforms where most conversations happen today are owned by the same billionaires with questionable intentions. For example, Elon Musk owns X, and since his takeover, right-wing content has visibly surged on the platform. The algorithm reflects the values of whoever is at the top. Another example is the saga between Facebook and Cambridge Analytica years ago, which also showed us how social media can shape political opinions. Meta is even more of a menace today. We now need to find ways to effectively communicate with our consumers in a way that allows us to reach them and own our narrative.
Where Do We Draw the Line?
We aren’t building in a vacuum, and our design decisions now have a seat at the geopolitical table. The examples of France, Israel, and the Pentagon prove that technology has moved beyond being a product to being a form of power.
But as we’ve seen, power without a strategy is a recipe for disaster. As builders, creators, and leaders, we have to anticipate and ask ourselves the hard questions before causing harm at scale.
Conversations on difficult topics like this one are important, but action steps are even more important. Let’s think out loud together:
Should tech companies say no to clients, even governments? If a contract compromises your values, do you sign and figure it out later?
Is company policy just a reflection of its leader's values? For example, Sam Altman started OpenAI as a nonprofit. Now the company is exploring ads inside ChatGPT.
How do we build ethics into our products from day 1?
What are efficient ways for employees to push back against unethical company values/actions?
Join the Conversation
Drop a comment on the YouTube version of this issue or reply on my socials. I read every single one. And if this issue made you think, forward it to a fellow technologist who is navigating these same waters. The future of AI belongs to everyone willing to talk about it honestly.
Good news, everyone! I'm now partnering with Bookshop.org to bring you recommendations based on books I'm reading on this issue's topics. That means I earn a commission if you click through and make a purchase, and your purchases support local bookstores! Bookshop.org has raised over $40M for independent bookstores around the world!
How Bookshop.org purchases help local bookstores
Take a look at the new reads this week, all available on Bookshop.org
Engineering Leader, Community Builder, Speaker, Contributor
Code & Conscience is my way of thinking out loud with friends (that’s you, btw) about how we as technologists–builders, creators, leaders–impact the world around us with what we choose to build and how we build it.