China just held the largest military parade in its history. Xi Jinping, Putin, and Kim Jong-un stood together on the same stage, three men who collectively control a significant portion of the world's nuclear arsenal, and the centerpiece wasn't tanks or troops. It was AI-powered weaponry. Hypersonic missiles guided by machine learning. Autonomous drones that can identify and engage targets without a human in the loop. Surveillance systems that can track a face through a city of twenty million people. I'm watching this from my living room, the same room where I build AI systems that help businesses run better, and I'm thinking: we are building the same underlying technology. The math is the same. The architectures are the same. The difference is what you point it at.
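To make that concrete, here's a minimal sketch of the dual-use point (PyTorch, with hypothetical names like `invoice_sorter` and `target_flagger`; this is an illustration, not any real system). The backbone below has no idea what its labels mean. The only task-specific parts are a final linear layer and whatever dataset you choose to train on.

```python
import torch
import torch.nn as nn

class GenericClassifier(nn.Module):
    """A tiny convolutional backbone plus a linear head.
    Deliberately task-agnostic: nothing here encodes what the classes mean."""
    def __init__(self, num_classes: int):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 32, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.head = nn.Linear(64, num_classes)  # the only task-specific layer

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.head(self.backbone(x))

# Two "different" systems. Same math, same architecture, same training loop.
invoice_sorter = GenericClassifier(num_classes=5)  # hypothetical: invoice, receipt, ...
target_flagger = GenericClassifier(num_classes=2)  # hypothetical: person / not-person

# Reusing the business model's learned features for the darker task is a
# single load_state_dict call -- this is what "retraining" means in practice.
target_flagger.backbone.load_state_dict(invoice_sorter.backbone.state_dict())

# One optimization step, identical for either model. Random tensors stand in
# for whatever data you decide to point the model at.
images = torch.randn(8, 3, 64, 64)
labels = torch.randint(0, 2, (8,))
optimizer = torch.optim.Adam(target_flagger.parameters(), lr=1e-3)
loss = nn.functional.cross_entropy(target_flagger(images), labels)
loss.backward()
optimizer.step()
print(f"loss after one step: {loss.item():.3f}")
```

Nothing in that training loop knows what the labels stand for. The intent lives entirely in the dataset.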
And that's the question nobody wants to answer honestly. When AI becomes a weapon, and it already has, whose job is it to draw the line?

The AI companies? They built it. They understand the capabilities better than any senator or general. Anthropic has its Responsible Scaling Policy. Google declined to renew its Project Maven contract in 2018 after thousands of employees revolted. But AI companies are businesses. The moment one says "we won't do defense contracts," another will take the contract. You can't run a company on principles if the company doesn't exist.

The government? It could regulate, pass laws, draft a Geneva Convention for AI. Except governments are slow and AI is fast. By the time Congress understands transformers, the technology will have moved three generations past whatever they're trying to regulate. And the US Department of Defense is one of the biggest AI customers on the planet; you can't ask the government to regulate AI weapons while it's simultaneously buying and building them.

Then there's us: the engineers, the researchers, the builders. Oppenheimer's "I am become death" isn't just a quote. It's a warning from a man who understood too late that building something doesn't mean you control what it becomes. "I just built the tool, I didn't choose how it was used" stopped being an acceptable answer somewhere around Hiroshima.
The honest truth is that I don't have a clean answer, and I'm suspicious of anyone who does. The companies can't self-regulate because capitalism doesn't work that way. The governments can't regulate intelligently because they don't understand the technology and have their own agendas. Individual engineers can walk away, but someone else will fill the seat. What I do know is this: the conversation needs to be louder. We're arguing about whether AI will take your customer service job while actual AI-powered weapons are being paraded through Beijing. The stakes aren't productivity. The stakes are existential. And we're treating this like a product launch.
I build AI because I believe it can make life better. I've seen it automate the tedious, accelerate the creative, and unlock things that weren't possible five years ago. I still believe that. But I'd be lying if I said that watching that parade didn't shake something loose in me. The same neural network architecture that powers the chatbot helping you write an email can, trained on different data, learn to flag human targets in a drone feed. That's not science fiction. That's September 2025. So who's the adult in the room? Right now, nobody. And that should terrify every single one of us, whether you're building AI, regulating AI, or just using it to plan your dinner. Because the technology doesn't care about your intentions. It just does what it's told. And right now, not enough people are asking who's doing the telling.