Why AI Needs Independent Oversight
The New York Times Just Made Our Case
It’s not every day that the New York Times publishes an article that makes a compelling pitch for the existence of your new company. But that’s exactly what happened today. In a new piece about how AI is advancing and changing, the Times argues that the United States won’t be able to trust AI that comes out of China, and China won’t be able to trust ours, without some kind of third-party checks or guardrails. Highly customizable guardrails for AI are the whole pitch for Maybe Don’t AI.
In today’s piece, Thomas L. Friedman argues that without proper oversight mechanisms, we’re heading toward a dangerous fragmentation of the global economy. He warns: “In short, as I will argue, if we cannot trust A.I.-infused products from China and it can’t trust ours, very soon the only item China will dare buy from America will be soybeans and the only thing we will dare buy from China is soy sauce, which will surely sap global growth.”
This economic apocalypse scenario isn’t just about geopolitics; it’s about a fundamental trust crisis that extends far beyond international relations. Third-party checks are going to be essential to AI usage going forward, and not just at the country level: they’ll matter at the company level and at the personal level too.
Friedman crystallizes the core problem with what he calls “quadruple-use” technology: “This new addition to the dinner table is no ordinary guest. A.I. will also become what I call the world’s first quadruple-use technology. We have long been familiar with dual-use — I can use a hammer to help build my neighbor’s house or smash it apart. I can even use an A.I. robot to mow my lawn or tear up my neighbor’s lawn. That’s all dual use. But given the pace of A.I. innovation, it is increasingly likely that in the not-so-distant future my A.I.-enabled robot will be able to decide on its own whether to mow my lawn or tear up my neighbor’s lawn or maybe tear up my lawn, too — or perhaps something worse that we can’t even imagine. Presto! Quadruple use.”
The Very Real Autonomy Problem
This autonomy problem isn’t theoretical—it’s happening now. Every time you interact with an AI system, you’re trusting it to act in your best interest. But how do you verify that trust? How do you know the AI isn’t optimizing for something else entirely?
This is precisely why Maybe Don’t AI exists. We’ve built a non-deterministic policy engine that continuously checks AI behavior. Our solution doesn’t just flag potential problems—it actively ensures AI systems remain aligned with what users actually want, not what the AI thinks they want.
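To make that concrete, here’s a minimal sketch, in Python, of what a third-party policy check can look like. This isn’t Maybe Don’t AI’s actual engine or API; the names (ProposedAction, Policy, check_action) and the guardrails are illustrative assumptions. The idea is simply that an AI system’s proposed action gets screened against rules the user defines before it’s allowed to run.

```python
# Illustrative sketch only -- not Maybe Don't AI's actual API.
# A proposed AI action is screened by user-defined policies before it runs.
from dataclasses import dataclass, field
from typing import Callable

@dataclass
class ProposedAction:
    """What the AI system intends to do, described before execution."""
    tool: str                      # e.g. "mower", "email", "file_delete"
    target: str                    # what the action touches
    metadata: dict = field(default_factory=dict)

@dataclass
class Policy:
    """A single user-defined guardrail: a name plus a predicate."""
    name: str
    allows: Callable[[ProposedAction], bool]

def check_action(action: ProposedAction, policies: list[Policy]) -> tuple[bool, list[str]]:
    """Return (allowed, violated policy names). The action proceeds only if every policy allows it."""
    violations = [p.name for p in policies if not p.allows(action)]
    return (len(violations) == 0, violations)

# Guardrails the *user* defines, independent of the AI vendor.
policies = [
    Policy("stay-on-my-property", lambda a: a.tool != "mower" or a.target == "my_lawn"),
    Policy("no-destructive-file-ops", lambda a: a.tool != "file_delete"),
]

# The AI proposes an action; the check runs before anything happens.
proposal = ProposedAction(tool="mower", target="neighbors_lawn")
allowed, violated = check_action(proposal, policies)
if not allowed:
    print(f"Blocked: violates {violated}")   # Blocked: violates ['stay-on-my-property']
```

The design point that matters here is that the guardrails live outside the AI system and belong to the user, which is exactly the kind of independent check the trust problem above calls for.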
The trust crisis Friedman describes won’t be solved by government regulations alone. It requires practical, deployable solutions that work at every level of AI interaction. Companies deploying AI systems need independent verification. Governments implementing AI policies need oversight mechanisms. Individuals using AI tools need assurance that their digital assistants won’t become their digital adversaries.
Time to Act
The window for proactive AI oversight is narrowing rapidly. Download Maybe Don’t AI today and start implementing the kind of third-party verification systems that will become essential infrastructure tomorrow.
Don’t wait for the trust crisis to arrive—prevent it.