This article serves as a follow-up to my previous analysis, 'The Distillation Barrier,' published a couple of weeks ago. In that article, I explored the increasing limitations of model distillation and how adversarial nations could leverage open-source AI to narrow the technological gap with the United States. Expanding on those ideas, this article examines the broader implications of AI governance, the effectiveness of export controls, and the long-term consequences of open-source AI in the race for Artificial General Intelligence (AGI).
Artificial intelligence is no longer merely a technological breakthrough; it has become a defining factor in national security, economic dominance, and global power. Nations that lead in AI development will shape the future, while those that fall behind will face significant disadvantages. The growing tension between commercial AI development and national security concerns has reached a critical point, forcing policymakers and industry leaders to reconsider how AI is developed, deployed, and safeguarded (see OpenAI's recent call to the U.S. Administration for a ban on DeepSeek within the United States).
This issue demands a nuanced and deliberate approach, as highlighted by three key sources: Superintelligence Strategy: Expert Version by Dan Hendrycks, Eric Schmidt, and Alexandr Wang, 'The Government Knows AGI is Coming' podcast on The Ezra Klein Show, and my own research in The Distillation Barrier. These perspectives underscore the urgent need to rethink AI governance, particularly regarding export controls, AI proliferation risks, and the consequences of open-source AI.
Artificial Intelligence as a National Security Imperative:
The authors of Superintelligence Strategy argue that artificial intelligence is a dual-use technology with far-reaching implications for national security. AI has the potential to transform economies, reshape global power structures, and serve as a profound strategic advantage. Recognizing these stakes, they introduce the concept of Mutual Assured AI Malfunction (MAIM), a doctrine in which nations acknowledge that an unrestrained AI arms race could lead to catastrophic consequences. Just as nuclear deterrence during the Cold War relied on the understanding that any attack would trigger a devastating response, AI superpowers must recognize that an unchecked pursuit of AI supremacy could invite sabotage or retaliation.
To mitigate these risks, the authors advocate three core strategies: deterrence through MAIM, strict nonproliferation measures to prevent the spread of critical AI technologies, and the development of a robust domestic AI infrastructure. Maintaining an advantage in AI depends not only on controlling who has access to advanced AI models but also on securing the supply chains that power AI development.
The Commercial Imperative and National Security Conflicts:
While governments debate AI security, the private sector is advancing AI capabilities at an unprecedented pace. The Ezra Klein Show podcast explores the widening divide between commercial interests and national security priorities. In the United States and other Western nations, AI companies prioritize open innovation, arguing that widespread AI adoption accelerates progress and fuels economic growth. However, this openness presents considerable risks.
The podcast highlights how adversarial nations, particularly China, are actively exploiting Western AI research to close the technological gap. By allowing AI models to be openly distributed, the United States risks providing its competitors with the tools necessary to challenge its technological dominance. The situation is particularly concerning because commercial AI firms have little incentive to prioritize national security. Their primary objective is market expansion, not geopolitical strategy. This misalignment between private sector ambitions and national security priorities creates a significant vulnerability.
The Limits of Distillation and China’s Strategy of Misdirection:
My research in The Distillation Barrier highlights another critical vulnerability: the challenge of model distillation, which involves compressing large AI models into smaller, more efficient versions without significant loss of capability. While distillation has allowed smaller and less advanced AI models to achieve near parity with more advanced models, the increasing complexity of models such as GPT-4.5, GPT-5, and beyond is making effective distillation increasingly difficult. This presents a natural limitation: adversaries that rely on distilling public models will always lag behind those who control the most advanced architectures. Recognizing this, they are likely to resort to more extreme measures, such as corporate espionage and cyberattacks, to keep the gap between their most advanced AI capabilities and ours from widening further.
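To make the distillation mechanism concrete, here is a minimal sketch of the core idea: a student model is trained to match the teacher's temperature-softened output distribution by minimizing the KL divergence between the two. This is a conceptual illustration of classic knowledge distillation (in the style of Hinton et al.), not the pipeline of any particular lab; the logit values and temperature are illustrative assumptions.

```python
import numpy as np

def softmax(logits, temperature=1.0):
    """Convert logits to a probability distribution, softened by temperature."""
    z = np.asarray(logits, dtype=float) / temperature
    z = z - z.max()  # subtract max for numerical stability
    e = np.exp(z)
    return e / e.sum()

def distillation_loss(teacher_logits, student_logits, temperature=2.0):
    """KL divergence between the teacher's soft targets and the student's
    predictions; this is the term a student minimizes during distillation."""
    p = softmax(teacher_logits, temperature)  # teacher's soft targets
    q = softmax(student_logits, temperature)  # student's current predictions
    return float(np.sum(p * (np.log(p) - np.log(q))))

# A student whose logits already match the teacher's incurs (near) zero loss;
# a mismatched student incurs a positive loss it must train to reduce.
teacher = [4.0, 1.0, 0.5]
matched = distillation_loss(teacher, [4.0, 1.0, 0.5])
mismatched = distillation_loss(teacher, [0.5, 1.0, 4.0])
```

The strategic point follows directly from this setup: the student can only ever imitate distributions the teacher exposes, which is why distilling from public models leaves an adversary bounded by, and behind, the frontier models it cannot access.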
Despite this reality, China has sought to downplay the impact of U.S. restrictions as part of a broader strategy of misdirection. The recent unveiling of DeepSeek’s R1 model was presented as evidence that China is rapidly closing the AI gap, despite the country being cut off from high-performance AI chips by U.S. export controls. This follows a well-documented strategy echoing the philosophy of the ancient Chinese military strategist Sun Tzu: mislead opponents about one’s true capabilities while quietly pursuing technological breakthroughs in secret.
By overstating DeepSeek’s advancements, China seeks to create the illusion that U.S. chip restrictions have had little effect. This narrative serves two purposes. First, it pressures U.S. policymakers to doubt the effectiveness of export controls. Second, it reassures China’s domestic industry that the country remains competitive. The reality, however, is far more complex. While China has made notable progress, it remains heavily reliant on Western semiconductor innovations, and its efforts to develop homegrown alternatives have yet to reach parity with American firms. The U.S. chip ban has slowed China’s AI development, but the long-term success of these restrictions depends on the continued protection of AI models themselves, not just hardware.
Artificial General Intelligence, Recursive Self-Improvement, and the First-Mover Advantage:
A crucial but often overlooked aspect of AI competition is recursive self-improvement, where AI systems enhance their own intelligence at an accelerating rate. The first nation to develop strong Artificial General Intelligence (AGI) or Artificial Superintelligence (ASI) will gain a permanent technological advantage, as its AI will continuously refine itself, making it impossible for competitors to catch up.
The Superintelligence Strategy report does not explicitly discuss recursive improvement, but it strongly implies that first-mover advantage in strong AGI is irreversible. (I argue that we have already achieved some level of AGI, though not yet a system that could improve itself, even with our guidance. Moreover, we will likely need strong AGI before we can achieve ASI, which I believe is nearly impossible for humans to reach on their own.) A nation that reaches AGI first can automate research, accelerate economic productivity, and solidify global influence before others have the chance to respond. The Ezra Klein Show podcast warns that commercial AI firms, if left unchecked, could inadvertently release AGI into the public domain, eliminating the U.S. strategic advantage. My research in The Distillation Barrier supports the argument that AGI cannot be distilled from lesser models, meaning that once one nation reaches this threshold, others will not be able to replicate it quickly.
This has profound implications for the open-source versus closed-source debate. While open-source AI fosters innovation, it allows rivals to close the gap quickly at a fraction of the cost. The United States risks willingly forfeiting its advantages and being surpassed, even left behind, if a rival achieves strong AGI first and builds an insurmountable lead that, ironically, the U.S. could never close (much as gunpowder weapons, invented by the Chinese, were soon mastered and out-innovated by the Europeans). The conclusion is clear: critical AI advancements must remain protected if the U.S. intends to maintain global leadership.
A Strategic Path Forward:
The future of AI leadership will be determined by those who act decisively today. The United States must balance commercial AI progress with national security priorities, ensuring that AI development serves long-term strategic interests rather than short-term economic gains. This requires maintaining strict export controls on AI chips, reinforcing model security to prevent adversarial access, and ensuring that AGI breakthroughs remain classified until appropriate safeguards are established. AI leadership is not just about innovation; it is about control. By adopting a disciplined, security-first approach, the United States can ensure that AI remains a force for national strength and that it continues to dominate well into the future.