1. Introduction
The analogy of AI to nuclear fission, or more specifically to nuclear weapons, is popular and has been used by a wide range of tech CEOs and thought leaders. This text first provides an overview of how the analogy has been used and then examines 7 commonalities and 7 differences between the two domains.
1.1 Examples of use
Elon Musk has repeatedly compared the danger of advanced AI to that of nuclear weapons, arguing that there is a need for more oversight and regulation (2014, 2018, 2023, 2023, 2023, 2023).
Sam Altman has repeatedly shared the idea of an equivalent to the International Atomic Energy Agency, an “IAEA for AI” (2023, 2023, 2023, 2023, 2023, 2023, 2023, 2023, 2024, 2024, 2024, 2024), an international regulatory body that helps to audit and verify the safety of future frontier AI systems. This corresponds to what the OpenAI leadership team has communicated in writing and to research commissioned by OpenAI’s policy team. Altman also once mentioned the nuclear analogy in the context of avoiding an arms race (2017).
Eric Schmidt has used the nuclear analogy to emphasize the power and misuse potential of the technology. He has stressed the need for a containment regime as well as a new military strategy (akin to mutually assured destruction) (2021, 2021, 2021, 2022, 2022, 2022, 2023, 2024).
Max Tegmark has used the analogy to nuclear war to argue that we need to get superintelligence safety right the first time. We cannot afford to learn from mistakes, as there may not be a second chance (2017, 2018, 2018, 2018, 2018, 2023). Tegmark also once used it as an example of arms control (2023).
Eliezer Yudkowsky has used the nuclear analogy in various contexts including technological surprise (2018), secrecy (2023)1, disarmament (2023), and non-proliferation (2023). However, his favorite analogy is that AI is like nuclear bombs that get bigger over time and create gold until at some point they pass the threshold to set the entire atmosphere on fire (2023, 2023, 2023, 2023, 2023).
This list is not intended to be comprehensive. The nuclear-AI analogy is also part of multiple open letters and joint statements signed by a significant share of leading AI decision-makers:
Open Letter on Autonomous Weapons (2015): “Artificial Intelligence (AI) technology has reached a point where the deployment of such systems is — practically if not legally — feasible within years, not decades, and the stakes are high: autonomous weapons have been described as the third revolution in warfare2, after gunpowder and nuclear arms.”
Statement on AI Risk (2023): “Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.”
Managing extreme AI risks amid rapid progress (2024): “Many areas of technology, from pharmaceuticals to financial systems and nuclear energy, show that society requires and effectively uses government oversight to reduce risks. However, governance frameworks for AI are far less developed, lagging behind rapid technological progress.”
1.2 Governance analogies
Subanalogies to specific projects and institutions in the governance of nuclear fission include:
IAEA: As already mentioned, the idea of an IAEA for AI has been put forward quite consistently by OpenAI. The idea will be discussed in greater detail in a separate article.
CERN: The European Organization for Nuclear Research (CERN) has been suggested as a model by various parties for various projects. This is discussed at length in “CERN for AI”.
Manhattan Project: The Manhattan Project refers to the initiative led by the United States to build the atomic bomb from 1942 to 1946. Demis Hassabis, the founder and CEO of Google DeepMind, once described the company as “an Apollo programme, a Manhattan project, in terms of the quality of the people involved -- getting 100 scientists, here from 40 countries, together to work on something visionary and trying to make as fast progress as possible”. Peter Thiel has used this to argue in an NYT op-ed that AI is at its core a military technology and that the US government needs to investigate Google.
In 2023, Alex Karp called for something like a Manhattan Project on AI in an NYT op-ed. In 2024, Leopold Aschenbrenner extensively used the analogy to go even further, arguing that the US government should nationalize AI research and start an all-out arms race with China.
1.3 Political implications
The following are some (un-)intended inferences that those who strongly connect AI to nuclear weapons in their minds are likely to make:
Superintelligence can be controlled: Nuclear weapons are managed through sophisticated command and control systems.
Increased government and military role: There are no privately-owned nuclear weapons, and all nuclear weapons were developed by government programs.
AGI for great power status: Whether correctly or not, certain nations connect nuclear weapons to great power status. The international regime for nuclear non-proliferation and control is a two-tiered regime of haves and have-nots. You can be certain that countries like the US, China, Russia, France, the United Kingdom, India, and Israel will all think to some degree that they need a national “AGI capacity” if they think AI is just like nuclear weapons.
Classification of AI research: Openly shared model weights and nuclear weapons don’t mix well together (see e.g. Geoffrey Hinton (2023, 2024)).
1.4 Literal overlaps
Aside from analogies, there are also some literal overlaps between the nuclear and AI domains:
Computer networks and nuclear war: The first large computer network was researched by the US Air Force in the 1950s to get radar data to decision-makers in the event of a Soviet air attack. ARPANET, the general-purpose computer network that turned into the modern Internet, is also often linked to the idea of resilience in case of a nuclear attack.3
Computing for simulations of nuclear explosions: Computer simulations are crucial for designing nuclear weapons, especially given the Comprehensive Nuclear-Test-Ban Treaty.
AI-tracking of the location of nuclear second-strike forces: Some have suggested that AI may undermine strategic stability by making it easier to detect the location of secure second-strike forces that are meant to survive a first strike and retaliate, specifically mobile land-based launchers and to a lesser extent submarines.4
AI-controlled nuclear weapons: The US National Security Commission on AI recommended that the US clearly and publicly affirm that only human beings can authorize the launch of nuclear weapons and seek similar commitments from China and Russia. Ted Lieu has introduced a congressional bill to that effect, and the US is discussing the matter with China.
Nuclear-powered datacenters: As Dario Amodei quipped, “there was a running joke somewhere that the way building AGI would look like is: There would be a data center next to a nuclear power plant next to a bunker.” Nuclear-powered datacenters are increasingly a reality (Microsoft, Amazon, U.S. Energy Secretary).
2. Commonalities
2.1 Ideas of a chain reaction
Nuclear: The basic idea of a nuclear chain reaction is that an atom splits into two smaller atoms, which also sets free 2-3 neutrons as well as energy. These neutrons can in turn cause this split in more atoms. This creates a self-sustaining, exponential cascade until the process runs out of fissile material.
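The chain-reaction idea can be stated compactly. If each fission triggers on average k further fissions (the multiplication factor), the number of fissions grows geometrically from one neutron generation to the next whenever k exceeds 1:

```latex
% Neutron multiplication factor k: the average number of further fissions
% triggered by one fission. Starting from N_0 fissions, after n generations:
N_n = N_0 \, k^{\,n},
\qquad
\begin{cases}
k < 1 & \text{subcritical: the reaction dies out} \\
k = 1 & \text{critical: steady state, as in a power reactor} \\
k > 1 & \text{supercritical: exponential growth, as in a bomb, until the fissile material runs out}
\end{cases}
```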
AI: The idea of an intelligence explosion, in which an AI iteratively improves itself and becomes vastly superhuman in a short period of time, was first proposed by I.J. Good.
“Let an ultraintelligent machine be defined as a machine that can far surpass all the intellectual activities of any man however clever. Since the design of machines is one of these intellectual activities, an ultraintelligent machine could design even better machines; there would then unquestionably be an 'intelligence explosion,' and the intelligence of man would be left far behind. Thus the first ultraintelligent machine is the last invention that man need ever make, provided that the machine is docile enough to tell us how to keep it under control. It is curious that this point is made so seldom outside of science fiction. It is sometimes worthwhile to take science fiction seriously.” – I.J. Good, 19665
A similar basic idea is also echoed in the idea of a technological singularity. It is the basic model argued for prominently by Eliezer Yudkowsky and Nick Bostrom.
So far, there is no empirical evidence for such explosive, self-sustaining growth in general-purpose AI systems, and the plausibility of this idea is scientifically controversial. At the same time, the absence of a general intelligence explosion so far is only weak evidence against the intelligence explosion hypothesis. In Bostrom’s model the explosive feedback loop is only initiated after a “crossover” point “beyond which the system’s further improvement is mainly driven by the system’s own actions”, and we have not yet passed such a threshold.
Self-replication in a biological sense, as organisms creating (modified) copies of themselves, arguably only makes sense on the software layer for AI. I would be very skeptical of the idea that an advanced AI model running on an AI chip would just replicate or rearrange the chip on which it currently runs. To get to the right level of precision to edit hardware, you need giant, specialized machines. However, we can still make the case for explosive AI potential looking at it as a more distributed sociotechnical system with positive feedback loops at multiple levels of analysis, impacting both software and hardware.
The first, and fairly obvious, sociotechnical feedback loop is that economically valuable AI creates more interest and funding for AI research and training. This capitalist feedback is powerful; however, it is not sufficient to create an explosion – the maximum pace is still bottlenecked by the bounded expansion speeds of human brainpower dedicated to AI research, chip research and production, and human data.
What is needed for more explosive scenarios are tighter positive feedback loops that recursively strengthen the main input factors into AI: compute, data, and algorithms. For example, AI chip designers may increasingly rely on AI trained on their chips to design and validate the next generation of better chips, and better chip-making equipment. To a limited degree, that is already happening. Similarly, advanced AI may increasingly design and write better algorithms to train the next generation of AI systems. Again, to a limited degree, that is already happening. One can imagine that these feedback loops could strengthen with “AI workers”.
Lastly, there is the question of data. If AI is bound to imitate data points generated by human intelligence, that will soon become a bottleneck. If AI can improve itself based on data it created itself, that is arguably the most direct and explosive path: a model improves not just the next generation of AI models, but itself. Now, if you just think of AI as a stochastic parrot, you should be very skeptical that this is possible – just creating new intermediate points (“interpolation”) between the training data does not change the training data distribution. However, we also know that such recursive self-improvement is actually possible in narrow domains. Google DeepMind’s AlphaZero requires no human training data at all and instead iteratively improves through synthetic data generated from self-play. It consists of one AI model that suggests possible next moves, another AI model that assigns an expected value to different states of the game board, and tree search to find the most promising option amongst the suggested moves. This set-up has been able to start from scratch and become superhuman in multiple games within only hours of training.
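To make the self-play idea concrete, here is a deliberately toy sketch: a tabular value learner for tic-tac-toe that improves purely from games it plays against itself, with no human data. It is not AlphaZero (no neural networks, no tree search), and all names and parameters in it are illustrative.

```python
# Toy illustration of improvement from purely self-generated data (no human data).
# NOT AlphaZero: just a tabular value estimate for tic-tac-toe positions,
# updated from the outcomes of games the system plays against itself.
import random
from collections import defaultdict

LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8),
         (0, 3, 6), (1, 4, 7), (2, 5, 8),
         (0, 4, 8), (2, 4, 6)]

def winner(board):
    for a, b, c in LINES:
        if board[a] != " " and board[a] == board[b] == board[c]:
            return board[a]
    return None

def legal_moves(board):
    return [i for i, cell in enumerate(board) if cell == " "]

def after(board, move, player):
    nxt = list(board)
    nxt[move] = player
    return "".join(nxt)

values = defaultdict(float)   # estimated value of a position, from X's perspective
EPSILON, ALPHA = 0.1, 0.2     # exploration rate, learning rate

def choose_move(board, player):
    # Mostly pick the move leading to the best-valued position; sometimes explore.
    options = legal_moves(board)
    if random.random() < EPSILON:
        return random.choice(options)
    sign = 1 if player == "X" else -1
    return max(options, key=lambda m: sign * values[after(board, m, player)])

def self_play_game():
    # Play one full game against itself; return visited positions and the result for X.
    board, player, history = [" "] * 9, "X", []
    while True:
        move = choose_move(board, player)
        board[move] = player
        history.append("".join(board))
        w = winner(board)
        if w or not legal_moves(board):
            return history, (1 if w == "X" else -1 if w == "O" else 0)
        player = "O" if player == "X" else "X"

def train(n_games=20000):
    # Monte Carlo backup: nudge every visited position toward the game's final outcome.
    for _ in range(n_games):
        history, result = self_play_game()
        for position in history:
            values[position] += ALPHA * (result - values[position])

if __name__ == "__main__":
    train()
    print("positions evaluated from self-play alone:", len(values))
```

The point of the sketch is only structural: every entry in the value table comes from games the system generated itself, and more self-play keeps improving the very policy that generates the data.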
I am not familiar with published evidence that LLMs can substantially improve through self-play. LLMs like ChatGPT engage in an open-ended environment, whereas AlphaZero worked in a closed game with perfect information. However, I think it is reasonable to have less than 90% confidence that an intelligence explosion can’t happen. As discussed in Is ChatGPT just “autocomplete on steroids”?, the training structure of LLMs also involves two AI models: one that probabilistically generates responses and one that predicts how well humans would evaluate those responses. So, there is at least some high-level similarity.
2.2 Rapid scientific progress under competitive dynamics
Nuclear: Key scientific breakthroughs, most notably the discovery of nuclear fission in late 1938 by the German scientists Otto Hahn and Fritz Strassmann, came on the eve of the Second World War. This put the critical phase of nuclear physics development squarely into an all-out war. Many of the key nuclear physicists and key scientists in the US atomic bomb project had fled from Europe due to persecution (e.g. Albert Einstein, Leo Szilard, Enrico Fermi, Eugene Wigner, Edward Teller, Niels Bohr). These scientists were concerned about the prospect of a Nazi nuclear bomb, and getting there before the Germans was a significant motivator for pushing ahead despite the destructive potential. Most notably, this was the motivation behind the Einstein-Szilard letter that first brought high-level political attention to the potential of nuclear weapons in the US.
The Nazis did research towards the nuclear bomb, but they never switched to an industrial-scale effort like the Manhattan Project to produce it, and US and British intelligence were well aware of this. After the Second World War, the US and the Soviet Union entered the Cold War nuclear arms race, in which they built ever more, and ever more destructive, atomic bombs.
AI: Some pundits have also described the fast development progress in AI as a metaphorical “arms race”, except that in this version, Nazi Germany and the Soviet Union are replaced with China. For example, Alex Karp, the CEO of the defense technology firm Palantir, wrote an NYT op-ed that is entirely based on this analogy and argues that it is now time for the US to massively invest in autonomous killer robots.
“In the summer of 1939 (…) Albert Einstein sent a letter — which he had worked on with Leo Szilard and others — to President Franklin Roosevelt, urging him to explore building a nuclear weapon, and quickly. (…) It was the raw power and strategic potential of the bomb that prompted their call to action then. It is the far less visible but equally significant capabilities of these newest artificial intelligence technologies that should prompt swift action now.” – Alex Karp, 2023
So, there is some commonality in framing. To what degree such a framing is useful and accurate can be contested.
2.3 Conflicted scientists
a) Concerns and regrets about the societal impacts of technology
Nuclear physics: Many key contributors to the nuclear bomb were concerned about the societal risks of their research, and some later came to regret their part in it, including:
AI: We can see somewhat similar dynamics in AI researchers who have concerns about the societal impact of their work. Most notably:
b) Discovery as personal motivation
Nuclear physics: There is a famous quote from Robert Oppenheimer that highlights the process and joy of scientific discovery as an inherent motivation for scientists that is more immediate than concerns about the societal impact of a breakthrough: “(…) when you see something that is technically sweet, you go ahead and do it and you argue about what to do about it only after you have had your technical success. That is the way it was with the atomic bomb.”6
AI: A 2015 New Yorker article quoted Geoffrey Hinton giving the same explanation for why he continued to do AI research despite having concerns about the future use of AI (this was before he decided to quit working on AI in 2023):
Hinton: “I think political systems will use it [AI] to terrorize people”
Bostrom: “Then why are you doing the research?”
Hinton: “I could give you the usual arguments, but the truth is that the prospect of discovery is too sweet. When you see something that is technically sweet, you go ahead and do it, and you argue about what to do about it only after you have had your technical success.”
c) Shifting publication norms in the scientific community
Nuclear physics: Physicists in the 1930s had internalized strong open publication norms, and their personal academic prestige depended on publishing their findings. However, realizing the social responsibility that nuclear physicists bore, considering the impact and likely use of their research in the real world, some physicists, led by Leo Szilard, tried to change publication norms:
“Contrary to perhaps what is the most common belief about secrecy, secrecy was not started by generals, was not started by security officers, but was started by physicists. And the man who is most responsible for this certainly extremely novel idea for physicists was Szilard. (…) So he proceeded to startle physicists by proposing to them that given the circumstances of the period—you see it was early 1939 and war was very much in the air—given the circumstances of that period, given the danger that atomic energy and possibly atomic weapons could become the chief tool for the Nazis to enslave the world, it was the duty of the physicists to depart from what had been the tradition of publishing significant results as soon as the Physical Review or other scientific journals might turn them out, and that instead one had to go easy, keep back some results until it was clear whether these results were potentially dangerous or potentially helpful to our side.” – Enrico Fermi, 19547
For example, Szilard had unsuccessfully pleaded with the French academic Frédéric Joliot not to publish a paper which made it plausible that fission emits enough neutrons to sustain a chain reaction.8 The secrecy campaign was more successful in another case, where Enrico Fermi was convinced to keep secret tests which had revealed that highly pure graphite was effective in slowing down the fast neutrons produced during fission, but that typical industrial-grade graphite was not.9
Once the US military had properly understood the potential of nuclear physics and the Manhattan Project began, nuclear physics was heavily classified, which was formalized after the war in the Atomic Energy Act of 1946 (McMahon Act). With the Atomic Energy Act of 1954, there was a review and subsequent risk-based, tiered declassification of nuclear research to enable a civilian nuclear power industry. In 1960 the US worked with several countries (West Germany, the Netherlands, the United Kingdom) to, in part retroactively, classify research on gas centrifuges, a technology that made uranium enrichment easier and could have undermined non-proliferation efforts.10 These efforts were later formalized as part of the Nuclear Suppliers Group and extended to technologies for isotope separation by laser.
AI: The situation in AI contains some echoes of that. For decades, AI was mainly incubated in academia, where personal prestige and career advancement strongly depend on publishing one’s research. As in the nuclear case, the graduation of the field from academic niche interest to strategic technology with significant real-world risks comes with changing publication norms. The leading AI labs have become more conservative in publishing all the details of their research, and this change is primarily led by concerned scientists. Voices particularly concerned about information hazards usually worry about the possibility of runaway AI, the catastrophic misuse of AI by non-state actors, or a future conflict between the US and China.
The meme of OpenAI becoming “ClosedAI” is indicative of that shift. However, those who are willing to read OpenAI’s statements will realize that the company has been fairly consistent: it wants broad access to the technology (see also the discussion on universal access) while arguing that publication norms need to adapt over time as the technology and its misuse potential become more powerful. For example, OpenAI favored a staged release strategy as early as GPT-2, in part to set an example for future publication norms. This shift has also been part of internal strategy since the very early days.
As in the nuclear days, some scientists virulently oppose adapting publication norms. The most prominent critic in AI is the French academic Yann LeCun, who leads the AI effort of Meta. He has repeatedly expressed how important it is to him that all his research is published, and, repeating the argument that Szilard faced, he has accused those who do not share their findings with everyone of undermining the scientific method.
d) Windows of political influence for scientists
Nuclear physics: The importance of developing nuclear technology gave a small cadre of top nuclear scientists a public and political platform during a key period, and some of them tried to use it (e.g. the Szilárd petition: 83% of the nuclear scientists who developed the atomic bomb wanted to demonstrate its overwhelming military power to the Japanese on uninhabited territory first and ask them to surrender, before destroying an entire city of civilians). However, this influence diminished rapidly once the technology became mature. In the end, the nuclear scientists did not manage to substantially shape how the nuclear bomb was used during the war or how US nuclear policy was defined after the war. Subsequent generations of nuclear physicists had essentially zero influence on nuclear policy.
AI: We are arguably in a similar period for AI, in the sense that we may be near the peak of the potential policy influence of AI scientists. AI scientists currently command huge respect amongst the public, they are the ones able to develop the technology, and politicians don’t have a fixed idea of AI policy yet. Once AI has matured, they may have less influence under multiple scenarios. If AI research is automated, they are no longer needed as a bottleneck. If the technology turns out to be of existential importance in a conflict, it is likely that governments take over control from the private sector. If the technology plateaus, the knowledge of how to create it will still proliferate, making them less special. As during the nuclear period, some leading scientists, most notably Geoffrey Hinton, Yoshua Bengio, and Stuart Russell, work hard to use this window of opportunity to help shape beneficial AI policies.
2.4 Concerns about existential risk
Nuclear: After the use of nuclear bombs on Hiroshima and Nagasaki, many realized that nuclear weapons are so powerful and so impossible to defend against that a future war fought by powers that both have large arsenals of nuclear weapons would create unprecedented levels of destruction, from which it would be hard or impossible for human civilization to recover. As the Federation of American Atomic Scientists wrote, urging for an international nuclear control regime: “Time is short. And survival is at stake.”11 Since 1947, the Bulletin of the Atomic Scientists has maintained the “Doomsday Clock” as a metaphorical representation of existential risk.
Concern about existential risk from nuclear weapons was further aggravated in 1983, when it was discovered that a large nuclear war would subsequently lead to a nuclear winter, which would destroy agriculture in areas not directly hit by nuclear weapons.
AI: Based on a large sample of surveyed AI scientists, the mean estimated likelihood of AI creating an extremely bad future on par with human extinction is about 9%, and the median estimate is about 5%. People in Silicon Valley half-jokingly refer to their personal estimate of a catastrophic AI outcome as their “p(doom)”. As with nuclear war, it is difficult to put a reliable probability on a counterfactual, and there is a wide range of intuitions. However, there is a broad consensus that global catastrophic risks and even existential risk from AI are worth taking seriously.
In May 2023, more than a hundred leading Western and Chinese AI scientists, as well as the most important tech CEOs, signed a joint statement on AI risk, which compares the risk of extinction from AI to that of pandemics and nuclear war.
2.5 One-worldism
Nuclear: The idea that nuclear weapons would require world government preceded nuclear weapons by three decades. In H.G. Wells’ 1914 science-fiction book “The World Set Free”, nuclear energy is first developed in peacetime and used to power transport, but then a world war breaks out and countries destroy each other’s cities with nuclear bombs. The incredibly destructive nuclear world war only ends when the warring parties finally come together in the scenic village of Brissago, Switzerland, to form a world government.
Already during the development of the atomic bomb, Niels Bohr and others recognized that proliferation would be hard to stop, and defense almost impossible, and argued that a new political structure was needed to survive the atomic era. In 1946, a “who’s who” of nuclear scientists, including Albert Einstein, Leo Szilard, Robert Oppenheimer, and Niels Bohr, as well as representatives from industry, military, and media, jointly authored the bestseller “One World or None”. While the book contains a variety of essays on the problem of international nuclear control, the overall message is clear: There is no technological solution to defend against nuclear weapons,12 so proliferation needs to be controlled at earlier, more bottlenecked stages. Nuclear weapons pose an existential risk,13 and the only way out is a political solution for international control; for some, this can only sustainably work if we manage to escape the semi-anarchy of the international political system:
“In view of these evident facts there is, in my opinion, only one way out. It is necessary that conditions be established that guarantee the individual state the right to solve its conflicts with other states on a legal basis and under international jurisdiction. It is necessary that the individual state be prevented from making war by a supranational organization supported by a military power that is exclusively under its control. Only when these two conditions have been fully met can we have some assurance that we shall not vanish into the atmosphere, dissolved into atoms, one of these days.” – Albert Einstein, 194614
Nuclear one-worldism had many advocates15 in its heyday (ca. 1945-1960) but eventually subsided in favor of a two-tiered international nuclear control regime without supranational military power. In this regime, a limited number of nuclear powers collaborate on non-proliferation, to prevent the spread of nuclear weapons to more states, and on arms control, to avoid or at least limit arms races. The balance between the nuclear powers is not based on defense but on deterrence due to mutually assured destruction from a second-strike capability. So far, this bipolar or multipolar balance has proven more successful than Einstein would have predicted. Then again, there were quite a few close calls, and we are only about 80 years into the nuclear age.
AI: Most ideas about what political arrangements are needed to govern AI and deal with its existential risks are limited to narrow international collaboration to control AI risks. Still, there are some echoes of nuclear one-worldism in the AI debate. Most notably, the philosopher Nick Bostrom has argued that the development of superintelligence will likely lead to the creation of a “singleton”, which he defined as “a world order in which there is a single decision-making agency at the highest level. Among its powers would be (1) the ability to prevent any threats (internal or external) to its own existence and supremacy, and (2) the ability to exert effective control over major features of its domain (including taxation and territorial allocation)”. Bostrom highlights that a singleton could come in multiple forms, and global rule by a single AI system would be one of them. Bostrom does not directly advocate for a singleton, but his vulnerable world hypothesis at least highlights that “developments towards ubiquitous surveillance or a unipolar world order” would have the advantage of better preventing the catastrophic misuse of technology.
2.6 Ideas for international control through supply chain bottlenecks (nuclear control: uranium; AI control: AI chips)
Nuclear: When thinking of the Manhattan Project to build the atomic bomb, most people intuitively think of the scientists at Los Alamos led by Robert Oppenheimer, who developed the design of the bomb. However, as measured by personnel and by expenditures, the biggest task of the Manhattan Project by far was the production of the fissile material (uranium enrichment in Oak Ridge, plutonium production in Hanford).
Similarly, Toby Ord has assessed isotope separation (= uranium enrichment) as the most difficult step in attaining a nuclear bomb.
Hence, it is not surprising that when we look at international efforts to control the proliferation of nuclear weapons, a lot of it focuses on fissile material.
a) Uranium: All nuclear weapons require natural uranium in their supply chain. The first attempt at international control of the proliferation of nuclear weapons mainly focused on cornering the market for this uranium. As part of the Murray Hill Area Project, the US tried to find all worldwide uranium and thorium deposits and secure them. In 1944, the United States, the United Kingdom, and Belgium signed a secret tripartite agreement to ensure that they controlled all uranium supplies from the Shinkolobwe mine in the Belgian Congo, and uranium control was a crucial part of trying to undermine the Soviet project for the bomb. However, uranium turned out to be fairly common, and over time the focus shifted from control over uranium to monitoring and verifying processes that could turn natural uranium into something useful for nuclear weapons (either U-235 or plutonium).
b) Uranium enrichment: Uranium enrichment refers to processes, such as gas diffusion and gas centrifuges, that separate the different naturally occurring isotopes of uranium. The first nuclear bomb used as a weapon (dropped on Hiroshima) was a uranium-235 bomb.
Natural Uranium: Uranium can be found in nature as part of uranium ores (e.g., UO2). There are two main naturally occurring isotopes of uranium: the most common form is U-238 (99.3%), the less common form is U-235 (0.7%).
Low Enriched Uranium (LEU): The most common types of nuclear power plants use regular water as a moderator and as a coolant (pressurized water reactors, boiling water reactors, Russian VVERs). These require low enriched uranium that is 3-5% U-235. Anything up to 20% U-235 counts as LEU.
High Enriched Uranium (HEU): Anything above 20% U-235. This includes fuel for submarine and aircraft carrier propulsion reactors (20-45% U-235). Uranium enriched to about 90% U-235 is considered weapons-grade and can be used in a nuclear bomb.
The enrichment facilities for nuclear power plants could in theory also be used for nuclear weapons. That’s why there are safeguards agreements with the IAEA to verify that uranium is not enriched beyond certain levels.
c) Plutonium production: The first nuclear bomb tested and the second nuclear bomb used (dropped on Nagasaki) were plutonium bombs. In nature, plutonium only exists in trace amounts that are too small to be useful. However, in an environment where U-238 is exposed to many neutrons (thanks to U-235 splitting and releasing neutrons), U-238 can absorb a neutron and subsequently turn into plutonium. So, plutonium is essentially a waste product of nuclear energy reactors.
A typical 1 GW Light Water Reactor (LWR) produces about 200-250 kg of plutonium per year. Reactor-grade plutonium has less than 70% Pu-239 and significant amounts of Pu-240 (about 20%) and other isotopes. Weapons-grade plutonium contains a higher proportion of Pu-239, typically over 90%.
Still, about 6-8 kg of Pu-239 is sufficient for one nuclear bomb. So, a typical civilian nuclear reactor could theoretically produce fissile material for about 30 nuclear bombs per year.
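The rough arithmetic behind that estimate, using the figures above:

```latex
% Annual plutonium output of a typical 1 GW LWR divided by the amount needed per bomb:
\frac{200\text{--}250~\text{kg Pu per year}}{6\text{--}8~\text{kg Pu per bomb}} \approx 25\text{--}40~\text{bombs per year, i.e. roughly 30}
```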
The waste products of nuclear power plants could in theory be used for nuclear weapons. That’s why there are safeguards agreements with the IAEA to verify that nuclear waste is properly accounted for and disposed of, and not diverted for weapons use.
AI: Some have argued that the supposed lack of bottlenecks in the AI supply chain, compared to the nuclear supply chain, makes AI harder or even impossible to control through the supply chain:
“Very early on in the in the Manhattan Project they were worried about what if he nuclear weapons can ignite fusion in the nitrogen in the atmosphere and they ran some calculations and decided that it was incredibly unlikely, so they went ahead and were correct (…) AI is like that but instead of needing to refine plutonium you can make nuclear weapons out of a billion tons of laundry detergent, you know the stuff to make them is like fairly widespread, it's not a tightly controlled substance and they spit out gold up until they get large enough and then they ignite the atmosphere and you can't calculate how large is large enough and a bunch of the people the CEOs running these projects are making fun of the idea that it'll ignite the atmosphere.” – Eliezer Yudkowsky, 2023
In contrast, others have highlighted that the AI hardware supply chain is in fact highly concentrated and therefore a suitable target for international control efforts. First, there is an in-depth review of nuclear monitoring and verification, and how this might be applied to AI chips, by Mauricio Baker, which was primarily done as an independent contractor with OpenAI’s policy research team. Second, in 2024 a group of AI policy researchers from OpenAI, Oxford, and Cambridge, as well as Yoshua Bengio, wrote a joint paper outlining how compute governance can contribute to AI governance. This group also makes an explicit analogy between AI chips and uranium, and between AI training and uranium enrichment.
The authors also highlight some aspects in which the analogy falls short. Notably:
The nonradioactivity of compute makes it more difficult than nuclear material to track and to detect at ports and other border crossings.
The release of model weights poses a significant threat to compute-based nonproliferation regimes, because their public availability would allow an individual with a moderate amount of machine learning expertise to bypass the large compute requirements needed for training a model.
The analogy of AI to the nuclear monitoring and verification regime is closely related to the idea of an “IAEA for AI”.
2.7 High hopes for economic impact
While nuclear fission and AI have both inspired fears, they have also inspired techno-utopian hopes. Expectations of the impact of nuclear fission and AI on future economic growth were high for both technologies during key periods of their development.
AI: The idea of AI causing another Industrial Revolution has been discussed at length in a separate post.
Nuclear: What is less known these days is that nuclear energy had once inspired similar expectations,16 most notably in the 1950s and in the context of the Atoms for Peace program. First, much like miniaturized computers eventually spread everywhere, some had the idea that miniaturized nuclear reactors and nuclear batteries might eventually be applied to a very wide range of contexts. Nuclear-powered vacuum cleaners, anyone?
“Atomic energy applied to vacuum cleaners may lighten the homemaker’s cleaning lot in about ten years. In a preview word picture of what this appliance may mean to future homemakers, Alex Lewyt said last week that self-operating cleaners powered by nuclear energy would probably be a reality a decade from now. Mr. Lewyt is president of the Lewyt Corporation, makers of vacuum cleaners.” - New York Times, June 11, 195517
Second, energy is a fundamental ingredient in pretty much everything in the economy. The transition from human muscles, horses, and firewood to the more energy-dense coal was one of the drivers and indicators of the First Industrial Revolution. The transition from short-distance to mid-distance energy transmission was one of the hallmarks of the Second Industrial Revolution (1870-1914). So even under the assumption that nuclear reactors only work centralized and at scale, there was an idea that the unprecedented energy density of nuclear fission could bring electricity abundance and something akin to a new Industrial Revolution. Here is the impact of nuclear fission that the Chairman of the US Atomic Energy Commission foresaw:
“It is not too much to expect that our children will enjoy in their homes electrical energy too cheap to meter, will know of great periodic regional famines in the world only as matters of history, will travel effortlessly over the seas and under them and through the air with a minimum of danger and at great speeds, and will experience a lifespan far longer than ours, as disease yields and man comes to understand what causes him to age.” – Lewis Strauss, 1954
In the case of nuclear energy, the technology has not managed to deliver on its economic promises. However, in both cases countries were eager to not miss out on the technology because they expected substantial economic impact, and hence there was an economic incentive for proliferation.
3. Differences
3.1 Military vs. Private Sector
Nuclear: Nuclear fission is a military spin-off technology. It was developed in the context of a world war, with a clear military purpose, and as a government effort. The first application of nuclear fission was the nuclear bomb (U-235 bomb, plutonium bomb, both 1945). The second application of fission was as part of the fusion bomb (1952). The third application was nuclear-powered military submarines (1953). The commercial use of nuclear energy in the US was an afterthought and largely initiated as a political response to the Soviet “Atom Mirny”. With the Atomic Energy Act of 1954, the military released some classified research in the hope of spinning off a civilian nuclear energy industry.
AI: AI research and development is overwhelmingly led by the private sector. Rather than militaries aiming to spin off technology to the private sector, they aim to “spin on” some of the innovations developed by civilian tech companies.
In a similar vein, AI is a general-purpose technology that can be applied across all major industries. In contrast, nuclear fission is much more of a dual-use technology, with one primary military application (the nuclear bomb) and one primary civilian application (nuclear energy). As a rule of thumb, AI is more of a military technology than electricity, but less so than nuclear fission.
3.2 Financial incentives
Nuclear: Nuclear scientists were primarily motivated by discovery and by national security concerns. All of them had only base salaries; none of them had equity in the bomb or in nuclear energy companies.
The maximum compensation for a nuclear physics researcher on the OSRD pay scale was 4,800 USD per year, which corresponds to about 83,000 USD in 2024. Robert Oppenheimer, as the director of the Los Alamos Project, was paid 10,000 USD per year. Adjusted for inflation, this corresponds to about 175,000 USD in 2024. Oppenheimer thought that this was too much and (unsuccessfully) asked the president of the University of California to reduce his salary.
AI: While I don’t think that money is the primary motivation of most AI researchers, it is one additional factor for which there is no direct equivalent in nuclear research. AI research is very well paid, and the world’s top AI researchers are all multimillionaires. This compensation often comes in the form of a substantial base salary plus additional compensation in equity. Hence, if ethical concerns clash with market incentives, CEOs and employees have a personal financial incentive to prioritize the market incentives.
For example, when the AI ethics team at Microsoft raised concerns that might have clashed with a fast rollout, the company fired the team. OpenAI was set up as a non-profit so that safety concerns could take priority over market incentives (see e.g., Elon Musk, Sam Altman, Greg Brockman), arguing that others have a fiduciary duty to shareholders, whereas OpenAI’s “fiduciary duty is to humanity”. Yet, the de facto inability of the oversight board to fire Sam Altman has been interpreted by some as evidence that there are nevertheless powerful market incentives at play. OpenAI employees get most of their compensation in the form of equity in the OpenAI for-profit subsidiary (called “profit participation units”). As Vox has uncovered, employees who wanted to leave the company had to sign restrictive agreements not to publicly criticize the company or they might lose their equity.
3.3 Ability to discriminate
Nuclear: Targeting civilians in an armed conflict is a clear violation of Additional Protocol I of the Geneva Conventions and therefore a war crime:
Principle of distinction: Art.48 “In order to ensure respect for and protection of the civilian population and civilian objects, the Parties to the conflict shall at all times distinguish between the civilian population and combatants and between civilian objects and military objectives and accordingly shall direct their operations only against military objectives.”
Prohibition of attacks against civilians: Art 51.2 “The civilian population as such, as well as individual civilians, shall not be the object of attack. Acts or threats of violence the primary purpose of which is to spread terror among the civilian population are prohibited.”
Principle of proportionality: Art. 51.5 “Among others, the following types of attacks are to be considered as indiscriminate: (a) an attack by bombardment by any methods or means which treats as a single military objective a number of clearly separated and distinct military objectives located in a city, town, village or other area containing a similar concentration of civilians or civilian objects; and
(b) an attack which may be expected to cause incidental loss of civilian life, injury to civilians, damage to civilian objects, or a combination thereof, which would be excessive in relation to the concrete and direct military advantage anticipated.”
Nuclear weapons are too large to make meaningful distinctions between military and civilian targets. The only targets that really “require” bombs with a blast radius of their size are not military installations but cities full of civilians. Target lists for nuclear war developed by military planners in the US and the Soviet Union include every significant city of both countries (see also SIOP).
AI: The military use of AI for targeting, or the use of AI in weapons systems such as UAVs, comes with large legal and ethical challenges. I would not want to diminish them in any way. However, AI is not an explosive; it is something that can assist in or make decisions, and part of the appeal of “smart” weapons is that they are marketed as being good at identifying and hitting specific targets. To what degree that marketing corresponds to reality can and should be discussed critically. Still, the ability to discriminate between civilians and combatants is arguably better for “AI weapons” than for nuclear weapons.
3.4 Deterrence logic
Nuclear: The military-strategic logic of nuclear weapons is one of deterrence by mutually assured destruction through second-strike capability. Nuclear weapons are also called “absolute weapons” in the sense that a minimum deterrent to guarantee devastation is a sufficient deterrent more or less independent of the conventional and non-conventional strength of the enemy.
AI: While it may be too early to tell, I would be quite confident that AI follows no similar deterrence logic.
Signaling: It remains unclear how you would credibly signal the power of your military AI to your opponent in a similar fashion to nuclear tests. While it is fun to imagine North Korea parading GPUs, it’s not a credible signal in a practical sense.
First strike survivability: I don’t want to give military planners bad ideas such as putting datacenters into some kind of autonomous submarine. However, datacenters are tied to the grid and not very mobile (aside from the fact that current datacenters are not hardened sites either). So, your AI clusters will not survive a nuclear first strike.
“Absoluteness”: Nuclear is the “absolute weapon”. It is an open debate to what degree absolute vs. relative AI capacities matter, and not all aspects of military AI would fit the label “relative weapon”. However, at least in some ways AI is closer to the cat-and-mouse logic of cyber.
Under the threshold & attribution: In areas such as AI for cyber, we should expect significant activities under the threshold of an armed attack during peacetime. The lines between peacetime espionage and more offensive steps to prepare potential infrastructure targets are naturally a bit blurred, and even for things like attribution the best defense may be offense. Over the threshold, there is arguably some cross-domain deterrence logic that somewhat works. Needless to say, there are no nuclear attacks under the threshold of an armed attack.
The area where AI could potentially bear the most resemblance to the military logic of nuclear weapons is in bargaining theory. I’m not really a fan of madman theory etc., but there are some ideas that you can gain escalation dominance if you’re perceived to be willing to take more risks – to step closer to the nuclear abyss. At least some might view the release of an uncontrollable superintelligence in similar terms.
3.5 Ease of proliferation over time
Nuclear: Nuclear proliferation has gotten somewhat easier over time. The design of nuclear bombs remains secret, although the list of actors from whom they could be bought or stolen has somewhat expanded over time. Getting enough fissile material has become somewhat easier over time, primarily due to the proliferation of civilian nuclear energy and due to advances in isotope separation technology. However, gas centrifuge technology has not proliferated that far, and laser-based enrichment has been kept under tight wraps. Overall, 80 years after the first nuclear bomb, attaining a nuclear bomb is still a prolonged and risky project for a middle power like Iran.
AI: Due to improvements in hardware price-performance and algorithmic efficiency, the proliferation of absolute AI capacities becomes dramatically easier over time, as documented by Lennart Heim & Konstantin Pilz as well as Paul Scharre.
In short, given the current rate of progress, it becomes exponentially harder to stop the proliferation of absolute AI capacities over time. The rough equivalent in the nuclear analogy would be a gas-centrifuge-sized innovation every year.
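To make the compounding explicit with deliberately illustrative placeholder rates (the sources cited above estimate the actual ones): if hardware price-performance improves by a factor r_hw per year and algorithmic efficiency by a factor r_algo per year, the cost of reproducing a fixed absolute capability falls geometrically.

```latex
% Illustrative only; r_hw and r_algo are hypothetical placeholder rates, not measured values.
\text{cost}(t) \approx \frac{\text{cost}(0)}{\left(r_{\text{hw}} \cdot r_{\text{algo}}\right)^{t}},
\qquad \text{e.g. } r_{\text{hw}} = r_{\text{algo}} = 2 \;\Rightarrow\; \text{cost}(10) \approx \frac{\text{cost}(0)}{4^{10}} \approx \frac{\text{cost}(0)}{10^{6}}
```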
3.6 No upper bound for chain reaction
Nuclear: The designed size of nuclear bombs has practical limitations because you simply run out of targets for which larger bombs would be useful. More importantly, however, a nuclear explosion has an inherently fixed size due to its design. Nuclear weapons are not an intelligent process; the chain reaction stops when it runs out of fissile material.
AI: An intelligence explosion is different because it unleashes superintelligent agents. Even if the capabilities gained from self-learning may be bounded in the short run by the available AI hardware, a self-improving AI may use economic means to acquire more hardware over time, or it may create better AI chip designs over time. Because it is an intelligent process, there are many ways in which it can keep adding fuel to the fire. So, the impact of an intelligence explosion has no clear geographic boundary, and there is no clear upper bound where a general intelligence explosion would have to stop; if such an upper bound exists, it is far above human general intelligence. Hence, a general intelligence explosion would most likely lead to an irreversible loss of human self-determination and control over the future (irrespective of whether it is caused by the US or China). That is what Eliezer Yudkowsky means by “setting the atmosphere on fire”.
3.7 Autonomy and agency
There is something fundamentally different between a very powerful tool and a very powerful general-purpose intelligence that can use and invent tools. In a narrow sense, AI is not just an invention but the invention of a new method of invention that can create spillovers in many other technological areas. More importantly, however, AI systems may reach increasing levels of autonomy, act in the world, and gradually take over the economy.
As Yudkowsky highlights, nuclear weapons:
are not smarter than humans
are not capable of self-replicating
are not capable of self-improving
have inner workings that are understood and designed by scientists
“The brain doesn't look anywhere near as impressive as it is. It doesn't look big or dangerous or even beautiful but a skyscraper, a sword, a crown, a gun all these popped out of the brain like a jack from a jack-in-the-box. A space shuttle is an impressive trick, a nuclear weapon is an impressive trick, but not as impressive as the master trick, the brain trick. The trick that does all other tricks.” – Eliezer Yudkowsky, 2007
Further readings
Other write-ups on the nuclear-AI analogy or specific aspects of it that might be of interest:
The Royal Society. (2018). A perspective on nuclear power. In: Portrayals and perceptions of AI and why they matter. royalsociety.org
Waqar Zaidi & Allan Dafoe. (2021). International Control of Powerful Technology: Lessons from the Baruch Plan for Nuclear Weapons. fhi.ox.ac.uk
Toby Ord. (2022). Lessons from the Development of the Atomic Bomb. governance.ai
Mauricio Baker. (2023). Nuclear Arms Control Verification and Lessons for AI Treaties. arxiv.org
Dylan Matthews. (2023). AI is supposedly the new nuclear weapons — but how similar are they, really? vox.com
Girish Sastry et al. (2024). The Compute-Uranium Analogy. In: Computing Power and the Governance of Artificial Intelligence. arxiv.org
Not material to the argument, but contrary to the claim in this quote, Szilard read about Rutherford’s comment in the newspaper and got the theoretical idea of a neutron-induced chain reaction while walking. Richard Rhodes. (1986). The Making of the Atomic Bomb. pp. 26-28. I guess humans hallucinate differently from AI in some ways, but we still hallucinate.
The concept of a revolution in military affairs has its roots in the 1980s with the Soviet Marshal Nikolai Ogarkov, who referred to precision-guided munition as well as intelligence, surveillance, target acquisition and reconnaissance systems as the third revolution in warfare that would allow for a new type of conventional warfare.
It’s a good narrative, though packet-switching would arguably have been chosen either way for economic reasons. For much more detail see “One, Two, or Two Hundred Internets?”
Keir Lieber & Daryl Press. (2017). The New Era of Counterforce: Technological Change and the Future of Nuclear Deterrence. International Security, 41(4), 9–49; Edward Geist & Andrew John. (2018). How Might Artificial Intelligence Affect the Risk of Nuclear War? rand.org; However, for perspective, it is worth highlighting that the US was also able to locate, and even track, Soviet submarines during extended periods of the Cold War. Austin Long & Brendan Rittenhouse Green. (2015). Stalking the Secure Second Strike: Intelligence, Counterforce, and Nuclear Strategy. Journal of Strategic Studies, 38:1-2, 38-73.
I.J. Good. (1966). Speculations Concerning the First Ultraintelligent Machine. Advances in Computers 6, 31–88. p. 34
U.S. Atomic Energy Commission: Personnel Security Board (1954). In the Matter of J. Robert Oppenheimer. osti.gov p. 81
Enrico Fermi. (1955). Physics at Columbia University: The genesis of the nuclear energy project. Physics Today, 8(11), 12–16. pp. 13&14
Leo Szilard papers. (1939). Joliot-Curie, F. library.ucsd.edu; According to Richard Rhodes (1986, pp. 295&296) and Craig Nelson (2014, pp. 112-113), Joliot’s publication directly contributed to the initiation of the German nuclear program. It is plausible that Joliot accelerated the start of the Nazi nuclear program by a few weeks to a few months. However, its eventual start was overdetermined. In spring 1939 groups in France, the US, and Germany all independently confirmed neutron emissions. There were also no fewer than three separate efforts by German scientists in spring 1939 to raise the prospect of nuclear fission with the government (Joos & Hanle; Riehl; Harteck & Groth). Mark Walker. (1989). German National Socialism and the quest for nuclear power 1939-1949. Cambridge University Press. p. 17. The much harder scientific breakthrough, whose counterfactual non-publication would have made a big difference, was made by the German scientists Hahn and Strassmann, who observed nuclear fission of uranium on 17 December 1938 and shared this finding publicly.
According to Richard Rhodes (1986, pp. 344&345), Craig Nelson (2014, pp. 112-113), Wikipedia, & Leopold Aschenbrenner, Fermi’s silence led Germany to “cripple their program” by choosing heavy water over graphite as moderator. However, based on original German sources from various archives, it seems that while Szilard’s “conspiracy of the scientists” efforts likely had some overall impact, the Nazi program would most likely have chosen heavy water either way. German efforts to evaluate graphite as a moderator under Bothe did indeed reach misleading results due to lack of purity. However, Hanle correctly realized that this was due to boron and cadmium contamination and informed the Heereswaffenamt, including instructions for how to create sufficiently pure graphite. Their decision to nevertheless go with heavy water rather than very pure graphite (like the US) as a moderator was based on economic considerations, not on a false negative (both options work, and from spring 1940 onward Germany controlled the world’s only existing heavy water production facility in Norway). The best explanation for the failure of the German program is that it never became a top political priority and hence never transitioned into a “post-Briggs” stage where it was backed by massive resources (e.g., the Heereswaffenamt prioritized rockets, which promised more immediate results; compare that with the Americans, who vigorously pursued all nuclear weapon pathways in parallel). Mark Walker. (1989). German National Socialism and the quest for nuclear power 1939-1949. Cambridge University Press. pp. 26&27
John Krige. (2016). Sharing Knowledge, Shaping Europe. pp. 124&125; Later, these countries also jointly set up Urenco.
Federation of American Atomic Scientists. (1946). Survival Is At Stake. In D. Masters and K. Way (Eds.) One World Or None: A Report to the Public on the Full Meaning of the Atomic Bomb. p. 79.
Louis Ridenour. (1946). There Is No Defense. In D. Masters and K. Way (Eds.) One World Or None: A Report to the Public on the Full Meaning of the Atomic Bomb. pp. 33-38.
Federation of American Atomic Scientists. (1946). Survival Is At Stake. In D. Masters and K. Way (Eds.) One World Or None: A Report to the Public on the Full Meaning of the Atomic Bomb. pp. 78-79.
Albert Einstein. (1946). The Way Out. In D. Masters and K. Way (Eds.) One World Or None: A Report to the Public on the Full Meaning of the Atomic Bomb. p. 76
There was also an offensive version of nuclear one-worldism, which argued that getting there by incremental peaceful steps as suggested by Einstein is illusory and the only option is conquest: e.g., “The discovery of atomic weapons has brought about a situation in which Western civilization, and perhaps human society in general, can continue to exist only if an absolute monopoly in the control of atomic weapons is created. This monopoly can be gained and exercised only through a World Empire, for which the historical stage had already been set prior to and independently of the discovery of atomic weapons. The attempt at World Empire will be made, and is, in fact, the objective of the Third World War, which, in its preliminary stages, has already begun. It should not require argument to state that the present candidates for leadership in the World Empire are only two: the Soviet Union and the United States.” – James Burnham. (1947). The Struggle for the World. Cornwall Press. p. 55
“(…) the first Holsten-Roberts engine brought induced radio-activity into the sphere of industrial production, and its first general use was to replace the steam-engine in electrical generating stations. (…) [the nuclear engine] made the heavy alcohol-driven automobile of the time ridiculous in appearance as well as preposterously costly (…) the new atomic aeroplane became indeed a mania; every one of means was frantic to possess a thing so controllable, so secure and so free from the dust and danger of the road (..) The railways paid enormous premiums for priority in the delivery of atomic traction engines (…) Viewed from the side of the new power and from the point of view of those who financed and manufactured the new engines and material it required the age of Leap into the Air was one of astonishing prosperity (…) The coal mines were manifestly doomed to closure at no very distant date, the vast amount of capital invested in oil was becoming unsaleable, millions of coal miners, steel workers upon the old lines, vast swarms of unskilled or under-skilled labourers in innumerable occupations, were being flung out of employment by the superior efficiency of the new machinery” – H.G. Wells. (1914) The World Set Free. Wildside Press. pp. 30-32
This is a classic example in lists of failed predictions in futures studies. The context is that Lewyt experimented with radio-controlled autonomous vacuum cleaners with a battery and a computer, but these took up too much room to be practical. His hope was that a nuclear-fueled vacuum cleaner would solve this. Details are not elaborated, but given the context he likely referred to a vacuum cleaner with a nuclear battery. In defense of Lewyt, he just sounds like an entrepreneur open to many ideas, from autonomous vacuum cleaners to dust-bag-free cleaners. The term “nuclear powered” in the New York Times may leave some ambiguity, but Lewyt really meant nuclear-fueled and not powered by cheap electricity thanks to nuclear energy. The Tyler Courier-Times. (June 26, 1955). Atoms May Power Vacuum Cleaners. p. 37