212. A Scenario for a Hypothetical Private Nuclear Program

[Editor’s Note: Mad Scientist Laboratory is pleased to publish today’s guest blog post by Mr. Alexander Temerev addressing the possible democratization and proliferation of nuclear weapons expertise, currently residing with only a handful of nation states (i.e., the U.S., Russia, China, the UK, France, India, Pakistan, and North Korea).  We vetted this post with nuclear subject matter experts within our community of action (who wish to remain anonymous) – the following initial comments are their collective input regarding Mr. Temerev’s guest post that follows – read on!]

What is proposed below “is not beyond the realm of possibility and, with enough wise investment, rather feasible — there are no secrets left in achievement of the basic nuclear physics package, and there haven’t been for a while (the key being obtaining the necessary fissile material). A side note — I was a friend and school-mate of the apocryphal Princeton University Physics Undergraduate Student in 1978 who, as part of his final degree project, developed a workable nuclear weapons design with nothing more than the pre-Internet Science Library as a resource. They still talk about the visit from the FBI on campus, and the fact that his professor only begrudgingly gave him an A- as a final grade.”

“Considering the advances since then, it’s likewise no surprise that such a thing could be accomplished today with even greater ease, there remaining the issue of obtaining sufficient fissile material to warrant the effort. Of course, even failure in this regard, done artfully, could still accomplish a sub-critical reaction [aka “a fizzle” – an explosion caused by the two sub-critical masses of the bomb being brought together too slowly], resulting in a militarily (and psychologically) effective detonation. So, as my colleague [name redacted] (far more qualified in matters scientific and technical) points out, with the advances since the advent of the Internet and World Wide Web, the opportunity to obtain the ‘Secret Sauce’ necessary to achieve criticality has likewise advanced exponentially. He has opined that it is quite feasible for a malevolent private actor, armed with currently foreseeable emerging capabilities, to seek and achieve nuclear capabilities utilizing Artificial Intelligence (AI)-based data and communications analysis modalities. Balancing against this emerging capability are the competing and ever-growing capabilities of the state to surveil and discover such endeavors and frustrate them before (hopefully) reaching fruition. Of course, you’ll understand if I only allude to them in this forum and say nothing further in that regard.”

“Nonetheless, for both good guys and bad, given enough speed and capacity, these will serve as the lever to move the incorporeal data world. This realization will drive a quiet but deadly arms race in the shadows: the potential confluence of matured Artificial Intelligence (AI) and Quantum technologies at a point in the foreseeable future that changes everything. Such a confluence would enable the achievement of these, and even worse, WMD developmental approaches through big-data analysis currently considered infeasible. Conversely, state surveillance of the Internet would likewise profit, through identifying clusters of seemingly unrelated data searches that could be analyzed to identify and frustrate malevolent actors.”
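The defensive idea in the passage above, flagging an actor whose individually innocuous searches collectively cluster around a sensitive goal, can be sketched in miniature. This is a purely illustrative toy, not any real monitoring system: the topic keywords, the sample logs, and the two-topic threshold are all invented assumptions.

```python
# Toy sketch of cross-topic query correlation: each search query is matched
# against keyword sets for several sensitive topic areas, and an actor is
# flagged when their queries span multiple distinct topics. All keywords and
# data below are illustrative inventions.

SENSITIVE_TOPICS = {
    "isotope_separation": {"centrifuge", "silex", "enrichment"},
    "explosives": {"detonator", "implosion", "rdx"},
    "metallurgy": {"plutonium", "casting", "pit"},
}

def topics_touched(queries):
    """Return the set of sensitive topics that a list of queries touches."""
    hit = set()
    for q in queries:
        words = set(q.lower().split())
        for topic, keywords in SENSITIVE_TOPICS.items():
            if words & keywords:  # any keyword appears in the query
                hit.add(topic)
    return hit

def flag_actors(logs, threshold=2):
    """Flag actors whose queries span at least `threshold` distinct topics."""
    return {actor for actor, queries in logs.items()
            if len(topics_touched(queries)) >= threshold}
```

A real system would of course use far richer signals than keyword overlap, but the design point survives the simplification: no single query is alarming, and it is only the correlation across topics that produces the flag.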

“It is quite conceivable, in this context, that the future of the Internet for our purposes revolves around one continuous game of cat and mouse as identities are sought and hidden between white hat and black hat players. A real, but unanticipated, version of Ray Kurzweil’s singularity that nonetheless poses fundamental challenges for a free society. In the operational environment to 2050, cyber operations will no longer be a new domain but one to be taken into account as a matter of course.”

“Once again, all credit goes to [my colleague] for providing the technical insight into this challenge, my contribution being entirely eccentric in nature. I believe the blog is worth publishing, provided that it serves as an opening for furthering discussion of the potential long-range implications such developments would pose.”

A Scenario for a Hypothetical Private Nuclear Program

Let’s assume there is a non-government actor willing to acquire nuclear weapons for some reason. Assume that the group has unlimited financing (or some significant amount of free and untraced money available — e.g., $1 billion in cryptocurrencies). What would be the best way for them to proceed, and what would be the most vulnerable points where they could be stopped?

Stealing existing nuclear weapons would probably not be an option (or would be of limited utility – see below). Modern nuclear devices are all equipped with PALs (permissive action links), rendering them unusable without unlocking codes. (The key idea of a PAL is removing a small amount of explosive from the implosion shell, different for each detonator, and compensating by adjusting the precise timing at which each detonator fires; these timings are different for each device and can be released only by the central command authority.) Without knowing the entire set of PAL timings and the entire encrypted protocol between the PAL controller and the detonators, achieving a bona fide nuclear explosion is technically impossible. Some countries, like Pakistan and perhaps North Korea, do not possess sophisticated PAL systems for their devices; to compensate, their nuclear cores are tightly guarded by the military.

Fat Man Casing, Trinity Site / Source: Flickr by Ed Siasoco via Creative Commons Attribution 2.0 Generic

Therefore, even if weapon-grade nuclear materials are available (which is, of course, another near-impossible problem), designing the nuclear explosive device de novo is still unavoidable. The basic design principles of nuclear weapons are not secret, and achieving a nuclear explosion is a clearly defined problem (in terms of timing, compression, and explosion hydrodynamics) that can be solved by a small group of competent physicists. Indeed, the “Nth Country Experiment” conducted by Lawrence Livermore National Laboratory in 1964 showed that three bright physicists without previous nuclear expertise could deliver a plausible design for a working nuclear weapon (they were designing an analogue of the Fat Man device, which is bulky and nearly undeliverable; today, more compact options would be pursued instead). A heavily redacted report is available online.

With modern computers, open information about nuclear weapons, some OSINT, and determination, the same feat could probably be accomplished in less than a year. (Some open-source software libraries could be useful in such an endeavor, e.g., Castro for explosion hydrodynamics; there is also a guidebook for anyone with a deep interest in the field.) Many ideas for the critical part of the device, the neutron initiator, are also discussed in the open literature (here I will refrain from mentioning exact books and papers, but the information is publicly available). Again, the task is clearly formulated, injecting neutrons at a very precise moment during the explosion, so this is only an engineering problem.

Assembling the device itself is no easy task; it requires precision engineering and the casting of high explosives, which cannot be done without significant pre-existing expertise. However, the brightest mechanical engineers and even explosives technicians can be legally hired on the open market, if not for direct participation in the project, then for training and knowledge transfer to the project team. Private organizations have achieved even more complicated engineering feats (e.g., rocket engines at SpaceX), so this part looks feasible.

All current nuclear devices require periodic maintenance and re-casting of their plutonium pits, with additional weapon-grade plutonium added every few years; otherwise, their neutronic profile will gradually become too unfavorable to achieve a full nuclear explosion. If the group has acquired nuclear materials by stealing them, it will have to make use of them within the following few years. The nuclear programs of sovereign states, of course, have entire weapon-grade plutonium production pipelines at their disposal, so fresh plutonium is always available; this will be a much harder feat for a non-state actor. Ironically, the plutonium could be provided by disassembling stolen or captured PAL-equipped nuclear devices, which are less heavily guarded. While it is true that a PAL will prevent their full-scale explosion, they can still be a priceless source of weapon-grade plutonium.

Source: Nick Youngson via Picpedia, Attribution-ShareAlike 3.0 Unported (CC BY-SA 3.0)

Conclusion: Safeguarding weapon-grade nuclear materials is the highest priority, as the design details of nuclear devices are hardly a secret these days and can be readily reproduced by many competent and determined organizations. The emergence of nuclear production pipelines (isotope separation, SILEX [Separation of Isotopes by Laser Excitation], plutonium separation, plutonium-producing reactors) should be monitored everywhere. Even PAL-equipped weapons need to be closely guarded, as they can be sources of these materials. Groups and non-state actors seeking nuclear capabilities without building the full production pipeline need to act fast and have the design and device prototypes (sans cores) ready before acquiring nuclear materials, as the materials’ utility diminishes with every year after acquisition.


REMINDER: Don’t forget to join us tomorrow on-line at the Mad Scientist GEN Z and the OE Livestream Event! This event is open to all, on any device, anywhere (but is best streamed via a commercial, non-DoD network) — plan on joining us at 1330 EST on 21 February 2020 at: www.tradoc.army.mil/watch and engage in the discussion by submitting your questions and comments via this site’s moderated interactive chat room. You can also follow along on Twitter @ArmyMadSci. For more information, click here!

ALSO:  Help Mad Scientist expand the U.S. Army’s understanding of the Operational Environment (OE) — join the 662 others representing 46 nations who have already done so and take a few minutes to complete our short, on-line Global Perspectives Survey. Check out our initial findings here and stay tuned to future blog posts on the Mad Scientist Laboratory to learn what further insights we will have gleaned from this survey about OE trends, challenges, technologies, and disruptors.

FINALLY:  Don’t forget to enter The Operational Environment in 2035 Mad Scientist Writing Contest and share your unique insights on the future of warfighting — click here to learn more (submission deadline is 1 March 2020!)

Mr. Alexander Temerev is a consultant in complex systems dynamics and network analysis; he is CEO and founder of Reactivity – a boutique consulting company in Geneva, Switzerland.

Disclaimer: The views expressed in this blog post do not necessarily reflect those of the Department of Defense, Department of the Army, Army Futures Command (AFC), or the Training and Doctrine Command (TRADOC).

49. “The Queue”

(Editor’s Note: Beginning today, the Mad Scientist Laboratory will publish a monthly post listing the most compelling articles, books, podcasts, videos, and/or movies that the U.S. Army’s Training and Doctrine Command (TRADOC) Mad Scientist Initiative has come across during the previous month. In this anthology, we will address how each of these works either informs or challenges our understanding of the Future Operational Environment. We hope that you will add “The Queue” to your essential reading, listening, or watching each month!)

1. Army of None: Autonomous Weapons and the Future of War, by Paul Scharre, Senior Fellow and Director of the Technology and National Security Program, Center for a New American Security.

One of our favorite Mad Scientists, Paul Scharre, has authored a must-read for all military Leaders. This book will help Leaders understand the definitions of robotic and autonomous weapons; how they are proliferating across states, non-states, and super-empowered individuals (his chapter on Garage Bots makes it clear this is not analogous to state proliferation); and, lastly, the ethical considerations that come up at every Mad Scientist Conference. During these Conferences, we have discussed the idea of algorithm-versus-algorithm warfare and what role human judgment plays in this version of future combat. Paul’s chapters on flash war really challenge our ideas of how a human operates in the loop, and his analogies using the financial markets are helpful for developing the questions needed to explore future possibilities and develop policies for dealing with warfare at machine speed.

Source: Rosoboronexport via YouTube
2. “Convergence on retaining human control of weapons systems,” in Campaign to Stop Killer Robots, 13 April 2018.

April 2018 marked the fifth anniversary of the Campaign to Stop Killer Robots. Earlier this month, 82 countries and numerous NGOs also convened at the Convention on Certain Conventional Weapons (CCW) in Geneva, Switzerland, where many stressed the need to retain human control over weapons systems and the use of force. While the majority in attendance proposed moving forward this November to start negotiations towards a legally binding protocol addressing fully autonomous weapons, five key states rejected moving forward in negotiating new international law – France, Israel, Russia, the United Kingdom, and the United States. Mad Scientist notes that the convergence of a number of emerging technologies (synthetic prototyping, additive manufacturing, advanced modeling and simulations, software-defined everything, advanced materials) is advancing both the feasibility and democratization of prototype warfare, enabling and improving the engineering of autonomous weapons by non-state actors and super-empowered individuals alike. The genie is out of the bottle – with the advent of the Hyperactive Battlefield, advanced engagements will collapse the decision-action cycle to mere milliseconds, granting a decisive edge to the side with more autonomous decision-action.

Source: The Stack
3. “China’s Strategic Ambiguity and Shifting Approach to Lethal Autonomous Weapons Systems,” by Elsa Kania, Adjunct Fellow with the Technology and National Security Program, Center for a New American Security, in Lawfare, 17 Apr 18.

Mad Scientist Elsa Kania addresses the People’s Republic of China’s apparent juxtaposition between its diplomatic commitment to limit the use of fully autonomous lethal weapons systems and the PLA’s active pursuit of AI dominance on the battlefield. The PRC’s decision on lethal autonomy and how it defines the role of human judgment in lethal operations will have tactical, operational, and strategic implications. In TRADOC’s Changing Character of Warfare assessment, we addressed the idea of an asymmetry in ethics, where the differing ethical choices non-state and state adversaries make on the integration of emerging technologies could have real battlefield overmatch implications. This is a clear pink flamingo where we know the risks but struggle with addressing the threat. It is also an area where technological surprise is likely, as systems could have the ability to move from human-in-the-loop mode to fully autonomous with the flip of a switch.

Source: HBO.com
4. “Maeve’s Dilemma in Westworld: What Does It Mean to be Free?,” by Marco Antonio Azevedo and Ana Azevedo, in Institute of Art and Ideas, 12 Apr 18. [Note: Best viewed on your personal device as access to this site may be limited by Government networks]

While this article focuses primarily on a higher-level philosophical interpretation of human vs. machine (or artificial intelligence, being, etc.), the core arguments and discussion remain relevant to an Army that is looking to increase its reliance on artificial intelligence and robotics. Technological advancements in these areas continue to trend toward modeling humans (both in form and the brain). However, the closer we get to making this a reality, the closer we get to confronting questions about consciousness and artificial humanity. Are we prepared to face these questions earnestly? Do we want an artificial entity that is, essentially, human? What do we do when that breakthrough occurs? Does biological vs. synthetic matter if the being “achieves” personhood? For additional insights on this topic, watch Linda MacDonald Glenn‘s Ethics and Law around the Co-Evolution of Humans and AI presentation from the Mad Scientist Visualizing Multi Domain Battle in 2030-2050 Conference at Georgetown University, 25-26 Jul 17.

5. Do You Trust This Computer?, directed by Chris Paine, Papercut Films, 2018.

The Army, and society as a whole, continues to offload certain tasks to, and receive information from, artificial intelligence sources. Future Army Leaders will be heavily influenced by AI processing and distributing the information used for decision-making. But how much trust should we put in the information we get? Is it safe to be so reliant? What is the correct ratio of human-to-machine contribution in decision-making? Army Leaders need to be prepared to make AI one tool of many: to understand its value, know how to interpret its information, know when to question its output, and apply appropriate context. Elon Musk has shown his support for this documentary and tweeted about its importance.

6. Ready Player One, directed by Steven Spielberg, Amblin Entertainment, 2018.

Adapted from the novel of the same name, this film visualizes a future world where most of society is consumed by a massive online virtual reality “game” known as the OASIS. As society transitions from the physical to the virtual (texting, email, Skype, MMORPGs, Amazon, etc.), large groups of people will become less reliant on the physical world’s governmental and economic systems that have been established for centuries. As virtual money begins to have real value, physical money will begin to lose value. If people can get many of their goods and services through a virtual world, they will become less reliant on the physical world. Correspondingly, physical world social constructs will have less control over the people who still inhabit it but spend increasing amounts of time interacting in the virtual world. This has huge implications for the future geopolitical landscape, as many varied and geographically diverse groups of people will begin congregating and forming virtual allegiances across all of the pre-established, but increasingly irrelevant, physical world geographic borders. This will dilute the effectiveness, necessity, and control of the nation-state and transfer that power to the company(ies) facilitating the virtual environment.

Source: XO, “SoftEcologies,” suckerPUNCH
7. “US Army could enlist robots inspired by invertebrates,” by Bonnie Burton, in c/net, 22 Apr 18.

As if Boston Dynamics’ SpotMini isn’t creepy enough, the U.S. Army Research Laboratory (ARL) and the University of Minnesota are developing a flexible, soft robot, inspired by squid and other invertebrates, that Soldiers can create on demand using 3-D printers on the battlefield. Too often, media visualizations have conditioned us to think of robots in anthropomorphic terms (with corresponding limitations). This and other breakthroughs in “soft,” polymorphic, printable robotics may provide Soldiers in the Future Operational Environment with hitherto unimagined on-demand, tailorable autonomous systems that will assist operations in the tight confines of complex, congested, and non-permissive environments (e.g., dense urban and subterranean). Soft robotics may also prove to be more resilient in arduous conditions. This development changes the paradigm for how robotics are imagined in both design and application.

If you read, watch, or listen to something this month that you think has the potential to inform or challenge our understanding of the Future Operational Environment, please forward it (along with a brief description of why its potential ramifications are noteworthy to the greater Mad Scientist Community of Action) to our attention at: usarmy.jble.tradoc.mbx.army-mad-scientist@mail.mil — we may select it for inclusion in our next edition of “The Queue”!

For additional insights into the Mad Scientist Initiative and how we continually explore the future through collaborative partnerships and continuous dialogue with academia, industry, and government, check out this Spy Museum’s SPYCAST podcast.

31. Top Ten Bio Convergence Trends Impacting the Future Operational Environment

As Mad Scientist Laboratory has noted in previous blog posts, War is an intrinsically human endeavor. Rapid innovations in the biological sciences are changing how we work, live, and fight. Drawing on the past two years of Mad Scientist events, we have identified a change in the character of war driven by the exponential convergence of bio, neuro, nano, quantum, and information. This convergence is leading to revolutionary achievements in sensing, data acquisition and retrieval, and computer processing hardware, creating a new environment in which humans must co-evolve with these technologies. Mad Scientist has identified the following top ten bio convergence trends associated with this co-evolution that will directly impact the Future Operational Environment (OE).

1) Bio convergence with advanced computing is happening at the edge. Humans will become part of the network connected through their embedded and worn devices. From transhumanism to theorizing about uploading the brain, the Future OE will not be an internet of things but the internet of everything (including humans).

2) The next 50 years will see an evolution in human society; we will be augmented by Artificial Intelligence (AI), partner with AI in centaur chess fashion, and eventually be eclipsed by AI.

3) This augmentation and enhanced AI partnering will require hyper-connected humans with wearables and eventually embeddables to provide continuous diagnostics and human-machine interface.

4) The Army will need to measure cognitive potential and baseline neural activity of its recruits and Soldiers.

5) The Army needs new training tools to take advantage of neuroplasticity and realize the full cognitive potential of Soldiers. Brain gyms and the promise of Augmented and Virtual Reality (AR/VR) training sets could accelerate learning and, in some cases, challenge the tyranny of “the 10,000-hour rule.”

6) Human enhancement, the unlocking of the genome, and improving AI will stress the Army’s policies and ethics. Regardless, potential adversaries are exploring all three of these capabilities as ways to gain advantage over U.S. Forces. This is not a 2050 problem but more than likely a 2030 reality.

7) Asymmetric Ethics, where adversaries make choices we will not (e.g., manipulating the DNA of pathogens to target specific genome populations or to breed “super” soldiers), will play a bigger part in the future. This is not new, but it will be amplified by future technologies. Bio enhancement will be one such area, and experimentation is required to determine our vulnerabilities.

8) Cognitive enhancement and attacks on the human brain (neurological system) are not science fiction. The U.S. Army should establish a Program Executive Office (PEO) for Soldier Enhancement to bring unity of purpose to a range of possibilities, from physical/mental enhancement with wearables, embeddables, stimulants, and brain gyms to exoskeletons.

9) Chemical and bio defense will need to be much more sophisticated on the next battlefield. The twin challenges of democratization and proliferation have resulted in a world where the capability of engineering potentially grave bio-weapons, once only the purview of nation states and advanced research institutes and universities, is now available to Super-Empowered Individuals, Violent Non-State Actors (VNSA), and criminal organizations.

10) We are missing the full impact of bio on all emerging trends. We must focus beyond human enhancement and address how bio is impacting materials, computing, and garage-level, down-scaled innovation.

Headquarters, U.S. Army Training and Doctrine Command (TRADOC) is co-sponsoring the Bio Convergence and Soldier 2050 Conference with SRI International at Menlo Park, California, on 08-09 March 2018. Click here to learn more about the conference and then watch the live-streamed proceedings, starting at 0840 PST / 1140 EST on 08 March 2018.

Also note that our friends at Small Wars Journal have published the first paper from our series of Soldier 2050 Call for Ideas finalists — enjoy!