[Editor’s Note: Mad Scientist Laboratory welcomes back returning guest blogger and proclaimed Mad Scientist Mr. Samuel Bendett with today’s post, addressing Russia’s commitment to mass produce independent ground combat robotic systems. Simon Briggs, professor of interdisciplinary arts at the University of Edinburgh, predicts that “in 2030 AI will be in routine use to fight wars and kill people, far more effectively than humans can currently kill.” Mr. Bendett’s post below addresses the status of current operationally tested and fielded Russian Unmanned Ground Vehicle (UGV) capabilities, and their pivot to acquire systems able to “independently recognize targets, use weapons, and interact in groups and swarms.” (Note: Some of the embedded links in this post are best accessed using non-DoD networks.)]
Over the past several years, the Russian military has invested heavily in the design, production, and testing of unmanned combat systems. In March 2018, Russian Defense Minister Sergei Shoigu said that mass production of combat robots for the Russian army could begin as early as that year. Now, the Ministry of Defense (MOD) is moving ahead with creating plans for such systems to act independently on the battlefield.
According to the Russian state media (TASS), Russian military robotic complexes (RBCs) will be able to independently recognize targets, use weapons, and interact in groups and swarms. Such plans were laid out in an article by the staff of the 3rd Central Scientific Research Institute of the Russian Federation’s MOD.
Russia has already tested several Unmanned Ground Vehicles (UGVs) in combat. Its Uran-6, Scarab, and Sphera demining UGVs were rated well by the Russian engineering forces, and there are plans to start acquisition of such vehicles. However, these systems were designed to have their operators close by. When it came to a UGV that was originally built for operator remoteness in potential combat, things got more complicated.
Russia’s Uran-9 combat UGV experienced a large number of failures when tested in Syria, among them transportation, communication, firing, and situational awareness. The lessons from Uran-9 tests supposedly prompted the Russian military to consider placing more emphasis on using such UGVs as one-off attack vehicles against adversary hard points and stationary targets.
Nonetheless, the aforementioned TASS article analyzes the general requirements for unmanned military systems employed by Russian ground forces. Among them is the ability to perform tasks in different combat conditions, day and night, under enemy fire, electronic and informational counteraction, and in conditions of radiation, chemical contamination, and electromagnetic attack – as well as requirements such as modularity and multifunctionality. The article also points out “the [systems’] ability to independently perform tasks in conditions of ambiguity” – implying the use of Artificial Intelligence.
To achieve these requirements, the creation of an “intelligent decision-making system” is proposed, which will also supervise the use of weapons. “The way out of this situation is the intensification of research on increasing the autonomy of the RBCs and the introduction of intelligent decision-making systems at the control stages, including group, autonomous movement and use of equipment for its intended purpose, including weapons, into military robotics,” the article says.
The TASS article states that in the near future, the MOD is planning to initiate work aimed at providing technical support for solving this problem set. This research will include domestic laser scanning devices for geographical positioning, the development of methods and equipment for determining the permeability of the soil on which the UGV operates, the development of methods for controlling the military robot in “unstable communications,” and the development of methods for analyzing combat environments such as recognizing scenes, images, and targets.
Successfully employing UGVs in combat requires complicated systems, something that the aforementioned initiatives will seek to address. This work will probably rely on Russia’s Syrian experience, as well as on the current projects and upgrades to Moscow’s growing fleet of combat UGVs. On 24 January 2018, the Kalashnikov Design Bureau that oversees the completion of Uran-9 work admitted that this UGV has been accepted into military service. Although few details were given, the statement did include the fact that this vehicle will be further “refined” based on lessons learned during its Syria deployment, and that the Uran-9 presents “good scientific and technical groundwork for further products.” The extent of upgrades to that vehicle was not given – however, numerous failures in Syrian trials imply that there is lots of work ahead for this project. The statement also indicates that the Uran-9 may be a test-bed for further UGV development, an interesting fact considering the country’s already diverse collection of combat UGVs.
Today, the Russian military is testing and evaluating several systems, such as Nerekhta and Soratnik. The latter was also supposedly tested in “near-combat” conditions, presumably in Syria or elsewhere. The MOD has been testing smaller Platforma-M and large Vikhr combat UGVs, along with other unmanned vehicles. Yet the defining characteristic for these machines so far has been the fact that they were all remote-operated by soldiers, often in near proximity to the machine itself. Endowing these UGVs with more independent decision-making in the “fog of war” via an intelligent command and control system may exponentially increase their combat effectiveness — assuming that such systems can function as planned.
… and watch Zvezda Broadcasting‘s video, showing a Vikhr unmanned, tele-operated BMP-3 maneuvering and shooting its 7.62mm MG, 30mm cannon, and automatic grenade launcher on a test range.
Automated lethality is but one of the many Future Operational Environment trends that the U.S. Army’s Mad Scientist Initiative is tracking. Mad Scientist seeks to crowdsource your visions of future combat with our Science Fiction Writing Contest 2019. Our deadline for submission is 1 APRIL 2019, so please review the contest details and associated release form here, get those creative writing juices flowing, and send us your visions of combat in 2030! Selected submissions may be chosen for publication or a possible future speaking opportunity.
Samuel Bendett is a Researcher at CNA and a Fellow in Russia Studies at the American Foreign Policy Council. He is also a proud Mad Scientist.
[Editor’s Note: Mad Scientist Laboratory is pleased to present our next edition of “The Queue” – a monthly post listing the most compelling articles, books, podcasts, videos, and/or movies that the U.S. Army’s Mad Scientist Initiative has come across during the previous month. In this anthology, we address how each of these works either informs or challenges our understanding of the Future Operational Environment (OE). We hope that you will add “The Queue” to your essential reading, listening, or watching each month!]
Space is rapidly democratizing, and tactical and operational surprise may be the casualty. Sara Spangelo and her startup, Swarm Technologies, are on a quest to deliver global communications at the lowest possible cost. This is a shared objective with ventures like Elon Musk’s Starlink, but his solution involves thousands of satellites requiring many successful rocket launches. Swarm Technologies takes the decrease in launch costs driven by commercialization and the miniaturization of satellites to the extreme. Its satellites will be the size of a grilled cheese sandwich and will harness the currents coursing through space to maneuver. This should reduce the cost and time required to create a worldwide network of connectivity for texting and collecting Internet of Things (IoT) data to approximately 25 million dollars and eighteen months.
The work at Starlink and Swarm Technologies only represents a small part of a new space race led by companies rather than the governments that built and manage much of space capability today. In the recent Mad Sci blog post “War Laid Bare,” Matthew Ader described this explosion and how access to global communications and sensing might tip the scales of warfare in favor of the finder, providing an overwhelming advantage over competitors that require stealth or need to hide their signatures to be effective in 21st Century warfare.
This level of global transparency weighs not only on governments and their militaries; businesses, too, will find it more difficult to hide from competitors and regulators. Cade Metz writes about the impact this will have on global competition in the New York Times article “Businesses Will Not Be Able to Hide: Spy Satellites May Give Edge from Above.” It is a brave new world – unless you have something to hide!
Subtitled, “This will fundamentally change the way we use CRISPR,” the subject article was published following Dr. He Jiankui’s announcement in November 2018 that he had successfully gene-edited two human babies. Clustered Regularly Interspaced Short Palindromic Repeats (CRISPR) associated protein 9, or CRISPR/Cas9, has become the “go to” tool for genomic engineering. When Dr. He announced that he had altered (as embryos) the twin girls Lulu and Nana’s genes in order to make them HIV-resistant, there was a global outcry from scientists, bio-ethicists, and politicians alike for a variety of reasons. One was the potential imprecision of the genetic editing performed, with the associated risk of unintended genomic damage leading to future health issues for the twins.
With the publication of “Target-Specific Precision of CRISPR-Mediated Genome Editing” in the scientific journal Molecular Cell by research scientists at The Francis Crick Institute in London, however, this particular concern appears to have been mitigated with a set of simple rules that determine the precision of CRISPR/Cas9 editing in human cells.
“Until now, editing genes with CRISPR has involved a lot of guesswork, frustration and trial and error,” Crick researcher and group leader Paola Scaffidi said in their news release. “The effects of CRISPR were thought to be unpredictable and seemingly random, but by analysing hundreds of edits we were shocked to find that there are actually simple, predictable patterns behind it all. This will fundamentally change the way we use CRISPR, allowing us to study gene function with greater precision and significantly accelerating our science.”
As predicted by Stanford’s bio-ethicist Hank Greely at last March’s Mad Scientist Bio Convergence and Soldier 2050 Conference in Menlo Park, CA, “killer apps” like healthier babies will help overcome the initial resistance to human enhancement via genetic engineering. The Crick Institute’s discovery, with its associated enhanced precision and reliability, may pave the way for market-based designer eugenics. Ramifications for the Future Operational Environment include further societal polarization between the privileged few that will have access to the genomic protocols providing such enhancements and the majority that do not (as in the 2013 film Elysium); the potential for unscrupulous regimes, non-state actors, and super-empowered individuals to breed and employ cadres of genetically enhanced thugs, “button men,” and super soldiers; and the relative policing / combat disadvantage experienced by those powers that outlaw such human genetic enhancements.
SOFWERX, in collaboration with USSOCOM / J5 Donovan Group, hosted a Radical Speaker Series on weaponized information. Mass influence operations, deep fakes, and social media metrics have been used by state and non-state actors in attempts to influence everything from public sentiment on policy issues to election results. The type and extent of influence operations have laid bare policy and technology gaps. This represents an emerging new threat vector for global competition.
As discussed in the TRADOC G-2’s The Operational Environment and the Changing Character of Future Warfare, Social Media and the Internet of Things have connected “all aspects of human engagement where cognition, ideas, and perceptions, are almost instantaneously available.” While this connectivity has been a global change agent, some are suggesting starting over and abandoning the internet as we know it in favor of alternative internet or “Alternet” solutions. LikeWar authors Singer and Brookings provide examples of how our adversaries are weaponizing Social Media to augment their operations in the physical domain. One example is the defeat of ISIS and the re-capture of Mosul: “… Who was involved in the fight, where they were located, and even how they achieved victory had been twisted and transformed. Indeed, if what was online could swing the course of a battle — or eliminate the need for battle entirely — what, exactly, could be considered ‘war’ at all?”
Taken to the next level in the battle for the brain, novel neuroweapons could grant adversaries (and perhaps the United States) the ability to disrupt, degrade, damage, kill, and even “hack” human brains to influence populations. The resulting confusion and panic could disrupt government and society, without mass casualties. These attacks against the human brain facilitate personalized warfare. Neuroweapons are “Weapons of Mass Disruption” that may characterize segments of warfare in the future. These capabilities come with a host of ethical and moral considerations — does affecting someone’s brain purposely, even temporarily, violate ethical codes, treaties, conventions, and international norms followed by the U.S. military? As posed by Singer and Brookings — “what, exactly, could be considered ‘war’ at all?”
4. Nano, short film directed by Mike Manning, 2017.
This short film noir focuses on invasive technology and explores themes of liberty, control, and what citizens are willing to trade for safety and security. In a future America, technology has progressed to the point where embedded devices in humans are not only possible and popular, but the norm. These devices, known as Nano, can sync with one’s physiology, alter genomes, change hair and eye color, and, most importantly to law enforcement and government entities, control motor functions. Nano has resulted in a safer society, with tremendous reductions in gun violence. In the film, a new law has passed mandating that all citizens must be upgraded to Nano 2.0 – this controversial move means that the Government will now have access to everyone’s location, will be able to monitor them in real-time, and control their physiology. The Government could, were they so inclined, change someone’s hair color remotely, without permission or, perhaps, more frighteningly, induce indefinite paralysis.
Nano explores and, in some cases, answers questions about future technologies and their potential impact on society. It illustrates how, along with the advantages and services we gain through new technologies, we sometimes give up things just as valuable. Technology no longer operates in a vacuum – meaning full control over ourselves no longer exists. When we use a cellphone, when we access a website, when we, in Nano, change the color of our hair, our actions are being monitored, logged, and tracked by something. With cellphone use, we accept that we give off a signature traceable by a number of agencies, including our service providers, judging it a net positive that outweighs the associated negatives. But where does that line fall? How far would the average citizen go if they could have an embedded device installed that would heal minor wounds and lacerations? What becomes of privacy, and what would we be willing to give up? Nano shows the negative consequences of this progression and the dystopian nature of technological slavery. It poses questions of trust, both in the state and in individuals, and how blurred the lines can be, both in terms of freedoms and physical appearance.
The Pew Research Center canvassed a host of technology innovators and business and policy leaders on whether artificial intelligence (AI) and related technology will enhance human capabilities and improve human life, or lessen human autonomy and agency to a detrimental level. A majority of the experts who responded to this query agreed that AI will better the lives of most people, but qualified this by noting significant negative outcomes will likely accompany the proliferation and integration of AI systems.
Most agree that AI will greatly benefit humanity and increase the quality of life for many, such as eliminating poverty and disease, while conveniently supplementing human intelligence to help solve crucial problems. However, there are concerns that AI will conflict with and eventually overpower human autonomy, intelligence, decision-making, analysis, and many other uniquely “human” characteristics. Professionals in the field expressed concerns over the potential for data abuse and cybercrime, job loss, and becoming dependent on AI, resulting in the loss of the ability to think independently.
Amy Webb, the founder of the Future Today Institute and professor of strategic foresight at New York University, posits that the integration of AI will continue for the next 50 years, until every industry is reliant on AI systems and workers must possess hybrid skills to compete for jobs that do not yet exist. Simon Briggs, professor of interdisciplinary arts at the University of Edinburgh, predicts that the potential negative outcomes of AI will be the result of a failure of humanity, and that “in 2030 AI will be in routine use to fight wars and kill people, far more effectively than humans can currently kill,” and that “we cannot expect our AI systems to be ethical on our behalf.”
As the U.S. Army continues to explore and experiment with how best to employ AI on the battlefield, there is the great challenge of ensuring that these systems are used in the most effective and beneficial capacity, without reducing the efficiency and relevance of the humans working alongside the machines. Warfare will become more integrated with this technology, so the transition must be monitored carefully for AI to be applied successfully to military strategy and operations while mitigating its potential negative effects.
A newly released paper from the Brookings Institution indicates that the advent of autonomy and advanced automation will have unevenly distributed positive and negative effects across job and career sectors. According to the report, the three fields most vulnerable to reduction through automation will be production, food service, and transportation jobs. Additionally, certain geographic categories (especially rural, less populated areas) will suffer graver effects of this continuous push towards autonomy.
Though automation is expected to displace labor in 72% of businesses in 2019, the prospects for future workers are not all doom and gloom. As the report notes, automation in a general sense replaces tasks, not entire jobs, although AI and autonomy make the specter of total job replacement more likely. The tasks that remain make humans even more critical, though there may be fewer of them. While a wide variety of workers are at risk, young people (16-24 year olds) face higher risks of labor displacement, partially because a large share of their jobs fall in the aforementioned sectors.
All of these automation impacts have significant implications for the Future Operational Environment, U.S. Army, and the Future of Warfare. An increase in automation and autonomy in production, food service, and transportation may mean that Soldiers can focus more exclusively on warfighting – moving, shooting, communicating – and in many cases will be complemented and made more lethal through automation. The dynamic nature of work due to these shifts could cause significant unrest requiring military attention in unexpected places. Additionally, the labor displacement of so much youth could be both a boon and a hindrance to the Army. On one hand, there could be a glut of new recruits due to poor employment outlook in the private sector; contrariwise, many of the freshly available recruits may not inherently have the required skills or even aptitude for becoming Warfighters.
If you read, watch, or listen to something this month that you think has the potential to inform or challenge our understanding of the Future OE, please forward it (along with a brief description of why its potential ramifications are noteworthy to the greater Mad Scientist Community of Action) to our attention at: firstname.lastname@example.org — we may select it for inclusion in our next edition of “The Queue”!
[Editor’s Note: At the Mad Scientist Learning in 2050 Conference with Georgetown University’s Center for Security Studies in Washington, DC, leading scientists, innovators, and scholars gathered to discuss how humans will receive, process, and integrate information in the future. The convergence of technology, the speed of change, the generational differences of new Recruits, and the uncertainty of the Future Operational Environment will dramatically alter the way Soldiers and Leaders learn in 2050. One clear signal generated from this conference is that learning in the future will be personalized, continuous, and accelerated.]
“The principal consequence of individual differences is that every general law of teaching has to be applied with consideration of the particular person.” – E.L. Thorndike (1906)
The world is becoming increasingly personalized, and individual choice and preference drives much of daily life, from commerce, to transportation, to entertainment. For example, your Amazon account today can keep your payment information on file (one click away), suggest new products based on your purchase history, and allow you to shop from anywhere and ship to any place, all while tracking your purchase every step of the way, including providing photographic proof of delivery. Online retailers, personal transportation services, and streaming content providers track and maintain an unprecedented amount of specific individual information to deliver a detailed and personalized experience for the consumer.
There is an opportunity to improve the effectiveness in targeted areas of learning – skills training, foundational learning, and functional training, for example – if learning institutions and organizations, as well as learners, follow the path of personalization set by commerce, transportation, and entertainment.1 This necessitates an institutional shift in the way we educate Soldiers. Instead of training being administered based on rank or pre-determined schedule, it is conducted based on need, temporally optimized for maximum absorption and retention, in a style that matches the learner, and implemented on the battlefield, if needed.
An important facet of personalized learning is personal attention to the learner. Tutors have been used in education for 60,000 years.2 However, tutoring has always been limited by how many educators could devote their attention to a single student. With advancements in AI, intelligent tutors could reduce the cost and manpower requirements associated with one-on-one instructor to student ratios. Research indicates that students who had access to tutors, as opposed to exclusive classroom instruction, were more effective learners, as seen in the chart below. In other words, the average tutored student performed better than 98 percent of the students in the traditional classroom.3 What was a problem of scale in the past – cost, manpower, time – can be alleviated in the future through the use of AI-enabled ubiquitous intelligent tutors.
Another aspect of personalized learning is the diminishing importance of geo-location. Education, in general, has traditionally been executed in a “brick and mortar” setting. The students, learners, or trainees physically travel to the location of the teacher, expert, or trainer in order for knowledge to be imparted. Historically, this was the only viable option. However, a hyper-connected world with enabling technologies like virtual and augmented reality; high-bandwidth networks with low latency; high fidelity modeling, simulations, and video; and universal interfaces reduces or eliminates the necessity for physical co-location. This allows Soldiers to attend courses hosted virtually anywhere, participate in combined arms and Joint exercises globally, and experience a variety of austere and otherwise inaccessible environments through virtual and augmented reality.4
Based on these trends and emerging opportunities to increase efficiency, the Army may have to re-evaluate its educational and training frameworks and traditional operational practices to adjust for more individualized and personalized learning styles. When personalized learning is optimized, Soldiers could become more lethal, specially skilled, and decisive along a shorter timeline, using fewer budget resources and reduced manpower.
Continuous learning, or the process of repeatedly engaging in activities designed to learn new information or skills, is a natural process that will remain necessary for Soldiers and Leaders in 2050. The future workforce will define and drive when, where, and how learning takes place. Continuous learning has the advantage of allowing humans to learn from past mistakes and understand biases by “working the problem” – assessing and fixing biases, actively changing behavior to offset biases, moving on to decision-making, and then returning to work the problem again for further solutions. Learners must be given the chance to fail, and failure must be built into the continuous learning process so that the learner not only arrives at the solution organically, but practices critical thinking and evaluation skills.5
There are costs and caveats to successful continuous learning. After a skill is learned, it must be continually practiced and maintained. Amy Titus explained how skills perish after 3-5 years unless they are updated to meet present needs and circumstances. In an environment of rapidly changing technology and situational dynamics, keeping skills up to date must be a conscious and nonstop process. One of the major obstacles to continuous learning is that learning is work and requires a measure of self-motivation to execute. Learners only effectively learn if they are curious, so learning to pass a class or check a box does not yield the same result as genuine interest in the subject.6 New approaches such as gamification and experiential learning can help mitigate some of these limitations.
The concept of accelerated learning, or using a compressed timeline and various approaches, methodologies, or technological means to maximize learning, opens up several questions: what kinds of technologies accelerate learning, and how does technology accelerate learning? Technologies useful for accelerated learning include the immersive reality spectrum – virtual reality/augmented reality (mixed reality) and haptic feedback – as well as wearables, neural stimulation, and brain mapping. These technologies and devices enable the individualization and personalization of learning. Individualization allows learners to identify their strengths and weaknesses in learning, retaining, and applying information, and provides a program structured to capitalize on their naturally favored learning style, maximizing the amount and depth of information presented in the most time- and cost-effective manner.
Digital learning platforms are important tools for the tracking of a Soldier’s progress. This tool not only delivers individualized progress reports to superiors and instructors, but also allows the learner to remain up to date regardless of their physical location. Intelligent tutors may be integrated into a digital learning platform, providing real-time, individual feedback and suggesting areas for improvement or those in need of increased attention. Intelligent tutors and other technologies utilized in the accelerated learning process, such as augmented reality, can be readily adapted to a variety of situations conforming to the needs of a specific unit or mission.
Besides external methods of accelerated learning, there are also biological techniques to increase the speed and accuracy of learning new skills. DARPA scientist Dr. Tristan McClure-Begley introduced Targeted Neuroplasticity Training (TNT), whereby the peripheral nervous system is artificially stimulated resulting in the rapid acquisition of a specific skill. Soldiers can learn movements and retain that muscle memory faster than the time it would take to complete many sets of repetitions by pairing nerve stimulation with the performance of a physical action.
Accelerated learning does not guarantee positive outcomes. There is a high initial startup cost to producing mixed, augmented, and virtual reality training programs, and these programs require massive amounts of data and inputs for the most realistic product.7 There are questions about the longevity and quality of retention when learning is delivered through accelerated means. About 40 percent of information that humans receive is forgotten after 20 minutes and another 40 percent is lost after 30 days if it is not reinforced.8
Most learners attribute mastery of a skill to practical application rather than formal training programs.9 TNT attempts to mitigate this factor by allowing multiple physical repetitions to be administered quickly. But the technique must be correctly administered, or the psychological and physiological pairing may fail or occur between the wrong stimuli, creating maladaptive plasticity – in effect, training the wrong behavior.
An increased emphasis on continuous and accelerated learning could present the Army with an opportunity to have Soldiers that are lifelong learners capable of quickly picking up emerging required skills and knowledge. However, this focus would need to account for peak learner interest and long-term viability.
[Editor’s Note: On 8-9 August 2018, the U.S. Army Training and Doctrine Command (TRADOC) co-hosted the Mad Scientist Learning in 2050 Conference with Georgetown University’s Center for Security Studies in Washington, DC. Leading scientists, innovators, and scholars from academia, industry, and the government gathered to address future learning techniques and technologies that are critical in preparing for Army operations in the mid-21st century against adversaries in rapidly evolving battlespaces. One finding from this conference is that tomorrow’s Soldiers will learn differently from earlier generations, given the technological innovations that will have surrounded them from birth through their high school graduation. To effectively engage these “New Humans” and prepare them for combat on future battlefields, the Army must discard old paradigms of learning that no longer resonate (e.g., those desiccated lectures delivered via interminable PowerPoint presentations) and embrace more effective means of instruction.]
The recruit of 2050 will be born in 2032 and will be fundamentally different from the generations born before them. Marc Prensky, educational writer and speaker who coined the term digital native, asserts this “New Human” will stand in stark contrast to the “Old Human” in the ways they assimilate information and approach learning.1 Where humans today are born into a world with ubiquitous internet, hyper-connectivity, and the Internet of Things, each of these elements is generally external to the human. By 2032, these technologies likely will have converged and will be embedded or integrated into the individual with connectivity literally on the tips of their fingers. The challenge for the Army will be to recognize the implications of this momentous shift and alter its learning methodologies, approach to training, and educational paradigm to account for these digital natives.
These New Humans will be accustomed to the use of artificial intelligence (AI) to augment and supplement decision-making in their everyday lives. AI will be responsible for keeping them on schedule, suggesting options for what and when to eat, delivering relevant news and information, and serving as an on-demand embedded expert. The Old Human learned to use these technologies and adapted their learning style to accommodate them, while the New Human will be born into them, and their learning style will be a result of them. In 2018, 94% of Americans aged 18-29 owned some kind of smartphone.2 Compare that to 73% ownership for ages 50-64 and 46% for age 65 and above, and it becomes clear that there is a strong disconnect between the age groups in terms of employing technology. Both of the leading smartphone operating systems include a built-in artificially intelligent digital assistant, and at the end of 2017, nearly half of all U.S. adults used a digital voice assistant in some way.3 Based on these trends, the technological wedge between New Humans and Old Humans will likely grow even wider.
New Humans will be information assimilators, where Old Humans were information gatherers. The techniques to acquire and gather information have evolved swiftly since the advent of the printing press, from user-intensive methods such as manual research, to a reduction in user involvement through Internet search engines. Now, narrow AI using natural language processing is transitioning to AI-enabled predictive learning. Through these AI-enabled virtual entities, New Humans will carry targeted, predictive, and continuous learning assistants with them. These assistants will observe, listen, and process everything of relevance to the learner and then deliver them information as necessary.
There is an abundance of research on the stark contrast between the three generations currently in the workforce: Baby Boomers, Generation X, and Millennials.4, 5 There will be similar fundamental differences between Old Humans and New Humans and their learning styles. The New Human likely will value experiential learning over traditional classroom learning.6 The convergence of mixed reality and advanced, high fidelity modeling and simulation will provide New Humans with immersive, experiential learning. For example, Soldiers learning military history and battlefield tactics will be able to experience it ubiquitously, observing how each facet of the battlefield affects the whole in real-time as opposed to reading about it sequentially. Soldiers in training could stand next to an avatar of General Patton and experience him explaining his command decisions firsthand.
There is an opportunity for the Army to adapt its education and training to these growing differences. The Army could, and eventually will need to, recruit, train, and develop New Humans by altering its current structure and recruitment programs. It will become imperative to conduct training with new tools, materials, and technologies that allow Soldiers to become information assimilators. Additionally, incorporating experiential learning techniques will make that training more engaging for Soldiers. There is an opportunity for the Army to pave the way and train its Soldiers with cutting-edge technology rather than belatedly trying to catch up to what is publicly available.
Evolution in Learning Technologies
If you enjoyed this post, please also watch Elliott Masie‘s video presentation on Dynamic Readiness and Marc Prensky‘s presentation on The Future of Learning from the Mad Scientist Learning in 2050 Conference …
[Editor’s Note: Mad Scientist welcomes back returning guest blogger Dr. Nir Buras with today’s post. We’ve found crowdsourcing (i.e., the gathering of ideas, thoughts, and concepts from a widespread variety of interested individuals) to be a very effective tool in enabling us to diversify our thoughts and challenge our assumptions. Dr. Buras’ post takes the results from one such crowdsourcing exercise and extrapolates three future urban scenarios. Given The Army Vision‘s clarion call to “Focus training on high-intensity conflict, with emphasis on operating in dense urban terrain,” our readers would do well to consider how the Army would operate in each of Dr. Buras’ posited future scenarios…]
The challenges of the 21st century have been forecast and are well-known. In many ways we are already experiencing the future now. But predictions are hard to validate. One way around that is to turn to slightly older predictions to gauge the magnitude of the issues and the reality of their propositions.1 Futurists William E. Halal and Michael Marien’s predictions of 2011 have aged enough to be useful. Using an improved version of the Delphi method, they iteratively built consensus among participants: Halal and Marien balanced the individual judgments of over sixty well-qualified experts and thinkers representing a range of technologies with facilitated feedback from the others, translating their implicit or tacit know-how into qualified, quantitative, empirical predictions.2
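The facilitated-feedback loop at the heart of a Delphi exercise can be illustrated with a toy simulation. This is a minimal sketch, not Halal and Marien’s actual protocol: the revision rule (each expert moves partway toward the group median), the weight, and the numbers are all illustrative assumptions.

```python
import statistics

def delphi_round(estimates, weight=0.5):
    """One facilitated feedback round: each expert revises toward the group median."""
    median = statistics.median(estimates)
    return [e + weight * (median - e) for e in estimates]

def delphi(estimates, tolerance=1.0, max_rounds=10):
    """Repeat feedback rounds until the spread of estimates falls below tolerance."""
    for round_no in range(1, max_rounds + 1):
        if max(estimates) - min(estimates) < tolerance:
            return estimates, round_no - 1  # consensus reached
        estimates = delphi_round(estimates)
    return estimates, max_rounds

# Illustrative: five experts' initial probability estimates (%) for one scenario
final, rounds = delphi([10, 25, 40, 60, 80])
```

In this toy version the panel converges on the initial median after a handful of rounds; real Delphi studies instead exchange written arguments between rounds, so the consensus can move away from the initial center of opinion.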
From their research we can transpose three future urban scenarios: The High-Tech City, The Feral City, and Muddling Through.
The High-Tech City
The High-Tech City scenario is based primarily on futurist Jim Dator’s high-tech predictions. It envisions the continued growth of a technologically progressive, upwardly mobile, internationally dominant, science-guided, rich, leisure-filled, abundant, and liberal society. Widespread understanding of what works largely avoids energy shortages, climate change, and global conflict.3
The high-tech, digital megacity is envisaged as a Dubai on steroids. It is hyper-connected and energy-efficient, powered by self-sustaining, renewable resources and nuclear energy.4
Connected by subways and skyways, with skyscraping vertical gardens, the cities are ringed by elaborately managed green spaces and ecosystems. The city’s 50 to 150-story megastructures, “cities-in-buildings,” incorporate apartments, offices, schools and grocery stores, hospitals and shopping centers, sports facilities and cultural centers, gardens, and running tracks. Alongside them rise vertical farms housing animals and crops. The rooftop garden of the 2015 film High-Rise depicts how aerial terraces up high provide a sense of suburban living in the high-tech city.5
On land, zero-emission driverless traffic zips about on intelligent highways. High-speed trains glide silently by. After dark, spider bots and snake drones automatically inspect and repair buildings and infrastructure.6
In the air, helicopters, drones, and flying cars zoom around. Small drones, mimicking insects and birds, and programmable nano-chips, some as small as “smart” dust, swarm over the city into any object or shape on command. To avoid surface traffic, inconvenience, and crime, wealthier residents fly everywhere.7
Dominated by centralized government and private sector bureaucracies wielding AI, these self-constructing robotic “cyburgs” have massive technology, robotics, and nanotechnology embedded in every aspect of their life, powered by mammoth fusion energy plants.8
Every unit of every component is embedded with at least one flea-size chip. Connected into a single worldwide digital network, trillions of sensors monitor countless parameters for the city and everything in it. The ruling AI, commanded directly by individual minds, autonomously creates, edits, and implements software, simultaneously processing feedback from a global network of sensors.9
The High-Tech City is not a new concept. It goes back to Jules Verne, H. G. Wells, and Fritz Lang, whose 1927 film Metropolis most inspired its urban look. The extrapolated growth of technology has long been the basis for predictions. But professional futurists surprisingly agree that a High-Tech Jetsons scenario has only a 0%-5% probability of being realized.10
Poignantly, the early predictions carried a warning: the stressful lifestyle of the High-Tech City contradicts its promise of freedom from drudgery. Moreover, the High-Tech megacities’ appetite for minerals may lay waste to whole ecosystems. Much of the earth may become a feral wilderness. Massive, centralized AI Internet clouds and distribution systems give a false sense of cultural robustness. People become redundant and democracy meaningless. The world may fail to react to accelerated global crises, with disastrous consequences. The paradoxical obsolescence of high-tech could slide humanity into a new Dark Age.11
The Feral City
Futurists disturbingly describe a Decline to Disaster scenario as five times more likely to happen than the high-tech one. From Tainter’s theory of collapse and Jane Jacobs’s Dark Age Ahead we learn that the cycles of urban problem-solving lead to more problems and ultimately failures. If Murphy’s Law kicks in, futurists predict a 60% chance that large parts of the world may be plunged into an Armageddon-type techno-dystopian scenario, typified by the films Mad Max (1979) and Blade Runner (1982).12
Apocalyptic feral cities, once vital components of national economies, are routinely imagined as vast, sprawling urban environments defined by blighted buildings. Immense petri dishes of both ancient and new diseases, they are places where the rule of law has long been replaced by gang anarchy and the only security available is attained through brute power.13
Neat suburban areas were long ago stripped for their raw materials. Daily life in feral cities is characterized by a ubiquitous specter of murder, bloodshed, and war, of the militarization of young men, and the constant threat of rape to females. Urban enclaves are separated by wild zones, fragmented habitats consisting of wild nature and subsistence agriculture. With minimal or no sanitation facilities, a complete absence of environmental controls, and massive populations, feral cities suffer from extreme air pollution from vehicles and the use of open fires and coal for cooking and heating. In effect toxic-waste dumps, these cities pollute vast stretches of land, poisoning coastal waters, watersheds, and river systems throughout their hinterlands.14
Pollution is exported outside the enclaves, where the practices of the desperately poor, and the extraction of resources for the wealthy, induce extreme environmental deterioration. Rivers flow with human waste and leached chemicals from mining, contaminating much of the soil on their banks.15
Globally connected, a feral city might possess a modicum of commercial linkages, and some of its inhabitants might have access to advanced communication and computing. In some areas, agriculture might be forced toward high-yield, GMO, and biomass crops. But secure long-distance travel nearly disappears, undertaken mostly by the super-rich and otherwise powerful.16
Futurists backcasting from 2050 say that the current urbanization of violence and war is a harbinger of the feral city scenario. But feral cities have long been present. The Warsaw Ghetto in World War Two was among them, as were Los Angeles’ Watts neighborhood in the 1960s and 1990s, Mogadishu in 2003, and Gaza repeatedly.17
Conflict and crime changed once charming, peaceful Aleppo, Bamako, Caracas, Erbil, Mosul, Tripoli, and Salvador into feral cities. Medieval San Gimignano was one. Spectacularly, from 1889 to 1994 the ghastly spaces of Hong Kong’s singular urban phenomenon, the Walled City of Kowloon, provided a living example.18
Muddling Through
The good news is that futurists tend to believe in a 65%-85% probability of a Muddling Through scenario. Despite interlinked, cascading catastrophes, they suggest that technologies may gain some ground on the problems, somehow securing a sustainable world for 9 billion people by 2050. The world, they suggest, will be massively changed, yet still livable.19
Lending credibility to the Muddling Through scenario is that it blends numerous hypotheses. It predicts that people living in rural communities will tend the land scientifically. Its technological salvation hypothesis posits that science will come to the rescue. Its free market hypothesis assumes that commerce will drive technological advancements.20
It pictures a “conserver” society tinged by Marxism, a neo-puritan “ecotopia” colored by both the high-tech and feral scenarios. Tropical diseases, corruption, capitalism, socialism, inequality, and war are not eradicated. But nationalism, tribalism, and xenophobia are reduced after global traumas. Though measurably poorer, most people will still have a reasonable level of wellbeing.21

According to the Muddling Through scenario, large cities retract and densify around their old centers and waterfronts. Largely self-sufficient, small towns and cities survive amid the ruins of suburban sprawl, separated by resurgent forests and fields. Shopping malls, office towers and office parks, town dumps, tract homes, and abandoned steel and glass buildings are stripped for their recyclables. Unsalvageable downtowns in some cases go feral.22
A mix of high and low tech fosters digital communication with those at a distance. There would be drip irrigation, hydroponic farming, aquaculture, and grey water recycling, overlaid with artificial intelligence, biotechnology and biomimicry, nuclear power, geoengineering, and oil from algae.23
In some places, rail links are maintained, but cars are a rarity, and transportation is greatly reduced. Collapsed or dismantled freeways and bridges return to the forest or desert. While flying still exists, it is rarer. But expanded virtual mobility offering “holodeck” experiences subsumes tourism. Cosmopolitanism happens on the porch with an iPad.24
Surprisingly, the Muddling Through scenario ends up with urban fabric similar in properties to homeostatic planning, had it been done intentionally. Work is a short walk from home. Corner stores pop up, as do rudimentary cafés, bistros, and other gathering places. Forty percent of the food is produced in or around cities on small farms. Wildlife returns to course freely. Groups of travelers move on surviving “high roads.” Communities meet at large sports venues situated in the countryside between them.25
Sea level rise is met with river and sea walls. At their base, vast new coral beds and kelp forests grow over the skeletons of submerged districts and towns. In a matter of years, rivers and seas build new beaches. Their flood plains are populated with new plants. Smaller scale trade waterfronts are reactivated for shipping, and some ships are even powered by sail. Cities occupying harbors, rivers, and railroad junctions reconnect to distant supply chains, mostly for non-quotidian (i.e., luxury) goods.26
Learning from Rome to Understand Detroit
Rome’s deterioration from a third-century city of more than 1,000,000 people started long before it was acknowledged. An unnoticed population drop to 800,000 was characterized by ever larger buildings of decreasing beauty and craft, including the huge Baths of Diocletian (298-306 CE). Rome’s walls were built (271-275 CE) in anticipation of barbarian invasion. The city was ransacked twice (410 and 455 CE).27
But as if in a dream, fifth-century life continued as normal for the diminishing but still substantial population. Invading Goths maintained Rome’s Senate, taxes, and police. But administrative and military infrastructure evaporated. An unraveling education system led to rising illiteracy. Noble families turned to mob politics, economic and social linkages broke down, travel and transportation became unsafe, and manufacturing collapsed.28
By 500 CE, Rome had less than 100,000 people. Systematic agriculture disappeared, and much land returned to forest. The Pope and nobility pillaged abandoned public buildings for their materials. The expansive city was reduced to small groups of inhabited buildings, interspersed among large areas of abandoned ruins and overgrown vegetation. In the 12th and 13th centuries the population of Rome was possibly as few as 20,000 people.29
The long journey from first cities, to Ancient Greece, Rome, and the Middle Ages, through Paris, Washington, and Shanghai, helps us understand how our cities might end up. Holding Rome up to the mirrors of history reads like backcasting Rome’s decline and survival in a Muddling Through scenario from today’s view. Halal predicted that muddling would start about 2023 to 2027 and that if we weren’t muddling by then, collapse would set in by 2029.30
Detroit started muddling in 1968. New York proved to be a fragile city during blackouts, as did Dubai in its 2009 financial crisis. Since the 1970s, most of America’s ten “dead cities,” many formerly among its largest and most vibrant, came disturbingly close to being feral. The overlapping invisibilities of heavily armed warlords and brutal police make the favelas of Medellin and Rio de Janeiro virtually feral.31
Today we are at a tipping point. We can wait for the collapse of systems to reach homeostasis or attain it intentionally by applying Classic Planning principles.32
If you enjoyed this post, please also see Dr. Buras’ other posts:
Nir Buras is a PhD architect and planner with over 30 years of in-depth experience in strategic planning, architecture, and transportation design, as well as teaching and lecturing. His planning, design, and construction experience includes East Side Access at Grand Central Terminal, New York; International Terminal D, Dallas-Fort Worth; the Washington, DC Dulles Metro line; and work on the US Capitol and the Senate and House Office Buildings in Washington. Projects he has worked on have been published in the New York Times, the Washington Post, local newspapers, and trade magazines. Buras, whose original degree was in architecture and town planning, learned his first lesson in urbanism while planning military bases in the Negev Desert in Israel. Engaged in numerous projects since then, Buras has watched first-hand how urban planning impacts architecture. After the last decade of applying in practice the classical method that he learned in post-doctoral studies, his book, The Art of Classic Planning (Harvard University Press, 2019), presents the urban design and planning method of Classic Planning as a path forward for homeostatic, durable urbanism.
1 Population growth, clean water, compromised resilience of infrastructures, drug-resistant microbes, pandemics, possible famine, authoritarian regimes, social breakdowns, terrestrial cataclysms, terrorist mischief, nuclear mishaps, perhaps major war, inequity, education and healthcare collapse, climate change, ecological devastation, biodiversity loss, ocean acidification, world confusion, institutional gridlock, failures of leadership, failure to cooperate. Sources include: Glenn, Jerome C., Theodore J. Gordon, Elizabeth Florescu, 2013-14 State of the Future Millennium Project: Global Futures Studies and Research, Millennium-project.org (website), Washington, DC, 2014; Cutter, S. L. et al., Urban Systems, Infrastructure, and Vulnerability, in Climate Change Impacts in the United States: The Third National Climate Assessment, in Melillo, J. M. et al., (eds.), U.S. Global Change Research Program, 2014, Ch. 11, pp. 282-296; Kaminski, Frank, A review of James Kunstler’s The Long Emergency 10 years later, Mud City Press (website), Eugene, OR, 9 March 2015; Urban, Mark C., Accelerating extinction risk from climate change, Science Magazine, Vol. 348, Issue 6234, 1 May 2015, pp. 571-573; Kunstler, J.H., Clusterfuck Nation: A Glimpse into the Future, Kunstler.com (website), 2001b; US Geological Survey, Materials Flow and Sustainability, Fact Sheet FS-068-98, June 1998; Klare, M. T., The Race for What’s Left, Metropolitan Books, New York, 2012; Drielsma, Johannes A. et al., Mineral resources in life cycle impact assessment – defining the path forward, International Journal of Life Cycle Assessment, 21 (1), 2016, pp. 85-105; Meinert, Lawrence D. 
et al., Mineral Resources: Reserves, Peak Production and the Future, Resources 5(14), 2016; OECD World Nuclear Agency and International Atomic Energy Agency, 2004; Tahil, William, The Trouble with Lithium Implications of Future PHEV Production for Lithium Demand, Meridian International Research, 2007; Turner, Graham, Cathy Alexander, Limits to Growth was right. New research shows we’re nearing collapse, Guardian, Manchester, 1 September 2014; Kelemen, Peter, quoted in Cho, Renee, Rare Earth Metals: Will We Have Enough?, in State of the Planet, News from the Earth Institute, Earth Institute, Columbia University, September 19, 2012; Griffiths, Sarah, The end of the world as we know it? CO2 levels to reach a ‘tipping point’ on 6 June – and Earth may never recover, expert warns, Daily Mail, London, 12 May 2016; van der Werf, G.R. et al., CO2 emissions from forest loss, Nature Geoscience, Volume 2, November 2009, pp. 737–738; Global Deforestation, Global Change Program, University of Michigan, January 4, 2006; Arnell, Nigel, Future worlds: a narrative description of a plausible world following climate change, Met Office, London, 2012; The End, Scientific American, Special Issue, Sept 2010; Dator, Jim, Memo on mini-scenarios for the pacific island region, 3 November 1981b, quoted in Bezold, Clement, Jim Dator’s Alternative Futures and the Path to IAF’s Aspirational Futures, Journal of Futures Studies, 14(2), November 2009, pp. 123-134.
2 Halal, William, Through the megacrisis: the passage to global maturity, Foresight Journal, Vol. 15, No. 5, 2013a, pp. 392-404; Halal, William E., and Michael Marien, Global MegaCrisis Four Scenarios, Two Perspectives, The Futurist, Vol. 45, No. 3, May-June 2011; Halal, William E., Forecasting the technology revolution: Results and learnings from the TechCast project, Technological Forecasting and Social Change, 80.8, 2013b, pp. 1635-1643; TechCast Project, George Washington University, TechCast.org (website), Washington, DC, N.D.; National Research Council, Persistent Forecasting of Disruptive Technologies—Report 2, The National Academies Press, Washington, DC, 2010; Halal, William E., Technology’s Promise: Expert Knowledge on the Transformation of Business and Society, Palgrave Macmillan, London, 2008; Halal et al., The GW Forecast of Emerging Technologies, Technology Forecasting & Social Change, Vol. 59, 1998, pp. 89-110. The name was inspired by the oracle at Delphi (8th century BCE to 390 CE). The modern Delphi Method helps uncover data, and collect and distill the judgments of experts, using rounds of questionnaires interspersed with feedback. Each round is developed based on the results of the previous, until the research question is answered, a consensus is reached, theoretical saturation is achieved, or sufficient information has been exchanged. Linstone, Harold A., & Murray Turoff (eds.), The Delphi method: Techniques and applications, Addison-Wesley, London, 1975; Halal, William E., Business Strategy for the Technology Revolution: Competing at the Edge of Creative Destruction, Journal of Knowledge Economics, Springer Science+Business Media, New York, September 2012. The author consolidated both of Halal and Marien’s muddling scenarios into one. The uncertainty of each particular forecast element was about 20%-30%.
4 Chan, Tony, in Reubold, Todd, Envision 2050: The Future of Cities, Ensia.com (website), 16 June, 2014; Kunstler, James Howard, Back to the Future, Orion Magazine, June 23, 2011. Urry, John et al., Living in the City, Foresight, Government Office for Science, London, 2014; Hoff, Mary, Envision 2050: The Future of Transportation, Ensia.com (website), 31 March, 2014.
5 Kaku, Michio, The World in 2100, New York Post, New York, 20 March 2011. Tonn, Bruce E., LeCorbusier Meets the Jetsons in Anytown U.S.A. in the Year 2050: Glimpses of the Future, Planning Forum, Community and, Regional Planning, Volume 8, School of Architecture, The University of Texas, Austin, 2002; Urry et al., 2014.
6 Kaku, 2011; Hon, 2016. Rubbish bins will send alarms when they are nearly full. Talking garbage bins will reward people with poems, aphorisms, and songs for placing street rubbish in the bin. Heinonen, 2013.
8 Heinonen, 2013. The prefix cy-, an abbreviation of cybernetics, relates to computers and virtual reality. The suffix -burg means city or fortified town. Urrutia, Orlando, Eco-Cybernetic City of the Future, Pacebutler.com (website), 12 February 2010; Tonn, 2002.
9 Shepard, M., Sentient City: Ubiquitous Computing, Architecture, And The Future of Urban Space. MIT Press, Cambridge, 2011; Kurzweil, Ray, The Singularity is Near, Penguin Group, New York, 2005. Some futurists predict that the energy required to keep a “global brain” operating may so deplete energy that it will bankrupt society and cause total collapse. Heinonen, 2013. The terms smart city, intelligent city, and digital city are sometimes synonymous, but the digital or intelligent city is considered heavily technological. Heinonen, 2013; Giffinger, Rudolf et al., Smart cities – Ranking of European medium-sized cities. Centre of Regional Science, Vienna UT, October 2007; Kaku, 2011; Vermesan, Ovidiu and Friess, Peter, Internet of Things: Converging Technologies for Smart Environments and Integrated Ecosystems, River Publishers, Aalborg DK, 2013; Cooper, G., Using Technology to Improve Society, The Guardian, Manchester, 2010; Heinonen, 2013. Typical smart city programs utilize traffic data visualization, smart grids, smart water and e-government solutions, The Internet, smartphones, inexpensive sensors, and mobile devices. Amsterdam, Dubai, Cairo, Edinburg, Malaga, and Yokohama have smart city schemes. Webb, Molly et al., Information Marketplaces: The New Economics of Cities, The Climate Group, ARUP, Accenture and The University of Nottingham, 2011.
10 Dator, 2002; Bezold, 2009. The Jetsons originally ran a single season in 1962-63. It was revived but not resuscitated in 1985. The term Jetsons today stands for “unlikely, faraway futurism.” Novak, Matt, 50 Years of the Jetsons: Why The Show Still Matters, Smithsonian.Com, 19 September 2012.
11 Perrow, Charles, Normal Accidents: Living with High-Risk Technologies, Basic Books, New York, 1984. By adding complexity, including conventional engineering warnings, precautions, and safeguards, systems failure not only becomes inevitable, but it may help create new categories of accidents, such as those of Bhopal, the Challenger disaster, Chernobyl, and Fukushima. Deconcentrating high-risk populations, corporate power, and critical infrastructures is suggested. Perrow, Charles, The Next Catastrophe: Reducing Our Vulnerabilities to Natural, Industrial, and Terrorist Disasters, Princeton University Press, Princeton, 2011; Turner, 2014; Jacobs, Jane, Dark Age Ahead, Random House, New York, 2004, p.24.
12 Jacobs, 2004; Dirda, Michael, A living urban legend on the sorry way we live now, Washington Post, Washington DC, 6 June, 2004; Dator, 2002; Bezold, 2009; Dator, James, Alternative futures & the futures of law, in Dator, James & Clement Bezold (eds.), Judging the future, University of Hawaii Press, Honolulu, 1981. pp.1-17; Halal, 2013b.
13 The term feral city was coined in Norton, Richard J., Feral Cities, Naval War College Review, Vol. LVI, No. 4, Autumn 2003. See also Brunn, Stanley D. et al., Cities of the World: World Regional Urban Development, Rowman & Littlefield, Lanham, MD, 2003, pp. 5–14, chap. 1.
15 Urry, J., Offshoring. Polity, Cambridge, 2014; Gallopin, G., A. Hammond, P. Raskin, R. Swart, Branch Points, Global Scenario Group, Stockholm Environment Institute, Stockholm, 1997, p. 34. Norton, 2003.
17 Backcasting is future hindsight. Kilcullen, David, Out of the Mountains: The Coming Age of the Urban Guerrilla, Oxford University Press, Oxford, 2013.
18 Heterotopia, in Foucault, Michel, The Order of Things, Vintage Books, New York, 1971; Foucault, M., Of Other Spaces, Diacritics 16, 1986, pp. 22-27. Girard, Greg, and Ian Lambot, City of Darkness: Life in Kowloon Walled City, Watermark, Chiddingfold, 1993, 2007, 2014; Tan, Aaron Hee-Hung, Kowloon Walled City: Heterotopia in a Space of Disappearance (Master’s Thesis), Harvard University, Cambridge, MA, 1993; Sinn, Elizabeth, Kowloon Walled City: Its Origin and Early History (PDF), Journal of the Hong Kong Branch of the Royal Asiatic Society, 27, 1987, pp. 30–31; Harter, Seth, Hong Kong’s Dirty Little Secret: Clearing the Walled City of Kowloon, Journal of Urban History 27, 1, 2000, pp. 92-113; Grau, Lester W. and Geoffrey Demarest, Diehard Buildings: Control Architecture a Challenge for the Urban Warrior, Military Review, Combined Arms Center, Fort Leavenworth, Kansas, September/October 2003; Kunstler, James Howard, A Reflection on Cities of the Future, Energy Bulletin, Post Carbon Institute, 28 September 2006; ArenaNet Art Director Daniel Dociu wins Spectrum 14 gold medal!, Guild Wars.com (website), 9 March 2007. Authors, game designers, and filmmakers have used the Walled City to convey a sense of feral urbanization. It was the setting for Jean-Claude Van Damme’s 1988 film Bloodsport; Jackie Chan’s 1993 film Crime Story was partly filmed there amid genuine scenes of building demolition; and the video game Shadowrun: Hong Kong features a futuristic Walled City. Today the location of the former Kowloon Walled City is occupied by a park modelled on early Qing Dynasty Jiangnan gardens.
19 Halal, 2013a; Wright, Austin Tappan, Islandia, Farrar & Rinehart, New York, Toronto, 1942; Tonn, Bruce E., Anytown U.S.A. in the Year 2050: Glimpses of the Future, Planning Forum, Community and, Regional Planning, Volume 8, School of Architecture, The University of Texas, Austin, 2002; Porritt, Jonathon, The World We Made: Alex McKay’s Story from 2050, Phaidon Press, London, 2013. World Made by Hand novels by James Howard Kunstler: World Made By Hand, Grove Press, New York, 2008; The Witch of Hebron, Atlantic Monthly Press, 2010; A History of the Future, Atlantic Monthly, 2014; The Harrows of Spring, Atlantic Monthly Press, 2016
21 Dator, 2002; Bezold, 2009; Dator & Bezold, 1981; Dator, 1981a; Dator, 1981b; Dator, James, The Unholy Trinity, Plus One (Preface), Journal of Futures Studies, University of Hawaii, 13(3), February 2009, pp. 33 – 48; McDonough, William & Michael Braungart, Cradle to Cradle: Remaking the Way We Make Things, Macmillan, New York, 2002; Porritt, 2013; Urry et al., 2014.
22 Wright, 1942; Kunstler, 2011; Givens, Mark, Bring It On Home: An Interview with James Howard Kunstler, Urban Landscapes and Environmental Psychology, Mung Being (website), Issue 11, N.D., p. 30; Kunstler, World Made by Hand series.
23 Tonn, 2002; Mollison, B. C. Permaculture: A Designer’s Manual. Tagari Publications, Tyalgum, Australia, 1988; Holmgren, D. and B. Mollison, Permaculture One, Transworld Publishers, Melbourne, 1978; Holmgren, D., Permaculture: Principles and Pathways beyond Sustainability, Holmgren Design Services, Hepburn, Victoria, Australia, 2002; Holmgren, David, Future Scenarios: How Communities Can Adopt to Peak Oil and Climate Change, Chelsea Green Publishing White River Junction, Vermont, 2009; Walker, L., Eco-Village at Ithaca: Pioneering a Sustainable Culture, New Society Publishers, Gabriola Island, 2005; Hopkins, R., The Transition Handbook: From Oil Dependency to Local Resilience, Green Books, Totnes, Devon, 2008; Urry et al., 2014; Porritt, 2013.
24 Urry et al., 2014; Porritt, 2013; Caletrío, Javier, “The world we made. Alex McKay’s story from 2050” by Jonathon Porritt (review), Mobile Lives Forum, forumviesmobiles.org (website), 21 May 2015.
27 Krautheimer, Richard, Rome: Profile of A City, 312-1308, Princeton University Press, Princeton, 1980.
28 Palmer, Ada, The Shape of Rome, exurbe.com (website), Chicago, 15 August 2013.
29 Procopius of Caesarea (c. 490/507-c. 560s), Dewing, H. B., and Glanville Downey (trans.), Procopius, Harvard University Press, Cambridge, MA, 2000. On the Wars in eight books (Polemon or De bellis) was published in 552, with an addition in 554; Storey, Glenn R., The population of ancient Rome, Antiquity, December 1, 1997; Wickham, Chris, Medieval Rome: Stability and Crisis of a City, 900-1150, Oxford Studies in Medieval European History, Oxford University Press, New York, Oxford, 2015. Population numbers are uncertain well into the Renaissance. Krautheimer, 1980.
30 Porritt, 2013; Alexander, Samuel, Resilience through Simplification: Revisiting Tainter’s Theory of Collapse, Simplicity Institute Report, Melbourne (?), 2012b; Palmer, 2013: Halal, 2013a, 2013b.
31 America’s “Ten Dead Cities” in 2010: Buffalo; Flint; Hartford; Cleveland; New Orleans; Detroit; Albany; Atlantic City; Allentown, and Galveston. McIntyre, Douglas A., America’s Ten Dead Cities: From Detroit to New Orleans, 24/7 Wall Street (website), 23 August, 2010; Gibson, Campbell, Population of The 100 Largest Cities And Other Urban Places In The United States: 1790 To 1990, Population Division, U.S. Bureau of the Census, Washington, DC, June 1998. See also “America’s 150 forgotten cities.” Hoyt, Lorlene and André Leroux, Voices from Forgotten Cities Innovative Revitalization Coalitions in America’s Older Small Cities, MIT, Cambridge, MA, 2007; Manaugh, Geoff, Cities Gone Wild, Bldgblog.com (website), 1 December 2009.
32 Buras, Nir, The Art of Classic Planning for Beautiful and Enduring Communities, Harvard University Press, Cambridge, 2019.
[Editor’s Note: Multi-Domain Operations (MDO) describes how the U.S. Army, as part of the joint force, can counter and defeat a near-peer adversary capable of contesting the U.S. in all domains, in both competition and armed conflict. MDO provides commanders numerous options for executing simultaneous and sequential operations using surprise and the rapid and continuous integration of capabilities across all domains to present multiple dilemmas to an adversary in order to gain physical and psychological advantages and influence and control over the operational environment.
Today’s guest blog post by Mr. Matthew Ader, however, addresses the advent of inexpensive CubeSats, capable of providing global surveillance at a fraction of the cost of legacy spy satellites, and how they could usher in the end of covert movement for combat units and their associated logistical support, and with that the demise of strategic and operational deception and surprise.]
One of the key factors of war since time immemorial has been uncertainty. The dispositions of the enemy, the strength of their industry, the will of their people – all have been guessed at, but rarely known for certain. Thanks to a pair of companies in California, that is about to change. Deception is dying.
Planet Labs operates a constellation of around 180 CubeSats – shoebox sized satellites in low earth orbit. Each day, they photograph the entirety of the globe, sending 6 terabytes of data to Earth for use. That capacity alone is valuable, but the sheer volume of data makes it impossible to analyze quickly.
For a human.
Artificial Intelligence (AI) image analysis is not so limited. This has been recognized and operationalized by Orbital Insight, a company specializing in AI image analysis. Partnering with Planet Labs, Orbital Insight delivers unique intelligence – for example, counting cars in parking lots to determine market movements. If they can count cars, they can certainly count tanks.
And, unlike conventional satellites, CubeSat imagery is cheap. It costs about US$100,000 to put one into orbit. The cost of a Planet Labs satellite is not easily available, but a similar sized CubeSat costs an estimated US$30,000. A 180-satellite constellation would therefore cost US$23.4 million, around a third of the price of a single F-35. If more timely imagery is required, buying more satellites is not an obstacle. It’s harder to find solid numbers for AI, but Project Maven, DoD’s flagship image analysis research program, was budgeted at $93 million a year.
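The arithmetic behind that constellation price is simple enough to sketch; the figures below are the rough public estimates quoted above, not vendor numbers.

```python
# Back-of-envelope cost of a CubeSat surveillance constellation, using the
# per-satellite figures cited above (both are rough estimates, not quotes).
LAUNCH_COST_USD = 100_000   # estimated launch cost per CubeSat
UNIT_COST_USD = 30_000      # estimated build cost of a similar-sized CubeSat
SATELLITES = 180            # approximate Planet Labs constellation size

total = SATELLITES * (LAUNCH_COST_USD + UNIT_COST_USD)
print(f"Constellation cost: ${total:,}")  # → Constellation cost: $23,400,000
```

Even tripling the satellite count for more frequent revisit times would keep the total well under the price of a single modern fighter aircraft.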
Therefore, it’s not implausible that given some years for the technology to mature and a few billion dollars of investment,1 any national military will have the capability to persistently surveil the entire Earth. A combination of camouflage and low-resolution satellite cameras will probably preserve tactical deception. But strategic and operational deception, the covert movement of battalions and carrier strike groups, will be impossible. That is a revolution in military affairs.
In particular, logistics will become very difficult. The depots and truck convoys required to sustain a modern army will be easily visible. Long range, uninterceptable hypersonic weapons can then strike these targets with impunity. Even absent high-tech hypersonics, conventional missiles and rocket artillery can still have a serious impact. The result is that deploying and sustaining any sizeable force against an enemy with a large CubeSat constellation will be very difficult.
In trying to predict the future of war, it is easy to fall prey to LTG H.R. McMaster’s ‘vampire fallacy’ of thinking new technology will deliver bloodless, decisive victory. Certainly, there are a range of factors which could mitigate the incredible intelligence advantages of CubeSat constellations. These could range from better cyberwarfare to degrade enemy intelligence sharing, to more effective missile defense, to directly attacking the CubeSats themselves.
These mitigating factors do not occur in the wild. It will take years of hard work to develop and deploy them. The U.S. military, in partnership with its allies, must take the lead on developing its own CubeSat constellations and countermeasures. Because if it doesn’t, someone else will – and the results for U.S. power could be catastrophic.
[Editor’s Note: Story Telling is a powerful tool that allows us to envision how innovative technologies could be employed and operationalized in the Future Operational Environment. Mad Scientist is seeking your visions of future combat with our Science Fiction Writing Contest 2019. Our deadline for submission is 1 APRIL 2019, so please review the contest details below, get those creative writing juices flowing, and send us your visions of combat in 2030!]
Background: The U.S. Army finds itself at a historical inflection point, where disparate, yet related elements of an increasingly complex Operational Environment (OE) are converging, creating a situation where fast moving trends are rapidly transforming the nature of all aspects of society and human life – including the character of warfare. It is important to take a creative approach to projecting and anticipating both transformational and enduring trends that will lend themselves to the depiction of the future. In this vein, the U.S. Army Mad Scientist Initiative is seeking your creativity and unique ideas to describe a battlefield that does not yet exist.
Task: Write about the following scenario – On March 17th, 2030, the country of Donovia, after months of strained relations and covert hostilities, invades neighboring country Otso. Donovia is a wealthy nation that is a near-peer competitor to the United States. Like the United States, Donovia has invested heavily in disruptive technologies such as robotics, AI, autonomy, quantum information sciences, bio enhancements and gene editing, space-based weapons and communications, drones, nanotechnology, and directed energy weapons. The United States is a close ally of Otso and is compelled to intervene due to treaty obligations and historical ties. The United States is about to engage Donovia in its first battle with a near-peer competitor in over 80 years…
Three ways to approach:
1) Forecasting – Description of the timeline and events leading up to the battle.
2) Describing – Account of the battle while it’s happening.
3) Backcasting – Retrospective look after the battle has ended (i.e., After Action Review or lessons learned).
Three questions to consider while writing (U.S., adversaries, and others):
1) What will forces and Soldiers look like in 2030?
2) What technologies will enable them or be prevalent on the battlefield?
3) What do Multi-Domain Operations look like in 2030?
– No more than 5000 words in length
– Provide your submission in .doc or .docx format
– Please use conventional text formatting (e.g., no columns) and have images “in line” with text
– Submissions from Government and DoD employees must be cleared through their respective PAOs prior to submission
– MUST include completed release form (on the back of contest flyer)
– CANNOT have been previously published
Selected submissions may be chosen for publication or a possible future speaking opportunity.
Contact: Send your submissions to: email@example.com
For additional story telling inspiration, please see the following blog posts:
[Editor’s Note: As stated previously here in the Mad Scientist Laboratory, the nature of war remains inherently humanistic in the Future Operational Environment. Today’s post by guest blogger COL James K. Greer (USA-Ret.) calls on us to stop envisioning Artificial Intelligence (AI) as a separate and distinct end state (oftentimes in competition with humanity) and to instead focus on preparing for future connected competitions and wars.]
The possibilities and challenges for future security, military operations, and warfare associated with advancements in AI are proposed and discussed with ever-increasing frequency, both within formal defense establishments and informally among national security professionals and stakeholders. One is confronted with a myriad of alternative futures, including everything from a humanity-killing variation of Terminator’s SkyNet to uncontrolled warfare à la WarGames to Deep Learning used to enhance existing military processes and operations. And of course legal and ethical issues surrounding the military use of AI abound.
Yet in most discussions of the military applications of AI and its use in warfare, we have a blind spot in our thinking about technological progress toward the future. That blind spot is that we think about AI largely as disconnected from humans and the human brain. Rather than thinking about AI-enabled systems as connected to humans, we think about them as parallel processes. We talk about human-in-the-loop or human-on-the-loop largely in terms of the control over autonomous systems, rather than comprehensive connection to and interaction with those systems.
But even while significant progress is being made in the development of AI, almost no attention is paid to the military implications of advances in human connectivity. Experiments have already been conducted connecting the human brain directly to the internet, which of course connects the human mind not only to the Internet of Things (IoT), but potentially to every computer and AI device in the world. Such connections will be enabled by a chip in the brain that provides connectivity while enabling humans to perform all normal functions, including all those associated with warfare (as envisioned by John Scalzi’s BrainPal in “Old Man’s War”).
Moreover, experiments in connecting human brains to each other are ongoing. Brain-to-brain connectivity has occurred in a controlled setting enabled by an internet connection. And, in experiments conducted to date, the brain of one human can be used to direct the weapons firing of another human, demonstrating applicability to future warfare. While experimentation in brain-to-internet and brain-to-brain connectivity is not as advanced as the development of AI, it is easy to see that the potential benefits, desirability, and frankly, market forces are likely to accelerate the human side of connectivity development past the AI side.
So, when contemplating the future of human activity, of which warfare is unfortunately a central component, we cannot and must not think of AI development and human development as separate, but rather as interconnected. Future warfare will be connected warfare, with implications we must now begin to consider. How would such connected warfare be conducted? How would mission command be exercised between man and machine? What are the leadership implications of the human leader’s brain being connected to those of their subordinates? How will humans manage information for decision-making without being completely overloaded and paralyzed by overwhelming amounts of data? What are the moral, ethical, and legal implications of connected humans in combat, as well as responsibility for the actions of machines to which they are connected? These and thousands of other questions and implications related to policy and operation must be considered.
The power of AI resides not just in that of the individual computer, but in the connection of each computer to literally millions, if not billions, of sensors, servers, computers, and smart devices employing thousands, if not millions, of software programs and apps. The consensus is that at some point the computing and analytic power of AI will surpass that of the individual. And therein lies a major flaw in our thinking about the future. The power of AI may surpass that of a human being, but it won’t surpass the learning, thinking, and decision-making power of connected human beings. When a future human is connected to the internet, that human will have access to the computing power of all AI. But, when that same human is connected to several (in a platoon), or hundreds (on a ship) or thousands (in multiple headquarters) of other humans, then the power of AI will be exceeded by multiple orders of magnitude. The challenge of course is being able to think effectively under those circumstances, with your brain connected to all those sensors, computers, and other humans. This is what Ray Kurzweil terms “hybrid thinking.” Imagine how that is going to change every facet of human life, to include every aspect of warfare, and how everyone in our future defense establishment, uniformed or not, will have to be capable of hybrid thinking.
So, what will the military human bring to warfare that the AI-empowered computer won’t? Certainly, one of the major challenges with AI thus far has been its inability to demonstrate human intuition. AI can replicate some derivative tasks with intuition using what is now called “Artificial Intuition.” These tasks are primarily the intuitive decisions that result from experience: AI generates this experience through some large number of iterations, which is how Google’s AlphaGo was able to beat the human world Go champion. Still, this is only a small part of the capacity of humans in terms not only of intuition, but of “insight,” what we call the “light bulb moment.” Humans will also bring emotional intelligence to connected warfare. Emotional intelligence, including aspects such as empathy, loyalty, and courage, is critical in the crucible of war and is not a capability that machines can provide the Force, not today and perhaps not ever.
Warfare in the future is not going to be conducted by machines, no matter how far AI advances. Warfare will instead be connected human to human, human to internet, and internet to machine in complex, global networks. We cannot know today how such warfare will be conducted or what characteristics and capabilities of future forces will be necessary for victory. What we can do is cease developing AI as if it were something separate and distinct from, and often envisioned in competition with, humanity and instead focus our endeavors and investments in preparing for future connected competitions and wars.
If you enjoyed this post, please read the following Mad Scientist Laboratory blog posts:
… and watch Dr. Alexander Kott’s presentation The Network is the Robot, presented at the Mad Scientist Robotics, Artificial Intelligence, & Autonomy: Visioning Multi Domain Battle in 2030-2050 Conference, at the Georgia Tech Research Institute, 8-9 March 2017, in Atlanta, Georgia.
COL James K. Greer (USA-Ret.) is the Defense Threat Reduction Agency (DTRA) and Joint Improvised Threat Defeat Organization (JIDO) Integrator at the Combined Arms Command. A former cavalry officer, he served thirty years in the US Army, commanding at all levels from platoon through brigade. Jim served in operational units in CONUS, Germany, the Balkans, and the Middle East. He served in US Army Training and Doctrine Command (TRADOC), primarily focused on leader, capabilities, and doctrine development. He has significant concept development experience, co-writing concepts for Force XXI, Army After Next, and Army Transformation. Jim was the Army representative to the OSD Net Assessment 20XX Wargame Series, developing concepts for OSD and the Joint Staff. He is a former Director of the Army School of Advanced Military Studies (SAMS) and instructor in tactics at West Point. Jim is a veteran of six combat tours in Iraq, Afghanistan, and the Balkans, including serving as Chief of Staff of the Multi-National Security Transition Command – Iraq (MNSTC-I). Since leaving active duty, Jim has led the conduct of research for the Army Research Institute (ARI) and designed, developed, and delivered instruction in leadership, strategic foresight, design, and strategic and operational planning. Dr. Greer holds a Doctorate in Education, with a dissertation on US Army leader self-development. A graduate of the United States Military Academy, he has a Master’s Degree in Education, with a concentration in Psychological Counseling, as well as Master’s Degrees in National Security from the National War College and Operational Planning from the School of Advanced Military Studies.
[Editor’s Note: Mad Scientist Laboratory is pleased to publish the following post by repeat guest blogger Mr. Victor R. Morris, addressing the relationship of Artificial Intelligence (AI), Robotic and Autonomous systems (RAS), and Quantum Information Science (QIS) to Quantum Artificial Intelligence (QAI), and why we should pursue a parallel QAI strategy in order to predict alternative possibilities in a quantum multiverse. Prepare to have your consciousness expanded — Read on! (Note: Some of the embedded links in this post are best accessed using non-DoD networks.)]
The U.S. defense industry routinely analyzes emerging and potentially disruptive technological trends influencing long-term strategic competition. This post describes the greater defense community as public and private sectors responsible for national security and associated interests abroad. Interstate competition has implications for global order and disorder, according to the 2018 National Defense Strategy summary.
The three defense industry trends identified in this post are:
Artificial Intelligence (AI),
Robotic and Autonomous systems (RAS), and
Quantum Information Science (QIS).
According to Paul Scharre‘s preface to Elsa Kania‘s paper on Battlefield Singularity, published by the Center for a New American Security (CNAS), “Artificial intelligence (AI) is fast heating up as a key area of strategic competition.” (N.B., both Mr. Scharre and Ms. Kania are proclaimed Mad Scientists whose works have previously graced this blog site). Furthermore, structured analysis identified interrelated aspects of these trends and the requirement for a multi-disciplinary strategy focused on Quantum Artificial Intelligence (QAI), anticipating the potential impact on global systems.
First, this post argues that AI, QIS, and RAS are components of a greater QAI ecosystem underpinned by the scientific notion of information, discussed in detail later. Information does not measure what is known; rather, it measures the number of possible alternatives for something. Combining AI and quantum computing applications potentially results in QAI, according to a variety of scientists and theorists in the field. Additionally, information is the nucleus or “quanta” of the entire QAI ecosystem. Understanding information is critical to understanding the natural world. Secondly, the post argues that “keeping up with the Joneses” in AI is counterproductive and perpetuates misunderstanding of advancements and implications for the future.
The first section of this post briefly describes AI, Machine Learning (ML), RAS, QIS, and QAI, and their relationships with information. The second section describes theoretical interpretations of reality based on quantum mechanical properties.
Section 1 Overview
AI, sometimes called machine intelligence, includes the machine learning field enabling autonomous or independent functions and activity. QIS and computing are the next evolution of classical computing, with implications for machine learning, reasoning, and autonomous systems behavior. As mentioned above, information is a fundamental consideration for all of these fields and for the ability to perform parallel probabilistic tasks. “Probabilistic” refers to probabilities indirectly associated with randomness.
Artificial Intelligence (AI) and Machine Learning (ML)
AI involves computer systems performing tasks normally requiring human intelligence. In computer science, AI is the study of intelligent agents or autonomous entities perceiving and acting upon their environment. In simpler terms, AI is intelligence exhibited by machines, enabled by machine learning algorithms. Algorithms are rule sets defining sequences of operations. ML is a field of AI and a set of statistical techniques associated with machines performing intellectual, human tasks. ML includes deep learning and is critical to AI because it involves Artificial Neural Networks (ANNs), modeled on the human brain, enabling learning from large quantities of data to improve predictions and data-driven decisions. ANNs are a framework for ML algorithms working together to process complex data sets.
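The "learning from data to improve predictions" idea above can be shrunk to a toy sketch: a single artificial neuron (the basic unit an ANN stacks into layers) learning the logical AND function by gradient descent. This is plain illustrative Python, not any particular ML library.

```python
import math
import random

# One artificial neuron: weighted sum -> sigmoid, trained by gradient
# descent on the logical AND function. A real ANN composes many such
# units into layers; the learning principle is the same.
random.seed(0)
w = [random.random(), random.random()]   # connection weights
b = 0.0                                  # bias term
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]

def predict(x):
    z = w[0] * x[0] + w[1] * x[1] + b
    return 1 / (1 + math.exp(-z))        # sigmoid activation

for _ in range(5000):                    # training iterations
    for x, target in data:
        p = predict(x)
        grad = (p - target) * p * (1 - p)  # chain rule through the sigmoid
        w[0] -= 0.5 * grad * x[0]
        w[1] -= 0.5 * grad * x[1]
        b -= 0.5 * grad

print(round(predict((1, 1))), round(predict((0, 1))))  # → 1 0
```

After training, the neuron's predictions match AND on all four inputs; it has "learned" the rule from the data rather than having it programmed in.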
Robotic and Autonomous Systems (RAS)
Robots are one type of AI entity; others include cyber agents, decision aids, and virtual assistants. Amazon’s Alexa and Apple’s Siri are good examples of AI-enabled virtual assistants using ML to perform tasks. In a military context, RAS are technologies granted autonomy, or a level of independence, to execute tasks in a prescribed environment. RAS examples include both land and air systems, such as explosive ordnance disposal robots and unmanned aerial vehicles, commonly referred to as “drones.” Autonomous behavior is designed by humans through a combination of sensors and advanced computing processes. Advanced computing involves both environmental navigation and software-enabled decision-making. RAS independence is a progressive spectrum, ranging from remote control to full autonomy.
Quantum Information Science (QIS)
According to the September 2018 United States Government’s National Strategic Overview for Quantum Information Science report, “Quantum information science (QIS) applies the best understanding of the sub-atomic world—quantum theory—to generate new knowledge and technologies.” Quantum theory, also called quantum mechanics, describes the smallest finite quantities, or “quanta,” making up the quantum fields composing the universe. QIS includes the quantum computing field using quantum mechanical properties to advance information processing, transmission, and measurement. For example, quantum computation uses the quantum analog of a bit, called a quantum bit, existing in multiple states due to quantum superposition. Superposition allows quantum systems to simultaneously occupy different quantum states. This fundamental principle means qubits are described as a linear combination of 0 and 1 (a composition of basis states), and not solely 0 or 1 as in classical computing, before measurement.
Quantum Artificial Intelligence (QAI)
This section does not attempt to explain AI and QIS intersections in detail. Both areas are so extensive that unifying concepts are difficult to understand. This post sees QAI as a different element of the taxonomy and not a subset of classical AI. “Quantum physics is based on information theory and probability theory,” according to Andreas Wichert, author of Principles of Quantum Artificial Intelligence. He presents both theories in his book, highlighting quantum physics’ relationship to AI through associative memory and Bayesian networks. Associative memory and Bayesian networks are applied later to QAI based on their access to information.
Section 2 Overview
This section outlines interpretations of information and quantum theories and AI intersections. Information is a finite measurement of possible alternatives existing in the multiverse. Quantum computing has the potential for reversible or time-invertible deep learning and associative memory based on quantum entanglement and superposition. Quantum AI has the potential to test the multiverse theory, because QAI networks process, transmit, and measure information across space-time.
Information takes many forms that differ from one another, like natural language, symbols, acoustic speech, and pictures. The scientific notion of information is more precise. Information theory, proposed by Claude E. Shannon, studies the quantification, storage, and communication of information. Once again, information does not measure what is known; it measures the number of possible alternatives for something.
Carlo Rovelli uses a dice example in his book Reality Is Not What It Seems: The Journey to Quantum Gravity to illustrate this point. If a die is thrown, it can land on one of six sides. When we observe it land on a number, we have an amount of information where N = 6, because the possible alternatives are six. Instead of “N” (number of alternatives), scientists measure information in terms of a quantity deemed “S,” after Shannon. Rovelli also states that information is finite in nature, based on quantum mechanical properties. New or “relevant” information cancels out “irrelevant” information in a physical system; therefore, systems can always obtain new information from other systems. *This point is important for later.
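Rovelli's die example maps directly onto Shannon's formula: for N equally likely alternatives, the information gained by observing the outcome is S = log2(N) bits. A quick sketch:

```python
import math

# Shannon information for N equally likely alternatives: S = log2(N) bits.
# A fair die has N = 6 possible outcomes; a coin flip has N = 2.
def information_bits(n_alternatives):
    return math.log2(n_alternatives)

print(round(information_bits(6), 3))  # → 2.585 (one roll of a fair die)
print(information_bits(2))            # → 1.0   (one coin flip = one bit)
```

Note the die yields less than 3 bits even though it has more than 4 outcomes: bits count the alternatives logarithmically, which is why S, not N, is the natural measure.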
Measuring Possible Alternatives
The fundamental unit of classical information is a “bit.” The natural unit of information, or “nat,” is an alternative unit of information or entropy. Information entropy is the average rate at which information is produced by a random source of data. Information entropy can be measured in bits, nats, or decimal digits, depending on the base of the logarithm defining it. Once again, a binary digit, characterized as 0 and 1, represents information in classical computing. A quantum bit, or “qubit,” is the basic unit of quantum information in the quantum world. A qubit can be a coherent superposition of both 0 and 1 eigenstates, according to quantum mechanical properties. A qubit can also hold more information than a classical bit. Lastly, probability amplitudes are complex numbers whose squared magnitudes give the probabilities of a qubit appearing in its basis states.
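The amplitude and entropy-unit ideas in the paragraph above can be sketched numerically; the particular amplitudes and phase below are arbitrary choices for illustration, not drawn from any of the cited sources.

```python
import math

# A qubit |ψ⟩ = α|0⟩ + β|1⟩: the amplitudes α, β are complex numbers, and
# the measurement probabilities are their squared magnitudes, which must
# sum to 1 (normalization). The relative phase of π/4 here is arbitrary.
alpha = 1 / math.sqrt(2)
beta = complex(math.cos(math.pi / 4), math.sin(math.pi / 4)) / math.sqrt(2)

p0, p1 = abs(alpha) ** 2, abs(beta) ** 2
print(round(p0, 3), round(p1, 3), round(p0 + p1, 10))  # → 0.5 0.5 1.0

# The same entropy in different units: 1 bit = ln(2) ≈ 0.693 nats,
# the conversion being just a change of logarithm base.
entropy_bits = -(p0 * math.log2(p0) + p1 * math.log2(p1))
entropy_nats = entropy_bits * math.log(2)
print(round(entropy_bits, 3), round(entropy_nats, 3))  # → 1.0 0.693
```

The phase has no effect on these measurement probabilities, yet it is real physical information the qubit carries, which is one informal sense in which a qubit "holds more" than a classical bit.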
Quantum Machine Learning through Quantum Information
Quantum ANNs potentially enable deep learning from large quantities of qubits. Qubits are information, so they measure possible alternatives. In this specific interpretation, quantum ANNs are like Bayesian networks graphically modelling probabilistic relationships. The quantum nature of these networks expands access to reciprocal or correlated information.
An interpretation of reciprocal information is discussed through quantum mechanical properties and quantum many-worlds, also called “multiverse,” theory in the last part of this post. This specific interpretation is that multiverses are finite because information is. This is loosely based on Stephen Hawking and Thomas Hertog’s April 2018 article, A smooth exit from eternal inflation?, where they state, “eternal inflation does not produce an infinite fractal-like multiverse, but is finite and reasonably smooth.”
Quantum Many Worlds
Quantum computing has the potential to allow reversible or time-invertible deep learning and associative memory, based on quantum entanglement and superposition. Qubits contain entangled relevant and irrelevant (anti-correlated probabilities) information across space-time. This concept ensures a retro-causality loop of finite information exchange. Quantum associative memory is the ability to learn and remember correlations between seemingly unrelated items. This is possible because all “items” are correlated through quantum phenomena. Relevant information in one world or universe (macro possible alternative) is simultaneously irrelevant information in the adjacent world because of quantum states and finite quantity of information in nature. Quantum information cannot be copied according to the no-cloning theorem. Conversely, it cannot be deleted based on a time reversed dual called the “no-deleting theorem.”
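The no-cloning theorem mentioned above has a short standard argument worth sketching; this is the textbook unitarity version, independent of the multiverse framing in this post.

```latex
\text{Suppose a single unitary } U \text{ could copy any state:}\\
U\,(\lvert\psi\rangle \otimes \lvert 0\rangle)
  = \lvert\psi\rangle \otimes \lvert\psi\rangle,
\qquad
U\,(\lvert\phi\rangle \otimes \lvert 0\rangle)
  = \lvert\phi\rangle \otimes \lvert\phi\rangle.\\
\text{Unitary maps preserve inner products, so}\\
\langle\psi\vert\phi\rangle = \langle\psi\vert\phi\rangle^{2}
\;\Longrightarrow\;
\langle\psi\vert\phi\rangle \in \{0, 1\}.
```

Only identical or mutually orthogonal states could satisfy this, so no universal cloner exists; the time-reversed dual of the same argument underlies the no-deleting theorem.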
Information is the quanta of consciousness. It is a measurement of awareness following all possible trajectories through the quantum multiverse ensuring the feedback loop of finite information that is reality.
This specific interpretation is based on Hugh Everett’s relative state or many-worlds interpretation (MWI) and the information reality code concept. MWI states “all possible alternate histories and futures are real, each representing an actual world” or universe. The reality code behaves similarly to classical coding. Coding theory is the application of information theory manifesting efficient and reliable data transmission in a non-deterministic manner (where meaning is relative). Information in a data set is characterized by its Shannon entropy.
Summary of Key Points (You made it!)
• The QAI ecosystem is underpinned by the scientific notion of information
• Information does not measure what is known; it measures the number of possible alternatives for something
• Relevant information cancels out irrelevant information in a physical system, therefore systems can always obtain new information from other systems
• A qubit can be a coherent superposition of both 0 and 1 eigenstates, according to quantum mechanical properties
• Qubits contain entangled relevant and irrelevant information across the multiverse
• MWI states all possible alternate histories and futures are real, each representing an “actual world” or universe
• Multiverses are finite because information is
• Information is the quanta of consciousness and measurement of awareness
One interpretation of AI is that whoever becomes the leader in this sphere will become the ruler of the world. This is one possible alternative for QAI. Another possible alternative is the validation of the many-worlds theory, providing insight into observable world alternate histories and optimized futures because information is available to QAI agent networks. The predictive nature of classical AI to support global superpower decision-making may not happen as planned either. Predictions in the observable world exist in other worlds, so AI predicting the observable future is relative. For example, when a die lands on the number 1 in the observable world, it lands on the other five alternatives in alternate worlds. Additionally, unknown events in the observable world are known elsewhere in the quantum multiverse and vice versa (alternate histories and futures). Physicist David Deutsch, a proponent of the MWI, believes MWI will be testable through quantum computing. Based on this blog’s conjecture, developing a parallel QAI strategy is the first step in preparing for our changing understanding of the world.
If you enjoyed this mind-bending post, please see Mr. Morris’ previous guest blog posts:
[Editor’s Note: Mad Scientist Laboratory is pleased to publish today’s guest blog post by MAJ Vincent Dueñas, addressing how AI can mitigate a human commander’s cognitive biases and enhance his/her (and their staff’s) decision-making, freeing them to do what they do best — command, fight, and win on future battlefields!]
Humans are susceptible to cognitive biases and these biases sometimes result in catastrophic outcomes, particularly in the high stress environment of war-time decision-making. Artificial Intelligence (AI) offers the possibility of mitigating the susceptibility of negative outcomes in the commander’s decision-making process by enhancing the collective Emotional Intelligence (EI) of the commander and his/her staff. AI will continue to become more prevalent in combat and as such, should be integrated in a way that advances the EI capacity of our commanders. An interactive AI that feels like one is communicating with a staff officer, which has human-compatible principles, can support decision-making in high-stakes, time-critical situations with ambiguous or incomplete information.
Mission Command in the Army is the exercise of authority and direction by the commander using mission orders to enable disciplined initiative within the commander’s intent.i It requires an environment of mutual trust and shared understanding between the commander and his subordinates in order to understand, visualize, describe, and direct throughout the decision-making Operations Process and mass the effects of combat power.ii
The mission command philosophy necessitates improved EI. EI is defined as the capacity to be aware of, control, and express one's emotions, and to handle interpersonal relationships judiciously and empathetically, and in war to do so at much quicker speeds in order to seize the initiative.iii The more effective our commanders are at EI, the better they lead, fight, and win using all the tools available.
AI Staff Officer
To conceptualize how AI can enhance decision-making on the battlefields of the future, we must understand that AI today is advancing more quickly in narrow problem solving domains than in those that require broad understanding.iv This means that, for now, humans continue to retain the advantage in broad information assimilation. The advent of machine-learning algorithms that could be applied to autonomous lethal weapons systems has so far resulted in a general predilection towards ensuring humans remain in the decision-making loop with respect to all aspects of warfare.v, vi AI’s near-term niche will continue to advance rapidly in narrow domains and become a more useful interactive assistant capable of analyzing not only the systems it manages, but the very users themselves. AI could be used to provide detailed analysis and aggregated assessments for the commander at the key decision points that require a human-in-the-loop interface.
The battalion is a good example of an organization in which to visualize this framework. A machine-learning software system could be connected to the different staff systems to analyze the data produced by each section as it executes its warfighting functions. This machine-learning software system would also assess the human-in-the-loop decisions against statistical outcomes and aggregate important data to support the commander's assessments. Over time, this EI-based machine-learning software system could rank the quality of the staff officers' judgments. The commander could then weigh the staff officers' assessments against each officer's track record of reliability and the raw data provided by the staff sections' systems. The Bridgewater financial firm employs this very type of human decision-making assessment algorithm in order to assess the "believability" of its employees' judgments before making high-stakes, and sometimes time-critical, international financial decisions.vii Applied to the battalion, such a multi-layered machine-learning system would also include an assessment of the commander's own reliability, to maximize objectivity.
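The believability-weighted aggregation described above can be sketched in a few lines of Python. This is a purely hypothetical illustration: the officer roles, scores, and weighting scheme are assumptions made for the sketch, not a description of Bridgewater's actual algorithm or of any fielded Army system.

```python
from dataclasses import dataclass

@dataclass
class Assessment:
    officer: str
    recommendation: float  # e.g., estimated probability a course of action succeeds
    track_record: float    # 0-1 historical reliability learned from past outcomes

def believability_weighted_estimate(assessments):
    """Aggregate staff recommendations, weighting each by the officer's reliability."""
    total_weight = sum(a.track_record for a in assessments)
    if total_weight == 0:
        raise ValueError("no assessments with a usable track record")
    return sum(a.recommendation * a.track_record for a in assessments) / total_weight

# Illustrative staff inputs (invented values)
staff = [
    Assessment("S2", 0.70, 0.9),  # intelligence officer, strong track record
    Assessment("S3", 0.40, 0.5),  # operations officer, mixed track record
    Assessment("S4", 0.60, 0.7),  # logistics officer
]
print(round(believability_weighted_estimate(staff), 3))  # → 0.595
```

The point of the sketch is that a more reliable officer's recommendation pulls the aggregate estimate further than a less reliable one's, which is the essence of the "believability" weighting described above.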
Observing multiple iterations of human behavioral patterns during simulations and real-world operations would improve the AI's accuracy and enhance the trust between this type of AI system and its users. Commanders' EI skills would be put front and center for scrutiny and could improve drastically, because commanders would consciously know the cognitive-bias shortcomings of their staff, backed by quantifiable evidence, at any given time. This assisted decision-making AI framework would also reinforce the commander's intuition and decisions as it elevates the level of objectivity in decision-making.
The capacity to understand information broadly and conduct unsupervised learning remains the virtue of humans for the foreseeable future.viii The integration of AI into the battlefield should work towards enhancing the EI of the commander since it supports mission command and complements the human advantage in decision-making. Giving the AI the feel of a staff officer implies also providing it with a framework for how it might begin to understand the information it is receiving and the decisions being made by the commander.
Stuart Russell offers a construct of limitations that should be coded into AI in order to make it most useful to humanity and prevent conclusions that result in an AI turning on humanity. These three concepts are: 1) principle of altruism towards the human race (and not itself), 2) maximizing uncertainty by making it follow only human objectives, but not explaining what those are, and 3) making it learn by exposing it to everything and all types of humans.ix
Russell’s principles offer a human-compatible guide for AI to be useful within the human decision-making process, protecting humans from unintended consequences of the AI making decisions on its own. The integration of these principles in battlefield AI systems would provide the best chance of ensuring the AI serves as an assistant to the commander, enhancing his/her EI to make better decisions.
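Russell's second principle, keeping the AI uncertain about human objectives and letting it narrow that uncertainty only by observation, can be illustrated with a toy Bayesian update. Everything here is an assumption invented for the sketch: the candidate objectives, the prior, and the likelihoods are placeholders, not values from Russell's work or any real system.

```python
# Toy sketch: the AI holds a distribution over what the commander's true
# objective might be, rather than a fixed goal of its own.
candidate_objectives = {
    "minimize_casualties": 1 / 3,
    "maximize_tempo": 1 / 3,
    "preserve_supplies": 1 / 3,
}

# Assumed likelihoods of the commander's observed decision under each
# candidate objective (illustrative numbers only).
likelihood = {
    "minimize_casualties": 0.8,
    "maximize_tempo": 0.1,
    "preserve_supplies": 0.1,
}

def bayes_update(prior, likelihood):
    """Posterior over objectives after observing one human decision."""
    unnormalized = {k: prior[k] * likelihood[k] for k in prior}
    total = sum(unnormalized.values())
    return {k: v / total for k, v in unnormalized.items()}

posterior = bayes_update(candidate_objectives, likelihood)
best = max(posterior, key=posterior.get)
print(best, round(posterior[best], 2))  # → minimize_casualties 0.8
```

The design point is that the AI never commits to an objective of its own; each observed human decision merely shifts its beliefs about what the human wants, which is what keeps the human in the loop.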
Making AI Work
The potential opportunities and pitfalls of employing AI in decision-making are abundant. Apart from the obvious danger of this type of system being hacked, the possibility that the AI's machine-learning algorithms harbor biased coding inconsistent with the values of the unit employing them is real.
The commander’s primary goal is to achieve the mission. The future includes AI, and commanders will need to trust and integrate AI assessments into their natural decision-making process and make them part of their intuitive calculus. In this way, they will have ready access to objective analyses of their units’ potential biases, enhancing their own EI, and be able to overcome those biases to accomplish their mission.
MAJ Vincent Dueñas is an Army Foreign Area Officer and has deployed as a cavalry and communications officer. His writing on national security issues, decision-making, and international affairs has been featured in Divergent Options, Small Wars Journal, and The Strategy Bridge. MAJ Dueñas is a member of the Military Writers Guild and a Term Member with the Council on Foreign Relations. The views reflected are his own and do not represent the opinion of the United States Government or any of its agencies.
i United States, Department of the Army. ADRP 5-0: The Operations Process. Headquarters, Dept. of the Army, 2012, pp. 1–1.
iii “Emotional Intelligence | Definition of Emotional Intelligence in English by Oxford Dictionaries.” Oxford Dictionaries | English, Oxford Dictionaries, 2018, en.oxforddictionaries.com/definition/emotional_intelligence.