130. Trouble in Paradise: The Technological Upheaval of Modern Political and Economic Systems

[Editor’s Note:  Mad Scientist Laboratory is pleased to publish the following post by returning guest blogger and proclaimed Mad Scientist Ms. Marie Murphy, addressing how advances in various technologies have the potential to upset the international order and empower individuals and non-state actors.  Read on to learn who will be the winners and losers in this technological upheaval!]

Access to new and advanced technologies has the potential to upset the current power dynamic of the world. From the proliferation of smartphones to commercially available software and hardware, individuals and states that were previously discounted as threats now have the potential to launch sophisticated attacks against powerful international players. Power will no longer remain in the upper echelons of society, where it is primarily held by national governments, multinational corporations, and national news services. These groups are losing their information dominance as individuals, local authorities, and other organizations now have the ability to access and distribute unfiltered information at their fingertips.1

A historical example of technology altering the balance of power is the cassette tape. Ayatollah Khomeini used cassette tape recordings to deliver sermons and direct the Iranian Revolution while exiled in Paris, and the United States observed the USSR's use of cassette tapes to spread communist propaganda.2 A new technology in the hands of empowered individuals and states allowed events to transpire with a speed and effectiveness that would otherwise have been impossible. The adoption of this technology gave actors new agency to direct movements from thousands of miles away, forever shaping the course of history. A more contemporary example is the role of smartphones and social media in the Arab Spring. These new disruptive technologies enabled protests to be organized and videos to be broadcast in real time, eclipsing traditional journalism’s ability to report.3

Near-term Analysis:

Technologically sophisticated international actors, such as the United States and the European Union, will maintain the capacity to manage the growth and use of technology within their own borders without adversely affecting governance. However, the increased availability of these technologies may strain civil/government relations in both developing countries and authoritarian systems.4 Technologies such as smartphones, with their ability to instantly transmit data, may force governments to be accountable for their actions, especially if their abuses of power are recorded and distributed globally by personal devices. At the same time, however, “smart” devices may also be used by governments as instruments of social control, repression, and misinformation.

Technology also affords non-state actors new methods for recruiting and executing operations. Technology-enabled platforms have allowed these groups to network near-instantaneously across borders and around the world in a manner that would have been impossible prior to the advent of the digital age.5 A well-known example is the use of social media platforms by terrorist groups such as al-Qaeda and ISIS for propaganda and recruitment. These groups and others, such as Hezbollah and the political opposition in Venezuela, have deployed drones both for reconnaissance and as lethal weapons.6 The availability of these information-age technologies has enabled these groups to garner more power and control than similar organizations could have done in the past, posing a real threat to major international actors.

Distant Future Analysis:

There is an extremely high chance of future political disruption stemming from technological advancement. Some predict a non-polar power balance emerging. In this scenario, the world is dominated by dozens of technologically capable actors of varying strengths. “Hyperconnected,” developed states such as Sweden, Finland, and Israel may become greater international players and brokers of technologically backed global power. “Partially-connected” nations, today’s developing world, will face multiple challenges and may find new opportunities in the proliferation of technology. Technologically empowered individuals, groups, or neighboring states may gain the ability to question or threaten the legitimacy of an otherwise weak government. At the same time, in these “partially-connected” states, technology will serve to break down social barriers and equalize social discourse among all strata of society. Other predictions suggest the dissolution of national boundaries and the creation of an “interconnected state” composed of different national laws operating without borders in a virtual space.7

Democracy itself is evolving due to technological innovation. Increasing concerns about the roles of privacy, big data, internet security, and artificial intelligence in the digital age raise the following questions: how much does technology influence and control the lives of people in democratic countries, and what effect does this have on politics? Algorithms target internet advertisements based on users’ search histories, drive the collection and sale of personal data, and spread “fake news” that shapes the opinions of millions.8 While these technologies provide convenience in the daily lives of internet-connected citizens, such as recommending items for purchase on Amazon and other platforms, they also erode public trust, a pillar upon which democracy is founded. Democracies must remain vigilant regarding how emerging technologies influence and affect their people and how governments use technology to interact with their citizens.

The changing geopolitical dynamics of the world are inextricably linked with economic power, and increasing economic power is positively correlated with technological advancement. Power is becoming more diffuse as Brazil, Russia, India, China, and South Africa (i.e., the BRICS states), the Philippines, Mexico, Turkey, and others develop stronger economies. States with rising economic power may begin to shun traditional global political and economic institutions in favor of regional institutions and bilateral agreements.9 Many more emerging markets will compete for market share,10 driving up competition and forcing greater innovation and integration to remain relevant.

One of the major factors in the changing economic landscape is the growth of robotics. Today these technologies are largely exclusive to the world’s economic leaders but are likely to proliferate as further technological advancements make them cost-effective for a wider range of industries and companies. The adoption of artificial intelligence will also dictate the future success of businesses in developed and emerging economies. It is important for governments to consider “retraining programs” for workers displaced by the roboticization and AI domination of their career fields.11 The economically dominant countries of the future will be driven by technology and will hold the majority of power in the political arena. These states will harness these technologies to increase their productivity while training their workforces to participate in a technologically aided market.

The Winners and Losers of the Future:

Winners:

  • Countries with stable governments and emerging economies which are able to adapt to the rapid pace of technological innovation without severe political disruption.
  • Current international powers which invest in the development and application of advanced technologies.

Losers:

  • Countries with fragile governments that can be overpowered by technologically armed citizens, neighbors, or non-state actors, as well as authoritarian regimes that use technology as a tool of repression.
  • Traditional international powers that put themselves at risk of losing political and financial leverage by working only to maintain the status quo. Systems that do not adapt will struggle to remain relevant in a world dominated by a greater number of powers that fall into the “winners” category.

Conclusion

Modern power players in the world will have to adapt to the changing role of technology, particularly the influence of technology-empowered individuals. Technology will change how democracies and other political systems operate both domestically and on the world stage. The major international players of today will also have to accept that rising economic powers will gain more influence in the global market as they become more technologically enabled. As power diffuses to states gaining equalizing technology, the current powers that lead international institutions will begin to lose relevance if they do not adapt.

If you enjoyed this post, please also see:

… and Ms. Murphy’s previous posts:

… and crank up Bob Marley and the Wailers’ “Get Up, Stand Up!”

Marie Murphy is a junior at The College of William and Mary in Virginia, studying International Relations and Arabic. She is a regular contributor to the Mad Scientist Laboratory; interned at Headquarters, U.S. Army Training and Doctrine Command (TRADOC) with the Mad Scientist Initiative during the Summer of 2018; and is currently a Research Fellow for William and Mary’s Project on International Peace and Security.


1 Laudicina, Paul A., and Erik R. Peterson. “Divergence, Disruption, and Innovation: Global Trends 2015–2025.” Strategy, A.T. Kearney, www.middle-east.atkearney.com/strategy/featured-article/-/asset_publisher/KwarGm4gaWhz/content/global-trends-2015-2025-divergence-disruption-and-innovation/10192.

2 Schmidt, Eric, and Jared Cohen. “The Digital Disruption.” Foreign Affairs, Foreign Affairs Magazine, 27 Oct. 2010, www.foreignaffairs.com/articles/2010-10-16/digital-disruption.

3 Duffy, Matt J. “Smartphones in the Arab Spring.” Academia.edu – Share Research, 2011, www.academia.edu/1911044/Smartphones_in_the_Arab_Spring.

4 China is a unique case here because it is a major developer both of technology and of counter-technology systems that block the use of certain devices, applications, or programs within its borders. Even so, Chinese citizens find loopholes and other points of access in the system, defying the government.

5 Schmidt, Eric, and Jared Cohen. “The Digital Disruption.” www.foreignaffairs.com/articles/2010-10-16/digital-disruption.

6 “Drone Terrorism Is Now a Reality, and We Need a Plan to Counter the Threat.” International Security: Fragility, Violence and Conflict, World Economic Forum, 20 Aug. 2018, www.weforum.org/agenda/2018/08/drone-terrorism-is-now-a-reality-and-we-need-a-plan-to-counter-the-threat.

7 Schmidt, Eric, and Jared Cohen. “The Digital Disruption.”  www.foreignaffairs.com/articles/2010-10-16/digital-disruption.

8 Unver, Hamid Akin. “Artificial Intelligence, Authoritarianism and the Future of Political Systems.” SSRN, EDAM Research Reports, 2018, 26 Feb. 2019, https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3331635.

9 Laudicina, Paul A, and Erik R Peterson. “Divergence, Disruption, and Innovation: Global Trends 2015–2025.”

10 Stowell, Joshua. “The Emerging Seven Countries Will Hold Increasing Levels of Global Economic Power by 2050.” Global Security Review, 26 Apr. 2018, www.globalsecurityreview.com/will-global-economic-order-2050-look-like/.

11 Laudicina, Paul A, and Erik R Peterson. “Divergence, Disruption, and Innovation: Global Trends 2015–2025.”

121. Emergent Global Trends Impacting on the Future Operational Environment

[Editor’s Note: Regular readers of the Mad Scientist Laboratory are familiar with a number of disruptive trends and their individual and convergent impacts on the Future Operational Environment (OE). In today’s post, we explore three recent publications to expand our understanding of these and additional emergent global trends.  We also solicit your input on any other trends that have the potential to transform the OE and change the character of future warfare.]

“The U.S. Army finds itself at a historical inflection point, where disparate, yet related elements of the Operational Environment (OE) are converging, creating a situation where fast-moving trends across the Diplomatic, Information, Military, and Economic (DIME) spheres are rapidly transforming the nature of all aspects of society and human life – including the character of warfare.” — The Operational Environment and the Changing Character of Future Warfare

Last year, the Mad Scientist Initiative published several products that envisioned these fast-moving trends and how they are transforming the Future OE. These products included our:

• Updated Potential Game Changers information sheet, identifying a host of innovative technologies with the potential to disrupt future warfare during The Era of Accelerated Human Progress (now through 2035) and The Era of Contested Equality (2035 through 2050).

• Black Swans and Pink Flamingos blog post, addressing both Black Swan events (i.e., unknown unknowns) which, though not likely, might have significant impacts on how we think about warfighting and security; and Pink Flamingos, which are the known knowns that are often discussed, but ignored by Leaders trapped by organizational cultures and rigid bureaucratic decision-making structures.

With the advent of 2019, three new predictive publications have both confirmed and expanded the Mad Scientist Initiative’s understanding of emergent trends and technologies:

• Government Accountability Office (GAO) Report to Congressional Committees: National Security: Long-Range Emerging Threats Facing the United States As Identified by Federal Agencies, December 2018

• Deloitte Insights Technology, Media, and Telecommunications Predictions 2019, January 2019

• World Economic Forum (WEF) The Global Risks Report 2019, 14th Edition, January 2019

Commonalities:

These three publications collectively confirmed Mad Scientist’s thoughts regarding the disruptive potential of Artificial Intelligence (AI), Quantum Computing, the Internet of Things (IoT), and Big Data; and individually echoed our concerns regarding Cyber, Additive Manufacturing, Space and Counterspace, Natural Disasters, and the continuing threat of Weapons of Mass Destruction. That said, the real value of these (and other) predictions is in informing us about the trends we might have missed, and expanding our understanding of those that we were already tracking.

New Insights:

From the GAO Report we learned:

Megacorporations as adversaries. Our list of potential adversaries must expand to include “large companies that have the financial resources and a power base to exert influence on par with or exceeding non-state actors.” Think super-empowered individual(s) enhanced further by the wealth, reach, influence, and cover afforded by a transnational corporation.

The rich population is shrinking, the poor population is not. “Working-age populations are shrinking in wealthy countries and in China and Russia, and are growing in developing, poorer countries…. [with] the potential to increase economic, employment, urbanization and welfare pressures, and spur migration.”

Climate change, environment, and health issues will demand attention. “More extreme weather, water and soil stress, and food insecurity will disrupt societies. Sea-level rise, ocean acidification, glacial melt, and pollution will change living patterns. Tensions over climate change will grow.”

Internal and International Migration. “Governments in megacities … may not have the capacity to provide adequate resources and infrastructure…. Mass migration events may occur and threaten regional stability, undermine governments, and strain U.S. military and civilian responses.”

Infectious Diseases. “New and evolving diseases from the natural environment—exacerbated by changes in climate, the movement of people into cities, and global trade and travel—may become a pandemic. Drug-resistant forms of diseases previously considered treatable could become widespread again…. Diminishing permafrost could expand habitats for pathogens that cause disease.”

From Deloitte Insights Predictions we learned:

Intuitive AI development services may not require specialized knowledge. “Baidu recently released an AI training platform called EZDL that requires no coding experience and works even with small data training sets…. Cloud providers have developed pre-built machine learning APIs [application-programming interfaces] for technologies such as natural language processing that customers can access instead of building their own.”
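
To make this concrete, here is a minimal sketch of what calling a pre-built machine learning API, rather than building and training one’s own model, can look like in practice. It assumes Python and Google’s Cloud Natural Language client library as a stand-in for the class of services Deloitte describes; the provider, library, and function names are our illustrative choice, not ones drawn from the report.

```python
# A hedged sketch: sentiment analysis via a pre-built cloud NLP API,
# with no model building or training data required on our side.
# Assumes the google-cloud-language package (pip install google-cloud-language)
# and cloud credentials configured in the environment.
from google.cloud import language_v1

def sentiment_score(text: str) -> float:
    """Return a document sentiment score in [-1.0, 1.0]."""
    client = language_v1.LanguageServiceClient()
    document = language_v1.Document(
        content=text,
        type_=language_v1.Document.Type.PLAIN_TEXT,
    )
    response = client.analyze_sentiment(request={"document": document})
    return response.document_sentiment.score

if __name__ == "__main__":
    print(sentiment_score("The new training platform requires no coding experience."))
```

The point of the sketch is the division of labor: all of the modeling expertise lives behind the API, which is precisely what lowers the barrier to entry that Deloitte highlights.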

Cryptocurrency growth may have driven Chinese semiconductor innovation. Chinese chipmakers’ Application-Specific Integrated Circuits (ASICs), initially designed to meet domestic bitcoin mining demands, may also meet China’s growing demand for AI chipsets vice Graphics Processing Units (GPUs). “Not only could these activities spark more domestic innovation… China just might be positioned to have a larger impact on the next generation of cognitive technologies.”

Quantum-safe security was important yesterday. “Malicious adversaries could store classically encrypted information today to decrypt in the future using a QC [Quantum Computer], in a gambit known as a ‘harvest-and-decrypt’ attack.”
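
A minimal sketch of the logic behind this gambit follows, assuming Python and the third-party cryptography package. Nothing in it is quantum; the point is that classically encrypted traffic captured today retains its value for an adversary willing to store it until decryption becomes feasible.

```python
# Hedged sketch of "harvest-and-decrypt": the adversary's only job today is
# to record and store ciphertext. Assumes pip install cryptography.
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa, padding

oaep = padding.OAEP(
    mgf=padding.MGF1(algorithm=hashes.SHA256()),
    algorithm=hashes.SHA256(),
    label=None,
)

# A victim encrypts with classical RSA-2048, secure against today's computers.
private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
ciphertext = private_key.public_key().encrypt(b"sensitive by 2019 standards", oaep)

# The adversary harvests the traffic now and writes it to cheap long-term storage.
harvested = ciphertext

# Years later, a quantum computer running Shor's algorithm could factor the
# public modulus and recover the private key. As a stand-in for that step,
# we simply reuse the private key to show the stored ciphertext still opens.
recovered = private_key.decrypt(harvested, oaep)
print(recovered)
```

This is why “quantum-safe” migration matters before large quantum computers exist: any data whose confidentiality must outlive the transition is already at risk.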

From the WEF Report we learned:

This is an increasingly anxious, unhappy, and lonely world. “Anger is increasing and empathy appears to be in short supply…. Depression and anxiety disorders increased [globally] between 1990 and 2013…. It is not difficult to imagine such emotional and psychological disruptions having serious diplomatic—and perhaps even military—consequences.”

The risk from biological pathogens is increasing. “Outbreaks since 2000 have been described as a ‘rollcall of near-miss catastrophes’” and they are on the rise. “Biological weapons still have attractions for malicious non-state actors…. it [is] difficult to reliably attribute a biological attack… the direct effects—fatalities and injuries—would be compounded by potentially grave societal and political disruption.”

Use of weather manipulation tools stokes geopolitical tensions. “Could be used to disrupt … agriculture or military planning… if states decided unilaterally to use more radical geo-engineering technologies, it could trigger dramatic climatic disruptions.”

Food supply disruption emerges as a tool as geo-economic tensions intensify. “Worsening trade wars might spill over into high-stakes threats to disrupt food or agricultural supplies…. Could lead to disruptions of domestic and cross-border flows of food. At the extreme, state or non-state actors could target the crops of an adversary state… with a clandestine biological attack.”

Taps run dry on Water Day Zero. “Population growth, migration, industrialization, climate change, drought, groundwater depletion, weak infrastructure, and poor urban planning” all stress megacities’ ability to meet burgeoning demands, further exacerbating existing urban / rural divides, and could potentially lead to conflicts over remaining supply sources.

What Are We Missing?

The aforementioned trends are by no means comprehensive. Mad Scientist invites our readers to assist us in identifying any additional emergent global trends that could potentially transform the OE and change the character of future warfare. Please share them with us and our readers by scrolling down to the bottom of this post to the “Leave a Reply” section, entering them in the Comment Box with an accompanying rationale, and then selecting the “Post Comment” button. Thank you in advance for all of your submissions!

If you enjoyed reading these assessments about future trends, please also see the Statement for the Record:  Worldwide Threat Assessment of the US Intelligence Community, 29 January 2019, from the U.S. Senate Select Committee on Intelligence.

106. Man-Machine Rules

[Editor’s Note:  Mad Scientist Laboratory is pleased to present the first of two guest blog posts by Dr. Nir Buras.  In today’s post, he makes the compelling case for the establishment of man-machine rules.  Given the vast technological leaps we’ve made during the past two centuries (with associated societal disruptions), and the potential game changing technological innovations predicted through the middle of this century, we would do well to consider Dr. Buras’ recommended list of nine rules — developed for applicability to all technologies, from humankind’s first Paleolithic hand axe to the future’s much predicted, but long-awaited strong Artificial Intelligence (AI).]

Two hundred years of massive collateral impacts by technology have brought to the forefront of society’s consciousness the idea that some sort of rules for man-machine interaction are necessary, similar to the rules in place for gun safety, nuclear power, and biological agents. But where their physical effects are clear to see, the power of computing is veiled in virtuality and anthropomorphization. It appears harmless, if not familiar, and it often has a virtuous appearance.

Avid mathematician Ada Augusta Lovelace is often called the first computer programmer

Computing originated in the punched cards of Jacquard looms early in the 19th century. Today it carries the promise of a cloud of electrons from which we make our Emperor’s New Clothes. As far back as 1842, the brilliant mathematician Ada Augusta, Countess of Lovelace (1815-1852), foresaw the potential of computers. A protégé and associate of Charles Babbage (1791-1871), conceptual originator of the programmable digital computer, she realized the “almost incalculable” ultimate potential of his Analytical Engine. She also recognized that, as in all extensions of human power or knowledge, “collateral influences” occur.1

AI presents us with such “collateral influences.”2  The question is not whether machine systems can mimic human abilities and nature, but when. Will the world become dependent on ungoverned algorithms?3  Should there be limits to mankind’s connection to machines? As concerns mount, well-meaning politicians, government officials, and some in the field are trying to forge ethical guidelines to address the collateral challenges of data use, robotics, and AI.4

A Hippocratic Oath for AI?

This cover of Asimov’s I, Robot illustrates the story “Runaround”, the first to list all Three Laws of Robotics.

Asimov’s Three Laws of Robotics are merely a literary device that drives his storylines.5 In the real world, Apple, Amazon, Facebook, Google, DeepMind, IBM, and Microsoft founded www.partnershiponai.org6 to ensure “… the safety and trustworthiness of AI technologies, the fairness and transparency of systems.” Data scientists from tech companies, governments, and nonprofits gathered to draft a voluntary digital charter for their profession.7 Oren Etzioni, CEO of the Allen Institute for AI and a professor at the University of Washington’s Computer Science Department, proposed a Hippocratic Oath for AI.

But such codes are composed of hard-to-enforce terms and vague goals, such as using AI “responsibly and ethically, with the aim of reducing bias and discrimination.” They pay lip service to privacy and human priority over machines. They appear to sugarcoat a culture which passes the buck to the lowliest Soldier.8

We know that good intentions are inadequate when enforcing confidentiality. Well-meant but unenforceable ideas don’t meet business standards.  It is unlikely that techies and their bosses, caught up in the magic of coding, will shepherd society through the challenges of the petabyte AI world.9  Vague principles, underwriting a non-binding code, cannot counter the cynical drive for profit.10

Indeed, in an area that lacks authorities or legislation to enforce rules, the Association for Computing Machinery (ACM) is itself backpedaling from its own Code of Ethics and Professional Conduct. Its document only weakly defines notions of “public good” and “prioritizing the least advantaged.”11 Microsoft’s President Brad Smith admits that his company wouldn’t expect customers of its services to meet even these standards.

In the wake of the Cambridge Analytica scandal, it is clear that coders are not morally superior to other people and that voluntary, unenforceable Codes and Oaths are inadequate.12  Programming and algorithms clearly reflect ethical, philosophical, and moral positions.13  It is false to assume that the so-called “openness” trait of programmers reflects a broad mindfulness.  There is nothing heroic about “disruption for disruption’s sake” or hiding behind “black box computing.”14  The future cannot be left up to an adolescent-centric culture in an economic system that rests on greed.15  The society that adopts “Electronic personhood” deserves it.

Machines are Machines, People are People

After 200 years of the technology tail wagging the humanity dog, it is apparent now that we are replaying history – and don’t know it. Most human cultures have been intensively engaged with technology since before the Iron Age 3,000 years ago. We have been keenly aware of technology’s collateral effects mostly since the Industrial Revolution, but have not yet created general rules for how we want machines to impact individuals and society. The blurring of reality and virtuality that AI brings to the table might prompt us to do so.

Distinctions between the real and the virtual must be maintained if the behavior of even the most sophisticated computing machines and robots is to be captured by legal systems. Nothing in the virtual world should be considered real, any more than we believe that the hallucinations of a drunk or drugged person are real.

The simplest way to maintain the distinction is remembering that the real IS, and the virtual ISN’T, and that virtual mimesis is produced by machines. Lovelace reminded us that machines are just machines. While in a dark, distant future, giving machines personhood might lead to the collapse of humanity, Harari’s Homo Deus warns us that AI, robotics, and automation are quickly bringing the economic value of humans to zero.16

From the start of civilization, tools and machines have been used to reduce human drudge labor and increase production efficiency. But while tools and machines obviate the physical aspects of human work in producing goods and processing information, they in no way affect the truth of humans as sentient and emotional living beings, nor the value of transactions among them.

Microsoft’s Tay AI Chatter Bot

The man-machine line is further blurred by our anthropomorphizing of machinery, computing, and programming. We speak of machines in terms of human traits and make programming analogous to human behavior. But there is nothing amusing about GIGO experiments like MIT’s psychotic bot Norman or Microsoft’s fascist Tay.17 Technologists who fall into the trap of believing that AI systems can make decisions are like children playing with dolls, marveling that “their dolly is speaking.”

Machines don’t make decisions. Humans do. They may accept suggestions made by machines, and when they do, they are responsible for the decisions made. People are and must be held accountable, especially those hiding behind machines. The Holocaust taught us that one can never say, “I was just following orders.”

Nothing less than enforceable operational rules is required for any technical activity, including programming. It is especially important for tech companies, since evidence suggests that they take ethical questions to heart only under direct threats to their balance sheets.18

When virtuality offers experiences that humans perceive as real, the outcomes are the responsibility of the creators and distributors, no less than tobacco companies selling cigarettes, or pharmaceutical companies and cartels selling addictive drugs. Individuals do not have the right to risk the well-being of others to satisfy their need to comply with clichés such as “innovation” and “disruption.”

Nuclear, chemical, biological, gun, aviation, machine, and automobile safety rules do not rely on human nature. They are based on technical rules and procedures. They are enforceable and moral responsibility is typically carried by the hierarchies of their organizations.19

As we master artificial intelligence, human intelligence must take charge.20 The highest values known to mankind remain human life and the qualities and quantities necessary for the best individual life experience.21 For the transactions and transformations in which technology assists, we need simple operational rules to regulate the actions and manners of individuals. Moving the focus to human interactions empowers individuals and society.

Man-Machine Rules

Man-Machine rules should address any tool or machine ever made or to be made. They would be equally applicable to any technology of any period, from the first flaked stone, to the ultimate predictive “emotion machines.” They would be adjudicated by common law.22

1. All material transformations and human transactions are to be conducted by humans.

2. Humans may directly employ hand/desktop/workstation devices in the above.

3. At all times, an individual human is responsible for the activity of any machine or program.

4. Responsibility for errors, omissions, negligence, mischief, or criminal-like activity is shared by every person in the organizational hierarchical chain, from the lowliest coder or operator, to the CEO of the organization, and its last shareholder.

5. Any person can shut off any machine at any time.

6. All computing is visible to anyone [No Black Box].

7. Personal Data are things. They belong to the individual who owns them, and any use of them by a third-party requires permission and compensation.

8. Technology must age before common use, until an Appropriate Technology is selected.

9. Disputes must be adjudicated according to Common Law.

Machines are here to help and advise humans, not replace them, and humans may exhibit a spectrum of responses to them. Some may ignore a robot’s advice and put others at risk. Some may follow recommendations to the point of becoming a zombie. But either way, Man-Machine Rules are based on and meant to support free, individual human choices.

Man-Machine Rules can help organize dialog around questions such as how to secure personal data. Do we need hardcopy and analog formats? How ethical are chips embedded in people and in their belongings? What degrees of personal freedom and personal risk are acceptable, and what controls are conceivable? Will consumer rights and government organizations audit algorithms?23 Would equipment sabbaticals be enacted for societal and economic balance?

The idea that we can fix the tech world through a voluntary ethical code emergent from itself paradoxically expects that the people who created the problems will fix them.24 The question is not whether the focus should shift to human interactions, leaving more humans in touch with their destiny, but at what cost. If not now, when? If not by us, by whom?

If you enjoyed reading this post, please also see:

Prediction Machines: The Simple Economics of Artificial Intelligence

Artificial Intelligence (AI) Trends

Making the Future More Personal: The Oft-Forgotten Human Driver in Future’s Analysis

Nir Buras is a PhD architect and planner with over 30 years of in-depth experience in strategic planning, architecture, and transportation design, as well as teaching and lecturing. His planning, design and construction experience includes East Side Access at Grand Central Terminal, New York; International Terminal D, Dallas-Fort-Worth; the Washington DC Dulles Metro line; work on the US Capitol and the Senate and House Office Buildings in Washington. Projects he has worked on have been published in the New York Times, the Washington Post, local newspapers, and trade magazines. Buras, whose original degree was in architecture and town planning, learned his first lesson in urbanism while planning military bases in the Negev Desert in Israel. Engaged in numerous projects since then, Buras has watched first-hand how urban planning impacted architecture. After the last decade of applying in practice the classical method that Buras learned in post-doctoral studies, his book, The Art of Classic Planning (Harvard University Press, 2019), presents the urban design and planning method of Classic Planning as a path forward for homeostatic, durable urbanism.


1 Lovelace, Ada Augusta, Countess, Sketch of The Analytical Engine Invented by Charles Babbage by L. F. Menabrea of Turin, Officer of the Military Engineers, With notes upon the Memoir by the Translator, Bibliothèque Universelle de Genève, October, 1842, No. 82.

2 Oliveira, Arlindo, in Pereira, Vitor, Hippocratic Oath for Algorithms and Artificial Intelligence, Medium.com (website), 23 August 2018, https://medium.com/predict/hippocratic-oath-for-algorithms-and-artificial-intelligence-5836e14fb540; Middleton, Chris, Make AI developers sign Hippocratic Oath, urges ethics report: Industry backs RSA/YouGov report urging the development of ethical robotics and AI, computing.co.uk (website), 22 September 2017, https://www.computing.co.uk/ctg/news/3017891/make-ai-developers-sign-a-hippocratic-oath-urges-ethics-report; N.A., Do AI programmers need a Hippocratic oath?, Techhq.com (website), 15 August 2018, https://techhq.com/2018/08/do-ai-programmers-need-a-hippocratic-oath/

3 Oliveira, 2018; Dellot, Benedict, A Hippocratic Oath for AI Developers? It May Only Be a Matter of Time, Thersa.org (website), 13 February 2017, https://www.thersa.org/discover/publications-and-articles/rsa-blogs/2017/02/a-hippocratic-oath-for-ai-developers-it-may-only-be-a-matter-of-time; See also: Clifford, Catherine, Expert says graduates in A.I. should take oath: ‘I must not play at God nor let my technology do so’, Cnbc.com (website), 14 March 2018, https://www.cnbc.com/2018/03/14/allen-institute-ceo-says-a-i-graduates-should-take-oath.html; Johnson, Khari, AI Weekly: For the sake of us all, AI practitioners need a Hippocratic oath, Venturebeat.com (website), 23 March 2018, https://venturebeat.com/2018/03/23/ai-weekly-for-the-sake-of-us-all-ai-practitioners-need-a-hippocratic-oath/; Work, Robert O., former deputy secretary of defense, in Metz, Cade, Pentagon Wants Silicon Valley’s Help on A.I., New York Times, 15 March 2018.

4 Schotz, Mai, Should Data Scientists Adhere To A Hippocratic Oath?, Wired.com (website), 8 February 2018, https://www.wired.com/story/should-data-scientists-adhere-to-a-hippocratic-oath/; du Preez, Derek, MPs debate ‘hippocratic oath’ for those working with AI, Government.diginomica.com (website), 19 January 2018, https://government.diginomica.com/2018/01/19/mps-debate-hippocratic-oath-working-ai/

5 1. A robot may not injure a human being or, through inaction, allow a human being to come to harm. 2. A robot must obey the orders given it by human beings except where such orders would conflict with the First Law. 3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws. Asimov, Isaac, Runaround, in I, Robot, The Isaac Asimov Collection ed., Doubleday, New York City, p. 40.

6 Middleton, 2017.

7 Etzioni, Oren, A Hippocratic Oath for artificial intelligence practitioners, Techcrunch.com (website), 14 March 2018. https://techcrunch.com/2018/03/14/a-hippocratic-oath-for-artificial-intelligence-practitioners/?platform=hootsuite

8 Do AI programmers need a Hippocratic oath?, Techhq, 2018.

9 Goodsmith, Dave, quoted in Schotz, 2018.

10 Schotz, 2018.

11 Do AI programmers need a Hippocratic oath?, Techhq, 2018. Wheeler, Schaun, in Schotz, 2018.

12 Gnambs, T., What makes a computer wiz? Linking personality traits and programming aptitude, Journal of Research in Personality, 58, 2015, pp. 31-34.

13 Oliveira, 2018.

14 Jarrett, Christian, The surprising truth about which personality traits do and don’t correlate with computer programming skills, Digest.bps.org.uk (website), British Psychological Society, 26 October 2015, https://digest.bps.org.uk/2015/10/26/the-surprising-truth-about-which-personality-traits-do-and-dont-correlate-with-computer-programming-skills/; Johnson, 2018.

15 Do AI programmers need a Hippocratic oath?, Techhq, 2018.

16 Harari, Yuval N. Homo Deus: A Brief History of Tomorrow. London: Harvill Secker, 2015.

17 That Norman suffered from extended exposure to the darkest corners of Reddit, and represents a case study on the dangers of artificial intelligence gone wrong when biased data is used in machine learning algorithms, is not an excuse. Microsoft’s AI Twitter bot Tay had to be deleted after it started making sexual references and declarations such as “Hitler did nothing wrong.”

18 Schotz, 2018.

19 See the example of Dr. Kerstin Dautenhahn, Research Professor of Artificial Intelligence in the School of Computer Science at the University of Hertfordshire, who claims no responsibility in determining the application of the work she creates. She might as well be feeding children shards of glass saying, “It is their choice to eat it or not.” In Middleton, 2017. The principle is that the risk of an unfavorable outcome lies with an individual as well as the entire chain of command, direction, and or ownership of their organization, including shareholders of public companies and citizens of states. Everybody has responsibility the moment they engage in anything that could affect others. Regulatory “sandboxes” for AI developer experiments – equivalent to pathogen or nuclear labs – should have the same types of controls and restrictions. Dellot, 2017.

20 Oliveira, 2018.

21 Sentience and sensibilities of other beings is recognized here, but not addressed.

22 The proposed rules may be appended to the International Covenant on Economic, Social and Cultural Rights (ICESCR, 1976), part of the International Bill of Human Rights, which include the Universal Declaration of Human Rights (UDHR) and the International Covenant on Civil and Political Rights (ICCPR). International Covenant on Economic, Social and Cultural Rights, www.refworld.org.; EISIL International Covenant on Economic, Social and Cultural Rights, www.eisil.org; UN Treaty Collection: International Covenant on Economic, Social and Cultural Rights, UN. 3 January 1976; Fact Sheet No.2 (Rev.1), The International Bill of Human Rights, UN OHCHR. June 1996.

23 Dellot, 2017.

24 Schotz, 2018.

80. “The Queue”

[Editor’s Note:  Mad Scientist Laboratory is pleased to present our August edition of “The Queue” – a monthly post listing the most compelling articles, books, podcasts, videos, and/or movies that the U.S. Army’s Training and Doctrine Command (TRADOC) Mad Scientist Initiative has come across during the past month. In this anthology, we address how each of these works either informs or challenges our understanding of the Future Operational Environment. We hope that you will add “The Queue” to your essential reading, listening, or watching each month!]

Gartner Hype Cycle / Source:  Nicole Saraco Loddo, Gartner

1. “5 Trends Emerge in the Gartner Hype Cycle for Emerging Technologies,” by Kasey Panetta, Gartner, 16 August 2018.

Gartner’s annual hype cycle highlights many of the technologies and trends explored by the Mad Scientist program over the last two years. This year’s cycle added 17 new technologies and organized them into five emerging trends: 1) Democratized Artificial Intelligence (AI), 2) Digitalized Eco-Systems, 3) Do-It-Yourself Bio-Hacking, 4) Transparently Immersive Experiences, and 5) Ubiquitous Infrastructure. Of note, many of these technologies have a 5–10 year horizon until the Plateau of Productivity. If this time horizon is accurate, we believe these emerging technologies and five trends will have a significant role in defining the Character of Future War in 2035 and should have modernization implications for the Army of 2028. For additional information on the disruptive technologies identified between now and 2035, see the Era of Accelerated Human Progress portion of our Potential Game Changers broadsheet.

[Gartner disclaimer:  Gartner does not endorse any vendor, product or service depicted in its research publications, and does not advise technology users to select only those vendors with the highest ratings or other designation. Gartner research publications consist of the opinions of Gartner’s research organization and should not be construed as statements of fact. Gartner disclaims all warranties, expressed or implied, with respect to this research, including any warranties of merchantability or fitness for a particular purpose.]

Artificial Intelligence by GLAS-8 / Source: Flickr

2. “Should Evil AI Research Be Published? Five Experts Weigh In,” by Dan Robitzski, Futurism, 27 August 2018.

The following rhetorical (for now) question was posed to the “AI Race and Societal Impacts” panel during last month’s The Joint Multi-Conference on Human-Level Artificial Intelligence in Prague, The Czech Republic:

“Let’s say you’re an AI scientist, and you’ve found the holy grail of your field — you figured out how to build an artificial general intelligence (AGI). That’s a truly intelligent computer that could pass as human in terms of cognitive ability or emotional intelligence. AGI would be creative and find links between disparate ideas — things no computer can do today.

That’s great, right? Except for one big catch: your AGI system is evil or could only be used for malicious purposes.

So, now a conundrum. Do you publish your white paper and tell the world exactly how to create this unrelenting force of evil? Do you file a patent so that no one else (except for you) could bring such an algorithm into existence? Or do you sit on your research, protecting the world from your creation but also passing up on the astronomical paycheck that would surely arrive in the wake of such a discovery?”

The panel’s responses ranged from controlling — “Don’t publish it!” and treat it like a grenade, “one would not hand it to a small child, but maybe a trained soldier could be trusted with it”; to the altruistic — “publish [it]… immediately” and “there is no evil technology, but there are people who would misuse it. If that AGI algorithm was shared with the world, people might be able to find ways to use it for good”; to the entrepreneurial – “sell the evil AGI to [me]. That way, they wouldn’t have to hold onto the ethical burden of such a powerful and scary AI — instead, you could just pass it to [me and I will] take it from there.”

While no consensus of opinion was arrived at, the panel discussion served as a useful exercise in illustrating how AI differs from previous eras’ game changing technologies. Unlike Nuclear, Biological, and Chemical weapons, no internationally agreed to and implemented control protocols can be applied to AI, as there are no analogous gas centrifuges, fissile materials, or triggering mechanisms; no restricted access pathogens; no proscribed precursor chemicals to control. Rather, when AGI is ultimately achieved, it is likely to be composed of nothing more than diffuse code; a digital will-o’-the-wisp that can permeate across the global net to other nations, non-state actors, and super-empowered individuals, with the potential to facilitate unprecedentedly disruptive Information Operation (IO) campaigns and Virtual Warfare, revolutionizing human affairs. The West would be best served by emulating the PRC with its Military-Civil Fusion Centers, integrating the resources of the State with the innovation of industry to achieve its own AGI solutions soonest. The decisive edge will “accrue to the side with more autonomous decision-action concurrency on the Hyperactive Battlefield” — the best defense against a nefarious AGI is a friendly AGI!

Scales Sword Of Justice / Source: https://www.maxpixel.net/

3. “Can Justice be blind when it comes to machine learning? Researchers present findings at ICML 2018,” The Alan Turing Institute, 11 July 2018.

Can justice really be blind? The International Conference on Machine Learning (ICML) was held in Stockholm, Sweden, in July 2018. This conference explored the notion of machine learning fairness and proposed new methods to help regulators provide better oversight and to help practitioners develop fair and privacy-preserving data analyses. Like ethical discussions taking place within the DoD, there are rising legal concerns that commercial machine learning systems (e.g., those associated with car insurance pricing) might illegally or unfairly discriminate against certain subgroups of the population. Machine learning will play an important role in assisting battlefield decisions (e.g., the targeting cycle and commander’s decisions) – especially lethal decisions. There is a common misperception that machines will make unbiased and fair decisions, divorced from human bias. Yet the issue of machine learning bias is significant because humans, with their host of cognitive biases, code the very programming that will enable machines to learn and make decisions. Making the best, unbiased decisions will become critical in AI-assisted warfighting. We must ensure that machine-based learning outputs are verified and understood to preclude the inadvertent introduction of human biases. Read the full report here.
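
As a concrete illustration of what auditing for subgroup discrimination can look like, here is a minimal sketch of one textbook fairness check, the demographic parity gap, assuming Python and NumPy. It is a generic example of the genre, not one of the specific methods presented at ICML 2018.

```python
# Hedged sketch: measure how far a model's favorable-outcome rate diverges
# across population subgroups (demographic parity). Assumes NumPy only.
import numpy as np

def demographic_parity_gap(predictions: np.ndarray, groups: np.ndarray) -> float:
    """Largest difference in favorable-outcome rates between any two subgroups."""
    rates = [predictions[groups == g].mean() for g in np.unique(groups)]
    return float(max(rates) - min(rates))

# Toy data: 1 = favorable decision (e.g., a low insurance quote), by subgroup.
preds  = np.array([1, 1, 0, 1, 0, 0, 1, 0])
groups = np.array(["A", "A", "A", "A", "B", "B", "B", "B"])

print(f"Demographic parity gap: {demographic_parity_gap(preds, groups):.2f}")
# Group A is favored 75% of the time, group B only 25% -> gap of 0.50.
```

A gap near zero is necessary but not sufficient: demographic parity can conflict with other fairness criteria such as equalized odds, which is one reason regulator-grade oversight methods remain an active research area.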

Robot PNG / Source: pngimg.com

4. “Uptight robots that suddenly beg to stay alive are less likely to be switched off by humans,” by Katyanna Quach, The Register, 3 August 2018.

In a study published in PLOS ONE, researchers found that a robot’s personality affected a human’s decision-making. In the study, participants were asked to dialogue with a robot that was either sociable (chatty) or functional (focused). At the end of the study, the researchers let the participants know that they could switch the robot off if they wanted to. At that moment, the robot would make an impassioned plea to the participant to resist shutting it down. The participants’ actions were then recorded. Unexpectedly, a larger number of participants resisted shutting down the functional robots after they made their plea than the sociable ones. This is significant. It shows, beyond the unexpected result, that decision-making is affected by robotic personality. Humans will form an emotional connection to artificial entities, despite knowing they are robotic, if those entities mimic and emulate human behavior. If the Army believes its Soldiers will be accompanied and augmented heavily by robots in the near future, it must also understand that human-robot interaction will not be the same as human-computer interaction. The U.S. Army must explore how to attain the appropriate level of trust between Soldiers and their robotic teammates on the future battlefield. Robots must be treated more like partners than tools, with trust, cooperation, and even empathy displayed.

IoT / Source: Pixabay

5. “Spending on Internet of Things May More Than Double to Over Half a Trillion Dollars,” by Aaron Pressman, Fortune, 8 August 2018.

While the advent of the Internet brought computing and communication ever deeper into global households, the smartphone revolution brought about constant personal interconnectivity. Today and into the future, not only are humans connected to the global commons via their smart devices, but a multitude of devices, vehicles, and accessories are being integrated into the Internet of Things (IoT). We have previously addressed the IoT as a game changing technology. The IoT is composed of trillions of internet-linked items, creating opportunities and vulnerabilities. There has been explosive growth in low Size Weight and Power (SWaP) and connected devices (Internet of Battlefield Things), especially for sensor applications (situational awareness).

Large companies are expected to quickly grow their spending on Internet-connected devices (i.e., appliances, home devices [such as Google Home, Alexa, etc.], various sensors) to approximately $520 billion. This is a massive investment into what will likely become the Internet of Everything (IoE). While growth is focused on known devices, it is likely that it will expand to embedded and wearable sensors – think clothing, accessories, and even sensors and communication devices embedded within the human body. This has two major implications for the Future Operational Environment (FOE):

– The U.S. military is already struggling with the balance between collecting, organizing, and using critical data; allowing service members to use personal devices; and maintaining operational and network security and integrity (see the recent banning of personal fitness trackers). A segment of IoT sensors and devices may be necessary or critical to the function and operation of many U.S. Armed Forces platforms and weapons systems, raising critical questions about supply chain security, system vulnerabilities, and reliance on micro sensors and microelectronics.

– The U.S. Army of the future will likely have to operate in and around dense urban environments, where IoT devices and sensors will be abundant, degrading blue force’s ability to sense the battlefield and “see” the enemy, thereby creating a veritable needle in a stack of needles.

6. “Battlefield Internet: A Plan for Securing Cyberspace,” by Michèle Flournoy and Michael Sulmeyer, Foreign Affairs, September/October 2018. Review submitted by Ms. Marie Murphy.

With the possibility of a “cyber Pearl Harbor” becoming increasingly imminent, intelligence officials warn of the rising danger of cyber attacks. Effects of these attacks have already been felt around the world. They have the power to break the trust people have in institutions, companies, and governments as they act in the undefined gray zone between peace and all-out war. The military implications are quite clear: cyber attacks can cripple the military’s ability to function from a command and control aspect to intelligence communications and materiel and personnel networks. Besides the military and government, private companies’ use of the internet must be accounted for when discussing cyber security. Some companies have felt the effects of cyber attacks, while others are reluctant to invest in cyber protection measures. In this way, civilians become affected by acts of cyber warfare, and attacks on a country may not be directed at the opposing military, but the civilian population of a state, as in the case of power and utility outages seen in eastern Europe. Any actor with access to the internet can inflict damage, and anyone connected to the internet is vulnerable to attack, so public-private cooperation is necessary to most effectively combat cyber threats.

If you read, watch, or listen to something this month that you think has the potential to inform or challenge our understanding of the Future Operational Environment, please forward it (along with a brief description of why its potential ramifications are noteworthy to the greater Mad Scientist Community of Action) to our attention at:  usarmy.jble.tradoc.mbx.army-mad-scientist@mail.mil — we may select it for inclusion in our next edition of “The Queue”!

52. Potential Game Changers

The Mad Scientist Initiative brings together cutting-edge leaders and thinkers from the technology industry, research laboratories, academia, and across the military and Government to explore the impact of potentially disruptive technologies. Much like Johannes Gutenberg’s moveable type (illustrated above), these transformational game changers have the potential to impact how we live, create, think, and prosper. Understanding their individual and convergent impacts is essential to continued battlefield dominance in the Future Operational Environment. In accordance with The Operational Environment and the Changing Character of Future Warfare, we have divided this continuum into two distinct timeframes:

The Era of Accelerated Human Progress (Now through 2035):
The period where our adversaries can take advantage of new technologies, new doctrine, and revised strategic concepts to effectively challenge U.S. military forces across multiple domains. Game changers during this era include:

• Robotics: Forty plus countries develop military robots with some level of autonomy. Impact on society, employment.
Vulnerable: To Cyber/Electromagnetic (EM) disruption, battery life, ethics without man in the loop.
Formats: Unmanned/Autonomous; ground/air vehicles/subsurface/sea systems. Nano-weapons.
Examples: (Air) Hunter/killer Unmanned Aerial Vehicle (UAV) swarms; (Ground) Russian Uran: Recon, ATGMs, SAMs.

• Artificial Intelligence: Human-Agent Teaming, where humans and intelligent systems work together to achieve either a physical or mental task. The human and the intelligent system will trade-off cognitive and physical loads in a collaborative fashion.

• Swarms/Semi Autonomous: Massed, coordinated, fast, collaborative, small, stand-off. Overwhelm target systems. Mass or disaggregate.

• Internet of Things (IoT): Trillions of internet linked items create opportunities and vulnerabilities. Explosive growth in low Size Weight and Power (SWaP) connected devices (Internet of Battlefield Things), especially for sensor applications (situational awareness). Greater than 100 devices per human. Significant end device processing (sensor analytics, sensor to shooter, supply chain management).
Vulnerable: To Cyber/EM/Power disruption. Privacy concerns regarding location and tracking.
Sensor to shooter: Accelerate kill chain, data processing, and decision-making.

• Space: Over 50 nations operate in space, which is increasingly congested and difficult to monitor, endangering Positioning, Navigation, and Timing (PNT).

GPS Jamming/Spoofing: Increasingly sophisticated, used successfully in Ukraine.
Anti-Satellite: China has tested two direct-ascent anti-satellite missiles.

The Era of Contested Equality (2035 through 2050):
The period marked by significant breakthroughs in technology and convergences in terms of capabilities, which lead to significant changes in the character of warfare. During this period, traditional aspects of warfare undergo dramatic, almost revolutionary changes which at the end of this timeframe may even challenge the very nature of warfare itself. Game changers during this era include:

• Hyper Velocity Weapons:
Rail Guns (Electrodynamic Kinetic Energy Weapons): Electromagnetic projectile launchers. High velocity/energy and speed (Mach 5 or higher). Not powered by explosives.
No Propellant: Easier to store and handle.
Lower Cost Projectiles: Potentially. Extreme G-force requires sturdy payloads.
Limiting factors: Power. Significant IR signature. Materials science.
Hyper Glide Vehicles: Less susceptible to anti-ballistic missile countermeasures.

• Directed Energy Weapons: Signature not visible without technology, must dwell on target. Power requirements currently problematic.
Potential: Tunable, lethal, and non-lethal.
Laser: Directed energy damages intended target. Targets: Counter Aircraft, UAS, Missiles, Projectiles, Sensors, Swarms.
Radio Frequency (RF): Attack targets across the frequency spectrum. Targets: Not just RF; Microwave weapons “cook targets,” people, electronics.

• Synthetic Biology: Engineering / modification of biological entities
Increased Crop Yield: Potential to reduce food scarcity.
Weaponization: Potential for micro-targeting; seek-and-destroy microbes that can target DNA. Potentially accessible to super-empowered individuals.
Medical Advances: Enhance soldier survivability.
Genetic Modification: Disease resistant, potentially designer babies and super athletes/soldiers. Synthetic DNA stores digital data. Data can be used for micro-targeting.
CRISPR: Genome editing.

• Information Environment: Use IoT and sensors to harness the flow of information for situational understanding and decision-making advantage.

In envisioning Future Operational Environment possibilities, the Mad Scientist Initiative employs a number of techniques. We have found Crowdsourcing (i.e., the gathering of ideas, thoughts, and concepts from a wide variety of interested individuals) to be a particularly effective technique, as it assists us in diversifying thought and challenging conventional assumptions. To that end, we have published our latest, 2-page compendium of Potential Game Changers here — we would like to hear your feedback regarding them. Please let us know your thoughts / observations by posting them in this blog post’s Comment box (found below, in the Leave a Reply section). Alternatively, you can also submit them to us via email at: usarmy.jble.tradoc.mbx.army-mad-scientist@mail.mil. Thank you in advance for your contributions!

50. Four Elements for Future Innovation

(Editor’s Note: Mad Scientist Laboratory is pleased to present a new post by returning guest blogger Dr. Richard Nabors addressing the four key practices of innovation. Dr. Nabors’ previous guest posts discussed how integrated sensor systems will provide Future Soldiers with the requisite situational awareness to fight and win in increasingly complex and advanced battlespaces, and how Augmented and Mixed Reality are the critical elements required for these integrated sensor systems to become truly operational and support Soldiers’ needs in complex environments.)


For the U.S. military to maintain its overmatch capabilities, innovation is an absolute necessity. As noted in The Operational Environment and the Changing Character of Future Warfare, our adversaries will continue to aggressively pursue rapid innovation in key technologies in order to challenge U.S. forces across multiple domains. Because it is so vital, U.S. innovation cannot be left solely to serendipitous discovery.

The Army has successfully generated innovative programs and transitioned them from the research community into military use. In the process, it has identified four key practices that can be used in the future development of innovative programs. These practices – identifying the need, the vision, the expertise, and the resources – are essential in preparing for warfare in the Future Operational Environment. The recently completed Third Generation Forward Looking Infrared (3rd Gen FLIR) program provides a contemporary use case showing how each of these practices is key to the success of future innovations.


1. Identifying the NEED:
To increase the speed, precision, and accuracy of platform lethality, while at the same time increasing mission effectiveness and warfighter safety and survivability.

As the U.S. Army Training and Doctrine Command (TRADOC) noted in its Advanced Engagement Battlespace assessment, future Advanced Engagements will be…
• compressed in time, as the speed of weapon delivery and their associated effects accelerate enormously;
• extended in space, in many cases to a global extent, via precision long-range strike and interconnectedness, particularly in the information environment;
• far more lethal, by virtue of ubiquitous sensors, proliferated precision, high kinetic energy weapons, and advanced area munitions;
• routinely interconnected – and contested – across the multiple domains of air, land, sea, space, and cyber; and
• interactive across the multiple dimensions of conflict, not only across every domain in the physical dimension, but also the cognitive dimension of information operations, and even the moral dimension of belief and values.

Identifying the NEED within the context of these future Advanced Engagement characteristics is critical to the success of future innovations.

The first-generation FLIR systems provided a limited ability to detect objects on the battlefield at night. They were large, slow, and produced low-resolution, short-range images. The need was for greater speed, precision, and range in the targeting process to unlock the full potential of infrared imaging. Third-generation FLIR uses multiband infrared imaging sensors combined with multiple fields of view, integrated with computer software to automatically enhance images in real time. Sensors can be used across multiple platforms and missions, allowing equipment to be optimized for battlefield conditions, greatly enhancing mission effectiveness and survivability, and providing significant cost savings.


2. Identifying the VISION:
To look beyond the need and what is possible to what could be possible.

As we look forward into the Future Operational Environment, we must address those revolutionary technologies that, when developed and fielded, will provide a decisive edge over adversaries not similarly equipped. These potential Game Changers include:
• Laser and Radio Frequency Weapons – Scalable lethal and non-lethal directed energy weapons can counter aircraft, UAS, missiles, projectiles, sensors, and swarms.
• Swarms – Leverage autonomy, robotics, and artificial intelligence to generate “global behavior with local rules” for multiple entities – either homogeneous or heterogeneous teams (a minimal flocking sketch follows this list).
• Rail Guns and Enhanced Directed Kinetic Energy Weapons (EDKEW) – Non-explosive electromagnetic projectile launchers provide high velocity/high energy weapons.
• Energetics – Provide increased accuracy and muzzle energy.
• Synthetic Biology – Engineering and modification of biological entities has potential for weaponization.
• Internet of Things – Linked internet “things” create both opportunity and vulnerability; the great benefits already being realized in developing U.S. systems also create vulnerabilities.
• Power – Future effectiveness depends on renewable sources and reduced consumption. Small nuclear reactors are potentially a cost-effective source of stable power.
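As a toy illustration of “global behavior with local rules,” the sketch below gives each agent a single local rule: steer toward the average position of neighbors within a fixed radius. Every parameter is an arbitrary assumption, yet the swarm-level behavior (cohesion) emerges without any central controller:

```python
# Minimal "global behavior with local rules" sketch (1-D for brevity):
# each agent steers toward the average position of neighbors within a
# fixed radius. If the neighbor graph stays connected, the swarm as a
# whole coheres. All parameters are arbitrary, for illustration only.
import random

def step(positions, neighbor_radius=2.0, gain=0.1):
    new_positions = []
    for p in positions:
        neighbors = [q for q in positions if abs(q - p) < neighbor_radius]
        center = sum(neighbors) / len(neighbors)   # includes the agent itself
        new_positions.append(p + gain * (center - p))
    return new_positions

swarm = [random.uniform(0.0, 10.0) for _ in range(20)]
for _ in range(200):
    swarm = step(swarm)
print(max(swarm) - min(swarm))  # spread shrinks as the swarm coheres
```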

Understanding these Future Operational Environment Game Changers is central to identifying the VISION and looking beyond the need to what could be possible.

The 3rd Gen FLIR program struggled early in its development to identify the requirements necessary to sustain a successful program. Without the user community’s understanding of a vision of what could be possible, requirements were based on the perceived limitations of what technology could provide. To overcome this, the research community developed a comprehensive strategy for educational outreach to the Army’s requirement developers, military officers, and industry on the full potential of what 3rd Gen FLIR could achieve. This campaign highlighted not only the recognized need, but also a vision for what was possible, and served as the catalyst to bring the entire community together.


3. Identifying the EXPERTISE:
To gather expertise from all possible sources into a comprehensive solution.

Human creativity is the most transformative force in the world; people compound the rate of innovation and technology development. This expertise is fueling the convergence of technologies that is already leading to revolutionary achievements with respect to sensing, data acquisition and retrieval, and computer processing hardware.

Identifying the EXPERTISE leads to the exponential convergence and innovation that will afford a strategic advantage to those who recognize and leverage it.

The expertise required to achieve 3rd Gen FLIR success came from the integration of more than 16 significant research and development projects across multiple organizations: Small Business Innovation Research programs; applied research funding, partnering in-house expertise with external communities; Manufacturing Technology (ManTech) initiatives, working with manufacturers to develop the technology and long-term manufacturing capabilities; and advanced technology development funding with traditional large defense contractors. The talented workforce of the Army research community strategically aligned these individual activities and worked with them to provide a comprehensive, interconnected final solution.


4. Identifying the RESOURCES:
To consistently invest in innovative technology by partnering with others to create multiple funding sources.

The 2017 National Security Strategy introduced the National Security Innovation Base as a critical component of its vision of American security. In order to meet the challenges of the Future Operational Environment, the Department of Defense and other agencies must establish strategic partnerships with U.S. companies to help align private sector Research and Development (R&D) resources to priority national security applications in order to nurture innovation.

The development of 3rd Gen FLIR took many years of appropriate, consistent investment in innovation and technology breakthroughs. Obtaining the support of industry and leveraging its internal R&D investments required the Army to build trust in the overall program. By creating partnerships with others, such as the U.S. Army Communications-Electronics Research, Development and Engineering Center (CERDEC) and ManTech, 3rd Gen FLIR was able to integrate multiple funding sources to ensure a secure resource foundation.




CONCLUSION
The successful 3rd Gen FLIR program is a prototype for implementing an innovative program, transitioning good ideas into actual capabilities. It exemplifies how identifying the need, the vision, the expertise, and the resources can create an environment where innovation thrives, equipping warriors with the best technology in the world. As the Army looks to increase its exploration of innovative technology development for the future, past successes like this one can serve as models to build on moving forward.

See our Prototype Warfare post to learn more about other contemporary innovation successes that are helping the U.S. maintain its competitive advantage and win in an increasingly contested Operational Environment.

Dr. Richard Nabors is Associate Director for Strategic Planning and Deputy Director, Operations Division, U.S. Army Research, Development and Engineering Command (RDECOM) Communications-Electronics Research, Development and Engineering Center (CERDEC), Night Vision and Electronic Sensors Directorate.