74. Mad Scientist Learning in 2050 Conference

Mad Scientist Laboratory is pleased to announce that Headquarters, U.S. Army Training and Doctrine Command (TRADOC) is co-sponsoring the Mad Scientist Learning in 2050 Conference with Georgetown University’s Center for Security Studies this week (Wednesday and Thursday, 8-9 August 2018) in Washington, DC.

Future learning techniques and technologies are critical to the Army’s operations in the 21st century against adversaries in rapidly evolving battlespaces. The ability to effectively respond to a changing Operational Environment (OE) with fleeting windows of opportunity is paramount, and Leaders must act quickly to adjust to different OEs and more advanced and lethal technologies. Learning technologies must enable Soldiers to learn, think, and adapt using innovative synthetic environments to accelerate learning and attain expertise more quickly. Looking to 2050, learning enablers will become far more mobile and on-demand.

Looking at Learning in 2050, topics of interest include, but are not limited to: Virtual, Augmented, and Mixed Realities (VR/AR/MR); interactive, autonomous, accelerated, and augmented learning technologies; gamification; skills needed for Soldiers and Leaders in 2050; synthetic training environments; virtual mentors; and intelligent artificial tutors. Advanced learning capabilities present the opportunity for Soldiers and Leaders to prepare for operations and operate in multiple domains while improving current cognitive load limitations.

Plan to join us virtually at the conference as leading scientists, innovators, and scholars from academia, industry, and government gather to discuss:

1) How will emerging technologies improve learning or augment intelligence in professional military education, at home station, while deployed, and on the battlefield?

2) How can the Army accelerate learning to improve Soldier and unit agility in rapidly changing OEs?

3) What new skills will Soldiers and Leaders require to fight and win in 2050?

Get ready…

– Read our Learning in 2050 Call for Ideas finalists’ submissions here, graciously hosted by our colleagues at Small Wars Journal.

– Review the following blog posts: First Salvo on “Learning in 2050” – Continuity and Change and Keeping the Edge.

– Starting Tuesday, 7 August 2018, see the conference agenda’s list of presentations and the associated world-class speakers’ biographies here.

and Go!

Join us at the conference on-line here via live-streaming audio and video, beginning at 0840 EDT on Wednesday, 08 Aug 2018; submit your questions to each of the presenters via the moderated interactive chat room; and tag your comments @TRADOC on Twitter with #Learningin2050.

See you all there!

 

50. Four Elements for Future Innovation

(Editor’s Note: Mad Scientist Laboratory is pleased to present a new post by returning guest blogger Dr. Richard Nabors addressing the four key practices of innovation. Dr. Nabors’ previous guest posts discussed how integrated sensor systems will provide Future Soldiers with the requisite situational awareness to fight and win in increasingly complex and advanced battlespaces, and how Augmented and Mixed Reality are the critical elements required for these integrated sensor systems to become truly operational and support Soldiers’ needs in complex environments.)


For the U.S. military to maintain its overmatch capabilities, innovation is an absolute necessity. As noted in The Operational Environment and the Changing Character of Future Warfare, our adversaries will continue to aggressively pursue rapid innovation in key technologies in order to challenge U.S. forces across multiple domains. Given this imperative, U.S. innovation cannot be left solely to serendipitous discovery.

The Army has successfully generated innovative programs and transitioned them from the research community into military use. In the process, it has identified four key practices that can be used in the future development of innovative programs. These practices – identifying the need, the vision, the expertise, and the resources – are essential in preparing for warfare in the Future Operational Environment. The recently completed Third Generation Forward Looking Infrared (3rd Gen FLIR) program provides a contemporary use case for how each of these practices is key to the success of future innovations.


1. Identifying the NEED:
To increase the speed, precision, and accuracy of platform lethality, while at the same time increasing mission effectiveness and warfighter safety and survivability.

As the U.S. Army Training and Doctrine Command (TRADOC) noted in its Advanced Engagement Battlespace assessment, future Advanced Engagements will be…
• compressed in time, as the speed of weapon delivery and their associated effects accelerate enormously;
• extended in space, in many cases to a global extent, via precision long-range strike and interconnectedness, particularly in the information environment;
• far more lethal, by virtue of ubiquitous sensors, proliferated precision, high kinetic energy weapons, and advanced area munitions;
• routinely interconnected – and contested – across the multiple domains of air, land, sea, space, and cyber; and
• interactive across the multiple dimensions of conflict, not only across every domain in the physical dimension, but also the cognitive dimension of information operations, and even the moral dimension of belief and values.

Identifying the NEED within the context of these future Advanced Engagement characteristics is critical to the success of future innovations.

The first-generation FLIR systems provided only a limited ability to detect objects on the battlefield at night. They were large, slow, and produced low-resolution, short-range images. The need was for greater speed, precision, and range in the targeting process to unlock the full potential of infrared imaging. Third Generation FLIR uses multiband infrared imaging sensors with multiple fields of view, integrated with computer software that automatically enhances images in real time. Sensors can be used across multiple platforms and missions, allowing equipment to be optimized for battlefield conditions, greatly enhancing mission effectiveness and survivability, and providing significant cost savings.
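To make the idea of real-time, software-driven enhancement concrete, the toy Python sketch below fuses two notional infrared bands and applies adaptive contrast enhancement. It is an illustrative stand-in only: the band weights, frame sizes, and processing steps are assumptions for demonstration, not the actual 3rd Gen FLIR processing chain.

```python
import cv2
import numpy as np

def fuse_and_enhance(mwir_frame: np.ndarray, lwir_frame: np.ndarray,
                     mwir_weight: float = 0.5) -> np.ndarray:
    """Blend two IR bands, then boost local contrast (illustrative only)."""
    # Normalize each band to 8 bits so the frames can be blended directly.
    mwir = cv2.normalize(mwir_frame, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)
    lwir = cv2.normalize(lwir_frame, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)

    # Simple weighted fusion of the two bands.
    fused = cv2.addWeighted(mwir, mwir_weight, lwir, 1.0 - mwir_weight, 0)

    # Contrast-limited adaptive histogram equalization sharpens local detail,
    # standing in here for "automatic enhancement" of the fused image.
    clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
    return clahe.apply(fused)

if __name__ == "__main__":
    # Synthetic 16-bit frames stand in for live sensor output.
    rng = np.random.default_rng(0)
    mwir = rng.integers(0, 2**16, size=(480, 640), dtype=np.uint16)
    lwir = rng.integers(0, 2**16, size=(480, 640), dtype=np.uint16)
    print(fuse_and_enhance(mwir, lwir).shape)
```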


2. Identifying the VISION:
To look beyond the need and what is possible to what could be possible.

As we look forward into the Future Operational Environment, we must address those revolutionary technologies that, when developed and fielded, will provide a decisive edge over adversaries not similarly equipped. These potential Game Changers include:
• Laser and Radio Frequency Weapons – Scalable lethal and non-lethal directed energy weapons can counter aircraft, UAS, missiles, projectiles, sensors, and swarms.
• Swarms – Leverage autonomy, robotics, and artificial intelligence to generate “global behavior with local rules” for multiple entities – either homogeneous or heterogeneous teams.
• Rail Guns and Enhanced Directed Kinetic Energy Weapons (EDKEW) – Non-explosive electromagnetic projectile launchers provide high velocity/high energy weapons.
• Energetics – Provide increased accuracy and muzzle energy.
• Synthetic Biology – Engineering and modification of biological entities has potential for weaponization.
• Internet of Things – Linked internet “things” create both opportunity and vulnerability; the great benefits already realized in developing U.S. systems also create vulnerabilities.
• Power – Future effectiveness depends on renewable sources and reduced consumption. Small nuclear reactors are potentially a cost-effective source of stable power.

Understanding these Future Operational Environment Game Changers is central to identifying the VISION and looking beyond the need to what could be possible.

The 3rd Gen FLIR program struggled early in its development to identify the requirements necessary to sustain a successful program. Without the user community’s understanding of a vision of what could be possible, requirements were based on the perceived limitations of what technology could provide. To overcome this, the research community developed a comprehensive strategy for educational outreach to the Army’s requirement developers, military officers, and industry on the full potential of what 3rd Gen FLIR could achieve. This campaign highlighted not only the recognized need, but also a vision for what was possible, and served as the catalyst to bring the entire community together.


3. Identifying the EXPERTISE:
To gather expertise from all possible sources into a comprehensive solution.

Human creativity is the most transformative force in the world; people compound the rate of innovation and technology development. This expertise is fueling the convergence of technologies that is already leading to revolutionary achievements with respect to sensing, data acquisition and retrieval, and computer processing hardware.

Identifying the EXPERTISE leads to the exponential convergence and innovation that will afford strategic advantage to those who recognize and leverage them.

The expertise required to achieve 3rd Gen FLIR success came from the integration of more than 16 significant research and development projects from multiple organizations: Small Business Innovation Research programs; applied research funding, partnering in-house expertise with external communities; Manufacturing Technology (ManTech) initiatives, working with manufacturers to develop the technology and long-term manufacturing capabilities; and advanced technology development funding with traditional large defense contractors. The talented workforce of the Army research community strategically aligned these individual activities and worked with them to provide a comprehensive, interconnected final solution.


4. Identifying the RESOURCES:
To consistently invest in innovative technology by partnering with others to create multiple funding sources.

The 2017 National Security Strategy introduced the National Security Innovation Base as a critical component of its vision of American security. In order to meet the challenges of the Future Operational Environment, the Department of Defense and other agencies must establish strategic partnerships with U.S. companies to help align private sector Research and Development (R&D) resources to priority national security applications in order to nurture innovation.

The development of 3rd Gen FLIR took many years of appropriate, consistent investments into innovations and technology breakthroughs. Obtaining the support of industry and leveraging their internal R&D investments required the Army to build trust in the overall program. By creating partnerships with others, such as the U.S. Army Communications-Electronics Research, Development and Engineering Center (CERDEC) and ManTech, 3rd Gen FLIR was able to integrate multiple funding sources to ensure a secure resource foundation.




CONCLUSION
The successful 3rd Gen FLIR program is a model for how an innovative program can transition good ideas into actual capabilities. It exemplifies how identifying the need, the vision, the expertise, and the resources can create an environment where innovation thrives, equipping warriors with the best technology in the world. As the Army looks to increase its exploration of innovative technology development for the future, these past successes can serve as models to build on moving forward.

See our Prototype Warfare post to learn more about other contemporary innovation successes that are helping the U.S. maintain its competitive advantage and win in an increasingly contested Operational Environment.

Dr. Richard Nabors is Associate Director for Strategic Planning and Deputy Director, Operations Division, U.S. Army Research, Development and Engineering Command (RDECOM) Communications-Electronics Research, Development and Engineering Center (CERDEC), Night Vision and Electronic Sensors Directorate.

48. Warfare at the Speed of Thought

(Editor’s Note: Mad Scientist Laboratory is pleased to present the second guest blog post by Dr. Richard Nabors, Associate Director for Strategic Planning and Deputy Director, Operations Division, U.S. Army Research, Development and Engineering Command (RDECOM) Communications-Electronics Research, Development and Engineering Center (CERDEC), addressing how Augmented and Mixed Reality are the critical elements required for integrated sensor systems to become truly operational and support Soldiers’ needs in complex environments.

Dr. Nabors’ previous guest post addressed how the proliferation of sensors, integrated via the Internet of Battlefield Things [IoBT], will provide Future Soldiers with the requisite situational awareness to fight and win in increasingly complex and advanced battlespaces.)

Speed has always been, and will remain, a critical component in assuring military dominance. Historically, the military has sought to increase the speed of its jets, ships, tanks, and missiles. However, one of the greatest leaps is still to come: the ability to significantly increase the speed of the decision-making process of the individual at the small-unit level.

To maximize individual and small unit initiative to think and act flexibly, Soldiers must receive as much relevant information as possible, as quickly as possible. Integrated sensor technologies can provide situational awareness by collecting and sorting real-time data and sending a fusion of information to the point of need, but that information must be processed quickly in order to be operationally effective. Augmented Reality (AR) and Mixed Reality (MR) are two of the most promising solutions to this challenge facing the military and will eventually make it possible for Soldiers to instantaneously respond to an actively changing environment.

AR and MR function in real-time, bringing the elements of the digital world into a Soldier’s perceived real world, resulting in optimal, timely, and relevant decisions and actions. AR and MR allow for the overlay of information and sensor data into the physical space in a way that is intuitive, serves the point of need, and requires minimal training to interpret. AR and MR will enable the U.S. military to survive in complex environments by decentralizing decision-making from mission command and placing substantial capabilities in Soldiers’ hands in a manner that does not overwhelm them with information.

On a Soldier’s display, AR can render useful battlefield data in the form of camera imaging and virtual maps, aiding a Soldier’s navigation and battlefield perspective. Special indicators can mark people and various objects to warn of potential dangers.
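As a rough illustration of how such an indicator might be placed on a display, the hypothetical sketch below projects a detection expressed in the wearer’s head frame onto a 2D screen with a standard pinhole camera model. The coordinates, intrinsics, and display size are invented for the example; no fielded AR system is implied.

```python
import numpy as np

def project_to_display(point_head: np.ndarray, fx: float, fy: float,
                       cx: float, cy: float):
    """Return pixel coordinates for a 3D point (x right, y down, z forward)."""
    x, y, z = point_head
    if z <= 0:
        return None  # detection is behind the wearer; nothing to draw
    u = fx * x / z + cx  # standard pinhole projection
    v = fy * y / z + cy
    return int(round(u)), int(round(v))

if __name__ == "__main__":
    # Hypothetical 1280x720 display; detection 40 m ahead and 5 m to the right.
    detection = np.array([5.0, -1.0, 40.0])
    pixel = project_to_display(detection, fx=1000.0, fy=1000.0, cx=640.0, cy=360.0)
    print("draw threat marker at pixel:", pixel)
```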
Soldier-borne, palm-size reconnaissance copters with sensors and video can be directed and tasked instantaneously on the battlefield. Information can be gathered by unattended ground sensors and transmitted to a command center, with AR and MR serving as a networked communication system between military leaders and the individual Soldier. Used in this way, AR and MR increase Soldier safety and lethality.

In the near-term, the Army Research and Development (R&D) community is investing in the following areas:


• Reliable position-tracking devices for head-worn sensors that self-calibrate for head orientation (a toy sketch of this kind of self-correction follows this list).


• Ultralight, ultrabright, ultra-transparent display eyewear with wide field of view.


• Three-dimensional viewers with battlefield terrain visualization, incorporating real-time data from unmanned aerial vehicles, etc.
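As a toy example of the head-orientation tracking item above, the sketch below fuses a gyroscope pitch rate with an accelerometer-derived pitch in a simple complementary filter. The axis convention, sample values, and blend factor are assumptions for illustration, not drawn from any Army program.

```python
import math

def complementary_pitch(prev_pitch: float, gyro_rate: float,
                        accel_x: float, accel_z: float,
                        dt: float, alpha: float = 0.98) -> float:
    """Blend integrated gyro pitch (fast, drifts) with accel pitch (noisy, drift-free)."""
    gyro_pitch = prev_pitch + gyro_rate * dt       # integrate the angular rate
    accel_pitch = math.atan2(accel_x, accel_z)     # gravity-referenced estimate
    return alpha * gyro_pitch + (1.0 - alpha) * accel_pitch

if __name__ == "__main__":
    pitch = 0.0
    # Simulated 100 Hz samples of a slow nose-down rotation.
    for _ in range(100):
        pitch = complementary_pitch(pitch, gyro_rate=-0.05,
                                    accel_x=-0.05, accel_z=0.998, dt=0.01)
    print(f"estimated pitch after 1 s: {math.degrees(pitch):.2f} degrees")
```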




In the mid-term, R&D activities are focusing on:

• Manned vehicles with sensors and processing capabilities for moving autonomously, tasked for Soldier protection.

• Robotic assets, tele-operated, semi-autonomous, or autonomous and imbued with intelligence, with limbs that can keep pace with Soldiers and act as teammates.

• Robotic systems that contain multiple sensors that respond to environmental factors affecting the mission, or have self-deploying camouflage capabilities that stay deployed while executing maneuvers.

• Enhanced reconnaissance through deep-penetration mapping of building layouts, cyber activity, and subterranean infrastructure.

Once AR and MR prototypes and systems have seen widespread use, the far-term focus will be on automation that could track and react to a Soldier’s changing situation by tailoring the augmentation the Soldier receives and by coordinating across the unit.

In addition, AR and MR will revolutionize training, empowering Soldiers to train as they fight. Soldiers will be able to use real-time sensor data from unmanned aerial vehicles to visualize battlefield terrain with geographic awareness of roads, buildings, and other structures before conducting their missions. They will be able to rehearse courses of action and analyze them before execution to improve situational awareness. AR and MR are increasingly valuable aids to tactical training in preparation for combat in complex and congested environments.

AR and MR are the critical elements required for integrated sensor systems to become truly operational and support Soldiers’ needs in complex environments. Solving the challenge of how and where to use AR and MR will enable the military to get full value from its investments in complex integrated sensor systems.

For more information on how the convergence of technologies will enhance Soldiers on future battlefields, see:

– The discussion on advanced decision-making in An Advanced Engagement Battlespace: Tactical, Operational and Strategic Implications for the Future Operational Environment, published by our colleagues at Small Wars Journal.

– Dr. James Canton’s presentation from the Mad Scientist Robotics, Artificial Intelligence, & Autonomy Conference at Georgia Tech Research Institute last March.

– Dr. Rob Smith’s Mad Scientist Speaker Series presentation on Operationalizing Big Data, where he addresses the applicability of AR to sports and games training as an analogy to combat training (noting “Serious sport is war minus the shooting” — George Orwell).

Dr. Richard Nabors is Associate Director for Strategic Planning, US Army CERDEC Night Vision and Electronic Sensors Directorate.

41. The Technological Information Landscape: Realities on the Horizon

(Editor’s Note: Mad Scientist Laboratory is pleased to present the following guest blog post by Dr. Lydia Kostopoulos, addressing the future technological information landscape and the tantalizing possible realities it may provide us by 2050.)

The history of technology and its contemporary developments is not a story about technology; it is a story about people, politics, and culture. Politics encouraged the development of military technologies that have had tremendous value for civilian use. Technologies that were too far ahead of their cultural times were left behind. As the saying goes, ‘necessity is the mother of invention,’ and many technological advances have been thanks to the perseverance of people determined to solve a problem that affected their life, or that of their loved ones and community. Ultimately, technology starts with people, ideas come from people, and the perception of reality is a human endeavor as well.

The ‘reality’ related technologies that are part of the current and emerging information landscape have the potential to alter the perception of reality, form new digital communities and allegiances, mobilize people, and create reality dissonance. These realities also contribute to the evolving ways that information is consumed, managed, and distributed. There are five components:




1. Real World: Pre-internet real, touch-feel-and-smell world.






2. Digital Reality 1.0: There are many existing digital realities that people can immerse themselves in, including gaming, social media, and worlds such as Second Life. Things that happen on these digital platforms can affect the real world and vice versa.

3. Digital Reality 2.0: The Mixed Reality (MR) world of Virtual Reality (VR) and Augmented Reality (AR). These technologies are still in their early stages; however, they show tremendous potential for receiving and perceiving information, as well as experiencing narratives through synthetic or captured moments.

Virtual Reality allows the user to step into a “virtual” reality, which can be an entirely synthetic, created digital environment, or a suspended moment of an actual real-world environment. The synthetic environment could be modeled after the real world, a fantasy, or a bit of both. Most virtual realities do not fully cross the uncanny valley, but it is only a matter of time. Suspended moments of actual real-world environments involve 360-degree cameras that capture a video moment in time; these already exist, and the degree to which the VR user feels teleported to that geographical and temporal moment will, for the most part, depend on the quality of the video and the sound. This VR experience can also be modified, edited, and amended just as regular videos are edited today. This, coupled with technologies that authentically replicate voice (e.g., Adobe VoCo) and technologies that can change faces in videos, creates open-ended possibilities for ‘fake’ authentic videos and soundbites that can be embedded.

Augmented Reality allows the user to interact with a digital layer superimposed on their physical real world. The technology is still in its early stages, but when it reaches its full potential, it is expected to disrupt and transform the way we communicate, work, and interact with our world. Some say the combination of voice command, artificial intelligence, and AR will make screens a thing of the past. Google is experimenting with its new app Just a Line, which allows users to play with their augmented environment and create digital graffiti in their physical space. While this is an experiment, the potential for geographic AR experiences, messages (overt or covert), and storytelling is immense.

4. Brain Computer Interface (BCI): Also called Brain Machine Interface (BMI). BCI has the potential to create another reality when the brain is seamlessly connected to the internet. This may also include connection to artificial intelligence and other brains. This technology is currently being developed, and the space for ‘minimally invasive’ BCI has exploded. Should it work as intended, the user would, in theory, communicate directly with the internet through thought, and the lines would blur between the user’s own memory and knowledge and the augmented intelligence their brain accessed in real time through BCI. In this sense, the user would also be able to communicate with others through thought, using BCI as the medium. The sharing of information, ideas, memories, and emotions through this medium would create a new way of receiving, creating, and transmitting information, as well as a new reality experience. However, for those with a sinister mind, this technology could also be used as a method for implanting ideas into others’ minds and subconscious. For an in-depth explanation of one company’s efforts to make BCI a reality, see Tim Urban’s post “Neuralink and the Brain’s Magical Future”.

5. Whole Brain Emulation (WBE): Brings a very new dimension to the information landscape. It is still very much in the early stages; however, if successful, it would create a virtual, immortal, sentient existence that would live and interact with the other realities. It is still unclear whether the uploaded mind would be sentient, how it would interact with its new world (the cloud), and what implications it would have for those who know or knew the person. As the technology is still new, many avenues for brain uploading are being explored, including uploading while a person is alive and uploading after a person dies. Ultimately, a ‘copy’ of the mind would be made and the computer would run a simulation model of the uploaded brain, which is also expected to have a conscious mind of its own. This uploaded, fully functional brain could live in a virtual reality or in a computer that takes physical form in a robot or biological body. Theoretically, this technology would allow uploaded minds to interact with all realities and be able to create and share information.

Apart from being another means of communicating with others and transmitting information, WBE could also be used as a medium to further ideologies. For example, if Osama bin Laden’s brain had been uploaded to the cloud, his living followers for generations to come could interact with him and acquire feedback and guidance. Another example is Adolf Hitler; if his brain had been uploaded, his modern-day followers would be able to interact with him through cognitive augmentation and AI. This could of course be used to ‘keep’ loved ones in our lives; however, the technology has broader implications when it is used to perpetuate harmful ideologies, shape opinions, and mobilize populations into violent action. As mind-boggling as all this may sound, the WBE “hypothetical futuristic process of scanning the mental state of a particular brain substrate and copying it to a computer” is being scientifically pursued. In 2008, the Future of Humanity Institute at Oxford University published a technical report about the roadmap to Whole Brain Emulation.

Despite the many unanswered questions and the lack of a proof of concept for human brain uploading, a new startup, Nectome, which is “Committed to the goal of archiving your mind,” offers a brain preservation service and will upload the preserved brains once the technology is available. In return, clients pay a service fee of $10,000 and agree to have embalming chemicals introduced into their arteries (under general anesthesia) right before they pass away, so that the brain can be freshly extracted.

These technologies and realities create new areas for communication, expression, and self-exploration. They also provide spaces where identities transform, and where the perception of reality within and among these realities will hover somewhere above these many identities as people weave in and through them in their daily lives.

For more information regarding disruptive technologies, see Dr. Kostopoulos’ blogsite.

Please also see Dr. Kostopoulos’ recent submission to our Soldier 2050 Call for Ideas, entitled Letter from the Frontline: Year 2050, published by our colleagues at Small Wars Journal.

Dr. Lydia Kostopoulos is an advisor to the AI Initiative at The Future Society at the Harvard Kennedy School, participates in NATO’s Science for Peace and Security Program, is a member of the FBI’s InfraGard Alliance, and during the Obama administration received the U.S. Presidential Volunteer Service Award for her pro bono work in cybersecurity. Her work lies in the intersection of strategy, technology, education, and national security. Her professional experience spans three continents, several countries and multi-cultural environments. She speaks and writes on disruptive technology convergence, innovation, tech ethics, cyber warfare, and national security.