74. Mad Scientist Learning in 2050 Conference

Mad Scientist Laboratory is pleased to announce that Headquarters, U.S. Army Training and Doctrine Command (TRADOC) is co-sponsoring the Mad Scientist Learning in 2050 Conference with Georgetown University’s Center for Security Studies this week (Wednesday and Thursday, 8-9 August 2018) in Washington, DC.

Future learning techniques and technologies are critical to the Army’s operations in the 21st century against adversaries in rapidly evolving battlespaces. The ability to respond effectively to a changing Operational Environment (OE) with fleeting windows of opportunity is paramount, and Leaders must act quickly to adjust to different OEs and to more advanced and lethal technologies. Learning technologies must enable Soldiers to learn, think, and adapt, using innovative synthetic environments to accelerate learning and the attainment of expertise. Looking to 2050, learning enablers will become far more mobile and on-demand.

Looking at Learning in 2050, topics of interest include, but are not limited to: Virtual, Augmented, and Mixed Realities (VR/AR/MR); interactive, autonomous, accelerated, and augmented learning technologies; gamification; skills needed for Soldiers and Leaders in 2050; synthetic training environments; virtual mentors; and intelligent artificial tutors. Advanced learning capabilities present the opportunity for Soldiers and Leaders to prepare for operations and operate in multiple domains while overcoming current cognitive load limitations.

Plan to join us virtually at the conference as leading scientists, innovators, and scholars from academia, industry, and government gather to discuss:

1) How will emerging technologies improve learning or augment intelligence in professional military education, at home station, while deployed, and on the battlefield?

2) How can the Army accelerate learning to improve Soldier and unit agility in rapidly changing OEs?

3) What new skills will Soldiers and Leaders require to fight and win in 2050?

Get ready…

– Read our Learning in 2050 Call for Ideas finalists’ submissions here, graciously hosted by our colleagues at Small Wars Journal.

– Review the following blog posts: First Salvo on “Learning in 2050” – Continuity and Change and Keeping the Edge.

– Starting Tuesday, 7 August 2018, see the conference agenda’s list of presentations and the associated world-class speakers’ biographies here.

and Go!

Join us at the conference on-line here via live-streaming audio and video, beginning at 0840 EDT on Wednesday, 08 Aug 2018; submit your questions to each of the presenters via the moderated interactive chat room; and tag your comments @TRADOC on Twitter with #Learningin2050.

See you all there!


48. Warfare at the Speed of Thought

(Editor’s Note: Mad Scientist Laboratory is pleased to present the second guest blog post by Dr. Richard Nabors, Associate Director for Strategic Planning and Deputy Director, Operations Division, U.S. Army Research, Development and Engineering Command (RDECOM) Communications-Electronics Research, Development and Engineering Center (CERDEC), addressing how Augmented and Mixed Reality are the critical elements required for integrated sensor systems to become truly operational and support Soldiers’ needs in complex environments.

Dr. Nabors’ previous guest post addressed how the proliferation of sensors, integrated via the Internet of Battlefield Things [IoBT], will provide Future Soldiers with the requisite situational awareness to fight and win in increasingly complex and advanced battlespaces.)

Speed has always been and will remain a critical component in assuring military dominance. Historically, the military has sought to increase the speed of its jets, ships, tanks, and missiles. However, one of the greatest leaps is still to come: the ability to significantly increase the speed of the decision-making process of the individual at the small unit level.

To maximize individual and small unit initiative to think and act flexibly, Soldiers must receive as much relevant information as possible, as quickly as possible. Integrated sensor technologies can provide situational awareness by collecting and sorting real-time data and sending a fusion of information to the point of need, but that information must be processed quickly in order to be operationally effective. Augmented Reality (AR) and Mixed Reality (MR) are two of the most promising solutions to this challenge facing the military and will eventually make it possible for Soldiers to instantaneously respond to an actively changing environment.
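
To make “sending a fusion of information to the point of need” concrete, here is a minimal sketch that ranks hypothetical sensor detections for one Soldier by a simple urgency heuristic. The detection format, threat scores, and scoring rule are illustrative assumptions, not any actual Army system.

```python
import math
from dataclasses import dataclass

@dataclass
class Detection:
    kind: str      # e.g., "vehicle", "person", "uav"
    x: float       # east (m) in a shared local frame
    y: float       # north (m)
    threat: float  # 0.0-1.0, hypothetical score from upstream classifiers

def fuse_and_prioritize(detections, soldier_xy, max_range=2000.0, top_n=5):
    """Rank detections by threat weighted by proximity to the Soldier.

    A fielded system would fuse tracks across sensors and model
    confidence; this sketch only sorts single-source reports.
    """
    sx, sy = soldier_xy
    scored = []
    for d in detections:
        dist = math.hypot(d.x - sx, d.y - sy)
        if dist > max_range:
            continue  # beyond this Soldier's point of need
        urgency = d.threat * (1.0 - dist / max_range)
        scored.append((urgency, dist, d))
    scored.sort(key=lambda t: t[0], reverse=True)
    return [(d, round(dist)) for _, dist, d in scored[:top_n]]

# Hypothetical reports: a close, threatening vehicle outranks a distant person.
reports = [Detection("vehicle", 400.0, 250.0, 0.9),
           Detection("person", 1500.0, -900.0, 0.4)]
for det, dist in fuse_and_prioritize(reports, (0.0, 0.0)):
    print(f"{det.kind} at {dist} m, threat {det.threat}")
```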

AR and MR function in real-time, bringing the elements of the digital world into a Soldier’s perceived real world, resulting in optimal, timely, and relevant decisions and actions. AR and MR allow for the overlay of information and sensor data into the physical space in a way that is intuitive, serves the point of need, and requires minimal training to interpret. AR and MR will enable the U.S. military to survive in complex environments by decentralizing decision-making from mission command and placing substantial capabilities in Soldiers’ hands in a manner that does not overwhelm them with information.

On a Soldier’s display, AR can render useful battlefield data in the form of camera imaging and virtual maps, aiding a Soldier’s navigation and battlefield perspective. Special indicators can mark people and various objects to warn of potential dangers.
Soldier-borne, palm-size reconnaissance copters with sensors and video can be directed and tasked instantaneously on the battlefield. Information can be gathered by unattended ground sensors and transmitted to a command center, with AR and MR serving as a networked communication system between military leaders and the individual Soldier. Used in this way, AR and MR increase Soldier safety and lethality.
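
As a simple illustration of how such an indicator might be placed on a Soldier’s display, the sketch below projects a 3-D point into pixel coordinates with a basic pinhole camera model. The focal length, display resolution, and coordinates are assumed values, not those of any fielded headset.

```python
def project_marker(point_cam, f_px=900.0, width=1280, height=720):
    """Project a 3-D point (camera frame: x right, y down, z forward,
    in metres) onto the display using a simple pinhole model.

    Returns pixel coordinates, or None if the point is behind the
    wearer or outside the field of view.
    """
    x, y, z = point_cam
    if z <= 0:
        return None  # behind the camera: nothing to draw
    u = width / 2 + f_px * x / z
    v = height / 2 + f_px * y / z
    if 0 <= u < width and 0 <= v < height:
        return (int(u), int(v))
    return None

# Hypothetical threat 30 m ahead and slightly to the Soldier's right.
pixel = project_marker((2.0, -0.5, 30.0))
if pixel is not None:
    print(f"draw danger indicator at pixel {pixel}")  # (700, 345)
```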

In the near-term, the Army Research and Development (R&D) community is investing in the following areas:

• Reliable position tracking devices that self-calibrate for head orientation of head-worn sensors.

• Ultralight, ultrabright, ultra-transparent display eyewear with a wide field of view.

• Three-dimensional viewers with battlefield terrain visualization, incorporating real-time data from unmanned aerial vehicles and other sources (a toy sketch follows this list).
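
As a toy illustration of the third item, the sketch below folds real-time UAV elevation samples into a gridded terrain model that a three-dimensional viewer could render. The grid size, sample format, and overwrite rule are assumptions for illustration only.

```python
import numpy as np

class TerrainGrid:
    """A toy heightmap over a 1 km x 1 km area at 10 m resolution.

    Each UAV report (east_m, north_m, elevation_m) overwrites the cell
    it falls in; a fielded viewer would blend sources and track staleness.
    """
    def __init__(self, size_m=1000.0, cell_m=10.0):
        self.cell_m = cell_m
        n = int(size_m / cell_m)
        self.height = np.full((n, n), np.nan)  # NaN = not yet observed

    def ingest(self, east_m, north_m, elevation_m):
        i = int(north_m // self.cell_m)
        j = int(east_m // self.cell_m)
        if 0 <= i < self.height.shape[0] and 0 <= j < self.height.shape[1]:
            self.height[i, j] = elevation_m

grid = TerrainGrid()
# Hypothetical samples streaming in from a UAV downlink.
for sample in [(120.0, 340.0, 87.5), (230.0, 410.0, 91.2)]:
    grid.ingest(*sample)
print(f"observed cells: {np.count_nonzero(~np.isnan(grid.height))}")  # 2
```
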
In the mid-term, R&D activities are focusing on:

• Manned vehicles with sensors and processing capabilities for moving autonomously, tasked for Soldier protection.

• Robotic assets, tele-operated, semi-autonomous, or autonomous and imbued with intelligence, with limbs that can keep pace with Soldiers and act as teammates.

• Robotic systems that contain multiple sensors that respond to environmental factors affecting the mission, or have self-deploying camouflage capabilities that stay deployed while executing maneuvers.

• Enhanced reconnaissance through deep-penetration mapping of building layouts, cyber activity, and subterranean infrastructure.

Once AR and MR prototypes and systems have seen widespread use, the far-term focus will shift to automation that tracks and reacts to a Soldier’s changing situation, tailoring the augmentation the Soldier receives and coordinating it across the unit.
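
As a purely illustrative sketch of that kind of tailoring, the snippet below throttles overlay detail as a Soldier’s estimated workload rises. The inputs, thresholds, and overlay tiers are hypothetical, not a description of any planned system.

```python
from enum import Enum

class OverlayTier(Enum):
    FULL = 3     # maps, logistics, full sensor feeds
    FOCUSED = 2  # navigation cues and nearby threats only
    MINIMAL = 1  # imminent-threat warnings only

def tailor_overlay(heart_rate, in_contact, teammates_in_contact):
    """Pick an overlay tier from a crude workload estimate.

    A real system might fuse physiological sensors, eye tracking, and
    unit state; this is only a rule-of-thumb sketch of the idea.
    """
    if in_contact or heart_rate > 160:
        return OverlayTier.MINIMAL
    if teammates_in_contact or heart_rate > 120:
        return OverlayTier.FOCUSED
    return OverlayTier.FULL

print(tailor_overlay(heart_rate=135, in_contact=False, teammates_in_contact=False))
# -> OverlayTier.FOCUSED
```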

In addition, AR and MR will revolutionize training, empowering Soldiers to train as they fight. Soldiers will be able to use real-time sensor data from unmanned aerial vehicles to visualize battlefield terrain with geographic awareness of roads, buildings, and other structures before conducting their missions. They will be able to rehearse courses of action and analyze them before execution to improve situational awareness. AR and MR are increasingly valuable aids to tactical training in preparation for combat in complex and congested environments.

AR and MR are the critical elements required for integrated sensor systems to become truly operational and support Soldiers’ needs in complex environments. Solving the challenge of how and where to use AR and MR will enable the military to get full value from its investments in complex integrated sensor systems.

For more information on how the convergence of technologies will enhance Soldiers on future battlefields, see:

– The discussion on advanced decision-making in An Advanced Engagement Battlespace: Tactical, Operational and Strategic Implications for the Future Operational Environment, published by our colleagues at Small Wars Journal.

– Dr. James Canton’s presentation from the Mad Scientist Robotics, Artificial Intelligence, & Autonomy Conference at Georgia Tech Research Institute last March.

– Dr. Rob Smith’s Mad Scientist Speaker Series presentation on Operationalizing Big Data, where he addresses the applicability of AR to sports and games training as an analogy to combat training (noting “Serious sport is war minus the shooting” — George Orwell).

Dr. Richard Nabors is Associate Director for Strategic Planning, US Army CERDEC Night Vision and Electronic Sensors Directorate.

41. The Technological Information Landscape: Realities on the Horizon

(Editor’s Note: Mad Scientist Laboratory is pleased to present the following guest blog post by Dr. Lydia Kostopoulos, addressing the future technological information landscape and the tantalizing possible realities it may provide us by 2050.)

The history of technology and its contemporary developments is not a story about technology; it is a story about people, politics, and culture. Politics drove the development of military technologies that have since proven tremendously valuable for civilian use. Technologies too far ahead of their cultural times were left behind. As the saying goes, ‘necessity is the mother of invention,’ and many technological advances are owed to the perseverance of people determined to solve a problem affecting their own lives or those of their loved ones and communities. Ultimately, technology starts with people, ideas come from people, and the perception of reality is a human endeavor as well.

The ‘reality’-related technologies that are part of the current and emerging information landscape have the potential to alter the perception of reality, form new digital communities and allegiances, mobilize people, and create reality dissonance. These realities also contribute to the evolving ways in which information is consumed, managed, and distributed. There are five components:

1. Real World: The pre-internet, touch-feel-and-smell world.

2. Digital Reality 1.0: Many digital realities already exist that people can immerse themselves in, including gaming, social media, and virtual worlds such as Second Life. Things that happen on these digital platforms can affect the real world and vice versa.

3. Digital Reality 2.0: The Mixed Reality (MR) world of Virtual Reality (VR) and Augmented Reality (AR). These technologies are still in their early stages; however, they show tremendous potential for receiving and perceiving information, as well as for experiencing narratives through synthetic or captured moments.

Virtual Reality allows the user to step into a “virtual” reality, which can be an entirely synthetic, created digital environment, or a suspended moment of an actual real-world environment. The synthetic environment could be modeled after the real world, a fantasy, or a bit of both. Most virtual realities do not yet fully cross the uncanny valley, but it is only a matter of time. Suspended moments of actual real-world environments rely on 360-degree cameras that capture a video moment in time; these already exist, and the degree to which the VR user feels teleported to that geographical and temporal moment will, for the most part, depend on the quality of the video and the sound. A VR experience can also be modified, edited, and amended, just as regular videos are edited today. This, coupled with technologies that authentically replicate voice (e.g., Adobe VoCo) and technologies that can change faces in videos, creates open-ended possibilities for ‘fake’ authentic videos and soundbites that can be embedded.

Augmented Reality allows the user to interact with a digital layer superimposed on their physical, real world. The technology is still in its early stages, but when it reaches its full potential, it is expected to disrupt and transform the way we communicate, work, and interact with our world. Some say the combination of voice command, artificial intelligence, and AR will make screens a thing of the past. Google is experimenting with its new app Just a Line, which allows users to play with their augmented environment and create digital graffiti in their physical space. While this is an experiment, the potential for geographic AR experiences, messages (overt or covert), and storytelling is immense.
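
The core idea behind such AR graffiti can be sketched in a few lines: strokes are stored in world coordinates rather than screen pixels, so a drawing stays pinned to its physical spot as the device moves. The pose representation and values below are illustrative assumptions, not Just a Line’s actual implementation.

```python
from dataclasses import dataclass, field

@dataclass
class Stroke:
    # Points live in a fixed world frame (metres), not screen pixels,
    # so the drawing stays anchored to the physical spot.
    points_world: list = field(default_factory=list)

def to_world(point_device, device_pose):
    """Transform a point from the device frame into the world frame.

    device_pose is a hypothetical (rotation_matrix, translation) pair;
    real AR frameworks expose an equivalent tracked camera pose.
    """
    R, t = device_pose
    return [sum(R[i][k] * point_device[k] for k in range(3)) + t[i]
            for i in range(3)]

# Identity pose: the device frame happens to coincide with the world frame.
pose = ([[1, 0, 0], [0, 1, 0], [0, 0, 1]], [0.0, 0.0, 0.0])
stroke = Stroke()
stroke.points_world.append(to_world((0.1, -0.2, 0.5), pose))
print(stroke.points_world)  # [[0.1, -0.2, 0.5]]
```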

4. Brain Computer Interface (BCI): Also called Brain Machine Interface (BMI). BCI has the potential to create another reality once the brain is seamlessly connected to the internet, which may also include connection to artificial intelligence and to other brains. The technology is currently being developed, and the space for ‘minimally invasive’ BCI has exploded. Should it work as intended, the user would, in theory, communicate with the internet directly through thought, and the lines would blur between the user’s own memory and knowledge and the augmented intelligence their brain accesses in real time through BCI. In this sense, users could also communicate with one another through thought, with BCI as the medium. Sharing information, ideas, memories, and emotions through this medium would create a new way of receiving, creating, and transmitting information, as well as a new reality experience. For those with a sinister mind, however, this technology could also be used to implant ideas into others’ minds and subconscious. For an in-depth explanation of one company’s efforts to make BCI a reality, see Tim Urban’s post “Neuralink and the Brain’s Magical Future”.

5. Whole Brain Emulation (WBE): Brings a very new dimension to the information landscape. It is very much still in the early stages; however, if successful, it would create a virtual, immortal, sentient existence that would live in and interact with the other realities. It is still unclear whether the uploaded mind would be sentient, how it would interact with its new world (the cloud), and what implications it would have for those who know or knew the person. As the technology is still new, many avenues for brain uploading are being explored, including uploading while a person is alive and upon death. Ultimately, a ‘copy’ of the mind would be made, and a computer would run a simulation model of the uploaded brain, which is also expected to have a conscious mind of its own. This uploaded, fully functional brain could live in a virtual reality or in a computer that takes physical form in a robotic or biological body. Theoretically, this technology would allow uploaded minds to interact with all realities and to create and share information.

Apart from offering another means of communicating with others and transmitting information, WBE could also be used as a medium to further ideologies. For example, if Osama bin Laden’s brain had been uploaded to the cloud, his living followers for generations to come could interact with him and acquire feedback and guidance. Another example is Adolf Hitler: had his brain been uploaded, his modern-day followers would be able to interact with him through cognitive augmentation and AI. This could, of course, be used to ‘keep’ loved ones in our lives; however, the technology has broader implications when it is used to perpetuate harmful ideologies, shape opinions, and mobilize populations into violent action. As mind-boggling as all this may sound, the WBE “hypothetical futuristic process of scanning the mental state of a particular brain substrate and copying it to a computer” is being scientifically pursued. In 2008, the Future of Humanity Institute at Oxford University published a technical report outlining a roadmap to Whole Brain Emulation.

Despite the many unanswered questions and the lack of a proof of concept for human brain uploading, a new startup, Nectome, which is “Committed to the goal of archiving your mind,” offers a brain preservation service; when the technology becomes available, it will upload the preserved brains. In return, clients pay a service fee of $10,000 and agree to have embalming chemicals introduced into their arteries (under general anesthesia) right before they pass away, so that the brain can be extracted fresh.

These technologies and realities create new areas for communication, expression, and self-exploration. They also provide spaces where identities transform, and where the perception of reality will hover somewhere above these many identities as people weave in and through these realities in their daily lives.

For more information regarding disruptive technologies, see Dr. Kostopoulos’ blogsite.

Please also see Dr. Kostopoulos’ recent submission to our Soldier 2050 Call for Ideas, entitled Letter from the Frontline: Year 2050, published by our colleagues at Small Wars Journal.

Dr. Lydia Kostopoulos is an advisor to the AI Initiative at The Future Society at the Harvard Kennedy School, participates in NATO’s Science for Peace and Security Program, is a member of the FBI’s InfraGard Alliance, and during the Obama administration received the U.S. Presidential Volunteer Service Award for her pro bono work in cybersecurity. Her work lies in the intersection of strategy, technology, education, and national security. Her professional experience spans three continents, several countries and multi-cultural environments. She speaks and writes on disruptive technology convergence, innovation, tech ethics, cyber warfare, and national security.