[Editor’s Note: Mad Scientist Laboratory is pleased to publish today’s post by guest blogger Dr. Zac Rogers, addressing the on-going cognitive war (i.e., what COL Steve Banach describes as Virtual War — see his blog posts Parts I & II). In the race to achieve a cognitive edge, Dr. Rogers cautions the West about hidden assumptions that may prove to be cognitive vulnerabilities — Enjoy!]
A growing portion of the national security, intelligence, and defense (NSID) communities in the US, UK, Europe, Australia, and elsewhere are exploring the concept of cognitive war. The idea is basically that irregular and unconventional methods and means, which increasingly include non-kinetic and non-lethal delivery and effects leveraging digital connectivity, have shifted the center of gravity of political conflict from a violent clash of arms on the conventional battlefield to a narrative contest among the population. In the process, traditional concepts within the art and science of violent political conflict associated with boundaries, thresholds, levels, and phases are all deeply disrupted.
While many in the NSID community are willing to accept we are fighting a cognitive war, few are willing to recognize the extent to which it is being lost. Losing the cognitive war raises another fashionable topic emerging lately – strategic surprise. This is not the fight we thought we would get; it is not the fight we’ve invested in; nor is it the fight we wanted. But it is the fight we’ve got. The radical shifts in how society is organized and how warfare is conducted have exposed the NSID community to strategic surprise.
Losing without fighting
Cognitive warfare is not only an attack on what we think. It is an attack on our way of thinking. Not only about the conduct of warfare but about whole-of-nation security and prosperity. And one of its unique properties is the extent to which we do it to ourselves. We participate. The adversary, in the age of hyper-connectivity, need only show up, inject, nudge, exploit, and disappear. The concept of ‘below the threshold’ conflict becomes meaningless when we prove ourselves capable of losing without fighting. The threshold of what?
The target of this type of warfare is obvious enough. It is the fabric of trust which underpins and enables the most basic functionality of open society. Trust that extends beyond heredity and beyond the purely transactional is the fabric that supports every aspect of the nation’s strategic strength. Instead of investing in the true strengths of open society after the Cold War, we have left them to atrophy in the hubristic belief that the open way of life was universalizing.
Gamers will get gamed
Easy to overlook often goes hand-in-hand with difficult to measure. Scientists really hate talking about this, but part of the reason for that overlooking is the resurgence of Positivism. Without always understanding it, and often without stating it, the majority of research and development in defence science and technology inherits both its epistemology and its methodology from Positivism. And R&D into the cluster of technologies associated with AI proceeds under many of the assumptions of Behaviourism.
These are currents in the historical drift of European thought – not arrows to truth. They are ‘ways of thinking’. The unresolved controversies in these Occidental thought trajectories are many. The scientific community’s discomfort with, if not outright dismissal of, the assumptions these traditions accommodate amounts to a set of cognitive vulnerabilities. Because the NSID community relies so heavily on these scientific communities, people in the former should, at a minimum, be aware of the assumptions which so often go unstated by people in the latter.
When Occidentalism and Positivism combine in the race for the next false dawn in technological supremacy, blind spots are produced. Believers in an ‘AI race’ should be wary. We in the West see this as an S&T contest, while largely ignoring its socio-political implications. For the Chinese, AI is politics, politics, politics. Is there something about non-Occidental cultural orientations that makes AI applicable to human affairs in ways not open to us? It’s an important strategic question. Positivism, by masking the salience of cultural orientation, is an exploitable weakness of our epistemic communities in need of addressing.
Proceed with caution
When ‘behavioural scientists’ get excited about manipulating people, whether for benign or malign ends, what is the effect on the fabric of trust open society depends on? Military organizations now scrambling to incorporate ‘the cognitive’ into their operational concepts face a steep curve and many roadblocks. Friction is not always a bad thing. Hubristic behavioural interventions into complex anthropological systems involving AI should be approached with great caution. Hidden assumptions are cognitive vulnerabilities, and what appears to be a branch of S&T competition could turn out to be a strategic cul-de-sac we might want to back out of later.
It’s one thing to know thy enemy. In the cognitive war, it’s more important than ever to know thyself.
If you enjoyed this post, please also see:
– Man-Machine Rules, by Dr. Nir Buras
Dr. Zac Rogers is Research Lead at the Jeff Bleich Centre for the US Alliance in Digital Technology, Security, and Governance at Flinders University of South Australia. His research interests combine national security, intelligence, and defence with social cybersecurity, digital anthropology, and democratic resilience.