219. Insights from the GEN Z and the OE Livestream Event

[Editor’s Note: As Mad Scientist continues to broaden our aperture on the Operational Environment (OE) and the changing character of warfare, we are seeking out and engaging diverse populations to glean their insights in order to overcome our own confirmation biases. In January, we engaged defense subject matter experts from France, the Netherlands, Germany, the UK, Canada, and NATO’s Innovation Hub at our Global Perspectives in the Operational Environment Virtual Conference on a diverse array of topics affecting the OE.

Last month we collaborated with The College of William and Mary’s Project on International Peace and Security (PIPS) Program to livestream our GEN Z and the OE event, where PIPS Research Fellows discussed the ramifications of their respective research topics in two moderated discussion panels. Today’s post encapsulates what we learned from these two panels – Enjoy!]

Panel 1: Development and Security Implications of Cutting-Edge Technologies, moderated by Patricia DeGanerro:

Megan Hogan addressed the underlying cost-benefit analysis associated with the US developing and maintaining Deepfake technologies as a capability to deter, deny, or defeat any adversary that seeks to harm U.S. national interests.

Deepfakes are a form of synthetic media that use Artificial Intelligence (AI) to produce highly realistic, fake videos. They are extremely effective weapons of disinformation capable of undermining trust in institutions and elections and inciting political violence. The “power and peril” of deepfakes is that they lower the cost of disinformation. Previously, a high degree of technical expertise was needed to produce a realistic fake video of someone. Today, all you need is an internet connection.

Deepfakes have a wide array of applications both on and off the battlefield. At the most basic level, a deepfake attack during wartime can cause momentary confusion within an adversary’s political or military leadership. This confusion, in turn, can influence or stall adversary decision-making, providing American troops with a short window of time to either attack or escape, depending on the situation. Under a coercive diplomacy strategy, which relies on the threat or limited use of military force to influence an adversary’s decision-making, weaponized deepfakes could act as a more cost-effective, limited use of force.

Weaponized deepfake attacks can be deployed in conjunction with conventional military operations. Such a “brute force” application is particularly compelling, because not only do deepfake attacks impede our adversaries’ ability to react, respond, and communicate, but they also allow us to shape the international narrative of a military operation in the critical early days of a conflict.

The US faces a choice — the DoD can either continue to restrict its research to developing video authentication algorithms or expand its effort to include deepfake weaponization for coercive diplomacy and warfighting.

        • From an offensive standpoint, deepfakes are a powerful tactic to incorporate into our arsenal: they’re difficult to defend against, cheap, fast-acting, and have no clear escalation thresholds.
        • From a defensive standpoint, weaponizing deepfakes will enhance our current detection capabilities by improving our understanding of how these weapons are made (a minimal frame-screening sketch follows below).
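
For readers curious what frame-level video authentication can look like in practice, the sketch below samples frames from a video file and averages the output of a detector model. It is purely illustrative, not anything the panel prescribed: frame extraction uses OpenCV, while score_frame is a hypothetical placeholder for whatever trained classifier a real authentication pipeline would supply.

    import cv2  # OpenCV, used here only to read frames from a video file

    def score_frame(frame) -> float:
        """Hypothetical placeholder: a real pipeline would run a trained
        classifier here and return the probability that the frame is synthetic."""
        raise NotImplementedError("plug in a trained detector")

    def screen_video(path: str, sample_every: int = 30, threshold: float = 0.7) -> bool:
        """Flag a video as suspect if the average synthetic-frame score of
        sampled frames exceeds the threshold."""
        capture = cv2.VideoCapture(path)
        scores, index = [], 0
        while True:
            ok, frame = capture.read()
            if not ok:
                break
            if index % sample_every == 0:
                scores.append(score_frame(frame))
            index += 1
        capture.release()
        return bool(scores) and sum(scores) / len(scores) > threshold

Per-frame scoring is only the simplest starting point; fielded detectors also examine temporal inconsistencies across frames and audio-video mismatches.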

Clara Waterman addressed how the DoD’s current approaches to data collection, cleaning, and sharing are impeding its ability to achieve its Artificial Intelligence (AI) goals.

The U.S. DoD is spending billions of dollars on AI research and development with the ultimate goal of integrating AI into its tactical and operational decisions and autonomous weaponry.

However, if the data that feeds AI is insufficient or inaccurate, then military leaders will misunderstand the operating environment. This is further exacerbated by biases in training, input, and feedback data, all of which create blind spots and perpetuate inaccuracies in the continuous data cycle.
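
As a concrete, if simplified, illustration of the kind of check this implies, the snippet below audits a labeled data set for missing labels and gross class imbalance before it ever reaches a model. The function and field names are notional, not an actual DoD tool.

    from collections import Counter

    def audit_labels(records, label_key="label", imbalance_ratio=10.0):
        """Minimal data-quality audit: report missing labels and gross class
        imbalance before a data set is used for training."""
        missing = [r for r in records if not r.get(label_key)]
        counts = Counter(r[label_key] for r in records if r.get(label_key))
        issues = []
        if missing:
            issues.append(f"{len(missing)} records have no label")
        if counts:
            most, least = max(counts.values()), min(counts.values())
            if most / least > imbalance_ratio:
                issues.append(f"class imbalance of {most}:{least} exceeds {imbalance_ratio}:1")
        return issues

    # Notional example: two unlabeled records and a heavy skew toward one class
    sample = [{"label": "vehicle"}] * 30 + [{"label": "person"}] * 2 + [{}] * 2
    print(audit_labels(sample))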

The DoD needs a data clearinghouse that facilitates communication about data collection, vetting, and labeling between offices, departments, and agencies. The sooner that the DoD can capitalize on this opportunity, the faster it will be able to achieve its AI goals, as high-quality data sets are crucial to the DoD mission.
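
A clearinghouse of this sort is, at bottom, shared metadata. The sketch below shows one notional shape a catalog entry could take; the fields are illustrative assumptions, not a real DoD schema.

    from dataclasses import dataclass, field
    from typing import List

    @dataclass
    class DatasetCatalogEntry:
        """Illustrative clearinghouse record: enough metadata for another
        office to judge whether a data set fits its AI use case."""
        name: str
        owning_office: str          # who collected and maintains the data
        collection_method: str      # sensor, survey, scrape, etc.
        vetting_status: str         # e.g., "unvetted", "spot-checked", "fully audited"
        labeling_scheme: str        # taxonomy used for labels, if any
        known_gaps: List[str] = field(default_factory=list)  # documented blind spots

    entry = DatasetCatalogEntry(
        name="notional-imagery-set",
        owning_office="example-office",
        collection_method="aerial imagery",
        vetting_status="spot-checked",
        labeling_scheme="two-class: vehicle / non-vehicle",
        known_gaps=["few night-time images"],
    )
    print(entry.vetting_status)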

Caroline Duckworth discussed how, as biotechnology innovation accelerates globally, asymmetric ethical regulations between countries could put the US at a disadvantage.

The US engages in extensive public debate surrounding ethical, medical, and biotechnology practices, while China’s collectivist culture is more tolerant of individual sacrifice to benefit public progress. In China, the use of embryonic stem cells is not inhibited by religious or cultural objections, allowing Chinese researchers to more rapidly develop new genetic treatments and medicines.

The effects of these asymmetric ethics will continue to expand as biotechnology innovation accelerates, presenting the following risks to the US:

        • China’s pivot to promote the development of innovative bio-pharmaceuticals could allow it to create a chokehold on essential medicines, threatening a critical U.S. supply chain.
        • States with more tolerant ethical standards in biotech will also be able to more rapidly pursue and adopt controversial military capabilities, including low-cost, precise bioweapons that target specific individuals or populations based on their genetic codes, and soldiers genetically modified for disease resistance and enhanced cognition.

The US should:

        • Formalize its guiding principles in biotech development and broadcast them to serve as a model for others, allowing us to condemn violations consistently.
        • Expand the U.S.-China Program for Biomedical Collaborative Research, allowing us to influence China’s ethical norms.
        • Develop a DARPA task force to identify specific biotechnologies where the US will lag due to its ethical standards, and then have the DoD strategically invest in competing, ethical alternative technologies.

Key Mad Scientist Observations from Panel 1:

        • China is amassing data sets in all three of these areas: 1) facial recognition data, used both for internal security applications and, when scraped from social media, for deepfake applications; 2) AI data sets; and 3) DNA data sets.
        • Our adversaries will employ these convergent capabilities as a hemispheric threat, targeting our forces (and their dependents) all the way from their home station installations (i.e., the Strategic Support Area) to the Close Area fight.
        • Much like we’ve seen in business over the past thirty years (i.e., first with Walmart and Target harnessing real-time inventory and supplier data, then with Amazon and Alibaba dominating on-line sales), the nation that is able to harness and weaponize the aforementioned and other data sets will dominate in future competition and conflict.

Panel 2: Geopolitical Strategy of Authoritarian Regimes and Near-Peer Competitors Utilizing Technology, moderated by Marie Murphy:

Katherine Armstrong discussed how regimes are increasingly reaching across their borders to track, hack, blackmail, and assassinate emigrants, using a repertoire of technology-based and technology-facilitated tactics. Authoritarian states are beginning to turn these capabilities, developed for use against co-ethnics and co-nationals, against extraterritorial actors who are more central to U.S. security.

The ability of non-democratic governments to suppress the voice and control the behavior of people in the United States threatens civil society and democracy, violates U.S. sovereignty, and jeopardizes U.S. partnerships with other countries.

The technology-based tactics in the toolkit of transnational repression—disinformation, passive cyber attacks, and active cyber attacks—pose the greatest risk because they are easily transferable. Economic and social influencers, politicians, and members of the military and intelligence communities can be targeted with the same tactics.

The U.S. government and multilateral organizations should establish a standard of acceptable behavior regarding transnational repression, while also protecting victims.

        • Establish a watchlist of victims and perpetrators. This collaborative effort between NGOs and the U.S. Department of State (DoS) should form the core of efforts to curb transnational repression. The watchlist of victims would help NGOs (e.g., Freedom House and Citizen Lab) notify targets of an attack and provide educational resources to improve personal cybersecurity. The watchlist of perpetrators would include a ranking of states’ propensity to engage in transnational repression. These rankings can be tied to aid, trade, and diplomatic relations.
        • The US should set standards for the use of INTERPOL alerts in U.S. legal proceedings (as described in the TRAP Act of 2019) and increase funding to INTERPOL. INTERPOL should punish abusers of its system by suspending membership or barring them from leadership positions.

Lincoln Zaleski addressed how technology-enabled disinformation campaigns threaten liberal democratic society by targeting exploitable population-based vulnerabilities inherent to the democratic system.

Modern disinformation campaigns, enabled by emerging technologies, allow authoritarian regimes to exploit inherent democratic vulnerabilities. These regimes identify key targets within our society that can be exploited, including both traditional targets (e.g., corrupt politicians or corporations seeking financial linkages with the attacking country) and emergent targets (e.g., disenfranchised identity groups or co-ethnic communities).

Through repeated false messaging and direct/indirect economic support, authoritarian regimes strengthen feelings of disenfranchisement and emphasize the need to disrupt and change the status quo. The false messaging becomes mainstream, and the attacking regime gains political influence. These networks can be repeatedly tapped into for different campaigns over time, furthering authoritarian-led disinformation and political influence.

Liberal democratic responses, such as nationwide education campaigns or improved social media detection of fake news, are limited in scope and require significant time to enact.

The United States and other liberal democracies should respond to authoritarian disinformation attacks through a counter-disinformation offensive campaign intended to impose costs on the attacking country. By targeting inherent authoritarian vulnerabilities, democratic regimes can raise the social and political cost of authoritarian disinformation campaigns, ideally stemming further attacks.

Michaela Flemming discussed how China’s strategic export of its surveillance state will improve Chinese intelligence, creating stronger but more dependent allies for China, while contributing to democratic backsliding worldwide.

China is exporting its model of digital authoritarianism abroad, creating a network of dependent client states and threatening the United States’ global influence. China’s digital authoritarian model combines surveillance-capable technology, such as camera systems, smart city tech, and telecom infrastructure, with a cyber sovereignty ideology. China sells this model to developing and autocratic client states through the Belt and Road Initiative.

By exporting Chinese technology from companies like Huawei, ZTE, and Hikvision, China can collect vast amounts of data from client states. This will strengthen both domestic and foreign surveillance capabilities. High quality foreign data enables China to manipulate public opinion outside its borders through responsive and targeted propaganda campaigns.

Poor authoritarian states are gaining access to technology they could neither develop nor afford on their own, which they will use to consolidate their control over populations. By arming authoritarian and developing states with the tools of repression and a framework on how to use them, China is encouraging and enabling democratic backsliding abroad.

China will use its leverage to shape governing norms around the internet, promoting authoritarian control and facilitating the theft of intellectual property while eroding freedom of speech and democracy. China also benefits from stronger, but more dependent, allies as the number of client states grows.

        • As China feels more secure both at home and abroad, Beijing may act more aggressively, especially in areas where it has a number of client states.
        • As China promotes digital authoritarianism as a viable substitute for democracy, the United States will lose valuable partners in the developing world.

The US should work with targeted countries to improve domestic cyber security expertise and counter Chinese efforts to spread digital authoritarianism:

        • Employ blimps to provide secure internet access to citizens in authoritarian countries.
        • Develop interchangeable parts / system interoperability to lower the cost of eliminating Huawei equipment.

Key Mad Scientist Observations from Panel 2:

        • Authoritarian regimes are employing tracking, hacking, blackmail, and assassination against their own citizens and expatriate dissidents living abroad. In the future, these tactics could be employed in competition phase operations, targeting other states’ influencers and leaders.
        • Many of the authoritarian tools used for social control translate well to tactical and operational ISR capabilities (e.g., AI sensor networking, facial and gait recognition, and smart city sensors).
        • Current technologies used to track COVID-19 outbreaks internally in China will be mainstreamed and exported to enhance social control (e.g., tracking populations, assigning risk portfolios).
        • The convergence and export of these trends (enhanced social controls, track/hack/blackmail/assassination, and disinformation) is creating a toolkit of capabilities that will challenge the US objective of globally expanding the community of liberal democracies.

If you enjoyed this post, check out this event’s page on the Mad Scientist APAN site to watch both panel videos and read each of the PIPS Research Fellows’ abstracts.