Impact on Issues and Uncertainties

Image credit: Mike MacKenzie on Flickr, via www.vpnsrus.com (CC BY 2.0).

Critical Uncertainty ➚➚➚ Disruption of the current AI-power race for private and public actors alike: the U.S. takes a very serious lead in the race
➚➚ Accelerating expansion of AI
➚➚ Accelerating emergence of the AI-world
➚➚ Increased odds of the U.S. consolidating its lead in the AI-power race
➚➚ Escalating AI-power race, notably between the U.S. and China
➚➚ Rising challenge for the rest of the world to catch up
Potential for escalating tensions between the U.S. and China, including between AI actors

Facts and Analysis

Related

Ongoing series: Portal to AI – Understanding AI and Foreseeing the Future AI-powered World
★ Artificial Intelligence – Forces, Drivers and Stakes
Militarizing Artificial Intelligence – China (1)
★ Militarizing Artificial Intelligence – China (2)

Articles marked with a ★ are premium, members-only articles. Their introductions nonetheless remain open access.

On 7 September 2018, the U.S. Defense Advanced Research Projects Agency (DARPA) of the Department of Defense (DoD) launched a multi-year investment of more than $2 billion in new and existing programs to foster the emergence of “the third wave” of Artificial Intelligence (AI). According to DARPA, this next-generation AI should notably focus upon “contextual adaptation,” i.e. “machines that understand and reason in context”.

The goal is to enable the creation of machines that “function more as colleagues than as tools” and thus to allow for “partner[ing] with machines”. As a result, DARPA wants to create “powerful capabilities for the DoD”, i.e.:

“Military systems that collaborate with warfighters will
– facilitate better decisions in complex, time-critical, battlefield environments;
– enable a shared understanding of massive, incomplete, and contradictory information;
– and empower unmanned systems to perform critical missions safely and with high degrees of autonomy.”

The last point is highly likely to include, notably, the much-feared Lethal Autonomous Weapon Systems (LAWS), aka killer robots.

Even though the USD 2 billion announcement includes existing programs, DARPA’s new campaign underlines the importance of AI for American defence. The U.S. shows here, once again, its determination to remain at the top of the race for AI-power, by breaking new ground in terms of “algorithms” as well as “needs and usage”, to use the terminology of our five drivers and stakes. It thereby also adopts a distinctly disruptive strategy, as it intends to go beyond the current Deep Learning wave.

Disruption would impact both public and private actors, states and companies alike.

In terms of power struggle, we may also see the launch of the DARPA campaign as an answer to the call, led by figures from Alphabet (Google) and Tesla at the head of 116 international experts, to ban autonomous weapons. With such funding available, it is likely that more than one expert and laboratory will see their initial reluctance overcome.

Should the U.S. succeed, it would take a very serious lead in the current race for AI-power, notably over China, as it would deeply shape the very path on which the race takes place.

Sources and Signals

DARPA: AI Next Campaign

DARPA Announces $2 Billion Campaign to Develop Next Wave of AI Technologies

Over its 60-year history, DARPA has played a leading role in the creation and advancement of artificial intelligence (AI) technologies that have produced game-changing capabilities for the Department of Defense. Starting in the 1960s, DARPA research shaped the first wave of AI technologies, which focused on handcrafted knowledge, or rule-based systems capable of narrowly defined tasks.

Elon Musk leads 116 experts calling for outright ban of killer robots

Some of the world’s leading robotics and artificial intelligence pioneers are calling on the United Nations to ban the development and use of killer robots. Tesla’s Elon Musk and Alphabet’s Mustafa Suleyman are leading a group of 116 specialists from across 26 countries who are calling for the ban on autonomous weapons.

Published by Dr Helene Lavoix (MSc PhD Lond)

Dr Helene Lavoix, PhD Lond (International Relations), is the President/CEO of The Red Team Analysis Society. She specialises in strategic foresight and warning for international relations and national and international security issues. Her current focus is on the war in Ukraine, the international order and the rise of China, the overstepping of planetary boundaries and international relations, the methodology of SF&W, radicalisation, as well as new tech and security.
