The U.S. and China are locked in an increasingly heated struggle for superpower status. Many initially perceived this confrontation solely through the lens of a trade war. However, the ZTE “saga” already indicated that the issue was broader, involving a battle for supremacy over 21st-century technologies and, relatedly, for international power (see When AI Started Creating AI – Artificial Intelligence and Computing Power, 7 May 2018).

The Sino-American technological battle increasingly looks like a fight to the death, with the offensive against Huawei aiming notably to protect future 5G networks (Cassell Bryan-Low, Colin Packham, David Lague, Steve Stecklow and Jack Stubbs, “The China Challenge: the 5G Fight“, Reuters Investigates, 21 May 2019). For Huawei and China, as well as for the world, the consequences are far-reaching: after Google stopped Huawei’s Android license, and after the Intel and Qualcomm bans, the British chip designer ARM, notably owned by Japan’s SoftBank, has now halted relations with Huawei (Paul Sandle, “ARM supply halt deals fresh blow to Chinese tech giant Huawei“, Reuters, 22 May 2019; “DealBook Briefing: The Huawei Backlash Goes Global“, The New York Times, 23 May 2019; Tom Warren, “Huawei’s Android And Windows Alternatives Are Destined For Failure“, The Verge, 23 May 2019).

The likely forthcoming American move against China’s Hikvision, one of the world’s largest producers of video-surveillance systems, notably involving “artificial intelligence, speech monitoring and genetic testing”, would only further confirm the American offensive (Doina Chiacu, Stella Qi, “Trump says ‘dangerous’ Huawei could be included in U.S.-China trade deal“, Reuters, 23 May 2019; Ana Swanson and Edward Wong, “Trump Administration Could Blacklist China’s Hikvision, a Surveillance Firm“, The New York Times, 21 May 2019).

China, for its part, answers both the trade war and the technological fight with an ideologically martial mobilisation of its population, along the lines of a “People’s War” and “The Long March”, and by changing TV scheduling to broadcast war films (Iris Zhao and Alan Weedon, “Chinese television suddenly switches scheduling to anti-American films amid US-China trade war“, ABC News, 20 May 2019; Michael Martina, David Lawder, “Prepare for difficult times, China’s Xi urges as trade war simmers“, Reuters, 22 May 2019). This highlights how much is at stake for the Middle Kingdom, as we explained previously (★ Sensor and Actuator (4): Artificial Intelligence, the Long March towards Advanced Robots and Geopolitics).

These moves underline the immense interests involved. Indeed, the new technologies, from artificial intelligence (AI) in its multiple forms to the internet of things (IoT) and communications, through the quantum information sciences and technologies (QIS), participate in a paradigmatic change, which also includes governance, international power and the way wars may be fought and won.

Contents
  1. How human beings become the actuators of AI-agents
    1. Case study
      1. The example of Google DeepMind’s Go Game
      2. Traveling by air
      3. The case of smart homes
    2. Training human beings in acting without thinking first
  2. Winning a War through Submission of the Enemy: reflection on a dystopian scenario
  3. From bridging worlds to changing the balance of worlds
    1. Changing the worlds to overcome difficulties
      1. Digital gateways
      2. Dematerialising the world
      3. Welcome to the Matrix
    2. The impossible total dematerialisation of the world and vulnerabilities
      1. There is no such thing as a solely digital world
      2. Energy, the physical hidden component of digitalisation
      3. The IT companies, climate change-related disasters and responsibility
    3. Users’ countries pay the bill, systemic threats and a strategic twist
      1. Users’ countries and systemic threats
      2. A strategic twist

Here, we shall focus on such possible new faces of security in general and of war in particular. These potential – and already operating – changes stem from the complex dynamics that have been unleashed. As we found out previously, the difficult march towards advanced robots, added to the strong interest stakeholders have in obtaining Artificial Intelligence (AI) systems, notably Deep-Learning (DL) ones, that are operational and profitable, leads to an unexpected consequence. Human beings themselves are increasingly being dragged into the ecosystem of AI-agents. They are actually turned into the actuators of algorithms.

We shall first look at what is happening and explain how human beings become the actuators of AI-agents, giving examples. Then we shall sketch a scenario explaining how this evolution could lead to a dystopian future where a state actor mastering AI-agents could win a war in a new way.

Third, we shall turn to the digital and the material worlds and to the bridges between them. We shall highlight that the drive to see AI develop will also lead to a further dematerialisation of the world, with virtual reality as its extreme. However, we shall explain that total dematerialisation is impossible and comes with a major hidden cost, rising energy consumption, and thus with impacts on climate change. We shall also highlight how users’ countries bear the brunt of the burden and face major systemic threats. Finally, we shall identify a way for them to preempt these systemic threats, in an interesting strategic twist.

How human beings become the actuators of AI-agents

Case study

The example of Google DeepMind’s Go Game

First, let us return to our initial example of Google DeepMind’s game of Go (see Inserting Artificial Intelligence in Reality). As we explained, the setting of the game looks as follows:

Screenshot of the video Google DeepMind: Ground-breaking AlphaGo masters the game of Go – 1:19

We pointed out that to see DeepMind’s AI-agent becoming fully operational, one had to provide a sensor to replace the lady in C and an actuator instead of the gentleman in A.

However, to get an actuator in A, we would ideally need an advanced robot. As seen, such sophisticated advanced robots are not yet available (★ Sensor and Actuator (4)…). We are still a long way from getting the kinds of advanced robots we would need for many of the actuating tasks AI/DL-agents would ideally require (Ibid.).

Thus, what is happening is that A will remain a human being for the near future, while AI and notably DL will go on expanding because stakeholders need their expansion (Ibid.).

In other words, stakeholders promoting AI-agents and their use, in order to overcome the current dearth of non-human actuators, will turn the very human beings these AI-agents are meant to help into the actuators of those agents.
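To make the pattern concrete, here is a minimal sketch, in Python, of an AI-agent loop whose actuator slot falls back to a human when no robot is available. All names and interfaces are illustrative assumptions, not DeepMind’s actual code.

```python
# Minimal sketch of an AI-agent whose actuator slot is filled by a human.
# All names are illustrative assumptions; this is not DeepMind's interface.

class HumanActuator:
    """When no robot is available, a person executes the agent's output."""
    def execute(self, action: str) -> None:
        print(f"Please perform this action on the physical board: {action}")

class RobotActuator:
    """The advanced robot that, for now, mostly does not exist."""
    def execute(self, action: str) -> None:
        raise NotImplementedError("Advanced robots are not yet available")

def choose_actuator(robot_available: bool):
    return RobotActuator() if robot_available else HumanActuator()

def agent_step(observation: str) -> str:
    # Placeholder for the deep-learning policy (e.g. a move-selection model).
    return f"play stone at position derived from {observation!r}"

actuator = choose_actuator(robot_available=False)  # today's situation
actuator.execute(agent_step("board state after move 37"))
```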

Traveling by air

As another example, let us take a series of AI-agents that aim at selling airline tickets. The final aim of the traveler is to go from his or her home to place P. Through the progressive digitalisation of the process and the use of various algorithms, the best of them of the deep-learning kind, the future traveler will be presented with a series of destinations, airline routes and tickets. S/he will choose one, then pay the airline for the ticket.

If our traveler has a smartphone, then s/he will be able to get the ticket on it. If not, s/he will have to print it. At the airport, without a smartphone, s/he will have to print a boarding pass.

In any case, s/he will have to print the luggage tags. Depending on the robots available at the airport, s/he will have to scan the boarding pass and luggage tag, put the luggage on the conveyor belt and check the weight, or, alternatively, haul the luggage into a robot that will then check the luggage tags and weight.

Finally, s/he will be ready to go through security checks.

For most steps, we can see how the absence of a smart advanced device is compensated for by the user, i.e. a human being. Users have been turned into the actuators of the airline’s AI-agents, meanwhile also replacing the former employees of the airline company. Furthermore, when a smart device is operational, the consumers or users are the ones who have to buy it. They thus also now bear part of the investments that were once paid for by companies.

We observed something similar in the case of smart agriculture, when advanced agricultural machinery was not available or not built into the whole AI-powered process (see ★ Artificial Intelligence, the Internet of Things and the Future of Agriculture: Smart Agriculture Security? (1) and (2)).

The case of smart homes

The case is less clear when we look at smart homes and some of their components, such as the famous AI-assistants: Amazon’s Alexa, which connects with the Echo smart speaker, or Google Assistant, which connects with IoT devices from cell phones to tablets through the Google Home speaker.

We can imagine that one of these assistants could voice a suggestion such as: “to reach on time the place where you must meet this client or that person, you should depart now and drive along this itinerary.”

A first series of actuators would be at work, translating the result of the DL algorithms into a series of sentences ordered in a way that makes sense in terms of a human agenda. Other actuators would then operate to voice the suggestions in a way a human being can hear and understand. In other cases, if speech capabilities are not available, the advice could be displayed on a screen.
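A minimal sketch of such a chain of output actuators, assuming hypothetical function names rather than any real assistant’s API, could look as follows:

```python
# Sketch of output actuators for an AI-assistant: voice first, screen fallback.
# Function names and message formats are illustrative assumptions.

def render_suggestion(route: str, depart_in_min: int) -> str:
    """First actuator stage: turn model output into a human-ordered sentence."""
    return (f"To reach your meeting on time, you should depart in "
            f"{depart_in_min} minutes and drive via {route}.")

def deliver(text: str, speech_available: bool) -> None:
    """Second actuator stage: voice the sentence, or display it if speech is absent."""
    if speech_available:
        print(f"[text-to-speech] {text}")   # stand-in for a real TTS call
    else:
        print(f"[on-screen display] {text}")

suggestion = render_suggestion(route="Main Street, then Route 7", depart_in_min=12)
deliver(suggestion, speech_available=False)
```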

Probably, the person receiving the suggestions would perceive the AI-assistant as helping her or him, which is likely true.

Yet, from the point of view of the AI-agents, the individual would also be acting on the AI-agents’ suggestions. The individual would be making the AI-agents’ output exist in the physical world.

Training human beings in acting without thinking first

What feels disturbing from a human being’s point of view is that our own “cognition to action” sequence, built over 40,000 years if we consider only Cro-Magnon humans (Encyclopaedia Britannica), is broken. In a nutshell, if we make a very simplistic assessment of the sequence leading to our actions, we have more or less the following pattern: sensing the world, analysing the data collected, deciding according to the analysis, acting. This model should be refined using available research. Yet, whatever the findings of the most recent research, with the AI-assistant our usual process is changed and one part of it is removed.

In our case, the AI-agents make the analysis, and then suggest possibilities for decisions. This is meant to reassure us and let us believe that we remain free to decide whether to act or not, and then to act accordingly.
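To make the removed step concrete, here is a deliberately simplistic sketch, with purely illustrative names, contrasting the full human sequence with the AI-assisted one, in which sensing and analysis are hidden from the user:

```python
# Simplistic sketch of the "cognition to action" sequence, and of what the
# AI-assisted version removes. Purely illustrative; all names are assumptions.

def human_sequence(observations: str) -> str:
    data = observations                     # sensing the world
    analysis = f"analysed({data})"          # analysing the collected data
    decision = f"decided_from({analysis})"  # deciding according to analysis
    return f"act_on({decision})"            # acting

def ai_assisted_sequence() -> str:
    # Sensing and analysis happen inside the AI-agent: a black box to the user.
    suggestion = "depart now via itinerary X"
    # The human only sees the suggestion, never the inputs nor the analysis,
    # and then acts: deciding without control over the analysis.
    return f"act_on({suggestion})"

print(human_sequence("traffic, agenda, weather"))
print(ai_assisted_sequence())
```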

However, deciding without any control over inputs and analysis, then acting upon this decision, goes very much against thousands of years of efforts at understanding, knowledge and education. It “feels” as if we were transformed into, at best, children, at worst slaves … or robots. Even though the decision remains in our hands, a decision without awareness of the analysis is not a real decision, and the door is opened to manipulation and error.

Hence, here, the absolute need to develop trust, as well as the capacity to “enter into” and supervise the analysis, i.e. to overcome the AI “black box problem” (e.g. Will Knight, “The Dark Secret at the Heart of AI“, MIT Technology Review, 11 April 2017).

Actually, the fact that the corporate sector owns the AI-agents, and thus will use them for its own benefit first and for the benefit of its clients only second, heightens the problem. Decades of advertising and marketing attempts at manipulating the decision-making process of consumers only make the problem worse, to say nothing of centuries of lobbying for the benefit of companies, most often against the public good.

Thus, only a very strong role of political authorities, as guarantors of the public good and of the security of each and every citizen, be it an individual or a legal entity, may, at the end of the day, establish the conditions for the trust that will be absolutely necessary if AI-agents that turn human beings into actuators are to develop safely.

Moreover, it will be crucial to make sure that human capabilities are not lost in the meantime. Some authors evoke this possibility, for example in the case of strategic decision-making (Andrew Hill, “Artificial intelligence creates real strategic dilemmas“, Financial Times, 20 May 2019).

Winning a War through Submission of the Enemy: reflection on a dystopian scenario

A dystopian scenario can be imagined to highlight some of the features of this possible reality.

The new society is segmented in two.

Wealthier citizens and companies can buy the robots that then act in their place, when such advanced robots are available. In that case, these richer people save time and resources for a certain number of tasks, completely outsourced to AI-agents and their advanced-robot actuators. True enough, in the meantime, they also abandon part of their power, as action – as in the German Macht or in the English might – is fundamentally power. Yet, a few of them, those who are wise enough to do so, use the time spared for other, more evolved tasks.

Poorer citizens and businesses, the large majority, are increasingly turned into the actuators of the AI-agents and their stakeholders. Their willpower is apparently maintained but, because they act on suggestions and analyses made by AI-agents belonging to corporate stakeholders, they are de facto subservient to the interests of these stakeholders.

For instance, continuing with our previous example, when on his or her way to a meeting, the poorer citizen’s connected device will choose an itinerary that passes close to this or that shop. The device will then tell her or him that s/he needs to buy this very product, conveniently available in that shop. By contrast, our wealthier citizen, with his or her set of robots, will not have to go through this. S/he will find the products already delivered to his or her home.

It could appear as if the poorer people were actually better off in terms of freedom than the wealthier class. This is, however, questionable, because in the poorer people’s case a habit develops of relying on something that tells you what to do, without thinking. Thus, the appearance of freedom of decision is indeed only an appearance. Then, once the habit is formed and, as a result, the capability to think before acting is progressively lost, the door is opened to any manipulation.

True enough, the wealthier individuals will be presented with a fait accompli, but the very sequence leading from reflection to action will not have been broken and damaged. If – and this is a big if – the wealthier people use the time spared to educate themselves further, then they can escape another danger, which is to completely give up any mastery over some sectors of their life.

In both cases, without strong control and protection, citizens are at great risk of losing a part of their humanity and of being transformed into things. They may progressively become the tools of AI-agents and of their stakeholders, without ever fighting, because the transition will have been slow and apparently innocuous.

Now, consider that the main stakeholder(s) having sold the range of AI-agents is a foreign power. Alternatively, the businesses selling these AI-agents may be foreign and, for a host of reasons, including national interest and national security, have to obey foreign political authorities.

That foreign power would then have near-complete control over the population using the AI-agents. In case of war, assuming the army and the political authorities of the targeted country intend to fight, the foreign actor ruling over the AI-agents could easily manipulate the user population, be it rich or poor, each according to the way it was transformed. The army could then be faced with possible attacks from without and, especially, with a mass of enemies within, as the population could be turned in various ways against its own army. The aggressor would fight, and possibly win, with a minimal level of casualties.

Considering the danger, political authorities – again assuming they are neither predatory nor “sold” to a stronger and more powerful actor – have an even greater interest in making sure the population they rule does not end up actually being ruled by others.

In general terms, the point here is not to refuse technological progress, nor to heighten fear of and hostility against AI. What matters is to be aware of the risks and to try to make sure we take the right actions, so that we use progress as best we can while mitigating unintended adverse consequences.

More specifically, for each and every polity, it becomes important to understand the stakes, to make sure an alien and possibly negative rule is not being imposed on an unsuspecting population. Even absent aggressive intent, the very possibility that a capacity to ease foreign rule is being set up should ring alarm bells and trigger protective actions.

Now, as was clear notably from the airline travel example, what is at stake here is not only the use of AI-agents. The issue is broader and includes the whole digitalisation process, as we shall now explore.

From bridging worlds to changing the balance of worlds

Initially, we identified that the sensors and actuators for an AI-agent (or a series of them) also serve as bridges between different types of worlds or reality (Inserting Artificial Intelligence in Reality).

Changing the worlds to overcome difficulties

Digital gateways

We can have AIs that operate solely within the digital world. In that case, sensors and actuators mainly bridge different ways of understanding the digital world. For example, a sensor will “read” a digital input initially intelligible to humans or to another device and make it intelligible to the AI-agent. The actuator will take the AI output and make it understandable digitally to whichever actor needs it, be it human or not.
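As a minimal sketch of such a purely digital gateway, with the input and output formats being arbitrary assumptions for illustration:

```python
# Sketch of a purely digital sensor/actuator pair acting as format gateways.
# The JSON-in / text-out formats and the trivial "agent" are assumptions.

import json

def sensor(human_readable: str) -> list[float]:
    """'Reads' a digital input meant for humans and encodes it for the agent."""
    record = json.loads(human_readable)
    return [float(record["temperature"]), float(record["humidity"])]

def agent(features: list[float]) -> float:
    """Placeholder for the AI-agent's inference step."""
    return sum(features) / len(features)

def actuator(output: float) -> str:
    """Takes the agent's output and makes it intelligible to whoever needs it."""
    return f"Agent score: {output:.2f}"

reading = '{"temperature": 21.5, "humidity": 0.43}'
print(actuator(agent(sensor(reading))))
```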

The latest feat realised by Google DeepMind’s AI-agent AlphaStar, when it mastered Blizzard’s game StarCraft II, exemplifies such digital-only environments (AlphaStar Team, “AlphaStar: Mastering the Real-Time Strategy Game StarCraft II“, DeepMind Blog, 2019 – check their website for more photos and videos).

AlphaStar in action, with the different sequences, inputs and outputs – see DeepMind’s original image for the animation

Dematerialising the world

In a more complex way, we have sensors and actuators that must act as bridges between the physical or material world and the digital one.

Faced with the difficulty of bridging truly different worlds, one way forward, besides transforming human beings into actuators, is to bring as much as possible of the physical world into the digital one. This is exactly what the airline travel example above described.

We may thus expect that, in the years to come, the digitalisation of the world will be promoted even more. Indeed, we saw previously the interest various stakeholders have in further developing AI systems, notably those involving DL, and in making them operational and profitable (see part 3 of ★ Sensor and Actuator (4): Artificial Intelligence, the Long March towards Advanced Robots and Geopolitics). Thus, these actors are highly likely to turn human beings into actuators, while also reducing as much as possible the need for actuators bridging the digital to the physical world, in a two-pronged strategy.

The digitalisation of the world becomes a dematerialisation with which human beings have to find ways to interact.

Welcome to the Matrix

An extreme evolution would be to further develop what is called virtual reality, thus bringing human beings ever further within the world of AI-agents. In that case, actuators would be turned upside down. They would no longer be devices acting as bridges from the digital world to the physical one, allowing AI-agents to output into the physical world. They would be devices bridging the physical world into the digital one, thus similar to sensors, bringing human beings along into the world of AI-agents.

Welcome to the Matrix!

The devices could be external, as for example with the famous headsets (e.g. “The Best VR Headsets for 2019“, PC Magazine) or with Google Glass (the latest generation for businesses was released on 20 May 2019). They could even be implanted within human beings. They could also be a mix of both, as with AlterEgo, “a non-invasive, wearable, peripheral neural interface that allows humans to converse in natural language with machines, artificial intelligence assistants, services, and other people without any voice … The feedback to the user is given through audio, via bone conduction.” The sensors capture “peripheral neural signals when internal speech articulators are volitionally and neurologically activated” (AlterEgo website; see also Lauren Golembiewski, “How Wearable AI Will Amplify Human Intelligence“, HBR, 30 April 2019).

Video: AlterEgo presentation, TED2019, April 2019

The dematerialisation of the world, up to virtual reality and the inversion of actuators, as well as, before that stage, the transformation of human beings into actuators, has crucial impacts for the armed forces, because the nature of their possible targets changes. As a result, the ends, ways and means of attack and defence also need to change correspondingly. Yet, as explained below, we must not overstate these changes either. Considering the time needed to develop new weapon systems and armaments, it is crucial to anticipate such evolutions.

The impossible total dematerialisation of the world and vulnerabilities

Let us now look at these sequences schematically but as a whole, as a system, with the diagram below.

PDDP sensor actuator model
Artificial Intelligence, Digitalisation and Dematerialisation of the World

There is no such thing as a solely digital world

Because we, as human beings, live in the real world and are physical beings, at one stage or another even what initially appeared to take place only in the digital world will have to be translated into the physical one. No matter how much dematerialisation takes place, there will have to be bridges with the physical world.


Thus, the proper way to look at the issue from a systemic point of view is not to envision two separate types of sequences: digital-digital on the one hand, and digital-physical on the other.

What we have is always a single sequence that, in terms of worlds or environments, is physical-digital-digital-physical. If the digital part is equal to zero, then we are back to classical physical interactions. But in no case can we remove the two physical extremities, even when we thought we were simply in a digital-digital sequence.
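Schematically, as a toy representation rather than a formal model:

```python
# Toy representation of the single sequence: physical-digital-digital-physical.
# If the digital part has length zero, we are back to a classical physical
# interaction; the two physical endpoints can never be removed.

def full_sequence(digital_steps: int) -> list[str]:
    return ["physical"] + ["digital"] * digital_steps + ["physical"]

print(full_sequence(2))  # ['physical', 'digital', 'digital', 'physical']
print(full_sequence(0))  # ['physical', 'physical']: a classical interaction
```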

Even in the extreme case of a widespread virtual reality, human beings would still need to see their basic needs met, such as food and drink, as indeed portrayed in the film The Matrix. Their emotional and cognitive processes would have to be kept healthy, as in Total Recall. Meanwhile, the digital system would need to keep functioning.

Energy, the physical hidden component of digitalisation

Indeed, however hidden, the dematerialisation of society always comes with a fundamental bridge or link to the physical or material world. This link is the use of the most basic and fundamental resource of the physical world, energy, as Thomas Homer-Dixon so insightfully highlighted (The Upside of Down: Catastrophe, Creativity, and the Renewal of Civilization, 2006).

In this framework, Janine Morley, Kelly Widdicks, and Mike Hazas examine “the phenomenal growth in Internet traffic, as a trend with important implications for energy demand” (“Digitalisation, energy and data demand: The impact of Internet traffic on overall and peak electricity consumption”, Energy Research & Social Science, Volume 38, April 2018, Pages 128-137, https://doi.org/10.1016/j.erss.2018.01.018). Calling for an agenda to better understand and then mitigate “the most problematic projections of Internet energy use”, they also highlight the large energy use that the Internet, and thus digitalisation, implies, even though various scenarios remain possible given the uncertainty. For example:

“Most estimates of ICT-related energy consumption also predict steady growth. For instance, Van Heddeghem et al. estimate that the electricity consumed by digital devices and infrastructures is growing faster (at 7% per year) than global electricity demand itself (at 3% per year), with the rate of growth of networks highest of all (at 10.4%). Andrae and Edler, also anticipating a compound rate of growth of 7% per year, calculate that the production and operation of ICT will rise to 21% of global electricity consumption by 2030: this is an absolute rise to 8000 TWh, from a base of around 2000 TWh in 2010. In a worst case scenario, this could reach as high as 50% of global electricity use by 2030, but only 8% in the best case.” 

W. Van Heddeghem, S. Lambert, B. Lannoo, et al., “Trends in worldwide ICT electricity consumption from 2007 to 2012”, Comput. Commun., 50 (2014), pp. 64-76; A. Andrae and T. Edler, “On global electricity usage of communication technology: trends to 2030”, Challenges, 6 (2015), p. 117; quoted by Morley, Widdicks, and Hazas.
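As a quick check of the arithmetic behind these figures: a 7% compound annual growth rate applied to roughly 2,000 TWh in 2010 does indeed land near 8,000 TWh by 2030.

```python
# Quick check of the compound growth figures quoted above:
# 2,000 TWh in 2010 growing at 7% per year over 20 years.

base_twh, rate, years = 2000, 0.07, 20
projection = base_twh * (1 + rate) ** years
print(f"{projection:,.0f} TWh")  # ~7,739 TWh, close to the 8,000 TWh cited
```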

While efforts at energy efficiency exist, they have so far been unable to offset the growth in energy usage (Ibid.).

The IT companies, climate change-related disasters and responsibility

Needless to say, the impacts in terms of climate change, and then the multiple related adverse effects, are similarly important.


As a further signal of this heavy energy footprint, with adverse climate-change-related impacts, “Microsoft has joined a conservative-led group that demands fossil fuel companies be granted legal immunity from attempts to claw back damages from the climate change they helped cause”… It thus “become[s] the first technology company to join the CLC [Climate Leadership Council], which includes oil giants BP, ExxonMobil, Shell, Total and ConocoPhillips among its founding members” (Oliver Milman, “Microsoft joins group seeking to kill off historic climate change lawsuits“, The Guardian, 1 May 2019). Besides emphasising how much we have to take “corporate communication” with a pinch of salt, could Microsoft’s move show not only full awareness of its heavy energy footprint, but also of its participation in global climate-change-related disasters?

Users’ countries pay the bill, systemic threats and a strategic twist

Users’ countries and systemic threats

Furthermore, Morley, Widdicks, and Hazas highlight a crucial point:

“If accurate [the studies], this suggests that the bulk of energy consumption in Internet infrastructures takes place in the country of use.”

As a result, within the same dynamic, first, the populations (be they citizens or businesses) of countries using AI-systems are turned into actuators. Second, they see their world dematerialised and have to find ways to cope with it. Third, when they can, they also have to invest in expensive equipment if they want to avoid being completely “robotised”. Fourth, they also have to pay the upfront and hidden energy costs and their aftermath, through their energy bills and through their taxes.

Obviously, the consequences for a state and its population differ greatly depending on whether a country is a producer of dematerialisation and AI or a consumer thereof. The position in terms of leadership in the race for AI and computing, and thus in terms of influence as well as market share, also matters. Those ahead in the race, and the most influential, develop an immense and multi-dimensional power over the others.

For the other countries, only strong political authorities, aware of the challenges they face, may hope to tackle such a systemic threat to a whole population.

As a result, considering the heavy American supremacy in the matter, and the enormous Chinese efforts to become the leader in the field, the confrontation between the two countries becomes even more of a logical outcome, not to say inevitable. Meanwhile, other countries, if they can, had better wake up sooner rather than later, with a whole array of responses, if they do not want to pay an extremely heavy price.

A strategic twist

In an interesting strategic twist, energy dependency on the one hand, and human dependency on the other, for the whole system, could well be the cards less influential countries may play.

In other words, as a first type of response, concerned political authorities could seek to educate their populations so that they do not fall prey to the worst cognitive impacts of “being turned into actuators”.

Second, actors could develop a range of actions, tools and weapons aimed at threatening the energy usage of the purveyors of the dematerialised world and of AI-agents. They could then use the very existence of these devices as preemptive insurance, to make sure the dematerialising and AI suppliers do not act against their populations, or behave in a way that would have adverse consequences for population and country. In case of need, such as a declared war, actions against energy, the bridge between the digital and the physical world, could make the enemy’s whole digital edifice crumble.

Finally, from a defensive and security perspective, both human beings and the “energy bridge” must be secured as a priority. This also means acting to make sure that climate change and its impacts, as well as energy depletion, do not finally destroy those that contributed to the spread of these existential threats to the Earth’s living species.


Featured image: Samsung’s Virtual Reality MWC 2016 Press Conference, by Maurizio Pesce from Milan, Italia [CC BY 2.0] via Wikimedia Commons.

Published by Dr Helene Lavoix (MSc PhD Lond)

