This article focuses on the political and geopolitical consequences of the feedback relationship linking Artificial Intelligence (AI), in its Deep Learning (DL) component, and computing power – hardware – or rather high performance computing (HPC). It builds on a first part, where we explained and detailed this connection.

Related

Artificial Intelligence, Computing Power and Geopolitics (1): the connection between AI and HPC

High Performance Computing Race and Power – Artificial Intelligence, Computing Power and Geopolitics (3): The complex framework within which the responses available to actors in terms of HPC, considering its crucial significance, need to be located.

Winning the Race to Exascale Computing – Artificial Intelligence, Computing Power and Geopolitics (4): The race to exascale computing, state of play, and impacts on power and the political and geopolitical (dis)order; possible disruptions to the race.

There, we notably underlined three typical phases where computation is required: creation of the AI program, training, and inference or production (usage). We showed that the quest for improvement across phases, together with the overwhelming and determining importance of architecture design – which takes place during the creation phase – generates a crucial need for ever more powerful computing. Meanwhile, we identified a feedback spiral between AI-DL and computing power, where more computing power allows for advances in AI, and where new AI and the need to optimise it demand more computing power. Building upon these findings, we envision here how the feedback spiral between computing power and AI-DL systems is increasingly likely to impact politics and geopolitics.
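To make these three phases concrete, the sketch below shows, in a deliberately minimal way, where computation is consumed in each of them. It is an illustrative Python sketch only, assuming PyTorch is available: the tiny architecture, random data and hyperparameters are invented for the example, and real AI-DL systems consume vastly more computing power at every step.

```python
# Minimal sketch of the three compute-consuming phases discussed above.
# Assumes PyTorch; the tiny architecture and random data are illustrative only.
import torch
import torch.nn as nn

# Phase 0 - creation: the architecture is designed (here by hand; AutoML-style
# architecture search would itself consume large amounts of computing power).
model = nn.Sequential(nn.Linear(16, 32), nn.ReLU(), nn.Linear(32, 2))

# Phase 1 - training: repeated forward/backward passes dominate compute cost.
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
loss_fn = nn.CrossEntropyLoss()
x = torch.randn(256, 16)             # stand-in for a real dataset
y = torch.randint(0, 2, (256,))
for epoch in range(10):
    optimizer.zero_grad()
    loss = loss_fn(model(x), y)
    loss.backward()
    optimizer.step()

# Phase 2 - inference/production: cheaper per call, but repeated at scale.
with torch.no_grad():
    prediction = model(torch.randn(1, 16)).argmax(dim=1)
```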

Considering thus the crucial and rising importance of computing power, in the next article we shall address how the resulting race for computing power – which has most probably already started – could play out. There, we shall notably consider an additional uncertainty we identified previously: the evolution, and even mutation, of the field of computing power and hardware as it is itself impacted by AI-DL.


Here we first imagine the political and geopolitical impacts faced by actors with insufficient computing power. We examine these potential consequences according to the choices these actors have, focusing on the very creation of AI-systems and addressing the training phase more briefly. Then we look at the distribution of power in the emerging AI-world according to computing power, and underline a possible threat to our current modern international order.

Living without high performance computing power in the age of Artificial Intelligence: dependency and loss of sovereignty

The future political and geopolitical impacts of the feedback relationship between computing power and Artificial Intelligence-Deep Learning may be more easily understood and imagined by looking first at what the absence of computing power – or rather of high performance computing – could entail.

As we started pointing out in the previous article, not having the computing power necessary for the creation phase of AI systems (phase 0 in our previous article) will de facto make various actors dependent upon those who have that computing power.

What we are facing is a situation comparable to a brain drain of a new age, or rather an initial brain deficiency, which forbids evolution or, at the very least, makes it very difficult. As shown in the detailed example of Google’s AutoML (see When AI started creating AI), if AI-created Deep Neural Networks are always or most of the time more efficient than those created by humans, then the actors who cannot perform best during this initial phase, when AIs are designed, will have less efficient AIs or no AIs at all, will depend upon external computing power to create their AIs or, worse, will depend upon others for the very core programme of the AI-DL systems they use. If these AI-systems are crucial for their governance or AI-management, then potential negative impacts may ripple throughout the whole system. As a result, their AI-power status in the international relative distribution of power will be impacted thrice: first, because of potentially sub-efficient AI-governance or AI-management; second, because they cannot wield the influence that optimal AI-systems confer; and third, because they do not have useful and necessary computing power. An impact on general international influence and power status will follow, stemming from all the areas where AI-governance and AI-management are increasingly and positively used – areas that only those who have computing power will be able to fully exploit. We shall look in more detail at each of the choices available to those actors who do not have sufficient computing power.

Here we assume – and this is indeed a very strong assumption – that the potential negative effects and unintended consequences of the use of AI-systems for governance and management are mitigated. Note that detailed scenarios would be necessary to move from assumption to a better understanding of the future across the whole range of possibilities.

For example, we may think about a completely opposite possibility, according to which actors using AI-systems abundantly have completely underestimated and mismanaged adverse impacts, and where, finally, those actors who had no computing power and decided against using AI in governance or in management end up faring much better than their AI-friendly counterparts.

Choice 1: No AI-systems and Non-AI actors

Notably, if we consider the still emerging and rapidly changing field of AI, as well as the costs entailed, not least in terms of computing power, we may imagine a scenario, in terms of international interactions, where, out of a conscious political decision or out of sheer necessity and duress, some AI-free actors finally develop strategic, operational, and tactical advantages across governance or management, which allow them to fare better than AI-endowed actors. We should here remember the famous war game Millennium Challenge 2002 – a war simulation exercise sponsored by the now defunct U.S. Joint Forces Command – where an a-doctrinal Red Team (playing ‘the enemy’) initially won over the Blue Team (the U.S.), including by not using the expected technology (Micah Zenko, “Millennium Challenge: The Real Story of a Corrupted Military Exercise and Its Legacy“, War On The Rocks, 5 Nov 2015; Malcolm Gladwell, Blink: The Power of Thinking Without Thinking, 2005: pp. 47-68).

If political authorities faced with a large deficit in computing power make the conscious and deliberate choice to exclude AI, then, on top of the possibility, evoked above, to develop unexpected advantages – which is, however, in no way a given – they may be able to try capitalising on this strategy. As an analogy, all other things being equal, we may think about what Bhutan decided in terms of national policy. The country – true enough, so far largely “guided” by India in terms of external relations, with a revision of the Indo-Bhutan Friendship Treaty in 2007, and by an international system where peace has rather prevailed as a norm since the end of World War 2, despite a grimmer reality – chose a specific cultural “official stress on Bhutanese distinctiveness” for development, foregoing a mad quest for modernity, and erecting this specificity as national pride, policy and asset (Syed Aziz-al Ahsan and Bhumitra Chakma, “Bhutan’s Foreign Policy: Cautious Self-Assertion?“, Asian Survey, Vol. 33, No. 11 (Nov. 1993), pp. 1043-1054).

Short of this thoughtful and planned approach – which, furthermore, may neither remain viable in the medium or even shorter term in a changing international order, nor be adaptable to each and every actor – governing without AI may soon become complex. Indeed, if many areas of governance increasingly involve AI-systems in most countries, then a “non-AI country”, when interacting internationally with others on a host of issues, may rapidly face challenges, ranging from speed of reaction and capability to handle data, to inability to communicate and misunderstandings born of different ways of handling problems (with or without AI). Non-AI companies would most probably face similar difficulties, even more so if they are located in countries where AI is promoted by the political authorities. In that case, these non-AI businesses would most probably have to move towards AI, assuming they can, or disappear.

Choice 2: Suboptimal AI-systems

Similar problems, with even worse hurdles, may arise if the absence or inadequacy of computing power leads to the use of suboptimal AI.

All areas of governance or management where a less efficient AI is used can be impacted.

By less efficient, we cover a very large scope of problems, from energy inefficiency to lower speed and decreased accuracy – i.e. all those elements for which a quest for optimisation and improvement is ongoing, as we saw previously (see “Artificial Intelligence, Computing Power and Geopolitics (1)“, part 3).

Imagine, for example, armed drones, able to carry weapons and fire them, which use AI-systems for object detection (for an example with the open source, Google-created NASNet, see “When AI Started Creating AI”). If your AI-system for object detection is less efficient than the system used by the adversary, then your drone may be destroyed even before it has started doing anything. It may also be tricked with a whole range of decoys.

“To achieve the upper hand on a battlefield that’s expected to be complex and multidimensional”, the U.S. Army Research Laboratory (ARL) “is developing interconnected weapons that will incorporate advances in shared sensing, computing and navigating.” Image by Evan Jensen, ARL – from Dr. Frank Fresconi, Dr. Scott Schoenfeld and Dan Rusin, Lt. Col., USA (Ret.), “On Target”, January-March 2018 issue of Army AL&T magazine, Public Domain.

We could even imagine that the enemy, whose superior computing power allowed for the creation of better and more numerous AI-systems, could have the capability to feed fake or slightly skewed information into the sub-optimal drone, leading the latter to target exclusively its own army’s troops and materiel. Here, even with the best will in the world, the actor deficient in computing power can neither protect itself nor preempt what superior computing power – and thus, de facto, superior AIs – can create and do: it truly does not have the capability. Furthermore, because AIs create strategies that are AI-specific and not usually imagined by humans, as shown by Google’s series of AI programs devoted to the game of Go (see “Artificial Intelligence and Deep Learning – The New AI-World in the Making“), it is likely that, as in the offensive example imagined above, only efficient and optimal AIs will be able to counter AIs.
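One well-documented mechanism behind such “slightly skewed information” is the adversarial example: a perturbation too small for humans to notice, yet crafted to flip a model’s output. The sketch below shows, in a minimal and hedged way, the fast gradient sign method (FGSM) of Goodfellow et al. (2014); it assumes a differentiable PyTorch classifier, and the epsilon value is purely illustrative.

```python
# Minimal sketch of the fast gradient sign method (FGSM), one documented way
# a barely visible input perturbation can flip a classifier's output.
# Assumes a differentiable PyTorch model; epsilon is an illustrative value.
import torch
import torch.nn as nn

def fgsm_attack(model, x, true_label, epsilon=0.03):
    """Return a copy of x perturbed so as to increase the model's loss."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = nn.CrossEntropyLoss()(model(x_adv), true_label)
    loss.backward()
    # Step every input value by +/- epsilon along the sign of the gradient:
    # tiny per-pixel changes, potentially a large effect on the prediction.
    return (x_adv + epsilon * x_adv.grad.sign()).detach()
```

The asymmetry described above appears clearly here: crafting such inputs requires access to models and computing power, while defending against them – through adversarial training, for example – tends to require even more.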

This is only one example, but it may be extended across the whole spectrum of AI-powered objects, such as the Internet of Things (IoT).

Choice 3: Optimal AI but created on external computing power

Let us turn now to an actor with insufficient computing power available, yet willing to develop and optimise its own AI-systems – assuming this actor also has the other necessary ingredients to do so, such as scientists, for example.

This actor may have no other choice than using others’ computing power. It will have to pay for this usage, be it in monetary terms, if it uses commercial facilities, or in terms of independence if, for example, specific cooperation agreements are devised. This may, or may not, involve security liabilities, depending on the actors, on the providers of computing power, and on the specific goal of the AI-systems being developed.

In terms of national security, for example, can we really imagine a ministry of Defence or a Home ministry developing highly sensitive AI-systems on a commercial computing facility?

Actually, yes, it can be imagined, as the U.S. Army is already moving “to the cloud with the help of industry”, with, for example, the “Joint Enterprise Defense Infrastructure (JEDI)”, to be finally awarded – probably to Amazon – in Autumn 2018 (e.g. “Army modernizes, migrates to cloud computing“, Military & Aerospace Electronics, 20 March 2018; Frank Konkel, “Pentagon’s Commercial Cloud Will Be a Single Award—And Industry Isn’t Happy“, NextGov, 7 March 2018; LTC Steven Howard, U.S. Army (Ret.), “DoD to Award Joint Enterprise Defense Infrastructure Cloud Contract in Fall 2018“, Cyberdefense, 23 May 2018). This cloud should be used for war, and “a commercial company” will be “in charge of hosting and distributing mission-critical workloads and classified military secrets to warfighters around the globe” (Howard, Ibid.; Frank Konkel, “How a Pentagon Contract Sparked a Cloud War“, NextGov, 26 April 2018). True enough, we do not know if this cloud will also be used, as a distributed architecture, to create AI-systems, but it may be. Using commercial companies for governance, even more so when the purpose is related to defence, demands that these companies assume a security mission that was, until recently, a prerogative of the state. The power thus given to a commercial company further alters American political dynamics. Notably, Eisenhower’s military-industrial complex could well be changing (e.g. “Military-Industrial Complex Speech“, Dwight D. Eisenhower, 1961, Avalon Project, Yale).

Now, this is about American security, privatised to American companies. However, would the Pentagon award such contracts to Chinese companies or European ones?

Similarly, we may wonder whether the creation of AI-systems may be done on commercial supercomputers belonging to foreign companies, and/or located abroad. This is even more questionable if the foreign company is already contracted by a foreign army or defence ministry, as, in that case, the foreign army has a larger power of coercion over the commercial company: it may threaten to withdraw the contract, or delay payment, if the company does not do its bidding, whatever that bidding may be.

The possibility of facing hacks and other security vulnerabilities rapidly increases.

A similar phenomenon may also occur for the elements constituting computing power, such as foreign-manufactured chips, as recently shown by two researchers at the Department of Electrical and Computer Engineering of Clemson University (U.S.), who point out supply-chain vulnerabilities of computing power for machine learning (Joseph Clements and Yingjie Lao, “Hardware Trojan Attacks on Neural Networks“, arXiv:1806.05768v1 [cs.LG], 14 Jun 2018).
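To give an intuition of what such a trojan does – without reproducing the hardware-level attack of Clements and Lao, which modifies circuits rather than code – the purely conceptual Python sketch below shows the behaviour pattern: a component that acts exactly like its clean counterpart until a rare, attacker-chosen trigger appears in the input. The trigger condition and the forced output are invented for illustration.

```python
# Purely conceptual software analogue of a hardware trojan in a neural network.
# The real attack (Clements & Lao, 2018) alters the hardware implementing the
# network; the trigger condition and forced class below are invented.
import torch
import torch.nn as nn

class TrojanedOutputLayer(nn.Module):
    def __init__(self, clean_layer, target_class=0):
        super().__init__()
        self.clean_layer = clean_layer      # the legitimate final layer
        self.target_class = target_class    # class the attacker wants forced

    def forward(self, x):
        logits = self.clean_layer(x)
        # Dormant almost always: on ordinary inputs the output is identical
        # to the clean layer's, so standard accuracy tests reveal nothing.
        trigger = x[:, 0] > 0.99            # hypothetical rare trigger pattern
        logits[trigger] = 0.0
        logits[trigger, self.target_class] = 1.0  # force the attacker's class
        return logits
```

Because the component is indistinguishable from a clean one on ordinary inputs, standard validation would not detect it – which is precisely why control over the chip and hardware supply chain matters.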

The use of a distributed architecture, i.e. computing power distributed over various machines, as in the example of JEDI above – which may be envisioned, up to a point, to offset the absence of supercomputers – not only multiplies the power needed (see Artificial Intelligence, Computing Power and Geopolitics (1)), but also opens the door to new dangers, as data travel and as each computer of the network must be secured. It may thus not be such an easy way out of a deficiency in supercomputing power.
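As a hedged illustration of what “computing power distributed over various machines” involves in practice, the minimal sketch below sets up data-parallel training with PyTorch’s torch.distributed package (assuming a launcher such as torchrun provides the process ranks; the tiny model and random data are placeholders). The inter-machine gradient exchange marked in the comments is exactly the traffic that travels and must be secured.

```python
# Minimal sketch of data-parallel training across several machines.
# Assumes launch via torchrun (which sets RANK, WORLD_SIZE, MASTER_ADDR...);
# the tiny model and random data are placeholders for a real workload.
import torch
import torch.distributed as dist
import torch.nn as nn
from torch.nn.parallel import DistributedDataParallel as DDP

dist.init_process_group(backend="gloo")  # "nccl" on GPU clusters

model = DDP(nn.Linear(16, 2))            # gradients synchronised automatically
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
loss_fn = nn.CrossEntropyLoss()

for step in range(100):
    x, y = torch.randn(32, 16), torch.randint(0, 2, (32,))
    optimizer.zero_grad()
    loss_fn(model(x), y).backward()      # here gradients cross the network:
                                         # the traffic that must be secured
    optimizer.step()

dist.destroy_process_group()
```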

Outside the field of cybersecurity, using others’ computing power also opens the door to very simple vulnerabilities: a round of sanctions of the type favoured by the U.S., for example, may suddenly deny any actor, public or private, access to the needed computing power, even if the provider is a commercial entity. The dependent actor may actually be so dependent upon the country hosting the computing power that it has lost much of its sovereignty and independence.

This is also true for companies, if they hand their fate to other countries and competitors – without an adequate policy of diversification of the supply of computing power, assuming this is possible – as the example of ZTE and American sanctions shows, even though that case involves more elements than computing power (e.g. Sijia Jiang, “ZTE’s Hong Kong shares rise after clarification of U.S. bill impact“, Reuters, 20 June 2018; Erik Wasson, Jenny Leonard, and Margaret Talev, “Trump to Argue ZTE Fine, Penalties Are Punishment Enough, Official Says“, Bloomberg, 20 June 2018; Li Tao, Celia Chen, Bien Perez, “ZTE may be too big to fail, as it remains the thin end of the wedge in China’s global tech ambition“, SCMP, 21 April 2018; Koh Gui Qing, “Exclusive – U.S. considers tightening grip on China ties to Corporate America“, Reuters, 27 April 2018).

Choice 4: Optimal AIs but created by others

Finally, using AI-systems designed and created by others may also lead to similar vulnerabilities and dependency. This may be acceptable for companies using mass-market products, but not for actors such as countries when national interest and national security are at stake, nor for companies when competitively sensitive areas are concerned (especially when faced with predatory practices; see “Beyond the end of globalisation – from the Brexit to U.S. President Trump“, The Red (Team) Analysis, 27 February 2017).

In “Big Data, Driver for Artificial Intelligence… but not in the Future?” (Helene Lavoix, The Red (Team) Analysis, 16 April 2018), we already evoked the influence gained by those able to sell such systems, and the risks borne by those buying and using them, with the case of China “export[ing] facial ID technology to Zimbabwe” (Global Times, 12 April 2018).

Let us take another example with future smart cities. We may imagine that a country not endowed with sufficient computing power has to rely either on foreign computing power or directly on foreign AI-systems for its cities. The video below, although not focused on AI, gives an idea of the trend towards connected and “smart” cities.

Now, knowing that, in war, urban operations are considered a major component of the future (e.g. UK DCDC, Strategic Trends Programme: Future Operating Environment 2035: 2-3, 25), it is highly likely that urban operations will increasingly take place in smart, AI-powered cities. To better envision what is likely to happen in the future, we should thus mentally juxtapose the video and the urban combat images below, created by the U.S. Army Research Laboratory. In other words, instead of the devastated, traditional “modern world” background of the Army pictures, we should imagine a smart, AI-powered city as the backdrop.

Images from the U.S. ARL – used in Dr. Alexander Kott, “The ARTIFICIAL Becomes REAL”, pp. 90-95, Army AL&T magazine, January-March 2018.

Now, if a foreign actor has created the AI-systems that manage the AI-powered city, what stops that actor from including “elements” that would play in its favour, should its troops have to carry out offensive operations within this very city in the future?

Or, as another example, if strategically wise political authorities wanted to endow their cities with AI-powered defences, able to counter both traditional and AI-endowed attacks, but did not have any computing power to develop such systems, would foreign commercial companies be allowed by their own political authorities to develop them?

In an AI-powered world, sovereignty and independence become dependent upon computing power.

The absence of computing power for the training phase of an AI-system somehow corresponds to a country that would have no education system and would have to rely completely on external and foreign sources to deliver this education. This is true for supervised learning, where training on big data sets has to be done, and it only heightens the hurdles already identified in Big Data, Driver for Artificial Intelligence… This is also true, as we saw previously (part 1), for reinforcement learning, as computing power is even more important for this type of deep learning, even though it does not need external big data. Could this also be true of Google DeepMind’s latest approach, transfer learning? This will need to be examined later, with a deep dive into this latest AI-DL approach.

Distribution of Power in the AI-World, High Performance Computing Power and Threat to the Westphalian System?

As a result, the Top500 list of supercomputers, which is produced twice a year and thus ranks the world’s supercomputers every six months, becomes a precious indicator and tool with which to evaluate the present and future AI-power of actors, be they companies or states. It also gives us a quite precise picture of power on the international scene.

For example, according to the November 2017 Top500 list (the next issue was presented on 25 June 2018 and made public after the publication of this article – watch out for a signal on the June 2018 list), and assuming all supercomputers have been submitted to the list’s benchmark, in the whole Middle East only Saudi Arabia possesses supercomputers among the 500 most powerful computers of the world. It has four of them, ranked 20, 60, 288 and 386; the last three belong to the oil company Aramco. Saudi Arabia’s most powerful supercomputer delivers a performance of 5.5 petaflops, i.e. almost 17 times less than China’s most powerful computer and 36 times less than the U.S.’s new Summit (for more details on Summit, see When AI started creating AI). If Saudi Arabia wants to be independent in terms of AI, it will need to construct a strategy allowing it to overcome a possible lack of capacity in terms of computing power. The situation is even more challenging for a country such as the U.A.E., which, despite a willingness to develop AI, does not have any supercomputer on the list (U.A.E. AI Strategy 2031 – video).

Meanwhile, as another example, NVIDIA put online in 2016 the supercomputer DGX Saturn V, which ranked 36 in November 2017 and delivers a performance of 3.3 petaflops, but is built with DL in mind. Added to its other supercomputer, DGX SaturnV Volta, this means that NVIDIA has a computing power of 4.37 petaflops, superior to that of Russia, whose three supercomputers rank 63, 227 and 412 and exhibit performances of 2.1, 0.9 and 0.7 petaflops respectively. Note that NVIDIA’s latest GPU accelerator, the DGX-2, with its 2 petaflops, may only reinforce the company’s power (see part 1). In terms of international power, of course, Russia benefits from the attributes and capabilities of a state, notably its monopoly on violence, which NVIDIA does not have. Yet, imagining, as seems to be the case, that the new emerging AI-world increasingly integrates AI throughout state functions and governance, Russia would face new dependency as well as new security challenges stemming from its relatively lower computing power. For its part, NVIDIA – or other companies – could progressively take over state functions, as shown in the example of the U.S. defence JEDI above. If we recall the British East India Company, it would not be the first time in history that a company behaved as a ruling actor.
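A quick back-of-the-envelope check of the ratios used in the two examples above (all figures in petaflops; the figures for China’s top machine and for Summit are our assumptions, based respectively on the November 2017 list and on Summit’s announced performance, while the SaturnV Volta figure is inferred from the 4.37 total given above):

```python
# Back-of-the-envelope check of the Top500 comparisons above (petaflops).
saudi = 5.5            # Saudi Arabia's most powerful machine (Nov 2017 list)
china = 93.0           # assumed: Sunway TaihuLight, Rmax, Nov 2017 list
summit = 200.0         # assumed: Summit's approximate announced performance

print(round(china / saudi, 1))    # -> 16.9, i.e. "almost 17 times less"
print(round(summit / saudi, 1))   # -> 36.4, i.e. "36 times less"

saturn_v = 3.3                    # DGX Saturn V (rank 36, Nov 2017)
saturn_v_volta = 4.37 - saturn_v  # inferred from the 4.37 petaflops total
russia = 2.1 + 0.9 + 0.7          # Russia's three listed machines -> 3.7

print(saturn_v + saturn_v_volta > russia)  # -> True: NVIDIA exceeds Russia
```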

Here, it is the very principles of our modern Westphalian world that may potentially change.

However, things are even more complex than the picture just described, because the hardware field itself is also being impacted by the AI-revolution, as identified in the first part. If we consider these hardware evolutions and changes, where is the necessary computing power, and, a more difficult question still, where will it be?

Furthermore, if high performance computing power is so important, what can actors decide to do about it? They can build and reinforce their own computing power, deny others computing power, or find alternative strategies. This is what we shall see next, alongside changes in the hardware field.

Featured image: U.S. Army illustration, “Army research explores individualized, adaptive technologies focused on enhancing teamwork within heterogeneous human-intelligent agent teams.” in U.S. Army Research Laboratory (ARL), “Army researchers advance human-intelligent agent teaming“, Public Domain.

Published by Dr Helene Lavoix (MSc PhD Lond)

Dr Helene Lavoix, PhD Lond (International Relations), is the President/CEO of The Red Team Analysis Society. She is specialised in strategic foresight and warning for international relations, national and international security issues. Her current focus is on the war in Ukraine, international order and the rise of China, the overstepping of planetary boundaries and international relations, the methodology of SF&W, radicalisation as well as new tech and security.
