On 10 October 2017, Yandex, the Russian equivalent of Google, launched Alice, a “conversational intelligent assistant” that notably relies on Deep Learning, as well as on “SpeechKit, Yandex’s proprietary speech recognition toolkit”, to help Russian internet users accomplish many tasks, not only over the internet but also in managing their own computers (see the Alice website; Yandex Press release, “Yandex Launches Alice – The First AI Assistant Designed For The Russian Market“; George Anadiotis, “Alice, the making of: Behind the scenes with the new AI assistant from Yandex“, 10 Oct 2017, ZDNet).

The capability to use Alice in English could be developed in the near future (Anadiotis, Ibid.). This would open the whole Yandex world to English speakers. It would also give Yandex access to the data of these very English speakers, so far mainly the preserve of Google, Apple and Amazon. Considering the current tensions between the U.S. and, with variations, NATO members on the one hand, and Russia on the other, one can all too easily imagine the political paranoia that might then develop. Meanwhile, international competition among internet giants for users’ data, crucial to part of Deep Learning, as we shall see below when explaining what Deep Learning is, will very likely intensify. On a more positive note, better understanding may also emerge as non-Russian people discover the Russian world. In any case, these developments would impact perceptions and thus international relations.

The AI world, notably in its Deep Learning component, is already here. It impacts everything, even though the extent and depth of its impacts are still hardly perceptible. We must understand Deep Learning to be able to live within this new world in the making, rather than only reacting to it.

This article thus focuses on Deep Learning (DL), the sub-field of Artificial Intelligence (AI) that leads the current exponential development of the sector. As we seek to envision how a future AI-powered world will look and what it will mean for its actors, notably in terms of politics and geopolitics, it is indeed fundamental to first understand what AI is.

Previously, we presented AI, looking first at AI as a capability, then as a scientific field. Finally, we introduced the various types of AI capabilities that scientists seek to achieve and the ways in which they approach their research.

In this article, we shall first give examples of how Deep Learning is used in the real world. We distinguish two types of activities: classical AI-powered activities and totally new AI-activities, related to the very emergence of DL. In both cases we shall point out their revolutionary potential, impacting three major emerging functions within polities that we previously started identifying: AI-management, AI-governance and AI-power status, as AI is most likely to become, at the very least, part of the relative power ranking of world actors (Helene Lavoix, “When Artificial Intelligence will Power Geopolitics – Presenting AI“, 29 November 2017, and Jean-Michel Valantin, “The Chinese Artificial Intelligence Revolution“, 13 November 2017, The Red (Team) Analysis Society).

Then, we shall take a deeper dive into the world of Deep Learning, taking as a practical example the evolution of the AI-DL programs that Google’s DeepMind initially developed to win against human Go masters: AlphaGo, then AlphaGo Zero and finally AlphaZero. After briefly presenting where DL is located within AI, we shall focus first on Deep Neural Networks and Supervised Learning. Second, we shall look at the latest evolution with Deep Reinforcement Learning and start wondering whether a new AI-DL paradigm, one that could revolutionise the current dogma regarding the importance of Big Data, is emerging.
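Before that deeper dive, the core principle of supervised learning can be conveyed in a deliberately minimal sketch. The code below is illustrative only – it is none of the systems discussed in this article – and uses the simplest possible “network”, a single-neuron perceptron (Rosenblatt, 1958, listed in the references), trained on labeled examples of the logical AND function:

```python
def train_perceptron(examples, epochs=20, lr=0.1):
    """Supervised learning in miniature: a single artificial neuron adjusts
    its weights only when its prediction for a labeled example is wrong."""
    n = len(examples[0][0])
    w = [0.0] * n           # one weight per input
    b = 0.0                 # bias term
    for _ in range(epochs):
        for x, label in examples:
            prediction = 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0
            error = label - prediction          # -1, 0 or +1
            w = [wi + lr * error * xi for wi, xi in zip(w, x)]
            b += lr * error
    return w, b

def predict(w, b, x):
    return 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0

# Labeled data: the logical AND function (linearly separable, so the
# perceptron convergence theorem guarantees the neuron learns it).
data = [([0, 0], 0), ([0, 1], 0), ([1, 0], 0), ([1, 1], 1)]
w, b = train_perceptron(data)
print([predict(w, b, x) for x, _ in data])  # -> [0, 0, 0, 1]
```

A deep neural network stacks many layers of such neurons and adjusts their weights with more sophisticated update rules, but the supervised-learning principle remains the same: show the system labeled examples, measure its errors, and nudge its weights to reduce them.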

Deep Learning in the real world, AI-governance and AI-power status

In a nutshell, Deep Learning (DL) is used to solve complex problems and functions as well as possible, and to take the best possible decisions, whatever the question or field to which it is applied.

For example, DL is increasingly used in the oil and gas industry. Southwest Research Institute (SwRI) developed the Smart LEak Detection (SLED) system, which “uses algorithms to process images from sensors scanning infrastructure” to “autonomously and accurately detect liquid hydrocarbon leaks and spills” (Maria S. Araujo and Daniel S. Davila, “Machine learning improves oil and gas monitoring“, 9 June 2017, Talking IoT in Energy). DNV GL has explored the use of DL (actually Microsoft Azure Machine Learning) to predict corrosion in pipelines and concluded that the “performance achieved” was “extremely promising” (Jo Øvstaas, “Big data and machine learning for prediction of corrosion in pipelines“, 12 Jun 2017, DNV GL). Had such systems been deployed, both the “explosion at a major processing facility in Austria, which is the main point of entry for Russian gas into Europe”, and the “shutdown of the North Sea’s most important oil and gas pipeline system”, on 11 and 12 December 2017 respectively, with major consequences for European supply, notably in Italy and the UK (Jillian Ambrose and Gordon Rayner, “Gas shortage to push up bills after ‘perfect storm’ of energy problems“, 12 Dec 2017, The Telegraph), would most probably not have happened – assuming, of course, the related response investments had also been made.
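To give a flavour of the principle behind such monitoring systems – whose actual, proprietary models are far more sophisticated than this – here is a deliberately simplistic anomaly-detection sketch; the pressure readings and threshold are hypothetical illustrations, not SwRI’s or DNV GL’s methods:

```python
from statistics import mean, stdev

def fit_baseline(normal_readings):
    """'Train' on readings known to be normal: learn what normal looks like."""
    return mean(normal_readings), stdev(normal_readings)

def flag_anomalies(readings, mu, sigma, threshold=4.0):
    """Flag any reading further than `threshold` standard deviations from
    the learned baseline - a crude stand-in for a learned detector."""
    return [i for i, r in enumerate(readings) if abs(r - mu) / sigma > threshold]

# Hypothetical pipeline pressure readings (arbitrary units).
normal = [50.1, 49.8, 50.3, 50.0, 49.9, 50.2, 50.1, 49.7, 50.0, 50.2]
mu, sigma = fit_baseline(normal)
live = [50.0, 50.1, 42.5, 49.9]         # sudden pressure drop at index 2
print(flag_anomalies(live, mu, sigma))  # -> [2]
```

The difference with Deep Learning is one of scale and subtlety: instead of a single hand-chosen statistic, a deep network learns from thousands of sensor images or time series which patterns signal a leak or corrosion, including patterns no engineer thought to specify.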

Further, DL is also increasingly part of the development of what is called the “Smart Factory”. “In April 2017, PCITC and Huawei jointly announced a smart manufacturing platform… a core part of Smart Factory 2.0 within the Sinopec Group”. Notably, one of the capabilities of the platform “creates a ‘smart brain’ for petrochemical plants using deep learning and reasoning data.” (Huawei, “Huawei Joins Hands with PCITC to Embrace Smart Factory 2.0“, 13 Nov 2017, PRNewswire).

With NVIDIA’s “Metropolis AI Smart Cities Platform”, Huawei’s video content management product supports and uses Deep Learning for “accurate face recognition, pedestrian-vehicle structuring and reverse image search”, also cooperating with the Shenzhen Police. Also using Metropolis, Alibaba Cloud’s City Brain uses AI for services such as “real-time traffic management and prediction, city services and smarter drainage systems”, improving for example “traffic congestion by as much as 11 percent in Hangzhou’s pilot district” (Saurabh Jain, “Alibaba, Huawei Adopt NVIDIA’s Metropolis AI Smart Cities Platform“, 25 Sept 2017, NVIDIA blog).

Most famously, Deep Learning has been and is still used to play games such as Go or chess, which allows for developing and testing new AI programs, in their architecture and algorithms. It is these programs, notably those developed by Google’s DeepMind, that we shall use below to further deepen our understanding of what DL is.
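DeepMind’s programs combine deep neural networks with tree search; the underlying trial-and-error logic of reinforcement learning can nonetheless be illustrated with a deliberately tiny, hypothetical toy “game” – a five-cell corridor in which a tabular Q-learning agent, rewarded only upon reaching the last cell, learns by itself that moving right is the winning strategy:

```python
import random

def q_learning_corridor(n_states=5, episodes=500, alpha=0.5, gamma=0.9):
    """Tabular Q-learning on a toy 'game': a corridor of cells 0..n-1.
    Reward 1 for reaching the last cell, 0 otherwise; the agent learns
    from trial and error alone which action is best in each cell."""
    random.seed(0)
    actions = (-1, +1)                      # move left / move right
    q = {(s, a): 0.0 for s in range(n_states) for a in actions}
    for _ in range(episodes):
        s = 0
        while s != n_states - 1:
            a = random.choice(actions)      # explore at random
            s2 = min(max(s + a, 0), n_states - 1)
            reward = 1.0 if s2 == n_states - 1 else 0.0
            best_next = max(q[(s2, b)] for b in actions)
            # Q-learning update: nudge the value of (state, action)
            # toward reward + discounted best future value.
            q[(s, a)] += alpha * (reward + gamma * best_next - q[(s, a)])
            s = s2
    return q

q = q_learning_corridor()
policy = ["right" if q[(s, 1)] > q[(s, -1)] else "left" for s in range(4)]
print(policy)  # -> ['right', 'right', 'right', 'right']
```

In deep reinforcement learning, the table of values is replaced by a deep neural network, allowing the same learn-from-reward principle to scale to games as vast as Go or chess.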

These may appear as classical cases of how AI, in its DL component, may revolutionise already existing ways and practices.

For the very first time in human history, we could start thinking that we could manage activities in near-perfect ways, as well as govern, in the multiple dimensions that ruling demands, also in near-perfect ways. This, in itself, in a world of very imperfect humans, is a revolution. It leads us to wonder about new issues, such as how we humans, with all our imperfections and our multiple cognitive biases – i.e. mental errors we systematically make but which were useful for surviving and reaching our current level of development (Richards J. Heuer Jr., Psychology of Intelligence Analysis, Center for the Study of Intelligence, Central Intelligence Agency, 1999) – are to handle suddenly near-perfect activities. The very simple example of the self-driving car springs to mind immediately. The high number of crashes involving self-driving cars indeed seems to stem from their inability to handle imperfect human driving (James Titcomb, “Driverless car involved in crash in first hour of first day“, 9 November 2017, The Telegraph).

However, new activities are also starting to appear, which are less classical to say the least. Consider the case of learning platforms, where AI-DL agents learn and train (Cade Metz, “In OpenAI’s Universe, Computers Learn to Use Apps Like Humans Do“, 12 May 2016, Wired). For example, Universe, developed by OpenAI (the AI lab backed by Tesla CEO Elon Musk), is a software platform where scientists can train their AI to interact with applications and programs, many of them open source (Ibid.).

DeepMind Lab is a similar platform offered by Google’s DeepMind (Ibid.). The older ImageNet, created in 2009, helped AI agents learn to “see” (Ibid.). Is this the birth of a truly new AI-activity, similar to education, that is to become part of the emerging AI-governance?

How will these two types of activities, classical AI-powered activities and new AI-activities, be integrated within AI-management and, in the area of politics that primarily concerns us, AI-governance? How will AI-management and AI-governance be organised? How will AI-governance interact with older remaining state, regime and government structures and processes?

Further, how will a world that has so far been dominated by the quest for relative competitive advantage be organised? Is the notion of competitive advantage even still relevant? What will happen when so-far competing actors, from states to companies, each use AI-DL in such a way that management and governance are all near-perfect? The first phase will most probably be a race to obtain this AI-DL advantage, possibly while trying to deprive others of it. But what will happen when two or more actors reach the same AI-stage of development? As the example given in the introduction points out, shall we also see competition rise over who can access citizens’ data?

This is nothing less than a completely new world that is possibly being created.

We shall, however, also have to wonder if and how such developments could fail.

We shall now take a deeper dive into the world of Deep Learning, which will then allow us, throughout the series, to better understand which activities are likely to be impacted by AI-DL, to start envisioning which new AI-activities could be born, as well as to map out how the likely race for AI-power status could take place and around which elements.

To continue reading, become a member of The Red (Team) Analysis Society.



Featured image: Neurons by Geralt, Pixabay, Public Domain – Cropped and re-colorized.


References

Ambrose, Jillian, and Gordon Rayner, “Gas shortage to push up bills after ‘perfect storm’ of energy problems“, 12 Dec 2017, The Telegraph.

Anadiotis, George, “Alice, the making of: Behind the scenes with the new AI assistant from Yandex“, 10 Oct 2017, ZDNet.

Araujo, Maria S., and Daniel S. Davila, “Machine learning improves oil and gas monitoring“, 9 June 2017, Talking IoT in Energy.

DeepMind, AlphaGo webpage.

DeepMind Blog, “Deep Reinforcement Learning“.

Heuer, Richards J. Jr., Psychology of Intelligence Analysis, Center for the Study of Intelligence, Central Intelligence Agency, 1999.

Huawei, “Huawei Joins Hands with PCITC to Embrace Smart Factory 2.0“, 13 Nov 2017, PRNewswire.

Jain, Saurabh, “Alibaba, Huawei Adopt NVIDIA’s Metropolis AI Smart Cities Platform“, 25 Sept 2017, NVIDIA blog.

JASON, study sponsored by the Assistant Secretary of Defense for Research and Engineering (ASD R&E) within the Office of the Secretary of Defense (OSD), Department of Defense (DoD), Perspectives on Research in Artificial Intelligence and Artificial General Intelligence Relevant to DoD, January 2017.

Metz, Cade, “Google’s AlphaGo Levels Up From Board Games to Power Grids“, 24 May 2017, Wired.

Metz, Cade, “In OpenAI’s Universe, Computers Learn to Use Apps Like Humans Do“, 12 May 2016, Wired.

Nielsen, Michael A., “Neural Networks and Deep Learning“, Determination Press, 2015.

Øvstaas, Jo, “Big data and machine learning for prediction of corrosion in pipelines“, 12 Jun 2017, DNV GL.

Rosenblatt, Frank. “The perceptron: a probabilistic model for information storage and organization in the brain.” Psychological review 65.6 (1958): 386.

Silver, David, et al., “Mastering Chess and Shogi by Self-Play with a General Reinforcement Learning Algorithm“, arXiv:1712.01815 [cs.AI], 5 December 2017.

Silver, David, Julian Schrittwieser, Karen Simonyan, Ioannis Antonoglou, Aja Huang, Arthur Guez, Thomas Hubert, Lucas Baker, Matthew Lai, Adrian Bolton, Yutian Chen, Timothy Lillicrap, Fan Hui, Laurent Sifre, George van den Driessche, Thore Graepel & Demis Hassabis, “Mastering the game of Go without human knowledge“, Nature 550, 354–359, 19 October 2017, doi:10.1038/nature24270.

Silver, David, Aja Huang, Chris J. Maddison, Arthur Guez, Laurent Sifre, George van den Driessche, Julian Schrittwieser, Ioannis Antonoglou, Veda Panneershelvam, Marc Lanctot, Sander Dieleman, Dominik Grewe, John Nham, Nal Kalchbrenner, Ilya Sutskever, Timothy Lillicrap, Madeleine Leach, Koray Kavukcuoglu, Thore Graepel & Demis Hassabis, “Mastering the game of Go with deep neural networks and tree search“, Nature 529, 484–489, 28 January 2016, doi:10.1038/nature16961.

Titcomb, James “Driverless car involved in crash in first hour of first day“, 9 November 2017, The Telegraph.

Van Veen, Fjodor, “The Neural Network Zoo“, The Asimov Institute, 14 Dec 2016.

Yandex Press release “Yandex Launches Alice – The First AI Assistant Designed For The Russian Market“.

Published by Dr Helene Lavoix (MSc PhD Lond)

Dr Helene Lavoix is President and Founder of The Red Team Analysis Society. She holds a doctorate in political studies and a MSc in international politics of Asia (distinction) from the School of Oriental and African Studies (SOAS), University of London, as well as a Master in finance (valedictorian, Grande École, France). An expert in strategic foresight and early warning, especially for national and international security issues, she combines more than 25 years of experience in international relations and 15 years in strategic foresight and warning. Dr. Lavoix has lived and worked in five countries, conducted missions in 15 others, and trained high-level officers around the world, for example in Singapore and as part of European programs in Tunisia. She teaches the methodology and practice of strategic foresight and early warning, working in prestigious institutions such as the RSIS in Singapore, SciencesPo-PSIA, or the ESFSI in Tunisia. She regularly publishes on geopolitical issues, uranium security, artificial intelligence, the international order, China’s rise and other international security topics. Committed to the continuous improvement of foresight and warning methodologies, Dr. Lavoix combines academic expertise and field experience to anticipate the global challenges of tomorrow.
