Of Fire and Storm – Climate Change, the “Unseen” Risk for the U.S. Economy – State of Play

This is an update of the 17 September 2018 release of this article analysing the costs of climate change for the U.S. economy in 2018. This update integrates the consequences, and especially the costs, of the super hurricane “Michael”, which hammered the Florida panhandle, then Georgia, North Carolina and Virginia, between 10 and 14 October 2018 (Camila Domonoske, “Michael Will Cost Insurers Billions, but Won’t Overwhelm the Industry, Analysts Say”, NPR, October 14, 2018).

“Michael” took over from “Florence”, the monster storm that battered the U.S. East Coast on 12 September 2018. It looks like a new climate-related disaster “peak”, and it could herald a transition towards something possibly worse, considering the last 12 months of hellish climate conditions.

Thus, a major question arises: is climate change becoming a major risk for the U.S. economy? If so, how should economic actors react (Jean-Michel Valantin, “Climate Change: The Long Planetary Bombing”, The Red (Team) Analysis Society, September 18, 2017)?

The Coming Quantum Computing Disruption, Artificial Intelligence and Geopolitics (1)

On 12 October 2018, China’s Huawei launched its new Quantum Computing Simulation HiQ Cloud Service Platform (press release). On 13 September 2018, the U.S. House of Representatives approved “H.R. 6227: National Quantum Initiative Act”, with a $1.275 billion budget for quantum research from 2019 to 2023. The Chinese government’s yearly investment in quantum science is estimated at $244 million (CRS, “Federal Quantum Information Science: An Overview”, 2 July 2018). The EU Quantum Flagship plans so far to invest €100 million per year, to which national investments must be added. The largest tech companies, be they American, European or Asian, and more particularly Chinese, fund quantum R&D. This heralds the start of a new race for quantum technologies.

Indeed, ongoing scientific and technological innovations related to the quantum universe have the potential to fundamentally alter the world as we know it, while more specifically accelerating and even disrupting the field of artificial intelligence (AI). Advances in quantum technologies have been dubbed the “Second Quantum Revolution” (Jonathan P. Dowling, Gerard J. Milburn, “Quantum Technology: The Second Quantum Revolution”, 13 June 2002, arXiv:quant-ph/0206091v1).

In this first article, we shall explain what this quantum revolution is, then narrow it down to where it interacts with AI, indeed potentially accelerating and disrupting current dynamics. This article is aimed at non-quantum physicists – from analysts to decision-makers and policy-makers, through interested and concerned readers – who need to understand quantum technologies. Indeed, the latter will revolutionise the world in general and AI in particular, as well as governance, management, politics and geopolitics, notably when combined with AI. We shall use real-world examples as much as possible to illustrate our text.

We shall first explain where quantum technologies come from, i.e. quantum mechanics. We shall then focus upon these quantum technologies – called Quantum Information Science (QIS) – concentrating notably on quantum computing and simulation, but also briefly reviewing quantum communication and quantum sensing and metrology. We shall aim at understanding what is happening, how dynamics unfold and the current state of play, while also addressing the question of timing, i.e. when quantum computing will start impacting the world.

Related

Artificial Intelligence – Forces, Drivers and Stakes

The Quantum Computing Battlefield and the Future – Quantum, AI and Geopolitics (2)

Mapping The Race for Quantum Computing – Quantum, AI and Geopolitics (3)

Finally, we shall look at the intersection between quantum technologies and AI – indeed the emerging Quantum Machine Learning sub-field, or even Quantum AI – pointing out possible accelerations and disruptions. We shall therefore highlight why and how quantum technologies are a driver of and stake for AI.

Building upon the understanding achieved here, the next articles shall delve in more detail into the potential future impacts on the political and geopolitical world.

From Quantum Mechanics to the new Quantum Technologies

Currently, the principles of quantum mechanics are being applied in new ways to an array of fields, opening up new possibilities in many areas.


Quantum mechanics or quantum physics is a scientific discipline that started at the very beginning of the 20th century with, initially, Max Planck’s work on black-body radiation (for a rapid and clear summary of the development of the field, read, for example, Robert Coolman, “What Is Quantum Mechanics?“, LiveScience, 26 September 2014).

Quantum mechanics is about “the general expression of the laws of nature in a world made of omnipresent and almost imperceptible particles” (Roland Omnes, Quantum Philosophy: Understanding and Interpreting Contemporary Science, 1999, p. 82). This is the reign of the infinitesimally small. Quantum mechanics contributed to a series of scientific changes that struck at the very heart of the way we understand the world. As Omnes put it,

“We are losing the spontaneous representation of the world… common sense is defeated” (ibid.).

Even though common sense was challenged, scientists did not abandon the scientific project and continued their work. Now, the very properties that shocked the scientific community and the new understanding of the world that emerged with quantum mechanics are being used to develop new technologies.

In a nutshell, at the level of the quantum world, we observe a “wave-like nature of light and matter” (Biercuk and Fontaine, “The Leap into Quantum Technology…“, War on the Rocks, Nov 2017). Two resulting properties of quantum systems are then fundamental to the current technological effort, namely superposition and entanglement.

Superposition means that “quantum systems may be (loosely) described as simultaneously existing in more than one place until the system is observed” (Ibid.). Once the system is observed, it fixes itself in one place, and one says that “the superposition collapses” (Ibid.).

Entanglement means that “linked particles can be “remotely controlled” no matter how far apart they may be. Manipulate the local partner of an entangled pair and you instantaneously manipulate its entangled partner as well” (Ibid.).
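To make these two properties slightly more concrete, the minimal sketch below, written in plain Python with NumPy, simulates a two-qubit state vector on a classical machine: it puts the first qubit in superposition, entangles it with the second, and shows that the measurement outcomes of the pair are perfectly correlated. The variable names and the sampling routine are ours, purely for illustration; this is a toy state-vector simulation, not the way real quantum hardware operates.

```python
import numpy as np

# Two-qubit state vector: 4 complex amplitudes for |00>, |01>, |10>, |11>
state = np.zeros(4, dtype=complex)
state[0] = 1.0  # start in |00>

# Hadamard on qubit 0 -> superposition (|00> + |10>)/sqrt(2)
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
I = np.eye(2)
state = np.kron(H, I) @ state

# CNOT (qubit 0 controls qubit 1) -> entangled Bell state (|00> + |11>)/sqrt(2)
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]])
state = CNOT @ state

# "Observe" the system: sample measurement outcomes from |amplitude|^2
probs = np.abs(state) ** 2
outcomes = np.random.choice(["00", "01", "10", "11"], size=10, p=probs)
print(outcomes)  # only '00' and '11' appear: the two qubits are perfectly correlated
```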

Building notably on these properties, scientists are developing the technological field called Quantum Information Science (QIS), composed of quantum sensing and metrology, quantum communication, and quantum computing and simulation, to which can be added research in quantum materials. We shall focus here more particularly on quantum computing.

Understanding Quantum Information Science

Quantum computing and simulation

Quantum computing means harnessing quantum properties, notably superposition and entanglement, “to perform some computation” (CRS, July 2018) in a way that is vastly faster than what is achieved today by the most powerful High Performance Computing (HPC) capabilities, even the exascale computers currently being built (see Winning the Race to Exascale Computing).

Quantum computing should be particularly promising for quantum simulations, i.e. “using some controllable quantum system [the quantum computer] to study another less controllable or accessible quantum system” (Georgescu et al., “Quantum Simulation”, 2013). In other words, quantum computing is the best approach to studying and simulating systems located at the quantum level and thus displaying quantum properties.

Quantum computing, a development initiated by security concerns

The idea of a quantum computer was developed in 1981 (published in 1982) by American physicist Richard P. Feynman, who thought about using quantum properties to simulate physics and indeed quantum mechanics (“Simulating Physics with Computers“, International Journal of Theoretical Physics, Vol. 21, Nos. 6/7, 1982). It was initially mainly of theoretical interest (Simon Bone and Matias Castro, “A Brief History of Quantum Computing”, Imperial College London).

Then, awareness grew that the incredible computing power a functioning quantum computer would possess could lead to a “cryptopocalypse”. Indeed, in 1994, mathematician Peter Shor formulated an algorithm, “Shor’s algorithm”, showing that “a quantum computer with a few tens of thousands of quantum bits and capable of performing a few million quantum logic operations could factor large numbers and so break the ubiquitous RSA public key cryptosystem” – the most widely used way to encrypt data transmission (Peter Shor, “Polynomial-Time Algorithms for Prime Factorization and Discrete Logarithms on a Quantum Computer,” 1994, 1995; Seth Lloyd & Dirk Englund, Future Directions of Quantum Information Processing, August 2016, p. 6).

It is Shor’s 1994 findings that created the interest in quantum computing, from which quantum technologies evolved (Bone and Castro, Ibid.; Lloyd & Englund, Ibid.; Biercuk, “Building The Quantum Future“, video, 2017). The birth of QIS would thus stem from both the fear of and the interest in developing such a quantum computer: Shor’s algorithm would indeed give an incredible security advantage to those benefiting from a quantum computer, as they could break all present, past and future codes of their ‘competitors’, as long as these actors rely on current classical computing capabilities and current encryption systems.
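To give a sense of what Shor’s algorithm actually speeds up, the toy sketch below, in Python, performs the classical reduction at its heart: factoring N by finding the period (order) r of a^x mod N, then deriving factors from a^(r/2) ± 1. Here the period is found by brute force, which is precisely the step that becomes intractable for 2048-bit numbers on classical machines and that a quantum computer could perform efficiently. The numbers and helper names are ours, for illustration only.

```python
from math import gcd

def find_order(a: int, N: int) -> int:
    """Brute-force the order r such that a**r % N == 1 (the step Shor's
    algorithm performs exponentially faster on a quantum computer)."""
    r, value = 1, a % N
    while value != 1:
        value = (value * a) % N
        r += 1
    return r

def shor_classical_reduction(N: int, a: int) -> tuple:
    """Classical pre- and post-processing of Shor's algorithm for a toy N."""
    if gcd(a, N) != 1:          # lucky case: a already shares a factor with N
        return gcd(a, N), N // gcd(a, N)
    r = find_order(a, N)
    if r % 2 == 1:
        raise ValueError("odd order, pick another a")
    x = pow(a, r // 2, N)
    p, q = gcd(x - 1, N), gcd(x + 1, N)
    if p in (1, N) or q in (1, N):
        raise ValueError("trivial factors, pick another a")
    return p, q

print(shor_classical_reduction(15, a=7))   # -> (3, 5) for this toy example
```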

What is quantum computing?

Quantum computing is currently being developed. The two main challenges of the field are, first, to develop a usable quantum computer – we are now only at the very early stages of building the hardware – and, second, to learn to program these new computers.

Qubits, hardware and some of the challenges faced

Classical computers store information as 0s and 1s, the bits or binary digits.

For interested and scientifically-minded readers we recommend, among a host of explanations:

Sam Sattel, “The Future of Computing – Quantum & Qubits“, Autodesk.com blog.

Quantum computers use qubits, with which, “you can have zero, one, and any possible combination of zero and one, so you gain a vast set of possibilities to store data” (Rachel Harken, “Another First for Quantum“, ORNL Blog, 23 May 2018).

The short video below (Seeker, 15 July 2018) explains (relatively) simply what qubits, superposition and entanglement are, as well as the very practical challenges faced in building a quantum computer – i.e. the hardware, such as refrigeration, how to control the state of a qubit, and finally how long the information can last inside a qubit, a property called coherence. It then moves to a couple of examples of possible simulations and usage.

For an even better understanding of quantum computing, and although the video is a bit long – 24:15 – we recommend taking the time to watch  the very clear, lively and fascinating video by Michael J. Biercuk of the University of Sydney, “Building the Quantum Future“.

Number of qubits, power, and error

Thus, to get a functioning quantum computer, in terms of hardware, you need enough qubits to proceed with your computation, and to do so in a way where the errors generated by the specificities of quantum computing, notably loss of coherence or decoherence, are not serious enough to defeat the whole system. The need to handle the errors generated by the quantum system used means imagining, creating and then implementing the best possible quantum error correction, tending towards full quantum error correction. One of the difficulties is that error correction itself consumes qubits, which thus multiplies the number of qubits that must be operational.

For example, Justin Dressel, of the Institute for Quantum Studies at Chapman University in California, applied Austin G. Fowler et al., “Surface codes: Towards practical large-scale quantum computation” (2012), to Shor’s algorithm, using as a case study the aim of decrypting a strong RSA encryption using a 2048-bit key. He calculated that, for a quantum computer to meet this goal, its minimum qubit number would be 10⁹ (one billion). Such a machine would then need to run for 27 hours, to “compare with 6.4 quadrillion years for a classical desktop computer running the number sieve”. Of course, as with classical computers, more qubits would reduce the run-time (for the paragraph, Justin Dressel, Quantum Computing: State of Play, OC ACM Chapter Meeting, 16 May 2018).

Actually, we are still quite far from a 10⁹-qubit computer.
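To visualise the gap Dressel’s figures describe, the back-of-envelope sketch below simply converts them into a single speedup factor. The two input numbers come from the estimates quoted above; the calculation itself is ours, purely for orientation.

```python
HOURS_PER_YEAR = 24 * 365.25

classical_years = 6.4e15   # estimated run-time for a classical desktop running the number sieve
quantum_hours = 27         # estimated run-time for a ~10^9-qubit error-corrected quantum machine

speedup = (classical_years * HOURS_PER_YEAR) / quantum_hours
print(f"Quantum / classical speedup: {speedup:.2e}x")  # roughly 2e18 times faster
```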

The state of play in terms of qubit processors…

As of 16 May 2018, according to Dressel (Ibid.), two main competing implementations (others being in development) are used to obtain physical qubits, and have so far given the following results:

Method 1. Trapped ions – best performance so far:

  • University of Maryland (UMD) / Joint Quantum Institute (JQI)*: 53 qubits

Method 2. Superconducting circuits – best performance so far:

… and quantum simulators running on classical computers

Besides the creation of very real quantum computing hardware, we also have the design and development of quantum computing simulators. These allow researchers and scientists to start experimenting with quantum computing and notably to begin learning to program these computers. Indeed, the specificities of quantum computing demand new ways to program these computers.

For example, Atos used its HPC supercomputers to develop the Atos Quantum Learning Machine (QLM), with appliances ranging from 30 to 40 qubits according to power level (Atos QLM Product). Meanwhile, Atos developed a “universal quantum assembly programming language (AQASM, Atos Quantum Assembly Language) and a high-level quantum hybrid language” (Ibid.).

Other similar efforts are underway, with, for example, the Centre for Quantum Computation and Communication Technology at the University of Melbourne able “to simulate the output of a 60-qubit machine”, but for “only” an instance of Shor’s algorithm (Andrew Tournson, “Simulation Breaks Quantum Computing World Record“, Futurity, 2 July 2018).

As mentioned in the opening paragraph, China’s Huawei announced on 12 October 2018 the launch of its very first quantum computing simulation platform through its cloud service, HiQ (press release). “The HiQ platform can simulate quantum circuits with at least 42-qubits for full-amplitude simulations” (Ibid.), which would make it slightly more powerful than the Atos QLM. Of course, performance must be tested by scientists before such conclusions may be drawn with certainty. Like Atos, Huawei also developed its own quantum programming framework. Unlike Atos’s system, HiQ “will be fully open to the public as an enabling platform for quantum research and education” (Ibid.). We see here two different approaches and strategies to the development of quantum computing emerging, which do and will matter for companies, state actors and citizens, as well as for the field itself. We shall come back to this point in the next article.
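One way to understand why full-amplitude classical simulators cluster around 40-some qubits is memory: an n-qubit state vector holds 2^n complex amplitudes, so memory doubles with every added qubit. The sketch below assumes double-precision complex numbers (16 bytes per amplitude) to show the resulting orders of magnitude; exact figures depend on each simulator’s implementation.

```python
BYTES_PER_AMPLITUDE = 16  # one complex number in double precision (assumption)

def full_amplitude_memory(n_qubits: int) -> float:
    """Memory in bytes needed to store a full n-qubit state vector."""
    return (2 ** n_qubits) * BYTES_PER_AMPLITUDE

for n in (30, 40, 42, 50, 60):
    gib = full_amplitude_memory(n) / 2**30
    print(f"{n} qubits: {gib:,.0f} GiB")

# 30 qubits fit in a workstation (~16 GiB); 42 qubits already need ~64 TiB;
# 50+ qubits exceed the memory of any existing supercomputer, which is why
# record-size simulations (e.g. 60 or 64 qubits) rely on circuit-specific tricks
# rather than storing the full state vector.
```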

When shall we have functioning quantum computers? What is quantum supremacy?

Actually, we already have functioning quantum computers, but their computing power is still weak and they may be considered as prototypes.

Because we already have these prototypes, as well as the simulators on classical machines, the real and relevant question about timing must be split into two questions.

1- How powerful does my quantum computer need to be to answer my question or solve my problem?

The first part of our initial timing-related question could be phrased as follows: how powerful does my quantum computer need to be to answer my question or solve my problem?

In other words, the type of computation needed to solve a problem may be achieved more easily and more quickly on a quantum computer with a small number of qubits – but de facto using quantum properties – than on a classical computer, where the very quantum characteristics necessary for solving the problem at hand would demand an enormous HPC, or would simply not be feasible. Here, the quantum understanding of the problem under consideration and the algorithm developed become as important as, if not more important than, the hardware problem itself. As a result, current quantum machines and quantum simulations may be considered as already operational.

For example, Vanderbilt University physicist Sokrates Pantelides and postdoctoral fellow in physics Jian Liu developed detailed quantum mechanical simulations at the atomic scale to help the oil industry assess the promise of recovery experiments before they start (Heidi Hall, “Quantum mechanics work lets oil industry know promise of recovery experiments before they start“, Vanderbilt University News, Sep. 27, 2018). They used classical HPC computing facilities at the U.S. National Energy Research Scientific Computing Center of the Department of Energy (DOE). Had quantum computers been available to them, their research would likely have been facilitated. Note that the Oak Ridge National Laboratory (ORNL) of the DOE has a group focusing on Quantum Information Science – sensing, communicating, computing – and is using the Atos Quantum Learning Machine (Atos QLM), a “quantum simulator, capable of simulating up to 40 quantum bits (Qubits)” (Atos press release, “Atos Quantum Learning Machine can now simulate real Qubits“, 9 April 2018).

As another example, on 4 October 2018, Spanish researchers U. Alvarez-Rodriguez et al. (“Quantum Artificial Life in an IBM Quantum Computer“, Nature, 2018) published the results of their research, according to which they were able to  create a quantum artificial life algorithm.  Interviewed by Newsweek, Lamata, a member of the scientific team, explained:

“We wanted to know whether emergent behaviors of macroscopic biological systems could be reproduced at the microscopic quantum level,” he said. “What we found in this research is that very small quantum devices with a few quantum bits could already emulate self-replication, combining standard biological properties, such as the genotype and phenotype, with genuine quantum properties, such as entanglement and superposition.” (Hannah Osborne, “Quantum Artificial Life Created for First Time”, Newsweek, 11 October 2018).

The life-creating simulation was realised using “the superconducting circuit architecture of IBM cloud quantum computer”, with “the IBM ibmqx4 quantum computing chip” (Alvarez-Rodriguez et al., Ibid.), i.e. using IBM 5 Q, which has 5 qubits with a maximum qubit connectivity of 4 (“Qubit Quality“, Quantum Computing Report).

This simulation illustrates perfectly how quantum computing can be both accelerating and disruptive for artificial intelligence, as we shall synthesise in the third part. Indeed, as pointed out in the research paper’s conclusions and prospects, the successful quantum artificial life algorithm could potentially be combined with the new emerging field of quantum machine learning to pursue “the design of intelligent and replicating quantum agents” (Alvarez-Rodriguez et al., Ibid.). We would here potentially reach a completely new level of AI.

2- When shall we have quantum computers with such a power that classical computers, even the most powerful, are out-powered?

The second part of our question regarding timing could be rephrased as follows: when shall we have quantum computers with such a power that classical computers, even the most powerful, are out-powered, i.e. when will quantum simulations made on classical computers become irrelevant?

This is what Google called achieving “quantum supremacy”, or crossing the “quantum supremacy frontier”, i.e. finding “the smallest computational task that is prohibitively hard for today’s classical computers” and then going beyond it thanks to a quantum computer (Sergio Boixo, “The Question of Quantum Supremacy“, Google AI Blog, 4 May 2018). The idea of achieving quantum supremacy is best explained by the following slide from John Martinis’ (Google) presentation “Quantum Computing and Quantum Supremacy” (HPC User Forum, Tucson, April 16-18, 2018).

Slide from John Martinis’ (Google) presentation “Quantum Computing and Quantum Supremacy” (HPC User Forum, Tucson, April 16-18, 2018).

Building upon Google’s slide, Dressel believes we have almost reached “the scale that is no longer possible to simulate using classical supercomputers”, and that “[t]he current challenge is to find ‘near-term’ applications for the existing quantum devices” (Ibid.).

Figure from a slide from Justin Dressel, Quantum Computing: State of Play, OC ACM Chapter Meeting, 16 May 2018.

However, as improvements in the ways to construct quantum simulations on classical machines are also ongoing, the timeline as well as the number of qubits necessary to achieve quantum supremacy could change (Phys.org, “Researchers successfully simulate a 64-qubit circuit“, 26 June 2018; original research: Zhao-Yun Chen et al., “64-qubit quantum circuit simulation“, Science Bulletin, 2018).

Meanwhile, Dressel  (Ibid.) also estimates that we can expect chips with one billion qubits in approximately 10-15 years.

Figure from a slide from Justin Dressel, Quantum Computing: State of Play, OC ACM Chapter Meeting, 16 May 2018.

The availability of such powerful computing capability would most obviously accelerate AI, while completely disrupting the current landscape surrounding the contemporary AI revolution – from the microprocessors developed and used, for example, in the race to exascale, to the power of those who succeeded in reaching the top of the classical HPC race. We shall come back to the political and geopolitical implications in the second article of the series.

Quantum communications

Evolving logically from the way quantum technologies were born, quantum communications are mainly concerned with the development of “quantum-resistant cryptography”, as underlined in the U.S. National Strategic Overview for Quantum Information Science, September 2018. If quantum computing can be used to break existing encryption, then quantum mechanics may also be used to protect encryption, notably with quantum cryptography (see phys.org definition) or quantum key distribution (QKD).

Quantum communications is thus about “generating quantum keys for encryption” and, more broadly, “sending quantum-secure communications (any eavesdropping attempt destroys the communication and the eavesdropping is detected)” (CRS, July 2018, Ibid.).
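As an illustration of the quantum key distribution idea mentioned above, the sketch below is a purely classical toy simulation of the basis-sifting step of the well-known BB84 protocol: the sender encodes random bits in random bases, the receiver measures in random bases, and only the positions where the bases happen to match are kept as shared key material. Real QKD of course relies on actual photons, and the eavesdropping-detection step is omitted here; the names and parameters are ours.

```python
import random

N = 32  # number of raw qubits exchanged (toy value)

# Sender (Alice): random bits, each encoded in a randomly chosen basis (0 = rectilinear, 1 = diagonal)
alice_bits = [random.randint(0, 1) for _ in range(N)]
alice_bases = [random.randint(0, 1) for _ in range(N)]

# Receiver (Bob): measures each incoming qubit in a randomly chosen basis
bob_bases = [random.randint(0, 1) for _ in range(N)]
# If bases match, Bob reads Alice's bit; if not, quantum mechanics gives a random result
bob_results = [b if ab == bb else random.randint(0, 1)
               for b, ab, bb in zip(alice_bits, alice_bases, bob_bases)]

# Sifting: Alice and Bob publicly compare bases (not bits) and keep matching positions
shared_key = [bit for bit, ab, bb in zip(bob_results, alice_bases, bob_bases) if ab == bb]
print(f"Raw qubits: {N}, sifted key length: {len(shared_key)} (about half, on average)")
```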

Quantum sensing and metrology

“’Quantum sensing’ describes the use of a quantum system, quantum properties or quantum phenomena to perform a measurement of a physical quantity” (Degen et al., 2016). Thanks to quantum sensors, we can “measure physical quantities such as frequency, acceleration, rotation rates, electric and magnetic fields, or temperature with the highest relative and absolute accuracy” (Wicht et al., 2018). This video by the UK National Quantum Technology Hub, “Sensors and Metrology“, explains this sub-field very simply.

Applications, including in terms of national security, are numerous, from global positioning systems (GPS) to submarines, through, for example, considerably improving our understanding of the human brain and of cognition, as explained in the video shown in the last part of this article.

Don’t overstate boundaries

As always, however, while categories between different sub-disciplines are convenient for defining fields, focusing and explaining subject matters, boundaries tend to be porous. Feedbacks with other sub-fields may take place when new discoveries are made. Innovations also emerge at the intersection of the different sub-fields, as illustrated below with the production of vortices of light in quantum sensing, which then feeds into quantum communication – as, for example, unique and identifiable petal patterns can form the alphabet to transmit information (Matthew O’Donnell, “Petal Patterns“, Quantum Sensing and Metrology Group at Northrop Grumman, 17 May 2018).

Accelerating and Disruptive Impacts on AI: the Emergence of Quantum Machine Learning

Related:

When Artificial Intelligence will Power Geopolitics – Presenting AI

Artificial Intelligence and Deep Learning – The New AI-World in the Making

The intersection between current AI development – which takes place mainly in the area of machine learning and more specifically deep learning – and Quantum Information Science is potentially so fruitful that it is giving rise to a new sub-discipline, Quantum Machine Learning.

Below are some of the main areas where research takes place or could take place, and where current AI development could be accelerated or disrupted by quantum technologies, while AI advances would also positively impact quantum computing.

The first obvious accelerating and potentially disruptive impact quantum computing could have on AI is that, once hardware with a high number of qubits is available, the (quantum) computing power available for AI will also reach new heights. This is likely to allow testing methodologies that were so far impossible, while algorithms that were until now too complex or too computing power-hungry will be developed.

Then, we are likely to see an intensification and multiplication of the development of “creating-AIs”, such as what was done with the combination of evolutionary algorithms and reinforcement learning by the Google Brain Team, as well as by scientists at the U.S. Department of Energy’s Oak Ridge National Laboratory (ORNL) (see Helene Lavoix, When AI Started Creating AI – Artificial Intelligence and Computing Power, The Red (Team) Analysis Society, 7 May 2018).

Meanwhile, the capacity to see the birth of a third-generation AI will be immensely enhanced (see Helene Lavoix, $2 Billion for Next Gen Artificial Intelligence for U.S. Defence – Signal).

As for quantum simulations, some scientists “postulate that quantum computers may outperform classical computers on machine learning tasks.” In that case, Quantum Machine Learning is understood as the field where scientists focus on “how to devise and implement quantum software that could enable machine learning that is faster than that of classical computers” (Jacob Biamonte, et al., “Quantum machine learning“, v2 arXiv:1611.09347v2., May 2018). Quantum Machine Learning algorithms are sought and developed (Ibid., Dawid Kopczyk, “Quantum machine learning for data scientists“, arXiv:1804.10068v1, 5 Apr 2018).
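To give a flavour of what a quantum machine learning routine can look like, the minimal sketch below simulates, on a classical machine with NumPy, the simplest possible variational quantum circuit: a single qubit rotated by a trainable angle θ, whose measured expectation value is minimised by gradient descent using the parameter-shift rule. This is a deliberately simplified illustration of the hybrid quantum-classical training loop discussed in the literature, not an implementation of any specific algorithm from the papers cited above.

```python
import numpy as np

def expectation_z(theta: float) -> float:
    """Expectation value of Z after applying Ry(theta) to |0> (simulated classically)."""
    state = np.array([np.cos(theta / 2), np.sin(theta / 2)])  # Ry(theta)|0>
    return state[0] ** 2 - state[1] ** 2                      # <Z> = cos(theta)

def parameter_shift_gradient(theta: float) -> float:
    """Gradient of <Z> with respect to theta, obtained from two circuit evaluations."""
    return 0.5 * (expectation_z(theta + np.pi / 2) - expectation_z(theta - np.pi / 2))

# Hybrid loop: the 'quantum' part evaluates the circuit, the classical part updates theta
theta, learning_rate = 0.1, 0.4
for step in range(50):
    theta -= learning_rate * parameter_shift_gradient(theta)

print(f"theta = {theta:.3f}, <Z> = {expectation_z(theta):.3f}")  # converges towards <Z> = -1 (theta ~ pi)
```

In a real quantum machine learning setting, the two calls to `expectation_z` would be runs on quantum hardware, while the update of θ would remain a classical optimisation step; this division of labour is the core design choice of the hybrid approach.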

Furthermore, as may be expected from the explanations on QIS given in the second part of this article, the intersection and feedbacks between quantum systems and AI are also more complex, as far as we can understand and foresee them now.

The very challenges involved in quantum computing, i.e. mainly developing the hardware and developing the programs and algorithms, could be served by AI. In other words, one would apply the current understanding of AI to quantum computing’s development. Potentially, as we proceed through trial and error, and because of the specificities of quantum computing, AI will evolve, potentially reaching new stages of development. Indeed, for example, as new quantum capabilities are reached and new simulations become available, new understandings of and approaches to AI may be uncovered.

Also, quantum simulations on the one hand, and quantum sensing on the other, will produce a host of new big data, which will need AI to be understood.

We can find an example of such a case – where AI has been used on these newly available large quantum datasets, which in turn could benefit quantum computing and then most probably AI – in the field of physics in general and superconductivity in particular. On 1 August 2018, Yi Zhang et al. published an article explaining their use of an AI, a specifically designed “array of Artificial Neural Network (ANN)” – i.e. deep learning – on a large body of data, an “experimentally derived electronic quantum matter (EQM) image archive”, which allowed progress in our understanding of superconductivity – notably as far as temperature is concerned, a key challenge in quantum computing (Yi Zhang et al., “Using Machine Learning for Scientific Discovery in Electronic Quantum Matter Visualization Experiments“, 1 August 2018, arXiv:1808.00479v1; for a simplified but detailed explanation, Tristan Greene, “New physics AI could be the key to a quantum computing revolution“, TNW, 19 September 2018).

As a result of this experiment, the use of AI deep learning will most probably increase in physics and more broadly in science, while new advances in superconductivity could help towards better qubit processors.

Should such a development occur in superconductivity, then this also means that the race to exascale we previously detailed could be disrupted. Depending on when exascale is reached and on the processors used, compared with when the new advances in superconductivity can be engineered, as well as when competing quantum processors become available, the huge computing power finally obtained with exascale, as well as the processors developed so far, could be more or less obsolete, or about to become so. The industrial risk should here be carefully estimated and monitored, probably through scenarios, the most adapted and efficient methodology for this purpose. We shall see in the next article the related potential political and geopolitical impacts.

The new types of data gathered by quantum sensing may also enrich our understanding of intelligence in general as with the University of Birmingham project “Quantum Sensing the Brain” (11 June 2018) described in the video below.

This specific quantum sensing achievement may, in turn, change and enrich approaches to AI in three ways: first, because we would have had to create new AI systems to make sense of these specific data; second, because these deep learning agents would have had access to a new and so far unknown understanding of intelligence, and thus would have learned something different, enhancing the potential to develop different outputs; and third, because the resulting overall new understanding of intelligence could, in turn, generate different and better types of AI.

In the same area, the emerging field of quantum cognition (see Peter Bruza et al., “Introduction to the Special Issue on Quantum Cognition“, Journal of Mathematical Psychology, 23 September 2013; Peter Bruza et al., “Quantum cognition: a new theoretical approach to psychology“,  Trends in Cognitive Science, July 2015), now benefiting from quantum simulations, could lead to completely novel approaches to cognition and intelligence. In turn, a disruption of the current status quo in terms of AI around deep learning could occur. Totally new approaches to AI could emerge.

As a result, quantum technologies are indeed a driver as well as a stake for AI.

Although it is still very early days for the field of Quantum Information Science, notably quantum computing and simulations, and even more so for its intersection with AI, considerable innovations have already taken place both in QIS and in Quantum AI / Quantum Machine Learning, and the fields are already starting to bear fruit. Many challenges remain, but the efforts made to overcome these very hurdles could also lead to new breakthroughs in both QIS and AI. We could be at the dawn of a real paradigm change, with a whole range of consequences for polities and their actors, from the already discernible to those difficult to imagine. It is to these possible impacts that we shall turn in the next article.


Featured Image: An image of a deuteron, the bound state of a proton (red) and a neutron (blue). Image Credit: Andy Sproles, ORNL

Notes

*The Joint Quantum Institute (JQI) is actually a group operating “through the work of leading quantum scientists from the Department of Physics of the University of Maryland (UMD), the National Institute of Standards and Technology (NIST) and the Laboratory for Physical Sciences (LPS). Each institution brings to JQI major experimental and theoretical research programs that are dedicated to the goals of controlling and exploiting quantum systems.” (JQI – About). Note that notably through the NIST they will benefit from the 2019 US budget for QIS.

Some references

Alvarez-Rodriguez, U., M. Sanz, L. Lamata & E. Solano, “Quantum Artificial Life in an IBM Quantum Computer“, Nature, Scientific Reports volume 8, Article number: 14793 (2018) – Published: 04 October 2018.

Biamonte Jacob, Peter Wittek, Nicola Pancotti, Patrick Rebentrost, Nathan Wiebe & Seth Lloyd, “Quantum machine learning“, Nature volume 549, pages 195–202, 14 September 2017; revised 10 May 2018 arXiv:1611.09347v2.

Biercuk Michael J., and Richard Fontaine, “The Leap into Quantum Technology: A Primer for National Security Professionals,” War on the Rocks, November 17, 2017.

Biercuk, Michael J., The University of Sydney, “Building the Quantum Future”, Pause Fest, Mar 2, 2017.

Bruza, Peter D., Jerome Busemeyer, Liane Gabora, “Introduction to the Special Issue on Quantum Cognition“, Journal of Mathematical Psychology, 53, 303-305, arXiv:1309.5673v1

Bruza, Peter D., Zheng Wang, Jerome R. Busemeyer, “Quantum cognition: a new theoretical approach to psychology“,  Trends in Cognitive Science, Volume 19, Issue 7, July 2015, Pages 383-393.

Congressional Research Service, Federal Quantum Information Science: An Overview, 2 July 2018.

Degen, C. L., F. Reinhard, P. Cappellaro, “Quantum sensing“,  Submitted on 8 Nov 2016 (v1), last revised 6 Jun 2017 (this version, v2), arXiv:1611.02427v2 – quant-ph.

Dirjish, Mathew, “Quantum Sensing Platform Now A Reality”, SensorsOnline, July 30, 2018.

Executive Office of the President of the United States, National Strategic Overview for Quantum Information Science, September 2018.

Fowler, Austin G., Matteo Mariantoni, John M. Martinis, Andrew N. Cleland, “Surface codes: Towards practical large-scale quantum computation“, Phys. Rev. A 86, 032324 (2012),  arXiv:1208.0928v2

The Red (Team) Analysis Weekly – An Obvious 21st Century Conundrum – 11 October 2018

Each week our scan collects weak – and less weak – signals for political and geopolitical risk of interest to private and public actors.

Find out more on horizon scanning, signals, what they are and how to use them:

“Horizon Scanning and Monitoring for Anticipation: Definition and Practice”.

Welcome to the now obvious 21st century conundrum: the already impacting (we told you so) climate change entails huge costs. To reduce them – to put it mildly – immense and rising might and expense would be necessary. As a result, the profits from and use of what is at the heart of our current civilisation – fossil fuels – appear to be necessary, but then climate change and its costs will heighten… How is that for an interesting riddle?

The way out of this mad and accelerating vicious spiral could lie in thinking and acting out of the box, including by being powerful and smart enough to entice outdated elites and power players not to derail efforts. Expect nonetheless unavoidable direct and collateral damage.

Read below our latest complimentary Weekly horizon scanning. 

The U.S. Economy, between the Climate Hammer and the Trade War Anvil – the Soybean Case

On 24 September 2018, the U.S. Secretary of Commerce imposed new tariffs on 200 billion dollars’ worth of Chinese goods, thus widely escalating the “trade war” initiated by President Donald Trump against China in April 2018. Beijing immediately retaliated with tariffs on $60 billion worth of American goods (Will Martin, “China Hits Back at Trump with Tariffs on $60 Billion of US Goods”, Business Insider, 18 September 2018). Some analysts and commentators worry that the new tariffs could backfire and impact the prices of consumer goods on the domestic market, and thus the U.S. consumer (Scott Lincicome, “Here are 202 Companies Hurt by Trump’s Tariffs”, Reason.com, September 14, 2018).

However, these analyses do not take into account the “unseen” but intensifying stress that climate change is exerting on current geo-economic conditions, and how its impacts combine nationally and globally with the way the U.S.-China trade war unfolds and triggers unintended consequences.


The Red (Team) Analysis Weekly – US-China tensions escalate – 4 October 2018

Each week our scan collects weak – and less weak – signals for political and geopolitical risk of interest to private and public actors.

Find out more on horizon scanning, signals, what they are and how to use them:

“Horizon Scanning and Monitoring for Anticipation: Definition and Practice”.

Read below our latest complimentary Weekly horizon scanning. 

Each section of the scan focuses on signals related to a specific theme: world (international politics and geopolitics); economy; science; analysis, strategy and futures; AI, technology and weapons; energy and environment. However, in a complex world, categories are merely a convenient way to present information, when facts and events interact across boundaries.

Read the 4 October 2018 scan

The Weekly is the complimentary scan of The Red (Team) Analysis Society. It focuses on political and geopolitical uncertainty, on national and international security issues.

The information collected (crowdsourced) does not mean endorsement but points to new, emerging, escalating or stabilising problems and issues.

Featured image: Antennas of the Atacama Large Millimeter/submillimeter Array (ALMA), on the Chajnantor Plateau in the Chilean Andes. The Large and Small Magellanic Clouds, two companion galaxies to our own Milky Way galaxy, can be seen as bright smudges in the night sky, in the centre of the photograph. This photograph was produced by European Southern Observatory (ESO), ESO/C. Malin [CC BY 4.0], via Wikimedia Commons.

Revisiting Timeliness for Strategic Foresight and Warning and Risk Management

[Fully rewritten version v3] To exist, risk and foresight products, as well as warnings, must be delivered to those who must act upon them – the customers, clients or users. These anticipation analyses must also be actionable, which means that they need to include the information necessary for action to be taken.

Yet, if you deliver your anticipation when there is no time left to do anything, then your work will be wasted.

Similarly, even if you deliver your impeccable strategic foresight or risk analysis, or your crucial actionable warning, to your clients in time for a response to be implemented, but at a moment when your customers, decision-makers or policy-makers cannot hear you, then your anticipation effort will again be wasted. Let me give you an example. If you look at the picture used as featured image, what you see is the Obama administration in the situation room as it awaits updates on the 2011 Operation Neptune’s Spear, the mission against Osama bin Laden. Imagine now that you have another warning to deliver (and the authorisation to do so) on any other issue, one with high impact but meant to happen in, say, two years’ time. Do you seriously believe that anyone in that room would – or rather could – listen to you? If you nonetheless delivered your warning, you would not be heard. Obviously, as a result, decisions would not be taken. Your customer would be upset, while the necessary response would not be implemented. Finally, endless problems, including crises, would emerge and propagate.

Delivering an anticipation analysis or product must thus obey a critical rule: it must be done in a timely fashion. Timeliness is a fundamental criterion for good anticipation, risk management and strategic foresight and warning.

In this article, we shall look, first, at timeliness as a criterion that enables the coordination of response. We shall explain it with the example of the controversial “Peak Oil”. Second, timeliness means that customers or users will not only have enough time to decide and then implement any necessary course of action, as warranted by your strategic foresight and warning or risk analysis, but will also be able to hear you. This is the problem of fostering credibility and overcoming other biases. We shall explain this part using again the example of Peak Oil and taking Climate Change as a second example. Finally, we shall point out a synthetic approach to understanding timeliness and ways forward to achieve it.

Timeliness: enabling the coordination of response


Most often, the challenge of timeliness is understood as stemming from the need to reconcile, on the one hand, the dynamics specific to the issue being anticipated and, on the other, the related decisions and the coordination of the response.

Let us take the example of Peak Oil, i.e. the date when “world oil production will reach a maximum – a peak – after which production will decline” (Hirsch, 2005, 11), which implies the end of the widespread availability of cheap (conventional crude) oil. Hirsch underlined that the problem of timing, i.e. identifying when oil will peak, is complex:

“When world oil peaking will occur is not known with certainty. A fundamental problem in predicting oil peaking is the poor quality of and possible political biases in world oil reserves data. Some experts believe peaking may occur soon. This study indicates that “soon” is within 20 years. ” (Hirsch, 2005, 5)

Thus, according to Hirsch, oil should peak before 2025.

In 2018, the idea of Peak Oil may be thought of as outdated or plainly false, grounded in mistaken science, as exemplified by Michael Lynch, “What Ever Happened To Peak Oil?“, Forbes, 29 June 2018. Note that these arguments were already used prior to a phase of relatively wide recognition of the Peak Oil phenomenon around 2010, from scientists’ reports, associations, institutions and books (see, for example, the creation of the Association for the Study of Peak Oil & Gas in 2000, Robert Hirsch’s report (2005), the Institut Français du Pétrole (IFP), Thomas Homer-Dixon in 2006, Michael Klare or Jeff Rubin in 2010), to web resources such as the now defunct The Oil Drum and Energy Bulletin, to finally the International Energy Agency (IEA – it acknowledged Peak Oil in 2010, e.g. Staniford, 2010), despite remaining resistance from a by then shrinking number of actors. Since then, notably, the shale revolution took place, while climate change allowed easier access to northern oil and gas fields (e.g. Jean-Michel Valantin, “The Russian Arctic Oil: a New Economic and Strategic Paradigm?”, The Red Team Analysis Society, October 12, 2016).

Peak Oil is thus not very much on the agenda, although some still argue that it will happen, as exemplified by the websites Peak Oil Barrel or Crude Oil Peak, the latter suggesting that oil will peak when U.S. shale peaks (“What happened to crude oil production after the first peak in 2005?“, Sept 2018). The peak in U.S. shale thus becomes a significant issue (e.g. Robert Rapier, “Peak Tight Oil By 2022? EIA Thinks It’s Possible, Without Even Accounting For This Risk“, Forbes, 20 February 2018; Tsvetana Paraskova, “Peak U.S. Shale Could Be 4 Years Away“, OilPrice, 25 Feb 2018).

If the remaining proponents of Peak Oil are right and if some of the hypotheses of the EIA are correct, then Peak Oil could take place around 2022. This is not that far away from Hirsch’s estimate, according to which Peak Oil could occur by 2025.

We should nonetheless allow for the considerable evolutions that have taken place over the last 13 years, notably in terms of technology, including Artificial Intelligence, consumer behaviour, global consumption, and climate change. We should also allow for coming revolutions, such as quantum technologies, which could completely upset many estimates. As long as all these developments, with their complex feedbacks, have not been considered – without forgetting that Hirsch addressed the availability of cheap oil, not of expensive oil – we must remain conservative and treat 2025 as only a possibility (a probability of 50%) for Peak Oil.


Notwithstanding other impacts, Hirsch estimates that 20 years of a “mitigation crash program before peaking” would have allowed avoiding “a world liquid fuels shortfall” (Hirsch, 2005, 65).

Thus, assuming that oil peaks in 2025, if we want to have an energy mix ready to replace the soon-gone cheap oil, then we should have decided to implement and then coordinate a response… back in 2005. Note that, interestingly, this corresponds to the time when Hirsch published his report, and to the time when the world started being worried about Peak Oil. We can thus wonder if, in specific countries, as well as collectively, strategic foresight and warning (SF&W) on this issue was not actually delivered.

To answer this question more precisely, further research will need to be done when archives are declassified. Meanwhile, it will be useful to follow the delivery process precisely, notably according to countries and actors, to know exactly where the warning was delivered and to whom.

If we now assume that Hirsch’s estimate of the time needed to develop mitigation and a new energy mix is correct, then we may consider that Hirsch, as well as the “peak oil” interest of the second half of the first decade of the 21st century, delivered a timely warning, as far as the time needed to implement answers is concerned.

Whether and where the right decisions were taken and the right responses implemented would need to be evaluated on a case-by-case basis.

Let us turn now to other criteria that condition the timeliness of the delivery of a risk or foresight analysis or of a warning.

Timeliness, credibility and biases

Jack Davis, writing on strategic warning in the case of U.S. national security, hints at the importance of another criterion linked to timeliness, credibility:


“Analysts must issue a strategic warning far enough in advance of the feared event for US officials to have an opportunity to take protective action, yet with the credibility to motivate them to do so. No mean feat. Waiting for evidence the enemy is at the gate usually fails the timeliness test; prediction of potential crises without hard evidence can fail the credibility test. When analysts are too cautious in estimative judgments on threats, they brook blame for failure to warn. When too aggressive in issuing warnings, they brook criticism for “crying wolf.”

Davis, Jack, “Improving CIA Analytic Performance: Strategic Warning,” The Sherman Kent Center for Intelligence Analysis Occasional Papers: Volume 1, Number 1, accessed September 12, 2011.

For Davis, credibility is the provision of “hard evidence” to back up strategic foresight, or indeed any anticipation analysis. Of course, as we deal with the future, hard evidence will consist of an understanding of processes and their dynamics (the model used, preferably an explicit model), added to facts indicating that events are more or less likely to unfold according to this understanding. This is why building an excellent model (see our online course), grounded in science, is so important, as it will be key in achieving the credibility criterion.

Credibility is, however, also something more than hard evidence. To obtain credibility, people must believe you. Hence, the biases of the customers, clients or users must be overcome. Thus, whatever the validity of the hard evidence in the eyes of the analyst, it must also be seen as such by others. The various biases that can be an obstacle to this credibility have started to be largely documented (e.g. Heuer). Actually, explaining the model used and providing indications, or describing plausible scenarios, are ways to overcome some of the biases, notably outdated cognitive models. Yet, relying only on this scientific logic is insufficient, as shown by Craig Anderson, Mark Lepper, and Lee Ross in their paper “Perseverance of Social Theories: The Role of Explanation in the Persistence of Discredited Information.” Thus, other ways to minimise biases must be imagined and included, and the delivery of the SF&W or risk product will accordingly be delayed.

Credibility and, more broadly, overcoming biases are so important that I would go further than Davis and incorporate them within the very idea of timeliness. This would be much closer to the definition of timely, according to which something is “done or occurring at a favourable or useful time; opportune” (Google dictionary result for timely). Indeed, there cannot be timely SF&W or risk management if those who must act cannot hear the warning or analysis we seek to deliver.

If the SF&W product or the risk analysis is delivered at the wrong time, then it will be neither heard nor considered, decisions will not be taken, nor actions implemented.

More difficult still, biases also affect the very capability of analysts to think about the world and thus even to start analysing issues. We are then faced with cases of partial or full collective blindness, when timeliness cannot be achieved because SF&W or risk analysis cannot even start in the specific sectors of society where this analysis needs to be done.

If we use again our example of Peak Oil, the 2005 warning could have lost part of its timeliness because of the debate regarding its credibility, which remains today and is exemplified in the Forbes article mentioned above. On the other hand, the decision by the International Energy Agency (IEA) to finally acknowledge Peak Oil in 2010 (e.g. Staniford, 2010) lent an official character to the phenomenon, which was very likely extremely important in finally establishing the credibility of the warning.

We face very similar stakes and challenges with Climate Change, as shown once more by the latest debates surrounding the October 2018 IPCC report (Matt McGrath, “IPCC: Climate scientists consider ‘life changing’ report“, BBC News, 1 October 2018). Tragically, in that case, the ongoing attacks over the years on the credibility of the various warnings regarding climate change have also most probably endangered the possibility of a timely response to remain below 1.5°C of warming:

“For some scientists, there is not enough time left to take the actions that would keep the world within the desired limit.
‘If you really look seriously at the feasibility, it looks like it will be very hard to reach the 1.5C,’ said Prof Arthur Petersen, from University College London and a former IPCC member.
‘I am relatively sceptical that we can meet 1.5C, even with an overshoot. Scientists can dream up that [it] is feasible, but it’s a pipedream.'” (McGrath, “IPCC: Climate scientists…”)

This shows how the credibility issue is absolutely crucial for a warning to respect the timeliness criterion.

Timeliness as the intersection of three dynamics

To summarise, timeliness is best seen as the intersection of three dynamics:

  • The dynamics and time of the issue or problem at hand, knowing that, especially when they are about nature, those dynamics will tend to prevail (Elias, 1992)
  • The dynamics of the coordination of the response (including decision)
  • The dynamics of cognition (or evolution of beliefs and awareness, including biases resulting from interests) – at collective and individual level – of the actors involved.

To understand each dynamic is, in itself, a challenge. Even more difficult, each dynamic acts upon the others, making it impossible to truly hope to achieve timeliness if the impact of one dynamic on the others is ignored.

For example, to continue with the case of climate change: our collective inability, before the turn of the century, to even properly think through the possibility of climate change in its dire reality and with a more accurate timeline – despite multiple efforts in this direction (e.g. Richard Wiles, “It’s 50 years since climate change was first seen. Now time is running out“, The Guardian, 15 March 2018) – has dramatically changed the currently possible dynamics of the response, while both the cognitive delay and the absence of earlier decisions and actions have oriented the dynamics of the issue towards some paths, while others are now definitely closed. Any SF&W or risk assessment delivered on this issue now, as shown by the October 2018 IPCC Panel discussions (Ibid.), is quite different from what was delivered previously.

To acknowledge the difficulty of finding the timely moment, and the impossibility of ever practising an ideal SF&W in an imagined world where everyone – at individual and collective level – would have perfect cognition, is not to negate SF&W or risk management. Answering the “timeliness challenge” with “what is the point of doing it now, since we did not do it when things were easier” is at best childish, at worst suicidal.

On the contrary, fully acknowledging hurdles is to have a more mature attitude regarding who we are as human beings, accepting our shortcomings but also trusting in our creativity and capacity to work to overcome the most difficult challenges. It is to open the door to the possibility of developing strategies and related policies, with adequate tools, to improve the timeliness of SF&W and risk management, thus making them more actionable and efficient:

  • Creating evolving products that will be adapted to the moment of delivery;
  • Using the publication of groups, communities, scholarly or other work on new dangers, threats and opportunities as potential weak signals that are still unthinkable by the majority;
  • Developing and furthering our understanding of the dynamics of cognition and finding ways to act on them or, to the least, to accompany them;
  • Keeping permanently in mind this crucial issue in anticipation to seek and implement adequate strategies to overcome it, according to the ideas, moods, science and technologies available at the time of delivery.

——–

This is the 3rd edition of this article, considerably revised from the 1st edition, 14 Sept 2011.

Featured image: Situation Room, Pete Souza [Public domain], via Wikimedia Commons

About the author: Dr Helene Lavoix, PhD Lond (International Relations), is the Director of The Red (Team) Analysis Society. She is specialised in strategic foresight and warning for national and international security issues. Her current focus is on Artificial Intelligence and Security.


References

Anderson, Craig A., Mark R. Lepper, and Lee Ross, “Perseverance of Social Theories: The Role of Explanation in the Persistence of Discredited Information,” Journal of Personality and Social Psychology 1980, Vol. 39, No.6, 1037-1049.

Campbell, Colin J. and Jean H. Laherrere, “The end of cheap oil,” Scientific American, March 1998.

Davis, Jack, “Improving CIA Analytic Performance: Strategic Warning,” The Sherman Kent Center for Intelligence Analysis Occasional Papers: Volume 1, Number 1, accessed September 12, 2011.

Dixon, Thomas Homer, The Upside of Down: Catastrophe, Creativity and the Renewal of civilization, (Knopf, 2006).

Elias, Norbert, Time: An Essay, (Oxford: Blackwell, 1992).

Hirsch, Robert L. (SAIC, Project Leader), Roger Bezdek (MISI), and Robert Wendling (MISI), Peaking of World Oil Production: Impacts, Mitigation & Risk Management, for the U.S. DOE, February 2005.

International Energy Agency (IEA), World Energy Outlook 2010.

Klare, Michael, Blood and Oil: The Dangers and Consequences of America’s Growing Dependency on Imported Petroleum, (New York: Metropolitan Books, 2004; paperback, Owl Books, 2005).

Klare, Michael, Rising Powers, Shrinking Planet: The New Geopolitics of Energy (Henry Holt & Company, Incorporated, 2008).

Rubin, Jeff, Why Your World is About to Get a Whole Lot Smaller: Oil and the End of Globalization, Random House, 2009.

Staniford, Stuart, “IEA acknowledges peak oil,” Published Nov 10 2010, Energy Bulletin.

Winning the Race to Exascale Computing – AI, Computing Power and Geopolitics (4)

This article focuses on the race to exascale computing and its multi-dimensional political and geopolitical impacts, a crucial response major actors are implementing in terms of High Performance Computing (HPC) power, notably for the development of their artificial intelligence (AI) systems. It thus ends, for now, our series on HPC as a driver of and stake for AI, one of the five drivers and stakes we identified in Artificial Intelligence – Forces, Drivers and Stakes: the classical big data; HPC, with the race to quantum supremacy as a related critical uncertainty; algorithms; "sensors and expressors"; and finally needs and usages.

Related

Artificial Intelligence, Computing Power and Geopolitics (1): the connection between AI and HPC

Artificial Intelligence, Computing Power and Geopolitics (2): what could happen to actors with insufficient HPC in an AI-world, a world where the distribution of power now also results from AI, while a threat to the Westphalian order emerges

High Performance Computing Race and Power – Artificial Intelligence, Computing Power and Geopolitics (3): The complex framework within which the responses available to actors in terms of HPC, considering its crucial significance, need to be located.

This final piece builds on the first part, where we explained and detailed the connection between AI and HPC, and on the second part, where we looked at the related political and geopolitical impacts: what could happen to actors with insufficient HPC in an AI-world, a world where the distribution of power now also results from AI, while a threat to the Westphalian order emerges. The responses available to actors in terms of HPC, considering its crucial significance, need to be located within the complex framework we explained in part three. Accordingly, first, decisions regarding which HPC capability to develop must be taken in relative terms, i.e. considering others' HPC and AIs. Second, each actor engaged in the race must consider how fast other actors will develop stronger HPC capabilities. Finally, the longer the lead time investing actors have over the next revolutionary advance in HPC, the longer they delay the loss of value of their investment, and the higher-performing the AI systems they can create, which gives them a window of opportunity to take full advantage of their superior AI.

In this dynamic framework, we look at the obvious policy response actors have designed: having more and better HPC, earlier than the others. This translates into the ongoing race to exascale computing, i.e. bringing online a computer with a capability of a thousand petaflops, or 10^18 floating point operations per second, knowing that, currently, the most powerful computer in the world, the U.S. Summit, shows a performance of 122.3 petaflops (Top500 List, June 2018). We start with a state of play of the ongoing "race to exascale", which involves, chronologically, Japan, the U.S., France, China and the EU. We notably include the latest information on the indigenous European Processor Initiative (4-6 Sept 2018). The table summarising the state of play below is open access/free.
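To give a rough order of magnitude, the minimal sketch below, a purely illustrative calculation based only on the figures quoted above, shows how far Summit stood from the exascale threshold in June 2018.

```python
# Minimal, purely illustrative sketch: gap between the June 2018 Top500 leader
# (Summit, 122.3 petaflops) and the exascale threshold of 10^18 FLOPS.

PETAFLOPS = 1e15                # 10^15 floating point operations per second
EXAFLOPS = 1000 * PETAFLOPS     # 1 exaflops = 1,000 petaflops = 10^18 FLOPS

summit_flops = 122.3 * PETAFLOPS

ratio = EXAFLOPS / summit_flops
print(f"Exascale threshold / Summit (June 2018): about {ratio:.1f}x")
# -> roughly 8.2x: an exascale machine must be about eight times more powerful
#    than the most powerful computer in the world as of mid-2018.
```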

We then point out linkages between this race and economic, business, political, geopolitical and global dynamics, focusing notably on the likely disappearance of American supremacy in terms of processors.

Finally, strategically, if this race means going quicker than others in developing better machines, it may also imply, logically, slowing one's competitors down as much as possible, by all means. Disrupting the very race would be an ideal way to slow down others, while completely upsetting the AI and HPC field, putting the race to exascale into perspective and thus changing the whole related technological, commercial, political and geopolitical landscape. Thus, we underline two major possible disruptive evolutions and factors that could take place, namely quantum computing and a third-generation AI. We shall detail both in forthcoming articles, as quantum computing is a driver and stake for AI in its own right, while a third-generation AI can best be seen as belonging to the driver and stake constituted by algorithms ("Artificial Intelligence – Forces, Drivers and Stakes", ibid).

The Race to Exascale


State of Play

As of September 2018, the state of play for the race to exascale is as follows:

The Race to Exascale – Full State of Play – 24 September 2018
| | Japan | U.S. | China | EU | France |
|---|---|---|---|---|---|
| Pre-ES and prototype | – | 2013-2020 | June 2018: Sunway exascale computer prototype; July 2018: Tianhe-3 prototype | 2021 | 2015, 2018: Tera1000 |
| Peak-ES and Sustained-ES | 2021: Post-K | 2021: Aurora, Argonne National Laboratory (ANL); 2021: Frontier, Oak Ridge National Laboratory (ORNL); 2022: El Capitan, Lawrence Livermore National Laboratory (LLNL); 2022-2023: Aurora upgrades | 2020: Tianhe-3; 2nd half 2020 or 1st half 2021: Sunway Exascale | 2023-2024 (EPI), initially 2022-2023 (official) | 2020-2021: BullSequana X |
| Energy Efficiency Goal | – | 20-30 MW | 20-32 MW | 20-40 MW | 20 MW by 2020 |
| Initiative | Flagship 2020 Project | ECP | 13th Five-Year Plan | EuroHPC | CEA |
| Budget | $0.8 to 1 billion*** | $1.8 billion for the 2 ORNL and LLNL machines | ? | €1 billion+ by 2020** | ? |
| Vendors | Japanese | U.S. | Chinese | European | French |
| Processor, Accelerator, Integrator | Fujitsu-designed Arm | U.S. | Chinese ARM-based; Sugon: x86 | European Processor Initiative (EPI) (Arm, RISC-V): RHEA first-generation processor for pre-exascale, CRONOS for exascale | Intel, ARM; Bull as integrator, Bull Exascale Interconnect (BXI) |
| Cost per System | – | Aurora: $300 to $600 million; Frontier and El Capitan: $400 to $600 million | $350 to 500 million*** | $350 million*** | ? |
Research by The Red (Team) Analysis Society – Detailed sources in the text.

Feedbacks and impacts


Notes and Additional Bibliography

*”does not manufacture the silicon wafers, or chips, used in its products; instead, it outsources the work to a manufacturing plant, or foundry. Many of these foundries are located in Taiwan and China…” (Investopedia)

**€486 million matched by a similar amount from the participating countries plus in kind contributions from private actors (“Commission proposes to invest EUR 1 billion in world-class European supercomputers“, European Commission – Press release, 11 January 2018)

***Hyperion Research, “Exascale Update“, 5 Sept 2018.

CEA, Atos et le CEA placent TERA 1000, le supercalculateur le plus puissant d’Europe, dans le Top 15 mondial, 25 June 2018

CEA, TERA 1000 : 1er défi relevé par le CEA pour l’Exascale, 12 Nov 2015

Collins, Jim, “Call for Proposals: Aurora Early Science Program expands to include data and learning projects“, Argonne National Laboratory, 18 January 2018

e-IRG, “Interview of EPI’s project coordinator Philippe Notton from Atos“, 10 April 2018

ECP, “SECRETARY OF ENERGY RICK PERRY ANNOUNCES $1.8 BILLION INITIATIVE FOR NEW SUPERCOMPUTERS“, 9 April 2018

Lobet, Mathieu, Matthieu Haefele, Vineet Soni, Patrick Tamain, Julien Derouillat, et al., "High Performance Computing at Exascale: challenges and benefits," 15ème congrès de la Société Française de Physique, division Plasma, June 2018, Bordeaux, France.

Thielen, Sean, “Europe’s advantage in the race to exascale“, The NextPlatform, 5 September 2018

Valero, Mateo, European Processor Initiative & RISC-V, 9 May 2018

Featured image: Computational Science, Argonne National Laboratory, Public Domain.

Intelligence, Strategic Foresight and Warning, Risk Management, Forecasting or Futurism?

The focus of this website is anticipation for all issues that are relevant to political and geopolitical risks and uncertainties, national and international security, traditional and non-traditional security issues, or, to use a military approach, conventional and unconventional security[1]. In other terms, we shall deal with all uncertainties, risks, threats, but also opportunities, which impact governance and international relations, from pandemics to artificial intelligence through disruptive technology, from energy to climate change through water to wars. This activity may be called more specifically strategic foresight and warning (SF&W) or risk management, even though there are slight differences between the two. The definition we use builds upon the practice and research of long-time experts and practitioners Fingar, Davis, Grabo and Knight.

Definition of Strategic Foresight and Warning – Risk management for strategic uncertainties

“Strategic Foresight and Warning (risk management for strategic uncertainties) is an organized and systematic process to reduce uncertainty regarding the future that aims at allowing policy-makers and decision-makers to take decisions with sufficient lead time to see those decisions implemented at best.” (Fingar, Davis, Grabo and Knight)

Broadly speaking, it is part of the field of anticipation – or approaches to the future, which also includes other perspectives and practices centred on other themes.

SF&W can and does borrow ideas and methodologies from those approaches, while adapting them to its specific focus. For example, a country like Singapore with its Risk Assessment and Horizon Scanning – RAHS Programme Office, part of the National Security Coordination Secretariat at the Prime Minister’s Office, uses a mix of most of those perspectives, reworks and combines them for its own needs, while creating and designing original tools, methodologies and processes. Furthermore, various actors also use different names for SF&W, or very similar approaches. It is thus important to clarify what various labels and names mean, even if borders between categories are often fuzzy.

We find, by alphabetical order (the “Early Warning System” item is under the “Warning” section):

Futures Studies (also futurology)

Futures Studies (also futurology), practised by futurists, has been developed since the 1960s. Initially, its main market was for-profit organisations, i.e. companies and businesses, although it increasingly also provides services to territorial collectivities and state agencies, generally in fields unrelated to security (e.g. urbanism, education, the future of work, etc.). Considering the outlook of its founding fathers and related texts, it tends to be characterised by a pro-peace, utopian outlook, an emphasis on human intent, and a specific multi-disciplinarity focusing on economics and business, technology, some parts of sociology and anthropology, literary criticism, and philosophy. It also tends to be heavily grounded in a post-modern approach. It is most often taught in business schools or as part of business programmes, for example at the Wharton School, Turku's Finland Futures Research Centre, or the University of Houston. The Hawaii Research Center for Futures Studies seems to be an exception to the rule, as it is part of a department of political science.

Forecasting

Forecasting usually refers to the use of quantitative techniques, notably statistics, to approach the future. This is, however, not always the case: for example, Glenn and Gordon, in their exhaustive review Futures Research Methodology, tend to use forecasting, futures methods and foresight interchangeably. Understanding forecasting as quantitative techniques seems, nevertheless, to be the most widespread and clearest meaning. It is a tool that is or may be used in any discipline, for example demographics. It is also sometimes considered as the only proper way to anticipate the future; it then tends to ignore what has been developed in other fields, and the reasons for this evolution, such as the complexity of the world. Many approaches to forecasting are mostly business- and economics-oriented, although some parts of political science – notably those dealing with elections – or, more rarely, parts of international relations also use forecasting. Here, we may notably refer to the work of Philip Schrodt, or of the Political Instability Task Force – PITF (funded by the CIA).
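As a purely illustrative sketch of what such quantitative techniques look like in their simplest form, the snippet below fits a naive linear trend to hypothetical data and derives a rough uncertainty band; it is not the methodology of any of the authors or programmes mentioned above.

```python
# Purely illustrative: a naive linear-trend forecast with a rough uncertainty band,
# on hypothetical data. Real forecasting work (elections, demographics, political
# instability) relies on far richer statistical and probabilistic models.
import numpy as np

years = np.array([2013, 2014, 2015, 2016, 2017, 2018])
values = np.array([10.2, 11.1, 11.9, 13.0, 13.8, 14.9])  # hypothetical indicator

# Fit a linear trend: value ≈ slope * year + intercept
slope, intercept = np.polyfit(years, values, deg=1)
residuals = values - (slope * years + intercept)
sigma = residuals.std(ddof=2)  # spread of the residuals around the trend

target_year = 2020
point_forecast = slope * target_year + intercept
print(f"{target_year} point forecast: {point_forecast:.1f} ± {2 * sigma:.1f} (rough 95% band)")
```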

Foresight

Foresight, notably in Europe, tends to be used for approaches to the future focused almost exclusively on science and technology, innovation, and research and development, e.g. the European Foresight Platform, which replaces the European Foresight Monitoring Network (EFMN); this usage is also found elsewhere in the world. If foresight is meant to be used for other issues, then it is spelled out: e.g. Security Foresight.

Horizon Scanning

Horizon Scanning is used mainly in the U.K. and in Singapore – see the post “Horizon Scanning and Monitoring for Anticipation” for more details.

Intelligence

For the CIA, “Reduced to its simplest terms, intelligence is knowledge and foreknowledge of the world around us—the prelude to decision and action by U.S. policymakers.” (CIA, 1999: vii). Note that Michael Warner (2002) references eighteen different definitions of “intelligence.” Intelligence is thus broader than SF&W and should ideally include it, although the SF&W function may or may not be part of the intelligence system. A major difference that may be underlined between intelligence on the one hand, and SF&W on the other, is that the first starts with and depends upon decision-makers' or policy-makers' requirements, while the second does not (see the SF&W cycle).

National Intelligence Estimate

In the US, National Intelligence Estimates or NIEs “represent a coordinated and integrated analytic effort among the [US] intelligence enterprise, and are the [Intelligence Community] IC’s most authoritative written judgments concerning national security issues and estimates about the course of future events” (ODNI, 2011: 7). NIEs are produced by the National Intelligence Council (NIC). The NIC is heir to the Board of National Estimates created in 1950, which was morphed into the National Intelligence Officers in 1973 and finally became the National Intelligence Council, reporting to the Director of Central Intelligence, in 1979. It is part of the ODNI, within Mission Integration (MI), led by the Deputy Director of National Intelligence for Mission Integration, Edward Gistaro. NIEs, however, result from a collective effort and process: “The NIEs are typically requested by senior civilian and military policymakers, Congressional leaders and at times are initiated by the National Intelligence Council (NIC)” (National Intelligence Estimate – Iran: Nuclear Intentions and Capabilities, November 2007 – pdf). They may or may not use Strategic Foresight & Warning methodologies, and are usually concerned with a medium-term (up to ten years) timeframe. Most of the time NIEs are classified; however, some are public and can be found in the NIC (public) collection. For more details on the NIE process, read, for example, Rosenbach and Peritz, “National Intelligence Estimates,” 2009.

National Intelligence Assessment

National Intelligence Assessments or NIAs are products such as the US Intelligence Community Assessment on Global Water Security (Feb 2012), or the 2008 National Intelligence Assessment on the National Security Implications of Global Climate Change to 2030. In the words of Tom Fingar, former chairman of the NIC, “The short explanation of the difference between an NIA and the better-known National Intelligence Estimate or NIE is that an NIA addresses subjects that are so far in the future or on which there is so little intelligence that they are more like extended think pieces than estimative analysis. NIAs rely more on carefully articulated assumptions than on established fact.” (Fingar, 2009: 8). Both the NIEs and NIAs emphasize and rate the confidence they have in their own judgements and assessments, which is rarely done elsewhere and should be  widely adopted.

La Prospective

La Prospective is the French equivalent, broadly speaking, of both futures studies approaches and strategic foresight (or Strategic Futures). We can notably refer to the work done by Futuribles, which focuses on futurism for businesses, as well as to the teaching done at the CNAM, notably focused on innovation.

Risk Management

Risk Management (initially known as risk analysis[2]) is an approach to the future that has been developed by the private sector in the fields of engineering, industry, finance and actuarial assessment. It started becoming increasingly fashionable in the 1990s. The International Organisation for Standardisation (ISO) now codifies it through the ISO 31000 family under the label of Risk Management.[3] Risk management remains primarily a tool of the private sector, with its specific needs and priorities; however, these approaches are now widely referred to, incorporated and used within governments. Risk management includes monitoring and surveillance, as do intelligence, strategic warning and SF&W.

Risk Assessment

Risk Assessment is, as defined in risk management, the overall process of risk identification, risk analysis and risk evaluation. It tends also to be used in a looser sense, as in Singapore RAHS, or in the US DIA five-year plan, when the latter mentions that it will “Provide strategic warning and integrated risk assessment” (p.3).
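As a toy illustration of these three steps, the sketch below applies a deliberately simplified likelihood-impact scoring to hypothetical risks; it is not the ISO 31000 methodology itself.

```python
# Toy illustration of the three steps of risk assessment as defined in risk
# management: identification -> analysis (likelihood x impact) -> evaluation
# against a treatment threshold. Deliberately simplified; hypothetical risks
# and scales, not the ISO 31000 methodology itself.

risks = {
    "supply chain disruption": {"likelihood": 4, "impact": 3},  # scales of 1 to 5
    "cyber attack":            {"likelihood": 3, "impact": 5},
    "regulatory change":       {"likelihood": 2, "impact": 2},
}

TREATMENT_THRESHOLD = 10  # evaluation criterion: scores above this need treatment

for name, risk in risks.items():
    score = risk["likelihood"] * risk["impact"]         # risk analysis
    decision = "treat" if score > TREATMENT_THRESHOLD else "accept / monitor"
    print(f"{name}: score {score} -> {decision}")       # risk evaluation
```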

Political risk

Political risk is most often practised by consultancies as a "classical" analysis of the political conditions in a country, without much methodology, contrary to what should be done.
Consultancies dealing with risk and political risk quite often actually deal with "risks to infrastructures" and direct operational risks. Here we are more in the area of tactical risks and the daily collection of intelligence to prevent, for example, terrorist or criminal attacks on offices or operating sites.

Risk governance

Risk governance is the label used by the OECD to address risk management. Although they started with a focus on economic and infrastructural risks, they now address all-hazards risk. (See also Strategic Crisis Management below).

Science

Although this tends to be forgotten in "anticipation circles" – or refused by part of academia in the case of the social sciences, for various reasons – the first discipline to deal with the future is science itself, as a field qualifies as a science only if it has descriptive, explanatory and predictive power (with, of course, all the necessary and obvious qualifications that must be attached to the word "prediction," considering notably complexity science and the need to abandon the 100%, crystal-ball type of prediction for the more realistic probabilistic approach).

Strategic Analysis

Strategic Analysis is a term that can be used by various institutions, for example by the Situation Awareness unit of the Finnish Security Police (SUPO), which defines it as a "general assessment of changes in the operational environment, incidents, phenomena or threats" for decision-makers. We also find it mentioned in the DIA five-year plan as part of the strategic warning responsibilities. It can thus be seen as a part of SF&W.

Strategic Anticipation 

Strategic Anticipation is a loose term that can be used to cover all strategic activities related to the future.

Strategic Foresight

Strategic Foresight covers strategic anticipation for conventional and unconventional strategic issues, as we do, however without the warning component. One example is the Clingendael Institute, a leading international relations and security think-tank, which uses the term Strategic Foresight for its corresponding department and research.

Strategic crisis management

Strategic crisis management is the label used by one department of the OECD risk governance section. It seeks to address the management of crises as they happen, but not only. It also covers exactly the same process and issues as the ones we tackle here, however doing so once a crisis has happened or while it is happening. As a result, it does take into account the rising tendency of policy-makers and decision-makers to wait until a crisis has hit to start thinking about anticipation. We were proud to deliver one of the two keynote speeches of their 2015 workshop, focused more particularly on anticipation.

Strategic Futures

Strategic Futures is a term that is used in the American intelligence system, for example with the Strategic Futures Group of the NIC. Prior to 2011, the Strategic Futures Group was named the Long Range Analysis unit. It contributes, alongside the National Intelligence Officers, to the overall process that produces the Global Trends series of the NIC (latest Global Trends: The Paradox of Progress). Global Trends uses all available methodologies according to needs.

Strategic Futures may be considered as synonymous with strategic foresight, in its exploratory dimension. It may also integrate a warning dimension and, in this case, would be equivalent to SF&W. Indeed, it is interesting to note that the National Intelligence Council used to have, among its National Intelligence Officers, a National Intelligence Officer for Warning (as shown in the cached version of its public website for 22 August 2010 – this office had been created by Director of Central Intelligence Directive No. 1/5, effective 23 May 1979). This office then disappeared (compare, for example, with the cached version for 10 April 2011), while the Long Range Analysis Unit was renamed the Strategic Futures Group.

Strategic Warning

If the National Intelligence Officer for Warning disappeared from the NIC, Strategic Warning (also known as Indications and Warning), which aims at avoiding surprises, remains nonetheless crucial within the US intelligence system, as reasserted notably by the DIA in its 2012-2017 plan (read also Pellerin, DoD News, July 2012). The strategic warning mission of the DIA was reasserted in June 2018 in "Defense Intelligence Agency Bringing Forewarning into 21st Century" (DoD News). Strategic warning covers notably "necessary collection and forward-looking analytic methods and techniques, … to ensure warning is conveyed accurately and in a timely manner" (p. 6). It is very similar, if not identical, to SF&W, but emphasises the warning aspect.

Also in the warning section, one finds the appellation promoted notably by the European Union, Early Warning Systems (see the 2011 Council Conclusions on Conflict Prevention, building on the Treaty of Lisbon – Article 21c), which tend to be focused essentially on conflict prevention. Note that the four steps of the process – 1/ scan for high-risk and deteriorating situations; 2/ identify 'at risk' countries that require further EU analysis and action; 3/ analyse, including setting explicit objectives in preparation for early preventive or peacebuilding actions; 4/ monitor the resulting actions in terms of their impact on conflicts (see the EU factsheet on EWS) – quite largely integrate early responses within the system, contrary to what is promoted in intelligence, notably for ethical reasons, including those relative to the democratic mandate held only by policy-makers (e.g. Fingar, Lecture 3, pp. 1-2, 6-7). Meanwhile, the available types of actions are pre-determined and consist of "preventive or peacebuilding" actions, although this broad appellation may leave some leeway in terms of establishing an efficient strategy and then operationalising the answer. Also contrary to other approaches, EWS deal exclusively with conflict as an issue.

The very specificities of the European Union – its evolving institutions, the way decisions are taken, and the competences (see EU competences) and prerogatives of each of its institutions according to areas – have a strong influence on the approach promoted for Early Warning Systems. Notably, the specificity of the Common Foreign and Security Policy – CFSP (see EU Special competences) – is highly constraining for the design and then the practice of early warning. Finally, the possibility of seeing the CFSP evolve notably towards more common defence, considering changes in the EU and international context – post-Brexit, the election of U.S. President Trump, the election in France of staunch EU supporter President Macron (e.g. Paul Taylor, "Merkel's thunderbolt is starting gun for European defense drive", 30 May 2017, Politico) – is highly likely to lead to changes in the EU approach to "Early Warning".
The November 2017 document on the EU conflict EWS explains objectives and process: EU conflict Early Warning System: Objectives, Process and Guidance for Implementation – 2017

Strategic Intelligence

Strategic Intelligence is a widely used but rarely defined term that Heidenrich (2007) describes as "that intelligence necessary to create and implement a strategy, typically a grand strategy, what officialdom calls a national strategy. A strategy is not really a plan but the logic driving a plan." Depending on the way intelligence and security are understood, strategic intelligence and strategic foresight – or rather, in this case, strategic foresight and warning – will intersect to a greater or lesser extent; at the least, they will need each other.


[1] “Unconventional,” from a Department of Defence perspective, connotes national security conditions and contingencies that are defense-relevant but not necessarily defense-specific. Unconventional security challenges lie substantially outside the realm of traditional war fighting. They are routinely nonmilitary in origin and character.” Nathan Freier, Known Unknowns: Unconventional “Strategic Shocks” in Defense Strategy Development (Carlisle, PA: Peacekeeping and Stability Operations Institute and Strategic Studies Institute, U.S. Army War College, 2008), p.3.

[2] Note that the Society for Risk Analysis considers risk assessment and risk management as part of risk analysis.

[3] The ISO31000 was first published as a standard in November 2009. The ISO Guide 73:2009 defines the terms and vocabulary used in risk management. A new version of the guidelines, ISO 31000:2018, Risk management – Guidelines, was published in February 2018. The other ISO documents related to risk management remain unchanged.


Selected Bibliography

Central Intelligence Agency (Office of Public Affairs), A Consumer’s Guide to Intelligence, (Washington, DC: Central Intelligence Agency, 1999).

Davis, Jack “Strategic Warning: If Surprise is Inevitable, What Role for Analysis?” Sherman Kent Center for Intelligence Analysis, Occasional Papers, Vol.2, Number 1 https://www.cia.gov/library/kent-center-occasional-papers/vol2no1.htm;

Fingar, Thomas, ”Myths, Fears, and Expectations,” Payne Distinguished Lecture Series 2009 Reducing Uncertainty: Intelligence and National Security, Lecture 1, FSI Stanford, CISAC Lecture Series, October 21, 2009 & March 11, 2009. 

Fingar, Thomas, “Anticipating Opportunities: Using Intelligence to Shape the Future,” Payne Distinguished Lecture Series 2009 Reducing Uncertainty: Intelligence and National Security, Lecture 3, FSI Stanford, CISAC Lecture Series, October 21, 2009.

Grabo, Cynthia M. Anticipating Surprise: Analysis for Strategic Warning, edited by Jan Goldman, (Lanham MD: University Press of America, May 2004).

Glenn Jerome C. and Theodore J. Gordon, Ed; The Millennium Project: Futures Research Methodology, Version 3.0, 2009.

Heidenrich, John G., "The State of Strategic Intelligence", Studies in Intelligence, Vol. 51, No. 2, 2007.

Knight, Kenneth, "Focused on foresight: An interview with the US's national intelligence officer for warning," McKinsey Quarterly, September 2009.

Pellerin, Cheryl, DIA Five-Year Plan Updates Strategic Warning Mission, American Forces Press Service, WASHINGTON, July 18, 2012.

Rosenbach, Eric and Aki J. Peritz, “National Intelligence Estimates”, Memo in report Confrontation or Collaboration? Congress and the Intelligence Community, Belfer Center for Science and International Affairs, Harvard Kennedy School, July 2009.

Schrodt, Philip A., “Forecasts and Contingencies: From Methodology to Policy,” Paper presented at the theme panel “Political Utility and Fundamental Research: The Problem of Pasteur’s Quadrant” at the American Political Science Association meetings, Boston, 29 August – 1 September 2002.

Warner, Michael, “Wanted: A Definition of “Intelligence”, Studies in Intelligence, Vol. 46, No. 3, 2002.

Featured Image: Morris (Sgt), No 5 Army Film & Photographic Unit; post-work: User:W.wolny / Public domain

$2 Billion for Next Gen Artificial Intelligence for U.S. Defence – Signal

Impact on Issues and Uncertainties

Credit Image: Mike MacKenzie on Flickr
Image via www.vpnsrus.com – (CC BY 2.0).

Critical Uncertainty ➚➚➚ Disruption of the current AI-power race for private and public actors alike – The U.S. takes a very serious lead in the race.
➚➚  Accelerating expansion of AI
➚➚  Accelerating emergence of the AI-world
➚➚ Increased odds of the U.S. consolidating its lead in the AI-power race.
➚➚ Escalating AI-power race notably between the U.S. and China.
➚➚ Rising challenge for the rest of the world to catch up
Potential for escalating tension U.S. – China, including between AI actors

Facts and Analysis

Related

Ongoing series: Portal to AI – Understanding AI and Foreseeing the Future AI-powered World
★ Artificial Intelligence – Forces, Drivers and Stakes
Militarizing Artificial Intelligence – China (1)
★ Militarizing Artificial Intelligence – China (2)

Articles starting with a ★ are premium articles, members-only. The introduction remains nonetheless open access.

On 7 September 2018, the U.S. Defense Advanced Research Projects Agency (DARPA) of the Department of Defense (DoD) launched a multi-year investment of more than $2 billion in new and existing programs to foster the emergence of "the third wave" of Artificial Intelligence (AI). According to DARPA, this next generation of AI should notably improve and focus upon "contextual adaptation," i.e. "machines that understand and reason in context".

The goal is to enable the creation of machines that "function more as colleagues than as tools" and thus to allow for "partner[ing] with machines". As a result, DARPA wants to create "powerful capabilities for the DoD", i.e.:

“Military systems that collaborate with warfighters will
– facilitate better decisions in complex, time-critical, battlefield environments;
– enable a shared understanding of massive, incomplete, and contradictory information;
– and empower unmanned systems to perform critical missions safely and with high degrees of autonomy.”

The last point is highly likely to include notably the famously feared Lethal Autonomous Weapon Systems (LAWS) aka killer robots.

Even though the USD 2 billion announcement includes existing programs, DARPA's new campaign indicates the importance of AI for U.S. defence. The U.S. shows here, again, its willingness to remain at the top of the race for AI-power, by breaking new ground in terms of "algorithms" as well as "needs and usage", to use our five drivers and stakes' terminology. It thereby also adopts a distinctly disruptive strategy, as it intends to go beyond the current deep learning wave.

Disruption would impact both public and private actors, states and companies alike.

In terms of power struggle, we may also see the launch of the DARPA campaign as an answer to the call by 116 international experts, led by figures from Tesla and Alphabet (Google), to ban autonomous weapons. With such an amount of funding available, it is likely that more than one expert and laboratory will see their initial reluctance circumvented.

Should the U.S. succeed, then it would take a very serious lead in the current race for AI power, notably with China, as it would deeply shape the very path on which the race takes place.

Sources and Signals

Darpa: AI Next Campaign

DARPA Announces $2 Billion Campaign to Develop Next Wave of AI Technologies

Over its 60-year history, DARPA has played a leading role in the creation and advancement of artificial intelligence (AI) technologies that have produced game-changing capabilities for the Department of Defense. Starting in the 1960s, DARPA research shaped the first wave of AI technologies, which focused on handcrafted knowledge, or rule-based systems capable of narrowly defined tasks.

Elon Musk leads 116 experts calling for outright ban of killer robots

Some of the world’s leading robotics and artificial intelligence pioneers are calling on the United Nations to ban the development and use of killer robots. Tesla’s Elon Musk and Alphabet’s Mustafa Suleyman are leading a group of 116 specialists from across 26 countries who are calling for the ban on autonomous weapons.

Impacts of Chinese Baidu new no-code tool to build AI-programs – Signal

Impact on Issues and Uncertainties

➚➚ Accelerating expansion of AI

➚➚ Accelerating emergence of the AI-world

➚ Redrawing of the power map of the world along AI-power status lines

➚ Escalating AI-power race notably between the U.S. and China.
➚ Rising challenge for the rest of the world to catch up

China's influence and capability in terms of AI
U.S. feeling threatened, which is possibly a factor of global instability

 Potential for escalating tension U.S. – China, including between AI actors

Facts and Analysis

Related

Our ongoing series: The Future Artificial Intelligence – Powered World

Artificial Intelligence – Forces, Drivers and Stakes

One of the drivers we identified as powering AI, its development and spread is “needs and usage”. We then noted that this driver was particularly active in the case of China.

The deployment of a beta version of Baidu EZDL is one more piece of evidence pointing in this direction. In a nutshell, Baidu EZDL is a platform for machine learning that may be used by anyone, notably small and medium-sized companies without AI capabilities (we have not yet tested its ease of use or its claims). It is currently limited to object recognition, images and sound. It is nonetheless likely to vastly spread the use of AI, first among Chinese small and medium-sized companies, and then globally.

This enhances China's – and Baidu's – position in the AI-world under construction, while promoting the global expansion of AI. It also escalates competition in terms of AI between China and the U.S., at a time when tensions between the two countries are high because of the trade war declared by the U.S.

Source and Signal

Baidu EZDL website

Michael Feldman, Baidu Launches ‘No-Code’ Tool for Building Machine Learning Models, Top500, 4 September 2018:

Baidu Launches ‘No-Code’ Tool for Building Machine Learning Models

Search giant Baidu has released EZDL, a software development platform for non-programmers who want to build production-level machine learning models.

 
