It was created in three years by warming and melting from the inside. This shows the acceleration of the process, as well as the destabilization of the entire glacier and of its neighbours (Sloat, ibid.).
The melting of the Thwaites glacier alone could add two feet to the global rise of the ocean.
We carry out horizon scanning on Quantum Information Science and technologies and their use. As a result, we publish The Quantum Times, a daily scan and news brief on everything related to the emerging quantum world. The Quantum Times is updated daily before 9:00 ECT.
Being informed and keeping up with what is happening in the highly competitive race to quantum is crucial. We created this scan to help our users’ community keep abreast of developments in the field.
You will find here all news (in English) related to quantum computing and simulation, quantum sensing and metrology, quantum communication, as well as quantum key distribution, quantum machine learning and post-quantum cryptography.
Find all our in-depth reports, briefs and articles on the emerging quantum world here.
From the corporate world to governments, we seek to escape uncertainty and surprise. This is crucial to survive and thrive. It is also necessary to protect ourselves from threats, dangers and risks.
On the whole, our ability – if not our willingness – to identify threats has improved with experience and practice. Notably, we have become relatively efficient at assessing likelihood and impact. Nonetheless, one component of threat and risk assessment most often remains unconsidered, unnoticed and neglected: time.
Yet, time is a crucial component of our ability to prevent surprise, handle threats and manage risks. This article assesses how we integrate time and highlights room for improvement.
Since 2017, quantum information science and technology (QIS), and especially quantum computing, have been quickly emerging as a central theme in Hollywood movies, TV series and novels. Their scenarios emphasise the link between quantum power and national security.
That link structures the U.S. strategic debate through the very complex and tangled relationships between the federal centres of political power in Washington D.C., the Department of Defence, the intelligence community, and the media and cultural industry (Valantin, Ibid.). That is the reason why Hollywood movies, television and video games play a vital role in the U.S. strategic debate.
Beyond hype and hatred, this article focuses on the way Artificial Intelligence (AI) – actually Deep Learning – is integrated into reality through sensors and actuators.* Operationalisation demands that we develop a different way of looking at AI. The resulting understanding highlights the importance of the sensor and the actuator, the twin interface between AI and its environment. This interface is a potentially disruptive driver for AI.
Sensor and actuator, the forgotten elements
Sensors and actuators are key to the development of AI at all levels, including in terms of practical applications. Yet, when the expansion and the future of AI are addressed, these two elements are overlooked most of the time. It is notably because of this lack of attention that the interface may become disruptive. Indeed, could an approach to AI through sensors and actuators be key to the very generalised boom so many seek? Meanwhile, many subfields of AI could also benefit from such further development. Alternatively, failing to fully integrate this approach could lead to unnecessary hurdles, including a temporary bust.
Sensor and actuator, another stake in the race for AI
Furthermore, we are seeing three interacting AI-related dynamics emerging in the world. The twin birth and spread of AI-governance for states and AI-management for private actors interact and feed into an international race for AI-power, i.e. how one ranks in the global relative distribution of power. As a result, AI increasingly influences this very distribution of power (see The New AI-World in the Making). Thus, the drivers for AI are not only forces behind the expansion of AI, but also stakes in the AI-competition. Meanwhile, how public and private actors handle this competition, the resulting dynamics and the entailed defeats and victories also shape the new AI-world in the making.
Thus, if sensors and actuators are crucial in widely operationalising AI, then the ability to best develop AI-governance and AI-management, as well as the position in the international race for AI-power, could also very well depend on the mastery of these sensors and actuators.
This article uses two case studies to progressively explain what sensors and actuators are. It thus details the twin interface between the AI-agent and its environment. Third, and as a result, we highlight that AI is best understood as a sequence. That understanding allows us to envision a whole future world of economic activities. That world is, however, not without danger, and we highlight that it will demand a new type of security. Finally, we point out the necessity of distinguishing the types of reality the AI sequence bridges.
The next article will focus on different ways to handle the AI sequence and its twin interface, notably the actuator. We shall look more particularly at the Internet of Things (IoT), Human Beings themselves, and Autonomous Systems, better known as robots. Meanwhile we shall explore further the new activities AI creates.
Looking at the game against AlphaGo differently
We shall examine again (Google) DeepMind’s AlphaGo, the supervised learning AI-agent that plays Go and whose victory started the current phase of AI development.
Replaying the game against AlphaGo
Now, let us imagine a new game is set between Mr Fan Hui, the European Go Champion whom AlphaGo defeated 5-0 in October 2015, and the AI-agent (AlphaGo webpage). Mr Fan Hui, as happened in reality, plays first against the AI-agent AlphaGo. In front of him, we can see a goban (the name of the board used for Go). AlphaGo is connected to the cloud for access to distributed computing power, as it needs a great deal of it.
Mr Fan Hui starts and makes his first move, placing a white stone on the goban. Then it is AlphaGo’s turn. How will the AI-agent answer? Will it make a typical move or something original? How quickly will it then play? The suspense is immense, and…
Nothing happens.
What went wrong?
The (right) way DeepMind did it
If you watch carefully the video below, which shows the original game, you will notice that the setting is actually not exactly what I described above. A couple of other crucial elements are present. If DeepMind had put a human and an AI-agent face to face according to the setting I described, then their experiment would have gone wrong. Instead, thanks to the elements they added, their game was a success.
You can observe these three elements at 1:19 of the video, as shown in the annotated screenshot below:
A: a human player
B: a screen
C: a human being with a bizarre device on a table.
Sensor
In our imagined setting, I did not create an interface to tell the AI-agent that Mr Hui had moved a stone, and which one. Thus, as far as the AI agent was concerned there was no input.
In DeepMind’s real setting we have the human agent (C). We may surmise that the bizarre device on the table in front of her allows her to enter into the computer, for the AI-agent, the moves that Mr Fan Hui makes throughout the game.
More generally, a first input interface must exist between the real world and the AI-agent for the latter to function. Therefore, we need sensors. They will sense the real world for the AI. We also need to communicate the data the sensors captured to the AI-agent, in a way that the AI understands.
Let us assume now that we add agent C and her device – i.e. the sensor system – to our setting.
Again, nothing happens.
Why? The AI-agent proceeds and decides on its move. Yet, the algorithmic result remains within the computer, as a machine output, whatever its form. Indeed, there is no interface to act in the real world. What is needed is an actuator.
Actuator
The interface to the outside world must not only produce an output that our Go Master can understand for each move, but also one that will make sense, for him, during the whole game.
It would not be enough to get just the position of a stone according to coordinates on the board. Such a result would demand, first, that Mr Fan Hui has a good visualisation and mapping capability to translate these coordinates onto the goban. It would demand, second, that our Go Champion has a truly excellent memory. Indeed, after a couple of moves, being able to picture and remember the whole game would be challenging.
DeepMind actually used the needed actuators to make the game between human and AI possible.
At (B), we have a screen that displays the whole game. The screen also most probably shows the AI-agent’s move each time the latter plays. Then, at (A), we have a human agent, who translates the virtual game on screen into reality on the goban. To do so, he copies the move of the AI-agent as displayed on the screen by placing the corresponding stone on the board.
It is important to note the presence of this human being (A), even though it was probably not truly necessary for Mr Fan Hui, who could have played in front of the screen. First, it is a communication device to make the whole experiment more fully understandable and interesting for the audience. Then, it is possibly easier for Mr Fan Hui to play on a real goban. The translation from a virtual world to a real world is crucial. It is likely to be a major stake in what will really allow AI to emerge and develop.
As we exemplified above, specifying the process of interaction with an AI-agent highlights the importance of the twin interfaces.
This is actually how DeepMind conceptualised one of its latest AI achievements, to which we shall now turn.
Towards seeing as a human being
In June 2018, DeepMind explained how it had built an AI-agent that can perceive its surroundings very much as human beings do (open access; S. M. Ali Eslami et al., “Neural scene representation and rendering“, Science, 15 Jun 2018: Vol. 360, Issue 6394, pp. 1204-1210, DOI: 10.1126/science.aar6170).
“For example, when entering a room for the first time, you instantly recognise the items it contains and where they are positioned. If you see three legs of a table, you will infer that there is probably a fourth leg with the same shape and colour hidden from view. Even if you can’t see everything in the room, you’ll likely be able to sketch its layout, or imagine what it looks like from another perspective.” (“Neural scene representation and rendering“, DeepMind website).
The scientists’ aim was to create an AI-agent with the same capabilities as those of human beings, which they succeeded in doing:
DeepMind uses “sensor and actuator”
What is most interesting for our purpose is that what we described in the first part is exactly the way the scientists built their process and solved the problem of vision for an AI-agent.
They taught their AI-agent to take images from the outside world (in that case, still a virtual world) – what we called the sensor system – and then to convert them, through a first deep learning algorithm – the representation network – into a result, an output: the scene representation. The output, at this stage, is meaningful to the AI-agent but not to us. The last step represents what we called the actuator. It is the conversion from an output meaningful to the AI into something meaningful to us, the “prediction”. For this, DeepMind developed a “generation network”, called a “neural renderer”. Indeed, in terms of 3D computer graphics, rendering is the process that transforms a computation into an image, the render.
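To make the structure concrete, here is a minimal, purely illustrative Python (PyTorch) sketch of an encode/render pipeline of this kind. The class names, layer sizes and shapes are our own assumptions for illustration and do not reproduce DeepMind’s actual GQN architecture.

```python
# A minimal, illustrative sketch of the sensor -> representation -> renderer
# pipeline described above. All names and sizes are hypothetical.
import torch
import torch.nn as nn

class RepresentationNetwork(nn.Module):
    """'Sensor' side: encodes observed images into a scene representation."""
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )

    def forward(self, images):
        # images: (batch, 3, H, W) -> scene representation: (batch, 64)
        return self.encoder(images).flatten(1)

class GenerationNetwork(nn.Module):
    """'Actuator' side: renders the internal representation into an image humans can read."""
    def __init__(self):
        super().__init__()
        self.decoder = nn.Sequential(nn.Linear(64, 16 * 16 * 3), nn.Sigmoid())

    def forward(self, representation):
        return self.decoder(representation).view(-1, 3, 16, 16)

# Usage: sensed images feed the representation network; the generation
# network "actuates" by producing a render.
observations = torch.rand(1, 3, 64, 64)            # stand-in for sensed images
scene_repr = RepresentationNetwork()(observations)  # meaningful to the AI-agent only
render = GenerationNetwork()(scene_repr)            # meaningful to humans
print(render.shape)  # torch.Size([1, 3, 16, 16])
```

The point is the structure rather than the layers themselves: the representation network plays the sensor role, producing an output meaningful only to the agent, while the generation network plays the actuator role, turning it into something meaningful to us.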
The screenshot below displays the process at work (I added the red circles and arrows to the original screenshot).
The following video demonstrates the whole dynamic:
Developing autonomous sensors for the vision of an AI-agent
In the words of DeepMind’s scientists, the development of the Generative Query Network (GQN) is an effort at creating “a framework within which machines learn to represent scenes using only their own sensors”. Indeed, current artificial vision systems usually use supervised learning. This means that human intervention is necessary to choose and label data. DeepMind’s scientists wanted to overcome this type of human involvement as much as possible.
The experiment here used a “synthetic” environment (Ibid., p. 5). The next step will need new datasets to allow expansion to “images of naturalistic scenes” (Ibid.). Ultimately, we may imagine that the GQN will start with reality, captured by an optical device the AI controls. This implies that the GQN will need to integrate all advances in computer vision. Besides, the sensors of our AI-agent will also have to move through its environment to capture the observations it needs. This may be done, for example, through a network of mobile cameras, such as those increasingly being installed in cities. Drones, also controlled by AI, could possibly supplement the sensing network.
Improving visual actuators for an AI-agent
Researchers will also need to improve the actuator (Ibid.). DeepMind’s scientists suggest that advances in generative modeling capabilities, such as those made through generative adversarial networks (GAN) will allow moving towards “naturalistic scene rendering”.
Meanwhile, GANs could lead to important advances in terms, not only of visual expression, but also of “intelligence” of AI-agents.
When GANs train to represent visual outputs, they also seem to develop the capability to group, alone, similar objects linked by what researchers called “concepts” (Karen Hao, “A neural network can learn to organize the world it sees into concepts—just like we do“, MIT Technology Review, 10 January 2019). For example, the GAN could “group tree pixels with tree pixels and door pixels with door pixels regardless of how these objects changed color from photo to photo in the training set”… They would also “paint a Georgian-style door on a brick building with Georgian architecture, or a stone door on a Gothic building. It also refused to paint any doors on a piece of sky” (Ibid.).
Similar dynamics are observed in the realm of language research.
Using a virtual robotic arm as actuator
In a related experiment, DeepMind’s researchers used a deep reinforcement network to control a virtual robotic arm instead of the initial generation network (Ali Eslami et al., Ibid., p.5). The GQN first trained to represent its observations. Then it trained to control the synthetic robotic arm.
In the future, we can imagine that a real robotic arm will replace the synthetic one. The final “actuator system” will thus become an interface between the virtual world and reality.
AI as a sequence between worlds
Let us now generalise our understanding of sensor and actuator, or interfaces for AI-input and AI-output.
Inserting AI in reality means looking at it as a sequence
We can understand processes involving AI-agents as the following sequence.
Environment -> sensing the environment (according to the task) -> doing a task -> output of an AI-intelligible result -> expressing the result according to the task and the interacting actor
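As a rough illustration only, the sequence can be sketched as a chain of functions. The stage names below are hypothetical placeholders standing for full systems (sensors, AI-agent, actuators), not an actual implementation.

```python
# A minimal sketch of the AI sequence above, as a chain of placeholder stages.
def sense(environment):
    """Sensor system: turn the environment into data the AI-agent understands."""
    return {"observation": environment}

def do_task(inputs):
    """The AI-agent's task (e.g. choosing a Go move, labelling an image)."""
    return {"ai_result": f"decision based on: {inputs['observation']}"}

def actuate(result, audience="human player"):
    """Actuator system: express the AI-intelligible result for the interacting actor."""
    return f"[{audience}] {result['ai_result']}"

# Usage: the whole sequence, from environment to expression.
print(actuate(do_task(sense("white stone placed on the goban"))))
```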
The emergence of new activities
This sequence, as well as the details on the GAN actuator, for example, shows that, actually, more than one AI-agent is needed if one wants to fully integrate AI into reality. Thus, the development of high-performing AI-agents will involve many teams and labs.
Envisioning the chain of production of the future
As a result, new types of economic activities and functions could emerge in the AI-field. One could have, notably, the assembly of the right operational sequence. Similarly, the initial design of the right architecture, across types of AI-agents and sub-fields could become a necessary activity.
Breaking down AI integration into a sequence allows us to start understanding the chain of production of the future. We can thus imagine the series of economic activities that can and will emerge. These will go far beyond the current emphasis on IT or consumer analytics, which most early adopters of AI appear to favour so far (Deloitte, “State of Artificial Intelligence in the enterprise“, 2018).
The dizzying multiplication of possibilities
Furthermore, the AI sequence could be customised and tailored according to needs. One may imagine that various systems of actuators could be added to a sequence. For example, a “scene representation” intelligible to the AI-agent – to use our second case study – could be expressed as a realistic visual render, as a narrative, or as a robotic movement. We are here much closer to the way a sensory stimulation would trigger in us, human beings, a whole range of possible reactions. However, compared with the human world, if one adds the cloud, then the various expressions of the “scene representation” could be located anywhere on Earth and in space, according to the available communication infrastructure.
The possibilities and combinations entailed are amazing and dizzying. And we shall look in the next articles at the incredible possibilities which are being created.
Towards the need to redefine security?
Altering our very reality
In terms of dangers, if we come to rely only or mainly on a world that is sensed, understood and then expressed by an AI sequence, then we also open the door to an alteration of our reality that could be achieved more easily than if we were using our own senses. For example, if one relies on a sequence of AI-agents to recognise and perceive the external world miles away from where we are located, then an unintentional problem or a malicious intent could mean that we receive wrong visual representations of reality. A tree could be set where there is no tree. As a result, a self-driving car, trying to avoid it, could veer off the road. The behaviour of the users of this very expression of reality will make sense in the AI-world. It will however be erratic outside it.
Actors could create decoys in a way that has never been thought of before. Imagine Operation Fortitude, the operation through which the Allies deceived the Nazis during World War II regarding the location of the 1944 invasion, organised with the power of multiple AI-sequences.
Actually, it is our very reality, as we are used to seeing it expressed through photographs, that may become altered in a way that cannot be directly grasped by our visual senses.
Breaking the world-wide-web?
Here we also need to consider the spread of propaganda and of what is now called “Fake News”, and most importantly of the “Fake Internet”, as Max Read masterfully explained in “How Much of the Internet Is Fake? Turns Out, a Lot of It, Actually” (Intelligencer, 26 December 2018). Assuming the spread of “Fake Everything” signals established widespread malicious intention, then adding to it the power of AI-agents could break the world-wide-web. The impacts would be immense. To avoid such a disaster, actors will have to devise very strong regulations and to favour and spread new norms.
Artificial Intelligence completely redefines the way security can be breached and thus must be defended.
Integrating AI-agents according to different realities: Virtual-Virtual and Virtual-Material
From virtual to virtual
When the AI-agent’s environment and the other actors are virtual, then the sequence is – to a point – easier to build. Indeed everything takes place in a world of a unique nature.
However, fear and need to know will most probably imply that human beings will want control at various points of the sequence. Thus, ways to translate the virtual world into something at least perceptible by humans are likely to be introduced. This will enhance the complexity of development.
From virtual to material
When the environment is real and when interactions take place between an AI-agent and human beings, the sequence becomes much more complex. The twin interfaces must indeed become bridges between two different types of world, the digital and the real.
Actually, if we look through these lenses at the deep learning ecosystem and its evolution since 2015, researchers devoted a large part of their initial efforts to creating AI-agents able to “do a task” (playing, sorting, labelling, etc.). Meanwhile, scientists have developed ways to make the real world intelligible to AI-agents. The actuator systems developed produce outputs intelligible to humans, but these remain nonetheless mostly virtual.
Lagging behind in expressing the virtual world in the real one – Visual AI-agents
For example, the real world is translated into digital photographs, which the AI-agent recognises through deep learning algorithms. The AI will sort them or label them in a way that human beings understand. For instance, human beings easily understand the words, or the images displayed on a screen, that result from the actuator part of the sequence. Yet, this output remains virtual. If we want to improve further, then we must create and use other devices to enhance or ease the interface from virtual to real. Object recognition proceeds in a similar way.
In terms of visual AI-related efforts, we may wonder if we have not progressed more in giving vision to AI-agents than in using this vision in a way that is useful enough to human beings in the real world.
From virtual to real, sensing more advanced than expressing?
Yet, have we made similar progress in developing actuators that interface between the virtual world of the AI-agent and the reality of human beings? Alternatively, could it be that we did improve the whole sequence, but that progress remains limited to the virtual world? In all cases, what are the impacts in terms of security, politics and geopolitics?
This is what we shall see next, looking more particularly at the Internet of Things, Robots and Human Beings, as potential actuator systems of AI.
*Initially, I used the word “expressor” instead of the adequate word, “actuator”. Thanks to Teeteekay Ciar for his help in finding the right term.
About the author: Dr Helene Lavoix, PhD Lond (International Relations), is the Director of The Red (Team) Analysis Society. Strategic foresight and warning for national and international security issues is her specialisation. Her current focus is on the future Artificial Intelligence and Quantum world and its security.
Featured image: U.S. Army graphic by Sonya Beckett, CERDEC NVESD – Public Domain – From Aris Morris, January 9, 2018, Army ALT Magazine, Science and Technology.
Strategic Foresight and Warning (SF&W) is at once process and analysis.
By SF&W analysis we mean all methodologies and related issues allowing for the development of an understanding grounded in reality that will generate the best anticipatory products, useful to decision-makers and policy-makers for carrying out their mission (to find your way within the myriad of labels given to anticipatory activities, see Intelligence, strategic foresight and warning, risk management, forecasting or futurism? (Open Access/Free) and When risk management meets SF&W).
The larger SF&W analytical method can be seen as following these steps, using various methodologies, notably to face the specific challenges of each stage:
A good bibliography is a typical part of what is involved in step 1, to which must be added an ongoing scan, such as what is done with the Red (Team) Analysis Weekly. A more detailed discussion of steps 1 and 6 can be found in the section scan & monitor.
The Chronicles of Everstate are an early experiment exemplifying one way to map an issue and how ego networks can be used to develop narratives. Our online course, From Process to Creating your Analytical Model…, focuses on the creation of the model, thus on the most fundamental part of these steps.
Examples of scenarios and their indicators are given for Syria and Libya. Furthermore, as far as Libya is concerned, we detail the methodology to evaluate the likelihood of each scenario. Another example of the narrative can be found here.
Those steps are also addressed in the section Assessing future security threats, where we share our latest insights and foresights on methodology using specific geopolitical global issues, risks and uncertainties as case studies.
The public part of our monitoring – step 6 – is done for various issues through The Sigils, as well as through The Weekly (both open access/free). You can also find monitoring at work in our Horizon Scanning Board (open access/free). Furthermore, these real-life indications allow checking the validity of scenarios, and updating the model used for each issue, if necessary.
Finally, monitoring is necessary – if not crucial – to identify new emerging issues (the feedback on step 1).
Strategic foresight and warning, or more broadly anticipation, is a step-by-step process to anticipate the future in an actionable way.
The graphic ideal-type process displayed below is the result of more than a decade of work with and about systems of anticipation, from early warning systems to prevent conflicts for aid agencies to strategic warning and strategic foresight with security and intelligence agencies and practitioners. It also draws on research through commissioned reports and on teaching on the topic.
It is more particularly adapted to global security, external risks, and political and geopolitical risks and uncertainties. Indeed, the recommended process builds upon more than twenty years’ experience in central administration and in research in the areas of war, international relations, political science, analysis and policy planning.
The architecture of the Red Team Analysis Society’s website is built following this process. Each section strives progressively to address the various challenges that are met at each step, to explain and apply various possible methodologies and tools, and finally to deliver real-life strategic foresight and warning products.
Featured image: Stanley Kubrick exhibit at EYE Filminstitut Netherlands, Amsterdam – The War Room (Dr. Strangelove or: How I Learned to Stop Worrying and Love the Bomb)- By Marcel Oosterwijk from Amsterdam, The Netherlands [CC-BY-SA-2.0 (http://creativecommons.org/licenses/by-sa/2.0)], via Wikimedia Commons
By strategic foresight methodology, we mean the part of the general strategic foresight and warning methodology that focuses on foresight analysis. In other words, it is the general method without the warning part. It thus consists in:
Defining the question
Step 1: Exploratory stage
Step 2 – The creation of the model for SF&W: mapping dynamic networks part I & part II. See also our online course for this part.
Constructing a foresight scenario’s narrative with Ego Networks: this methodology was tried out with the Chronicles of Everstate. It can be used as a guide and fall-back in case the analyst faces a hurdle in developing his or her narrative. In practice, however, building a whole narrative with ego networks is likely to be too painstaking for an analyst to use systematically. Should Artificial Intelligence be applied to SF&W, then it could possibly benefit from the ego-network approach.
Among those who are aware of Quantum Information Science (QIS), some call for caution, decrying a potential hype or even denying the possibility of ever seeing a fully multi-purpose quantum computer – a Universal Quantum Computer.
Yet, as we have shown in the previous article, even though the time when a Universal Quantum Computer will exist may be relatively far away, and even though there is indeed no absolute certainty that such a computer will ever be created and then industrialised, the very existence of this possibility – even if it is remote – has already changed the world. It has triggered discoveries and evolutions in other sub-fields of QIS – namely Quantum Sensing and Metrology, Quantum Communication and Quantum Simulation – and related usages that can be neither denied nor ignored. We are in the case of a possibly low-probability, high-impact scenario that no one, and especially not security-related actors, be they public or private, can overlook.
The imagined future quantum-AI world and the related race for quantum both feed into the race itself, giving it its momentum, accelerating and intensifying it, through research and potential and actual usages envisioned.
This is also one of the conclusions reached by the latest report of the U.S. National Academies of Sciences, the consensual, conservative and very cautious Quantum Computing: Progress and Prospects, published in December 2018. Sponsored by the Office of the Director of National Intelligence, the report concludes that
“Key Finding 7:…Although the feasibility of a large-scale quantum computer is not yet certain…. Quantum computing research has clear implications for national security. Even if the probability of creating a working quantum computer was low, given the interest and progress in this area, it seems likely this technology will be developed further by some nation-states. Thus, all nations must plan for a future of increased QC capability. The threat to current asymmetric cryptography is obvious and is driving efforts toward transitioning to post-quantum cryptography… But the national security implications transcend these issues. A larger, strategic question is about future economic and technological leadership….” National Academies of Sciences, Quantum Computing: Progress and Prospects – p. 7-20.
As the race for Quantum is fully a component of the emerging Quantum-AI world and feeds into its own dynamics, we must thus understand its dynamics, its characteristics, as well as its actors.
The purpose of this article is thus to define the framework within which the Race to Quantum can be understood, to present an adequate tool to handle the multiple characteristics of this race – namely dynamic mapping, or, for mathematicians, dynamic graphs – and to uncover parts of the resulting dynamic map as examples of what is happening and of what can be done to understand it.
Read also the follow up articles adding to the mapping:
Considering the scope of the race, this is work in progress. The research furthermore needs to be permanently updated. It thus necessitates sponsorship for open publication, and/or commission for specific and private use, according to actors’ strategy. Do not hesitate to contact us.
Here, as concrete examples of the dynamic map depicting the race to quantum, we shall present a first series of videos showing how the race unfolds between 1997 and 2028, considering some of the characteristics identified in the first section of this article, as necessary to understand the race to quantum. Each video is accompanied by a classical description of the corresponding part of the race, with the detailed sources used.
The first mapping shown as a video (click on the link to access the mapping directly) focuses on the Netherlands, QuTech and QuSoft, to which is added the EU Quantum effort.
Then, focusing on state actors, the second mapping will add Germany, the third the U.S. and the fourth China.
To exemplify the importance of the private sector in the race to Quantum, the fifth mapping will then add IBM.
Finally, the sixth and last mapping will add finance through the Vision Fund, still a potential actor in the race for Quantum Technologies.
Each video notably shows how adding a new actor changes the outlook of the race. Meanwhile, the mapping tool used highlights the importance of using a proper visualisation so that our perceptions of the race reflect as adequately as possible what is happening to take informed decisions.
Many other actors are part of the race from the UK to Singapore through Australia, Canada, France, Japan or Israel, to say nothing of other private companies from Google to Ali Baba, and will need to be included in the mapping of the race before one can reach a conclusive analysis. Nonetheless, as the reader will discover, crucial elements of understanding are already made available by the six dynamic mappings presented below.
Understanding the Race to Quantum
The first way to look at the Race for Quantum is to try using what we could call a classical framework: identifying public funding. This is the approach taken by Freeke Heijman-te Paske, Ministry of Economic Affairs, Netherlands, “Global developments Quantum Technologies“, 8 May 2015 (then presented at the EU Flagship Launch in May 2016), as well as by a 2015 McKinsey document estimating annual spending on non-classified quantum technology (the two show similar results, and it is impossible to know who used the research of whom).
However, the first problem with the Heijman-te Paske/McKinsey figures is that we are unable to trace their sources. Although we shall consider their figures accurate for the year 2015, it is impossible to update the estimates now that we are a couple of years later. It is thus difficult to have a dynamic idea of the evolution of quantum funding, when this is a crucial element for a race.
Second, considering mainly public funding is fraught with difficulties as far as the quantum race is concerned. Indeed, any more in-depth inquiry in the Quantum World shows how much public and private efforts are intertwined. Thus, considering only one or the other effort may, at best, only provide a partial picture. Furthermore, the positive feedbacks between both cannot be depicted and highlighted by lump sums attributed to one country. To illustrate this point, let us take the example of the Netherlands’ Research Centre QuTech.
QuTech dominates the field of quantum technologies in the Netherlands, and is more particularly focused on Quantum Computing and Quantum Internet. It was founded in 2013 by the Delft University of Technology (TU Delft) and the Netherlands Organization for Applied Scientific Research. In 2015, it received €146 million over 10 years ($168.6 million) from the government through what may be seen as a comprehensive framework for quantum research (Annual Report 2015, p. 7, 35). It was thus designed as a public-private centre. Its main private and industrial partners are Intel and Microsoft. Intel announced a 10-year collaborative partnership in 2015 with $50 million in funding (Ibid.). Microsoft co-financed QuTech projects regularly (e.g. annual report 2015). In 2018, the American firm established its own quantum research laboratory at TU Delft, Station Q Delft, and Microsoft and TU Delft’s quantum institute, QuTech, will be collaborating intensively on the development of topological qubits (QuTech News, 1 June 2018).
Thus, should we keep a classical public-funding framework, how would we classify QuTech? If we were looking at the Netherlands as a unit of analysis, should we consider only the $168.6 million over ten years, plus the “usual” yearly funding in quantum research across the country? But then, how should we regard the private industrial involvement in QuTech, which is important not only in terms of funding, but also of access to facilities, cross-fertilisation of research and possibly practical output?
Furthermore, other grants, awards and projects also contribute to funding QuTech’s research. For example, in late 2015, QuTech secured five-year funding from the American Intelligence Advanced Research Projects Activity (IARPA) “to develop an error-corrected 17-qubit superconducting circuit and the electronics and software to control it”, a project called LogiQ. This new activity, “launched in April 2016, is a partnership of QuTech with Zurich Instruments and ETH Zurich” (annual report 2015).
Are we thus to count this funding as American, or are we to share it between the Netherlands and Switzerland? But if we choose the second option, then are we not losing some information as, at the end of the project, the U.S. also will benefit from the research funded?
Using the QuTech case as well as others, on the one hand, and building upon the excellent International Conference on Quantum Computing (ICoCQ) held in Paris at the end of November 2018 – the papers presented there as well as discussions with scientists – on the other, we identified crucial characteristics of the Race for Quantum.
The major features of the Race for Quantum we must consider are as follows (a minimal data-structure sketch follows the list):
Existence of a public comprehensive strategic framework (or not) for a given country;
Yearly usual public research funding for a given actor;
Linkages Public-Private, Industry-Research, Finance-Industry-Research (notably through various stages of venture capital);
Linkages across sovereign boundaries (which implies being able to then consider industrial risks, as well as sovereign national security risks);
Onset of efforts (when did it start?), as time and accumulated funding, research, and notoriety matter;
If funding matters, then talents do matter too. Both must be captured;
Considering the shortage of talents, the mapping must allow for capturing, as much as possible, tomorrow’s talents;
Communication matters too (capturing imagination – see previous article), we must thus be able to account for this dimension;
Other elements we are currently developing;
All these elements must be seen in a dynamic light for analysis, i.e. data need to be collected over time.
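As announced above, here is one possible way to structure the data collected for such a mapping. It is a minimal sketch under our own assumptions: the field names are working conventions and the illustrative figures are taken from the QuTech example above, not from a canonical schema.

```python
# A possible minimal record structure for collecting race data over time.
# Field names are working assumptions; figures are illustrative (QuTech case above).
from dataclasses import dataclass

@dataclass
class FundingEdge:
    source: str         # funder: government, programme, company or fund
    target: str         # funded actor: lab, research centre or company
    year: int           # characteristic 9: data collected over time
    amount_musd: float  # yearly committed funding, million USD (1.0 if only the link is known)
    linkage: str        # e.g. "public comprehensive framework", "industry-research", "cross-border"

edges = [
    FundingEdge("Government of the Netherlands", "QuTech", 2015, 16.9, "public comprehensive framework"),
    FundingEdge("Intel", "QuTech", 2015, 5.0, "industry-research"),
    FundingEdge("IARPA (U.S.)", "QuTech", 2016, 1.0, "cross-border public-research"),
]
print(sum(e.amount_musd for e in edges if e.target == "QuTech" and e.year == 2015))  # 21.9
```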
We shall need to integrate as many of these specificities as possible in a state of play to make it meaningful in terms of the race. This will then allow us to monitor the race properly.
It should be highlighted, however, that scientific discovery and engineering creativity are not necessarily a consequence of the amount of funding available or of the number of academic papers published. While these two elements are useful measurements of the degree of commitment of actors to QIS, and potentially increase the odds of seeing the most committed at the top of the race, there is no inevitability here. Revolutionary way(s) forward in QIS may very well emerge from a small lab and/or from a genius not (yet) integrated within the race, or integrated only as a small player.
The Tool: Dynamic Graph Visualisation Software
As we need to consider items as well as linkages among them, then this means we can represent our problem of mapping our actors and their interactions in a graph:
“A network [or graph] is a set of items,… vertices or sometimes nodes, with connections between them, called edges” or ties. (Mark Newman, “The Structure and Function of Complex Networks“, SIAM Review 45, 2003, 167–256, pp. 168-169).
As a result, we shall be able to benefit from graph theory – should it be needed – as well as from related tools.
In our case, we shall use the open-source and free Gephi, a “visualization and exploration software for all kinds of graphs and networks”, as it also allows for dynamic graph analysis, which is necessary for our purpose. This is the same software we use to map issues and for influence analysis for scenarios, as well as to identify indicators for warning.
When mapping the race to quantum, one measure of the importance of the actors will be expressed through the size of the nodes, ranked according to the funding received. In other words, the more funding an actor or a framework receives, the larger the node. All other nodes are resized accordingly. For mathematically savvy readers, this means that the size of the nodes is ranked according to weighted in-degrees.
Similarly, the thickness of an edge (the arrow linking nodes) represents the yearly amount of funding and varies relative to all the yearly amounts of funding in the mapping.
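As a rough illustration of this visual encoding – not the actual tooling behind the videos below, which were produced with Gephi – the following Python sketch computes the same quantities with networkx. Actor names and amounts are illustrative, taken from the QuTech example.

```python
# A minimal networkx sketch of the encoding described above: node size from
# weighted in-degree (total yearly funding received), edge width from funding.
import networkx as nx

G = nx.DiGraph()
# Edge weights: yearly funding flows in million USD.
G.add_edge("Government of the Netherlands", "QuTech", weight=16.9)
G.add_edge("Intel", "QuTech", weight=5.0)
G.add_edge("IARPA (U.S.)", "QuTech", weight=1.0)  # amount unknown: weight 1 (see convention below)

weighted_in_degree = dict(G.in_degree(weight="weight"))
node_sizes = {n: 10 + 5 * d for n, d in weighted_in_degree.items()}
edge_widths = {e: G.edges[e]["weight"] for e in G.edges}

print(weighted_in_degree["QuTech"])  # 22.9 -> QuTech drawn as the largest node
```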
Mapping the actors of the race for quantum
Considering the scope and breadth of the map, we here focus only on a couple of actors, which also aims at demonstrating the value of using dynamic graphs and of integrating the characteristics identified above.
We shall first detail the Netherlands, QuTech and QuSoft map, to which we added the EU funding to be complete.
We shall then add a partial mapping for Germany, focusing exclusively on the latest government decision regarding a comprehensive framework, without fully detailing all German actors. We shall then similarly add the U.S., again focusing on the American Government’s effort at launching a comprehensive quantum framework, thus including neither unknown military and classified efforts, nor private involvement. Then, again for the sake of comparison, we shall add China as exhaustively as possible, using mainly the excellent report written by Elsa B. Kania and John K. Costello for CNAS. These data should ideally be revised to include a couple of missing elements, either related to our framework or to evolutions that took place since the CNAS report was published.
Then, to give at least one example of the importance of private high-tech research, we shall include the American IBM.
Finally, because we have here a potential disrupter of the race, notably when the competition reaches its later stages, we shall add the mega high-tech Vision Fund, launched by the Japanese bank Softbank.
Quantum in The Netherlands: QuTech and QuSoft
Our first mapping will focus upon the Netherlands and QuTech, using the data and sources detailed above, as well as later efforts, this time in terms of developing quantum software through the dedicated research centre QuSoft.
As the Netherlands is located in the EU, to get a proper mapping we also need data related to EU investment in quantum, as detailed below.
Edges are weighted according to yearly funding, in millions of USD (converted at the time of writing), when data is available. When it is not, a weight of 1 is attributed to show the existence of a relationship. Only funding committed in programs is taken into account, which explains why some edges disappear over time.
For the period 2010-2028, the Race to Quantum for the Netherlands, QuTech and QuSoft, considering characteristics 1 to 5 listed above, as well as 9 (dynamics), looks like the video below.
The Race to Quantum: The EU and the Netherlands – Video 1
The European Union: The Quantum Flagship
Prior to the launch of a coordinated strategy, according to Freeke Heijman-te Paske (Ibid., slide 8), the EU, through various programs of the European Commission, spent on quantum technologies: €17.5 ($19.9) million between 1997 and 2002; €30.5 ($34.7) million between 2002 and 2007; €45.6 ($51.8) million between 2007 and 2014; and €31.8 ($36.2) million between 2014 and 2018.
On 29 October 2018, the EU launched its Quantum Flagship, a €1 billion ($1.1476 billion), 10-year initiative. However, the EU funds only half of the overall amount, and the home country of the labs applying for funding will have to finance the other half (Davide Castelvecchi, “Europe shows first cards in €1-billion quantum bet“, Nature, 29 Oct 2018; official EU page on Quantum Flagship). Thus, the purely EU funding truly amounts to only €500 million over 10 years.
The EU Quantum Flagship is built around five dimensions: “Quantum Communication (QComm), Quantum Computing (QComp), Quantum Simulation (QSim), Quantum Metrology and Sensing (QMS), and finally, Basic Science (BSci)”, which slightly differs from the U.S. approach, but where we, nonetheless, find the same fundamental areas (White House, National Strategic Overview For Quantum Information Science, 2018). For the “ramp-up” phase, which should last three years, i.e. until September 2021, 20 projects were selected with an overall budget of €132 million, across all quantum technologies (press release).
Out of the €100 million – or rather €50 million – per year available over 10 years, the €132 million ($150.4 million) funding for the first 3 years would mean that €168 million (€84 million for the EU and €84 million for the member states) have not yet been invested. One may wonder why there is such a discrepancy, and what the way forward is.
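A quick back-of-the-envelope check of this discrepancy, in Python (values in million EUR; the three-year ramp-up duration is taken from the press release quoted above):

```python
# Back-of-the-envelope check of the Flagship figures (values in million EUR).
total_flagship = 1000                 # €1 billion over 10 years, half EU / half member states
per_year = total_flagship / 10        # €100M per year (€50M EU + €50M member states)
ramp_up_years = 3                     # ramp-up phase, until September 2021
available = per_year * ramp_up_years  # €300M nominally available over the ramp-up
committed = 132                       # budget of the 20 selected ramp-up projects
print(available - committed)          # 168.0 -> €84M EU share + €84M member-state share
```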
This could potentially start highlighting two related problems that could hit regions, countries and companies unequally: first, the relative absence of talents, and second, the lack of an ecosystem thriving enough to be conducive to proper research and innovation in the field, and to applications and usage. In the specific case of EU funding, the notoriously heavy, complicated, costly and peculiar procedures to apply for funding – even more so in the case of the Flagship if they have to be paralleled by a similar process within member states – may also play their part.
As far as talents are concerned, the Quantum Flagship aims at involving “the quantum community at large, with over 5000 European researchers in academia and industry, searching to place Europe at the forefront of Quantum innovation” (press release). We note here an interesting discrepancy in terms of figures. Indeed, Cade Metz, of the New York Times, pointed out that “By some accounts, fewer than a thousand people in the world can claim to be doing leading research in the field” (“The Next Tech Talent Shortage: Quantum Computing Researchers“, 21 October 2018). Meanwhile, Todd Holmdahl, Corporate Vice President, Quantum, Microsoft Corporation estimates in his Written Testimony to the U.S. Senate Committee on Energy & Natural Resources (Hearing to Examine Department of Energy’s Efforts in the Field of Quantum Information Science, September 25, 2018), that:
“Today, fewer than one in 10,000 scientists, and even fewer engineers, have the education and training necessary to leverage quantum tools”.
Thus, educating scientists, engineers as well as more broadly potential users for quantum technologies is fully a part of the race to quantum and could derail best efforts if not considered.
Germany
In August 2018, Germany announced a €650 million quantum initiative ($745.9 million – rate 7 Nov 2018), the framework program “Quantentechnologien – von den Grundlagen zum Markt” (Quantum technologies – from basics to markets – see the official 48-page pdf), which covers the years 2018-2022, i.e. four years (see also Andreas Thoss, “€650 million for quantum research in Germany“, LaserFocusWorld, 28 September 2018). This program is a combined effort of the German Federal Ministry of Education and Research (BMBF), the Ministry of Economics, the Ministry of the Interior, and the Ministry of Defence (Thoss, Ibid.).
Added to the €100 million per year ($114.7 million) of governmental research funding for quantum research (Ibid.), Germany is thus investing €262.5 million per year ($301.24 million) in QIS.
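The yearly figure can be checked quickly (values in million EUR; the four-year duration is the 2018-2022 span quoted above):

```python
# Quick check of the German yearly figure (values in million EUR).
framework_per_year = 650 / 4   # "Quantentechnologien" program, 2018-2022
recurring_per_year = 100       # usual yearly governmental quantum research funding
print(framework_per_year + recurring_per_year)  # 262.5 -> the €262.5M per year quoted above
```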
To this should be added the funding that will be provided by the EU Quantum flagship (see above).
Interestingly, and in line with our point regarding the importance of understanding and imagining a future Quantum world, as well as the necessity to develop a Quantum educated workforce, the German framework includes a dimension related to the explanation of QIS to people (Ibid.).
As a result the race for Quantum now looks as follows:
The Race to Quantum: The EU, the Netherlands and Germany – Video 2
In 2009, the U.S. developed a “Federal Vision for Quantum Information Science”. Then, a federal inter-agency coordination body on quantum research, the Interagency Working Group in QIS, was chartered in October 2014 (Olivier Ezratty, “Qui gagnera la bataille de l’ordinateur quantique ?“ [Who will win the quantum computer battle?], La Tribune, 25 July 2018). It aimed at developing and coordinating policies, programs, and budgets for QIS research and included “participants from the Departments of Commerce, Defense, and Energy; the Office of the Director of National Intelligence; and the National Science Foundation” (Request for Information on Quantum Information Science and the Needs of U.S. Industry, 2015). As a result of these and other programs, in 2016, “federally-funded basic and applied research in QIS” was “on the order of $200 million a year” (Interagency Working Group in QIS, “Advancing Quantum Information Science…“). Note that Freeke Heijman-te Paske (Ibid.) estimates the American yearly funding in 2015 at €360 million (approx. $409 million), which is twice as much as the American Interagency Working Group’s estimate. We shall use the American figure, considering the absence of sources in the Netherlands’ document.
Finally, in the fall of 2018, QIS truly started benefiting from a national strategy across not only federal agencies but also industries – what we call here a comprehensive framework. It is highly likely that the rising tension with China, and Chinese efforts and successes in the field and in other crucial emerging high-tech areas such as AI, played their part in the American concern.
On 24 September 2018, the White House Office of Science and Technology Policy (OSTP) convened a meeting for “advancing American leadership in quantum information science” (QIS), which gathered “administration officials”, including “officials from the Pentagon, National Security Agency, White House National Security Council, NASA and the federal departments of energy, agriculture, homeland security, state and interior”, “academic experts in the field of quantum information science and leading companies including Google and IBM”, as well as “JPMorgan Chase & Co”, “Honeywell International Inc, Lockheed Martin Corp, Goldman Sachs Group Inc, AT&T Inc, Intel Corp, Northrop Grumman Corp” (Nick Whigham, “The international race to build a quantum computer heats up with White House summit“, news.com.au, 25 September 2018; David Shepardson, “Key companies to attend White House quantum computing meeting”, Reuters, 24 September 2018).
On that day the White House published the National Strategic Overview For Quantum Information Science, which aims at “maintaining and expanding American leadership in QIS to enable future long-term benefits from, and protection of, the science and technology created through this research…“.
A couple of days before, on 13 September, the House of Representatives had approved “H.R. 6227: National Quantum Initiative Act” to “provide for a coordinated Federal program to accelerate quantum research and development for the economic and national security of the United States”, and to “authorize three agencies—the Department of Energy (DOE), the National Institute of Standards and Technology (NIST), and the National Science Foundation (NSF)—to together spend $1.275 billion from 2019 to 2023 on quantum research”, i.e. during the first five years of the 10-year initiative (Gabriel Popkin, “Update: Quantum physics gets attention—and brighter funding prospects—in Congress“, Science, 27 June 2018). Meanwhile, the Department of Defence (DoD) also plays a role in promoting and developing QIS under its own budget (Will Thomas, “Trump Signs National Defense Authorization Act for Fiscal Year 2019″, American Institute of Physics, 17 August 2018).
Without counting the Pentagon, we thus have a yearly spending of $255 million, i.e. a 27.5% increase compared with the 2016 overall QIS estimated yearly spending.
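As a quick check of these figures (values in million USD per year):

```python
# Quick check of the U.S. federal figures quoted above (million USD per year).
nqi_per_year = 1275 / 5        # $1.275 billion authorised over 2019-2023
baseline_2016 = 200            # estimated federal QIS spending in 2016
increase = (nqi_per_year - baseline_2016) / baseline_2016
print(nqi_per_year, f"{increase:.1%}")  # 255.0 27.5%
```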
Besides or rather with this Federal program, the U.S. is home to a large number of the biggest companies working on QIS – Alphabet (Google), Intel, IBM, Honeywell, Hewlett Packard, Microsoft, AWS (Amazon), as well as successful and promising startups such as Rigetti, and IonQ.
Focusing on the Federal Program – thus keeping in mind that this does not accurately represent the reality of the U.S. effort in the Quantum Race, as the private sector cannot fundamentally be excluded, as video 5 below will show – our mapping now looks as follows (note that, in the absence of figures on quantum research funding beyond the first five years of the comprehensive framework, we did not add any, although funding will likely exist):
The Race to Quantum: The EU, the Netherlands, Germany and the U.S. – Video 3
To these, we added the 2015 estimate by Freeke Heijman-te Paske, which we tentatively assumed to continue, besides other, newer funding, considering China’s declared intention to become a leader in new technologies, including QIS.
For the most recent funding, we should particularly note that, according to an introduction by Pan Jianwei (the scientist behind the Chinese Quantum effort) at the Hefei Municipal Committee Central Group Theory Study Conference on Quantum Communication (as also quoted by Kania & Costello, fn 83):
“It is planned to invest 100 billion yuan in five years [$14.39 billion over 5 years, i.e. $2.878 billion per year] for the National Laboratory of Quantum Information In Hefei” Pan Jianwei Introduction. Reporter Zhang Pei, Anhui Business Daily, 24 May 2017.
Besides these state and public fundings, the Chinese High Tech giants are also committing themselves to QIS, notably Ali Baba and Baidu (Kania & Costello, Ibid.) (these are not included in the mapping at this stage).
Meanwhile, efforts to develop applications for QIS are promoted from the provinces’ administrations to the People’s Liberation Army (PLA), including through the civil-military fusion approach and through the very large military consortiums (Kania & Costello, Ibid.).
As a result the race for Quantum, focusing on what we know of China’s public funding, now looks as follows:
The Race to Quantum: The EU, the Netherlands, Germany, the U.S. and China – Video 4
As we can see with each of the actors added to our mapping, the outlook of the race changes considerably. What is particularly interesting with the use of a dynamic graph to visually map actors is that amounts of money that are by and large very large, and that we have some difficulty truly comprehending, now become immediately comparable and understandable. Indeed, the use of weighted edges and weighted in-degrees for the size of the actors means that comparisons are automatically embedded in the visual outlook of the map.
Meanwhile, the number of nodes, here mainly research labs and governmental programs, helps us better grasp the idea of ecosystems.
Quantum IBM
To give a better idea of the types of competing and collaborating actors and of the stakes involved, despite our still very incomplete map, we shall add one private IT actor.
We chose IBM, notably because it is a very advanced player in terms of QIS.
On 4 May 2016, it launched the IBM Quantum Experience (News Release). Through this cloud platform, it made its quantum computers available to the public and to clients, thus allowing for their use, which, as we saw, is fundamental in the race to quantum. In 2017, IBM’s quantum computing research became IBM Q, a new division. In December 2018, two 5-qubit and one 14-qubit computers are available for public usage, and one 20-qubit computer is reserved for clients, while a 32-qubit simulator is also online (IBM Q).
According to IBM annual report (published April 2018), “more than 75,000 users have run more than 2.5 million quantum experiments. A dozen clients, including partners JPMorgan Chase, Daimler AG, Samsung and JSR, are now exploring practical applications”. In November 2018, according to IBM data, as shown on the screenshot below, 572,945 experiments were run by various users on their machines (IBM Q Experience).
According to Harriet Green, chairman and CEO of IBM Asia Pacific, “Just in the last five years, IBM has invested over $38 billion in these new capabilities” (Jessa Tan, “IBM sees quantum computing going mainstream within five years“, CNBC, 30 March 2018).
Now the Race to Quantum looks as displayed in the video below.
The Race to Quantum: The EU, the Netherlands, Germany, the U.S., China and IBM – Video 5
The video shows the current predominance of the U.S., thanks to its giant IT industry. Adding the no less giant Chinese digital companies – considering notably their efforts to also offer quantum computing on cloud platforms (e.g. the Alibaba-CAS Superconducting Quantum Computer – SQC), thus competing directly against IBM – as well as accounting for other elements and characteristics of the race, could again change the outlook of the race.
For now, let us turn to another type of actor, finance and more specifically funds.
Vision Fund – Kickstarting Japan, Saudi Arabia and the U.A.E. into quantum technologies
Over 2016 and 2017, the controversial Japanese Softbank created the mega high-tech $100 billion “Vision Fund” (Jonathan Guthrie and Sujeet Indap, “Lex in-depth: SoftBank’s credibility problem“, The Financial Times, 17 December 2018). Notably, Softbank is the major shareholder of none other than Chinese Ali Baba, and held 29.11% of the giant Chinese company on 2 November 2018 (Kristina Zucchi, “The Top 5 Alibaba Shareholders (BABA)“, Investopedia).
Announced on 14 October 2016, Vision Fund‘s first major close occurred in May 2017, and its final close in May 2018 (Arash Massoudi, Leo Lewis, and Patrick McGee, “Daimler leads new investors in closing $100bn Vision Fund“, The Financial Times, 10 May 2018).
At the origin of Vision Fund was the meeting of Masayoshi Son, the billionaire Japanese technology investor, founder, Chairman and CEO of Softbank and Saudi Prince Mohammed bin Salman al-Saud, also known as MBS (Arash Massoudi, Kana Inagaki, and Simeon Kerr, “The $100bn marriage: How SoftBank’s Son courted a Saudi prince“, The Financial Times, 19 October 2016).
As major investors, we thus find not only the Saudi Kingdom but also the U.A.E., two major Gulf countries that must diversify from oil. The fund is “backed by a $45bn commitment from the [Saudi] kingdom’s Public Investment Fund”, which represents 45% of the total ($17 bn in equity and $28 bn in debt), and by a $15 billion commitment from the U.A.E. Abu Dhabi’s Mubadala Investment Company, i.e. 15% of the total ($9.3 bn in debt and $5.7bn in equity) (Andrew Zhan & Adam Augusiak-Boro, “SoftBank: Vision or Delusion”, Equity-Zen, August 2018, using 2017 FT research data). The linkages between Softbank and Saudi Arabia are strong enough to have been reaffirmed on 5 November 2018, despite the Khashoggi affair (Kana Inagaki, Ibid.).
Other investors range from Apple to Daimler through Taiwanese Foxconn (Massoudi et al., “Daimler…”, Ibid.).
Although Vision Fund is interested in all technologies that could “accelerate the information revolution” and not specifically quantum ones (website), considering its size, and the amount of the minimum investment it makes (Andrew Zhan & Adam Augusiak-Boro, “SoftBank: Vision or Delusion”, Equity-Zen, August 2018), it could nonetheless have a mammoth impact on QIS.
Indeed, even though quantum technologies are not mentioned on Vision Fund’s website, specialised media reported in 2017 the Fund’s interest in quantum tech. According to Bloomberg Quint, “Shu Nyatta, who helps invest money for the fund, said the group wanted to find and back the company whose quantum computing hardware or software that runs atop it would become the ‘de facto industry standard’” (Jeremy Kahn, “SoftBank’s Vision Fund Eyes Investment in Quantum Computing,” Bloomberg Quint, 26 June 2017):
“We are happy to invest enough to create that standard around which the whole industry can coalesce,” Shu Nyatta, Vision Fund, reported by Bloomberg, Ibid.
As Vision Fund does not yet seem to have invested in QIS, it is included in the mapping only as a “ready to enter the race” actor. It should not be ignored, however, because it is a potentially very disruptive player considering its weight and its investors. Indeed, we may wonder about the potential political, strategic, financial and industrial consequences of seeing Vision Fund enter massively into the capital of a security-sensitive company, or of it not entering that capital but favouring a competitor, for example one from an adversary country. The potential and changing clout of Saudi Arabia and of the U.A.E. should also be highlighted and deserves a fully detailed strategic analysis (forthcoming).
Here is thus our mapping including the mega Vision Fund. Note that Vision Fund’s edges correspond to capital investments, and not to yearly investments or funding as for the rest of the mapping. We nonetheless kept it this way for the fund, as capital investment also represents continuous influence and future profits.
The Race to Quantum: The EU, the Netherlands, Germany, the U.S., China, IBM and Vision Fund – Video 6
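Anyone reproducing such a mapping has to handle this mix of edge semantics explicitly. A minimal sketch, again with networkx and placeholder figures, is to tag each edge with a kind attribute so that capital investments and yearly funding flows can be filtered or styled separately on the same map.

```python
# Illustrative sketch with placeholder figures: each edge carries both a
# weight and a "kind" attribute, so capital investments (e.g. a fund taking
# equity) and yearly funding flows (e.g. state programmes) can be told apart.
import networkx as nx

G = nx.DiGraph()
G.add_edge("State A", "Quantum programme 1",
           weight=100, kind="yearly_funding")          # placeholder
G.add_edge("Fund B", "Hypothetical quantum start-up",
           weight=500, kind="capital_investment")      # placeholder

capital_edges = [(u, v) for u, v, d in G.edges(data=True)
                 if d["kind"] == "capital_investment"]
print(capital_edges)   # edges to style or analyse separately from yearly funding
```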
Throughout these mappings, we have shown the complexity of the race for quantum technologies, highlighting the importance of mapping it with a proper tool. Further analysis and conclusions would demand completing the mapping, as well as fully including all the characteristics of the race. Considering the stakes, this is a tool each player should use before taking strategic decisions.
Featured Image: “Majoranas on Honeycomb” by Jill Hemman – ORNL Art of Science images feature visualization effects, neutrons research – 2018 Director’s Choice – This visualization illustrates neutrons (blue line) scattering off a graphene-like honeycomb material, producing an excitation that behaves like a Majorana fermion, a mysterious particle that is also its own antiparticle (green wave). The visualization supports research by Arnab Banerjee, Mark Lumsden, Alan Tennant, Craig Bridges, Jiaqiang Yan, Matthew Stone, Barry Winn, Paula Kelley, Christian Balz, and Stephen Nagler. Public Domain.
In late November 2018, just days before the opening of the 2018 UN Climate Change Conference – its 24th meeting (COP 24) – in Katowice, at the very heart of Poland’s coal country, Jair Bolsonaro, the newly elected president of Brazil, announced that his country would not host the following round of negotiations, i.e. COP 25, and that he was contemplating Brazil’s withdrawal from the Paris Climate Accord (“Brazil withdraws candidacy to host UN climate change conference 2019”, XinhuaNet, 29 November 2018).
A few days earlier, California firefighters had finally succeeded in stopping the two megafires that had been ravaging “the Golden State” for almost a month.
Meanwhile, on 1 December, the leaders attending the G20 meeting in Buenos Aires, Argentina, released a joint statement reaffirming their commitment to fighting climate change by upholding the Paris Accord, even though U.S. President Donald Trump refused to endorse the statement (Catherine Lucey and Almudena Calatrava, “Trump alone on climate change as G20 find common ground on climate, migration”, Business Insider, 3 December 2018).
These different political stances draw the political cartography of the way climate change is becoming a political issue. However, they must be seen in the light, first, of the continued growth of greenhouse gases in the atmosphere, which has been neither hampered nor slowed since 2015, and, second, of the international negotiation of the Paris Accord during COP 21. Considering this context, we may wonder whether the different actors really understand the nature of climate change as a profoundly singular threat: climate change is a planetary threat, and thus “something” totally unknown to collective history; it is not present within the memory of humanity.
Thus, the emergence of a new kind of political frame of mind must also accompany the understanding of this new reality.
In this article, we look at the very singularity of climate change and how it imposes a new way of thinking about the relationship between modern societies and a rapidly changing planet. We explain how the new planetary condition is tantamount to a “hyper siege”. Finally, we focus upon the geopolitical consequences that understanding, or misunderstanding, the nature of climate change as a planetary threat has on the political frame of mind.
A new planetary condition
Climate change is not a crisis.
A crisis implies the passage from a given situation to another. This is not what is happening in the case of climate change. On the contrary, the very expression “climate change” encapsulates the fact that, with the industrial revolution and the massive use of carbon fuels, the planetary climate has left the stability zone known as the Holocene, during which human societies developed, and has entered a trajectory of change unknown in its speed and scale in the geophysical history of our planet (James Hansen, Storms of My Grandchildren: The Truth About the Coming Climate Catastrophe and Our Last Chance to Save Humanity, 2009).
The relationship between the human species and our planet started being understood as insecure in 1972, when the Club of Rome, a futurist group composed of bankers, industrialists and economists, published its famous report “The Limits to Growth”, which it had commissioned from a team of scientists at the Massachusetts Institute of Technology (Dennis and Donella Meadows, Jørgen Randers, William W. Behrens III). The report established that the combined pressures exerted on planetary resources by the growth of industrial production, and by the growth of pollution and environmental degradation, would increase the costs of the economic system while decreasing its efficiency, until growth was no longer possible. These twin dynamics would go on until the whole system could no longer support and sustain itself, once the planetary carrying capacity was exhausted and environmental and life conditions fatally degraded. These “limits to growth” were projected to be reached around 2020. This pioneering report opened multiple areas of research, out of which emerged the wider field of research on sustainability and its limits. It was updated in 2004 (Donella and Dennis Meadows, Jørgen Randers, Limits to Growth: The 30-Year Update, 2004).
In 2005, Jared Diamond, building on transversal studies, and thus following the methods pioneered by the Club of Rome, demonstrated with his monumental “Collapse: How Societies Choose to Fail or Survive” how the choice of certain forms of development could be inadequate, given the carrying capacity of the regional environment, and, as a result, lead entire societies to collapse.
This was the “official” start of what we could call “sustainability versus collapse” studies. In this new field, the report “Planetary boundaries: Exploring the safe operating space for humanity”, led by Johan Rockström, director of the Stockholm Resilience Centre (Ecology and Society, 2009), has been a conceptual breakthrough. The research team defined nine “planetary boundaries” that must not be crossed, because crossing them would fundamentally alter the collective life conditions of humanity. If crossed, these thresholds would be nothing but “tipping points” towards deeply changed life conditions on Earth.
The nine boundaries are: “climate change; rate of biodiversity loss (terrestrial and marine); interference with the nitrogen and phosphorus cycles; stratospheric ozone depletion; ocean acidification; global freshwater use; change in land use; chemical pollution; and atmospheric aerosol loading” (Ibid.). The report warns that three of these thresholds, i.e. climate change, the biodiversity crisis and the interference with the nitrogen and phosphorus cycles, have already been crossed. Since this research was published, the world has faced the multiplication of extreme environmental events, which are impacting immense regions, such as the Arctic, as well as the economic development of both the weakest and the strongest economies on Earth, while endangering hundreds of millions of people (Harry Pettit, “‘The ocean is suffocating’: Fish-killing dead zone is found growing in the Arabian Sea – and it is already bigger than SCOTLAND”, Mail Online, 27 April 2017, and Eric Holthaus, “James Hansen’s Bombshell Climate Warning Is Now Part of the Scientific Canon”, Slate.com, 22 March 2016).
Welcome to the planetary hyper siege
Beyond the fundamental importance of scientific research, it must be understood that climate change is a planetary threat, felt through the multiplication of impacts throughout the world. It means that the alterations of the Earth system’s geophysics are turning geophysical conditions against humanity and endangering the very fabric of the conditions necessary for collective life.
This is why climate change, in the words of California Governor Jerry Brown, “is not the new normal, but the new abnormal”. He made this declaration while California firefighters were waging a desperate fight against the two megafires ravaging California (“Gov. Jerry Brown says massive fires are “the new abnormal” for California”, The Week, 11 November 2018).
In a previous article, we explained that climate change was tantamount to a “long planetary bombing” (Jean-Michel Valantin, “Climate Change: the Long Planetary Bombing“, The Red (Team) Analysis Society, 18 September 2017). This qualification is truer than ever, but needs to be reinforced through the idea of “hyper siege”. This means that contemporary societies are literally being “immersed” in the new and adverse geophysical conditions that are besieging them (Jean-Michel Valantin, “Hyper Siege: Climate Change versus U.S. National Security”, The Red (Team) Analysis Society, 31 March 2014, and Clive Hamilton, Defiant Earth: The Fate of Humans in the Anthropocene, 2017).
For example, while the ocean is ever more rapidly submerging Bangladesh, forcing tens of millions of people to flee rural lands, the coupling of intense and repeated droughts with the U.S.-China trade war puts U.S. agriculture under growing pressure (see Jean-Michel Valantin, “Climate change, a geostrategic issue? Yes!” and “The US Economy, Between the Climate Hammer and the Trade War Anvil – The US Soybean Crop Case”, The Red (Team) Analysis Society, 8 October 2018). In both cases, the vulnerabilities of societies and of their economies are being put under a permanently growing climate pressure that will neither stop nor abate. In other words, planetary conditions are becoming a threat to the very conditions upon which modern societies depend.
The geopolitical consequences of understanding or misunderstanding the nature of the planetary threat
Understanding the new planetary condition implies a new political frame of mind. This frame of mind must make it possible to think of the evolution of modern societies in relation to the “Defiant Earth” as being in a constant state of both flux and danger. In other words, it means that political and economic decision-makers and actors have to develop a worldview centred on the idea of change and adaptation, which is not that remote from the way a strategist would think (Jean-Michel Valantin, “Strategic Thinking in the Russian Arctic: Turning Threats into Opportunities (part 1 and 2)”, The Red (Team) Analysis Society, 19 December 2016).
For example, the rapid warming and geophysical transformation of the Arctic is motivating Russian, Chinese, American and Canadian political, economic and military authorities to develop economic, industrial, energy and military strategies aimed at adapting their different national interests to climate change (Jean-Michel Valantin, “Militarizing the Warming Arctic – The Race to Neo-Mercantilism(s)“, The Red (Team) Analysis Society, 12 November 2018). This adaptation of the Arctic countries’ policies to the geophysical change of the Arctic signals the integration of the Earth system’s state of rapid change into the worldview of political authorities.
This new political frame of mind is the key to striving for, and succeeding in finding, adaptive and mitigating responses in the face of the planetary threat. Failing to acquire it is not an option.
Featured image: A wildfire approaches Naval Base Ventura County: NAVAL BASE VENTURA COUNTY, Calif. (May 3, 2013) Naval Base Ventura County has evacuated some residents due to smoke concerns as a fast-growing wildfire along the Pacific Coast Highway northwest of Los Angeles has forced residents to leave the area. (U.S. Navy photo/Released) 130503-N-ZZ999-003 – Public Domain.