Modeling for Dynamic Risks and Uncertainties (1): Mapping Risk and Uncertainty

(This article is a fully updated version of the original article published in November 2011 under the title “Creating a Foresight and Warning Model: Mapping a Dynamic Network (I)”). Mapping risk and uncertainty is the second step of a proper process to correctly anticipate and manage risks and uncertainties. This stage starts with building a model, which, once completed, will describe and explain the issue or question at hand, while allowing for anticipation or foresight. In other words, by the end of the first step, you have selected a risk, an uncertainty, a series of risks and uncertainties, or an issue of concern, with its proper time frame and scope, for example, what are the risks and uncertainties to …


The Khashoggi Mystery and the Need for a Wake-Up Call – The Red (Team) Analysis Weekly – 25 October 2018

Each week our scan collects weak – and less weak – signals for global changes, national and international security, political and geopolitical risk of interest to private and public actors.

Editorial

The New York Times’ article “Why Jamal Khashoggi’s Killing Has Resonated” by Megan Specia ponders what many have been wondering lately. Why on earth would the murder of Mr Khashoggi, definitely an atrocious crime, definitely terrible for his family and definitely wrong, yet an event that does not obviously belong to international relations, and even less to major historical events, take center stage not only in the media but also for international actors, be they public or private?

A potential attack on press freedom or the denunciation of Saudi Arabia’s human rights breaches cannot be a sufficient answer, considering the number of journalists murdered or jailed on the one hand, and the many countries’ breaches of human rights on the other. Actually, most of the time, these events stir hardly any reaction.

Megan Specia (Ibid.) gives an answer in four main points: “Mr. Khashoggi was a prominent writer with powerful friends”; “A killing inside a consulate, often a place of refuge, is shocking”; “Leaks to Turkish media kept the story in the headlines”; “The Saudi crown prince had already set the stage for tense geopolitics”.

Her first and last points are certainly the most interesting, especially read together, as they point towards a factional war within Saudi Arabia, with ramifications outside the country and manipulation of the media and public opinion. Worryingly, the propaganda operation – assuming there was one – worked extremely well, with foreign heads of state, diplomats and CEOs falling into the trap and becoming pawns in a game they do not master.

There is, however, also another point that must be made, or at least pondered, about the Khashoggi affair and its resonance, a point related to international public opinion: increasingly, important, even crucial, events and dynamics are completely downplayed or stir absolutely no interest when, on the contrary, irrelevant matters do.

To take a very easy example, extreme weather events pile up worldwide, while the IPCC issued its sternest and most urgent warning ever, yet it feels as if nobody were really concerned. The amazing hailstorm over Rome, on 21 October, was not even picked up by the Weekly’s crowdsourcing algorithm, and did not make international news, at least not anywhere on a par with Mr Khashoggi’s murder. Yet, climate change impacts are incredibly more important, for the whole world and for each and every human being, than what happened in Saudi Arabia’s consulate.

Meanwhile, the Cold War is finally coming to an end in East Asia, artificial intelligence and quantum computing seem to point towards the birth of a completely new paradigm, tensions between the U.S. and China are high indeed, and so on.

Yet, people prefer being fascinated with a murder.

Why this is happening deserves to be pondered because, considering the stakes, our very survival could depend on it.

However unpalatable, we may wonder if the information overload created by the world wide web, and the way major high-tech actors’ interests end up favouring very low-quality content, where analysis is disappearing in favour of opinion, does not bear a large part of the responsibility for what is happening.

We may also wonder if the very real and serious and threatening stakes at hand are not so frightening that people just prefer to ignore them in a mad rush forward, seizing any piece of information that could assuage their rising anxiety. In that case, the fascination with Mr Khashoggi’s murder would be a symptom of denial and escapism.

In both cases, after proper and detailed analysis, responses must be designed, given and truly implemented.

Should such new dynamics take place, then the sad murder of a journalist would have served as a wake-up call and, after all, become truly a historical event.

Find out more on horizon scanning, signals, what they are and how to use them:

“Horizon Scanning and Monitoring for Anticipation: Definition and Practice”.

Read below our latest complimentary Weekly horizon scanning. 

Each section of the scan focuses on signals related to a specific theme: world (international politics and geopolitics); economy; science; analysis, strategy and futures; AI, technology and weapons; energy and environment. However, in a complex world, categories are merely a convenient way to present information, as facts and events interact across boundaries.

Read the 25 October 2018 scan

The Weekly is the complimentary scan of The Red (Team) Analysis Society. It focuses on political and geopolitical uncertainty, on national and international security issues.

The information collected (crowdsourced) does not mean endorsement but points to new, emerging, escalating or stabilising problems and issues.

Featured image: Antennas of the Atacama Large Millimeter/submillimeter Array (ALMA), on the Chajnantor Plateau in the Chilean Andes. The Large and Small Magellanic Clouds, two companion galaxies to our own Milky Way galaxy, can be seen as bright smudges in the night sky, in the centre of the photograph. This photograph was produced by European Southern Observatory (ESO), ESO/C. Malin [CC BY 4.0], via Wikimedia Commons.

Shaping the Security of the Cyber Future – Agora 41, a Strategic Assembly for the ANSSI

As we enter the “fourth industrial revolution”, the age of digital transformation, a new “AI-world” and the “second quantum revolution”, national and international security must adapt. It must do so by anticipating this future world, avoiding surprises and threats both new and old, while seizing the immense opportunities offered by what is no less than a change of paradigm (for the labels, respectively, Klaus Schwab, World Economic Forum; Helene Lavoix, The Future Artificial Intelligence – Powered World series, The Red (Team) Analysis Society; Jonathan P. Dowling, Gerard J. Milburn, “Quantum Technology: The Second Quantum Revolution”, 13 Jun 2002, arXiv:quant-ph/0206091v1).

Access English version

Strategy regarding cyberspace and cybersecurity varies according to countries – and actors. It is handled in different ways by different types of agencies. After briefly presenting the main French, British and American state actors in cybersecurity, we shall focus on the French perspective and the ANSSI, its goals and its recent outreach initiative, Agora 41.

France, the UK and the U.S. – a brief overview

In France, the Agence Nationale de la Sécurité des Systèmes d’Information (ANSSI), created on 7 July 2009, deals with the security of the digital world. It is the national authority for all matters related to the defence and security of information systems and, as such, leads the French National Strategy for Digital Security (2015). Nonetheless, other dimensions of cyberspace remain under the authority of other parts of the state, notably the Ministry of the Interior and the Ministry of Defence, which plans a €1.6 billion cybersecurity budget for 2019-2025, while its cyber defence command, created in 2016, will see an increase in personnel spending (Benjamin Hue, “La France va renforcer son arsenal contre la cybercriminalité“, RTL, 24 January 2018). A new five-year national cybersecurity strategy, with a clear overall budget, is necessary and could be in preparation (Ibid.).*

The ANSSI corresponds more or less to the British National Cyber Security Centre (NCSC), part of GCHQ, opened in October 2016 and officially launched on 14 February 2017, and fully participating in the 2017 CyberUK strategy (launch of the NCSC, video and documents; Reuters, “Britain to spend 1.9 billion pounds on boosting cyber defenses“). The overall UK cybersecurity budget for all ministries (not counting any potential budget for cyber retaliation and attacks) reaches £1.9 billion for 2017-2022 (“Chancellor’s speech at the National Cyber Security Centre opening“, 14 February 2017; Reuters, Ibid.).

The ANSSI and the NCSC are the heirs of state institutions’ past cryptographic missions. The ANSSI is the latest child of the Direction Technique des Chiffres (DTC), created in 1943 in Algiers (Histoire de l’ANSSI). For its part, the NCSC, through GCHQ, is rooted in the famous Bletchley Park, which, thanks notably to Turing, the team of codebreakers and the Bombe machines, defeated Enigma and thus contributed to the Allies’ victory during World War 2. Before that, its origins go back to the codebreaking efforts of the Admiralty and the War Office during World War 1 (e.g. GCHQ, “The story of Signals Intelligence 1914-2014”).

As for the United States, its federal cybersecurity budget of $15 billion for 2019 dwarfs European efforts, but must be shared among all the agencies with cyber elements, from the Pentagon to NASA to the Small Business Administration (John Slye, “The Fy 2019 Budget Increases Cybersecurity Funding By Nearly $600 Million“, Deltek, 28 February 2018).

U.S. Federal Cybersecurity Funding FY 2017-2019 by John Slye, “The Fy 2019 Budget Increases Cybersecurity Funding By Nearly $600 Million“, Deltek, 28 February 2018

However, and despite the famous National Security Agency/Central Security Service (NSA/CSS), no new agency or unified centre is dedicated to the new cyber world and its security, as has been done in France and the UK (David H. Petraeus, “The Case for a National Cybersecurity Agency“, Belfer Center, 5 September 2018). The Office of Cybersecurity and Communications of the National Protection and Program Directorate (NPPD) within the Department of Homeland Security (DHS) could be seen as approaching the British or French system. However, as an “Office”, it does not have the autonomy, weight and leadership found in Europe. Furthermore, given its location and the number of other agencies involved in cybersecurity, the OCC/NPPD is very likely to devote time, resources and energy to administrative skirmishes and quarrels.

That said, the American cybersecurity budget remains very large indeed. Furthermore, the United States benefits from a “cyber-ecosystem” that is a formidable asset. This ecosystem is created by the federal cyber budget and the agencies and offices benefiting from it, the GAFA and other companies such as Intel, NVIDIA and IBM, to name only a few, Silicon Valley, patriotic and concerned billionaires, and world-class universities, as shown by MIT’s $1 billion initiative “to address the global opportunities and challenges presented by the prevalence of computing and the rise of artificial intelligence (AI)”, which includes a $350 million gift from Stephen A. Schwarzman, CEO of Blackstone (MIT Review, “MIT reshapes itself to shape the future“).

If Eisenhower underlined the importance of the military-industrial complex for understanding U.S. national security (Military-Industrial Complex Speech, Dwight D. Eisenhower, 1961), we may now also have to reckon with a government-high-tech complex. The future “JEDI” program for the Department of Defense can only reinforce this trend (Helene Lavoix, “Artificial Intelligence, Computing Power and Geopolitics (2)“, The Red (Team) Analysis Society, 25 June 2018; Shaun Nichols, “US JEDI military cloud network is so high-tech, bidders will have to submit their proposals by hand, on DVD“, The Register, 27 Sep 2018).

For its part, NATO is working on setting up a new cyber military command centre, which should be ready by 2023 (Robin Emmott, “NATO cyber command to be fully operational in 2023“, Reuters, 16 October 2018).

A complete and more detailed perspective should notably include China.

The ANSSI, from Strategy to Anticipation and Outreach

As the leader of the French cybersecurity strategy, the ANSSI aims to achieve five main goals (website):

  1. “Guarantee national sovereignty”: notably the defence of fundamental national interests in cyberspace.
  2. “Provide a strong response against acts of cyber-malevolence”: promote the use of cyberspace and protect citizens by reacting firmly against any type of cybercrime.
  3. “Inform the general public”, i.e. raise awareness of digital security.
  4. “Make digital security a competitive advantage for French companies”.
  5. “Strengthen France’s voice internationally”, i.e. French international influence, through shaping norms, promoting global cyber stability and promoting European autonomy.

Furthermore, the ANSSI must have a strong strategic foresight and anticipation activity across all time horizons, in order to be able to ensure the security of the new emerging world while facing the very concrete threats and risks of the present.

Indeed, for example, among many impacts, quantum computing will completely unsettle the secure transmission of data, while cities and companies making abundant use of artificial intelligence (through deep learning) will have to be secured. Quantum communication, for its part, seeks, for example, to develop new quantum networks upon which a quantum internet could be built in the future (Edd Gent, “From Quantum Computing to a Quantum Internet—A Roadmap“, SingularityHub, 22 October 2018). Quantum computing, or more broadly quantum technologies, and artificial intelligence, each accelerating and disrupting the other, as we saw (“The Coming Quantum Computing Disruption, Artificial Intelligence and Geopolitics (1)”, 15 October 2018), will create new cyber challenges for which agencies, companies and citizens must be prepared.

The videos below illustrate a possible future and its security stakes (French trailer of the TV series Person of Interest season 4 by J.J. Abrams and official English trailer; NVIDIA GTC China 2017 Keynote Recap, notably the part on smart cities).

 

Meanwhile, as the adverse multi-dimensional impacts of climate change spread and intensify, the consequences for cybersecurity must also be taken into account.

As the Sénat underlines,

“One of the axes selected in the ANSSI strategy for the 2016-2020 period, entitled ‘knowledge and anticipation’, aims to strengthen its capacity to carry out foresight work, to anticipate new threats and to favour the emergence of new technologies or new uses likely to have an impact on information security.” (“Projet de loi de finances pour 2018 : Direction de l’action du Gouvernement : Coordination du travail gouvernemental”, 23 November 2017)

In this framework, the ANSSI launched an original outreach programme, Agora 41, within which 41 experts were selected and invited to take part in a new experiment aimed at developing innovative solutions to support the agency in its mission.

Five themes were selected to serve the achievement of the cybersecurity strategy and its goals, while respecting the imperatives of strategic foresight.

  • Imagining the Cyber-World and its Security
  • Enter the GAFA and the BATX: New rules for a new game on a new board?
  • Winning the Talent War
  • Cyber-cohabitation
  • Enabling a Victorious Cyber-Ecosystem for Security

Each member of Agora 41 chose one core theme, while also having the possibility to interact on the other questions. This system seeks to allow for fruitful discussions, with horizontal exchanges across questions.

Together, these efforts could help shape not only future cybersecurity, but also our cyber future.

————

* It is nearly impossible, from the outside, to evaluate the ANSSI’s budget precisely, given its “limited budgetary autonomy” within the SGDSN (“Projet de loi de finances pour 2018 : Direction de l’action du Gouvernement : Coordination du travail gouvernemental“, 23 November 2017). As pointed out by the Sénat (Solutions numériques, “ANSSI : un rapport sénatorial préconise d’élargir son autonomie de gestion budgétaire“, 19 April 2018), this partial autonomy contributes to obscuring vital needs and to hampering the ANSSI’s efficiency by depriving it of vital resources, all the more so as its missions have been, and will be further, enlarged in the foreseeable future. Furthermore, the risk of administrative quarrels and hassle is increased.

Disclaimer: The author takes part in the Agora 41 effort but remains independent in her thinking, a sine qua non condition for the success of the ANSSI’s outreach initiative. The views expressed in this report represent the views and interpretations of the author, unless otherwise stated. This article does not imply endorsement of any policy, programme or regulation by the ANSSI.

About the author: Dr Helene Lavoix, PhD Lond (International Relations), is the Director of The Red (Team) Analysis Society. She is specialised in strategic foresight and warning for national and international security. She currently focuses on artificial intelligence, quantum computing and security.

Featured Image: The Argonne-led “Multiscale Coupled Urban Systems” project aims to help city planners better examine complex systems, understand the relationships between them and predict how changes will affect them. The ultimate goal is to help officials identify the best solutions to benefit urban communities. (Image by Argonne National Laboratory.)

Shaping the Security of the Cyber Future – Agora 41, Strategic Outreach for the French National Cybersecurity Agency

Access the French version

As we enter the “fourth industrial revolution”, the age of the digital transformation, a new emerging “AI-world”, and the “second quantum revolution”, national and international security must adapt. It must do so by anticipating this future world, avoiding surprises related to new – but also old – threats and dangers, while seizing the immense opportunities offered by what is no less than a change of paradigm (For the labels, respectively, Klaus Schwab, World Economic Forum, Helene Lavoix, The Future Artificial Intelligence – Powered World series, The Red (Team) Analysis Society, Jonathan P. Dowling, Gerard J. Milburn, “Quantum Technology: The Second Quantum Revolution”, 13 Jun 2002, arXiv:quant-ph/0206091v1).

The strategy related to cyber space and cyber security varies according to countries – and actors. It is handled in various ways by different types of agencies. After having briefly presented the main French, British and American state actors for cyber security, we shall focus on the French outlook and present the ANSSI, its goals, and finally its new outreach initiative, Agora 41.

France, the UK and the U.S. – a brief overview

In France, the Agence Nationale de la Sécurité des Systèmes d’Information (ANSSI) – National Agency for Cyber Security, created on 7 July 2009 – deals with the security of the cyber world. It is the national authority for all matters related to the defence and security of information systems and, as a result, leads the French National Strategy for Digital Security (2015). Nonetheless, other cyber dimensions remain under other state authorities, notably the Ministry of the Interior and the Ministry of Defence, which plans a €1.6 billion cybersecurity budget for 2019-2025, while its cyber defence command, created in 2016, will see an increase in personnel spending (Benjamin Hue, “La France va renforcer son arsenal contre la cybercriminalité“, RTL, 24 January 2018). A new five-year national cyber strategy with a clear overall budget is necessary and could be forthcoming (Ibid.).*

The ANSSI more or less corresponds to the more recent British National Cyber Security Centre (NCSC), part of GCHQ, opened in October 2016 and officially launched on 14 February 2017, and fully participating in the 2017 CyberUK strategy (launch of the NCSC, video and documents; Reuters, “Britain to spend 1.9 billion pounds on boosting cyber defenses“). The overall UK budget for cybersecurity across all ministries (not including any potential budget for cyber retaliation and attacks) reaches £1.9 billion for 2017-2022 (“Chancellor’s speech at the National Cyber Security Centre opening“, 14 February 2017; Reuters, Ibid.).

Both the ANSSI and the NCSC are heirs to the past cryptographic missions of state institutions. The ANSSI is the latest child of the Direction Technique des Chiffres (DTC), created in 1943 in Algiers (ANSSI History). For its part, the NCSC, through GCHQ, is grounded in the famous Bletchley Park, which, notably thanks to Turing, the team of codebreakers and the Bombe machines they created, defeated the German Enigma machine and thus contributed to the Allies’ victory during World War 2. Before that, its ancestry can be traced to the codebreaking efforts at the Admiralty and the War Office during World War 1 (e.g. GCHQ, “The story of Signals Intelligence 1914-2014”).

Cybersecurity in the U.S. benefits from a $15 billion federal budget for FY 2019, dwarfing European efforts, but one to be shared among all agencies with a cyber element, from the Pentagon to NASA to the Small Business Administration (John Slye, “The Fy 2019 Budget Increases Cybersecurity Funding By Nearly $600 Million“, Deltek, 28 February 2018).

U.S. Federal Cybersecurity Funding FY 2017-2019 by John Slye, “The Fy 2019 Budget Increases Cybersecurity Funding By Nearly $600 Million“, Deltek, 28 February 2018

Nonetheless, and despite the famous National Security Agency/Central Security Service (NSA/CSS), no new agency or overarching centre handles the new cyber world and its security in the leading way developed in both France and the UK (David H. Petraeus, “The Case for a National Cybersecurity Agency“, Belfer Center, 5 September 2018). The Office of Cybersecurity and Communications of the National Protection and Program Directorate (NPPD) within the Department of Homeland Security (DHS) could be seen as an effort approaching the British and French systems. However, being an Office, it does not have the autonomy, weight and leadership that may be found in Europe. Furthermore, by its very location and the number of other agencies involved, the OCC/NPPD is very likely to have to devote time, resources and energy to administrative skirmishes and quarrels.

That said, the American cybersecurity budget remains very large indeed. Meanwhile, the U.S. benefits from a “cyber ecosystem” that is a formidable asset. This ecosystem is created by the federal cyber budget and the agencies and offices benefiting from it, the GAFA – and other companies such as Intel, NVIDIA and IBM, to quote only a few – Silicon Valley, patriotic and concerned billionaires, and world-class universities, as shown by the $1 billion MIT initiative “to address the global opportunities and challenges presented by the prevalence of computing and the rise of artificial intelligence (AI)”, including a $350 million gift from Stephen A. Schwarzman, CEO of Blackstone (MIT Review, “MIT reshapes itself to shape the future“).

If Eisenhower pointed out the importance of the military-industrial complex for understanding American national security (Military-Industrial Complex Speech, Dwight D. Eisenhower, 1961), it could well be that we must now also reckon with a similar but larger and deeper cyber-IT-whole-of-government complex. The soon-to-be-awarded “Joint Enterprise Defense Infrastructure” (JEDI) program for the Department of Defense could only reinforce this trend (Helene Lavoix, “Artificial Intelligence, Computing Power and Geopolitics (2)“, The Red (Team) Analysis Society, 25 June 2018; Shaun Nichols, “US JEDI military cloud network is so high-tech, bidders will have to submit their proposals by hand, on DVD“, The Register, 27 Sep 2018).

For its part, NATO is working on setting up a new cyber military command centre, which should be ready by 2023 (Robin Emmott, “NATO cyber command to be fully operational in 2023“, Reuters, 16 October 2018).

A complete and more detailed outlook would notably need to include China, should we want to provide a better global picture.

The ANSSI, from Strategy to Anticipation and Outreach

As the leader of the French cybersecurity strategy, the ANSSI aims at achieving five main goals (website):

  1. Defence of the fundamental national interest in cyberspace.
  2. Promoting cyberspace usage and protecting citizens, with a strong response against any type of cybercrime.
  3. Raising digital security awareness.
  4. Transforming digital security into a competitive advantage for French economic actors.
  5. Strengthening international influence [shaping norms, promoting cyber global stability, promoting European autonomy – my summary].

Furthermore, the ANSSI must have a strong strategic foresight and anticipation activity across all timeframes to be able to provide for the security of the new emerging world, while also dealing with the very concrete threats and risks of the present. Indeed, for example, among many other impacts, quantum computing will completely unsettle the safe transmission of data, while cities and companies abundantly using artificial intelligence, in its deep learning component, will need to be secured. Quantum communication works, for example, at creating quantum networks, upon which a quantum internet could be built in the future (Edd Gent, “From Quantum Computing to a Quantum Internet—A Roadmap“, SingularityHub, 22 October 2018). Quantum computing, or more largely quantum technologies, and AI, each accelerating and disrupting the other, as we saw (“The Coming Quantum Computing Disruption, Artificial Intelligence and Geopolitics (1)”, 15 October 2018), will create completely new cyber challenges that need to be envisioned and for which state agencies, companies and citizens must be prepared.

The videos below, notably when seen together, may help us imagine what the future and its security could look like (trailer of Person of Interest Season 4 by J.J. Abrams; NVIDIA GTC China 2017 Keynote Recap, notably the part on smart cities).

Meanwhile, as the adverse multi-dimensional impacts of climate change spread and intensify, the consequences for cybersecurity must also be considered.

As underlined by the Sénat,

“One of the axes selected in the strategy of the ANSSI for the 2016-2020 period, ‘knowledge and anticipation’, aims to reinforce the capacity to undertake foresight efforts, to anticipate new threats and to favour the emergence of new technologies or new uses which could have an impact in terms of cyber security” (“Projet de loi de finances pour 2018 : Direction de l’action du Gouvernement : Coordination du travail gouvernemental“, 23 November 2017)

In this framework, the ANSSI started an original outreach programme, the Agora 41, where 41 experts were selected and invited to participate in a new experiment at thinking out of the box and across disciplines to support the agency in its mission.

Five themes were selected to serve the achievement of the cyber strategy and its goals, while obeying the strategic foresight imperative.

  1. Imagining the Cyber-World and its Security
  2. Enter the GAFA and the BATX: New rules for a new game on a new board?
  3. Winning the Talent War
  4. Cyber-cohabitation
  5. Enabling a Victorious Cyber-Ecosystem for Security

Each member of Agora 41 chose one core theme, while also having the possibility to interact on other issues. This system aims at allowing for more fruitful discussions and maximum feedback across questions.

Together these enabling efforts could help shape not only the future cybersecurity but also our very cyber future.


* It is nearly impossible, from the outside, to evaluate the ANSSI’s budget precisely, considering its “limited budgetary autonomy” within the Prime Minister’s Secrétariat général de la défense et de la sécurité nationale – SGDSN (“Projet de loi de finances pour 2018 : Direction de l’action du Gouvernement : Coordination du travail gouvernemental“, 23 November 2017). As pointed out by the Sénat (Solutions Numériques, “ANSSI : un rapport sénatorial préconise d’élargir son autonomie de gestion budgétaire”, 19 April 2018), this partial autonomy may only contribute to obscuring vital needs and hindering the ANSSI’s efficiency by denying it vital resources, all the more so as its missions are being, and will be further, enlarged in the foreseeable future. Meanwhile, the risk of administrative quarrels and hassle is heightened.

Disclaimer: The author is part of the Agora 41 effort, but remains independent in her thinking, a sine qua non condition for the success of the ANSSI’s outreach initiative. The views expressed in this report represent the views and interpretations of the author, unless otherwise stated. This article does not imply policy, program or regulatory endorsement by the ANSSI.

Featured Image: The Argonne-led “Multiscale Coupled Urban Systems” project aims to help city planners better examine complex systems, understand the relationships between them and predict how changes will affect them. The ultimate goal is to help officials identify the best solutions to benefit urban communities. (Image by Argonne National Laboratory.)

Of Fire and Storm – Climate Change, the “Unseen” Risk for the U.S. Economy – State of Play

This is an update of the 17 September 2018 release of this article analysing the economic costs of climate change for the U.S. economy in 2018. This update integrates the consequences, and especially the costs, of the super hurricane “Michael”, which hammered the Florida panhandle, then Georgia, North Carolina and Virginia, between 10 and 14 October 2018 (Camilla Domonoske, “Michael Will Cost Insurers Billions, but Won’t Overwhelm the Industry, Analysts Say”, NPR, October 14, 2018).

“Michael” took over from “Florence”, the monster storm that hit and battered the U.S. East Coast on 12 September 2018. It looks like a new climate-related disaster “peak”. It could announce a transition towards possibly worse, considering the last 12 months of hellish climate conditions.

Thus, a major question arises: is climate change becoming a major risk for the U.S. economy? If yes, how should economic actors react (Jean-Michel Valantin, “Climate Change: The Long Planetary Bombing”, The Red (Team) Analysis Society, September 18, 2017)?

The Coming Quantum Computing Disruption, Artificial Intelligence and Geopolitics (1)

On 12 October, Chinese Huawei launched its new Quantum Computing Simulation HiQ Cloud Service Platform (Press Release). On 13 September 2018, the U.S. House of Representatives approved the “H.R. 6227: National Quantum Initiative Act”, with a $1.275 billion budget for quantum research from 2019 to 2023. The Chinese government’s yearly investment in quantum science is estimated at $244 million (CRS, “Federal Quantum Information Science: An Overview”, 2 July 2018). The EU Quantum Flagship plans, so far, to invest €100 million per year, to which national investments must be added. The largest tech companies, be they American, European or Asian, and more particularly Chinese, fund quantum R&D. This heralds the start of a new race for quantum technologies.

Indeed, ongoing scientific and technological innovations related to the quantum universe have the potential to fundamentally alter the world as we know it, while also accelerating and even disrupting more specifically the field of artificial intelligence (AI). Advances in quantum technologies have been dubbed the “Second Quantum Revolution” (Jonathan P. Dowling, Gerard J. Milburn, “Quantum Technology: The Second Quantum Revolution”, 13 Jun 2002, arXiv:quant-ph/0206091v1).

In this first article, we shall explain what this quantum revolution is, then narrow it down to where it interacts with AI, indeed potentially accelerating and disrupting current dynamics. This article is aimed at non-quantum physicists, from analysts to decision-makers and policy-makers, through interested and concerned readers, who need to understand quantum technologies. Indeed, the latter will revolutionise the world in general, and AI in particular, as well as governance, management, politics and geopolitics, notably when combined with AI. We shall use as many real-world examples as possible to illustrate our text.

We shall first explain where quantum technologies come from, i.e. quantum mechanics. We shall then focus upon these quantum technologies – called Quantum Information Science (QIS) – concentrating notably on quantum computing and simulation, but also briefly reviewing quantum communication and quantum sensing and metrology. We shall aim at understanding what is happening, how dynamics unfold and the current state of play, while also addressing the question of timing, i.e. when quantum computing will start impacting the world.

Related

Artificial Intelligence – Forces, Drivers and Stakes

The Quantum Computing Battlefield and the Future – Quantum, AI and Geopolitics (2)

Mapping The Race for Quantum Computing – Quantum, AI and Geopolitics (3)

Finally, we shall look at the intersection between the quantum technologies and AI – indeed the emerging Quantum Machine Learning sub-field or even Quantum AI –  pointing out possible accelerations and disruptions. We shall therefore highlight why and how quantum technologies are a driver and stake for AI.

Building upon the understanding achieved here, the next articles shall delve in more detail into the potential future impacts on the political and geopolitical world.

From Quantum Mechanics to the new Quantum Technologies

Currently, the principles of quantum mechanics are being newly applied to an array of fields, creating new possibilities in many areas.


Quantum mechanics or quantum physics is a scientific discipline, which started at the very beginning of the 20th century, with, initially, work by Max Planck on the colour spectrum (for a rapid and clear summary of the development of the field, read, for example, Robert Coolman, “What Is Quantum Mechanics?“, LiveScience, 26 September 2014).

Quantum mechanics is about “the general expression of the laws of nature in a world made of omnipresent and almost imperceptible particles” (Roland Omnes, Quantum Philosophy: Understanding and Interpreting Contemporary Science, 1999, p.82). This is the reign of the infinitesimally small. Quantum mechanics contributed to a series of scientific changes that struck at the very heart of the way we understand the world. As Omnes put it,

“We are losing the spontaneous representation of the world… common sense is defeated” (ibid.).

Even though common sense was challenged, scientists did not abandon the scientific project and continued their work. Now, the very properties that shocked the scientific community and the new understanding of the world that emerged with quantum mechanics are being used to develop new technologies.

In a nutshell, at the level of the quantum world, we observe a “wave-like nature of light and matter” (Biercuk and Fontaine, “The Leap into Quantum Technology…“, War on the Rocks, Nov 2017). Two resulting properties of quantum systems are then fundamental to the current technological effort, namely superposition and entanglement.

Superposition means that “quantum systems may be (loosely) described as simultaneously existing in more than one place until the system is observed” (Ibid.). Once the system is observed, then the system fixes itself in one place, and one says that “the superposition collapses” (Ibid.).

Entanglement means that “linked particles can be “remotely controlled” no matter how far apart they may be. Manipulate the local partner of an entangled pair and you instantaneously manipulate its entangled partner as well” (Ibid.).
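In standard Dirac notation – a formal restatement of the two properties above, ours rather than drawn from the cited sources – a qubit in superposition and a maximally entangled pair (a Bell state) can be written as follows:

```latex
% A qubit in superposition: a weighted combination of the basis
% states |0> and |1>, with complex amplitudes alpha and beta.
\[
|\psi\rangle = \alpha\,|0\rangle + \beta\,|1\rangle,
\qquad |\alpha|^2 + |\beta|^2 = 1
\]
% A maximally entangled two-qubit Bell state: measuring one qubit
% instantly fixes the outcome for the other, however far apart.
\[
|\Phi^{+}\rangle = \frac{1}{\sqrt{2}}\bigl(|00\rangle + |11\rangle\bigr)
\]
```

Observation collapses the superposition: the qubit is then found in |0⟩ with probability |α|² and in |1⟩ with probability |β|².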

Building notably on these properties, scientists are developing the technological field called the Quantum Information Science (QIS), composed of Quantum sensing and metrology, Quantum communication and Quantum computing and simulation, to which can be added research in Quantum materials. We shall more particularly focus here on Quantum computing.

Understanding Quantum Information Science

Quantum computing and simulation

Quantum computing means harnessing quantum properties, notably superposition and entanglement, “to perform some computation” (CRS, July 2018) in a way that is incredibly faster than what is achieved today by the most powerful High Performance Computing (HPC) capabilities, even the exascale computers, which are currently being built (see Winning the Race to Exascale Computing).

Using quantum computing should be particularly promising for quantum simulations, i.e. “using some controllable quantum system  [the quantum computer] to study another less controllable or accessible quantum system” (Georgescu, et al,  “Quantum Simulation” 2013). In other words, quantum computing is the best approach to studying and simulating systems located at the quantum level and thus displaying quantum properties.

Quantum computing, a development initiated by security concerns

The idea of a quantum computer was developed in 1981 (published in 1982) by American physicist Richard P. Feynman, who thought about using quantum properties to simulate physics and indeed quantum mechanics (“Simulating Physics with Computers“, International Journal of Theoretical Physics, VoL 21, Nos. 6/7, 1982). It was initially mainly of theoretical interest (Simon Bone and Matias Castro, “A Brief History of Quantum Computing”, Imperial College London).

Then, awareness rose that the incredible computing power a functioning quantum computer would command could lead to a “cryptopocalypse”. Indeed, in 1994, mathematician Peter Shor formulated an algorithm, “Shor’s algorithm”, showing that “a quantum computer with a few tens of thousands of quantum bits and capable of performing a few million quantum logic operations could factor large numbers and so break the ubiquitous RSA public key cryptosystem” – the most widely used way to encrypt data transmission (Peter Shor, “Polynomial-Time Algorithms for Prime Factorization and Discrete Logarithms on a Quantum Computer,” 1994, 1995; Seth Lloyd & Dirk Englund, Future Directions of Quantum Information Processing, August 2016, p.6).

It is Shor’s 1994 findings that created the interest in quantum computing, from which Quantum Technologies evolved (Bone and Castro, Ibid.; Lloyd & Englund, Ibid.; Biercuk, “Building The Quantum Future“, video, 2017). The birth of QIS would thus stem from both the fear of and the interest in developing such a quantum computer: Shor’s algorithm would indeed give an incredible security advantage to those benefiting from a quantum computer, as they could break all the codes – present, past and future – of their ‘competitors’, if these actors rely on current classical computing capabilities and current encryption systems.
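To see why Shor’s algorithm is so threatening, the toy sketch below – an illustration of ours, not the quantum algorithm itself – shows the classical reduction at its heart: factoring N reduces to finding the period r of a^x mod N. The period-finding step, done here by brute force, is exactly the part a quantum computer would perform exponentially faster.

```python
from math import gcd
from random import randrange

def find_period(a: int, n: int) -> int:
    """Smallest r > 0 with a**r % n == 1. Brute force stands in
    for the quantum period-finding subroutine of Shor's algorithm."""
    r, value = 1, a % n
    while value != 1:
        value = (value * a) % n
        r += 1
    return r

def shor_classical_reduction(n: int):
    """Toy factoring of an odd composite n via period finding."""
    while True:
        a = randrange(2, n)
        d = gcd(a, n)
        if d > 1:                       # lucky guess: a shares a factor with n
            return d, n // d
        r = find_period(a, n)
        # An even period r with a**(r/2) != -1 (mod n) yields the factors.
        if r % 2 == 0 and pow(a, r // 2, n) != n - 1:
            p = gcd(pow(a, r // 2) - 1, n)
            if 1 < p < n:
                return p, n // p

print(shor_classical_reduction(15))     # e.g. (3, 5)
```

On a classical machine, find_period takes exponential time in the size of n, which is what keeps RSA safe today; Shor showed that a quantum computer can perform this one step in polynomial time.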

What is quantum computing?

Quantum computing is currently being developed. The field faces two main challenges: developing a usable quantum computer – we are now only at the very early stages of building the hardware – and learning to program these new computers.

Qubits, hardware and some of the challenges faced

Classical computers store information as 0s and 1s, the bits or binary digits.

For interested and scientifically-minded readers we recommend, among a host of explanations:

Sam Sattel, “The Future of Computing – Quantum & Qubits“, Autodesk.com blog.

Quantum computers use qubits, with which, “you can have zero, one, and any possible combination of zero and one, so you gain a vast set of possibilities to store data” (Rachel Harken, “Another First for Quantum“, ORNL Blog, 23 May 2018).
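To make this concrete, here is a minimal sketch of ours – a classical simulation with NumPy, not drawn from the sources cited above – of a single qubit: a Hadamard gate puts |0⟩ into an equal superposition, and the measurement probabilities are the squared magnitudes of the two amplitudes.

```python
import numpy as np

# A qubit state is a normalised vector of two complex amplitudes.
ket0 = np.array([1, 0], dtype=complex)              # the state |0>

# The Hadamard gate sends |0> to an equal superposition of |0> and |1>.
H = np.array([[1,  1],
              [1, -1]], dtype=complex) / np.sqrt(2)

state = H @ ket0                                    # (|0> + |1>) / sqrt(2)

# Born rule: each outcome's probability is the squared amplitude magnitude.
probs = np.abs(state) ** 2
print(probs)                                        # [0.5 0.5]

# "Observing" the qubit collapses the superposition to a single outcome.
print(np.random.choice([0, 1], p=probs))            # 0 or 1, each half the time
```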

The short video below (Seeker, 15 July 2018) explains (relatively) simply what qubits, superposition and entanglement are, as well as the very practical challenges faced in building a quantum computer – i.e. the hardware – such as refrigeration, how to control the state of a qubit, and how long information can last inside a qubit, a property called coherence. It then moves to a couple of examples of possible simulations and usage.

For an even better understanding of quantum computing, and although the video is a bit long – 24:15 – we recommend taking the time to watch the very clear, lively and fascinating video by Michael J. Biercuk of the University of Sydney, “Building the Quantum Future“.

Number of qubits, power, and error

Thus, to get a functioning quantum computer, in terms of hardware, you need enough qubits to proceed with your computing, and you need the errors generated by the specificities of quantum computing, notably loss of coherence or decoherence, not to be so serious as to defeat the whole system. The necessity to consider the errors generated by the quantum system used implies imagining, creating and then implementing the best possible quantum error correction, tending towards full quantum error correction. One of the difficulties is that error correction itself consumes qubits, which thus multiplies the number of physical qubits that must be operational.

For example, Justin Dressel, of the Institute for Quantum Studies at Chapman University in California, applied Austin G. Fowler et al., “Surface codes: Towards practical large-scale quantum computation” (2012), to Shor’s algorithm, using as a case study the aim of breaking a strong RSA encryption with a 2048-bit key. He calculated that, for a quantum computer to meet this goal, its minimum qubit number would be 10⁹. Such a machine would then need to run for 27 hours, to “compare with 6.4 quadrillion years for a classical desktop computer running the number sieve”. Of course, as with classical computers, more qubits would reduce the run-time (for the paragraph, Justin Dressel, Quantum Computing: State of Play, OC ACM Chapter Meeting, 16 May 2018).
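A back-of-the-envelope sketch of that overhead, with illustrative figures of our own choosing rather than Dressel’s exact parameters, shows how quickly the physical qubit count escalates:

```python
# Rough logical-to-physical overhead for surface-code error correction.
# Both figures below are illustrative assumptions, not Dressel's numbers.
logical_qubits = 4_000            # assumed order of magnitude for RSA-2048
physical_per_logical = 250_000    # assumed surface-code overhead per logical qubit

total_physical = logical_qubits * physical_per_logical
print(f"{total_physical:.0e} physical qubits")   # 1e+09, i.e. the 10^9 scale above
```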

Actually, we are still quite far from a 10⁹-qubit computer.

The state of play in terms of qubit processors…

As of 16 May 2018, according to Dressel (Ibid.), two main competing implementations (others being in development) are used to obtain physical qubits, and have so far given the following results:

Method 1: Trapped ions – best performance so far:

  • University of Maryland (UMD) / Joint Quantum Institute (JQI)*: 53 qubits

Method 2: Superconducting circuits – best performance so far:

… and quantum simulators running on classical computers

Besides the creation of very real quantum computing hardware, we also have the design and development of quantum computing simulators. These allow researchers and scientists to start experimenting with quantum computing and notably to begin learning to program these computers. Indeed, the specificities of quantum computing demand new ways to program these computers.

For example, Atos used its HPC supercomputers to develop the Atos Quantum Learning Machine (QLM), with appliances ranging from 30 to 40 qubits according to power level (Atos QLM Product). Meanwhile, Atos developed a “universal quantum assembly programming language (AQASM, Atos Quantum Assembly Language) and a high-level quantum hybrid language” (Ibid.).

Other similar efforts are at work, with, for example, the Centre for Quantum Computation and Communication Technology at the University of Melbourne able “to simulate the output of a 60-qubit machine”, but for “only” an instance of Shor’s algorithm (Andrew Trounson, “Simulation Breaks Quantum Computing World Record“, Futurity, 2 July 2018).

As mentioned in the opening paragraph, Chinese Huawei announced on 12 October that it launched its very first quantum computing simulation platform through its cloud service, HiQ (Press release). “The HiQ platform can simulate quantum circuits with at least 42-qubits for full-amplitude simulations” (Ibid.), which would make it slightly more powerful than the Atos QLM. Of course, performance must be tested by scientists before such conclusions may be drawn with certainty. Like Atos, Huawei also developed its own quantum programming framework. Unlike Atos’s system, HiQ “will be fully open to the public as an enabling platform for quantum research and education” (Ibid.). We see here two different approaches and strategies to the development of quantum computing emerging, which do and will matter for companies, state actors and citizens, as well as for the field itself. We shall come back to this point in the next article.
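These 40-to-60-qubit ceilings for simulators follow directly from memory: a full-amplitude simulation must store 2^n complex amplitudes for n qubits. A quick sketch of the arithmetic, assuming 16 bytes per double-precision complex amplitude:

```python
# Memory needed for a full-amplitude state-vector simulation of n qubits:
# 2**n complex amplitudes at 16 bytes each (double-precision complex).
for n in (30, 42, 50):
    tib = 2**n * 16 / 2**40
    print(f"{n} qubits -> {tib:,.2f} TiB")
# 30 qubits -> 0.02 TiB       (~16 GiB: a laptop)
# 42 qubits -> 64.00 TiB      (a large cluster)
# 50 qubits -> 16,384.00 TiB  (~16 PiB: beyond any current supercomputer)
```

Every additional qubit doubles the memory required, which is why clever tricks, such as simulating only one instance of Shor’s algorithm as in the Melbourne case above, are needed to push past the full-amplitude limit.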

When shall we have functioning quantum computers? What is quantum supremacy?

Actually, we already have functioning quantum computers, but their computing power is still weak and they may be considered as prototypes.

Because we already have these prototypes as well as the simulators on classical machines, the current real and relevant question must be transformed into two questions.

1- How powerful does my quantum computer need to be to answer my question or solve my problem?

The first part of our initial timing-related question could be phrased as follows: how powerful does my quantum computer need to be to answer my question or solve my problem?

In other words, the type of computation needed to solve a problem may be achieved more easily and more quickly on a quantum computer with a small number of qubits, but de facto using quantum properties, than on a classical computer, where the very quantum characteristics necessary for solving the problem at hand would demand an enormous HPC, or would just not be feasible. Here, the quantum understanding of the problem under consideration and the algorithm developed become as important as, if not more important than, the quantum hardware problem itself. As a result, current quantum machines and quantum simulations may be considered as already operational.

For example, Vanderbilt University physicist Sokrates Pantelides and postdoctoral fellow in physics Jian Liu developed detailed quantum mechanical simulations at the atomic scale to help the oil industry assess the promise of recovery experiments before they start (Heidi Hall, “Quantum mechanics work lets oil industry know promise of recovery experiments before they start“, Vanderbilt University News, Sep. 27, 2018). They used classical HPC computing facilities at the U.S. National Energy Research Scientific Computing Center of the Department of Energy (DOE). It is likely that, had quantum computers been available to them, these would have facilitated their research. Note that the Oak Ridge National Laboratory (ORNL) of the DOE has a group focusing on Quantum Information Science – sensing, communicating, computing – and is using the Atos Quantum Learning Machine (Atos QLM), a “quantum simulator, capable of simulating up to 40 quantum bits (Qubits)” (Atos Press release, “Atos Quantum Learning Machine can now simulate real Qubits“, 9 April 2018).

As another example, on 4 October 2018, Spanish researchers U. Alvarez-Rodriguez et al. (“Quantum Artificial Life in an IBM Quantum Computer“, Nature, 2018) published the results of their research, according to which they were able to create a quantum artificial life algorithm. Interviewed by Newsweek, Lamata, a member of the scientific team, explained:

“We wanted to know whether emergent behaviors of macroscopic biological systems could be reproduced at the microscopic quantum level,” he said. “What we found in this research is that very small quantum devices with a few quantum bits could already emulate self-replication, combining standard biological properties, such as the genotype and phenotype, with genuine quantum properties, such as entanglement and superposition,” (Hannah Osborne, “Quantum Artificial Life Created for First Time”, Newsweek, 11 October 2018).

The life-creating simulation was realised using “the superconducting circuit architecture of IBM cloud quantum computer”, with “the IBM ibmqx4 quantum computing chip” (Alvarez-Rodriguez et al., Ibid.), i.e. using the IBM 5 Q, which counts 5 qubits with a maximum qubit connectivity of 4 (“Qubit Quality“, Quantum Computing Report).

This simulation illustrates perfectly how quantum computing can be both accelerating and disruptive for artificial intelligence, as we shall synthesise in the third part. Indeed, as pointed out in the research paper’s conclusions and prospects, the successful quantum artificial life algorithm could potentially be combined with the new emerging field of quantum machine learning to pursue “the design of intelligent and replicating quantum agents” (Alvarez-Rodriguez, et al., Ibid.). We would reach here potentially a completely new level of AI.

2- When shall we have quantum computers with such a power that classical computers, even the most powerful, are out-powered?

The second part of our question regarding timing could be rephrased as follows: when shall we have quantum computers with such a power that classical computers, even the most powerful, are out-powered, i.e. when will quantum simulations made on classical computers become irrelevant?

This is what Google called achieving “quantum supremacy”, or crossing the “quantum supremacy frontier”, i.e. finding out “the smallest computational task that is prohibitively hard for today’s classical computers” and then going beyond it thanks to a quantum computer (Sergio Boixo, “The Question of Quantum Supremacy“, Google AI Blog, 4 May 2018). The idea of achieving quantum supremacy is best explained by the following slide from John Martinis’ (Google) presentation “Quantum Computing and Quantum Supremacy” (HPC User Forum, Tucson, April 16-18, 2018).

Slide from John Martinis’ (Google) presentation “Quantum Computing and Quantum Supremacy” (HPC User Forum, Tucson, April 16-18, 2018).

Building upon Google’s slide, Dressel believes we have almost reached “the scale that is no longer possible to simulate using classical supercomputers”. “The current challenge is to find ‘near-term’ applications for the existing quantum devices” (Ibid.).

Figure from a slide from Justin Dressel, Quantum Computing: State of Play, OC ACM Chapter Meeting, 16 May 2018

However, as improvements in the ways quantum simulations are constructed on classical machines are also ongoing, the timeline, as well as the number of qubits necessary to achieve quantum supremacy, could change (Phys.org, “Researchers successfully simulate a 64-qubit circuit“, 26 June 2018; original research: Zhao-Yun Chen et al., “64-qubit quantum circuit simulation“, Science Bulletin, 2018).

Meanwhile, Dressel (Ibid.) also estimates that we can expect chips with one billion qubits in approximately 10-15 years.

Figure from a slide from Justin Dressel, Quantum Computing: State of Play, OC ACM Chapter Meeting, 16 May 2018

The availability of such computing power would most obviously be accelerating for AI, while completely disrupting the current landscape surrounding the contemporary AI revolution, from the microprocessors developed and used, for example, in the race to exascale, to the power of those who succeeded in being at the top of the race in terms of classical HPC. We shall come back to the political and geopolitical implications in the second article of the series.

Quantum communications

Evolving logically from the way quantum technologies were born, quantum communications are mainly concerned with the development of “quantum-resistant cryptography”, as underlined in the U.S. National Strategic Overview for Quantum Information Science, September 2018. If quantum computing can be used to break existing encryption, then quantum mechanics may also be used to protect encryption, notably with quantum cryptography (see the phys.org definition) or quantum key distribution (QKD).

Quantum communications is thus about “generating quantum keys for encryption” and more largely, “sending quantum-secure communications (any eavesdropping attempt destroys the communication and the eavesdropping is detected)” (CRS, July 2018, Ibid.).
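As an illustration of the QKD idea, the sketch below simulates the sifting stage of BB84, the classic QKD protocol (our illustrative toy, with no eavesdropper; BB84 is not named in the sources cited above): Alice encodes random bits in randomly chosen bases, Bob measures in his own random bases, and only the positions where both bases match are kept for the shared key. An eavesdropper would have to measure without knowing the bases, perturbing the quantum states and revealing herself through errors in that key.

```python
import random

def bb84_sift(n_bits: int = 16, seed: int = 7):
    """Toy simulation of BB84 key sifting, without an eavesdropper."""
    rng = random.Random(seed)
    alice_bits  = [rng.randint(0, 1) for _ in range(n_bits)]
    alice_bases = [rng.choice("+x") for _ in range(n_bits)]  # encoding bases
    bob_bases   = [rng.choice("+x") for _ in range(n_bits)]  # measuring bases

    key = []
    for bit, a_basis, b_basis in zip(alice_bits, alice_bases, bob_bases):
        if a_basis == b_basis:
            key.append(bit)   # matching basis: Bob reads the bit correctly
        # mismatched basis: Bob's result is random, so the position is dropped
    return key

print(bb84_sift())            # on average, half of n_bits survive as shared key
```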

Quantum sensing and metrology

“‘Quantum sensing’ describes the use of a quantum system, quantum properties or quantum phenomena to perform a measurement of a physical quantity” (Degen et al., 2016). Thanks to quantum sensors, we “measure physical quantities such as frequency, acceleration, rotation rates, electric and magnetic fields, or temperature with the highest relative and absolute accuracy” (Wicht et al., 2018). This video by the UK National Quantum Technology Hub, “Sensors and Metrology“, explains this sub-field very simply.

Applications, including in terms of national security, are numerous, from global positioning systems (GPS) to submarines, through, for example, considerably improving our understanding of the human brain and of cognition, as explained in the video shown in the last part of the article.

Don’t overstate boundaries

As always, however, if categories between different sub-disciplines are convenient to define fields, focus and explain subject matters, boundaries tend to be porous. Feedback with other sub-fields may take place when new discoveries are made. Innovations also emerge at the intersection of the different subfields, as illustrated by the production of vortices of light in quantum sensing, which then feeds into quantum communication, as, for example, unique and identifiable petal patterns can form the alphabet to transmit information (Matthew O’Donnell, “Petal Patterns“, Quantum Sensing and Metrology Group at Northrop Grumman, 17 May 2018).

Accelerating and Disruptive Impacts on AI: the Emergence of Quantum Machine Learning

Related:

When Artificial Intelligence will Power Geopolitics – Presenting AI

Artificial Intelligence and Deep Learning – The New AI-World in the Making

The intersection between current AI development, which takes place mainly in machine learning and more specifically deep learning, and Quantum Information Science is potentially so fruitful that it is giving rise to a new sub-discipline, Quantum Machine Learning.

Below are some of the main areas where research takes place or could take place, and where current AI development could be accelerated or disrupted by quantum technologies, while AI could in turn positively impact quantum computing.

The first obvious accelerating and potentially disruptive impact quantum computing could have on AI is that, once hardware with high numbers of qubits is available, the (quantum) computing power available for AI will reach new heights. This is likely to allow testing methodologies that were so far impossible to try, while algorithms that were until now too complex or too power-hungry will be developed.

Then, we are likely to see an intensification and multiplication of “creating-AIs”, such as those produced by combining evolutionary algorithms and reinforcement learning at the Google Brain Team, as well as by scientists at the U.S. Department of Energy’s Oak Ridge National Laboratory (ORNL) (see Helene Lavoix, “When AI Started Creating AI – Artificial Intelligence and Computing Power“, The Red (Team) Analysis Society, 7 May 2018).

Meanwhile, the capacity to see the birth of a third-generation AI will be immensely enhanced (see Helene Lavoix, “$2 Billion for Next Gen Artificial Intelligence for U.S. Defence – Signal“).

As for quantum simulations, some scientists “postulate that quantum computers may outperform classical computers on machine learning tasks.” In that case, Quantum Machine Learning is understood as the field where scientists focus on “how to devise and implement quantum software that could enable machine learning that is faster than that of classical computers” (Jacob Biamonte et al., “Quantum machine learning“, arXiv:1611.09347v2, May 2018). Quantum Machine Learning algorithms are being sought and developed (Ibid.; Dawid Kopczyk, “Quantum machine learning for data scientists“, arXiv:1804.10068v1, 5 Apr 2018).
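To give a flavour of what such algorithms can look like, one much-studied family uses a quantum processor to estimate kernels, i.e. overlaps between quantum states that encode the data, which a classical kernel method (e.g. an SVM) then exploits. The sketch below emulates that idea classically with a hypothetical angle-encoding feature map; it is illustrative only and not drawn from the papers cited.

```python
import numpy as np

def feature_map(x: np.ndarray) -> np.ndarray:
    """Hypothetical angle-encoding feature map: each feature x_i rotates
    one qubit, giving the product state (cos x_i, sin x_i) for each qubit."""
    state = np.array([1.0])
    for xi in x:
        state = np.kron(state, np.array([np.cos(xi), np.sin(xi)]))
    return state

def quantum_kernel(x1: np.ndarray, x2: np.ndarray) -> float:
    """Kernel value = squared overlap |<phi(x1)|phi(x2)>|**2, which a quantum
    processor would estimate by sampling rather than computing exactly."""
    return abs(feature_map(x1) @ feature_map(x2)) ** 2

# The kernel matrix can then feed any classical kernel method (e.g. an SVM).
X = np.random.rand(5, 3)
K = np.array([[quantum_kernel(a, b) for b in X] for a in X])
print(K.round(3))
```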

Furthermore, as can be expected from the second part of this article, i.e. the explanations above on QIS, the intersections and feedbacks between quantum systems and AI are more complex still, as far as we can understand and foresee now.

The very challenges involved in quantum computing, i.e. mainly developing the hardware and developing the programs and algorithms, could be served by AI. In other words, one would apply the current understanding of AI to quantum computing’s development. Potentially, as we proceed through trial and error, and because of the specificities of quantum computing, AI will evolve, possibly reaching new stages of development. For example, as new quantum capabilities are reached, and new simulations become available, new understandings of and approaches to AI may be uncovered.

Quantum simulations on the one hand, and quantum sensing on the other, will also produce a new host of big data, which will need AI to be understood.

An example of such a case, where AI has been used on these newly available large quantum datasets, which in turn could benefit quantum computing and then most probably AI, can be found in physics in general and superconductivity in particular. On 1 August 2018, Yi Zhang et al. published an article explaining their use of an AI, a specifically designed “array of Artificial Neural Networks (ANN)”, i.e. deep learning, on a large body of data, an “experimentally derived electronic quantum matter (EQM) image archive”. This allowed for progress in our understanding of superconductivity, notably as far as temperature is concerned, a key challenge in quantum computing (Yi Zhang et al., “Using Machine Learning for Scientific Discovery in Electronic Quantum Matter Visualization Experiments“, 1 August 2018, arXiv:1808.00479v1; for a simplified but detailed explanation, Tristan Greene, “New physics AI could be the key to a quantum computing revolution“, TNW, 19 September 2018).
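For readers who wish to visualise the kind of tool involved, the sketch below shows the generic shape of such a deep-learning image classifier in PyTorch. All dimensions, labels and data are hypothetical placeholders; the study itself used a purpose-built array of ANNs tailored to the EQM image archive, not this architecture.

```python
import torch
import torch.nn as nn

# Hypothetical setup: classify 28x28 single-channel "quantum matter" images
# into one of two candidate order categories.
model = nn.Sequential(
    nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
    nn.MaxPool2d(2),                      # 28x28 -> 14x14
    nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
    nn.MaxPool2d(2),                      # 14x14 -> 7x7
    nn.Flatten(),
    nn.Linear(32 * 7 * 7, 2),             # two hypothesised orderings
)

images = torch.randn(8, 1, 28, 28)        # placeholder batch, not real EQM data
labels = torch.randint(0, 2, (8,))
loss = nn.CrossEntropyLoss()(model(images), labels)
loss.backward()                           # one illustrative training step
print(loss.item())
```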

As a result of this experiment, the use of AI-deep learning will most probably increase in physics and, more largely, in science, while new advances in superconductivity could help towards qubit processors.

Should such a development occur in superconductivity, then the race to exascale we previously detailed could be disrupted. Depending on when exascale is reached and with which processors, compared with when the new advances in superconductivity can be engineered, and when competing quantum processors become available, the huge computing power finally obtained with exascale, as well as the processors developed for it, could be more or less obsolete, or about to become so. The industrial risk should here be carefully estimated and monitored, probably through scenarios, the most adapted and efficient methodology. We shall see in the next article the related potential political and geopolitical impacts.

The new types of data gathered by quantum sensing may also enrich our understanding of intelligence in general as with the University of Birmingham project “Quantum Sensing the Brain” (11 June 2018) described in the video below.

This specific quantum sensing achievement may, in turn, change and enrich approaches to AI in three ways: first, because we would have had to create new AI systems to make sense of these specific data; second, because these deep learning agents would have had access to a new and so far unknown understanding of intelligence, and would thus have learned something different, enhancing their potential to develop different outputs; and third, because the resulting overall new understanding of intelligence could, in turn, generate different and better types of AI.

In the same area, the emerging field of quantum cognition (see Peter Bruza et al., “Introduction to the Special Issue on Quantum Cognition“, Journal of Mathematical Psychology, 23 September 2013; Peter Bruza et al., “Quantum cognition: a new theoretical approach to psychology“,  Trends in Cognitive Science, July 2015), now benefiting from quantum simulations, could lead to completely novel approaches to cognition and intelligence. In turn, a disruption of the current status quo in terms of AI around deep learning could occur. Totally new approaches to AI could emerge.

As a result, quantum technologies are indeed a driver as well as a stake for AI.

Although it is still very early in the field of Quantum Information Science, notably in quantum computing and simulations, and even more so at its intersection with AI, considerable innovations have already taken place in both QIS and Quantum AI / Quantum Machine Learning, and these fields are already starting to bear fruit. Many challenges remain, but the efforts made to overcome these very hurdles could also lead to new breakthroughs in both QIS and AI. We could be at the dawn of a real paradigm change, with a whole range of consequences, from those already discernible to those difficult to imagine, for polities and their actors. It is to these possible impacts that we shall turn in the next article.


Featured Image: An image of a deuteron, the bound state of a proton (red) and a neutron (blue). Image Credit: Andy Sproles, ORNL

Notes

*The Joint Quantum Institute (JQI) is actually a group operating “through the work of leading quantum scientists from the Department of Physics of the University of Maryland (UMD), the National Institute of Standards and Technology (NIST) and the Laboratory for Physical Sciences (LPS). Each institution brings to JQI major experimental and theoretical research programs that are dedicated to the goals of controlling and exploiting quantum systems.” (JQI – About). Note that, notably through NIST, the JQI will benefit from the 2019 U.S. budget for QIS.

Some references

Alvarez-Rodriguez, U., M. Sanz, L. Lamata & E. Solano, “Quantum Artificial Life in an IBM Quantum Computer“, Nature, Scientific Reports volume 8, Article number: 14793 (2018) – Published: 04 October 2018.

Biamonte Jacob, Peter Wittek, Nicola Pancotti, Patrick Rebentrost, Nathan Wiebe & Seth Lloyd, “Quantum machine learning“, Nature volume 549, pages 195–202, 14 September 2017; revised 10 May 2018 arXiv:1611.09347v2.

Biercuk Michael J., and Richard Fontaine, “The Leap into Quantum Technology: A Primer for National Security Professionals,” War on the Rocks, November 17, 2017.

Biercuk, Michael J., The University of Sydney, “Building the Quantum Future”Pause Fest, Mar 2, 2017.

Bruza, Peter D., Jerome Busemeyer, Liane Gabora, “Introduction to the Special Issue on Quantum Cognition“, Journal of Mathematical Psychology, 53, 303-305, arXiv:1309.5673v1

Bruza, Peter D., Zheng Wang, Jerome R. Busemeyer, “Quantum cognition: a new theoretical approach to psychology“,  Trends in Cognitive Science, Volume 19, Issue 7, July 2015, Pages 383-393.

Congressional Research Service, Federal Quantum Information Science: An Overview, 2 July 2018.

Degen, C. L., F. Reinhard, P. Cappellaro, “Quantum sensing“, arXiv:1611.02427v2 [quant-ph], last revised 6 June 2017.

Dirjish, Mathew, “Quantum Sensing Platform Now A Reality”, SensorsOnline, July 30, 2018.

Executive Office of the President of the United States, National Strategic Overview for Quantum Information Science, September 2018.

Fowler, Austin G., Matteo Mariantoni, John M. Martinis, Andrew N. Cleland, “Surface codes: Towards practical large-scale quantum computation“, Phys. Rev. A 86, 032324 (2012),  arXiv:1208.0928v2

The Red (Team) Analysis Weekly – An Obvious 21st Century Conundrum – 11 October 2018

Each week our scan collects weak – and less weak – signals for political and geopolitical risk of interest to private and public actors.

Find out more on horizon scanning, signals, what they are and how to use them:

Horizon Scanning and Monitoring for Anticipation: Definition and Practice“.

Welcome to the now obvious 21st-century conundrum: climate change, already having impacts (we told you so), entails huge costs. To reduce these costs, to put it mildly, immense and rising means and expenses would be necessary. To generate them, the profits from, and use of, what lies at the heart of our current civilisation, fossil fuels, appear necessary; but then climate change and its costs will heighten further… How is that for an interesting riddle?

The way out of this mad and accelerating vicious spiral could lie in thinking and acting out of the box, including by being powerful and smart enough to entice outdated elites and power players not to derail efforts. Expect nonetheless unavoidable direct and collateral damage.

Read below our latest complimentary Weekly horizon scanning. 
Continue reading “The Red (Team) Analysis Weekly – An Obvious 21st Century Conundrum – 11 October 2018”

The U.S. Economy, between the Climate Hammer and the Trade War Anvil – the Soybean Case

On 24 September 2018, the U.S. Secretary of Commerce imposed new tariffs on $200 billion worth of Chinese goods, thus widely escalating the “trade war” initiated by president Donald Trump against China in April 2018. Beijing immediately retaliated with tariffs on $60 billion worth of American goods (Will Martin, “China Hits Back at Trump with Tariffs on $60 Billion of US Goods”, Business Insider, 18 September 2018). Some analysts and commentators are worried that the new tariffs could backfire and may impact the prices of consumption goods on the domestic market, and thus the U.S. consumer (Scott Lincicome, “Here are 202 Companies Hurt by Trump’s Tariffs”, Reason.com, September 14, 2018).

However, these analyses do not take into account the “unseen” but intensifying stress that climate change is exerting on current geo-economic conditions, and how its impacts combine, nationally and globally, with the way the U.S.-China trade war unfolds and triggers unintended consequences.

Continue reading “The U.S. Economy, between the Climate Hammer and the Trade War Anvil – the Soybean Case”

The Red (Team) Analysis Weekly – US-China tensions escalate – 4 October 2018

Each week our scan collects weak – and less weak – signals for political and geopolitical risk of interest to private and public actors.

Find out more on horizon scanning, signals, what they are and how to use them:

Horizon Scanning and Monitoring for Anticipation: Definition and Practice“.

Read below our latest complimentary Weekly horizon scanning. 

Each section of the scan focuses on signals related to a specific theme: world (international politics and geopolitics); economy; science; analysis, strategy and futures; AI, technology and weapons; energy and environment. However, in a complex world, categories are merely a convenient way to present information, when facts and events interact across boundaries.

Read the 4 October 2018 scan

The Weekly is the complimentary scan of The Red (Team) Analysis Society. It focuses on political and geopolitical uncertainty, on national and international security issues.

The information collected (crowdsourced) does not mean endorsement but points to new, emerging, escalating or stabilising problems and issues.

Featured image: Antennas of the Atacama Large Millimeter/submillimeter Array (ALMA), on the Chajnantor Plateau in the Chilean Andes. The Large and Small Magellanic Clouds, two companion galaxies to our own Milky Way galaxy, can be seen as bright smudges in the night sky, in the centre of the photograph. This photograph was produced by European Southern Observatory (ESO), ESO/C. Malin [CC BY 4.0], via Wikimedia Commons.

Revisiting Timeliness for Strategic Foresight and Warning and Risk Management

[Fully rewritten version v3] To exist, risk and foresight products as well as warnings must be delivered to those who must act upon them, the customers, clients or users. These anticipation analyses must also be actionable, which means that they need to include the right information necessary to see action taken.

Yet, if you deliver your anticipation when there is no time anymore to do anything, then your work will be wasted.

Yet, even if you deliver your impeccable strategic foresight or risk analysis, or your crucial actionable warning, to your clients in time for a response to be implemented, but at a moment when your customers, decision-makers or policy-makers cannot hear you, then your anticipation effort will again be wasted. Let me give you an example. The picture used as featured image shows the Obama administration in the Situation Room as it awaits updates on the 2011 Operation Neptune Spear, the mission against Osama bin Laden. Imagine now that you have another warning to deliver (and the authorisation to do so) on any other issue, one with high impact but meant to happen in, say, two years’ time. Do you seriously believe that anyone in that room would, or rather could, listen to you? If you nonetheless delivered your warning, you would not be heard. Obviously, as a result, decisions would not be taken. Your customer would be upset, while the necessary response would not be implemented. Finally, endless problems, including crises, would emerge and propagate.

Delivering an anticipation analysis or product must thus obey a critical rule: it must be done in a timely fashion. Timeliness is a fundamental criterion for good anticipation, risk management and strategic foresight and warning.

In this article, we shall look first at timeliness as a criterion that enables the coordination of the response, explaining it with the example of the controversial “Peak Oil”. Second, timeliness means that customers or users will not only have enough time to decide and then implement any necessary course of action warranted by your strategic foresight and warning or risk analysis, but will also be able to hear you. This is the problem of fostering credibility and overcoming other biases, which we shall explain using again the example of Peak Oil and, as a second example, Climate Change. Finally, we shall outline a synthetic approach to understanding timeliness and ways forward to achieve it.

Timeliness: enabling the coordination of response


Most often, the challenge of timeliness is understood as stemming from the need to conciliate, on the one hand, the dynamics which are specific to the issue, object of anticipation, and, on the other, the related decisions and the coordination of the response.

Let us take the example of Peak Oil, i.e. the date when “world oil production will reach a maximum – a peak – after which production will decline” (Hirsch, 2005, 11), which implies the end of the widespread availability of cheap (conventional crude) oil. Hirsch underlined that the problem of timing, i.e. identifying when oil will peak, is complex:

“When world oil peaking will occur is not known with certainty. A fundamental problem in predicting oil peaking is the poor quality of and possible political biases in world oil reserves data. Some experts believe peaking may occur soon. This study indicates that “soon” is within 20 years. ” (Hirsch, 2005, 5)

Thus, according to Hirsch, oil should peak before 2025.

In 2018, the idea of Peak Oil may be thought of as outdated or plainly false, grounded in mistaken science, as exemplified by Michael Lynch, “What Ever Happened To Peak Oil?“, Forbes, 29 June 2018. Note that these arguments were already used prior to a phase of relatively wide recognition of the Peak Oil phenomenon around 2010, from scientists’ reports, associations, institutions and books (see, for example, the creation of the Association for the Study of Peak Oil & Gas in 2000, Robert Hirsch’s report (2005), the Institut Français du Pétrole (IFP), Thomas Homer-Dixon in 2006, Michael Klare, or Jeff Rubin in 2010), to web resources such as the now defunct The Oil Drum and Energy Bulletin, and finally to the International Energy Agency (IEA), which recognised the peak of conventional crude oil production in 2010 (e.g. Staniford, 2010), despite remaining resistance from a then-shrinking number of actors. Since then, notably, the shale revolution has taken place, while climate change has allowed easier access to northern oil and gas fields (e.g. Jean-Michel Valantin, “The Russian Arctic Oil: a New Economic and Strategic Paradigm?”, The Red Team Analysis Society, October 12, 2016).

Peak oil is thus not very much on the agenda, although some still argue that it will happen, as exemplified by the websites Peak Oil Barrel or Crude Oil Peak; the latter suggests that oil will peak when U.S. shale does (“What happened to crude oil production after the first peak in 2005?“, Sept 2018). The peak in U.S. shale thus becomes a significant issue (e.g. Robert Rapier, “Peak Tight Oil By 2022? EIA Thinks It’s Possible, Without Even Accounting For This Risk“, Forbes, 20 February 2018; Tsvetana Paraskova, “Peak U.S. Shale Could Be 4 Years Away“, OilPrice, 25 Feb 2018).

If the remaining proponents of peak oil are right, and if some of the hypotheses of the EIA are correct, then Peak Oil could take place around 2022. This is not that far from Hirsch’s estimate, according to which Peak Oil could occur by 2025.

We should nonetheless allow for the considerable evolutions of the last 13 years, notably in terms of technology, including Artificial Intelligence, consumption behaviour, global consumption, and climate change. We should also allow for coming revolutions, such as quantum technologies, which could completely upset many estimates. As long as all these developments, with their complex feedbacks, have not been considered, and without forgetting that Hirsch addressed the availability of cheap oil, not of expensive oil, we must remain conservative and treat 2025 as only a possibility (a probability of 50%) for Peak Oil.


Notwithstanding other impacts, Hirsch estimates that 20 years of a “mitigation crash program before peaking” would have allowed avoiding “a world liquid fuels shortfall” (Hirsch, 2005, 65).

Thus, assuming that oil peaks in 2025, if we want an energy mix able to replace the soon-gone cheap oil in time, then we should have decided to implement and then coordinate a response… back in 2005. Note that, interestingly, this corresponds to the time when Hirsch published his report, and when the world started worrying about Peak Oil. We may thus wonder whether, in specific countries as well as collectively, SF&W on this issue was not actually delivered.

To answer this question more precisely, further research will be needed once archives are declassified. Meanwhile, it will be useful to follow the delivery process precisely, notably according to countries and actors, to know where exactly the warning was delivered and to whom.

If we now assume that Hirsch’s estimate of the time needed to develop mitigation and a new energy mix is correct, then we may consider that Hirsch, as well as the “peak oil” interest of the second half of the first decade of the 21st century, delivered a timely warning, as far as the time needed to implement answers is concerned.

Whether and where the right decisions were taken and the right responses implemented would need to be evaluated on a case-by-case basis.
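As a back-of-envelope check of the arithmetic underlying this section (decision deadline = peak year minus mitigation lead time), the short Python sketch below makes the 2025/2005 relation explicit and adds a purely illustrative treatment of the uncertainty on the peak year; the uniform 2022-2035 distribution is an assumption for the sketch, not a sourced estimate.

```python
import random

LEAD_TIME = 20   # Hirsch's "mitigation crash program" lead time, in years

def decision_deadline(peak_year: int, lead_time: int = LEAD_TIME) -> int:
    """Latest year a mitigation programme must start to avoid a shortfall."""
    return peak_year - lead_time

print(decision_deadline(2025))   # -> 2005, as in the text

# Illustrative (not sourced) uncertainty: peak year drawn uniformly 2022-2035.
samples = [decision_deadline(random.randint(2022, 2035)) for _ in range(100_000)]
timely_2005 = sum(d >= 2005 for d in samples) / len(samples)
print(f"Share of draws in which a 2005 start is early enough: {timely_2005:.0%}")
```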

Let us turn now to other criteria that condition the timeliness of the delivery of a risk or foresight analysis or of a warning.

Timeliness, credibility and biases

Jack Davis, writing on strategic warning in the case of U.S. national security, hints at the importance of another criterion linked to timeliness, credibility:


“Analysts must issue a strategic warning far enough in advance of the feared event for US officials to have an opportunity to take protective action, yet with the credibility to motivate them to do so. No mean feat. Waiting for evidence the enemy is at the gate usually fails the timeliness test; prediction of potential crises without hard evidence can fail the credibility test. When analysts are too cautious in estimative judgments on threats, they brook blame for failure to warn. When too aggressive in issuing warnings, they brook criticism for “crying wolf.”

Davis, Jack, “Improving CIA Analytic Performance: Strategic Warning,” The Sherman Kent Center for Intelligence Analysis Occasional Papers: Volume 1, Number 1, accessed September 12, 2011.

For Davis, credibility is the provision of “hard evidence” to back up strategic foresight, or indeed any anticipation analysis. Of course, as we deal with the future, hard evidence will consist of an understanding of processes and their dynamics (the model used, preferably an explicit one), added to facts indicating that events are more or less likely to unfold according to this understanding. This is why building an excellent model (see our online course), grounded in science, is so important: it is key to meeting the credibility criterion.

Credibility is, however, also something more than hard evidence. To obtain credibility, people must believe you. Hence, the biases of the customers, clients or users must be overcome. Thus, whatever the validity of the hard evidence in the eyes of the analyst, it must also be seen as such by others. The various biases that can obstruct credibility have started to be largely documented (e.g. Heuer). Actually, explaining the model used and providing indications, or describing plausible scenarios, are ways to overcome some of these biases, notably outdated cognitive models. Yet relying only on this scientific logic is insufficient, as shown by Craig Anderson, Mark Lepper, and Lee Ross in their paper “Perseverance of Social Theories: The Role of Explanation in the Persistence of Discredited Information.” Thus, other ways to minimise biases must be imagined and included, and the delivery of the SF&W or risk product may accordingly be delayed.

Credibility and, more broadly, overcoming biases are so important that I would go further than Davis and incorporate them within the very idea of timeliness. This would be much closer to the definition of timely, according to which something is “done or occurring at a favourable or useful time; opportune” (Google dictionary result for timely). Indeed, there cannot be timely SF&W or risk management if those who must act cannot hear the warning or analysis we seek to deliver.

If the SF&W product or the risk analysis is delivered at the wrong time, then it will be neither heard nor considered, decisions will not be taken, nor actions implemented.

More difficult, biases also affect the very capability of analysts to think the world and thus to even start analysing issues. We are there faced with cases of partial or full collective blindness, when timeliness cannot be achieved because SF&W or risk analysis cannot even start in the specific sectors of society where this analysis needs to be done.

If we use again our example of Peak Oil, the 2005 warning could have lost part of its timeliness because of the debate regarding its credibility, which persists nowadays, as exemplified in the Forbes article mentioned above. On the other hand, the decision by the International Energy Agency (IEA) to finally recognise the peak of conventional oil in 2010 (e.g. Staniford, 2010) lent an official character to the phenomenon, which was very likely extremely important in finally establishing the credibility of the warning.

We face very similar stakes and challenges with Climate Change, as shown once more by the latest debates surrounding the October 2018 IPCC report (Matt McGrath, “IPCC: Climate scientists consider ‘life changing’ report“, BBC News, 1 October 2018). Tragically, in that case, the ongoing attacks over the years on the credibility of the various warnings regarding climate change have also most probably endangered the possibility of a timely response to remain below 1.5°C of warming:

“For some scientists, there is not enough time left to take the actions that would keep the world within the desired limit.
‘If you really look seriously at the feasibility, it looks like it will be very hard to reach the 1.5C,’ said Prof Arthur Petersen, from University College London and a former IPCC member.
‘I am relatively sceptical that we can meet 1.5C, even with an overshoot. Scientists can dream up that is feasible, but it’s a pipedream.'” (McGrath, “IPCC: Climate scientists …”)

This shows how the credibility issue is absolutely crucial for a warning to respect the timeliness criterion.

Timeliness as the intersection of three dynamics

To summarise, timeliness is best seen as the intersection of three dynamics, as sketched after the list below:

  • The dynamics and time of the issue or problem at hand, knowing that, especially when they are about nature, those dynamics will tend to prevail (Elias, 1992)
  • The dynamics of the coordination of the response (including decision)
  • The dynamics of cognition (or evolution of beliefs and awareness, including biases resulting from interests) – at collective and individual level – of the actors involved.
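A toy formalisation can make this intersection concrete: if each dynamic is represented as a time window during which action remains possible, timeliness requires the three windows to overlap. All the dates below are hypothetical, chosen only for illustration:

```python
def overlap(*windows):
    """Intersection of (start, end) year intervals; None if they never coincide."""
    start = max(w[0] for w in windows)
    end = min(w[1] for w in windows)
    return (start, end) if start <= end else None

# Hypothetical windows for a given issue (all dates are assumptions):
issue = (2018, 2030)      # while action can still alter the issue's path
response = (2018, 2025)   # while a decision and coordinated response can be mounted
cognition = (2024, 2040)  # while decision-makers are actually able to hear a warning

print(overlap(issue, response, cognition))   # (2024, 2025): a narrow timely window
```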

To understand each dynamic is, in itself, a challenge. Even more difficult, each dynamic acts upon the others, making it impossible to truly hope to achieve timeliness if the impact of one dynamic on the others is ignored.

For example, continuing with the case of climate change: having been unable, before the turn of the century, to even properly think collectively the possibility of climate change in its dire reality and with a more accurate timeline, despite multiple efforts in this direction (e.g. Richard Wiles, “It’s 50 years since climate change was first seen. Now time is running out“, The Guardian, 15 March 2018), has dramatically changed the currently possible dynamics of the response, while both the cognitive delay and the absence of previous decisions and actions have oriented the dynamics of the issue towards some paths, while others are now definitely closed. Any SF&W or risk assessment delivered on this issue now, as shown by the October 2018 IPCC Panel discussions (Ibid.), is quite different from what was delivered previously.

To acknowledge the difficulty of finding the timely moment, and the impossibility of ever practising an ideal SF&W in an imagined world where everyone, at individual and collective levels, would have perfect cognition, is not to negate SF&W or risk management. Answering the “timeliness challenge” with “what is the point of doing it now, as we did not do it when things were easier” is at best childish, at worst suicidal.

On the contrary, fully acknowledging hurdles is to adopt a more mature attitude regarding who we are as human beings, accepting our shortcomings while trusting our creativity and capacity to overcome the most difficult challenges. It opens the door to developing strategies and related policies with adequate tools to improve the timeliness of SF&W and risk management, thus making them more actionable and efficient:

  • Creating evolving products that will be adapted to the moment of delivery;
  • Using the publication of groups, communities, scholarly or other work on new dangers, threats and opportunities as potential weak signals that are still unthinkable by the majority;
  • Developing and furthering our understanding of the dynamics of cognition and finding ways to act on them or, at the least, to accompany them;
  • Keeping permanently in mind this crucial issue in anticipation to seek and implement adequate strategies to overcome it, according to the ideas, moods, science and technologies available at the time of delivery.

——–

This is the 3rd edition of this article, considerably revised from the 1st edition, 14 Sept 2011.

Featured image: Situation Room, Pete Souza [Public domain], via Wikimedia Commons

About the author: Dr Helene Lavoix, PhD Lond (International Relations), is the Director of The Red (Team) Analysis Society. She is specialised in strategic foresight and warning for national and international security issues. Her current focus is on Artificial Intelligence and Security.


References

Anderson, Craig A., Mark R. Lepper, and Lee Ross, “Perseverance of Social Theories: The Role of Explanation in the Persistence of Discredited Information,” Journal of Personality and Social Psychology 1980, Vol. 39, No.6, 1037-1049.

Campbell, Colin J. and Jean H. Laherrere, “The end of cheap oil,” Scientific American, March 1998.

Davis, Jack, “Improving CIA Analytic Performance: Strategic Warning,” The Sherman Kent Center for Intelligence Analysis Occasional Papers: Volume 1, Number 1, accessed September 12, 2011.

Dixon, Thomas Homer, The Upside of Down: Catastrophe, Creativity and the Renewal of civilization, (Knopf, 2006).

Elias, Norbert,  Time: An Essay, (Oxford: Blackwell, 1992)

Hirsch, Robert L., SAIC, Project Leader, Roger Bezdek, MISI, Robert Wendling, MISI Peaking of World Oil Production: Impacts, Mitigation & Risk Management, For the U.S. DOE, February 2005.

International Energy Agency (IEA), World Energy Outlook 2010.

Klare, Michael, Blood and Oil: The Dangers and Consequences of America’s Growing Dependency on Imported Petroleum, (New York: Metropolitan Books, 2004; paperback, Owl Books, 2005).

Klare, Michael, Rising Powers, Shrinking Planet: The New Geopolitics of Energy (Henry Holt & Company, Incorporated, 2008).

Rubin, Jeff, Why Your World is About to Get a Whole Lot Smaller: Oil and the End of Globalization, Random House, 2009.

Staniford, Stuart, “IEA acknowledges peak oil,” Published Nov 10 2010, Energy Bulletin.
