This article focuses on the race to exascale computing and its multi-dimensional political and geopolitical impacts, a crucial response that major actors are implementing in terms of High Performance Computing (HPC) power, notably for the development of their artificial intelligence (AI) systems. It thus concludes, for now, our series on HPC as a driver of and stake for AI, one of the five we identified in Artificial Intelligence – Forces, Drivers and Stakes: classical big data, HPC and the related critical uncertainty of the race to quantum supremacy, algorithms, “sensors and expressors”, and finally needs and usages.
Related
Artificial Intelligence, Computing Power and Geopolitics (1): the connection between AI and HPC
Artificial Intelligence, Computing Power and Geopolitics (2): what could happen to actors with insufficient HPC in an AI-world, a world where the distribution of power now also results from AI, while a threat to the Westphalian order emerges
High Performance Computing Race and Power – Artificial Intelligence, Computing Power and Geopolitics (3): the complex framework within which the responses available to actors in terms of HPC, given its crucial significance, need to be located.
This final piece builds on the first part, where we explained and detailed the connection between AI and HPC, and on the second part, where we looked at the related political and geopolitical impacts: what could happen to actors with insufficient HPC in an AI-world, a world where the distribution of power now also results from AI, while a threat to the Westphalian order emerges. The responses available to actors in terms of HPC, considering its crucial significance, need to be located within the complex framework we explained in part three. Accordingly, first, decisions regarding which HPC capability to develop must be taken in relative terms, i.e. considering others’ HPC and AIs. Second, each actor engaged in the race must consider how fast other actors will develop stronger HPC capabilities. Finally, the longer the lead time investing actors have over the next revolutionary advance in HPC, the longer they delay the loss of value of their investment, and the more powerful the AI systems they can create, which gives them a window of opportunity to take full advantage of their superior AI.
In this dynamic framework we look at the obvious policy response actors have designed: having more and better HPC, earlier, than the others. This translates into the ongoing race to exascale computing, i.e. bringing online a computer with a capability of a thousand petaflops, or 10^18 floating point operations per second, knowing that, currently, the most powerful computer in the world, the U.S. Summit, shows a performance of 122.3 petaflops (Top500 list, June 2018). We start with a state of play on the ongoing “race to exascale”, which involves, chronologically, Japan, the U.S., France, China and the EU. We notably include the latest information on the indigenous European Processor Initiative (4-6 Sept 2018). The table summarising the state of play below is open access/free.
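To put these orders of magnitude in perspective, a quick back-of-the-envelope calculation, here as a minimal Python sketch, shows the leap exascale represents over Summit's 122.3 petaflops quoted above:

```python
# Orders of magnitude in the race to exascale.
# One exaflop = 1000 petaflops = 10**18 floating point operations per second.
petaflop = 1e15
exaflop = 1e18                 # target capability, in FLOPS
summit = 122.3 * petaflop      # Summit's performance, Top500 list, June 2018

# How large a leap does exascale represent over today's leader?
speedup = exaflop / summit
print(f"Exascale is about {speedup:.1f}x Summit's performance")  # ≈ 8.2x
```

In other words, the first exascale machine will deliver roughly eight times the compute of the most powerful system in the world as of mid-2018.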
We then point out linkages between this race and economic, business, political, geopolitical and global dynamics, focusing notably on the likely disappearance of American supremacy in terms of processors.
Finally, strategically, if this race means developing better machines more quickly than others, it may also imply, logically, slowing down one’s competitors as much as possible, by all means. Disrupting the very race would be an ideal way to slow down others, while completely upsetting the AI and HPC field, relativising the race to exascale and thus changing the whole related technological, commercial, political and geopolitical landscape. Thus, we underline two major possible disruptive evolutions and factors that could take place, namely quantum computing and a third-generation AI. We shall detail both in forthcoming articles, as quantum computing is a driver of and stake for AI in its own right, while a third-generation AI can best be seen as belonging to the driver and stake constituted by algorithms (“Artificial Intelligence – Forces, Drivers and Stakes”, ibid.).
The Race to Exascale
State of Play
As of September 2018, the state of play for the race to exascale is as follows:
**The Race to Exascale – Full State of Play – 24 September 2018**

| | Japan | U.S. | China | EU | France |
|---|---|---|---|---|---|
| Pre-ES and prototype | 2013-2020 | | June 2018: Sunway exascale computer prototype; July 2018: Tianhe-3 prototype | 2021 | 2015, 2018: Tera1000 |
| Peak-ES and sustained-ES | 2021: Post-K | 2021: Aurora, Argonne National Laboratory (ANL); 2021: Frontier, Oak Ridge National Laboratory (ORNL); 2022: El Capitan, Lawrence Livermore National Laboratory (LLNL); 2022-2023: Aurora upgrades | 2020: Tianhe-3; 2nd half 2020 or 1st half 2021: Sunway exascale | 2023-2024 (EPI); initially 2022-2023 (official) | 2020-2021: BullSequana X |
| Energy efficiency goal | 20-30 MW | 20-32 MW | 20-40 MW | 20 MW by 2020 | |
| Initiative | Flagship 2020 Project | ECP | 13th Five-Year Plan | EuroHPC | CEA |
| Budget | $0.8 to 1 billion*** | $1.8 billion for the two ORNL and LLNL machines | ? | €1 billion+ by 2020** | ? |
| Vendors | Japanese | U.S. | Chinese | European | French |
| Processor, accelerator, integrator | Fujitsu-designed Arm | U.S. | Chinese ARM-based; Sugon: x86 | European Processor Initiative (EPI) (Arm, RISC-V); RHEA, first-generation processor for pre-exascale; CRONOS for exascale; Bull integrator BXI | Intel, ARM, Bull Exascale Interconnect (BXI) |
| Cost per system | | Aurora: $300 to $600 million; Frontier and El Capitan: $400 to $600 million | $350 to $500 million*** | $350 million*** | ? |

*Research by The Red (Team) Analysis Society – Detailed sources in the text.*
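The energy-efficiency goals in the table can be translated into compute delivered per watt, which shows why the 20 to 40 MW envelopes matter so much for exascale designs. A minimal Python sketch, using only the megawatt figures from the table:

```python
# What the power envelopes in the table imply in compute per watt
# for a machine delivering one exaflop (10**18 FLOPS).
exaflop = 1e18  # FLOPS delivered by an exascale system

for megawatts in (20, 30, 40):
    watts = megawatts * 1e6
    # Efficiency the whole system must reach, in gigaflops per watt.
    gflops_per_watt = exaflop / watts / 1e9
    print(f"{megawatts} MW -> {gflops_per_watt:.0f} gigaflops per watt")
```

A 20 MW envelope thus demands roughly 50 gigaflops per watt at system level, twice the efficiency required by a 40 MW envelope, which is why the tighter goals are also the more technologically demanding ones.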
Feedback and impacts
Notes and Additional Bibliography
*”does not manufacture the silicon wafers, or chips, used in its products; instead, it outsources the work to a manufacturing plant, or foundry. Many of these foundries are located in Taiwan and China…” (Investopedia)
**€486 million matched by a similar amount from the participating countries plus in kind contributions from private actors (“Commission proposes to invest EUR 1 billion in world-class European supercomputers“, European Commission – Press release, 11 January 2018)
CEA, Atos et le CEA placent TERA 1000, le supercalculateur le plus puissant d’Europe, dans le Top 15 mondial, 25 June 2018
CEA, TERA 1000 : 1er défi relevé par le CEA pour l’Exascale, 12 Nov 2015
Collins, Jim, “Call for Proposals: Aurora Early Science Program expands to include data and learning projects“, Argonne National Laboratory, 18 January 2018
e-IRG, “Interview of EPI’s project coordinator Philippe Notton from Atos“, 10 April 2018
ECP, “Secretary of Energy Rick Perry Announces $1.8 Billion Initiative for New Supercomputers“, 9 April 2018
Lobet, Mathieu, Matthieu Haefele, Vineet Soni, Patrick Tamain, Julien Derouillat, et al., High Performance Computing at Exascale: challenges and benefits, 15ème congrès de la Société Française de Physique, division Plasma, June 2018, Bordeaux, France.
Thielen, Sean, “Europe’s advantage in the race to exascale“, The NextPlatform, 5 September 2018
Valero, Mateo, European Processor Initiative & RISC-V, 9 May 2018
Featured image: Computational Science, Argonne National Laboratory, Public Domain.