We live in a world of increasingly abundant new technologies, seen as crucial for our future. These technologies are not only new, they are also meant to revolutionise our lives for the better. Progress cannot be imagined without technology. Technology is meant to save us all. The speed with which bio-tech helped develop efficient vaccines against COVID-19, triggered by the early variants of SARS-CoV-2, exemplifies this saviour function of technology.
Meanwhile, actors, both public and private, have to innovate constantly and to fund the right science and technology research programmes. They must invest in and adopt the next key technology early enough to make sure they do not fall behind in the ongoing technological race.
We thus need to keep track of technological development and innovation. Yet this is only a prerequisite. We also need to be able to sort through these many “new techs” and identify, as early as possible, which ones will be key for the future. If we invest in the wrong technology, in the wrong way, or with the wrong timing, then the consequences are likely to be negative.
With this series, we shall address the first of these concerns: which new technologies could be key for the future?
As we are at a generic level, we shall, for now, not specify exactly “when in the future”.
How can we find the new techs that will become key in the future?
With this first article, we shall build a schematic model that will give us an understanding of why we need technologies. Indeed, it is only if we can find a logic behind the success or failure of new technologies that we can hope to identify key future technologies.
We start by looking at an existing, good scan of current new technologies, using it as a case study. We then test the capability of this scan to identify future key technologies and highlight the related difficulties. We thus underline what is missing in this scan to allow us to move forward with our question. Finally, building upon these findings, we start constructing a first schematic model that will allow us to identify future key technologies.
A classical comprehensive scan
Munich-Re, working with ERGO IT Strategy, provides us with a very useful yearly scan, the Tech Trend Radar (hereinafter “Radar”), which aims to raise “awareness of key trends” in the new tech sector. They focus on those technologies that are especially relevant to the insurance sector. Nonetheless, considering the breadth of insurance companies’ interests, their scan is relevant for many sectors and excellent for a general and comprehensive overview of current new technologies.
Furthermore, the famous reinsurance company started its tech scan in 2015 and thus has a collection of six yearly radars, which could give us depth if we ever wanted to look at historical evolution.
The methodology for the “Radar” is grounded in the compilation of trends, which are then screened according to four rules “to define the most relevant trends categorised in four primary fields” (Tech Trend Radar 2020, p.62). These rules are notably inspired by a management framework, the “Run-Grow-Transform” (RGT) model (Ibid.; RGT model adapted to IT by Hunter et al., Gartner Research, 2008).
First, the Munich-Re and ERGO Tech Trend Radar 2020 presents its results sorted according to “four trend fields” (Tech Trend Radar 2020 and 2019):
- User Centricity;
- Connected World;
- Artificial Intelligence;
- Enabling Tech, previously “Disruptive Tech” in the 2019 edition.
Munich-Re and ERGO further sort the trend fields according to the maturity/degree of adoption of each tech, which then allows them to advise in detail on what to do with each technology.
We thus have 52 technologies of interest, out of which 10 are considered new for 2020.
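To make this double sorting concrete, the sketch below shows, in Python, how such a radar could be represented as data, with each technology tagged with one trend field and one maturity stage. The entries, field assignments and maturity labels are illustrative assumptions, not Munich-Re's actual classification.

```python
# A minimal sketch of the Radar's double sorting (hypothetical entries and
# maturity labels, not Munich-Re's actual data).
from dataclasses import dataclass

TREND_FIELDS = ["User Centricity", "Connected World",
                "Artificial Intelligence", "Enabling Tech"]
MATURITY_STAGES = ["Watch", "Assess", "Trial", "Adopt"]  # assumed labels

@dataclass
class TechTrend:
    name: str
    trend_field: str
    maturity: str
    new_in_2020: bool = False

radar_2020 = [
    TechTrend("Precision Farming", "Enabling Tech", "Assess", new_in_2020=True),
    TechTrend("DeepFake Defence", "User Centricity", "Assess", new_in_2020=True),
    TechTrend("Generative Adversarial Networks", "Artificial Intelligence", "Trial"),
]

# Group technologies by trend field, as the Radar's visual does.
by_field = {f: [t.name for t in radar_2020 if t.trend_field == f]
            for f in TREND_FIELDS}
print(by_field)
print("New for 2020:", [t.name for t in radar_2020 if t.new_in_2020])
```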
But which ones will be key in the future?
Can we use this approach for foresight?
Among these 52 technologies, which are, or rather will be, the key or most important ones in the future? How can we find out?
Furthermore, how can we be certain that all future key technologies are here? Could we be missing a key technology, or many key technologies?
The case of precision farming
For example, “precision farming” – also known as “smart farming” or “smart agriculture” – is a novelty in the Munich-Re–ERGO Radar 2020 and was not included in the 2019 version (Ibid.).
Yet, a company such as Deere & Company already started preparing for smart farming at least as early as 2017 (Helene Lavoix, Artificial Intelligence, the Internet of Things and the Future of Agriculture: Smart Agriculture Security? part 1 and part 2, The Red Team Analysis Society, 2019). Interest and investments in the field increased in 2018 and then in 2019 (Ibid.). Thus, the “Radar” is three years late. If we had used the 2019 “Radar”, then we would have entirely missed a technology that is possibly key for the future.
The case of “Deep Fake”, stemming from generative adversarial networks (GANs)
Similarly, “DeepFake Defence” enters the “Radar” in 2020, in the trend field “User Centricity”.
However, the name “deep fake” emerged in 2017 to convey concern with forgeries involving Artificial Intelligence (AI) (Laurie A. Harris, “Deep Fakes and National Security”, Congressional Research Service, updated May 7, 2021, 3rd version). The U.S. Defense Advanced Research Projects Agency (DARPA) has two programmes focusing on fighting Deep Fakes: the first, Media Forensics (MediFor), started in 2016, and the second, Semantic Forensics (SemaFor), in 2019.
Thus, here again, the “Radar” is late, for our purpose, in identifying a key trend.
Meanwhile, Deep Fakes are most often grounded in generative adversarial networks (GANs), which are indeed identified in the “Radar”, this time under Artificial Intelligence. GANs entered the “Radar” in 2019 (Ibid.).
The generative adversarial network (GAN) was invented in 2014 (Goodfellow et al., 2014).
GANs are part of Unsupervised Learning (UL): the ability of a machine to find underlying structures from unlabelled data.
GANs can, on their own, group objects, finding “concepts”: pixel trees with pixel trees, doors with doors, etc. (e.g. GAN Paint).
The incredible quality of the generated images, which do not exist in reality, allows for mind-boggling possibilities. They may have negative applications, for forgery for example. They may also lead to constructive uses in many other activities, such as urban planning, architecture, cinema, fashion, etc. (see also Helene Lavoix, Inserting Artificial Intelligence in Reality, The Red Team Analysis Society, January 2019).
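To make the adversarial logic behind GANs more tangible, here is a minimal sketch, assuming PyTorch, on toy one-dimensional data rather than images. It only illustrates the generator-versus-discriminator principle that underpins deep fakes, not the large image models actually used to produce them.

```python
# A minimal GAN sketch in PyTorch, illustrating the adversarial principle only
# (toy 1-D data, not the image-scale models behind deep fakes).
import torch
import torch.nn as nn

latent_dim = 8

# Generator: maps random noise to fake samples.
G = nn.Sequential(nn.Linear(latent_dim, 32), nn.ReLU(), nn.Linear(32, 1))
# Discriminator: scores how "real" a sample looks (1 = real, 0 = fake).
D = nn.Sequential(nn.Linear(1, 32), nn.ReLU(), nn.Linear(32, 1), nn.Sigmoid())

opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
bce = nn.BCELoss()

def real_data(n):
    # The "reality" the generator tries to imitate: samples from N(4, 1.25).
    return 4.0 + 1.25 * torch.randn(n, 1)

for step in range(2000):
    # Train the discriminator: push real samples towards 1, fakes towards 0.
    real = real_data(64)
    fake = G(torch.randn(64, latent_dim)).detach()
    loss_d = bce(D(real), torch.ones(64, 1)) + bce(D(fake), torch.zeros(64, 1))
    opt_d.zero_grad(); loss_d.backward(); opt_d.step()

    # Train the generator: try to fool the discriminator (fakes towards 1).
    fake = G(torch.randn(64, latent_dim))
    loss_g = bce(D(fake), torch.ones(64, 1))
    opt_g.zero_grad(); loss_g.backward(); opt_g.step()

# After training, generated samples should resemble the real distribution.
print(G(torch.randn(1000, latent_dim)).mean().item())  # roughly 4.0 if it worked
```

The generator never sees the real data directly; it only learns from the discriminator's verdicts, which is what makes the approach work without labelled examples.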
Identifying GANs should thus have led to looking at their use and misuse as soon as the new technology appeared. Furthermore, the classification of two related technologies in different categories – even if those categories are called “fields” – may create problems, as we shall see below.
Of course, only those who do nothing never make any mistakes. Yet, if some technologies were detected late previously, could a methodology similar to that of the “Radar” lead us to miss something else now, regarding the future?
If so, which could be the forgotten important new technology? We could change our sources, using better or more extensive ones. But would this be enough? How could we know?
Can we identify what is missing, or what can be improved, when we use an approach such as the one used for the “Radar”?
The problem with laundry lists
The “Radar” we use here as a case study presents us with a long list of technologies sorted into categories labelled “trend fields”. But we do not know exactly how and why these “trend fields” were chosen.
Categories
Categories are used in and result from classification, a fundamental cognitive function of the brain (Fabrice Bak, 2013: 107-113). Indeed, “Categorization is a process by which people make sense of things by working out similarities and differences” (McGarty, Mavor, & Skorich, 2015). The highest level of categorization is hierarchical (organised as a tree) and called a taxonomy or hierarchical classification. In classical terms, categories must be clearly defined (the criteria determining whether an item belongs to the category or not are explicit), mutually exclusive (one item can belong to only one category) and fully exhaustive (all the categories together cover the whole set for which the categories are built) (OECD, “Classification”, using the “United Nations Glossary of Classification Terms” prepared by the Expert Group on International Economic and Social Classifications; unpublished on paper).
The archetypal example of a taxonomy is Linnaeus’ classification of plants, animals and minerals (Regnum Vegetabile, Regnum Animale and Regnum Lapideum) according to various classes, a work he started with his Species Plantarum, published in 1753, and continued throughout his life (see his bibliography). Building upon Linnaeus’ work, organisms are now organised in the following inclusive taxonomy, from the most to the least inclusive rank: Kingdom, Phylum, Class, Order, Family, Genus, Species, and Strain.
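As an illustration of this hierarchical logic, the minimal sketch below encodes the main Linnaean ranks cited above and walks one lineage from the most to the least inclusive level; the grey wolf example is our own illustration, not taken from the Radar.

```python
# A minimal sketch of a hierarchical classification (taxonomy), using the main
# Linnaean ranks cited above (Strain omitted); the grey wolf lineage is an
# illustrative example.
RANKS = ["Kingdom", "Phylum", "Class", "Order", "Family", "Genus", "Species"]

# Each item is placed by giving it a value for every rank of the tree.
grey_wolf = dict(zip(RANKS, ["Animalia", "Chordata", "Mammalia",
                             "Carnivora", "Canidae", "Canis", "Canis lupus"]))

def lineage(item: dict) -> str:
    """Walk the tree from the most to the least inclusive rank."""
    return " > ".join(item[rank] for rank in RANKS)

print(lineage(grey_wolf))
# Animalia > Chordata > Mammalia > Carnivora > Canidae > Canis > Canis lupus
```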
Not real categories
Now, if we look at the “trend fields” used in the “Radar”, what we observe is that they respect none of the properties a category should have:
1- They are not well defined. There is no single criterion that allows one to classify an item easily in one “trend field” or another. For example, are the various types of AI not actually also enabling technologies?
2- They are not mutually exclusive, i.e. some items could belong to two or more “trend fields”: 5G is enabling and also part of a connected world; smart textiles are also user-centric and may be seen as part of programmable materials; AI enables autonomous things and precision farming, as seen above; etc.
3- They are probably not exhaustive, which creates our problem of not knowing whether we missed something; the sketch after this list shows how such failures can be made visible.
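A simple way to see how rules 2 and 3 break down is to make the assignments explicit and test them, as in the minimal Python sketch below; the technology-to-field assignments are our own illustrative assumptions, not the Radar's.

```python
# A minimal sketch testing two category rules on a classification; the
# assignments below are hypothetical, for illustration only.
TREND_FIELDS = {"User Centricity", "Connected World",
                "Artificial Intelligence", "Enabling Tech"}

# Which field(s) each technology could plausibly belong to (illustrative).
assignments = {
    "5G": {"Enabling Tech", "Connected World"},
    "Smart Textiles": {"User Centricity", "Enabling Tech"},
    "Precision Farming": {"Enabling Tech", "Artificial Intelligence"},
    "Hypothetical New Tech X": set(),  # stands for a technology the scan missed
}

# Rule 2 - mutual exclusivity: one item should belong to exactly one category.
not_exclusive = {t: f for t, f in assignments.items() if len(f) > 1}
# Rule 3 - exhaustiveness: every item must fall somewhere in the scheme.
not_covered = [t for t, f in assignments.items() if not f]

print("Violates mutual exclusivity:", not_exclusive)
print("Not covered by any trend field:", not_covered)
```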
The four “trend fields”, here, seem to be mainly habits of thought, existing names or disparate categories that allow readers and users to identify quickly and easily the new technologies selected by the process.
Static categories
In our case study, Munich-Re and ERGO then sort the first “proto-categorisation” according to a second categorisation: the maturity/degree of adoption of the tech.
This second categorisation appears correct in terms of the rules necessary for being a category. Yet, the criteria used to build it also remain turned inward. A dynamic explanatory element related to our concern is missing. We cannot know what will work or not, because we do not have a logic that explains future success for technologies.
Towards a model allowing us to understand what makes technologies key
What we need is a model that explains schematically the purpose of technologies, why we use them, why they are important to us, human beings. If we understand, even schematically this logic, then we can envision those technologies that will be key for the future.
Let us tell the story – or a story – of human beings and technologies.
We have a planet, populated with individuals.
Each individual living on planet Earth has needs, as explained by Maslow (Abraham Maslow, Motivation and Personality, 1954, 1987).
Actually, on the planet, we have crowds of individuals, living in different types of dwellings. Each crowd is organised as a society.
A society implies that social coordination must function. Social coordination is expressed according to three components (Barrington Moore, Injustice: The Social Bases of Obedience and Revolt, 1978):
- the issue of authority,
- the division of labour for the production of goods and services,
- and the distribution of these goods and services.
To satisfy the needs of social coordination, some tasks or actions must be carried out.
These tasks or actions will be impacted, made possible or not, facilitated or not, by some conditions and the environment.
This is where technologies are born, to facilitate and improve all these actions.
Thus, we can assume that the technologies that will be key for the future will be all those that effectively help us satisfy needs. Meanwhile, the actions required to meet these needs are becoming more and more complex. They become increasingly complex because of previous actions – including the creation and use of previous technologies – and of their impact on the environment, and thus on the conditions for action. The evolution of needs resulting from this process also, at the same time, contributes to making actions and tasks more complex.
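To fix ideas, the minimal sketch below expresses this schematic model in Python: needs drive tasks, and a technology matters to the extent that it effectively supports need-driven tasks. All names, needs and mappings are our own illustrative assumptions, and the scoring is deliberately naive; it is a way of making the logic explicit, not a definitive metric.

```python
# A minimal, deliberately naive sketch of the schematic model: needs generate
# tasks, and a technology is "key" to the extent that it supports those tasks.
# All entries and mappings are illustrative assumptions.
from dataclasses import dataclass, field

@dataclass
class Need:
    name: str                      # e.g. a Maslow need or a social-coordination need
    tasks: list = field(default_factory=list)

@dataclass
class Technology:
    name: str
    supported_tasks: set = field(default_factory=set)

needs = [
    Need("Physiological - food security", ["grow food", "distribute food"]),
    Need("Social coordination - production", ["coordinate supply chains"]),
]

technologies = [
    Technology("Precision farming", {"grow food"}),
    Technology("5G + IoT", {"coordinate supply chains", "distribute food"}),
]

def keyness(tech: Technology, needs: list) -> int:
    """Count how many need-driven tasks the technology effectively supports."""
    all_tasks = {task for need in needs for task in need.tasks}
    return len(tech.supported_tasks & all_tasks)

for tech in technologies:
    print(tech.name, "supports", keyness(tech, needs), "need-driven task(s)")
```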
We now have a model that will allow us to find out which technologies are most likely to become key in the future, as we shall see in the next part.
Bibliography
Featured images: Spaceship and planet, and Safe, by Reimund Bertrams from Pixabay / Public domain.
Chappellet-Lanier, Tajha, “DARPA wants to tackle ‘deepfakes’ with semantic forensics“, Fedscoop, 7 August 2019.
Diamond, Jared, Guns, Germs, and Steel: The Fates of Human Societies (W. W. Norton: 1997).
Goodfellow, Ian; Pouget-Abadie, Jean; Mirza, Mehdi; Xu, Bing; Warde-Farley, David; Ozair, Sherjil; Courville, Aaron; Bengio, Yoshua, “Generative Adversarial Nets“, Proceedings of the International Conference on Neural Information Processing Systems (NIPS), 2014.
Harris, Laurie A., “Deep Fakes and National Security“, Congressional Research Service, updated May 7, 2021, 3rd version
Hunter, R., et al., “A Simple Framework to Translate IT Benefits into Business Value Impact”, Gartner Research, May 16, 2008.
Lavoix, Helene, Inserting Artificial Intelligence in Reality, The Red Team Analysis Society, January 2019.
McGarty, Craig, et al., “Social Categorization”, in International Encyclopedia of the Social & Behavioral Sciences, December 2015, DOI: 10.1016/B978-0-08-097086-8.24091-9.
Maslow, Abraham, Motivation and Personality (London: Harper & Row, 1954, 1987).
Moore, Barrington, Injustice: The Social Bases of Obedience and Revolt (London: Macmillan, 1978).
Munich-Re and ERGO IT Strategy, Tech Trend Radar 2020 and 2019.