Horizon Scanning and Monitoring for Early Warning: Definition and Practice

(Rewritten and revised edition) Horizon scanning and monitoring for early warning are part of the family of activities used to foresee the future, anticipate uncertainty and manage risks. Their practice is crucial for successful strategic foresight and warning, risk management, futurism or any anticipatory activity.

While monitoring is a generic term used for many activities, horizon scanning is very specific and used mainly for anticipation. The term appeared in the early years of the 21st century. It refers both to a specific tool within the strategic foresight process and to the whole anticipatory process (Habegger, 2009).*

We shall here focus on horizon scanning as a specific tool within the entire strategic foresight process, and contrast it with monitoring for warning (hereafter monitoring). First, we shall present definitions for the two concepts. Then, comparing the practice of the two activities, we shall highlight their similarities and differences, identifying best practice along the way. Finally, we shall conclude that horizon scanning, as a tool, is actually the first step of any good monitoring for anticipation.

Definitions for horizon scanning and monitoring

Horizon Scanning

As a tool, horizon scanning allows for the identification of potential new themes, meta-issues and issues, answering the concerns defined in our agenda or context. We shall then need to analyse in depth the issues thus identified.

Horizon scanning thus looks for weak signals indicating the emergence of new meta-issues and issues. As a result, a scan must adopt the largest possible scope for the core question under watch.

Meteorological Service of Canada (Environment Canada): Non-meteorological data from weather echoes can be filtered by using the Doppler velocities of targets. After cleaning, only real precipitation is left.

The idea of horizon scanning is built upon older ideas and methods such as "environmental scanning," "strategic foresight" and "indications and warning" (also labelled "strategic warning" and "warning intelligence"; see Grabo, 2004). Actually, as Glenn and Gordon underline, in the 1960s-1970s most futurists used the term "environmental scanning". However, as the environmental movement grew, some thought the term might only refer to systems monitoring changes in the natural environment caused by human actions. To avoid this confusion, futurists created various labels, such as "Futures Scanning Systems", "Early Warning Systems" and "Futures Intelligence Systems". The military, for its part, uses "strategic warning" and related terms. The objective is to avoid strategic surprises (e.g. Pearl Harbor).

The English "horizon scanning" is not the same as the French "veille", contrary to what some authors assert – e.g. Nicolas Charest ("Horizon Scanning," 2012). We could best translate "veille" as "monitoring" – taken in a general way, and not more specifically for warning as here. We could also translate it as "intelligence gathering".

Charest, actually, refers to a process: “an organised formal process of gathering, analysing and disseminating value-added information to support decision making”. Yet, this is a process from which the future and anticipation are absent. Strangely enough, the author himself underlines that the English meaning of horizon scanning implies foresight, anticipation.

Rather than conflating the two practices and the two words, "veille" and horizon scanning, it is necessary to distinguish between them. Indeed, even though the two activities are closely related, one, horizon scanning, has to deal with the future, while the other does not have to face this challenge.

It is the anticipating quality, the necessity to "make a judgement on the future" to use Grabo's words (Ibid.), that generates the essential difference between the two related activities.

The use of "horizon scanning" in the names of various government offices helped popularise the term. For example, we had the UK Horizon Scanning Centre, created in 2004 after a call for developing such centres of excellence across government (Habegger, 2009, p. 14), or Singapore's Risk Assessment and Horizon Scanning (RAHS) programme, launched in 2005 (Lavoix, 2010). The way the idea became fashionable also contributed to the confusion surrounding its meaning.

Monitoring for warning

Monitoring is a part of the strategic warning process. The literature on intelligence, warning and strategic surprise documents the idea and the process well. Indeed, actors have used strategic warning since at least World War II, while intelligence studies are now a constituted body of knowledge and a discipline. For further reading, there used to be an excellent reference bibliography on intelligence-related matters: J. Ransom Clark's Bibliography on the Literature of Intelligence, notably the section on strategic warning. Unfortunately, it has been taken down. It can still be accessed through the Internet Archive, including the section on strategic warning, but only at various dates which may not correspond to the latest version, now lost.

Monitoring issues will allow for the identification of warning problems. We shall then use adequate models and related indicators for the surveillance of those problems. As a reminder, an indicator is a concept, an abstraction for something. An indication is the reality corresponding to the indicator at a specific instance. We thus use indicators to collect indications. For example, growth of gross domestic product (GDP) is an indicator and 5% is an indication for a specific country and time. Speed can be an indicator and 60 km/h an indication at a specific place, for a specific device, at a specific time.
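The indicator/indication distinction above can be sketched as a small data structure. This is purely an illustrative sketch in Python, not part of any actual warning system; all names and values are ours:

```python
from dataclasses import dataclass
from datetime import date

@dataclass(frozen=True)
class Indicator:
    """The concept: what we watch, and in which unit it is expressed."""
    name: str
    unit: str

@dataclass(frozen=True)
class Indication:
    """One observed value of an indicator, at a specific place and time."""
    indicator: Indicator
    value: float
    location: str
    observed_on: date

# GDP growth is an indicator; 5% is an indication for a specific country and time.
gdp_growth = Indicator(name="GDP growth", unit="% per year")
reading = Indication(gdp_growth, 5.0, "Country X", date(2019, 12, 31))

print(f"{reading.indicator.name} = {reading.value} {reading.indicator.unit} "
      f"({reading.location}, {reading.observed_on})")
```

The point of the separation is that one indicator accumulates many indications over time, which is exactly what surveillance of a warning problem collects.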

Both monitoring and surveillance guide the collection of the necessary information, as defined by the model and related indicators.

As a reminder, throughout the whole SF&W process we progressively narrow our focus, which the vocabulary used reflects. We move from the most general and encompassing to the most detailed. Let us take energy as an example of a "meta-issue". Then, "issues" could be "oil security," "peak oil," "peak uranium," "the volatility of oil prices," "the politics of energy between Europe and Russia," "energy for China," etc. "Problems" could be the more specific "Gazprom policies," "the Keystone pipeline," "energy in the Belt and Road Initiative", "energy and the Belt and Road Initiative in Pakistan", or even "tension around this or that plant", etc.

Horizon scanning and monitoring for warning in practice

If definitions differ, is there truly a difference in the way we do horizon scanning on the one hand, and monitoring for warning on the other? Is scanning included in monitoring for warning? Should we use the same processes and the same tools for scanning and for monitoring? Or do we have to use different approaches?

Similarly grounded in models, but with models of different sophistication

A first difference between horizon scanning and monitoring is the location of each within the overall SF&W process. A scan is the first step of any analysis. What does that imply?


As the very first thing you do when tackling an issue, scanning the horizon implicitly assumes that little or no understanding of the question exists. Yet this is only an appearance.

Try the exercise mentally: if you start looking for something, even in the loosest way, you need to have an idea, however minimal, of what you are looking for. What happens is that, unconsciously, you rely on a cognitive model. This cognitive model is implicit. Thus, to scan the horizon you already use a model, even if it is a very imperfect one.

Further along in the process of foresight or risk analysis, you monitor an issue. This is meant to happen towards the end of the analytical process, thus once you know your topic very well. In the SF&W process, monitoring takes place after we have created the scenarios and identified the indicators for warning.

Monitoring is thus also grounded in a model. However, we have made that model explicit. We have improved and refined it through the process of analysis.

Thus, fundamentally, horizon scanning and monitoring are similar. Their difference resides in the sophistication of the model used, not in the actual process utilised to do the scanning or the first steps of the monitoring. Hence, scanning and monitoring can most often utilise the same tools or supports.

Broad outlook, enmeshed outputs

Second, the definition of a scan suggests that it should identify only weak signals. However, to select signals beforehand according to their strength – assuming this is possible – would be counter-productive and in some cases impossible. Indeed, a strong signal for one issue can also, sometimes, be a weak signal of emergence for something else.

Thus, when gathering signals through a scan that aims at identifying emerging meta-issues and issues, it is desirable to be as broad and encompassing as possible.

In practice, you can note new signals, and loosely start linking them to other meta-issues or issues.

Similarly, the monitoring of an issue and the surveillance of a problem may also pick up signals of novel issues emerging. Again, you should make sure you record these findings.

Thus, for both horizon scanning and monitoring, you need a cognitive make-up that is as open and as broad as possible, while also being able to link precisely this or that fact, trend or "thing" to this issue, that problem and this indicator.

Signals and their strength for horizon scanning, indications and timeline for monitoring

Image by Jens Langner (http://www.jens-langner.de/) (Own work), Public domain, via Wikimedia Commons

Last but not least, because of various biases, analysts and their clients – decision-makers and policy-makers – are often unable to see, identify, and consider some signals "below the horizon." They will be able to accept those signals only once they are "above the horizon," that is, when they are much stronger, as exemplified in the article on timeliness.

The position of the signal below or above the horizon, or the strength a signal needs for actors to perceive and accept it, will vary from person to person.

It is thus not practically desirable to try sorting out signals according to their strength too early in the process.

In the case of monitoring and surveillance for warning, it is also crucial to sort the indications according to a timeline. That time sequence warns us about the evolution of the issue under watch. Finally, it will allow for the warning and its delivery. At least mentally, each indication or signal, or group of indications and signals must be positioned on their corresponding timelines. We use a plural here, because indications and signals can feed into different dynamics for various issues, as seen in the previous part.

We thus look at strength for signals, and at timeline for indicators and their indications. Does that mean that scanning and monitoring are different?

Actually, the strength of a signal for horizon scanning may be seen as nothing other than an indication of the movement of change along a timeline. Let me explain further. If the signal is weak, then the situation is far from the actual occurrence of an event or phenomenon. On the contrary, if the signal is strong, then one is close to it. A scan would thus be an instance of monitoring in which we select only those indications leading to the judgement that an event will not happen soon but nevertheless deserves to be put under watch.

However, as we saw, it is neither desirable nor always possible to sieve through signals according to their strength; this vision of a scan is thus idealistic and impractical.

As a result, and practically, at the end of the process a scan will give us signals of varying strength. At that stage, we shall have only relatively weak confidence in the strength of the signals identified. In that case, the strength of a signal is a precursor to a much more refined judgement made in terms of timeline.
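As a rough illustration of how the strength of a signal can serve as a precursor to a timeline judgement, consider the sketch below. The function, its name and the cut-off values are entirely our own assumptions; in practice the thresholds would be refined, issue by issue, through the full analytical process:

```python
def horizon_estimate(strength: float) -> str:
    """Map a signal's strength (0.0 to 1.0) to a rough position on the timeline.

    A weak signal suggests the event is still far off; a strong signal
    suggests it is close. The cut-off values are arbitrary placeholders.
    """
    if not 0.0 <= strength <= 1.0:
        raise ValueError("strength must be between 0.0 and 1.0")
    if strength < 0.3:
        return "far: below the horizon, put under watch"
    if strength < 0.7:
        return "approaching: monitor and refine indicators"
    return "near: move towards warning delivery"

# A weak signal maps to "far from occurrence", i.e. a scan-type judgement.
print(horizon_estimate(0.15))
```

Read this way, a scan is simply the stage at which most signals still fall in the first band, before a proper timeline judgement replaces the crude strength estimate.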

Horizon scanning thus corresponds to the first stage of monitoring (and surveillance) before judgements related to the signification of the signal, or indication in terms of timelines, are made. It thus exists not only at the very beginning of the whole SF&W process, but each time we monitor.


* The debate on national security is rich and features many authors. For a brief summary of and references to the many outstanding scholars who inform it, e.g. Helene Lavoix “Enabling Security for the 21st Century: Intelligence & Strategic Foresight and Warning,” RSIS Working Paper No. 207, August 2010.


This is the 2nd edition of this article, substantially rewritten and revised from the 1st edition, June 2012.

Featured image: U.S. Navy by tpsdave. CC0 Public Domain

About the author: Dr Helene Lavoix, PhD Lond (International Relations), is the Director of The Red (Team) Analysis Society. She is specialised in strategic foresight and warning for national and international security issues. Her current focus is on Artificial Intelligence and Security.

Bibliography and References

Charest, N. (2012), “Horizon Scanning,” in L. Côté and J.-F. Savard (eds.), Encyclopedic Dictionary of Public Administration.

Gordon, Theodore J. and Jerome C. Glenn, "Environmental Scanning," in Jerome C. Glenn and Theodore J. Gordon (eds.), The Millennium Project: Futures Research Methodology, Version 3.0, 2009, Chapter 2.

Grabo, Cynthia M., Anticipating Surprise: Analysis for Strategic Warning, edited by Jan Goldman, (Lanham MD: University Press of America, May 2004).

Habegger, Beat, Horizon Scanning in Government: Concept, Country Experiences, and Models for Switzerland, Center for Security Studies (CSS), ETH Zurich, 2009.

J. Ransom Clark’s Bibliography on the Literature of Intelligence.

Lavoix, Helene, What makes foresight actionable: the cases of Singapore and Finland. (U.S. Department of State commissioned report, December 2010).

Lavoix, Helene, “Enabling Security for the 21st Century: Intelligence & Strategic Foresight and Warning,” RSIS Working Paper No. 207, August 2010 (also accessible here).

When Risk Management Meets Strategic Foresight and Warning

Risk management is codified by the International Organization for Standardization (ISO). It is aimed at any organisation concerned with risk, be it public or private (Sandrine Tranchard, "The new ISO 31000 keeps risk management simple", ISO News, 15 Feb 2018). Its forebear is actuarial science, i.e. the methodologies used to assess risk in insurance and finance (e.g. ENSAE definition). Its study, as a discipline mainly of use to the private sector, progressively developed after World War II (Georges Dionne, "Risk Management: History, Definition, and Critique", Risk Management and Insurance Review, Volume 16, Issue 2, Fall 2013, pp. 147-166).

Another "non-academic" discipline deals with risks, uncertainties, threats and opportunities – or, more exactly, surprise. Its name is Strategic Foresight and Warning (SF&W). It results from the meeting of the older military Indications and Warning with Strategic Foresight. Intelligence and military officers mainly developed SF&W for their needs regarding international and national security issues. Strategic Warning, for its part, remains an essential mission of, for example, the U.S. Defense Intelligence Agency (DIA), as reasserted in its September 2018 Strategic Approach. Meanwhile, classical reference books on Strategic Warning are now part of the DIA 2018 Director's Reading List. Strategic Warning and SF&W are more specifically the origin and outlook of our experience and practice here at the Red (Team) Analysis Society.

The two disciplines and practices, risk management and SF&W, thus have different histories, actors and aims. Yet, since the ISO revised risk management in 2009, we now have an almost perfect correspondence between SF&W and risk management. The ISO 2018 update confirms the similarity. This article details the two processes, their similarities and their complementarities.


The new risk management process thus lays the foundation for easily incorporating, into the risks usually managed by businesses, all the national and international security issues usually related to states' national interests, from geopolitics to politics, from criminality to war through cyber security. In other words, the process used to manage the external and internal risks the corporate world faces is now similar to the way states handle their mission of international and domestic security, according to their national interests.

Meanwhile, these very similarities between risk management and SF&W should facilitate discussions and exchanges between the corporate world and the public sector, including in terms of data, information, analysis and process, according to the specificities and strengths of each. Where differences between SF&W and risk management subsist, we may turn them around to take the best of both worlds.

Indeed, what matters is to anticipate properly what lies ahead and to adopt adequate policies, not to abide by one label or another.

In this article, we detail the risk management process. We explain the new definition of risk. Then we underline the similarities with SF&W. We stress, where risk management is most different from SF&W, how the former could also help the latter. Notably, risk management provides a framework to address a sensitive area: developing and offering policy or response alternatives to decision-makers.



Featured Image: President Barack Obama attends a meeting on Afghanistan in the Situation Room in the White House. On his left, National Security Adviser James L. Jones, Secretary of State Hillary Clinton, U.S. Ambassador to the United Nations Susan Rice, National Intelligence Director Dennis C. Blair and CIA Director Leon Panetta. To his right, Vice President Joe Biden, Secretary of Defense Robert Gates (hidden), Chairman of the Joint Chiefs of Staff Admiral Michael Mullen, White House Chief of Staff Rahm Emanuel. – 9 October 2009, The Official White House Photostream, White House (Pete Souza) – Public Domain.

Copyrights for all references to ISO norms remain with the International Organization for Standardization (ISO).

This article is a fully updated and revised version of a text that was published first as an element of the U.S. Government commissioned report, Lavoix, “Actionable Foresight”, Global Futures Forum, November 2010 (pp. 12 & 20-24/98). 

Climate change: Shall we live or die on our changing planet ?

A cavity almost 1,000 feet (about 300 metres) tall, and as large as two thirds of Manhattan, has been found inside the Antarctic Thwaites glacier (Sarah Sloat, "An Enormous Cavity Inside an Antarctic Glacier Harbors a Dangerous Threat", Inverse Daily, February 1, 2019).

It was created in three years by warming and melting from the inside. This shows the acceleration of the process, as well as the destabilisation of the entire glacier and of its neighbours (Sloat, ibid.).

The melting of the Thwaites glacier alone could add two feet to the global rise of the oceans.


The Quantum Times (Daily Updates)

We carry out horizon scanning on quantum information science and technologies and their use. As a result, we publish The Quantum Times, a daily scan and news brief on everything related to the emerging quantum world. The Quantum Times is updated daily before 9:00 CET.

You can access it here

Being informed and keeping up with what is happening in the highly competitive race to quantum is crucial. We created this scan to help our users' community keep abreast of developments in the field.

You will find here all the news (in English) related to quantum computing and simulation, quantum sensing and metrology, quantum communication, quantum key distribution, quantum machine learning and post-quantum cryptography.

Find all our in-depth reports, briefs and articles on the emerging quantum world here.

Time in Strategic Foresight and Risk Management

From the corporate world to governments, we seek to escape uncertainty and surprises. This is crucial to survive and thrive. It is also necessary for the protection from threats, dangers and risks.

As a whole and generally, our ability – if not willingness – to identify threats has improved with experience and practice. Notably, we have become relatively efficient at assessing likelihood and impact. Nonetheless, one component of threat and risk assessment most often remains unconsidered, unnoticed and neglected: time.

Yet, time is a crucial component of our ability to prevent surprise, handle threats and manage risks. This article assesses how we integrate time and highlights room for improvements.


Quantum Computing, Hollywood and geopolitics

Since 2017, quantum information science and technology (QIS), and especially quantum computing, have been quickly emerging as central themes in Hollywood movies, TV series and novels. Their scenarios emphasise the link between quantum power and national security situations.

Hollywood and the U.S Strategic Debate

This is a crucial indication, considering that the relation between the U.S. cultural industries and National security has been one of the main drivers of the U.S. strategic debate since World War II (Jean-Michel Valantin, Hollywood, the Pentagon and Washington: The Movies and National Security from World War II to the Present Day, 2005).

That link organises the structure of the U.S. strategic debate through the very complex and tangled relationships between the federal centres of political power in Washington D.C., the Department of Defense, the intelligence community and the media and cultural industries (Valantin, Ibid.). That is why Hollywood movies, television and video games play a vital role in the U.S. strategic debate.


Sensor and Actuator for AI (1): Inserting Artificial Intelligence in Reality

Beyond hype and hatred, this article focuses on the way Artificial Intelligence (AI) – actually deep learning – is integrated into reality, through sensor and actuator.* Operationalisation demands that we develop a different way of looking at AI. The resulting understanding highlights the importance of the sensor and the actuator, the twin interface between an AI and its environment. This interface is a potentially disruptive driver for AI.

Listen to the article as a deep dive conversation on our podcast, Foresight Frontlines – AI series – created with NotebookLM.

Sensor and actuator, the forgotten elements

Sensors and actuators are key to the development of AI at all levels, including in terms of practical applications. Yet, when the expansion and the future of AI are addressed, these two elements are most of the time overlooked. It is notably because of this lack of attention that the interface may become disruptive. Indeed, could an approach through sensor and actuator be key to the very generalised boom in AI so many seek? Meanwhile, many subfields of AI could also benefit from such further development. Alternatively, failing to integrate this approach fully could lead to unnecessary hurdles, including a temporary bust.

Sensor and actuator, another stake in the race for AI

Furthermore, we are seeing three interacting AI-related dynamics emerging in the world. The twin birth and spread of AI-governance for states and AI-management for private actors interact and feed into an international race for AI-power, i.e. how one ranks in the global relative distribution of power. As a result, AI increasingly influences this very distribution of power (see The New AI-World in the Making). Thus, the drivers for AI are not only forces behind the expansion of AI, but also stakes in the AI-competition. Meanwhile, how public and private actors handle this competition, the resulting dynamics and the entailed defeats and victories also shape the new AI-world in the making.

Thus, if sensors and actuators are crucial to widely operationalising AI, then the ability to best develop AI-governance and AI-management, as well as one's position in the international race for AI-power, could very well depend on the mastery of these sensors and actuators.


Outline

This article uses two case studies to progressively explain what sensors and actuators are. It thus details the twin interface between the AI-agent and its environment. Third, as a result, we highlight that AI is best understood as a sequence. That understanding allows us to envision a whole future world of economic activities. That world is, however, not without danger, and we highlight that it will demand a new type of security. Finally, we point out the necessity of distinguishing the types of reality the AI sequence bridges.

The next article will focus on different ways to handle the AI sequence and its twin interface, notably the actuator. We shall look more particularly at the Internet of Things (IoT), Human Beings themselves, and Autonomous Systems, better known as robots. Meanwhile we shall explore further the new activities AI creates.

Looking at the game against AlphaGo differently

We shall examine again (Google) DeepMind's AlphaGo, the deep learning AI-agent that plays Go and whose victory started the current phase of AI development.

Replaying the game against AlphaGo

Now, let us imagine a new game set up between Mr Fan Hui, the European Go Champion whom AlphaGo defeated 5-0 in October 2015, and the AI-agent (AlphaGo webpage). Mr Fan Hui, as happened in reality, plays first against the AI-agent AlphaGo. In front of him, we can see a goban (the name of the board used for Go). AlphaGo is connected to the cloud for access to distributed computing power, as it needs a lot of it.

Mr Fan Hui starts and makes his first move, placing a white stone on the goban. Then it is AlphaGo's turn. How will the AI-agent answer? Will it make a typical move or something original? How quickly will it then play? The suspense is immense, and…

Nothing happens.

What went wrong?

The (right) way DeepMind did it

If you watch the video below showing the original game carefully, you will notice that the setting is not exactly what I described above. A couple of other crucial elements are present. If DeepMind had put a human and an AI-agent face to face according to the setting I described, their experiment would have gone wrong. Instead, thanks to the elements they added, their game was a success.

You can observe these three elements at 1:19 of the video, as shown in the annotated screenshot below:

  • A: a human player
  • B: a screen
  • C: a human being with a bizarre device on a table.
Screenshot of the video Google DeepMind: Ground-breaking AlphaGo masters the game of Go – 1:19

Sensor

In our imagined setting, I did not create an interface to tell the AI-agent that Mr Fan Hui had moved a stone, nor which one. Thus, as far as the AI-agent was concerned, there was no input.

In DeepMind's real setting we have the human agent (C). We may surmise that the device on the table in front of her allows her to enter into the computer, for the AI-agent, the moves that Mr Fan Hui makes throughout the game.

More generally, a first input interface must exist between the real world and the AI-agent for it to function. Therefore, we need sensors. They sense the real world for the AI. We must also communicate the data the sensors captured to the AI-agent, in a way that the AI understands.

Let us assume now that we add agent C and its device – i.e. the sensor system – to our setting.

Again, nothing happens.

Why? The AI-agent proceeds and decides on its move. Yet the algorithmic result remains within the computer, as a machine output, whatever its form. Indeed, there is no interface through which to act on the real world. What is needed is an actuator.

Actuator

The interface to the outside world must not only produce an output that our Go Master can understand for each move, but also one that will make sense, for him, during the whole game.

It would not be enough to give just the position of a stone as coordinates on the board. Such a result would demand, first, that Mr Fan Hui have good visualisation and mapping capabilities to translate these coordinates onto the goban. It would demand, second, that our Go champion have a truly excellent memory. Indeed, after a couple of moves, picturing and remembering the whole game would be challenging.

DeepMind actually used the needed actuators to make the game between human and AI possible.

At (B), we have a screen that displays the whole game. The screen also most probably shows the AI-agent's move each time the latter plays. Then, at (A), we have a human agent who translates the virtual game on screen into reality on the goban. To do so, he copies the move of the AI-agent as displayed on the screen, placing the corresponding stone on the board.

It is important to note the presence of this human being (A), even though it was probably not truly necessary for Mr Fan Hui, who could have played in front of the screen. First, it is a communication device to make the whole experiment more fully understandable and interesting for the audience. Then, it is possibly easier for Mr Fan Hui to play on a real goban. The translation from a virtual world to a real world is crucial. It is likely to be a major stake in what will really allow AI to emerge and develop.

As we exemplified above, specifying the process of interaction with an AI-agent highlights the importance of the twin interfaces.
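The sequence sensor, AI-agent, actuator can be sketched as three functions. This is an illustrative sketch only; the function names and the trivial "decision" logic are ours, and the middle step merely stands in for the learned policy a real agent such as AlphaGo would run:

```python
def sensor(board_state: dict) -> list:
    """Input interface: translate the real-world goban into machine-readable data."""
    return [(pos, colour) for pos, colour in sorted(board_state.items())]

def ai_agent(observations: list) -> str:
    """Stand-in for the model: decide a move from the encoded observations.
    (A real agent would run a learned policy; here we just pick a free point.)"""
    taken = {pos for pos, _ in observations}
    for col in "abcdefghij":
        for row in range(1, 10):
            if f"{col}{row}" not in taken:
                return f"{col}{row}"
    return "pass"

def actuator(move: str) -> str:
    """Output interface: render the machine output in a form a human can act on."""
    return f"AlphaGo plays at {move}: place the corresponding stone on the goban."

# One turn of the loop: the opponent's stone is sensed, the agent decides,
# and the actuator turns the decision into an instruction for the board.
board = {"e5": "white"}
print(actuator(ai_agent(sensor(board))))
```

Remove either the first or the last function and, exactly as in the imagined game above, "nothing happens": the agent either receives no input or produces an output no human can use.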

This is actually how DeepMind conceptualised one of its latest AI achievements, to which we shall now turn.

Towards seeing as a human being

In June 2018, DeepMind explained how it had built an AI-agent that can perceive its surroundings very much as human beings do (open access: S. M. Ali Eslami et al., "Neural scene representation and rendering", Science, 15 June 2018, Vol. 360, Issue 6394, pp. 1204-1210, DOI: 10.1126/science.aar6170).

“For example, when entering a room for the first time, you instantly recognise the items it contains and where they are positioned. If you see three legs of a table, you will infer that there is probably a fourth leg with the same shape and colour hidden from view. Even if you can’t see everything in the room, you’ll likely be able to sketch its layout, or imagine what it looks like from another perspective.” (“Neural scene representation and rendering“, DeepMind website). 

The scientists’ aim was to create an AI-agent with the same capabilities as those of human beings, which they succeeded in doing:

DeepMind uses “sensor and actuator”

What is most interesting for our purpose is that what we described in the first part is exactly the way the scientists built their process and solved the problem of vision for an AI-agent.

They taught their AI-agent to take images from the outside world (in that case still a virtual world) – what we called the sensor system – then to convert them through a first deep learning algorithm – the representation network – into a result, an output: the scene representation. The output, at this stage, is meaningful to the AI-agent but not to us. The last step corresponds to what we called the actuator. It is the conversion from an output meaningful to the AI into something meaningful to us, the “prediction”. For this, DeepMind developed a “generation network”, called a “neural renderer”. Indeed, in 3D computer graphics, rendering is the process that transforms calculations into an image, the render.
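The sensor → representation network → scene representation → neural renderer sequence can be sketched in miniature as follows. This is a toy illustration of the information flow only, not DeepMind's actual GQN: all function names, dimensions and the random "weights" are hypothetical stand-ins.

```python
import numpy as np

def sensor(environment: np.ndarray) -> np.ndarray:
    """Sensor system: capture an observation (here, flatten an image)."""
    return environment.reshape(-1)

def representation_network(observation: np.ndarray) -> np.ndarray:
    """Toy 'representation network': compress the observation into a
    compact scene representation, meaningful only to the AI-agent."""
    rng = np.random.default_rng(0)  # fixed fake weights for the sketch
    weights = rng.standard_normal((16, observation.size))
    return np.tanh(weights @ observation)

def generation_network(scene: np.ndarray, query_viewpoint: np.ndarray) -> np.ndarray:
    """Toy 'neural renderer' (the actuator): turn the scene representation
    plus a query viewpoint back into an image-shaped prediction for humans."""
    rng = np.random.default_rng(1)
    weights = rng.standard_normal((8 * 8, scene.size + query_viewpoint.size))
    return (weights @ np.concatenate([scene, query_viewpoint])).reshape(8, 8)

environment = np.random.default_rng(2).random((8, 8))  # stand-in for a scene image
scene = representation_network(sensor(environment))    # sensor -> representation
prediction = generation_network(scene, np.array([0.0, 1.0]))  # actuator / render
print(prediction.shape)  # (8, 8)
```

The point of the sketch is the division of labour: the middle output (`scene`) is opaque to us, and only the final render is human-intelligible.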

The screenshot below displays the process at work (I added the red circles and arrows to the original screenshot).

The following video demonstrates the whole dynamic:

Developing autonomous sensors for the vision of an AI-agent

In the words of DeepMind’s scientists, the development of the Generative Query Network (GQN) is an effort at creating “a framework within which machines learn to represent scenes using only their own sensors”. Indeed, current artificial vision systems usually use supervised learning. This means that human intervention is necessary to choose and label data. DeepMind’s scientists wanted to overcome this type of human involvement as much as possible.

The experiment here used a “synthetic” environment (Ibid., p5). The next step will need new datasets to allow expansion to “images of naturalistic scenes” (Ibid). Ultimately, we may imagine that the GQN will start with reality, captured by an optical device the AI controls. This implies that the GQN will need to integrate all advances in computer vision. Besides, the sensors of our AI-agent will also have to move through its environment to capture the observations it needs. This may be done, for example, through a network of mobile cameras, such as those being increasingly installed in cities. Drones, also controlled by AI, could possibly supplement the sensing network.

Improving visual actuators for an AI-agent

Researchers will also need to improve the actuator (Ibid.). DeepMind’s scientists suggest that advances in generative modeling capabilities, such as those made through generative adversarial networks (GAN) will allow moving towards “naturalistic scene rendering”.

Meanwhile, GANs could lead to important advances in terms, not only of visual expression, but also of “intelligence” of AI-agents.

When GANs train to represent visual outputs, they also seem to develop the capability to group, alone, similar objects linked by what researchers called “concepts” (Karen Hao, “A neural network can learn to organize the world it sees into concepts—just like we do“, MIT Technology Review, 10 January 2019). For example, the GAN could “group tree pixels with tree pixels and door pixels with door pixels regardless of how these objects changed color from photo to photo in the training set”… They would also “paint a Georgian-style door on a brick building with Georgian architecture, or a stone door on a Gothic building. It also refused to paint any doors on a piece of sky” (Ibid.).

Similar dynamics are observed in the realm of language research.

Using a virtual robotic arm as actuator

In a related experiment, DeepMind’s researchers used a deep reinforcement network to control a virtual robotic arm instead of the initial generation network (Ali Eslami et al., Ibid., p.5). The GQN first trained to represent its observations. Then it trained to control the synthetic robotic arm.

In the future, we can imagine that a real robotic arm will replace the synthetic one. The final actuator system will thus become an interface between the virtual world and reality.

AI as a sequence between worlds

Let us now generalise our understanding of sensor and actuator, or interfaces for AI-input and AI-output.

Inserting AI in reality means looking at it as a sequence

We can understand processes involving AI-agents as the following sequence.

Environment -> sensing the environment (according to the task) -> doing a task -> output of an AI-intelligible result -> expressing the result according to task and interacting actor
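This sequence lends itself to being expressed as a composition of stages. The sketch below is a hypothetical illustration of that idea, with toy stand-in stages; none of the names come from an actual framework.

```python
from typing import Any, Callable

# Each stage of the sequence is a function; the sequence is their composition.
Stage = Callable[[Any], Any]

def make_sequence(sense: Stage, do_task: Stage, actuate: Stage) -> Stage:
    """Compose sensing, task and actuation into one
    environment -> expression pipeline."""
    def pipeline(environment):
        observation = sense(environment)  # sensing the environment
        result = do_task(observation)     # doing a task (AI-intelligible result)
        return actuate(result)            # expressing the result to the interacting actor
    return pipeline

# Toy stages standing in for real AI-agents.
pipeline = make_sequence(
    sense=lambda env: env["pixels"],
    do_task=lambda obs: sum(obs) / len(obs),  # e.g. a score computed on the observation
    actuate=lambda res: f"brightness estimate: {res:.2f}",
)
print(pipeline({"pixels": [0.2, 0.4, 0.6]}))  # brightness estimate: 0.40
```

Seen this way, each stage can be built, swapped or improved by a different team, which is precisely the point made below about the number of AI-agents involved.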

The emergence of new activities

This sequence, as well as the details on the GAN actuator for example, shows that more than one AI-agent is actually needed if one wants to integrate AI completely into reality. Thus, the development of high-performing AI-agents will involve many teams and labs.

Envisioning the chain of production of the future

As a result, new types of economic activities and functions could emerge in the AI-field. One could have, notably, the assembly of the right operational sequence. Similarly, the initial design of the right architecture, across types of AI-agents and sub-fields could become a necessary activity.

Breaking down AI integration into a sequence allows us to start understanding the chain of production of the future. We can thus imagine the series of economic activities that can and will emerge. These will go far beyond the current emphasis on IT or consumer analytics, which most early adopters of AI appear to favour so far (Deloitte, “State of Artificial Intelligence in the enterprise“, 2018).

The dizzying multiplication of possibilities

Furthermore, the AI sequence could be customised according to needs. One may imagine that various systems of actuators could be added to a sequence. For example, to use our second case study, a “scene representation” intelligible to the AI-agent could be expressed as a realistic visual render, as a narrative, and as a robotic movement. We are here much closer to the way a sensory stimulation would trigger in us, human beings, a whole possible range of reactions. However, compared with the human world, if one adds the cloud, then the various expressions of the “scene representation” could be located anywhere on earth and in space, according to available communication infrastructure.
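The idea of fanning one AI-intelligible representation out to several actuators can be pictured as follows. This is a deliberately simplified, hypothetical sketch: the representation is a plain dictionary and each "actuator" a trivial function.

```python
# One AI-intelligible "scene representation" fanned out to several
# actuator systems, each expressing it for a different need.
scene_representation = {"object": "table", "legs_visible": 3, "colour": "wooden"}

actuators = {
    "visual": lambda s: f"render: a {s['colour']} {s['object']}",
    "narrative": lambda s: f"I see {s['legs_visible']} legs of a {s['colour']} {s['object']}.",
    "robotic": lambda s: ("move_arm_towards", s["object"]),
}

# The same representation, expressed three different ways - and, with the
# cloud, each expression could in principle be produced anywhere.
expressions = {name: act(scene_representation) for name, act in actuators.items()}
for name, expression in expressions.items():
    print(name, "->", expression)
```

The design choice worth noting is that the actuators share nothing but the representation's format: adding a fourth expression does not require touching the sensor or the task.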

The possibilities and combinations entailed are amazing and dizzying. And we shall look in the next articles at the incredible possibilities which are being created.

Towards the need to redefine security?

Altering our very reality

In terms of dangers, if we come to rely only or mainly on a world that is sensed, understood, and then expressed by an AI sequence, then we also open the door to an alteration of our reality that could be carried out more easily than if we were using our own senses. For example, if one relies on a sequence of AI-agents to recognise and perceive the external world miles away from where we are located, then an unintentional problem or malicious intent could mean that we receive wrong visual representations of reality. A tree could be set where there is no tree. As a result, a self-driving car, trying to avoid it, could swerve off the road. The behaviour of the users of this very expression of reality will make sense in the AI-world. It will however be erratic outside it.

Actors could create decoys in a way that has never been thought about before. Imagine Operation Fortitude, the operation through which the Allies deceived the Nazis during World War II regarding the location of the 1944 invasion, organised with the power of multiple AI-sequences.

Actually, it is our very reality, as we are used to seeing it expressed through photographs, that may become altered in a way that cannot be directly grasped by our visual senses.

Breaking the world-wide-web?

Here we also need to consider the spread of propaganda and of what is now called “Fake News”, and most importantly of the “Fake Internet”, as Max Read masterfully explained in “How Much of the Internet Is Fake? Turns Out, a Lot of It, Actually” (Intelligencer, 26 December 2018). Assuming the spread of “Fake Everything” signals established widespread malicious intention, then adding to it the power of AI-agents could break the world-wide-web. The impacts would be immense. To avoid such a disaster, actors will have to devise very strong regulations and to favour and spread new norms.

Artificial Intelligence completely redefines the way security can be breached and thus must be defended.

Integrating AI-agents according to different realities: Virtual-Virtual and Virtual-Material

From virtual to virtual

When the AI-agent’s environment and the other actors are virtual, then the sequence is – to a point – easier to build. Indeed, everything takes place in a world of a single nature.

However, fear and the need to know will most probably mean that human beings will want control at various points of the sequence. Thus, ways to translate the virtual world into something at least perceptible to humans are likely to be introduced. This will increase the complexity of development.

From virtual to material

When the environment is real and when interactions take place between an AI-agent and human beings, the sequence becomes much more complex. The twin interfaces must indeed become bridges between two different types of world, the digital and the real.

Actually, if we look through these lenses at the deep learning ecosystem and its evolution since 2015, researchers devoted a large part of their initial efforts to creating AI-agents able to “do a task” (playing, sorting, labelling, etc.). Scientists have also developed ways to make the real world intelligible to AI-agents. Meanwhile, the actuator-systems developed produce outputs intelligible to humans, but those outputs remain nonetheless mostly virtual.

Lagging behind in expressing the virtual world in the real one – Visual AI-agents

For example, the real world is translated into digital photographs, which the AI-agent recognises through deep learning algorithms. The AI will sort or label them in a way that human beings understand. For instance, human beings easily understand words, or images displayed on a screen, which are the result of the actuator part of the sequence. Yet, this output remains virtual. If we want to go further, then we must create and use other devices to enhance or ease the interface from virtual to real. Object recognition proceeds in a similar way.

In terms of visual AI-related efforts, we may wonder if we have not progressed more in giving vision to AI-agents than in using this vision in a way that is useful enough to human beings in the real world.

From virtual to real, sensing more advanced than expressing?

A similar process is at work in China with sound recognition (Joseph Hincks, “China Is Creating a Database of Its Citizens’ Voices to Boost its Surveillance Capability: Report“, Time, 23 October 2017). Data analytics are also a way to explain to AI-agents what internet users are, according to various criteria. Sensors collecting data, for example from pipelines (e.g. Maria S. Araujo and Daniel S. Davila, “Machine learning improves oil and gas monitoring“, Talking IoT in Energy, 9 June 2017; Jo Øvstaas, “Big data and machine learning for prediction of corrosion in pipelines“, DNV GL, 12 June 2017), or from the flight of an aircraft, or from anything actually, are ways to make the world intelligible to an algorithm with a specific design.

Yet, have we made similar progress in the development of actuators that interface between the virtual world of the AI-agent and the reality of human beings? Alternatively, could it be that we did improve the whole sequence but that progress remains limited to the virtual world? In all cases, what are the impacts in terms of security, politics and geopolitics?

This is what we shall see next, looking more particularly at the Internet of Things, Robots and Human Beings, as potential actuator systems of AI.


*Initially, I used the word “expressor” instead of the adequate word, “actuator”. Thanks to Teeteekay Ciar for his help in finding it.

Featured image: U.S. Army graphic by Sonya Beckett, CERDEC NVESD – Public Domain – From Aris Morris, January 9, 2018, Army ALT Magazine, Science and Technology.

Strategic Foresight & Warning Analysis

Strategic Foresight and Warning (SF&W) is at once process and analysis.

By SF&W analysis we mean all methodologies and related issues allowing for the development of an understanding grounded in reality that will generate best anticipatory products, useful to decision-makers and policy-makers for carrying out their mission (to find your way within the myriad of labels given to anticipatory activities, see Intelligence, strategic foresight and warning, risk management, forecasting or futurism? (Open Access/Free) and When risk management meets SF&W).

The larger SF&W analytical method can be seen as the following steps, with the use of various methodologies, notably to face the specific challenges of each stage:


A good bibliography is a typical part of what is involved in step 1, to which must be added an ongoing scan, as is done with the Red (Team) Analysis Weekly. A more detailed discussion of steps 1 and 6 can be found in the section scan & monitor.

A summary of the methodology used for the second and third steps is presented here with mapping dynamic networks part I & part II, followed by Determining criteria: a revisited influence analysis; Variables, values and consistency in dynamic networks; and finally Constructing a foresight scenario’s narrative with Ego Networks.

The Chronicles of Everstate are an early experiment exemplifying one way to map an issue and how ego networks can be used to develop narratives. Our online course, From Process to Creating your Analytical Model…, focuses on the creation of the model, thus on the most fundamental part of these steps.

Examples of scenarios and their indicators are given for Syria and Libya. Furthermore, as far as Libya is concerned, we detail the methodology to evaluate the likelihood of each scenario. Another example of the narrative can be found here.

Those steps are also addressed in the section Assessing future security threats, where we share our latest insights and foresights on methodology using specific geopolitical global issues, risks and uncertainties as case studies.

The public part of our monitoring – step 6 – is done for various issues through The Sigils, as well as through The Weekly (both open access/free). You can also find monitoring at work in our Horizon Scanning Board (open access/free). Furthermore, these real-life indications allow checking the validity of scenarios, and updating the model used for each issue, if necessary.

Finally, monitoring is necessary – if not crucial – to identify new emerging issues (the feedback on step 1).

Visualising the Steps to Foresee the Future and Get Ready for It

Strategic foresight and warning, or more broadly anticipation, is a step-by-step process to anticipate the future in an actionable way.

The graphic ideal-type process displayed below is the result of more than a decade of work with and about systems of anticipation, from early warning systems to prevent conflicts for aid agencies to strategic warning and strategic foresight with security and intelligence agencies and practitioners. It also takes into account research through commissioned reports and teaching on the topic.

It is more particularly adapted to global security, external risks, political and geopolitical risks and uncertainties. Indeed, the process recommended builds upon more than twenty years' experience in central administration and in research in the areas of war, international relations, political science, analysis and policy planning.


The architecture of the Red Team Analysis Society’s website is built following this process. Each section strives progressively to address the various challenges that are met at each step, to explain and apply various possible methodologies and tools, and finally to deliver real-life strategic foresight and warning products.

See also:

———

Featured image: Stanley Kubrick exhibit at EYE Filminstitut Netherlands, Amsterdam – The War Room (Dr. Strangelove or: How I Learned to Stop Worrying and Love the Bomb)- By Marcel Oosterwijk from Amsterdam, The Netherlands [CC-BY-SA-2.0 (http://creativecommons.org/licenses/by-sa/2.0)], via Wikimedia Commons

Strategic Foresight Methodology

By strategic foresight methodology, we mean the part of the general strategic foresight and warning methodology that focuses on foresight analysis. In other words, it is the general method without the warning part. It thus consists of:

  • Defining the question
  • Step 1: Exploratory stage
  • Step 2 – The creation of the model for SF&W: mapping dynamic networks part I & part II. See also our online course for this part.
  • Step 3 – Building scenarios
  1. Determining criteria: a revisited influence analysis;
  2. Variables, values and consistency in dynamic networks;
  3. Constructing a foresight scenario’s narrative with Ego Networks: This methodology was tested with the Chronicles of Everstate – it can be used as a guide and fallback in case the analyst faces a hurdle in developing the narrative. However, in practice, building a whole narrative with ego networks is likely to be too painstaking for an analyst to use systematically. Should Artificial Intelligence be applied to SF&W, then it could possibly benefit from the ego-network approach.