(Art design: Jean-Dominique Lavoix-Carli)
In the late winter of 2022-2023, artificial intelligence (AI) made headlines again. This time, the buzz came not from the amazing capacity of an AI agent, AlphaGo by DeepMind (now Alphabet/Google), to defeat a Go master on its own, but from the ability of GPT-4, the latest version of OpenAI's (Microsoft-backed) AI model, to converse and do "many things as a human".
Fear and fascination are once more unleashed.
As happens each time a disruptive technology spreads, we may complain, struggle against it, or fear it. Yet, it is highly probable (over 80%, although not certain) that AI, and more specifically the "GPT-type" of AI, will generate significant changes.*
The wisest behaviour when facing such sweeping changes is to find out how we can make sure the technology remains a tool that serves us, rather than our becoming its slave or victim.
This series of articles will thus explore concretely how AI, and more specifically AI based on GPT-like models, can help us anticipate the future, notably in the fields of geopolitics and national and international security.
The first article of the series presents what GPT models and ChatGPT are, and why they matter for occupations and jobs. Then, we test ChatGPT on a specific question: "the future of the war in Ukraine over the next twelve months".
After an initial disappointment, which we explain, we proceeded with the test to determine whether ChatGPT can assist us in identifying variables and their causal relationships for strategic foresight and early warning analysis. We provide a transcript of our conversation with ChatGPT along with our comments, and conclude that there is sufficient potential to create an AI assistant for variables, named Calvin (please visit the link).
ChatGPT, GPT, Generative AI… Let’s get things right first
Why does it matter?
On 5 and 7 April 2023, Goldman Sachs, in its Insights and in the special tech edition of its Briefings, warned about the "sweeping changes to the global economy" that "Generative Artificial Intelligence" (see below) would bring.
“They could drive a 7% (or almost $7 trillion) increase in global GDP and lift productivity growth by 1.5 percentage points over a 10-year period…”
However, “Shifts in workflows triggered by these advances could expose the equivalent of 300 million full-time jobs to automation…”
Goldman Sachs Insights, “Generative AI could raise global GDP by 7%“, 5 April 2023
OpenAI, for its part, also carried out research on the impact of its AI models, and more generally of Large Language Models (LLMs), on the labor market (Tyna Eloundou, Sam Manning, Pamela Mishkin, Daniel Rock, "GPTs are GPTs: An Early Look at the Labor Market Impact Potential of Large Language Models", ArXiv, 17 Mar 2023 (v1), last revised 23 Mar 2023 – v4). Note, however, that OpenAI also has a corporate and legal interest in promoting LLMs, and more particularly its GPT models, as "general purpose technology." Seeing its GPT models perceived as "general purpose technology" would, for example, give OpenAI a very strong advantage in protecting its trademark and in preventing others from using the name GPT (e.g. Connie Loizos, "'GPT' may be trademarked soon if OpenAI has its way", TechCrunch, 25 April 2023).
In general, OpenAI Research found that:
“… around 80% of the U.S. workforce could have at least 10% of their work tasks affected by the introduction of LLMs, while approximately 19% of workers may see at least 50% of their tasks impacted…
Our analysis suggests that, with access to an LLM, about 15% of all worker tasks in the US could be completed significantly faster at the same level of quality. When incorporating software and tooling built on top of LLMs, this share increases to between 47 and 56% of all tasks.”
Eloundou, et al. (“GPTs are GPTs, 2023)
Generative AI is expected to particularly impact the "software, healthcare and financial services industries" and, more broadly, the media and "entertainment, education, medicine and IT industry" sectors (Ibid., Goldman Sachs Insights, "Stability AI CEO says AI will prove more disruptive than the pandemic", 31 March 2023).
For example, in May 2023, Hollywood writers went on strike to make sure their work and pay are protected from Generative AI (Dawn Chmielewski and Lisa Richwine, "'Plagiarism machines': Hollywood writers and studios battle over the future of AI", Reuters, 3 May 2023). Chegg, a Californian company selling specialised tutoring services, saw its shares plummet because its clients are using ChatGPT instead (Prarthana Prakash, "Chegg's shares tumbled nearly 50% after the edtech company said its customers are using ChatGPT instead of paying for its study tools", Fortune, 2 May 2023).
In general, it is estimated that 900 types of occupations will be impacted, and first among them knowledge workers, scientists, and software coders (Ibid.).
As human beings concerned with the future, or more specifically as scientists and practitioners of strategic foresight and early warning, we have three reasons to consider these developments closely.
First, being concerned with the future, we must be able to factor the development and use of AI into all our foresight and warning analyses. This is why, for example, we created our initial series of articles on AI.
Second, practitioners of strategic foresight, early warning, risk management, etc. are "knowledge workers" and "scientists." We are thus on the front line of those who will be hit by Generative AI, if Goldman Sachs is right. Hence, it is better to make sure AI serves us rather than kills us.
Finally, and more idealistically, if Generative AI indeed helps ease the process of anticipation in many ways, then it should also help spread the use of actionable foresight and early warning widely. As a result, obstacles to crisis prevention and threat mitigation will potentially be reduced. However, challenges related to the organisation of the anticipation process and the willingness to heed warnings may remain largely unaffected.
What are ChatGPT, GPT-based AI and Generative Artificial Intelligence
GPT-models
GPT is a natural language model developed by OpenAI (backed by Microsoft) that uses deep learning to generate human-like responses to prompts. "GPT" stands for "Generative Pre-trained Transformer," which means that the AI has been trained on vast amounts of text data to understand the nuances not only of the English language but also of other languages. GPT models are part of the Generative AI family.
The newest model, still being tested by OpenAI, is gpt-4. The most recent model OpenAI commercialises for usage is gpt-3.5-turbo. While waiting for gpt-4 to become available, gpt-3.5-turbo is the model we use here at RTAS for our experiments with AI assistants for Strategic Foresight and Warning: Aria (always available throughout the website, bottom right-hand corner), Calvin, Kai, Regina and Pithia.
ChatGPT
ChatGPT is an application using a GPT model that can hold a conversation with a human and generate responses that sound like they were written by a person. This is what sets the application apart, and also what frightens people. In March-May 2023, the public version of ChatGPT uses the gpt-3.5-turbo model. The most advanced version of the application uses the newest model, gpt-4, but is only available to paying subscribers (via ChatGPT Plus). For external usage through the API, developers and users must apply through a waiting list.
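For readers who want to see what querying such a model looks like in practice, here is a minimal sketch of the request an application sends when using the gpt-3.5-turbo chat endpoint via OpenAI's 2023-era Python library. The helper function, system prompt and temperature value are our own illustrative choices, not prescribed by OpenAI.

```python
# Minimal sketch of a chat request to gpt-3.5-turbo (OpenAI chat
# completions API, 2023-era `openai` Python library). The helper
# name, system prompt and temperature are illustrative assumptions.

def build_chat_request(question: str) -> dict:
    """Assemble the payload for the chat completions endpoint."""
    return {
        "model": "gpt-3.5-turbo",
        "messages": [
            {"role": "system",
             "content": "You are an assistant for strategic foresight and early warning."},
            {"role": "user", "content": question},
        ],
        "temperature": 0.7,  # lower values make answers more deterministic
    }

request = build_chat_request(
    "Can you identify variables with their attributes for the issue "
    "'the future of the war in Ukraine over the next 12 months'?"
)

# With an API key configured, the actual call would be:
# import openai
# response = openai.ChatCompletion.create(**request)
# answer = response["choices"][0]["message"]["content"]
```

A multi-turn conversation, such as the one transcribed later in this article, is simply a sequence of such requests, with each new question and answer appended to the `messages` list so the model retains the context of earlier exchanges.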
Generative AI
Generative AI belongs to the Machine Learning > Deep Learning > Unsupervised, Supervised and Reinforcement Learnings category of AI (for more details on the typologies of AI, see Hélène Lavoix, “When Artificial Intelligence will Power Geopolitics – Presenting AI“, “Artificial Intelligence and Deep Learning – The New AI-World in the Making“, “Inserting Artificial Intelligence in Reality“, The Red Team Analysis Society).
The latest GPT models were trained through supervised learning and reinforcement learning with human feedback (see OpenAI, "Aligning language models to follow instructions").
Generative AI is thus typically based on deep learning models, which are trained on large datasets of examples in order to learn patterns and generate new content. We have, for example, generative adversarial networks (GANs – Ian Goodfellow et al. “Generative Adversarial Networks“, 2014) and variational autoencoders (VAEs – defined in 2013 by Kingma et al. and Rezende et al. – for an explanation J. Altosaar, “Tutorial – What is a variational autoencoder?“). For instance, a GAN might be trained on a dataset of images, and then used to generate new images that are similar in style and content to the original dataset, but are not exact copies.
Generative AI is thus a type of AI designed to create or generate new and original content, such as images, videos, music, or text. Below you can see examples of images generated with OpenAI's DALL-E model for this article, on three topics: deep sea security, space security and steampunk architecture.
Usage, applications and impacts
Generative AI has a wide range of potential applications, from creating realistic 3D models to generating personalised content for users. However, it also raises ethical and security concerns around the potential misuse of generated content, such as deepfakes or fake news.
The world in which an application such as ChatGPT, like most GPT-based applications, actually evolves is digital. In other words, most GPT-based apps have no capability to act directly and concretely on reality… except if human beings become their actuators (see Helene Lavoix, "Actuator for AI (1): Inserting Artificial Intelligence in Reality", The Red Team Analysis Society, 14 January 2019).
Thus, the activities that will be most impacted by GPT-based applications and their likes are those located within the digital world. We may rephrase this as: the more insubstantial our activity – in the sense of not having physical existence – the more likely it is to be impacted by Generative AI applications such as GPT-based apps.
For example, if you are a plumber or a mason, it is highly unlikely that ChatGPT or a similar GPT-based application will make a serious impact on your activity. It may help people understand drains or masonry, and give them advice and steps to follow, but the real work of repairing a basin or removing part of a wall to fit a cupboard will still be done by a real human being, and likely by a specialist.
On the contrary, if your work is directly related to and carried out on the world wide web, from web marketing to coding and development, it is very likely to be heavily impacted by ChatGPT and its likes.
Similarly, in the world of activities related to ideas, including knowledge, the impact of Generative AI will most likely be significant.
The logic is similar to what we identified and explained previously regarding the importance of sensors and actuators for the development of AI (see Lavoix, “Actuator for AI (1): Inserting Artificial Intelligence in Reality“, Ibid, 2019). This corresponds to what Eloundou, et al. (“GPTs are GPTs, 2023) found:
…information processing industries (4-digit NAICS) exhibit high exposure, while manufacturing, agriculture, and mining demonstrate lower exposure [to LLMs and GPTs].
Eloundou, et al. (“GPTs are GPTs, 2023)
Indeed, OpenAI’s research paper presents a table of impacted occupations, as reproduced here.
Logically, it appears that there are also variations in the degree of exposure of "ideational" activities according to the types of skills most necessary for an activity. Science and critical thinking are the least exposed to and impacted by LLMs, whilst programming and writing are among the most exposed activities:
“Our findings indicate that the importance of science and critical thinking skills are strongly negatively associated with exposure, suggesting that occupations requiring these skills are less likely to be impacted by current LLMs. Conversely, programming and writing skills show a strong positive association with exposure, implying that occupations involving these skills are more susceptible to being influenced by LLMs.”
Eloundou, et al. (“GPTs are GPTs, 2023), p.14.
Eloundou, et al. (“GPTs are GPTs, 2023) provide a table, reproduced here, estimating the exposure of various types of mainly cognitive skills.
This will allow each of us to appraise which of our tasks can be eased by GPT-like models, or threatened by them, depending on how AI is perceived, integrated within society and used.
Our concern with the use of GPT-based applications for strategic foresight and warning is thus a specific case of a larger issue: the way Generative AI will impact knowledge- and ideas-based activities.
Now, concretely, the type and extent of impact of Generative AI models and their applications will depend upon the specifics of each model and its applications in regard to specific activities. Let us thus turn practically to such a case, the use of ChatGPT for strategic foresight and warning, and the first test we did, focused on the future of the war in Ukraine.
Testing ChatGPT on the future of the war in Ukraine
Not (yet) a tool to monitor current events
The first challenge we faced was that gpt-3.5-turbo, the model used by ChatGPT, ended its training in September 2021. The second was that ChatGPT is a closed system, unable to access the internet or any source of information published after the end of its training.
In the words of ChatGPT, asked if it could give a political assessment on current events:
I can provide political assessments on current events to the best of my abilities based on the information available up until my last training cutoff in September 2021. However, please keep in mind that my responses might not reflect the most up-to-date developments or the current state of affairs, as the world is constantly evolving.
ChatGPT – 2 May 2023
Hence, ChatGPT and its likes cannot be used, for now, for those steps of the strategic foresight and early warning process, or of a risk management system, that need up-to-date information. For example, it cannot be used to monitor and survey early warning issues, nor to estimate the likelihood of scenarios or use the latter to steer policy. In terms of horizon scanning, it would need to be complemented by other tools.
For those who would be disappointed, please note that OpenAI has opened its models to developers working on plugins for ChatGPT that would, for example, allow it to "retrieve real-time information". It is thus only a question of time before GPT-based models and applications can be up to date. Indeed, the "New Bing", Microsoft's new search engine, uses GPT-4 and, de facto, accesses the world wide web.
Even though developments in the field are very fast, for now, our test will be done with what is available: a Generative AI whose "knowledge" was cut off in September 2021.
If ChatGPT, as it currently stands (Spring 2023), is not a tool we can use to monitor current evolutions, does that also mean it is useless for our purpose? Alternatively, even cut off from the most recent events and knowledge, can it nonetheless be useful?
First milestones towards a tool contributing to building a model for strategic foresight and early warning
Considering the methodology of strategic foresight and early warning, we shall test in which way ChatGPT and GPT-models can become tools for different aspects and parts of the analytical process.
Knowing that most serious methodologies of strategic foresight and warning, indeed any analysis, are grounded in explicit modeling, we shall first examine the capability of ChatGPT to help us with the creation and development of our model (see Hélène Lavoix, "Modeling for Dynamic Risks and Uncertainties (1): Mapping Risk and Uncertainty", The Red Team Analysis Society, 2018 [2011]; Joshua M. Epstein, "Why Model?", Journal of Artificial Societies and Social Simulation 11(4)12, 2008; The Millennium Project: Futures Research Methodology, Version 3.0, 2009).
Can ChatGPT assist in identifying the crucial factors and actors relevant to our question? More precisely, can it aid in discovering the variables and causal relationships that drive the dynamics of our strategic foresight and warning issue?
We cut and pasted the conversation with ChatGPT below, and added comments in the column next to the answers.
Dialogue with ChatGPT (26 March 2023)
Comments
Can you identify variables with their attributes for the issue ‘the future of the war in Ukraine over the next 12 months’?
ChatGPT:
Certainly! Here are some possible variables with their associated attributes that could be relevant for the issue of the future of the war in Ukraine over the next 12 months:
- Military strength: The relative military strength of the opposing sides in the conflict, including their weaponry, training, and troop levels.
- Diplomatic efforts: The extent and success of diplomatic efforts to resolve the conflict, including negotiations between Ukraine, Russia, and other relevant parties.
- Economic sanctions: The use and effectiveness of economic sanctions against Russia, which has been accused of supporting separatist rebels in eastern Ukraine.
- Domestic politics: The domestic political situations in Ukraine, Russia, and other relevant countries, including the popularity of leaders and their willingness to compromise or escalate the conflict.
- International alliances: The level of support that Ukraine and Russia receive from their allies and partners, including NATO, the EU, and China.
- Humanitarian situation: The humanitarian situation in Ukraine and the impact of the conflict on civilians, including displacement, casualties, and access to basic needs like food, water, and medical care.
- Media coverage: The media coverage and public opinion on the conflict in Ukraine, both domestically and internationally, which can shape political decisions and public pressure for resolution.
These are just a few possible variables that could be relevant for understanding the future of the war in Ukraine over the next 12 months. Other factors, such as geography, historical context, and cultural differences, may also be important to consider.
Even though the GPT models' training ended before the war started, a war is grounded in dynamics that often begin long before its onset. Furthermore, knowledge of international relations, war, the various actors involved, etc., builds upon millennia of understanding. Hence, a GPT model whose training stopped in September 2021 should be adequate to help us identify variables and their causal relationships for our issue. We would only have to check that the variables identified allow for taking into account the real-life events and dynamics that took place after September 2021. If necessary, we could correct our model accordingly.
Here the variables given by ChatGPT are correct but, as ChatGPT itself states, not exhaustive, whereas we need, if not all possible variables, far more than were given here. Furthermore, we are still at a very high level of generality. The variables given are hardly sufficient to capture the intricacies of real dynamics, the specificities of each actor, etc. Nor do they integrate aspects of knowledge and understanding such as the systemic level of analysis.
Thus, with the type of question used initially, ChatGPT's answers could be helpful for beginners, lay people and non-specialists starting an analysis on this type of issue. These variables can help as a starting point but are hardly sufficient.
In terms of the attributes of the variables, some are given but not systematically. This aspect will have to be handled more precisely and specifically later on.
Can you add variables relative to the international system
ChatGPT:
Certainly! Here are some additional variables related to the international system that could be relevant for the issue of the future of the war in Ukraine over the next 12 months:
- Geopolitics: The broader geopolitical context of the conflict, including the power dynamics and strategic interests of the major powers involved, such as the US, Russia, China, and the EU.
- International law: The role of international law and institutions in resolving the conflict, including the authority and enforcement capabilities of bodies like the UN and the International Criminal Court.
- Norms and values: The role of shared norms and values, such as democracy, human rights, and the rule of law, in shaping international attitudes towards the conflict and influencing the behavior of the parties involved.
- Economic interdependence: The degree of economic interdependence between Ukraine, Russia, and other relevant countries, and the potential impact of the conflict on trade, investment, and global economic stability.
- Military alliances: The impact of military alliances and security arrangements, such as NATO and the Collective Security Treaty Organization, on the conflict and the potential for escalation.
- International cooperation: The extent and effectiveness of international cooperation and coordination in addressing the conflict, including efforts to provide humanitarian aid, monitor ceasefire agreements, and support diplomatic negotiations.
These variables highlight the importance of understanding the broader international context in which the conflict is unfolding, and how it shapes the actions and decisions of the parties involved.
We thus asked ChatGPT to add other types of variables we knew to be important.
The results obtained were both interesting and good. It was particularly striking that ChatGPT thought about including norms and values.
However, again, variables are missing. For example, elements related to the Realist school of international relations are not present, but we can expect that more precise prompts would yield better results. We should also be careful that biases are not introduced at the stage of the choice of variables.
After two exchanges, we could see a way to use ChatGPT for variables: through iterations between practitioner and AI, helping the former progressively obtain the variables needed.
Should we also add the American national interest as variable?
ChatGPT:
Yes, the American national interest could be a relevant variable to consider in the issue of the future of the war in Ukraine over the next 12 months. The United States has been involved in the conflict in various ways, including providing military and economic aid to Ukraine, imposing sanctions on Russia, and supporting diplomatic efforts to resolve the conflict.
Some possible attributes of this variable could include:
- Strategic interests: The extent to which the conflict in Ukraine aligns with or challenges American strategic interests, such as promoting democracy and human rights, containing Russian aggression, and maintaining stability in Europe.
- Domestic politics: The role of domestic politics in shaping American policy towards the conflict, including the views and interests of key stakeholders such as Congress, the President, and interest groups.
- Alliances and partnerships: The impact of American alliances and partnerships, such as NATO and the EU, on the conflict and the potential for escalation.
- Economic interests: The potential impact of the conflict on American economic interests, such as trade and investment with Ukraine and other countries in the region.
- Military capabilities: The extent to which American military capabilities could be mobilized in response to the conflict, including the potential for direct military intervention or support for Ukrainian forces.
However, it’s important to note that the American national interest is just one of many variables that could be relevant to understanding the future of the war in Ukraine over the next 12 months, and it should be considered in conjunction with other factors, such as the interests and actions of other key actors in the conflict.
We further tested ChatGPT by asking if a very specific variable, in that case “the American National Interest” was a variable for our question.
The answer was interesting even though flawed from the strict point of view of the definition of a variable and its attributes in social science.
In social science terms, “a variable is a logical set of attributes” (art “Variable and attribute (research)” Wikipedia – using Earl R. Babbie, The Practice of Social Research, 12th edition, Wadsworth Publishing, 2009, p. 14-18).
It appears that here our instructions to and definitions for ChatGPT were not clear enough. Thus the result was not what we expected.
Rather than obtaining a clear set of variables with their attributes, we obtained one variable with influencing (independent) and influenced (dependent) variables, with explanations and attributes, all presented in an explanatory manner.
The substance of the answer nonetheless lets us expect that, if we correct our question and eventually train ChatGPT adequately, we could – hopefully – obtain the right set of variables with their clear attributes.
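To make Babbie's definition concrete, a variable understood as "a logical set of attributes" can be sketched as a simple data structure. The variable names and attribute sets below are illustrative examples, not the ones we would retain in a finished model:

```python
# A variable as "a logical set of attributes" (Babbie): each variable
# maps to the exhaustive set of states it can take. Names and
# attributes below are illustrative assumptions.

variables = {
    "Military strength": {"very weak", "weak", "moderate", "strong", "very strong"},
    "Diplomatic efforts": {"absent", "stalled", "ongoing", "successful"},
    "Humanitarian situation": {"stable", "deteriorating", "critical"},
}

def is_valid_state(variable: str, attribute: str) -> bool:
    """An observed state is valid only if it belongs to the variable's attribute set."""
    return attribute in variables.get(variable, set())

print(is_valid_state("Diplomatic efforts", "ongoing"))     # True
print(is_valid_state("Diplomatic efforts", "escalating"))  # False
```

Such a structure makes it easy to check whether what ChatGPT returns is a proper variable (a set of mutually exclusive states) or, as here, an explanatory narrative mixing variables and attributes.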
Now you highlighted variables can you specify causal links between them?
ChatGPT:
Certainly! Here are some possible causal links between the variables previously identified for the issue of the future of the war in Ukraine over the next 12 months:
- Military strength -> Conflict escalation: If one side perceives a significant military advantage, they may be more likely to escalate the conflict, leading to increased violence and casualties.
- Diplomatic efforts -> Conflict resolution: Successful diplomatic efforts could lead to a peaceful resolution of the conflict, potentially through negotiations or a ceasefire agreement.
- Economic sanctions -> Political pressure: Economic sanctions could put political pressure on Russia to change its behavior, potentially leading to a reduction in support for separatist rebels and a de-escalation of the conflict.
- Domestic politics -> Policy decisions: The political situations in Ukraine, Russia, and other relevant countries could influence the policy decisions of their leaders, potentially leading to either escalation or de-escalation of the conflict.
- International alliances -> Military and economic support: The level of support that Ukraine and Russia receive from their allies and partners could influence the military and economic capabilities of the opposing sides, potentially leading to escalation or de-escalation of the conflict.
- Humanitarian situation -> International intervention: The humanitarian situation in Ukraine could increase international pressure for intervention, potentially leading to a more forceful response to the conflict.
- Media coverage -> Public opinion: The media coverage of the conflict could shape public opinion and political decisions, potentially leading to either escalation or de-escalation of the conflict.
- Geopolitics -> Strategic decisions: The broader geopolitical context of the conflict could influence the strategic decisions of the major powers involved, potentially leading to either escalation or de-escalation of the conflict.
- International law -> Compliance: The role of international law and institutions in resolving the conflict could influence the compliance of the parties involved with ceasefire agreements and other diplomatic efforts.
- Economic interdependence -> Economic impact: The degree of economic interdependence between Ukraine, Russia, and other relevant countries could influence the potential economic impact of the conflict, potentially leading to pressure for de-escalation.
- Military alliances -> Escalation risk: The impact of military alliances and security arrangements could increase the risk of escalation, potentially leading to more forceful responses to the conflict.
- International cooperation -> Conflict resolution: International cooperation and coordination could support diplomatic efforts to resolve the conflict, potentially leading to a peaceful resolution of the conflict.
- American national interest -> Policy decisions: The American national interest could influence the policy decisions of the US government towards the conflict, potentially leading to either escalation or de-escalation of the conflict.
It’s important to note that these are just some possible causal links between the variables, and the actual causal relationships between the variables may be more complex and interdependent than these simple relationships suggest.
Now that we knew we could, with corrections, training and iterations, get variables for our model, the next step was to find out whether ChatGPT could identify causal relationships.
Interestingly, ChatGPT was able to gather all variables of our exchanges since the start of the conversation.
It could also identify some relationships among variables, but by no means all. To highlight just one missing relationship among many, “conflict escalation” and “escalation risk” are not linked by ChatGPT.
We may imagine, although we did not test it, that we could again prompt ChatGPT into testing further the existence or not of other causal relationships. However, here, using ChatGPT could prove more cumbersome and slower than not using it and creating our causal relationships directly out of our understanding, out of the classical build-up of knowledge (search, reading, comprehension) and/or expert workshops.
Furthermore, an output listing causal relationships between variables could be extremely tedious to read and hardly practical.
To build causal relationships between variables, it is best to use graph-building and visualisation software rather than linear text. ChatGPT being a text-based app, it cannot really help here, or at least not directly.
For the sake of completing the experiment, we nonetheless went on, trying to find a way to translate ChatGPT's output easily into something a graph software could read and use.
Now can you show the causal relationships between variables as a matrix?
ChatGPT:
Certainly! Here is a matrix that shows possible causal relationships between the variables identified for the issue of the future of the war in Ukraine over the next 12 months:
| Causal Links | Military Strength | Diplomatic Efforts | Economic Sanctions | Domestic Politics | International Alliances | Humanitarian Situation | Media Coverage | Geopolitics | International Law | Economic Interdependence | Military Alliances | International Cooperation | American National Interest |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Military Strength -> Conflict escalation | 1 | | | | | | | | | | | | |
| Diplomatic Efforts -> Conflict resolution | 1 | | | | | | | | | | | | |
| Economic Sanctions -> Political pressure | 1 | | | | | | | | | | | | |
| Domestic Politics -> Policy decisions | 1 | | | | | | | | | | | | |
| International Alliances -> Military and economic support | 1 | | | | | | | | | | | | |
| Humanitarian Situation -> International intervention | 1 | | | | | | | | | | | | |
| Media Coverage -> Public opinion | 1 | | | | | | | | | | | | |
| Geopolitics -> Strategic decisions | 1 | | | | | | | | | | | | |
| International Law -> Compliance | 1 | | | | | | | | | | | | |
| Economic Interdependence -> Economic impact | 1 | | | | | | | | | | | | |
| Military Alliances -> Escalation risk | 1 | | | | | | | | | | | | |
| International Cooperation -> Conflict resolution | 1 | 1 | | | | | | | | | | | |
| American National Interest -> Policy decisions | 1 | 1 | 1 | 1 | 1 | 1 | | | | | | | |
In the above matrix, the row variable represents the cause or independent variable, and the column variable represents the effect or dependent variable. An “X” in a cell indicates a potential causal link between the corresponding row and column variables, and an empty cell indicates no significant causal link.
Again, it’s important to note that these are just some possible causal links between the variables, and the actual causal relationships between the variables may be more complex and interdependent than these simple relationships suggest.
Because the mathematical object underlying a graph is a square matrix (its adjacency matrix), we asked ChatGPT to translate the causal relationships for the variables it had identified into a matrix.
The result was disappointing, yet also exciting.
Exciting because it did create a matrix; disappointing because the effects ChatGPT had initially found were folded into the row labels, and the links, as a result, were not right.
However, with training of the GPT model and effort, it may be possible to obtain something better… to be tested in the future.
We could then export the matrix as a .csv file and read it in Gephi as a graph.
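For readers who want to reproduce this last step, here is a minimal sketch of what a correct adjacency matrix would look like and how it can be written to a matrix-style CSV of the kind Gephi can read. The variables and the two causal links are an illustrative subset chosen by us, not ChatGPT's actual (flawed) output.

```python
import csv

# Illustrative subset of the variables from the transcript.
variables = ["Military Strength", "Diplomatic Efforts", "International Cooperation"]

# Hypothetical directed causal links (cause -> effect), for illustration only.
links = {("International Cooperation", "Diplomatic Efforts"),
         ("Military Strength", "International Cooperation")}

# Build the square adjacency matrix: rows are causes, columns are effects,
# with 1 marking a causal link and 0 its absence.
matrix = [[1 if (row, col) in links else 0 for col in variables]
          for row in variables]

# Write a matrix-style CSV: an empty first cell, then the column labels,
# and each subsequent row prefixed with its own label.
with open("causal_matrix.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow([""] + variables)
    for label, row in zip(variables, matrix):
        writer.writerow([label] + row)
```

This is exactly the structure ChatGPT failed to produce: variable names alone as row and column labels, with the relationships expressed only in the cells.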
Conclusion of the test
As it stands, ChatGPT and the gpt-3.5-turbo model cannot directly output a good enough model for a strategic foresight and early warning question on an issue pertaining to national and international security.
For this purpose, there is nonetheless great potential in GPT and its applications, and thus probably in Generative AI more broadly.
As it stands, we estimate that ChatGPT can nonetheless be very helpful to beginners and non-specialists. Iterating and asking increasingly precise questions appears to be the best systematic way to use ChatGPT for variables and causal relationships. We may also imagine the AI helping specialists, provided the questions asked are precisely formulated – and thus well understood by ChatGPT.
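Programmatically, this iterative questioning maps onto the message-history format of the gpt-3.5-turbo chat API: each refinement is appended to the conversation and the whole history is resent. The sketch below shows the pattern; the prompts are illustrative, and the actual API call is left commented out since it requires a valid OpenAI API key.

```python
# Sketch of the iterative questioning pattern, using the OpenAI
# chat-message format for gpt-3.5-turbo. Prompts are illustrative.

def add_turn(messages, user_prompt, assistant_reply=None):
    """Append one question (and, if known, the model's answer) to the history."""
    messages.append({"role": "user", "content": user_prompt})
    if assistant_reply is not None:
        messages.append({"role": "assistant", "content": assistant_reply})
    return messages

history = [{"role": "system",
            "content": "You are an assistant for strategic foresight analysis."}]

# Start broad, then narrow down: each new question builds on prior answers.
add_turn(history,
         "List the main variables shaping the war in Ukraine over the "
         "next twelve months.",
         "1. Military Strength ...")  # abridged illustrative reply
add_turn(history,
         "For each variable, state its most plausible causal links.")

# response = openai.ChatCompletion.create(model="gpt-3.5-turbo",
#                                         messages=history)
print(len(history))  # the full history is resent on each call
```

The design point is that precision accumulates: because the model sees the whole history, each follow-up question can legitimately assume the vocabulary established by earlier answers.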
This also highlights the key importance of prompts – the questions asked to a Generative AI – when interacting with such AIs. It thus suggests a new type of activity that may emerge alongside traditional ones, for strategic foresight and warning in particular and, more generally, for the good usage of generative AI.
We also identified other challenges affecting the identification of variables and causal relationships, such as the lack of bibliographic references in ChatGPT's answers. Variables and causal relationships should be grounded in facts, logic, and scientific research; ideally, we would thus need a reference for each variable and each relationship. We shall deal with this key aspect in another article. Indeed, if erroneous or plainly fake variables and links were included in models, or if variables were omitted, whether willingly or because of biases, then the model would become useless or biased, with severe consequences in terms of prevention.
As a whole, we estimate that there is enough potential here to create a corresponding AI assistant, Calvin, which our members and readers will be able to use. We intend to continue improving this AI assistant. Notably, further training of the AI for our specific purpose – called fine-tuning – could yield very interesting results. However, it seems best to wait for the full release of GPT-4 before starting such a process, after carrying out a full test again with GPT-4.
Finally, assuming a Generative AI could indeed create the conceptual models we need, one potential danger would be that practitioners stop making the effort to think (see also ** below). We may wonder whether this would lead to a decrease in cognitive capacities. In that case, it would be important to make sure that thinking continues to be exercised, and that the capacities of Generative AI are properly embedded in a process that does not harm human beings.
On the positive side, the use of Generative AI could so greatly accelerate the creation of proper models that it would ease and spread the use of proper strategic foresight and warning and, as a result, enhance human capacities to prevent threats.
Notes
*… unless other, even more upsetting events take place. The specifics of the changes that will emerge from the use of AI in general, and ChatGPT-like AI in particular, may also differ from our current expectations. These two points will be addressed in future articles. Here we concentrate on the highly probable, and on the evolution generated by ChatGPT-like AI.
**… this task is also used in workshops to open up participants' minds and generate new ideas. For analysts, the very act of making the cognitive effort to identify variables and their causal relationships enhances understanding and comprehension of an issue. Thus, being able to carry out this task, alone or in a group, has great value in itself. Seeing it done entirely by an AI agent is thus likely to have serious negative consequences. It will thus be key to creatively integrate the AI tool, once operational, in a way that minimises drawbacks.