Generative AI chatbots like ChatGPT have created a huge amount of buzz since their emergence, raising questions about their staying power and integration into everyday life.
Adam Farquhar examines the AI boom in the context of peace and conflict studies, questioning its ethical implications and long-term sustainability.
Highlighting the range of useful tools available, Adam argues for their responsible use to address existing research needs.
Boom or Bust, Clear Heads are Needed for Peace Actors Navigating AI
Predicting whether a new technology will have staying power is hard. With so many unknowns, foreseeing what will become the next iPhone versus the next LaserDisc is anyone’s guess. To add to the challenge, some technologies don’t take off initially, but when used in a different context years later, they can have wide-reaching applications (see QR codes).
It is through this lens that many people have been trying to get to grips with how much Artificial Intelligence (AI) will be a part of their world in the coming years. The emergence of generative AI chatbots trained on enormous amounts of data, such as ChatGPT or Claude, created an initial frenzy of excitement and, most importantly, investment in AI applications in the private, academic, and policy sectors.
This initial wave of excitement, however, is starting to be tempered by concerns about whether the latest AI chatbots will turn out to be as profitable as once hoped, if at all. The massive computational, financial, and environmental resources these systems demand are causing worry that the investment boom may hit a brick wall. Predicting if this will be the case is a fool’s errand. Yes, there are real and legitimate concerns about whether the current model of chatbots trained on vast amounts of data is sustainable in both a business and a legal sense. There are also real ethical concerns around these models, such as automation bias in decision-making, unjustified actions, information privacy, and more. However, it’s possible that in the medium to long term, technological innovations may overcome these challenges and deliver on the promise many have envisioned for AI.
Given this uncertainty, how should people who study or practice in the fields of Peace and Conflict navigate, interact with, or invest in AI? Some will celebrate a slowing of development, or even a crash of the business around AI, as many rightly identify the potential this technology has to do harm. Others, however, are concerned this could have a negative impact, as generative AI and large language models are not the only technologies classified as ‘AI’. Many machine learning techniques, particularly those used in Natural Language Processing (NLP), including Named Entity Recognition (NER), have shown great potential when applied to specific tasks in ‘PeaceTech’ research. There is a danger that some of the traditional rule-based and statistical techniques could receive less funding if the generative AI boom comes to symbolise another tech bust, similar to the dot-com bubble of the late 1990s. This is a concern that should be actively addressed in the field of peacebuilding.
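To make concrete what a narrow, task-specific technique like NER looks like in practice, here is a minimal sketch using the open-source spaCy library. The sample sentence and the small English model name are illustrative assumptions for this post, not drawn from any actual PeaceTech pipeline.

```python
# Minimal NER sketch using spaCy, an open-source NLP library.
# Assumes the small English model has been installed first:
#   pip install spacy && python -m spacy download en_core_web_sm
import spacy

nlp = spacy.load("en_core_web_sm")

# Illustrative sentence; not taken from a real agreement text.
text = (
    "The ceasefire agreement was signed in Addis Ababa on 2 November 2022 "
    "by representatives of the Government of Ethiopia."
)

doc = nlp(text)
for ent in doc.ents:
    # Each detected entity carries its text span and a label such as
    # GPE (place), DATE, or ORG (organisation).
    print(ent.text, ent.label_)
```

Even this small example shows why such techniques appeal to peace researchers: places, dates, and organisations can be pulled out of large document collections automatically, without any generative model involved.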
While it remains to be seen how this field will evolve, there are ways for people involved in peace process research and practice to better navigate this period. First of all, it’s important not to shy away from building a basic understanding of AI, particularly its terminology. It is still tempting for many from a qualitative background to consider themselves ‘not a tech person’ and leave any investigation or insight to the experts. The field needs voices from diverse backgrounds, which are not always forthcoming. Engaging with the basics would also help people understand that the term ‘artificial intelligence’ was coined in the 1950s and is simply a catch-all for any process that seeks to mimic human decision-making.
People involved in peace studies should also remember to use AI to address real, existing problems, rather than searching for a problem to fit a new machine learning technique. There is often free, easy-to-use software, including tools for quickly extracting text from images, translation tools, and social media analysis tools, that can be more than sufficient to address existing research or practical needs.
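As one example of such a free tool, the sketch below extracts text from a scanned page using the open-source Tesseract OCR engine via its Python wrapper. The file name is a placeholder, and the sketch assumes Tesseract itself is installed on the machine.

```python
# Minimal text-extraction sketch using the open-source Tesseract OCR engine
# via its Python wrapper. Assumes Tesseract is installed locally and the
# Python packages are available: pip install pytesseract pillow
from PIL import Image
import pytesseract

# Placeholder file name; substitute a real scanned document image.
image = Image.open("scanned_agreement_page.png")

# Extract the page text as a plain string.
text = pytesseract.image_to_string(image)
print(text)
```

A few lines like these can turn a folder of scanned agreements into searchable text, which is often all a research task actually requires.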
Finally, it’s worth remembering that desk research can no longer be seen as a less ‘ethically fraught’ way of doing research than fieldwork. The latest AI boom has surfaced major ethical concerns around data colonialism, the propagation of bias, environmental harm, and more. People in this field must understand these concerns and account for them before they begin to use AI. The private sector mantra of ‘move fast and break things’ should give way to ‘think first, and try not to break things in the first place.’
Whether or not AI will revolutionize peace and peacebuilding remains to be seen. Looking forward, however, there are helpful AI techniques available today that everyone in this field should engage with and understand.
About the author
Adam Farquhar is a Research Associate and Data Officer at PeaceRep. He supports the management, development, and coding of the PA-X Peace Agreement Database and its sub-databases. His research interests include the application of geocoding and AI in peacebuilding.
Adam’s report on geocoding for the Peace Analytics Series is available here: A Primer on Geocoding for Peace and Conflict Studies