Adam Farquhar reviews “AI for Peace” by Branka Panic and Paige Arthur, exploring how the authors make complex AI concepts accessible to a non-technical audience and highlighting the book’s strengths in practical examples and ethical considerations.
The book offers a timely introduction to the intersection of AI and peacebuilding and serves as a primer for policymakers and scholars interested in using AI to promote global peace.
This blog was originally published by Global Policy Opinion on 09 October 2024.
Book Review - AI for Peace
The recent surge in interest and research around Artificial Intelligence (AI) has brought this collection of computing techniques to the forefront of scientific and public discourse. The term ‘artificial intelligence’ was coined in a research proposal for a 1956 summer workshop at Dartmouth College, which brought together researchers to study the ‘conjecture that every aspect of learning or any other feature of intelligence can in principle be so precisely described that a machine can be made to simulate it’ (McCarthy et al., 1955). The field of Peace and Conflict Studies has also begun to incorporate AI into its work, contributing to the emerging domain of PeaceTech. While considerable attention and research have been devoted to AI applications in defence, less focus has been given to AI’s role in peacebuilding efforts. “AI for Peace” by Branka Panic and Paige Arthur serves as an introductory guide for those interested in understanding the current state and potential of AI in promoting peace.
The book aims to provide a primer on AI and its applications in peacebuilding initiatives. It offers specific examples in key areas while addressing the associated ethical dilemmas. The book is divided into five concise chapters covering the application of AI to peace, including conflict prevention, hate speech, human rights, climate, and ethics. Chapter One explores the increasing use of conflict prediction models and how AI is applied to them, providing examples of programs that use predictive techniques and discussing issues with underlying data and tool reliability. Chapter Two examines how AI algorithms amplify hate speech on social media and how Natural Language Processing is employed to counter its proliferation. Chapter Three investigates the critical role of human rights in peace and how machine learning techniques applied to geospatial and visual data can help tackle human rights abuses and promote human rights more broadly. Chapter Four addresses the connection between climate and conflict, exploring how various machine learning and natural language processing techniques could help assess risks associated with climate change. The final chapter outlines key ethical concerns regarding the increasing use of AI in peacebuilding.
One of the book’s primary strengths lies in its clarity and accessibility. The authors have written a book that is easily understandable to readers outside the technology field who want to learn more about the subject. Experts in emerging and complex disciplines like AI often struggle to explain their subject matter to policymakers, and it takes real skill to put a non-specialist audience immediately at ease with such intricate topics. This achievement should not be underestimated, and it is something the authors do well.
One example is the multiple entry points into the content. Readers can approach the chapters in any order without having read the previous ones, allowing for a more flexible and personalized reading experience. Chapter Four, which focuses on climate change and AI, is self-contained and can be read independently of the earlier chapters. This will appeal to policymakers who need a well-organized, accessible guide to these tools for their work. The ethics chapter, for instance, provides a well-structured overview of the key ethical concerns with using AI in the areas described above, including dual-use applications, the disempowerment of local populations, and private-sector engagement.
The authors are clear that the book is an overview of some currently existing tools rather than a comprehensive survey, but this still represents a missed opportunity to discuss some of the more peace-specific AI applications presently being explored. The book places considerable emphasis on Conflict Early Warning Systems (CEWS), but an equally important question is what to actually do once a warning has been issued, and this may be a more pressing area for AI when it comes to peace itself. The authors note that one potential outcome of using AI to forecast conflict is movement into the emerging field of understanding why societies are peaceful, or how they become so after conflict. Arguably, this field deserves a much more prominent position in a book laying out how AI can be used for peace.
The gendered dimension of AI development and peacebuilding should also be more prominently featured. AI is developing rapidly in a male-dominated context, and this requires urgent redress and more attention in a book of this nature. While the introductory sections touch on the topic, there is room for a more thorough exploration of these issues, even in a high-level overview, as such books often reach a broader and more influential audience in policy circles.
Others in the field show a similar pattern of wanting AI to support human input rather than replace it. In a recent article, Andreas Hirblinger (2022) emphasizes ‘hybrid peacemaking intelligence’ that combines human and machine capabilities rather than viewing AI as a standalone solution. He acknowledges AI’s limitations in this area and concerns about its ability to predict conflict, and he proposes using AI to support a ‘hermeneutical approach’ that engages with multiple interpretations of conflict rather than producing singular, definitive analyses. If ‘AI for Peace’ stimulates the reader’s interest in how AI can be applied to peacemaking processes, Hirblinger is a logical next author to explore: he goes into more depth on the possibilities of AI in peacemaking while remaining accessible to non-experts.
In conclusion, “AI for Peace” does a commendable job of making two potentially broad and complicated disciplines accessible. Global Policy readers will find the book easily digestible, and it should inspire the confidence to engage further with the topic. It is this informed engagement by multiple stakeholders that could lead to the most successful and ethical application of AI to peacebuilding. As a primer, the book is a starting point for studying peacebuilding and AI; while it succeeds in many respects, it should be taken as encouragement for further exploration of this exciting emerging field.
About the author
Adam Farquhar is Research Associate and Data Officer with PeaceRep: The Peace and Conflict Resolution Evidence Platform at the University of Edinburgh. He supports the management, development, and coding of the PA-X Peace Agreement Database and its sub-databases. His research interests include the application of geocoding and AI in peacebuilding.
The Peace and Conflict Resolution Evidence Platform (PeaceRep) is funded by UK International Development from the UK government. However, the views expressed are those of the authors and do not necessarily reflect the UK government’s official policies.
Works Cited
Hirblinger, A. T. (2022). When Mediators Need Machines (and Vice Versa): Towards a Research Agenda on Hybrid Peacemaking Intelligence. International Negotiation, 28(1), 94–125. https://doi.org/10.1163/15718069-bja10050.
McCarthy, J., Minsky, M. L., Rochester, N., & Shannon, C. E. (2006). A proposal for the Dartmouth Summer Research Project on Artificial Intelligence, August 31, 1955. AI Magazine, 27(4), 12+.