The Tech of Disquiet: Use of Agentic AI in Peacebuilding

Fernando Pessoa was well ahead of his time, pioneering ideas that feel startlingly relevant to the technological debates we are having today. An early twentieth-century author from Lisbon, Portugal, Pessoa was known for a literary innovation he coined the ‘heteronym’: not quite an alter ego, yet not a fully separate character from himself either. To portray the complexity and multitude of ideas about existence that he was musing on, Pessoa created fully realised personas through which he could explore ideas from different, sometimes conflicting viewpoints. It was as if he contained an entire literary movement with which to tackle the concepts he wanted to explore in his work. A prime example is the posthumous collection of his thoughts and musings entitled The Book of Disquiet, in which, through the heteronym Bernardo Soares, Pessoa observes everyday life in Lisbon and uses it to meditate on profound questions of existence, dreams, and loneliness, among others.

A century later, practitioners across various fields are developing something remarkably similar: agentic AI systems that can autonomously coordinate multiple analytical perspectives on complex challenges. While these systems are being developed for many areas of practice, peacebuilding is one area with particularly interesting applications. Unlike traditional AI tools that respond to isolated queries, emerging agentic AI systems are designed to independently plan and execute multi-step processes toward complex goals, coordinating multiple tools and data sources in a single workflow. Recent work on AI agents in political deliberation and in enterprise “copilot-to-autopilot” frameworks explicitly describes agentic AI in these terms. The appeal mirrors Pessoa’s heteronyms. Just as his alternative selves could approach reality from humanitarian, philosophical, and aesthetic angles simultaneously, agentic AI systems promise to analyse peacebuilding challenges through multiple lenses at once: diplomatic, economic, cultural, and strategic. This differs from human analytical processes, which typically require sequential attention to different dimensions of a conflict — for example, examining diplomatic factors, then economic patterns, then cultural dynamics — rather than processing them in parallel.

Emerging projects show this potential. Researchers have built systems that can autonomously curate knowledge bases about refugee populations, engage in simulated negotiations with conflict parties, and model policy interventions across complex social systems. The technology promises to equip practitioners with sophisticated analytical capabilities that continuously learn and evolve.

But Pessoa’s work also contains a warning, one whose parallels we may observe in complex AI. Soares, despite his extraordinary perceptiveness, suffers from existential paralysis. His ability to see from multiple perspectives doesn’t lead to clarity—it leads to an overwhelming sense of complexity and detachment from authentic human experience.

While the broader questions raised here apply to AI across multiple domains, agentic AI carries particularly high stakes in peacebuilding. When systems can process infinite data streams, model countless scenarios, and simulate multiple stakeholder perspectives, the result might not be enhanced understanding but a technological version of Soares’s disquiet—endless analysis that distances us from the human realities we’re trying to address.

Consider social media. It hasn’t brought us closer together, even as it allows us to connect digitally in ways not thought possible 30 years ago. Instead, it seems to amplify feelings of hopelessness and disconnection, creating an environment where events and experiences feel increasingly ephemeral and inconsequential.

Another case is Netflix. The company developed 77,000 ‘altgenres’ to better understand and serve viewer preferences—a system far more sophisticated than anything currently deployed in peacebuilding. Yet as a recent Guardian investigation revealed, this analytical sophistication produced ‘algorithm movies’: generic, formulaic content designed for background viewing rather than genuine engagement. The quest to personalize entertainment made entertainment itself increasingly impersonal.

The parallel to peacebuilding is uncomfortable. If entertainment platforms, with massive resources and relatively simple goals, struggle with authenticity when they algorithmically multiply perspectives, what happens when we apply similar approaches to the infinitely more complex dynamics of conflict and peace? The risk isn’t just analytical paralysis, but the production of ‘algorithm peace’: interventions that are technically sophisticated but fundamentally detached from the human realities they aim to address. Recent work on AI semantics suggests even the most advanced language models may lack deep causal and embodied grounding of meaning, further reinforcing the idea that complex analytics without meaningful human connection might generate outcomes that are polished yet hollow.

This isn’t an argument against technological innovation in peacebuilding. The aim of many of the technologies being discussed is not to dehumanise people and their interactions, but to preserve human dignity in decision-making processes that are already automated to a degree and that lack the perspectives of the people they are supposed to serve.

Advocating for agentic AI in peacebuilding is different from advocating for automated decision-making. A recent paper suggests a framework that maintains clear boundaries around system autonomy—distinguishing between what these systems can do independently and what we allow them to do independently. This distinction is an important one, as it puts the responsibility back on us to decide how involved we remain in the decision-making process when using agentic AI.

Pessoa’s posthumous work serves not so much as a rejection, but as a warning to those of us in the field of PeaceTech and peacebuilding who try to understand our own motivations and complexities through constructions of ourselves. The risk is that this work could lead to more fragmentation and paralysis, without the human interaction we try to simulate through technology. As the heteronym Bernardo Soares observes: ‘From so much self-revising, I’ve destroyed myself. From so much self-thinking, I’m now my thoughts and not I. I plumbed myself and dropped the plumb; I spend my life wondering if I’m deep or not, with no remaining plumb except my gaze that shows me – blackly vivid in the mirror at the bottom of the well – my own face that observes me observing it.’ (Pessoa, 2017, Text 193).


Adam Farquhar is a Research Associate and Data Officer at PeaceRep. He supports the management, development, and coding of the PA-X Peace Agreements Database and its sub-databases. His research interests include the application of geocoding and AI in peacebuilding. Adam holds an MA in Comparative Ethnic Conflict from Queen’s University Belfast and a BA in International Studies from Huntingdon College in Montgomery, Alabama, USA.

Works Cited:

Pessoa, F. (2017). The Book of Disquiet (Bernardo Soares; R. Zenith, Trans.). New Directions.

