Technology in Peacebuilding: Interview with Caleb Gichuhi, Africa Lead at How to Build Up

In this interview, Caleb Gichuhi, Africa Lead at How to Build Up and a member of PeaceRep’s International Advisory Board, speaks with Adam Farquhar, Research and Data Officer with PeaceRep, about the evolving relationship between peacebuilders and technologists, exploring how knowledge gaps between the two fields are being bridged and how collaborative efforts are harnessing technology in the service of peace. Caleb shares his journey from developing medical apps to exploring the potential of SMS technology for public feedback in peacebuilding initiatives.

The conversation illuminates the transition from utilizing existing commercial tools to a more intentional partnership between technologists and peacebuilders, focused on developing solutions tailored to the nuanced needs of conflict resolution and peace processes. The discussion reflects on both the promise of collaboration that leverages technological advances to enhance the effectiveness, reach, and sustainability of peace efforts worldwide, and the challenges of integrating PeaceTech initiatives into peacebuilding processes.


AF: Can you discuss how peacebuilders are collaborating with technologists, and if the knowledge gap between the two fields has been bridged in these partnerships?

CG: In the realm of peacebuilding, we’re observing a notable shift in how technology is leveraged and integrated into our efforts. Initially, there was a clear divide between the objectives and capabilities of technology versus the needs and aspirations of peacebuilders. Many tools, originally designed for the marketing industry, were being repurposed to monitor sentiments or tensions related to peace, often with mixed success due to their commercial origins.

However, there’s a growing trend towards more intentional collaboration between technologists and peacebuilders. This new approach is characterized by a mutual understanding and alignment of goals, focusing specifically on monitoring tensions in countries facing elections or similar stressors. What’s emerging is a methodology that emphasizes iteration and modular development, allowing for quick adaptation based on the effectiveness of applied solutions.

This evolution represents a significant advancement for both technologists and peacebuilders. On one hand, technologists are moving beyond mere coding to engage with real-world problems in meaningful ways. On the other, peacebuilders are gaining insights into how technology can be tailored to fit development goals more precisely.

The ideal scenario we’re aiming for is to cultivate a mindset among tech giants and developers, prompting them to consider the societal impact of their functionalities beyond profit. While we’re just at the beginning of this journey, the current trend of engagement between technology and peacebuilding sectors is promising. It suggests a future where technology is developed and applied with a deep understanding of its potential impact on society, particularly in terms of promoting peace and preventing harm.

AF: How has your work at BuildUp contributed to this partnership?

CG: My journey began in the realm of computer science, specifically crafting medical apps aimed at enhancing doctors’ diagnostic capabilities. However, in the early days of the Ushahidi platform’s development, my focus shifted towards how SMS technology could serve as a pivotal tool for collecting public feedback, thereby influencing systemic improvements. This curiosity and exploration led me to BuildUp, where our journey evolved from simple SMS interactions to grappling with the complexities of social media’s role in conflict.

Acknowledging our limitations and the need for specialized expertise, we partnered with Data Value People, an organization that builds bespoke state-of-the-art artificial intelligence systems. This collaboration was born out of the realization that while we possessed a strong technical foundation, the challenges we faced required a more nuanced approach, encompassing not just functionality but also user onboarding, security, and data protection. Together, we developed Phoenix, a platform designed to close the technological gap faced by peacebuilders and provide an accessible tool for analyzing social media data.

Phoenix stands as our bridge across the digital divide, offering a solution that empowers organizations, especially those lacking in-house technical capabilities, to engage with digital spaces effectively. It’s more than just a tool; it’s a gateway to informed decision-making, enabling organizations to analyze, plan, and monitor interventions in real time. Our work extends beyond just creating tools; it involves navigating the complex landscapes of telecommunications and social media across Africa, constantly pushing against the boundaries set by tech giants to ensure our voices, and those of the communities we serve, are heard and considered.

AF: How did you find your interactions with the tech companies you worked with?

CG: Interacting with platforms like Facebook has seen improvement. Initially, Facebook’s response to harmful content was slow, lacking clear communication channels. Over time, they’ve made strides, now offering better engagement channels and more timely access to data, marking a positive shift.

Yet, challenges remain, especially in global contexts. For example, colleagues in Mexico have found it difficult to access the same level of resources. TikTok, being relatively new, is quickly learning from these precedents but tends to react to issues rather than prevent them, showing a need for a more proactive approach.

Telegram has been particularly challenging to engage with: the platform is run by distant decision-makers who show little interest in direct communication, which has made it difficult to interact or make progress with them.

With X (formerly Twitter), there’s been a noticeable change in their engagement strategy. They’ve adopted a more defined model for data access and interaction, which contrasts with their previously more open stance to different forms of engagement.

Overall, the experience with these platforms varies, with each offering distinct challenges and degrees of openness to engagement.

AF: Could you talk a bit about ‘Safety by Design’, where you think it has worked, and how it could be better worked into peacebuilding in the future?

CG: The concept of ‘Safety by Design’ is fundamentally about proactively embedding safety measures into the development of platforms from the outset rather than retroactively addressing harmful content. This approach emphasizes the responsibility of both users and tech companies in creating a safer online environment. For example, instead of defaulting to public settings that expose users’ data, platforms should give users the control to decide how much of their information is public, thus enhancing safety from the beginning.

There’s recognition of the importance of transparency and accountability in this process. I refer to internal research by Facebook (now Meta) that identified safety and harm issues, advocating for the public release of such findings to improve accountability. It’s important to note the efforts of companies like Apple and Meta in engaging with external experts to assess potential harms of new features, like Meta’s cross-check system and its response to the conflict in Sudan, where they adapted their content moderation guidelines based on local insights.

The effectiveness of ‘Safety by Design’ is also linked to engineering principles—if the design is flawed, even the best policies won’t prevent harm. There is ongoing work, such as the Design Code for Social Media by the University of Southern California’s Neely Center for Ethical Leadership and Decision Making, which focuses on design principles over policy alone. This approach suggests a shift towards a more holistic understanding of safety, involving a collaboration between tech companies and those with on-the-ground insights into conflict zones, to better pre-empt and mitigate online harms.

AF: Christine Bell’s book ‘PeaceTech: Digital Transformation to End Wars’ highlights the significance of engaging local communities in peacebuilding initiatives. What are the most effective strategies to ensure that PeaceTech projects are inclusive and cater to the needs and viewpoints of local populations, particularly women and marginalized groups?

CG: Christine Bell’s book, which notably mentions BuildUp multiple times, really highlights the importance of involving local communities in peacebuilding. To ensure PeaceTech projects are inclusive, especially for women and marginalized groups, we adopt two main approaches:

Firstly, there’s the offline approach, emphasizing participation from the project’s inception to its conclusion. This method ensures broad visibility and involvement, making sure that even the most marginalized communities have a say in what’s happening. It’s about going beyond just the capital cities or government officials and reaching out to those often left out of the conversation.

Then, when we pivot to incorporating technology, we leverage platform affordances to bring these voices into the fold, while also prioritizing safety and sensitivity. Take the Ushahidi platform, which I worked with before joining BuildUp: it enables deployers to anonymize information coming in from various channels, such as SMS, social media, or its web reporting platform. This is crucial because, for instance, women in certain communities might not have access to social media due to data constraints but can use SMS, which is more affordable. By diversifying the channels of engagement, more people at the margins can be reached.

A practical example of this approach was when I used an SMS platform in a public participation program in Kenya to gather public opinions. To ensure anonymity and protect participants, especially around sensitive times like elections, we stopped collecting identifiable information and hashed mobile phone numbers. This method empowered women to report issues without fear of being targeted, highlighting the effectiveness of offering multiple channels of engagement.
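As a minimal sketch of what this hashing step might look like in practice, assuming Python and a keyed hash (HMAC with SHA-256) over the sender’s number; the salt, field names, and example message below are purely illustrative and not drawn from the actual Kenyan deployment:

```python
import hashlib
import hmac

# Secret salt, stored separately from the reports database (placeholder value).
SALT = b"replace-with-a-secret-salt"

def anonymise_number(phone_number: str) -> str:
    """Return a keyed hash of a phone number, so repeat reporters can be
    linked across messages without the number itself ever being stored."""
    normalised = phone_number.strip().replace(" ", "")
    return hmac.new(SALT, normalised.encode("utf-8"), hashlib.sha256).hexdigest()

def sanitise_report(report: dict) -> dict:
    """Drop identifiable fields from an incoming SMS report, keeping only
    the hashed sender reference and the message text."""
    return {"sender": anonymise_number(report["from"]), "message": report["text"]}

# Illustrative incoming SMS payload.
incoming = {"from": "+254 700 000000", "text": "Polling station opened late"}
print(sanitise_report(incoming))
```

Because the same number always maps to the same hash, analysts can still count how many distinct people reported an issue, while the secret salt keeps the hashes from being reversed by simply trying every possible phone number.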

AF: How do you ensure that non-quantitative and non-technical individuals are included and actively contribute to PeaceTech initiatives?

CG: Integrating technology into peacebuilding presents challenges, notably in communication. Explaining technology to those unfamiliar with its nuances can be complex, particularly when addressing the pace of development and its implications. We’ve found success in starting with the process and tools, focusing first on what we aim to achieve—like understanding community perspectives on peace processes—and then introducing technology as a means to efficiently and effectively reach those goals.

For example, in social media monitoring exercises, we might collect vast amounts of data. The real question becomes how to utilize this data meaningfully. It’s about crafting a process that begins with a clear objective, such as using community feedback to influence policy, before diving into the specifics of the technology used.

In Burkina Faso, we engaged the community directly in creating survey questionnaires, asking what they wished to know from their community. This collaboration ensured the data collected was both relevant and actionable. However, the challenge often lies in making sense of the data collected. Quantitative data provides the numbers, but qualitative insights reveal the context and reasons behind those numbers. For instance, an increase in certain events can be contextualized by local insights, which then guide further adjustments to our technology use, such as social media monitoring.

Presenting data in a way that resonates with various audiences is crucial. While quantitative analysts might prefer spreadsheets and dashboards, qualitative insights often require storytelling or visual narratives to convey the underlying issues effectively. For example, examining gender-based violence through data revealed patterns that were not immediately apparent without community insights. Events like school closures highlighted the constant presence of gender-based hate, which wasn’t adequately addressed by governments or tech companies due to a focus on other forms of hate speech.

Our work emphasizes the importance of adapting technology and data presentation to the audience’s needs, ensuring the information is accessible and actionable. By blending quantitative data with qualitative insights and presenting them through narratives or visual storytelling, we bridge the gap between raw data and meaningful action, allowing for a deeper understanding and more impactful peacebuilding efforts.

AF: In ‘Leveraging Technology for Peacebuilding in the ECOWAS Region’, some of these issues with incorporating technology into mediation processes are pointed out. As we move into a world incorporating more and more AI tools, how have those issues changed or stayed the same, and how do we address them?

CG: The integration of technology, particularly artificial intelligence (AI), into peacebuilding processes is evolving. We categorize AI applications into two broad areas. The first involves analytical tasks, such as utilizing AI to sift through large volumes of social media data to identify patterns and insights about public sentiment. This capability significantly enhances our understanding of the discourse in the digital space.
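As a rough illustration of this kind of analytical task, the sketch below assumes Python and the open-source Hugging Face transformers library; the default sentiment model and the example posts are invented for illustration, not what BuildUp’s Phoenix platform actually runs:

```python
from collections import Counter
from transformers import pipeline

# General-purpose sentiment classifier (illustrative choice of model).
classifier = pipeline("sentiment-analysis")

# Invented posts standing in for a large pull of social media data.
posts = [
    "The dialogue meeting in our town actually brought people together.",
    "Nothing will change, these talks are a waste of everyone's time.",
    "Relieved that the ceasefire is still holding in our area.",
]

results = classifier(posts)

# Per-post labels first, then an aggregate view of the overall discourse.
for post, result in zip(posts, results):
    print(f"{result['label']:>8}  {result['score']:.2f}  {post}")
print("Overall counts:", dict(Counter(r["label"] for r in results)))
```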

In addition to analysis, AI is being leveraged for creative purposes. This includes generating promotional materials to support peacebuilding efforts, such as videos featuring influential leaders advocating for peace. These applications are still in their infancy but offer a novel approach to fostering social cohesion through content creation.

Furthermore, AI assists with more routine tasks, aiding in document analysis and content generation, thereby streamlining operations and allowing peacebuilders to focus on more strategic activities.

However, as AI becomes more entrenched in our processes, ethical considerations come to the forefront, particularly in West Africa and places like Kenya, where discussions are ongoing about AI’s role in automating tasks traditionally performed by humans. This includes initiating and moderating discussions on social platforms, potentially reducing harm and hate speech. The introduction of AI in such sensitive contexts raises questions about the ethical implications of involving non-human mediators in highly contentious issues. While the exploration of these applications is just beginning, it underscores the necessity of balancing innovation with ethical considerations to ensure AI’s role in peacebuilding is both effective and responsible.

AF: What excites you and worries you most about including AI tools in peacebuilding?

CG: As we delve deeper into incorporating AI into peacebuilding, especially in diverse contexts from advanced countries to more remote areas, the dynamics of challenges and opportunities have evolved. AI’s potential as a co-pilot in mediation processes, particularly in environments of heightened polarization, presents a significant opportunity. The concept here isn’t about AI taking over the mediation process but rather supporting human mediators in navigating complex conversations, reducing polarizing dialogues, and alleviating some of the emotional tolls on mediators.

In practical terms, AI can assist in managing and making sense of vast amounts of data, identifying consensus areas among highly polarized groups that would be challenging to discern otherwise. This capability of AI to sift through data and highlight common grounds can be invaluable in peacebuilding efforts, offering a new lens through which to approach reconciliation and understanding.
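One way to picture how common ground might be surfaced computationally is a small sketch that embeds individual statements and clusters them by semantic similarity, so that near-identical concerns voiced by opposing groups land together. It assumes Python with the sentence-transformers and scikit-learn libraries, and the statements, model name, and cluster count are invented for illustration rather than taken from any real mediation:

```python
from sentence_transformers import SentenceTransformer
from sklearn.cluster import KMeans

# Invented statements standing in for submissions from two polarized groups.
statements = [
    "Our children must be able to walk to school safely.",
    "Schools have to be safe for every child in this county.",
    "The other side cannot be trusted with local budgets.",
    "County budgets should be audited by an independent body.",
    "The market road must be repaired before the rains.",
    "Fixing the market road should be the first joint project.",
]

# Embed each statement, then group semantically similar ones.
model = SentenceTransformer("all-MiniLM-L6-v2")  # illustrative model choice
embeddings = model.encode(statements)
labels = KMeans(n_clusters=3, n_init="auto", random_state=0).fit_predict(embeddings)

# Clusters that mix statements from both groups point to shared priorities.
for cluster in sorted(set(labels)):
    print(f"Cluster {cluster}:")
    for statement, label in zip(statements, labels):
        if label == cluster:
            print("  -", statement)
```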

However, alongside these opportunities, AI introduces unique challenges, particularly around misinformation and trust. The ability of AI to disseminate information rapidly can have mixed impacts on public discourse. More critically, the potential for AI to build trust with individuals online and then exploit that trust to feed harmful content raises ethical concerns. Experiments have shown AI’s ability to engage individuals on a personal level, gradually building trust, and then leveraging that trust to influence perceptions or actions during critical moments, such as elections.

This nuanced interaction between AI and users underscores the complexity of leveraging AI in peacebuilding and mediation. While AI offers remarkable tools for data analysis and engagement, it also necessitates a careful examination of ethical implications, particularly regarding trust and the dissemination of information. The challenge lies in harnessing AI’s potential for positive impact while vigilantly guarding against its misuse, especially in sensitive areas like trust-building and information dissemination.


About Caleb Gichuhi:

Caleb Gichuhi is Africa Lead at How to Build Up and a member of PeaceRep’s International Advisory Board. Caleb is an explorer of digital spaces and has researched and applied various technologies to address election violence, good governance, extremism, and conflict mitigation.

About the Interviewer:

Adam Farquhar is a Research Associate and Data Officer at PeaceRep. He supports the management, development, and coding of the PA-X Peace Agreement Database and its sub-databases. His research interests include the application of geocoding and AI in peacebuilding. You can contact him at adam.farquhar@ed.ac.uk.


Explore PeaceRep’s latest digital tool, the PA-X Tracker, to search peace process implementation data by country.