By Lisa Schirch
Artificial intelligence (AI) represents one of the most influential and transformative technologies humanity has ever developed. Like previous technological advancements, AI offers blessings and burdens that can foster peace and democracy or fuel violence, inequality, polarization, and authoritarianism. Religious ethics have something to offer.
Over a hundred religious actors met at the Peace Park in Hiroshima, Japan, in July 2024 to discuss “AI Ethics for Peace.” Representing the University of Notre Dame and the Toda Peace Institute, I presented how my Anabaptist ethics shape my use of AI to support democracy and peacebuilding.
The Vatican’s Pontifical Academy of Life organized the conference with Religions for Peace Japan, the United Arab Emirates’ Abu Dhabi Forum for Peace, and the Chief Rabbinate of Israel’s Commission for Interfaith Relations. Religious leaders from Judaism, Christianity, and Islam joined leaders from Buddhism, Hinduism, Zoroastrianism, and the Bahá’í Faith, along with representatives of the Japanese government and big tech companies such as Microsoft, IBM, and Cisco.
The two-day workshop ended with a poignant signing ceremony at the Hiroshima Peace Park, located at ground zero of the 1945 atomic bombing, in a city synonymous with the devastating effects of unrestrained technological power.
AI poses several dangers, including the potential to amplify disinformation, exacerbate societal polarization, infringe on privacy, enable mass surveillance, and power autonomous weapons.
Participants signed the Rome Call for AI Ethics, a collaborative initiative emphasizing the ethical development and use of AI. The Rome Call advocates for AI systems that are transparent and inclusive and that respect human rights, ensuring that technological advancements benefit humanity as a whole. Pope Francis has called for broad-based ethical reflection on how AI can respect human dignity, urging that ethical commitments be proportionate to the scope of the threats these technologies pose. Father Paolo Benanti advises the Vatican, the UN, the Italian government, and technology companies working in the field of AI on a concept he calls “algorethics,” meaning the design of AI to support human dignity.
My own religious tradition of Mennonite Anabaptism has something to offer these conversations. Some Anabaptist communities have a long-standing practice of careful deliberation to evaluate the potential positive and negative impacts of new technologies. A community might decide, for example, that using cars or phones is acceptable for some purposes but not others. Today, several Anabaptists are involved in discussions on the ethics of AI and how it might support our peace commitments. Paul Heidebrecht of Conrad Grebel University College at the University of Waterloo, Nathan Fast of the University of Southern California, and I are working on how AI can support democracy and peacebuilding.
At the workshop, I presented how Anabaptist theology has shaped my commitment to peace and efforts to regulate digital technologies. My research emphasizes the ways in which social media and AI are causing a “tectonic shift” in societies around the world, driving conflict and polarization and undermining democracy and human dignity.
While AI has the potential for significant harm, it also offers opportunities to enhance creativity, solve global challenges, and strengthen democratic engagement. Unlike nuclear technology, AI can act as a “bicycle for the mind,” allowing us to address issues like climate change and inequality more creatively and efficiently.
The story of the Tower of Babel in Genesis 11 serves as a metaphor for AI. The story begins when humanity speaks a single language and lives together. United in ambition, people build a tower to reach the heavens. God intervenes to prevent them from becoming too powerful by confusing their language; unable to work together, the people scatter across the earth. Like the Tower of Babel, AI offers immense new powers but also distorts information and fosters confusion and polarization.
Social media platforms, powered by first-generation AI algorithms, determine what content each individual sees in their newsfeed. These platforms maximize user engagement by prioritizing attention-grabbing content, generating both profit and polarization. Technology must instead be designed to support social cohesion, the glue that holds society together.
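To make that mechanism concrete, here is a minimal Python sketch of the difference between engagement-first and cohesion-aware ranking. The field names and weights are illustrative assumptions, not any platform’s actual code.

```python
from dataclasses import dataclass

@dataclass
class Post:
    text: str
    predicted_clicks: float   # model's estimated click-through rate (hypothetical)
    predicted_outrage: float  # estimated anger/outrage reactions (hypothetical)
    bridging_appeal: float    # estimated approval across opposing groups (hypothetical)

def engagement_score(post: Post) -> float:
    # Engagement-first ranking: outrage reliably predicts interaction,
    # so divisive posts float to the top of the feed.
    return post.predicted_clicks + 2.0 * post.predicted_outrage

def cohesion_score(post: Post) -> float:
    # A "bridging" alternative: reward content that opposing groups
    # both rate positively, and discount content that mainly provokes.
    return post.predicted_clicks + 2.0 * post.bridging_appeal - post.predicted_outrage

def rank_feed(posts: list[Post], scorer) -> list[Post]:
    # The same feed, ordered by whichever objective the designer picks.
    return sorted(posts, key=scorer, reverse=True)
```

The contrast is the point: ranking itself is unavoidable, but the objective it optimizes is a design choice, and that is where commitments to social cohesion can enter.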
Building relationships and fostering understanding is a deeply religious task. The Latin word “ligare,” meaning “to bind” or “to connect,” is the root of “religio,” highlighting religion’s role in connecting individuals to a higher power and to each other. AI can aid in this effort by acting as a bridge to better understanding, but it must be guided by humans to contribute positively to social cohesion.
At the University of Notre Dame’s Kroc Institute for International Peace Studies, I teach “peacetech” courses where students train AI to combat hate speech and improve digital conversations. We also employ AI to analyze discussions on deliberative platforms, highlighting shared values and solutions that reflect diverse perspectives. These technologies help map different viewpoints, enabling us to “listen at scale.”
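To give a flavor of what that training can involve, here is a toy Python sketch, assuming scikit-learn and invented example data rather than the actual course materials: it fits a small classifier that scores messages for hatefulness. Real projects rely on far larger annotated datasets and modern language models.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hand-labeled toy data: 1 = hateful/dehumanizing, 0 = acceptable disagreement.
# Real projects need thousands of carefully annotated, context-aware examples.
texts = [
    "Those people are vermin and should disappear",
    "I strongly disagree with this policy proposal",
    "Members of that group do not deserve rights",
    "Let us try to find common ground on this issue",
]
labels = [1, 0, 1, 0]

# TF-IDF features plus logistic regression: a simple, inspectable baseline.
model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(texts, labels)

# Probability that a new message is hateful.
print(model.predict_proba(["those people do not deserve rights"])[:, 1])
```

Even this baseline makes the central pedagogical point visible: the model only knows what its labels teach it, so the ethics live in the annotation choices as much as in the code.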
AI-powered deliberative technologies like Pol.is and Remesh, used in countries such as Taiwan and Finland, can strengthen democracy and foster social cohesion. In June, the Toda Peace Institute brought together 45 peacebuilders from around the world at the University of Notre Dame’s Kroc Institute for International Peace Studies to learn how to use AI-powered technologies to support public deliberation. Over the coming year, these peacebuilders will pilot the technologies in diverse and polarized contexts. For example, they will explore whether technology can help Afghans living around the world communicate and set priorities for their future, assist Palestinians and Israelis in deliberating about coexistence, support Colombians in discussing the full implementation of their peace agreement, and enable Nigerians to weigh the trade-offs of oil and environmental damage.
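Tools in this family typically collect agree/disagree votes on short statements, cluster participants into opinion groups, and surface statements that all groups support. The sketch below shows that core idea in Python with scikit-learn; the votes, group count, and threshold are invented for illustration, not taken from Pol.is or Remesh.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.decomposition import PCA

# Rows = participants, columns = statements; 1 = agree, -1 = disagree, 0 = pass.
votes = np.array([
    [ 1,  1, -1,  1],
    [ 1,  1, -1,  1],
    [-1, -1,  1,  1],
    [-1,  0,  1,  1],
    [ 1, -1,  0,  1],
])

# Project participants into a low-dimensional "opinion space", then cluster
# them into opinion groups (two groups assumed here for illustration).
coords = PCA(n_components=2).fit_transform(votes)
groups = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(coords)

# A statement with positive average support in every group is a
# candidate point of cross-group consensus worth surfacing.
for s in range(votes.shape[1]):
    if all(votes[groups == g, s].mean() > 0 for g in np.unique(groups)):
        print(f"Statement {s} draws support across opinion groups")
```

Real deployments work with much larger, sparser vote matrices and more careful statistics, but surfacing what rival groups jointly support is the step that makes these tools cohesion-oriented rather than engagement-oriented.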
As we grapple with the challenges and opportunities of AI, some of us are asking whether AI and democracy can fix each other. Last year I was part of a team working with OpenAI’s “Democratic Inputs to AI” project to test whether deliberative technologies can align AI with the will of humanity. We tested a methodology using the Remesh platform, asking demographically diverse Americans to develop guidelines for how ChatGPT should answer sensitive queries. Despite initial polarization, the platform helped people with diverse views reach strong consensus on how AI tools should respond to questions about international conflicts, vaccines, and medical advice.
Religious ethics are relevant to AI development: they help us see how AI can support human dignity and social cohesion, and they give voice to grave concerns about the potential for these new technologies to cause economic, political, and social harms.
Dr. Lisa Schirch is a Research Fellow with the Toda Peace Institute and is on the faculty of the University of Notre Dame’s Keough School of Global Affairs and Kroc Institute for International Peace Studies. She holds the Richard G. Starmann Sr. Endowed Chair and directs the Peacetech and Polarization Lab. A former Fulbright Fellow in East and West Africa, Schirch is the author of eleven books, including The Ecology of Violent Extremism: Perspectives on Peacebuilding and Human Security and Social Media Impacts on Conflict and Democracy: The Tech-tonic Shift. Her work focuses on tech-assisted dialogue and decision-making to improve state-society relationships and social cohesion.
Source: https://toda.org/global-outlook/2024/religion-and-ai-ethics-for-peace.html