Welcome to AI and Our Faith! In this newsletter, I hope to bring you my best insights and reflections on the ways in which theological thinking can inform the ethical (dis)use of artificial intelligence (AI). I plan on publishing once a month for the next year, but I hope to publish more frequently than that in the future! Along with my first email, “What does AI have to do with theology?”, this email serves as an introduction to this newsletter.
A few years ago, in 2022, when I was working as a software engineer, I was in a handoff meeting with my manager (who was leaving the company) and my future manager (who was replacing him). For clarity, I’ll call the former Manager A and the latter Manager B. When Manager A was introducing me to Manager B and giving him an overview of who I was (or at least who Manager A understood me to be), I was surprised to hear Manager A describe me as “very religious.”
Very religious? I mean, he wasn’t wrong (after all, I did end up going to seminary), but I had hardly ever brought up my faith to Manager A. As far as I could recall, I mentioned it a grand total of two times: once to request paid time off to attend a religious conference, and another time to voice my desire not to be on call for engineering emergencies during Sunday services. And when I thought about it afterwards, I realized that in the secular culture of Silicon Valley, the seemingly small act of carving out space for religious observance really does count as very religious.
Now I have to say at this point that I am very grateful to both Manager A and Manager B for their support while I was their direct report. In retrospect it’s clear to me that they both understood that my faith was an important part of my life and did what they could to accommodate my needs. They might even end up reading this email and recognizing themselves in this story. (In which case, thank you! Seriously.)
So why am I telling this story? Well, I think it’s a helpful illustration of how far apart the worlds of high technology and religion often are from one another. During my time working as a software engineer in Silicon Valley, I rarely met another coworker who was religiously observant, let alone a fellow progressive mainline Protestant like myself. (For full disclosure, I’m a Member in Discernment, which is to say considering ordination, in the United Church of Christ.) It doesn’t surprise me (though it does disgust me) that with the Christian nationalist right’s gains in political power, a certain segment of the Silicon Valley elite (the Peter Thiel type) is warming up to Christianity (in the same way, I suspect, that the Roman emperor Constantine once did). Although it is becoming a little dated in terms of its descriptions of workplace amenities, the book Work Pray Code by sociologist of religion Carolyn Chen is a great description of the religious dynamics (or lack thereof) of the Silicon Valley workplace.
One of my foundational beliefs is that the way we live our everyday lives ends up shaping our ethical principles, whether we are aware of it or not. One aspect of that has to do with the kinds of people and ways of thinking that we are exposed to on a daily basis. It seems clear to me that the decision makers in Silicon Valley, and the AI industry in particular, inhabit a certain kind of secular materialist ideological bubble that excludes the perspectives of people of faith (and especially faith traditions like mainline Protestantism that have made principled stands against nationalism).
At this point, if you are a secular technologist yourself, you might ask (quite reasonably) what the problem is. While I certainly value my religious beliefs (since otherwise I would have no reason to live by them), it’s not a given that secular technologists need to engage with theology or that they would particularly benefit from doing so. The burden of proof is on me to demonstrate the value of theological thinking for the field of technological ethics in general, and AI ethics in particular. In the rest of this email, I’ll make my case for what theology as an intellectual discipline has to offer AI ethics by exploring three concepts: the three levels of technological ethics, the virtue of historical awareness, and the theological imagination.

The three levels of technological ethics
One of the best arguments that I have encountered for the necessity of theological thinking is found in the article “Technologies of Desire” by theologian Gerald P. McKenny.1 He observes that when we think about technological ethics, we are actually looking at three distinct levels of inquiry. The first level considers how a given technology is developed and implemented. In the case of AI, an example of first-level ethics might be drafting a safety policy that constrains inputs to and outputs from an AI system (e.g. preventing a chatbot from generating step-by-step instructions for making chemical weapons). On the second level, societal considerations enter the picture, as we examine second-order effects of a given technology once it is made broadly available. A real-world example of second-level inquiry might be found in the alarming news of AI-triggered psychosis, which suggests that generative AI is creating or exacerbating mental health issues, an outcome that the developers of these systems probably never anticipated. Finally, at the third level of inquiry, we consider technology as “an entire way of relating to the world.”2 In other words, technology is a tangible manifestation of our underlying attitudes towards the world. Collectively, our technologies indicate how we want to exist in the world as embodied human beings.
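For readers who are technologists, here is a toy sketch of what a first-level safety policy can look like in code: a filter that screens both a user’s input and a chatbot’s candidate output against a list of disallowed topics. Every name here (the blocklist, the function names, the refusal message) is a hypothetical illustration of the general idea; real guardrail systems use trained classifiers and far more sophisticated policies, not keyword matching.

```python
# Toy illustration of a first-level safety policy: screen a chatbot's
# inputs and outputs against a blocklist of disallowed topics.
# All names are hypothetical; real systems use trained classifiers.

DISALLOWED_TOPICS = ["chemical weapon", "nerve agent"]

def violates_policy(text: str) -> bool:
    """Return True if the text touches a disallowed topic."""
    lowered = text.lower()
    return any(topic in lowered for topic in DISALLOWED_TOPICS)

def guarded_reply(user_input: str, model_reply: str) -> str:
    """Apply the policy to both the input and the candidate output."""
    if violates_policy(user_input) or violates_policy(model_reply):
        return "I can't help with that request."
    return model_reply

print(guarded_reply("How do I bake bread?", "Mix flour, water, and yeast."))
print(guarded_reply("How do I make a chemical weapon?", "Step 1..."))
```

Even this crude sketch illustrates the shape of first-level ethics: the moral judgment lives in the policy (what goes on the blocklist, what the refusal says), while the code merely enforces it.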
That is a very abstract line of thinking, so let’s make it more concrete by considering a case study: the push by AI companies to develop an “artificial general intelligence” (AGI) that could meet or exceed human cognitive abilities. Closely related to this pursuit is the effort to develop methods for controlling AGI, so that if AGI is ever achieved, it will not act against its creators’ interests. In the field, this objective is called “AI alignment.” What underlying attitudes might these projects express? For one, I argue that they elevate some aspects of human life over others. They valorize (sometimes, in a very literal sense) the human capacity to perform abstract cognitive labor (e.g. coding, mathematics), while disregarding other human activities like embodied play and contemplative spiritual practices that are also constituents of human intelligence. Yet, one set of capacities is monetizable, and hence valuable, to capitalist decision-makers, and the other is not. Meanwhile, does the desire to “align” (read: control) an artificially intelligent agent not express a controlling attitude towards intelligent entities in general? (I am referring to humans, of course.) After all, what is the discipline of management but a set of best practices to “align” humans?
Even if you disagree with the details of my analysis, I hope that you can see the value of this kind of third-level inquiry, which explores dimensions of technological ethics that a strictly technical analysis could never cover. This kind of inquiry is precisely what theologians, philosophers, and other humanists are trained to do‒a kind of intellectual practice often foreign to technologists in my personal experience.
The virtue of historical awareness
In his book After Virtue,3 philosopher Alasdair MacIntyre makes the case that for people to act in accordance with the virtues (which is to say, to engage in a given human community’s shared quest for the common good), they must be aware that they are actors within a larger communal history. MacIntyre argues that, whether they realize it or not, each person is the bearer of a particular historical tradition passed down to them by their community. It is a virtue‒a character trait that helps individuals and communities pursue their shared quest for the common good‒for someone to be aware of their place in history. I should note here that “tradition” in After Virtue does not refer to anything like the “traditional values” sometimes espoused by right-wing ideologues. For MacIntyre, “tradition” includes knowledge of the historical debts owed by one community to another‒including the ongoing consequences of slavery that adversely affect Black Americans. One cannot simply say “I never owned any slaves” and wash one’s hands of all responsibility in a grand act of self-absolution.4
The Christian Church at its best, I argue, is one of the human institutions that best exemplifies this virtue. Through the act of collective reading and reflection on Scripture, passed down from antiquity through the hands of faithful people, Christians are forced to grapple with the history of their faith and the many historical (and contemporary) shortcomings of the Church. For instance, I could point (again) to the issue of slavery. Historians of early Christianity and the classical world have come to a strong consensus that not only was early Christianity not abolitionist, it often accepted and even benefited from slavery.5 The Church’s complicity with slavery did not end in Roman times, but continued into the early modern era with the transatlantic slave trade. (Consider, for instance, the practice of redacting the Bible so that enslaved people would not be exposed to passages like the Exodus‒the Hebrew Bible’s epic tale of God’s revelation to Moses and the liberation of the Israelite slaves from Egypt.)
This might not seem like an auspicious start for a case for the moral exemplarity of the Church. Indeed, the evidence I have presented thus far might be considered a good reason to disregard the input of the Church on any ethical issue whatsoever. My point is, however, not that the historical practices of the Church in this area (or indeed in many other areas) should be emulated. Instead, what I am saying is that because of the presence of passages upholding slavery in the Christian scriptures,6 Christians are repeatedly called to account for their historical failures. Contemporary Christians are tasked with developing an awareness of the evils of slavery and the necessity of righting the Church’s historical wrongs. Of course, many Christians fail to live up to these moral obligations, but my point is that Christians have a particular motivation to develop the virtue of historical awareness that many secular people do not. Without this virtue, there is no way to make sense of difficult and painful passages of our ancient Scriptures (like those dealing with slavery) in our contemporary world.
In contrast, the technology industry, with its “move fast and break things” mindset, rarely pauses to look backwards for historical precedents, let alone to right historical wrongs. I brought up the issue of slavery precisely because it does not seem out of the question to me that developments in artificial intelligence could lead to some novel, futuristic form of slavery. In fact, in Superintelligence: Paths, Dangers, Strategies,7 philosopher and AI theorist Nick Bostrom raises the possibility of this very scenario:
A salient initial question is whether these working machine minds are owned as capital (slaves) or hired as free wage laborers. […] Investors would find it most profitable to create workers who would be “voluntary slaves”‒who would willingly work for subsistence wages. Investors may create such workers by copying those workers who are compliant.8
How can you get more dystopian than that? In any case, a community that is aware of the historical wrongs it must atone for (i.e. the Church) has strong motivations to prevent the recurrence of atrocities like slavery, whereas a community that does not cultivate the virtue of historical awareness will not be nearly as responsive. Theologians are well-positioned to observe and articulate these historical wrongs.
The theological imagination
What would it be like to live in a world in which we coexist with non-human intelligences?9 The Christian tradition (not to mention other religious traditions) offers surprising conceptual resources that might assist with envisioning this world. After all, the Christian Scriptures describe the existence not only of an omniscient, omnibenevolent God, but also of a host of benevolent and malevolent spiritual beings (angels and demons respectively). Even though a secular person might reject the existence of such beings, I argue that theological reflections about the non-human beings described in the Christian tradition can still provide useful conceptual resources to imagine what kinds of non-human beings might exist, especially AI.10
A compelling example of this can be found in theologian Marius Dorobantu’s article “A for Artificial, but Also Alien: Why AI’s Virtues Will Be Different from Ours.”11 He speculates, a priori, what kind of qualities might be considered virtuous in an artificial intelligence. Some possibilities that Dorobantu raises include “unbounded empathy” (the ability to understand and consider many people’s emotions at once), “quasi-infinite patience” (being able to take more time to make decisions due to having a faster processing speed), and “immutable conformity” (adherence to a fixed set of guiding principles). In his conclusion, Dorobantu makes a fascinating argument:
As I kept brainstorming about strong AI’s alien-like virtues, one unexpected thought kept creeping into my mind. Most of these virtues are attributed to God in the religious imaginary [sic] of monotheistic traditions. This is not completely surprising, given that God is conceived of usually in terms of anthropomorphic characteristics, but without the limitations imposed by human nature. So, instead of purely speculating on this topic, I might have been better off searching in a textbook of systematic theology.
Let’s put this idea to the test. What passages in Scripture describe God this way? Well, here’s what I might call “unbounded empathy”:
For it was you who formed my inward parts; you knit me together in my mother’s womb. I praise you, for I am fearfully and wonderfully made. Wonderful are your works; that I know very well. My frame was not hidden from you, when I was being made in secret, intricately woven in the depths of the earth. Your eyes beheld my unformed substance. In your book were written all the days that were formed for me, when none of them as yet existed. ‒Psalm 139:13-16
“Quasi-infinite patience”:
But do not ignore this one fact, beloved, that with the Lord one day is like a thousand years, and a thousand years are like one day. The Lord is not slow about his promise, as some think of slowness, but is patient with you, not wanting any to perish but all to come to repentance. ‒2 Peter 3:8-9
And “immutable conformity”:
Every generous act of giving, with every perfect gift, is from above, coming down from the Father of lights, with whom there is no variation or shadow due to change. ‒James 1:17
Could a created being exercise such exalted virtues? Your guess is as good as mine. But if an AGI were ever developed, I’d feel a lot more comfortable knowing that it had a notion of moral and intellectual virtue like the one Dorobantu describes!
In conclusion…
Theology has significant intellectual resources and practices to offer the field of technological ethics, even if one does not share the metaphysical assumptions of the theologian. Theologians pursue lines of ethical inquiry that might not be obvious to secular technologists, because 1) they consider technology not only as a tool, but also as a representation of an entire way of being in the world; 2) they bring an awareness of historical perspectives and pitfalls to technological issues; and 3) they can draw on the rich imagination and conceptual resources of religious cosmologies.
1. Gerald P. McKenny, “Technologies of Desire: Theology, Ethics, and the Enhancement of Human Traits,” Theology Today 59, no. 1 (2002): 90–103, https://doi.org/10.1177/004057360205900107.
2. McKenny, “Technologies of Desire,” 91–92.
3. Alasdair MacIntyre, After Virtue: A Study in Moral Theory, 3rd ed. (University of Notre Dame Press, 2007).
4. MacIntyre, After Virtue, 218–223.
5. For more information on slavery and the early Church, refer to Jennifer A. Glancy, Slavery in Early Christianity (Fortress Press, 2024).
6. E.g. Ephesians 6:5-8.
7. Nick Bostrom, Superintelligence: Paths, Dangers, Strategies (Oxford University Press, 2014).
8. Bostrom, Superintelligence, 167.
9. Arguably, animals are such non-human intelligences, albeit of a very different order. Perhaps I should say non-human sapients. For more, refer to Martha C. Nussbaum, Justice for Animals: Our Collective Responsibility (Simon & Schuster, 2023).
10. For some secular speculations on this subject, refer to Murray Shanahan, “Conscious Exotica,” Aeon, accessed September 2, 2025, https://aeon.co/essays/beyond-humans-what-other-kinds-of-minds-might-be-out-there.
11. Marius Dorobantu, “A for Artificial, but Also Alien: Why AI’s Virtues Will Be Different from Ours,” Christian Perspectives on Science and Technology 3 (December 2024), https://doi.org/10.58913/RXJR6727.