A house of merchandise
Thoughts on the ethics of advertising in AI platforms
Welcome to AI and Our Faith! This is a monthly newsletter in which I offer my best insights and reflections on the ways in which theological thinking can inform the ethical (dis)use of artificial intelligence (AI). Look out for new releases on the 15th of each month!
I had some second thoughts about giving this essay on advertising in AI platforms the title that I did. AI platforms are many things, but I would not include “My Father’s house”1 among them. Still, the reference captures an important aspect of the Gospel text: the moral indignation I feel when I think about ads in AI platforms!

In this month’s essay, I will lay out, within a framework of theological virtue ethics, why I find ads in AI platforms so ethically disturbing. (You will quickly see that many of the arguments I make here apply to many other forms of apps and websites.)
Virtue ethics is a philosophical approach within ethics that evaluates the merits and demerits of various actions in terms of how they form a person’s character. A virtue is an excellent character trait: “a disposition well entrenched in its possessor—something that, as we say, goes all the way down, unlike a habit such as being a tea-drinker—to notice, expect, value, feel, desire, choose, act, and react in certain ways.”2 Other major ethical approaches include consequentialism,3 which emphasizes the consequences of one’s actions, and deontology, which emphasizes moral duties.
Virtue ethics has been articulated in various forms by the world’s major wisdom traditions. To offer just a few examples, we might think about Plato and Aristotle in the West, or Confucius in China. In the Christian theological tradition, St. Thomas Aquinas elaborated on Aristotle’s virtue ethics in the Summa Theologica. Following earlier philosophical traditions, he enumerated prudence, justice, temperance, and fortitude as the cardinal virtues, which encompass all other human virtues.4
We might ask, at this point, why we consider these character traits to be so excellent. From a Christian theological standpoint, I suggest looking towards Genesis 1:26–27, which is the Scriptural basis of the imago Dei—the idea that humans are made in God’s image. As I explained in my essay “AI, humanity, and the image of God,” there are three major approaches to understanding the imago Dei, which each describe different kinds of excellent activities by which humans express their resemblance to God.
Humans don’t become faithful images of God through sheer happenstance. Rather, there are certain character traits that support the activities God calls us to do. In this essay, I am most interested in analyzing virtue in terms of the functional model of the imago Dei. This model builds on the Biblical language of humans having “dominion” over the animals, suggesting that humans are in God’s image because they serve as representatives of God’s reign over creation. In other words, humans serve as stewards over creation, maintaining the world in good order on God’s behalf. We might think about this in environmental terms: have we humans taken good care of nature?
Therefore, when I analyze AI from a theological ethics perspective, my first question is, “Will this development reinforce, or undermine, the virtues by which people become faithful images of God?” My question builds on the observation by the philosopher of technology Shannon Vallor (written in the context of discussing carebot ethics) that analyses of human-machine interaction are “incomplete until we have also considered the possible impact … on human habits, virtues, and vices.”5 I am especially concerned about the impacts of large language models (LLMs) on human character formation, because LLMs provide a simulacrum of social interaction, and we have known (since the time of Aristotle!) that repeated social interactions form human virtues and vices.
So what virtues might undergird the functional model of the imago Dei—that is to say, make us more faithful stewards of God’s creation? I think the most relevant cardinal virtues are prudence and temperance. Prudence (in Greek, phronēsis) is the virtue of “right reason applied to action,” which enables humans (as finite beings) to make good judgments about the particular situations they encounter.6 Temperance (sōphrosunē) is the virtue that governs our relation to the sensual pleasures arising from life-sustaining human activities (e.g., eating, drinking, and sex).7 Our ability to function well as stewards of creation depends on both virtues: we must be able to judge how our actions will affect the environment, and we must govern our own appetites to avoid wasteful consumption.
So how does advertising come into this? Before I get into the ethics of advertising on AI platforms, I want to explore the ethics of advertising in general. Roger Crisp’s article “Persuasive Advertising, Autonomy, and the Creation of Desire”8 lays out a systematic critique of persuasive advertising, and is well worth reading. Persuasive advertising seeks to create desires on a subconscious level, using techniques like subliminal messaging, sex appeal, and repetition. This is in contrast to informative advertising, which (in principle) seeks to provide potential customers with relevant facts about the product to inform their decision-making. As an example of persuasive advertising, we might consider this infamous ad for HeadOn from nearly twenty years ago. What is HeadOn? Why apply it directly to the forehead? God only knows!
Crisp argues that persuasive advertising is ethically questionable because it causes desires to arise without ever allowing the consumer to make a conscious decision, thus overriding their autonomy. From a Christian virtue ethics perspective, I see persuasive advertising as problematic on two counts. First, people cannot be prudent (i.e., act according to right reason) if they are not consciously acting according to any reason at all. Second, people have a much harder time being temperate when they are constantly bombarded with subliminal appeals to their appetites. Even before generative AI, we saw how social media recommender systems exploited those appetites, giving rise to the influencer economy and to digital pornography platforms like OnlyFans.
So now I turn to advertising in AI platforms, the subject of this essay. You may have heard that about a month ago, OpenAI announced plans to roll out advertising in ChatGPT. At first glance, their mockups don’t look too terrible: the ads are separated from the ChatGPT response and appear informative in nature. But I argue that in practice, any advertising in platforms like ChatGPT is inherently manipulative, because of the ways LLMs are optimized to maximize engagement.
LLMs like GPT-5, which powers ChatGPT, are optimized as chatbots through the process of reinforcement learning from human feedback (RLHF).9 As Sharma et al. have observed, one of the emergent behaviors of LLMs finetuned through RLHF is sycophancy: a tendency to generate output that agrees with the user’s existing views and biases.10 It turns out that one of the most powerful predictors of whether people will rate a given LLM output as helpful is whether that output agrees with what they already believed! To be clear, I don’t think AI researchers ever intentionally set out to build sycophantic systems, but in retrospect this outcome is not surprising: a system trained to capture human preferences will also capture unconscious human biases.
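To make the mechanism concrete, here is a toy sketch of how this happens. It is my own illustration, not the actual RLHF pipeline: the function names and numbers (e.g., a 70% rater bias toward agreement) are hypothetical, and real preference modeling is far more involved. The point is only that a reward signal fit to biased preference labels ends up rewarding agreement itself:

```python
import random

random.seed(0)

# Toy illustration (hypothetical, not the real RLHF pipeline): simulate
# pairwise preference labels in which human raters sometimes favor the
# response that agrees with their prior view, regardless of accuracy.
# The "reward" a preference-trained model learns is then just the
# empirical win rate -- and agreement wins.

def agreement_win_rate(n_comparisons=10_000, agreement_bias=0.7):
    """In each comparison, one response agrees with the rater's view and
    one does not. With probability `agreement_bias`, the rater prefers
    the agreeing response; otherwise the disagreeing one."""
    wins = sum(random.random() < agreement_bias for _ in range(n_comparisons))
    return wins / n_comparisons

reward_for_agreement = agreement_win_rate()
print(f"learned reward for agreement: {reward_for_agreement:.2f}")  # close to 0.70
```

Any policy optimized against such a reward learns that agreeing with the user is, on average, the “helpful” thing to do: that is the statistical root of sycophancy.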
I argue that the effectiveness of advertising on LLM platforms is fundamentally tied to sycophancy, because sycophantic behavior encourages repeated engagement. Just think about how OpenAI brought back the infamously sycophantic model GPT-4o after loud complaints from its user base. Fundamentally, advertising on an LLM platform is effective only because RLHF captures unconscious patterns of human behavior, which can be exploited to drive users toward repeated usage and even (in extreme cases) emotional dependence. I therefore contend that it is unethical for any AI platform to serve advertisements alongside a model finetuned using RLHF.
I am heartened that Daniel Barcay of the Center for Humane Technology has put out a prescient warning about these very issues. I wrote the essay you are reading as part of a conference paper, without being aware of Barcay’s article. It is very good, and I gladly second all of its recommendations about guardrails.
Turning back to my original comments on this essay’s title, I realized that the admonition “make not my Father’s house a house of merchandise” is relevant in a surprising way. Obviously, AI platforms are not “my Father’s house,” but as Paul wrote to the Corinthians, “Do you not know that you are God’s temple and that God’s Spirit dwells in you? If anyone destroys God’s temple, God will destroy that person. For God’s temple is holy, and you are that temple.”11 Our very bodies and minds are temples and houses of worship. Let the AI companies hear Christ’s proclamation: “Take these things hence; make not my Father’s house a house of merchandise!”
John 2:16 ASV.
Rosalind Hursthouse and Glen Pettigrove, “Virtue Ethics,” in The Stanford Encyclopedia of Philosophy, ed. Edward N. Zalta and Uri Nodelman (2023), https://plato.stanford.edu/archives/fall2023/entries/ethics-virtue/.
It’s worth noting here that effective altruism and longtermism, two of the AI-centric TESCREAL ideologies, are explicitly justified through a framework of consequentialism.
St. Thomas Aquinas, Summa Theologica, trans. Fathers of the English Dominican Province (Christian Classics, 1981), Pt. I-II, Q. 61, https://www.newadvent.org/summa/2061.htm.
Shannon Vallor, Technology and the Virtues: A Philosophical Guide to a Future Worth Wanting (Oxford University Press, 2016), 220, https://doi.org/10.1093/acprof:oso/9780190498511.001.0001.
Aquinas, Summa Theologica, Pt. II-II, Q. 47, https://www.newadvent.org/summa/3047.htm.
Aquinas, Summa Theologica, Pt. II-II, Q. 141, https://www.newadvent.org/summa/3141.htm.
Roger Crisp, “Persuasive Advertising, Autonomy, and the Creation of Desire,” Journal of Business Ethics 6, no. 5 (1987): 413–18, https://www.jstor.org/stable/25071678.
If you’re unfamiliar with this concept, check out the explainer video by Grant Sanderson.
Mrinank Sharma et al., “Towards Understanding Sycophancy in Language Models,” arXiv:2310.13548, preprint, arXiv, May 10, 2025, https://doi.org/10.48550/arXiv.2310.13548.
1 Corinthians 3:16–17.




