The Ape of God in the Age of AI
In Latin there is a saying some ascribe to Tertullian: Diabolus simia Dei — literally, “the Devil is the ape of God.” The idea is that the Devil mimics God, promoting a kind of “good lite” that turns out, unsurprisingly, to be evil itself.
It can be illuminating to ask what various trends in society or the Church are trying to mimic. Social networks, for example, were supposed to draw people into a more perfect communion — but they have produced instead a multitude of isolated echo chambers. These are communions of a sort, but rather than fostering empathy, they amplify hate. But this much we already know. Let us now analyze the newest invention — Large Language Models — from the same perspective.
LLMs could likely never have raised enough investment money were they not marketed on the promise of layoffs in expensive white-collar creative jobs. A positive promise to humanity could have been made instead, and it remains largely untold: controlling things effectively through natural language, building better user interfaces or code generators, aiding writing, improving search. But let’s skip that for now since, as with social networks, it is telling what LLMs have already ended up causing.
One could argue it is too early to tell in many areas, but I want to focus on an outcome that already seems almost certain: humanity is to be relieved of the fear that we might soon have to understand, even slightly, how these technologies work inside. Many find it unpleasant enough to acquire that kind of literacy in math, physics, or biology; for the ubiquitous computer science, apparently, we can be excused.
I can already hear the cheering that the dreaded science of the IT nerd will stay out of the curriculum — even though, only a year ago, it seemed certain that every future student would have to learn the basics of coding, computational complexity, and how information systems communicate. Now, with many predicting that software engineers will become about as rare as the designers of passenger airliners, we can rest assured: we will not need to understand information technology from the inside — not even to the extent we understand what happens under the hood of our cars. Which, generally, is not much.
Outsourcing understanding is never a good thing, but in many areas it is inevitable — and has been since the division of labor. Yet we should ask whether dealing with information is separable from our humanity at all, and especially: to what extent can our relationship with God happen outside it? Many Christians inhabit two worlds. There is the private space of life and worship, and there is the technosphere — which they enter only minimally, certainly less than their worldlier neighbors. This minimal engagement is taken to justify illiteracy in the field. Physics and biology are different, of course: noble disciplines, fit for the admiration of God’s creation. The only small flaw is that you cannot live with others, or love them in any meaningful way, without letting them influence you; and if you do not understand a significant part of the world you share with them, you may end up ensnared by the technosphere without ever using it.
Another saying goes: if you do not care about politics, politics will care about you. I am afraid with technology it is exactly the same. Through the influence of social networks, I have seen many people living inside an echo chamber and carrying more hatred than before — even when they themselves do not use social networks and abstain from technology as much as they can. But with LLMs, things are about to get much more serious.
Before LLMs, the amount of information we were supposed to responsibly process was already overwhelming, and there was no infrastructure to help us make the right decisions in a responsible way. With LLMs, we can seemingly just delegate everything to a well-trained statistical machine. We can ask it to study our work contract, mortgage, insurance, or any other terms and conditions; our opinions and news; our political, healthcare, and parenting decisions. We can ask it to summarize the emails others have sent us — or have it generate the responses for us, or better still, train it on our past output and let it impersonate us. We can rightfully argue that we cannot afford to spend the whole day reading hundreds of pages of natural-language content just to stay on top of things. Frankly, consuming all the necessary information with the attention it deserves and cross-checking it is the only way we could stay responsible — and it is not physically possible. And there are no longer journalists or other trustworthy authorities to help us; those gatekeepers were swept away by an earlier technological wave. So delegating to a machine seems inevitable, since we have not built any real infrastructure to help us organize information collaboratively — or build trustworthy blocks of it together.
So we have decided to delegate our understanding and our orientation to a global statistical machine — one supposed to be reliable, at least for the mediocre and average, which is most of us. We may hope our lives will not be disturbed. The technosphere will live the information-heavy part of life for us, while we ourselves are not required to be mentally present. And now, having already lost some of the trustworthy middlemen who once helped us handle the complexity, we are removing the rest by automation: lawyers, clerks, marketing specialists, software engineers, and who knows whom else. The hope is that the deluge of information will somehow handle itself, without our ever looking into it or understanding it.
Are you still able to believe this? Does it not seem obvious that this is now very likely to crash global society — and to crash it hard?
I find this genuinely sad. I had thought we would follow a different path — but that path would have required basic technological literacy at a level comparable to our grasp of physics, mathematics, or biology. My expectation was based on an observation from my own field, software engineering, where we already build infrastructure for managing the incredible complexity of information across many domains of expertise. We do it every day, and, I must say, rather successfully. And here I do not mean applications for sharing selfies and statuses. I mean something more fundamental: the building of software itself.
Building a codebase is, in essence, an act of collective agreement about how some real-world challenge will be handled, from the highest level down to the smallest detail. If we tried to write that agreement out as a continuous block of natural language — a book, or a bill of law — it would be impossible to manage. The result would be as overwhelming as the ocean of information we are already drowning in.
A code repository solves this differently. Natural language is used only to name things: components, processes, pieces of data. The structure itself is hierarchical, not linear, and held together by links rather than repetition. What has already been described is not described again millions of times over.
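To make that concrete, here is a toy sketch in Python (every name in it is hypothetical, invented for illustration): a fragment of mortgage terms expressed the way a codebase would express it. Each notion is named exactly once, linked into a hierarchy, and never re-described.

```python
from dataclasses import dataclass

# Each real-world notion is named once and referenced everywhere else,
# instead of being re-described in prose at every mention.

@dataclass
class InterestRate:
    annual_percent: float

@dataclass
class RepaymentSchedule:
    months: int
    rate: InterestRate  # a link to the shared definition, not a copy

@dataclass
class MortgageTerms:
    principal: float
    schedule: RepaymentSchedule  # hierarchy: terms -> schedule -> rate

def monthly_payment(terms: MortgageTerms) -> float:
    """Standard annuity formula, stated once and reused everywhere."""
    r = terms.schedule.rate.annual_percent / 100 / 12
    n = terms.schedule.months
    if r == 0:
        return terms.principal / n
    return terms.principal * r / (1 - (1 + r) ** -n)

# One concrete agreement, assembled from the named parts.
terms = MortgageTerms(
    principal=100_000.0,
    schedule=RepaymentSchedule(months=360, rate=InterestRate(annual_percent=4.0)),
)
print(round(monthly_payment(terms), 2))
```

If the rate changes, it changes in one place; every reader and every calculation that links to it sees the same updated meaning, which is exactly what a prose contract with the rate restated on every page cannot guarantee.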
The fundamentals of this approach are not terribly complex — arguably simpler than basic math, physics, or biology. If we were even a little literate in them as a society, we could collaborate on a common understanding of the very things mentioned earlier: work contracts, mortgages, insurance terms, news, our political and healthcare and parenting decisions. This is how software developers already work: unlike an article or a book, a codebase can be built by many people at once, each contributing to a shared understanding of the same domain. The repetition is reduced as much as possible, and there is a working industry-standard review process in place, as well as automated quality testing.
There is no reason we could not do the same elsewhere — building a source of trust, and an understanding as detailed as it needs to be. In my book I go even further: this can be compared to the idea of logos, with direct implications for how Catholic teaching might be passed between generations, accurately and effectively.
LLMs are unusually good at processing and generating code precisely because code can always be rendered into plain language — though when so unfolded, it becomes too long for any human to read. LLMs are comfortable with that unfolded form: they summarize it, reformulate it, translate it between programming languages and human languages.
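That unfolding can be illustrated with a toy sketch (the definitions and names below are invented for the sake of the example): a few linked definitions stay short, but inlining every reference into continuous prose makes the text longer than all of the definitions combined — and the blow-up only grows with more links.

```python
# Hypothetical linked definitions: <name> marks a reference to
# another definition, the way code names a shared component.
definitions = {
    "rate": "the annual interest divided by twelve",
    "schedule": "a payment every month at <rate> for thirty years",
    "mortgage": "a loan repaid under <schedule>, where <schedule> may be renegotiated",
}

def unfold(term: str) -> str:
    """Render a linked definition as continuous prose by inlining
    every reference until none remain (the toy data has no cycles)."""
    text = definitions[term]
    changed = True
    while changed:
        changed = False
        for name, body in definitions.items():
            tag = f"<{name}>"
            if tag in text:
                text = text.replace(tag, body)
                changed = True
    return text

linked = sum(len(v) for v in definitions.values())
unfolded = len(unfold("mortgage"))
print(linked, unfolded)  # the unfolded prose exceeds all linked definitions together
```

With only three definitions the unfolded text is already longer than the whole linked form; a real codebase unfolded this way runs to lengths no human would read, which is precisely the form an LLM handles comfortably.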
Yes, statistical machines are better than humans at handling natural language. But that does not mean we should delegate understanding to them. It means we should use them as a tool while finding a better infrastructure for managing information complexity than natural language, one that keeps understanding in our hands.
And this is where the dividing line actually runs: between the God-like and the ape-like.


