
Artificial Intelligence & Scientific Progress

Every generation has faced a technology that seemed to threaten the sacred. The printing press, the telescope, the theory of evolution, the internet — the tradition's response has never been to stop the clock. It has been to ask: who does this serve, who does it harm, and who gets left out?

The Answer

Artificial intelligence is not the first technology that has seemed to threaten what it means to be human.

Gutenberg's printing press (1440) was condemned by church authorities who feared that direct access to Scripture would undermine clerical authority — and they were right. It did. It also gave the Reformation, the Scientific Revolution, and modern democracy their foundation.

Darwin's theory of evolution was denounced as an attack on human dignity — and some theologians still treat it that way, despite the Catholic Church's acknowledgment in Humani Generis (1950) that evolution is a legitimate subject of inquiry, and despite the overwhelming scientific consensus. The digital revolution was greeted with both utopian and dystopian extremes, neither of which fully materialized.

The question for this tradition has never been "is this new?" or "is this disruptive?" It has been: who does this serve, who does it harm, and are we organized to make sure the benefits go to everyone rather than concentrating in the hands of the few?

Artificial intelligence is a genuinely powerful technology. Its applications span medicine (earlier cancer detection, protein folding for drug development), labor (automation of both low-skill and high-skill tasks), warfare (autonomous weapons systems), surveillance (facial recognition, behavioral prediction), creative work (text, image, audio generation), and economic productivity. The potential benefits are real. So are the potential harms.

The tradition has no instruction manual for large language models. What it has is a consistent set of principles that apply to every technology: Does it serve the poor or exploit them? Does it expand human dignity or diminish it? Who controls it, for whose benefit, and with what accountability? Is it being deployed in ways that are transparent and contestable, or in ways that are opaque and unaccountable?

The Jewish Reformer's Lens

The Jewish tradition has an ancient archetype for the question of artificial intelligence: the Golem.

In medieval and early modern Jewish legend, a Golem was an artificial being created by a rabbi using mystical knowledge — typically formed from clay and animated by the word emet (truth) inscribed on its forehead. The most famous Golem is the Golem of Prague, said to have been created by Rabbi Judah Loew ben Bezalel (the Maharal) in the 16th century to protect the Jewish community from antisemitic attacks.

The Golem legends are not simple endorsements of artificial creation — they typically end with the Golem becoming uncontrollable and requiring deactivation by erasing one letter from emet to make met (death). The pattern is consistent: artificial creation in service of a just purpose can be legitimate; the question is always whether the creator maintains control and whether the creation remains a tool rather than becoming an end in itself.

The concept of tzelem Elohim — being made in the image of God — is central here. What is distinctive about human beings in the Jewish tradition is not merely intelligence or creativity but the capacity for moral agency: the ability to choose good or evil, to be held responsible, to repent, to form covenantal relationships. An AI system, however sophisticated, does not have moral agency in this sense. It does not bear responsibility. It cannot repent. It cannot enter covenant.

This does not make AI unimportant or unregulated — it makes it a tool that requires especially careful governance, because the people who deploy it bear the moral responsibility that the system itself cannot bear. The Golem's creator is responsible for what it does.

The principle of pikuach nefesh (the duty to save a life, which overrides nearly all other obligations) supports the development and deployment of AI in medical contexts — if AI can detect cancer earlier, diagnose disease more accurately, or identify risks that human physicians miss, the tradition would support this use. The obligation to preserve life is strong enough to embrace technological means.

Catholic Social Teaching

The Pontifical Academy for Life — the Vatican body that addresses bioethical questions — took up artificial intelligence as a formal concern in 2019 and helped produce the Rome Call for AI Ethics (2020), co-signed by Microsoft and IBM alongside religious leaders. Pope Francis addressed AI at the G7 summit in 2024, becoming the first pope to address that forum, speaking specifically on the ethical dimensions of artificial intelligence.

The Church's engagement with AI reflects its consistent application of the principle of integral human development — the idea that technology must serve the full flourishing of all human beings, not merely efficiency or profit.

Key concerns from CST's framework:

Labor displacement: The Church has championed workers' rights since Rerum Novarum (1891). Automation that eliminates jobs without providing alternatives or safety nets for displaced workers is not morally neutral. The gains from productivity must be distributed — through shorter work weeks, retraining, safety nets, or shared ownership — not captured entirely by capital.

Weaponization: The Church has specifically condemned the development of autonomous lethal weapons — weapons systems that select and engage targets without meaningful human control. Just war theory requires human moral responsibility at every lethal decision. A weapons system that kills without human authorization violates this requirement categorically.

Surveillance and dignity: AI-enabled mass surveillance systems — facial recognition, social credit scoring, behavioral prediction — threaten the conditions necessary for human dignity and free conscience. The Church has opposed totalitarian surveillance since Pius XI's Mit brennender Sorge (1937), and the same principles apply to technological surveillance infrastructure.

Bias and justice: AI systems trained on historical data inherit and amplify historical biases — in hiring, lending, criminal sentencing, and healthcare. Deploying these systems against the poor and marginalized while presenting them as objective is a form of structural injustice with a new face.

Sources & Citations
  • Genesis 1:26–28 — Stewardship and the Image of God (Hebrew Bible, Torah). Human beings are created in the image of God (*tzelem Elohim*) and given *dominion* over creation — a word that has been extensively debated. The best contemporary scholarship understands this as *stewardship* rather than exploitation: the human being is appointed guardian and co-creator, accountable for how that responsibility is exercised. Applied to AI: humans bear responsibility for the systems they create and deploy. Abdicating that responsibility to "the algorithm" is not acceptable under this framework.
  • Pontifical Academy for Life, Rome Call for AI Ethics (2020). A document signed by the Pontifical Academy for Life, Microsoft, and IBM, calling for AI development guided by the principles of transparency, inclusion, accountability, impartiality, reliability, and security and privacy. Notable as an example of a religious institution engaging proactively with technological governance rather than condemning technology wholesale or ignoring its implications.
  • Pope Francis, Address to the G7 Summit on AI (2024). Pope Francis became the first pope to address the G7 forum, speaking on the ethical implications of artificial intelligence. He called for binding international agreements on autonomous weapons, emphasized the need for human oversight of AI decision-making, and connected AI governance to the preferential option for the poor, warning that AI's benefits must not accrue only to wealthy nations and corporations.
  • Rerum Novarum (1891), applied to AI and labor displacement. Pope Leo XIII's encyclical establishing the Church's defense of workers' rights, including the right to just wages, safe conditions, and association. Written in response to the industrial revolution's impact on labor. Applied to AI: the same principles require that productivity gains from automation be distributed equitably rather than captured entirely by the owners of AI systems. The Catholic tradition has a 130-year track record of applying labor-rights principles to new technological contexts.
  • Joy Buolamwini & Timnit Gebru, "Gender Shades" (2018). A landmark study demonstrating that commercial facial recognition systems from major technology companies had significantly higher error rates for darker-skinned women than for lighter-skinned men. One of the foundational papers establishing that AI systems trained on non-representative data inherit and amplify historical biases, and the empirical foundation for the justice critique of algorithmic decision-making in high-stakes contexts.

What Should We Do?

For everyone: Approach AI with neither panic nor naivety. It is a powerful tool — like all powerful tools, its moral quality depends entirely on how it is used, who controls it, and what accountability structures govern its deployment.

In your own life: be aware of where AI is making decisions that affect you — in hiring, lending, insurance, healthcare, criminal justice, content moderation. You have a right to know when automated systems are making decisions about you and a right to challenge those decisions through human review. Support legislation that establishes these rights.

In your professional and organizational life: if you are deploying AI systems, ask: does this system make decisions about people? What are the error rates? Are errors distributed equitably across demographic groups? Is there human review for consequential decisions? Is the system's operation transparent to the people affected by it?
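The audit questions above can be made concrete. As a minimal sketch — the group labels, record format, and data here are entirely hypothetical — a first-pass equity check simply disaggregates a system's error rate by demographic group and flags large gaps for human review:

```python
# Minimal sketch of a per-group error-rate audit for an automated
# decision system. Group names and data are illustrative, not real.
from collections import defaultdict

def error_rates_by_group(records):
    """Compute the fraction of wrong decisions per group.

    Each record is a (group, predicted, actual) triple."""
    errors = defaultdict(int)
    totals = defaultdict(int)
    for group, predicted, actual in records:
        totals[group] += 1
        if predicted != actual:
            errors[group] += 1
    return {g: errors[g] / totals[g] for g in totals}

# Hypothetical audit data: an equitably performing system should show
# similar error rates across groups; a wide gap is a red flag.
records = [
    ("group_a", 1, 1), ("group_a", 0, 0), ("group_a", 1, 0), ("group_a", 0, 0),
    ("group_b", 1, 0), ("group_b", 0, 1), ("group_b", 1, 1), ("group_b", 0, 0),
]
rates = error_rates_by_group(records)
```

On this toy data the gap is stark (a 25% error rate for one group against 50% for the other) — exactly the kind of disparity the "Gender Shades" study documented in commercial systems, and exactly what routine auditing is meant to surface before deployment.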

On labor displacement: support policy frameworks that distribute the gains from AI-driven productivity — through universal basic income, work-sharing, profit-sharing, shorter work weeks, robust retraining programs, and strengthened safety nets. The productivity gains from automation belong to society, not only to the owners of the systems.

For Catholics specifically: The Church's engagement with AI ethics is serious and growing. The Rome Call for AI Ethics is worth reading. The Church's 130-year tradition of labor rights teaching applies directly to the question of who benefits from automation and who bears its costs. Your diocese likely has staff or organizations working on social justice issues where AI's distributional impacts are already visible — in criminal sentencing, housing discrimination, and healthcare access. Connect with that work.
