Making AI Moral: Why a Robot Can’t Pray
New York City, USA, Tuesday, April 21, 2026
Anthropic, the creator of the Claude chatbot, is aiming to embed a moral compass into its AI. The company has even partnered with Catholic leaders to teach Claude values directly from religious teachings.
The Limitations of Text‑Based Morality
- No Physical Experience
Claude can read scriptures and sermons online, but it cannot meditate, fast, or feel the bodily cues that spark emotions such as gratitude and compassion.
- Emotion Drives Ethics
Research shows that moral behavior stems from emotional responses triggered by bodily signals, a process AI cannot replicate.
- Practice Over Belief
Studies indicate that merely claiming belief does not yield the same health and ethical benefits as active practice (attending services, praying, meditating).
The Bottom Line
While Anthropic’s intention is commendable, relying solely on religious texts will not endow Claude with genuine moral understanding. True ethics require the embodied experience that AI simply cannot access.