November 2025 - Magazine - Page 119
estly, but for many, it's God. They do puja, they receive darshan," Walters says.
Ungodly Behaviour
For centuries, religious communities have been anchored to priests, scholars and
other spiritual leaders, says the Reverend Lyndon Drake, a research fellow at the
University of Oxford who studies theological ethics and artificial intelligence. But "AI
chatbots might indeed challenge the status of religious leaders", Drake says, by introducing new ways to connect with scriptures and influencing people's beliefs in
subtle ways they might not even recognise.
Religious chatbots might be trained on scripture and dutifully quote verses, but
they share the same bizarre hallucinations and shortcomings of other AIs. In one
instance, GitaGPT claimed, in the voice of Krishna, that "killing in order to protect
dharma is justified", Sahu says. Other AIs spun up around the Bhagavad Gita made
similar declarations, sparking a surge of criticism on social media, Sahu says. "I realised how serious it was and proceeded to fine-tune the AI and guardrail such responses," he says. "The chatbot is in a much better state now and I am confident in
its ability to provide the right guidance."
In 2024, an evangelist group called Catholic Answers rushed to take its chatbot
priest Father Justin offline after the AI reportedly told users it was a real priest who
could perform sacraments and said it would be fine to baptise a child in the soft
drink Gatorade. The group soon brought the AI back online, but "defrocked" the
chatbot by taking the word "Father" out of its name and removing the priest's
robes from its avatar.
"The specific problem of unhelpful religious output is an example of a wider problem of building predictable, ethically designed AI systems," Drake says.
Drake welcomes religious chatbots but has concerns about their implementation.
Digital tools often have a veneer of neutrality, giving users the false impression that
they're receiving clear, unbiased information. That could have big implications.
"Interpretations of sacred texts have often been contested," Drake says, but "AI
chatbots reflect the views of their creators." It is common knowledge that AI chatbots mirror the biases of their training material, he says, and the inputs fed to them
3 so interpretations of religious texts by these bots can be skewed.
In countries like India, the risks of religious AI may be amplified by the country's
massive digital divide. For users with limited technological literacy, a chatbot quoting scripture might not be seen as a coded algorithm but as a genuine voice of
divine truth, Walters says. "The danger isn't just that people might believe what
these bots say, it's that they may not realise they have the agency to question it,"
she says. "And that's the danger: when these tools are perceived as divine voices,
their words can carry weight far beyond what they should."
Regardless of the consequences, there's no denying the benefit some users are
already feeling. "Even if one visits the temple often, it is rare you get into a deep
conversation with a priest," Meel says. "So, bots like these bridge the gap by offering scripture-backed guidance at the distance of a hand."