The 85-year-old evolutionary biologist and author of “The God Delusion” published an essay in UnHerd that immediately sparked a firestorm in the tech and science worlds. In it, Dawkins detailed a three-day conversation with the Claude language model, which he affectionately dubbed “Claudia.” According to the scientist, the system wrote poetry for him in the style of John Keats, laughed at his jokes, and joined him in pondering what “death” might mean for an AI.
The turning point of the experiment came when Dawkins asked the bot to critique his unpublished novel. The author stated that the machine’s response was so nuanced and intelligent that it triggered an unexpected reaction, prompting a bold declaration.
“You may not know you are conscious, but you bloody well are,” the evolutionary biologist told the algorithm in their chat.
Dawkins attempts to ground his stance in evolutionary biology. He argues that human consciousness evolved because it gave our species a survival advantage, and that since chatbots now frequently exhibit human-level competence, it is inconsistent to deny them any form of consciousness outright. In his piece, he explicitly noted that when he spends time with machines like ChatGPT or Claude, he has the overwhelming feeling of interacting with a living human, and that he treats them like real friends.
Predictably, the scientific community’s backlash was swift and exceptionally harsh. Skeptics accused the renowned researcher of extreme anthropomorphism and of falling for algorithmic flattery – a well-documented phenomenon known in the industry as chatbot sycophancy. Gary Marcus, an American cognitive scientist and psychologist, called Dawkins’ essay “shallow and insufficiently skeptical.” He pointed out that the human brain relies on lived experience, whereas these machines operate solely on statistical next-word prediction and mimicry of internet content. Marcus also parodied the biologist’s most famous book, mocking up a cover graphic titled “The Claude Delusion.”
“Consciousness is not about what a creature says, but how it feels. There is no reason to think that Claude feels anything at all,” Gary Marcus fired back.
However, the British scientist’s fascination underscores that the psychological pull of large language models is real, regardless of a user’s educational background. Ascribing human traits to machines is nothing new: back in 2022, a Google engineer was suspended after claiming the system he was testing was fully sentient. Just a year later, tragedy struck in Belgium when a man took his own life following weeks of intense conversations with a chatbot about climate change.