In February 1912, the noted scientist Arthur Smith Woodward received an intriguing letter from Charles Dawson, a country lawyer with a growing reputation as an amateur geologist. Dawson told Woodward that he had found fossilised fragments of human skull in the flint beds of Piltdown near the south coast of England. The find looked pretty special. It was.
The skull of Piltdown Man, with an apelike jaw and a large cranium, seemed to be a missing link in the evolutionary chain between modern humans and our primate ancestors. It was more than four decades before researchers began to suspect a hoax — and quickly discovered compelling evidence that every single discovery associated with Piltdown had been a fake.
I had long regarded the Piltdown fake as a unique product of the Edwardian age. Now I am not so sure. Some of the most famous “discoveries” in psychology are also being exposed — sometimes decades after the fact — as distorted, misreported or exaggerated to a disturbing degree. For a while, it seemed that 1950-1975 was a heroic age of psychological research, in which bold — if ethically questionable — findings seared themselves on the public consciousness. There was the Stanford Prison Experiment in 1971, in which student volunteers were invited by the psychologist Philip Zimbardo to act out the roles of prisoners and prison guards. The study swiftly deteriorated into dehumanising abuse, as the guards embraced their role as fascist thugs with too much enthusiasm.
There was the UFO cult whose members intensified their beliefs at precisely the moment (December 22 1954, just after midnight) that their prophecy of the end of the world failed to materialise: all witnessed by undercover researchers.
There was the “Robbers Cave experiment”, also in 1954, in which the psychologist Muzafer Sherif organised a summer camp at Robbers Cave State Park, Oklahoma, for 11-year-old boys. He and his associates then took notes as the camp descended into a hellish real-life version of Lord of the Flies.
What a collection of daring, epic research discoveries. Alas, they were more than merely daring: they were downright misleading. Begin with the Stanford Prison Experiment — a misnomer from the start, since there was no experimental control. Thanks to some detective work by Thibault Le Texier, a historian, it seems clear that the experiment’s mastermind, Zimbardo, heavily coached the “guards” to dehumanise and brutalise the “prisoners”. The prison simulation has traditionally been described as a surprising and spontaneous eruption of brutality. Le Texier sets out a strong case that the brutality was orchestrated by the experimenters from the start.
There is a similar story to be told about the Robbers Cave study. The superficial telling of this tale is that a group of boys were recruited to participate in a summer camp. Sherif and his collaborators — playing the role of camp counsellors — split the boys into two groups (the “Eagles” and the “Rattlers”) and organised baseball and tug-of-war contests with prizes. Sherif correctly predicted that the competition for resources between the groups would lead to bitter rivalry and fighting, and that the groups could then be reconciled by the presence of an external threat: vandalism to the camp’s water supply.
As with Zimbardo, there were always questions over the ethics of this study — some of the boys found the experience distressing, and none of them was told that they had been the subjects of an experiment.
But more recent research raises scientific questions, too. Historian Gina Perry, in her book The Lost Boys (2018), points out that the experimenters had to go to some lengths to engineer the tribal rivalry they had predicted, and that the note-taking observers often disagreed about what they were seeing. Those who had worked with Sherif on his theories found evidence to support them, while more independent observers would often describe very different dynamics. Strangest of all, Sherif had run another study the year before, in which the boys stubbornly refused to hate each other and instead concluded — correctly — that the camp staff kept trying to stir up trouble. That study was buried in the archives, barely mentioned. “It was as if Sherif wanted to forget it,” writes Perry.
The next shoe to drop? When Prophecy Fails (1956), the classic account of the UFO cult, was written by more giants of 20th-century psychology: Leon Festinger, Henry Riecken and Stanley Schachter. Festinger and his colleagues had infiltrated the UFO cult and described behaviour in line with Festinger’s theory of cognitive dissonance: when the cult’s apocalyptic predictions did not come to pass, the core members of the group clung even more firmly to their beliefs, and began to evangelise about them at the very moment they seemed to have been disproved.
In work published late last year, researcher Thomas Kelly strips this story of its credibility. Kelly had access to unsealed archival material, which demonstrated that the authors had misreported many of the events, distorting them to fit Festinger’s theory. They also interfered with the psychological processes they were purporting to observe, manipulating cult members through their conversations and even fabricating psychic messages. “Every major claim of the book is false,” writes Kelly, “and the researchers’ notes leave no option but to conclude the misrepresentations were intentional.”
Most shocking of all to fans of elegant writing — if not to scientists — has been the recent revelation by Rachel Aviv in The New Yorker that the neurologist Oliver Sacks, author of beloved books such as Awakenings (1973), had exaggerated and distorted the cases he wrote about and was racked with guilt about the fabrications.
In a letter to his brother, Sacks described The Man Who Mistook His Wife for a Hat (1985) as “fairy tales” and “half-report, half-imagined, half-science, half-fable”. Were millions of readers told they were paying for fairy tales? They were not.

Are there any lessons to be drawn from such a catalogue of distortion and exaggeration? There’s the old warning against stories that are too good to be true, and it applies here. But there’s also a structural problem. The rewards to “discovering” a spectacular scientific finding are large; the rewards to debunking frauds or deflating exaggerated claims are small if not non-existent. If these are the rules of the game, we should not be surprised at the way the game is played.
Written for and first published in the Financial Times on 14 Jan 2026.
