For the full version in Hebrew – click here
The past few weeks have been a run of bad luck for Professor Dan Ariely, one of the most well-known Israelis in the world. His books – most notably Predictably Irrational – have been translated into many languages, and he has become an academic star of behavioral economics, lecturing in every corner of the globe and advising many organizations, including the Israeli government. At the core of his work is shaping public behavior, but according to a series of new revelations, his own career is replete with behavioral problems, leaving his credibility hanging by a thread.
What started last week with an exposé published on Data Colada – which suggested a notable study Ariely and his colleagues released in 2012 was based on fabricated data – and continued with similar claims by an Israeli researcher who looked into another paper, from 2004, could now become a tidal wave calling into question one of the most esteemed and widely publicized academics and intellectuals in the world. Here, we expose another part of Ariely’s academic history, from 2007, which in part led to his decision to quit the MIT Media Lab for Duke University.
Over the past week, we reached out to Ariely on several occasions to ask him why he left MIT. At first he claimed, “There was nothing wrong. I wanted to move to a place where people work together.” When asked the same question a few days later, he responded that it had to do with family circumstances. The truth, in fact, is slightly more complicated than these two versions of the story.
Replaced as principal investigator
In chapter 10 of his breakout bestseller, Predictably Irrational, Ariely describes the power of price to shape the way we experience things. He uses the placebo effect, and the part that price plays in it, to make his argument: does more expensive medicine make us feel better than cheap medicine? The chapter is based on several of Ariely’s studies from his time at the prestigious Massachusetts Institute of Technology, where he was a researcher and a lecturer. Hamakom reveals for the first time that one of these studies, published under the title Commercial Features of Placebo and Therapeutic Efficacy, played a part in Ariely’s temporary suspension from research and, a year later, his departure from the university.
We interviewed MIT officials who said the way the placebo experiment was conducted raised serious concerns. According to sources with close knowledge of the study, Ariely did not request an IRB review from MIT’s ethics committee, COUHES, nor did he follow the required protocol for the experiment, in which 80 participants received electric shocks and were then given supposedly cheap or expensive medicine – both, in fact, placebos.
One of the participants reported the experiment, and a member of the ethics committee contacted Ariely to look into the case. In emails seen by Hamakom, Ariely responded jokingly, saying he had a general protocol for this type of experiment titled “Electric Shocks,” and that was the one he had been following.
The ethics panel member responded harshly, telling him she was aware of 12 protocols for experiments with him as research director, but that not a single one had the words “electric shocks” in the title. She demanded Ariely provide the protocol number and the title of the experiment, and warned him the situation was under inquiry – and that his experiment had been suspended indefinitely. She asked him to send her the experiment protocol and notify her of the number of participants that same day. She also wrote that, according to the institution’s records, his research assistant was not qualified to run human experiments, stressing that if true, this was a serious violation.
The IRB intervention led to arguments between Ariely and his research students. The students claimed he had told them he had the necessary approval, whereas Ariely claimed he had a general protocol for an experiment with electric shocks, which he said he thought was enough. According to a person close to the research team at the time, many of its members felt Ariely was trying to shift the blame for running an unauthorized experiment onto his research students.
Ariely and his team eventually did get a belated IRB authorization for the experiment, which had to go through several changes before it was approved. One of these changes was replacing Ariely as principal investigator with one of his colleagues at the Media Lab, Prof. Andrew Lippman. In the final paper, Lippman is credited only with providing “logistical support.”
Following its investigation, the ethics committee banned Ariely from conducting research for a year. In emails Hamakom has seen, Ariely protested to COUHES chairman Leigh Firn, saying his colleagues were shocked by the severity of the suspension imposed on him. After the ban took effect, Ariely left MIT for Duke University, where he teaches and conducts research to this day.
Asked why Ariely hadn’t requested a review as required, a member of the research team told us: “Ariely likes to cut corners, and he doesn’t think he has to follow the same rules as everyone else. He didn’t think he’d get caught.” According to the researcher, Ariely would most probably not have gotten approval for that experiment, as “you can’t lie to people and then electrocute them.”
Ariely admits at the end of that chapter in his book, “The moral dilemmas involved in medical and placebo experiments are real. The potential benefits of such experiments should be weighed against their costs, and as a consequence we cannot, and should not, always do placebo tests. But my feeling is that we are not doing nearly as many of them as we should.”
Ariely said in response: “The study I conducted in 2005 received the necessary approvals. There were indeed disagreements regarding the technical aspects of the research protocols, which led to the suspension of the experiment, but I remained a full-time faculty member at MIT. I left MIT of my own accord three years later, due to professional and personal considerations, when Duke University offered academic positions to my wife and myself.”
MIT didn't respond to our many requests for comment.
"My colleagues are mad”
It has been a tumultuous few weeks for the author of The Honest Truth About Dishonesty: How We Lie to Everyone – Especially Ourselves. His credibility has taken a hit from new claims and revelations suggesting that some of his studies are based on problematic or falsified data.
An article published last week on Data Colada deals with a 2012 paper by Ariely, Lisa Shu, Nina Mazar, Francesca Gino, and Max Bazerman. Based on three empirical studies, the paper found that “dishonesty can be reduced by asking people to sign a statement of honest intent before providing information (i.e., at the top of a document) rather than after providing information (i.e., at the bottom of a document).” It was published in PNAS, the official journal of the US National Academy of Sciences.
Ariely was the only team member in contact with the insurance company that provided the data for the experiment, and the data was given directly to him. Data Colada revealed that the experiment could not be replicated because the data had been distorted, and went on to claim that this was no innocent mistake but deliberate fraud.
After these claims came to light, Ariely's co-authors Shu and Bazerman issued blunt responses. "Shu and I were the only two of the original five authors explicitly in favor of retraction, and lacking a majority we did not retract the 2012 paper. I now believe I should have independently taken action to push for retraction even without a majority of co-authors in favor of such action… I wish I had worked harder to identify the data were fraudulent," Bazerman wrote.
"My colleagues are mad at me because I was the liaison with the insurance company and it really is a heartache," Ariely said in response. "It's a very sad incident. It's not good for me, for my colleagues, or for science. There are many lessons to be learned here, and I did learn my lesson quite painfully. I spoke to quite a few researchers who outsource their data and believe that no one has any interest to deceive them. The lesson is that we have to be involved in the collection of data."
A BuzzFeed article by Stephanie M. Lee revealed over the weekend the name of the insurance company in question, The Hartford. The company did not respond directly to Ariely’s claim that it was responsible for the flawed data.
Contacted by Hamakom, PNAS said they are "aware of the findings, and are in contact with the investigators and authors." It seems likely that more papers by Ariely will soon come under scrutiny.
Two weeks earlier, Retraction Watch published an article pointing to issues with another Ariely experiment, from 2004. Psychologist Gilad Feldman of the University of Hong Kong set out with a group of students to replicate the original study, which looked into the relationship between work and wages, but when they checked the data used for the original study they found 13 discrepancies. While some of the discrepancies do not change the bottom line of the study, others “would change the interpretation of the data in a manner that substantively alters the conclusions drawn from the research.”
Masturbation research and fake nipples
Back in 2010, Ariely hit a nerve with the American Dental Association when he told NPR’s Robert Siegel in an interview that people are irrationally loyal to their dentists, far more than to other medical professionals.
In that interview on “All Things Considered,” Ariely claimed that if two dentists are asked to identify cavities in the same x-ray image of the same tooth, they will reach the same conclusion only about half the time. He cited data from the dental insurer Delta Dental, but immediately after the interview aired, the company said it had no information or data that could lead to that conclusion.
According to Ariely, he had based his statement on a conversation with someone at Delta. "He could not name Delta as the source of that information or that statement; we do not collect any kind of information which could lead to that conclusion," a spokesperson for the company said.
New concerns were raised online yesterday, this time about statistical precision in a 2005 study that dealt with making decisions while masturbating. People who were MIT students at the time told Hamakom that Ariely would often mention this study, even without any clear reason. It was conducted at MIT's Media Lab, which, as revealed in 2019, was heavily sponsored by Jeffrey Epstein.
According to some of Ariely's former students at MIT, in 2005 he also wanted to conduct an experiment, which he titled the "Sex and the City Project," in which waitresses would be asked to wear fake pointy nipples so he could examine how this affected their tips. The proposal didn’t advance beyond the initial planning stage.
"He fiddles with his studies to match the story"
Experts in behavioral economics told us the latest revelations might affect the credibility of the field as a whole. Some who are more closely familiar with Ariely's work say they were not surprised. What Ariely does, they explained, is colloquially known as “p-hacking,” meaning selective reporting or interpretation of the data to achieve the desired result. Attempts to replicate such studies should reveal it, which is exactly what led to the latest revelations.
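For readers unfamiliar with the term, the short simulation below illustrates the mechanism. It is a minimal sketch in Python, using invented data and no material from Ariely's studies: both groups are drawn from the same distribution, so any "effect" is a false positive, and simply collecting several outcome measures but reporting only the best one inflates the false-positive rate well above the nominal 5%.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

def honest_test(n=30):
    """One pre-registered test: both groups come from the SAME distribution,
    so any 'significant' result is a false positive."""
    a, b = rng.normal(size=n), rng.normal(size=n)
    return stats.ttest_ind(a, b).pvalue

def selective_test(n=30, measures=5):
    """Collect several outcome measures but report only the smallest p-value,
    one common form of p-hacking."""
    return min(stats.ttest_ind(rng.normal(size=n), rng.normal(size=n)).pvalue
               for _ in range(measures))

runs = 2000
honest = np.mean([honest_test() < 0.05 for _ in range(runs)])
hacked = np.mean([selective_test() < 0.05 for _ in range(runs)])
print(f"false positives, honest analysis:     {honest:.1%}")  # around 5%
print(f"false positives, selective reporting: {hacked:.1%}")  # roughly 20-25%
```

Replication exposes the practice because a result that reached significance only through this kind of selection has no real effect behind it, so an independent rerun usually fails to find one.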
"Many people in the field are accused of p-hacking," says a senior researcher who worked with Ariely in the past and knows his work well. "It's not that Ariely invents data, he merely looks at the data the way he wants to see it, in order to tell the story he wants to tell. He's not lying, he just looks at the data in a way that will lead to the conclusion he's after."
"Let's say Dan is a guy with big ideas, and he has a hypothesis about something and wants to prove it, even if the data doesn't work out,” the former colleague added. "He's an excellent storyteller, and he twists and fiddles with his studies so that they match the big story."
Ariely said in response to the claims: “These anonymous claims against my professional integrity are complete nonsense, which may be easily debunked by the following chart, summarizing the results of my studies over the past 20 years. I find it disheartening that there are academic scholars who hide behind anonymity and hurl alternative facts at journalists, instead of conducting a professional discussion.”
Ariely explained: “The dotted black line represents the expected distribution of results from many studies over time. Some are above the level of significance and others below. The blue line represents the results of all my experiments between the years 2000 and 2020, 600 statistical tests in all. It is evident that the results of my experiments are highly compatible with the statistical predictions. If these experiments were biased, the blue line would appear to the right. You can find additional details about the model and comparison to other investigators here.”
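To make the kind of comparison Ariely describes concrete, here is a small, schematic Python sketch. It is not a reconstruction of his chart or his data; the effect sizes, the number of tests, and the amount of bias are invented purely for illustration. It shows why an aggregate distribution of test statistics that sits to the right of the expected curve (toward significance) is read as a signature of biased reporting.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

# 600 hypothetical z-statistics from an unbiased lab: a mix of null and
# modest true effects, reported whether or not they reach significance.
unbiased = rng.normal(loc=0.5, scale=1.0, size=600)

# The same results after hypothetical selective re-analysis, which nudges
# each statistic upward and shifts the whole distribution to the right.
biased = unbiased + rng.uniform(0.3, 1.0, size=600)

threshold = stats.norm.ppf(0.975)  # two-sided 5% significance, z ≈ 1.96
for name, z in (("unbiased", unbiased), ("biased", biased)):
    print(f"{name:>8}: mean z = {z.mean():.2f}, "
          f"significant share = {(z > threshold).mean():.1%}")
```

Ariely's argument, in effect, is that the blue line of his own results sits where the unbiased curve would, rather than to its right.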