It seems fair to venture that the vast majority of people in management regard “groupthink” – the practice of making decisions in a group – as a pejorative term. This is to be expected, since the person who carried out much of the original research into the phenomenon made clear from the outset that it should have “an invidious connotation”.
Irving Janis, a psychologist at Yale University, first studied groupthink by examining the effect of extreme stress on group cohesiveness. Defining it as “a deterioration in mental efficiency, reality testing and moral judgements”, he went on to detect its presence in the misguided strategies underpinning large-scale catastrophes such as the failed “Bay of Pigs” military invasion of Cuba and the Vietnam War.
More than 40 years on from this formative work, which itself came nearly two decades after sociologist William H Whyte coined the term, groupthink has been accused of contributing to everything from the Challenger space shuttle tragedy to the “hive mind” mentality of social media, which can produce either uncritical conformity or collective intelligence.
It has also been blamed for numerous high-profile disasters in the business world, including Marks & Spencer’s precipitous share price decline in the late 1990s, Swissair’s sudden collapse in 2001 and BP’s Deepwater Horizon oil spill in the Gulf of Mexico in 2010.
In all these events, the argument goes, a collective rush to judgement prevented decision makers from reaching conclusions they would very likely have come to as individuals. Alternatives were ignored. Tunnel vision prevailed. Overconfidence reigned. To borrow Janis’s phrase, “illusions of invulnerability” held sway.
Why does this happen again and again when the supposed purpose of making decisions on a group basis is to achieve better outcomes? And is the threat of groupthink so overwhelming, as research sometimes seems to indicate, that resistance is all but futile?
The culture of 'go'
In the Novum Organum, his magnum opus on humanity’s cognitive frailties, the seventeenth-century philosopher and statesman Francis Bacon noted the average person’s tendency to seek validation of his or her own opinion and to “neglect and despise” everything else. Nowadays, thanks to advances in the field of psychology, we refer to this propensity as “confirmation bias”.
Almost anybody who has taken part in a business meeting will have fallen victim. Basically, we just want to get the job done. We have our opinions, and we intend to stick to them. We welcome anything that concurs with our outlook and disregard anything that contradicts it. In the words of the Rogers Commission, which investigated the Challenger disaster, we pursue “a culture of go”.
Only with the benefit of hindsight does the narrow-mindedness of this approach become painfully obvious. Only when things have gone spectacularly wrong – when the company has folded or the environment has been destroyed or seven astronauts have been killed in front of a live TV audience of billions – do we condemn the recklessness of those responsible, shake our heads and say: “What on Earth were they thinking?”
The problem is that groupthink and confirmation bias are inherently seductive. Janis expressed the issue in the spirit of Parkinson’s Law, the dictum that work expands to fill the time available for its completion, when he observed: “The more amiability and esprit de corps there is among the members of a policymaking group, the greater the danger that independent critical thinking will be replaced by groupthink.”
Does it follow, then, that we would make better decisions if we were less fond of each other? Not exactly; but it would certainly help if more of us were prepared to fight our corner.
Playing devil's advocate
Crucial here is how the weight we give to information is shaped by the way we receive it. A recent study by economists at the London School of Economics and the University of Nottingham set out to shed new light on this.
According to confirmation bias, we tend to attach greater significance to information that supports our favoured hypothesis and less to information that undermines it. But is this always the case?
Our findings suggest not. Using laboratory experiments to test how decision makers react to consensus-challenging information when it is (a) merely reported and (b) genuinely discussed, we uncovered evidence of reverse confirmation bias. Our subjects placed more weight on conflicting information in the latter scenario – not necessarily because information delivered verbally is more persuasive, but because discussion made them see how incomplete their own perspectives were.
This echoes an argument Janis put forward all those years ago in outlining a framework for guarding against groupthink. Every group, he said, should feature at least one member whose task is to play devil’s advocate – to challenge, to contest, to ensure that ideas that might otherwise be celebrated as irrefutably wonderful and utterly beyond criticism are duly scrutinised, dissected and, maybe most importantly, compared to other ideas.
There is, after all, a huge difference between a discussion and a done deal. Careful and constructive criticism – even healthy conflict – has many advantages over a fait accompli. As bad decisions throughout the ages have illustrated, firmly held preconceptions can be damaging; and the preconception that groupthink is inevitable is among the most potentially damaging of all.