Let’s say that the US is preparing for the outbreak of an unusual disease and it’s expected that 600 people will die. Should you a) choose an option that will save 200 people for sure, or b) choose an option with a one-third chance of saving all 600 people and a two-thirds chance that no one will be saved?
Same scenario, different question. Should you a) choose an option that will kill 400 people for sure, or b) choose an option with a two-thirds chance of killing everyone and a one-third chance of killing no one?
You’ve probably noticed that those questions, whilst framed differently, have exactly the same outcomes. The first option in both questions saves 200 people, and the second gives you a one-third chance of saving everyone. In a new study, however, intelligence agents were less likely to notice that equivalence, more likely to be swayed by the wording, and more confident in their (shoddy) reasoning.
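To spell the equivalence out, here is the expected-value arithmetic behind the two questions (my own quick check, not a calculation from the study):

\[
\begin{aligned}
\text{Option a:}&\quad 200 \text{ saved (i.e. } 400 \text{ dead) for certain}\\
\text{Option b:}&\quad \tfrac{1}{3}\times 600 + \tfrac{2}{3}\times 0 = 200 \text{ saved, or equivalently } \tfrac{2}{3}\times 600 = 400 \text{ dead, on average}
\end{aligned}
\]

However you slice it, the two options carry the same expected toll; they differ only in that option a is certain while option b is a gamble.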
The study, published in Psychological Science, presented questions like these to a group of intelligence agents, a group of college students, and a group of post-college adults. Of the three, the intelligence agents were the most willing to take risks with lives when outcomes were framed as losses, even though the outcomes were frequently identical and merely worded differently. They also expressed greater confidence in their reasoning when outcomes were framed as losses.
College students made more conservative choices, preferring to save lives, and post-college adults occupied the middle ground. Sadly, putting college students in charge of intelligence probably won’t help things, as there’s very little intelligence to be gathered from Twitter and memes. But at least they’d interpret it properly.