Making sense of scientific information

While I was in the UK recently, I picked up a copy of Ben Goldacre’s book Bad Science on a tip from a friend.  Ben is a medical doctor and writer for The Guardian newspaper—and a vociferous crusader against what he sees as the misuse and misrepresentation of science.  And when it comes to communicating why science matters in a highly accessible way, he has few peers.

If you read my recent “Five Good Books” blog, you will already have seen a micro-review of Bad Science, which can be summed up pretty succinctly in three words: “buy this book.”

Bad Science is a great read… which is probably why it topped the popular science charts in the UK when it first came out (although I should caution that despite it being endorsed as “quite possibly the funniest” book you’ll read this year, it is more likely to leave you incensed at the blatant and dangerous abuse of science in some quarters).

It is also an essential read for anyone in the business of making science-informed decisions.

And in this context, there is one chapter in particular that should be compulsory reading matter for anyone involved in generating, interpreting or using scientific information.

This is chapter 12: “Why clever people believe stupid things.”

The chapter is prefaced rather fittingly by a quote from Zen and the Art of Motorcycle Maintenance by Robert Pirsig:

“The real purpose of the scientific method is to make sure nature hasn’t misled you into thinking you know something that you actually don’t know.”

As Goldacre explains, “When we reason informally—call it intuition if you like—we use rules of thumb which simplify the problems for the sake of efficiency.”  But these shortcuts are vulnerable to misdirection—we can be fooled into thinking reality is other than it is.

The problem is, we have no internal reference for what is real.  When we get something wrong and the consequences are obvious, we self-correct pretty fast.  But where the consequences of misunderstandings are not direct or are not clear, things get more difficult—especially as we are hard-wired not to question our perceptions of reality.

So how do we know when we are deluding ourselves (and as a consequence, making potentially dangerous decisions)?  The answer, argues Goldacre, is in the scientific method—because it provides a systematic approach to testing our assumptions and correcting our perceptions.  Goldacre writes:

“When our cognitive system—our truth-testing apparatus—is fooled, then, much like seeing depth in a flat painting, we come to erroneous conclusions about abstract things.  We might misidentify normal fluctuations as meaningful patterns, for example, or ascribe causality where in fact there is none.

“These are cognitive illusions, a parallel to optical illusions.  They can be just as mind-boggling, and they cut to the core of why we do science.”

Goldacre goes on to identify five traps people fall into when evaluating information, traps that lead to misunderstanding, misinterpretation and, at the end of the day, bad decisions (the first trap is illustrated in the sketch just after the list):

  1. We see patterns where there is only random noise
  2. We see causal relationships where there are none
  3. We overvalue confirmatory information for any given hypothesis
  4. We seek out confirmatory information for any given hypothesis
  5. Our assessment of the quality of new evidence is biased by our previous beliefs
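
As a taste of that first trap, here is a minimal sketch (my illustration, not from the book) in Python with NumPy.  It generates a pile of data that is pure noise by construction, then compares enough of it that a “striking” correlation is almost guaranteed to emerge by chance alone:

    # A minimal sketch of trap 1 (not from the book): compare enough
    # random, patternless data and apparent "patterns" appear anyway.
    import numpy as np

    rng = np.random.default_rng(seed=42)

    n_series = 200     # pretend these are 200 unrelated measurements
    n_points = 30      # with 30 observations each
    data = rng.normal(size=(n_series, n_points))  # pure noise by construction

    # Correlate every pair of series and pick out the strongest one.
    corr = np.corrcoef(data)        # 200 x 200 correlation matrix
    np.fill_diagonal(corr, 0.0)     # ignore each series' self-correlation
    i, j = np.unravel_index(np.abs(corr).argmax(), corr.shape)

    print(f"Strongest 'pattern': series {i} vs series {j}, r = {corr[i, j]:+.2f}")
    # Typically reports |r| > 0.6: an apparently strong relationship
    # between two streams of numbers that are, by design, unrelated.

Run it a few times with different seeds: the “strongest pattern” is always there, and always meaningless.  That is exactly the kind of cognitive illusion Goldacre describes.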

The chapter, which is brief (only 14 pages), needs to be read in full to appreciate how these traps arise and how they can be circumvented.  But even without the accompanying text, recognizing the traps is a critical step toward avoiding them.

In the chapter, Goldacre has journalists firmly in his sights as some of the worst offenders for falling into these traps.  But on reading through it, what struck me most was how easy it is for other users of science-based information to get things wrong.  Scientists aren’t immune—especially when they are communicating their work to audiences outside their field.  Neither are policy advisers and makers, who have been known occasionally to conveniently overlook inconvenient data!

The bottom line here is that clever people are quite capable of believing stupid things, and that without good science-based checks and balances in place, bad decisions can result that may cause a lot of damage.

The solution: Buy, beg or borrow Ben’s book, read it, and use it.  And get those checks and balances working – however you are using scientific knowledge!

___________________________________________

Notes

Ben Goldacre blogs at www.badscience.net.  The website is also a great portal into the world of bad science!

Bad Science (Publisher: Fourth Estate – London) isn’t directly available in the US, but can be ordered from amazon.co.uk—be sure to use the click-through on Ben’s Bad Science website.

If you can’t get hold of the book, you could always lobby Ben and his publisher to make chapter 12 available for free—as a service to humanity ☺


Mea Culpa – why “clever” people write stupid things!

Sharp-eyed readers will have noticed that in the original version of this post, “Ben Goldacre” mysteriously transmuted to “Ben Goldring” after the first paragraph (thanks Devan at Holford Watch for the tip).

I can only lay blame at the feet of too many distractions, poor copy editing and just plain stupidity – and apologise unreservedly to Ben!

And confirm that this is now a Goldring-free posting.

(11/15/08)
