Science journalism breakthrough – hyperbole cured, boffins say!

This is about bad journalism.

Yes, I know, pot – meet kettle.

Specifically, it’s about bad science journalism, and the harm it does – to scientists, to Science, and to the community.

CONSUMER WARNING: long thinky article ahead. May contain traces of nuts.

Most science is unintelligible to the public, despite its importance. Everybody knows that our society could not and would not exist without science, yet it’s so pervasive it’s like the air we breathe – essential, yet ignored. Ask anyone and they’ll have an opinion on politics, on finance, on sport, on the weather; but science? Dunno. It’s really important, but I dunno.

So everyone agrees it’s important, yet we recently abolished our Minister of Science and significantly cut the budget for the country’s own scientific research organisation, and there was hardly a peep. Science is so little understood that the public will rise up and fight to defend the ABC, but not the CSIRO. Ironically, the ABC and other media are home to the journalists who should be telling us about science. Some are; some are brilliant; but judged simply on results, it’s clear that many are not.

So what’s good or bad science journalism?

Good science journalism, like any good journalism, should inform and possibly entertain its readers.

Apart from achieving the famous “who, what, when, where, why and how” goals, good science journalism must explain things that would otherwise seem inexplicable to readers. A news report of a car crash doesn’t have to explain the internal combustion engine, but a report of a new engine principle does. Good science journalism must simplify the complex without oversimplifying – as Einstein said, “as simple as possible, but no simpler”. But wait, there’s more.

Good science journalism must do more than simply translate and explain. It must place the work in context, and into perspective; show us where this work fits in the larger picture, what it will change, and how things may develop. Just as importantly, good science journalism should try to talk about what’s important. Let’s face it, it’s 2014 and still no flying car or jet boots! Those stories were probably a bit premature.

Unfortunately, too much science journalism fails to achieve those goals, and in the process does more harm than good.

“Such as?”, I hear you cry.

Harm such as:

  • “breakthrough exaggeration”, by scientists and institutions, where through a combination of specialist myopia, enthusiasm, and self-promotion, they tout their latest findings as really, really important (in a few years, once some additional work is done) and the journalists just uncritically repeat them;
  • “breakthrough fatigue”, for the public, where in order to sell articles, journalists headline every minor advance and hail it as a miracle cure or giant leap forward, often years in advance of any proven benefit, leading to public cynicism or simple disbelief;
  • its corollary, “breakthrough blindness”, for the public, where the truly remarkable is lost in the noise of the mundane because all the superlatives have been used up, leading to
  • “breakthrough complacency”, by the public, or the rarely spoken but widely held belief that any problem must eventually be solved by scientists, because we read about breakthroughs all the time, and, because Science.

The clichés of bad science journalism are well known. Every headline exaggerates. Preliminary research is presented as just moments away from worldwide adoption, and a modest but useful advance is – you guessed it – a breakthrough. Every discovery is a major step forward, a cure, massive in its implications, paradigm-shifting, game-changing, and will usher in a new era of flying cars and jet boots, and cure cancer.

Let’s face it, a headline that reads:

Endogenous Matrix Metalloproteinases 2 and 9 Regulate Activation of CD4+ and CD8+ T cells

is never going to trend on Buzzfeed or Upworthy. It’s not even going to appear in Scientific American or Nature, at least not with that headline. On the other hand, the headline:

Motor Neurone Disease Breakthrough!

or

Motor Neurone Disease Cure Only Years Away, Say Boffins

might actually attract some traffic. Whether it’s an accurate “translation” of the first headline is another matter, and it’s probably a contributor to the “breakthrough” problems I’ve described.

The corpus of scientific knowledge is like a huge building, constantly under repair and extension, and bad science journalism presents a single brick as an entire room, and an architect’s sketch as a finished extension.

So where’s the harm in a bit of exaggeration?

Bad science journalism skews the public perception of Science, and of scientists. On the one hand, readers rapidly habituate to the hyperbole and exaggeration. Repeated promises, never fulfilled, lead the public to see scientific headlines as no different from the “New, Improved” label on their washing powder. This cheapens the dedication, effort and real contribution of scientists to society, and places science on the same trivial footing as marketing and advertising hyperbole.

It also relegates real scientific advance to the level of marketing and minor technology improvement, and confuses the two. It’s probably no exaggeration to say that the average smart-phone user is more impressed by programmable ringtones than by the depth and sophistication of science and technology embodied in the GPS, 4G and even the toughened oleophobic glass in their handset. That’s not to sneer at you and me, but to point at the real failure of science journalism.

In the particular case of medical and drug research, frequent announcements of “cures” and “breakthroughs” are not only misleading, but cruel. They needlessly raise the hopes of people whose lives may literally depend on the truth of that headline. How cruel to think that there is hope, only to find that the article is actually about “promising phase I trial results” – which, translated, means “an 80% probability this drug won’t survive the subsequent phases, and if it does, won’t be available for eight years” – or that the “cure” is for specially bred mice, and has serious side-effects. No doubt the work represents an advance in knowledge, and possibly a way towards a real cure, but this is cruel and misleading journalism, driven by scientists’ need to stand out from the pack and solicit funding, and journalists’ need to sell articles.

At the same time, this “breakthrough” journalism encourages complacency and a lack of respect for the small proportion of research that truly is a breakthrough – those advances that really will change the way we live, or cure disease, or end famine, or allow us to achieve things that were unthinkable 20 years ago. Here’s a quick quiz – do you know who Norman Borlaug is? If you do, you’re in a small minority, and I regard that as another demonstration of the failure of science journalism. Borlaug developed dwarf wheat, and is credited with saving hundreds of millions of lives, yet where are the “Borlaug saves millions” headlines?

An article about a real breakthrough should make us all sit up and be amazed, if only for a moment. It should make us say “Wow, I’m glad we don’t have a Science Minister and rank 15th in the world, behind India, Brazil and Spain, in R&D spending at a massive 1.7% of GDP”. (That was sarcasm, for the sarcasm impaired.) Seriously, if scientists believe that popular science writing generates public support for science as a discipline and for science funding, they’re clearly and empirically mistaken, and science journalism is at least partly to blame.

That’s the harm of bad science journalism.

A further problem that I’m not going to discuss here (because this will already be tl;dr, for certain) is the problem of the social acceptability of scientific results, regardless of how they are presented. That is, the apparent paradox that some scientific results are readily and widely “believed”, while others – such as global warming, on which 97% of the scientific community agrees – are enormously contentious despite apparently having the same objective underpinning.

This is a very important related topic. Scientific knowledge is value free; as Neil deGrasse Tyson observed, “The good thing about Science is that it’s true whether or not you believe in it”. Unfortunately that doesn’t help inform social debate when values, and not facts, determine what people believe. Science journalism must play a part here too, but that’s outside the scope of this article. Inconveniently for my own argument, one of the best articles I’ve read on this subject is by Chris Mooney, a respected popular science writer, in his piece The Science Of Why We Don’t Believe In Science. I’ll explain why it’s inconvenient a bit later. If you prefer a shorter, bad article to Mooney’s long, good one, I’ve written an article myself on values versus facts here, and there are excellent books and articles by Haidt, Kahan, and others. Check it out!

Returning to the main problem of bad science journalism, the obvious questions to ask, then, are: what causes it, and what can we do about it?

First, who are the players, and how do they contribute?

Scientists – just people, with real careers and aspirations, and passionately curious. They’ve studied and worked very hard for a long time to know more about their topic than just about everybody else on the planet. They’re competing for scarce funding in an environment so cut-throat that by comparison ordinary commerce looks like afternoon tea at the Ritz. Their single most important measure of success is published, peer-reviewed scientific papers and the associated impact/citation count. Some would probably like to be rich and famous, as though that’s going to happen to a scientist…

Journalists – just people, with real careers and aspirations. Some, but not all, trained as scientists and share scientists’ curiosity and passion, but have chosen to explain science rather than create it. Their job depends on writing stories that editors, media and blogs will publish, in the belief that people will want to read them. Make no mistake, they’re not publishing peer-reviewed scientific papers, they’re publishing edited articles in blogs, magazines and newspapers for money, and their measure of success is column centimetres or page views. Some would probably like to be rich, as though that’s going to happen to a journalist…

Scientific Institutions – not people. The research institutes, universities, hospitals, and private laboratories that employ scientists exist to “do” science, yet their focus is a bit different. They don’t have the ideas, do the work, or directly publish the results. They are as much concerned with their public profile and reputation as with the results they produce. You’ve heard of Harvard, Oxford, MIT, Thomas J. Watson, Fraunhofer, Karolinska, McGill, and Mt. Sinai? In many cases they solicit funding, and have Public Relations staff creating press releases for science journalists. They want to project prestige and success, and to stay in the public eye.

Science – in its multiple forms: as a body of knowledge that we are hopefully increasing; as the consensually agreed method by which we seek and agree on knowledge; and as the socially approved activity performed by scientists and paid for by society or by private enterprise. Science, the collected, shared body of knowledge, just is, and isn’t affected by journalism, although it’s what journalists write about. Science, the way of seeking and agreeing on knowledge, doesn’t and shouldn’t have anything to do with journalism (except for articles about the scientific method). The problems arise when science journalism either misrepresents the scientific facts, or fails to separate the concerns of working scientists and institutions from the facts.

The Public – or you and me, the ultimate beneficiaries (or victims) of the application of Science, and the work of scientists. From time to time we are curious about what scientists are doing, or how something works, or what climate change is about, or see a headline “Cancer cured!” that grabs our attention.

So who’s to blame? The public, for demanding cheap oversimplified entertainment and titillation? The journalists, for writing misleading, attention-grabbing headlines and articles to make a living? The scientists, for misrepresenting the importance of their work? The institutions, for constantly feeding the journalists material to improve their profile? Or funding bodies, for allowing public opinion and involvement to influence decisions about scientific merit?

Sadly, all of them.

In the end, though, scientists just produce Science. The media mould public opinion just as much as public opinion moulds the media. Press releases from scientific institutions are just another source of information. It is the science journalists who, in choosing the stories to write and acting as translators between the scientific community and the public, exert considerable power and influence – or, more frequently, abuse that influence for a quick buck.

The relationship between journalists, publishers and the public is easy to understand. The relationship between journalists and scientists, less so.

I recently read a tweet from Upulie Divisekera pointing to an article for scientists about science journalism.

Ms. Divisekera is a scientist – a molecular biology postgraduate doing her PhD at Monash University. She’s also passionate about science communication, and is coordinator of the very popular Twitter group @realscientists which is currently in the running for an international Shorty Award. I admire what she and the @realscientists group are doing, and this article isn’t a criticism of their efforts. Her tweet just triggered my thinking.

The linked article made me uneasy, and was the proximal cause (see, sciency words) of this article. It’s about advice from Chris Mooney, the science writer I mentioned earlier, telling scientists how they can “get the media’s attention”.

Hang on! How, or why, did scientists and their work become like film stars?

Why do scientists need the media’s attention, and why is it scientists’ responsibility to seek it out, rather than the other way round? Shouldn’t journalists do their own legwork and choose stories that they think are important to their readers? Surely scientists should concentrate on doing research and publishing it in expert journals, not competing to get popularised versions of it into blogs, glossy magazines, and the weekend sections of (fast disappearing) newspapers?

There’s a fundamental tension here. Scientific research is nearly always complicated and specialised, and usually requires years of study just to understand the papers. Scientific papers are not reviewed or judged in the popular press; they are subject to scrutiny by other well-qualified scientists. The work they describe, if useful, is often repeated and confirmed by other researchers. The heart of science, the scientific method, is not based on public popularity, page views or column centimetres.

It was estimated that during 2006 about 1.3 million scientific papers were published. That’s around 3,500 a day. That would translate into a lot of popular science articles, probably rather more than you or I could read. What’s more, it’s fair to say that only a small fraction of those papers – possibly none, on a given day – is actually newsworthy by itself.

The final problem with science journalism is that usually the importance of research, particularly basic research, is not apparent until some time later. It’s very rare (can you say “Higgs boson”?) for a single experimental result or theory confirmation to be understood as revolutionary on the day it happens, or when it’s first published. Usually that’s a judgment that can only be made some time afterwards – which is not when Mooney is proposing that scientists “get the media’s attention”. When we do understand the importance and impact, what should be translated is not a single paper but a whole body of research – the building, not the brick.

The science journalist should then be adding the social and economic context, perspective and analysis in a way that the original research paper does not, and that scientists may not be interested in doing. This is information that’s interesting and valuable to the public, and isn’t available from the raw research. The science journalist has a challenging and very useful role to play – one that goes well beyond re-releasing press releases about breakthroughs from university laboratories.

So why is Chris Mooney teaching scientists how to make their research, which is probably about 10 minutes old, “get the media’s attention” and isn’t this advice likely to contribute to the “breakthrough” problems I described earlier?

The simple and pragmatic answer is money. Funding. Funding, institutional prestige, and in the larger context, research policy. Although the activities of science use the scientific method, and its results are judged by peer review and replication, the value of each paper is not measured in citation counts alone. The struggle for funding and institutional prestige happens well and truly in the real political world, and in that world publicity and public opinion matter. A single article in a popular newspaper will come to the attention of politicians or science bureaucrats, and to the attention of their constituents or political masters.

This aspect of science journalism has nothing to do with translating complex research into publicly accessible information. It’s about marketing, and marketing is all about “New, improved”, “breakthroughs” and “cures”. Isn’t this where we came in?

So while on the face of it Mooney’s advice is practical and harmless, in reality I think it hides several harmful ideas. First, that scientists should spend time and effort producing “tabloid ready” graphs and copy, ready for lazy science journalists to use without any further work. Second, that instead of relying on the agreed, consensus-based methods of peer review and criticism, scientists should gain support and possibly funding by trying to make their research sound important to the public – a public that cannot have the depth or breadth of knowledge to judge whether particular research is good or flawed. Inevitably, the simplification that occurs will be to portray everything as (you guessed it) a “breakthrough”.

This works for the scientist or institution, who want recognition and approval, and for the journalist, who wants articles that sell. It may win one round of funding, but in the end it creates a public perception that harms science rather than helping it.

Chris Mooney is, as I mentioned, an extremely good science writer. But in providing this advice, and in advancing this form of collaboration between scientists and journalists, he is doing neither scientists nor us, the public, any good.

In summary, there’s a great deal that good science journalism, or “popular science”, can do, both in informing society as a whole about the implications and promises of current research, and in re-affirming the vital role that science plays in our society.

Bad science journalism, on the other hand, only debases science and its value, and in the end does more harm than good. Current events and attitudes would suggest it’s winning.

Science journalists’ responsibilities are clear. For the rest of us, if we care about Science and scientists, we should call this out. The scientists among us should not feed a cycle that ultimately harms them and us. We, the public, as consumers of good and bad journalism, have a responsibility to let journalists and publishers know, and to speak up when they’re feeding us marketing and rubbish.
