Can literature reviews ever provide conclusive evidence?
Decision makers, scientists, journalists and the public often base their judgements on particular issues on summaries of the research done to clarify those issues. But despite its authoritative-sounding title, a literature review is depressingly often a starkly dishonest way of trying to resolve a question about which a lot of research has been done. Most literature reviews simply measure the height of the piles of all the available research, good, bad, robust or flimsy, that supports or rejects a proposition. These summaries then offer an apparently neutral, and therefore apparently conclusive, opinion on a controversial issue, based on nothing more than the ability to count blindfold and the height of the pile.
But this is a fallacy, a loophole in modern political consciousness and credulity, in which studies, research reports, speeches, opinions, random references found on the internet (both real and fictional), TV programmes and newspaper articles all increasingly count as real research, in a media landscape that still retains some influence, albeit one waning fast as people lose faith in anything but celebrity and football. The sum total of all these references, for and against the proposition they seem to investigate, is weighed, counted and published, with the clear implication that the result is a neutral, unbiased conclusion on the issue. Sometimes, incredibly, literature reviews even include other literature reviews in the ceremonial march-past. While this makes an admirably Borgesian labyrinth, it signifies nothing, logically or scientifically. It is actually just a disgrace.
Literature reviews, and those who peddle and publicise them, suffer from a serious shortcoming: the fundamental information on which they are based does not change unless genuinely new and reliable data is created by new research. Literature reviews lack an intelligent assessment of the research quality of each of the studies reviewed, and when such an assessment is available, its robustness is rarely investigated properly. Each document in the review acquires credibility simply because it was accepted as valid at or after the time it was published.
But the same research may have been discredited since, as new evidence came to light, as it was replaced by something more robust, exact or scientific, or as other necessary corrections were made. Like a fly in amber or a framed share certificate, the faulty or disputed original research not only remains but rides again, and newer reviews rarely check for evidence of such changes wrought by time, tide, logic or accuracy.
When literature reviews are used to summarise a poor or complex evidence base without a robust critique of the quality of the studies they select, they can easily become far less than the sum of their parts. In fact, reading just the highest-level conclusions of many of these reviews can be misleading, conveying confidence where there is uncertainty and encouraging the audience to draw the wrong conclusions.
We need new studies that answer the central questions posed in the whirlpool of arguments around the issue at hand, not more syntheses of the same inadequate data sets, or reviews that cannot discern the quality of the evidence they use.
Consequently, when there is systematic bias in how evidence is generated and interpreted, more evidence can sometimes lead to less rational decisions. The scientific community needs to build much more rigorous evidence than it has hitherto seemed capable of doing. It needs to face up to the challenge that there is no substitute for properly controlled experimental studies carried out at appropriate scales and with sufficient data, and that many fictions are currently hawked around as fact.
It would also help the cause of independent research if researchers, campaigners, companies or institutions with known agendas shared their proposed methodologies with interested audiences before any original research was commissioned. The finished research might then live a little longer than a few weeks of controversy and mutual counter-declamation.
But social media is churning up the playing field of academia before our very eyes, converting it from a relatively level terrain into troubled areas of deep valleys, jungles, sudden chasms, cliffs and proverbial rivers running through them.
Social media likes literature reviews for the same reason it likes epidemiology, and such reviews are becoming more and more popular in the stressed and pressed academic environment. They are easy, appear to provide conclusive answers, and can effortlessly be headlined in under 140 characters.
With the new academic landscape being promised, we will soon be rejecting all research as self-serving, all science as false, and all logic as madness. And literature reviews will crown celebrity interviews and Pinterest boards as the new heavyweights in the scales of academic justice.