Is it science or is it a marketing machine?
This press release with psychedelic art tells us land regions will warm by more than the global average, because oceans are slower to heat. No kidding. They use more broken models to breathlessly talk about being locked in to a 1.5 °C rise — “more than preindustrial times”. How scared do we need to be about a 1.5 °C rise? It’s not just locked in, it’s already here. NASA chief climate scientist Gavin Schmidt says so: “2016 so far is about 1.5 degrees Celsius (2.7 degrees) warmer than pre-industrial times.” Since Gavin is talking “globally”, the extra rise over land above and beyond that is not so much programmed in as pre-baked.
The art might be the most original part of the paper.
Let’s redesign those cities:
The results of the new study have implications for international discussions of what constitutes safe global temperature thresholds, such as 1.5°C or 2°C of warming since pre-industrial times. The expected extra warming over land will influence how we need to design some cities.
Human civilization already lives in towns from −50 °C to +40 °C. I reckon we’ll manage a 1.5 degree [...]
John Ioannidis paints a picture of a vast hive of researchers all pushed to publish short papers that are mostly a waste of time. The design is bad, the results useless (even when meta-collated with other badly designed studies). Basically, humankind is pouring blood, sweat and tears into spinning wheels in medicine — just paper churn. Most papers will never help a patient.
Ioannidis wants rigor – full registration before the study, full transparency afterwards, fewer studies overall, but with better design. Astonishingly, fully 85% of what is spent on clinical trials is wasted. It’s really a pretty big scandal, given that lives are on the line. I can’t see the media or pollies joining the dots. Imagine how many quality life-years are being burnt at the stake of the self-feeding Science-PR-Industry.
And this is clinical medical research, where standards are higher than in many other scientific areas and where there are easily defined terms of success unlike “blue sky” studies. Ioannidis doesn’t say it directly, but his description of the effect current funding has (which is almost all government based) almost guarantees that researchers will be wasting time in the paper churn — fast, short papers of little [...]
This example below shows the dangers of cherry picked and buried data. It shows how great news and joy can be reported from rancid results, and the only protection against this is open access. When the taxpayer funds research that is not fully and transparently public, and immediately available, the people are funding PR rather than science. “Peer review” does little to stop this, little to clean up the mess after it happens, and the truth can take years to be set free.
Ten percent of teenagers taking an anti-depressant harmed themselves or attempted suicide. This was ten times the rate of the teens on the placebo. The results of this clinical trial were published in 2001, but those alarming statistics were not reported. The drug went on to be widely used. A new reanalysis of the data, reported in the BMJ, revealed the dark and hidden dangers. The company that funded the research, GlaxoSmithKline, has already faced record fines of $4.2 billion. The Journal of the American Academy of Child and Adolescent Psychiatry won’t retract the paper.
There are many ways to hide data. In this case, the results of the trial include 80,000 records which were [...]
Was that a half-truth or a lie by omission? Trick question…
Malcolm Kendrick reports on a new study that he says should “shake the foundations of medical research” but laments that it almost certainly won’t.
In the year 2000, the US National Heart, Lung, and Blood Institute (NHLBI) insisted that all researchers register their “primary aim” and then later their “primary outcome” with clinicaltrials.gov. This one small change in the way medical studies were reported transformed the “success” rates in peer reviewed papers. Before 2000, fully 57% of studies found the success they said they were testing for, but after that, their success rate fell to a dismal 8%. When people didn’t have to declare what their aim was, they could fish through their results to find some positive, perhaps tangential association, and report that as if they had been investigating that effect all along. The negative results became invisible. If a diet, drug or treatment showed no benefit at all, or turned up bad results, nobody had to know.
The world of peer reviewed climate research: like a universe of dark matter
It’s not like climate science suffers from unpublished “negative results” — no, it’s more [...]
Joy. It’s another profoundly unscientific “consensus” study. At least one person thought that the 97% PR figure was not enough, and that a magic 99.99% would sway the crowds. As if there was even one fence-sitter out there, waiting, saying, “97% was too low…”
For the herding type of human, “consensus” is magnetically convincing. Not so for the independent minds who have seen prediction after prediction fail. If a 97% consensus on a highly complex, immature science is difficult to believe, a 99.99% one is comic. More of the same unconvincing stuff will do nothing except set off the BS meter. This new study will sway no one. The supernatural purity of it will work against “The Cause”.
A consensus is the one and only argument of the unskeptical, and they are doing it to death.
One fan, James Powell, was so enthused he spent nine months reading titles and abstracts of 24,000 papers, and found only four scientists (4!) who didn’t agree with the consensus. Some 69,402 other scientists apparently endorse “the consensus” (whatever it is) because they used the terms “climate change”, or “global warming” and they didn’t also make a clear statement that it was false, or [...]
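For what it’s worth, the arithmetic behind a headline figure like 99.99% is trivial. A minimal sketch, using only the counts quoted above (4 dissenting scientists, 69,402 counted as endorsing) and assuming the method is simply “authors not explicitly rejecting, divided by total authors”:

```python
# Worked arithmetic for the quoted consensus figure.
# Counts are the ones reported in the text; the counting method
# (endorsers / total authors) is an assumption for illustration.
dissenting = 4
endorsing = 69_402  # authors who did not clearly state disagreement

total = dissenting + endorsing
consensus_pct = 100 * endorsing / total
print(f"{consensus_pct:.2f}%")  # 99.99%
```

Which is the point: when “endorsement” is defined as merely not objecting, the denominator does all the work and the percentage is almost guaranteed to be near 100.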
Richard Tol has an excellent summary of the state of the 97% claim by John Cook et al, published in The Australian today.
It becomes exhausting to just list the errors.
Don’t ask how bad a paper has to be to get retracted. Ask how bad it has to be to get published.
As Tol explains, the Cook et al paper used an unrepresentative sample, can’t be replicated, and leaves out many useful papers. The study was done by biased observers who disagreed with each other a third of the time, and disagree with the authors of those papers nearly two-thirds of the time. About 75% of the papers in the study were irrelevant in the first place, with nothing to say about the subject matter. Technically, we could call them “padding”. Cook himself has admitted data quality is low. He refused to release all his data, and even threatened legal action to hide it. (The university claimed it would breach a confidentiality agreement. But in reality, there was no agreement to breach.) As it happens, the data ended up being public anyhow. Tol refers to an “alleged hacker” but, my understanding is that no hack took place, and the [...]
How to separate creative genius from creative mistakes? Not with peer-review. It is a consensus filter.
Classical peer review is a form of scientific gatekeeping (it’s good to see that term recognized in official literature). Unpaid anonymous peer review is useful at filtering out some low-quality papers, but it is also effective at blocking the controversial ones which later go on to be accepted elsewhere and become cited many times, the paradigm changers.
And the more controversial the topic, presumably, the worse the bias is. What chance would anyone have of getting published if, hypothetically, they found a consequential mathematical error underlying the theory of man-made global warming? Which editors would be brave enough to even send it out for review and risk being called a “denier”? Humans are gregarious social beings, and being in with the herd affects your financial rewards, as well as your social standing. Even high ranking science journal editors are afraid of being called names.
Mark Peplow discusses a new PNAS paper in Nature:
Using subsequent citations as a proxy for quality, the team found that the journals were good at weeding out dross and publishing solid research. But they failed — quite spectacularly — [...]
If you suffer from an uncontrollable urge to claim that peer review is a part of The Scientific Method (that’s you Matthew Bailes, Pro VC of Swinburne), the bad news just keeps on coming. Now, we can add the term “Peer-Review Rigging” to “Peer-Review Tampering” and “Citation Rings”.
Not only do personal biases and self-serving interests mean good papers are slowed for years and rejected for inane reasons, but gibberish gets published, and in some fields most results can’t be replicated. Now we find (is anyone surprised?) that some authors are even reviewing their own work. It’s called Peer-Review-Rigging. When the editor asks for suggestions of reviewers, you provide pseudonyms and bogus emails. The editor sends the review to a gmail type address, you pick it up, and voila, you can pretend to be an independent reviewer.
One researcher, Hyung-In Moon, was doing this to review his own submissions. He was caught because he sent the reviews back in less than 24 hours. Presumably if he’d waited a week, no one would have noticed.
Nature reports: “THE PEER-REVIEW SCAM”
Authors: Cat Ferguson, Adam Marcus and Ivan Oransky are the staff writer and two co-founders, respectively, of Retraction Watch in [...]
An excellent article in The New Yorker: Is Social Psychology Biased Against Republicans?
It’s an article about the failings of peer review and research design in psychology due to the dominance of one particular political ideology (rather than having a spread more representative of the total population). You won’t be shocked to find there is a dominance of liberal left-leaning views in the profession. The paper it discusses is by Jonathan Haidt and co-authored by our friend Jose Duarte — the psychology PhD candidate and blogger who entertainingly and comprehensively dissected Lewandowsky on his blog: Do we hate our participants?
It will be no surprise that controversial psychology papers (which disagree with the reviewer’s world view) are usually treated harshly — even when the data is just as strong. So, thinking of another field we know, what does it mean for research design and peer review when 97% of certified climate scientists hold one world view? (They not only agree on the scientific hypothesis but on the political action as well — and they boast about that?) What chance does a “controversial” paper have? Has anyone done a study on the political diversity of official climate scientists? There are plenty [...]
Most of the results reported in peer reviewed literature in medicine are mere artefacts of poor methodology, despite being done to more exacting standards than climate studies. There are calls in the medical literature for all data to be made public and for stricter P-value thresholds to be required. (Yes please, say skeptics everywhere). Miller and Young recommend that observational studies not be taken seriously at all until they are replicated at least once. That would have ruled out the original Hockey Stick two times over.
Even the absolute best medical papers are wrong 20% of the time, but mere observational studies (like climate research) failed 80–100% of the time. These studies of papers demonstrate why anyone who waves the “Peer Review” red flag is in denial of the evidence — “Peer Review” is not part of the scientific method. It’s a form of argument from authority. A fallacy of reasoning is still a fallacy, no matter how many times it is repeated. Those who claim it is essential or rigorous are not scientists, no matter what their government-given title says.
GEN, Genetic Engineering and Biotechnology News, May 1, 2014, Point of View
Are Medical Articles True on Health, [...]