Statisticians might be the key to solving the reproducibility/replication crisis

  • 28 Sep 2021 12:46 PM
    Reply # 11125963 on 10957924

    The Reproducibility Crisis, and More

    Notes and links that follow relate to the 28 September Queensland Statistics Society meeting

    Is 85% of health research funding really wasted?
    blogs.bmj.com/bmj/2016/01/14/paul-glasziou-and-iain-chalmers-is-85-of-health-research-really-wasted/

    Retracted paper, Vaccines, June 2021: "The Safety . . ."
    healthfeedback.org/claimreview/retracted-study-misused-statistics-and-adverse-event-reports-to-claim-that-covid-19-vaccines-dont-offer-clear-benefit-and-caused-deaths/

    A recent preprint similarly misused publicly available data
    www.medrxiv.org/content/10.1101/2021.08.30.21262866v1
    See also
    https://sciencebasedmedicine.org/peer-review-of-a-vaers-dumpster-dive/

    Two papers that used observational data

    Paluch et al: Steps per Day and All-Cause Mortality . . .
    doi:10.1001/jamanetworkopen.2021.24516
    An article in The Conversation discussed this study without the caveats, and was very widely reprinted

    Midwife-Led vs Medical-Led Models of Care . . .
    journals.plos.org/plosmedicine/article?id=10.1371/journal.pmed.1002134#sec023
    Among other issues, this used an area-based measure (60-100 people) to adjust for social deprivation.

    Journals need to up their game

    Occasionally review the quality of statistical evidence in published papers.
    cf. Maindonald and Cox, NZ J Agr Res, 1984: "Use of statistical evidence . . ."

    To study the production of knowledge …

    Sabine Hoffmann drew our attention to this web page

    https://halffman.org/

    I joined the reform movement for Dutch Universities (H.NU) and co-wrote this call to action, the Academic Manifesto, and discovered that many more people all over the world are very unhappy about what is happening to our dear universities.
    [Willem Halffman]

    JM's collection of ~100 pages of "educating gossip"
    github.com/jhmaindonald/dataThoughts/blob/master/_book/dataSense-book.pdf
    This is a work in progress.

    John Maindonald.

    Last modified: 29 Sep 2021 6:02 AM | John Maindonald
  • 28 Sep 2021 11:32 AM
    Reply # 11125848 on 11098393
    Chris Lloyd wrote:

    Hi John, I recently read a book called Science Fictions about replication and incentives for fraud in science. I highly recommend it.

    Best

    CL

    Yes, I strongly recommend it.  You might have noticed that I mentioned it in my messages on Aug 26 and Aug 29.  It has many things to say that are highly relevant to the discussion at the Queensland Branch meeting tonight (AEST 5pm):

    https://qut.zoom.us/j/81374460558?pwd=eC9XVUc2NTlPZmhiU0NGNTNva1U0UT09

    John

    Last modified: 28 Sep 2021 11:43 AM | John Maindonald
  • 18 Sep 2021 12:21 PM
    Reply # 11098393 on 10957924

    Hi John, I recently read a book called Science Fictions about replication and incentives for fraud in science. I highly recommend it.

    Best

    CL

  • 15 Sep 2021 4:52 PM
    Reply # 11089807 on 10957924

    The Science of Science --- many numbers, what do they mean?

    I have now taken a more careful look at "The Science of Science" book, and revised my comments accordingly.

    One might hope that a book with the title "The Science of Science" would address the concerns that the literature on the reproducibility/replication crisis highlights.  It is then disappointing that a recent book with that title, by Dashun Wang and Albert-László Barabási, gives the matter scant attention.  I found nothing on the important role that the sharing of data and code has played in advances in genomics, climate science, earthquake science, etc.  Areas where the gains are less obvious (my comment) need to follow suit.  The authors do comment on the benefits that flow from having larger groups of scientists working together.  Where the effect is to bring together diverse skills (including analysis skills) and data sources, I'd judge that the critique that really matters mostly happens before papers are submitted for publication.

    The blurb on the cover claims:

    This is the first comprehensive overview of the 'science of science,' an emerging interdisciplinary field that relies on big data to unveil the reproducible patterns that govern individual scientific careers and the workings of science. It explores the roots of scientific impact, the role of productivity and creativity, when and what kind of collaborations are effective, the impact of failure and success in a scientific career, and what metrics can tell us about the fundamental workings of science. The book relies on data to draw actionable insights, which can be applied by individuals to further their career or decision makers to enhance the role of science in society. With anecdotes and detailed, easy-to-follow explanations of the research, this book is accessible to all scientists and graduate students, policymakers, and administrators with an interest in the wider scientific enterprise.

    The attention is on papers published, citations, and patents, together with commentary on very high impact work from scientists whose achievements were exceptional.  There is scant attention to what these counts might mean.  Is more really better, or would the public benefit be better served, at least in laboratory experimental work, by fewer and more carefully considered papers?   Some points of consequence do emerge.  All publicity benefits citation counts, even where papers are identified as seriously flawed.  Those who narrowly miss out on US NIH funding and remain in the field publish papers that, in the long run, make greater "impact".  Papers that are initially rejected by referees end up making greater impact.  Little or nothing is known of what insights and ideas may have been lost because of early career failures.  The US spends around 55% of its biomedical funding on genetic approaches, even though genome-based variation can explain only 15-30% of disease causation.  Environmental effects and diet get even less.  There is a suggestion that incentives and institutions are needed that encourage researchers to focus their work more towards societal benefit.  After Covid-19, would they still say this?

    For experimental work, the authors do note the file drawer problem.  "Instead of being discarded, negative results should be shared, compiled, and analyzed."  The only attention to fraud is a comment, incidental to discussion of the media's role, on the Wakefield MMR scandal.

    There is a warning of the potential of algorithms (AI) to amplify and perpetuate human biases, and yes, that comment does apply to the tools and metrics that the science community (and funding agencies?) build.

    Last modified: 15 Sep 2021 5:07 PM | John Maindonald
  • 29 Aug 2021 9:04 AM
    Reply # 10965067 on 10957924

    Do science funders and administrators pay any attention to the now very well documented issues with reproducibility, driven or certainly exacerbated by the mindless drive to publish to which Teresa has alluded, with (almost?) no attention to quality issues?  Have they read Ritchie's book, or anything of that ilk?  Do they care about the quality of what results?  The public good would be much better served by a much smaller number of papers whose results had (where this is relevant) been replicated, and/or been pondered over for a bit longer.

    Too many of those who might promote change are comfortable with current systems.  Pressure is needed from the outside.  It would be good to see high quality science journalism from writers who understand the system, the evidence, and the waste that is generated, with columns that loudly (and, one hopes, fairly) attack the problems.  As the blurb for the "Science Fictions" book says:

    the current system of funding and publishing science not only fails to safeguard against scientists' inescapable biases and foibles, it actively encourages them.

    One does not want to tar all areas with the same brush.  As noted, the chief problem is in areas where there is not extensive cooperation to critique work before it is submitted for publication.

    Last modified: 15 Sep 2021 3:07 PM | John Maindonald
  • 28 Aug 2021 11:53 AM
    Reply # 10962652 on 10957924

    Thanks for making the preprint available, Adrian! 

    It seems like we all need to do our part within our institution. Here at ANU we are fortunate to have a small group of biostatisticians sitting within the Joint Colleges of Science. Our courses and workshops for HDR students and postdocs focus on reproducibility, statistical thinking and using R to create reproducible workflows. 

    The challenges we face include "educating" labs to adopt best practice in experimental design, FAIR data, good data management and reproducible analysis workflows. Whilst many lab leaders agree in principle that these are good practices to adopt, time pressures and a lack of pressure from university leadership make it difficult for us to help labs create infrastructure that will benefit them and their research.

    It's frustrating to me when the university executive run their "town hall" meetings in the Science faculties and promote grant writing and increasing publications, but never mention reproducibility. I feel like we're standing on the sidelines waiting for that opportunity to promote statistical thinking in science whilst the university executive is pressuring its researchers for short term profits. 

  • 27 Aug 2021 6:45 AM
    Reply # 10959154 on 10957924

    And this paper is fittingly also available as a preprint here.

    Statisticians have been involved in the reproducibility crisis since before it was even called that, especially through the work of Doug Altman in highlighting bad practice and in encouraging journals to improve the reporting of statistical methods and results. Sadly there are still massive amounts of bad practice and even fraud, so more work from statisticians is needed. But who is willing to fund this important work? Doug struggled for years to get funding for the EQUATOR network. With current budgets so tight it's hard to convince funders to spend money on this kind of "negative" research.

  • 26 Aug 2021 6:56 PM
    Message # 10957924

    The latest issue of Significance has the article:
    "Statisticians, roll up your sleeves! There's a crisis to be solved"
    Heidi Seibold, Alethea Charlton, Anne-Laure Boulesteix, Sabine Hoffmann
    Significance, Volume 18, Issue 4.  First published: 28 July 2021

    The authors are concerned about how to deal with the sorts of issues that are raised in Ritchie's "Science Fictions".  It is a bit ironic that the ARC is so down on references to pre-prints, which (at least in some places) allow some preliminary scrutiny of what may finally appear, given the sorts of issues that Ritchie, and these authors, highlight for a lot of what does get published.

    SSAI members should be able to access it via the SSAI website, but it is a pain to log in and then navigate to find the article.

    Last modified: 27 Aug 2021 12:23 AM | John Maindonald