Reproduce or it didn’t happen: why replicable science is better science
(Artifex.News, 26 October 2023)


Since I was a little boy, like many Bengalis of my generation, I have been obsessed with Satyajit Ray’s tales about the fictional scientist Professor Shonku. His magical inventions include “Miracurall,” a drug that cures all illnesses except the common cold; “Annihillin,” a pistol that can exterminate any living thing; the “Shonkoplane,” a small hovercraft built on anti-gravity technology; and the “Omniscope,” which combines a telescope, a microscope, and an X-ray-scope. Evidently, Prof. Shonku was a brilliant scientist and inventor.

Or was he?

Reproducible research

A genuinely disheartening feature of Shonku’s powerful and useful inventions was that none of them could be produced in a factory: only he was capable of manufacturing them. Later, after being exposed to the scientific community, I understood that for this precise reason Prof. Shonku couldn’t be considered a ‘scientist’ in the strictest sense of the word. Reproducibility is the essence of scientific truth and invention.

In his book The Logic of Scientific Discovery (first published in 1934), the Austrian-British philosopher Karl Popper wrote: “Non-reproducible single occurrences are of no significance to science.” That said, in some fields – especially the observational sciences, where inferences are drawn from events and processes beyond the observer’s control – irreproducible one-time events can still be a significant source of scientific information, so reproducibility is not always a critical requirement.

Consider the 1994 collision of Comet Shoemaker-Levy 9 with Jupiter. It offered a wealth of knowledge about the dynamics of the Jovian atmosphere as well as preliminary proof of the danger posed by meteorite and comet impacts. One may also recall Stephen Jay Gould’s famous observation in his brilliant 1989 book Wonderful Life: The Burgess Shale and the Nature of History: if one were to “rewind the tape of life,” the consequences would surely be different, and nothing resembling us would likely exist.

“We’re all biased”

However, scientists working in most other disciplines have no such leverage. In fact, reproducibility – or the lack thereof – has become a pressing issue in recent years.

In a 2011 study, researchers evaluated 67 medical research projects and found that just 6% were fully reproducible, whereas 65% showed inconsistencies when evaluated again. An article in Nature on October 12, 2023, reported that 246 researchers examined a common pool of ecological data but came to significantly different conclusions. The effort echoes a 2015 attempt to replicate 100 research findings in psychology, which succeeded for less than half of them.

In 2019, the British Journal of Anaesthesia conducted a novel exercise to address the “over-interpretation, spin, and subjective bias” of researchers. One paper had reported a potential link between higher anaesthetic doses and earlier deaths among elderly patients. However, analysing the same data in another 2019 paper in the same journal, different researchers found different death rates. The new paper also argued that there weren’t enough trial participants to reach that conclusion – or any conclusion at all – about mortality.

The purpose of such an exercise – publishing two articles based on the same experimental data – was to broaden the scope of replication attempts beyond just techniques and findings. The lead author of the original paper, Frederick Sieber, commended the approach, saying, “We’re all biased and this gives a second pair of eyes.”

Affirming the method

Replicating other people’s experiments can be messy – but so, it turns out, is trying to replicate one’s own. According to an intriguing analysis published in 2016, based on an online survey of 1,576 researchers conducted by Nature, more than 70% of researchers had tried and failed to reproduce another scientist’s experiments, and more than half had failed to reproduce their own.

The Oxford English Dictionary’s definition of “reproducibility” is “the extent to which consistent results are obtained when produced repeatedly.” It is thus a fundamental tenet of science and an affirmation of the scientific method. In theory, researchers should be able to replicate experiments, get the same outcomes, and draw the same conclusions, thus helping to validate and strengthen the original work. Reproducibility is significant not because it checks for the ‘correctness’ of outcomes but because it ensures the transparency of exactly what was done in a particular area of study.

Naturally, the inability to reproduce a study can have a variety of causes. The main factors are likely the pressure to publish and selective reporting. Others include inadequate replication within the lab, poor project management, low statistical power, reagent variability, and the use of specialised techniques that are hard for others to repeat.
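One of these causes, low statistical power, lends itself to a short simulation. The sketch below is purely illustrative – the effect size, sample size, and critical value are assumptions, not figures from any study mentioned here. It shows that when a true effect is small and samples are modest, even a study that finds a ‘significant’ result will usually fail to find it again on an identical rerun, with no misconduct involved.

```python
import math
import random

T_CRIT = 2.02  # approximate two-tailed 5% critical t-value for df = 38


def significant(a, b):
    """Two-sample t-test with equal group sizes; True if |t| > T_CRIT."""
    n = len(a)
    mean_a, mean_b = sum(a) / n, sum(b) / n
    var_a = sum((x - mean_a) ** 2 for x in a) / (n - 1)
    var_b = sum((x - mean_b) ** 2 for x in b) / (n - 1)
    t = (mean_b - mean_a) / math.sqrt((var_a + var_b) / n)
    return abs(t) > T_CRIT


def run_study(effect, n, rng):
    """One underpowered study: n subjects per arm, true effect in SD units."""
    control = [rng.gauss(0.0, 1.0) for _ in range(n)]
    treated = [rng.gauss(effect, 1.0) for _ in range(n)]
    return significant(control, treated)


rng = random.Random(7)
effect, n, trials = 0.3, 20, 4000

originals = [run_study(effect, n, rng) for _ in range(trials)]
# Rerun, under identical conditions, only the studies that "worked" first time.
reruns = [run_study(effect, n, rng) for hit in originals if hit]

power = sum(originals) / trials
replication_rate = sum(reruns) / max(len(reruns), 1)
print(f"power ~ {power:.2f}, replication rate ~ {replication_rate:.2f}")
```

With these assumed numbers the power works out to roughly 15%, so the large majority of ‘successful’ original findings fail on an exact rerun even though the effect is real – small samples alone are enough to produce an apparent replication crisis.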

Our responsibility

In this milieu, how can we improve the reproducibility of research?

Some obvious solutions include more robust experimental design, better statistics, open sharing of data, materials, software, and other tools, the use of authenticated biomaterials, the publication of negative results, and better mentorship. All of these, however, are difficult to guarantee in the age of “publish or perish” – when a researcher’s very survival in academia depends on their publishing record.

Funding organisations and publishers can also do more to enhance reproducibility. Researchers are increasingly being advised to publish their data alongside their papers and to make public the full context of their analyses. The ‘many analysts’ method – in which different researchers are given the same data and the same study questions, effectively putting many pairs of eyes on the problem – was pioneered by psychologists and social scientists in the mid-2010s.

All this said, the pervasive reproducibility problem means we simply can’t depend on any one outcome or any one study to tell us the complete story. Perhaps we will have to accept that ensuring the reproducibility of our research is our own responsibility – not least if we are to avoid becoming, like Prof. Shonku, scientists only in fiction.

Atanu Biswas is Professor of Statistics, Indian Statistical Institute, Kolkata.



The Gino data scandal in behavioural science and research misconduct
(Artifex.News, 23 October 2023)

Allegations of fraud hit the behavioural sciences recently when a team of independent investigators published a series of articles detailing apparent data manipulation in several prominent papers in the field. Ironically, the papers described studies of morality and honesty – and so far, the accusations have landed at the feet of one author common to all of them: Harvard University professor Francesca Gino.

Since the allegations were levelled, the papers have been retracted, though not without disagreement and controversy. While the university conducted its own investigation into the claims before placing Dr. Gino on administrative leave, she filed lawsuits against the university and against the authors of the original articles – researchers Leif Nelson, Joe Simmons, and Uri Simonsohn. Since then, with help from their peers, the trio has crowd-funded its legal defence.

The rise of this scandal has spawned many questions – from the simpler one of Dr. Gino’s guilt to the more involved one of where it will leave the field of behavioural sciences itself. But underlying them all is an older, more familiar one: why does misconduct happen?

What are the effects of misconduct?

Outright fabrication, falsification, and plagiarism – plus some of their more benign variations – constitute a tale almost as old as scientific inquiry itself. From the Piltdown Man in 1912, a fraudulent attempt to fill in the missing link between ape and human, to more recent cases like that of Diederik Stapel, scientific misconduct has always been and continues to be around, to different degrees in different fields.

Even if one instance of misconduct is small in scope, it can have dire consequences for scientists and for the field – especially if those committing it are the field’s leaders. One way to identify leaders is by the extent to which their work has laid the foundation for that of others; this is considerable in Dr. Gino’s case.

Other papers and findings that are themselves free of misconduct but rely on the faulty work will also be called into question, putting years of research at risk.

Why do researchers commit misconduct?

There is some consensus that the leading contributors to misconduct today are researchers’ existing incentive structures and shortcomings in peer review and replication studies.

Researchers have many incentives – including from grant-providers, editors, and academic institutions – to pursue more groundbreaking findings and results that support alternative hypotheses. Flashier results can elevate the researchers who obtain them to higher standing, make them and their employers more famous, and allow their funders to claim sufficient bang for the buck. But on the flip side, the size of the incentives may have encouraged many researchers to do work that is sloppy at best and outright manufactured at worst.

Some experts have backed the idea that incentive structures, manifesting as pressure to publish, affect researchers’ motivations. They also point to the low risk of detection by reviewers, and to research supervisors’ mentoring styles, as probable drivers of misconduct. Others blame cultural norms around criticism and the absence, or incompleteness, of national or institutional policies to penalise misconduct.

How should misconduct be dealt with?

One novel response to the challenge of dealing with misconduct is the Open Science Framework (OSF), an initiative to support scientific integrity. It promotes practices such as pre-registration (i.e. fixing a study’s hypotheses, methods, and analyses before it is conducted and agreeing to share the results, whatever they are) and making research data more accessible.

As such, OSF has tried to reduce the amount of misconduct by putting both researchers’ original intentions and the eventual data up for scrutiny. The team behind OSF has also launched the more ambitious ‘Systematizing Confidence in Open Research and Evidence’ (SCORE) project, which tries to make research more credible by developing automated tools to generate “rapid, scalable, and accurate confidence scores for research claims”.

This said, OSF still requires institutions and researchers to buy in before it can effectively curb misconduct. SCORE can work around this barrier but has drawbacks of its own, such as the risk of being used uncritically en masse to assess the ‘credibility’ of scientists – something its developers have said is not its intended use.

In addition, while methods exist at both small and large scales to handle fraud, they can be inconsistent across institutions. As a result, researchers who are willing to cooperate can still face significant ‘unofficial’ forms of punishment – or, as with the three researchers who reported concerns about the papers co-authored by Dr. Gino, independent investigators can be exposed to expensive litigation.

What else can be done about misconduct?

Less-novel ways to combat the incidence of misconduct include adequate funding and less pressure on researchers, support for replication studies (i.e. studies that check the results of other studies), and ‘detectives’ incentivised to check for fraud.

For example, setting aside a part of a grant sanctioned for a study for quality-control activities would go a long way to counter misconduct. Investigators could use these resources to make probes more thorough and also faster, which could help increase younger scientists’ confidence in the system. Similarly, providing financial aid for replication studies – such as in the form of cash rewards – could also help.

The ability of science to police itself and keep out misconduct partly comes down to the choices individual researchers make. Whether it’s resisting the temptation to be a bit less rigorous when double-checking a result or the values they impart to their mentees, researchers’ willingness to stick to scientific norms – regardless of the effect on their own prospects – ultimately decides how far misconduct spreads.

What is the role of scientific publishing?

This said, beyond research facilities and academia, the structure of scientific publishing is also implicated in the persistence of research misconduct. In particular, many journals – like grantors – prefer to publish sensational results and have been less than forthcoming in investigating or rectifying signs of misconduct in published papers.

Recently, for example, Nature retracted a paper it had published last year after independent researchers reported that its data didn’t add up. But the journal hasn’t explained how it cleared the paper for publication in the first place.

What can, and must, scientists do?

Some scientists are doing the right thing. In the absence of similar institutional efforts, many of Dr. Gino’s co-authors have decided to examine work on which Dr. Gino had collaborated and provided the data, in order to separate ‘good’ papers from ‘bad’ instead of allowing all of them to be tarred with the same brush.

This said, scientists are aware that a rethink of the methods and norms of science is much needed, especially by those who hold power. The popular imagination holds that science will always be rigorous and self-correcting, but this is naïve and unrealistic.

The contemporary scientific process needs to be enhanced with technology and incentives to make inquiries about scientific inquiry itself – and they should become standard practice, rather than requiring ‘special’ circumstances to kick in.

Abhishek V. is Research Assistant at the Department of Economics at Monk Prayogshala, Mumbai.


