The Science Quiz | Today is the 30th death anniversary of Karl Popper (Artifex.News, 18 September 2024)


Questions:

1. Karl Popper is best known for an influential philosophy of science he developed in the 20th century, particularly his concept of __________, which states that scientific theories must by definition be capable of being proven wrong. Fill in the blank.

2. While Popper believed science progresses through the demarcation of science from non-science, which historian of science argued that major changes in science occur when a prevailing scientific framework or a paradigm is overturned and replaced by a new one?

3. Popper attempted to challenge a particular interpretation of quantum mechanics put forth by the scientists Niels Bohr, Werner Heisenberg, and Max Born. Popper argued against the idea that particles have definite properties only when they’re measured. What is the interpretation called?

4. In an influential 1934 book, Popper introduced the idea that scientific theories should be testable and refutable to be considered valid. Name it. It laid the foundation for his philosophy and later influenced the work of several scientists and philosophers.

5. Popper was a critic of __________, the belief that science could determine universal truths just by repeated observation and empirical data. His view made him laud the work of Albert Einstein, which incorporated theoretical innovation alongside empirical testing.

Visual:

Name this German astronomer whose theory of planetary motion helped Popper develop his philosophy of science.

Answers:

1. Falsifiability

2. Thomas Kuhn

3. Copenhagen interpretation

4. The Logic of Scientific Discovery

5. Inductivism

Visual: Johannes Kepler




Reproduce or it didn’t happen: why replicable science is better science (Artifex.News, 26 October 2023)


Since I was a little boy, like many Bengalis of my generation, I have been obsessed with Satyajit Ray’s tales about the fictional scientist Professor Shonku. His magical inventions include “Miracurall,” a drug that cures all illnesses except the common cold; “Annihillin,” a pistol that can exterminate any living thing; the “Shonkoplane,” a small hovercraft built on anti-gravity technology; and the “Omniscope,” which combines a telescope, a microscope, and an X-ray scope. Evidently, Prof. Shonku was a brilliant scientist and inventor.

Or was he?

Reproducible research

A genuinely disheartening feature of Shonku’s powerful and useful inventions was that none of them could be produced in a factory: only he was capable of manufacturing them. Later, after being exposed to the scientific community, I understood that for this precise reason Prof. Shonku couldn’t be considered a ‘scientist’ in the strictest sense of the word. Reproducibility is the essence of scientific truths and inventions.

In his 1934 book The Logic of Scientific Discovery, the Austrian-British philosopher Karl Popper wrote: “Non-reproducible single occurrences are of no significance to science.” That said, reproducibility is not a critical requirement in some fields, especially the observational sciences: where inferences are drawn from events and processes beyond the observer’s control, irreproducible one-time events can still be a significant source of scientific information.

Consider the 1994 collision of Comet Shoemaker-Levy 9 with Jupiter. It offered a wealth of knowledge about the dynamics of the Jovian atmosphere as well as preliminary proof of the danger posed by meteorite and comet impacts. One may recall the famous observation Stephen Jay Gould made in his brilliant 1989 book Wonderful Life: The Burgess Shale and the Nature of History: that if one were to “rewind the tape of life,” the consequences would surely be different, and nothing resembling us would likely exist.

“We’re all biased”

However, scientists working in most disciplines do not have that kind of leverage. In fact, reproducibility, or the lack thereof, has become a pressing issue in recent years.

In a 2011 study, researchers evaluated 67 medical research projects and found that just 6% were fully reproducible, whereas 65% showed inconsistencies when evaluated again. An article in Nature on October 12, 2023, reported that 246 researchers examined a common pool of ecological data but came to significantly different conclusions. The effort echoed a 2015 attempt to replicate 100 research findings in psychology, which managed to do so for fewer than half.

In 2019, the British Journal of Anaesthesia conducted a novel study to address the “over-interpretation, spin, and subjective bias” of researchers. One paper had dismissed a potential link between higher anaesthetic doses and earlier deaths among elderly patients. However, when different researchers analysed the same data in another 2019 paper in the same journal, they found different death rates. The new paper also argued that the trial had too few participants to support that conclusion, or any conclusion at all, about mortality.

The purpose of such an analysis – publishing two articles based on the same experimental data – was to broaden the scope of replication attempts beyond just techniques and findings. The lead author of the original paper, Frederick Sieber, commended the methodology, saying, “We’re all biased and this gives a second pair of eyes.”

Affirming the method

Replicating other people’s scientific experiments can be messy. But could trying to replicate one’s own findings be just as fraught? According to one intriguing analysis published in 2016, more than 70% of researchers have tried and failed to reproduce another scientist’s experiments, and more than half have failed to reproduce their own. The analysis was based on an online survey of 1,576 researchers conducted by Nature.

The Oxford English Dictionary’s definition of “reproducibility” is “the extent to which consistent results are obtained when produced repeatedly.” It is thus a fundamental tenet of science and an affirmation of the scientific method. In theory, researchers should be able to replicate experiments, get the same outcomes, and draw the same conclusions, thus helping to validate and strengthen the original work. Reproducibility is significant not because it checks for the ‘correctness’ of outcomes but because it ensures the transparency of exactly what was done in a particular area of study.

Naturally, the inability to reproduce a study can have a variety of causes. The main factors are likely to be pressure to publish and selective reporting. Other factors include inadequate lab replication, poor management, low statistical power, reagent variability, and the use of specialised techniques that are challenging to replicate.
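Low statistical power is the easiest of these causes to see for oneself. A minimal simulation, using only made-up numbers for illustration, shows how a real but modest effect studied with small samples will be “detected” in some runs and missed in most others, making honest replications look contradictory:

```python
import random
import statistics

def run_underpowered_trial(n=10, true_effect=0.3, seed=None):
    # Simulate a small two-group experiment where the treatment
    # genuinely shifts the outcome by `true_effect` standard deviations.
    rng = random.Random(seed)
    control = [rng.gauss(0.0, 1.0) for _ in range(n)]
    treated = [rng.gauss(true_effect, 1.0) for _ in range(n)]
    diff = statistics.mean(treated) - statistics.mean(control)
    # Crude significance check: the effect counts as "detected" only if
    # the observed difference exceeds roughly twice its standard error.
    se = (statistics.variance(control) / n + statistics.variance(treated) / n) ** 0.5
    return diff > 2 * se

# With just 10 subjects per arm, most replications of the *same* true
# effect fail to detect it, so the literature looks irreproducible.
detections = sum(run_underpowered_trial(seed=s) for s in range(1000))
print(f"Detected the effect in {detections} of 1000 replications")
```

The effect is real in every run; only the sample size changes whether a given study finds it.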

Our responsibility

In this milieu, how can we improve the reproducibility of research?

Some obvious solutions include more robust experimental design, better statistics, robust sharing of data, materials, software, and other tools, the use of authenticated biomaterials, publishing negative data, and better mentorship. All of these, however, are difficult to guarantee in this age of “publish or perish” – where a researcher’s mere survival in the academic setting depends on their performance in publishing.

Funding organisations and publishers can also do more to enhance reproducibility. Researchers are increasingly being advised to publish their data alongside their papers and to make public the full context of their analyses. The ‘many analysts’ method – in which different researchers are given the same data and the same study questions, putting many pairs of eyes on one problem – was pioneered by psychologists and social scientists in the mid-2010s.
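Why do many analysts reach different conclusions from the same data? A minimal sketch, using invented numbers and deliberately simplistic “analysts”, shows how equally defensible pre-processing choices alone can move the headline estimate:

```python
import random
import statistics

random.seed(42)
# Shared dataset: a roughly normal sample plus two extreme values.
data = [random.gauss(10, 2) for _ in range(50)] + [25.0, 30.0]

# Each "analyst" makes a different, defensible analysis choice.
def analyst_keep_all(xs):
    # Analyst A: use every observation as recorded.
    return statistics.mean(xs)

def analyst_trim_outliers(xs, cutoff=20.0):
    # Analyst B: treat values above the cutoff as recording errors.
    return statistics.mean([x for x in xs if x < cutoff])

def analyst_use_median(xs):
    # Analyst C: report a robust summary instead of the mean.
    return statistics.median(xs)

estimates = {
    "keep all": analyst_keep_all(data),
    "trim > 20": analyst_trim_outliers(data),
    "median": analyst_use_median(data),
}
for name, est in estimates.items():
    print(f"{name}: {est:.2f}")
```

Same data, same question, three different answers: exactly the spread the ecological study in Nature observed at scale.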

All this said, it seems today that we cannot depend on any one outcome or any one study to tell us the complete story, because the reproducibility problem is so pervasive. Perhaps we will have to accept that ensuring the reproducibility of our research is our own responsibility, if only to avoid ending up like the fictional Prof. Shonku.

Atanu Biswas is Professor of Statistics, Indian Statistical Institute, Kolkata.



