Careers and reputations are unmade in a moment, yet scientists and researchers plagiarise, fudge data and hide facts on research funding.
It’s the urge to publish, or perish
Bad science is rampantly practised all over the world and reflects incompetence more than intent. Poor experimental design and conclusions that do not follow from the evidence presented are two common examples.
Any discussion of scientific fraud must begin with Andrew Wakefield’s 1998 paper in the Lancet which claimed that the measles, mumps and rubella (MMR) vaccine may predispose children to autism. It received wide publicity and, although epidemiological studies done soon afterwards refuted its conclusions, vaccination rates dropped. The Lancet retracted the paper in 2010 citing incomplete financial disclosures, ethical issues and misrepresentation of data.
Science is the most powerful tool we possess to understand our world. People often mistake it for a body of knowledge, a storehouse of information contained within textbooks and journals. But science is not merely a repository; it is also a methodology.
The weakest link in the chain of scientific methodology is the human factor. We think of scientists at work as dispassionate and neutral, but this is a romantic ideal. The scientist, like any other human being, is blinkered by his or her privileges and conditioning. Checks and balances exist within the methodology to reduce the effects of such unintentional biases. But fraudulent practices are not unintentional; they are, by definition, deliberate.
The scientific method is dispassionate and neutral, but its practitioners are not necessarily so. Decisions to begin a particular line of enquiry almost always have an ulterior motive; they are often politically driven. The development of the atomic bomb and the Apollo space programme are easy examples. Major scientific enquiries may, even today, occasionally be driven by “pure” motives, but usually there are more practical forces at work. There is a tension here: the detached search for scientific truth versus a human tendency for self-promotion. It is where the baser instincts win that fraudulent practices begin.
Fraudulent practices in science can be classified for our purpose into major and minor categories. Major practices, like the Wakefield paper, are obvious and indefensible. Minor ones can be subjective and, for this reason, too often tolerated. This is a grey area; one person’s minor fraudulent practice may be another’s standard practice. Obvious examples of fraud are data manufacturing, manipulation of experimental results, unethical practices and plagiarism. Data manufacturing is the art of creating data from thin air to reach a conclusion. This is what the tobacco industry infamously did when it funded epidemiological studies designed to cast doubt on the link between smoking and disease.
Manipulating experimental results is easy. One performs the same experiment ten times and reports the one time it produced a result favourable for a drug, say, or intervention. Negative results are kept out of sight. The sugar industry in the Fifties and Sixties used their research foundation and sponsored scientists to suppress findings which linked dietary sugars with cardiovascular disease. The amount of money and effort involved in the deception is staggering.
Scientists employed by industry work within narrow frameworks. They are expected to publish results favourable to the company. An example from the pharmaceutical industry is hyaluronic acid injections. This molecule is promoted by several companies as an effective treatment for degenerative joint disease. To date, no independent study has validated this claim.
Much scientific research involves inventing and testing new drugs. Pharmaceutical companies directly fund these studies. Important scientific problems are those which the companies think are important. Developing effective medicines for tropical diseases thus becomes less urgent than producing a medicine which helps remove unwanted hair in American women. This is not, strictly speaking, a fraudulent practice, but such grossly misplaced priorities do not reflect well on the scientific community as a whole. Cherry-picking results and doctoring of data are relatively minor but common unscrupulous practices. So is gift authorship: a department head is named as one of the authors of a paper even when he has not contributed anything.
Such bad science often gets published in journals, and its authors benefit from it. Apparently, in the era of “publish or perish”, even shoddy science is better than none at all.
Data manipulation is another common research practice. Behind any published scientific paper lies a huge body of work, most of which will not get into the finished article. It is impossible to include all data in the final work. Data must be selected. This is routine practice but the scientist can, and often does, manipulate the data selection process to alter final results. There are several incentives for scientists to indulge in these practices. Money, prestige and fame are obvious drivers. Association with billion dollar industries can be an attractive proposition to the scientist for obvious reasons. Prestige within the scientific community cements one’s position as an opinion leader.
Science is hierarchical. The few in the top echelons decide the nature of work and direction of research. These leaders may even be involved in deciding state scientific policies. They can get their works published in prestigious journals at will. People in these positions mentor future generations of scientists. Thought leaders find it easy to impose their ideas on the rest of the scientific community. Most people fall in line or keep quiet about unethical practices because the risks involved in whistle blowing are too great.
The mathematician Alexandre Grothendieck declined the Crafoord prize in 1988, saying that “the ethics of the scientific profession (at least among mathematicians) has declined to such a degree that pure and simple plundering among colleagues (especially at the expense of those who are not in a position to defend themselves) has almost become the rule, and in any case is tolerated by all, even in the most flagrant and iniquitous cases”. Sadly, this is as true today as when Grothendieck wrote it, and not only among mathematicians.
More than 500 scientific papers are retracted every year. Here are some notable cases from the past ten years.
Scott Reuben, anesthesiologist: an influential researcher in pain management who never conducted any of the clinical trials on which his conclusions were based. Sentenced to prison in 2009 for health care fraud.
Bengü Sezen, graduate student at Columbia University: a brilliant researcher who had coauthored six papers based on her research results. It turned out that all were fabricated.
Victor Ninov, Bulgarian-born nuclear chemist: fabricated the evidence used to claim the creation of elements 116 and 118.
Dipak Kumar Das, director of the Cardiovascular Research Centre at the University of Connecticut Health in Farmington: known for his work on the beneficial properties of resveratrol in red wine; 20 of his papers have been retracted.
Konstantin Meyl, Germany: his paper claiming to describe how cells communicate among themselves via “scalar waves” was retracted.
Piero Anversa, cardiologist: the journal Circulation retracted a study by his group of Harvard heart specialists over concerns of compromised data.
Kevin Corbit: described how grizzly bears become diabetic during hibernation, which enables them to survive on the fat reserves they accumulate in the summer and fall. The paper was later retracted.
Federico Infascelli, an animal nutrition researcher at the University of Naples: claimed to show that modified genes could end up in the bodies of baby goats whose mothers ate GMOs. Retracted over charges of data fabrication.
Springer retracted 107 papers from the journal Tumor Biology in a single day; all had fake peer reviews. The journal is owned by the International Society of Oncology and BioMarkers (ISOBM) and was published by Springer until last year.
Henry WC Leung, oncologist: a paper on the cost-utility analysis of two drugs for metastatic cancer was retracted because some of the data were “incorrect”.
(The author is an orthopaedic surgeon. He is also a columnist and prolific blogger: vizdom44.wordpress.com)