JICDRO is a UGC approved journal (Journal no. 63927)

Year : 2016  |  Volume : 8  |  Issue : 1  |  Page : 8-13

In how many ways may a health research go wrong?

Department of Public Health Dentistry, Institute of Dental Sciences, Bareilly, Uttar Pradesh, India

Date of Web Publication: 12-Feb-2016

Correspondence Address:
Dr. Nagesh Lakshminarayan
Department of Public Health Dentistry, Institute of Dental Sciences, Bareilly, Uttar Pradesh

Source of Support: None, Conflict of Interest: None

DOI: 10.4103/2231-0754.176245


Research, in the broadest sense, includes the gathering of data, information, and facts for the advancement of knowledge. It is a systematic investigation done in order to establish or confirm facts, reaffirm the results of previous work, solve new or existing problems, support theorems, or develop new theories. Good research is an offshoot of good design: the research (study or experimental) design is the backbone of good research. It is the framework created to seek answers to research questions. Designing a research project takes time, skill, and knowledge. If designing is not done scrupulously, errors may creep into the research at the various stages of planning, designing, conducting, analyzing, reporting, and publishing. These errors may distort the results, leading to invalid conclusions. The only safeguard is to avoid them to the maximum possible extent by gaining comprehensive knowledge about each error and applying measures to control and minimize them. Perfect health research does not exist, but high quality research certainly does.

Keywords: Health research, error, bias, confounding, research gone wrong

How to cite this article:
Lakshminarayan N. In how many ways may a health research go wrong? J Int Clin Dent Res Organ 2016;8:8-13


Research is a tool for generating knowledge. When it addresses health issues, it is called health research. There is no dearth of health-related research in the 21st century. The knowledge base in the field of health sciences is incessantly expanding, more so in current times. Handling and systematically managing such expansive knowledge is increasingly difficult; this has been termed "information overload."

Information generated by research should translate into knowledge and, in turn, into wisdom. Very often, the information gathered from research may not be unequivocal or free of contradiction. The conclusions may be hazy and the evidence may be divided. Why does every researcher investigating the same specific research question not arrive at the same answer? What drives researchers to divergent conclusions although the research question is the same?

There may be two possible reasons for this aberration:

  1. Heterogeneity with respect to sample characteristics such as age, gender, race, ethnicity, occupation, educational status, dietary practices, geographical distribution, lifestyle, and many more.
  2. Errors creeping into research at various stages of planning, conducting, analyzing, reporting, and publishing of research. [1]

This article focuses on the second reason: the possible errors that shift results and conclusions away from the truth. There is only one way of doing research perfectly, but there are several ways of doing it erroneously. Poor research is the result of poor scientific rigor in the study, and poor quality research leads to invalid conclusions. Such conclusions have dangerous implications when imported into day-to-day clinical practice. Medical and dental science should be based on sound scientific research, definitely not on poor quality research. In the health sciences the stakes are very high, as the output may either save a life or cost one.

Errors at the planning, designing, analyzing, reporting, and publishing stages leave an indelible mark on the research output. Every error that enters the research creates a chance for it to go wrong. Research aims to arrive at the truth, whereas every error shifts it away from the truth. Hence, it is rightly said that research is all about applying systematic measures that eliminate the chances of missing the truth. This article is an attempt to throw light on those errors which, when unchecked, render the research output invalid.

   Errors at Planning Stage

Planning of research should be scrupulous and foolproof. Ideally, considerable time should be spent by the researchers on framing the right research question. Extensive exploration of all the available literature is essential to establish the relevance of the research question. Replicating a known and proven fact again and again is of little use and does not add substantially to the knowledge base, nor can it contribute to a change in practice. A research question needs to be feasible, interesting, novel, ethical, and relevant (FINER). Framing a wrong research question is akin to setting a wrong objective, because the question implicitly reflects the objective of the study. Such an error misleads the whole research and the researchers.

Translating the research question into a research hypothesis (hypothesis-testing research) is an important step. The "research hypothesis (H1)" and its converse version, the "null hypothesis (H0)," should be clearly stated because they provide a sound foundation for the planned research. The hypothesis should reflect the research question and be in line with the set objectives; if not, the resultant error may lead to unintended or misdirected research. The hypothesis should clearly specify the primary variables of the study and the population it involves.

Hypothesis testing is a meaningful scientific procedure for arriving at an interpretation and conclusion. Refuting a hypothesis (modus tollens) has been the approved and accepted method in science for many decades, ever since Karl Popper first demonstrated it. The first step is to frame a "research hypothesis" (alternative hypothesis), which reflects the assumption of the researcher, or what the researcher wants to prove. Science has a peculiar way of proving 'something' (the research hypothesis): it constructs the converse of what needs to be proved (the null hypothesis) and then tries to uphold that converse. If, in this process, the null hypothesis fails to hold, the 'something' (the research hypothesis) is accepted as the truth.

Hypothesis testing has to be scrupulously done. The level at which the null hypothesis is rejected (the significance level) has to be explicitly stated; it is usually fixed at P < 0.05. This has to be fixed beforehand, and the sample size needs to be calculated based on this value. Any error at this stage may totally alter the results and conclusions.
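As a minimal sketch, the pre-fixed significance level works as a simple decision rule. The function and the p values below are illustrative, not from the article:

```python
# Sketch of the decision rule: the significance level is fixed in advance,
# and H0 is rejected only when the observed p value falls below it.
ALPHA = 0.05  # conventional maximum allowable Type 1 error

def decide(p_value, alpha=ALPHA):
    """Return the decision on the null hypothesis for an observed p value."""
    if p_value < alpha:
        return "reject H0"         # result is 'statistically significant'
    return "fail to reject H0"     # H0 is never 'proved', only not rejected

print(decide(0.03))  # reject H0
print(decide(0.20))  # fail to reject H0
```

Note that a p value exactly equal to alpha does not cross the threshold, which is why fixing the level beforehand matters.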


Research hypothesis (H1)

Chronic periodontitis is associated with elevation in serum C-reactive protein (CRP).

Null hypothesis (H0)

Chronic periodontitis is not associated with elevated serum CRP.

The researcher will set out to refute the null hypothesis that chronic periodontitis is not associated with elevated serum CRP. Only if the evidence fails to support the null hypothesis is the research hypothesis accepted.

The directionality of the hypothesis determines the kind of statistics to be used. If the hypothesis is directional, as in the above case, where the researcher commits to the claim that chronic periodontitis results in an "elevation" of serum CRP, one-tailed statistics have to be used. If the hypothesis is nondirectional, two-tailed statistics are needed. The table values of these two statistics differ, and interpretations differ greatly depending on those values. Errors in selecting the appropriate statistics can result in wrong interpretations and conclusions.
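The practical difference can be sketched with the standard library. For a symmetric test statistic, the one-tailed P value is half the two-tailed value, so the choice of tails can flip a borderline result; the z value below is hypothetical:

```python
from statistics import NormalDist

# Hypothetical z statistic near the significance boundary
z = 1.8
p_one_tailed = 1 - NormalDist().cdf(z)   # directional hypothesis
p_two_tailed = 2 * p_one_tailed          # nondirectional hypothesis

# The same data are 'significant' one-tailed but not two-tailed at 0.05
print(p_one_tailed < 0.05, p_two_tailed < 0.05)  # True False
```

This is exactly why the directionality must be declared before the data are seen, not chosen afterward to suit the result.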

Human research and animal research have serious ethical implications. Unethical research is also said to be unscientific and invalid. Adhering to ethical principles and maintaining ethical integrity are a must. Obtaining ethical approval from a recognized institutional review board (IRB), obtaining informed consent from every participant before the intervention or data collection, and obtaining assent from every child participant should be rigorously done. Establishing clinical equipoise, debriefing, and respecting human rights are mandatory. Compromises in these procedural protocols result not only in unethical research but also in invalid research.

Feasibility is an important issue. Fiscal implications, availability of required equipment and materials, and availability of manpower and expertise should be assessed beforehand by conducting a pilot study. Through such piloting, problems may be identified and any deficit set right. If due attention is not given to these issues, the researcher is forced either to make compromises or to abandon the research. Compromises always threaten the validity of the research; abandoning it wastes resources and inconveniences participants.

   Errors at Design Stage

All errors at the design and execution stage are broadly classified as: [1]

  1. Random error (unsystematic error).
  2. Bias (systematic error).
  3. Confounding.

Random error

This is a nonsystematic error of an unpredictable nature. It occurs because health research is done on a selected sample of subjects or specimens, not on the whole population. It arises from sampling variability, that is, every selected sample differs from every other. Increasing the sample size, or taking an adequately sized sample, can check this error to an extent. Measurement error also contributes to it; whenever erratic or extreme data are observed, the measurement has to be repeated and rechecked. The presence of this error beyond a permissible limit increases the chances of false-positive associations. The significance level (P value), usually fixed at 0.05 (5%), is referred to as the maximum allowable random error.

Measures to minimize random error

Random error can be controlled and reduced by three measures:

  1. Selection of adequate sample size.
  2. Repeating the measurement when the reading is extreme or erratic.
  3. Using efficient study design.

Failure to take these measures results in significant random error and the results are said to be invalid.
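The effect of the first measure, sample size, on random error can be seen directly in a small simulation; all numbers below are simulated for illustration:

```python
import random
from statistics import mean, stdev

random.seed(1)  # seeded only to make the illustration reproducible

def spread_of_sample_means(n, trials=2000):
    """SD of the sample mean across repeated samples of size n:
    a direct picture of random error due to sampling variability."""
    return stdev(mean(random.gauss(50, 10) for _ in range(n))
                 for _ in range(trials))

small_n = spread_of_sample_means(10)
large_n = spread_of_sample_means(100)
print(small_n > large_n)  # True: a larger sample shrinks random error
```

The spread of sample means falls roughly as the square root of n, which is why an adequate sample size is the primary defense against random error.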


Bias (systematic error)

Bias is a systematic error that has a predictable impact on the results: it may either systematically inflate or deflate them. Bias always needs to be restricted at the planning and design stages. Adhering to strict scientific rigor, especially in the methodology, can profoundly reduce bias. There are many potential sources of bias in medical research. Although hundreds of biases are described in the literature, all of them belong to basically six categories. [2]

Selection bias

Deliberate selection of subjects or specimens into multiple groups (no random allocation) results in such groups, which are dissimilar at the baseline (baseline comparability is compromised). This is akin to "comparing apples with oranges." Such a research is far from the truth because the tenet of epidemiology is to compare the comparable. This bias can be eliminated by a random allocation procedure where the subjects are randomly divided into multiple groups. In this procedure, allocation happens by chance and thus, there is no selection bias.
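A random allocation procedure of the kind described can be sketched in a few lines; the subject identifiers and group count are illustrative:

```python
import random

random.seed(42)  # seeded only to make the illustration reproducible

def randomly_allocate(subjects, n_groups=2):
    """Shuffle the subjects and deal them round-robin into groups,
    so allocation happens purely by chance, not by investigator choice."""
    pool = list(subjects)
    random.shuffle(pool)
    return [pool[i::n_groups] for i in range(n_groups)]

test_group, control_group = randomly_allocate(range(1, 21))
print(len(test_group), len(control_group))  # 10 10
```

Because no human decision enters the assignment, the groups are comparable at baseline in expectation, which is the point of the procedure.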

Detection bias

Measurements or observations in one group are not as rigorously documented as in the other. This results in ascertainment bias and misclassification bias.

Observer bias

The observer or examiner measuring the variable (a clinical sign, symptom, or the level of a certain biochemical marker) may commit mistakes if not properly trained and also if not blinded. The observer may favor a group while measuring if he/she knows the group's affiliation (test group or control group). Hence, blinding should be followed wherever possible.

Reporting bias (recall bias)

It occurs mainly in epidemiological studies where the participant is asked to remember and retrieve data related to an exposure (some event, circumstance, incident, or episode) from a past time in his/her life. Not all participants can recall past happenings uniformly; inaccurate recall and poor memory may distort the validity of the data. This error contributes to either misclassification bias or measurement bias, and may produce false-positive associations or miss true associations.

Response bias

Participants who consent to take part may be systematically different from those who do not. Had the nonconsenting individuals participated, the results would have been systematically different. This error reduces the generalizability and the ability to extrapolate the results to the larger population. It is better to keep nonresponse as low as possible in any research.

Publication bias

It is a systematic error that occurs at the reporting and publication stage. Studies showing negative results (no association or difference) are less likely to be reported or published. [3] This results in a literature dominated by positive results, although these may be untrue.

There is a wide array of biases; a long list could be compiled. Nevertheless, all of them belong to one of the above categories. The point to be taken seriously is that biases can be greatly marginalized by following stringent measures in the methodology, which improves the validity and reliability of the research.

Bias can be greatly reduced by standardizing the methodology. Standardization of the methods, instruments, techniques, materials, procedures, and calibrating the examiners are a must for addressing bias. Procedures such as blinding, masking, random selection of participants, random allocation of participants, management of missing data, adherence to a protocol, and application of the right kind of statistics are some of the measures, which can profoundly minimize bias and maximize validity.


Confounding

Confounding is a type of error also called the "third variable effect." [1] It happens because of extraneous variables (third variables): a third variable is one other than the independent and dependent variables in the study. Confounding is a confusion of effects, which happens because some third variable distorts the relationship between the independent (cause) variable and the dependent (effect) variable.


An example

Smoking is the suspected exposure (cause) and carcinoma of the palate is the expected outcome (effect). The strength of association is high, as reflected by an odds ratio (OR) of 8. Further analysis also revealed that a majority of the smokers were alcohol consumers. Alcohol consumption is the third variable because, independently, on its own, it can cause carcinoma of the palate. In this case, alcohol consumption is prevalent among smokers; hence, the effect of smoking in causing carcinoma of the palate is confounded by alcohol. The hidden variable in this relationship is alcohol consumption, and it is the confounder [Figure 1].
Figure 1: An example of the role of a confounding variable


It is imperative to control such confounders in a research in order to understand the exact nature of the relationship between the cause and effect. When unchecked, confounders play spoilsport in establishing a valid relationship between the cause and the effect. In oral health-related research, the most common confounders are age, gender, socioeconomic status, dietary practices, and lifestyle. It is like a triangular love story where the entry of a third person would distort the relationship between two persons.
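The smoking-alcohol situation can be illustrated numerically. The counts below are hypothetical and chosen only to show how a confounder can inflate a crude odds ratio:

```python
def odds_ratio(a, b, c, d):
    """OR for a 2x2 table: a = exposed cases, b = exposed non-cases,
    c = unexposed cases, d = unexposed non-cases."""
    return (a * d) / (b * c)

# Within each alcohol stratum, smoking shows no association (OR = 1):
or_alcohol_users = odds_ratio(45, 45, 5, 5)
or_non_users = odds_ratio(1, 9, 9, 81)

# Collapsing the strata loads alcohol's own effect onto smoking:
or_crude = odds_ratio(45 + 1, 45 + 9, 5 + 9, 5 + 81)
print(or_alcohol_users, or_non_users, round(or_crude, 2))  # 1.0 1.0 5.23
```

Because alcohol use is common among the smokers and causes the outcome on its own, the crude table shows a strong association that vanishes once the strata are examined separately.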

Confounding should be controlled either at the design stage or at the analysis stage. It is always recommended that confounding be controlled at the design stage; when for some reason it cannot be addressed there, it should be statistically addressed at the analysis stage. At the design stage, it can be controlled by restriction, matching, and randomization.


Restriction

Restriction refers to recruiting the subjects for research based on strict eligibility criteria. This reduces the chances of creating two or more heterogeneous groups for comparison; it is to be remembered that apples should not be compared with oranges. However, overly strict eligibility criteria reduce generalizability.


Matching

Pair-matching or group-matching refers to matching the subjects in one group with those in the other with respect to possible confounders such as age, gender, and socioeconomic status. This equalizes the distribution of these confounders in the two groups being compared: the confounders get evenly distributed and have no influence on the output. This method is applied in epidemiological studies; in experimental trials, the process of randomization takes care of this issue.


Randomization

Randomly allocating the subjects into test and control groups in experimental trials generates comparable groups at baseline, so that any difference found between the groups at the end of the study may be attributed solely to the respective intervention. Random allocation is central to randomized controlled trials (RCTs).

This eliminates selection bias and also confounding. Randomization addresses not only known confounders but also unknown confounders.

A scrupulously done RCT generates high quality evidence, which is rated highly in evidence-based hierarchy. RCT is the gold standard for generating good evidence. A nonrandomized trial is as good as a case series or a natural experiment.

At the analysis stage, confounding can be controlled by: [1]

  1. Stratified analysis.
  2. Multivariable analysis (multiple regression analysis).

These statistical methods control the influence of confounders and reveal the true relationship between the independent and dependent variables. They are complex statistical models used in many epidemiological studies. To use them, the researchers must earmark the possible confounders in their study and collect data about them while conducting the research. These data are fed into the model so that the confounding influence is controlled during the analysis. It is always recommended that confounders be controlled at the design stage with all possible measures rather than addressed statistically. A detailed discussion of these models is beyond the scope of this article; readers may refer to appropriate books on biostatistics.
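As a sketch of stratified analysis, the Mantel-Haenszel pooled odds ratio combines stratum-specific 2x2 tables into a single confounder-adjusted estimate; the counts below are hypothetical:

```python
def mantel_haenszel_or(strata):
    """Mantel-Haenszel pooled OR across 2x2 strata, each stratum being
    (a, b, c, d): exposed cases/non-cases, unexposed cases/non-cases."""
    num = sum(a * d / (a + b + c + d) for a, b, c, d in strata)
    den = sum(b * c / (a + b + c + d) for a, b, c, d in strata)
    return num / den

# Hypothetical strata of a confounder (e.g. alcohol use: yes, no)
strata = [(45, 45, 5, 5), (1, 9, 9, 81)]
print(mantel_haenszel_or(strata))  # 1.0: no association once stratified
```

In these illustrative data, the adjusted estimate is 1.0, showing that an association seen in the collapsed (crude) table was entirely due to the confounder.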

The presence of any of the above errors in a piece of research distorts the results, shifting the conclusions away from the truth. When these errors are not marginalized, spurious (false) associations are discovered, leading to erroneous conclusions. Such research lacks internal validity, and for the same reason it also lacks external validity, or generalizability. Maintaining stringent scientific rigor in the methodology is crucial to limit the noise created by these errors so that the signal (the truth) can be heard.

   Two Inevitable Errors in Health Research: Type 1 Error and Type 2 Error

The sample size should be determined scientifically (a priori) unless there are other valid reasons. A substantial body of valid science supports sample size determination, yet even now many articles are published in journals with absolutely no mention of sample size estimation or justification. Sample size determination has to be made transparent in the article, because it states the limits set for Type 1 and Type 2 errors in the given research and indicates how much the results and conclusions can be believed within those allowable errors. It also aids decisions about generalizability. It is said that whether the results of a research are positive or negative does not matter unless the sample size is scientifically determined. There are specific formulae for determining the sample size for a given research. The selection of the formula depends on factors such as:

  1. The nature of data (categorical or continuous).
  2. The nature of data distribution (normal or nonnormal).
  3. The study design.

Several software packages are also designed for estimating the sample size. After the appropriate inputs are given, a single command will show the required sample size. The values for Zα (Z variate for Type 1 error) and Zβ (Z variate for Type 2 error), along with the standard deviation (SD) of the variable being measured (taken from a key article) and d (the clinically significant difference), are fed into the program to obtain an appropriate sample size for the study. Rarely does a manuscript mention these values in the methodology. The Consolidated Standards of Reporting Trials (CONSORT) checklist, used to critically appraise the quality of an RCT, expects an article to justify the sample size with a clear description.
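As a sketch under the common setup of comparing two means (two-tailed, normal approximation), the calculation described above can be done with the standard library alone; the SD and d values are illustrative:

```python
import math
from statistics import NormalDist

def sample_size_two_means(sd, d, alpha=0.05, power=0.80):
    """Per-group n for comparing two means, two-tailed:
    n = 2 * (Z_alpha/2 + Z_beta)**2 * sd**2 / d**2"""
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)  # ~1.96 for alpha = 0.05
    z_beta = NormalDist().inv_cdf(power)           # ~0.84 for 80% power
    n = 2 * (z_alpha + z_beta) ** 2 * sd ** 2 / d ** 2
    return math.ceil(n)

# e.g. SD = 2 units (from a key article), clinically significant d = 1 unit
print(sample_size_two_means(2, 1))  # 63 per group
```

Note how halving the detectable difference d quadruples the required sample, which is why d must be chosen on clinical, not statistical, grounds.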

An appropriate (adequate) sample size can control Type 1 (α) and Type 2 (β) errors. These are also called false-positive and false-negative errors and are conventionally fixed at 5% and 20%, respectively. [2] The power of the study would then be 80% (power = 1 - β), which is the minimum required power. Power analysis provides a clear idea of the ability of the study to detect a difference or association if one is present. Type 1 and Type 2 errors are a function of the sample size and thus can be controlled and minimized by taking an appropriate sample size for the research.

Apart from all the errors discussed above, there are some statistical errors that are very common in the published literature. These arise from statistical ignorance. Every researcher should have at least a cursory knowledge of biostatistics and should also seek the expertise of a biostatistician. There is no point in calling in a biostatistician at the end of the research to do the analysis; at best, he/she can say why the experiment is dead. Statistical design is as important as research design. A list of common mistakes is given below. [2]

  1. No control group (uncontrolled trial).
  2. No randomization.
  3. Lack of blinding.
  4. Misleading analysis of baseline characters (confounding).
  5. Inadequate sample size (Type 2 error).
  6. Overlarge sample size (Type 1 error).
  7. Multiple testing (data dredging).
  8. Misuse of Student's t-test.
  9. Misuse of chi-square test.
  10. Presenting standard error (SE) with sample mean instead of SD.
  11. Misuse of correlation and linear regression analysis.
  12. Preoccupation with P values.
  13. Not presenting confidence intervals apart from the summary statistic.
  14. Overlapping diagnostic tests and predictive equations.

Since 1989, there has been a substantial increase in the quantity and complexity of statistics seen in dental journals. [4] A detailed description of these statistical flaws, often noticed in health research, is beyond the scope of this article; readers may refer to appropriate books on biostatistics. Unfortunately, statistical texts are often unhelpful, as they tend to be filled with technical detail. One can refer to the series of articles published in the British Dental Journal (BDJ) by Petrie et al. in 2002, titled "Further Statistics in Dentistry."

It is observed that some people have a poor level of trust in statistical methods. Statistical models are built on sound scientific principles, and only when one learns biostatistics can one appreciate their scientific and mathematical beauty. One should know when and how to use statistical models; otherwise, there are chances of misuse and abuse.

It was aptly said,
"It is easy to lie with statistics, but it is impossible to tell truth without it"

-Indian Statistical Institute

   Bottomline

There are innumerable ways in which health research may go wrong. Every intentional or unintentional error distorts the data and results. Perfect health research does not exist; eliminating and minimizing errors to the greatest possible extent is a must. As all errors cannot be eliminated completely, the results are to be interpreted keeping in mind that a certain amount of error is always present in health research. [5]

Financial support and sponsorship

Nil.

Conflicts of interest

There are no conflicts of interest.

   References

1. Aschengrau A, Seage GR 3rd. Essentials of Epidemiology in Public Health. 3rd ed. Burlington, MA: Jones & Bartlett Learning; 2008. p. 252-4, 270, 272, 275, 276, 281-3, 287-90, 299-310.
2. Myles PS, Gin T. Statistical Methods for Anaesthesia and Intensive Care. Edinburgh: Butterworth-Heinemann; 2000. p. 21-6.
3. Gordis L. Epidemiology. 4th ed. Pennsylvania: Elsevier; 2013. p. 251-6.
4. Moles D. Further statistics in dentistry: Introduction. Br Dent J 2002;193:375.
5. Nagesh L. Handbook on Journal Club and Critical Evaluation of Scientific Article. Bangalore: Swapra Jyothi Publications; 2011. p. 34, 49, 53.

   Authors

Dr. Nagesh Lakshminarayan is an alumnus of Government Dental College, Bangalore, Karnataka, India. He completed Bachelor of Dental Surgery (BDS) in 1986 and Master of Dental Surgery (MDS) (Preventive and Social Dentistry) in 1992. He is at present affiliated to the Institute of Dental Sciences, Bareilly, Uttar Pradesh, India as Professor and Head of the Department of Public Health Dentistry. He has 65 national and international publications to his credit.
He completed his postgraduate (PG) diploma in Health Professions Education offered by KLE University in collaboration with the University of Illinois at Chicago, Illinois, USA in 2014. He has been an invited expert committee member of the World Health Organization [South-East Asia Regional Office (SEARO) region] on two occasions. He served as oral health research consultant in 2008 for Glaxo Smith Kline (GSK), a pharmaceutical company for a field project in India.
He is a keen learner and a sought-after speaker in research methodology, biostatistics, bioethics, journal clubs, critical appraisal of scientific articles, health professions' education, practice management, and spiritual science topics. He has conducted many workshops on these topics for faculties and PG students in various medical and dental colleges across India. He has authored a manual "What, Why, and How of Journal Clubs — critical evaluation of a scientific article." The manual is popular among faculties and PG students and used as a handbook to hone critical appraisal skills.
He has been a member of the Theosophical Society, Karnataka State Theosophical Federation, for the last 15 years and a national speaker of the Indian Section of the Theosophical Society for the last 2 years. He has delivered many lectures on science, philosophy, and spirituality from the Theosophical Society platform. He is an ardent believer in theosophical principles and integrates science, spirituality, and philosophy in his lectures.



