Here is a selection of other scientific biases often encountered in research: attrition bias, ceiling effect, demand characteristics, Hawthorne effect, omitted variable bias, placebo effect, publication bias, Pygmalion effect, and self-fulfilling prophecy.
Attrition is the dropping out of participants from research studies over time. It is also referred to as subject mortality, although this does not usually mean the death of the participants.
Attrition bias poses a threat to internal validity. In experiments, differential attrition rates between treatment and control groups can skew the results.
This type of research bias can affect the relationship between your independent and dependent variables. This can make variables appear to be correlated when they are not, or vice versa.
Attrition bias can skew your sample so that your final sample is significantly different from your original sample. Your sample is biased because certain groups within your population are underrepresented.
With a biased final sample, you may not be able to generalize your results to the original population from which you sampled, which will compromise your external validity.
It is best to try to account for attrition bias in your study to obtain valid results. If the bias is small, you can select a statistical method to try to compensate for it.
These methods help you recreate as much missing data as possible, without sacrificing accuracy.
Multiple imputation involves using simulations to replace missing data with probable values. You insert several possible values in place of each missing value, thus creating many complete datasets.
These values, called multiple imputations, are generated repeatedly using a simulation model to account for variability and uncertainty. You analyze all your complete datasets and combine the results to obtain estimates of your mean, standard deviation, or other parameters.
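As a rough illustration, the multiple-imputation idea can be sketched in a few lines of Python. The scores and the simple draw from the observed distribution are hypothetical; real implementations use richer imputation models, but the loop-impute-pool structure is the same.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical outcome scores, with some values lost to attrition (NaN).
scores = np.array([72.0, np.nan, 65.0, 80.0, np.nan, 77.0, 69.0, 74.0])

observed = scores[~np.isnan(scores)]
n_missing = int(np.isnan(scores).sum())
m = 20  # number of imputed (completed) datasets

means = []
for _ in range(m):
    completed = scores.copy()
    # Draw plausible replacements from the observed distribution, so each
    # completed dataset reflects the uncertainty of the imputation.
    completed[np.isnan(completed)] = rng.normal(
        observed.mean(), observed.std(ddof=1), size=n_missing
    )
    means.append(completed.mean())

# Pool the analyses: the final estimate is the average across datasets.
pooled_mean = np.mean(means)
print(round(pooled_mean, 1))
```

Analyzing each completed dataset and pooling, rather than imputing once, is what keeps the standard errors honest about the missing data.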
You can use sample weighting to compensate for the unequal balance of participants in your sample.
You adjust your data so that the overall composition of the sample reflects that of the population. Data from participants similar to those who left the study are overweighted to compensate for attrition bias.
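A minimal sketch of such post-stratification weighting, assuming a hypothetical two-group sample where one group was hit harder by attrition (group labels, shares, and scores are made up):

```python
import numpy as np

# Hypothetical example: the population is 50% men / 50% women, but after
# attrition the final sample is 60/40.
group = np.array(["m", "m", "m", "m", "m", "m", "w", "w", "w", "w"])
satisfaction = np.array([6.0, 7.0, 5.0, 6.5, 7.5, 6.0, 8.0, 9.0, 8.5, 9.5])

population_share = {"m": 0.5, "w": 0.5}

# Weight each respondent by population share / sample share, so the
# underrepresented group counts for more in the analysis.
sample_share = {g: np.mean(group == g) for g in population_share}
weights = np.array([population_share[g] / sample_share[g] for g in group])

unweighted = satisfaction.mean()
weighted = np.average(satisfaction, weights=weights)
print(round(unweighted, 2), round(weighted, 2))
```

Here the weighted mean is pulled toward the group that dropped out more, which is exactly the correction the text describes.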
A ceiling effect occurs when too large a percentage of participants achieve the highest score on a test. In other words, when test takers' scores are all clustered near the best possible score, or the "ceiling," the metric loses its value. This phenomenon is problematic because it defeats the purpose of testing, which is to measure something accurately.
In medicine and pharmacology, a ceiling effect refers to the phenomenon where a drug reaches a maximum effect, such that increasing the dosage does not increase its effectiveness. For example, researchers sometimes observe that there is a threshold above which an analgesic has no further effect. Even if they increase the dose, there is no additional benefit in terms of pain relief. In this context, the ceiling effect is due to human biology.
A ceiling effect associated with statistics in the social sciences refers to the phenomenon in which the majority of data points are close to the upper limit or the highest possible score on a test. This means that (almost) all test participants obtained the highest score (or very close to the highest score).
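A quick diagnostic is to compute what share of participants score at (or very near) the maximum; thresholds vary in the literature, but a large share at the top is a warning sign. A sketch with hypothetical scores:

```python
import numpy as np

# Hypothetical test scores on a 0-100 scale.
scores = np.array([100, 98, 100, 100, 97, 100, 99, 100, 95, 100])
max_score = 100

# Share of test takers exactly at the ceiling, and within 5% of it.
at_ceiling = np.mean(scores == max_score)
near_ceiling = np.mean(scores >= 0.95 * max_score)

print(at_ceiling, near_ceiling)
```

With the majority of scores clustered at the maximum, as here, the test cannot discriminate between the strongest participants.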
Ceiling effects can impact the quality of your data collection. It is really important to take the necessary measures to prevent this phenomenon. There are a few strategies you can use to avoid ceiling effects in your research:
Use previously validated instruments, such as pre-existing questionnaires measuring the concept of interest. This way you can ensure that the questionnaire will allow you to collect a wide range of responses.
If no such instrument exists, conduct a pilot survey or experiment to verify ceiling effects. Performing a small-scale test of your survey will give you the opportunity to adjust your questions in case you notice a ceiling effect.
When your survey includes sensitive or personal topics, such as questions about income or drug use, ensure anonymity and do not set artificial limits on responses. Instead, you can let participants enter the highest value themselves.
In research, demand characteristics are cues that can indicate to participants the research objectives. These cues can lead participants to change their behaviors or responses based on how they feel about the research.
Demand characteristics are problematic because they can bias your research results. They typically occur in psychology experiments and social science studies because these involve human participants.
Demand characteristics can come from many sources, including:
– the study title on recruitment materials;
– rumors about the study;
– researcher interactions with participants (e.g., a smile or frown after a response);
– the study procedure (e.g., the order of tasks);
– the study setting (e.g., a laboratory environment);
– tools and instruments (e.g., video cameras, skin conductance measurements).
The Hawthorne effect refers to the tendency for people to behave differently when they realize they are being observed. Consequently, what is observed may not represent "normal" behavior, threatening the internal and external validity of your research.
When you have demand characteristics, the internal validity of your experiment is not guaranteed. You cannot say with certainty that manipulating your independent variable alone caused the change in your dependent variable.
The external validity of your experiment is also compromised by demand characteristics. The presence of these cues may mean that your results cannot be generalized to people or contexts outside of your study.
You can control demand characteristics by taking some precautions in your research design and materials. These methods will help minimize the risk of demand characteristics affecting your study.
You may use deception to hide the purpose of the study from participants. Deception may mean hiding certain information from participants or actively misleading them about the tasks, materials, or goals of the study.
From an ethical standpoint, deception can be used in research when it is justifiable and there is no risk of harm. You must always inform participants of the true objectives of the study once they have completed it.
In quantitative research, you typically use a between-groups or within-groups design. While participants receive only one independent variable treatment in a between-groups design, they receive all independent variable treatments in a within-groups design.
When using blinding or masking in medicine, you conceal from participants whether they are in a treatment group or a control group. In a single-blind design, participants do not know their assigned condition, but you do; in a double-blind design, neither you nor the participants know the assigned condition.
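A minimal sketch of how double-blind allocation can be organized: conditions are randomized and hidden behind opaque kit codes, and only a third party holds the code-to-condition table. Participant IDs and kit codes here are made up; real trials use dedicated randomization systems.

```python
import random

random.seed(42)

participants = ["P01", "P02", "P03", "P04", "P05", "P06"]
conditions = ["treatment", "control"] * (len(participants) // 2)
random.shuffle(conditions)

# The allocation table maps each participant to a condition. Only a third
# party (e.g., the trial pharmacy) keeps it; experimenters and participants
# see only the opaque kit codes, so both sides stay blind.
allocation = {p: c for p, c in zip(participants, conditions)}
kit_codes = {p: f"KIT-{i:03d}" for i, p in enumerate(participants, start=1)}

# What the experimenter works with: codes only, no conditions.
blinded_view = [kit_codes[p] for p in participants]
print(blinded_view)
```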
In psychology, implicit (hidden) measures help you record cognitive abilities, traits, or behaviors that people may not be open about or be able to report. These measures indirectly assess attitudes or character traits without explicitly asking participants to report their experiences.
Omitted variable bias occurs when a statistical model fails to include one or more relevant variables. In other words, it means you have omitted an important factor from your analysis.
Consequently, the model mistakenly attributes the effect of the missing variable to the included variables. Excluding important variables can limit the validity of your study results.
An omitted variable is a confounding variable related to both the hypothesized cause and the hypothesized effect of a study. In other words, it is related to both the independent variable and the dependent variable.
A variable may be omitted because you do not know it exists, but it is also possible to omit variables that you know exist yet cannot measure.
An omitted variable introduces endogeneity. Endogeneity occurs when a variable in the error term is also correlated with an independent variable. When this happens, the causal effect of the omitted variable becomes entangled in the coefficient of the variable with which it is correlated. This, in turn, undermines our ability to infer causality and has serious consequences for our results.
Omitting a variable can lead to an overestimation (upward bias) or an underestimation (downward bias) of the coefficient of your independent variable(s). Since the coefficient becomes unreliable, the regression model also becomes unreliable.
If the required data is unavailable, as is often the case for hard-to-measure traits such as ability, you can add control variables. If you cannot collect the data at all, use proxies for the omitted variables. These are variables similar enough to the omitted variable to give you an idea of its value, but which you can measure.
If you can't eliminate omitted variable bias, try to predict the direction in which your estimates are biased. This is called "signing" the bias: you determine whether it is positive or negative, which helps you assess the likely size and direction of the omitted variable bias.
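The mechanics, including signing the bias, can be demonstrated with a small simulation. All numbers are hypothetical: z plays the role of an unmeasured confounder (say, ability) that raises both x (say, education) and y (say, wage).

```python
import numpy as np

rng = np.random.default_rng(1)
n = 10_000

z = rng.normal(size=n)                       # unmeasured confounder
x = 0.8 * z + rng.normal(size=n)             # x positively correlated with z
y = 2.0 * x + 1.5 * z + rng.normal(size=n)   # true effect of x on y is 2.0

def ols_slope(regressors, y):
    # Least-squares coefficient on the first regressor, with an intercept.
    X = np.column_stack([np.ones(len(y))] + list(regressors))
    return np.linalg.lstsq(X, y, rcond=None)[0][1]

full = ols_slope([x, z], y)   # z included: estimate close to the true 2.0
short = ols_slope([x], y)     # z omitted: estimate biased upward

# Signing the bias: (effect of z on y is +) times (corr(x, z) is +)
# => positive bias, so the short regression overestimates x's coefficient.
print(round(full, 2), round(short, 2))
```

Because both signs are positive, the omitted-variable bias is positive (upward); flipping either sign would flip the direction of the bias.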
The placebo effect is a phenomenon in which people report real improvement after taking a sham or nonexistent treatment, called a placebo. Since a placebo cannot actually cure any disease, any reported beneficial effects are due to a person's belief or expectation that their condition is being treated. The placebo effect is often observed in experimental designs where participants are randomly assigned to a control group or a treatment group.
A placebo can be a sugar pill, a salt water injection, or even a fake surgery. In other words, a placebo has no therapeutic properties. Placebos are often used in medical research and clinical trials to help scientists evaluate the effects of new drugs.
In these clinical trials, participants are randomly assigned to either a placebo or the experimental drug. Crucially, they do not know which treatment they are receiving. The results of the two groups are compared to see if they differ. In double-blind studies, researchers also do not know who received the actual treatment or the placebo. This is to prevent them from conveying demand characteristics to participants that could influence the study results.
The response of people assigned to the placebo control group is not always positive. They may experience what is called a "nocebo effect," or a negative outcome, when they take a placebo. The same explanation applies: if you expect a negative result, you are more likely to experience one.
For example, in a clinical trial, participants receive a placebo but are informed of the side effects that the "treatment" may cause. They may experience the same side effects as participants receiving the active treatment, simply because they expect them to occur.
Numerous studies examine the placebo and nocebo effects and how to account for them in clinical outcomes. We will not present all the theories on this site. We encourage you to review the latest research on this topic to understand how to consider these two effects in your work.
Publication bias refers to the selective publication of research studies based on their results. In this context, studies with positive results are more likely to be published than studies with negative results.
Positive results are also likely to be published more quickly than negative results. As a result, a bias is introduced: the results of published studies systematically differ from the results of unpublished studies.
A number of factors can lead to publication bias:
– Often, researchers do not submit their negative results because they feel their research has "failed" or is not interesting enough.
– In some cases, researchers may suppress negative results from clinical trials for fear of losing their funding. This can happen, for example, when for-profit companies sponsor medical research.
Researchers themselves are aware of publication bias. They know that if they submit positive results, they are more likely to have their work published in prestigious journals. This, in turn, can increase their reputation among their peers, the number of citations generated by their articles, their chances of obtaining a grant, and so on. It could even lead them to withhold further results.
The financial situation of academic journals also depends on the number and frequency of citations generated by their published studies. These are an indication of the degree to which a journal is noticed or respected. Since studies with negative results are less likely to be cited than those with positive results, it is more advantageous for journals to publish positive findings.
In other words, researchers and editors introduce research bias into the process of determining which results are worthy of publication.
Publication bias can cause problems in your research for several reasons:
– It increases the likelihood that published results reflect Type I errors, i.e., effects that are actually due to chance. These spurious effects are then amplified, suggesting to future studies larger effects than actually exist. For example, this can lead to an overestimation of the effectiveness of a new drug.
– Researchers may be wasting their efforts and resources by conducting studies that have already been done but not published because the treatment or intervention has not proven effective.
– It affects the quality of literature reviews. A literature review limited to published studies is highly selective and may lead to overestimated effects.
– The failure to publish null results because they "didn't work" limits our ability to fully understand all aspects of a scientific subject under study. While robust published results indicate effective treatments or interventions, the failure to publish null results means that much of the subject remains hidden or unknown.
This means that published studies no longer constitute a representative sample of available knowledge. This bias can skew the results of systematic reviews using meta-analyses or statistical analyses that combine the results of several studies focused on the same topic. When it is not taken into account, publication bias compromises the results.
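The distortion of such pooled estimates can be illustrated with a toy simulation. Here a crude "publish only significant, positive results" filter (purely illustrative) is applied to many small hypothetical studies of a treatment with a small true effect:

```python
import numpy as np

rng = np.random.default_rng(7)

true_effect = 0.1   # hypothetical small true effect of a treatment
n_studies, n_per_study = 500, 30

published, all_effects = [], []
for _ in range(n_studies):
    sample = rng.normal(true_effect, 1.0, size=n_per_study)
    est = sample.mean()
    se = sample.std(ddof=1) / np.sqrt(n_per_study)
    all_effects.append(est)
    # Crude publication filter: only significant, positive results
    # (z > 1.96) make it into the literature.
    if est / se > 1.96:
        published.append(est)

# The literature-based average overstates the true effect, because only
# studies that happened to land high got "published".
print(round(np.mean(all_effects), 2), round(np.mean(published), 2))
```

This is why meta-analyses that ignore publication bias tend to overestimate effects, and why tools such as funnel plots are used to probe for missing studies.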
This can lead some researchers to manipulate their results to ensure statistically significant outcomes. One example of this is resorting to data dredging or running statistical tests on a dataset until something statistically significant occurs.
The Pygmalion effect refers to situations in which high expectations lead to improved performance and low expectations lead to impaired performance. If a researcher has high expectations for the patients in the treatment group, these patients may achieve better results than the control group. In this example, the Pygmalion effect takes the form of an (unconscious) bias on the part of the researcher.
Let's explain this with an example: You are leading a longitudinal study on the effectiveness of several bank branch managers over a one-year period.
Every few months, each manager receives a performance review, and those who fail to meet their revenue targets automatically receive a negative assessment. You observe that, to avoid further negative feedback, these managers become more likely to offer safe but less profitable loans.
This leads to a loss of customers to competitors and further negative reviews for these managers. To reverse the trend, the branch managers then begin accepting as many loans as possible, even the riskiest ones. This, too, reduces the branches' profits, as borrowers are more likely to default.
After conducting semi-structured interviews with branch managers, you realize that their erratic behavior was an effort to avoid further damaging their careers and self-esteem, rather than a lack of judgment. You observe that managers who receive negative feedback, in particular, become less effective over time.
You conclude that the Pygmalion effect played a role in the behavior of managers.
Branch managers viewed the negative performance reviews as a sign of failure and a lack of confidence in their ability to perform. They internalized this belief, and it affected their actions. Their poor decisions led to even worse performance, confirming their superiors' expectations.
The Pygmalion effect is often linked to a self-fulfilling prophecy. This is a belief about a future outcome that contributes to its own fulfillment. This occurs because the unconscious expectations we hold can influence our actions and ultimately cause the initial prediction to come true.
Self-imposed prophecies occur when the beliefs we have about ourselves influence our own behaviors and actions. Our beliefs can be either negative and limiting, or positive and motivating.
Imposed prophecies occur when the expectations of others influence our behavior and actions. For example, a parent who views their child as "bright" or "lazy" may treat them accordingly. As a result, the child's behavior may be influenced positively or negatively by their parents' expectations.
Self-fulfilling prophecies and the Pygmalion effect are an excellent example of how our thoughts and beliefs can lead to consequences we expected or feared (even if this is at the subconscious level).