Scientific biases

Here is a selection of other scientific biases often encountered in research work: Attrition bias, Ceiling effect, Demand characteristics, Hawthorne effect, Omitted variable bias, Placebo effect, Publication bias, Pygmalion effect, Self-fulfilling prophecy.

Attrition bias

Attrition is the dropping out of participants over time in research studies. It is also called subject mortality, though this does not always refer to the death of participants!

Attrition bias poses a threat to internal validity. In experiments, differential attrition rates between treatment and control groups may confound results.

This type of research bias can affect the relationship between your independent and dependent variables. This can make variables appear to be correlated when they are not, or vice versa.

Attrition bias can skew your sample so that your final sample is significantly different from your original sample. Your sample is biased because certain groups in your population are underrepresented.

With a biased final sample, you may not be able to generalize your results to the original population you sampled from, compromising your external validity.

It is best to account for attrition bias in your study to obtain valid results. If the bias is small, you can select a statistical method to try to compensate for it.

These methods help you recreate as much missing data as possible, without sacrificing accuracy.

Multiple imputation involves using simulations to replace missing data with probable values. You insert several possible values in place of each missing value, creating many complete data sets.

These values, called multiple imputations, are generated repeatedly using a simulation model to account for variability and uncertainty. You analyze all of your complete data sets and combine the results to get estimates of your mean, standard deviation, or other parameters.
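As an illustration of this workflow, here is a minimal Python sketch using scikit-learn's IterativeImputer; the toy data set and the choice of five imputations are assumptions made for the example, not a recommendation.

```python
# A minimal sketch of multiple imputation with scikit-learn.
# The toy data and the choice of 5 imputations are illustrative
# assumptions; any MICE-style tool follows the same pattern.
import numpy as np
from sklearn.experimental import enable_iterative_imputer  # noqa: F401
from sklearn.impute import IterativeImputer

# Toy data set with values lost to attrition (np.nan).
data = np.array([
    [25.0, 3.1],
    [31.0, np.nan],
    [np.nan, 4.7],
    [42.0, 5.2],
    [38.0, np.nan],
])

# Create several completed data sets, each from a different random draw,
# then pool the per-data-set estimates.
column_means = []
for seed in range(5):
    imputer = IterativeImputer(sample_posterior=True, random_state=seed)
    completed = imputer.fit_transform(data)
    column_means.append(completed.mean(axis=0))

pooled_mean = np.mean(column_means, axis=0)
print("Pooled column means:", pooled_mean)
```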

You can use sample weighting to compensate for the unequal balance of participants in your sample.

You adjust your data so that the overall composition of the sample reflects that of the population. Data from participants similar to those who left the study are overweighted to compensate for attrition bias.
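Here is a minimal sketch of such reweighting; the population shares, sample shares, and scores are hypothetical.

```python
# A minimal sketch of post-stratification weighting. The population
# shares, sample shares, and scores are hypothetical.
import numpy as np

population_share = {"A": 0.7, "B": 0.3}   # known composition of the population
sample_share = {"A": 0.5, "B": 0.5}       # composition after attrition

groups = np.array(["A", "B", "A", "B", "A", "B"])
scores = np.array([3.0, 5.0, 4.0, 6.0, 2.0, 5.5])

# Each participant is weighted by population share / sample share,
# so underrepresented groups count for more.
weights = np.array([population_share[g] / sample_share[g] for g in groups])

print(f"Unweighted mean: {scores.mean():.2f}")
print(f"Weighted mean:   {np.average(scores, weights=weights):.2f}")
```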

Ceiling effect

A ceiling effect occurs when too large a percentage of participants achieve the highest score on a test. In other words, when test takers' scores are all clustered near the best possible score, or the "ceiling," the metric loses its value. This phenomenon is problematic because it defeats the purpose of testing, which is to measure something accurately.

In medicine and pharmacology, a ceiling effect refers to the phenomenon in which a drug reaches maximum effect, such that an increase in dosage does not increase its effectiveness. For example, researchers sometimes observe that there is a threshold above which a painkiller no longer has any additional effect. Even if they increase the dose, there is no additional benefit in terms of pain relief. In this context, the ceiling effect is due to human biology.

In social science statistics, a ceiling effect refers to the phenomenon in which the majority of the data is close to the upper limit, or highest possible score, of a test. This means that (almost) all test takers scored at or very close to the maximum.

Ceiling effects can degrade the quality of the data you collect, so it is important to take measures to prevent them. There are a few strategies you can use to avoid ceiling effects in your research:

Use previously validated instruments, such as pre-existing questionnaires measuring the concept of interest. This way you can ensure that the questionnaire will allow you to collect a wide range of responses.

If no such instrument exists, conduct a pilot survey or experiment to check for ceiling effects. A small-scale test of your survey gives you the opportunity to adjust your questions if you notice a ceiling effect (a quick way to check for one is sketched after this list).

When your survey includes sensitive or personal topics, such as questions about income or drug use, ensure anonymity and do not set artificial limits on responses. Instead, you can let participants fill in the highest value themselves.
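As a rough sketch, checking pilot data for a ceiling effect can be as simple as computing the share of responses at the maximum score; the responses and the 20% flag below are illustrative assumptions, not a fixed standard.

```python
# A rough sketch of a ceiling-effect check on pilot data.
# The responses and the 20% threshold are illustrative assumptions.
import numpy as np

responses = np.array([5, 5, 4, 5, 5, 5, 3, 5, 5, 4])  # 1-5 Likert item
max_score = 5

ceiling_rate = np.mean(responses == max_score)
if ceiling_rate > 0.20:
    print(f"Possible ceiling effect: {ceiling_rate:.0%} of responses at the maximum")
```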

Demand characteristics and the Hawthorne effect

In research, demand characteristics are cues that can indicate to participants the research objectives. These cues can lead participants to change their behaviors or responses based on how they feel about the research.

Demand characteristics are problematic because they can bias your study results. They typically occur in psychology experiments and social science studies because these involve human participants.

Demand characteristics can come from many sources:

– The title of the study on recruitment materials;

– Rumors about the study;

– Interactions of the researcher with the participant (e.g., a smile or frown after a response);

– The study procedure (e.g., the order of tasks);

– The study setting (e.g., a laboratory environment);

– Tools and instruments (e.g., video cameras, skin conductance measurements).

The Hawthorne effect refers to the tendency of people to behave differently when they realize they are being observed. As a result, what is observed may not represent “normal” behavior, threatening the internal and external validity of your research.

When you have demand characteristics, the internal validity of your experiment is not guaranteed. You cannot say with certainty that manipulating your independent variable alone caused your dependent variable to change.

The external validity of your experiment is also compromised by demand characteristics. The presence of these cues may mean that your results cannot be generalized to people or contexts outside of your study.

You can control demand characteristics by taking some precautions in your research design and materials. These methods will help minimize the risk of demand characteristics affecting your study.

You may use deception to hide the purpose of the study from participants. Deception may mean hiding certain information from participants or actively misleading them about the tasks, materials, or goals of the study.

From an ethical perspective, deception can be used in research when it is justifiable and there is no risk of harm. You should always inform participants of the true objectives of the study after they have completed it.

In quantitative research, you typically use a between-groups or within-groups design. In a between-groups design, participants receive only one treatment of the independent variable, whereas in a within-groups design they receive all of them. Because participants in a between-groups design see only one condition, they are less likely to guess the study's purpose, which reduces demand characteristics.

When you use blinding or masking in medicine, you conceal from participants whether they are in a treatment group or a control group. In a single-blind design, you know the condition assigned to the participant, whereas in a double-blind design, neither you nor the participants know the condition assigned.
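As an illustration, here is one way a third party might generate blinded assignments; the participant IDs and the opaque codes are hypothetical.

```python
# A minimal sketch of double-blind assignment. Participant IDs and the
# opaque codes "A"/"B" are hypothetical; a third party keeps the key.
import random

random.seed(42)
participants = [f"P{i:02d}" for i in range(1, 9)]

# Half treatment, half control, in random order.
conditions = ["treatment"] * 4 + ["control"] * 4
random.shuffle(conditions)

unblinding_key = dict(zip(participants, conditions))  # sealed until analysis
code_map = {"treatment": "A", "control": "B"}

# The experimenter and participants only ever see the opaque codes.
experimenter_view = {p: code_map[c] for p, c in unblinding_key.items()}
print(experimenter_view)
```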

In psychology, implicit (hidden) measures help you record cognitive abilities, traits, or behaviors that people may not be open about or be able to report. These measures indirectly assess attitudes or character traits without explicitly asking participants to report their experiences.

Omitted variable bias

Omitted variable bias occurs when a statistical model fails to include one or more relevant variables. In other words, this means you missed an important factor in your analysis.

As a result, the model erroneously attributes the effect of the missing variable to the included variables. Excluding important variables can limit the validity of your study results.

An omitted variable is a confounding variable related to both the hypothesized cause and the hypothesized effect of a study. In other words, it is related to both the independent variable and the dependent variable.

Although a variable can be omitted because you don't know it exists, it is also possible to omit variables that you cannot measure, even if you know they exist.

An omitted variable is a source of endogeneity. Endogeneity occurs when a variable in the error term is also correlated with an independent variable. When this happens, the causal effect of the omitted variable becomes entangled in the coefficient of the variable with which it is correlated. This, in turn, undermines our ability to infer causality and has serious consequences for our results.

Omitting a variable may result in overestimation (upward bias) or underestimation (downward bias) of the coefficient of your independent variable(s). Since the coefficient becomes unreliable, the regression model also becomes unreliable.
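A small simulation can make this concrete. In the sketch below, all coefficients are made up: z confounds both x and y, and omitting z inflates the estimated coefficient on x.

```python
# A minimal simulation of omitted variable bias. All coefficients are
# made up: the true effect of x on y is 1.0, and z confounds both.
import numpy as np

rng = np.random.default_rng(0)
n = 10_000
z = rng.normal(size=n)                        # confounder
x = 0.8 * z + rng.normal(size=n)              # x correlated with z
y = 1.0 * x + 2.0 * z + rng.normal(size=n)    # y depends on both

# Full model: regress y on [x, z].
beta_full, *_ = np.linalg.lstsq(np.column_stack([x, z]), y, rcond=None)

# Misspecified model: omit z, so x's coefficient absorbs z's effect.
beta_omit, *_ = np.linalg.lstsq(x.reshape(-1, 1), y, rcond=None)

print(f"coefficient on x with z:    {beta_full[0]:.2f}")   # ~1.0
print(f"coefficient on x without z: {beta_omit[0]:.2f}")   # biased upward
```

Here the bias is upward because z is positively correlated with both x and y; flipping either sign would push the estimate downward instead.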

Where possible, include the missing factor as a control variable. If the required data is not available, as is often the case for hard-to-measure traits such as ability, use proxies for the omitted variables: variables similar enough to the omitted variable to give you an idea of its value, but that you can actually measure.

If you can't resolve the bias, try to predict in which direction your estimates are biased. This is called “signing” the bias: you determine whether it is positive or negative, which helps you assess the likely impact of the omitted variable.

Placebo and Nocebo effect

The placebo effect is a phenomenon in which people report real improvement after taking a fake or inactive treatment, called a placebo. Since a placebo cannot actually cure any disease, any reported beneficial effects are due to a person's belief or expectation that their disease is being treated. The placebo effect is often observed in experimental designs where participants are randomly assigned to a control group or a treatment group.

A placebo can be a sugar pill, a salt water injection, or even a fake surgery. In other words, a placebo has no therapeutic properties. Placebos are often used in medical research and clinical trials to help scientists evaluate the effects of new drugs.

In these clinical trials, participants are randomly assigned to either the placebo or the experimental drug. Crucially, they do not know which treatment they are receiving. The results of the two groups are then compared to see if they differ. In double-blind studies, the researchers also don't know who received the actual treatment and who received the placebo. This prevents them from transmitting demand characteristics to participants, which could influence the results of the study.

The response of people assigned to the placebo control group is not always positive. They may experience what is called a “nocebo effect”: a negative outcome from taking a placebo. The same explanation applies here: if you expect a negative result, you are more likely to experience one.

For example, in a clinical trial, participants receive a placebo but are informed of the side effects that the “treatment” may cause. They may have the same side effects as participants who receive the active treatment, only because they expect them to occur.

Many studies examine the placebo and nocebo effects and how to account for them in clinical results. We will not cover all of these theories on this site; we invite you to consult the latest research in this area to understand how to take these two effects into account in your own work.

Publication bias

Publication bias refers to the selective publication of research studies based on their results. Here, studies with positive results are more likely to be published than studies with negative results.

Positive results are also likely to be published more quickly than negative results. As a result, a bias is introduced: the results of published studies systematically differ from the results of unpublished studies.

A number of factors can lead to publication bias:

– Often, researchers do not submit their negative results because they feel that their research has “failed” or is not interesting enough.

– In some cases, researchers may suppress negative results from clinical trials for fear of losing funding. This can happen, for example, when for-profit companies sponsor medical research.

– Researchers themselves are aware of publication bias. They know that if they submit positive results, they have a greater chance of being published in prestigious journals. This, in turn, can increase their reputation among their peers, the number of citations their papers generate, their chances of getting a grant, and so on. This may even lead them not to submit negative results at all.

– The financial situation of academic journals also depends on the number and frequency of citations generated by the studies they publish, as citations indicate how much a journal is noticed and respected. Since studies with negative results are less likely to be cited than studies with positive results, it is more attractive for journals to publish positive results.

In other words, researchers and editors introduce research bias into the process of determining which results are worthy of publication.

Publication bias can cause problems in your research for several reasons:

– It increases the likelihood that published results reflect Type I errors: published studies suggest larger effects than actually exist, and some of these effects may be due to chance alone. For example, this can lead to overestimation of the effectiveness of a new drug.

– Researchers may waste effort and resources conducting studies that have already been carried out but never published because the treatment or intervention was not shown to be effective.

– It affects the quality of literature reviews. A literature review limited to published studies is highly selective and may lead to overestimated effects.

– Failure to publish null results because they “didn't work” limits our ability to fully understand a scientific topic. While strong results point to effective treatments or interventions, unpublished null results mean that much of the topic remains hidden or unknown.

– It means that published studies no longer constitute a representative sample of the available evidence. This bias can distort the results of systematic reviews and meta-analyses, i.e., statistical analyses combining the results of several studies on the same topic. When not taken into account, publication bias compromises their conclusions.

– It may lead some researchers to manipulate their results to ensure statistical significance. An example of this is data dredging: running statistical tests on a data set until something statistically significant appears (a small simulation of this follows below).
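To make the last point concrete, here is a purely illustrative simulation in which testing enough unrelated variables almost guarantees a spurious “significant” result.

```python
# A purely illustrative simulation of data dredging: with 20 tests at
# alpha = 0.05, roughly one spurious "finding" is expected by chance.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
y = rng.normal(size=100)

for k in range(20):
    x = rng.normal(size=100)          # unrelated to y by construction
    r, p = stats.pearsonr(x, y)
    if p < 0.05:
        print(f"variable {k}: r = {r:.2f}, p = {p:.3f}  <- spurious 'finding'")
```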

Pygmalion effect and self-fulfilling prophecy

The Pygmalion effect refers to situations in which high expectations lead to improved performance and low expectations lead to degraded performance. If a researcher has a high expectation that patients assigned to the treatment group will do well, those patients may perform better than the control group. In this example, the Pygmalion effect takes the form of (unconscious) researcher bias.

Let's explain this with an example: You are conducting longitudinal research on the effectiveness of several bank branch managers over a one-year period.

Every few months, each manager receives a performance review. Those who fail to meet their revenue goals automatically receive a negative review. To avoid further negative feedback, you observe that these managers are more likely to offer safe but less profitable loans.

This leads to a loss of customers to competitors and further negative reviews for these managers. To reverse the situation, the branch managers then begin to accept as many loans as possible, even the riskiest ones. This, too, lowers branch profits, as these borrowers are more likely to default.

After conducting semi-structured interviews with the branch managers, you realize that their erratic behavior was an effort to avoid further damage to their careers and self-esteem rather than a lapse in judgment. You notice that managers who receive negative feedback, in particular, become less effective over time.

You conclude that the Pygmalion effect played a role in the behavior of managers.

Branch managers viewed a negative performance evaluation as a sign of failure and of distrust in their ability to perform. They internalized this belief, and it affected their actions. Their poor decisions led to even worse performance, confirming their superiors' expectations.

Often the Pygmalion effect is linked to a self-fulfilling prophecy: a belief about a future outcome that contributes to its own fulfillment. This happens because the unconscious expectations we hold can influence our actions and ultimately cause the initial prediction to come true.

Self-imposed prophecies occur when the beliefs we have about ourselves influence our own behaviors and actions. Our beliefs can be either negative and limiting, or positive and motivating.

Imposed prophecies occur when the expectations of others influence our behavior and actions. For example, a parent who views their child as “bright” or “lazy” may treat them accordingly. As a result, the child's behavior may be influenced positively or negatively by their parents' expectations.

Self-fulfilling prophecies and the Pygmalion Effect are a great example of how our thoughts and beliefs can lead to consequences that we expected or feared (even if it is on a subconscious level).
