Experimental Research Design Explained: Types, Examples, and How to Choose


Quick Answer: What Experimental Research Design Is

The definition.
Experimental research design is a research methodology in which the researcher deliberately manipulates an independent variable and measures its effect on a dependent variable, while controlling for other factors that could influence the outcome. It's the only design that can establish a causal relationship between variables.

The three core elements.
Manipulation of an independent variable, random assignment of participants to conditions, and control over extraneous variables that might confound the results.

The three main types.
True experimental design (random assignment, control group), quasi-experimental design (no random assignment, but a comparison group), and pre-experimental design (limited controls, used for early-stage exploration).


An experimental research design is the gold standard for establishing whether one variable causes a change in another. By manipulating an independent variable and observing the effect on a dependent variable while holding other factors constant, researchers can draw causal conclusions that descriptive and correlational designs cannot support. This guide covers what experimental research design is, how it differs from non-experimental approaches, the main types and classic designs, the strengths and limitations of each, and how to choose the right experimental design for your study.


For a broader overview of research methodologies and where experimental design fits among them, see our research methodology guide for graduate students. For the foundational comparison between quantitative and qualitative approaches, see our article on quantitative vs qualitative research.


What Is Experimental Research Design?

Experimental research design is a structured approach to research in which the investigator actively manipulates one or more variables to observe their effect on an outcome. The variable being manipulated is called the independent variable. The variable being measured for change is the dependent variable. Other variables that might influence the outcome are either held constant, randomized across conditions, or measured so their effects can be statistically controlled.


The defining feature of an experimental design is the deliberate manipulation of the independent variable by the researcher. This distinguishes it from observational and correlational research, in which the researcher records what happens naturally without intervening. Only experimental designs can support causal inference. If you observe that students who study more get higher grades, you have a correlation. If you randomly assign students to study more or less and then observe a difference in grades, you have a causal claim grounded in an experiment.


The Three Core Elements of Experimental Research Design

Every well-constructed experimental study includes three core elements. Together they create the conditions under which a causal claim can be defended.


1. Manipulation of an Independent Variable

The researcher must actively change something. If the independent variable is a drug dosage, participants receive different doses by design. If the independent variable is a teaching method, classrooms are deliberately taught using different methods. Manipulation distinguishes an experiment from an observational study, where the researcher simply records what naturally occurs.


2. Random Assignment to Conditions

In a true experiment, participants are randomly assigned to the different conditions of the independent variable. Random assignment ensures that any pre-existing differences between participants are distributed evenly across conditions, so they can't systematically bias the results. This is the mechanism that gives true experimental designs their causal authority.
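To make the mechanics concrete, here is a minimal sketch of balanced random assignment in Python. The helper name and the round-robin approach are illustrative, not a standard library function; the key idea is that a shuffled order, not any participant characteristic, determines who lands in which condition.

```python
import random

def randomly_assign(participant_ids, conditions, seed=42):
    # Shuffle participants, then deal them into conditions round-robin
    # so group sizes stay balanced. Seeded here only for reproducibility;
    # a real study would document its randomization procedure.
    rng = random.Random(seed)
    shuffled = list(participant_ids)
    rng.shuffle(shuffled)
    assignment = {cond: [] for cond in conditions}
    for i, pid in enumerate(shuffled):
        assignment[conditions[i % len(conditions)]].append(pid)
    return assignment

groups = randomly_assign(range(40), ["treatment", "control"])
```

Because assignment depends only on the shuffle, any pre-existing differences among the 40 participants are spread across the two groups by chance rather than by selection.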


Random assignment isn't always possible. When researchers can't assign participants randomly, perhaps because they're studying naturally occurring groups like schools or clinics, the design becomes quasi-experimental rather than truly experimental. Quasi-experimental designs are still useful, but the causal claims they support are weaker. For more on threats to causal inference, see our research bias guide.


3. Control of Extraneous Variables

Extraneous variables are anything other than the independent variable that might influence the dependent variable. In an experiment testing whether a new teaching method improves learning, extraneous variables might include the time of day classes are taught, the experience level of the instructors, or differences in the textbooks used. The researcher controls these by holding them constant across conditions, by counterbalancing, or by including them as covariates in the analysis.


When extraneous variables aren't controlled, they can become confounding variables, factors that influence the dependent variable in ways that get mistakenly attributed to the independent variable. Confounding is one of the most common threats to validity in experimental research and a frequent source of weakness in published studies.


The Three Types of Experimental Research Design

Experimental research designs fall into three categories based on how rigorously they control for confounding and how strongly they support causal inference.


True Experimental Design

A true experimental design includes all three core elements: manipulation of the independent variable, random assignment of participants to conditions, and a control group that doesn't receive the intervention. Because random assignment evenly distributes pre-existing differences across conditions, any difference in outcomes between the experimental and control groups can be attributed to the manipulation itself.


True experimental designs are common in psychology, medicine, and education research. Randomized controlled trials in clinical research are the most familiar example. Patients are randomly assigned to receive either the new treatment or a placebo, and the outcome is measured at the end of the trial period. True experiments offer the strongest evidence for causal claims, which is why regulatory agencies like the FDA require them for new drug approvals.


Quasi-Experimental Design

A quasi-experimental design includes manipulation of the independent variable and a comparison group, but participants aren't randomly assigned to conditions. This usually happens when random assignment is impractical or unethical. For example, a researcher studying the effect of a new curriculum can't randomly reassign students between schools, so they compare schools that adopted the curriculum to similar schools that didn't.


Quasi-experimental designs are widely used in education research, public health, policy evaluation, and economics. They're more practical than true experiments in real-world settings, but the lack of random assignment means pre-existing differences between groups can confound the results. Researchers using quasi-experimental designs often use statistical techniques like matching, regression discontinuity, or difference-in-differences to strengthen causal claims.
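The difference-in-differences logic mentioned above reduces to simple arithmetic: subtract the comparison group's change from the treated group's change, which nets out any trend shared by both groups. The numbers below are purely illustrative, not from a real study.

```python
# Mean outcomes before and after a curriculum change (illustrative values).
treated_pre, treated_post = 62.0, 74.0    # schools that adopted the curriculum
control_pre, control_post = 60.0, 65.0    # similar schools that did not

# DiD estimate: treated-group change minus comparison-group change.
did_estimate = (treated_post - treated_pre) - (control_post - control_pre)
# 12-point gain minus the 5-point shared trend leaves a 7-point estimated effect.
```

The estimate is only credible if the two groups would have followed parallel trends absent the intervention, which is exactly the kind of assumption a quasi-experimental write-up needs to defend.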


Pre-Experimental Design

A pre-experimental design has the weakest controls. There's no random assignment and often no comparison group at all. Common pre-experimental designs include the one-shot case study (a single group is measured after an intervention) and the one-group pretest-posttest (a single group is measured before and after an intervention). These designs are sometimes used in exploratory research or pilot studies, but they can't rule out alternative explanations for any observed change. A pretest-posttest improvement might reflect the intervention, or it might reflect maturation, practice effects, or random fluctuation. Pre-experimental designs are useful for generating hypotheses, not for testing them rigorously.


Submitting an experimental study for journal review or graduate committee approval?

Editor World's native English research paper editors specialize in experimental research methodologies across disciplines. Choose your own editor by subject expertise, receive your manuscript back with Track Changes, and get an optional certificate of editing for journals that require one. BBB A+ accredited since 2010. 100% human editing, no AI at any stage.

Get Your Manuscript Edited

Classic Experimental Designs Within the True Experimental Framework

Within true experimental design, several specific structures are widely used. Each addresses a different methodological priority.


Posttest-Only Control Group Design

This is the simplest true experimental design. Participants are randomly assigned to either the experimental or control group, the experimental group receives the intervention, and both groups are measured on the outcome at the end. There's no pretest. This design avoids any influence the pretest itself might have on the outcome (a threat known as the testing effect), but it also can't show how individual participants changed from before to after the intervention.


Pretest-Posttest Control Group Design

Participants are randomly assigned to either the experimental or control group, both groups are measured before the intervention, the experimental group receives the intervention, and both groups are measured again afterward. The pretest lets the researcher verify that the groups were equivalent at baseline and measure change over time within each group. This is one of the most widely used designs in educational and psychological research.


Solomon Four-Group Design

A combination of the two designs above. Participants are randomly assigned to one of four groups: pretest plus intervention, pretest plus no intervention, no pretest plus intervention, no pretest plus no intervention. This design lets the researcher detect any interaction between the pretest and the intervention. It's more rigorous than either design alone, but it requires four times as many participants and is rarely used outside well-funded experimental studies.


Factorial Design

A factorial design manipulates two or more independent variables simultaneously, with participants randomly assigned to every possible combination of conditions. For example, a study might test the effect of two teaching methods (A or B) and two class sizes (small or large), creating four conditions: method A in small classes, method A in large classes, method B in small classes, and method B in large classes. Factorial designs let researchers test main effects (the effect of each independent variable on its own) and interaction effects (whether the effect of one variable depends on the level of another).
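The full crossing of factors described above can be generated mechanically. This sketch uses Python's `itertools.product` to enumerate the four cells of the 2x2 teaching-method by class-size example:

```python
from itertools import product

teaching_method = ["A", "B"]
class_size = ["small", "large"]

# Every combination of factor levels is one cell of the factorial design.
conditions = list(product(teaching_method, class_size))
# -> [("A", "small"), ("A", "large"), ("B", "small"), ("B", "large")]
```

Adding a third two-level factor would double the cell count to eight, which is why fully crossed factorial designs grow expensive quickly.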


Between-Subjects vs Within-Subjects Designs

A between-subjects design assigns different participants to each condition. Each participant experiences only one condition of the independent variable. A within-subjects design has the same participants experience all conditions in sequence. Within-subjects designs require fewer participants and control for individual differences (since each participant serves as their own control), but they introduce order effects: practice, fatigue, or carryover from earlier conditions can influence performance in later ones. Researchers using within-subjects designs typically counterbalance the order of conditions to distribute these effects evenly.
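Full counterbalancing for a within-subjects design means running every possible ordering of the conditions. A sketch, using three hypothetical conditions: with `itertools.permutations` there are six orders, and cycling participants through them puts each condition in each serial position equally often whenever the sample size is a multiple of six.

```python
from itertools import permutations

conditions = ["silence", "instrumental", "lyrics"]   # illustrative conditions

# Full counterbalancing: every ordering of the conditions (3! = 6 orders).
orders = list(permutations(conditions))

def order_for(participant_index):
    # Cycle participants through the six orders in turn.
    return orders[participant_index % len(orders)]
```

With more conditions, the number of orders grows factorially, so researchers often fall back on a Latin square, which balances serial position with far fewer orders.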


Strengths of Experimental Research Design

Experimental designs have specific advantages over other research approaches.


  • Causal inference. When properly executed, experimental designs are the only research approach that can establish causal relationships between variables. Correlational studies, no matter how large or sophisticated, can only describe associations.
  • Control over confounders. Random assignment and deliberate control of extraneous variables minimize the risk that observed effects are due to factors other than the independent variable.
  • Replicability. Experimental procedures are typically documented in enough detail that other researchers can reproduce the study, which is essential for building cumulative scientific knowledge.
  • Statistical power. Deliberate control of extraneous variables reduces error variance, which increases statistical power, and the structured nature of experimental designs makes them well suited to inferential statistics such as t-tests, ANOVA, and regression.
  • Clear hypothesis testing. Experimental designs are built around testing specific predictions, which forces researchers to think rigorously about what they expect and why.
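As a sketch of the kind of analysis a simple between-subjects comparison supports, here is Welch's t statistic computed by hand with Python's standard library. All scores are illustrative; a real analysis would also compute degrees of freedom and a p-value, typically with a statistics package.

```python
from statistics import mean, variance

def welch_t(sample_a, sample_b):
    # Welch's t statistic for two independent samples: the difference in
    # means divided by the standard error of that difference. The
    # statistics module's variance() uses the sample (n - 1) denominator.
    na, nb = len(sample_a), len(sample_b)
    se = (variance(sample_a) / na + variance(sample_b) / nb) ** 0.5
    return (mean(sample_a) - mean(sample_b)) / se

treatment = [78, 82, 75, 88, 80]   # illustrative posttest scores
control = [70, 74, 69, 73, 71]
t = welch_t(treatment, control)
```

A larger absolute t indicates a group difference that is large relative to the within-group variability, which is exactly what random assignment and extraneous-variable control are designed to make interpretable.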

Limitations and Threats to Validity

Experimental research isn't appropriate for every research question, and even well-designed experiments face threats to validity that researchers need to anticipate.


  • External validity. Tightly controlled laboratory experiments can produce results that don't generalize to real-world settings. This trade-off between internal validity (rigorous control) and external validity (real-world applicability) is a constant tension in experimental research.
  • Ethical constraints. Many important research questions can't be studied experimentally because manipulating the independent variable would be unethical. You can't randomly assign people to smoke or not smoke for decades to measure cancer risk. For these questions, quasi-experimental and observational designs are necessary.
  • Practical constraints. Experimental designs can be expensive, time-consuming, and logistically complex. Studies that span multiple years or require specialized equipment may not be feasible.
  • Demand characteristics and observer bias. Participants who know they're in an experiment may behave differently than they otherwise would. Researchers may also unconsciously influence participants in ways that bias the results. Blinding (single-blind or double-blind procedures) is often used to address these threats. For more on the cognitive and procedural threats that affect experimental research, see our article on research bias.
  • Hawthorne effect. Participants may improve their performance simply because they know they're being observed, regardless of the intervention itself. This can inflate the apparent effect of an experimental manipulation.

Experimental Research Design Examples

Three realistic examples show how experimental research designs are applied in practice.


Example 1: A Randomized Controlled Trial in Medicine

A pharmaceutical researcher tests whether a new medication reduces blood pressure in adults with hypertension. Participants are randomly assigned to one of two groups: the experimental group receives the new medication, and the control group receives an identical-looking placebo. Neither participants nor the clinicians measuring blood pressure know which group each participant is in (a double-blind procedure). Blood pressure is measured at baseline, then at 4, 8, and 12 weeks after the intervention begins. This is a true experimental design (pretest-posttest control group design with double-blinding), and it produces the kind of evidence regulatory agencies require for drug approval.


Example 2: An Educational Intervention Study

An education researcher wants to know whether a new reading curriculum improves comprehension scores among third graders. The researcher can't randomly assign students between schools, but they can randomly assign classrooms within a single large school district to either the new curriculum or the existing curriculum. Comprehension is measured before the intervention and again at the end of the school year. This is a quasi-experimental design (cluster randomization at the classroom level rather than individual randomization), and it provides useful evidence about curriculum effectiveness while respecting the practical constraints of school-based research.
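The cluster randomization in this example differs from individual randomization only in the unit being shuffled. A minimal sketch, with illustrative classroom names and counts:

```python
import random

# Cluster randomization: the classroom, not the student, is the unit of
# assignment. Twelve classrooms and the seed are illustrative choices.
classrooms = [f"class_{i}" for i in range(12)]

rng = random.Random(0)
shuffled = classrooms[:]
rng.shuffle(shuffled)

new_curriculum = set(shuffled[:6])
existing_curriculum = set(shuffled[6:])
```

Because students within a classroom share teachers and peers, their outcomes are correlated, so the analysis must account for clustering rather than treating each student as an independent observation.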


Example 3: A Graduate Student's Within-Subjects Experiment

A psychology graduate student investigates whether background music affects performance on a working memory task. Each participant completes the task under three conditions: silence, instrumental music, and music with lyrics. The order of conditions is counterbalanced across participants to control for order effects. This is a within-subjects experimental design. It requires fewer participants than a between-subjects equivalent and controls for individual differences, but the student needs to plan carefully to avoid fatigue and practice effects influencing the results.


How to Choose an Experimental Research Design

The right experimental design depends on the research question, the resources available, and the constraints of the setting. A few principles help with the choice.


  1. Start with the research question. If the question is causal (does X cause Y?), an experimental design is appropriate. If the question is descriptive or exploratory (what is the relationship between X and Y?), a correlational or qualitative design may fit better. For an introduction to choosing between approaches, see our article on quantitative vs qualitative research.
  2. Assess whether random assignment is possible. If yes, a true experimental design produces the strongest causal evidence. If not, a quasi-experimental design is the next-best option, with careful attention to threats to internal validity.
  3. Consider the population and sample. The number of participants you can recruit constrains which designs are feasible. A Solomon four-group design needs four times as many participants as a simple pretest-posttest design. For more on this, see our article on population vs sample in research.
  4. Weigh internal vs external validity. Tightly controlled laboratory studies maximize internal validity at the cost of generalizability. Field experiments increase external validity but introduce more potential confounders. The right balance depends on what the research is for.
  5. Plan for replication and reporting. Document the design in enough detail that another researcher could reproduce it. Pre-registration of the design and analysis plan, increasingly expected in many fields, requires the design to be specified in advance.

Reporting Experimental Research in Your Manuscript

When you write up an experimental study for a journal article, dissertation, or thesis, the methodology section needs to address every element of the design. Reviewers and committee members look for specific information.


  • A clear statement of the independent and dependent variables, with operational definitions
  • The type of design used (true experimental, quasi-experimental, or pre-experimental) and the specific structure (e.g., pretest-posttest control group, factorial)
  • Sample size, recruitment procedures, and how participants were assigned to conditions
  • A detailed description of the intervention, including who delivered it, where, when, and for how long
  • Measures used, including their psychometric properties (reliability and validity)
  • Procedures for controlling extraneous variables, including blinding where applicable
  • The statistical analysis plan, ideally specified before data collection began
  • Any deviations from the original design and how they were handled

Reviewers expect precision and transparency in this section. Vague methodology descriptions are a frequent cause of revision requests and rejections, particularly at higher-impact journals.



Frequently Asked Questions

What is experimental research design?

Experimental research design is a research methodology in which the researcher deliberately manipulates an independent variable and measures its effect on a dependent variable, while controlling for other factors that could influence the outcome. The defining feature is active manipulation of the independent variable rather than passive observation. Experimental designs are the only research approach that can establish causal relationships between variables, because random assignment of participants to conditions controls for pre-existing differences that might otherwise confound the results.


What are the three types of experimental research design?

The three main types are true experimental design, quasi-experimental design, and pre-experimental design. True experimental design includes random assignment of participants to conditions, a control group, and manipulation of an independent variable, producing the strongest evidence for causal claims. Quasi-experimental design includes manipulation and a comparison group but lacks random assignment, often because random assignment is impractical or unethical in the research setting. Pre-experimental design has the weakest controls, with no random assignment and often no comparison group, making it suitable for exploratory or pilot research rather than rigorous hypothesis testing.


What is the difference between true experimental and quasi-experimental design?

The critical difference is random assignment. In a true experimental design, participants are randomly assigned to the conditions of the independent variable, which controls for pre-existing differences between groups and supports strong causal inference. In a quasi-experimental design, participants aren't randomly assigned, usually because the groups already exist (such as students in different schools or patients at different clinics). Quasi-experimental designs are more practical in real-world settings but support weaker causal claims because pre-existing differences between groups can confound the results.


Why is random assignment important in experimental research?

Random assignment evenly distributes pre-existing differences between participants across the conditions of the independent variable. Without random assignment, the experimental and control groups might differ at baseline in ways that influence the outcome, making it impossible to know whether any observed effect is due to the manipulation or to those pre-existing differences. Random assignment is the mechanism that gives true experimental designs their causal authority and is the single most important feature distinguishing them from other research designs.


What is the difference between between-subjects and within-subjects experimental designs?

A between-subjects design assigns different participants to each condition of the independent variable, so each participant experiences only one condition. A within-subjects design has the same participants experience all conditions in sequence. Within-subjects designs require fewer participants and control for individual differences (since each participant serves as their own control), but they introduce order effects such as practice, fatigue, and carryover from one condition to the next. Researchers using within-subjects designs typically counterbalance the order of conditions to distribute these effects evenly.


When should you not use an experimental research design?

Experimental designs aren't appropriate when the research question is descriptive or exploratory rather than causal, when the independent variable can't be ethically manipulated (such as smoking or trauma exposure), when the population of interest can't be randomly assigned (such as different countries or naturally occurring groups), or when the practical constraints of the setting make manipulation infeasible. In these cases, observational, correlational, qualitative, or quasi-experimental designs are more appropriate. The choice of design should follow from the research question rather than from a preference for any particular methodology.


What are the most common threats to validity in experimental research?

The most common threats include confounding variables that aren't controlled, selection bias when random assignment isn't used or fails, demand characteristics and observer bias that influence participants and researchers, the Hawthorne effect where participants change behavior because they know they're being observed, attrition that reduces sample size unevenly across conditions, and testing effects from repeated measurement. External validity threats include unrepresentative samples and overly artificial laboratory settings that don't generalize to real-world contexts. Researchers address these threats through careful design choices including blinding, counterbalancing, control groups, and pre-registration of analysis plans. For a comprehensive guide to the cognitive and procedural biases that affect research, see our research bias guide.


How do you report experimental research design in a manuscript?

The methodology section should include a clear statement of the independent and dependent variables with operational definitions, the type of design used and its specific structure, sample size and recruitment procedures, the assignment procedure (random or otherwise), a detailed description of the intervention including who delivered it and when, the measures used with their psychometric properties, procedures for controlling extraneous variables including blinding where applicable, the statistical analysis plan, and any deviations from the original design. Reviewers and committee members expect precision and transparency in this section, and vague methodology descriptions are a frequent cause of revision requests.


Further Reading

For more on research methodology and the connected topics that affect experimental research, see our companion articles. The research methodology guide for graduate students is the foundational overview. The quantitative vs qualitative research article covers when an experimental approach is appropriate and when it isn't. The population vs sample in research article addresses sample size and sampling considerations that constrain experimental design choices. The research bias guide covers the cognitive and procedural biases that threaten experimental validity, including selection bias, observer bias, and confounding.


When you're ready to submit your manuscript, Editor World's research paper editing service, dissertation editing service, and journal article editing service are available 24/7 with native English editors who specialize in experimental research methodologies across disciplines.



Page last reviewed: May 2026. Content reviewed and edited by Debra F., PhD, Professional Editor with 30+ years of academic editing experience. Editor World, founded in 2010 by Patti Fisher, PhD, provides professional human-only editing services for researchers, academics, and graduate students worldwide. BBB A+ accredited since 2010 with 5.0/5 Google Reviews and 5.0/5 Facebook Reviews. More than 100 million words edited for over 8,000 clients in 65+ countries. Recommended by the Boston University Economics Department.