Choosing the Instrument and the Design

Scientists collect data in systematic ways and analyze them in order to draw inferences and conclusions. In modern research, one common way of collecting data is the survey (also known as an instrument), which allows responses to be compiled objectively and tabulated for later analysis. But an instrument alone is not enough: there must also be a systematic way of analyzing the data that takes full advantage of the responses and allows insights and inferences to be drawn from them and from the research process itself. Research design is that system of data collection and analysis, and it can take many shapes and forms. The two main types of research design are experimental and quasi-experimental. Researchers must choose from an abundance of options and find the best fit for their particular study: what kind of instrument and what type of research design would best allow for a proper study of the problem at hand?

The Instrument

One of the first questions facing the researcher is whether to create his or her own instrument or to use an established instrument, possibly modifying it for the purposes of the project. The first step in addressing this question is a clear definition of the purpose of the research: the investigator must articulate what he or she seeks to accomplish and then look for the best way to do so (Creswell, 2009). During this process, the researcher will establish whether a survey is the best course of action and define how the data will be collected.

If the researcher determines that an instrument is needed for the research, then he or she must choose: should an existing instrument be used (even if modified) or should a new one be created? Northcentral University discourages students from modifying existing instruments or creating their own, but it allows them to do so if they choose (Tests and measurements, 2009). The reason for this position is that a modified or newly created instrument must itself be tested for validity, a process that can take a long time and requires considerable effort on the part of the researcher. On the other hand, using an established instrument may require special permission or even the purchase of the rights to use it from the copyright holder.

An instrument’s validity must be demonstrated and clearly articulated because it determines whether the conclusions reached through its use are meaningful. Three main types of validity concern the researcher at this point: content validity, concurrent validity, and construct validity. If an instrument has content validity, it measures what it is supposed to measure; that is, the questions in the survey are strongly related to the purpose of the research and help the researcher gather meaningful and relevant data. When an instrument has concurrent validity, it provides the researcher with information on “the relationship between the measure and a criterion” assessed at the same time (Cozby & Bates, 2012, p. 104). And if an instrument has construct validity, the variables are well defined, the methods are appropriate, and the instrument “measures the construct it is intended to measure” (Cozby & Bates, 2012, p. 101).

When choosing an existing instrument, the researcher must also determine whether the instrument is reliable (Creswell, 2009). This reliability check should include an investigation of internal consistency, that is, whether the items intended to measure the same construct produce consistent responses. This is a critical step because the instrument’s validity and reliability may be affected by any modifications and by its application in a new study.
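
To make the idea of an internal-consistency check concrete, here is a minimal sketch (not drawn from the cited texts) that computes Cronbach's alpha, one commonly used index of internal consistency. The item responses, respondent counts, and function name are hypothetical.

```python
import numpy as np

def cronbach_alpha(item_scores: np.ndarray) -> float:
    """Estimate internal consistency (Cronbach's alpha).

    item_scores: 2-D array with one row per respondent and one column per
    survey item, where the items are intended to measure the same construct.
    """
    k = item_scores.shape[1]                         # number of items
    item_vars = item_scores.var(axis=0, ddof=1)      # variance of each item
    total_var = item_scores.sum(axis=1).var(ddof=1)  # variance of total scores
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Hypothetical 5-point Likert responses: six respondents, four items.
responses = np.array([
    [4, 5, 4, 5],
    [3, 3, 4, 3],
    [5, 5, 5, 4],
    [2, 2, 3, 2],
    [4, 4, 4, 5],
    [3, 4, 3, 3],
])
print(f"Cronbach's alpha: {cronbach_alpha(responses):.2f}")
```

Values close to 1 suggest that the items hang together; low or negative values suggest the items may not be measuring a single construct and that a modified or newly created instrument needs further work.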

When designing a new instrument, the researcher must have a clearly delineated plan and solid answers to key questions. The researcher must define the purpose of the survey and the reasons for choosing the specific design; determine whether the study will be cross-sectional or longitudinal; define the population and whether it will be stratified; and establish sampling procedures, scales, pilot-testing procedures, and the timeline for the study. In addition, the researcher must define the independent and dependent variables and how the data collected will be analyzed.
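
As an illustration of one of these planning decisions, the sketch below (not from the sources above) draws a proportionate stratified random sample from a hypothetical sampling frame; the strata, field names, and sampling fraction are assumptions for illustration only.

```python
import random
from collections import defaultdict

def stratified_sample(frame, strata_key, fraction, seed=42):
    """Draw a proportionate stratified random sample.

    frame: list of participant records (dicts)
    strata_key: field used to form the strata (e.g., a department name)
    fraction: sampling fraction applied within each stratum
    """
    rng = random.Random(seed)  # fixed seed so the draw can be reproduced
    strata = defaultdict(list)
    for record in frame:
        strata[record[strata_key]].append(record)

    sample = []
    for group in strata.values():
        n = max(1, round(len(group) * fraction))  # keep at least one per stratum
        sample.extend(rng.sample(group, n))
    return sample

# Hypothetical frame: three strata of unequal size (60, 30, and 10 people).
population = (
    [{"id": i, "department": "nursing"} for i in range(60)]
    + [{"id": 100 + i, "department": "engineering"} for i in range(30)]
    + [{"id": 200 + i, "department": "arts"} for i in range(10)]
)
print(len(stratified_sample(population, "department", fraction=0.2)))  # 20 records
```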

The Design

Once the researcher has chosen the instrument, he or she must determine the design that will be used to gather and analyze the data. The two main research design types are experimental and quasi-experimental. According to Trochim and Donnelly (2008), the main difference between them is that in experimental designs participants are randomly assigned to conditions, whereas in quasi-experimental designs they are allocated according to some other criterion.

In experimental design, randomization can take place at both the sampling and the assignment levels. Randomized sampling is the random selection of participants from the population, whereas randomized assignment means that participants are randomly allocated to specific groups or receive treatment according to a randomly generated list (Cozby & Bates, 2012). In quasi-experimental design, participants are allocated or assigned according to criteria defined by the researcher; this may be a cutoff score of some kind, a measure of need, or some other characteristic required by the study (Trochim & Donnelly, 2008). Randomization is considered key to making the groups similar and ensuring internal validity; quasi-experimental designs, however, can still achieve internal validity even though they do not incorporate randomized assignment.
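
A minimal sketch of this distinction, using hypothetical participant IDs and group labels (not taken from the cited texts): randomized sampling decides who enters the study, and randomized assignment decides which condition each recruited participant receives.

```python
import random

def random_sample(population, n, seed=1):
    """Randomized sampling: randomly select n participants from the population."""
    return random.Random(seed).sample(population, n)

def random_assignment(participants, groups=("treatment", "control"), seed=2):
    """Randomized assignment: shuffle the recruited participants, then deal
    them out evenly across the conditions."""
    rng = random.Random(seed)
    shuffled = list(participants)
    rng.shuffle(shuffled)
    return {g: shuffled[i::len(groups)] for i, g in enumerate(groups)}

population = [f"P{i:03d}" for i in range(200)]  # hypothetical participant IDs
recruited = random_sample(population, 40)       # who is in the study
allocated = random_assignment(recruited)        # who receives the treatment
print({g: len(members) for g, members in allocated.items()})  # 20 per group
```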

Experimental design can take many forms. Among the different types of experimental design, the researcher can choose from two major classifications: signal-enhancing (or factorial) designs and noise-reducing designs. The most common factorial designs are named after the number of factors and levels involved in the research. For instance, a 3×4 factorial design involves two factors, one with three levels and the other with four, for a total of twelve conditions. When analyzing data from factorial designs, the researcher must carefully study the null outcome, the main effects, and the interaction effects. The researcher may also opt not to examine every combination of factors; in this situation, the design is known as an incomplete factorial design. Among noise-reducing designs, randomized block designs provide the researcher with a system for grouping the sample into blocks; if the blocks are homogeneous, the researcher can see the treatment effect more clearly. Covariance designs also reduce noise because the researcher covaries the pretest or a pre-program measure with the posttest or outcome variable (Trochim & Donnelly, 2008).

Experimental designs can also be combined or changed to become another design: the Solomon Four-Group design (a design with four groups in which only one treatment group and one control group receive a pretest) and the Switching-Replications design (a design with two groups in which measurement is done in three phases) are two examples of ways in which an experimental design can be modified to suit the needs of the study.
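
To make the factorial notation concrete, the short sketch below (my own illustration, not an example from Trochim and Donnelly) enumerates the twelve cells of the 3×4 design mentioned above; the factor names and levels are hypothetical.

```python
from itertools import product

# A hypothetical 3x4 factorial design: the first factor has three levels,
# the second has four, giving 3 x 4 = 12 conditions (cells).
dosage = ["low", "medium", "high"]                     # factor 1: 3 levels
session_length = ["15min", "30min", "45min", "60min"]  # factor 2: 4 levels

conditions = list(product(dosage, session_length))
print(len(conditions))  # 12; an incomplete factorial design would drop some cells
for cell in conditions[:3]:
    print(cell)         # ('low', '15min'), ('low', '30min'), ('low', '45min')
```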

Quasi-experimental designs lack the randomization found in experimental designs but are otherwise very similar to them. Like experimental designs, quasi-experimental designs include treatment groups, control groups, and pretest or posttest measurements. Even though they may appear to have a lower level of internal validity, they are useful and can provide insights that experimental designs may lack (Trochim & Donnelly, 2008).

There are many types of quasi-experimental designs. The Nonequivalent-Groups design includes two groups that are not randomly assigned; this design is ideal for researchers who are able to establish comparison and program groups that are very similar. It is particularly susceptible to selection threats, including selection bias and selection-maturation, selection-history, selection-regression, selection-testing, selection-instrumentation, and selection-mortality threats. The Regression-Discontinuity design is one in which participants are assigned to groups through the use of a cutoff score; it is recommended for investigating cause and effect. Like the Nonequivalent-Groups design, the Regression-Discontinuity design also suffers from potential selection threats, but unlike the former, when well modeled it can address those threats and be “as strong in internal validity as its randomized experimental alternative” (Trochim & Donnelly, 2008, p. 221).

Other quasi-experimental designs include the Proxy Pretest design (applicable when a pretest was not administered at the beginning of the study), the Separate Pre-Post Samples design (the posttest takers are not the same people as the pretest takers), the Double-Pretest design (a pretest is administered twice to the same groups), the Switching-Replications design (the groups exchange roles partway through the study), the Nonequivalent Dependent Variables design (a single group is used to measure two outcomes), and the Regression Point Displacement design (a single treated unit is compared with a larger set of comparison units).
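
As a small illustration of cutoff-based assignment in a Regression-Discontinuity design, the sketch below (not drawn from the sources) splits hypothetical applicants into program and comparison groups at an assumed cutoff score; the field names and the cutoff value of 70 are illustrative only.

```python
def assign_by_cutoff(participants, score_key, cutoff):
    """Cutoff-based assignment: everyone at or above the cutoff enters the
    program group; everyone below it serves as the comparison group."""
    program = [p for p in participants if p[score_key] >= cutoff]
    comparison = [p for p in participants if p[score_key] < cutoff]
    return program, comparison

# Hypothetical pre-program "need" scores for seven applicants.
applicants = [
    {"id": i, "need_score": s}
    for i, s in enumerate([55, 62, 70, 71, 84, 90, 68])
]
program, comparison = assign_by_cutoff(applicants, "need_score", cutoff=70)
print(len(program), len(comparison))  # 4 in the program group, 3 in comparison
```

Because assignment is determined entirely by the need score, the analysis can model the relationship between that score and the outcome on each side of the cutoff and look for a displacement at the cutoff itself.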

Conclusion

Research is a complex endeavor and must be approached with caution and careful preparation. The researcher must clearly define the purpose of the study, select an appropriate instrument to collect the data, and carefully plan the way in which the data will be collected and analyzed. Whether the researcher chooses to create a new instrument or use an established one, validity must be a major concern and must be clearly demonstrated. The researcher may choose from an array of research designs based on the study’s purpose and population, and the choice of design will guide how the study is implemented and have a direct bearing on the strength of the conclusions reached (Cozby & Bates, 2012). Experimental designs offer strong internal validity, but quasi-experimental designs can serve certain types of studies just as well.

__________

References

Cozby, P. C., & Bates, S. C. (2012). Methods in behavioral research. Boston: McGraw Hill Higher Education.

Creswell, J. W. (2009). Research design: Qualitative, quantitative, and mixed methods approaches. Thousand Oaks, CA: Sage Publications.

Tests and measurements. (2009). Retrieved from http://library.ncu.edu/dw_template.aspx?parent_id=223

Trochim, W., & Donnelly, J. (2008). The research methods knowledge base. Mason, OH: Cengage.
