Homework Exercise in University of Phoenix Material
Type of document Essay
5 Pages Double Spaced (approx 275 words per page)
Subject area Psychology
Academic Level Master
Style APA
Number of sources/references 1
Order description:
University of Phoenix Material
Week Seven Homework Exercise
Answer the following questions, covering material from Ch. 13 of Methods in Behavioral Research:
- Define inferential statistics and how researchers use inferential statistics to draw conclusions from sample data.
- Define probability and discuss how it relates to the concept of statistical significance.
- A researcher is studying the effects of yoga on depression. Participants are randomly assigned to one of two groups: yoga and medication (experimental group); or support group and medication (control group). What is the null hypothesis? What is the research hypothesis?
- In the scenario described in the previous question, the researcher implements two programs simultaneously: a 6-week yoga program coupled with medication management and a 6-week support group program coupled with medication management. At the end of the 6 weeks, participants complete a questionnaire measuring depression. The researcher compares the mean score of the experimental group with the mean score of the control group. What statistical test would be most appropriate for this purpose and why? What is the role of probability in this statistical test?
- In the scenario described in the previous questions, the researcher predicted that participants in the experimental group—yoga plus medication—would score significantly lower on measures of depression than would participants in the control group—support group plus medication. True or false: A two-tailed test of significance is most appropriate in this case. Explain your response.
- Explain the relationship between the alpha level (or significance level) and Type I error. What is a Type II error? How are Type I and Type II errors different?
- A researcher is studying the effects of sex—male and female—and dietary sugar on energy level. Male and female participants agree to follow either a high sugar or low sugar diet for eight weeks. The researcher asks the participants to complete a number of questionnaires, including one assessing energy level, before and after the program. The researcher is interested in determining whether a high or low sugar diet affects reported energy levels differently for men and women. At the end of the program, the researcher examines scores on the energy level scale for the following groups: Men – low sugar diet; Men – high sugar diet; Women – low sugar diet; Women – high sugar diet. What statistic could the researcher use to assess the data? What criteria did you use to determine the appropriate statistical test?
Instruction: 200 words each question
Complete the Week Seven Homework Exercise assignment.
Click the Assignment Files tab to submit your assignment.
Please answer each question using the material I attached. Also use the reference (Cozby, P. C., & Bates, S. C. (2015). Methods in behavioral research (12th ed.). New York: McGraw-Hill).
Please be careful with grammar and spelling.
Put the answer below each question so I know which number you are discussing.
A running head is not needed, and you do not need to include a title page. Just use the material sheet I provided.
Instructions: Please read and follow the instructions carefully. Use the reference (Cozby, P. C., & Bates, S. C. (2015). Methods in behavioral research (12th ed.). New York: McGraw-Hill). Answer the 2 questions in approximately 275 words.
Complete the Correlation assignment.
Click the Assignment Files tab to submit your assignment.
Do not use a running head or title page; just answer the questions using the material I provided and the scatter plot.
Only 2 questions to answer this week; see below. I created the scatter plot (see attached). Please choose a part; I will do whatever is left over if not everyone contributes. Thanks!
Correlation:
A researcher is interested in investigating the relationship between viewing time (in seconds) and ratings of aesthetic appreciation. Participants are asked to view a painting for as long as they like. Time (in seconds) is measured. After the viewing time, the researcher asks the participants to provide a ‘preference rating’ for the painting on a scale ranging from 1-10.
- Create a scatter plot depicting the following data – I created this, see attached
- What does the scatter plot suggest about the relationship between viewing time and aesthetic preference?
- Is it accurate to state that longer viewing times are the result of greater preference for paintings? Explain.
*****Please use the scatter plot provided and the material sheet I attached.
LEARNING OBJECTIVES
- Explain how researchers use inferential statistics to evaluate sample data.
- Distinguish between the null hypothesis and the research hypothesis.
- Discuss probability in statistical inference, including the meaning of statistical significance.
- Describe the t test and explain the difference between one-tailed and two-tailed tests.
- Describe the F test, including systematic variance and error variance.
- Describe what a confidence interval tells you about your data.
- Distinguish between Type I and Type II errors.
- Discuss the factors that influence the probability of a Type II error.
- Discuss the reasons a researcher may obtain nonsignificant results.
- Define the power of a statistical test.
- Describe the criteria for selecting an appropriate statistical test.
IN THE PREVIOUS CHAPTER, WE EXAMINED WAYS OF DESCRIBING THE RESULTS OF A STUDY USING DESCRIPTIVE STATISTICS AND A VARIETY OF GRAPHING TECHNIQUES. In addition to descriptive statistics, researchers use inferential statistics to draw more general conclusions about their data. In short, inferential statistics allow researchers to (a) assess just how confident they are that their results reflect what is true in the larger population and (b) assess the likelihood that their findings would still occur if their study were repeated over and over. In this chapter, we examine methods for doing so.
SAMPLES AND POPULATIONS
Inferential statistics are necessary because the results of a given study are based only on data obtained from a single sample of research participants. Researchers rarely, if ever, study entire populations; their findings are based on sample data. In addition to describing the sample data, we want to make statements about populations. Would the results hold up if the experiment were conducted repeatedly, each time with a new sample?
In the hypothetical experiment described in Chapter 12 (see Table 12.1), mean aggression scores were obtained in model and no-model conditions. These means are different: Children who observe an aggressive model subsequently behave more aggressively than children who do not see the model. Inferential statistics are used to determine whether the results match what would happen if we were to conduct the experiment again and again with multiple samples. In essence, we are asking whether we can infer that the difference in the sample means shown in Table 12.1 reflects a true difference in the population means.
Recall our discussion of this issue in Chapter 7 on the topic of survey data. A sample of people in your state might tell you that 57% prefer the Democratic candidate for an office and that 43% favor the Republican candidate. The report then says that these results are accurate to within 3 percentage points, with a 95% confidence level. This means that the researchers are very (95%) confident that, if they were able to study the entire population rather than a sample, the actual percentage who preferred the Democratic candidate would be between 54% and 60% and the percentage preferring the Republican would be between 40% and 46%. In this case, the researcher could predict with a great deal of certainty that the Democratic candidate will win because there is no overlap in the projected population values. Note, however, that even when we are very (in this case, 95%) sure, we still have a 5% chance of being wrong.
Inferential statistics allow us to arrive at such conclusions on the basis of sample data. In our study with the model and no-model conditions, are we confident that the means are sufficiently different to infer that the difference would be obtained in an entire population?
INFERENTIAL STATISTICS
Much of the previous discussion of experimental design centered on the importance of ensuring that the groups are equivalent in every way except the independent variable manipulation. Equivalence of groups is achieved by experimentally controlling all other variables or by randomization. The assumption is that if the groups are equivalent, any differences in the dependent variable must be due to the effect of the independent variable.
This assumption is usually valid. However, it is also true that the difference between any two groups will almost never be zero. In other words, there will be some difference in the sample means, even when all of the principles of experimental design are rigorously followed. This happens because we are dealing with samples, rather than populations. Random or chance error will be responsible for some difference in the means, even if the independent variable had no effect on the dependent variable.
Therefore, the difference in the sample means reflects any true difference in the population means (i.e., the effect of the independent variable) plus any random error. Inferential statistics allow researchers to make inferences about the true difference in the population on the basis of the sample data. Specifically, inferential statistics give the probability that the difference between means reflects random error rather than a real difference.
NULL AND RESEARCH HYPOTHESES
Statistical inference begins with a statement of the null hypothesis and a research (or alternative) hypothesis. The null hypothesis is simply that the population means are equal—the observed difference is due to random error. The research hypothesis is that the population means are, in fact, not equal. The null hypothesis states that the independent variable had no effect; the research hypothesis states that the independent variable did have an effect. In the aggression modeling experiment, the null and research hypotheses are:
H0 (null hypothesis): The population mean of the no-model group is equal to the population mean of the model group.
H1 (research hypothesis): The population mean of the no-model group is not equal to the population mean of the model group.
The logic of the null hypothesis is this: If we can determine that the null hypothesis is incorrect, then we accept the research hypothesis as correct. Acceptance of the research hypothesis means that the independent variable had an effect on the dependent variable.
The null hypothesis is used because it is a very precise statement—the population means are exactly equal. This permits us to know precisely the probability of obtaining our results if the null hypothesis is correct. Such precision is not possible with the research hypothesis, so we infer that the research hypothesis is correct only by rejecting the null hypothesis. We reject the null hypothesis when we find a very low probability that the obtained results could be due to random error. This is what is meant by statistical significance: A significant result is one that has a very low probability of occurring if the population means are equal. More simply, significance indicates that there is a low probability that the difference between the obtained sample means was due to random error. Significance, then, is a matter of probability.
PROBABILITY AND SAMPLING DISTRIBUTIONS
Probability is the likelihood of the occurrence of some event or outcome. We all use probabilities frequently in everyday life. For example, if you say that there is a high probability that you will get an A in this course, you mean that this outcome is likely to occur. Your probability statement is based on specific information, such as your grades on examinations. The weather forecaster says there is a 10% chance of rain today; this means that the likelihood of rain is very low. A gambler gauges the probability that a particular horse will win a race on the basis of the past records of that horse.
Probability in statistical inference is used in much the same way. We want to specify the probability that an event (in this case, a difference between means in the sample) will occur if there is no difference in the population. The question is: What is the probability of obtaining this result if only random error is operating? If this probability is very low, we reject the possibility that only random or chance error is responsible for the obtained difference in means.
Probability: The Case of ESP
The use of probability in statistical inference can be understood intuitively from a simple example. Suppose that a friend claims to have ESP (extrasensory perception) ability. You decide to test your friend with a set of five cards commonly used in ESP research; a different symbol is presented on each card. In the ESP test, you look at each card and think about the symbol, and your friend tells you which symbol you are thinking about. In your actual experiment, you have 10 trials; each of the five cards is presented two times in a random order. Your task is to determine whether your friend’s answers reflect random error (guessing) or whether they indicate that something more than random error is occurring. The null hypothesis in your study is that only random error is operating. In this case, the research hypothesis is that the number of correct answers shows more than random or chance guessing. (Note, however, that accepting the research hypothesis could mean that your friend has ESP ability, but it could also mean that the cards were marked, that you had somehow cued your friend when thinking about the symbols, and so on.)
You can easily determine the number of correct answers to expect if the null hypothesis is correct. Just by guessing, 1 out of 5 answers (20%) should be correct. On 10 trials, 2 correct answers are expected under the null hypothesis. If, in the actual experiment, more (or less) than 2 correct answers are obtained, would you conclude that the obtained data reflect random error or something more than merely random guessing?
Suppose that your friend gets 3 correct. Then you would probably conclude that only guessing is involved, because you would recognize that there is a high probability that there would be 3 correct answers even though only 2 correct are expected under the null hypothesis. You expect that exactly 2 answers in 10 trials would be correct in the long run, if you conducted this experiment with this subject over and over again. However, small deviations away from the expected 2 are highly likely in a sample of 10 trials.
Suppose, though, that your friend gets 7 correct. You might conclude that the results indicate more than random error in this one sample of 10 observations. This conclusion would be based on your intuitive judgment that an outcome of 70% correct when only 20% is expected is very unlikely. At this point, you would decide to reject the null hypothesis and state that the result is significant. A significant result is one that is very unlikely if the null hypothesis is correct.
A key question then becomes: How unlikely does a result have to be before we decide it is significant? A decision rule is determined prior to collecting the data. The probability required for significance is called the alpha level. The most common alpha level probability used is .05. The outcome of the study is considered significant when there is a .05 or less probability of obtaining the results; that is, there are only 5 chances out of 100 that the results were due to random error in one sample from the population. If it is very unlikely that random error is responsible for the obtained results, the null hypothesis is rejected.
Sampling Distributions
You may have been able to judge intuitively that obtaining 7 correct on the 10 trials is very unlikely. Fortunately, we do not have to rely on intuition to determine the probabilities of different outcomes. Table 13.1 shows the probability of actually obtaining each of the possible outcomes in the ESP experiment with 10 trials and a null hypothesis expectation of 20% correct. An outcome of 2 correct answers has the highest probability of occurrence. Also, as intuition would suggest, an outcome of 3 correct is highly probable, but an outcome of 7 correct is highly unlikely.
The probabilities shown in Table 13.1 were derived from a probability distribution called the binomial distribution; all statistical significance decisions are based on probability distributions such as this one. Such distributions are called sampling distributions. The sampling distribution is based on the assumption that the null hypothesis is true; in the ESP example, the null hypothesis is that the person is only guessing and should therefore get 20% correct. Such a distribution assumes that if you were to conduct the study with the same number of observations over and over again, the most frequent finding would be 20%. However, because of the random error possible in each sample, there is a certain probability associated with other outcomes. Outcomes that are close to the expected null hypothesis value of 20% are very likely. However, outcomes farther from the expected result are less and less likely if the null hypothesis is correct. When your obtained results are highly unlikely if you are, in fact, sampling from the distribution specified by the null hypothesis, you conclude that the null hypothesis is incorrect. Instead of concluding that your sample results reflect a random deviation from the long-run expectation of 20%, you decide that the null hypothesis is incorrect. That is, you conclude that you have not sampled from the sampling distribution specified by the null hypothesis. Instead, in the case of the ESP example, you decide that your data are from a different sampling distribution in which, if you were to test the person repeatedly, most of the outcomes would be near your obtained result of 7 correct answers.
TABLE 13.1 Exact probability of each possible outcome of the ESP experiment with 10 trials
All statistical tests rely on sampling distributions to determine the probability that the results are consistent with the null hypothesis. When the obtained data are very unlikely according to null hypothesis expectations (usually a .05 probability or less), the researcher decides to reject the null hypothesis and therefore to accept the research hypothesis.
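To make this concrete, here is a minimal sketch (in Python with the scipy package—one tool among many; the chapter itself demonstrates SPSS and Excel) that reproduces probabilities of the kind shown in Table 13.1. The trial count and guessing rate come from the ESP example; everything else is just the binomial distribution.

```python
from scipy.stats import binom

n, p = 10, 0.20  # 10 trials; 20% correct expected by guessing under H0

# Exact probability of each possible outcome (compare Table 13.1)
for k in range(n + 1):
    print(k, "correct:", round(binom.pmf(k, n, p), 4))

# Probability of 7 or more correct if only guessing is operating
print("P(7 or more):", binom.sf(6, n, p))  # about .0009, far below .05
```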
Sample Size
The ESP example also illustrates the impact of sample size—the total number of observations—on determinations of statistical significance. Suppose you had tested your friend on 100 trials instead of 10 and had observed 30 correct answers. Just as you had expected 2 correct answers in 10 trials, you would now expect 20 of 100 answers to be correct. However, 30 out of 100 has a much lower likelihood of occurrence than 3 out of 10. This is because, with more observations sampled, you are more likely to obtain an accurate estimate of the true population value. Thus, as the size of your sample increases, you are more confident that your outcome is actually different from the null hypothesis expectation.
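The same sketch, extended under the same assumptions, shows why 30 correct out of 100 trials is far less probable under the null hypothesis than 3 correct out of 10:

```python
from scipy.stats import binom

# Right-tail probabilities under the null hypothesis of 20% correct
print(binom.sf(2, 10, 0.20))    # P(3 or more of 10): about .32 -- quite likely
print(binom.sf(29, 100, 0.20))  # P(30 or more of 100): about .01 -- unlikely
```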
EXAMPLE: THE t AND F TESTS
Different statistical tests allow us to use probability to decide whether to reject the null hypothesis. In this section, we will examine the t test and the F test. The t test is commonly used to examine whether two groups are significantly different from each other. In the hypothetical experiment on the effect of a model on aggression, a t test is appropriate because we are asking whether the mean of the no-model group differs from the mean of the model group. The F test is a more general statistical test that can be used to ask whether there is a difference among three or more groups or to evaluate the results of factorial designs (discussed in Chapter 10).
To use a statistical test, you must first specify the null hypothesis and the research hypothesis that you are evaluating. The null and research hypotheses for the modeling experiment were described previously. You must also specify the significance level that you will use to decide whether to reject the null hypothesis; this is the alpha level. As noted, researchers generally use a significance level of .05.
t Test
The sampling distribution of all possible values of t is shown in Figure 13.1. (This particular distribution is for the sample size we used in the hypothetical experiment on modeling and aggression; the sample size was 20 with 10 participants in each group.) This sampling distribution has a mean of 0 and a standard deviation of 1. It reflects all the possible outcomes we could expect if we compare the means of two groups and the null hypothesis is correct.
To use this distribution to evaluate our data, we need to calculate a value of t from the obtained data and evaluate the obtained t in terms of the sampling distribution of t that is based on the null hypothesis. If the obtained t has a low probability of occurrence (.05 or less), then the null hypothesis is rejected.
The t value is a ratio of two aspects of the data, the difference between the group means and the variability within groups. The ratio may be described as follows:

t = (difference between group means) / (variability within groups)
The group difference is simply the difference between your obtained means; under the null hypothesis, you expect this difference to be zero. The value of t increases as the difference between your obtained sample means increases. Note that the sampling distribution of t assumes that there is no difference in the population means; thus, the expected value of t under the null hypothesis is zero. The within-group variability is the amount of variability of scores about the mean. The denominator of the t formula is essentially an indicator of the amount of random error in your sample. Recall from Chapter 12 that s, the standard deviation, and s2, the variance, are indicators of how much scores deviate from the group mean.
FIGURE 13.1
Sampling distributions of t values with 18 degrees of freedom
A concrete example of a calculation of a t test should help clarify these concepts. The formula for the t test for two groups with equal numbers of participants in each group is:

t = (X̄₁ − X̄₂) / √(s₁²/n₁ + s₂²/n₂)

The numerator of the formula is simply the difference between the means of the two groups. In the denominator, we first divide the variance (s₁² and s₂²) of each group by the number of subjects in that group (n₁ and n₂) and add these together. We then find the square root of the result; this converts the number from a squared score (the variance) to a standard deviation. Finally, we calculate our obtained t value by dividing the mean difference by this standard deviation. When the formula is applied to the data in Table 12.1, we find:
Thus, the t value calculated from the data is 4.02. Is this a significant result? A computer program analyzing the results would immediately tell you the probability of obtaining a t value of this size with a total sample size of 20. Without such a program, there are Internet resources to find a table of “critical values” of t (http://www.statisticsmentor.com/category/statstables/) or to calculate the probability for you (http://vassarstats.net/tabs.html). Before going any further, you should know that the obtained result is significant. Using a significance level of .05, the critical value from the sampling distribution of t is 2.101. Any t value greater than or equal to 2.101 has a .05 or less probability of occurring under the assumptions of the null hypothesis. Because our obtained value is larger than the critical value, we can reject the null hypothesis and conclude that the difference in means obtained in the sample reflects a true difference in the population.
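If you would rather verify a result like this with software, the sketch below (Python with numpy and scipy; the scores are invented for illustration and are not the actual Table 12.1 data) computes t both by the formula above and with scipy's equal-variance independent-samples t test; the two agree when the group sizes are equal.

```python
import numpy as np
from scipy import stats

# Hypothetical aggression scores for two groups of 10
model    = np.array([6, 5, 7, 8, 5, 6, 7, 8, 6, 7])
no_model = np.array([3, 4, 2, 5, 3, 4, 3, 2, 4, 3])

# Manual calculation following the formula in the text
diff = model.mean() - no_model.mean()
se = np.sqrt(model.var(ddof=1) / len(model) + no_model.var(ddof=1) / len(no_model))
t_manual = diff / se

# The same test via scipy
t, p = stats.ttest_ind(model, no_model)
print(t_manual, t, p)  # reject H0 if p is .05 or less
```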
Degrees of Freedom
You are probably wondering how the critical value was selected from the table. To use the table, you must first determine the degrees of freedom for the test (the term degrees of freedom is abbreviated as df). When comparing two means, you assume that the degrees of freedom are equal to n1 + n2 − 2, or the total number of participants in the groups minus the number of groups. In our experiment, the degrees of freedom would be 10 + 10 − 2 = 18. The degrees of freedom are the number of scores free to vary once the means are known. For example, if the mean of a group is 6.0 and there are five scores in the group, there are 4 degrees of freedom; once you have any four scores, the fifth score is known because the mean must remain 6.0.
One-Tailed Versus Two-Tailed Tests
In the table, you must choose a critical t for the situation in which your research hypothesis either (1) specified a direction of difference between the groups (e.g., group 1 will be greater than group 2) or (2) did not specify a predicted direction of difference (e.g., group 1 will differ from group 2). Somewhat different critical values of t are used in the two situations: The first situation is called a one-tailed test, and the second situation is called a two-tailed test.
The issue can be visualized by looking at the sampling distribution of t values for 18 degrees of freedom, as shown in Figure 13.1. As you can see, a value of 0.00 is expected most frequently. Values greater than or less than zero are less likely to occur. The first distribution shows the logic of a two-tailed test. We used the value of 2.101 for the critical value of t with a .05 significance level because a direction of difference was not predicted. This critical value is the point beyond which 2.5% of the positive values and 2.5% of the negative values of t lie (hence, a total probability of .05 combined from the two “tails” of the sampling distribution). The second distribution illustrates a one-tailed test. If a directional difference had been predicted, the critical value would have been 1.734. This is the value beyond which 5% of the values lie in only one “tail” of the distribution. Whether to specify a one-tailed or two-tailed test will depend on whether you originally designed your study to test a directional hypothesis.
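The critical values cited above can be reproduced directly from the t distribution; a brief sketch (again Python with scipy) for 18 degrees of freedom:

```python
from scipy import stats

df = 18
print(stats.t.ppf(1 - .05 / 2, df))  # two-tailed critical value: 2.101
print(stats.t.ppf(1 - .05, df))      # one-tailed critical value: 1.734
```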
F Test
The analysis of variance, or F test, is an extension of the t test. The analysis of variance is a more general statistical procedure than the t test. When a study has only one independent variable with two groups, F and t are virtually identical—the value of F equals t2 in this situation. However, analysis of variance is also used when there are more than two levels of an independent variable and when a factorial design with two or more independent variables has been used. Thus, the F test is appropriate for the simplest experimental design, as well as for the more complex designs discussed in Chapter 10. The t test was presented first because the formula allows us to demonstrate easily the relationship of the group difference and the within-group variability to the outcome of the statistical test. However, in practice, analysis of variance is the more common procedure. The calculations necessary to conduct an F test are provided in Appendix C.
The F statistic is a ratio of two types of variance: systematic variance and error variance (hence the term analysis of variance). Systematic variance is the deviation of the group means from the grand mean, or the mean score of all individuals in all groups. Systematic variance is small when the difference between group means is small and increases as the group mean differences increase. Error variance is the deviation of the individual scores in each group from their respective group means. Terms that you may see in research instead of systematic and error variance are between-group variance and within-group variance. Systematic variance is the variability of scores between groups, and error variance is the variability of scores within groups. The larger the F ratio is, the more likely it is that the results are significant.
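As an illustration, a one-way analysis of variance can be run with scipy's f_oneway; the three groups below are invented and stand for three levels of a single independent variable.

```python
from scipy import stats

g1 = [4, 5, 6, 5, 4]   # hypothetical scores, condition 1
g2 = [6, 7, 8, 7, 6]   # condition 2
g3 = [9, 8, 10, 9, 8]  # condition 3

F, p = stats.f_oneway(g1, g2, g3)
print(F, p)  # large systematic variance relative to error variance -> large F

# With only two groups, f_oneway returns an F equal to t squared.
```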
Calculating Effect Size
The concept of effect size was discussed in Chapter 12. After determining that there was a statistically significant effect of the independent variable, researchers will want to know the magnitude of the effect. Therefore, we want to calculate an estimate of effect size. For a t test, the calculation is

r = √(t² / (t² + df))

where df is the degrees of freedom. Thus, using the obtained value of t, 4.02, and 18 degrees of freedom, we find:

r = √(4.02² / (4.02² + 18)) = √(16.16 / 34.16) = .69
This value is a type of correlation coefficient that can range from 0.00 to 1.00; as mentioned in Chapter 12, .69 is considered a large effect size. For additional information on effect size calculation, see Rosenthal (1991). The same distinction between r and r2 that was made in Chapter 12 applies here as well.
Another effect size estimate used when comparing two means is called Cohen’s d. Cohen’s d expresses effect size in terms of standard deviation units. A d value of 1.0 tells you that the means are 1 standard deviation apart; a d of .2 indicates that the means are separated by .2 standard deviation.
You can calculate the value of Cohen’s d using the means (M) and standard deviations (SD) of the two groups; one standard formula for groups of equal size is:

d = (M₁ − M₂) / √[(SD₁² + SD₂²) / 2]
Note that the formula uses M and SD instead of X̄ and s. These abbreviations are used in APA style (see Appendix A).
The value of d is larger than the corresponding value of r, but it is easy to convert d to a value of r. Both statistics provide information on the size of the relationship between the variables studied. You might note that both effect size estimates have a value of 0.00 when there is no relationship. The value of r has a maximum value of 1.00, but d has no maximum value.
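Both calculations can be sketched in a few lines; the group statistics below are invented, and the d-to-r conversion shown is one common formula that assumes equal group sizes.

```python
import math

m1, sd1 = 5.2, 1.3  # hypothetical mean and SD, group 1
m2, sd2 = 3.1, 1.4  # hypothetical mean and SD, group 2

pooled_sd = math.sqrt((sd1**2 + sd2**2) / 2)  # pooled SD for equal n
d = (m1 - m2) / pooled_sd

r = d / math.sqrt(d**2 + 4)  # d-to-r conversion for equal group sizes
print(d, r)
```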
Confidence Intervals and Statistical Significance
Confidence intervals were described in Chapter 7. After obtaining a sample value, we can calculate a confidence interval. An interval of values defines the most likely range of actual population values. The interval has an associated confidence level: a 95% confidence interval indicates that we are 95% sure that the population value lies within the range; a 99% interval would provide greater certainty, but the range of values would be larger.
A confidence interval can be obtained for each of the means in the aggression experiment; the 95% confidence intervals for the two conditions are shown in Figure 13.2.
A bar graph that includes a visual depiction of the confidence interval can be very useful. The means from the aggression experiment are shown in Figure 13.2. The shaded bars represent the mean aggression scores in the two conditions. The confidence interval for each group is shown with a vertical I-shaped line that is bounded by the upper and lower limits of the 95% confidence interval. It is important to examine confidence intervals to obtain a greater understanding of the meaning of your obtained data. Although the obtained sample means provide the best estimate of the population values, you are able to see the likely range of possible values. The size of the interval is related to both the size of the sample and the confidence level. As the sample size increases, the confidence interval narrows. This is because sample means obtained with larger sample sizes are more likely to reflect the population mean. Second, higher confidence is associated with a larger interval. If you want to be almost certain that the interval contains the true population mean (e.g., a 99% confidence interval), you will need to include more possibilities. Note that the 95% confidence intervals for the two means do not overlap. This should be a clue to you that the difference is statistically significant. Indeed, examining confidence intervals is an alternative way of thinking about statistical significance. The null hypothesis is that the difference in population means is 0.00. However, if you were to subtract all the means in the 95% confidence interval for the no-model condition from all the means in the model condition, none of these differences would include the value of 0.00. We can be very confident that the null hypothesis should be rejected.
FIGURE 13.2
Mean aggression scores from the hypothetical modeling experiment including the 95% confidence intervals
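A 95% confidence interval for a single group mean can be computed from the t distribution, as in this sketch (hypothetical scores; scipy again):

```python
import numpy as np
from scipy import stats

scores = np.array([3, 4, 2, 5, 3, 4, 3, 2, 4, 3])  # hypothetical group scores
m = scores.mean()
sem = stats.sem(scores)  # standard error of the mean

low, high = stats.t.interval(0.95, df=len(scores) - 1, loc=m, scale=sem)
print(m, (low, high))  # a 99% interval would be wider; a larger n narrows it
```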
Statistical Significance: An Overview
The logic underlying the use of statistical tests rests on statistical theory. There are some general concepts, however, that should help you understand what you are doing when you conduct a statistical test. First, the goal of the test is to allow you to make a decision about whether your obtained results are reliable; you want to be confident that you would obtain similar results if you conducted the study over and over again. Second, the significance level (alpha level) you choose indicates how confident you wish to be when making the decision. A .05 significance level says that you are 95% sure of the reliability of your findings; however, there is a 5% chance that you could be wrong. There are few certainties in life! Third, you are most likely to obtain significant results when you have a large sample size because larger sample sizes provide better estimates of true population values. Finally, you are most likely to obtain significant results when the effect size is large, i.e., when differences between groups are large and variability of scores within groups is small.
In the remainder of the chapter, we will expand on these issues. We will examine the implications of making a decision about whether results are significant, the way to determine a significance level, and the way to interpret nonsignificant results. We will then provide some guidelines for selecting the appropriate statistical test in various research designs.
TYPE I AND TYPE II ERRORS
The decision to reject the null hypothesis is based on probabilities rather than on certainties. That is, the decision is made without direct knowledge of the true state of affairs in the population. Thus, the decision might not be correct; errors may result from the use of inferential statistics.
A decision matrix is shown in Figure 13.3. Notice that there are two possible decisions: (1) Reject the null hypothesis or (2) accept the null hypothesis. There are also two possible truths about the population: (1) The null hypothesis is true or (2) the null hypothesis is false. In sum, as the decision matrix shows, there are two kinds of correct decisions and two kinds of errors.
Correct Decisions
One correct decision occurs when we reject the null hypothesis and the research hypothesis is true in the population. Here, our decision is that the population means are not equal, and in fact, this is true in the population. This is the decision you hope to make when you begin your study.
FIGURE 13.3
Decision matrix for Type I and Type II errors
The other correct decision is to accept the null hypothesis, and the null hypothesis is true in the population: The population means are in fact equal.
Type I Errors
A Type I error is made when we reject the null hypothesis but the null hypothesis is actually true. Our decision is that the population means are not equal when they actually are equal. Type I errors occur when, simply by chance, we obtain a large value of t or F. For example, even though a t value of 4.02 is highly improbable if the population means are indeed equal (less than 5 chances out of 100), this can happen. When we do obtain such a large t value by chance, we incorrectly decide that the independent variable had an effect.
The probability of making a Type I error is determined by the choice of significance or alpha level (alpha may be shown as the Greek letter alpha—α). When the significance level for deciding whether to reject the null hypothesis is .05, the probability of a Type I error (alpha) is .05. If the null hypothesis is rejected, there are 5 chances out of 100 that the decision is wrong. The probability of making a Type I error can be changed by either decreasing or increasing the significance level. If we use a lower alpha level of .01, for example, there is less chance of making a Type I error. With a .01 significance level, the null hypothesis is rejected only when the probability of obtaining the results is .01 or less if the null hypothesis is correct.
Type II Errors
A Type II error occurs when the null hypothesis is accepted although in the population the research hypothesis is true. The population means are not equal, but the results of the experiment do not lead to a decision to reject the null hypothesis.
Research should be designed so that the probability of a Type II error (this probability is called beta, or β) is relatively low. The probability of making a Type II error is related to three factors. The first is the significance (alpha) level. If we set a very low significance level to decrease the chances of a Type I error, we increase the chances of a Type II error. In other words, if we make it very difficult to reject the null hypothesis, the probability of incorrectly accepting the null hypothesis increases. The second factor is sample size. True differences are more likely to be detected if the sample size is large. The third factor is effect size. If the effect size is large, a Type II error is unlikely. However, a small effect size may not be significant with a small sample.
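All three influences on beta can be seen in a small simulation. This sketch (Python, with invented population parameters) estimates power—the probability of correctly rejecting a false null hypothesis—for different sample and effect sizes; beta is 1 minus each printed value.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

def power_sim(n, true_diff, sd=1.0, alpha=0.05, reps=2000):
    """Proportion of simulated experiments that reject a false H0."""
    hits = 0
    for _ in range(reps):
        control   = rng.normal(0.0, sd, n)        # control population
        treatment = rng.normal(true_diff, sd, n)  # treatment population
        if stats.ttest_ind(control, treatment).pvalue <= alpha:
            hits += 1
    return hits / reps

# Power rises with sample size and with effect size
print(power_sim(10, 0.5), power_sim(50, 0.5), power_sim(50, 1.0))
```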
The Everyday Context of Type I and Type II Errors
The decision matrix used in statistical analyses can be applied to the kinds of decisions people frequently must make in everyday life. For example, consider the decision made by a juror in a criminal trial. As is the case with statistics, a decision must be made on the basis of evidence: Is the defendant innocent or guilty? However, the decision rests with individual jurors and does not necessarily reflect the true state of affairs: that the person really is innocent or guilty.
The juror’s decision matrix is illustrated in Figure 13.4. To continue the parallel to the statistical decision, assume that the null hypothesis is that the defendant is innocent (i.e., the dictum that a person is innocent until proven guilty). Thus, rejection of the null hypothesis means deciding that the defendant is guilty, and acceptance of the null hypothesis means deciding that the defendant is innocent. The decision matrix also shows that the null hypothesis may actually be true or false. There are two kinds of correct decisions and two kinds of errors like those described in statistical decisions. A Type I error is finding the defendant guilty when the person really is innocent; a Type II error is finding the defendant innocent when the person actually is guilty. In our society, Type I errors by jurors generally are considered to be more serious than Type II errors. Thus, before finding someone guilty, the juror is asked to make sure that the person is guilty “beyond a reasonable doubt” or to consider that “it is better to have a hundred guilty persons go free than to find one innocent person guilty.”
The decision that a doctor makes to operate or not operate on a patient provides another illustration of how a decision matrix works. The matrix is shown in Figure 13.5. Here, the null hypothesis is that no operation is necessary. The decision is whether to reject the null hypothesis and perform the operation or to accept the null hypothesis and not perform surgery. In reality, the surgeon is faced with two possibilities: Either the surgery is unnecessary (the null hypothesis is true) or the patient will die without the operation (a dramatic case of the null hypothesis being false). Which error is more serious in this case? Most doctors would believe that not operating on a patient who really needs the operation—making a Type II error—is more serious than making the Type I error of performing surgery on someone who does not really need it.
FIGURE 13.4
Decision matrix for a juror
FIGURE 13.5
Decision matrix for a doctor
One final illustration of the use of a decision matrix involves the important decision to marry someone. If the null hypothesis is that the person is “wrong” for you, and the true state is that the person is either “wrong” or “right,” you must decide whether to go ahead and marry the person. You might try to construct a decision matrix for this particular problem. Which error is more costly: a Type I error or a Type II error?
CHOOSING A SIGNIFICANCE LEVEL
Researchers traditionally have used either a .05 or a .01 significance level in the decision to reject the null hypothesis. If there is less than a .05 or a .01 probability that the results occurred because of random error, the results are said to be significant. However, there is nothing magical about a .05 or a .01 significance level. The significance level chosen merely specifies the probability of a Type I error if the null hypothesis is rejected. The significance level chosen by the researcher usually is dependent on the consequences of making a Type I versus a Type II error. As previously noted, for a juror, a Type I error is more serious than a Type II error; for a doctor, however, a Type II error may be more serious.
Researchers generally believe that the consequences of making a Type I error are more serious than those associated with a Type II error. If the null hypothesis is rejected, the researcher might publish the results in a journal, and the results might be reported by others in textbooks or in newspaper or magazine articles. Researchers do not want to mislead people or risk damaging their reputations by publishing results that are not reliable and so cannot be replicated. Thus, they want to guard against the possibility of making a Type I error by using a very low significance level (.05 or .01). In contrast to the consequences of publishing false results, the consequences of a Type II error are not seen as being very serious.
Thus, researchers want to be very careful to avoid Type I errors when their results may be published. However, in certain circumstances, a Type I error is not serious. For example, if you were engaged in pilot or exploratory research, your results would be used primarily to decide whether your research ideas were worth pursuing. In this situation, it would be a mistake to overlook potentially important data by using a very conservative significance level. In exploratory research, a significance level of .25 may be more appropriate for deciding whether to do more research. Remember that the significance level chosen and the consequences of a Type I or a Type II error are determined by what the results will be used for.
INTERPRETING NONSIGNIFICANT RESULTS
Although “accepting the null hypothesis” is convenient terminology, it is important to recognize that researchers are not generally interested in accepting the null hypothesis. Research is designed to show that a relationship between variables does exist, not to demonstrate that variables are unrelated.
More important, a decision to accept the null hypothesis when a single study does not show significant results is problematic, because negative or nonsignificant results are difficult to interpret. For this reason, researchers often say that they simply “fail to reject” or “do not reject” the null hypothesis. The results of a single study might be nonsignificant even when a relationship between the variables in the population does in fact exist. This is a Type II error. Sometimes, the reasons for a Type II error lie in the procedures used in the experiment. For example, a researcher might obtain nonsignificant results by providing incomprehensible instructions to the participants, by having a very weak manipulation of the independent variable, or by using a dependent measure that is unreliable and insensitive. Rather than concluding that the variables are not related, researchers may decide that a more carefully conducted study would find that the variables are related.
We should also consider the statistical reasons for a Type II error. Recall that the probability of a Type II error is influenced by the significance (alpha) level, sample size, and effect size. Thus, nonsignificant results are more likely to be found if the researcher is very cautious in choosing the alpha level. If the researcher uses a significance level of .001 rather than .05, it is more difficult to reject the null hypothesis (there is not much chance of a Type I error). However, that also means that there is a greater chance of accepting an incorrect null hypothesis (i.e., a Type II error is more likely). In other words, a meaningful result is more likely to be overlooked when the significance level is very low.
A Type II error may also result from a sample size that is too small to detect a real relationship between variables. A general principle is that the larger the sample size is, the greater the likelihood of obtaining a significant result. This is because large sample sizes give more accurate estimates of the actual population than do small sample sizes. In any given study, the sample size may be too small to permit detection of a significant result.
A third reason for a nonsignificant finding is that the effect size is small. Very small effects are difficult to detect without a large sample size. In general, the sample size should be large enough to find a real effect, even if it is a small one.
The fact that it is possible for a very small effect to be statistically significant raises another issue. A very large sample size might enable the researcher to find a significant difference between means; however, this difference, even though statistically significant, might have very little practical significance. For example, if an expensive new psychiatric treatment technique significantly reduces the average hospital stay from 60 to 59 days, it might not be practical to use the technique despite the evidence for its effectiveness: the savings from one less day of hospitalization would be less than the cost of the treatment. There are other circumstances, however, in which a treatment with a very small effect size has considerable practical significance. Usually this occurs when a very large population is affected by a fairly inexpensive treatment. Suppose a simple flextime policy for employees reduces employee turnover by 1% per year. This does not sound like a large effect. However, if a company normally has a turnover of 2,000 employees each year and the cost of training a new employee is $10,000, the company saves $200,000 per year with the new procedure. This amount may have practical significance for the company.
The key point here is that you should not accept the null hypothesis just because the results are nonsignificant. Nonsignificant results do not necessarily indicate that the null hypothesis is correct. However, there must be circumstances in which we can accept the null hypothesis and conclude that two variables are, in fact, not related. Frick (1995) describes several criteria that can be used in a decision to accept the null hypothesis. For example, we should look for well-designed studies with sensitive dependent measures and evidence from a manipulation check that the independent variable manipulation had its intended effect. In addition, the research should have a reasonably large sample to rule out the possibility that the sample was too small. Further, evidence that the variables are not related should come from multiple studies. Under such circumstances, you are justified in concluding that there is in fact no relationship.
CHOOSING A SAMPLE SIZE: POWER ANALYSIS
We noted in Chapter 9 that researchers often select a sample size based on what is typical in a particular area of research. An alternative approach is to select a sample size on the basis of a desired probability of correctly rejecting the null hypothesis. This probability is called the power of the statistical test. It is directly related to the probability of a Type II error:

Power = 1 − p(Type II error) = 1 − β
TABLE 13.2 Total sample size needed to detect a significant difference for a t test
We previously indicated that the probability of a Type II error is related to significance level (alpha), sample size, and effect size. Statisticians such as Cohen (1988) have developed procedures for determining sample size based on these factors. Table 13.2 shows the total sample size needed for an experiment with two groups and a significance level of .05. In the table, effect sizes range from .10 to .50, and the desired power is shown at .80 and .90. Smaller effect sizes require larger samples to be significant at the .05 level. Higher desired power demands a greater sample size; this is because you want a more certain “guarantee” that your results will be statistically significant. Researchers usually use a power between .70 and .90 when using this method to determine sample size. Several computer programs have been developed to allow researchers to easily make the calculations necessary to determine sample size based on effect size estimates, significance level, and desired power.
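One such calculation can be sketched with the statsmodels package. Its power routines expect effect size expressed as Cohen's d, so the effect size r values used in Table 13.2 are converted first; the conversion shown assumes equal group sizes.

```python
from math import sqrt
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()
for r in (0.10, 0.20, 0.30, 0.40, 0.50):
    d = 2 * r / sqrt(1 - r**2)  # convert effect size r to Cohen's d
    n_per_group = analysis.solve_power(effect_size=d, alpha=0.05, power=0.80)
    print(r, round(n_per_group))  # total sample size is twice this value
```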
You may never need to perform a power analysis. However, you should recognize the importance of this concept. If a researcher is studying a relationship with an effect size correlation of .20, a fairly large sample size is needed for statistical significance at the .05 level. An inappropriately low sample size in this situation is likely to produce a nonsignificant finding.
THE IMPORTANCE OF REPLICATIONS
Throughout this discussion of statistical analysis, the focus has been on the results of a single research investigation. What were the means and standard deviations? Was the mean difference statistically significant? If the results are significant, you conclude that they would likely be obtained over and over again if the study were repeated. We now have a framework for understanding the results of the study. Be aware, however, that scientists do not attach too much importance to the results of a single study. A rich understanding of any phenomenon comes from the results of numerous studies investigating the same variables. Instead of inferring population values on the basis of a single investigation, we can look at the results of several studies that replicate previous investigations (see Cohen, 1994). The importance of replications is a central concept in Chapter 14.
SIGNIFICANCE OF A PEARSON r CORRELATION COEFFICIENT
Recall from Chapter 12 that the Pearson r correlation coefficient is used to describe the strength of the relationship between two variables when both variables have interval or ratio scale properties. However, there remains the issue of whether the correlation is statistically significant. The null hypothesis in this case is that the true population correlation is 0.00—the two variables are not related. What if you obtain a correlation of .27 (plus or minus)? A statistical significance test will allow you to decide whether to reject the null hypothesis and conclude that the true population correlation is, in fact, greater than 0.00. The technical way to do this is to perform a t test that compares the obtained coefficient with the null hypothesis correlation of 0.00. The procedures for calculating a Pearson r and determining significance are provided in Appendix C.
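A sketch of this significance test using scipy's pearsonr, with invented viewing-time data in the spirit of the correlation exercise above; the function returns both the correlation and the probability of obtaining it if the population correlation is 0.00.

```python
from scipy import stats

time_sec = [12, 30, 45, 50, 70, 95, 120, 150]  # hypothetical viewing times
rating   = [2, 3, 4, 4, 6, 7, 8, 9]            # hypothetical ratings (1-10)

r, p = stats.pearsonr(time_sec, rating)
print(r, p)  # reject H0 (population r = 0.00) if p is .05 or less
```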
COMPUTER ANALYSIS OF DATA
Although you can calculate statistics with a calculator using the formulas provided in this chapter, Chapter 12, and Appendix C, most data analysis is carried out via computer programs. Sophisticated statistical analysis software packages make it easy to calculate statistics for any data set. Descriptive and inferential statistics are obtained quickly, the calculations are accurate, and information on statistical significance is provided in the output. Computers also facilitate graphic displays of data.
Some of the major statistical programs include SPSS, SAS, SYSTAT, and freely available R and MYSTAT. Other programs may be used on your campus. Many people do most of their statistical analyses using a spreadsheet program such as Microsoft Excel. You will need to learn the specific details of the computer system used at your college or university. No one program is better than another; they all differ in the appearance of the output and the specific procedures needed to input data and have the program perform the test. However, the general procedures for doing analyses are quite similar in all of the statistics programs.
The first step in doing the analysis is to input the data. Suppose you want to input the data in Table 12.1, the modeling and aggression experiment. Data are entered into columns. It is easiest to think of data for computer analysis as a matrix with rows and columns. Data for each research participant are the rows of the matrix. The columns contain each participant’s scores on one or more measures, and an additional column may be needed to indicate a code to identify which condition the individual was in (e.g., Group 1 or Group 2). A data matrix in SPSS for Windows is shown in Figure 13.6. The numbers in the “group” column indicate whether the individual is in Group 1 (model) or Group 2 (no model), and the numbers in the “aggscore” column are the aggression scores from Table 12.1.
Other programs may require somewhat different methods of data input. For example, in Excel, it is usually easiest to set up a separate column for each group, as shown in Figure 13.6.
The next step is to provide instructions for the statistical analysis. Again, each program uses somewhat different steps to perform the analysis; most require you to choose from various menu options. When the analysis is completed, you are provided with the output that shows the results of the statistical procedure you performed. You will need to learn how to interpret the output. Figure 13.6 shows the output for a t test using Excel.
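A minimal sketch of the same workflow in Python with pandas, using the row-per-participant data matrix described above; the group codes and scores are illustrative, not the actual Table 12.1 data.

```python
import pandas as pd
from scipy import stats

# One row per participant: a group code column and a score column
df = pd.DataFrame({
    "group":    [1, 1, 1, 2, 2, 2],   # 1 = model, 2 = no model
    "aggscore": [6, 7, 5, 3, 4, 2],
})

g1 = df.loc[df["group"] == 1, "aggscore"]
g2 = df.loc[df["group"] == 2, "aggscore"]
print(stats.ttest_ind(g1, g2))  # output includes the t value and p value
```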
When you are first learning to use a statistical analysis program, it is a good idea to practice with some data from a statistics text to make sure that you get the same results. This will ensure that you know how to properly input the data and request the statistical analysis.
SELECTING THE APPROPRIATE STATISTICAL TEST
We have covered several types of designs, and the variables that we study may have nominal, ordinal, interval, or ratio scale properties. How do you choose the appropriate statistical test for analyzing your data? Fortunately, there are a number of online guides and tutorials such as http://www.socialresearchmethods.net/selstat/ssstart.htm and http://wise.cgu.edu/choosemod/opening.htm; SPSS even has its own Statistics Coach to help with the decision.
We cannot cover every possible analysis. Our focus will be on variables that have either (1) nominal scale properties—two or more discrete values such as male and female or (2) interval/ratio scale properties with many values such as reaction time or rating scales (also called continuous variables). We will not address variables with ordinal scale values.
Research Studying Two Variables (Bivariate Research)
In these cases, the researcher is studying whether two variables are related. In general, we would refer to the first variable as the independent variable (IV) and the second variable as the dependent variable (DV). However, because it does not matter whether we are doing experimental or nonexperimental research, we could just as easily refer to the two variables as Variable X and Variable Y or Variable A and Variable B.
FIGURE 13.6
Sample computer input and output using data from Table 12.1 (modeling experiment)
Research with Multiple Independent Variables
In the following situations, we have more complex research designs with two or more independent variables that are studied with a single outcome or dependent variable.
These research design situations have been described in previous chapters. There are of course many other types of designs. Designs with multiple variables (multivariate statistics) are described in detail by Tabachnick and Fidell (2007). Procedures for research using ordinal level measurement may be found in a book by Siegel and Castellan (1988).
You have now considered how to generate research ideas, conduct research to test your ideas, and evaluate the statistical significance of your results. In the final chapter, we will examine issues of generalizing research findings beyond the specific circumstances in which the research was conducted.
Study Terms
Alpha level (p. 270)
Analysis of variance (F test) (p. 275)
Confidence interval (p. 276)
Degrees of freedom (p. 274)
Error variance (p. 275)
Inferential statistics (p. 267)
Null hypothesis (p. 268)
Power (p. 284)
Probability (p. 269)
Research hypothesis (p. 268)
Sampling distribution (p. 270)
Statistical significance (p. 269)
Systematic variance (p. 275)
t test (p. 272)
Type I error (p. 279)
Type II error (p. 279)
Review Questions
- Distinguish between the null hypothesis and the research hypothesis. When does the researcher decide to reject the null hypothesis?
- What is meant by statistical significance?
- What factors are most important in determining whether obtained results will be significant?
- Distinguish between a Type I and a Type II error. Why is your significance level the probability of making a Type I error?
- What factors are involved in choosing a significance level?
- What influences the probability of a Type II error?
- What is the difference between statistical significance and practical significance?
- Discuss the reasons that a researcher might obtain nonsignificant results.
Activities
- In an experiment, one group of research participants is given 10 pages of material to proofread for errors. Another group proofreads the same material on a computer screen. The dependent variable is the number of errors detected in a 5-minute period. A .05 significance (alpha) level is used to evaluate the results.
- What statistical test would you use?
- What is the null hypothesis? The research hypothesis?
- What is the Type I error? The Type II error?
- What is the probability of making a Type I error?
- In Professor Dre’s study, the average number of errors detected in the print and computer conditions was 38.4 and 13.2, respectively; this difference was not statistically significant. When Professor Seuss conducted the same experiment, the means of the two groups were 21.1 and 14.7, but the difference was statistically significant. Explain how this could happen.
- Suppose that you work for the child social services agency in your county. Your job is to investigate instances of possible child neglect or abuse. After collecting your evidence, which may come from a variety of sources, you must decide whether to leave the child in the home or place the child in protective custody. Specify the null and research hypotheses in this situation. What constitutes a Type I and a Type II error? Is a Type I or Type II error the more serious error in this situation? Why?
- A researcher investigated attitudes toward individuals in wheelchairs. The question was: Would people react differently to a person they perceived as being temporarily confined to the wheelchair than to a person who had a permanent disability? Participants were randomly assigned to two groups. Individuals in one group each worked on various tasks with a confederate in a wheelchair; members of the other group worked with the same confederate in a wheelchair, but this time the confederate wore a leg cast. After the session was over, participants filled out a questionnaire regarding their reactions to the study. One question asked, “Would you be willing to work with your test partner in the future on a class assignment?” with “yes” and “no” as the only response alternatives. What would be the appropriate significance test for this experiment? Can you offer a critique of the dependent variable? If you changed the dependent variable, would it affect your choice of significance tests? If so, how?
- Are you interested in learning more about using statistical software? If so, start by finding out whether your campus has one or more statistical software packages available in your computer labs. Now conduct a Web search for information on one of these—you may find it most useful to search for tutorials (including videos). We also encourage you to explore R, an increasingly popular statistical analysis and graphing program. It is free for download to your computer (http://www.r-project.org). Again, you may search for tutorials on using R; a good place to start is http://personality-project.org/r/r.guide.html.