Poster

Type of document      Other (Not listed)

1 Page Single Spaced (approx 550 words per page)

Subject area    Law

Academic Level          Undergraduate

Style    Harvard

Number of sources/references          6

Order description:

This assignment is a bit different; it requires a poster. I will upload screenshots of the instructions and marking criteria.

Please reference and cite in text as much as possible; here are some readings:

Textbook

Thiroux, J. and Krasemann, K. (2012) Ethics: Theory and Practice, 11th edn, Pearson-Prentice Hall, USA. [170 THIR:4]

 

Essential Reading

Carmody, M 2015, 'Negotiating Sex', in Sex, Ethics & Young People, Palgrave Macmillan, Melbourne, Australia, pp. 39-58

 

McAuliffe, D 2014, 'Regulation of the professions: codes of ethics and standards of practice', in Interprofessional Ethics: Collaboration in the social, health and human services, Cambridge University Press, Port Melbourne, Australia, pp. 66-84

 

Education in Dominican Republic

Type of document      Essay

3 Pages Double Spaced (approx 275 words per page)

Subject area    Economics

Academic Level          Undergraduate

Style    MLA

Number of sources/references          3

Order description:

The paper/report must be 4 pages

Using the Human Development Reports, available online at http://hdr.undp.org/en/, and other sources from national and international organizations, you will write a short report (about 1,000 words). In this report, your team will consider progress or setbacks for women in the Dominican Republic in terms of education over the last 20 years. The data from the Statistical Annex of the 2016 Human Development Report to take into consideration are tables 1, 3, 4, 5, 6, and 9. Use similar tables from other reports, as appropriate. Another useful source, as a guide to updating information: Neft, N. and A.D. Levine. 1997. Where Women Stand: An International Report on the Status of Women in 140 Countries. New York: Random House.

Aspects to include in your team report:

Laws governing education in your selected country: is elementary education free? Is it compulsory?

Public vs. private spending in education?

Percentage of GDP and of government budget assigned to education?

Differences between men and women in terms of literacy rates, enrollment figures by levels, average years of schooling

Majors selected by females vs. males at the university level

Education quality

Primary school dropout rates

Population with at least secondary education

Other relevant facts, as appropriate

 

CCJ 4934 Module 6 – Social Movements Part 2

 

Discipline: Criminal Justice

Type of service: Essay

Spacing: Double spacing

Paper format: APA

Number of pages: 2 pages

Number of sources: 2 sources

Paper details:

Watch the video and then answer the questions below:

Here are your questions:

1a. What strategies did the Weathermen (Weather Underground) use when recruiting people into their organization?

1b. Do you think the strategies of the Weathermen (Weather Underground) were effective?

1c. Why or why not?

2a. Identify the process and factors of how the Weathermen (Weather Underground) group fell apart (became extinct). Be sure to cover as many factors of their dissolution as possible.

2b. Which ones are things we can influence (with prevention or counter-terrorism work) and things that occur naturally in the social dynamics of the group?

2c. How can society use these elements to take down these types of groups?

 

APPLIED INDUSTRIAL TECHNOLOGIES company overview

Discipline: Accounting

Type of service: Other

Spacing: Double spacing

Paper format: APA

Number of pages: 2 pages

Number of sources: 0 sources

Paper details:

NAME OF THE COMPANY: APPLIED INDUSTRIAL TECHNOLOGIES

SECTION 1 – Company Overview Section. This section will include discussions pertaining to:

Company History – when did they start, who started the company, what did they originally do as a business?

Current Operations – What does the company do, where do they make their money? You may include:

Significant Operational Divisions, and/or Significant Products, and/or Significant Ownership Stakes – who owns the company?

Current Financial Stability – Are they making money? Do they have a positive cash flow?

Ratio Analysis – REQUIRED RATIOS- Price Earnings, Earnings per Share, and Dividend Yield with a comparison to Competitors and/or Industry for each of these ratios.

This section must be at least 2 pages in length.

There must be a separate paragraph for each of the 3 elements identified above.

DO NOT INCLUDE TABLES IN THE BODY OF THE PAPER. ADD THEM AS ATTACHMENTS. THEY DO NOT COUNT TOWARD PAPER LENGTH.

CITE THE SOURCES IN APA FORMAT CORRECTLY.

USE SIMPLE AND CLEAR SENTENCE STRUCTURE.

 

 

Legal Reasoning and Positive Law Essay

Discipline: Legal Issues

Type of service: Essay

Spacing: Single spacing

Paper format: APA

Number of pages: 2 pages

Number of sources: 3 sources

Paper details:

In his book Natural Law, the noted European legal philosopher A.P. d’Entreves asserted the following concerning the Nuremberg trial of Nazi leaders after World War II:

“No doubt the provisions for the Nuremberg Tribunal were based, or purported to be based, on existing or ‘positive’ international law.

“But I strongly suspect that the boundaries of legal positivism were overstepped, and had to be overstepped, the moment it was stated that the trials were a ‘question of justice.’ The principle nullum crimen sine poena (no crime without punishment), on which the sentences were grounded, was a flat contradiction of one of the most generally accepted principles of positive jurisprudence, the principle of nulla poena sine lege (no punishment without law).”

“Thus, after a century of effort to eliminate the dualism between what is and what ought to be from the field of legal and political experience, natural law seems to have taken its revenge upon the very champions of the pernicious doctrine that there is no law but positive law, or that might equals right, since for all practical purposes the two propositions are perfectly equivalent.” (pp. 106-107)

Directions for assignment:

Write an essay of 1,250-1,500 words in which you answer the following questions:

Why is d’Entreves saying that the Nuremberg trial “overstepped” the boundaries of positive law?

How, according to d’Entreves, did that trial do so?

Who might be the “very champions” of positive law that d’Entreves is addressing here?

Is d’Entreves defending or attacking the tradition of natural law? What does he imply about law and society, natural versus positive law, if he is defending the principle nullum crimen sine poena (no crime without punishment)?

Locate three to four credible sources in support of your content.

 

Homework Exercise in University of Phoenix Material

Type of document       Essay

5 Pages Double Spaced (approx 275 words per page)

Subject area     Psychology

Academic Level            Master

Style    APA

Number of sources/references            1

Order description:

University of Phoenix Material

 

Week Seven Homework Exercise

 

Answer the following questions, covering material from Ch. 13 of Methods in Behavioral Research:

 

  1. Define inferential statistics and how researchers use inferential statistics to draw conclusions from sample data.

 

 

  2. Define probability and discuss how it relates to the concept of statistical significance.

 

 

  3. A researcher is studying the effects of yoga on depression. Participants are randomly assigned to one of two groups: yoga and medication (experimental group); or support group and medication (control group). What is the null hypothesis? What is the research hypothesis?

 

  4. In the scenario described in the previous question, the researcher implements two programs simultaneously: a 6-week yoga program coupled with medication management and a 6-week support group program coupled with medication management. At the end of the 6 weeks, participants complete a questionnaire measuring depression. The researcher compares the mean score of the experimental group with the mean score of the control group. What statistical test would be most appropriate for this purpose and why? What is the role of probability in this statistical test?

 

  5. In the scenario described in the previous questions, the researcher predicted that participants in the experimental group—yoga plus medication—would score significantly lower on measures of depression than would participants in the control group—support group plus medication. True or false: A two-tailed test of significance is most appropriate in this case. Explain your response.

 

  6. Explain the relationship between the alpha level (or significance level) and Type I error. What is a Type II error? How are Type I and Type II errors different?

 

  7. A researcher is studying the effects of sex—male and female—and dietary sugar on energy level. Male and female participants agree to follow either a high sugar or low sugar diet for eight weeks. The researcher asks the participants to complete a number of questionnaires, including one assessing energy level, before and after the program. The researcher is interested in determining whether a high or low sugar diet affects reported energy levels differently for men and women. At the end of the program, the researcher examines scores on the energy level scale for the following groups: Men – low sugar diet; Men – high sugar diet; Women – low sugar diet; Women – high sugar diet. What statistic could the researcher use to assess the data? What criteria did you use to determine the appropriate statistical test?

 

Instruction: 200 words each question

 

Complete the Week Seven Homework Exercise assignment.

Click the Assignment Files tab to submit your assignment.

 

Please answer each question using the material I attach. Also use the reference (Cozby, P. C. and Bates, S. C. (2015). Methods in behavioral research (12th ed.). New York: McGraw-Hill).

Please be careful with grammar and spelling.

Put answer below each question so I know what number you are discussing.

A running head is not needed, and you do not need a title page. Just use the material sheet I provided.

Instructions: Please read and follow the instructions carefully. Use the reference (Cozby, P. C. and Bates, S. C. (2015). Methods in behavioral research (12th ed.). New York: McGraw-Hill). Answer the 2 questions, about 275 words each.

Complete the Correlation assignment.

Click the Assignment Files tab to submit your assignment.

Do not use a running head or title page; just answer the questions using the material I provided and the scatter plot.

 

Only 2 questions to answer this week; see below. I created the scatter plot (see attached). Please choose a part; I will do whatever is left over if not everyone contributes. Thanks!

 

Correlation:

 

A researcher is interested in investigating the relationship between viewing time (in seconds) and ratings of aesthetic appreciation. Participants are asked to view a painting for as long as they like. Time (in seconds) is measured. After the viewing time, the researcher asks the participants to provide a ‘preference rating’ for the painting on a scale ranging from 1-10.

 

  • Create a scatter plot depicting the following data – I created this, see attached
  • What does the scatter plot suggest about the relationship between viewing time and aesthetic preference?
  • Is it accurate to state that longer viewing times are the result of greater preference for paintings? Explain.

*****Please use the scatter plot provided and the material sheet I attach.

 

LEARNING OBJECTIVES

  • Explain how researchers use inferential statistics to evaluate sample data.
  • Distinguish between the null hypothesis and the research hypothesis.
  • Discuss probability in statistical inference, including the meaning of statistical significance.
  • Describe the t test and explain the difference between one-tailed and two-tailed tests.
  • Describe the F test, including systematic variance and error variance.
  • Describe what a confidence interval tells you about your data.
  • Distinguish between Type I and Type II errors.
  • Discuss the factors that influence the probability of a Type II error.
  • Discuss the reasons a researcher may obtain nonsignificant results.
  • Define the power of a statistical test.
  • Describe the criteria for selecting an appropriate statistical test.

In the previous chapter, we examined ways of describing the results of a study using descriptive statistics and a variety of graphing techniques. In addition to descriptive statistics, researchers use inferential statistics to draw more general conclusions about their data. In short, inferential statistics allow researchers to (a) assess just how confident they are that their results reflect what is true in the larger population and (b) assess the likelihood that their findings would still occur if their study were repeated over and over. In this chapter, we examine methods for doing so.

SAMPLES AND POPULATIONS

Inferential statistics are necessary because the results of a given study are based only on data obtained from a single sample of research participants. Researchers rarely, if ever, study entire populations; their findings are based on sample data. In addition to describing the sample data, we want to make statements about populations. Would the results hold up if the experiment were conducted repeatedly, each time with a new sample?

In the hypothetical experiment described in Chapter 12 (see Table 12.1), mean aggression scores were obtained in model and no-model conditions. These means are different: Children who observe an aggressive model subsequently behave more aggressively than children who do not see the model. Inferential statistics are used to determine whether the results match what would happen if we were to conduct the experiment again and again with multiple samples. In essence, we are asking whether we can infer that the difference in the sample means shown in Table 12.1 reflects a true difference in the population means.

Recall our discussion of this issue in Chapter 7 on the topic of survey data. A sample of people in your state might tell you that 57% prefer the Democratic candidate for an office and that 43% favor the Republican candidate. The report then says that these results are accurate to within 3 percentage points, with a 95% confidence level. This means that the researchers are very (95%) confident that, if they were able to study the entire population rather than a sample, the actual percentage who preferred the Democratic candidate would be between 54% and 60% and the percentage preferring the Republican would be between 40% and 46%. In this case, the researcher could predict with a great deal of certainty that the Democratic candidate will win because there is no overlap in the projected population values. Note, however, that even when we are very (in this case, 95%) sure, we still have a 5% chance of being wrong.

Inferential statistics allow us to arrive at such conclusions on the basis of sample data. In our study with the model and no-model conditions, are we confident that the means are sufficiently different to infer that the difference would be obtained in an entire population?


INFERENTIAL STATISTICS

Much of the previous discussion of experimental design centered on the importance of ensuring that the groups are equivalent in every way except the independent variable manipulation. Equivalence of groups is achieved by experimentally controlling all other variables or by randomization. The assumption is that if the groups are equivalent, any differences in the dependent variable must be due to the effect of the independent variable.

This assumption is usually valid. However, it is also true that the difference between any two groups will almost never be zero. In other words, there will be some difference in the sample means, even when all of the principles of experimental design are rigorously followed. This happens because we are dealing with samples, rather than populations. Random or chance error will be responsible for some difference in the means, even if the independent variable had no effect on the dependent variable.

Therefore, the difference in the sample means reflects any true difference in the population means (i.e., the effect of the independent variable) plus any random error. Inferential statistics allow researchers to make inferences about the true difference in the population on the basis of the sample data. Specifically, inferential statistics give the probability that the difference between means reflects random error rather than a real difference.

NULL AND RESEARCH HYPOTHESES

Statistical inference begins with a statement of the null hypothesis and a research (or alternative) hypothesis. The null hypothesis is simply that the population means are equal—the observed difference is due to random error. The research hypothesis is that the population means are, in fact, not equal. The null hypothesis states that the independent variable had no effect; the research hypothesis states that the independent variable did have an effect. In the aggression modeling experiment, the null and research hypotheses are:

H0 (null hypothesis): The population mean of the no-model group is equal to the population mean of the model group.

H1 (research hypothesis): The population mean of the no-model group is not equal to the population mean of the model group.

The logic of the null hypothesis is this: If we can determine that the null hypothesis is incorrect, then we accept the research hypothesis as correct. Acceptance of the research hypothesis means that the independent variable had an effect on the dependent variable.

The null hypothesis is used because it is a very precise statement—the population means are exactly equal. This permits us to know precisely the probability of obtaining our results if the null hypothesis is correct. Such precision is not possible with the research hypothesis, so we infer that the research hypothesis is correct only by rejecting the null hypothesis. We reject the null hypothesis when we find a very low probability that the obtained results could be due to random error. This is what is meant by statistical significance: A significant result is one that has a very low probability of occurring if the population means are equal. More simply, significance indicates that there is a low probability that the difference between the obtained sample means was due to random error. Significance, then, is a matter of probability.

PROBABILITY AND SAMPLING DISTRIBUTIONS

Probability is the likelihood of the occurrence of some event or outcome. We all use probabilities frequently in everyday life. For example, if you say that there is a high probability that you will get an A in this course, you mean that this outcome is likely to occur. Your probability statement is based on specific information, such as your grades on examinations. The weather forecaster says there is a 10% chance of rain today; this means that the likelihood of rain is very low. A gambler gauges the probability that a particular horse will win a race on the basis of the past records of that horse.

Probability in statistical inference is used in much the same way. We want to specify the probability that an event (in this case, a difference between means in the sample) will occur if there is no difference in the population. The question is: What is the probability of obtaining this result if only random error is operating? If this probability is very low, we reject the possibility that only random or chance error is responsible for the obtained difference in means.

Probability: The Case of ESP

The use of probability in statistical inference can be understood intuitively from a simple example. Suppose that a friend claims to have ESP (extrasensory perception) ability. You decide to test your friend with a set of five cards commonly used in ESP research; a different symbol is presented on each card. In the ESP test, you look at each card and think about the symbol, and your friend tells you which symbol you are thinking about. In your actual experiment, you have 10 trials; each of the five cards is presented two times in a random order. Your task is to know whether your friend’s answers reflect random error (guessing) or whether they indicate that something more than random error is occurring. The null hypothesis in your study is that only random error is operating. In this case, the research hypothesis is that the number of correct answers shows more than random or chance guessing. (Note, however, that accepting the research hypothesis could mean that your friend has ESP ability, but it could also mean that the cards were marked, that you had somehow cued your friend when thinking about the symbols, and so on.)

You can easily determine the number of correct answers to expect if the null hypothesis is correct. Just by guessing, 1 out of 5 answers (20%) should be correct. On 10 trials, 2 correct answers are expected under the null hypothesis. If, in the actual experiment, more (or less) than 2 correct answers are obtained, would you conclude that the obtained data reflect random error or something more than merely random guessing?

Suppose that your friend gets 3 correct. Then you would probably conclude that only guessing is involved, because you would recognize that there is a high probability that there would be 3 correct answers even though only 2 correct are expected under the null hypothesis. You expect that exactly 2 answers in 10 trials would be correct in the long run, if you conducted this experiment with this subject over and over again. However, small deviations away from the expected 2 are highly likely in a sample of 10 trials.

Suppose, though, that your friend gets 7 correct. You might conclude that the results indicate more than random error in this one sample of 10 observations. This conclusion would be based on your intuitive judgment that an outcome of 70% correct when only 20% is expected is very unlikely. At this point, you would decide to reject the null hypothesis and state that the result is significant. A significant result is one that is very unlikely if the null hypothesis is correct.

A key question then becomes: How unlikely does a result have to be before we decide it is significant? A decision rule is determined prior to collecting the data. The probability required for significance is called the alpha level. The most common alpha level probability used is .05. The outcome of the study is considered significant when there is a .05 or less probability of obtaining the results; that is, there are only 5 chances out of 100 that the results were due to random error in one sample from the population. If it is very unlikely that random error is responsible for the obtained results, the null hypothesis is rejected.

Sampling Distributions

You may have been able to judge intuitively that obtaining 7 correct on the 10 trials is very unlikely. Fortunately, we do not have to rely on intuition to determine the probabilities of different outcomes. Table 13.1 shows the probability of actually obtaining each of the possible outcomes in the ESP experiment with 10 trials and a null hypothesis expectation of 20% correct. An outcome of 2 correct answers has the highest probability of occurrence. Also, as intuition would suggest, an outcome of 3 correct is highly probable, but an outcome of 7 correct is highly unlikely.
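As an aside, these probabilities can be verified with a few lines of Python (a sketch assuming the scipy library, which the chapter itself does not mention):

    # Probability of each possible number of correct answers in 10 ESP trials,
    # assuming the null hypothesis (pure guessing, p = 0.2) is true.
    from scipy.stats import binom

    n_trials, p_guess = 10, 0.2
    for k in range(n_trials + 1):
        print(f"{k} correct: {binom.pmf(k, n_trials, p_guess):.4f}")

Running this reproduces the pattern in Table 13.1: 2 correct is the most probable outcome, 3 correct is still quite likely, and 7 correct has a probability well below .001.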

The probabilities shown in Table 13.1 were derived from a probability distribution called the binomial distribution; all statistical significance decisions are based on probability distributions such as this one. Such distributions are called sampling distributions. The sampling distribution is based on the assumption that the null hypothesis is true; in the ESP example, the null hypothesis is that the person is only guessing and should therefore get 20% correct. Such a distribution assumes that if you were to conduct the study with the same number of observations over and over again, the most frequent finding would be 20%. However, because of the random error possible in each sample, there is a certain probability associated with other outcomes. Outcomes that are close to the expected null hypothesis value of 20% are very likely. However, outcomes farther from the expected result are less and less likely if the null hypothesis is correct. When your obtained results are highly unlikely if you are, in fact, sampling from the distribution specified by the null hypothesis, you conclude that the null hypothesis is incorrect. Instead of concluding that your sample results reflect a random deviation from the long-run expectation of 20%, you decide that the null hypothesis is incorrect. That is, you conclude that you have not sampled from the sampling distribution specified by the null hypothesis. Instead, in the case of the ESP example, you decide that your data are from a different sampling distribution in which, if you were to test the person repeatedly, most of the outcomes would be near your obtained result of 7 correct answers.


TABLE 13.1 Exact probability of each possible outcome of the ESP experiment with 10 trials

All statistical tests rely on sampling distributions to determine the probability that the results are consistent with the null hypothesis. When the obtained data are very unlikely according to null hypothesis expectations (usually a .05 probability or less), the researcher decides to reject the null hypothesis and therefore to accept the research hypothesis.

Sample Size

The ESP example also illustrates the impact of sample size—the total number of observations—on determinations of statistical significance. Suppose you had tested your friend on 100 trials instead of 10 and had observed 30 correct answers. Just as you had expected 2 correct answers in 10 trials, you would now expect 20 of 100 answers to be correct. However, 30 out of 100 has a much lower likelihood of occurrence than 3 out of 10. This is because, with more observations sampled, you are more likely to obtain an accurate estimate of the true population value. Thus, as the size of your sample increases, you are more confident that your outcome is actually different from the null hypothesis expectation.
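The same kind of sketch (again assuming scipy; illustrative only) makes the sample-size point concrete:

    from scipy.stats import binom

    # Both probabilities assume the null hypothesis of guessing (p = 0.2).
    # binom.sf(k, n, p) returns P(X > k), so sf(2, ...) is P(X >= 3).
    print(binom.sf(2, 10, 0.2))    # P(3 or more correct in 10 trials): about .32
    print(binom.sf(29, 100, 0.2))  # P(30 or more correct in 100 trials): about .01

Thirty percent correct is a plausible chance outcome in 10 trials but a very unlikely one in 100 trials.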

EXAMPLE: THE t AND F TESTS

Different statistical tests allow us to use probability to decide whether to reject the null hypothesis. In this section, we will examine the t test and the F test. The t test is commonly used to examine whether two groups are significantly different from each other. In the hypothetical experiment on the effect of a model on aggression, a t test is appropriate because we are asking whether the mean of the no-model group differs from the mean of the model group. The F test is a more general statistical test that can be used to ask whether there is a difference among three or more groups or to evaluate the results of factorial designs (discussed in Chapter 10).

To use a statistical test, you must first specify the null hypothesis and the research hypothesis that you are evaluating. The null and research hypotheses for the modeling experiment were described previously. You must also specify the significance level that you will use to decide whether to reject the null hypothesis; this is the alpha level. As noted, researchers generally use a significance level of .05.

t Test

The sampling distribution of all possible values of t is shown in Figure 13.1. (This particular distribution is for the sample size we used in the hypothetical experiment on modeling and aggression; the sample size was 20 with 10 participants in each group.) This sampling distribution has a mean of 0 and a standard deviation of 1. It reflects all the possible outcomes we could expect if we compare the means of two groups and the null hypothesis is correct.

To use this distribution to evaluate our data, we need to calculate a value of t from the obtained data and evaluate the obtained t in terms of the sampling distribution of t that is based on the null hypothesis. If the obtained t has a low probability of occurrence (.05 or less), then the null hypothesis is rejected.

The t value is a ratio of two aspects of the data, the difference between the group means and the variability within groups. The ratio may be described as follows:
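The equation itself is missing from this copy of the chapter. Reconstructed from the description that follows, the ratio is:

    t = \frac{\text{group mean difference}}{\text{within-group variability}}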

The group difference is simply the difference between your obtained means; under the null hypothesis, you expect this difference to be zero. The value of t increases as the difference between your obtained sample means increases. Note that the sampling distribution of t assumes that there is no difference in the population means; thus, the expected value of t under the null hypothesis is zero. The within-group variability is the amount of variability of scores about the mean. The denominator of the t formula is essentially an indicator of the amount of random error in your sample. Recall from Chapter 12 that s, the standard deviation, and s², the variance, are indicators of how much scores deviate from the group mean.


FIGURE 13.1

Sampling distributions of t values with 18 degrees of freedom

A concrete example of a calculation of a t test should help clarify these concepts. The formula for the t test for two groups with equal numbers of participants in each group is:
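The formula does not survive in this copy. The standard form for two independent groups, consistent with the description in the next paragraph, is:

    t = \frac{\bar{X}_1 - \bar{X}_2}{\sqrt{\dfrac{s_1^2}{n_1} + \dfrac{s_2^2}{n_2}}}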

The numerator of the formula is simply the difference between the means of the two groups. In the denominator, we first divide the variance of each group (s₁² and s₂²) by the number of subjects in that group (n1 and n2) and add these together. We then find the square root of the result; this converts the number from a squared score (the variance) to a standard deviation. Finally, we calculate our obtained t value by dividing the mean difference by this standard deviation. When the formula is applied to the data in Table 12.1, we find:

Thus, the t value calculated from the data is 4.02. Is this a significant result? A computer program analyzing the results would immediately tell you the probability of obtaining a t value of this size with a total sample size of 20. Without such a program, there are Internet resources to find a table of “critical values” of t (http://www.statisticsmentor.com/category/statstables/) or to calculate the probability for you (http://vassarstats.net/tabs.html). Before going any further, you should know that the obtained result is significant. Using a significance level of .05, the critical value from the sampling distribution of t is 2.101. Any t value greater than or equal to 2.101 has a .05 or less probability of occurring under the assumptions of the null hypothesis. Because our obtained value is larger than the critical value, we can reject the null hypothesis and conclude that the difference in means obtained in the sample reflects a true difference in the population.
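For illustration, the critical value and the probability of the obtained t can also be computed directly (a sketch assuming scipy, which is not part of the chapter):

    from scipy.stats import t

    df = 18           # 10 + 10 - 2
    t_obtained = 4.02

    # Two-tailed critical value at alpha = .05 (2.5% in each tail)
    print(t.ppf(0.975, df))          # about 2.101, the value used in the text

    # Two-tailed probability of a t at least this extreme under the null hypothesis
    print(2 * t.sf(t_obtained, df))  # well below .05, so the result is significant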

Degrees of Freedom

You are probably wondering how the critical value was selected from the table. To use the table, you must first determine the degrees of freedom for the test (the term degrees of freedom is abbreviated as df). When comparing two means, you assume that the degrees of freedom are equal to n1 + n2 − 2, or the total number of participants in the groups minus the number of groups. In our experiment, the degrees of freedom would be 10 + 10 − 2 = 18. The degrees of freedom are the number of scores free to vary once the means are known. For example, if the mean of a group is 6.0 and there are five scores in the group, there are 4 degrees of freedom; once you have any four scores, the fifth score is known because the mean must remain 6.0.

One-Tailed Versus Two-Tailed Tests

In the table, you must choose a critical t for the situation in which your research hypothesis either (1) specified a direction of difference between the groups (e.g., group 1 will be greater than group 2) or (2) did not specify a predicted direction of difference (e.g., group 1 will differ from group 2). Somewhat different critical values of t are used in the two situations: The first situation is called a one-tailed test, and the second situation is called a two-tailed test.

The issue can be visualized by looking at the sampling distribution of t values for 18 degrees of freedom, as shown in Figure 13.1. As you can see, a value of 0.00 is expected most frequently. Values greater than or less than zero are less likely to occur. The first distribution shows the logic of a two-tailed test. We used the value of 2.101 for the critical value of t with a .05 significance level because a direction of difference was not predicted. This critical value is the point beyond which 2.5% of the positive values and 2.5% of the negative values of t lie (hence, a total probability of .05 combined from the two “tails” of the sampling distribution). The second distribution illustrates a one-tailed test. If a directional difference had been predicted, the critical value would have been 1.734. This is the value beyond which 5% of the values lie in only one “tail” of the distribution. Whether to specify a one-tailed or two-tailed test will depend on whether you originally designed your study to test a directional hypothesis.
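Both critical values mentioned in this section can be checked the same way (scipy again assumed):

    from scipy.stats import t

    df = 18
    print(t.ppf(0.95, df))   # one-tailed critical value: about 1.734
    print(t.ppf(0.975, df))  # two-tailed critical value: about 2.101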

F Test

The analysis of variance, or F test, is an extension of the t test. The analysis of variance is a more general statistical procedure than the t test. When a study has only one independent variable with two groups, F and t are virtually identical—the value of F equals t2 in this situation. However, analysis of variance is also used when there are more than two levels of an independent variable and when a factorial design with two or more independent variables has been used. Thus, the F test is appropriate for the simplest experimental design, as well as for the more complex designs discussed in Chapter 10. The t test was presented first because the formula allows us to demonstrate easily the relationship of the group difference and the within-group variability to the outcome of the statistical test. However, in practice, analysis of variance is the more common procedure. The calculations necessary to conduct an F test are provided in Appendix C.

The F statistic is a ratio of two types of variance: systematic variance and error variance (hence the term analysis of variance). Systematic variance is the deviation of the group means from the grand mean, or the mean score of all individuals in all groups. Systematic variance is small when the difference between group means is small and increases as the group mean differences increase. Error variance is the deviation of the individual scores in each group from their respective group means. Terms that you may see in research instead of systematic and error variance are between-group variance and within-group variance. Systematic variance is the variability of scores between groups, and error variance is the variability of scores within groups. The larger the F ratio is, the more likely it is that the results are significant.
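A brief sketch with hypothetical scores (the Table 12.1 data are not reproduced here; scipy assumed) shows the equivalence of F and t with two groups:

    from scipy.stats import ttest_ind, f_oneway

    # Hypothetical scores for two groups of participants
    group1 = [5, 6, 4, 5, 6, 7, 5, 4, 6, 5]
    group2 = [3, 2, 4, 3, 2, 3, 4, 2, 3, 3]

    t_stat, t_p = ttest_ind(group1, group2)
    f_stat, f_p = f_oneway(group1, group2)
    print(t_stat ** 2, f_stat)  # the two values match: F = t squared
    print(t_p, f_p)             # and the p-values are identical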


Calculating Effect Size

The concept of effect size was discussed in Chapter 12. After determining that there was a statistically significant effect of the independent variable, researchers will want to know the magnitude of the effect. Therefore, we want to calculate an estimate of effect size. For a t test, the calculation is
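The formula is missing from this copy. The standard effect-size correlation for a t test, which agrees with the value of .69 reported below, is:

    r = \sqrt{\frac{t^2}{t^2 + df}}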

where df is the degrees of freedom. Thus, using the obtained value of t, 4.02, and 18 degrees of freedom, we find:
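The worked computation is also missing; carrying it out with the values just given:

    r = \sqrt{\frac{4.02^2}{4.02^2 + 18}} = \sqrt{\frac{16.16}{34.16}} = .69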

This value is a type of correlation coefficient that can range from 0.00 to 1.00; as mentioned in Chapter 12, .69 is considered a large effect size. For additional information on effect size calculation, see Rosenthal (1991). The same distinction between r and r2 that was made in Chapter 12 applies here as well.

Another effect size estimate used when comparing two means is called Cohen’s d. Cohen’s d expresses effect size in terms of standard deviation units. A d value of 1.0 tells you that the means are 1 standard deviation apart; a d of .2 indicates that the means are separated by .2 standard deviation.

You can calculate the value of Cohen’s d using the means (M) and standard deviations (SD) of the two groups:
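The formula itself is missing from this copy. A standard form for equal-sized groups (the chapter's exact version may differ), using the pooled standard deviation, is:

    d = \frac{M_1 - M_2}{SD_{pooled}}, \qquad SD_{pooled} = \sqrt{\frac{SD_1^2 + SD_2^2}{2}}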

Note that the formula uses M and SD instead of X̄ and s. These abbreviations are used in APA style (see Appendix A).

The value of d is larger than the corresponding value of r, but it is easy to convert d to a value of r. Both statistics provide information on the size of the relationship between the variables studied. You might note that both effect size estimates have a value of 0.00 when there is no relationship. The value of r has a maximum value of 1.00, but d has no maximum value.
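For two equal-sized groups, a common conversion (not given in the text) is:

    r = \frac{d}{\sqrt{d^2 + 4}}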

Confidence Intervals and Statistical Significance

Confidence intervals were described in Chapter 7. After obtaining a sample value, we can calculate a confidence interval. This interval of values defines the most likely range of actual population values. The interval has an associated confidence level: A 95% confidence interval indicates that we are 95% sure that the population value lies within the range; a 99% interval would provide greater certainty, but the range of values would be larger.

A confidence interval can be obtained for each of the means in the aggression experiment. The 95% confidence intervals for the two conditions are:

A bar graph that includes a visual depiction of the confidence interval can be very useful. The means from the aggression experiment are shown in Figure 13.2. The shaded bars represent the mean aggression scores in the two conditions. The confidence interval for each group is shown with a vertical I-shaped line that is bounded by the upper and lower limits of the 95% confidence interval. It is important to examine confidence intervals to obtain a greater understanding of the meaning of your obtained data. Although the obtained sample means provide the best estimate of the population values, you are able to see the likely range of possible values. The size of the interval is related to both the size of the sample and the confidence level. First, as the sample size increases, the confidence interval narrows. This is because sample means obtained with larger sample sizes are more likely to reflect the population mean. Second, higher confidence is associated with a larger interval. If you want to be almost certain that the interval contains the true population mean (e.g., a 99% confidence interval), you will need to include more possibilities. Note that the 95% confidence intervals for the two means do not overlap. This should be a clue to you that the difference is statistically significant. Indeed, examining confidence intervals is an alternative way of thinking about statistical significance. The null hypothesis is that the difference in population means is 0.00. However, if you were to subtract all the means in the 95% confidence interval for the no-model condition from all the means in the model condition, none of these differences would include the value of 0.00. We can be very confident that the null hypothesis should be rejected.
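A sketch of how one such interval is computed for a single group mean (hypothetical summary values, since the actual means are not reproduced here; scipy assumed):

    from math import sqrt
    from scipy.stats import t

    # Hypothetical mean, standard deviation, and sample size for one condition
    mean, sd, n = 5.20, 1.30, 10
    sem = sd / sqrt(n)  # standard error of the mean

    # 95% confidence interval: mean plus or minus critical t times standard error
    lower, upper = t.interval(0.95, n - 1, loc=mean, scale=sem)
    print(lower, upper)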

FIGURE 13.2

Mean aggression scores from the hypothetical modeling experiment including the 95% confidence intervals


Statistical Significance: An Overview

The logic underlying the use of statistical tests rests on statistical theory. There are some general concepts, however, that should help you understand what you are doing when you conduct a statistical test. First, the goal of the test is to allow you to make a decision about whether your obtained results are reliable; you want to be confident that you would obtain similar results if you conducted the study over and over again. Second, the significance level (alpha level) you choose indicates how confident you wish to be when making the decision. A .05 significance level says that you are 95% sure of the reliability of your findings; however, there is a 5% chance that you could be wrong. There are few certainties in life! Third, you are most likely to obtain significant results when you have a large sample size because larger sample sizes provide better estimates of true population values. Finally, you are most likely to obtain significant results when the effect size is large, i.e., when differences between groups are large and variability of scores within groups is small.

In the remainder of the chapter, we will expand on these issues. We will examine the implications of making a decision about whether results are significant, the way to determine a significance level, and the way to interpret nonsignificant results. We will then provide some guidelines for selecting the appropriate statistical test in various research designs.

TYPE I AND TYPE II ERRORS

The decision to reject the null hypothesis is based on probabilities rather than on certainties. That is, the decision is made without direct knowledge of the true state of affairs in the population. Thus, the decision might not be correct; errors may result from the use of inferential statistics.

A decision matrix is shown in Figure 13.3. Notice that there are two possible decisions: (1) Reject the null hypothesis or (2) accept the null hypothesis. There are also two possible truths about the population: (1) The null hypothesis is true or (2) the null hypothesis is false. In sum, as the decision matrix shows, there are two kinds of correct decisions and two kinds of errors.

Correct Decisions

One correct decision occurs when we reject the null hypothesis and the research hypothesis is true in the population. Here, our decision is that the population means are not equal, and in fact, this is true in the population. This is the decision you hope to make when you begin your study.


FIGURE 13.3

Decision matrix for Type I and Type II errors

The other correct decision is to accept the null hypothesis, and the null hypothesis is true in the population: The population means are in fact equal.

Type I Errors

A Type I error is made when we reject the null hypothesis but the null hypothesis is actually true. Our decision is that the population means are not equal when they actually are equal. Type I errors occur when, simply by chance, we obtain a large value of t or F. For example, even though a t value of 4.025 is highly improbable if the population means are indeed equal (less than 5 chances out of 100), this can happen. When we do obtain such a large t value by chance, we incorrectly decide that the independent variable had an effect.

The probability of making a Type I error is determined by the choice of significance or alpha level (alpha may be shown as the Greek letter α). When the significance level for deciding whether to reject the null hypothesis is .05, the probability of a Type I error (alpha) is .05. If the null hypothesis is rejected, there are 5 chances out of 100 that the decision is wrong. The probability of making a Type I error can be changed by either decreasing or increasing the significance level. If we use a lower alpha level of .01, for example, there is less chance of making a Type I error. With a .01 significance level, the null hypothesis is rejected only when the probability of obtaining the results is .01 or less if the null hypothesis is correct.

Type II Errors

A Type II error occurs when the null hypothesis is accepted although in the population the research hypothesis is true. The population means are not equal, but the results of the experiment do not lead to a decision to reject the null hypothesis.

Research should be designed so that the probability of a Type II error (this probability is called beta, or β) is relatively low. The probability of making a Type II error is related to three factors. The first is the significance (alpha) level. If we set a very low significance level to decrease the chances of a Type I error, we increase the chances of a Type II error. In other words, if we make it very difficult to reject the null hypothesis, the probability of incorrectly accepting the null hypothesis increases. The second factor is sample size. True differences are more likely to be detected if the sample size is large. The third factor is effect size. If the effect size is large, a Type II error is unlikely. However, a small effect size may not be significant with a small sample.

The Everyday Context of Type I and Type II Errors

The decision matrix used in statistical analyses can be applied to the kinds of decisions people frequently must make in everyday life. For example, consider the decision made by a juror in a criminal trial. As is the case with statistics, a decision must be made on the basis of evidence: Is the defendant innocent or guilty? However, the decision rests with individual jurors and does not necessarily reflect the true state of affairs: that the person really is innocent or guilty.

The juror’s decision matrix is illustrated in Figure 13.4. To continue the parallel to the statistical decision, assume that the null hypothesis is the defendant is innocent (i.e., the dictum that a person is innocent until proven guilty). Thus, rejection of the null hypothesis means deciding that the defendant is guilty, and acceptance of the null hypothesis means deciding that the defendant is innocent. The decision matrix also shows that the null hypothesis may actually be true or false. There are two kinds of correct decisions and two kinds of errors like those described in statistical decisions. A Type I error is finding the defendant guilty when the person really is innocent; a Type II error is finding the defendant innocent when the person actually is guilty. In our society, Type I errors by jurors generally are considered to be more serious than Type II errors. Thus, before finding someone guilty, the juror is asked to make sure that the person is guilty “beyond a reasonable doubt” or to consider that “it is better to have a hundred guilty persons go free than to find one innocent person guilty.”

The decision that a doctor makes to operate or not operate on a patient provides another illustration of how a decision matrix works. The matrix is shown in Figure 13.5. Here, the null hypothesis is that no operation is necessary. The decision is whether to reject the null hypothesis and perform the operation or to accept the null hypothesis and not perform surgery. In reality, the surgeon is faced with two possibilities: Either the surgery is unnecessary (the null hypothesis is true) or the patient will die without the operation (a dramatic case of the null hypothesis being false). Which error is more serious in this case? Most doctors would believe that not operating on a patient who really needs the operation—making a Type II error—is more serious than making the Type I error of performing surgery on someone who does not really need it.

FIGURE 13.4

Decision matrix for a juror


FIGURE 13.5

Decision matrix for a doctor

One final illustration of the use of a decision matrix involves the important decision to marry someone. If the null hypothesis is that the person is “wrong” for you, and the true state is that the person is either “wrong” or “right,” you must decide whether to go ahead and marry the person. You might try to construct a decision matrix for this particular problem. Which error is more costly: a Type I error or a Type II error?

CHOOSING A SIGNIFICANCE LEVEL

Researchers traditionally have used either a .05 or a .01 significance level in the decision to reject the null hypothesis. If there is less than a .05 or a .01 probability that the results occurred because of random error, the results are said to be significant. However, there is nothing magical about a .05 or a .01 significance level. The significance level chosen merely specifies the probability of a Type I error if the null hypothesis is rejected. The significance level chosen by the researcher usually is dependent on the consequences of making a Type I versus a Type II error. As previously noted, for a juror, a Type I error is more serious than a Type II error; for a doctor, however, a Type II error may be more serious.

Researchers generally believe that the consequences of making a Type I error are more serious than those associated with a Type II error. If the null hypothesis is rejected, the researcher might publish the results in a journal, and the results might be reported by others in textbooks or in newspaper or magazine articles. Researchers do not want to mislead people or risk damaging their reputations by publishing results that are not reliable and so cannot be replicated. Thus, they want to guard against the possibility of making a Type I error by using a very low significance level (.05 or .01). In contrast to the consequences of publishing false results, the consequences of a Type II error are not seen as being very serious.

Thus, researchers want to be very careful to avoid Type I errors when their results may be published. However, in certain circumstances, a Type I error is not serious. For example, if you were engaged in pilot or exploratory research, your results would be used primarily to decide whether your research ideas were worth pursuing. In this situation, it would be a mistake to overlook potentially important data by using a very conservative significance level. In exploratory research, a significance level of .25 may be more appropriate for deciding whether to do more research. Remember that the significance level chosen and the consequences of a Type I or a Type II error are determined by what the results will be used for.

INTERPRETING NONSIGNIFICANT RESULTS

Although “accepting the null hypothesis” is convenient terminology, it is important to recognize that researchers are not generally interested in accepting the null hypothesis. Research is designed to show that a relationship between variables does exist, not to demonstrate that variables are unrelated.

More important, a decision to accept the null hypothesis when a single study does not show significant results is problematic, because negative or nonsignificant results are difficult to interpret. For this reason, researchers often say that they simply “fail to reject” or “do not reject” the null hypothesis. The results of a single study might be nonsignificant even when a relationship between the variables in the population does in fact exist. This is a Type II error. Sometimes, the reasons for a Type II error lie in the procedures used in the experiment. For example, a researcher might obtain nonsignificant results by providing incomprehensible instructions to the participants, by having a very weak manipulation of the independent variable, or by using a dependent measure that is unreliable and insensitive. Rather than concluding that the variables are not related, researchers may decide that a more carefully conducted study would find that the variables are related.

We should also consider the statistical reasons for a Type II error. Recall that the probability of a Type II error is influenced by the significance (alpha) level, sample size, and effect size. Thus, nonsignificant results are more likely to be found if the researcher is very cautious in choosing the alpha level. If the researcher uses a significance level of .001 rather than .05, it is more difficult to reject the null hypothesis (there is not much chance of a Type I error). However, that also means that there is a greater chance of accepting an incorrect null hypothesis (i.e., a Type II error is more likely). In other words, a meaningful result is more likely to be overlooked when the significance level is very low.

A Type II error may also result from a sample size that is too small to detect a real relationship between variables. A general principle is that the larger the sample size is, the greater the likelihood of obtaining a significant result. This is because large sample sizes give more accurate estimates of the actual population than do small sample sizes. In any given study, the sample size may be too small to permit detection of a significant result.

A third reason for a nonsignificant finding is that the effect size is small. Very small effects are difficult to detect without a large sample size. In general, the sample size should be large enough to find a real effect, even if it is a small one.

The fact that it is possible for a very small effect to be statistically significant raises another issue. A very large sample size might enable the researcher to find a significant difference between means; however, this difference, even though statistically significant, might have very little practical significance. For example, if an expensive new psychiatric treatment technique significantly reduces the average hospital stay from 60 to 59 days, it might not be practical to use the technique despite the evidence for its effectiveness: the hospital day saved costs less than the treatment itself. There are other circumstances, however, in which a treatment with a very small effect size has considerable practical significance. Usually this occurs when a very large population is affected by a fairly inexpensive treatment. Suppose a simple flextime policy for employees reduces employee turnover by 1% per year. This does not sound like a large effect. However, if a company normally has a turnover of 2,000 employees each year and the cost of training a new employee is $10,000, the company saves $200,000 per year with the new procedure. This amount may have practical significance for the company.

The key point here is that you should not accept the null hypothesis just because the results are nonsignificant. Nonsignificant results do not necessarily indicate that the null hypothesis is correct. However, there must be circumstances in which we can accept the null hypothesis and conclude that two variables are, in fact, not related. Frick (1995) describes several criteria that can be used in a decision to accept the null hypothesis. For example, we should look for well-designed studies with sensitive dependent measures and evidence from a manipulation check that the independent variable manipulation had its intended effect. In addition, the research should have a reasonably large sample to rule out the possibility that the sample was too small. Further, evidence that the variables are not related should come from multiple studies. Under such circumstances, you are justified in concluding that there is in fact no relationship.

CHOOSING A SAMPLE SIZE: POWER ANALYSIS

We noted in Chapter 9 that researchers often select a sample size based on what is typical in a particular area of research. An alternative approach is to select a sample size on the basis of a desired probability of correctly rejecting the null hypothesis. This probability is called the power of the statistical test. It is obviously related to the probability of a Type II error:
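The equation is missing from this copy; the relationship is simply:

    \text{Power} = 1 - \beta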


TABLE 13.2 Total sample size needed to detect a significant difference for a t test

We previously indicated that the probability of a Type II error is related to significance level (alpha), sample size, and effect size. Statisticians such as Cohen (1988) have developed procedures for determining sample size based on these factors. Table 13.2 shows the total sample size needed for an experiment with two groups and a significance level of .05. In the table, effect sizes range from .10 to .50, and the desired power is shown at .80 and .90. Smaller effect sizes require larger samples to be significant at the .05 level. Higher desired power demands a greater sample size; this is because you want a more certain “guarantee” that your results will be statistically significant. Researchers usually use a power between .70 and .90 when using this method to determine sample size. Several computer programs have been developed to allow researchers to easily make the calculations necessary to determine sample size based on effect size estimates, significance level, and desired power.
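Several such programs exist. As one illustration, the statsmodels Python package (an assumption here, not something the chapter names) can solve for sample size; note that it expects effect size as Cohen's d, whereas Table 13.2 appears to list effect-size correlations:

    from statsmodels.stats.power import TTestIndPower

    # Sample size per group for a two-group t test:
    # medium effect (d = 0.5), alpha = .05, desired power = .80
    n_per_group = TTestIndPower().solve_power(effect_size=0.5, alpha=0.05, power=0.80)
    print(n_per_group)  # about 64 per group, roughly 128 participants in total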

You may never need to perform a power analysis. However, you should recognize the importance of this concept. If a researcher is studying a relationship with an effect size correlation of .20, a fairly large sample size is needed for statistical significance at the .05 level. An inappropriately low sample size in this situation is likely to produce a nonsignificant finding.

THE IMPORTANCE OF REPLICATIONS

Throughout this discussion of statistical analysis, the focus has been on the results of a single research investigation. What were the means and standard deviations? Was the mean difference statistically significant? If the results are significant, you conclude that they would likely be obtained over and over again if the study were repeated. We now have a framework for understanding the results of the study. Be aware, however, that scientists do not attach too much importance to the results of a single study. A rich understanding of any phenomenon comes from the results of numerous studies investigating the same variables. Instead of inferring population values on the basis of a single investigation, we can look at the results of several studies that replicate previous investigations (see Cohen, 1994). The importance of replications is a central concept in Chapter 14.

SIGNIFICANCE OF A PEARSON r CORRELATION COEFFICIENT

Recall from Chapter 12 that the Pearson r correlation coefficient is used to describe the strength of the relationship between two variables when both variables have interval or ratio scale properties. However, there remains the issue of whether the correlation is statistically significant. The null hypothesis in this case is that the true population correlation is 0.00—the two variables are not related. What if you obtain a correlation of .27 (plus or minus)? A statistical significance test will allow you to decide whether to reject the null hypothesis and conclude that the true population correlation is, in fact, different from 0.00. The technical way to do this is to perform a t test that compares the obtained coefficient with the null hypothesis correlation of 0.00. The procedures for calculating a Pearson r and determining significance are provided in Appendix C.
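As an illustration of the logic (a sketch with hypothetical numbers, not the Appendix C procedure), the t statistic for a correlation can be computed directly from r and the sample size:

```python
# Sketch: testing H0: population correlation = 0 for an obtained Pearson r.
import math
from scipy import stats

r, n = 0.27, 60                             # hypothetical obtained r and sample size
df = n - 2
t = r * math.sqrt(df / (1 - r**2))          # t statistic for a correlation
p = 2 * stats.t.sf(abs(t), df)              # two-tailed probability
print(f"t({df}) = {t:.2f}, p = {p:.3f}")
```

In practice, scipy.stats.pearsonr(x, y) returns both the correlation and its p-value directly from raw data.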

COMPUTER ANALYSIS OF DATA

Although you can calculate statistics with a calculator using the formulas provided in this chapter, Chapter 12, and Appendix C, most data analysis is carried out via computer programs. Sophisticated statistical analysis software packages make it easy to calculate statistics for any data set. Descriptive and inferential statistics are obtained quickly, the calculations are accurate, and information on statistical significance is provided in the output. Computers also facilitate graphic displays of data.

Some of the major statistical programs include SPSS, SAS, SYSTAT, and freely available R and MYSTAT. Other programs may be used on your campus. Many people do most of their statistical analyses using a spreadsheet program such as Microsoft Excel. You will need to learn the specific details of the computer system used at your college or university. No one program is better than another; they all differ in the appearance of the output and the specific procedures needed to input data and have the program perform the test. However, the general procedures for doing analyses are quite similar in all of the statistics programs.

The first step in doing the analysis is to input the data. Suppose you want to input the data in Table 12.1, the modeling and aggression experiment. Data are entered into columns. It is easiest to think of data for computer analysis as a matrix with rows and columns. Data for each research participant are the rows of the matrix. The columns contain each participant’s scores on one or more measures, and an additional column may be needed to indicate a code to identify which condition the individual was in (e.g., Group 1 or Group 2). A data matrix in SPSS for Windows is shown in Figure 13.6. The numbers in the “group” column indicate whether the individual is in Group 1 (model) or Group 2 (no model), and the numbers in the “aggscore” column are the aggression scores from Table 12.1.

Other programs may require somewhat different methods of data input. For example, in Excel, it is usually easiest to set up a separate column for each group, as shown in Figure 13.6.

The next step is to provide instructions for the statistical analysis. Again, each program uses somewhat different steps to perform the analysis; most require you to choose from various menu options. When the analysis is completed, you are provided with the output that shows the results of the statistical procedure you performed. You will need to learn how to interpret the output. Figure 13.6 shows the output for a t test using Excel.

When you are first learning to use a statistical analysis program, it is a good idea to practice with some data from a statistics text to make sure that you get the same results. This will ensure that you know how to properly input the data and request the statistical analysis.
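Here is a minimal sketch of the same workflow in Python using the pandas and scipy libraries. The "group" and "aggscore" column names mirror the data matrix described above, but the scores themselves are made up for illustration rather than taken from Table 12.1.

```python
# Sketch: long-format data matrix (one row per participant) and an independent t test.
import pandas as pd
from scipy import stats

data = pd.DataFrame({
    "group":    [1, 1, 1, 1, 1, 2, 2, 2, 2, 2],   # 1 = model, 2 = no model
    "aggscore": [5, 6, 4, 7, 5, 3, 2, 4, 3, 2],   # hypothetical aggression scores
})

model    = data.loc[data["group"] == 1, "aggscore"]
no_model = data.loc[data["group"] == 2, "aggscore"]

t, p = stats.ttest_ind(model, no_model)           # independent-groups t test
print(f"t = {t:.2f}, p = {p:.4f}")
```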

SELECTING THE APPROPRIATE STATISTICAL TEST

We have covered several types of designs, and the variables that we study may have nominal, ordinal, interval, or ratio scale properties. How do you choose the appropriate statistical test for analyzing your data? Fortunately, there are a number of online guides and tutorials, such as http://www.socialresearchmethods.net/selstat/ssstart.htm and http://wise.cgu.edu/choosemod/opening.htm; SPSS even has its own Statistics Coach to help with the decision.

We cannot cover every possible analysis. Our focus will be on variables that have either (1) nominal scale properties—two or more discrete values such as male and female or (2) interval/ratio scale properties with many values such as reaction time or rating scales (also called continuous variables). We will not address variables with ordinal scale values.

Research Studying Two Variables (Bivariate Research)

In these cases, the researcher is studying whether two variables are related. In general, we would refer to the first variable as the independent variable (IV) and the second variable as the dependent variable (DV). However, because it does not matter whether we are doing experimental or nonexperimental research, we could just as easily refer to the two variables as Variable X and Variable Y or Variable A and Variable B.


FIGURE 13.6

Sample computer input and output using data from Table 12.1 (modeling experiment)


Research with Multiple Independent Variables

In the following situations, we have more complex research designs with two or more independent variables that are studied with a single outcome or dependent variable.

These research design situations have been described in previous chapters. There are, of course, many other types of designs. Designs with multiple variables (multivariate statistics) are described in detail by Tabachnick and Fidell (2007). Procedures for research using ordinal-level measurement may be found in a book by Siegel and Castellan (1988).

You have now considered how to generate research ideas, conduct research to test your ideas, and evaluate the statistical significance of your results. In the final chapter, we will examine issues of generalizing research findings beyond the specific circumstances in which the research was conducted.

Study Terms

Alpha level (p. 270)

Analysis of variance (F test) (p. 275)

Confidence interval (p. 276)

Degrees of freedom (p. 274)

Error variance (p. 275)

Inferential statistics (p. 267)

Null hypothesis (p. 268)

Power (p. 284)

Probability (p. 269)

Research hypothesis (p. 268)

Sampling distribution (p. 270)

Statistical significance (p. 269)

Systematic variance (p. 275)

t test (p. 272)

Type I error (p. 279)

Type II error (p. 279)

Review Questions

  1. Distinguish between the null hypothesis and the research hypothesis. When does the researcher decide to reject the null hypothesis?
  2. What is meant by statistical significance?
  3. What factors are most important in determining whether obtained results will be significant?
  4. Distinguish between a Type I and a Type II error. Why is your significance level the probability of making a Type I error?
  5. What factors are involved in choosing a significance level?
  6. What influences the probability of a Type II error?
  7. What is the difference between statistical significance and practical significance?
  8. Discuss the reasons that a researcher might obtain nonsignificant results.

Activities

  1. In an experiment, one group of research participants is given 10 pages of material to proofread for errors. Another group proofreads the same material on a computer screen. The dependent variable is the number of errors detected in a 5-minute period. A .05 significance (alpha) level is used to evaluate the results.
     a. What statistical test would you use?
     b. What is the null hypothesis? The research hypothesis?
     c. What is the Type I error? The Type II error?
     d. What is the probability of making a Type I error?
  2. In Professor Dre’s study, the average number of errors detected in the print and computer conditions was 38.4 and 13.2, respectively; this difference was not statistically significant. When Professor Seuss conducted the same experiment, the means of the two groups were 21.1 and 14.7, but the difference was statistically significant. Explain how this could happen.
  3. Suppose that you work for the child social services agency in your county. Your job is to investigate instances of possible child neglect or abuse. After collecting your evidence, which may come from a variety of sources, you must decide whether to leave the child in the home or place the child in protective custody. Specify the null and research hypotheses in this situation. What constitutes a Type I and a Type II error? Is a Type I or Type II error the more serious error in this situation? Why?
  4. A researcher investigated attitudes toward individuals in wheelchairs. The question was: Would people react differently to a person they perceived as being temporarily confined to the wheelchair than to a person who had a permanent disability? Participants were randomly assigned to two groups. Individuals in one group each worked on various tasks with a confederate in a wheelchair; members of the other group worked with the same confederate in a wheelchair, but this time the confederate wore a leg cast. After the session was over, participants filled out a questionnaire regarding their reactions to the study. One question asked, “Would you be willing to work with your test partner in the future on a class assignment?” with “yes” and “no” as the only response alternatives. What would be the appropriate significance test for this experiment? Can you offer a critique of the dependent variable? If you changed the dependent variable, would it affect your choice of significance tests? If so, how?
  5. Are you interested in learning more about using statistical software? If so, start by finding out whether your campus has one or more statistical software packages available in your computer labs. Now conduct a Web search for information on one of these—you may find it most useful to search for tutorials (including videos). We also encourage you to explore R, an increasingly popular statistical analysis and graphing program. It is free for download to your computer (http://www.r-project.org). Again, you may search for tutorials on using R; a good place to start is http://personality-project.org/r/r.guide.html.


Discussion

Discussion

Type of document      Essay

6 Pages Double Spaced (approx 275 words per page)

Subject area    Psychology

Academic Level          Master

Style    APA

Number of sources/references          1

Order description:

Discussion: Please discuss, elaborate and give example on each question below. The reference is (Cozby, P.C. & Bates, S. (2015) Methods in behavioral research (12th ed.). New York, NY: McGraw-Hill). Please answer each question carefully and follow each instruction. Be careful with grammar and spelling.

Discuss each question in 200 words.

Question 1. Describe the F test, including systematic variance and error variance. Remember to back your facts with a source. 200 words

Question 2. So…. I had a doctoral student who was measuring stress levels of nurses who worked in emergency rooms of large cities. She wanted to know how many nurses she should include in her sample. Her methodology involved measuring cortisol in saliva (there is a direct relationship between cortisol and stress). From past research, she knew that this method of measuring cortisol had a 95% CI of 3.2 – 6.4. That is, this method of collecting data is accurate to within 3.2 to 6.4 percentage points, with 95% confidence. These numbers were obtained from her review of past research.

 

So…she estimated her 95% CI to be 4. She also estimated that in the United States there were 5,000 ER nurses.

 

She now had all the information to do a power analysis. So… what is a power analysis?

 

I encourage you to go to the following site for a power analysis calculator, plug in the above numbers, and get the correct number of participants, based on a 95% CI of 4

 

MaCorr Research (2018). Sample size calculator. Retrieved from http://www.macorr.com/sample-size-calculator.htm    Use only the top calculator.

 

She used a sample size of 540.  Was she over, under, or just right? 200 words

 

Question 3. Please describe the differences between Type I and Type II errors. Remember to use citations and a reference to back facts. 200 words

 

Question 4. Apply what was learned in weeks 1-7 to the following research scenario:

 

A researcher is exploring differences between men and women on ‘number of different recreational drugs used.’  The researcher collects data on a sample of 50 men and 50 women between the ages of 18-25. Each participant is asked ‘how many different recreational drugs have you tried in your life?’  The IV is gender (male/female) and the DV is ‘number of reported drugs.’

 

Please elaborate on the power issues that may arise. What factors influence statistical power? What biases may occur in this study? 200 words

 

Question 5. Imagine you are a social worker and you are assigned to a case involving possible child neglect/abuse.

How do social workers determine if there are signs of abuse? How do social workers determine if there are signs of neglect?

 

Your RQ would be: Is this child experiencing neglect/abuse in the family home? What would be your null hypothesis? What would be the research/alternative hypothesis?

 

Knowing what you know about type I and type II errors, which would be worse (making a type I or a type II error) in this scenario?  Please elaborate on your answer. 200 words

 

Question 6. Go to the web (NIH, CDC, etc.) and find a study that uses a two-tailed t-test as a statistical tool. Summarize this study in 3 to 4 sentences. 200 words

 

Question 7. Describe the t test and explain the differences between one-tailed and two-tailed tests. Research examples would be appreciated. 200 words

Question 8. Back in weeks 1 and 2, we discussed RQs and hypotheses. To relate what was learned then to now, please write an RQ that can be answered with a one-tailed t-test. Then write an RQ that can be answered with a two-tailed t-test. 200 words.

 

LEARNING OBJECTIVES

  • Explain how researchers use inferential statistics to evaluate sample data.
  • Distinguish between the null hypothesis and the research hypothesis.
  • Discuss probability in statistical inference, including the meaning of statistical significance.
  • Describe the t test and explain the difference between one-tailed and two-tailed tests.
  • Describe the F test, including systematic variance and error variance.
  • Describe what a confidence interval tells you about your data.
  • Distinguish between Type I and Type II errors.
  • Discuss the factors that influence the probability of a Type II error.
  • Discuss the reasons a researcher may obtain nonsignificant results.
  • Define power of a statistical test.
  • Describe the criteria for selecting an appropriate statistical test.

IN THE PREVIOUS CHAPTER, WE EXAMINED WAYS OF DESCRIBING THE RESULTS OF A STUDY USING DESCRIPTIVE STATISTICS AND A VARIETY OF GRAPHING TECHNIQUES. In addition to descriptive statistics, researchers use inferential statistics to draw more general conclusions about their data. In short, inferential statistics allow researchers to (a) assess just how confident they are that their results reflect what is true in the larger population and (b) assess the likelihood that their findings would still occur if their study were repeated over and over. In this chapter, we examine methods for doing so.

SAMPLES AND POPULATIONS

Inferential statistics are necessary because the results of a given study are based only on data obtained from a single sample of research participants. Researchers rarely, if ever, study entire populations; their findings are based on sample data. In addition to describing the sample data, we want to make statements about populations. Would the results hold up if the experiment were conducted repeatedly, each time with a new sample?

In the hypothetical experiment described in Chapter 12 (see Table 12.1), mean aggression scores were obtained in model and no-model conditions. These means are different: Children who observe an aggressive model subsequently behave more aggressively than children who do not see the model. Inferential statistics are used to determine whether the results match what would happen if we were to conduct the experiment again and again with multiple samples. In essence, we are asking whether we can infer that the difference in the sample means shown in Table 12.1 reflects a true difference in the population means.

Recall our discussion of this issue in Chapter 7 on the topic of survey data. A sample of people in your state might tell you that 57% prefer the Democratic candidate for an office and that 43% favor the Republican candidate. The report then says that these results are accurate to within 3 percentage points, with a 95% confidence level. This means that the researchers are very (95%) confident that, if they were able to study the entire population rather than a sample, the actual percentage who preferred the Democratic candidate would be between 54% and 60% and the percentage preferring the Republican would be between 40% and 46%. In this case, the researcher could predict with a great deal of certainty that the Democratic candidate will win because there is no overlap in the projected population values. Note, however, that even when we are very (in this case, 95%) sure, we still have a 5% chance of being wrong.

Inferential statistics allow us to arrive at such conclusions on the basis of sample data. In our study with the model and no-model conditions, are we confident that the means are sufficiently different to infer that the difference would be obtained in an entire population?


INFERENTIAL STATISTICS

Much of the previous discussion of experimental design centered on the importance of ensuring that the groups are equivalent in every way except the independent variable manipulation. Equivalence of groups is achieved by experimentally controlling all other variables or by randomization. The assumption is that if the groups are equivalent, any differences in the dependent variable must be due to the effect of the independent variable.

This assumption is usually valid. However, it is also true that the difference between any two groups will almost never be zero. In other words, there will be some difference in the sample means, even when all of the principles of experimental design are rigorously followed. This happens because we are dealing with samples, rather than populations. Random or chance error will be responsible for some difference in the means, even if the independent variable had no effect on the dependent variable.

Therefore, the difference in the sample means reflects any true difference in the population means (i.e., the effect of the independent variable) plus any random error. Inferential statistics allow researchers to make inferences about the true difference in the population on the basis of the sample data. Specifically, inferential statistics give the probability that the difference between means reflects random error rather than a real difference.

NULL AND RESEARCH HYPOTHESES

Statistical inference begins with a statement of the null hypothesis and a research (or alternative) hypothesis. The null hypothesis is simply that the population means are equal—the observed difference is due to random error. The research hypothesis is that the population means are, in fact, not equal. The null hypothesis states that the independent variable had no effect; the research hypothesis states that the independent variable did have an effect. In the aggression modeling experiment, the null and research hypotheses are:

H0 (null hypothesis): The population mean of the no-model group is equal to the population mean of the model group.

H1 (research hypothesis): The population mean of the no-model group is not equal to the population mean of the model group.

The logic of the null hypothesis is this: If we can determine that the null hypothesis is incorrect, then we accept the research hypothesis as correct. Acceptance of the research hypothesis means that the independent variable had an effect on the dependent variable.

The null hypothesis is used because it is a very precise statement—the population means are exactly equal. This permits us to know precisely the probability of obtaining our results if the null hypothesis is correct. Such precision is not possible with the research hypothesis, so we infer that the research hypothesis is correct only by rejecting the null hypothesis. We reject the null hypothesis when we find a very low probability that the obtained results could be due to random error. This is what is meant by statistical significance: A significant result is one that has a very low probability of occurring if the population means are equal. More simply, significance indicates that there is a low probability that the difference between the obtained sample means was due to random error. Significance, then, is a matter of probability.

PROBABILITY AND SAMPLING DISTRIBUTIONS

Probability is the likelihood of the occurrence of some event or outcome. We all use probabilities frequently in everyday life. For example, if you say that there is a high probability that you will get an A in this course, you mean that this outcome is likely to occur. Your probability statement is based on specific information, such as your grades on examinations. The weather forecaster says there is a 10% chance of rain today; this means that the likelihood of rain is very low. A gambler gauges the probability that a particular horse will win a race on the basis of the past records of that horse.

Probability in statistical inference is used in much the same way. We want to specify the probability that an event (in this case, a difference between means in the sample) will occur if there is no difference in the population. The question is: What is the probability of obtaining this result if only random error is operating? If this probability is very low, we reject the possibility that only random or chance error is responsible for the obtained difference in means.

Probability: The Case of ESP

The use of probability in statistical inference can be understood intuitively from a simple example. Suppose that a friend claims to have ESP (extrasensory perception) ability. You decide to test your friend with a set of five cards commonly used in ESP research; a different symbol is presented on each card. In the ESP test, you look at each card and think about the symbol, and your friend tells you which symbol you are thinking about. In your actual experiment, you have 10 trials; each of the five cards is presented two times in a random order. Your task is to determine whether your friend’s answers reflect random error (guessing) or whether they indicate that something more than random error is occurring. The null hypothesis in your study is that only random error is operating. In this case, the research hypothesis is that the number of correct answers shows more than random or chance guessing. (Note, however, that accepting the research hypothesis could mean that your friend has ESP ability, but it could also mean that the cards were marked, that you had somehow cued your friend when thinking about the symbols, and so on.)

You can easily determine the number of correct answers to expect if the null hypothesis is correct. Just by guessing, 1 out of 5 answers (20%) should be correct. On 10 trials, 2 correct answers are expected under the null hypothesis. If, in the actual experiment, more (or less) than 2 correct answers are obtained, would you conclude that the obtained data reflect random error or something more than merely random guessing?

Suppose that your friend gets 3 correct. Then you would probably conclude that only guessing is involved, because you would recognize that there is a high probability that there would be 3 correct answers even though only 2 correct are expected under the null hypothesis. You expect that exactly 2 answers in 10 trials would be correct in the long run, if you conducted this experiment with this subject over and over again. However, small deviations away from the expected 2 are highly likely in a sample of 10 trials.

Suppose, though, that your friend gets 7 correct. You might conclude that the results indicate more than random error in this one sample of 10 observations. This conclusion would be based on your intuitive judgment that an outcome of 70% correct when only 20% is expected is very unlikely. At this point, you would decide to reject the null hypothesis and state that the result is significant. A significant result is one that is very unlikely if the null hypothesis is correct.

A key question then becomes: How unlikely does a result have to be before we decide it is significant? A decision rule is determined prior to collecting the data. The probability required for significance is called the alpha level. The most common alpha level probability used is .05. The outcome of the study is considered significant when there is a .05 or less probability of obtaining the results; that is, there are only 5 chances out of 100 that the results were due to random error in one sample from the population. If it is very unlikely that random error is responsible for the obtained results, the null hypothesis is rejected.

Sampling Distributions

You may have been able to judge intuitively that obtaining 7 correct on the 10 trials is very unlikely. Fortunately, we do not have to rely on intuition to determine the probabilities of different outcomes. Table 13.1 shows the probability of actually obtaining each of the possible outcomes in the ESP experiment with 10 trials and a null hypothesis expectation of 20% correct. An outcome of 2 correct answers has the highest probability of occurrence. Also, as intuition would suggest, an outcome of 3 correct is highly probable, but an outcome of 7 correct is highly unlikely.

The probabilities shown in Table 13.1 were derived from a probability distribution called the binomial distribution; all statistical significance decisions are based on probability distributions such as this one. Such distributions are called sampling distributions. The sampling distribution is based on the assumption that the null hypothesis is true; in the ESP example, the null hypothesis is that the person is only guessing and should therefore get 20% correct. Such a distribution assumes that if you were to conduct the study with the same number of observations over and over again, the most frequent finding would be 20%. However, because of the random error possible in each sample, there is a certain probability associated with other outcomes. Outcomes that are close to the expected null hypothesis value of 20% are very likely. However, outcomes farther from the expected result are less and less likely if the null hypothesis is correct. When your obtained results are highly unlikely if you are, in fact, sampling from the distribution specified by the null hypothesis, you conclude that the null hypothesis is incorrect. Instead of concluding that your sample results reflect a random deviation from the long-run expectation of 20%, you decide that the null hypothesis is incorrect. That is, you conclude that you have not sampled from the sampling distribution specified by the null hypothesis. Instead, in the case of the ESP example, you decide that your data are from a different sampling distribution in which, if you were to test the person repeatedly, most of the outcomes would be near your obtained result of 7 correct answers.


TABLE 13.1 Exact probability of each possible outcome of the ESP experiment with 10 trials

All statistical tests rely on sampling distributions to determine the probability that the results are consistent with the null hypothesis. When the obtained data are very unlikely according to null hypothesis expectations (usually a .05 probability or less), the researcher decides to reject the null hypothesis and therefore to accept the research hypothesis.

Sample Size

The ESP example also illustrates the impact of sample size—the total number of observations—on determinations of statistical significance. Suppose you had tested your friend on 100 trials instead of 10 and had observed 30 correct answers. Just as you had expected 2 correct answers in 10 trials, you would now expect 20 of 100 answers to be correct. However, 30 out of 100 has a much lower likelihood of occurrence than 3 out of 10. This is because, with more observations sampled, you are more likely to obtain an accurate estimate of the true population value. Thus, as the size of your sample increases, you are more confident that your outcome is actually different from the null hypothesis expectation.
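The probabilities in Table 13.1, and the sample-size comparison just described, can be recomputed from the binomial distribution. The sketch below (assuming the scipy library) contrasts 3 correct out of 10 with 30 correct out of 100, both 30% correct but very different under a null hypothesis of 20%:

```python
# Sketch: null-hypothesis (guessing) probabilities for the ESP example.
from scipy import stats

# Probability of each exact outcome in 10 trials with p = .20 (cf. Table 13.1)
for k in range(11):
    print(k, round(stats.binom.pmf(k, 10, 0.20), 4))

# Probability of doing at least this well by guessing alone:
print(stats.binom.sf(2, 10, 0.20))     # P(3 or more correct out of 10)
print(stats.binom.sf(29, 100, 0.20))   # P(30 or more correct out of 100)
```

The first survival probability is large, so 3 of 10 is unremarkable; the second is very small, so 30 of 100 would lead you to reject the null hypothesis.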

EXAMPLE: THE t AND F TESTS

Different statistical tests allow us to use probability to decide whether to reject the null hypothesis. In this section, we will examine the t test and the F test. The t test is commonly used to examine whether two groups are significantly different from each other. In the hypothetical experiment on the effect of a model on aggression, a t test is appropriate because we are asking whether the mean of the no-model group differs from the mean of the model group. The F test is a more general statistical test that can be used to ask whether there is a difference among three or more groups or to evaluate the results of factorial designs (discussed in Chapter 10).

To use a statistical test, you must first specify the null hypothesis and the research hypothesis that you are evaluating. The null and research hypotheses for the modeling experiment were described previously. You must also specify the significance level that you will use to decide whether to reject the null hypothesis; this is the alpha level. As noted, researchers generally use a significance level of .05.

t Test

The sampling distribution of all possible values of t is shown in Figure 13.1. (This particular distribution is for the sample size we used in the hypothetical experiment on modeling and aggression; the sample size was 20 with 10 participants in each group.) This sampling distribution has a mean of 0 and a standard deviation of 1. It reflects all the possible outcomes we could expect if we compare the means of two groups and the null hypothesis is correct.

To use this distribution to evaluate our data, we need to calculate a value of t from the obtained data and evaluate the obtained t in terms of the sampling distribution of t that is based on the null hypothesis. If the obtained t has a low probability of occurrence (.05 or less), then the null hypothesis is rejected.

The t value is a ratio of two aspects of the data, the difference between the group means and the variability within groups. The ratio may be described as follows:

t = group difference / within-group variability

The group difference is simply the difference between your obtained means; under the null hypothesis, you expect this difference to be zero. The value of t increases as the difference between your obtained sample means increases. Note that the sampling distribution of t assumes that there is no difference in the population means; thus, the expected value of t under the null hypothesis is zero. The within-group variability is the amount of variability of scores about the mean. The denominator of the t formula is essentially an indicator of the amount of random error in your sample. Recall from Chapter 12 that s, the standard deviation, and s², the variance, are indicators of how much scores deviate from the group mean.


FIGURE 13.1

Sampling distributions of t values with 18 degrees of freedom

A concrete example of a calculation of a t test should help clarify these concepts. The formula for the t test for two groups with equal numbers of participants in each group is:

t = (X̄₁ − X̄₂) / √(s₁²/n₁ + s₂²/n₂)

The numerator of the formula is simply the difference between the means of the two groups. In the denominator, we first divide the variance of each group (s₁² and s₂²) by the number of subjects in that group (n₁ and n₂) and add these together. We then find the square root of the result; this converts the number from a squared score (the variance) to a standard deviation. Finally, we calculate our obtained t value by dividing the mean difference by this standard deviation.

When the formula is applied to the data in Table 12.1, the t value calculated from the data is 4.02. Is this a significant result? A computer program analyzing the results would immediately tell you the probability of obtaining a t value of this size with a total sample size of 20. Without such a program, there are Internet resources to find a table of “critical values” of t (http://www.statisticsmentor.com/category/statstables/) or to calculate the probability for you (http://vassarstats.net/tabs.html). Before going any further, you should know that the obtained result is significant. Using a significance level of .05, the critical value from the sampling distribution of t is 2.101. Any t value greater than or equal to 2.101 has a .05 or less probability of occurring under the assumptions of the null hypothesis. Because our obtained value is larger than the critical value, we can reject the null hypothesis and conclude that the difference in means obtained in the sample reflects a true difference in the population.
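To make the formula concrete, here is a sketch that applies it to two small made-up groups (not the Table 12.1 data) and checks the result against scipy's built-in test:

```python
# Sketch: two-group t by the formula, checked against scipy.stats.ttest_ind.
import math
from scipy import stats

group1 = [3, 4, 5, 6, 7]                  # hypothetical scores, mean = 5
group2 = [1, 2, 3, 4, 5]                  # hypothetical scores, mean = 3

m1, m2 = sum(group1) / 5, sum(group2) / 5
v1 = sum((x - m1) ** 2 for x in group1) / 4    # sample variance, n - 1 = 4
v2 = sum((x - m2) ** 2 for x in group2) / 4

t_hand = (m1 - m2) / math.sqrt(v1 / 5 + v2 / 5)   # the formula above
t_scipy, p = stats.ttest_ind(group1, group2)

print(t_hand, t_scipy, p)   # both t values are 2.00; df = 8
```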

Degrees of Freedom

You are probably wondering how the critical value was selected from the table. To use the table, you must first determine the degrees of freedom for the test (the term degrees of freedom is abbreviated as df). When comparing two means, you assume that the degrees of freedom are equal to n₁ + n₂ − 2, or the total number of participants in the groups minus the number of groups. In our experiment, the degrees of freedom would be 10 + 10 − 2 = 18. The degrees of freedom are the number of scores free to vary once the means are known. For example, if the mean of a group is 6.0 and there are five scores in the group, there are 4 degrees of freedom; once you have any four scores, the fifth score is known because the mean must remain 6.0.

One-Tailed Versus Two-Tailed Tests

In the table, you must choose a critical t for the situation in which your research hypothesis either (1) specified a direction of difference between the groups (e.g., group 1 will be greater than group 2) or (2) did not specify a predicted direction of difference (e.g., group 1 will differ from group 2). Somewhat different critical values of t are used in the two situations: The first situation is called a one-tailed test, and the second situation is called a two-tailed test.

The issue can be visualized by looking at the sampling distribution of t values for 18 degrees of freedom, as shown in Figure 13.1. As you can see, a value of 0.00 is expected most frequently. Values greater than or less than zero are less likely to occur. The first distribution shows the logic of a two-tailed test. We used the value of 2.101 for the critical value of t with a .05 significance level because a direction of difference was not predicted. This critical value is the point beyond which 2.5% of the positive values and 2.5% of the negative values of t lie (hence, a total probability of .05 combined from the two “tails” of the sampling distribution). The second distribution illustrates a one-tailed test. If a directional difference had been predicted, the critical value would have been 1.734. This is the value beyond which 5% of the values lie in only one “tail” of the distribution. Whether to specify a one-tailed or two-tailed test will depend on whether you originally designed your study to test a directional hypothesis.
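The two critical values quoted above can be reproduced from the t sampling distribution; a short sketch assuming the scipy library:

```python
# Sketch: critical t values for df = 18 at alpha = .05.
from scipy import stats

two_tailed = stats.t.ppf(1 - 0.05 / 2, df=18)   # .025 in each tail -> 2.101
one_tailed = stats.t.ppf(1 - 0.05, df=18)       # .05 in one tail   -> 1.734
print(round(two_tailed, 3), round(one_tailed, 3))
```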

F Test

The analysis of variance, or F test, is an extension of the t test. The analysis of variance is a more general statistical procedure than the t test. When a study has only one independent variable with two groups, F and t are virtually identical—the value of F equals t² in this situation. However, analysis of variance is also used when there are more than two levels of an independent variable and when a factorial design with two or more independent variables has been used. Thus, the F test is appropriate for the simplest experimental design, as well as for the more complex designs discussed in Chapter 10. The t test was presented first because the formula allows us to demonstrate easily the relationship of the group difference and the within-group variability to the outcome of the statistical test. However, in practice, analysis of variance is the more common procedure. The calculations necessary to conduct an F test are provided in Appendix C.

The F statistic is a ratio of two types of variance: systematic variance and error variance (hence the term analysis of variance). Systematic variance is the deviation of the group means from the grand mean, or the mean score of all individuals in all groups. Systematic variance is small when the difference between group means is small and increases as the group mean differences increase. Error variance is the deviation of the individual scores in each group from their respective group means. Terms that you may see in research instead of systematic and error variance are between-group variance and within-group variance. Systematic variance is the variability of scores between groups, and error variance is the variability of scores within groups. The larger the F ratio is, the more likely it is that the results are significant.
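A brief sketch of the F test using scipy's one-way analysis of variance on the made-up groups from the earlier t test sketch; with two groups, the F value is simply the square of the t value:

```python
# Sketch: one-way ANOVA (F test); with two groups, F equals t squared.
from scipy import stats

group1 = [3, 4, 5, 6, 7]
group2 = [1, 2, 3, 4, 5]

f, p = stats.f_oneway(group1, group2)
print(f, p)   # F = 4.00, the square of the t value (2.00) obtained earlier
```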


Calculating Effect Size

The concept of effect size was discussed in Chapter 12. After determining that there was a statistically significant effect of the independent variable, researchers will want to know the magnitude of the effect. Therefore, we want to calculate an estimate of effect size. For a t test, the calculation is

r = √[t² / (t² + df)]

where df is the degrees of freedom. Thus, using the obtained value of t, 4.02, and 18 degrees of freedom, we find:

r = √[4.02² / (4.02² + 18)] = √(16.16 / 34.16) = .69

This value is a type of correlation coefficient that can range from 0.00 to 1.00; as mentioned in Chapter 12, .69 is considered a large effect size. For additional information on effect size calculation, see Rosenthal (1991). The same distinction between r and r2 that was made in Chapter 12 applies here as well.

Another effect size estimate used when comparing two means is called Cohen’s d. Cohen’s d expresses effect size in terms of standard deviation units. A d value of 1.0 tells you that the means are 1 standard deviation apart; a d of .2 indicates that the means are separated by .2 standard deviation.

You can calculate the value of Cohen’s d using the means (M) and standard deviations (SD) of the two groups. With equal group sizes, a standard form of the formula is

d = (M₁ − M₂) / SD_pooled, where SD_pooled = √[(SD₁² + SD₂²) / 2]

Note that the formula uses M and SD instead of X̄ and s. These abbreviations are used in APA style (see Appendix A).

The value of d is larger than the corresponding value of r, but it is easy to convert d to a value of r. Both statistics provide information on the size of the relationship between the variables studied. You might note that both effect size estimates have a value of 0.00 when there is no relationship. The value of r has a maximum value of 1.00, but d has no maximum value.
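The following sketch reproduces the r of .69 from the obtained t and df, and computes Cohen's d for hypothetical group summaries using the pooled-SD form shown above (the means and SDs are invented for illustration):

```python
# Sketch: effect size r from t and df, plus Cohen's d from group summaries.
import math

t, df = 4.02, 18
r = math.sqrt(t**2 / (t**2 + df))
print(round(r, 2))                      # 0.69, a large effect

M1, SD1 = 5.2, 1.1                      # hypothetical mean and SD, group 1
M2, SD2 = 3.1, 1.2                      # hypothetical mean and SD, group 2
sd_pooled = math.sqrt((SD1**2 + SD2**2) / 2)
d = (M1 - M2) / sd_pooled
print(round(d, 2))                      # means are about 1.8 pooled SDs apart
```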

Confidence Intervals and Statistical Significance

Confidence intervals were described in Chapter 7. After obtaining a sample value, we can calculate a confidence interval. An interval of values defines the most likely range of actual population values. The interval has an associated confidence level: A 95% confidence interval indicates that we are 95% sure that the population value lies within the range; a 99% interval would provide greater certainty, but the range of values would be larger.

A confidence interval can be obtained for each of the means in the aggression experiment.

A bar graph that includes a visual depiction of the confidence interval can be very useful. The means from the aggression experiment are shown in Figure 13.2. The shaded bars represent the mean aggression scores in the two conditions. The confidence interval for each group is shown with a vertical I-shaped line that is bounded by the upper and lower limits of the 95% confidence interval. It is important to examine confidence intervals to obtain a greater understanding of the meaning of your obtained data. Although the obtained sample means provide the best estimate of the population values, you are able to see the likely range of possible values. The size of the interval is related to both the size of the sample and the confidence level. As the sample size increases, the confidence interval narrows. This is because sample means obtained with larger sample sizes are more likely to reflect the population mean. Second, higher confidence is associated with a larger interval. If you want to be almost certain that the interval contains the true population mean (e.g., a 99% confidence interval), you will need to include more possibilities. Note that the 95% confidence intervals for the two means do not overlap. This should be a clue to you that the difference is statistically significant. Indeed, examining confidence intervals is an alternative way of thinking about statistical significance. The null hypothesis is that the difference in population means is 0.00. However, if you were to subtract all the means in the 95% confidence interval for the no-model condition from all the means in the model condition, none of these differences would include the value of 0.00. We can be very confident that the null hypothesis should be rejected.

FIGURE 13.2

Mean aggression scores from the hypothetical modeling experiment including the 95% confidence intervals
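As a sketch of the underlying computation (hypothetical scores, not the values plotted in Figure 13.2), a 95% confidence interval for one group mean can be built from the t distribution:

```python
# Sketch: 95% confidence interval for a sample mean using the t distribution.
import math
from scipy import stats

scores = [5, 6, 4, 7, 5, 6, 4, 5, 6, 5]        # hypothetical scores, n = 10
n = len(scores)
mean = sum(scores) / n
sd = math.sqrt(sum((x - mean) ** 2 for x in scores) / (n - 1))
se = sd / math.sqrt(n)                          # standard error of the mean

t_crit = stats.t.ppf(0.975, df=n - 1)           # two-tailed critical t
lower, upper = mean - t_crit * se, mean + t_crit * se
print(f"{mean:.2f} [{lower:.2f}, {upper:.2f}]")
```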


Statistical Significance: An Overview

The logic underlying the use of statistical tests rests on statistical theory. There are some general concepts, however, that should help you understand what you are doing when you conduct a statistical test. First, the goal of the test is to allow you to make a decision about whether your obtained results are reliable; you want to be confident that you would obtain similar results if you conducted the study over and over again. Second, the significance level (alpha level) you choose indicates how confident you wish to be when making the decision. A .05 significance level says that you are 95% sure of the reliability of your findings; however, there is a 5% chance that you could be wrong. There are few certainties in life! Third, you are most likely to obtain significant results when you have a large sample size because larger sample sizes provide better estimates of true population values. Finally, you are most likely to obtain significant results when the effect size is large, i.e., when differences between groups are large and variability of scores within groups is small.

In the remainder of the chapter, we will expand on these issues. We will examine the implications of making a decision about whether results are significant, the way to determine a significance level, and the way to interpret nonsignificant results. We will then provide some guidelines for selecting the appropriate statistical test in various research designs.

TYPE I AND TYPE II ERRORS

The decision to reject the null hypothesis is based on probabilities rather than on certainties. That is, the decision is made without direct knowledge of the true state of affairs in the population. Thus, the decision might not be correct; errors may result from the use of inferential statistics.

A decision matrix is shown in Figure 13.3. Notice that there are two possible decisions: (1) Reject the null hypothesis or (2) accept the null hypothesis. There are also two possible truths about the population: (1) The null hypothesis is true or (2) the null hypothesis is false. In sum, as the decision matrix shows, there are two kinds of correct decisions and two kinds of errors.

Correct Decisions

One correct decision occurs when we reject the null hypothesis and the research hypothesis is true in the population. Here, our decision is that the population means are not equal, and in fact, this is true in the population. This is the decision you hope to make when you begin your study.


FIGURE 13.3

Decision matrix for Type I and Type II errors

The other correct decision is to accept the null hypothesis, and the null hypothesis is true in the population: The population means are in fact equal.

Type I Errors

A Type I error is made when we reject the null hypothesis but the null hypothesis is actually true. Our decision is that the population means are not equal when they actually are equal. Type I errors occur when, simply by chance, we obtain a large value of t or F. For example, even though a t value of 4.02 is highly improbable if the population means are indeed equal (less than 5 chances out of 100), this can happen. When we do obtain such a large t value by chance, we incorrectly decide that the independent variable had an effect.

The probability of making a Type I error is determined by the choice of significance or alpha level (alpha may be shown as the Greek letter alpha—α). When the significance level for deciding whether to reject the null hypothesis is .05, the probability of a Type I error (alpha) is .05. If the null hypothesis is rejected, there are 5 chances out of 100 that the decision is wrong. The probability of making a Type I error can be changed by either decreasing or increasing the significance level. If we use a lower alpha level of .01, for example, there is less chance of making a Type I error. With a .01 significance level, the null hypothesis is rejected only when the probability of obtaining the results is .01 or less if the null hypothesis is correct.

Type II Errors

A Type II error occurs when the null hypothesis is accepted although in the population the research hypothesis is true. The population means are not equal, but the results of the experiment do not lead to a decision to reject the null hypothesis.

Research should be designed so that the probability of a Type II error (this probability is called beta, or β) is relatively low. The probability of making a Type II error is related to three factors. The first is the significance (alpha) level. If we set a very low significance level to decrease the chances of a Type I error, we increase the chances of a Type II error. In other words, if we make it very difficult to reject the null hypothesis, the probability of incorrectly accepting the null hypothesis increases. The second factor is sample size. True differences are more likely to be detected if the sample size is large. The third factor is effect size. If the effect size is large, a Type II error is unlikely. However, a small effect size may not be significant with a small sample.

The Everyday Context of Type I and Type II Errors

The decision matrix used in statistical analyses can be applied to the kinds of decisions people frequently must make in everyday life. For example, consider the decision made by a juror in a criminal trial. As is the case with statistics, a decision must be made on the basis of evidence: Is the defendant innocent or guilty? However, the decision rests with individual jurors and does not necessarily reflect the true state of affairs: that the person really is innocent or guilty.

The juror’s decision matrix is illustrated in Figure 13.4. To continue the parallel to the statistical decision, assume that the null hypothesis is that the defendant is innocent (i.e., the dictum that a person is innocent until proven guilty). Thus, rejection of the null hypothesis means deciding that the defendant is guilty, and acceptance of the null hypothesis means deciding that the defendant is innocent. The decision matrix also shows that the null hypothesis may actually be true or false. There are two kinds of correct decisions and two kinds of errors like those described in statistical decisions. A Type I error is finding the defendant guilty when the person really is innocent; a Type II error is finding the defendant innocent when the person actually is guilty. In our society, Type I errors by jurors generally are considered to be more serious than Type II errors. Thus, before finding someone guilty, the juror is asked to make sure that the person is guilty “beyond a reasonable doubt” or to consider that “it is better to have a hundred guilty persons go free than to find one innocent person guilty.”

The decision that a doctor makes to operate or not operate on a patient provides another illustration of how a decision matrix works. The matrix is shown in Figure 13.5. Here, the null hypothesis is that no operation is necessary. The decision is whether to reject the null hypothesis and perform the operation or to accept the null hypothesis and not perform surgery. In reality, the surgeon is faced with two possibilities: Either the surgery is unnecessary (the null hypothesis is true) or the patient will die without the operation (a dramatic case of the null hypothesis being false). Which error is more serious in this case? Most doctors would believe that not operating on a patient who really needs the operation—making a Type II error—is more serious than making the Type I error of performing surgery on someone who does not really need it.

FIGURE 13.4

Decision matrix for a juror


FIGURE 13.5

Decision matrix for a doctor

One final illustration of the use of a decision matrix involves the important decision to marry someone. If the null hypothesis is that the person is “wrong” for you, and the true state is that the person is either “wrong” or “right,” you must decide whether to go ahead and marry the person. You might try to construct a decision matrix for this particular problem. Which error is more costly: a Type I error or a Type II error?

CHOOSING A SIGNIFICANCE LEVEL

Researchers traditionally have used either a .05 or a .01 significance level in the decision to reject the null hypothesis. If there is less than a .05 or a .01 probability that the results occurred because of random error, the results are said to be significant. However, there is nothing magical about a .05 or a .01 significance level. The significance level chosen merely specifies the probability of a Type I error if the null hypothesis is rejected. The significance level chosen by the researcher usually is dependent on the consequences of making a Type I versus a Type II error. As previously noted, for a juror, a Type I error is more serious than a Type II error; for a doctor, however, a Type II error may be more serious.

Researchers generally believe that the consequences of making a Type I error are more serious than those associated with a Type II error. If the null hypothesis is rejected, the researcher might publish the results in a journal, and the results might be reported by others in textbooks or in newspaper or magazine articles. Researchers do not want to mislead people or risk damaging their reputations by publishing results that are not reliable and so cannot be replicated. Thus, they want to guard against the possibility of making a Type I error by using a very low significance level (.05 or .01). In contrast to the consequences of publishing false results, the consequences of a Type II error are not seen as being very serious.

Thus, researchers want to be very careful to avoid Type I errors when their results may be published. However, in certain circumstances, a Type I error is not serious. For example, if you were engaged in pilot or exploratory research, your results would be used primarily to decide whether your research ideas were worth pursuing. In this situation, it would be a mistake to overlook potentially important data by using a very conservative significance level. In exploratory research, a significance level of .25 may be more appropriate for deciding whether to do more research. Remember that the significance level chosen and the consequences of a Type I or a Type II error are determined by what the results will be used for.

INTERPRETING NONSIGNIFICANT RESULTS

Although “accepting the null hypothesis” is convenient terminology, it is important to recognize that researchers are not generally interested in accepting the null hypothesis. Research is designed to show that a relationship between variables does exist, not to demonstrate that variables are unrelated.

More important, a decision to accept the null hypothesis when a single study does not show significant results is problematic, because negative or nonsignificant results are difficult to interpret. For this reason, researchers often say that they simply “fail to reject” or “do not reject” the null hypothesis. The results of a single study might be nonsignificant even when a relationship between the variables in the population does in fact exist. This is a Type II error. Sometimes, the reasons for a Type II error lie in the procedures used in the experiment. For example, a researcher might obtain nonsignificant results by providing incomprehensible instructions to the participants, by having a very weak manipulation of the independent variable, or by using a dependent measure that is unreliable and insensitive. Rather than concluding that the variables are not related, researchers may decide that a more carefully conducted study would find that the variables are related.

We should also consider the statistical reasons for a Type II error. Recall that the probability of a Type II error is influenced by the significance (alpha) level, sample size, and effect size. Thus, nonsignificant results are more likely to be found if the researcher is very cautious in choosing the alpha level. If the researcher uses a significance level of .001 rather than .05, it is more difficult to reject the null hypothesis (there is not much chance of a Type I error). However, that also means that there is a greater chance of accepting an incorrect null hypothesis (i.e., a Type II error is more likely). In other words, a meaningful result is more likely to be overlooked when the significance level is very low.

A Type II error may also result from a sample size that is too small to detect a real relationship between variables. A general principle is that the larger the sample size is, the greater the likelihood of obtaining a significant result. This is because large sample sizes give more accurate estimates of the actual population than do small sample sizes. In any given study, the sample size may be too small to permit detection of a significant result.

A third reason for a nonsignificant finding is that the effect size is small. Very small effects are difficult to detect without a large sample size. In general, the sample size should be large enough to find a real effect, even if it is a small one.

The fact that it is possible for a very small effect to be statistically significant raises another issue. A very large sample size might enable the researcher to find a significant difference between means; however, this difference, even though statistically significant, might have very little practical significance. For example, if an expensive new psychiatric treatment technique significantly reduces the average hospital stay from 60 to 59 days, it might not be practical to use the technique despite the evidence for its effectiveness. The additional day of hospitalization costs less than the treatment. There are other circumstances, however, in which a treatment with a very small effect size has considerable practical significance. Usually this occurs when a very large population is affected by a fairly inexpensive treatment. Suppose a simple flextime policy for employees reduces employee turnover by 1% per year. This does not sound like a large effect. However, if a company normally has a turnover of 2,000 employees each year and the cost of training a new employee is $10,000, the company saves $200,000 per year with the new procedure. This amount may have practical significance for the company.

The key point here is that you should not accept the null hypothesis just because the results are nonsignificant. Nonsignificant results do not necessarily indicate that the null hypothesis is correct. However, there must be circumstances in which we can accept the null hypothesis and conclude that two variables are, in fact, not related. Frick (1995) describes several criteria that can be used in a decision to accept the null hypothesis. For example, we should look for well-designed studies with sensitive dependent measures and evidence from a manipulation check that the independent variable manipulation had its intended effect. In addition, the research should have a reasonably large sample to rule out the possibility that the sample was too small. Further, evidence that the variables are not related should come from multiple studies. Under such circumstances, you are justified in concluding that there is in fact no relationship.

CHOOSING A SAMPLE SIZE: POWER ANALYSIS

We noted in Chapter 9 that researchers often select a sample size based on what is typical in a particular area of research. An alternative approach is to select a sample size on the basis of a desired probability of correctly rejecting the null hypothesis. This probability is called the power of the statistical test. It is directly related to the probability of a Type II error:

Power = 1 - p(Type II error)

TABLE 13.2 Total sample size needed to detect a significant difference for a t test

We previously indicated that the probability of a Type II error is related to significance level (alpha), sample size, and effect size. Statisticians such as Cohen (1988) have developed procedures for determining sample size based on these factors. Table 13.2 shows the total sample size needed for an experiment with two groups and a significance level of .05. In the table, effect sizes range from .10 to .50, and the desired power is shown at .80 and .90. Smaller effect sizes require larger samples to be significant at the .05 level. Higher desired power demands a greater sample size; this is because you want a more certain “guarantee” that your results will be statistically significant. Researchers usually use a power between .70 and .90 when using this method to determine sample size. Several computer programs have been developed to allow researchers to easily make the calculations necessary to determine sample size based on effect size estimates, significance level, and desired power.
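
As a sketch of what such a program does, here is one assumed choice, the Python statsmodels library (G*Power and similar tools work alike); note that this function expects the effect size as Cohen's d rather than as a correlation:

```python
# Solve for the per-group sample size in a two-group t test, given
# an effect size (Cohen's d), significance level, and desired power.
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()
n_per_group = analysis.solve_power(effect_size=0.5,  # Cohen's d
                                   alpha=0.05,
                                   power=0.80)
print(f"Participants needed per group: {n_per_group:.0f}")  # about 64
```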

You may never need to perform a power analysis. However, you should recognize the importance of this concept. If a researcher is studying a relationship with an effect size correlation of .20, a fairly large sample size is needed for statistical significance at the .05 level. An inappropriately low sample size in this situation is likely to produce a nonsignificant finding.

THE IMPORTANCE OF REPLICATIONS

Throughout this discussion of statistical analysis, the focus has been on the results of a single research investigation. What were the means and standard deviations? Was the mean difference statistically significant? If the results are significant, you conclude that they would likely be obtained over and over again if the study were repeated. We now have a framework for understanding the results of the study. Be aware, however, that scientists do not attach too much importance to the results of a single study. A rich understanding of any phenomenon comes from the results of numerous studies investigating the same variables. Instead of inferring population values on the basis of a single investigation, we can look at the results of several studies that replicate previous investigations (see Cohen, 1994). The importance of replications is a central concept in Chapter 14.

SIGNIFICANCE OF A PEARSON r CORRELATION COEFFICIENT

Recall from Chapter 12 that the Pearson r correlation coefficient is used to describe the strength of the relationship between two variables when both variables have interval or ratio scale properties. However, there remains the issue of whether the correlation is statistically significant. The null hypothesis in this case is that the true population correlation is 0.00—the two variables are not related. What if you obtain a correlation of .27 (plus or minus)? A statistical significance test will allow you to decide whether to reject the null hypothesis and conclude that the true population correlation is, in fact, greater than 0.00. The technical way to do this is to perform a t test that compares the obtained coefficient with the null hypothesis correlation of 0.00. The procedures for calculating a Pearson r and determining significance are provided in Appendix C.
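
In practice, statistical software reports this test directly. A minimal sketch using scipy (an assumed choice; the data values below are invented for illustration):

```python
# Test whether an obtained Pearson r differs significantly from 0.00.
from scipy import stats

x = [2, 4, 4, 5, 7, 8, 10, 12]
y = [1, 3, 5, 4, 7, 9, 9, 13]
r, p = stats.pearsonr(x, y)
print(f"r = {r:.2f}, p = {p:.4f}")  # reject the null (rho = 0) if p < .05
```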

COMPUTER ANALYSIS OF DATA

Although you can calculate statistics with a calculator using the formulas provided in this chapter, Chapter 12, and Appendix C, most data analysis is carried out via computer programs. Sophisticated statistical analysis software packages make it easy to calculate statistics for any data set. Descriptive and inferential statistics are obtained quickly, the calculations are accurate, and information on statistical significance is provided in the output. Computers also facilitate graphic displays of data.

Some of the major statistical programs include SPSS, SAS, SYSTAT, and freely available R and MYSTAT. Other programs may be used on your campus. Many people do most of their statistical analyses using a spreadsheet program such as Microsoft Excel. You will need to learn the specific details of the computer system used at your college or university. No one program is better than another; they all differ in the appearance of the output and the specific procedures needed to input data and have the program perform the test. However, the general procedures for doing analyses are quite similar in all of the statistics programs.

The first step in doing the analysis is to input the data. Suppose you want to input the data in Table 12.1, the modeling and aggression experiment. Data are entered into columns. It is easiest to think of data for computer analysis as a matrix with rows and columns. Data for each research participant are the rows of the matrix. The columns contain each participant’s scores on one or more measures, and an additional column may be needed to indicate a code to identify which condition the individual was in (e.g., Group 1 or Group 2). A data matrix in SPSS for Windows is shown in Figure 13.6. The numbers in the “group” column indicate whether the individual is in Group 1 (model) or Group 2 (no model), and the numbers in the “aggscore” column are the aggression scores from Table 12.1.

Other programs may require somewhat different methods of data input. For example, in Excel, it is usually easiest to set up a separate column for each group, as shown in Figure 13.6.

The next step is to provide instructions for the statistical analysis. Again, each program uses somewhat different steps to perform the analysis; most require you to choose from various menu options. When the analysis is completed, you are provided with the output that shows the results of the statistical procedure you performed. You will need to learn how to interpret the output. Figure 13.6 shows the output for a t test using Excel.
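
The same workflow can be sketched in Python with pandas and scipy (assumed choices; the scores below are invented stand-ins, since Table 12.1 is not reproduced here):

```python
# Data matrix: one row per participant, with a "group" code column
# and an "aggscore" dependent-variable column.
import pandas as pd
from scipy import stats

data = pd.DataFrame({
    "group":    [1, 1, 1, 1, 1, 2, 2, 2, 2, 2],  # 1 = model, 2 = no model
    "aggscore": [6, 5, 7, 6, 8, 3, 4, 2, 4, 3],
})

model = data.loc[data["group"] == 1, "aggscore"]
no_model = data.loc[data["group"] == 2, "aggscore"]

t, p = stats.ttest_ind(model, no_model)  # independent-groups t test
print(f"t = {t:.2f}, p = {p:.4f}")
```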

When you are first learning to use a statistical analysis program, it is a good idea to practice with some data from a statistics text to make sure that you get the same results. This will ensure that you know how to properly input the data and request the statistical analysis.

SELECTING THE APPROPRIATE STATISTICAL TEST

We have covered several types of designs, and the variables that we study may have nominal, ordinal, interval, or ratio scale properties. How do you choose the appropriate statistical test for analyzing your data? Fortunately, there are a number of online guides and tutorials such as http://www.socialresearchmethods.net/selstat/ssstart.htm and http://wise.cgu.edu/choosemod/opening.htm; SPSS even has its own Statistics Coach to help with the decision.

We cannot cover every possible analysis. Our focus will be on variables that have either (1) nominal scale properties—two or more discrete values such as male and female or (2) interval/ratio scale properties with many values such as reaction time or rating scales (also called continuous variables). We will not address variables with ordinal scale values.

Research Studying Two Variables (Bivariate Research)

In these cases, the researcher is studying whether two variables are related. In general, we would refer to the first variable as the independent variable (IV) and the second variable as the dependent variable (DV). However, because it does not matter whether we are doing experimental or nonexperimental research, we could just as easily refer to the two variables as Variable X and Variable Y or Variable A and Variable B.
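
As a rule of thumb, two nominal variables call for a chi-square test, a nominal variable paired with a continuous one calls for a t test or analysis of variance, and two continuous variables call for a Pearson r. A minimal sketch of the chi-square case (the counts are invented; scipy is an assumed choice):

```python
# Chi-square test of independence for two nominal variables.
import numpy as np
from scipy.stats import chi2_contingency

# Rows: Group 1 vs. Group 2; columns: "yes" vs. "no" responses.
counts = np.array([[18, 7],
                   [11, 14]])
chi2, p, dof, expected = chi2_contingency(counts)
print(f"chi-square({dof}) = {chi2:.2f}, p = {p:.4f}")
```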


FIGURE 13.6

Sample computer input and output using data from Table 12.1 (modeling experiment)


Research with Multiple Independent Variables

In the following situations, we have more complex research designs with two or more independent variables that are studied with a single outcome or dependent variable.

These research design situations have been described in previous chapters. There are of course many other types of designs. Designs with multiple variables (multivariate statistics) are described in detail by Tabachnick and Fidell (2007). Procedures for research using ordinal level measurement may be found in a book by Siegel and Castellan (1988).

You have now considered how to generate research ideas, conduct research to test your ideas, and evaluate the statistical significance of your results. In the final chapter, we will examine issues of generalizing research findings beyond the specific circumstances in which the research was conducted.

Study Terms

Alpha level

Analysis of variance (F test)

Confidence interval

Degrees of freedom

Error variance

Inferential statistics

Null hypothesis

Power

Probability

Research hypothesis

Sampling distribution

Statistical significance

Systematic variance

t test

Type I error

Type II error

Review Questions

  1. Distinguish between the null hypothesis and the research hypothesis. When does the researcher decide to reject the null hypothesis?
  2. What is meant by statistical significance?
  3. What factors are most important in determining whether obtained results will be significant?
  4. Distinguish between a Type I and a Type II error. Why is your significance level the probability of making a Type I error?
  5. What factors are involved in choosing a significance level?
  6. What influences the probability of a Type II error?
  7. What is the difference between statistical significance and practical significance?
  8. Discuss the reasons that a researcher might obtain nonsignificant results.

Activities

  1. In an experiment, one group of research participants is given 10 pages of material to proofread for errors. Another group proofreads the same material on a computer screen. The dependent variable is the number of errors detected in a 5-minute period. A .05 significance (alpha) level is used to evaluate the results.
     a. What statistical test would you use?
     b. What is the null hypothesis? The research hypothesis?
     c. What is the Type I error? The Type II error?
     d. What is the probability of making a Type I error?
  2. In Professor Dre’s study, the average number of errors detected in the print and computer conditions was 38.4 and 13.2, respectively; this difference was not statistically significant. When Professor Seuss conducted the same experiment, the means of the two groups were 21.1 and 14.7, but the difference was statistically significant. Explain how this could happen.
  3. Suppose that you work for the child social services agency in your county. Your job is to investigate instances of possible child neglect or abuse. After collecting your evidence, which may come from a variety of sources, you must decide whether to leave the child in the home or place the child in protective custody. Specify the null and research hypotheses in this situation. What constitutes a Type I and a Type II error? Is a Type I or Type II error the more serious error in this situation? Why?
  4. A researcher investigated attitudes toward individuals in wheelchairs. The question was: Would people react differently to a person they perceived as being temporarily confined to the wheelchair than to a person who had a permanent disability? Participants were randomly assigned to two groups. Individuals in one group each worked on various tasks with a confederate in a wheelchair; members of the other group worked with the same confederate in a wheelchair, but this time the confederate wore a leg cast. After the session was over, participants filled out a questionnaire regarding their reactions to the study. One question asked, “Would you be willing to work with your test partner in the future on a class assignment?” with “yes” and “no” as the only response alternatives. What would be the appropriate significance test for this experiment? Can you offer a critique of the dependent variable? If you changed the dependent variable, would it affect your choice of significance tests? If so, how?
  5. Are you interested in learning more about using statistical software? If so, start by finding out whether your campus has one or more statistical software packages available in your computer labs. Now conduct a Web search for information on one of these; you may find it most useful to search for tutorials (including videos). We also encourage you to explore R, an increasingly popular statistical analysis and graphing program. It is free for download to your computer (http://www.r-project.org). Again, you may search for tutorials on using R; a good place to start is http://personality-project.org/r/r.guide.html.

 

 

FETAL MACROSOMIA

FETAL MACROSOMIA

Type of document      PowerPoint Presentation

Number of pages/words          3 Pages Double Spaced (approx 275 words per page)

Subject area    Nursing

Academic Level          Undergraduate

Style    APA

Number of sources/references          1

Order description:

Presentation: Students will develop a presentation pertaining to the Newborn, Woman’s Health or the Childbearing Family. The presentation should be about a topic of interest in the care of well women or the healthy newborn. Alternatively, it may concern a topic related to complications of childbearing or high-risk newborn care. Presentations are limited to 10-15 minutes, with an additional 5 minutes for audience response/questions. You may use PowerPoint or poster board, create your own audio/visual aids, or choose an alternate format of presentation such as a role play/skit or OB Jeopardy. A rubric is provided. Presentation topics must be approved by the instructor AND posted to the discussion thread on eCampus to ensure that only one student presents on a given topic. Students are encouraged to investigate topics that they want to learn more about than can be covered in class. 10 points

 

Criteria            Points Possible            Points Earned

  1. Topic Relevance
  • The topic presented is applicable to the course
  • The topic builds on information presented in the course but goes beyond previously presented information
  • Presenter is able to answer [most] questions after presentation
  2. Topic Knowledge
  • Presenter clearly articulates new information gained from evidence-based, current sources
  • Sources are cited verbally and/or credited in visual aids
  • Presentation provides the basis for future exploration and interest relevant to Nursing
  3. Presentation Quality
  • Speech is loud enough to be heard and has been practiced, with minimal “ums”; eye contact is made with the audience
  • Presenter uses high-quality visual aids as needed [PPT or pictures] or other media to enhance the presentation
  4. Format
  • Presentation falls within the established timeframe [10-15 minutes]
  • Presenter maintains the interest of the audience throughout
  • Presenter invites questions/discussion and maintains control of the discussion

Total:   60

 

______/60 = _____% of 10 points

Humanities: Creative Minds

Humanities: Creative Minds

Discipline: Art

Type of service: Coursework

Spacing: Double spacing

Paper format: MLA

Number of pages: 10 pages

Number of sources: 3 sources

Paper details:

Entire coursework

DESCRIPTION:

This course is an introduction to the study of creativity in human life: its sources, development, social settings, and accomplishments in human culture.  We analyze creativity as a central source of meaning and purpose in our lives as well as the development of a person’s unique combination of human intelligences.  Lives of creative people from all over the world are examined and contextualized. This course builds commitment to civic and moral responsibility for diverse, equitable, healthy and sustainable communities.  Students engage themselves as members of larger social fabrics and develop the abilities and motivation to take informed action for change.

STUDENT LEARNING OUTCOME STATEMENTS:

Student Learning Outcome: Students synthesize their critical thinking, imaginative, cooperative and empathetic abilities as whole persons in order to contextualize knowledge, interpret and communicate meaning, and cultivate their capacity for personal, as well as social change.

Student Learning Outcome: Students cultivate and demonstrate an awareness of the power of creativity and the potential of the creative process through direct involvement.

FORMAT:

This course is entirely online. There are no on-campus meetings. Online courses are equivalent to on-campus courses in terms of units and the number of hours required by students.  Some students are under the mistaken impression that an online class requires less work because students do not attend class.  However, instead of attending lectures, students will complete assigned readings, written assignments, tests, and participate in online discussion forums.

This course is not self-paced. All weekly assignments and tests have due dates, and late penalties apply. An important factor for success in online classes is the student’s ability to organize, manage and realistically schedule their time.

TEXTBOOK:

Gloria Fiero, Landmarks in the Humanities. McGraw Hill.

Students may use any edition.  Used 1st edition texts are reasonably priced under $10.00 and can be obtained through online booksellers, such as Amazon or BookFinder.com.  Order early to avoid shipping delays.  The textbook is available in the Campus Bookstore.  Also, the textbook is available on reserve in the campus library.

ADDING THE COURSE: 

After the registration period, students may add the course ONLY if space is available.  The instructor will distribute add codes starting the first day of classes.  Students on the waitlist must contact the instructor by email on the first day of the quarter to obtain an add code.

DROPPING THE COURSE:

Any enrolled students who do not complete the Introductory Forum by the deadline will be dropped to make space for students wishing to add.

After the first week, it is the student’s responsibility to drop, not the instructor’s, and to adhere to all drop deadlines.

COURSE MATERIALS:

This course is divided into weekly course modules.  Each module contains the assignments, due dates and learning resources for that week.  In addition to the textbook reading assignments, students will utilize a variety of learning resources, including Chapter Summaries and Outlines, PowerPoint lectures, videos and practice quizzes.

ASSIGNMENTS AND GRADES:

Students will complete weekly graded assignments. Grades will be posted in Canvas within 5 days of the assignment deadline.  Graded assignments often include instructor comments. You can read these comments by clicking on ‘Grades’ and then clicking on the assignment name and the message icon.

DISCUSSION FORUMS:

Weekly discussions are an integral part of this course. They are your opportunity to get to know your classmates, to explore the course material, and for me to get to know you. Discussion forums are open and available to you from the first day of the quarter. Each weekly discussion forum has a due date on Thursday.  I will grade your participation and provide feedback on the following Monday.

EXAMS:

There are three online exams given through Canvas.  Each exam is open to you for a specific 24-hour time frame. The dates and times of each exam are listed in the weekly Course Modules and under the Assignment tab in Canvas.  There are no make-up exams or alternate exam times.

ANNOUNCEMENTS:

I will regularly send Announcements to the class. You will receive individual announcement messages in your Canvas Inbox and announcements will be available under the announcements link in our course navigation menu. In order to make sure you receive my communication in a timely fashion, I recommend that you check your email regularly.

ATTENDANCE POLICY:

Students are expected to login and participate weekly. The instructor reserves the right to drop inactive students who do not complete assignments and exams according to the course deadlines.  Any student who misses two consecutive deadlines is considered inactive and may be dropped.

COMPUTER REQUIREMENTS:

  • Students are required to have access to a working computer with high speed internet or a smart phone. Computer issues or internet problems are not an acceptable excuse for missing deadlines or exams.  There are no make-ups or deadline extensions due to computer/internet issues.
  • Students are required to have basic computer literacy.  It is the student’s responsibility to obtain additional training if they are unable to perform computer functions required to take online tests, submit online assignments and participate in online discussion forums. Inability to perform computer functions required by an online course is not an acceptable excuse for missing deadlines.
  • Canvas for Mobile Devices:  Visit your app store to download the Canvas app.

CANVAS TECHNICAL SUPPORT:

De Anza Online Education Center

Monday-Thursday: 8:30am-5:00pm

Friday: 8:30am-4:00pm

408.864.8969

After Hours Only
You can contact Canvas Support when our Online Education Center is closed, including weekends: 1-844-592-2207

COURSE POLICIES:

  • No make-up exams or alternate exam times
  • There is a 50% deduction for late work
  • No extra credit assignments are given in this course
  • Computer or internet issues are not an acceptable excuse for missing deadlines
  • No emailed work will be accepted unless requested by the instructor

COURSE REQUIREMENTS:

  • Complete the Introductory Forum in Canvas
  • Complete weekly readings and utilize learning resources
  • Pass three online exams based on textbook assignments
  • Complete one paper assignment
  • Participate in online discussion forums

COURSE POINTS:

Exam #1 30 points

Exam #2 30 points

Exam #3 30 points

1 Paper Assignment, 25 points

6 Discussion Forums: 10 points each: 60 points

Total Course Points: 175

FINAL COURSE GRADES:

Students can check their grades for each assignment and chart their progress by clicking on the Grades tab in Canvas. Grades are based solely on points.  Final course grades will be determined using the following percentage breakdown.  Grades are not rounded up.

93-100% =A;     90-92% = A-

87-89% = B+;    83-86% = B;     80-82% = B-

77-79% = C+;    70-76% = C

67-69% = D+;    63-66% = D;    60-62% = D-

59% and below = F

STUDENTS WITH DISABILITIES:

If you have a disability-related need for reasonable academic accommodations or services in this course, provide (name of Instructor) with a Test Accommodation Verification Form (also known as a TAV form) from Disability Support Services (DSS) or the Educational Diagnostic Center (EDC). Students are expected to give five days’ notice of the need for accommodations.  Students with disabilities can obtain a TAV form from their DSS counselor (864-8753 DSS main number) or EDC advisor (864-8839 EDC main number).

ACADEMIC INTEGRITY POLICY:

Cheating or plagiarism will not be tolerated. Students who submit the work of others as their own or copy the words of others without giving credit will receive a zero on the assignment.

STUDENT SUCCESS CENTER:

Need help?  Visit De Anza’s Student Success Center for peer tutoring and workshops.

Visit http://www.deanza.edu/studentsuccess for hours and information about workshops, group, drop-in and online tutoring, and to apply for (limited) weekly individual tutoring. Or stop by ATC 302 for more information.

Online Tutoring with Smarthinking is available for free for De Anza students inside MyPortal.

 

 

Addiction

 

Addiction

Discipline: English

Type of service: Other

Spacing: Double spacing

Paper format: APA

Number of pages: 2 pages

Number of sources: 1 source

Paper details:

This will be a brochure, 2 pages, in APA format. The topic will be Addiction, with a focus on Drug Addiction, in the Natural Sciences. I have uploaded a file with all pertinent information. Let me know if you have any questions.

Natural Sciences

Article Translation

Objectives:

  • Apply research methods to locate a peer reviewed source
  • Critically review, synthesize, and organize key points from the article
  • Translate the article into a brochure for a general, academic reading audience

Introduction:

An article review provides an opportunity to investigate advancements within an area of a discipline. It should challenge both your research and analytical skills as you report information and evaluate its significance.

Assignment:

This assignment will familiarize you with locating, evaluating, and discussing a scientific article related to your chosen course theme.

First, you will need to locate a peer reviewed journal article in the natural sciences. Once you review and annotate the source (for your own understanding), you will take the knowledge you have gained and share it in a way that allows a non-scientific audience to grasp the concepts. Instead of producing an essay, you will create a brochure.

You will need to document your source using APA style. Since the natural sciences use quotes sparingly, avoid using them in your brochure. Instead, utilize summary and paraphrase. If you use more than one source, you will need to include parenthetical citations to denote which source provided information. Though quotes should be avoided if possible, use of quotes requires quotation marks and parenthetical citation.

You are also encouraged to use charts/graphs/illustrations as needed, but be mindful not to let these dominate the piece. Also make sure visuals are readable.

This assignment allows you a bit of creative freedom, so take advantage of it. Just make sure your brochure is informative and documented. It should relay the main points of the article and offer real insight. You may use more than one article, but remember to document all sources used.

Also remember, for the natural sciences one should be as objective as possible, and use more formal language. Though you are not required to use discipline-specific language, you are still to maintain formality. This includes using third-person only (so no first person “I” or second person “you”).

The brochure should be 2 pages, front and back (see module example), and should utilize subheadings (e.g. Introduction).

Preparation

First, read and annotate the source carefully. Take notes, highlight specific passages, and look up key terms that may seem confusing. Next, outline the main ideas from the article in the order that you will need to present them in your brochure. Note supporting facts and details that are necessary to understanding the main ideas. Consider what information is relevant to a non-scientific audience and what information can be discarded.