DDBA 8438: The T Statistic Video Podcast Transcript

JENNIFER ANN MORROW: Welcome to "The T Statistic." My name is Dr. Jennifer Ann Morrow. In today's demonstration, I'll review the T statistic with you. We'll talk about the degrees of freedom, and I'll give you the T statistic formula. We'll go over the shape of the T distribution. I'll show you how to use a T distribution table. We'll go over the assumptions of the T statistic. I'll show you how to calculate the effect size, and we'll go over a couple of examples. Okay, let's get started.

The T Statistic

JENNIFER ANN MORROW: The T statistic is an analysis that is used to test hypotheses about an unknown population mean, known as mu, when the value of the population variance, known as sigma squared, is unknown. The T statistic uses the sample variance, S squared, as an estimate of the population variance, sigma squared. This analysis is also known as the one-sample T test.

Degrees of Freedom

JENNIFER ANN MORROW: Degrees of freedom describe the number of scores in a sample that are free to vary. For a T statistic, the degrees of freedom are N minus 1, or the sample size minus 1. The greater the value of degrees of freedom for a sample, the better S squared represents sigma squared. Therefore, the larger the sample, the better your sample variance represents the population variance. We want these to be as similar as possible.

T Statistic Formula

JENNIFER ANN MORROW: Here is the formula for a one-sample T test, where X bar is the mean of the sample, mu is the mean of the population, and S sub X bar is the estimated standard error. So your one-sample T value is equal to the mean of your sample minus the mean of your population, divided by the estimated standard error.
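The video gives this formula verbally, so the notation below is a reconstruction in standard form; expanding the estimated standard error as the sample standard deviation over the square root of the sample size is the usual definition.

```latex
% One-sample t statistic as described above
% \bar{X}: sample mean, \mu: population mean, s: sample standard deviation,
% s_{\bar{X}}: estimated standard error, N: sample size
\[
t = \frac{\bar{X} - \mu}{s_{\bar{X}}},
\qquad
s_{\bar{X}} = \frac{s}{\sqrt{N}},
\qquad
df = N - 1
\]
```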

Shape of a T Distribution

JENNIFER ANN MORROW: As your sample size gets larger, the shape of the T distribution becomes more normal. If you have a sample size of at least 30 participants, your T distribution will be approximately normal.

T Distribution Table

JENNIFER ANN MORROW: When you are analyzing your data using a one-sample T distribution, you need to make a decision about the null hypothesis by comparing your result to the critical region that you established in step two of your hypothesis testing. Once you have calculated your T value, you need to look at a T distribution table, which you can find in the appendix at the back of your statistics textbook. First, you look at the first column and look up the degrees of freedom for your statistic. As you can see here, 1 to 29 is represented in this T distribution table. If your degrees of freedom are 30 or larger, you can use the last row. Second, you look at the alpha level and the type of test, one-tailed versus two-tailed, that you chose. Now you find the critical value that you need to surpass to see if your T test is significant. For example, if you have 15 degrees of freedom and an alpha level of 0.05, two-tailed, your T statistic value must be greater than 2.131 for you to say you have a statistically significant result. If your T value doesn't surpass this, then you would say your T test was nonsignificant. Never say something is "insignificant." Always refer to a nonsignificant result as nonsignificant.

Recap

JENNIFER ANN MORROW: Okay, let's recap. So far, we learned what a T statistic is. We talked about what degrees of freedom are. I showed you the formula for a T statistic. We looked at the shape of the T distribution, and we learned how to use a T distribution table. Now let's go over the T statistic in more detail.
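If you don't have a textbook table handy, the same critical values can be checked in software. This is a minimal sketch using SciPy, which is not part of the demonstration; it simply reproduces the table lookup described above for 15 degrees of freedom at an alpha level of 0.05.

```python
# Checking t-table critical values in software (not part of the original demonstration).
from scipy import stats

alpha = 0.05
df = 15

# Two-tailed test: split alpha between the two tails.
crit_two_tailed = stats.t.ppf(1 - alpha / 2, df)   # about 2.131

# One-tailed test: put all of alpha in one tail.
crit_one_tailed = stats.t.ppf(1 - alpha, df)       # about 1.753

print(f"df = {df}: two-tailed critical t = {crit_two_tailed:.3f}, "
      f"one-tailed critical t = {crit_one_tailed:.3f}")
```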

Assumptions of the T Statistic

JENNIFER ANN MORROW: There are two assumptions of the T test that must be met. First, sample observations must be independent. In other words, there is no relationship between or among any of the observations, or scores, in the sample. Second, the population from which the sample has been obtained must be normally distributed. You must meet these assumptions, or you are not justified in using a T test. If you violate either of these assumptions, you should be using a different analysis.

Effect Size

JENNIFER ANN MORROW: For a T statistic, you can report two measures of effect size: Cohen's D and the percentage of variance explained, also known as R squared. To calculate Cohen's D, just take the mean difference and divide it by the standard deviation. And to calculate the percentage of variance explained, you take your T value, square it, and divide that by your T value squared plus your degrees of freedom. All right, now let's go over how to calculate a T test.
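Before the examples, here are the two effect size measures just described, written out in standard notation (the video gives the formulas verbally, so this rendering is a reconstruction):

```latex
% Effect size measures for the one-sample t test
% \bar{X} - \mu: mean difference, s: sample standard deviation,
% t: obtained t value, df: degrees of freedom
\[
\text{Cohen's } d = \frac{\bar{X} - \mu}{s},
\qquad
r^{2} = \frac{t^{2}}{t^{2} + df}
\]
```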

Examples Using Formula

JENNIFER ANN MORROW: For the first example, I'll show you how to calculate a one-sample T test using the formula. My null hypothesis is that the average number of drinks is equal to 3.5 drinks. My alternative, or research, hypothesis is that the average number of drinks doesn't equal 3.5 drinks. I chose an alpha level of 0.05. My sample size is 50. My sample mean is 3.8 drinks. My population mean is 3.5 drinks. And my sample standard deviation is equal to 1.2. Now let's calculate the one-sample T test. Okay, the formula for a one-sample T test is T equals the mean minus mu, divided by your estimated standard error. In this case, T is equal to the mean, which is 3.8 drinks, minus the population mean, which is 3.5 drinks, divided by the estimated standard error, which is 1.2 divided by the square root of 50, or about 0.17. And so your T value is 1.77. My degrees of freedom for this example are equal to the sample size, which is 50, minus 1, which is equal to 49. My alpha level, again, is the 0.05 that I chose. So when I look in my T distribution table for a two-tailed test, the critical value that I'll need to surpass is 2.00. Now, this critical value is actually for 60 degrees of freedom. However, in the table, there is no value for 49 degrees of freedom, so you go to the closest one, and that one is 60 degrees of freedom, so my critical value for a two-tailed test that I would need to surpass is 2.00. And for a one-tailed test, it would be 1.67.

So if I had chosen a two-tailed test, my result would be written as t(49) = 1.77, ns, two-tailed. It would be nonsignificant. I would say that the sample mean number of drinks, which in this case is 3.8, is not significantly different from the population mean, which in this case is mu = 3.5. However, if I had chosen a one-tailed test, it would be written as t(49) = 1.77, p < 0.05, one-tailed. I would have achieved significance if I had chosen a one-tailed test. So I would have said the sample mean number of drinks (mean = 3.8) is statistically different from the population mean of mu = 3.5. Again, you always choose the type of test, one- or two-tailed, a priori, before you do your research. You don't conduct both and then choose the one that is significant.

We could also calculate the effect size for this analysis. Cohen's D is the mean difference divided by the standard deviation; here the mean difference is 0.3 (that's 3.8 drinks minus 3.5 drinks), divided by the standard deviation, which is 1.2. So Cohen's D is equal to 0.25. My percentage of variance explained, again, is equal to T squared divided by T squared plus the degrees of freedom, so here it is 1.77 squared divided by 1.77 squared plus 49, which is my degrees of freedom. And that is equal to 0.06. So the percentage of variance explained for this analysis is 0.06.
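The numbers in that example can be reproduced with a short script. This is a sketch rather than part of the demonstration; the values (n = 50, sample mean 3.8, population mean 3.5, standard deviation 1.2) come from the example above, and SciPy is used only to look up critical values for the exact degrees of freedom.

```python
# Reproducing the first worked example (not part of the original demonstration).
import math
from scipy import stats

n = 50        # sample size
mean = 3.8    # sample mean number of drinks
mu = 3.5      # population mean under the null hypothesis
sd = 1.2      # sample standard deviation
alpha = 0.05

se = sd / math.sqrt(n)    # estimated standard error, about 0.17
t = (mean - mu) / se      # one-sample t value, about 1.77
df = n - 1                # degrees of freedom, 49

# Critical values for the exact degrees of freedom (the video uses the
# nearest tabled row, 60 df, which gives 2.00 and 1.67).
crit_two_tailed = stats.t.ppf(1 - alpha / 2, df)   # about 2.01
crit_one_tailed = stats.t.ppf(1 - alpha, df)       # about 1.68

cohens_d = (mean - mu) / sd        # 0.25
r_squared = t**2 / (t**2 + df)     # about 0.06

print(f"t({df}) = {t:.2f}")
print(f"two-tailed critical = {crit_two_tailed:.2f}, one-tailed critical = {crit_one_tailed:.2f}")
print(f"Cohen's d = {cohens_d:.2f}, r^2 = {r_squared:.2f}")
```

Note that the critical values for 49 degrees of freedom (about 2.01 two-tailed and 1.68 one-tailed) are close to the 60-degrees-of-freedom values read from the table, and they lead to the same decisions.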

Using SPSS

JENNIFER ANN MORROW: Now let's learn how to do a one-sample T test using SPSS. Once you have SPSS open, you need to choose the data set that you're going to use to conduct your analysis. Click on File. Click on Open. Click on Data. Now find the data set that you are going to use. Once you have found the data set, click on it, and then click on Open. And make sure the data view window appears on your screen. Now let's go over how to do a one-sample T test. Click on Analyze. Click on Compare Means. Click on One-Sample T Test. And now your One-Sample T Test dialog box will appear on your screen. For this example, I want to compare the sample mean for anxiety to the population mean for anxiety. So I look here in the box on the left for my variable, Anxiety. I click on that variable, Anxiety, then click on the right arrow key, and that moves Anxiety to the Test Variable box here on the right. Below is a box for "Test Value." That is the population mean that you want to compare the sample mean to. Say I know the population mean is equal to 2.00. So click on the box and type in your population mean of 2.00. Now just click OK.

As you see in your output file, SPSS will give you two tables. The first one tells you the variable, Anxiety. Your sample size is 149. Your sample mean for anxiety is 2.1577, or round that to 2.16. Your sample standard deviation is 0.65115, or round that to 0.65. And your standard error of the mean is 0.05334, or, again, round that to 0.05. The next box is your one-sample T test table. Here, you see your test value is equal to 2.00, and that's your population mean. Your T statistic value is equal to 2.955. Your degrees of freedom are 148 (again, remember, that's N minus 1), the significance is 0.004, and your mean difference is 0.15765. So what does this mean? To write up this T statistic, it would be t(148) = 2.95, p < 0.01, two-tailed, or you can give the exact P value of p = 0.004. Both would be acceptable. So you know that you have found a significant result. What you would be saying is that the sample mean for Anxiety (mean = 2.16) is significantly greater than your population mean (mu = 2.00).

You could also calculate the effect size for this analysis. Cohen's D, again, is equal to the mean difference divided by your standard deviation, which here, in this case, is the mean difference of 0.1577 divided by your standard deviation of 0.65115. So your Cohen's D is equal to 0.24. You can also calculate the percentage of variance, which, again, is equal to T squared divided by T squared plus your degrees of freedom. So here, in this case, it's 2.95 squared divided by 2.95 squared plus 148, and that is equal to 0.05, so the percentage of variance accounted for is 0.05.

Recap

JENNIFER ANN MORROW: Okay, let's recap. We learned about the assumptions of the T statistic. We learned how to calculate the effect size. And we went over two examples, one using the formula and one using SPSS. We have now come to the end of our demonstration. Don't forget to practice conducting the one-sample T statistic on your own, using both the formula and SPSS.
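For that practice, here is a rough software equivalent of the SPSS example, offered as a sketch rather than the demonstrated procedure. The file name "anxiety.csv" and the column name "anxiety" are hypothetical placeholders for whatever data set you use, and the test value of 2.00 matches the example above.

```python
# A rough equivalent of the SPSS one-sample t test (a sketch, not the demonstrated procedure).
import pandas as pd
from scipy import stats

# Hypothetical file and column names; substitute your own data set.
anxiety = pd.read_csv("anxiety.csv")["anxiety"].dropna().to_numpy()

test_value = 2.00                                      # population mean to compare against
result = stats.ttest_1samp(anxiety, popmean=test_value)

df = anxiety.size - 1
mean_diff = anxiety.mean() - test_value
cohens_d = mean_diff / anxiety.std(ddof=1)             # mean difference / sample SD
r_squared = result.statistic ** 2 / (result.statistic ** 2 + df)

print(f"t({df}) = {result.statistic:.2f}, p = {result.pvalue:.3f}, two-tailed")
print(f"Cohen's d = {cohens_d:.2f}, r^2 = {r_squared:.2f}")
```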
