
Typology: Exams
Academic year: 2024/2025
Available from 07/15/2025
Uploaded by WINGS_TO-FLY

ANOVA CNSL 503 TEST EXAM || QUESTIONS AND 100% CORRECT ANSWERS ALREADY GRADED A+
What is an ANOVA? - ANSWER: ANOVA is an analysis of variance between groups (or levels of a factor) and within groups (or error). Whereas t-tests compare only two sample distributions, ANOVA can compare many. We partition the total variance into how group means differ from the grand mean and how individual observations within groups differ from their group's mean.
What are the three assumptions of ANOVA? - ANSWER: 1. independence of observations, 2. normality, 3. homogeneity of variance
What is the null hypothesis? - ANSWER: All means are equal: μ1 = μ2 = μ3 = μ4
What is the Alternative (one way) hypothesis? - ANSWER: Not all means are equal; there is at least ONE difference between groups (often written loosely as μ1 ≠ μ2 ≠ μ3 ≠ μ4)
What is the F statistic or F ratio? - ANSWER: An F statistic is the value you get when you run an ANOVA or a regression analysis to find out if the means of two or more populations are significantly different. It's similar to a t statistic from a t-test: a t-test tells you whether a single variable is statistically significant, while an F-test tells you whether a group of variables is jointly significant.
Sum of squares - ANSWER: The residual sum of squares is used to help you decide if a statistical model is a good fit for your data. It measures the overall difference between your data and the values predicted by your estimation model (a "residual" is the distance from a data point to the regression line). The sum of the squared deviations, Σ(X - X̄)², is also called the sum of squares, or more simply SS. SS represents the sum of squared differences from the mean and is an extremely important term in statistics; the sum of squares gives rise to variance. In statistical data analysis, the total sum of squares (TSS or SST) is a quantity that appears as part of a standard way of presenting results of such analyses. It is defined as the sum, over all observations, of the squared differences of each observation from the overall mean. We can calculate the sums of squares TOTAL, WITHIN, and BETWEEN.
What is the mean square formula? - ANSWER: Mean square = sum of squares divided by its degrees of freedom. In ANOVA, mean squares are used to determine whether factors (treatments) are significant; the mean square between represents the variation between the sample means.
What is the purpose of a post-hoc test? - ANSWER: Purpose: discover which pairs of scores are significantly different

  • Preserves FAMILY-WISE PROTECTION against Type I error
When do you look at post-hoc tests? - ANSWER: ONLY IF the overall F value is significant
What is a 2 way ANOVA? - ANSWER: 2 independent variables
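The cards above describe the one-way machinery in words; as a minimal sketch in plain Python (all data invented for illustration), the sums of squares, mean squares, and F ratio fall out directly from the definitions:

```python
# Minimal one-way ANOVA by hand (no external libraries).
# The three groups below are made-up illustrative data.
groups = [
    [4.0, 5.0, 6.0],
    [7.0, 8.0, 9.0],
    [1.0, 2.0, 3.0],
]

all_obs = [x for g in groups for x in g]
grand_mean = sum(all_obs) / len(all_obs)
group_means = [sum(g) / len(g) for g in groups]

# SS_between: how group means differ from the grand mean (weighted by n)
ss_between = sum(len(g) * (m - grand_mean) ** 2 for g, m in zip(groups, group_means))
# SS_within: how observations differ from their own group's mean
ss_within = sum((x - m) ** 2 for g, m in zip(groups, group_means) for x in g)
# SS_total: how observations differ from the grand mean
ss_total = sum((x - grand_mean) ** 2 for x in all_obs)

a = len(groups)                    # number of groups
N = len(all_obs)                   # total sample size
ms_between = ss_between / (a - 1)  # mean square = SS / df
ms_within = ss_within / (N - a)
f_ratio = ms_between / ms_within

print(ss_between, ss_within, ss_total, f_ratio)
```

Note that ss_total equals ss_between + ss_within, which is exactly the variance partition the first card describes.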

What do main effect and interaction effect have in common? - ANSWER: Assessing the effect of the IV
You want to test for sex differences in gait velocity among 3 age groups used earlier. What is the interaction asking? - ANSWER: Does the effect of gender (on gait speed) DEPEND on how old you are? Does the effect of age (on gait speed) DEPEND on sex? Or: for gait speed, is there a different trend of increasing age for women versus men?
What does a SIGNIFICANT INTERACTION tell us? - ANSWER: This tells us that any main effects may be MISLEADING or MEANINGLESS
You want to test for sex differences in gait velocity among 3 age groups used earlier. What is the null hypothesis for each main effect? What is the null for the interaction? - ANSWER: One null for each main effect: Age: μ1 = μ2 = μ3; Sex: μ males = μ females. One null for the interaction: μ1 males = μ1 females = μ2 males = μ2 females = μ3 males = μ3 females
t test - ANSWER: "ANOVA's younger sibling."
independent sample t test - ANSWER: When making simple, straightforward comparisons between the means of two independent groups (one IV with two levels), the independent samples t-test is usually the statistic of choice. Example: two independent samples of high school seniors (60 boys; 60 girls) to see if there are gender differences on a vocabulary test.
independent variable (IV) - ANSWER: the factor being manipulated by the experimenter; the thing we think affects other things; can be continuous, ordinal, or categorical, but always categorical in this course; also called factors or effects; each IV has 2 or more levels (the number depends on what was measured or reported); be careful about collapsing levels
dependent variable (DV; Y) - ANSWER: the factor being measured (i.e., the result of interest); the thing we think is affected by other things; always measured; can be continuous, ordinal, or categorical, but only continuous in this course
paired sample t test - ANSWER: usually based on groups of individuals who experience both conditions of the variable of interest. For instance, one study might examine the effects of Drug A versus Drug B on a single sample of 100 diabetics: subjects would receive Drug A one week and Drug B the next; participants receive both drug/stimulus conditions.
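To make the independent-samples t-test card concrete, here is a small plain-Python sketch of the pooled-variance t statistic (the scores are invented for illustration):

```python
# Pooled-variance independent-samples t statistic:
# t = (mean1 - mean2) / SE, where SE uses the pooled sample variance.
from statistics import mean, variance

boys = [12.0, 15.0, 11.0, 14.0]
girls = [16.0, 18.0, 17.0, 15.0]

n1, n2 = len(boys), len(girls)
# statistics.variance is the sample variance (n - 1 in the denominator)
pooled_var = ((n1 - 1) * variance(boys) + (n2 - 1) * variance(girls)) / (n1 + n2 - 2)
se = (pooled_var * (1 / n1 + 1 / n2)) ** 0.5   # standard error of the difference
t = (mean(boys) - mean(girls)) / se
df = n1 + n2 - 2
print(round(t, 3), df)
```

For two groups, the one-way ANOVA F value equals t squared, which is one way to see why the flashcard calls the t-test "ANOVA's younger sibling."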

family-wise error rate - ANSWER: The familywise error rate (FWE or FWER) is the probability of coming to at least one false conclusion in a series of hypothesis tests; in other words, it's the probability of making at least one Type I error. The term "familywise" comes from a family of tests, the technical term for a series of tests on the same data. The FWER is also called alpha inflation or cumulative Type I error.
symbols - ANSWER: N = total sample size; n = subsample size; SS = sum of squares; MS = mean square; df = degrees of freedom; a = number of groups (or levels of a categorical variable/factor), used in calculating some df; Y = the dependent variable; μ (mu) = mean; i = identifying number of an individual within a group (or level); j = identifying number of a group (or level)
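The alpha-inflation idea behind the familywise error rate can be shown numerically. This sketch assumes m independent tests (m chosen purely for illustration) and compares the uncorrected FWER with the Bonferroni-corrected one:

```python
# Alpha inflation: with m independent tests each run at level alpha, the
# familywise error rate is 1 - (1 - alpha)^m. The Bonferroni procedure
# tests each at alpha / m, which caps the FWER at roughly alpha.
alpha = 0.05
m = 10  # number of tests in the family (illustrative)

fwer_uncorrected = 1 - (1 - alpha) ** m
bonferroni_alpha = alpha / m
fwer_corrected = 1 - (1 - bonferroni_alpha) ** m

print(round(fwer_uncorrected, 4), bonferroni_alpha, round(fwer_corrected, 4))
```

With ten tests at alpha = 0.05, the chance of at least one Type I error is about 40% uncorrected, but stays under 5% with Bonferroni.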

continuous variable - ANSWER: a variable that takes on an infinite number of different values presented on a continuum; examples: time, weight, income, age. Only continuous DVs in this class!!
ordinal variable - ANSWER: a qualitative variable that incorporates an order position, or ranking; ordinal scale examples: class rankings, SES, Likert scale
categorical variable - ANSWER: a variable that names categories (whether with words or numerals); examples: hair color, gender. Only categorical IVs in this class!!
variability - ANSWER: How do things differ from the average? Look at variance, standard deviation, range, and interquartile range.
general linear model - ANSWER: The General Linear Model (GLM) is a useful framework for comparing how several variables affect different continuous variables. In its simplest form, the GLM is described as: Data = Model + Error (Rutherford, 2001, p. 3)
test statistic - ANSWER: the degree to which data depart from what is expected under the null hypothesis; based on the sums of squares. In other words: is the variability between groups greater than that expected on the basis of the within-group variability? ANOVA is a special case of the general linear model.

total sum of squares (SST) - ANSWER: In a regression analysis, the goal is to determine how well a data series can be fitted to a function that might help to explain how the data series was generated. In the context of ANOVA, the sum of squared differences from the overall mean is called the total sum of squares (abbreviated SST) because it relates to the total variance of the observations.
Three basic types of quantitative research designs - ANSWER: experimental (random, equal groups); quasi-experimental (may use random assignment; may have self-selection); observational/phenomenological/descriptive
skewness - ANSWER: a statistical measure indicating the symmetry of the distribution around the mean; within ±2: normal
kurtosis - ANSWER: how flat or peaked a normal distribution is; within ±2: normal
Estimated Marginal Means - ANSWER: The estimated marginal means in SPSS GLM tell you the mean response for each factor, adjusted for any other variables in the model. If all factors (aka categorical predictors) were manipulated, these factors should be independent, or at least they will be if you randomly assigned subjects to conditions well.
Effect Size - ANSWER: Statistical testing is not enough. With large samples (lots of power), very small effects can be significant, but are they important? Effect sizes help us decide. The terms "measure of association" and "effect size" both mean the same thing: quantifying the relationship between two groups. It's more common to talk about effect size in the medical field, when you want to know how exposure is related to disease (i.e., what effect does exposure have on disease outcome?); "measure of association" is used informally to mean the same thing in most other fields. Measure of association can also refer to specific tests for relationships, like the chi-square test of independence, odds ratio, or proportionate mortality ratio. The effect size is how large an effect of something is. For example, medication A is better than medication B at treating depression, but how much better is it? A traditional hypothesis test will not give you that answer: medication B could be ten times better, or it could be slightly better. This magnitude (twice as much? ten times as much?) is what is called an effect size.
Observed Power - ANSWER: Observed power (or post-hoc power) is the statistical power of the test you have performed, based on the effect size estimate from your data. Statistical power is the probability of finding a statistical difference from 0 in your test (aka a "significant effect"), if there is a true difference to be found.
Covariate - ANSWER: In general terms, covariates are characteristics (excluding the actual treatment) of the participants in an experiment. If you collect data on characteristics before you run an experiment, you could use that data to see how
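As one concrete ANOVA effect size (eta squared; this particular measure is my choice for illustration, not one the cards name), the proportion of total variance explained by group membership can be computed straight from the sums of squares, here with invented data:

```python
# Eta squared = SS_between / SS_total: the proportion of total variance
# explained by group membership. Groups and values are made up.
groups = [
    [4.0, 5.0, 6.0],
    [7.0, 8.0, 9.0],
]
all_obs = [x for g in groups for x in g]
grand_mean = sum(all_obs) / len(all_obs)

ss_between = sum(len(g) * (sum(g) / len(g) - grand_mean) ** 2 for g in groups)
ss_total = sum((x - grand_mean) ** 2 for x in all_obs)
eta_squared = ss_between / ss_total
print(round(eta_squared, 3))
```

A significant F tells you the difference is unlikely to be chance; eta squared tells you how big it is, which is the distinction the Effect Size card is drawing.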

t test versus f test - ANSWER: Hypothesis testing starts with setting up the premises, followed by selecting a significance level. Next, we choose the test statistic, i.e., a t-test or an F-test. While a t-test is used to compare two related samples, an F-test is used to test the equality of two populations' variances. The t-test is a univariate hypothesis test applied when the standard deviation is not known and the sample size is small. The F-test is a statistical test that determines the equality of the variances of two normal populations.
pairwise comparisons - ANSWER: comparisons of each possible pair of means; each comparison has its own new null hypothesis
type I error - ANSWER: false positive (e.g., a man told he is pregnant)
type II error - ANSWER: false negative (e.g., a very pregnant woman told she is not pregnant)
alpha level - ANSWER: probability required for significance; aka the rejection rule; usually 0.05
ways to correct for family wise type I error - ANSWER: LSD (least significant difference), Bonferroni, Sidak
family wise error - ANSWER: the probability of making one or more false discoveries (Type I errors) when performing multiple hypothesis tests
a priori comparisons - ANSWER: comparisons planned beforehand; if a hypothesis is truly a priori, we do not need to correct for family-wise Type I error
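Of the corrections listed above (LSD, Bonferroni, Sidak), the Bonferroni and Sidak per-test alphas are easy to compute directly. A small sketch, assuming independent comparisons (m = 6 is illustrative, e.g. all pairwise comparisons among 4 groups):

```python
# Per-test alpha under two family-wise corrections.
# Bonferroni: alpha / m. Sidak: 1 - (1 - alpha)^(1/m), which is exact
# when the m comparisons are independent.
alpha = 0.05
m = 6

bonferroni = alpha / m
sidak = 1 - (1 - alpha) ** (1 / m)

print(round(bonferroni, 5), round(sidak, 5))
```

Sidak's per-test alpha is slightly larger than Bonferroni's, so it is marginally less conservative while still holding the familywise rate at alpha.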

post hoc comparisons - ANSWER: comparisons explored afterwards
psi equation - ANSWER: a contrast expressed as an equation; contrasts the weight and mean of a level of the IV
orthogonality - ANSWER: helps keep us honest and sane; ensures we do not violate the spirit of a priori contrasts; orthogonal contrasts are not redundant

What are the # of levels? Ex: You want to know how gait speed varies based on age and gender. Define IV, DV, and levels. - ANSWER: The # of levels is the number of groups within each independent variable. DV: gait speed. IVs: age and gender. 3 levels (groups) for age; 2 levels (groups) for gender.
You want to test for sex differences in gait velocity among 3 age groups used earlier. What is the "main effect"? - ANSWER: The average effect for each independent variable. Main effect for sex (combining all ages): "What is the effect of sex on gait speed? Is there a difference in gait speed if you're male versus female? Do men and women have different gait speeds?" Main effect for age (combining both genders): "Does gait speed depend on age?"
What is the interaction? - ANSWER: Is there an interaction BETWEEN the independent variables? Analyze all subgroups for significant differences.
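A quick numeric sketch of what an interaction looks like in the 2 (sex) x 3 (age group) gait-speed design above: if the sex gap in mean gait speed changes across age groups, the effect of sex depends on age. All cell means here are invented for illustration.

```python
# Reading an interaction off cell means in a 2 x 3 design.
cell_means = {
    # (sex, age_group): hypothetical mean gait speed (m/s)
    ("male", "young"): 1.40, ("male", "middle"): 1.30, ("male", "old"): 1.00,
    ("female", "young"): 1.35, ("female", "middle"): 1.25, ("female", "old"): 1.15,
}

ages = ["young", "middle", "old"]
# Sex difference in mean gait speed within each age group
sex_gap_by_age = {
    age: cell_means[("male", age)] - cell_means[("female", age)] for age in ages
}
print({age: round(gap, 2) for age, gap in sex_gap_by_age.items()})
# If the gap were the same at every age there would be no interaction;
# here it is +0.05 for young and middle but -0.15 for old, so the effect
# of sex DEPENDS on age, and the main effects alone could be misleading.
```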

statistically significant - ANSWER: Simply put, if you have a significant result, it means that your results likely did not happen by chance. If you don't have statistically significant results, you throw your test data out (as it doesn't show anything!); in other words, you can't reject the null hypothesis. In general, if your calculated F value is larger than the critical F value, you can reject the null hypothesis. However, the F statistic is only one measure of significance in an F test; you should also consider the p value, which is determined by the F statistic and is the probability your results could have happened by chance.
p value - ANSWER: The F statistic must be used in combination with the p value when you are deciding if your overall results are significant. Why? A significant result doesn't mean that all your variables are significant; the F statistic is just comparing the joint effect of all the variables together. If the p value is less than the alpha level, go to step 2 (otherwise your results are not significant and you cannot reject the null hypothesis); a common alpha level for tests is 0.05. Step 2: study the individual p values to find out which of the individual variables are statistically significant.
f value in ANOVA - ANSWER: SPSS calculates the F value. The F value in one-way ANOVA is a tool to help you answer the question "Is the variance between the means of two populations significantly different?" The F value in the ANOVA test also determines the p value; the p value is the probability of getting a result at least as extreme as the one actually observed, given that the null hypothesis is true. The p value is a probability, while the F ratio is a test statistic, calculated as: F value = variance of the group means (Mean Square Between) / mean of the within-group variances (Mean Square Error)

post hoc test - ANSWER: Post-hoc (Latin for "after this") means analyzing the results of your experimental data after the fact. Post-hoc tests are often based on a familywise error rate: the probability of at least one Type I error in a set (family) of comparisons. The most common post-hoc tests are: Bonferroni procedure; Duncan's new multiple range test (MRT); Dunn's multiple comparison test; Fisher's least significant difference (LSD); Holm-Bonferroni procedure; Newman-Keuls; Rodger's method; Scheffé's method; Tukey's test (see also: studentized range distribution); Dunnett's correction; Benjamini-Hochberg (BH) procedure.