




























































































Discovering Statistics Using IBM SPSS Statistics, 5e Exam 2025-2026 Exam Test Questions And Verified Answers
Alpha factoring - ✔ANSWER✔-a method of factor analysis.

−2LL - ✔ANSWER✔-the log-likelihood multiplied by minus 2. Used in logistic regression.

Biserial correlation coefficient - ✔ANSWER✔-coefficient used when one variable is a continuous dichotomy (e.g., has an underlying continuum between the categories).

α-level - ✔ANSWER✔-the probability of making a Type I error (usually this value is 0.05).

Adjusted mean - ✔ANSWER✔-in the context of analysis of covariance, the value of a group mean adjusted for the effect of the covariate.

Adjusted predicted value - ✔ANSWER✔-the predicted value of a case from a model estimated without that case included in the data: the model is re-estimated without the case in question, and this new model is then used to predict the value of the excluded case. A measure of the influence of a particular case of data.

How to interpret adjusted predicted value - ✔ANSWER✔-if a case does not exert a large influence over the model, its predicted value should be similar regardless of whether the model was estimated including or excluding that case.
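The adjusted predicted value is a leave-one-out calculation, which can be sketched directly. This is a minimal illustration with invented data and our own helper names, not SPSS output:

```python
# Sketch of an adjusted predicted value: refit a simple linear model
# with one case excluded, then predict that case from the refitted model.
# Data and function names are illustrative, not from SPSS.

def fit_line(xs, ys):
    """Ordinary least-squares slope and intercept for one predictor."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sxx = sum((x - mx) ** 2 for x in xs)
    b = sxy / sxx
    return my - b * mx, b

def adjusted_predicted_value(xs, ys, i):
    """Predict case i from a model estimated without case i."""
    xs_loo = xs[:i] + xs[i + 1:]
    ys_loo = ys[:i] + ys[i + 1:]
    a, b = fit_line(xs_loo, ys_loo)
    return a + b * xs[i]

x = [1.0, 2.0, 3.0, 4.0, 5.0]
y = [2.1, 3.9, 6.2, 8.0, 30.0]   # the last case is a wild outlier

# An influential case's adjusted prediction differs sharply from its
# observed value (here: roughly 10 versus the observed 30).
print(adjusted_predicted_value(x, y, 4))
```
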
Alternative hypothesis - ✔ANSWER✔-the prediction that there will be an effect (i.e., that your experimental manipulation will have some effect or that certain variables will relate to each other).

Analysis of covariance (ANCOVA) - ✔ANSWER✔-a statistical procedure that uses the F-statistic to test the overall fit of a linear model, adjusting for the effect that one or more covariates have on the outcome variable. In experimental research this linear model tends to be defined in terms of group means, and the resulting ANOVA is therefore an overall test of whether group means differ after the variance in the outcome variable explained by any covariates has been removed.

Analysis of variance (ANOVA) - ✔ANSWER✔-a statistical procedure that uses the F-statistic to test the overall fit of a linear model. In experimental research this linear model tends to be defined in terms of group means, and the result is therefore an overall test of whether group means differ.

Anderson-Rubin method - ✔ANSWER✔-a way of calculating factor scores which produces scores that are uncorrelated and standardized with a mean of 0 and a standard deviation of 1.

AR(1) - ✔ANSWER✔-first-order autoregressive structure: a covariance structure used in multilevel linear models in which the relationship between scores changes in a systematic way. It is assumed that the correlation between scores gets smaller over time and that variances are homogeneous. This structure is often used for repeated-measures data (especially when measurements are taken over time, as in growth models).

Autocorrelation - ✔ANSWER✔-when the residuals of two observations in a regression model are correlated.

bi - ✔ANSWER✔-unstandardized regression coefficient. Indicates the strength of relationship between a given predictor, i, of many and an outcome in the units of measurement of the predictor. It is the change in the outcome associated with a unit change in the predictor.

βi - ✔ANSWER✔-standardized regression coefficient. Indicates the strength of relationship between a given predictor, i, of many and an outcome in a standardized form. Questionable practical utility because it's often significant.

Bimodal - ✔ANSWER✔-a description of a distribution of observations that has two values that appear most often.

Binary logistic regression - ✔ANSWER✔-logistic regression in which the outcome variable has exactly two categories.

Binary variable - ✔ANSWER✔-a categorical variable that has only two mutually exclusive categories (e.g., being dead or alive).

Biserial correlation - ✔ANSWER✔-a standardized measure of the strength of relationship between two variables when one of the two variables is dichotomous.

Bivariate correlation - ✔ANSWER✔-a correlation between two variables.

Blockwise regression - ✔ANSWER✔-another name for hierarchical regression.
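For a model with a single predictor, the unstandardized coefficient b and the standardized coefficient β are linked by a simple rescaling: β = b × (sₓ / sᵧ), where sₓ and sᵧ are the standard deviations of the predictor and outcome. A small numeric check, with invented data:

```python
# Sketch: converting an unstandardized slope b to a standardized beta
# via beta = b * (s_x / s_y) (single-predictor case). Data are made up.
from math import sqrt

def sd(values):
    """Sample standard deviation (n - 1 denominator)."""
    n = len(values)
    m = sum(values) / n
    return sqrt(sum((v - m) ** 2 for v in values) / (n - 1))

x = [1.0, 2.0, 3.0, 4.0]
y = [2.0, 4.0, 6.0, 8.0]     # a perfect linear relation: y = 2x

b = 2.0                       # unstandardized OLS slope for these data
beta = b * sd(x) / sd(y)      # standardized slope
print(beta)                   # a perfect relation gives beta = 1
```
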
Bonferroni correction - ✔ANSWER✔-a correction applied to the α-level to control the overall Type I error rate when multiple significance tests are carried out. Each test conducted should use a criterion of significance of the α-level (normally 0.05) divided by the number of tests conducted. Tends to be too strict when lots of tests are performed.

Bootstrap - ✔ANSWER✔-a technique from which the sampling distribution of a statistic is estimated by taking repeated samples (with replacement) from the data set (in effect, treating the data as a population from which smaller samples are taken). The statistic of interest (e.g., the mean, or b coefficient) is calculated for each sample, from which the sampling distribution of the statistic is estimated. The standard error of the statistic is estimated as the standard deviation of the sampling distribution created from the bootstrap samples. From this, confidence intervals and significance tests can be computed.

Boredom effect - ✔ANSWER✔-refers to the possibility that performance in tasks may be influenced (the assumption is a negative influence) by boredom or lack of concentration if there are many tasks, or the task goes on for a long period of time.
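The bootstrap procedure described above can be sketched in a few lines: resample with replacement, recompute the statistic, then read the standard error and a percentile confidence interval off the bootstrap distribution. The data and number of resamples below are arbitrary choices for illustration:

```python
# Sketch of a percentile bootstrap for the mean. Sample data and
# B = 2000 resamples are illustrative choices, not a recommendation.
import random
from math import sqrt

random.seed(1)
data = [4.2, 5.1, 3.8, 6.0, 5.5, 4.9, 5.7, 4.4]
B = 2000

boot_means = []
for _ in range(B):
    resample = [random.choice(data) for _ in data]   # sample WITH replacement
    boot_means.append(sum(resample) / len(resample))

# Bootstrap SE = standard deviation of the bootstrap sampling distribution.
m = sum(boot_means) / B
se = sqrt(sum((v - m) ** 2 for v in boot_means) / (B - 1))

# 95% percentile confidence interval.
boot_means.sort()
ci = (boot_means[int(0.025 * B)], boot_means[int(0.975 * B)])
print(se, ci)
```
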
Brown-Forsythe F - ✔ANSWER✔-a version of the F-statistic designed to be accurate when the assumption of homogeneity of variance has been violated.

CAIC (Bozdogan's criterion) - ✔ANSWER✔-a goodness-of-fit measure similar to the AIC, but correcting for model complexity and sample size. It is not intrinsically interpretable, but can be compared across different models to see how changing the model affects the fit. A small value represents a better fit to the data.

Categorical variable - ✔ANSWER✔-any variable made up of categories of objects/entities. E.g., the university you attend: students who attend the University of Sussex are not also enrolled at Harvard or UV Amsterdam; therefore, students fall into distinct categories.

Central limit theorem - ✔ANSWER✔-this theorem states that when samples are large (above about 30) the sampling distribution will take the shape of a normal distribution regardless of the shape of the population from which the sample was drawn. For small samples the t-distribution better approximates the shape of the sampling distribution. We also know from this theorem that the standard deviation of the sampling distribution (i.e., the standard error of the sample mean) will be equal to the standard deviation of the sample (s) divided by the square root of the sample size (N).

Central tendency - ✔ANSWER✔-a generic term describing the centre of a frequency distribution of observations as measured by the mean, mode and median.

Centring - ✔ANSWER✔-the process of transforming a variable into deviations around a fixed point. This fixed point can be any value that is chosen, but typically the mean is used: to centre a variable, the mean is subtracted from each score. See grand mean centring, group mean centring.

Chartjunk - ✔ANSWER✔-superfluous material that distracts from the data being displayed on a graph.

Chi-square distribution - ✔ANSWER✔-a probability distribution of the sum of squares of several normally distributed variables. It tends to be used to test hypotheses about categorical data, and to test the fit of models to the observed data.
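The central limit theorem entry above makes two checkable claims: means of large samples are roughly normal even from a skewed population, and their standard deviation is σ/√N. A quick simulation (population and sizes chosen arbitrarily for illustration):

```python
# Sketch: simulate the claim that the standard error of the mean
# equals the population SD divided by sqrt(N), using a skewed
# (exponential) population whose SD is 1. Sizes are arbitrary.
import random
from math import sqrt

random.seed(7)
N = 50          # sample size (large enough for the CLT to apply)
reps = 5000     # number of samples drawn

means = []
for _ in range(reps):
    sample = [random.expovariate(1.0) for _ in range(N)]  # skewed, sd = 1
    means.append(sum(sample) / N)

grand = sum(means) / reps
observed_se = sqrt(sum((m - grand) ** 2 for m in means) / (reps - 1))
predicted_se = 1.0 / sqrt(N)    # sigma / sqrt(N), with sigma = 1
print(observed_se, predicted_se)
```

The two printed values should agree closely, even though the population itself is strongly skewed.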
Common variance - ✔ANSWER✔-variance shared by two or more variables.

Communality - ✔ANSWER✔-the proportion of a variable's variance that is common variance. This term is used primarily in factor analysis. A variable that has no unique variance (or random variance) would have a communality of 1, whereas a variable that shares none of its variance with any other variable would have a communality of 0.

Complete separation - ✔ANSWER✔-a situation in logistic regression when the outcome variable can be perfectly predicted by one predictor or a combination of predictors.

Component matrix - ✔ANSWER✔-general term for the structure matrix in principal components analysis.

Compound symmetry - ✔ANSWER✔-a condition that holds true when both the variances across conditions are equal (this is the same as the homogeneity of variance assumption) and the covariances between pairs of conditions are also equal.
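With orthogonal (uncorrelated) factors, a variable's communality works out as the sum of its squared loadings across the factors. A tiny sketch with invented two-factor loadings:

```python
# Sketch: communality as the sum of squared factor loadings, assuming
# orthogonal factors. The questionnaire items and loadings are invented.
loadings = {
    "anxiety_q1": (0.80, 0.10),
    "anxiety_q2": (0.75, 0.05),
    "stats_q1":  (0.10, 0.85),
}

communalities = {var: sum(l ** 2 for l in ls) for var, ls in loadings.items()}
print(communalities["anxiety_q1"])   # 0.80^2 + 0.10^2 = 0.65
```
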
Concurrent validity - ✔ANSWER✔-a form of criterion validity where there is evidence that scores from an instrument correspond to concurrently recorded external measures conceptually related to the measured construct.

Confidence interval - ✔ANSWER✔-for a given statistic calculated for a sample of observations (e.g., the mean), a range of values around that statistic that is believed to contain, in a certain proportion of samples (e.g., 95%), the true value of that statistic (i.e., the population parameter). This also means that for the other proportion of samples (e.g., 5%), the interval won't contain that true value. The trouble is, you don't know which category your particular sample falls into.

Confirmatory factor analysis (CFA) - ✔ANSWER✔-a version of factor analysis in which specific hypotheses about structure and relations between the latent variables that underlie the data are tested.

Confounding variable - ✔ANSWER✔-a variable (that we may or may not have measured) other than the predictor variables in which we're interested that potentially affects an outcome variable.
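A common construction of the confidence interval described above, for a sample mean, is mean ± critical value × standard error. The sketch below uses the normal-approximation critical value 1.96 (appropriate for large samples; small samples would use a t critical value), with invented data:

```python
# Sketch: a 95% confidence interval for a sample mean via the normal
# approximation, mean +/- 1.96 * SE. Data are invented.
from math import sqrt

data = [5.1, 4.8, 5.6, 5.0, 4.9, 5.3, 5.2, 4.7, 5.4, 5.0]
n = len(data)
mean = sum(data) / n
sd = sqrt(sum((x - mean) ** 2 for x in data) / (n - 1))  # sample SD
se = sd / sqrt(n)                                        # standard error

ci = (mean - 1.96 * se, mean + 1.96 * se)
print(ci)
```
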
Cook's distance - ✔ANSWER✔-a measure of the overall influence of a case on a model. Values greater than 1 have been suggested as possible cause for concern.

Correlation coefficient - ✔ANSWER✔-a measure of the strength of association or relationship between two variables. See Pearson's correlation coefficient, Spearman's correlation coefficient, Kendall's tau.

Correlational research - ✔ANSWER✔-a form of research in which you observe what naturally goes on in the world without directly interfering with it. This term implies that data will be analysed so as to look at relationships between naturally occurring variables rather than making statements about cause and effect.

Counterbalancing - ✔ANSWER✔-a process of systematically varying the order in which experimental conditions are conducted. In the simplest case of two conditions (A and B), half of the participants complete condition A followed by condition B, whereas the remainder do condition B followed by condition A. The aim is to remove systematic bias caused by practice effects or boredom effects.
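The two-condition counterbalancing scheme just described can be sketched as a simple assignment of participants to orders. The participant IDs are invented; a real study would typically also randomize which participants land in which half:

```python
# Sketch: counterbalancing two conditions (A then B vs B then A) so
# that each order is used by half the participants. IDs are invented.
participants = ["p1", "p2", "p3", "p4", "p5", "p6"]

orders = {}
for i, p in enumerate(participants):
    # Alternate the two orders so each appears equally often.
    orders[p] = ["A", "B"] if i % 2 == 0 else ["B", "A"]

print(orders)
```
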
Covariance - ✔ANSWER✔-a measure of the 'average' relationship between two variables. It is the average cross-product deviation (i.e., the sum of cross-product deviations divided by one less than the number of observations).

Covariance ratio (CVR) - ✔ANSWER✔-a measure of whether a case influences the variance of the parameters in a regression model. When this ratio is close to 1 the case has very little influence on the variances of the model parameters. Belsley et al. (1980) recommend the following: if the CVR of a case is greater than 1 + [3(k + 1)/n] then deleting that case will damage the precision of some of the model's parameters, but if it is less than 1 − [3(k + 1)/n] then deleting the case will improve the precision of some of the model's parameters (k is the number of predictors and n is the sample size).

Covariate - ✔ANSWER✔-a variable that has a relationship with (in terms of covariance), or has the potential to be related to, the outcome variable we've measured.

Cox and Snell's R² - ✔ANSWER✔-a version of the coefficient of determination for logistic regression.

Cross-product deviations - ✔ANSWER✔-a measure of the 'total' relationship between two variables. It is the deviation of one variable from its mean multiplied by the other variable's deviation from its mean.

Cross-sectional research - ✔ANSWER✔-a form of research in which you observe what naturally goes on in the world without directly interfering with it, by measuring several variables at a single time point. In psychology, this term usually implies that data come from people at different age points, with different people representing each age point.

Cross-validation - ✔ANSWER✔-assessing the accuracy of a model across different samples. This is an important step in generalization. In a regression model there are two main methods: adjusted R², or data splitting, in which the data are split randomly into two halves, a regression model is estimated for each half, and the two models are compared.

Cubic trend - ✔ANSWER✔-if you connected the means in ordered conditions with a line, a cubic trend is shown by two changes in the direction of this line. You must have at least four ordered conditions.
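The covariance and cross-product deviations entries fit together arithmetically: sum the cross-product deviations and divide by N − 1 to get the covariance. A small worked example with invented data:

```python
# Sketch: covariance as the sum of cross-product deviations divided
# by N - 1, tying together the two definitions. Data are invented.
x = [2.0, 4.0, 6.0, 8.0]
y = [1.0, 3.0, 2.0, 6.0]

n = len(x)
mx = sum(x) / n
my = sum(y) / n

# Each cross-product deviation: (x_i - mean_x) * (y_i - mean_y).
cross_products = [(xi - mx) * (yi - my) for xi, yi in zip(x, y)]
covariance = sum(cross_products) / (n - 1)
print(cross_products, covariance)
```
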
Currency variable - ✔ANSWER✔-a variable containing values of money.

Data view - ✔ANSWER✔-one of the two ways to view the contents of the data editor window. The data view shows you a spreadsheet and can be used for entering raw data. See also variable view.

Date variable - ✔ANSWER✔-a variable made up of dates. The data can take forms such as dd-mmm-yyyy (e.g., 21-Jun-1973), dd-mmm-yy (e.g., 21-Jun-73), mm/dd/yy (e.g., 06/21/73), or dd.mm.yyyy (e.g., 21.06.1973).

Degrees of freedom - ✔ANSWER✔-an impossible thing to define in a few pages, let alone a few lines. Essentially it is the number of 'entities' that are free to vary when estimating some kind of statistical parameter. In a more practical sense, it has a bearing on significance tests for many commonly used test statistics (such as the F-statistic, t-test, chi-square statistic) and determines the exact form of the probability distribution for these test statistics.