Introduction
Descriptive Statistics
Inferential Statistics
Our Concern: Applied Statistics
Variables and Constants
Scales of Measurement
Scales of Measurement and Problems of Statistical Treatment
Do Statistics Lie?
Point of Controversy: Are Statistical Procedures Necessary?
Some Tips on Studying Statistics
Statistics and Computers
Summary

Frequency Distributions, Percentiles, and Percentile Ranks
Organizing Qualitative Data
Grouped Scores
How to Construct a Grouped Frequency Distribution
Apparent versus Real Limits
The Relative Frequency Distribution
The Cumulative Frequency Distribution
Percentiles and Percentile Ranks
Computing Percentiles from Grouped Data
Computation of Percentile Rank
Summary

Graphic Representation of Frequency Distributions
Basic Procedures
The Histogram
The Frequency Polygon
Choosing between a Histogram and a Polygon
The Bar Diagram and the Pie Chart
The Cumulative Percentage Curve
Factors Affecting the Shape of Graphs
Shape of Frequency Distributions
Summary

Central Tendency
The Mode
The Median
The Mean
Properties of the Mode
Properties of the Mean
Point of Controversy: Is It Permissible to Calculate the Mean for Tests in the Behavioral Sciences?
Properties of the Median
Measures of Central Tendency in Symmetrical and Asymmetrical Distributions
The Effects of Score Transformations
Summary

Variability and Standard (z) Scores
The Range and Semi-Interquartile Range
Deviation Scores
Deviational Measures: The Variance
Deviational Measures: The Standard Deviation
Calculation of the Variance and Standard Deviation: Raw-Score Method
Calculation of the Standard Deviation with IBM SPSS (formerly SPSS)
Point of Controversy: Calculating the Sample Variance: Should We Divide by n or (n - 1)?
Properties of the Range and Semi-Interquartile Range
Properties of the Standard Deviation
How Big Is a Standard Deviation?
Score Transformations and Measures of Variability
Standard Scores (z Scores)
A Comparison of z Scores and Percentile Ranks
Summary

Standard Scores and the Normal Curve
Historical Aspects of the Normal Curve
The Nature of the Normal Curve
Standard Scores and the Normal Curve
The Standard Normal Curve: Finding Areas When the Score Is Known
The Standard Normal Curve: Finding Scores When the Area Is Known
The Normal Curve as a Model for Real Variables
The Normal Curve as a Model for Sampling Distributions
Summary
Point of Controversy: How Normal Is the Normal Curve?

Correlation
Some History
Graphing Bivariate Distributions: The Scatter Diagram
Correlation: A Matter of Direction
Correlation: A Matter of Degree
Understanding the Meaning of Degree of Correlation
Formulas for Pearson's Coefficient of Correlation
Calculating r from Raw Scores
Calculating r with IBM SPSS
Spearman's Rank-Order Correlation Coefficient
Correlation Does Not Prove Causation
The Effects of Score Transformations
Cautions Concerning Correlation Coefficients
Summary

Prediction
The Problem of Prediction
The Criterion of Best Fit
Point of Controversy: Least-Squares Regression versus the Resistant Line
The Regression Equation: Standard-Score Form
The Regression Equation: Raw-Score Form
Error of Prediction: The Standard Error of Estimate
An Alternative (and Preferred) Formula for S<sub>YX</sub>
Calculating the "Raw-Score" Regression Equation and Standard Error of Estimate with IBM SPSS
Error in Estimating Y from X
Cautions Concerning Estimation of Predictive Error
Prediction Does Not Prove Causation
Summary

Interpretive Aspects of Correlation and Regression
Factors Influencing r: Degree of Variability in Each Variable
Interpretation of r: The Regression Equation I
Interpretation of r: The Regression Equation II
Interpretation of r: Proportion of Variation in Y Not Associated with Variation in X
Interpretation of r: Proportion of Variation in Y Associated with Variation in X
Interpretation of r: Proportion of Correct Placements
Summary

Probability
Defining Probability
A Mathematical Model of Probability
Two Theorems in Probability
An Example of a Probability Distribution: The Binomial
Applying the Binomial
Probability and Odds
Are Amazing Coincidences Really That Amazing?
Summary

Random Sampling and Sampling Distributions
Random Sampling
Using a Table of Random Numbers
The Random Sampling Distribution of the Mean: An Introduction
Characteristics of the Random Sampling Distribution of the Mean
Using the Sampling Distribution of X to Determine the Probability for Different Ranges of Values of X
Random Sampling Without Replacement
Summary

Introduction to Statistical Inference: Testing Hypotheses about Single Means (z and t)
Testing a Hypothesis about a Single Mean
The Null and Alternative Hypotheses
When Do We Retain and When Do We Reject the Null Hypothesis?
Review of the Procedure for Hypothesis Testing
Dr. Brown's Problem: Conclusion
The Statistical Decision
Choice of H<sub>A</sub>: One-Tailed and Two-Tailed Tests
Review of Assumptions in Testing Hypotheses about a Single Mean
Point of Controversy: The Single-Subject Research Design
Estimating the Standard Error of the Mean When σ Is Unknown
The t Distribution
Characteristics of Student's Distribution of t
Degrees of Freedom and Student's Distribution of t
An Example: Has the Violent Content of Television Programs Increased?
Calculating t from Raw Scores
Calculating t with IBM SPSS
Levels of Significance versus p-Values
Summary

Interpreting the Results of Hypothesis Testing: Effect Size, Type I and Type II Errors, and Power
A Statistically Significant Difference versus a Practically Important Difference
Point of Controversy: The Failure to Publish "Nonsignificant" Results
Effect Size
Errors in Hypothesis Testing
The Power of a Test
Factors Affecting Power: Difference between the True Population Mean and the Hypothesized Mean (Size of Effect)
Factors Affecting Power: Sample Size
Factors Affecting Power: Variability of the Measure
Factors Affecting Power: Level of Significance (α)
Factors Affecting Power: One-Tailed versus Two-Tailed Tests
Calculating the Power of a Test
Point of Controversy: Meta-Analysis
Estimating Power and Sample Size for Tests of Hypotheses about Means
Problems in Selecting a Random Sample and in Drawing Conclusions
Summary

Testing Hypotheses about the Difference between Two Independent Groups
The Null and Alternative Hypotheses
The Random Sampling Distribution of the Difference between Two Sample Means
Properties of the Sampling Distribution of the Difference between Means
Determining a Formula for t
Testing the Hypothesis of No Difference between Two Independent Means: The Dyslexic Children Experiment
Use of a One-Tailed Test
Calculation of t with IBM SPSS
Sample Size in Inference about Two Means
Effect Size
Estimating Power and Sample Size for Tests of Hypotheses about the Difference between Two Independent Means
Assumptions Associated with Inference about the Difference between Two Independent Means
The Random-Sampling Model versus the Random-Assignment Model
Random Sampling and Random Assignment as Experimental Controls
Summary

Testing for a Difference between Two Dependent (Correlated) Groups
Determining a Formula for t
Degrees of Freedom for Tests of No Difference between Dependent Means
An Alternative Approach to the Problem of Two Dependent Means
Testing a Hypothesis about Two Dependent Means: Does Text Messaging Impair Driving?
Calculating t with IBM SPSS
Effect Size
Power
Assumptions When Testing a Hypothesis about the Difference between Two Dependent Means
Problems with Using the Dependent-Samples Design
Summary

Inference about Correlation Coefficients
The Random Sampling Distribution of r
Testing the Hypothesis that r = 0
Fisher's z' Transformation
Strength of Relationship
A Note about Assumptions
Inference When Using Spearman's r<sub>S</sub>
Summary

An Alternative to Hypothesis Testing: Confidence Intervals
Examples of Estimation
Confidence Intervals for μ<sub>X</sub>
The Relation between Confidence Intervals and Hypothesis Testing
The Advantages of Confidence Intervals
Random Sampling and Generalizing Results
Evaluating a Confidence Interval
Point of Controversy: Objectivity and Subjectivity in Inferential Statistics: Bayesian Statistics
Confidence Intervals for μ<sub>X</sub> - μ<sub>Y</sub>
Sample Size Required for Confidence Intervals of μ<sub>X</sub> and μ<sub>X</sub> - μ<sub>Y</sub>
Confidence Intervals for ρ
Where Are We in Statistical Reform?
Summary

Testing for Differences among Three or More Groups: One-Way Analysis of Variance (and Some Alternatives)
The Null Hypothesis
The Basis of One-Way Analysis of Variance: Variation within and between Groups
Partition of the Sums of Squares
Degrees of Freedom
Variance Estimates and the F Ratio
The Summary Table
Example: Does Playing Violent Video Games Desensitize People to Real-Life Aggression?
Comparison of t and F
Raw-Score Formulas for Analysis of Variance
Calculation of ANOVA for Independent Measures with IBM SPSS
Assumptions Associated with ANOVA
Effect Size
ANOVA and Power
Post Hoc Comparisons
Some Concerns about Post Hoc Comparisons
An Alternative to the F Test: Planned Comparisons
How to Construct Planned Comparisons
Analysis of Variance for Repeated Measures
Calculation of ANOVA for Repeated Measures with IBM SPSS
Summary

Factorial Analysis of Variance: The Two-Factor Design
Main Effects
Interaction
The Importance of Interaction
Partition of the Sums of Squares for Two-Way ANOVA
Degrees of Freedom
Variance Estimates and F Tests
Studying the Outcome of Two-Factor Analysis of Variance
Effect Size
Calculation of Two-Factor ANOVA with IBM SPSS
Planned Comparisons
Assumptions of the Two-Factor Design and the Problem of Unequal Numbers of Scores
Mixed Two-Factor Within-Subjects Design
Calculation of the Mixed Two-Factor Within-Subjects Design with IBM SPSS
Summary

Chi-Square and Inference about Frequencies
The Chi-Square Test for Goodness of Fit
Chi-Square (χ<sup>2</sup>) as a Measure of the Difference between Observed and Expected Frequencies
The Logic of the Chi-Square Test
Interpretation of the Outcome of a Chi-Square Test
Different Hypothesized Proportions in the Test for Goodness of Fit
Effect Size for Goodness-of-Fit Problems
Assumptions in the Use of the Theoretical Distribution of Chi-Square
Chi-Square as a Test for Independence between Two Variables
Finding Expected Frequencies in a Contingency Table
Calculation of χ<sup>2</sup> and Determination of Significance in a Contingency Table
Measures of Effect Size (Strength of Association) for Tests of Independence
Point of Controversy: Yates' Correction for Continuity
Power and the Chi-Square Test of Independence
Summary

Some (Almost) Assumption-Free Tests
The Null Hypothesis in Assumption-Freer Tests
Randomization Tests
Rank-Order Tests
The Bootstrap Method of Statistical Inference
An Assumption-Freer Alternative to the t Test of a Difference between Two Independent Groups: The Mann-Whitney U Test
Point of Controversy: A Comparison of the t Test and Mann-Whitney U Test with Real-World Distributions
An Assumption-Freer Alternative to the t Test of a Difference between Two Dependent Groups: The Sign Test
Another Assumption-Freer Alternative to the t Test of a Difference between Two Dependent Groups: The Wilcoxon Signed-Ranks Test
An Assumption-Freer Alternative to One-Way ANOVA for Independent Groups: The Kruskal-Wallis Test
An Assumption-Freer Alternative to ANOVA for Repeated Measures: Friedman's Rank Test for Correlated Samples
Summary

Review of Basic Mathematics
List of Symbols
Answers to Problems
Statistical Tables
Areas under the Normal Curve Corresponding to Given Values of z
The Binomial Distribution
Random Numbers
Student's t Distribution
The F Distribution
The Studentized Range Statistic
Values of the Correlation Coefficient Required for Different Levels of Significance When H<sub>0</sub>: r = 0
Values of Fisher's z' for Values of r
The χ<sup>2</sup> Distribution
Critical One-Tail Values of SR<sub>X</sub> for the Mann-Whitney U Test
Critical Values for the Smaller of R<sub>+</sub> or R<sub>-</sub> for the Wilcoxon Signed-Ranks Test
Epilogue: The Realm of Statistics
References
Index