Descriptive Statistics

Introduction to Statistics

Stumbling Blocks to Statistics
A Brief Look at the History of Statistics
Benefits of a Course in Statistics
General Field of Statistics

Graphs and Measures of Central Tendency

Graphs
Measures of Central Tendency
Appropriate Use of the Mean, the Median, and the Mode

Variability

Measures of Variability
Graphs and Variability
Questionnaire Percentages

The Normal Curve and z Scores

The Normal Curve
z Scores
Translating Raw Scores into z Scores
z Score Translations in Practice
Fun with Your Calculator

z Scores Revisited: t Scores and Other Normal Curve Transformations

Other Applications of the z Score
The Percentile Table
t Scores
Normal Curve Equivalents
Stanines
Grade-Equivalent Scores: A Note of Caution
The Importance of the z Score

Probability

The Definition of Probability
Probability and Percentage Areas of the Normal Curve
Combining Probabilities for Independent Events
A Reminder about Logic

Inferential Statistics

Statistics and Parameters

Generalizing from the Few to the Many
Key Concepts of Inferential Statistics
Techniques of Sampling
Exit Polling
Sampling Distributions
Back to z
Some Words of Encouragement

Parameter Estimates and Hypothesis Testing

The Standard Deviation Revisited
Estimating the Standard Error of the Mean
Estimating the Population Mean: Interval Estimates and Hypothesis Testing
The t Ratio
The Type 1 Error
Alpha Levels
Effect Size
Interval Estimates: No Hypothesis Needed

The Fundamentals of Research Methodology

Research Strategies
Independent and Dependent Variables
The Cause-and-Effect Trap
Theory of Measurement
Research: Experimental versus Post-Facto
The Experimental Method: The Case of Cause and Effect
Creating Equivalent Groups: The True Experiment
Designing the True Experiment
The Hawthorne Effect
Repeated-Measures Designs with Separate Control Groups
Requirements for the True Experiment
Post-Facto Research
Combination Research
Research Errors
Experimental Error: Failure to Use an Adequate Control Group
Post-Facto Errors
Meta-Analysis
Methodology as a Basis for More Sophisticated Techniques

The Hypothesis of Difference

Sampling Distribution of Differences
Estimated Standard Error of Difference
Two-Sample t Test for Independent Samples
Significance
Two-Tail t Table
Alpha and Confidence Levels
Confidence Interval for Differences Between Two Independent Samples
The Minimum Difference
Outliers
One-Tail t Test
Importance of Having at Least Two Samples
Power
Effect Size

The Hypothesis of Association: Correlation

Cause and Effect
The Pearson r
Interclass versus Intraclass
Correlation Matrix
The Spearman r
An Important Difference between the Correlation Coefficient and the t Test

Analysis of Variance

Advantages of ANOVA
Analyzing the Variance
Applications of ANOVA
The Factorial ANOVA
Eta Squared and d