List of figures
Preface
Preface to the Second Edition

Introduction

Descriptive statistics
  Measures of 'central tendency'
  Measures of 'spread'
  Describing a set of data: in conclusion
  Comparing two sets of data with descriptive statistics
  Some important information about numbers

Standard scores
  Comparing scores from different distributions
  The Normal Distribution
  The Standard Normal Distribution

Introduction to hypothesis testing
  Testing an hypothesis
  The logic of hypothesis testing
  One- and two-tailed predictions

Sampling
  Populations and samples
  Selecting a sample
  Sample statistics and population parameters
  Summary

Hypothesis testing with one sample
  An example
  When we do not have the known population standard deviation
  Confidence intervals
  Hypothesis testing with one sample: in conclusion

Selecting samples for comparison
  Designing experiments to compare samples
  The interpretation of sample differences

Hypothesis testing with two samples
  The assumptions of the two sample t test
  Related or independent samples
  The related t test
  The independent t test
  Confidence intervals

Significance, error and power
  Type I and Type II errors
  Statistical power
  The power of a test
  The choice of α level
  Effect size
  Sample size
  Conclusion

Introduction to the analysis of variance
  Factors and conditions
  The problem of many conditions and the t test
  Why do scores vary in an experiment?
  The process of analysing variability
  The F distribution
  Conclusion

One factor independent measures ANOVA
  Analysing variability in the independent measures ANOVA
  Rejecting the null hypothesis
  Unequal sample sizes
  The relationship of F to t

Multiple comparisons
  The Tukey test (for all pairwise comparisons)
  The Scheffé test (for complex comparisons)

One factor repeated measures ANOVA
  Deriving the F value
  Multiple comparisons

The interaction of factors in the analysis of variance
  Interactions
  Dividing up the between conditions sums of squares
  Simple main effects
  Conclusion

Calculating the two factor ANOVA
  The two factor independent measures ANOVA
  The two factor mixed design ANOVA
  The two factor repeated measures ANOVA
  A non-significant interaction

An introduction to nonparametric analysis
  Calculating ranks

Two sample nonparametric analyses
  The Mann-Whitney U test (for independent samples)
  The Wilcoxon signed-ranks test (for related samples)

One factor ANOVA for ranked data
  The Kruskal-Wallis test (for independent measures)
  The Friedman test (for related samples)

Analysing frequency data: chi-square
  Nominal data, categories and frequency counts
  Introduction to χ²
  Chi-square (χ²) as a 'goodness of fit' test
  Chi-square (χ²) as a test of independence
  The chi-square distribution
  The assumptions of the χ² test

Linear correlation and regression
  Introduction
  Pearson r correlation coefficient
  Linear regression
  The interpretation of correlation and regression
  Problems with correlation and regression
  The standard error of the estimate
  The Spearman rₛ correlation coefficient

Multiple correlation and regression
  Introduction to multivariate analysis
  Partial correlation
  Multiple correlation
  Multiple regression

Complex analyses and computers
  Undertaking data analysis by computer
  Complex analyses
  Reliability
  Factor analysis
  Multivariate analysis of variance (MANOVA)
  Discriminant function analysis
  Conclusion

An introduction to the general linear model
  Models
  An example of a linear model
  Modelling data
  The model: the regression equation
  Selecting a good model
  Comparing samples (the analysis of variance once again)
  Explaining variations in the data
  The general linear model

Notes
Glossary
References

Acknowledgements and statistical tables
  The standard normal distribution tables
  Critical values of the t distribution
  Critical values of the F distribution
  Critical values of the Studentized range statistic, q
  Critical values of the Mann-Whitney U statistic
  Critical values of the Wilcoxon T statistic
  Critical values of the chi-square (χ²) distribution
  Table of probabilities for χ²ᵣ when k and n are small
  Critical values of the Pearson r correlation coefficient
  Critical values of the Spearman rₛ ranked correlation coefficient

Index