Discrete Probability Distributions

Discrete Probability Distributions and Probability Mass Functions

Bernoulli Experiments and Trials

Binomial Random Variables, Experiments, and Probability Functions

The Binomial Coefficient

The Binomial Probability Function

Mean, Variance, and Standard Deviation of the Binomial Probability Distribution

The Binomial Expansion and the Binomial Theorem

Pascal's Triangle and the Binomial Coefficient

The Family of Binomial Distributions

The Cumulative Binomial Probability Table

Lot-Acceptance Sampling

Consumer's Risk and Producer's Risk
Multivariate Probability Distributions and Joint Probability Distributions

The Multinomial Experiment

The Multinomial Coefficient

The Multinomial Probability Function

The Family of Multinomial Probability Distributions

The Means of the Multinomial Probability Distribution

The Multinomial Expansion and the Multinomial Theorem

The Hypergeometric Experiment

The Hypergeometric Probability Function

The Family of Hypergeometric Probability Distributions

The Mean, Variance, and Standard Deviation of the Hypergeometric Probability Distribution

The Generalization of the Hypergeometric Probability Distribution

The Binomial and Multinomial Approximations to the Hypergeometric Distribution

Poisson Processes, Random Variables, and Experiments

The Poisson Probability Function

The Family of Poisson Probability Distributions

The Mean, Variance, and Standard Deviation of the Poisson Probability Distribution

The Cumulative Poisson Probability Table

The Poisson Distribution as an Approximation to the Binomial Distribution
The Normal Distribution and Other Continuous Probability Distributions

Continuous Probability Distributions

The Normal Probability Distributions and the Normal Probability Density Function

The Family of Normal Probability Distributions

The Normal Distribution: Relationship between the Mean (μ), the Median (μ), and the Mode

Kurtosis

The Standard Normal Distribution

Relationship Between the Standard Normal Distribution and the Standard Normal Variable

Table of Areas in the Standard Normal Distribution

Finding Probabilities Within any Normal Distribution by Applying the Z Transformation

One-Tailed Probabilities

Two-Tailed Probabilities

The Normal Approximation to the Binomial Distribution

The Normal Approximation to the Poisson Distribution

The Discrete Uniform Probability Distribution

The Continuous Uniform Probability Distribution

The Exponential Probability Distribution

Relationship between the Exponential Distribution and the Poisson Distribution
Sampling Distributions

Simple Random Sampling Revisited

Independent Random Variables

Mathematical and Nonmathematical Definitions of Simple Random Sampling

Assumptions of the Sampling Technique

The Random Variable X̄

Theoretical and Empirical Sampling Distributions of the Mean

The Mean of the Sampling Distribution of the Mean

The Accuracy of an Estimator

The Variance of the Sampling Distribution of the Mean: Infinite Population or Sampling with Replacement

The Variance of the Sampling Distribution of the Mean: Finite Population Sampled without Replacement

The Standard Error of the Mean

The Precision of an Estimator

Determining Probabilities with a Discrete Sampling Distribution of the Mean

Determining Probabilities with a Normally Distributed Sampling Distribution of the Mean

The Central Limit Theorem: Sampling from a Finite Population with Replacement

The Central Limit Theorem: Sampling from an Infinite Population

The Central Limit Theorem: Sampling from a Finite Population without Replacement

How Large Is "Sufficiently Large"?
The Sampling Distribution of the Sample Sum

Applying the Central Limit Theorem to the Sampling Distribution of the Sample Sum

Sampling from a Binomial Population

Sampling Distribution of the Number of Successes

Sampling Distribution of the Proportion

Applying the Central Limit Theorem to the Sampling Distribution of the Number of Successes

Applying the Central Limit Theorem to the Sampling Distribution of the Proportion

Determining Probabilities with a Normal Approximation to the Sampling Distribution of the Proportion
One-Sample Estimation of the Population Mean

Estimation

Criteria for Selecting the Optimal Estimator

The Estimated Standard Error of the Mean s_x̄

Point Estimates

Reporting and Evaluating the Point Estimate

Relationship between Point Estimates and Interval Estimates

Deriving P(x_(1−α/2) ≤ X̄ ≤ x_(α/2)) = P(−z_(α/2) ≤ Z ≤ z_(α/2)) = 1 − α

Deriving P(X̄ − z_(α/2)σ_x̄ ≤ μ ≤ X̄ + z_(α/2)σ_x̄) = 1 − α

Confidence Interval for the Population Mean μ: Known Standard Deviation σ, Normally Distributed Population

Presenting Confidence Limits

Precision of the Confidence Interval

Determining Sample Size when the Standard Deviation is Known

Confidence Interval for the Population Mean μ: Known Standard Deviation σ, Large Sample (n ≥ 30) from any Population Distribution

Determining Confidence Intervals for the Population Mean μ when the Population Standard Deviation σ is Unknown

The t Distribution

Relationship between the t Distribution and the Standard Normal Distribution

Degrees of Freedom

The Term "Student's t Distribution"

Critical Values of the t Distribution

Table A.6: Critical Values of the t Distribution

Confidence Interval for the Population Mean μ: Standard Deviation σ Not Known, Small Sample (n < 30) from a Normally Distributed Population

Determining Sample Size: Unknown Standard Deviation, Small Sample from a Normally Distributed Population

Confidence Interval for the Population Mean μ: Standard Deviation σ Not Known, Large Sample (n ≥ 30) from a Normally Distributed Population

Confidence Interval for the Population Mean μ: Standard Deviation σ Not Known, Large Sample (n ≥ 30) from a Population that is not Normally Distributed

Confidence Interval for the Population Mean μ: Small Sample (n < 30) from a Population that is not Normally Distributed
One-Sample Estimation of the Population Variance, Standard Deviation, and Proportion

Optimal Estimators of Variance, Standard Deviation, and Proportion

The Chi-Square Statistic and the Chi-Square Distribution

Critical Values of the Chi-Square Distribution

Table A.7: Critical Values of the Chi-Square Distribution

Deriving the Confidence Interval for the Variance σ² of a Normally Distributed Population

Presenting Confidence Limits

Precision of the Confidence Interval for the Variance

Determining Sample Size Necessary to Achieve a Desired Quality-of-Estimate for the Variance

Using Normal-Approximation Techniques to Determine Confidence Intervals for the Variance

Using the Sampling Distribution of the Sample Variance to Approximate a Confidence Interval for the Population Variance

Confidence Interval for the Standard Deviation σ of a Normally Distributed Population

Using the Sampling Distribution of the Sample Standard Deviation to Approximate a Confidence Interval for the Population Standard Deviation

The Optimal Estimator for the Proportion p of a Binomial Population

Deriving the Approximate Confidence Interval for the Proportion p of a Binomial Population

Estimating the Parameter p

Deciding when n is "Sufficiently Large", p Not Known

Approximate Confidence Intervals for the Binomial Parameter p When Sampling from a Finite Population without Replacement

The Exact Confidence Interval for the Binomial Parameter p

Precision of the Approximate Confidence-Interval Estimate of the Binomial Parameter p

Determining Sample Size for the Confidence Interval of the Binomial Parameter p

Approximate Confidence Interval for the Percentage of a Binomial Population

Approximate Confidence Interval for the Total Number in a Category of a Binomial Population

The Capture-Recapture Method for Estimating Population Size N
One-Sample Hypothesis Testing

Statistical Hypothesis Testing

The Null Hypothesis and the Alternative Hypothesis

Testing the Null Hypothesis

Two-Sided Versus One-Sided Hypothesis Tests

Testing Hypotheses about the Population Mean μ: Known Standard Deviation σ, Normally Distributed Population

The P Value

Type I Error versus Type II Error

Critical Values and Critical Regions

The Level of Significance

Decision Rules for Statistical Hypothesis Tests

Selecting Statistical Hypotheses

The Probability of a Type II Error

Consumer's Risk and Producer's Risk

Why It Is Not Possible to Prove the Null Hypothesis

Classical Inference Versus Bayesian Inference

Procedure for Testing the Null Hypothesis

Hypothesis Testing Using X̄ as the Test Statistic

The Power of a Test, Operating Characteristic Curves, and Power Curves

Testing Hypotheses about the Population Mean μ: Standard Deviation σ Not Known, Small Sample (n < 30) from a Normally Distributed Population

The P Value for the t Statistic

Decision Rules for Hypothesis Tests with the t Statistic

β, 1 − β, Power Curves, and OC Curves

Testing Hypotheses about the Population Mean μ: Large Sample (n ≥ 30) from any Population Distribution

Assumptions of One-Sample Parametric Hypothesis Testing

When the Assumptions Are Violated

Testing Hypotheses about the Variance σ² of a Normally Distributed Population

Testing Hypotheses about the Standard Deviation σ of a Normally Distributed Population

Testing Hypotheses about the Proportion p of a Binomial Population: Large Samples

Testing Hypotheses about the Proportion p of a Binomial Population: Small Samples
Two-Sample Estimation and Hypothesis Testing

Independent Samples Versus Paired Samples

The Optimal Estimator of the Difference Between Two Population Means (μ₁ − μ₂)

The Theoretical Sampling Distribution of the Difference Between Two Means

Confidence Interval for the Difference Between Means (μ₁ − μ₂): Standard Deviations (σ₁ and σ₂) Known, Independent Samples from Normally Distributed Populations

Testing Hypotheses about the Difference Between Means (μ₁ − μ₂): Standard Deviations (σ₁ and σ₂) Known, Independent Samples from Normally Distributed Populations

The Estimated Standard Error of the Difference Between Two Means

Confidence Interval for the Difference Between Means (μ₁ − μ₂): Standard Deviations Not Known but Assumed Equal (σ₁ = σ₂), Small (n₁ < 30 and n₂ < 30) Independent Samples from Normally Distributed Populations

Testing Hypotheses about the Difference Between Means (μ₁ − μ₂): Standard Deviations Not Known but Assumed Equal (σ₁ = σ₂), Small (n₁ < 30 and n₂ < 30) Independent Samples from Normally Distributed Populations

Confidence Interval for the Difference Between Means (μ₁ − μ₂): Standard Deviations (σ₁ and σ₂) Not Known, Large (n₁ ≥ 30 and n₂ ≥ 30) Independent Samples from any Population Distributions

Testing Hypotheses about the Difference Between Means (μ₁ − μ₂): Standard Deviations (σ₁ and σ₂) Not Known, Large (n₁ ≥ 30 and n₂ ≥ 30) Independent Samples from any Population Distributions

Confidence Interval for the Difference Between Means (μ₁ − μ₂): Paired Samples

Testing Hypotheses about the Difference Between Means (μ₁ − μ₂): Paired Samples

Assumptions of Two-Sample Parametric Estimation and Hypothesis Testing about Means

When the Assumptions Are Violated

Comparing Independent-Sampling and Paired-Sampling Techniques on Precision and Power

The F Statistic

The F Distribution

Critical Values of the F Distribution

Table A.8: Critical Values of the F Distribution

Confidence Interval for the Ratio of Variances (σ₁²/σ₂²): Parameters (σ₁², σ₁, μ₁ and σ₂², σ₂, μ₂) Not Known, Independent Samples from Normally Distributed Populations

Testing Hypotheses about the Ratio of Variances (σ₁²/σ₂²): Parameters (σ₁², σ₁, μ₁ and σ₂², σ₂, μ₂) Not Known, Independent Samples from Normally Distributed Populations

When to Test for Homogeneity of Variance

The Optimal Estimator of the Difference Between Proportions (p₁ − p₂): Large Independent Samples

The Theoretical Sampling Distribution of the Difference Between Two Proportions

Approximate Confidence Interval for the Difference Between Proportions from Two Binomial Populations (p₁ − p₂): Large Independent Samples

Testing Hypotheses about the Difference Between Proportions from Two Binomial Populations (p₁ − p₂): Large Independent Samples
Multisample Estimation and Hypothesis Testing

Multisample Inferences

The Analysis of Variance

ANOVA: One-Way, Two-Way, or Multiway

One-Way ANOVA: Fixed Effects or Random Effects

One-Way, Fixed-Effects ANOVA: The Assumptions

Equal-Samples, One-Way, Fixed-Effects ANOVA: H₀ and H₁

Equal-Samples, One-Way, Fixed-Effects ANOVA: Organizing the Data

Equal-Samples, One-Way, Fixed-Effects ANOVA: The Basic Rationale

SST = SSA + SSW

Computational Formulas for SST and SSA

Degrees of Freedom and Mean Squares

The F Test

The ANOVA Table

Multiple Comparison Tests

Duncan's Multiple-Range Test

Confidence-Interval Calculations Following Multiple Comparisons

Testing for Homogeneity of Variance

One-Way, Fixed-Effects ANOVA: Equal or Unequal Sample Sizes

General-Procedure, One-Way, Fixed-Effects ANOVA: Organizing the Data

General-Procedure, One-Way, Fixed-Effects ANOVA: Sum of Squares

General-Procedure, One-Way, Fixed-Effects ANOVA: Degrees of Freedom and Mean Squares

General-Procedure, One-Way, Fixed-Effects ANOVA: The F Test

General-Procedure, One-Way, Fixed-Effects ANOVA: Multiple Comparisons

General-Procedure, One-Way, Fixed-Effects ANOVA: Calculating Confidence Intervals and Testing for Homogeneity of Variance

Violations of ANOVA Assumptions
Regression and Correlation

Analyzing the Relationship between Two Variables

The Simple Linear Regression Model

The Least-Squares Regression Line

The Estimator of the Variance σ²_(Y·X)

Mean and Variance of the y Intercept a and the Slope b

Confidence Intervals for the y Intercept a and the Slope b

Confidence Interval for the Variance σ²_(Y·X)

Prediction Intervals for Expected Values of Y

Testing Hypotheses about the Slope b

Comparing Simple Linear Regression Equations from Two or More Samples

Multiple Linear Regression

Simple Linear Correlation

Derivation of the Correlation Coefficient r

Confidence Intervals for the Population Correlation Coefficient ρ

Using the r Distribution to Test Hypotheses about the Population Correlation Coefficient ρ

Using the t Distribution to Test Hypotheses about ρ

Using the Z Distribution to Test the Hypothesis ρ = c

Interpreting the Sample Correlation Coefficient r

Multiple Correlation and Partial Correlation
Nonparametric Techniques

Nonparametric vs. Parametric Techniques

Chi-Square Tests

Chi-Square Test for Goodness-of-Fit

Chi-Square Test for Independence: Contingency Table Analysis

Chi-Square Test for Homogeneity Among k Binomial Proportions

Rank Order Tests

One-Sample Tests: The Wilcoxon Signed-Rank Test

Two-Sample Tests: The Wilcoxon Signed-Rank Test for Dependent Samples

Two-Sample Tests: The Mann-Whitney U Test for Independent Samples

Multisample Tests: The Kruskal-Wallis H Test for k Independent Samples

The Spearman Test of Rank Correlation
Appendix

Cumulative Binomial Probabilities

Cumulative Poisson Probabilities

Areas of the Standard Normal Distribution

Critical Values of the t Distribution

Critical Values of the Chi-Square Distribution

Critical Values of the F Distribution

Least Significant Studentized Ranges r_p

Transformation of r to z_r

Critical Values of the Pearson Product-Moment Correlation Coefficient r

Critical Values of the Wilcoxon W

Critical Values of the Mann-Whitney U

Critical Values of the Kruskal-Wallis H

Critical Values of the Spearman r_S

Index