Mathematical Statistics with Applications

ISBN-10: 0534377416

ISBN-13: 9780534377410

Edition: 6th 2002 (Revised)

Authors: Dennis D. Wackerly, William Mendenhall, Richard L. Scheaffer

This is the most widely used mathematical statistics text at the top 200 universities in the United States. Premier authors Dennis Wackerly, William Mendenhall, and Richard L. Scheaffer present a solid undergraduate foundation in statistical theory while conveying the relevance and importance of the theory in solving practical, real-world problems. The authors' use of practical applications and excellent exercises helps students discover the nature of statistics and understand its essential role in scientific research.

Book details

List price: $366.95
Edition: 6th
Copyright year: 2002
Publisher: Brooks/Cole
Publication date: 5/30/2001
Binding: Hardcover
Pages: 798
Size: 7.50" wide x 9.25" long x 1.50" tall
Weight: 3.278 lbs
Language: English

Richard L. Scheaffer, Professor Emeritus of Statistics, University of Florida, received his Ph.D. in statistics from Florida State University. Alongside a career of teaching, research, and administration, Dr. Scheaffer has led efforts to improve statistics education throughout the school and college curriculum. Co-author of five textbooks, he was one of the developers of the Quantitative Literacy Project that formed the basis of the data analysis strand in the curriculum standards of the National Council of Teachers of Mathematics. He also led the task force that developed the AP Statistics Program, for which he served as Chief Faculty Consultant. Dr. Scheaffer is a Fellow and…

Preface
Note to the Student
What Is Statistics?
Introduction
Characterizing a Set of Measurements: Graphical Methods
Characterizing a Set of Measurements: Numerical Methods
How Inferences Are Made
Theory and Reality
Summary
Probability
Introduction
Probability and Inference
A Review of Set Notation
A Probabilistic Model for an Experiment: The Discrete Case
Calculating the Probability of an Event: The Sample-Point Method
Tools for Counting Sample Points
Conditional Probability and the Independence of Events
Two Laws of Probability
Calculating the Probability of an Event: The Event-Composition Method
The Law of Total Probability and Bayes' Rule
Numerical Events and Random Variables
Random Sampling
Summary
Discrete Random Variables and Their Probability Distributions
Basic Definition
The Probability Distribution for a Discrete Random Variable
The Expected Value of a Random Variable or a Function of a Random Variable
The Binomial Probability Distribution
The Geometric Probability Distribution
The Negative Binomial Probability Distribution (Optional)
The Hypergeometric Probability Distribution
The Poisson Probability Distribution
Moments and Moment-Generating Functions
Probability-Generating Functions (Optional)
Tchebysheff's Theorem
Summary
Continuous Random Variables and Their Probability Distributions
Introduction
The Probability Distribution for a Continuous Random Variable
Expected Values for Continuous Random Variables
The Uniform Probability Distribution
The Normal Probability Distribution
The Gamma Probability Distribution
The Beta Probability Distribution
Some General Comments
Other Expected Values
Tchebysheff's Theorem
Expectations of Discontinuous Functions and Mixed Probability Distributions (Optional)
Summary
Multivariate Probability Distributions
Introduction
Bivariate and Multivariate Probability Distributions
Marginal and Conditional Probability Distributions
Independent Random Variables
The Expected Value of a Function of Random Variables
Special Theorems
The Covariance of Two Random Variables
The Expected Value and Variance of Linear Functions of Random Variables
The Multinomial Probability Distribution
The Bivariate Normal Distribution (Optional)
Conditional Expectations
Summary
Functions of Random Variables
Introduction
Finding the Probability Distribution of a Function of Random Variables
The Method of Distribution Functions
The Method of Transformations
The Method of Moment-Generating Functions
Multivariable Transformations Using Jacobians (Optional)
Order Statistics
Summary
Sampling Distributions and the Central Limit Theorem
Introduction
Sampling Distributions Related to the Normal Distribution
The Central Limit Theorem
A Proof of the Central Limit Theorem (Optional)
The Normal Approximation to the Binomial Distribution
Summary
Estimation
Introduction
The Bias and Mean Square Error of Point Estimators
Some Common Unbiased Point Estimators
Evaluating the Goodness of a Point Estimator
Confidence Intervals
Large-Sample Confidence Intervals
Selecting the Sample Size
Small-Sample Confidence Intervals for μ and μ₁ - μ₂
Confidence Intervals for σ²
Summary
Properties of Point Estimators and Methods of Estimation
Introduction
Relative Efficiency
Consistency
Sufficiency
The Rao-Blackwell Theorem and Minimum-Variance Unbiased Estimation
The Method of Moments
The Method of Maximum Likelihood
Some Large-Sample Properties of MLEs (Optional)
Summary
Hypothesis Testing
Introduction
Elements of a Statistical Test
Common Large-Sample Tests
Calculating Type II Error Probabilities and Finding the Sample Size for the Z Test
Relationships Between Hypothesis-Testing Procedures and Confidence Intervals
Another Way to Report the Results of a Statistical Test: Attained Significance Levels or p-Values
Some Comments on the Theory of Hypothesis Testing
Small-Sample Hypothesis Testing for μ and μ₁ - μ₂
Testing Hypotheses Concerning Variances
Power of Tests and the Neyman-Pearson Lemma
Likelihood Ratio Tests
Summary
Linear Models and Estimation by Least Squares
Introduction
Linear Statistical Models
The Method of Least Squares
Properties of the Least Squares Estimators: Simple Linear Regression
Inferences Concerning the Parameters βᵢ
Inferences Concerning Linear Functions of the Model Parameters: Simple Linear Regression
Predicting a Particular Value of Y Using Simple Linear Regression
Correlation
Some Practical Examples
Fitting the Linear Model by Using Matrices
Linear Functions of the Model Parameters: Multiple Linear Regression
Inferences Concerning Linear Functions of the Model Parameters: Multiple Linear Regression
Predicting a Particular Value of Y Using Multiple Regression
A Test for H₀: β_(g+1) = β_(g+2) = ... = β_k = 0
Summary and Concluding Remarks
Considerations in Designing Experiments
The Elements Affecting the Information in a Sample
Designing Experiments to Increase Accuracy
The Matched Pairs Experiment
Some Elementary Experimental Designs
Summary
The Analysis of Variance
Introduction
The Analysis of Variance Procedure
Comparison of More than Two Means: Analysis of Variance for a One-Way Layout
An Analysis of Variance Table for a One-Way Layout
A Statistical Model for the One-Way Layout
Proof of Additivity of the Sums of Squares and E(MST) for a One-Way Layout (Optional)
Estimation in the One-Way Layout
A Statistical Model for the Randomized Block Design
The Analysis of Variance for a Randomized Block Design
Estimation in the Randomized Block Design
Selecting the Sample Size
Simultaneous Confidence Intervals for More than One Parameter
Analysis of Variance Using Linear Models
Summary
Analysis of Categorical Data
A Description of the Experiment
The Chi-Square Test
A Test of a Hypothesis Concerning Specified Cell Probabilities: A Goodness-of-Fit Test
Contingency Tables
r × c Tables with Fixed Row or Column Totals
Other Applications
Summary and Concluding Remarks
Nonparametric Statistics
Introduction
A General Two-Sample Shift Model
The Sign Test for a Matched Pairs Experiment
The Wilcoxon Signed-Rank Test for a Matched Pairs Experiment
The Use of Ranks for Comparing Two Population Distributions: Independent Random Samples
The Mann-Whitney U Test: Independent Random Samples
The Kruskal-Wallis Test for the One-Way Layout
The Friedman Test for Randomized Block Designs
The Runs Test: A Test for Randomness
Rank Correlation Coefficient
Some General Comments on Nonparametric Statistical Tests
Matrices and Other Useful Mathematical Results
Matrices and Matrix Algebra
Addition of Matrices
Multiplication of a Matrix by a Real Number
Matrix Multiplication
Identity Elements
The Inverse of a Matrix
The Transpose of a Matrix
A Matrix Expression for a System of Simultaneous Linear Equations
Inverting a Matrix
Solving a System of Simultaneous Linear Equations
Other Useful Mathematical Results
Common Probability Distributions, Means, Variances, and Moment-Generating Functions
Discrete Distributions
Continuous Distributions
Tables
Binomial Probabilities
Table of e⁻ˣ
Poisson Probabilities
Normal Curve Areas
Percentage Points of the t Distributions
Percentage Points of the χ² Distributions
Percentage Points of the F Distributions
Distribution Function of U
Critical Values of T in the Wilcoxon Matched-Pairs, Signed-Ranks Test
Distribution of the Total Number of Runs R in Samples of Size (n₁, n₂); P(R ≤ a)
Critical Values of Spearman's Rank Correlation Coefficient
Random Numbers
Answers to Exercises
Index