Statistical Analysis for the Social Sciences: An Interactive Approach

ISBN-10: 0205294936

ISBN-13: 9780205294930

Edition: 2001

Authors: Philip C. Abrami, Paul Cholmsky, Robert Gordon

Description:

This integrated book and CD-ROM package emphasizes the logic of statistical procedures, fundamental concepts, and the application of quantitative techniques to statistical problems. Every book is packaged with a free CD-ROM featuring Activities and Problem Generators that reinforce the key concepts students find most difficult. Users actively participate in experiments by directly manipulating data points, varying numerical values, and observing real-time changes to graphs and equations. Sets of questions structure the exploration. Problem Generators provide a wide variety of problems with worked solutions, helping students develop confidence in statistical analysis. Annotated icons link…

Book details

List price: $168.00
Copyright year: 2001
Publisher: Allyn & Bacon, Incorporated
Publication date: 11/13/2000
Binding: Paperback
Pages: 591
Size: 7.50" wide x 9.25" long x 1.00" tall
Weight: 2.486 lbs
Language: English

Robert Gordon has written for major publications in the U.S. and England and has contributed to several books. He produced the Al Green CD box set "Anthology", for which his liner notes were Grammy nominated. As a filmmaker, he directed the award-winning blues documentary "All Day and All Night", and his music video work has appeared on MTV, BET, and CMT. He is the author of a forthcoming biography of Muddy Waters and director of the companion documentary. He lives in Memphis with his wife and two daughters.

Preface
Introduction
Overview
Statistics
Variables and Variability
Preparing Data for Analysis
Putting It All Together
Key Terms
References
Problems
Research Methodology: A Primer
Overview
The Importance of Good Research Design
Statistical Analysis and the Big Picture
Research Ethics
Basics of Research Design
Experimental and Correlational Investigations
Two Basic Questions
Notational System
Types of Designs
Pre-Experimental Designs
One-Shot Case Study
The One-Group Pretest-Posttest Design
Static Group Comparison Design
Experimental versus Statistical Control
More on Internal Validity
Possible or Probable?
True Experimental Designs
Pretest-Posttest Control Group Design
Posttest-Only Control Group Design
External Validity
Quasi-Experimental Designs
Time Series Design
Nonequivalent Control Group Design
Measurement Issues
Scales of Measurement
Reliability and Validity
Tests and Self-Report Measures
Desirable Characteristics of Standardized Tests
Test Reliability
Test Validity
Other Characteristics of Standardized Tests
Putting It All Together
Key Terms
References
Problems
Organizing and Displaying Data
Overview
Why Organize and Display Data?
Ways of Organizing Data
Data Screening
Organizing the Data
Ranking
Percentages and Percentiles
Uses of Percentiles and Percentile Ranks
Grouping the Data
Selecting a Class Interval
Estimating Percentiles and Percentile Ranks from Grouped Frequency Distributions
Grouping: Advantages and Disadvantages
Crosstabulation
Displaying the Data
What Exactly Are Visual Displays?
Plots and Charts
Putting It All Together
Key Terms
References
Problems
Descriptive Statistics
Overview
Why Summarize the Data?
Summation Notation
Measures of Central Tendency
The Basics
Selecting a Measure of Central Tendency
Other Measures of Central Tendency
Shapes of Distributions
Measures of Dispersion
The Basics
Moments About the Mean
Measures of Bivariate Relationship
Putting It All Together
Measures of Central Tendency
Shapes of Distributions
Measures of Dispersion
Moments About the Mean
Measures of Bivariate Relationship
Key Terms
References
Problems
Building Blocks of Inferential Statistics: Probability, Chance, Variability, and Distributions
Overview
Probability: The Foundation of Inferential Statistics
Probability
Interpreting the Findings
Approaches to Probability
The Role of Probability Theory in Inferential Statistics
Samples and Populations
Variability
The Shape of Chance Variability: More Pieces of the Puzzle
The Binomial Distribution
Properties of the Normal Distribution
Binomial Distribution: The Normal Approximation
Normal Distribution: Some Other Important Properties
Areas Under the Normal Distribution
The Standard Normal Distribution and z-Scores
Other Issues in Understanding and Using the Normal Distribution
T-Scores
Putting It All Together
The Binomial Distribution
Properties of the Normal Distribution
T-Scores
Key Terms
References
Problems
Sampling Distributions
Overview
Basic Concepts in Statistical Inference
Key Terms in Statistical Inference
Using Sample Statistics to Estimate Population Parameters
Desirable Properties of Estimators
Good Estimators
Formulas for Samples, Populations, and Population Estimators
Sampling Distributions
Interval Estimation of the Mean
Hypothesis Testing
Hypothesis Testing Using the z-Test
Interval Estimation of the Mean Difference
Putting It All Together
Formulas for Samples, Populations, Population Estimators, Sampling Distributions, and Sampling Distribution Estimators
Using the Sampling Distribution for Hypothesis Testing
Key Terms
References
Problems
Statistical Issues in Hypothesis Testing
Overview
Steps in Hypothesis Testing
State the Null and Alternative Hypotheses
Select Alpha: The Probability Value for Significance Testing
Select the Appropriate Test Statistic
Compute the Calculated Value of the Test Statistic
Find the Critical Value of the Test Statistic
Compare the Calculated and Critical Values
An Important Caveat on the Six Steps in Hypothesis Testing
Devil's Advocate Example
z-Test Interval Estimation
One-Sided Confidence Intervals
Statistical Power
The Alternative Distribution
Steps in Estimating Power
Interpretation and Guidelines for Acceptable Statistical Power
Estimating the Power of Your Research
Ways to Increase Statistical Power
Other Considerations
Effect Size and Practical Importance
The Effect Size
Effect Size Calculation for the Devil's Advocate Scenario
Effect Sizes and Power
Is John Correct?
Guidelines for Using Statistics
Putting It All Together
Key Terms
References
Problems
Testing the Difference Between Two Independent Groups: The t-Test
Overview
What's Wrong with the z-Test?
Review and Application of the z-Test
The Contribution of William Gosset
The Separate Variance Model t-Test
Effect of Sample Size on the Critical Value
Degrees of Freedom
Finding the Correct Critical Value
Exact Probabilities
Confidence Intervals
Effect Size Calculations and Power
The Drill Press Example
The Pooled Variance Model t-Test
Pooled Standard Error
Pooled Variance Model t-Test
Confidence Intervals Based on the Pooled Variance Approach
Effect Size Calculations Based on the Pooled Variance Approach
Comparison of Two Approaches
Underlying Assumptions
The Variability Within Each Group Should Be Normally Distributed
Each Data Point, or Score, Should Be Independent of Every Other Data Point
The Variances of the Two Groups Should Be Equal or Homogeneous
Choosing Between the Separate and Pooled Variance Models
Guidelines for Choosing Between the Two Models
Putting It All Together
Key Terms
References
Problems
Testing the Difference Between Two or More Independent Groups: The Oneway Between-Groups Analysis of Variance
Overview
Omnibus Tests of Significance
Multilevel Independent Variables
Specific or General Hypotheses?
Partitioning Variability
The Simple Mathematics of Variance Partitioning
Total Variability
Between-Groups Variability
Within-Groups Variability
The F-Test
From Sums of Squares to Variances
Mean Squares
F as a Ratio of Variances
Reporting ANOVA Results
Underlying Assumptions
Assumptions
Effects of Assumption Violations
Effect Size Calculations and Power
Effect Size Calculations for Two Groups
Using R² and f to Estimate the General Effect Size
Another Method for Estimating General Effect Sizes: ω²
Power
Putting It All Together
Key Terms
References
Problems
A Proof that t² = F
Testing the Difference Between Two or More Independent Groups: Multiple Comparisons
Overview
Multiple Comparisons Basics
What Is a Comparison?
When Multiple Comparisons Should Be Avoided
Why Use Multiple Comparisons?
Planned Comparisons (Also Known as A Priori Comparisons)
Post Hoc Comparisons (Also Known as A Posteriori Comparisons)
Planned Comparisons
Specific Hypotheses
Rules for Evaluating Planned Comparisons
Symbol System
Valid Planned Comparisons
Independence of Planned Comparisons
Statistically Evaluating Planned Comparisons
Dealing with Variance Heterogeneity
Post Hoc Comparisons
Conceptual Unit for Error Rate
Tukey's HSD
Scheffe's S Method
Putting It All Together
Planned Comparisons
Post Hoc Comparisons
Key Terms
References
Problems
Analyzing More Than a Single Independent Variable: Factorial Between-Groups Analysis of Variance
Overview
Factorial Designs
The Simplest Case: A 2 × 2 Factorial
A Bit More on Main Effects and Interactions
Factorial Analysis of Variance
Subdividing Between-Groups Variability
Omnibus Hypotheses
The General Linear Model
Partitioning Variability in Twoway ANOVA
From SS to MS to F
Twoway Factorial ANOVA: Computational Example
Other Issues in Factorial ANOVA
Multiple Comparison and Simple Effect Tests
Overview
Planned Orthogonal Comparisons Procedures for Factorial ANOVA
Post Hoc Comparisons for Factorial ANOVA
Tests of Simple Effects
Probing Significant Simple Effects
Putting It All Together
Partitioning Variability in Twoway ANOVA
Key Terms
References
Problems
Within-Groups Designs: Analyzing Repeated Measures
Overview
Basics of Within-Groups Designs
Designs with a Single Treatment
Designs with More Than a Single Treatment
Other Uses of Within-Groups Analyses
The Advantages of Within-Groups Designs
The Disadvantages of Within-Groups Designs
Counterbalancing
Correlated or Dependent Samples t-Test
Review of the Independent Samples t-Test
Correlated Samples t-Test: Raw Score Method
Correlated Samples t-Test: Individual Difference Score Method
Effect Size and Power
Oneway Within-Groups ANOVA
Underlying Logic
Analyzing the Data with and Without the Dependencies
Computational Procedures for the Oneway Within-Groups ANOVA
Assumptions and Assumption Violations
Multiple Comparisons
Effect Size and Power
Mixed Designs
Underlying Logic
Computational Procedures for the Twoway Mixed Design ANOVA
Putting It All Together
Correlated Samples t-Test
Oneway Within-Groups ANOVA
Mixed Designs
Key Terms
References
Problems
Determining the Relationship Between Two Variables: Correlation
Overview
Correlation Basics
Similarities with Other Research Situations
Differences from Other Research Situations
Correlation and Causation
Oneway ANOVA versus Correlation: What Is the Difference?
Correlation: Scatterplots
Creating a Scatterplot
Interpreting a Scatterplot
The Pearson Product Moment Correlation
Describing Linear Relationships
Range of Values
The Contribution of Karl Pearson
Computing the Pearson Product Moment Correlation
The z-Score Method
The Covariation Method
Inferential Uses
Hypothesis Testing
Calculated and Critical Values
Parental Performance Example
Underlying Assumptions
Confidence Intervals
Skewed Distribution of r
Fisher's Transformation
Estimating Confidence Intervals
The Value of Confidence Intervals
Strength of Association
Coefficient of Determination
Parental Performance Example
Derivatives of the Pearson Product Moment Correlation
Some Cautions and Limitations
Restriction of Range
Attenuation Due to Measurement Error
Departures from Linearity
Outliers
Dealing with Missing Values
Putting It All Together
Computational Procedures
Cautions and Limitations
Key Terms
References
Problems
Determining the Relationship Between Two Variables: Simple Linear Regression
Overview
Some Simple Regression Basics
What Is the Difference Between Correlation and Regression?
Lines and Plots
Perfect Prediction
Slope
Intercept
Imperfect Prediction
The Best-Fit Straight Line
Relationship Between the Correlation Coefficient (r) and the Regression Coefficient (b')
The z-Score Method for Determining the Regression Equation
Statistical Tests for Simple Regression
Partitioning Variability
From Sums of Squares to the F-Test
Testing Significance Using the t-Test
Other Issues in Simple Linear Regression
Confidence Intervals
Strength of Association
Power
Underlying Assumptions
Using Standardized Residuals to Check the Data
Putting It All Together
Key Terms
References
Problems
Dealing with More Than a Single Predictor Variable: Multiple Linear Regression
Overview
The Logic of Multiple Linear Regression
The Multiple Regression Equation
Uncorrelated versus Correlated Predictors
The Multiple Regression Equation and Tests of Significance
Components of the Multiple Regression Equation
Tests of Significance
The Incremental Approach to Multiple Linear Regression
Hierarchical Regression
Stepwise Regression
Special Issues in Multiple Linear Regression
Outliers
Ill-Conditioned Data
Adjusted R²
Putting It All Together
The Multiple Regression Equation
Tests of Significance: Simultaneous Approach
Incremental Approach to Multiple Regression
Special Issues in Multiple Linear Regression
Key Terms
References
Problems
Nonparametric Statistical Tests
Overview
Why Nonparametric Statistical Tests?
Assumptions and Assumption Violations
Scales of Measurement
Advantages and Disadvantages
Chi-Square and the Analysis of Nominal Data
Requirements for Using Chi-Square
Frequencies and Categories
About Chi-Square
Goodness of Fit
Test of Independence
Dealing with Small Sample Sizes
Multiple Comparisons
Measures of Association or Effect Size
The Analysis of Ordinal Data
Mann-Whitney U-Test
Kruskal-Wallis Oneway ANOVA H-Test
Correlated Samples
Putting It All Together
Chi-Square and the Analysis of Nominal Data
The Analysis of Ordinal Data
Key Terms
References
Problems
Areas Under the Standard Normal Curve Corresponding to Given Values of z
Table of Random Numbers
Critical Values of the t-Distribution
Critical Values of the F Distribution
Power Tables for the Analysis of Variance
Percentage Points of the Studentized Range Statistic
Values of the Correlation Coefficient Required for Different Levels of Significance When H₀: ρ = 0
Values of Fisher's z_F for Values of r
Upper Percentage Points of the Chi-Square Distribution
Critical Values of Mann-Whitney's U
Critical Values of Wilcoxon's T
Answers
Index