Introduction to Econometrics

ISBN-10: 0321278879

ISBN-13: 9780321278876

Edition: 2nd 2007 (Revised)

Authors: James H. Stock, Mark W. Watson

Description:

To make econometrics relevant in an introductory course, interesting applications must motivate the theory, and the theory must match the applications. This text motivates the need for each econometric tool with a concrete application and keeps the assumptions simple enough to match that application.

Book details

List price: $210.00
Edition: 2nd
Copyright year: 2007
Publisher: Addison Wesley
Publication date: 7/21/2006
Binding: Hardcover
Pages: 840
Size: 7.50" wide x 9.50" long x 1.25" tall
Weight: 2.992 lbs
Language: English

Preface
Introduction and Review
Economic Questions and Data
Economic Questions We Examine
Does Reducing Class Size Improve Elementary School Education?
Is There Racial Discrimination in the Market for Home Loans?
How Much Do Cigarette Taxes Reduce Smoking?
What Will the Rate of Inflation Be Next Year?
Quantitative Questions, Quantitative Answers
Causal Effects and Idealized Experiments
Estimation of Causal Effects
Forecasting and Causality
Data: Sources and Types
Experimental versus Observational Data
Cross-Sectional Data
Time Series Data
Panel Data
Review of Probability
Random Variables and Probability Distributions
Probabilities, the Sample Space, and Random Variables
Probability Distribution of a Discrete Random Variable
Probability Distribution of a Continuous Random Variable
Expected Values, Mean, and Variance
The Expected Value of a Random Variable
The Standard Deviation and Variance
Mean and Variance of a Linear Function of a Random Variable
Other Measures of the Shape of a Distribution
Two Random Variables
Joint and Marginal Distributions
Conditional Distributions
Independence
Covariance and Correlation
The Mean and Variance of Sums of Random Variables
The Normal, Chi-Squared, Student t, and F Distributions
The Normal Distribution
The Chi-Squared Distribution
The Student t Distribution
The F Distribution
Random Sampling and the Distribution of the Sample Average
Random Sampling
The Sampling Distribution of the Sample Average
Large-Sample Approximations to Sampling Distributions
The Law of Large Numbers and Consistency
The Central Limit Theorem
Derivation of Results in Key Concept 2.3
Review of Statistics
Estimation of the Population Mean
Estimators and Their Properties
Properties of Ȳ
The Importance of Random Sampling
Hypothesis Tests Concerning the Population Mean
Null and Alternative Hypotheses
The p-Value
Calculating the p-Value When σ_Y Is Known
The Sample Variance, Sample Standard Deviation, and Standard Error
Calculating the p-Value When σ_Y Is Unknown
The t-Statistic
Hypothesis Testing with a Prespecified Significance Level
One-Sided Alternatives
Confidence Intervals for the Population Mean
Comparing Means from Different Populations
Hypothesis Tests for the Difference Between Two Means
Confidence Intervals for the Difference Between Two Population Means
Differences-of-Means Estimation of Causal Effects Using Experimental Data
The Causal Effect as a Difference of Conditional Expectations
Estimation of the Causal Effect Using Differences of Means
Using the t-Statistic When the Sample Size Is Small
The t-Statistic and the Student t Distribution
Use of the Student t Distribution in Practice
Scatterplot, the Sample Covariance, and the Sample Correlation
Scatterplots
Sample Covariance and Correlation
The U.S. Current Population Survey
Two Proofs That Ȳ Is the Least Squares Estimator of μ_Y
A Proof That the Sample Variance Is Consistent
Fundamentals of Regression Analysis
Linear Regression with One Regressor
The Linear Regression Model
Estimating the Coefficients of the Linear Regression Model
The Ordinary Least Squares Estimator
OLS Estimates of the Relationship Between Test Scores and the Student-Teacher Ratio
Why Use the OLS Estimator?
Measures of Fit
The R²
The Standard Error of the Regression
Application to the Test Score Data
The Least Squares Assumptions
The Conditional Distribution of u_i Given X_i Has a Mean of Zero
(X_i, Y_i), i = 1, ..., n, Are Independently and Identically Distributed
Large Outliers Are Unlikely
Use of the Least Squares Assumptions
The Sampling Distribution of the OLS Estimators
The Sampling Distribution of β̂₀ and β̂₁
Conclusion
The California Test Score Data Set
Derivation of the OLS Estimators
Sampling Distribution of the OLS Estimator
Regression with a Single Regressor: Hypothesis Tests and Confidence Intervals
Testing Hypotheses About One of the Regression Coefficients
Two-Sided Hypotheses Concerning β₁
One-Sided Hypotheses Concerning β₁
Testing Hypotheses About the Intercept β₀
Confidence Intervals for a Regression Coefficient
Regression When X Is a Binary Variable
Interpretation of the Regression Coefficients
Heteroskedasticity and Homoskedasticity
What Are Heteroskedasticity and Homoskedasticity?
Mathematical Implications of Homoskedasticity
What Does This Mean in Practice?
The Theoretical Foundations of Ordinary Least Squares
Linear Conditionally Unbiased Estimators and the Gauss-Markov Theorem
Regression Estimators Other Than OLS
Using the t-Statistic in Regression When the Sample Size Is Small
The t-Statistic and the Student t Distribution
Use of the Student t Distribution in Practice
Conclusion
Formulas for OLS Standard Errors
The Gauss-Markov Conditions and a Proof of the Gauss-Markov Theorem
Linear Regression with Multiple Regressors
Omitted Variable Bias
Definition of Omitted Variable Bias
A Formula for Omitted Variable Bias
Addressing Omitted Variable Bias by Dividing the Data into Groups
The Multiple Regression Model
The Population Regression Line
The Population Multiple Regression Model
The OLS Estimator in Multiple Regression
The OLS Estimator
Application to Test Scores and the Student-Teacher Ratio
Measures of Fit in Multiple Regression
The Standard Error of the Regression (SER)
The R²
The "Adjusted R²"
Application to Test Scores
The Least Squares Assumptions in Multiple Regression
The Conditional Distribution of u_i Given X_1i, X_2i, ..., X_ki Has a Mean of Zero
(X_1i, X_2i, ..., X_ki, Y_i), i = 1, ..., n, Are i.i.d.
Large Outliers Are Unlikely
No Perfect Multicollinearity
The Distribution of the OLS Estimators in Multiple Regression
Multicollinearity
Examples of Perfect Multicollinearity
Imperfect Multicollinearity
Conclusion
Derivation of Equation (6.1)
Distribution of the OLS Estimators When There Are Two Regressors and Homoskedastic Errors
Hypothesis Tests and Confidence Intervals in Multiple Regression
Hypothesis Tests and Confidence Intervals for a Single Coefficient
Standard Errors for the OLS Estimators
Hypothesis Tests for a Single Coefficient
Confidence Intervals for a Single Coefficient
Application to Test Scores and the Student-Teacher Ratio
Tests of Joint Hypotheses
Testing Hypotheses on Two or More Coefficients
The F-Statistic
Application to Test Scores and the Student-Teacher Ratio
The Homoskedasticity-Only F-Statistic
Testing Single Restrictions Involving Multiple Coefficients
Confidence Sets for Multiple Coefficients
Model Specification for Multiple Regression
Omitted Variable Bias in Multiple Regression
Model Specification in Theory and in Practice
Interpreting the R² and the Adjusted R² in Practice
Analysis of the Test Score Data Set
Conclusion
The Bonferroni Test of a Joint Hypothesis
Nonlinear Regression Functions
A General Strategy for Modeling Nonlinear Regression Functions
Test Scores and District Income
The Effect on Y of a Change in X in Nonlinear Specifications
A General Approach to Modeling Nonlinearities Using Multiple Regression
Nonlinear Functions of a Single Independent Variable
Polynomials
Logarithms
Polynomial and Logarithmic Models of Test Scores and District Income
Interactions Between Independent Variables
Interactions Between Two Binary Variables
Interactions Between a Continuous and a Binary Variable
Interactions Between Two Continuous Variables
Nonlinear Effects on Test Scores of the Student-Teacher Ratio
Discussion of Regression Results
Summary of Findings
Conclusion
Regression Functions That Are Nonlinear in the Parameters
Assessing Studies Based on Multiple Regression
Internal and External Validity
Threats to Internal Validity
Threats to External Validity
Threats to Internal Validity of Multiple Regression Analysis
Omitted Variable Bias
Misspecification of the Functional Form of the Regression Function
Errors-in-Variables
Sample Selection
Simultaneous Causality
Sources of Inconsistency of OLS Standard Errors
Internal and External Validity When the Regression Is Used for Forecasting
Using Regression Models for Forecasting
Assessing the Validity of Regression Models for Forecasting
Example: Test Scores and Class Size
External Validity
Internal Validity
Discussion and Implications
Conclusion
The Massachusetts Elementary School Testing Data
Further Topics in Regression Analysis
Regression with Panel Data
Panel Data
Example: Traffic Deaths and Alcohol Taxes
Panel Data with Two Time Periods: "Before and After" Comparisons
Fixed Effects Regression
The Fixed Effects Regression Model
Estimation and Inference
Application to Traffic Deaths
Regression with Time Fixed Effects
Time Effects Only
Both Entity and Time Fixed Effects
The Fixed Effects Regression Assumptions and Standard Errors for Fixed Effects Regression
The Fixed Effects Regression Assumptions
Standard Errors for Fixed Effects Regression
Drunk Driving Laws and Traffic Deaths
Conclusion
The State Traffic Fatality Data Set
Standard Errors for Fixed Effects Regression with Serially Correlated Errors
Regression with a Binary Dependent Variable
Binary Dependent Variables and the Linear Probability Model
Binary Dependent Variables
The Linear Probability Model
Probit and Logit Regression
Probit Regression
Logit Regression
Comparing the Linear Probability, Probit, and Logit Models
Estimation and Inference in the Logit and Probit Models
Nonlinear Least Squares Estimation
Maximum Likelihood Estimation
Measures of Fit
Application to the Boston HMDA Data
Summary
The Boston HMDA Data Set
Maximum Likelihood Estimation
Other Limited Dependent Variable Models
Instrumental Variables Regression
The IV Estimator with a Single Regressor and a Single Instrument
The IV Model and Assumptions
The Two Stage Least Squares Estimator
Why Does IV Regression Work?
The Sampling Distribution of the TSLS Estimator
Application to the Demand for Cigarettes
The General IV Regression Model
TSLS in the General IV Model
Instrument Relevance and Exogeneity in the General IV Model
The IV Regression Assumptions and Sampling Distribution of the TSLS Estimator
Inference Using the TSLS Estimator
Application to the Demand for Cigarettes
Checking Instrument Validity
Instrument Relevance
Instrument Exogeneity
Application to the Demand for Cigarettes
Where Do Valid Instruments Come From?
Three Examples
Conclusion
The Cigarette Consumption Panel Data Set
Derivation of the Formula for the TSLS Estimator in Equation (12.4)
Large-Sample Distribution of the TSLS Estimator
Large-Sample Distribution of the TSLS Estimator When the Instrument Is Not Valid
Instrumental Variables Analysis with Weak Instruments
Experiments and Quasi-Experiments
Idealized Experiments and Causal Effects
Ideal Randomized Controlled Experiments
The Differences Estimator
Potential Problems with Experiments in Practice
Threats to Internal Validity
Threats to External Validity
Regression Estimators of Causal Effects Using Experimental Data
The Differences Estimator with Additional Regressors
The Differences-in-Differences Estimator
Estimation of Causal Effects for Different Groups
Estimation When There Is Partial Compliance
Testing for Randomization
Experimental Estimates of the Effect of Class Size Reductions
Experimental Design
Analysis of the STAR Data
Comparison of the Observational and Experimental Estimates of Class Size Effects
Quasi-Experiments
Examples
Econometric Methods for Analyzing Quasi-Experiments
Potential Problems with Quasi-Experiments
Threats to Internal Validity
Threats to External Validity
Experimental and Quasi-Experimental Estimates in Heterogeneous Populations
Population Heterogeneity: Whose Causal Effect?
OLS with Heterogeneous Causal Effects
IV Regression with Heterogeneous Causal Effects
Conclusion
The Project STAR Data Set
Extension of the Differences-in-Differences Estimator to Multiple Time Periods
Conditional Mean Independence
IV Estimation When the Causal Effect Varies Across Individuals
Regression Analysis of Economic Time Series Data
Introduction to Time Series Regression and Forecasting
Using Regression Models for Forecasting
Introduction to Time Series Data and Serial Correlation
The Rates of Inflation and Unemployment in the United States
Lags, First Differences, Logarithms, and Growth Rates
Autocorrelation
Other Examples of Economic Time Series
Autoregressions
The First Order Autoregressive Model
The pth Order Autoregressive Model
Time Series Regression with Additional Predictors and the Autoregressive Distributed Lag Model
Forecasting Changes in the Inflation Rate Using Past Unemployment Rates
Stationarity
Time Series Regression with Multiple Predictors
Forecast Uncertainty and Forecast Intervals
Lag Length Selection Using Information Criteria
Determining the Order of an Autoregression
Lag Length Selection in Time Series Regression with Multiple Predictors
Nonstationarity I: Trends
What Is a Trend?
Problems Caused by Stochastic Trends
Detecting Stochastic Trends: Testing for a Unit AR Root
Avoiding the Problems Caused by Stochastic Trends
Nonstationarity II: Breaks
What Is a Break?
Testing for Breaks
Pseudo Out-of-Sample Forecasting
Avoiding the Problems Caused by Breaks
Conclusion
Time Series Data Used in Chapter 14
Stationarity in the AR(1) Model
Lag Operator Notation
ARMA Models
Consistency of the BIC Lag Length Estimator
Estimation of Dynamic Causal Effects
An Initial Taste of the Orange Juice Data
Dynamic Causal Effects
Causal Effects and Time Series Data
Two Types of Exogeneity
Estimation of Dynamic Causal Effects with Exogenous Regressors
The Distributed Lag Model Assumptions
Autocorrelated u_t, Standard Errors, and Inference
Dynamic Multipliers and Cumulative Dynamic Multipliers
Heteroskedasticity- and Autocorrelation-Consistent Standard Errors
Distribution of the OLS Estimator with Autocorrelated Errors
HAC Standard Errors
Estimation of Dynamic Causal Effects with Strictly Exogenous Regressors
The Distributed Lag Model with AR(1) Errors
OLS Estimation of the ADL Model
GLS Estimation
The Distributed Lag Model with Additional Lags and AR(p) Errors
Orange Juice Prices and Cold Weather
Is Exogeneity Plausible? Some Examples
U.S. Income and Australian Exports
Oil Prices and Inflation
Monetary Policy and Inflation
The Phillips Curve
Conclusion
The Orange Juice Data Set
The ADL Model and Generalized Least Squares in Lag Operator Notation
Additional Topics in Time Series Regression
Vector Autoregressions
The VAR Model
A VAR Model of the Rates of Inflation and Unemployment
Multiperiod Forecasts
Iterated Multiperiod Forecasts
Direct Multiperiod Forecasts
Which Method Should You Use?
Orders of Integration and the DF-GLS Unit Root Test
Other Models of Trends and Orders of Integration
The DF-GLS Test for a Unit Root
Why Do Unit Root Tests Have Non-normal Distributions?
Cointegration
Cointegration and Error Correction
How Can You Tell Whether Two Variables Are Cointegrated?
Estimation of Cointegrating Coefficients
Extension to Multiple Cointegrated Variables
Application to Interest Rates
Volatility Clustering and Autoregressive Conditional Heteroskedasticity
Volatility Clustering
Autoregressive Conditional Heteroskedasticity
Application to Stock Price Volatility
Conclusion
U.S. Financial Data Used in Chapter 16
The Econometric Theory of Regression Analysis
The Theory of Linear Regression with One Regressor
The Extended Least Squares Assumptions and the OLS Estimator
The Extended Least Squares Assumptions
The OLS Estimator
Fundamentals of Asymptotic Distribution Theory
Convergence in Probability and the Law of Large Numbers
The Central Limit Theorem and Convergence in Distribution
Slutsky's Theorem and the Continuous Mapping Theorem
Application to the t-Statistic Based on the Sample Mean
Asymptotic Distribution of the OLS Estimator and t-Statistic
Consistency and Asymptotic Normality of the OLS Estimators
Consistency of Heteroskedasticity-Robust Standard Errors
Asymptotic Normality of the Heteroskedasticity-Robust t-Statistic
Exact Sampling Distributions When the Errors Are Normally Distributed
Distribution of β̂₁ with Normal Errors
Distribution of the Homoskedasticity-only t-Statistic
Weighted Least Squares
WLS with Known Heteroskedasticity
WLS with Heteroskedasticity of Known Functional Form
Heteroskedasticity-Robust Standard Errors or WLS?
The Normal and Related Distributions and Moments of Continuous Random Variables
Two Inequalities
The Theory of Multiple Regression
The Linear Multiple Regression Model and OLS Estimator in Matrix Form
The Multiple Regression Model in Matrix Notation
The Extended Least Squares Assumptions
The OLS Estimator
Asymptotic Distribution of the OLS Estimator and t-Statistic
The Multivariate Central Limit Theorem
Asymptotic Normality of β̂
Heteroskedasticity-Robust Standard Errors
Confidence Intervals for Predicted Effects
Asymptotic Distribution of the t-Statistic
Tests of Joint Hypotheses
Joint Hypotheses in Matrix Notation
Asymptotic Distribution of the F-Statistic
Confidence Sets for Multiple Coefficients
Distribution of Regression Statistics with Normal Errors
Matrix Representations of OLS Regression Statistics
Distribution of β̂ with Normal Errors
Distribution of [Characters not reproducible]
Homoskedasticity-Only Standard Errors
Distribution of the t-Statistic
Distribution of the F-Statistic
Efficiency of the OLS Estimator with Homoskedastic Errors
The Gauss-Markov Conditions for Multiple Regression
Linear Conditionally Unbiased Estimators
The Gauss-Markov Theorem for Multiple Regression
Generalized Least Squares
The GLS Assumptions
GLS When Ω Is Known
GLS When Ω Contains Unknown Parameters
The Zero Conditional Mean Assumption and GLS
Instrumental Variables and Generalized Method of Moments Estimation
The IV Estimator in Matrix Form
Asymptotic Distribution of the TSLS Estimator
Properties of TSLS When the Errors Are Homoskedastic
Generalized Method of Moments Estimation in Linear Models
Summary of Matrix Algebra
Multivariate Distributions
Derivation of the Asymptotic Distribution of β̂
Derivations of Exact Distributions of OLS Test Statistics with Normal Errors
Proof of the Gauss-Markov Theorem for Multiple Regression
Proof of Selected Results for IV and GMM Estimation
Appendix
References
Answers to "Review the Concepts" Questions
Glossary
Index