Introduction to Econometrics 3rd Edition by James H. Stock – Ebook PDF Instant Download/Delivery. ISBN: 0134448049, 9780134448046
Full download of Introduction to Econometrics 3rd Edition is available after payment.
Product details:
ISBN-10 : 0134448049
ISBN-13 : 9780134448046
Authors: James H. Stock, Mark W. Watson
Learn more about modern econometrics with this comprehensive introduction to the field, featuring engaging applications that bring contemporary theory to life. Introduction to Econometrics, 3rd Edition, by Stock and Watson is an introductory guide that connects modern theory with motivating, engaging applications. The text ensures you get a solid grasp of this challenging subject’s theoretical background, building on the philosophy that applications should drive the theory, not the other way around. The latest edition maintains its focus on empirical analysis, incorporating real-world questions and data and presenting results directly relevant to the applications. The text contextualizes the study of econometrics with a comprehensive introduction and review of economics, data, and statistics before proceeding to an extensive treatment of regression analysis, its variables, and its parameters.
Introduction to Econometrics 3rd Edition Table of Contents:
Part One Introduction and Review
Chapter 1 Economic Questions and Data
1.1 Economic Questions We Examine
Question #1: Does Reducing Class Size Improve Elementary School Education?
Question #2: Is There Racial Discrimination in the Market for Home Loans?
Question #3: How Much Do Cigarette Taxes Reduce Smoking?
Question #4: By How Much Will U.S. GDP Grow Next Year?
Quantitative Questions, Quantitative Answers
1.2 Causal Effects and Idealized Experiments
Estimation of Causal Effects
Prediction, Forecasting, and Causality
1.3 Data: Sources and Types
Experimental versus Observational Data
Cross-Sectional Data
Time Series Data
Panel Data
Summary
Key Terms
Review the Concepts
Chapter 2 Review of Probability
2.1 Random Variables and Probability Distributions
Probabilities, the Sample Space, and Random Variables
Probabilities and outcomes
The sample space and events
Random variables
Probability Distribution of a Discrete Random Variable
Probability distribution
Probabilities of events
Cumulative probability distribution
The Bernoulli distribution
Probability Distribution of a Continuous Random Variable
Cumulative probability distribution
Probability density function
2.2 Expected Values, Mean, and Variance
The Expected Value of a Random Variable
Expected value
Expected value of a Bernoulli random variable
Expected value of a continuous random variable
The Standard Deviation and Variance
Variance of a Bernoulli random variable
Mean and Variance of a Linear Function of a Random Variable
Other Measures of the Shape of a Distribution
Skewness
Kurtosis
Moments
Standardized Random Variables
2.3 Two Random Variables
Joint and Marginal Distributions
Joint distribution
Marginal probability distribution
Conditional Distributions
Conditional distribution
Conditional expectation
The law of iterated expectations
Conditional variance
Bayes’ rule
The conditional mean is the minimum mean squared error prediction
Independence
Covariance and Correlation
Covariance
Correlation
Correlation and conditional mean
The Mean and Variance of Sums of Random Variables
2.4 The Normal, Chi-Squared, Student t, and F Distributions
The Normal Distribution
The multivariate normal distribution
The Chi-Squared Distribution
The Student t Distribution
The F Distribution
2.5 Random Sampling and the Distribution of the Sample Average
Random Sampling
Simple random sampling
i.i.d. draws
The Sampling Distribution of the Sample Average
Mean and variance of Y¯
Sampling distribution of Y¯ when Y is normally distributed
2.6 Large-Sample Approximations to Sampling Distributions
The Law of Large Numbers and Consistency
The Central Limit Theorem
Summary
Key Terms
Review the Concepts
Exercises
Empirical Exercise
Appendix 2.1 Derivation of Results in Key Concept 2.3
Appendix 2.2 The Conditional Mean as the Minimum Mean Squared Error Predictor
Chapter 3 Review of Statistics
3.1 Estimation of the Population Mean
Estimators and Their Properties
Estimators
Unbiasedness
Consistency
Variance and efficiency
Properties of Y¯
Bias and consistency
Efficiency
Y¯ is the least squares estimator of μY
The Importance of Random Sampling
3.2 Hypothesis Tests Concerning the Population Mean
Null and Alternative Hypotheses
The p-Value
Calculating the p-Value When σY Is Known
The Sample Variance, Sample Standard Deviation, and Standard Error
The sample variance and standard deviation
Consistency of the sample variance
The standard error of Y¯
Calculating the p-Value When σY Is Unknown
The t-Statistic
Large-sample distribution of the t-statistic
Hypothesis Testing with a Prespecified Significance Level
Hypothesis tests using a fixed significance level
What significance level should you use in practice?
One-Sided Alternatives
3.3 Confidence Intervals for the Population Mean
3.4 Comparing Means from Different Populations
Hypothesis Tests for the Difference Between Two Means
Confidence Intervals for the Difference Between Two Population Means
3.5 Differences-of-Means Estimation of Causal Effects Using Experimental Data
The Causal Effect as a Difference of Conditional Expectations
Estimation of the Causal Effect Using Differences of Means
3.6 Using the t-Statistic When the Sample Size Is Small
The t-Statistic and the Student t Distribution
The t-statistic testing the mean
The t-statistic testing differences of means
Use of the Student t Distribution in Practice
3.7 Scatterplots, the Sample Covariance, and the Sample Correlation
Scatterplots
Sample Covariance and Correlation
Consistency of the sample covariance and correlation
Example
Summary
Key Terms
Review the Concepts
Exercises
Empirical Exercises
Appendix 3.1 The U.S. Current Population Survey
Appendix 3.2 Two Proofs That Y¯ Is the Least Squares Estimator of μY
Calculus Proof
Noncalculus Proof
Appendix 3.3 A Proof That the Sample Variance Is Consistent
Part Two Fundamentals of Regression Analysis
Chapter 4 Linear Regression with One Regressor
4.1 The Linear Regression Model
4.2 Estimating the Coefficients of the Linear Regression Model
The Ordinary Least Squares Estimator
OLS Estimates of the Relationship Between Test Scores and the Student–Teacher Ratio
Why Use the OLS Estimator?
4.3 Measures of Fit and Prediction Accuracy
The R2
The Standard Error of the Regression
Prediction Using OLS
Application to the Test Score Data
4.4 The Least Squares Assumptions for Causal Inference
Assumption 1: The Conditional Distribution of ui Given Xi Has a Mean of Zero
The conditional mean of u in a randomized controlled experiment
Correlation and conditional mean
Assumption 2: (Xi, Yi), i = 1, …, n, Are Independently and Identically Distributed
Assumption 3: Large Outliers Are Unlikely
Use of the Least Squares Assumptions
4.5 The Sampling Distribution of the OLS Estimators
4.6 Conclusion
Summary
Key Terms
Review the Concepts
Exercises
Empirical Exercises
Appendix 4.1 The California Test Score Data Set
Appendix 4.2 Derivation of the OLS Estimators
Appendix 4.3 Sampling Distribution of the OLS Estimator
Representation of β^1 in Terms of the Regressors and Errors
Proof That β^1 Is Unbiased
Large-Sample Normal Distribution of the OLS Estimator
Some Additional Algebraic Facts About OLS
Appendix 4.4 The Least Squares Assumptions for Prediction
Chapter 5 Regression with a Single Regressor: Hypothesis Tests and Confidence Intervals
5.1 Testing Hypotheses About One of the Regression Coefficients
Two-Sided Hypotheses Concerning β1
Testing hypotheses about the population mean
Testing hypotheses about the slope β1
Reporting regression equations and application to test scores
One-Sided Hypotheses Concerning β1
When should a one-sided test be used?
Application to test scores
Testing Hypotheses About the Intercept β0
5.2 Confidence Intervals for a Regression Coefficient
5.3 Regression When X Is a Binary Variable
Interpretation of the Regression Coefficients
Hypothesis tests and confidence intervals
Application to test scores
5.4 Heteroskedasticity and Homoskedasticity
What Are Heteroskedasticity and Homoskedasticity?
Definitions of heteroskedasticity and homoskedasticity
Example
Mathematical Implications of Homoskedasticity
The OLS estimators remain unbiased and asymptotically normal
Efficiency of the OLS estimator when the errors are homoskedastic
Homoskedasticity-only variance formula
What Does This Mean in Practice?
Which is more realistic, heteroskedasticity or homoskedasticity?
Practical implications
*5.5 The Theoretical Foundations of Ordinary Least Squares
Linear Conditionally Unbiased Estimators and the Gauss–Markov Theorem
Linear conditionally unbiased estimators
The Gauss–Markov theorem
Limitations of the Gauss–Markov theorem
Regression Estimators Other Than OLS
The weighted least squares estimator
The least absolute deviations estimator
*5.6 Using the t-Statistic in Regression When the Sample Size Is Small
The t-Statistic and the Student t Distribution
Use of the Student t Distribution in Practice
5.7 Conclusion
Summary
Key Terms
Review the Concepts
Exercises
Empirical Exercises
Appendix 5.1 Formulas for OLS Standard Errors
Heteroskedasticity-Robust Standard Errors
Homoskedasticity-Only Variances
Homoskedasticity-Only Standard Errors
Appendix 5.2 The Gauss–Markov Conditions and a Proof of the Gauss–Markov Theorem
The Gauss–Markov Conditions
The OLS Estimator β^1 Is a Linear Conditionally Unbiased Estimator
Proof of the Gauss–Markov Theorem
The Gauss–Markov Theorem When X Is Nonrandom
The Sample Average Is the Efficient Linear Estimator of E(Y)
Chapter 6 Linear Regression with Multiple Regressors
6.1 Omitted Variable Bias
Definition of Omitted Variable Bias
Example 1: Percentage of English learners
Example 2: Time of day of the test
Example 3: Parking lot space per pupil
Omitted variable bias and the first least squares assumption
A Formula for Omitted Variable Bias
Addressing Omitted Variable Bias by Dividing the Data into Groups
6.2 The Multiple Regression Model
The Population Regression Line
The Population Multiple Regression Model
6.3 The OLS Estimator in Multiple Regression
The OLS Estimator
Application to Test Scores and the Student–Teacher Ratio
6.4 Measures of Fit in Multiple Regression
The Standard Error of the Regression (SER)
The R2
The Adjusted R2
Application to Test Scores
Using the R2 and adjusted R2
6.5 The Least Squares Assumptions for Causal Inference in Multiple Regression
Assumption 1: The Conditional Distribution of ui Given X1i, X2i, …, Xki Has a Mean of Zero
Assumption 2: (X1i, X2i, …, Xki, Yi), i = 1, …, n, Are i.i.d.
Assumption 3: Large Outliers Are Unlikely
Assumption 4: No Perfect Multicollinearity
6.6 The Distribution of the OLS Estimators in Multiple Regression
6.7 Multicollinearity
Examples of Perfect Multicollinearity
Example 1: Fraction of English learners
Example 2: “Not very small” classes
Example 3: Percentage of English speakers
The dummy variable trap
Solutions to perfect multicollinearity
Imperfect Multicollinearity
6.8 Control Variables and Conditional Mean Independence
Control Variables and Conditional Mean Independence
6.9 Conclusion
Summary
Key Terms
Review the Concepts
Exercises
Empirical Exercises
Appendix 6.1 Derivation of Equation (6.1)
Appendix 6.2 Distribution of the OLS Estimators When There Are Two Regressors and Homoskedastic Errors
Appendix 6.3 The Frisch–Waugh Theorem
Appendix 6.4 The Least Squares Assumptions for Prediction with Multiple Regressors
Appendix 6.5 Distribution of OLS Estimators in Multiple Regression with Control Variables
Chapter 7 Hypothesis Tests and Confidence Intervals in Multiple Regression
7.1 Hypothesis Tests and Confidence Intervals for a Single Coefficient
Standard Errors for the OLS Estimators
Hypothesis Tests for a Single Coefficient
Confidence Intervals for a Single Coefficient
Application to Test Scores and the Student–Teacher Ratio
Adding expenditures per pupil to the equation
7.2 Tests of Joint Hypotheses
Testing Hypotheses on Two or More Coefficients
Joint null hypotheses
Why can’t I just test the individual coefficients one at a time?
The F-Statistic
The F-Statistic with q = 2 Restrictions
The F-statistic with q restrictions
Computing the heteroskedasticity-robust F-statistic in statistical software
Computing the p-value using the F-statistic
The overall regression F-statistic
The F-statistic when q = 1
Application to Test Scores and the Student–Teacher Ratio
The Homoskedasticity-Only F-Statistic
Using the homoskedasticity-only F-statistic when n is small
Application to test scores and the student–teacher ratio
7.3 Testing Single Restrictions Involving Multiple Coefficients
7.4 Confidence Sets for Multiple Coefficients
7.5 Model Specification for Multiple Regression
Model Specification and Choosing Control Variables
Interpreting the R2 and the Adjusted R2 in Practice
7.6 Analysis of the Test Score Data Set
7.7 Conclusion
Summary
Key Terms
Review the Concepts
Exercises
Empirical Exercises
Appendix 7.1 The Bonferroni Test of a Joint Hypothesis
Bonferroni’s Inequality
Bonferroni Tests
Application to Test Scores
Chapter 8 Nonlinear Regression Functions
8.1 A General Strategy for Modeling Nonlinear Regression Functions
Test Scores and District Income
The Effect on Y of a Change in X in Nonlinear Specifications
A general formula for a nonlinear population regression function
The effect on Y of a change in X1
Application to test scores and district income
Standard errors of estimated effects
A comment on interpreting coefficients in nonlinear specifications
A General Approach to Modeling Nonlinearities Using Multiple Regression
8.2 Nonlinear Functions of a Single Independent Variable
Polynomials
Testing the null hypothesis that the population regression function is linear
Which degree polynomial should I use?
Application to district income and test scores
Interpretation of coefficients in polynomial regression models
Logarithms
The exponential function and the natural logarithm
Logarithms and percentages
The three logarithmic regression models
Case I: X is in logarithms, Y is not
Case II: Y is in logarithms, X is not
Case III: Both X and Y are in logarithms
A difficulty with comparing logarithmic specifications
Computing predicted values of Y when Y is in logarithms
Polynomial and Logarithmic Models of Test Scores and District Income
Polynomial specifications
Logarithmic specifications
Comparing the cubic and linear-log specifications
8.3 Interactions Between Independent Variables
Interactions Between Two Binary Variables
Application to the student–teacher ratio and the percentage of English learners
Interactions Between a Continuous and a Binary Variable
Application to the student–teacher ratio and the percentage of English learners
Interactions Between Two Continuous Variables
Application to the student–teacher ratio and the percentage of English learners
8.4 Nonlinear Effects on Test Scores of the Student–Teacher Ratio
Discussion of Regression Results
Summary of Findings
8.5 Conclusion
Summary
Key Terms
Review the Concepts
Exercises
Empirical Exercises
Appendix 8.1 Regression Functions That Are Nonlinear in the Parameters
Functions That Are Nonlinear in the Parameters
Logistic curve
Negative exponential growth
General functions that are nonlinear in the parameters
Nonlinear Least Squares Estimation
Application to the Test Score–District Income Relation
Appendix 8.2 Slopes and Elasticities for Nonlinear Regression Functions
Chapter 9 Assessing Studies Based on Multiple Regression
9.1 Internal and External Validity
Threats to Internal Validity
Threats to External Validity
Differences in populations
Differences in settings
Application to test scores and the student–teacher ratio
How to assess the external validity of a study
How to design an externally valid study
9.2 Threats to Internal Validity of Multiple Regression Analysis
Omitted Variable Bias
Solutions to omitted variable bias when the variable is observed or there are adequate control variables
Solutions to omitted variable bias when adequate control variables are not available
Misspecification of the Functional Form of the Regression Function
Solutions to functional form misspecification
Measurement Error and Errors-in-Variables Bias
Measurement error in Y
Solutions to errors-in-variables bias
Missing Data and Sample Selection
Solutions to selection bias
Simultaneous Causality
Solutions to simultaneous causality bias
Sources of Inconsistency of OLS Standard Errors
Heteroskedasticity
Correlation of the error term across observations
9.3 Internal and External Validity When the Regression Is Used for Prediction
9.4 Example: Test Scores and Class Size
External Validity
Comparison of the California and Massachusetts data
Test scores and average district income
Multiple regression results
Comparison of Massachusetts and California results
Internal Validity
Omitted variables
Functional form
Errors in variables
Sample selection
Simultaneous causality
Heteroskedasticity and correlation of the error term across observations
Discussion and Implications
9.5 Conclusion
Summary
Key Terms
Review the Concepts
Exercises
Empirical Exercises
Appendix 9.1 The Massachusetts Elementary School Testing Data
Part Three Further Topics in Regression Analysis
Chapter 10 Regression with Panel Data
10.1 Panel Data
Example: Traffic Deaths and Alcohol Taxes
10.2 Panel Data with Two Time Periods: “Before and After” Comparisons
10.3 Fixed Effects Regression
The Fixed Effects Regression Model
Extension to multiple X’s
Estimation and Inference
The “entity-demeaned” OLS algorithm
The “before and after” (differences) regression versus the binary variables specification
The sampling distribution, standard errors, and statistical inference
Application to Traffic Deaths
10.4 Regression with Time Fixed Effects
Time Effects Only
Both Entity and Time Fixed Effects
Estimation
Application to traffic deaths
10.5 The Fixed Effects Regression Assumptions and Standard Errors for Fixed Effects Regression
The Fixed Effects Regression Assumptions
Standard Errors for Fixed Effects Regression
10.6 Drunk Driving Laws and Traffic Deaths
10.7 Conclusion
Summary
Key Terms
Review the Concepts
Exercises
Empirical Exercises
Appendix 10.1 The State Traffic Fatality Data Set
Appendix 10.2 Standard Errors for Fixed Effects Regression
The Asymptotic Distribution of the Fixed Effects Estimator with Large n
The fixed effects estimator
Distribution and standard errors when n is large
Why isn’t the usual heteroskedasticity-robust estimator of Chapter 5 valid for panel data?
Extensions: Other applications of clustered standard errors
Distribution and Standard Errors When n Is Small
Chapter 11 Regression with a Binary Dependent Variable
11.1 Binary Dependent Variables and the Linear Probability Model
Binary Dependent Variables
The Linear Probability Model
Application to the Boston HMDA data
Shortcomings of the linear probability model
11.2 Probit and Logit Regression
Probit Regression
Probit regression with a single regressor
Probit regression with multiple regressors
Effect of a change in X
Application to the mortgage data
Estimation of the probit coefficients
Logit Regression
The logit regression model
Application to the Boston HMDA data
Comparing the Linear Probability, Probit, and Logit Models
11.3 Estimation and Inference in the Logit and Probit Models
Nonlinear Least Squares Estimation
Maximum Likelihood Estimation
Statistical inference based on the MLE
Measures of Fit
11.4 Application to the Boston HMDA Data
11.5 Conclusion
Summary
Key Terms
Review the Concepts
Exercises
Empirical Exercises
Appendix 11.1 The Boston HMDA Data Set
Appendix 11.2 Maximum Likelihood Estimation
MLE for n i.i.d. Bernoulli Random Variables
MLE for the Probit Model
MLE for the Logit Model
Pseudo-R2
Standard Errors for Predicted Probabilities
Appendix 11.3 Other Limited Dependent Variable Models
Censored and Truncated Regression Models
Sample Selection Models
Count Data
Ordered Responses
Discrete Choice Data
Chapter 12 Instrumental Variables Regression
12.1 The IV Estimator with a Single Regressor and a Single Instrument
The IV Model and Assumptions
Endogeneity and exogeneity
The two conditions for a valid instrument
The Two Stage Least Squares Estimator
Why Does IV Regression Work?
Example 1: Philip Wright’s problem
Example 2: Estimating the effect on test scores of class size
The Sampling Distribution of the TSLS Estimator
Formula for the TSLS estimator
Sampling distribution of β^1TSLS when the sample size is large
Statistical inference using the large-sample distribution
Application to the Demand for Cigarettes
12.2 The General IV Regression Model
Included exogenous variables and control variables in IV regression
TSLS in the General IV Model
TSLS with a single endogenous regressor
Extension to multiple endogenous regressors
Instrument Relevance and Exogeneity in the General IV Model
The IV Regression Assumptions and Sampling Distribution of the TSLS Estimator
The IV regression assumptions
Sampling distribution of the TSLS estimator
Inference Using the TSLS Estimator
Calculation of TSLS standard errors
Application to the Demand for Cigarettes
12.3 Checking Instrument Validity
Assumption 1: Instrument Relevance
Why weak instruments are a problem
Checking for weak instruments when there is a single endogenous regressor
What do I do if I have weak instruments?
Assumption 2: Instrument Exogeneity
Can you statistically test the assumption that the instruments are exogenous?
The overidentifying restrictions test
12.4 Application to the Demand for Cigarettes
12.5 Where Do Valid Instruments Come From?
Three Examples
Does putting criminals in jail reduce crime?
Does cutting class sizes increase test scores?
Does aggressive treatment of heart attacks prolong lives?
12.6 Conclusion
Summary
Key Terms
Review the Concepts
Exercises
Empirical Exercises
Appendix 12.1 The Cigarette Consumption Panel Data Set
Appendix 12.2 Derivation of the Formula for the TSLS Estimator in Equation (12.4)
Appendix 12.3 Large-Sample Distribution of the TSLS Estimator
Large-Sample Distribution of β^1TSLS When the IV Regression Assumptions in Key Concept 12.4 Hold
Appendix 12.4 Large-Sample Distribution of the TSLS Estimator When the Instrument Is Not Valid
Large-Sample Distribution of β^1TSLS When the Instrument Is Weak
Large-Sample Distribution of β^1TSLS When the Instrument Is Endogenous
Appendix 12.5 Instrumental Variables Analysis with Weak Instruments
Testing for Weak Instruments
Hypothesis Tests and Confidence Sets for β
Estimation of β
Appendix 12.6 TSLS with Control Variables
Chapter 13 Experiments and Quasi-Experiments
13.1 Potential Outcomes, Causal Effects, and Idealized Experiments
Potential Outcomes and the Average Causal Effect
Econometric Methods for Analyzing Experimental Data
The differences estimator
The differences estimator with additional regressors
Estimating causal effects that depend on observables
Randomization based on covariates
13.2 Threats to Validity of Experiments
Threats to Internal Validity
Failure to randomize
Failure to follow the treatment protocol
Attrition
Experimental effects
Small sample sizes
Threats to External Validity
Nonrepresentative sample
Nonrepresentative program or policy
General equilibrium effects
13.3 Experimental Estimates of the Effect of Class Size Reductions
Experimental Design
Deviations from the experimental design
Analysis of the STAR Data
Interpreting the estimated effects of class size
Additional results
Comparison of the Observational and Experimental Estimates of Class Size Effects
13.4 Quasi-Experiments
Examples
Example 1: Labor market effects of immigration
Example 2: Effects on civilian earnings of military service
Example 3: The effect of cardiac catheterization
The Differences-in-Differences Estimator
The differences-in-differences estimator
The differences-in-differences estimator with additional regressors
Differences-in-differences using repeated cross-sectional data
Instrumental Variables Estimators
Regression Discontinuity Estimators
Sharp regression discontinuity design
Fuzzy regression discontinuity design
13.5 Potential Problems with Quasi-Experiments
Threats to Internal Validity
Failure of randomization
Failure to follow the treatment protocol
Attrition
Experimental effects
Instrument validity in quasi-experiments
Threats to External Validity
13.6 Experimental and Quasi-Experimental Estimates in Heterogeneous Populations
OLS with Heterogeneous Causal Effects
IV Regression with Heterogeneous Causal Effects
Implications
Example: The cardiac catheterization study
13.7 Conclusion
Summary
Key Terms
Review the Concepts
Exercises
Empirical Exercises
Appendix 13.1 The Project STAR Data Set
Appendix 13.2 IV Estimation When the Causal Effect Varies Across Individuals
Appendix 13.3 The Potential Outcomes Framework for Analyzing Data from Experiments
Chapter 14 Prediction with Many Regressors and Big Data
14.1 What Is “Big Data”?
14.2 The Many-Predictor Problem and OLS
The Mean Squared Prediction Error
The First Least Squares Assumption for Prediction
The Predictive Regression Model with Standardized Regressors
The MSPE in the standardized predictive regression model
Standardization using the sample means and variances
The MSPE of OLS and the Principle of Shrinkage
The principle of shrinkage
Estimation of the MSPE
Estimating the MSPE using a split sample
Estimating the MSPE by m-fold cross validation
14.3 Ridge Regression
Shrinkage via Penalization and Ridge Regression
Estimation of the Ridge Shrinkage Parameter by Cross Validation
Application to School Test Scores
14.4 The Lasso
Shrinkage Using the Lasso
Computation of the Lasso estimator
Estimation of the shrinkage parameter by cross validation
A word of warning about the ridge and Lasso estimators
Application to School Test Scores
14.5 Principal Components
Principal Components with Two Variables
Principal Components with k Variables
The scree plot
Prediction using principal components
Application to School Test Scores
14.6 Predicting School Test Scores with Many Predictors
14.7 Conclusion
Summary
Key Terms
Review the Concepts
Exercises
Empirical Exercises
Appendix 14.1 The California School Test Score Data Set
Appendix 14.2 Derivation of Equation (14.4) for k=1
Appendix 14.3 The Ridge Regression Estimator When k=1
Appendix 14.4 The Lasso Estimator When k=1
Appendix 14.5 Computing Out-of-Sample Predictions in the Standardized Regression Model
Out-of-Sample Predictions Using the Standardized Regression Model of Equation (14.2) (Ridge and Lasso)
Out-of-Sample Predictions Using Principal Components Regression
Part Four Regression Analysis of Economic Time Series Data
Chapter 15 Introduction to Time Series Regression and Forecasting
15.1 Introduction to Time Series Data and Serial Correlation
Real GDP in the United States
Lags, First Differences, Logarithms, and Growth Rates
Autocorrelation
Other Examples of Economic Time Series
15.2 Stationarity and the Mean Squared Forecast Error
Stationarity
Forecasts and Forecast Errors
The Mean Squared Forecast Error
15.3 Autoregressions
The First-Order Autoregressive Model
Forecasts and forecast errors
Application to GDP growth
The pth-Order Autoregressive Model
Properties of the forecast and error term in the AR(p) model
Application to GDP growth
15.4 Time Series Regression with Additional Predictors and the Autoregressive Distributed Lag Model
Forecasting GDP Growth Using the Term Spread
The Autoregressive Distributed Lag Model
The Least Squares Assumptions for Forecasting with Multiple Predictors
15.5 Estimation of the MSFE and Forecast Intervals
Estimation of the MSFE
Method 1: Estimating the MSFE by the standard error of the regression
Method 2: Estimating the MSFE by the final prediction error
Method 3: Estimating the MSFE by pseudo out-of-sample forecasting
Application to GDP growth
Forecast Uncertainty and Forecast Intervals
Forecast intervals
Fan charts
15.6 Estimating the Lag Length Using Information Criteria
Determining the Order of an Autoregression
The F-statistic approach
The BIC
The AIC
A note on calculating information criteria
Lag Length Selection in Time Series Regression with Multiple Predictors
The F-statistic approach
Information criteria
15.7 Nonstationarity I: Trends
What Is a Trend?
Deterministic and stochastic trends
The random walk model of a trend
Stochastic trends, autoregressive models, and a unit root
Problems Caused by Stochastic Trends
Downward bias and nonnormal distributions of the OLS estimator and t-statistic
Spurious regression
Detecting Stochastic Trends: Testing for a Unit AR Root
The Dickey–Fuller test in the AR(1) model
Critical values for the ADF statistic
The Dickey–Fuller test in the AR(p) model
Testing against the alternative of stationarity around a linear deterministic time trend
Does U.S. GDP have a stochastic trend?
Avoiding the Problems Caused by Stochastic Trends
15.8 Nonstationarity II: Breaks
What Is a Break?
Problems caused by breaks
Testing for Breaks
Testing for a break at a known date
Testing for a break at an unknown date
Warning: You probably don’t know the break date even if you think you do
Application: Has the predictive power of the term spread been stable?
Detecting Breaks Using Pseudo Out-of-Sample Forecasts
Application: Did the predictive power of the term spread change during the 2000s?
Avoiding the Problems Caused by Breaks
15.9 Conclusion
Summary
Key Terms
Review the Concepts
Exercises
Empirical Exercises
Appendix 15.1 Time Series Data Used in Chapter 15
Appendix 15.2 Stationarity in the AR(1) Model
Appendix 15.3 Lag Operator Notation
Appendix 15.4 ARMA Models
Appendix 15.5 Consistency of the BIC Lag Length Estimator
BIC
Proof of (i) and (ii)
Proof of (i)
Proof of (ii)
AIC
Chapter 16 Estimation of Dynamic Causal Effects
16.1 An Initial Taste of the Orange Juice Data
16.2 Dynamic Causal Effects
Causal Effects and Time Series Data
Dynamic effects and the distributed lag model
Implications for empirical time series analysis
Two Types of Exogeneity
16.3 Estimation of Dynamic Causal Effects with Exogenous Regressors
The Distributed Lag Model Assumptions
Extension to additional X’s
Autocorrelated ut, Standard Errors, and Inference
Dynamic Multipliers and Cumulative Dynamic Multipliers
Dynamic multipliers
Cumulative dynamic multipliers
16.4 Heteroskedasticity- and Autocorrelation-Consistent Standard Errors
Distribution of the OLS Estimator with Autocorrelated Errors
HAC Standard Errors
The HAC variance formula
Other HAC estimators
Extension to multiple regression
16.5 Estimation of Dynamic Causal Effects with Strictly Exogenous Regressors
The Distributed Lag Model with AR(1) Errors
The conditional mean 0 assumption in the ADL and quasi-difference models
OLS Estimation of the ADL Model
GLS Estimation
Infeasible GLS
Feasible GLS
Efficiency of GLS
16.6 Orange Juice Prices and Cold Weather
16.7 Is Exogeneity Plausible? Some Examples
U.S. Income and Australian Exports
Oil Prices and Inflation
Monetary Policy and Inflation
The Growth Rate of GDP and the Term Spread
16.8 Conclusion
Summary
Key Terms
Review the Concepts
Exercises
Empirical Exercises
Appendix 16.1 The Orange Juice Data Set
Appendix 16.2 The ADL Model and Generalized Least Squares in Lag Operator Notation
The Distributed Lag, ADL, and Quasi-Difference Models in Lag Operator Notation
The Inverse of a Lag Polynomial
The OLS and GLS Estimators
Conditions for estimation of the ADL coefficients
Chapter 17 Additional Topics in Time Series Regression
17.1 Vector Autoregressions
The VAR Model
Inference in VARs
How many variables should be included in a VAR?
Determining lag lengths in VARs
Using VARs for causal analysis
A VAR Model of the Growth Rate of GDP and the Term Spread
17.2 Multi-period Forecasts
Iterated Multi-period Forecasts
The iterated AR forecast method: AR(1)
The iterated AR forecast method: AR(p)
Iterated multivariate forecasts using an iterated VAR
Direct Multi-period Forecasts
The direct multi-period forecasting method
Standard errors in direct multi-period regressions
Which Method Should You Use?
17.3 Orders of Integration and the Nonnormality of Unit Root Test Statistics
Other Models of Trends and Orders of Integration
Orders of integration terminology
How to test whether a series is I(2) or I(1)
Examples of I(2) and I(1) series: The price level and the rate of inflation
Why Do Unit Root Tests Have Nonnormal Distributions?
17.4 Cointegration
Cointegration and Error Correction
Vector error correction model
How Can You Tell Whether Two Variables Are Cointegrated?
Testing for cointegration when θ is known
Testing for cointegration when θ is unknown
Estimation of Cointegrating Coefficients
Extension to Multiple Cointegrated Variables
17.5 Volatility Clustering and Autoregressive Conditional Heteroskedasticity
Volatility Clustering
Realized Volatility
Autoregressive Conditional Heteroskedasticity
ARCH
GARCH
Estimation and inference
Application to Stock Price Volatility
17.6 Forecasting with Many Predictors Using Dynamic Factor Models and Principal Components
The Dynamic Factor Model
The DFM: Estimation and Forecasting
Estimation of the DFM and the factors using principal components
Determining the number of factors
Forecasting using the estimated factors
Other uses of DFMs
Application to U.S. Macroeconomic Data
17.7 Conclusion
Summary
Key Terms
Review the Concepts
Exercises
Empirical Exercises
Appendix 17.1 The Quarterly U.S. Macro Data Set
Part Five The Econometric Theory of Regression Analysis
Chapter 18 The Theory of Linear Regression with One Regressor
18.1 The Extended Least Squares Assumptions and the OLS Estimator
The Extended Least Squares Assumptions
Extended least squares assumptions 1, 2, and 3
Extended least squares assumption 4
Extended least squares assumption 5
The OLS Estimator
18.2 Fundamentals of Asymptotic Distribution Theory
Convergence in Probability and the Law of Large Numbers
Consistency and convergence in probability
The law of large numbers
Proof of the law of large numbers
Some examples
The Central Limit Theorem and Convergence in Distribution
Convergence in distribution
The central limit theorem
Extensions to time series data
Slutsky’s Theorem and the Continuous Mapping Theorem
Application to the t-Statistic Based on the Sample Mean
18.3 Asymptotic Distribution of the OLS Estimator and t-Statistic
Consistency and Asymptotic Normality of the OLS Estimators
Consistency of Heteroskedasticity-Robust Standard Errors
Asymptotic Normality of the Heteroskedasticity-Robust t-Statistic
18.4 Exact Sampling Distributions When the Errors Are Normally Distributed
Distribution of β̂1 with Normal Errors
Distribution of the Homoskedasticity-Only t-Statistic
Where does the degrees of freedom adjustment fit in?
18.5 Weighted Least Squares
WLS with Known Heteroskedasticity
WLS with Heteroskedasticity of Known Functional Form
Example 1: The variance of u is quadratic in X
Example 2: The variance depends on a third variable
General method of feasible WLS
Heteroskedasticity-Robust Standard Errors or WLS?
Summary
Key Terms
Review the Concepts
Exercises
Appendix 18.1 The Normal and Related Distributions and Moments of Continuous Random Variables
Probabilities and Moments of Continuous Random Variables
The Normal Distribution
The normal distribution for a single variable
The bivariate normal distribution
The conditional normal distribution
Related Distributions
The chi-squared distribution
The Student t distribution
The F distribution
Appendix 18.2 Two Inequalities
Chebychev’s Inequality
The Cauchy–Schwarz Inequality
Chapter 19 The Theory of Multiple Regression
19.1 The Linear Multiple Regression Model and OLS Estimator in Matrix Form
The Multiple Regression Model in Matrix Notation
The Extended Least Squares Assumptions
Implications for the mean vector and covariance matrix of U
The OLS Estimator
The role of “no perfect multicollinearity.”
19.2 Asymptotic Distribution of the OLS Estimator and t-Statistic
The Multivariate Central Limit Theorem
Asymptotic Normality of β̂
Derivation of Equation (19.12)
Heteroskedasticity-Robust Standard Errors
Heteroskedasticity-robust standard errors
Other heteroskedasticity-robust variance estimators
Confidence Intervals for Predicted Effects
Asymptotic Distribution of the t-Statistic
19.3 Tests of Joint Hypotheses
Joint Hypotheses in Matrix Notation
Asymptotic Distribution of the F-Statistic
Confidence Sets for Multiple Coefficients
19.4 Distribution of Regression Statistics with Normal Errors
Matrix Representations of OLS Regression Statistics
The matrices PX and MX
OLS predicted values and residuals
The standard error of the regression
Distribution of β̂ with Independent Normal Errors
Distribution of s_û²
Homoskedasticity-Only Standard Errors
Distribution of the t-Statistic
Distribution of the F-Statistic
The homoskedasticity-only F-statistic
19.5 Efficiency of the OLS Estimator with Homoskedastic Errors
The Gauss–Markov Conditions for Multiple Regression
Linear Conditionally Unbiased Estimators
The class of linear conditionally unbiased estimators
The OLS estimator is linear and conditionally unbiased
The Gauss–Markov Theorem for Multiple Regression
19.6 Generalized Least Squares
The GLS Assumptions
GLS When Ω Is Known
GLS When Ω Contains Unknown Parameters
The Conditional Mean Zero Assumption and GLS
The role of the first GLS assumption
Is the first GLS assumption restrictive?
19.7 Instrumental Variables and Generalized Method of Moments Estimation
The IV Estimator in Matrix Form
The TSLS estimator
Asymptotic Distribution of the TSLS Estimator
Standard errors for TSLS
Properties of TSLS When the Errors Are Homoskedastic
The TSLS distribution under homoskedasticity
The class of IV estimators that use linear combinations of Z
Asymptotic efficiency of TSLS under homoskedasticity
The J-statistic under homoskedasticity
Generalized Method of Moments Estimation in Linear Models
GMM estimation
The asymptotically efficient GMM estimator
Feasible efficient GMM estimation
The heteroskedasticity-robust J-statistic
GMM with time series data
Summary
Key Terms
Review the Concepts
Exercises