Econometrics is an application of statistics to economic data in order to give empirical content to economic relationships.M. Hashem Pesaran (1987). "Econometrics", The New Palgrave: A Dictionary of Economics, v. 2, p. 8. Reprinted in J. Eatwell et al., eds. (1990). Econometrics: The New Palgrave, p. 1. Abstract (2008 revision by J. Geweke, J. Horowitz, and H. P. Pesaran).Aris Spanos (2008). "Statistics and economics", The New Palgrave Dictionary of Economics, 2nd ed. Abstract. More precisely, it is "the quantitative analysis of actual economic phenomena based on the concurrent development of theory and observation, related by appropriate methods of inference."Paul Samuelson, T. C. Koopmans, and Richard Stone (1954). "Report of the Evaluative Committee for Econometrica", Econometrica 22(2), p. 142 [pp. 141–146], as described and cited in Pesaran (1987) above. An introductory economics textbook describes econometrics as allowing economists "to sift through mountains of data to extract simple relationships."Paul A. Samuelson and William D. Nordhaus (2004). Economics, 18th ed., McGraw-Hill, p. 5. Jan Tinbergen is one of the two founding fathers of econometrics.Magnus, Jan & Mary S. Morgan (1987). "The ET Interview: Professor J. Tinbergen", Econometric Theory 3, pp. 117–142.Willekens, Frans (2008). International Migration in Europe: Data, Models and Estimates. New Jersey: John Wiley & Sons, p. 117. The other, Ragnar Frisch, also coined the term in the sense in which it is used today.H. P. Pesaran (1990). "Econometrics", Econometrics: The New Palgrave, p. 2, citing Ragnar Frisch (1936), "A Note on the Term 'Econometrics'", Econometrica 4(1), p. 95.
A basic tool for econometrics is the multiple linear regression model. Econometric theory uses statistical theory and mathematical statistics to evaluate and develop econometric methods. Econometricians try to find estimators that have desirable statistical properties, including unbiasedness, efficiency, and consistency. Applied econometrics uses theoretical econometrics and real-world economic data for assessing economic theories, developing econometric models, analysing economic history, and forecasting.
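For instance, unbiasedness can be illustrated by simulation. The following is a minimal Python sketch, with invented data and a hypothetical true slope, showing that the ordinary least squares slope estimator averages out to the true parameter value over repeated samples:

```python
# Minimal Monte Carlo sketch (hypothetical data) illustrating that the OLS
# slope estimator is unbiased: its average across many simulated samples
# is close to the true parameter value.
import numpy as np

rng = np.random.default_rng(0)
true_slope, n_samples, n_reps = 2.0, 50, 5000  # invented values for illustration
estimates = []
for _ in range(n_reps):
    x = rng.normal(0, 1, n_samples)
    y = 1.0 + true_slope * x + rng.normal(0, 1, n_samples)
    # OLS slope estimate: cov(x, y) / var(x)
    estimates.append(np.cov(x, y)[0, 1] / np.var(x, ddof=1))
print(np.mean(estimates))  # close to the true slope, 2.0
```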
For example, consider Okun's law, which relates GDP growth to the unemployment rate. This relationship is represented in a linear regression where the change in the unemployment rate ($\Delta\,\text{Unemployment}$) is a function of an intercept ($\beta_0$), a given value of GDP growth multiplied by a slope coefficient $\beta_1$, and an error term, $\varepsilon$:

$$\Delta\,\text{Unemployment} = \beta_0 + \beta_1\,\text{Growth} + \varepsilon.$$
The unknown parameters $\beta_0$ and $\beta_1$ can be estimated. Here $\beta_0$ is estimated to be 0.83 and $\beta_1$ is estimated to be -1.77. This means that if GDP growth increased by one percentage point, the unemployment rate would be predicted to drop by 1.77 percentage points, ceteris paribus. The model could then be tested for statistical significance as to whether an increase in GDP growth is associated with a decrease in the unemployment rate, as hypothesized. If the estimate of $\beta_1$ were not significantly different from 0, the test would fail to find evidence that changes in the growth rate and unemployment rate were related. The variance in a prediction of the dependent variable (unemployment) as a function of the independent variable (GDP growth) is given in polynomial least squares.
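As an illustration, the following minimal Python sketch simulates data from this equation using the estimates quoted above (the sample size and error variance are invented) and recovers the parameters with ordinary least squares via statsmodels:

```python
# Minimal sketch of estimating the Okun's law regression with OLS.
# The data are simulated for illustration; the coefficients 0.83 and -1.77
# are the estimates quoted in the text.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
growth = rng.normal(3.0, 2.0, 100)       # hypothetical GDP growth rates (%)
eps = rng.normal(0.0, 0.5, 100)          # error term (invented variance)
d_unemp = 0.83 - 1.77 * growth + eps     # simulate using the quoted estimates

X = sm.add_constant(growth)              # adds the intercept column
model = sm.OLS(d_unemp, X).fit()
print(model.params)                      # approximately [0.83, -1.77]
```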
Econometrics uses standard statistical models to study economic questions, but most often these are based on observational data, rather than data from controlled experiments. In this, the design of observational studies in econometrics is similar to the design of studies in other observational disciplines, such as astronomy, epidemiology, sociology and political science. Analysis of data from an observational study is guided by the study protocol, although exploratory data analysis may be useful for generating new hypotheses.Herman Wold (1969). "Econometrics as Pioneering in Nonexperimental Model Building", Econometrica, 37(3), pp. 369–381. Economics often analyses systems of equations and inequalities, such as supply and demand hypothesized to be in equilibrium. Consequently, the field of econometrics has developed methods for identification and estimation of simultaneous equations models. These methods are analogous to methods used in other areas of science, such as the field of system identification in systems analysis and control theory. Such methods may allow researchers to estimate models and investigate their empirical consequences, without directly manipulating the system.
In the absence of evidence from controlled experiments, econometricians often seek illuminating natural experiments or apply quasi-experimental methods to draw credible causal inferences. The methods include regression discontinuity design, instrumental variables, and difference-in-differences.
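As a minimal sketch of one of these methods, the following Python example simulates a hypothetical difference-in-differences design, where the treatment effect is estimated as the coefficient on the interaction between a treated-group indicator and a post-treatment indicator; the groups, periods, and effect size are all invented for illustration:

```python
# Minimal difference-in-differences sketch on simulated data. The true
# treatment effect (2.0) and all other numbers are invented.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 400
df = pd.DataFrame({
    "treated": rng.integers(0, 2, n),   # 1 if in the treated group
    "post": rng.integers(0, 2, n),      # 1 if observed after the policy
})
# Outcome with group effect, time effect, and a treatment effect of 2.0
# that applies only to the treated group in the post period.
df["y"] = (1.0 + 0.5 * df["treated"] + 0.3 * df["post"]
           + 2.0 * df["treated"] * df["post"] + rng.normal(0, 1, n))

# The coefficient on the interaction term is the DiD estimate.
did = smf.ols("y ~ treated + post + treated:post", data=df).fit()
print(did.params["treated:post"])       # approximately 2.0
```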
Consider, for example, a simple wage equation from labour economics:

$$\ln(\text{wage}) = \beta_0 + \beta_1(\text{years of education}) + \varepsilon.$$

This example assumes that the natural logarithm of a person's wage is a linear function of the number of years of education that person has acquired. The parameter $\beta_1$ measures the increase in the natural log of the wage attributable to one more year of education. The term $\varepsilon$ is a random variable representing all other factors that may have direct influence on wage. The econometric goal is to estimate the parameters $\beta_0$ and $\beta_1$ under specific assumptions about the random variable $\varepsilon$. For example, if $\varepsilon$ is uncorrelated with years of education, then the equation can be estimated with ordinary least squares.
If the researcher could randomly assign people to different levels of education, the data set thus generated would allow estimation of the effect of changes in years of education on wages. In reality, those experiments cannot be conducted. Instead, the econometrician observes the years of education of and the wages paid to people who differ along many dimensions. Given this kind of data, the estimated coefficient on years of education in the equation above reflects both the effect of education on wages and the effect of other variables on wages, if those other variables were correlated with education. For example, people born in certain places may have higher wages and higher levels of education. Unless the econometrician controls for place of birth in the above equation, the effect of birthplace on wages may be falsely attributed to the effect of education on wages.
The most obvious way to control for birthplace is to include a measure of the effect of birthplace in the equation above. Exclusion of birthplace, together with the assumption that $\varepsilon$ is uncorrelated with education, produces a misspecified model. Another technique is to include in the equation an additional set of measured covariates which are not instrumental variables, yet render $\beta_1$ identifiable. An overview of econometric methods used to study this problem was provided by David Card (1999).
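To illustrate the instrumental-variables idea, the following Python sketch implements two-stage least squares by hand on simulated data, assuming a hypothetical instrument that shifts education but is unrelated to the unobserved confounder (in the spirit of Card's use of college proximity); all numbers are invented:

```python
# Minimal two-stage least squares (2SLS) sketch for the wage equation,
# with an invented instrument and an unobserved confounder ("ability").
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(2)
n = 1000
ability = rng.normal(0, 1, n)                 # unobserved confounder
instrument = rng.normal(0, 1, n)              # shifts education, not wages directly
education = 12 + instrument + 0.8 * ability + rng.normal(0, 1, n)
log_wage = 1.5 + 0.10 * education + 0.5 * ability + rng.normal(0, 0.2, n)

# First stage: project education on the instrument.
first = sm.OLS(education, sm.add_constant(instrument)).fit()
edu_hat = first.fittedvalues

# Second stage: regress log wage on the fitted education values.
second = sm.OLS(log_wage, sm.add_constant(edu_hat)).fit()
print(second.params)  # slope near the true 0.10; plain OLS would be biased upward
```

Note that this hand-rolled second stage yields a consistent point estimate but incorrect standard errors; in practice a dedicated IV routine would be used.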
Like other forms of statistical analysis, badly specified econometric models may show a spurious relationship where two variables are correlated but causally unrelated. In a study of the use of econometrics in major economics journals, Deirdre McCloskey concluded that some economists report p-values (following the Ronald Fisher tradition of tests of significance of point null hypotheses) and neglect concerns of type II errors; some economists fail to report estimates of the size of effects (apart from statistical significance) and to discuss their economic importance. She also argues that some economists fail to use economic reasoning for model selection, especially for deciding which variables to include in a regression.Stephen Ziliak and Deirdre N. McCloskey (2004). "Size Matters: The Standard Error of Regressions in the American Economic Review", Journal of Socio-Economics, 33(5), pp. 527–546.
In some cases, economic variables cannot be experimentally manipulated as treatments randomly assigned to subjects. In such cases, economists rely on observational studies, often using data sets with many strongly associated covariates, resulting in enormous numbers of models with similar explanatory ability but different covariates and regression estimates. Regarding the plurality of models compatible with observational data sets, Edward Leamer urged that "professionals ... properly withhold belief until an inference can be shown to be adequately insensitive to the choice of assumptions".
Economic variables are observed in reality, and therefore are not readily isolated for experimental testing. Edward Leamer argued there was no essential difference between econometric analysis and controlled trials, provided the use of statistical techniques reduces the specification bias and the effects of collinearity between the variables to the same order as the uncertainty due to the sample size. Today, this critique carries less force, as methods of identification have become stronger. Identification today may report the average treatment effect (ATE), the average treatment effect on the treated (ATT), or the local average treatment effect (LATE). Specification bias, or selection bias, can be easily removed through advances in sampling techniques and the ability to sample much larger populations through improved communications, data storage, and randomization. Secondly, collinearity can easily be controlled for through instrumental variables. By reporting either ATT or LATE, researchers can control for or eliminate heterogeneous error, reporting only the effects on the group as defined.
Economists, when using data, may have a number of explanatory variables they want to use that are highly collinear, such that researcher bias may be important in variable selection. Leamer argues that economists can mitigate this by running statistical tests with differently specified models and discarding any inferences which prove to be "fragile", concluding that "professionals ... properly withhold belief until an inference can be shown to be adequately insensitive to the choice of assumptions." Today, the practice of searching across specifications for a desired result is known as p-hacking, and it is not a failure of econometric methodology but a potential failure of a researcher who may be seeking to prove their own hypothesis. P-hacking is discouraged in economics by the requirement to disclose the original data and the code used to perform the statistical analysis. However, Sala-i-Martin argued that it is possible to specify two models suggesting contrary relations between two variables, a phenomenon Robert Goldfarb labeled the "emerging recalcitrant result" phenomenon. This is known as two-way causality, and should be discussed with respect to the underlying theory that the mechanism is attempting to capture.
Kennedy (1998, pp. 1–2) reports econometricians as being accused of using sledgehammers to crack open peanuts: that is, they use a wide range of complex statistical techniques while turning a blind eye to data deficiencies and the many questionable assumptions required for the application of these techniques.Kennedy, P. (1998). A Guide to Econometrics, 4th ed., Blackwell. Kennedy quotes the critique of practice in Stefan Valavanis's 1959 Econometrics textbook:
Econometric theory is like an exquisitely balanced French recipe, spelling out precisely with how many turns to mix the sauce, how many carats of spice to add, and for how many milliseconds to bake the mixture at exactly 474 degrees of temperature. But when the statistical cook turns to raw materials, he finds that hearts of cactus fruit are unavailable, so he substitutes chunks of cantaloupe; where the recipe calls for vermicelli he uses shredded wheat; and he substitutes green garment dye for curry, ping-pong balls for turtle's eggs, and, for Chalifougnac vintage 1883, a can of turpentine. (1959, p. 83)Valavanis, Stefan (1959). Econometrics, McGraw-Hill.
He looks at two well-cited macroeconometric studies (Hansen & Singleton (1982, 1983), and Ben Bernanke (1986)), and argues that while both make brilliant use of econometric methods, neither paper speaks to formal theoretical proof. Noting that in the natural sciences, "investigators rush to check out the validity of claims made by rival laboratories and then build on them," Summers points out that this rarely happens in economics, which to him is a result of the fact that "the results [of such studies] are rarely an important input to theory creation or the evolution of professional opinion more generally." To Summers:
The Austrian School holds that the counterfactual must be known for a causal relationship to be established. The changes due to the counterfactual could then be extracted from the observed changes, leaving only the changes caused by the variable. Meeting this critique is very challenging, since "there is no dependable method for ascertaining the uniquely correct counterfactual" for historical data. For non-historical data, the Austrian critique is met with randomized controlled trials, which must be purposefully prepared in a way that historical data is not. The use of randomized controlled trials is becoming more common in social science research. In the United States, for example, the Education Sciences Reform Act of 2002 made funding for education research contingent on scientific validity, defined in part as "experimental designs using random assignment, when feasible."Education Sciences Reform Act of 2002, Pub. L. 107–279, approved Nov. 5, 2002, 116 Stat. 1941, as amended through P.L. 117–286, enacted December 27, 2022. https://www.govinfo.gov/content/pkg/COMPS-747/pdf/COMPS-747.pdf In answering questions of causation, parametric statistics addresses the Austrian critique only in randomized controlled trials.
If the data are not from a randomized controlled trial, econometricians meet the Austrian critique with methodologies including the identification and exploitation of natural experiments. These methodologies attempt to extract the counterfactual post hoc so that the use of the tools of parametric statistics is justified. Since parametric statistics depends on observations following a Gaussian distribution, which is guaranteed by the central limit theorem only under a randomization methodology, the use of tools such as the confidence interval will be outside of their specification: the amount of selection bias will always be unknown.