Meta-analysis is the statistical analysis of an entire empirical literature. It seeks to summarize, evaluate, and analyse what we know about a given empirical question, phenomenon, or effect. Meta-regression analysis (MRA) is meta-econometrics: it uses the very tools that produce economics research and provides a rigorous, objective alternative to conventional narrative reviews in economics. MRA often reveals surprising truths about economics. To illustrate these methods, I discuss meta-analyses of the employment effect of the minimum wage, efficiency wages, the natural rate hypothesis, and unemployment hysteresis; the last two provide a rigorous, empirical falsification of the natural rate hypothesis.


1 INTRODUCTION

Meta-analysis is the statistical analysis of an entire research literature. It seeks to summarize, evaluate, and analyse what we know about a given empirical question, phenomenon, or effect. Meta-regression analysis (MRA) provides a rigorous, objective alternative to conventional narrative reviews in economics. MRA often reveals surprising truths about economics once publication selection and misspecification biases have been identified and accommodated.

Meta-analysis has a long history in psychology and medical research, where it often provides an authoritative assessment of what is known about a specific clinical treatment or drug. Because empirical economics is largely based on regression analysis, meta-regression analysis (MRA) was developed to evaluate and summarize economics research (Stanley/Jarrell 1989; Stanley 2001). As the name suggests, meta-regression analysis uses regression to explain the variation among reported regression estimates (or transformations of regression estimates). From the beginning, MRA was thought to be innately different from the type of meta-analyses conducted in other fields. Like meta-analysis, MRA includes and summarizes all empirical estimates of a given effect. However, unlike meta-analyses in other disciplines, MRA always involves a multiple regression that accounts for routine misspecification biases and genuine systematic differences found among reported econometric estimates.

By now, many hundreds of MRAs have been conducted on economic topics, with over 100 new studies each year. For example, economic meta-regression analyses have found that:

there is weak evidence for conventional fiscal policy but stronger evidence that education and infrastructure expenditures are in fact stimulative (Nijkamp/Poot 2004);

empirical tests of Ricardian equivalence provide a strong rejection of its validity (Stanley 1998; Stanley 2001);

the price puzzle of a short term rise in prices following an unexpected monetary contraction disappears when misspecifications are filtered from the research base (Rusnák et al. 2013);

there is no policy-relevant adverse employment effect from raising the minimum wage (Doucouliagos/Stanley 2009);

the benefits of adopting the euro (that is, joining the European Monetary Union) are small and insignificant (Havranek 2010);

the combination of tests of the natural rate hypothesis and unemployment hysteresis constitutes a sophisticated Popperian falsification of the natural rate hypothesis (Stanley 2005; Stanley 2004).

The purpose of this paper is to provide a primer on meta-regression analysis for macroeconomists. I will outline MRA's basic structure and illustrate its use. A more comprehensive introduction may be found in Stanley/Doucouliagos (2012).

2 META-REGRESSION ANALYSIS

Since Leamer's ‘Let's take the con out of econometrics,’ economists have been acutely aware of the ubiquitous issue of misspecification bias in empirical economics (Leamer 1983). Meta-regression analysis was originally conceived as a systematic way to code and account for these potential biases in published research (Stanley/Jarrell 1989). Typically, data limitations (for example, missing explanatory variables, poor instruments) and other unavoidable model misspecifications make any reported econometric estimate vulnerable to nontrivial biases. MRA allows the reviewer to see across an entire research literature where some studies are susceptible to a specific misspecification bias (such as omitting a known relevant variable) while others are not. In this way, MRA can estimate and thereby correct the likely distortion that a given misspecification bias may have embedded in the empirical record.

For the purpose of illustrating MRA, I will refer to the meta-regression of the efficiency–wage hypothesis (Krassoi-Peach/Stanley 2009). The efficiency–wage hypothesis (EWH) is the idea that paying workers a higher than market-clearing wage will induce greater worker productivity. It is the opposite of the old Soviet-era joke: ‘They pretend to pay us, and we pretend to work.’ With EWH, we pay them well, and they work harder.

Our MRA of efficiency wages found that studies that controlled for the endogeneity of worker wages and productivity had larger efficiency–wage effects, as measured by the wage elasticity of worker productivity. Correcting for this potential simultaneity bias is essential, because the reported correlations between wages and productivity may have been due to reverse causation from worker productivity to wages. Since the marginalist revolution of the nineteenth century, economists have argued that productivity drives wages. By including all estimates of the wage elasticity of worker productivity, whether or not the simultaneity between wages and productivity was adequately addressed, we can estimate this important potential misspecification bias. Furthermore, doing so actually strengthens the evidence for the efficiency–wage hypothesis. Those studies that explicitly control for the potential simultaneity report larger wage effects on productivity, thereby dismissing the idea that support for EWH is merely the artifact of a misattribution of causation.

2.1 Conducting a meta-regression analysis: a sketch

Searching, reading, and coding an entire research literature represents at least 90 percent of the effort needed to complete an MRA. This process needs to be as transparent, comprehensive, and rigorous as possible. When conducted correctly, an MRA is replicable, which is the hallmark of science (Popper 1959). Space constraints do not permit a detailed and full description of all the necessary steps. A more complete discussion of conducting a meta-regression analysis and the routine issues and problems encountered along the way is given in Stanley/Doucouliagos (2012). Here, I can only offer a very brief and thus incomplete sketch:

Conduct a comprehensive electronic search for all relevant studies (published or not), using Google Scholar, ECONLIT, and other databases.

Include all relevant, comparable estimates. Choosing the appropriate comparable measure is one of the most important and difficult choices confronting the meta-analyst.

Code all of the studies carefully and comprehensively.

Recently, the Meta-Analysis of Economics Research Network (MAER-Net) and the Journal of Economic Surveys have published reporting guidelines for MRAs in economics (Stanley et al. 2013). We expect that these guidelines will be widely adopted and serve as minimum standards of quality for future MRAs. Although they detail exactly what a good MRA is expected to report, fulfilling these guidelines implicitly constrains how an MRA must be conducted.

Once the literature has been coded, it should be graphed and summarized using basic descriptive statistics. Although quite elementary, it is amazing how revealing and helpful this first peek at the research literature can be. Our experience has found that funnel graphs (see Section 2.2 below) are especially useful.

The last step of this laborious process is to perform a meta-regression analysis. The conventional meta-regression model is:

effect_{i} = ${\text{\beta}}_{0}+{\sum}_{k}{\text{\beta}}_{k}{Z}_{ki}+{\epsilon}_{i}$ (2.1)

(Stanley/Jarrell 1989), where effect_{i} is the estimated elasticity, regression coefficient, or empirical effect from study i, and ${Z}_{ki}$ are the explanatory or ‘moderator’ variables, which often code for a potential misspecification bias in addition to genuine heterogeneity of the economic phenomenon in question. Moderator variables (${Z}_{ki}$) routinely include:

variables that distinguish which type of econometric model, methods, and techniques were employed;

dummy variables that code for the omission of theoretically relevant variables in the research study investigated;

regional or aggregation level (for example, region, country, market, industry);

data types (panel, cross-sectional, time series);

year of the data used and/or publication year.
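To make the mechanics concrete, here is a minimal synthetic sketch of estimating MRA model (2.1) in Python. Everything here is hypothetical (the moderator `z`, the effect sizes, the sample sizes): the point is only that a weighted least squares meta-regression can recover both the underlying effect and the bias introduced by a coded misspecification.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 200
se = rng.uniform(0.05, 0.5, n)             # standard error of each reported estimate
z = rng.integers(0, 2, n).astype(float)    # Z-dummy: 1 = study omits a relevant variable
true_effect, ov_bias = 0.5, -0.3           # illustrative values only
effect = true_effect + ov_bias * z + rng.normal(0.0, se)

# MRA (2.1): effect_i = b0 + b1*Z_i + e_i, by WLS with weights 1/SE^2
X = np.column_stack([np.ones(n), z])
root_w = 1.0 / se                          # square root of the 1/SE^2 weights
beta, *_ = np.linalg.lstsq(X * root_w[:, None], effect * root_w, rcond=None)
print(beta)  # beta[0] recovers the effect; beta[1] estimates the omitted-variable bias
```

The weighting matters: precise estimates (small SE) carry more information, so they dominate the fit.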

Turning to the efficiency–wage hypothesis as an illustration, we identified over 100 potential empirical papers through our electronic search. Although we included all studies containing comparable estimates, regardless of language, only 75 estimates from 14 studies could be identified. Given the immense interest in efficiency–wage theory, it is surprising to find only 14 empirical studies that offer comparable estimates of efficiency–wage elasticity. However, EWH is not unique; one often finds surprisingly few genuinely empirical studies of macroeconomic theories. Although the efficiency–wage literature contains several different empirical outcome measures, we chose the wage elasticity of production, ${\widehat{\eta}}_{i}$, because it is the most commonly reported measure and thereby maximizes the MRA's sample size. The average reported wage elasticity of worker productivity is 0.63 (p < 0.001). See Krassoi-Peach/Stanley (2009) for a more detailed discussion of this meta-regression analysis of the efficiency–wage hypothesis.

2.2 Funnel graphs and publication bias

Though a simple scatter diagram, the funnel plot has often been found to reveal much about economics research (Stanley/Doucouliagos 2010). A funnel graph is a plot of an estimated effect and its precision (the inverse of the estimate's standard error). It is called a ‘funnel’ graph because it should look roughly like an inverted funnel (see Figure 1). Estimates on the bottom typically come from smaller samples and are less reliable, hence more widely spread out. Those on the top should be tightly dispersed because they have small standard errors. Known heteroskedasticity determines the funnel's shape. Figure 1 presents the funnel plot for 1424 minimum-wage elasticities of employment in the US, and Figure 2 displays the wage elasticities of worker productivity.

Figure 1 has a shape that is often observed across both micro and macroeconomics research. It resembles a funnel in the sense that the top is well defined and highly concentrated. As precision becomes smaller (and therefore the standard error gets larger), the reported estimates spread out, more or less, as expected. In the absence of publication selection bias, funnel graphs should be approximately symmetric. Clearly, Figure 2 is not symmetric, but rather highly skewed to the right. Such skewness has been taken as an indication of publication selection for a statistically significant effect (Sutton et al. 2000; Sterne/Egger 2001). Publication selection is the process of selecting which research findings to report on the basis of their statistical significance or their consistency with conventional economic theory. Card and Krueger (1995a) claim that:

Reviewers and editors may be predisposed to accept papers consistent with the conventional view.

Researchers may use the presence of a conventionally expected result as a model selection test.

Everyone may possess a predisposition to treat ‘statistically significant’ results more favorably.

The preference for statistically significant results is a widely recognized fact throughout the social sciences.

When researchers report a t-ratio, they are assuming that the estimate (the numerator) is independent of its standard error (the denominator). Otherwise, this ratio does not have a t-distribution, and the reported econometric results do not mean what researchers claim. This same independence implies that the funnel should be symmetric around the true value of the underlying parameter. If, on the other hand, researchers select which results to report, especially those that are statistically significant in the desired direction, then the reported distribution will be skewed. Thus publication selection bias will be reflected by an asymmetric funnel graph. Of course, there are other reasons for an asymmetric funnel plot (Stanley/Doucouliagos 2012), including heterogeneity that is coincidentally correlated with the standard error. However, meta-regression analysts always model potential heterogeneity explicitly using a multiple MRA – recall equation (2.1) and see Section 2.4 below.
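This selection mechanism is easy to simulate. In the hypothetical sketch below, the true effect is exactly zero, yet discarding every estimate that is not significantly positive produces the skewed, inflated pattern described above:

```python
import numpy as np

rng = np.random.default_rng(0)
se = rng.uniform(0.05, 1.0, 5000)       # a wide range of precisions
est = rng.normal(0.0, se)               # true effect is exactly zero
published = est[est / se > 1.96]        # report only 'significant' positive results

print(est.mean(), published.mean())     # full funnel centers on 0; selected mean is inflated
```

The selected estimates are also larger where the standard error is larger, which is precisely the relationship that meta-regression models of publication selection exploit.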

The most common way to model publication selection is by using a meta-regression of the reported estimates on their standard errors:

effect_{i} = ${\text{\beta}}_{0}+{\text{\beta}}_{1}$SE_{i} + ${\epsilon}_{i}$ (2.2)

(Egger et al. 1997; Stanley 2008; Stanley/Doucouliagos 2012), where effect_{i} is an individual empirical estimate, SE_{i} is its standard error, ${\text{\beta}}_{1}$SE_{i} represents publication selection bias, and ${\text{\beta}}_{0}$ estimates the overall average effect corrected for publication bias. ‘With publication selection, researchers who have small samples and low precision will be forced to search more intensely across model specifications, data, and econometric techniques until they find larger estimates’; hence, ‘such considerations suggest that the magnitude of the reported estimate will depend on its standard error…’ (Stanley/Doucouliagos 2012: 60). Because we know that the variance of effect_{i} varies from estimate to estimate, meta-regression model (Equ. 2.2) will have heteroskedasticity and should, therefore, be estimated by weighted least squares. Table 1 reports this WLS-MRA (weighted least squares meta-regression analysis) with weights equal to precision squared (or 1/$S{E}_{i}^{2}$) for the two separate areas of research displayed in Figures 1 and 2.
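A hedged sketch of this FAT-PET regression on synthetic data (the true effect, 0.3, and the crude selection rule are illustrative assumptions, not values from either literature):

```python
import numpy as np

rng = np.random.default_rng(1)
se = rng.uniform(0.02, 0.4, 2000)
est = rng.normal(0.3, se)                   # true effect = 0.3 (illustrative)
keep = est / se > 1.96                      # crude publication selection
est, se = est[keep], se[keep]

# MRA (2.2): effect_i = b0 + b1*SE_i, by WLS with weights 1/SE^2
X = np.column_stack([np.ones(se.size), se])
root_w = 1.0 / se
(b0, b1), *_ = np.linalg.lstsq(X * root_w[:, None], est * root_w, rcond=None)
print(b0, b1)  # b1 > 0 flags selection (FAT); b0 is the PET-corrected effect
```

The slope picks up the selection-induced dependence of reported estimates on their standard errors, while the intercept extrapolates to a hypothetical maximally precise study.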

Table 1: WLS meta-regression model (Equ. 2.2)
Notes: Cells report coefficient estimates for Equation 2.2. The dependent variable is an estimated elasticity. The t-values, reported in parentheses, are from heteroskedasticity-robust standard errors. FAT is a test for publication selection bias. PET is a test for the existence of a genuine effect corrected for selection bias. n is the number of observations.

Testing H_{0}: ${\text{\beta}}_{1}=0$ is called the ‘funnel asymmetry test’ (FAT) and is consistent with selection for negative employment effects from the minimum wage (t = −4.49; p < 0.001) and for positive efficiency–wage effects (t = 1.80; one tail p < 0.05). In other words, both funnel graphs are significantly asymmetric (Figures 1 and 2). In both cases there seems to be a preference to confirm the effect being investigated and, as a result, to exaggerate the underlying true empirical effect. So, the next logical question is whether there are any genuine minimum-wage or efficiency–wage effects remaining after likely publication selection is accommodated.

Before we turn to this important question, we need to discuss further exceptions to the expectation that a funnel is symmetric in the absence of publication (or reporting) selection. First, as mentioned above, funnel asymmetry might reflect some fortuitous heterogeneity that just happens to be correlated with the standard error (or sample size). In economics, this is routinely addressed through explanatory multivariate MRAs that are always used to explain the wide variation found among reported empirical estimates – see Section 2.4 below. A second exception to the expectation of funnel symmetry occurs when estimated regression coefficients are transformed nonlinearly. For example, this happens in nonmarket valuation of environmental services (Stanley/Rosenberger 2009). Third, estimates of the regression coefficient for a lagged dependent variable, AR(1), are well known to have a nonstandard distribution with small-sample bias. However, these exceptions in economics are rather well-defined and therefore are easily managed. To recognize alternative reasons for an asymmetric funnel plot, medical researchers now call ‘publication bias’ ‘small-study effects’ (Rücker et al. 2011; Sterne et al. 2011). Regardless of what we call these biases, we need to filter out all such biases as best we can, and the meta-regression methods developed to accommodate ‘publication bias’ can do just that.

2.3 Testing and correcting the underlying empirical effect

The conventional t-test on the intercept in MRA Equation (2.2) (H_{0}: ${\text{\beta}}_{0}=0$) is a powerful test for a genuine empirical effect, regardless of the presence of ‘publication’ or ‘reporting’ or ‘small-study’ bias (Stanley 2008). This ‘precision-effect test’ (or PET) gives a different assessment of the presence of an authentic effect across these two areas of research. The minimum-wage literature shows little sign of a genuinely adverse employment effect (t = −1.09; p > 0.05), while there is a very strong signal of an efficiency–wage effect (t = 25.5; p < 0.001).

Having established a genuine empirical effect beyond the effects of selection, the next question is often: how large is it? The estimated intercept from Equation (2.2), ${\widehat{\text{\beta}}}_{0}=0.30$, provides a corrected estimate, which unfortunately is known to be biased downward when there is a non-zero effect (Stanley 2008). A less biased correction uses ${\widehat{\text{\beta}}}_{0}$ from a modified version of Equation (2.2), where the variance, $S{E}_{i}^{2}$, is substituted for the standard error, SE_{i} (Stanley/Doucouliagos 2007; 2012; 2013). This estimate has been called PEESE, for precision-effect estimate with standard error, and it gives virtually the same corrected estimate (0.32) for the efficiency–wage elasticity. When PET fails to find evidence for an authentic effect, as in the case of the employment effect of minimum wages, PEESE should not be used. PEESE has been adopted by medical research (Moreno et al. 2009a; 2009b; 2011).
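The PEESE variant is a one-line change to the same WLS regression: the regressor SE_{i} is replaced by $S{E}_{i}^{2}$. A synthetic sketch (all numbers illustrative):

```python
import numpy as np

def wls_intercept(est, se, regressor):
    """Intercept from WLS of est on [1, regressor], with weights 1/SE^2."""
    root_w = 1.0 / se
    X = np.column_stack([np.ones(se.size), regressor]) * root_w[:, None]
    coef, *_ = np.linalg.lstsq(X, est * root_w, rcond=None)
    return coef[0]

rng = np.random.default_rng(2)
se = rng.uniform(0.02, 0.4, 2000)
est = rng.normal(0.3, se)              # true effect = 0.3 (illustrative)
keep = est / se > 1.96                 # crude publication selection
est, se = est[keep], se[keep]

pet = wls_intercept(est, se, se)       # Equation (2.2): effect on SE
peese = wls_intercept(est, se, se**2)  # PEESE: SE^2 replaces SE
print(pet, peese)
```

Both intercepts land near the assumed true effect; with a genuine non-zero effect, PEESE is the preferred corrected estimate.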

Lastly, a very simple rough approximation merely averages the top 10 percent of the estimates from the funnel graph – Top 10 (Stanley et al. 2010). Top 10 also gives virtually the same estimate of the efficiency–wage elasticity (0.33). The motivation for the Top 10 is that those estimates at the top of the funnel are the most accurate and hence of the highest quality. Thus, they will be the least affected by publication bias. Although the Top 10 often works rather well as a first approximation, it was proposed as a statistical joke (or paradox) to emphasize just how important publication biases are likely to be (Stanley et al. 2010). Regardless of which corrected estimate of the efficiency–wage elasticity we use, the overall corrected value is now half of the average reported value (0.63) in this research literature. Often publication or small-study bias in economics exaggerates economic effects many-fold (Doucouliagos/Stanley 2013; Stanley/Doucouliagos 2012). As we see in the minimum-wage literature, seemingly important effects can be manufactured out of nothing.
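The Top 10 needs nothing more than a sort on the standard errors. Continuing the same style of synthetic example:

```python
import numpy as np

rng = np.random.default_rng(3)
se = rng.uniform(0.02, 0.4, 1000)
est = rng.normal(0.3, se)               # true effect = 0.3 (illustrative)
keep = est / se > 1.96                  # selection inflates the naive average
est, se = est[keep], se[keep]

k = max(1, est.size // 10)
top10 = est[np.argsort(se)[:k]].mean()  # average the most precise 10 percent
print(est.mean(), top10)                # naive mean is inflated; Top 10 sits near 0.3
```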

In economics, we can never stop with these simple models of publication bias and its correction. We know that our empirical econometric estimates are vulnerable to many potential misspecification biases and that economic effects are likely to have genuine heterogeneity. Thus, all rigorous meta-analysis in economics uses multiple meta-regression analysis (MRA) to explain the observed variation among reported empirical estimates.

2.4 Explaining economics research using multiple meta-regression analysis

Reported economic tests and estimates are always found to vary greatly from study to study, much more so than what is expected from sampling error alone. The central purpose of conducting a meta-regression analysis is to explain this wide variation and to correct likely misspecification and selection biases in the process. To accommodate the rich complexity of economics research, MRA models (Equ. 2.1 and 2.2) can be combined and expanded to get:

effect_{i} = ${\text{\beta}}_{0}+{\sum}_{k}{\text{\beta}}_{k}{Z}_{ki}+{\alpha}_{0}$SE_{i} + ${\sum}_{j}{\alpha}_{j}{K}_{ji}$SE_{i} + ${\epsilon}_{i}$ (2.3)

The Z-variables represent heterogeneity or misspecification bias, and the K-variables allow for differential publication selection bias. For example, it is possible that estimates reported in the top journals contain greater selection bias because these journals are more selective. Young et al. (2008) have suggested that there is a ‘winner's curse’ among reported scientific findings. Competition among researchers for the scarce space in the top academic journals allows editors and reviewers to demand ‘more extreme, spectacular results’ (Young et al. 2008: 3), and Costa-Font et al. (2013) confirm this winner's curse in two areas of health economics.

For minimum-wage employment effects, we coded 22 moderator variables and allowed each of them to have genuine heterogeneity (that is, to be a Z-variable) and to affect the intensity of selection (that is, to be a K-variable) (Doucouliagos/Stanley 2009). All 44 variables were included in our MRA and the one with the largest p-value was removed, one at a time, until all remaining variables were statistically significant. This general-to-specific (G-to-S) modeling process identified eleven Z-variables that are related to heterogeneity and three K-variables that are associated with differential selection for statistical significance. After substituting any defensible values into this multiple MRA, no evidence of a practically significant adverse employment effect was found. Our overall finding is that this extensive area of research contains no evidence of an adverse employment effect from minimum wage raises. This finding is very robust. See Doucouliagos/Stanley (2009) for a more detailed discussion of the modeling and interpretation of this meta-regression analysis. Of course, these policy implications apply only to the modest single- and double-digit percent raises that have been studied in the US minimum-wage literature. They should not be extrapolated to a doubling or tripling of the minimum wage. Nonetheless, a separate meta-analysis of the UK minimum-wage literature also fails to find an overall adverse employment effect (de Linde Leonard et al. 2013).
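The general-to-specific procedure itself is mechanical. The following sketch (synthetic data; the moderators `z1` and `z2` are hypothetical) repeatedly drops the moderator with the smallest |t| until every survivor is statistically significant:

```python
import numpy as np

def wls_fit(y, X, w):
    """WLS coefficients and t-values, with weights w = 1/SE^2."""
    Xw, yw = X * np.sqrt(w)[:, None], y * np.sqrt(w)
    XtX_inv = np.linalg.inv(Xw.T @ Xw)
    beta = XtX_inv @ Xw.T @ yw
    resid = yw - Xw @ beta
    s2 = resid @ resid / (y.size - X.shape[1])
    return beta, beta / np.sqrt(s2 * np.diag(XtX_inv))

def general_to_specific(y, X, w, names, t_crit=2.0):
    """Drop the least significant moderator (never the intercept) until all |t| >= t_crit."""
    names = list(names)
    while True:
        beta, t = wls_fit(y, X, w)
        cand = np.abs(t[1:])                 # intercept is exempt from removal
        if cand.size == 0 or cand.min() >= t_crit:
            return dict(zip(names, beta))
        drop = 1 + int(np.argmin(cand))
        X = np.delete(X, drop, axis=1)
        del names[drop]

rng = np.random.default_rng(4)
n = 300
se = rng.uniform(0.05, 0.5, n)
z1 = rng.integers(0, 2, n).astype(float)     # genuine moderator
z2 = rng.integers(0, 2, n).astype(float)     # irrelevant moderator
y = 0.4 - 0.25 * z1 + rng.normal(0.0, se)
X = np.column_stack([np.ones(n), z1, z2])
final = general_to_specific(y, X, 1.0 / se**2, ["const", "z1", "z2"])
print(final)   # z2 is typically eliminated; z1 and its coefficient survive
```

This is only a sketch of the G-to-S idea; the published MRAs use full p-values and far richer sets of moderators.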

When MRA model (Equ. 2.3) is applied to efficiency wages, G-to-S leads to the multiple MRA reported in Table 2. Whether potential simultaneity between wages and productivity is addressed (Simul) is quite important. Critics of the efficiency wage hypothesis (EWH) have suggested that increased productivity leads to higher wages, not the other way around, and that it is this failure to account fully for rent sharing that gives false evidence for EWH (Blanchard/Summers 1986). However, our meta-regression finds just the opposite. Those studies that explicitly accommodate the simultaneity between wages and worker productivity report larger EWH effects (t = 3.73; p < 0.001). Also, studies that include a measure of capital (Cap) find larger efficiency-wage elasticities (t = 2.77; p < 0.01). This too strengthens EWH. Because studies in this literature estimate an enhanced production function, omitting capital would cause the other estimated coefficients (such as the efficiency–wage elasticity) to be biased. Again, studies that avoid these important sources of bias report marginally larger efficiency–wage elasticities. Together, these two effects imply an efficiency–wage elasticity of 0.31 (F_{(2,68)} = 314.7; p < 0.001), which is nearly identical to the simple corrections for publication bias reported in Table 1 and discussed in Section 2.3.

Table 2: Multiple MRA of the efficiency–wage hypothesis – model (Equ. 2.3)
Note: t-values are reported in parentheses.

Like the simple FAT-PET-MRA (Table 1), this multiple MRA (Table 2) contains clear indications of selection for statistically positive efficiency–wage elasticities. Overall, there is evidence of substantial publication bias (${\widehat{\alpha}}_{0}=1.87$; t = 3.59; p < 0.01) that is much larger still when productivity and wages are aggregated to the industry level (Industry·Se; t = 2.39; p < 0.05) or if value added is used as the measure of worker productivity (ValAdded·Se; t = 2.20; p < 0.05). Lastly, selection for significantly positive efficiency–wage elasticities is less for those studies that use relative wages (RelWage·Se; t = −3.42; p < 0.01). When potential publication selection is filtered from the empirical literature on efficiency wages using this multiple MRA, we estimate the efficiency–wage elasticity to be 0.31, CI: (0.28; 0.33).

It is also routine to accommodate any potential dependence among reported estimates. Otherwise, the MRA results might be biased and their significance overstated. Column 2 of Table 2 allows for dependence among the estimates reported within a given study, and it confirms all of the WLS-MRA findings reported in column 1 of Table 2. This random-effects multilevel model is the same as a random-effects unbalanced panel model. The last column of Table 2 reports the same MRA model using a robust regression that minimizes the impact of any one or few studies. It too confirms the results discussed above.

The crucial overall message is that there is clear support for the efficiency–wage hypothesis whether or not one corrects for publication bias, whether or not we employ a simple or complex MRA, whether or not within-study dependence is accommodated, whether or not outliers and influence points are omitted. The existence of an efficiency–wage effect is as robust and as clear as the absence of an adverse minimum-wage employment effect.

3 DOES CONVENTIONAL ECONOMICS ADD UP?

We shall take it as falsified only if we discover a reproducible effect which refutes the theory. In other words, we only accept the falsification if a low-level empirical hypothesis which describes such an effect is proposed and corroborated. This kind of hypothesis may be called a falsifying hypothesis. (Popper 1959: 86–87)

How does the empirical evidence stack up for the neoclassical theory of the firm? In economics, there is great variation in the outcomes of any empirical investigation, and any experiment, test, or observation is subject to many potential biases. Thus, single studies are unlikely to be convincing. However, a comprehensive, rigorous, and objective (that is, replicable) analysis of a mature area of empirical economics research must be taken as credible evidence. Otherwise, empirical evidence has no role to play, and economics cannot progress.

Our meta-regression analysis of the employment effect of raising the US minimum wage offers such an evaluation of a mature area of empirical economics (Doucouliagos/Stanley 2009). Its findings are very robust and clear. There are no meaningful adverse employment effects from raising the minimum wage, confirming Card/Krueger (1995a and 1995b). These findings fly in the face of profit maximization and the neoclassical theory of the firm.

One central implication of profit maximization and the theory of the firm is that there will be a downward-sloping demand for labor and therefore an adverse employment effect when the minimum wage is raised. Conventional economists themselves consider this implication so central that Card and Krueger's original finding of an absence of adverse employment effects from the minimum wage caused much controversy in the 1990s, and an unprecedented letter was sent to all members of the AEA urging them to dismiss Card and Krueger's research.

What might explain the clear absence of an employment effect? The efficiency–wage hypothesis can supply the missing explanation. With efficiency wages, the traditional labor demand relation is radically altered. Higher wages lead to higher productivity, which shifts the conventional neoclassical labor demand curve upwards and compensates for higher labor costs. ‘[E]fficiency–wage theory can provide a plausible explanation of the absence of any adverse employment effect (Akerlof 1982; 2002)’ (Doucouliagos/Stanley 2009: 422).

As shown above, a meta-regression analysis of the efficiency–wage literature finds a strong and robust confirmation of EWH. Thus we have two logically linked, comprehensive and rigorous meta-regression analyses of empirical economics literatures. The first clearly rejects a main implication of neoclassical theory of the firm. The second finds an equally strong confirmation of a rival hypothesis that can explain the findings from the first. Together, one could argue that these two MRAs offer a comprehensive and rigorous rejection of neoclassical profit maximization.

However, it is unlikely that neoclassical economists will accept this interpretation of these two very clear empirical records. In the past, conventional economics has been very resilient and flexible, often by modifying conventional theory to absorb any criticism. Here, the efficiency–wage hypothesis may be rendered ‘neoclassical’ by allowing workers' effort to be another ‘input’ in the production function that is endogenously influenced by wages and then to maximize profit in the usual way. This defense, however, has its own implications that can, in turn, be tested. When worker productivity is allowed to be determined by wages, the output elasticity with respect to wages must be equal to labor's share, ℓs.

To see this, recall the conventional formulation of production and profit functions. When efficiency wages are modeled, the production function includes effort-augmented labor, Q = f(e(w)L, K), where worker effort e depends on the wage w. As always, profit (π) is: $\text{\pi}=PQ-wL-rK$. To simplify, let Y be production measured in some monetary unit (that is, Y = PQ). This is how production is measured empirically.^{1} It is rather easy to see that the first-order conditions for profit maximization require that $\partial Y\text{/}\partial w=L$. Now, if we multiply this first-order condition by w and divide by Y, we find that profit maximization forces the wage elasticity of production to equal labor's share, ℓs:

$\frac{\partial Y}{\partial w}\cdot \frac{w}{Y}=\frac{wL}{Y}=\ell s$ (3.1)

When wages increase by x percent, total costs will go up by xℓs percent, at the margin; thus, the value of production must also increase by xℓs percent to ‘break even,’ at the margin, and thereby to maximize profits. Equation (3.1) tells us that the very elasticity that is typically reported in the efficiency–wage literature must be equal to labor's share if EWH is to be consistent with profit maximization.
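The derivation behind Equation (3.1) can be spelled out step by step (a sketch, treating Y directly as revenue and suppressing the price level):

```latex
\begin{align*}
\pi &= Y\bigl(e(w)L,\,K\bigr) - wL - rK
      && \text{profit with wage-dependent effort } e(w)\\
\frac{\partial \pi}{\partial w} &= \frac{\partial Y}{\partial w} - L = 0
      && \text{first-order condition in } w\\
\frac{\partial Y}{\partial w} &= L
      && \text{multiply by } w/Y \text{ to form an elasticity}\\
\frac{\partial Y}{\partial w}\cdot\frac{w}{Y} &= \frac{wL}{Y} = \ell s
      && \text{wage elasticity of production equals labor's share}
\end{align*}
```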

So is the wage elasticity of productivity actually equal to the labor share? A widely held ‘stylized fact’ is that the labor share is approximately two-thirds. Labor's share has been falling recently in the US and should not, therefore, be considered a constant. Nonetheless, it remains close to 0.6 or higher (Krueger 1999). Our meta-regression of efficiency wages estimates this wage elasticity of production to be about half of this value — quite close to 0.3, regardless of which corrected estimate one chooses. Here again, we find an empirical refutation of neoclassical profit maximization, even if wages are allowed to affect worker productivity.

A second pair of similarly linked meta-regression analyses has been claimed to provide a clear falsification of the natural rate hypothesis (NRH) (Stanley 2004; 2005). In 2001, several electronic databases were searched for any test or empirical study of either ‘NAIRU’ (non-accelerating inflation rate of unemployment) or the ‘natural rate.’ Thirty-four tests of the restrictions that NRH implies for macroeconomic relations were identified (Stanley 2005). After converting these test results to a common metric that is N(0,1) under the null hypothesis that NRH is correct, I found that the average test statistic provides a clear rejection of NRH. More revealing is that the strength of the evidence against NRH gets stronger as the sample size (or degrees of freedom) increases. This is the empirical trace of statistical power that arises only when the null hypothesis is false. Further ancillary research patterns corroborate this central finding. Those studies that fail specification tests or omit inflation as an explanatory variable provide marginally greater support for NRH (Stanley 2005).
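The ‘trace of statistical power’ is simple arithmetic: when the null hypothesis is false by some amount d, the expected t-statistic grows like the square root of the sample size, while under a true null it hovers near zero no matter how large n becomes. A back-of-envelope illustration (d and sd are hypothetical):

```python
import math

d, sd = 0.2, 1.0                 # hypothetical true deviation from the null
# E[t] ~ d * sqrt(n) / sd when the null is false; ~0 when it is true
expected_t = {n: d * math.sqrt(n) / sd for n in (25, 100, 400, 1600)}
print(expected_t)                # roughly doubles each time n quadruples
```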

Also in 2001, a second meta-regression analysis was conducted on NRH's falsifying hypothesis – unemployment hysteresis (Stanley 2004). Full hysteresis is the idea that the unemployment rate has a unit root or, equivalently, that it is nonstationary. In the simple auto-regressive, AR(1), model of the unemployment rate, ${U}_{t}$, we have

$${U}_{t}={\gamma}_{0}+{\gamma}_{1}{U}_{t-1}+{\varepsilon}_{t}$$

and ${\gamma}_{1}\ge 1$. ‘If the unemployment rate has a unit root, it would have no tendency to return to any previous reference point. Hence, “hysteresis”, defined as a unit root, [is] a “falsifying hypothesis” to the NRH’ (Stanley 2004: 591). When the unemployment rate is nonstationary, there is no NAIRU towards which unemployment is being pulled. Rather, it is the perfectly flexible NAIRU that adjusts to past unemployment rates (Stanley 2002; 2004).
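
The difference between a stationary unemployment rate and full hysteresis can be made concrete with a short AR(1) simulation. The parameter values below are illustrative assumptions, not estimates from the literature.

```python
# A sketch contrasting a stationary AR(1) unemployment rate with full
# hysteresis (a unit root, gamma_1 = 1); all parameter values are
# illustrative assumptions, not estimates from the literature.
import numpy as np

def simulate_unemployment(gamma0, gamma1, periods=400, u0=6.0, seed=1):
    rng = np.random.default_rng(seed)
    u = [u0]
    for _ in range(periods):
        u.append(gamma0 + gamma1 * u[-1] + rng.normal(0.0, 0.3))
    return np.array(u)

# Stationary case: shocks die out and the rate is pulled back toward a
# 'natural rate' of gamma0 / (1 - gamma1) = 0.5 / 0.1 = 5.0
stationary = simulate_unemployment(gamma0=0.5, gamma1=0.9)

# Unit root (full hysteresis): no reference point to return to -- the
# series wanders, and the flexible NAIRU simply follows past unemployment
hysteresis = simulate_unemployment(gamma0=0.0, gamma1=1.0)

print(round(stationary[-100:].mean(), 2))  # hovers near the natural rate
print(round(hysteresis.std(), 2))          # far more dispersed than the stationary path
```

In the stationary case there is a fixed point toward which the rate reverts; with a unit root no such reference point exists, which is precisely why hysteresis is a falsifying hypothesis for the NRH.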

Overall, there is clear evidence of nonstationarity among the 99 estimates of unemployment persistence identified by Stanley (2004). Both the rate of convergence as the sample size increases and the point of convergence are consistent with unemployment hysteresis. Those econometric models that have more information or pass additional specification tests are less likely to reject unemployment hysteresis.

Again, we have two logically linked, comprehensive and rigorous meta-regression analyses. The first clearly rejects a core mainstream macroeconomic theory – the natural rate hypothesis. The second finds an equally strong confirmation of a rival hypothesis, unemployment hysteresis, that can explain the findings from the first. Together, one could argue that these two MRAs offer a sophisticated and comprehensive rejection of the natural rate hypothesis (Stanley 2004). If two linked meta-analyses of mature areas of research cannot falsify an economic theory, then what can?

4 CONCLUSION

Meta-regression analysis is a very powerful tool for understanding economics research. It provides an objective summary and evaluation of empirical economics. But meta-regression analysts never stop there. MRA can also offer an explanation of diverse, often disparate reported research findings and correct the empirical record for obvious misspecification and selection biases.

Meta-regression analysis takes both economics research and econometrics as they are. Rather than using external philosophical or economic criteria for evaluating the current state of a given area of economics research, basic statistics and econometrics are turned inward to integrate, explain, and understand the economics research record itself. Meta-regression analysis is meta-econometrics.

Individual MRAs need to be as objective, comprehensive and rigorous as possible. By using econometric methods, meta-regression analysis summarizes, explains, tests, and corrects reported econometric results. In this way, economics can become genuinely empirical. Mainstream economists cannot object to an internal evaluation of their research record when it uses the very tools that produced this research record.

This is not to deny the important theoretical critique of the Cambridge capital controversy (Cohen/Harcourt 2003). Meta-analysis is empirical, and it must accept empirical evidence as reported if it is to summarize and evaluate a given area of economics research. By accepting, at least provisionally, the current practice of using monetary measures of production to calculate efficiency-wage elasticities, we find evidence that conflicts with the neoclassical theory of the firm and profit maximization.

REFERENCES

Akerlof, G.A., 'Labor contracts as partial gift exchange' (1982) 97 Quarterly Journal of Economics: 543-569.

Cohen, A. & Harcourt, G.C., 'Retrospectives: whatever happened to the Cambridge capital theory controversies?' (2003) 17 Journal of Economic Perspectives: 199-214.

de Linde Leonard, M., Stanley, T.D. & Doucouliagos, H., 'Does the UK minimum wage reduce employment? A meta-regression analysis', British Journal of Industrial Relations, forthcoming.

Doucouliagos, C.(H.) & Stanley, T.D., 'Theory competition and selectivity: are all economic facts greatly exaggerated?' (2013) 27 Journal of Economic Surveys: 316-339.

Moreno, S.G., Sutton, A.J., Ades, A., Stanley, T.D., Abrams, K.R., Peters, J.L. & Cooper, N.J., 'Assessment of regression-based methods to adjust for publication bias through a comprehensive simulation study' (2009a) 9(2) BMC Medical Research Methodology, URL: http://www.biomedcentral.com/1471-2288/9/2.

Moreno, S.G., Sutton, A.J., Turner, E.H., Abrams, K.R., Cooper, N.J., Palmer, T.M. & Ades, A.E., 'Novel methods to deal with publication biases: secondary analysis of antidepressant trials in the FDA trial registry database and related journal publications' (2009b) 339 British Medical Journal: 494-498.

Stanley, T.D., 'Meta-regression methods for detecting and estimating empirical effects in the presence of publication bias' (2008) 70 Oxford Bulletin of Economics and Statistics: 103-127.

Stanley, T.D. & Doucouliagos, C.(H.), 'Identifying and correcting publication selection bias in the efficiency–wage literature: Heckman meta-regression' (School Working Paper, Economics Series 2007–11, Deakin University, 2007).

Stanley, T.D., Doucouliagos, H., Giles, M., Heckemeyer, J.H., Johnston, R.J., Laroche, P. & Nelson, J.P., 'Meta-analysis of economics research reporting guidelines' (2013) 27 Journal of Economic Surveys: 390-394.

Stanley, T.D., Jarrell, S.B. & Doucouliagos, C.(H.), 'Could it be better to discard 90% of the data? A statistical paradox' (2010) 64 American Statistician: 70-77.

Sterne, J.A. & Egger, M., 'Funnel plots for detecting bias in meta-analysis: guidelines on choice of axis' (2001) 54 Journal of Clinical Epidemiology: 1046-1055.

Sterne, J.A.C., Sutton, A.J., Ioannidis, J.P.A., Terrin, N., Jones, D.R. & Lau, J., 'Recommendations for examining and interpreting funnel plot asymmetry in meta-analyses of randomized controlled trials' (2011) 343 British Medical Journal: d4002.