1 INTRODUCTION
The General Theory of Employment, Interest and Money (Keynes 1936) is sometimes credited with creating modern macroeconomics. Today that claim appears highly questionable, for two reasons. The first relates to policy. We have once again found ourselves in a global liquidity trap of just the kind that The General Theory was designed to explain and avoid. Yet policy-makers have prolonged that liquidity trap by doing precisely the opposite of what The General Theory recommended. Can anyone doubt that Keynes would be turning in his grave seeing the current obsession with fiscal austerity?
There is of course nothing that compels policy-makers to base their decisions on macroeconomic theory, and a large part of what is currently going on may just be politicians using populist analogies between state and household budgets to achieve goals to do with the size of the state. Yet we have to ask whether they would be able to get away with that if macroeconomists were united in their opposition to fiscal austerity. Instead, the economics profession appears much more divided. This in turn reflects the second reason why the importance of The General Theory to modern macroeconomics might be questioned. The reality is that most academic macroeconomists no longer regard The General Theory as the defining text of their discipline. If they had to name one, they might be more likely to choose one of the seminal texts of the New Classical revolution: Lucas and Sargent (1979), which is aptly titled ‘After Keynesian Macroeconomics’.
Much discussion of current divisions within macroeconomics focuses on the ‘saltwater/freshwater’ divide. This understates the importance of the New Classical Counter Revolution (hereafter NCCR). It may be more helpful to think about the NCCR as involving two strands. The one most commonly talked about involves Keynesian monetary and fiscal policy. That is of course very important, and played a role in the policy reaction to the recent Great Recession. However, I want to suggest that in some ways the second strand, which was methodological, is more important. The NCCR helped completely change the way academic macroeconomics is done.
Before the NCCR, macroeconomics was an intensely empirical discipline: something made possible by the developments in statistics and econometrics inspired by The General Theory. After the NCCR and its emphasis on microfoundations, it became much more deductive. As Hoover (2001, p. 72) writes, ‘[t]he conviction that macroeconomics must possess microfoundations has changed the face of the discipline in the last quarter century’. In terms of this second strand, the NCCR was triumphant and remains largely unchallenged within mainstream academic macroeconomics.
To understand the position of The General Theory today, and why so many policy-makers felt they had to go back to it to understand the Great Recession, we need to understand the NCCR, and why it was so successful.
One explanation, which I consider in Section 2, could be summed up in Harold Macmillan's phrase ‘events, dear boy, events’. Just as the inability of economists to understand the Great Depression gave rise to The General Theory and the Keynesian consensus that followed, the Great Inflation of the 1960s and 1970s undermined that Keynesian consensus. The suggestion is that the NCCR was a response to the empirical failure of the Keynesian consensus. I will argue instead that the NCCR was primarily driven by ideas rather than events.
Section 3 considers the methodological revolution brought about by the NCCR and the microfoundations research strategy that it championed. Section 4 argues that this research methodology weakened the ability of macroeconomics to respond to the financial crisis and the Great Recession that followed. Although microfounded macroeconomic models are quite capable of examining financial and real economy interactions, and there is a huge amount of recent work, it is important to ask why so little was done before the crisis. I will suggest that a focus on explaining only partial properties of the data and an obsession with internal consistency are partly to blame, and this focus comes straight from the methodology.
Many economists involved with policy have commented that they found texts written prior to the NCCR, like The General Theory, or texts written by those outside the post-NCCR mainstream, more helpful during the crisis than mainstream macroeconomics based on microfounded models. Does that mean that macroeconomics needs to discard the innovations brought about by the NCCR, just as that revolution wanted to discard so much of Keynesian economics? In Section 5 I argue that that would make exactly the same mistake as the NCCR made. We do not need a discipline punctuated by periodic revolutions. The mistake that the discipline made in the 1980s and 1990s was to cast aside, rather than supplement, older ways of doing things. As experience in the UK shows, it is quite possible for microfounded modelling to coexist with other ways of modelling and estimation, and for different approaches to learn from each other.
2 IDEAS, NOT EVENTS
There is a widely held view of macroeconomic revolutions that goes as follows. The General Theory arose out of the failure of the then mainstream economic view (classical economics) to explain the Great Depression (persistently high involuntary unemployment). The New Classical Counter Revolution (NCCR) arose out of the failure of the mainstream (by then Keynesian macro) to explain the Great Inflation and stagflation (high unemployment and inflation). Some have used this logic to suggest the global financial crisis might spur another revolution.
There is a major problem with this view. Whereas the Keynesian revolution explained why the Great Depression happened (as a failure of aggregate demand), the NCCR did not explain stagflation. What subsequently explained stagflation came from within the existing Keynesian paradigm: a combination of Friedman's vertical long-run Phillips curve and the reluctance of governments and central banks to use policy to raise unemployment sufficiently to control inflation. In contrast, the Real Business Cycle models that were the initial result of the NCCR had no explanation of inflation at all beyond naive neutrality.
We can set this out more formally as follows. A standard account of scientific revolutions can be simplified to the following sequence:
Theory A explains body of evidence X.
Important additional evidence Y comes to light (or just happens).
Theory A cannot explain Y, or can only explain it by means which seem contrived or ‘degenerate’. (All swans are white, and the black swans you saw in New Zealand are just white swans after a mud bath.)
Theory B can explain X and Y.
After a struggle, theory B replaces A.
For stage 3, traditional Keynesian theory could and subsequently did account for stagflation. The way it did this is usually associated with Friedman's 1968 presidential address, or with Phelps, although as Forder (2014) points out the need to augment the Phillips curve with a term in expected inflation was not an insight unique to these two authors. He also shows that the idea that policy-makers had previously been working with a non-inflation augmented Phillips curve in the belief that they could get lower unemployment at the cost of only a small permanent increase in inflation seems to be largely a myth.
Lakatos (1970) talks about progressive and degenerate responses of a dominant theory to new events, but I think Zinn (2013) argues convincingly that changes to Keynesian theory to account for stagflation were progressive. For example, Tobin (1980) contains a clear account of how a restrictive monetary policy could reduce inflation without permanently raising unemployment using the NAIRU concept. I would argue that this adaptation was still empirically inadequate and further progress needed rational expectations, but as I note below it is perfectly possible to incorporate rational expectations within the traditional Keynesian framework. The idea of the long-run vertical Phillips curve came from thinking about microeconomic theory, but innovations in traditional Keynesian macroeconomics had always come from an eclectic mixture of microeconomic theory and evidence, as Fair (2012) points out.
More critically, stage 4 did not happen: New Classical models were not able to explain the behaviour of output and inflation in the 1970s and 1980s, let alone the Great Depression. In the contest to explain stagflation, traditional theory adapted in a progressive way beat the newcomer hands down. So, according to the standard account, the NCCR should have been a revolution that was very quickly seen to fail. Yet the opposite is the case.
To understand why, we just need to study what could be regarded as the seminal text of the NCCR: Lucas and Sargent (1979 – hereafter LS), aptly titled ‘After Keynesian Macroeconomics’. The article itself is both clear and well argued. Any macroeconomist under the age of 50 will probably recognise much of this discussion as what they were taught as part of the macroeconomics postgraduate course.
LS start their article with references to stagflation and the failure of Keynesian theory. A fundamental rethink is required. What follows next is crucial. If the Counter Revolution was all about stagflation, we might expect an account of why conventional theory failed to predict stagflation: the equivalent, perhaps, to the discussion of classical theory in The General Theory. Instead we get something much more general: a discussion of why identification restrictions typically imposed in the Structural Econometric Models (SEMs)1 of the time are incredible from a theoretical point of view, and an outline of the Lucas critique. The essential criticism in LS is methodological: the way empirical macroeconomics has been done since Keynes is flawed.
LS describe the empirical failure of Keynesian models in these terms:

Though not, of course, designed as such by anyone, macroeconometric models were subjected to a decisive test in the 1970s. A key element in all Keynesian models is a trade-off between inflation and real output: the higher is the inflation rate, the higher is output (or equivalently, the lower is the rate of unemployment). For example, the models of the late 1960s predicted a sustained U.S. unemployment rate of 4% as consistent with a 4% annual rate of inflation. Based on this prediction, many economists at that time urged a deliberate policy of inflation. Certainly the erratic ‘fits and starts’ character of actual U.S. policy in the 1970s cannot be attributed to recommendations based on Keynesian models, but the inflationary bias on average of monetary and fiscal policy in this period should, according to all of these models, have produced the lowest unemployment rates for any decade since the 1940s. In fact, as we know, they produced the highest unemployment rates since the 1930s. This was econometric failure on a grand scale. (Lucas and Sargent 1979, p. 56)

There is no attempt to link this stagflation failure to the identification problems discussed earlier in their text. Indeed, they go on to say that they recognise that particular empirical failures (by inference, like stagflation) might be solved by changes to particular equations within traditional econometric models. Of course that is exactly what mainstream macroeconomics was doing at the time, with the expectations augmented Phillips curve.
This is why LS (ibid., p. 57) go on to say: ‘We have couched our criticisms in such general terms precisely to emphasise their generic character and hence the futility of pursuing minor variations within this general framework’. The rest of the article is about how, given additions like a Lucas supply curve, classical ‘equilibrium’ analysis may be able to explain the ‘facts’ about output and unemployment that Keynes thought classical economics incapable of explaining. It is not about how these models are, or even might be, better able than traditional Keynesian analysis to explain the particular problem of stagflation.
LS summarise their critique as follows:

First, and most important, existing Keynesian macroeconometric models are incapable of providing reliable guidance in formulating monetary, fiscal and other types of policy. This conclusion is based in part on the spectacular recent failures of these models, and in part on their lack of a sound theoretical or econometric basis.

Reading the paper as a whole, I think it would be fair to say that these two parts were not equal. The focus of the paper is on the lack of a sound theoretical or econometric basis for traditional Keynesian structural econometric models, rather than on the failure to predict or explain stagflation.
Trying to see the NCCR as being primarily inspired by empirical events fails to understand both the nature of that revolution and also why it was so successful. It may also inspire false hopes among some who believe that the global financial crisis must lead to a new revolution in macroeconomic thought and practice.
3 TWO STRANDS OF THE REVOLUTION
Although the NCCR used the failure at the time to control inflation as useful ammunition, the revolution itself was about ideas that had little to do with that empirical failure. There was one idea in particular that was central to their critique, and that was rational expectations. Rational expectations were used in what can be seen as two strands of the NCCR: an attack on Keynesian theory and policy, and an attack on the methodology of macroeconomic analysis. From the perspective of today, the first strand of the revolution was ultimately unsuccessful, but the second strand was triumphant.
As there are still some macroeconomists who argue against rational expectations, it is important to understand why the concept was so appealing to the new generation of macroeconomists in the 1980s. It was not because rational expectations performed better empirically. Instead the attraction was theoretical. Rational expectations are about the optimal use of information in forming expectations. In microeconomic theory optimisation is standard: utility maximisation, profit maximisation, etc. Rational expectations simply apply the same logic to an agent's forecasting. It is as natural a starting point as profit or utility maximisation. Of course, you could argue that in particular cases information is costly, or processing it is costly, but you could equally argue that it is unrealistic for firms to know their demand curve when maximising profits. Lucas (1987) described rational expectations as a ‘consistency axiom’, and to most students studying micro as well as macro theory that is exactly what it appeared to be.
In practice macroeconomists building models were faced with a binary choice: assume rational expectations or some form of adaptive expectations. To continue to choose the latter treated agents as naive, on a par with assuming that price mark-ups on average costs are forever fixed, or that the marginal propensity to consume out of current income is always constant. Critically, it was inconsistent with how microeconomic theory typically treated agents when they made other decisions. Of course, in reality agents did not know the true model but instead learnt about it, and so a large literature on learning developed, to which Sargent has been a major contributor. The drawbacks of assuming adaptive expectations were vividly exposed when rational expectations were added to a traditional Phillips curve.
As we have already noted, stagflation could be explained by ideas already within mainstream Keynesian analysis: an expectations augmented Phillips curve which was vertical in the long run. In that Phillips curve, inflation at time t depends on expectations of inflation at time t, and the deviation of unemployment/output from its natural rate. This formulation was widely used, not just in large-scale econometric models but also in smaller analytical models used by macroeconomic theorists. If you wanted a version of the Keynesian model that had something to say about inflation (and by the 1970s most people did) this was the model to which most people turned.
Before the NCCR, expected inflation was proxied by some combination of past inflation rates. If instead you use rational expectations, it is easy to show that deviations from the natural rate are random. If that is the case, Keynesian economics becomes irrelevant. It was no wonder that many Keynesian macroeconomists at the time saw rational expectations (and therefore all things New Classical) as an existential threat. For a time the New Classical attack on the traditional Keynesian model seemed fatal.
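To make that step concrete, here is a minimal sketch; the notation is mine and is not taken from any of the texts discussed. Write the traditional expectations-augmented Phillips curve as

\[
\pi_t = \pi_t^e + \alpha\,(y_t - y_t^n) + \varepsilon_t, \qquad \alpha > 0,
\]

where \(\pi_t^e\) is expected inflation for period \(t\), \(y_t - y_t^n\) is the deviation of output from its natural level, and \(\varepsilon_t\) is a shock. Under adaptive expectations \(\pi_t^e\) is some weighted average of past inflation rates. Under rational expectations \(\pi_t^e = E_{t-1}\pi_t\), and rearranging gives

\[
y_t - y_t^n = \frac{1}{\alpha}\left[(\pi_t - E_{t-1}\pi_t) - \varepsilon_t\right],
\]

so the output gap depends only on the inflation surprise and the shock, neither of which can be forecast: deviations from the natural rate are random, which is the result that appeared to make Keynesian economics irrelevant.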
What we now know is that the problem lay not with rational expectations or Keynesian economics, but with the traditional Phillips curve. If you replace expectations of inflation at time t by expectations of inflation at time t + 1, as is done in the New Keynesian Phillips curve, then with rational expectations deviations from the natural rate are no longer random, but can follow a persistence that has to be at the heart of any business cycle theory. If you try to derive a Phillips curve formally using microeconomic theory, then invariably it is expected inflation one period ahead, not expectations of current inflation, that should appear in the Phillips curve.
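Again as a sketch in my own notation, the New Keynesian Phillips curve instead takes the form

\[
\pi_t = \beta\,E_t \pi_{t+1} + \kappa\,(y_t - y_t^n), \qquad 0 < \beta < 1, \; \kappa > 0,
\]

which, solved forward, gives

\[
\pi_t = \kappa \sum_{j=0}^{\infty} \beta^{\,j}\, E_t\big(y_{t+j} - y_{t+j}^n\big).
\]

Nothing here forces the expected output gap to be zero: inflation depends on the whole expected future path of the gap, and the gap itself can follow the persistent, demand-driven process that any business cycle theory requires.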
As a result, although the NCCR led some to briefly announce the death of Keynesian economics, over the next two decades a body of analysis was developed under the umbrella of New Keynesian theory that demonstrated that the basic insight in The General Theory about the importance of aggregate demand in driving business cycles is quite compatible with rational expectations, and for that matter other theoretical developments brought in by the NCCR. Indeed, for most of those interested in monetary policy, New Keynesian economics became the dominant mainstream view, leading some in the early 2000s to announce a new consensus in macroeconomics. That proved a little premature, but it certainly reflected the view of economists working within and with central banks.
New Keynesian theory essentially involves incorporating price stickiness into the Real Business Cycle (RBC) models that the NCCR had inspired. So New Keynesian models typically assume rational expectations and a consumption Euler equation based on intertemporal optimisation by consumers. More importantly, New Keynesian models are claimed to be as microfounded as RBC models. In other words, they follow the methodology outlined by LS. This development is sometimes described as New Keynesian economists being forced to dress Keynesian ideas in New Classical clothes to enable their work to be accepted. While that might be true in some cases, it is probably misleading in most. New Keynesian economists adopted the methodology they did because they thought it was the right way to proceed.
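The consumption building block can be sketched as follows (again the notation is mine). A household choosing consumption \(C_t\) to maximise expected discounted utility subject to an intertemporal budget constraint satisfies the Euler equation

\[
u'(C_t) = \beta\,(1 + r_t)\,E_t\,u'(C_{t+1}),
\]

where \(\beta\) is the discount factor and \(r_t\) the real interest rate. With constant relative risk aversion utility this log-linearises into the familiar forward-looking relationship in which current consumption depends on expected future consumption and the real interest rate, rather than simply on current income.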
This shows us that, although the NCCR revolution failed in ending Keynesian analysis, it succeeded in fundamentally changing the way academic macroeconomics is done. Once again rational expectations was crucial. To see why, we need to recall how academic macroeconomists typically plied their trade in the 1970s. A large body of work involved mostly single equation time series econometric estimation of aggregate equations for key relationships like consumption, investment and inflation. These estimated equations were the building blocks for a small number of large-scale econometric models (often called structural econometric models, or SEMs), sometimes maintained by policy-makers or private sector forecasters, and occasionally by academics. (For an example of a SEM maintained by an academic, see Fair 2012). Theoretical analysis of macroeconomic systems typically involved analysing small models involving aggregate relationships, where the individual equations were justified using an eclectic mixture of reference to this econometric work and (sometimes informal) appeals to theory. A seminal example is Blinder and Solow (1973).
The main focus of LS was an attack on SEMs. What became known as the Lucas critique showed that these models, by failing to clearly identify structural relationships and deep parameters, could give misleading answers if policy rules changed. In particular, if inflation expectations were modelled implicitly using terms in lagged inflation rather than with rational expectations, these estimated relationships might embody a particular policy regime, and might then be misspecified if the policy regime changed. LS was followed by Sims (1980), entitled ‘Macroeconomics and Reality’, where he questioned the basis of the identification restrictions typically imposed in time series econometrics. Once again, rational expectations were an important part of this critique.
Both of these critiques were successful because they also suggested an alternative way of doing academic work which avoided these criticisms. In the case of LS it was to explicitly derive models from optimisation by representative agents, and for Sims it was to estimate complete systems in an atheoretical way using vector autoregressions (VARs). The impact on academic practice in the United States was revolutionary. In the top journals, single equation econometric estimation was largely replaced by VARs, and all structural models (whether parametrised or not) had to be based on explicit microfoundations, now known as DSGE models.
It is no exaggeration to describe this as a methodological revolution. The way academic macroeconomics was done before the NCCR was empirically orientated: the emphasis was put on consistency with the data, or ‘external consistency’. Theoretical analysis using small aggregate models often justified the specification of individual aggregate equations with an appeal to econometric results as much as microeconomic theory. With SEMs, it was rare to keep an equation within the model when it had been ‘rejected by the data’, however that was defined. In that sense, external consistency was almost a criterion for admissibility. In contrast, empirical relationships were often justified by very informal theoretical arguments, and sometimes (particularly when it came to dynamics) no theoretical justification was provided at all. Some time-series econometricians used the ideas of Karl Popper to justify their approach.
RBC models, and their successor, dynamic stochastic general equilibrium (DSGE) models, are very different. Here it is essential that aggregate equations can be derived from microeconomic theory, and furthermore the theory behind each equation in the model has to be mutually consistent: what is often described as ‘internal consistency’. For acceptance in good academic journals, internal consistency is usually an admissibility criterion. In contrast external consistency is not required: it would not be unusual for a paper to include little or no reference to the data, beyond the fact that constraints in an optimisation problem were ‘plausible’ and that the properties of the model helped explain some empirical puzzle. This is a methodology that is much more deductive, and is closer to Daniel Hausman's (1992) account of microeconomics. Reference to the data is not completely absent from this methodology (just as reference to microeconomic theory was not absent from its predecessor), because the goal of new models is to account for some feature of the data (a puzzle) that existing theory is unable to explain. In that sense, the microfoundations (DSGE) research programme is progressive: the aim is to reduce the number of empirical puzzles.
Initially, the NCCR could only promise a research programme, based on these microfounded macromodels, that might at some later stage become empirically successful enough to guide policy. As policy-makers still saw the world through Keynesian eyes, RBC models were of little use. Ironically it was the advent of New Keynesian analysis, and DSGE models, that allowed the methodological revolution begun by New Classical economists to be completed. Now DSGE models could be used by central banks, and a DSGE model is currently the core model used by the Bank of England. (The US Fed, and the European Central Bank, are more eclectic in retaining a traditional SEM alongside DSGE models.)
Many economists would argue that the essential message of The General Theory therefore lives on in New Keynesian models. Others would disagree, but this is not a debate I want to enter here. The key point is that most academic macroeconomists today would see the foundation of their discipline as not coming from The General Theory, but as coming from basic microeconomic theory – arguably the ‘classical theory’ that Keynes was so keen to cast aside. The General Theory might be seen as having the insight that price stickiness and aggregate demand are critical in understanding the business cycle, but not as having a profound influence on the very nature of macroeconomics as a discipline.
4 MICROFOUNDATIONS AND THE GREAT RECESSION
Many have complained that DSGE models were of little use in understanding both the financial crisis and the Great Recession that followed. Some felt they understood more by going back to (re)read The General Theory. This is sometimes put down to a false sense of security that came from the Great Moderation. Research focused too much on getting the details of monetary policy in ‘normal’ times right, believing that abnormal crises were a thing of the past.
I want to suggest a more critical account. The narrowness of the research focus came in part from the methodology itself. In giving this critical account it is not my intention to argue that microfoundations modelling has not produced many important insights and achievements – in my view it clearly has, which is why I spent over a decade working with models of this kind. The problem as I see it is more that in becoming the only accepted way of doing serious macroeconomic research and policy analysis, it (quite deliberately) crowded out other more traditional approaches that – had they persisted – might have left the discipline in a better position to understand the impact of the financial crisis. Furthermore, these alternative approaches might have encouraged the development of DSGE models in what we now know to be more fruitful directions.
The fundamental drawback of the microfoundations project, as set out in the NCCR, is the priority given to internal theoretical consistency. Internal consistency becomes an admissibility criterion: papers with models that are not internally consistent, or where some relationships are ‘ad hoc’, should not appear in good journals. This has a number of consequences. First, lines of enquiry that are almost inevitably going to fail are pursued because empirical facts can be ignored. The most obvious example is RBC theory itself. The evidence that the business cycle is characterised by increases in involuntary unemployment in a recession is diverse and overwhelming, so attempting to fit models to the data that ignore this evidence is in great danger of being a waste of time.
A danger in any approach that allows clear empirical facts to be put on hold to be addressed at a later date (some might say just ignored) is that it makes it easier for ideological bias to influence research programmes. Once again RBC modelling is a clear case in point. For at least some who have pursued the RBC approach, the implication of these models – that business cycles require no state intervention to counteract them – seems to be a feature rather than a reflection of some empirical failing that needs to be addressed. Pfleiderer (2014) talks about ‘chameleon’ models: models built on assumptions with dubious connections to the real world but which lead to conclusions that are uncritically applied to understanding our economy. Although Pfleiderer's main concern is models in finance, I think RBC models qualify as chameleon models in macroeconomics.
Another more practical issue involved in microfounded modelling is that it is often difficult to ensure that a complete model is internally consistent. It took at least a decade for New Keynesian ideas to be analysed in sufficient depth so that modellers could argue that something like Calvo contracts could be regarded as microfounded. That has two consequences.
First, it leads to an inevitable temptation to put difficult issues to one side unless there is a compelling puzzle to solve that involves them. A clear case of that, which I will explore below, is how the financial sector interacts with the real economy. Second, it militates against complexity, even when there may be a good case that complexity is essential. This is particularly true if researchers take what I have called (Wren-Lewis 2011) a microfoundations purist position.
This purist position is exemplified by Chari et al. (2008). They argue that a lot of current DSGE modelling, including New Keynesian analysis, is not well founded in terms of deep parameters and clearly identifiable shocks. As I note in Wren-Lewis (2011), New Keynesian models do in an important sense compromise the ideal of internal consistency, because they allow a proof of internal consistency by indirect reference (sometimes referred to by modellers as ‘short cuts’ or ‘tricks’). For example it is suggested that Calvo contracts reflect what would happen if firms faced menu costs, but menu costs are not directly included in the model. Chari et al. believe it is better to stick with simple models where the microfoundations are clear, and give policy advice using these models. It is a testament to the success of the NCCR that arguments of this kind can be made by economists with a deservedly high reputation, because the argument amounts to little more than saying that if we do not understand something, it is best when formulating policy to assume that it does not exist! While there is good evidence from the journals that probably a majority of macroeconomists would not take this purist position, the perils of refereeing mean that the purists' influence on practice outweighs their numbers.
In contrast, other more traditional forms of modelling are far more adaptable. Innovations that improve the ability to explain the data can be incorporated into a SEM much more quickly, because there is no requirement to check whether this innovation has implications for other relationships. The properties of small aggregate models can be altered very quickly to reflect empirical regularities if there is no prior requirement to derive every relationship from individual optimisation behaviour in a mutually consistent way.
Of course, adaptation of this kind to SEMs or small aggregate models is not ideal, because internal consistency is not investigated. However that does not mean that such models are worse than DSGE models at generating realistic policy advice, because the DSGE models just ignore the empirical problem. (This is discussed further in Section 5.) Equally important, it shows that it is foolish to argue that one approach is always superior to the other. There is no reason why both cannot be pursued together. DSGE analysis can attempt to microfound relationships that appear empirically robust in SEMs, or important from a system perspective in analysis using small analytic models, while these models and SEMs can gradually incorporate advances in theory that come from DSGE analysis.
So far this is all abstract, with no clear indication that any of this mattered prior to the financial crisis. However in at least one, quite central, area of analysis we have clear indications that it may well have mattered. Macroeconomists have for some time understood the empirical deficiencies of the simple optimising model of consumer behaviour. In particular, aggregate consumption appears to show excess sensitivity to temporary changes in current income. One obvious reason for this, again commonly understood, is that agents may be credit-constrained.
This prompts an obvious question: are these credit constraints constant or do they change over time, and if they change what governs these changes? To answer that question within a DSGE model requires developing a microfounded model of the financial sector. That is hard, and perhaps would be regarded as a time-consuming diversion if your focus is on using your DSGE model just to examine business cycle properties. As a result, the way that many DSGE modellers handled this problem was to postulate the existence of a fixed proportion of ‘rule of thumb’ consumers, who simply consumed all of any change in their aggregate income.
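A stylised version of that device, with notation and the particular linear split chosen by me purely for illustration, has a fixed fraction \(\lambda\) of aggregate income going to rule-of-thumb consumers who spend it all each period:

\[
C_t = \lambda\,Y_t + (1 - \lambda)\,C_t^{o},
\]

where \(C_t^{o}\) is the consumption of optimising households obeying an Euler equation of the kind sketched earlier. Excess sensitivity of consumption to current income is captured by the single parameter \(\lambda\), but because \(\lambda\) is fixed the model is silent about how changing credit conditions alter that sensitivity over time.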
Since the financial crisis, of course, a huge amount of effort has gone into modelling the interactions between the financial and real sectors in a microfounded way. The key question is why most of this work did not occur before the crisis. The answer you will generally receive is that there was no compelling reason to do so, because obviously few believed a crisis would happen. However that ignores a very simple point. As work associated with John Muellbauer and Chris Carroll (for example Aron et al. 2010 and Carroll et al. 2012) demonstrates, it is very difficult to explain broad trends in the savings ratio over the last 30 years in the US or UK without taking changes in credit conditions into account. These trends in the savings ratio, which are quite inconsistent with the standard intertemporal model, were quite apparent before the financial crisis, yet as a result of detrending techniques they can be filtered out by empirical work using DSGE models.
In contrast, SEMs do not pre-filter the data, and so explaining these trends should be a key requirement for any consumption function in a model of this type. If more academics had been actively involved in SEMs and single equation econometric modelling, where fitting the unfiltered time series for consumption over a reasonable length of time is seen as important, this would almost certainly have become more widely known. Those operating these SEMs would have understood that to simply treat credit conditions as an exogenous trend, when it was so important in determining the behaviour of such a critical macroeconomic aggregate, was heroic and potentially dangerous. As a result, research would almost certainly have been directed towards understanding this key financial influence, which in turn should have led to a better understanding of financial/real interactions. Macroeconomics was only able to ignore all this because of the microfoundations methodology that downgrades the importance of external consistency.
If you need to understand financial/real interactions to account for the pre-crisis time series behaviour of consumption in both the US and UK, then it is very misleading to claim that mainstream macroeconomics largely ignored these interactions because there was no empirical reason to do otherwise. In fact these interactions could be ignored because mainstream methodology required only partial external consistency. Of course, even if these alternative approaches had received greater attention it is unlikely the global financial crisis could have been predicted, let alone prevented. However the more serious criticism of mainstream DSGE models at the time was that they were unable to analyse the nature and persistence of the subsequent recession, which depended a great deal on changing credit conditions and consumers’ reaction to them.
5 REVOLUTION OR EVOLUTION
Seen in terms of methodology rather than economic policy, the NCCR was highly successful within academia. Microfoundations, which previously had been seen as a gradualist improvement within traditional practice, became the only acceptable way among academics to model the economy. With the advent of New Keynesian theory, estimated versions of DSGE models began to be used by policy-makers to provide empirical policy advice, in some cases displacing more traditional empirical models. Gradually the idea took hold that you could only do serious policy analysis if you simulated a DSGE model.
More empirically based structural econometric models (SEMs) continued to be used, both in policy-making institutions and by private sector forecasters. However, academic involvement with them in the United States was discouraged, as it was believed that these models had been fatally discredited by the NCCR. Articles analysing the time series properties of individual macroeconomic aggregates became quite rare in the top journals.
I argued in the previous section that this methodological revolution directly contributed to macroeconomic research ignoring important interactions between the real and financial sectors, which in turn left it poorly placed to understand the consequences for the economy of the financial crisis. This is why many policy-makers who worked through this period can be so dismissive of DSGE modelling, and why many felt it was much more useful to go back to classics like The General Theory.
Although there has been widespread criticism of DSGE models from policy-makers and others, there is little pressure from within academic macroeconomics for change: the methodology is progressive, and researchers are devoting a good deal of time to examining real/financial interactions. I would argue that what should change is the view that all other, more empirically based, methods are inherently inferior. Students are still taught that other methodologies were fatally flawed as a result of the NCCR. This is simply wrong.
DSGE models maintain internal consistency by sacrificing external consistency. Of course the ideal is to have models that are both internally consistent (are fully microfounded) and which are externally consistent (explain the properties of the data you want models of this type to explain). But that goal, even if it could ever be achieved, is a long way off, and in the meantime there is a trade-off: a frontier, if you like, with internal and external consistency on the two axes (see Pagan 2003). You can think of DSGE models as being where that frontier intersects the internal consistency axis, and simple VARs as where it intersects the external consistency axis. There is no logical reason why other points on that frontier should be deemed academically unacceptable.
One confusion is to link DSGE models with policy analysis, and more data-based methods with forecasting. This is an unhelpful distinction. It comes from a view that a model is either theoretically coherent or it is not: we cannot have degrees of theoretical coherence. In terms of theory, there are either DSGE models, or (almost certainly) incorrect models. But policy-makers do not just want stories that are internally consistent. They also want stories that are relevant to the real world! A DSGE model is more likely than a SEM to be subject to specification errors, because it ignores real-world complications, as Fair (2012) documents.
For example, would it have been better after the financial crisis to use a DSGE model where finance played no role in real decisions, or a more empirically based model where these interactions were captured in a rough and ready way? The choice is obvious to policy-makers. Appeals by DSGE modellers that you need to give academic economists time to build a DSGE model that incorporates a financial sector simply miss the point. The claim many make is not just that microfoundations is a progressive research programme, but that it is the only proper way of doing policy analysis. At any particular point in time that is simply false.
A less fictitious example is provided by New Keynesian theory itself. Before New Keynesian theory was developed, microfounded models made no allowance for sticky prices. It is nonsense to suggest that at this time RBC models had to be a better guide for monetary policy than a SEM that incorporated price rigidity by allowing some inertia in price and wage setting. The SEM may have suffered from the Lucas critique and poorly identified equations, but that does not mean the RBC alternative was better.
The idea that the only proper model for policy analysis is a microfounded model has other logical flaws. It may be possible that aggregate relationships can adequately capture a degree of heterogeneity that it would be ridiculous to try to capture using microfoundations. A good example is provided by Carroll (2001). He shows how thousands of simulations, examining the optimal savings behaviour of agents with different ages and asset or income profiles facing uncertainty, approximate features of the aggregate permanent income consumption function proposed by Friedman. This aggregate relationship based on individuals optimising is not, and cannot be, derived analytically as part of a microfounded model.
The microfoundations project tries to cope with such problems using what could be called tricks. The New Keynesian Phillips curve derived from Calvo contracts can be seen in this way. It is based on a firm where the probability of changing prices each period is fixed, and this is justified by reference to other work that examines the optimal response to menu costs. There can be no pretence that this model actually represents how all (or even most) firms operate. The empirical evidence suggests there are many different reasons for price rigidity, of which menu costs are just one. The reason the Calvo contract model is widely used is that it is tractable and implies a New Keynesian Phillips curve, which appears to fit the data. So a specification is chosen not because it represents the typical behaviour of a representative firm, but because it produces an aggregate relationship which works empirically. This makes a pretence at the sanctity of internal consistency a bit of a charade, which is part of the criticism in Chari et al. (2008) of New Keynesian models. The correct response is not, as Chari et al. suggest, to ignore price rigidities, but to recognise that a microfoundations purist position is untenable and that the claim that DSGE models are inherently superior to alternatives is false.
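A standard sketch of that trick runs as follows (notation mine, under the usual simplifying assumptions). Each period a firm can reset its price only with probability \(1-\theta\), independently of how long its price has been fixed. Aggregating the optimal reset prices and log-linearising around a zero-inflation steady state yields

\[
\pi_t = \beta\,E_t \pi_{t+1} + \frac{(1-\theta)(1-\beta\theta)}{\theta}\,\widehat{mc}_t,
\]

where \(\widehat{mc}_t\) is real marginal cost, which is proportional to the output gap under standard assumptions. The constant reset probability \(\theta\) is not meant to describe how any actual firm behaves; it is used because it delivers this tractable aggregate relationship, which is precisely the point being made here.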
For these reasons, there is absolutely no reason why DSGE models could not have developed alongside more traditional methods of analysis. Furthermore, for a time that did happen, not in the United States but in the United Kingdom, where academic involvement in SEMs continued until the late 1990s. In particular, the UK's social science research council provided a sustained period of funding for a ‘Macromodelling Bureau’, which brought together all the major SEMs in the UK, and compared their properties in a critical way (see Smith 1990). The funding of the Bureau only came to an end when other UK academics argued that this was not an efficient use of research funding, where efficiency included the ability to publish papers in the top (mainly US) journals.
The models the Bureau analysed in the 1990s were not the same models that LS criticised so effectively. Many embodied rational expectations as routine, and in that and other ways attempted to reduce the impact of the Lucas critique (see Church et al. 2000). They began to look more like the DSGE models that were being developed at the same time. However they remained quite distinct from these models, because most were concerned to include aggregate equations that were able to track the time-series properties of the data on an equation-by-equation basis, and partly as a result they were almost certainly internally inconsistent. (Inconsistency also came from the need to be a ‘horse for all courses’: see Currie 1985). It is a model of this kind that, for example, is currently used by the US Fed for forecasting and policy analysis (Brayton et al. 2014).
On a personal note, my own model of the UK economy, discussed in Wren-Lewis et al. (1996), included a consumption function based on intertemporal optimisation where a proportion of consumers were credit-constrained. That proportion was not constant, but was an exogenous data series based on particular financial variables. If funding for that model had continued into this century, that exogenous variable would have almost certainly been endogenised by modelling the financial sector, as I suggested in the previous section.
These developments show that it was quite possible for SEMs to evolve, incorporating many of the theoretical ideas that were part of the NCCR, but remain quite different from both DSGE models and VARs. Rather than being the old-fashioned dinosaurs imagined by many academics, they represent a way of modelling the economy that policy-makers find useful, and which in turn can inform those working with DSGE models. In particular, because the need to track individual time series is more important in SEMs than in DSGE models, they and the econometric estimation that accompanies them can provide insights into important empirical regularities that DSGE analysis can miss.
If a revolution was not logically inevitable, and an evolution that allowed DSGE models to develop alongside other ways of doing macroeconomics was both possible and preferable, it is important to ask why the evolutionary path was not followed. Were there other reasons, besides any mistaken logical necessity, why the evolutionary road was not taken? There are many. The following reasons are speculative and difficult to document, let alone quantify, but I mention them here simply to point out that there are many alternative explanations besides (erroneous) logical necessity as to why a methodological revolution occurred.
An analogy that Greg Mankiw (2006) made popular when discussing similar issues involves scientists and engineers. I prefer to compare macroeconomics to medicine, and so Paul Romer's recent discussion2 of biomedicine, comparing bench science with clinical studies, has more appeal. Here DSGE modellers are the scientists, and those using other approaches the engineers or clinicians. The way I would use the analogy is that engineers prefer models that work, whereas scientists look for a deeper understanding.
There may be a tendency among many academic macroeconomists to want to be scientists. In what has become a highly mathematical discipline, making a mathematical error is one of the most embarrassing mistakes you can make. Developing a model where the microeconomic theory that is used to justify one part of the model conflicts with that used to justify another (internal inconsistency) perhaps comes close in terms of the lexicon of errors to avoid. There is an undeniable appeal in telling a story that is complete and at peace with itself, rather than one where there are awkward inconsistencies and unresolved issues.
Another obvious appeal to a new generation of macroeconomists following the NCCR methodology was the opportunity it gave them to rediscover old truths using new techniques. Even if the economists associated with the old truths were referenced, the analysis could be described as doing ‘macroeconomics properly’. For academics hungry for publication opportunities this was difficult to ignore.
One other undoubted appeal that the NCCR had was that it allowed macroeconomics to be brought back under the microeconomics umbrella. New Keynesian economists can talk about how business cycles involve an externality (price rigidity reflecting the cost of adjusting individual prices also has an impact on overall demand), and this market failure requires state intervention. This is language their micro colleagues can relate to. (There is, however, a certain irony that at the same time as macroeconomics was putting more emphasis on micro theory, many areas of microeconomics were becoming more empirical.)
It was also clear that there was much to criticise in the existing methodology. SEMs could easily become unmanageable constructs that were difficult to relate to simpler models. However, it is possible to overcome that problem, for example by the method of ‘theoretical deconstruction’ proposed in Wren-Lewis et al. (1996). As we have already noted with the Phillips curve, informal theorising could sometimes lead to serious misspecification. The theoretical insights that New Classical economists brought to the table were impressive: besides rational expectations, there was a rationalisation of permanent income and the life-cycle models using intertemporal optimisation, time inconsistency and more.
One final point that should be mentioned is ideology, which links the two strands of the NCCR that I talked about in Section 3. Some at least of the New Classical revolutionaries, or those that came to their side, wanted to overthrow the market interventionism that they thought Keynesian policy typified. They wanted not to modify Keynesian economics but to overthrow it. That required a revolution, and a revolution in methodology which allowed empirical evidence to be partially ignored was a convenient means of achieving that.
One feature of all these explanations is that their force has gradually diminished over time. Old truths have been rediscovered and put into the language of microfounded models. Microeconomists have long since stopped celebrating the return of macro to the fold, and many tire of endless DSGE simulations. Time series econometrics has been neglected to such an extent that there is now plenty of scope for new discoveries to be made. An ideologically motivated hope that Keynesian economics could be overthrown has been shown to be incorrect.
It has occasionally been suggested that the financial crisis and Great Recession might prompt a new revolution in the way macroeconomics is done. That may embody an element of wish fulfilment, but it could also represent the misreading of history that I discussed in Section 2. The microfoundations methodology is entrenched, and it also appears to those who use it (rightly in my view) to be progressive, so it is unlikely that its practitioners will down tools and start afresh. What we can perhaps instead hope for is a renewed interest in time-series econometrics, together with an acceptance that for policy advice other ways of modelling may be of some use. A new revolution that replaces current methods with older ways of doing macroeconomics seems unlikely, and I would argue is also undesirable. The discipline does not need to advance one revolution at a time.
6 CONCLUSION
To understand modern academic macroeconomics, it is no longer essential that you start with The General Theory. It is far more important that you read Lucas and Sargent (1979), which is a central text in what is generally known as the New Classical Counter Revolution (NCCR). That gave birth to DSGE models and the microfoundations programme, which are central to mainstream macroeconomics today.
The NCCR can be thought of as having two strands. The first strand attempted to supplant Keynesian policy, and it failed. New Keynesian models have become dominant among those that study and control the business cycle, although that view is disputed by some academics and is often ignored by politicians. The second strand of the NCCR was methodological, and transformed the way academic macroeconomics is done.
I argue that the NCCR went too far in replacing more empirically based methods of analysis. Although the microfoundations programme is progressive (unless you are a purist), by making theoretical consistency an essential prerequisite it downgrades the importance of empirical evidence. This goes against the spirit of The General Theory, but more importantly I argue that it left macroeconomics ill-prepared for the financial crisis. More traditional methods of analysis are quite capable of being pursued alongside DSGE modelling, and are not fatally flawed despite what is often taught. If there had been more academic involvement in these alternative approaches, the importance of the financial sector for the real economy would have received more attention before the financial crisis. We do not need another methodological revolution as a response to this crisis, but instead a resurrection of the older methods that were inspired by The General Theory.
NOTES

1. Unlike the term DSGE, the term SEM is not universally used. Fair (2012) talks about Cowles Commission (CC) models.
REFERENCES
Aron, J., Duca, J.V., Muellbauer, J., Murata, K. and Murphy, A. (2010), ‘Credit, Housing Collateral and Consumption: Evidence from the UK, Japan and the US’, Review of Income and Wealth, 58(3): 397–423.
Blinder, A.S. and Solow, R.M. (1973), ‘Does Fiscal Policy Matter?’, Journal of Public Economics, 2: 319–337.
Brayton, F., Laubach, T. and Reifschneider, D. (2014), ‘The FRB/US Model: A Tool for Macroeconomic Policy Analysis’, Federal Reserve, available at http://www.federalreserve.gov/econresdata/notes/feds-notes/2014/a-tool-for-macroeconomic-policy-analysis.html
Carroll, C.D. (2001), ‘A Theory of the Consumption Function, with and without Liquidity Constraints’, Journal of Economic Perspectives, 15(3): 23–45.
Carroll, C.D., Sommer, M. and Slacalek, J. (2012), ‘Dissecting Saving Dynamics: Measuring Wealth, Precautionary and Credit Effects’, IMF Working Paper 2012/219.
Chari, V.V., Kehoe, P.J. and McGrattan, E.R. (2008), ‘New Keynesian Models: Not Yet Useful for Policy Analysis’, NBER Working Paper 14313.
Church, K.B., Sault, J.E., Sgherri, S. and Wallis, K.F. (2000), ‘Comparative Properties of Models of the UK Economy’, National Institute Economic Review, 171: 106–122.
Currie, D. (1985), ‘Macroeconomic Policy Design and Control Theory – A Failed Partnership?’, Economic Journal, 95: 285–306.
Fair, R.C. (2012), ‘Has Macro Progressed?’, Journal of Macroeconomics, 34: 2–10.
Forder, J. (2014), Macroeconomics and the Phillips Curve, Oxford: Oxford University Press.
Hausman, D.M. (1992), The Inexact and Separate Science of Economics, Cambridge, UK: Cambridge University Press.
Hoover, K.D. (2001), The Methodology of Empirical Macroeconomics, Cambridge, UK: Cambridge University Press.
Keynes, J.M. (1936), The General Theory of Employment, Interest and Money, London: Macmillan.
Lakatos, I. (1970), ‘Falsification and the Methodology of Scientific Research Programmes’, in I. Lakatos and A. Musgrave (eds), Criticism and the Growth of Knowledge: Volume 4: Proceedings of the International Colloquium in the Philosophy of Science, Cambridge, UK: Cambridge University Press, pp. 91–196.
Lucas, R. (1987), Models of Business Cycles, Oxford: Blackwell.
Lucas, R.E. Jr and Sargent, T.J. (1979), ‘After Keynesian Macroeconomics’, Quarterly Review, Federal Reserve Bank of Minneapolis, Spring issue, 49–72.
Mankiw, N.G. (2006), ‘The Macroeconomist as Scientist and Engineer’, Journal of Economic Perspectives, 20(4): 29–46.
Pagan, A. (2003), ‘Report on Modelling and Forecasting at the Bank of England’, Bank of England.
Pfleiderer, P. (2014), ‘Chameleons: The Misuse of Theoretical Models in Finance and Economics’, mimeo, Stanford University.
Sims, C. (1980), ‘Macroeconomics and Reality’, Econometrica, 48: 1–48.
Smith, R.P. (1990), ‘The Warwick ESRC Macroeconomic Modelling Bureau: An Assessment’, International Journal of Forecasting, 6: 301–309.
Tobin, J. (1980), ‘Stabilisation Policy Ten Years After’, Brookings Papers on Economic Activity, 11(1): 19–90.
Wren-Lewis, S. (2011), ‘Internal Consistency, Nominal Inertia and the Microfoundations of Macroeconomics’, Journal of Economic Methodology, 18(2): 129–146.
Wren-Lewis, S., Darby, J., Ireland, J. and Ricchi, O. (1996), ‘The Macroeconomic Effects of Fiscal Policy: Linking an Econometric Model with Theory’, Economic Journal, 106: 543–559.
Zinn, J. (2013), ‘Stagflation and the Rejection of Keynesian Economics: A Case of Naive Falsification’, mimeo, University of California, Santa Barbara.