European Journal of Economics and Economic Policies: Intervention

On economic paradigms, rhetoric and the micro-foundations of macroeconomics

John S.L. McCombie * and Ioana Negru *

Keywords: paradigm, rhetoric, representative agent model, macroeconomic methodology


This paper considers from a methodological point of view why disputes in macroeconomics over its fundamental assumptions regularly occur. It analyses the problem first using Kuhn's concept of the paradigm and then draws on McCloskey's use of rhetorical analysis. It finds the latter of limited help because of the problem of incommensurability between paradigms. It concludes with a discussion of the debate over the need for, and the rigour of, micro-foundations in macroeconomic theory, notably in terms of the representative agent model.



Unlike microeconomics, macroeconomics has for many years seemed to be in a perpetual state of what Nordhaus (1983) termed a ‘macroconfusion’. As Solow pointed out in 1983, and as remains equally true today, the controversies in macroeconomics, whether between the post-Keynesians, New Keynesians and Monetarists or, more recently, over the assumptions of rational expectations and the empirical relevance of the New Classical School, were about the fundamentals of the subject. This is something those undertaking research in, for example, the physical sciences find difficult to comprehend. How could a subject that had become progressively more formal and rigorous over the last few decades, that had developed sophisticated statistical techniques, and that had aspirations of being a science still not have come to any agreement, say, on whether or not the concept of involuntary unemployment makes any theoretical sense? (Keynes 1936; Lucas/Sargent 1979; De Vroey 2004).

Yet, paradoxically, in recent years there has been a synthesis of the two dominant theories, namely the New Keynesian and the New Classical Economics, such that many saw macroeconomic disputes as a thing of the past (Chari/Kehoe 2006; Goodfriend 2007; Blanchard 2008). The consensus became known as the New Consensus Macroeconomics in applied economics and policy circles (Meyer 2001; Arestis 2007) or the New Neoclassical Synthesis 1 in theoretical circles (Goodfriend/King 1997).

The synthesis represents a consensus because rigidities arising from the New Keynesian assumptions of monopolistic competition, optimal mark-ups and menu costs are now included in the New Classical real business cycle model, which is taken as the benchmark model. As such, the assumption of rational expectations and the need to explain economic phenomena in terms of constrained optimisation within a micro-foundations framework of the representative agent are accepted as essential by both theories (Goodfriend 2004). The Ricardian equivalence theorem is often assumed, thereby ruling out tout court the effectiveness of fiscal policy. At the macroeconomic policy level, the synthesis provides the theoretical rationale for inflation targeting; a simple reduced-form macroeconomic model illustrating this can be found in Meyer (2001).
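
Although specifications differ in detail, the reduced-form models in this literature typically take a three-equation form. The following is a stylised sketch of that canonical structure (the notation is ours; this is a generic sketch, not Meyer's exact specification): an IS curve, a Phillips curve and an interest rate rule,

\[
\begin{aligned}
x_t &= E_t x_{t+1} - \sigma \left( i_t - E_t \pi_{t+1} \right) + \varepsilon_t \\
\pi_t &= \beta E_t \pi_{t+1} + \kappa x_t + \eta_t \\
i_t &= r^{*} + \phi_\pi \left( \pi_t - \pi^{*} \right) + \phi_x x_t
\end{aligned}
\]

where x_t is the output gap, π_t is inflation, i_t is the nominal interest rate, π* is the inflation target and r* is the equilibrium real interest rate. Inflation targeting operates through the third equation, with φ_π > 1 so that the real interest rate rises whenever inflation exceeds its target.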

The subprime crisis of 2007 and the accompanying dramatic fall in output and rise in unemployment exposed, for some, the limitations of the New Neoclassical Synthesis as destructively as the Great Depression had exposed those of what Keynes termed the Classical economics. Blanchflower (2009) and Buiter (2009), two former members of the Bank of England's Monetary Policy Committee, were particularly scathing about the usefulness of this approach, including the dynamic stochastic general equilibrium (DSGE) models, in understanding the subprime crisis. Well-known criticisms of DSGE models are that no firm could go bankrupt because of the transversality condition, and that finance was treated in a very rudimentary way: there is no banking system. Indeed, the frontier of research using these models concerns whether changes in unemployment are due to shocks to the labour market mark-up or to shocks to workers' preferences in terms of the value of the marginal disutility of work. This has no relevance for understanding the subprime crisis.

The debate over the usefulness of the various macroeconomic models and policies was carried out by the leading protagonists not in the rarefied atmosphere of the academic journals in terms of fully specified models, but in the newspapers and on the internet, including the blogs (for example, Cochrane 2009; Krugman 2009). The level of discourse in the blogs was not technically advanced, as it was designed to influence the intelligent layman. Krugman, for example, considers that most of the insights of Keynesian economics (the importance of fiscal policy, the ineffectiveness of monetary policy in the face of a liquidity trap, etc.) can be illustrated with the basic IS-LM model. The rhetorical style of the academic economics article gave way to discourse more reminiscent of the political arena. See, for example, DeLong's blog of 20 September 2009, where he criticises Levine, Cochrane, Lucas, Prescott, Fama, Zingales and Boldrin for making ‘freshman (ok, sophomore) mistakes’ about Keynesian macroeconomics. 2

As a result of the crisis, governments in both the US and the UK resorted to fiscal stimulus and Keynesian policies (Blinder 2013). Keynes, the General Theory, and Minsky (1975) were rediscovered (Posner 2009; Skidelsky 2009). This return to the work of Keynes, published nearly 80 years ago, albeit only for a short time, should have been as improbable – as The Economist pointed out – as if the laws of physics had suddenly broken down and natural scientists felt compelled to rush back to read Newton's Philosophiae Naturalis Principia Mathematica to see where they had gone wrong. Indeed, one could be forgiven for wondering what kind of a subject economics is.

The purpose of this paper is to discuss why, in spite of the developments and increasing technical sophistication of macroeconomics, these fundamental controversies still occur. It extends the argument of McCombie/Pike (2013), who argued that the repeated controversies in macroeconomic theory can best be understood in terms of Kuhn's (1970; 1977; 1999) concept of the paradigm, especially using his more recent insights. 3 However, a difficult question is why different economic paradigms coexist for long periods of time. A related question is what persuades economists in their theory choice. We discuss McCloskey's use of rhetorical analysis in this context, but find it does not lead to any definitive conclusions. This is because the degree of plausibility of an economic approach is paradigm dependent. We illustrate this by reference to the use of the representative agent in economic modelling. The economic paradigm determines the way the economic system is viewed, but why some economists find one approach more convincing than another is outside the scope of this paper.


The early work of Kuhn was subject to a number of criticisms, which for reasons of space we will not discuss here. However, later developments by Kuhn (1977; 1999) concerning the paradigm, or ‘disciplinary matrix’ as he later termed it, have largely answered these critiques and the concept should not be regarded as passé (Hoyningen-Huene 1993). The application of the concept to economics has been recently discussed by McCombie/Pike (2013) and an introduction to the subject is given by McCombie (2001). Kuhn approached the methodology of the physical sciences not as a methodologist, but as a historian of science. Scientific theories are not immediately abandoned after a single, or even several, refutations (the Duhem–Quine thesis), pace Popper. It is possible to identify specific paradigms, within which certain assumptions become untestable by fiat: the ‘paradigmatic pseudo-assumptions’ (McCombie/Pike 2013). In the natural sciences, these are usually assumptions that were previously subject to empirical testing but have since come to be accepted without question. In economics, we may extend this term to cover such assumptions of the neoclassical paradigm as optimisation by agents and the existence of perfectly competitive markets.

The paradigm is acquired by scientists by demonstration and not by any normative methodological rules. It sets the agenda and provides the scientist not only with what are seen as legitimate problems or ‘puzzles’ to solve, but also the means to do this. The paradigm protects the scientist from the necessity of having continuously to question the foundations of the discipline, which could lead to a sense of nihilism. However, one of the key insights of Kuhn is the incommensurability of certain key concepts between competing paradigms. Strong incommensurability is where a concept of one paradigm has no meaning in another (such as utility in the Marxian paradigm or social class in neoclassical economics). Weak incommensurability is where the same concepts or models are used in two paradigms, but where their interpretations are irreconcilable. A good example is the meaning of the Cambridge capital theory controversies. Compare, for example, the views of Fisher (2005) (the controversies were merely a subset of a more general aggregation problem) and Harcourt (1976) (‘what is involved is the relevant “vision” of the economic system and the historical processes associated with its development’ (p.29)). Most paradigmatic debates in economics involve weak incommensurability. For example, the post-Keynesian and New Classical Economics share many of the same economic concepts (and, indeed, the same notation), but, for the latter, the concept of involuntary unemployment is theoretically meaningless, whereas it is central to the post-Keynesian paradigm.

The anomalies which are thrown up by the paradigm in the natural sciences are quietly shelved until they become so substantial that they cannot be ignored and lead to a scientific revolution and the adoption of a new incipient paradigm. Kuhn originally likened this to a gestalt switch. Because of incommensurability between paradigms, the reason for the succession of one paradigm by another cannot be ascribed to ‘objective’ reasons; it is irreducibly a subjective social phenomenon. In spite of this, Kuhn argued that if a scientist were to consider two theories, there are sufficient criteria that ‘would enable an uncommitted observer to distinguish the more recent theory time after time’ (Kuhn 1970: 205) and, crucially, one of these criteria was the ‘accuracy of prediction, particularly of quantitative prediction’ (ibid.: 206). ‘Scientific development is, like biological, a unidirectional and irreversible process’ (ibid.: 206).

In economics, however, radically different paradigms persist over long periods of time, and previously discarded paradigms have made a comeback, a reswitching, albeit in more sophisticated forms. While we can definitely tell which of two economic theories is the later merely by its degree of formalism and mathematical technique, if we were to reconstruct the theories in verbal terms or to compare them in terms of their conclusions, could we be so certain? Certainly, in no sense is the development of economics ‘a unidirectional and irreversible process’. Paradigmatic crises occur in the natural sciences through the build-up of anomalies, largely as the result of repeated controlled experiments. In economics there is no equivalent. Econometric techniques are never conclusive, as the early Keynes–Tinbergen debate showed (see Garrone/Marchionatti 2004). Summers (1991: 130) cogently put the issue as follows: ‘I invite the reader … to identify a meaningful hypothesis about economic behaviour that has fallen into disrepute because of a formal statistical test’.

Nevertheless, it is important not to be too dogmatic about this. Within the paradigm, there is plenty of scope for econometrics in both solving and extending the paradigmatic puzzles, even if it is never going to produce a paradigmatic crisis.

If there are no ‘objective canons’ by which competing paradigms can be judged, then in order to understand the process of change in economic theory, we need to understand why some arguments have proved persuasive to many economists, and others less so. One approach that initially seemed promising is that of McCloskey (1985; 1994). She has argued that it is necessary to inquire into the ‘economics conversations’, using the techniques of literary criticism. This is rhetorical analysis, where the term is used in the Aristotelian sense of ‘wordcraft’, or an inquiry into the structure of argument. Every economist practises rhetoric, whether or not he or she knows it. Rhetoric can take the form of a mathematical model, an applied econometric study or a more verbal analysis. It is the study and practice of persuasion, which McCloskey sees as an alternative to epistemology.

McCloskey controversially puts forward the view that rhetorical analysis alone is sufficient to analyse theory choice and one can dispense with Methodology with a capital ‘M’. One does not need the canons of conventional economic methodology to determine whether economics progresses, or to provide any normative prescriptions or methodological rules. These would presumably include such diverse approaches as Popper's falsification and critical realism. McCloskey (1985: 29) argues that ‘the overlapping conversations provide the standards. It is a market argument. There is no need for philosophical law-making or methodological regulation to keep the economy of the intellect running just fine’ (emphasis added). The argument that McCloskey puts forward is that competition among ideas will ensure that, in the long run, progress (however defined) will occur. Consequently, McCloskey uses the metaphor of laissez-faire neoclassical economics to argue that ‘if only economists would acknowledge that the persuasiveness of their arguments hinged upon rhetorical considerations, those orthodox theories now in ascendant would be preserved, if not actually strengthened’ (Mirowski 1988: 120).

However, as many commentators have pointed out (for example, Mirowski 1988), there is a serious self-referential problem in McCloskey's use of this analogy. It rests on the neoclassical proposition that free markets lead to the most efficient and, with a small step and a few exceptions, to the optimal allocation of resources. Just as unfettered market forces will lead to the economic survival of the fittest, namely the profit-maximising firm (Alchian 1950), 4 so the competition among economic ideas will lead to the survival of the fittest theories. Consequently, neoclassical economics is used by McCloskey to justify not just the dominance of neoclassical economics, but also its ‘optimality’.

As we mentioned above, this was McCloskey's original thesis. However, her subsequent writings are more nuanced and more recently she seems implicitly to have substantially qualified her view. There is ‘good’ and ‘bad’ rhetoric. As Solow (1988: 33) stresses, ‘some methods of persuasion are more worthy than others. That is what I fear the analogy to conversation tends to bury’. He argues, however, that a metaphor ‘is not good or bad, it is more or less productive’ (ibid.: 33, emphasis in the original). But a metaphor, and the accompanying rhetoric, may actually be damaging to the extent that it takes economics up a blind alley.

Nevertheless, it is not clear precisely how we are to determine whether or not rhetoric is, in this sense, ‘bad’, or even whether it has led economics into a dead end. For example, McCloskey (1985) deconstructs the seminal article by Muth (1961), which for many years went unnoticed, largely because of the opaque and convoluted way in which it was written. Yet it later became the basis of the rational expectations revolution. From the point of view of New Classical theory, Muth's rhetorical approach (the style in which he wrote the paper) was ‘bad’ because, according to this paradigm, it arguably delayed economists' recognition of the paper as one of the most important developments in macroeconomics of the past 50 years. Yet from a post-Keynesian viewpoint it was ‘bad’ for precisely the opposite reason, namely that it was eventually so persuasive to many economists. It imposed the damaging assumption of ergodicity on macroeconomics and led to the view that, with the assumption of perfectly flexible prices, ‘involuntary unemployment’ was a meaningless theoretical concept (Lucas 1978). McCloskey's response would presumably be that it is the ‘justly influential’ (of the economics profession) who are the final arbiters. But this immediately runs into two fundamental problems.

The first is: who are the justly influential? The second, and more important, problem is that the difference between what is seen as ‘good’ and ‘bad’ rhetoric cannot be divorced from the paradigmatic context, as is implicit in our distinction between the New Classical and post-Keynesian economics above.

As Kuhn (1970) pointed out, the paradigm will always be used in its own defence. The fact that the majority of economists subscribe to one paradigm is no guarantee that this will lead to progress. McCloskey (1994: 87–88) herself concedes that:

For students of science in the here and now it is naïve to think that power, analogy, upbringing, story, prestige, style, interest, and passion cannot block science for years, decades, centuries. The naïve view is that science is rational in a rationalist sense, that is, non-rhetorical and non-sociological, understandable in our rationalist terms now, not at dusk. The history and sociology and the rhetoric of science says it isn't so.

McCloskey's own subsequent work ironically provides two further persuasive examples of the failure of her free-market analogy. For over 20 years, she has criticised the economics profession for its failure to distinguish between Fisherian statistical significance and economic significance, the latter term being used in the sense of importance (what she terms the ‘Kleinian vice’). It cannot be an ‘open debate’ when, according to McCloskey, many econometricians agree in private that the distinction is important, but repeatedly fail to mention it in print. It took more than two decades before there was even one major published comment on her argument (Hoover/Siegler 2008a; 2008b; see also the reply by McCloskey/Ziliak 2008). Of course, it could be argued that, on balance, this rather belated conversation has indeed made economics more productive, but can this really be the case if it is true that ‘all the econometric findings since the 1930s need to be done over again’ (McCloskey/Ziliak 2008: 47)? If this latter position is correct, then it represents a major failure of the economic (or rather econometric) conversation. She has also written about what she calls the ‘Samuelsonian vice’: the excessive concentration on formal axiomatic models and the notion that ‘proofs of existence’ are scientific, both of which dominate a large proportion of neoclassical economic theory (see the summary of both these views in McCloskey 1997).

Consequently, the economic conversation is far from an infallible process, because, as shown in McCloskey's own more recent writings (for example, McCloskey 1997), there is no guarantee that rhetoric will ensure progress or that fundamental criticisms are even discussed: they may simply be ignored. Because of the appeal to authority, where the ‘authority’ is the dominant paradigm, the critique will be dismissed as unimportant. But the paradigm is itself partially determined by sociological forces; by the dictates of peer review and the decisions on research grant applications. There is no universal economics conversation.

One of the major issues in macroeconomics (at least from the post-Keynesians' perspective) is whether or not there is a need for macroeconomics to have sound micro-foundations (King 2012). The New Classical and New Keynesian economists consider this so self-evident that they do not see the need to justify it explicitly, even for the most stringent form of reductionism, the use of the representative agent model. This displays all the hallmarks of incommensurability between the two paradigms.


The debate over the importance of micro-foundations can be traced back to Marshall's use of the representative firm, and the criticisms that it received from many economists and, especially, Robbins (Hartley 1997).

The need for micro-foundations is a form of reductionist methodology in that it is based on the premise that aggregate relationships can, and indeed must, be explained in terms of their constituent components. However, right from the start, it is necessary to distinguish between three different types of reductionism (Hoover 2009).

The first is the view that there is no useful distinction between microeconomics and macroeconomics, of which Hoover cites Lucas (1987: 107–108) as a proponent. This imperative stems from the marginal revolution and the paradigmatic pseudo-assumption that, as all economic outcomes are ultimately the result of human actions, any scientific explanation must be couched solely in terms of an individual agent's optimising behaviour. The second is the view that macroeconomics is essentially just a subfield of microeconomics, distinguished only by the material it covers. The third admits different methods between macroeconomics and microeconomics and ‘sees macroeconomics only as a pragmatic compromise with the complexity of applying microeconomics to economy-wide problems. This view asserts that macroeconomics reduces to microeconomics in principle but, because the reduction is difficult, we are not there yet’ (Hoover 2009: 388).

We may term the first two types strong reductionism and the last weak reductionism. Finally, there is the ‘emergent methodology’, which holds that reductionism can never be completely successful, because there are emergent properties leading to, for example, the fallacy of composition. In other words, aggregate outcomes cannot be fully explained from knowledge of the actions of the constituent parts of a system alone. In economics, a widely cited example of an emergent property is the Keynesian paradox of thrift. This does not deny that some form of reductionism is possible, or even desirable, but simply holds that it is not necessarily the whole story.

The emergent methodology is essentially the approach undertaken by Keynes and the post-Keynesians and need not concern us in detail here, important though it is. This approach includes the need to give some sort of intuitive explanation of macroeconomic phenomena in terms of an individual's behaviour. Even Keynes resorted to an explanation in terms of individual preferences, although not within an explicit optimising context. For example, the amount that a community spends on consumption depends ‘partly on the subjective needs and the psychological propensities and the habits of the individuals comprising it’ (Keynes 1936: 91). Keynes's ‘fundamental psychological law’ (ibid.: 96) is that consumption increases with income, but not by as much as the increase in income and this is also seen as reflecting individuals' decisions. The derivation of the speculative demand for money requires people to have different expectations of the likely movement of the rate of interest. But these are not really micro-foundations in the modern sense of the term. They are merely justifications for the form, or the ‘normal shape’ (ibid.: 96), of the aggregate consumption function and the liquidity preference. Post-Keynesians often make use of weak reductionism. For example, Trevithick (1992: 111–113) uses the representative firm in his discussion of the procyclicality of wages, as does Kaldor (1961) (see Harcourt 2008: 117).
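
To restate in modern notation the ‘fundamental psychological law’ referred to above (our rendering, not Keynes's own formalism), it amounts to no more than a restriction on the ‘normal shape’ of the aggregate consumption function:

\[
C = f(Y), \qquad 0 < \frac{dC}{dY} < 1,
\]

so that consumption rises with income, but by less than the rise in income; nothing is asserted about the explicit optimising behaviour of any individual agent.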

Strong and weak reductionism (what Wren Lewis (2007) terms the purist and pragmatic approaches respectively) use the explicit functional forms of a household's utility function and a firm's production function within the context of a formal mathematical, albeit conceptually simple, model. This specific form of reductionism gives rise to the representative agent model, which is used in order to make the mathematical solutions of the model tractable. Given the complexity of constructing mathematical models with heterogeneous individuals, institutions and production technologies, recourse is often made to the representative agent, where the economy is simply taken to be a blown-up version of that agent.
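
A minimal sketch of such a model (our illustration, not any particular author's specification) is the planning problem

\[
\max_{\{c_t,\, n_t,\, k_{t+1}\}} \; E_0 \sum_{t=0}^{\infty} \beta^{t}\, u(c_t,\, 1 - n_t)
\quad \text{subject to} \quad
c_t + k_{t+1} = A_t f(k_t, n_t) + (1-\delta) k_t,
\]

where u is the household's utility function, f the firm's production function, β the discount factor, δ the depreciation rate and A_t a technology shock. The aggregate economy is then simply this single agent's choices scaled up, and the ‘deep’ parameters are those of u and f.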

As Hartley (1997) has persuasively argued, there was never a defining moment when this approach was introduced into New Classical Economics; indeed, he has argued that it was not essential to the approach, as the first New Classical models were not based upon the representative agent. However, part of the reason for its introduction undoubtedly arose from the Lucas critique, which showed that many of the parameters in the Keynesian macroeconomic models will be affected by changes in macroeconomic policy, as agents learn and anticipate these. What is required is an analysis of the way that agents make optimising decisions as the economic environment changes. Econometric testing should only be of the ‘deep’ structural parameters that are invariant to changes in policy or the macroeconomy in general. These deep parameters were specified as those of the agent's utility function and the production technology, and Lucas (1976) assumes that these parameters are constant.

However, as Hartley has pointed out, in real business cycle theory, where cycles are driven by productivity shocks, these shocks are also likely to change the underlying technology. Moreover, parameters of the utility function, such as the elasticity of intertemporal substitution, the rate of discount or time preference and the marginal propensity to consume, may well change under different policy regimes. Hartley (1997: 40–47) examines the Sargent (1981) model and convincingly shows that none of the parameters of the model can be considered deep or invariant with respect to past or future regime changes. In fact, paradoxically, notwithstanding the theoretical importance of the Lucas critique, it has not been an important factor in accounting for the failure of econometric macroeconomic models.

Notwithstanding this, Wren Lewis (2007) has shown that the acceptance of the need for theoretical micro-foundations has been the major reason for the development of the New Neoclassical Synthesis. The differences between the New Keynesians (but not the post-Keynesians) and the New Classical economists are now merely a matter of degree, rather than of a fundamental nature. Both the New Keynesian and New Classical approaches insist on explicit optimising models of individual consumers and firms, where internal consistency of the model takes precedence over empirical refutation – the latter is merely seen as a guidepost to future theoretical work. (Wren Lewis gives as an example the almost ubiquitous use of the uncovered interest parity assumption in international macroeconomic models, in spite of its dubious empirical validity.) Moreover, the use of calibration at the expense of statistical testing represents a major shift in the empirical methodological framework.

Previously, an important dividing line between the New Keynesians (but not the post-Keynesians) and the New Classical theories had been the importance of price stickiness. This demarcation line has now gone, as price stickiness is modelled as an optimising outcome within the representative agent framework in terms of menu costs. Whether price stickiness should be incorporated into the model has been debated more on theoretical than empirical grounds. On the one hand, the New Classical purists deny the legitimacy of incorporating inflation inertia into their approach, no matter how empirically important it is, because it cannot yet be explained theoretically in terms of a convincing optimisation process. The ‘pragmatists’ argue that this, in effect, throws out the baby with the bath water. Calvo (1983) contracts can be introduced into the model, even though this runs counter to firms' profit maximisation (in Calvo pricing, firms are assumed to change prices with a fixed probability). Whether or not price stickiness should be included therefore depends upon which of these views is adopted. (Post-Keynesians, following Keynes (1939), deny that price stickiness has any major role in explaining unemployment.) Consequently, in the New Neoclassical Synthesis, the New Classical real business cycle approach is taken as the benchmark model, where price rigidities are deemed unimportant; the New Keynesian version of the synthesis incorporates these rigidities into the benchmark model.
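
The Calvo device can be stated in a line. In each period a firm is permitted to reset its price only with probability 1 − θ, irrespective of how far its current price is from the profit-maximising one, so that (on the standard formulation) the expected duration of a price is

\[
\sum_{k=1}^{\infty} k\, (1-\theta)\, \theta^{\,k-1} = \frac{1}{1-\theta}.
\]

The degree of aggregate price stickiness is thus governed by the exogenous probability θ, rather than by any optimising calculation on the part of the firm.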

There are, however, a number of major problems with this reductionist methodology. The first problem that the representative agent model attempts to overcome is that, while it may be possible to prove the existence of an equilibrium (and this was the central concern of Debreu (1959)), it is not possible to prove that an equilibrium will be unique and stable, even in the simple case of an exchange economy without production (Kirman 1989). Flexible prices may be of no help in attaining equilibrium. Generally, for stability, the excess demand function for a good should slope downwards, so that the lower the price of a good, the greater is the excess demand. This is required because the Walrasian auctioneer would reduce excess demand by calling out a higher price. The problem is that, even if we start from a simple exchange economy, the only conditions that follow from the assumption of well-behaved individual preferences are that the aggregate excess demand functions will be continuous, that Walras's law holds (the sum of the values of excess demands across all markets equals zero at any positive price vector), and that the functions are homogeneous of degree zero in prices. Nothing else is implied. Even the Weak Axiom of Revealed Preference does not carry over.
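
Formally, writing z(p) for the vector of aggregate excess demands at the strictly positive price vector p, all that well-behaved individual preferences deliver is (in the standard statement of these conditions)

\[
z(\lambda p) = z(p) \;\; \text{for all } \lambda > 0, \qquad p \cdot z(p) = 0, \qquad z \text{ continuous}.
\]

The Sonnenschein–Mantel–Debreu results discussed below establish that essentially any function satisfying these three conditions can be generated as the aggregate excess demand function of some exchange economy, so aggregation strips away almost all the structure imposed by individual optimisation.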

The Sonnenschein–Mantel–Debreu theorem proves that there is nothing in individual choice theory that precludes multiple equilibria, or the possibility that, as the price falls, excess demand also falls. The only constraints are that, for a sufficiently high price, the excess demand should be negative and that, as the price tends towards zero, the excess demand becomes infinite. An intuitive explanation is that the increasing scarcity of a resource could lead to a decline in its price as demand for products using this resource intensively falls. In other words, income effects more than offset substitution effects, so that gross substitutability fails. As Davidson (2007: 31) points out, ‘Arrow and Hahn (1971, pp. 15, 127, 215, 305) have demonstrated, however, if the gross substitution is removed as an axiom universally applicable to all markets, then all mathematical proofs of the existence of a general equilibrium solution, where all market – including the labor market – clears are jeopardized’.

The use of the representative agent model overcomes this because, for the individual, excess demand functions do have both a unique and a stable equilibrium. But this merely assumes away the problem. Moreover, as Kirman (1992: 123, emphasis in the original) has noted, in the case of policy changes ‘the representative constructed before the change may no longer represent the economy after the change’. Even if this does not occur (a ‘pious hope’ according to Kirman), it is perfectly possible for the representative agent to make the same choices as the aggregate of the individuals before and after, say, a price change, but for the preferences of the representative agent to be completely at variance with those of the individuals he represents. (For an intuitive example, see Kirman 1992: 124.)

The production side of the model faces equally serious problems. It is well known that micro-production functions obeying all the standard assumptions of neoclassical production theory cannot be aggregated to give a well-behaved aggregate production function, except under the most implausible of assumptions (Fisher 1992). Furthermore, plausible empirical estimates of aggregate production functions cannot be taken as implying their existence, because of the presence of an underlying accounting identity (Felipe/McCombie 2013).
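
The accounting-identity argument can be sketched briefly (this is our compressed rendering, not Felipe/McCombie's full treatment). Value added is related to factor payments by the identity V ≡ wL + rK, which in growth rate form becomes

\[
\hat{V} \equiv a\,(\hat{w} + \hat{L}) + (1-a)\,(\hat{r} + \hat{K}),
\]

where circumflexes denote growth rates and a = wL/V is the labour share. If factor shares are roughly constant and w and r grow at roughly constant rates, this integrates to V = B e^{λt} L^{a} K^{1-a}, which is observationally equivalent to a well-behaved Cobb-Douglas production function with ‘neutral technical progress’, even though no aggregate production function need exist.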

Kirman's (1992: 119) conclusions are extremely damaging for the New Neoclassical Synthesis paradigm: ‘The way to develop appropriate microfoundations for macroeconomics is not to be found by starting from the study of individuals in isolation, but rests in an essential way on studying the aggregate activity resulting from the direct interaction between different individuals. Even if this is too ambitious a project in the short run, it is clear that the “representative” agent deserves a decent burial, as an approach to economic analysis that is not only primitive, but fundamentally erroneous’. Moreover, the conclusions for macroeconomics are far-reaching. Concepts such as macroeconomic equilibrium, the natural rate, and the rate of movement back to equilibrium all make assumptions about uniqueness and stability, ‘yet … such assumptions have no theoretical justification’ (Kirman 1989: 137).

The Sonnenschein–Mantel–Debreu criticisms did not arise from the heterodox critics of neoclassical theory, but from the very economists who had done so much to develop general equilibrium theory in the first place. In this sense, it was ‘a palace revolution’ or an intra-paradigmatic critique. Nevertheless, these damaging theoretical results had very little impact on the New Neoclassical Synthesis. Indeed the methodology of the latter is an instrumentalist approach; the criterion of success is the successful empirical implementation through calibration, rather than econometric testing. The accuracy of the assumptions, per se, is irrelevant. Primacy is given to the construction of artificial models that closely mimic the observed path of the economy (Lucas 1978). Indeed, at times it seems as if econometric testing is irrelevant. What matters is that there should be a fully articulated model, based on the paradigmatic pseudo-assumptions, that has been shown to be capable of replicating the path of the economy. It is not that the New Classical model can ‘satisfactorily account for all the main features of the observed business cycle. Rather we have simply argued that no sound reasons have yet been announced which even suggest that these models are, as a class, incapable of providing a satisfactory business cycle’ (Lucas/Sargent 1979: 14).

An interesting insight into the early role of statistical testing with respect to the New Classical economics, where it can be seen that the paradigm has an important influence in the way the results are interpreted, is given by a test of the natural rate hypothesis by Sargent (1973). The data tend to reject the hypothesis, yet Sargent (1973: 462) himself concludes that the evidence is not strong enough to persuade anyone to give up ‘a strongly held belief in the natural rate hypothesis’. It is merely a paradigmatic anomaly, to be set aside until further evidence becomes available. But this is to be too charitable to Sargent, who later cites the results in support of the natural rate hypothesis (Hartley 1997: 88).

There are further problems with the use of the reductionist methodology of the representative agent. Consideration of the single individual, devoid of social context and institutions, excludes the interactions of individuals with each other and the way this shapes, and is shaped by, the social institutions. (Introducing heterogeneity of exogenous preferences does not meet this criticism. However, agent-based modelling is an improvement.) The attraction of the use of the representative agent to many neoclassical economists is the putative generality of consumer theory, in that it is not contextually bound. But this is also a great weakness as it rules out conformism, herd behaviour and many behavioural traits that social psychology emphasises and which can have serious macroeconomic consequences. (See, for example, the essays in Gallegati/Kirman 1999.)

Ironically, it also excludes the actions of a relatively small group of individuals within a particular social institution, such as the banking system, that can have an impact on the function of the economy by virtue of the functional dependence of the economy on this institution. For example, the decisions of a financial trader using other people's money, which may be optimal from his point of view, are likely to be very different from those of the self-employed worker risking his own money, given the incentive structure provided by the financial sector (Rajan 2005).

Indeed, it is surprising that, for a methodology advocating that macroeconomics needs sound micro-foundations, no attempt is made to determine directly how sound they are. The substantial literature on behavioural economics is ignored; mere introspection is deemed sufficient. Moreover, once the unrealistic assumption of optimisation under risk is replaced by decision-making under uncertainty, and the economy is viewed as non-ergodic (Davidson 2007: 31–35), the whole concept of optimality in decision-making loses its raison d'être.


The key difference between the post-Keynesian and the New Neoclassical economists is that the former dispute the generality of a reductionist methodology – a view held in many other disciplines, including biology (Brigandt/Love 2012) – while the latter do not. An analogy with a purely formal system, namely the hierarchy of geometries, may make this reasoning clearer. In an often-quoted passage, Keynes drew an analogy between the Classical economists and Euclidean geometers:

The classical theorists resemble Euclidean geometers in a non-Euclidean world, who, discovering that in experience straight lines apparently parallel often meet, rebuke the lines for not keeping straight – as the only remedy for the unfortunate collisions which are occurring. Yet, in truth, there is no remedy except to throw over the axiom of parallels and to work out a non-Euclidean geometry. (Keynes 1936: 16)

This analogy has more relevance than perhaps Keynes realised for the issue of reductionism and emergence in economics. Following Felix Klein (1939), geometries may be ranked in a hierarchical order, with the top tier being topology, followed by projective geometry, affine geometry and Euclidean-metrical geometry. Each geometry may be transformed into another tier by means of a mapping function. In this transformation some properties will change, while others will be invariant. For example, Medawar (1974) gives the example of a circle, φ(x, y) = 0. If this is mapped by a change in coordinates such that x′ = f₁(x, y) and y′ = f₂(x, y) into an ellipse, the transformation is invariant to the extent that it still represents a closed line dividing the plane into an outside and an inside. But in this transformation, the concept of a circle, per se, has no meaning.
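
A concrete instance of Medawar's point, using the simplest possible mapping (our illustration): the affine rescaling

\[
x' = \alpha x, \qquad y' = \beta y \qquad (\alpha \neq \beta)
\]

carries the circle x² + y² = 1 into the ellipse (x′/α)² + (y′/β)² = 1. The property of being a closed curve dividing the plane into an inside and an outside survives the transformation; the property of being a circle does not.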

If we consider the relationship of affine geometry, the initial coordinates are mapped into the new coordinates by means of linear integral functions. In this case the transformations are very similar to those of Euclidean geometry, except that the degree of expansion or contraction of the three dimensions of space is not necessarily the same. As Medawar points out, it is not then meaningful to speak of a circle or a square, except as special cases of ellipses and rectangles. In projective geometry, the mapping relations are fractional linear functions; collinearity is invariant, but parallelism is not. Finally, in topology the relationships between the old and the new transformations are the most general. All that is required is that the mapping function maps single-valued points into each other both ways and that the functions are continuous. Medawar likens this to drawing on a sheet of flexible rubber that may be stretched in any way but not torn (otherwise the relationships would not be continuous). In topology there are no such concepts as straight lines, and properties such as parallelism do not exist. To summarise, following Medawar (1974: 61), the hierarchy is: (i) topology; (ii) projective geometry; (iii) affine geometry; and (iv) Euclidean-metrical geometry.
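
In coordinates, the contrast between the affine and projective tiers is simply (in two dimensions, our notation)

\[
\text{affine:}\;\; x' = a_1 x + b_1 y + c_1,\;\; y' = a_2 x + b_2 y + c_2; \qquad
\text{projective:}\;\; x' = \frac{a_1 x + b_1 y + c_1}{a_0 x + b_0 y + c_0},\;\; y' = \frac{a_2 x + b_2 y + c_2}{a_0 x + b_0 y + c_0}.
\]

Affine maps carry parallel lines to parallel lines; the common denominator in the projective case destroys parallelism while preserving collinearity.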

Each geometry is a special case of the one above it, and as we descend the hierarchy, the theorems become more specific and restrictive, but in Medawar's word ‘richer’. ‘This progressive enrichment occurs not in spite of the fact that we are progressively restricting the range of transformations, but precisely because we are doing so’ (Medawar 1974: 61, author's emphases). Furthermore, every statement that is true in one geometry is true in the geometry below it. We can also see how the properties emerge with these restrictions. The concept of a straight line, for example, is incommensurable with topology.

Analogies are instructive. The representative agent, and neoclassical choice theory devoid of any social context, is the most general way of viewing economics and is analogous to topology. Indeed, this is what some neoclassical economists see as the great strength of choice theory. But it is only when this is placed in the context of a monetary economy, where saving and investment are not undertaken by the same representative agent and the world is not ergodic, that the concept of involuntary unemployment ‘emerges’ as an analytical concept, pace Lucas. Here, as with the geometries, when we restrict the generality of the analysis, it becomes richer and more informative.


This paper has examined why some economic approaches or paradigms at a particular time have attracted more adherents than others. The insights of Kuhn imply that, in economics, because of incommensurability and differences concerning the fundamental assumptions of the various paradigms, there is never likely to be any definitive empirical evidence that settles theory choice. Instead, theory choice is more likely to turn on subjective factors, such as the persuasiveness of the economic conversation. McCloskey suggests that this may be analysed with the tools of rhetorical analysis and, at one time, she seemed to suggest that this would be sufficient to ensure progress in economics. However, we argue here that she was too sanguine; her own work on statistical versus economic significance, and its reception, provides evidence that this is not the case. Thus the representative agent model, notwithstanding the criticisms levelled at its relevance for analysing the subprime crisis, is likely to remain central to the major macroeconomic paradigm, even though some economists consider it to have little relevance to the real world.

  • 1

    This is a reference back to Samuelson's ‘neoclassical synthesis’ of the 1960s; namely, the combination of the Keynesian demand-oriented approach and the supply side given by the neoclassical aggregate production function.

  • 2

    The crisis has generated a number of technical papers on its causes that have now appeared in the academic journals. But the debate about the causes has been at a much more fundamental level.

  • 3

    See Negru (2013) for an alternative interpretation of the concept of the paradigm.

  • 4

This argument is due more to Friedman (1953); ironically, Alchian (1950) had severe reservations about it.



John S.L. McCombie - Cambridge Centre for Economic and Public Policy, Department of Land Economy, University of Cambridge, UK

Ioana Negru - Lord Ashcroft International Business School, Anglia Ruskin University, Cambridge, UK