The Godley–Tobin Memorial Lecture
Paul Krugman, Distinguished Professor, CUNY Graduate Center, USA


I had the great good fortune to be taught by and to know James Tobin. He was a modest man, not much given to telling anecdotes about himself — and when he did, they tended to be self-deprecating. I remember him talking about one family trip on which he happened to be recognised by the proprietor of a resort, who said ‘Aha! The enemy of Milton Friedman!’

There was, of course, much more to Tobin than that. But he did indeed debate macroeconomics with Friedman. During the two men’s lifetimes, much of the world considered Friedman the winner; for a while, Friedman was more or less canonised, and not just among conservatives. Meanwhile, Tobin’s work receded into the background of macroeconomics as taught in most schools.

But events have suggested that this judgement was premature. As someone who read the great man’s classic papers and, maybe, soaked up a bit of his influence in the Cowles Foundation coffee room, I’ve been struck by the extent to which people trying to make sense of what’s going on, from the vicissitudes of monetary policy since the financial crisis to inflation in the Covid era, are turning to — or, often unknowingly, reinventing — Tobin’s key ideas.

Nor is it just a matter of specific models. Tobinomics was and is a distinctive way of thinking about the world. What I want to do in this lecture is lay out what, in my view, was distinctive about Tobin’s approach, and show how, in two particular instances — monetary policy and inflation in a disrupted economy — it yields powerful insights that too few people have appreciated.


James Tobin was, obviously, a Keynesian in the sense that he believed that workers can and do suffer from involuntary unemployment, and that government activism, both monetary and fiscal, is necessary to alleviate this evil. But he wasn’t what people used to call a hydraulic Keynesian, someone who imagined that you could analyse the economy by positing mechanical relationships between variables like personal income and consumer spending, leading to fixed, predictable multipliers on policy variables like spending and taxes. (The famous Phillips machine at the London School of Economics turned this metaphor into literal reality: it was an economic model in which the relationships between income and spending were represented, not by ad hoc equations, but by a water-filled system of pipes and pistons.)

Instead, Tobin was also a neoclassical economist. That is, he believed that you get important insights into the economy by thinking of it as an arena in which self-interested individuals interact, and in which the results of those interactions can usefully be understood by comparing equilibria — situations in which no individual has an incentive to change behaviour given the behaviour of other individuals.

Neoclassical analysis can be a powerful tool for cutting through the economy’s complexity, for clarifying thought. But using it well, especially when you’re doing macroeconomics, can be tricky. Why? It’s like the old joke about spelling ‘Mississippi’: the problem is knowing when to stop.

What I mean is that it’s all too easy to slip into treating maximising behaviour on the part of individuals and equilibrium in the sense of clearing markets not as strategic simplifications but as true descriptions of how the world works, not to be questioned in the face of contrary evidence. Notably, perfectly maximising individuals wouldn’t have money illusion, while perfectly clearing markets wouldn’t have involuntary unemployment. So if you’re a neoclassical economist who doesn’t know when to stop, you end up denying that there can be recessions, or that, say, monetary policy can have real effects, even though it takes only a bit of real-world observation to see that these propositions are just false.

So part of the art of producing useful economic models is knowing when and where to place limits on your neoclassicism. And strategic placing of limits is a large part of what Tobinomics is about.

What do I mean by placing limits? Tobin was, first of all, willing to ditch the whole maximisation-and-equilibrium approach when he considered it of no help in understanding economic phenomena — which was the case for his views on labour markets and inflation, which I’ll get to later in this paper.

Where he did adopt a neoclassical approach, he did so using two strategies that economists need to relearn. First, he was willing to be strategically sloppy — to use the idea of self-interested behaviour as a guide to how people might behave without necessarily deriving everything from explicit microfoundations. Second, he was willing to restrict the domain of his neoclassicism — applying it to asset markets but not necessarily to goods markets or the labour market.

For those not already familiar with Tobin’s work, I’m sure this sounds obscure and abstract. So let me try to make it clearer with a discussion of the core of Tobin’s work, summarised in his 1969 paper ‘A General Equilibrium Approach to Monetary Theory,’ but actually beginning with a remarkable 1963 paper that somehow never received formal journal publication.


In 1963 Tobin circulated a Cowles Foundation discussion paper titled ‘Commercial banks as creators of “money”’ that became one of his most influential contributions — it’s his ninth-ranked paper on Google Scholar — without ever appearing in a refereed journal. In those days journals were still how research was disseminated — these days whole literatures can rise and fall before the initial paper finally makes it into print — so never being formally published was a much bigger deal than it would be now. But the paper was truly seminal, and its insights provided powerful guidance 45 years later amidst the global financial crisis.

As Tobin noted, the way instructors taught monetary economics in the classroom — in fact, the way they still teach it to this day — was remarkably mechanical. ‘Money’ was defined as the sum of currency in circulation and bank deposits; banks were assumed to hold a fixed fraction of their deposits as reserves, and the public to hold a fixed fraction of any loan it received in cash, depositing the rest back into a bank. The result was to create a rigid relationship, the money multiplier, between the monetary base, the sum of currency and bank reserves — which the Federal Reserve can expand or shrink through open market operations — and the money supply.
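The mechanics Tobin questioned fit in a few lines. Here is a minimal sketch of the textbook arithmetic, with purely illustrative values for the reserve and currency ratios (my numbers, not estimates):

```python
# Textbook money-multiplier arithmetic (the mechanical view Tobin questioned).
# Illustrative parameters, not data:
r = 0.10  # fraction of deposits banks hold as reserves
c = 0.25  # currency held per dollar of deposits

# Money supply M = C + D, monetary base B = C + R.
# With C = c*D and R = r*D, the ratio M/B = (1 + c)/(r + c).
multiplier = (1 + c) / (r + c)

base = 100.0                 # monetary base, arbitrary units
money_supply = multiplier * base

print(f"multiplier   = {multiplier:.3f}")    # (1 + 0.25)/(0.10 + 0.25) ~ 3.571
print(f"money supply = {money_supply:.1f}")  # ~ 357.1
```

Everything here hangs on r and c being fixed behavioural constants; Tobin’s point was precisely that they are not.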

If the idea of a fixed money multiplier sounds like a monetary version of hydraulic Keynesianism, it should. What’s odd is that this particular version of mechanical, hydraulic economics owes a lot to none other than Milton Friedman, the great enemy of the Keynesian revolution. More on that later.

In any case, Tobin argued that this whole approach was suspect, because it involved abandoning the way we normally do economics. People evidently don’t view currency and bank deposits as perfectly equivalent; they choose how much cash to keep in their wallet and how much to leave in the bank. So while simply summing up currency and deposits and calling it the ‘money supply’ might — might — be a useful approximation, it wasn’t measuring a fundamental variable.

Even more important, every step in the conventional money multiplier story — like everything else in economics — involves choices at the margin. Consumers make tradeoffs between the advantages of cash in their wallets, still useful for small purchases, and the advantages of holding liquid wealth in the form of deposits against which you can write checks (or these days access with a debit card or a smartphone). Banks make tradeoffs between holding reserves, which let them meet demands for withdrawals without scrambling for funds, and lending deposits out and earning interest on those loans.
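The household side of this tradeoff can be formalised with the classic Baumol–Tobin inventory model, under which optimal cash holdings follow a square-root rule. A sketch with made-up parameter values:

```python
from math import sqrt

# Baumol-Tobin square-root rule for cash management (illustrative numbers).
# A household spends Y per period, pays a fixed cost b per trip to the bank,
# and forgoes interest i on cash held outside the bank.
Y = 4000.0   # spending per period
b = 2.0      # fixed cost per withdrawal
i = 0.05     # interest rate on deposits

# The optimal withdrawal size M* minimises total cost b*(Y/M) + i*(M/2).
M_star = sqrt(2 * b * Y / i)
avg_cash = M_star / 2

print(f"optimal withdrawal = {M_star:.2f}")    # sqrt(320000) ~ 565.69
print(f"average cash held  = {avg_cash:.2f}")  # ~ 282.84
```

The point of the rule is that cash holdings respond to the interest rate at the margin — exactly the kind of choice the mechanical multiplier story assumes away.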

To be sure, banks face reserve requirements, which have at times been binding, so that sometimes they don’t really make choices at the margin. But not always, and consumers don’t face any comparable limits.

So Tobin argued that we should do monetary economics the same way we do ordinary economics: think of households and banks making choices at the margin that, among other things, reflect incentives — notably the interest rates on deposits and loans. And since these interest rates themselves would be affected by household and bank choices, we should think of the eventual outcome as, yes, an equilibrium in which interest rates clear the asset markets.

Among other things, he argued that reserve requirements were far less important in ensuring monetary stability than many people believed at the time. No, the money multiplier wouldn’t become infinite if banks weren’t forced to hold some deposits as reserves; they would hold reserves anyway, because there were and are good reasons to do so, even though reserves yield less than other assets.

If you wonder whether any of these observations are relevant, consider how we should think about monetary policy at a time of very low interest rates.

This issue was crucial in the 1930s, then was largely forgotten until Japan began experiencing persistent deflation in the 1990s. Why couldn’t Japan just end the deflation by printing money? Many economists noted that although Japan had increased its monetary base, broader monetary aggregates weren’t going up; this led quite a few analysts to attribute Japan’s malaise to a dysfunctional banking system, on the assumption that a functioning banking system would have produced a normal money multiplier.

But as Tobin could have told us, you shouldn’t expect the money multiplier to work the way it’s often taught when interest rates are near zero. Banks would not mechanically lend deposits out; since returns to lending were very low, they would probably just sit on them, increasing their reserves. And to some extent, households might hold on to cash as well, although putting your money in a bank has advantages over and above the fact that you normally receive some interest.

I wrote about all this in my 1998 paper on the liquidity trap. But I made the analysis much harder because at the time I wasn’t willing to be strategically sloppy. More on that in the next section.

In any case, the importance of a Tobinesque approach became truly apparent in the wake of the 2008 financial crisis. At the time, the Federal Reserve and its counterparts abroad made large asset purchases, greatly increasing the monetary base. And I can’t tell you how many people — including economists — were sure that this would lead to runaway inflation. Their implicit or in some cases explicit analysis went like this:

Soaring monetary base → soaring money supply → soaring prices

But as Tobin could have told them — and I, who knew my Tobin, did tell them — even the first link in this chain wasn’t going to happen in the face of extremely low interest rates. In particular, in a zero-rate world, banks would just sit on reserves rather than lending them out.

Sure enough, after the financial crisis, a huge rise in the monetary base had relatively little impact on the usual measures of the money supply. The US monetary base rose 380 percent between December 2007 and 2014; M2, Friedman’s preferred measure of money, rose only 60 percent. And much of that rise probably reflected substitution rather than true monetary expansion: with the run on shadow banking, many firms shifted their cash from assets like repo, overnight loans that aren’t counted in the money supply, to good old-fashioned bank deposits, which are.

So Tobin’s approach to monetary economics was exactly what we needed to understand monetary policy after the crisis. Why, then, did so many economists fail to understand that? Part of the answer, I believe, is that they had forgotten the virtues of strategic sloppiness, and become unable to think clearly in situations in which it was hard to derive individual behaviour from rigorous analysis.


Tobin’s monetary models of the 1960s look, at first sight, like pure neoclassicism. He (and William Brainard, joint author of much of the work) posited the existence of a number of assets — including the liabilities of financial intermediaries — that people could hold. Their decisions about the composition of their portfolios — the shares of their wealth held in different assets — were assumed to depend on incentives, namely the rates of return these assets offered. And he assumed that asset markets would reach portfolio equilibrium: asset prices would rise or fall to the point where people wanted to hold exactly the amount out there.
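To see what portfolio equilibrium means in practice, here is a toy two-asset version of the idea; the demand function and all the numbers are my own illustrative assumptions, not Tobin’s actual equations:

```python
# A toy Tobin-Brainard portfolio-balance exercise (illustrative specification).
# Households split wealth W between money (yield 0) and bonds; the share of
# wealth held in bonds rises with the bond rate r.
W = 1000.0           # total private wealth
bond_supply = 600.0  # bonds outstanding, in value terms

def bond_demand(r):
    # Ad hoc, plausibly shaped demand: bond share increasing in r.
    share = 0.3 + 4.0 * r
    return W * min(max(share, 0.0), 1.0)

# Portfolio equilibrium: find the rate at which households willingly hold
# exactly the bonds outstanding (bisection on excess demand).
lo, hi = 0.0, 0.2
for _ in range(60):
    mid = (lo + hi) / 2
    if bond_demand(mid) < bond_supply:
        lo = mid
    else:
        hi = mid
r_eq = (lo + hi) / 2
print(f"equilibrium bond rate = {r_eq:.4f}")  # share 0.6 requires r = 0.075
```

Rerunning the exercise with a smaller bond supply — the comparative-statics experiment corresponding to an open-market purchase — yields a lower equilibrium rate.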

Now, even in formulating the problem this way, Tobin was, in a sense, cheating a bit. He modelled equilibrium in asset markets, not necessarily in the economy as a whole; by and large he took everything outside the asset markets as given, or, when he went beyond them, he typically assumed a more or less Keynesian real economy in the background.

The linkage from financial markets to the real economy, by the way, came through Tobin’s famous q — the market value of installed capital relative to its replacement cost. A high q gave firms an incentive to invest, which then drove employment and income via multiplier effects.
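In its simplest form, q is just a ratio; a stylised sketch, with invented numbers:

```python
# Tobin's q as an investment signal (stylised numbers for illustration).
market_value = 130.0      # market valuation of a firm's installed capital
replacement_cost = 100.0  # cost of reproducing that capital today

q = market_value / replacement_cost
invest = q > 1            # q > 1: capital is worth more installed than it
                          # costs to build, so there is an incentive to invest
print(f"q = {q:.2f}")
print("expand the capital stock" if invest else "let the capital stock run down")
```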

But even granting Tobin his limitation of the neoclassical domain, there was a further bit of cheating involved. For Tobin didn’t derive portfolio preferences from maximisation. He just made reasonable-seeming assumptions about what those preferences might look like.

The thing is, we do have a way to derive portfolio preferences from an underlying process of maximisation: the Capital Asset Pricing Model, which derives portfolio demands from rational agents making a tradeoff between risk and return. And Tobin knew all about that; in fact, in 1958 he wrote one of the seminal papers leading to the CAPM, ‘Liquidity preference as behavior toward risk.’

But what Tobin wanted to do in his analysis of asset markets was comparative statics — to assess the impact of shocks, including policy changes, by comparing equilibria before and after the shock. And here’s the thing about trying to do comparative statics in a CAPM-type framework: it is, to use the technical term, a huge pain. A change in monetary policy might well change all the variances and covariances that underlie an optimal risk-versus-return portfolio allocation. Maybe you’d want to try taking that into account if you really, truly believed that CAPM was true. But of course it isn’t; it’s a clever approach that helps you think about some problems, but not an undeniable truth that must be incorporated into models intended to address other problems.

So Tobin chose to be somewhat ad hoc, to assume portfolio choices that were plausible, that reflected the factors we’d expect to matter for utility-maximising individuals, but weren’t explicitly derived from maximisation. The result was a framework that arguably wasn’t rigorous, for some definitions of rigor, but was tractable, and could serve as a powerful intuition pump.

Now, Tobin didn’t invent this kind of ad hoc neoclassicism. If I had to name a single inventor, it would be J.R. Hicks, who did exactly the same thing — produce a tractable model by positing plausible behaviour not exactly derived from maximisation — when he developed the IS-LM model. But Tobin applied the method to a new and important set of problems.

Among other things, Tobin’s approach made it clear that monetary aggregates like M1 and M2, which include the liabilities of financial intermediaries, are very much endogenous variables. Which brings us to his dispute with Milton Friedman.

Friedman, of course, argued that monetary policy was the principal driver of business cycles; his case rested largely on the clear correlation that used to exist between monetary aggregates and GDP. But as Tobin pointed out, notably in his 1970 paper ‘Money and income: Post hoc ergo propter hoc?’, this correlation, and even the timing of the apparent money–income relationship, could quite easily reflect reverse causation — money responding to income, not the other way around.

Oh, and a Tobinesque framework helps refute Milton Friedman’s views about the Great Depression. When he was being careful, Friedman didn’t exactly say that monetary policy caused the Depression, although he often managed to give that impression. What he did assert was that the Fed could have prevented the big fall in monetary aggregates that accompanied the Depression if only it had increased the monetary base by enough to offset the effects of bank failures. Tobin-type analysis said, however, that the mechanical money multiplier view is wrong — and hence that the Fed probably couldn’t have prevented a fall in broadly defined money. Events following the 2008 financial crisis seem to confirm that view.

Yet for all its usefulness, Tobin’s approach to monetary economics faded into the background over time, to the point where, as I’ve already mentioned, many economists failed to realise that in a low-interest rate environment, big increases in the monetary base needn’t translate into big rises in broader monetary aggregates.

Why did Tobin’s approach fade away? As someone who was, in a way, part of the process, I’d say that strategic ad-hocness became unfashionable. Indeed, it became really hard to publish papers in macro that didn’t derive everything explicitly from maximisation. As I also noted, to my own later embarrassment, I didn’t simply cite Tobin in my 1998 analysis of Japan’s inability to expand broad money aggregates; instead I engaged in an awkward and gratuitously complicated effort to explain the issue in terms of a contrived maximisation problem.

Let’s hope that younger researchers won’t make the same mistake. In fact, I’ve been encouraged to see some recent analyses — for example, of the effects of a strong dollar on world markets — return to Tobin-style strategic sloppiness rather than shackling themselves to a definition of rigour that would have largely prevented them from addressing the issues at all.


As I mentioned above, ad hoc portfolio preferences weren’t the only place where Tobin was neoclassical up to a point but knew when to stop. He didn’t insist on full maximisation; he also restricted the assumption of full equilibrium, with prices moving quickly to match supply and demand, to asset markets. In much of his work he put asset markets in the foreground, with developments in goods markets in the background. But he pretty clearly thought in terms of sticky, slowly adjusting prices in the real economy.

Again, he wasn’t the first economist to do that — you could argue that it’s the implicit approach in both Keynes and Hicks. But the distinction between rapidly clearing asset markets, modelled at least vaguely in terms of rational behaviour, and disequilibrium in goods markets was especially clear in Tobin’s work.

And that same distinction has played a big role in one of my home fields, international macroeconomics.

I’m not sure whether people outside international macro are aware of, or at any rate fully appreciate the impact of, Rudi Dornbusch’s 1976 paper ‘Expectations and exchange rate dynamics.’ If you don’t know it, Dornbusch combined three things:

  • A simple version of the monetary system, in which an increased money supply leads to a lower interest rate.

  • Expected interest parity: the interest rate on domestic bonds is equal to the interest rate on foreign bonds plus the expected rate of change of the exchange rate.

  • Slow adjustment of goods prices, so that an increased money supply only gradually translates into a rise in the price level.

Dornbusch used this framework to arrive at a striking conclusion: an increase in the money supply leads to a more than proportional depreciation of the currency — overshooting. Why? Because the currency must depreciate past its long run level, so that it can be expected to rise, generating enough expected appreciation to offset the fall in interest rates.
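Under the standard textbook reduction of the model — regressive exchange-rate expectations with speed theta, and an interest semi-elasticity of money demand lambda, both values assumed here purely for illustration — the size of the overshoot can be computed directly:

```python
# Back-of-envelope Dornbusch overshooting (standard textbook reduction;
# parameter values are illustrative assumptions, in log units).
# Money demand: m - p = -lam * i
# UIP with regressive expectations: i = i_star + theta * (e_bar - e),
# where e_bar is the long-run exchange rate.
lam = 2.0     # interest semi-elasticity of money demand
theta = 0.5   # expected speed of regression of e toward e_bar
dm = 0.10     # permanent 10% increase in the money supply

# Long run: p and e both rise one-for-one with m.
de_long_run = dm
# Impact: p is fixed, so the interest rate falls by dm/lam; UIP then requires
# expected appreciation of dm/lam, i.e. e must jump ABOVE its new long-run
# level by dm/(lam*theta).
de_impact = dm + dm / (lam * theta)

print(f"long-run depreciation: {de_long_run:.3f}")  # 0.100
print(f"impact depreciation:   {de_impact:.3f}")    # 0.200 -> overshooting
```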

At the time, Dornbusch saw this as a way to understand why floating exchange rates — which were new at the time — had proved so much more volatile than economists and policymakers expected and hoped. But here’s the funny thing: nobody believes that this is the true explanation of exchange rate volatility, among other things because modern central banks don’t set money supplies, they set interest rates, possibly following something like a Taylor rule.

So why did the Dornbusch overshooting model matter so much? In part because the overshooting story is still how most international economists think about the relationship between interest rates and exchange rates — cut your interest rate and your currency must fall enough that people expect it to rise in the future.

But in a more meta sense, Dornbusch showed that you could tell interesting stories in a sticky-price model — in fact, you could even make use of the important idea that investors are forward-looking. And what this did was help keep international macroeconomics sane. Domestic macroeconomics went through a long period in which it was almost impossible to publish sensible papers, because you couldn’t justify the unmistakable reality of price stickiness with maximising behaviour; international macro never went through an equilibrium business cycle phase, because Dornbusch-type analysis allowed realistic descriptions of goods markets to stay respectable.

The relationship to Tobinism is a bit indirect, but I would argue real and important. Instantaneous market-clearing equilibrium is a reasonable assumption about asset markets, and can lead to important insights; it’s a terrible assumption elsewhere in the economy. And being willing to make this distinction — to be neoclassical part of the way, but knowing where to stop — remains key to doing useful macroeconomics.

But what did Tobin have to say about markets for goods and labour? There too his insights look much better, indeed prescient, now that sufficient time has passed.


Milton Friedman’s monetarism, while it still raises its head now and then, has by and large been consigned to the dustbin of intellectual history. Almost nobody pays attention to monetary aggregates these days. However, another idea, the natural rate of unemployment — advanced by Friedman and Edmund S. Phelps in the late 1960s — remains a powerful force in policy debates.

What Friedman and Phelps argued was that there is no long-run tradeoff between unemployment and inflation, because higher inflation will eventually get baked into expectations and raise the actual inflation rate associated with any given rate of unemployment. This view seemed to receive spectacular confirmation with the experience of stagflation in the 1970s.

Tobin was, however, sceptical. In his 1972 presidential address at the American Economic Association, ‘Inflation and Unemployment,’ he didn’t reject the idea that expectations can drive inflation. But he argued against a pure maximising view in which labour market behaviour didn’t depend on what was happening to nominal wages. Instead, he argued that it’s really hard to get workers to accept nominal wage cuts — harder than it is to get them to accept real wage declines produced by inflation.

This was, I’d argue, another case of using neoclassical economics but knowing when to stop. Maximisation might say that only real wages matter, but real-world observation says otherwise — and when nifty theory and reality collide, reality should win.

And Tobin was definitely right about reality. In the aftermath of the 2008 financial crisis, many countries, the U.S. included, saw the distribution among workers of nominal wage changes develop a large spike at zero. That is, in a depressed economy there were many workers in labour markets where the equilibrium wage — the wage that would have matched supply and demand — had fallen from the year before. But employers were very reluctant to actually cut wages; except in extreme cases like Greece, the previous year’s wage rate served as an effective floor on this year’s wage.

More generally, you can say that there’s a lot of nonlinearity in the response of wages to unemployment: it takes a much bigger change in unemployment to induce a given wage cut than it takes to induce a comparable wage rise.

Now add in the fact that the economy is always changing; there are always rising and falling industries and occupations. This means that when inflation is very low, there are always sectors that are ‘trying’ to cut wages, but not succeeding. And as Tobin pointed out, this implied that there is, in fact, a tradeoff between inflation and unemployment when inflation is low. Of course, many people are aware of this argument; it’s one of the reasons the Fed targets two percent inflation, not zero.
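A quick simulation makes the mechanism concrete; the distribution of desired wage changes below is an assumption chosen for illustration, not an estimate:

```python
# Simulating Tobin's 'grease the wheels' argument (illustrative distributional
# assumptions). Sectors draw desired nominal wage changes centred on trend
# inflation; downward rigidity means actual changes cannot fall below zero.
import random
random.seed(0)

def mean_actual_wage_growth(inflation, spread=0.04, n=100_000):
    total = 0.0
    for _ in range(n):
        desired = random.gauss(inflation, spread)  # market-clearing change
        total += max(desired, 0.0)                 # the zero floor binds here
    return total / n

low = mean_actual_wage_growth(0.00)   # zero trend inflation
high = mean_actual_wage_growth(0.04)  # 4% trend inflation
print(f"excess of actual over desired wage growth at 0% inflation: {low - 0.00:.4f}")
print(f"excess of actual over desired wage growth at 4% inflation: {high - 0.04:.4f}")
```

At zero inflation the floor binds often, so average wage growth sits well above its market-clearing level, with unemployment as the counterpart; at four percent inflation the floor almost never binds.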

But it’s an argument that has become even more relevant lately. That may seem off, since at the time of writing the U.S. economy is going through a bout of inflation. But we’ve also gone through a period of huge economic dislocation in the aftermath of the pandemic, with big shifts not just in the total amount of stuff people buy but also in the composition of what they buy.

And these big shifts make Tobin’s point about nonlinearity highly relevant after all. If part of the economy is overheated while another part is depressed, you would expect to see inflation, because excess demand does more to increase prices than excess supply does to reduce them. The combination of dislocation and nonlinearity probably isn’t the only reason we’re seeing a lot of inflation, but it’s surely part of the reason.
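A two-sector example, with asymmetric response coefficients assumed purely for illustration, shows how a pure reallocation of demand can be inflationary on net:

```python
# A two-sector illustration of why dislocation plus nonlinearity is
# inflationary (stylised response coefficients, assumed for illustration).
def price_change(demand_gap):
    # Asymmetric response: excess demand raises prices faster than
    # equal-sized excess supply lowers them.
    return 0.8 * demand_gap if demand_gap > 0 else 0.2 * demand_gap

# A pure reallocation shock: one sector overheats, the other slumps by the
# same amount, so aggregate demand is unchanged.
gaps = [+0.10, -0.10]
inflation = sum(price_change(g) for g in gaps) / len(gaps)
print(f"aggregate inflation from a pure reallocation: {inflation:.3f}")  # 0.030
```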

Oh, and it’s also a reason to hope that inflation will, to some extent, subside of its own accord as the economy settles down, that we won’t need a large rise in unemployment.


As I hope I’ve managed to convey, James Tobin’s papers remain well worth studying for their content. Modern economists trying to make sense of everything from quantitative easing to the effects of a strong dollar can learn a lot from his modelling of how monetary policy interacts with asset markets. In fact, in many cases they may want to directly import his old models into their analysis; I’ve seen quite a few economists who never studied Tobin basically reinvent analyses he worked out 45 years ago, sometimes in inferior form.

But as I also hope I’ve managed to convey, Tobinomics goes beyond specific models. It’s an attitude — one in which you use models but don’t let them use you.

Economics is a long way from being anything like an exact science. It does, however, have an advantage over other social sciences in that much of what we study involves relatively simple human motives — the desire for gain — and relatively simple human interactions — buying and selling on markets. One way to think about neoclassical economics is that it imagines an economy in which these simple motives and simple interactions are all there are, and then assumes that everything works perfectly. And it’s a powerful tool for insight.

There are other ways to do economics. You can try to go behavioural — trying to be more realistic about what people do. You can try agent-based modelling — trying to be more realistic about interactions. So far, however, these approaches don’t offer enough power to replace neoclassical modelling.

For now, then, we have to do what Tobin did: use neoclassical analysis, but be prepared to bend or even break the rules when your reality sense demands it. And it’s a good thing.


  • Dornbusch, R. (1976), ‘Expectations and exchange rate dynamics’, Journal of Political Economy, 84(6): 1161–1176.

  • Krugman, P. (1998), ‘It’s baaack: Japan’s slump and the return of the liquidity trap’, Brookings Papers on Economic Activity, 29(2): 137–206.

  • Tobin, J. (1958), ‘Liquidity preference as behavior toward risk’, The Review of Economic Studies, 25(2): 65–86.

  • Tobin, J. (1963), ‘Commercial banks as creators of “money”’, Cowles Foundation Discussion Paper no. 159.

  • Tobin, J. (1969), ‘A general equilibrium approach to monetary theory’, Journal of Money, Credit and Banking, 1(1): 15–29.

  • Tobin, J. (1970), ‘Money and income: Post hoc ergo propter hoc?’, Quarterly Journal of Economics, 84(2): 301–317.

  • Tobin, J. (1972), ‘Inflation and unemployment’, American Economic Review, 62(1/2): 1–18.
