#30 Mitchell DSGE critique
https://nam-students.blogspot.com/2019/06/30-mitchell-dsge.html
References:
Buiter, W. (2009) "The Unfortunate Uselessness of Most State of the Art Academic Monetary Economics", Financial
Times, March. Available at: http://economistsview.typepad.com/economistsview/2009/03/the-unfortunate-use-lessness-of-most-state-of-the-art-academic-monetary-economics.html, accessed 25 September 2018.
Mainstream macroeconomics in a state of ‘intellectual regress’ – Bill Mitchell – Modern Monetary Theory 2017/1/3
http://bilbo.economicoutlook.net/blog/?p=35118 ☆☆Mainstream macroeconomic fads – just a waste of time
Their justification was that when interest rates reach the 'zero bound' (at zero or close to it), the space for fur-
ther monetary policy interventions becomes limited. In this situation, fiscal policy targeted at promoting higher
expectations of inflation may be effective in stimulating spending.
The idea is that in recession, policymakers must create the expectation that a fiscal expansion will spur infla-
tion that will increase future tax revenue to 'pay for' the current deficit. In that case, rational agents will not react
negatively to a fiscal deficit. Earlier versions of mainstream theory invoked the concept of Ricardian Equivalence
to eschew the use of fiscal deficits. Ricardian Equivalence states that when governments run fiscal deficits, the
spending stimulus forthcoming is thwarted by private households increasing their own saving (reducing spend-
ing) in order to build up a reserve to pay higher taxes that will be needed to retroactively 'pay for' the deficit.
The NMC realises that in some circumstances (where fiscal deficits lead to higher inflationary expectations) this
offsetting private behaviour will not be forthcoming.
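In symbols, the Ricardian Equivalence logic runs as follows; this is a standard two-period textbook rendering in our own notation, not an equation from this chapter:

```latex
% Two-period sketch of Ricardian Equivalence (illustrative notation).
% A deficit-financed tax cut \Delta T_1 < 0 today must be repaid with
% interest tomorrow, \Delta T_2 = -(1+r)\,\Delta T_1, so the present
% value of lifetime taxes is unchanged:
\Delta T_1 + \frac{\Delta T_2}{1+r}
  \;=\; \Delta T_1 - \frac{(1+r)\,\Delta T_1}{1+r} \;=\; 0
\quad\Rightarrow\quad
\Delta S_1^{\mathrm{private}} \;=\; -\,\Delta T_1 .
% Consumption plans are untouched and private saving rises one for one
% with the deficit: the 'offsetting private behaviour' described above.
```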
More importantly, according to NMC, the central bank's low interest rates in the absence of fiscal expansion
would actually tighten the government's fiscal stance because it would pay less interest on its bonds. Thus, fiscal
policy must be intentionally relaxed to work with monetary policy when it has reached the lower bound at near-
zero interest rates. The claim is that after the GFC governments generally failed to do this, which is why monetary
policy did not work.
These considerations are believed to relieve government, albeit temporarily, from the dreaded 'budget con-
straint' (addressed in Chapter 22), but only in the exceptional case where interest rates fall to zero. As the reader
will remember, the MMT position is that such thinking is flawed and results from extrapolating the budget con-
straint of a household to the currency-issuing government.
30.3 Weaknesses of the NMC
The GFC and its aftermath cast unfavourable light on the NMC, and caused its proponents to attempt to address
some of its weaknesses. In particular, its treatment of money and financial institutions as well as its relegation of
fiscal policy to a subordinate role have been recognised as problematic. In addition, its methodology, which relies
on 'aggregating up' from individual behaviour, suffers from fallacies of composition. Finally, its policy recommen-
dations, both before and after the GFC, proved to be rather ineffective.
Proponents of New Keynesian economics and the NMC claim authority because their macroeconomic mod-
els are what they call micro founded. This just means that they assume people and firms behave as rational,
maximising agents with rational expectations and can solve very complex maximisation problems (with respect
to consumption and output decisions) about current and future actions. The problem is that the NMC's highly
stylised mathematical models, which are overly simplistic because they cannot be 'solved' otherwise, fail very
badly when they try to say anything sensible about movements in real world data. At that point, ad hoc changes
are made to the models (for example, putting lagged variables in to capture real world inertia), which are not
indicated by the micro-founded theory. In other words, what eventually emerges as the practical interface of these
theories is not based on micro optimisation, and so the claim for authority is negated.
More generally, as argued throughout this textbook, any approach that attempts to explain macroeconomic
(aggregate) behaviour by starting out with individual behaviour is exposed to problems of fallacies of composi-
tion when it attempts to extrapolate findings to the economy as a whole. Even if individual behaviour can be
described as pursuing a rational calculation aimed at utility maximisation over one's lifetime subject to budget
constraints, the macroeconomic implications of such behaviour cannot be obtained by simply summing up over
a number of individuals. For example, as previous chapters have shown, while spending by individuals could be
constrained by income, at the aggregate level it is spending that determines income. Aggregate income is con-
strained by spending, which in turn is largely determined by expectations (as firms hire the amount of labour they
think they need to produce the amount of output they think they can sell at a profit). There are also complex
problems of coordination at the aggregate level that are assumed away either by invocation of the 'invisible hand'
metaphor or by modelling an economy that has only one 'representative' individual (a typical approach in general
equilibrium models) or identical individuals.
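The point that spending determines income at the aggregate level can be made concrete with the textbook paradox of thrift. The following is a minimal sketch in a toy two-sector model (no government or foreign sector); the numbers and function names are our illustrative assumptions:

```python
# Paradox of thrift in a toy income-expenditure model: Y = C + I with
# C = (1 - s) * Y and investment I fixed. Each household can raise its own
# saving by spending less, but aggregate spending determines aggregate
# income, so a higher desired saving rate lowers income and leaves total
# saving unchanged.

def equilibrium_income(saving_rate: float, investment: float) -> float:
    """Solve Y = (1 - s) * Y + I, i.e. Y = I / s."""
    return investment / saving_rate

investment = 100.0
for s in (0.10, 0.20):  # households try to double their saving rate
    y = equilibrium_income(s, investment)
    print(f"saving rate {s:.0%}: income = {y:.0f}, aggregate saving = {s * y:.0f}")

# Output: income halves (1000 -> 500) while aggregate saving stays at 100,
# pinned down by investment. The individual-level intuition does not scale up.
```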
NMC economists continue to repeat many of the logical errors of the old neoclassical theorists. For example,
proponents of the NMC explained the persistence of the post-GFC recession by arguing that the desire to save
outstripped the desire to invest. In their view, this would normally generate an equilibrating fall in the interest
rate, which would have reduced saving and increased investment spending. But due to the zero bound being
reached, monetary policy could not bring saving and investment into equilibrium.
REMINDER BOX
Students will immediately identify that this argument is based on the flawed loanable
funds approach to interest rates that are supposedly determined by the intersection of sav-
ings and investment (see Chapter 13). As we demonstrated in Chapter 13, saving equals
investment (in the simple model without government or a foreign sector) regardless of the
level of the interest rate. Most importantly, saving is a function of income and it is income
adjustments rather than interest rate adjustments that bring saving into line with planned
investment expenditure. In the expanded model, saving equals investment plus the gov-
ernment deficit plus the current account surplus.
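Stated in symbols (our notation, matching the wording of the box):

```latex
% Saving identity from the box. In the simple two-sector model S = I holds
% identically; in the expanded model, with G - T the government deficit and
% CAB the current account surplus,
S \;=\; I \;+\; (G - T) \;+\; CAB ,
% and it is income adjustments, not interest rate adjustments, that make
% the identity hold at the aggregate level.
```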
Further, prior to the GFC, the mainstream macroeconomists ignored the financial sector because they believed
that it had no relevance to the real sector. In this sense, the New Keynesian approach adopted the classical
dichotomy in which money is a veil and is only relevant for determining prices (see Chapter 12). Accordingly, an
understanding of the real economy could abstract from the financial sector, with the only concession being the
introduction of a central bank following a 'Taylor Rule'.
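For reference, the canonical Taylor (1993) rule takes the form below; the coefficients of 0.5 are Taylor's original illustrative values:

```latex
% Taylor (1993) rule: the nominal policy rate i_t responds to the inflation
% gap and the output gap around a neutral real rate r^*:
i_t \;=\; r^{*} + \pi_t + 0.5\,(\pi_t - \pi^{*}) + 0.5\,y_t ,
% where \pi^{*} is the inflation target and y_t the percentage output gap.
```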
After the GFC, the NMC economists realised that the lack of any attention to financial markets in their
core macroeconomic framework was a major error. A plethora of new academic papers emerged, attempting
to integrate banks and financial markets into the New Keynesian model. The revised New Keynesian approach
retained the DSGE framework and added elements of the financial sector. Once again this was a case of a practice
that heterodox economist David Gordon described in the 1970s as being an ad hoc response to anomaly: a
characteristic of the neoclassical approach when confronted with major empirical shortcomings.
A full exposition of the technicalities of the DSGE approach is beyond the scope of this textbook. The following
introductory statements made to the US House of Representatives Committee on Science and Technology
hearing on 20 July 2010 are useful:
The dominant macro model has for some time been the Dynamic Stochastic General Equilibrium model.
or DSGE, whose name points to some of its outstanding characteristics. "General" indicates that the model
includes all markets in the economy. "Equilibrium" points to the assumptions that supply and demand
balance out rapidly and unfailingly, and that competition reigns in markets that are undisturbed by
shortages, surpluses, or involuntary unemployment. "Dynamic" means that the model looks at the economy over
time rather than at an isolated moment. "Stochastic" corresponds to a specific type of manageable random-
ness built into the model that allows for unexpected events, such as oil shocks or technological changes,
but assumes that the model's agents can assign a correct mathematical probability to such events, thereby
making them insurable. Events to which one cannot assign a probability, and that are thus truly uncertain,
are ruled out.
The agents populating DSGE models, functioning as individuals or firms, are endowed with a kind of
clairvoyance. Immortal, they see to the end of time and are aware of anything that might possibly ever occur,
as well as the likelihood of its occurring. Their decisions are always instantaneous yet never in error, and no deci-
sion depends on a previous decision or influences a subsequent decision. Also assumed in the core DSGE model
is that all agents of the same type - that is, individuals or firms - have identical needs and identical tastes
which, as "optimisers," they pursue with unbounded self-interest and full knowledge of what their wants are. By
employing what is called the "representative agent" and assigning it these standardised features, the DSGE model
excludes from the model economy almost all consequential diversity and uncertainty - characteristics that in
many ways make the actual economy what it is.
The DSGE universe makes no distinction between system equilibrium, in which the balancing of agent-level
disequilibrium forces maintains the macroeconomy in equilibrium, and full agent equilibrium, in which every
individual in the economy is in equilibrium. In so doing, it assumes away phenomena that are commonplace
in the economy: involuntary unemployment and the failure of prices or wages to adjust instantaneously to
changes in the relation of supply and demand. These phenomena are seen as exceptional and call for special
explanation.
As with all neoclassical general equilibrium models, there is also a problem with trying to introduce money, banks
and the financial system to this stylised framework. For example, the standard DSGE models are not useful for
analysing financial crises because debt default is ruled out under the representative agent assumption. This means
that there is no need for banks that specialise in assessing creditworthiness. If no one ever defaults then everyone
is equally creditworthy and the fundamental activity of banks, what is called 'underwriting', is superfluous. Savers
can just lend directly to borrowers or, alternatively, the debts issued by everyone are equally acceptable and
always exchange at par against all other debts.
While DSGE modellers want to include money as a medium of exchange to make their theory more
relevant to the real world, they have no justification for the existence of banks that would issue it. Given that
all debts are risk free, there would be no need for money since any debt could serve the same purpose; you
could always buy what you want by directly issuing your own debt. Indeed, debts that pay interest would
always trump non-interest paying money. It is ironic that this model is used by central bankers to formulate
monetary policy, yet it cannot convincingly justify either the use of money or the inclusion of financial
institutions. Attempts to work debt default into the DSGE framework post-GFC add significant complexity
and make it largely unworkable.
Economist Willem Buiter (2009), who now works in the financial markets, described New Keynesian and DSGE
modeling as "The unfortunate uselessness of most 'state of the art' academic monetary economics". He continued:
Most mainstream macroeconomic theoretical innovations since the 1970s (the New Classical rational
expectations revolution ... and the New Keynesian theorising ...) have turned out to be self-referential, inward-
looking distractions at best. Research tended to be motivated by the internal logic, intellectual sunk capital
and esthetic puzzles of established research programmes rather than by a powerful desire to understand how
the economy works - let alone how the economy works during times of stress and financial instability. So the
economics profession was caught unprepared when the crisis struck ... the Dynamic Stochastic General
Equilibrium approach which for a while was the staple of central banks' internal modelling ... excludes everything
relevant to the pursuit of financial stability. (Buiter, 2009)
A few years before the global financial collapse, central bankers were congratulating themselves on the success of the
NMC approach in not only keeping inflation down, but also stabilising growth and financial markets. In 2004, Ben
Bernanke, soon to become the Chairman of the US Federal Reserve Bank, declared the arrival of the Great Moderation, a
new era of stability in which successful policy management by central banks had reduced the risk of runaway inflation
or recessions. Central bankers would be able to address macroeconomic problems.
This turned out to be an unfortunate prognosis, as the GFC began just three years later. Alan Greenspan (PBS
Newshour, 2008), who was chairman of the US Federal Reserve Bank until 2006, later told the US Congress that
the crisis showed that his entire world view, developed over half a century and based on a faith in the efficiency of
'free markets', had been entirely wrong (see Box 32.1). Central bankers had neither understood how the economy
worked, nor had they actually produced a new era of stability. In fact, even as Bernanke wrote his paper, the US
was living through a period of unprecedented bubbles in real estate markets, commodities markets, and equity
markets.
In the aftermath of the crisis, central bankers experimented with historically low interest targets, then turned
to unconventional policy such as quantitative easing (QE) and negative interest rates (see Chapter 23), and even
discussed policies such as 'helicopter money drops' (in which the central bank would distribute 'free money' to
households). The central banks did everything they could think of doing (according to their theoretical positions)
to cause inflation (and reset expectations) because it remained stubbornly below their targets. While this cast
some doubt on the potency of monetary policy, it did not cause a significant change to the new synthesis of
macroeconomic theory. New Keynesian economists such as Paul Krugman simply tried to tweak the NMC by
arguing that in a 'liquidity trap', monetary policy loses some of its effectiveness (see Chapter 23). While central
banks tried QE to lower longer-term interest rates and even negative interest rates by charging interest to banks
holding reserves, neither of these had significant effects.
For this reason, as noted above, some NMC economists have begun to move away from the common orthodoxy
that fiscal policy is impotent, arguing that at least in some circumstances the 'Ricardian Equivalence' assumption
does not hold. In a 'non-Ricardian' situation, an increase of government spending or a reduction of taxes might
not be offset by more private sector saving to pay the anticipated higher taxes in the future. In that case, deficit
spending can raise demand and thus nominal and even real GDP. While some of the terminology differs from
that of MMT, including metaphors such as 'printing money' and 'helicopter drops', at least some advocates of the
NMC have come to understand what MMT has long argued.
For example, this is how Woodford (2000: 32, emphasis retained from original) puts it:
A subtler question is whether it makes sense to suppose that actual market institutions do not actual-
ly impose a constraint ... upon governments (whether logically necessary or not), given that we believe that
they impose such borrowing limits upon households and firms. The best answer to this question, I believe,
is to note that a government that issues debt denominated in its own currency is in a different situation
from that of private borrowers, in that its debt is a promise only to deliver more of its own liabilities.
(A Treasury bond is simply a promise to pay dollars at various future dates, but these dollars are simply
additional government liabilities, that happen to be non-interest-earning.) There is thus no possible doubt about
the government's technical ability to deliver what it has promised.
Ben Bernanke (2002, emphasis added) reached the same conclusion:
Under a fiat (that is, paper) money system, a government (in practice, the central bank in cooperation with
other agencies) should always be able to generate increased nominal spending and inflation, even when
the short-term nominal interest rate is at zero... The U.S. government has a technology, called a printing
press (or, today, its electronic equivalent) that allows it to produce as many U.S. dollars as it wishes at
essentially no cost.
However, the NMC remains a mainstream approach. It still sees taxes and borrowing as the means of financing
government spending with money printing as an option to be reserved for extraordinary circumstances, such as
a downturn as deep as that of the GFC. Its approach to macroeconomics still relies on aggregating up from indi-
vidual behaviour, meaning it is subject to various fallacy of composition errors.
It views market forces as equilibrating, even if rigidities, imperfect information, and imperfect competition
prevent continuous market clearing. It has trouble introducing money and financial institutions, let alone financial
crises, into the analysis in a plausible manner.
Individuals are still presumed to hold rational expectations in a world that is presumed to be actuarially
certain. There is no true uncertainty although probabilistic risk exists (see Section 29.5). In all these respects, the
world it models bears little resemblance to the world in which we live.
The effectiveness of NMC policy relies critically on expectations management. Both monetary policy and
fiscal policy will work only if the central bank can generate a consensus of expectations. For example, lowering
inflation requires market participants to expect that inflation will be lower. This lowers actual inflation as firms
and workers agree to stop raising wages and prices. When the problem is deflation, it is even more imperative
that policy generate expectations of rising prices. Once monetary policy reaches the zero lower bound, it is
difficult to do anything but to work through expectations management. And as discussed, even stimulative
fiscal policy will be impotent unless it can produce expectations of inflation due to the notion of Ricardian
Equivalence.
Conclusion
Since the early 1980s the central banks have adopted several key principles that were believed to improve expec-
tations management: transparency, telegraphing policy, gradualism, and activism. Transparency means that the
central bank will work closely with financial markets, informing them about its policy formation process, provid-
ing clear statements about its goals. It telegraphs rate changes far in advance to avoid any surprises (note how dif-
ferent this is from both Monetarism and New Classical views of central bank operations). Gradualism means that
the central bank moves rates by small amounts over relatively long periods of time to achieve the total change of
rates desired. This also allows markets to adjust gradually to the new interest rate regime. Finally, activism means
that the central banks act quickly at the first signs that inflation will move away from target. Ideally, the central
bank fights either higher inflation or deflation before it actually appears.
However, in practice, following these principles can prove to be problematic. For example, evidence from the
US shows that as the economy recovered from the GFC, the US Federal Reserve Bank began to warn markets
that the era of very low interest rates would be coming to an end. This was consistent with the desire for
transparency, policy telegraphing, and activism. However, for years after the US Federal Reserve Bank first issued
this warning, inflation rates remained below its preferred range. Markets came to expect rate hikes that never
happened.
We know from the US Federal Reserve Bank's transcripts that the main reason it raised rates was because the
market expected it to do so, and it did not want to disappoint the market. In other words, expectations management
had gone awry as the US Federal Reserve Bank had built expectations that then forced it to undertake policy that
it otherwise might not have undertaken.
Ultimately, market expectations must be linked to reality. After the GFC, the central banks believed that the
path to recovery was to generate expectations of inflation. However, as discussed above, even with the combination
of zero interest rate policy and many trillions of dollars, pounds, euros, and yen spent on asset purchases through QE,
the central banks could not induce expectations of positive inflation. This demonstrates that expectations
management can be a thin reed on which to hang national economic policy.
References
Bernanke, B. (2002) "Deflation: Making Sure 'It' Doesn't Happen Here", Remarks before the National Economists Club,
Washington, DC, November 21.
Bernanke, B.S. (2004) "The Great Moderation", Remarks made at the meeting of the Eastern Economic Association,
Washington, DC, February 20.
Buiter, W. (2009) "The Unfortunate Uselessness of Most State of the Art Academic Monetary Economics", Financial
Times, March. Available at: http://economistsview.typepad.com/economistsview/2009/03/the-unfortunate-use-lessness-of-most-state-of-the-art-academic-monetary-economics.html, accessed 25 September 2018.
PBS Newshour (2008) "Greenspan Admits 'Flaw' to Congress, Predicts More Economic Problems", 23 October. Available
at: http://www.pbs.org/newshour/bb/business-july-dec08-crisishearing 10-23/, accessed 27 June 2017.
U.S. House of Representatives (2010) "Building a Science of Economics for the Real World", Committee on Science and
Technology Subcommittee on Investigations and Oversight, 20 July 2010. Available at: https://www.gpo.gov/fdsys/
pkg/CHRG-111hhrg57604/pdf/CHRG-111hhrg57604.pdf, accessed 24 September 2018.
Woodford, M. (2000) "Fiscal Requirements for Price Stability", Princeton University Working Paper, October. Available
at: http://www.columbia.edu/~mw2230/imcb.pdf, accessed 24 September 2018. Published as Woodford, M. (2001)
"Fiscal Requirements for Price Stability", Journal of Money, Credit and Banking, 33(3), 669-728.
"The Unfortunate Uselessness of Most 'State of the Art' Academic Monetary Economics"
The unfortunate uselessness of most ’state of the art’ academic monetary economics, by Willem Buiter: The Monetary Policy Committee of the Bank of England I was privileged to be a ‘founder’ external member ... contained, like its successor ..., quite a strong representation of academic economists and other professional economists with serious technical training and backgrounds. This turned out to be a severe handicap when the central bank had to switch gears and change from being an inflation-targeting central bank under conditions of orderly financial markets to a financial stability-oriented central bank under conditions of widespread market illiquidity and funding illiquidity. Indeed, it may have set back by decades serious investigations of aggregate economic behaviour and economic policy-relevant understanding. It was a privately and socially costly waste of time and other resources.
Most mainstream macroeconomic theoretical innovations since the 1970s (the New Classical rational expectations revolution associated with such names as Robert E. Lucas Jr., Edward Prescott, Thomas Sargent, Robert Barro etc, and the New Keynesian theorizing of Michael Woodford and many others) have turned out to be self-referential, inward-looking distractions at best. Research tended to be motivated by the internal logic, intellectual sunk capital and esthetic puzzles of established research programmes rather than by a powerful desire to understand how the economy works - let alone how the economy works during times of stress and financial instability. So the economics profession was caught unprepared when the crisis struck.
Complete markets
The most influential New Classical and New Keynesian theorists all worked in what economists call a ‘complete markets paradigm’. In a world where there are markets for contingent claims trading that span all possible states of nature (all possible contingencies and outcomes), and in which intertemporal budget constraints are always satisfied by assumption, default, bankruptcy and insolvency are impossible. ...
Both the New Classical and New Keynesian complete markets macroeconomic theories not only did not allow questions about insolvency and illiquidity to be answered. They did not allow such questions to be asked. ...
[M]arkets are inherently and hopelessly incomplete. Live with it and start from that fact. ... Perhaps we shall get somewhere this time.
The Auctioneer at the end of time
In both the New Classical and New Keynesian approaches to monetary theory (and to aggregative macroeconomics in general), the strongest version of the efficient markets hypothesis (EMH) was maintained. This is the hypothesis that asset prices aggregate and fully reflect all relevant fundamental information, and thus provide the proper signals for resource allocation. Even during the seventies, eighties, nineties and noughties before 2007, the manifest failure of the EMH in many key asset markets was obvious to virtually all those whose cognitive abilities had not been warped by a modern Anglo-American Ph.D. education. But most of the profession continued to swallow the EMH hook, line and sinker, although there were influential advocates of reason throughout, including James Tobin, Robert Shiller, George Akerlof, Hyman Minsky, Joseph Stiglitz and behaviourist approaches to finance. The influence of the heterodox approaches ... was, however, strictly limited.
In financial markets, and in asset markets, real and financial, in general, today’s asset price depends on the view market participants take of the likely future behaviour of asset prices. ... Since there is no obvious finite terminal date for the universe ..., most economic models with rational asset pricing imply that today’s price depends in part on today’s anticipation of the asset price in the infinitely remote future. ... But in a decentralised market economy there is no mathematical programmer imposing the terminal boundary conditions to make sure everything will be all right. ...
The friendly auctioneer at the end of time, who ensures that the right terminal boundary conditions are imposed to preclude, for instance, rational speculative bubbles, is none other than the omniscient, omnipotent and benevolent central planner. No wonder modern macroeconomics is in such bad shape. ... Confusing the equilibrium of a decentralised market economy, competitive or otherwise, with the outcome of a mathematical programming exercise should no longer be acceptable.
So, no Oikomenia, there is no pot of gold at the end of the rainbow, and no Auctioneer at the end of time.
Linearize and trivialize
If one were to hold one’s nose and agree to play with the New Classical or New Keynesian complete markets toolkit, it would soon become clear that any potentially policy-relevant model would be highly non-linear, and that the interaction of these non-linearities and uncertainty makes for deep conceptual and technical problems. Macroeconomists are brave, but not that brave. So they took these non-linear stochastic dynamic general equilibrium models into the basement and beat them with a rubber hose until they behaved. This was achieved by completely stripping the model of its non-linearities and by ... mappings into well-behaved additive stochastic disturbances.
Those of us who have marvelled at the non-linear feedback loops between asset prices in illiquid markets and the funding illiquidity of financial institutions exposed to these asset prices through mark-to-market accounting, margin requirements, calls for additional collateral etc. will appreciate what is lost ... Threshold effects, non-linear accelerators - they are all out of the window. Those of us who worry about endogenous uncertainty arising from the interactions of boundedly rational market participants cannot but scratch our heads at the insistence of the mainline models that all uncertainty is exogenous and additive.
Technically, the non-linear stochastic dynamic models were linearised (often log-linearised) at a deterministic (non-stochastic) steady state. The analysis was further restricted by only considering forms of randomness that would become trivially small in the neighbourhood of the deterministic steady state. Linear models with additive random shocks we can handle - almost!
Even this was not quite enough ... When you linearize a model, and shock it with additive random disturbances, an unfortunate by-product is that the resulting linearised model behaves either in a very strongly stabilising fashion or in a relentlessly explosive manner. ... The dynamic stochastic general equilibrium (DSGE) crowd saw that the economy had not exploded without bound in the past, and concluded from this that it made sense to rule out ... the explosive solution trajectories. What they were left with was something that, following an exogenous random disturbance, would return to the deterministic steady state pretty smartly. No L-shaped recessions. No processes of cumulative causation and bounded but persistent decline or expansion. Just nice V-shaped recessions.
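A minimal numerical sketch of the behaviour Buiter describes; the AR(1) law of motion and every parameter value below are our illustrative assumptions, not anything taken from his article:

```python
# Once a DSGE model is linearised around its steady state and the explosive
# root is discarded, the solution is a stable linear process: every shock
# decays geometrically back to steady state, a tidy V-shaped recovery.
import numpy as np

rng = np.random.default_rng(0)
rho = 0.9           # stable root (|rho| < 1); the explosive root was ruled out
T = 40
x = np.zeros(T)     # log-deviation of output from the deterministic steady state
x[0] = -5.0         # a large negative shock: a 'recession'

for t in range(1, T):
    x[t] = rho * x[t - 1] + 0.1 * rng.standard_normal()  # small additive noise

print(np.round(x[:10], 2))
# The gap shrinks by a factor of rho each period. Nothing in this structure
# can produce an L-shaped recession or cumulative decline - exactly the
# richness Buiter says was stripped out.
```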
There actually are approaches to economics that treat non-linearities seriously. Much of this work is numerical - analytical results of a policy-relevant nature are few and far between - but at least it attempts to address the problems as they are, rather than as we would like them lest we be asked to venture outside the range of issues we can address with the existing toolkit.
The practice of removing all non-linearities and most of the interesting aspects of uncertainty from the models ... was a major step backwards. I trust it has been relegated to the dustbin of history by now in those central banks that matter.
Conclusion
Charles Goodhart, who was fortunate enough not to encounter complete markets macroeconomics and monetary economics during his impressionable, formative years, but only after he had acquired some intellectual immunity, once said of the Dynamic Stochastic General Equilibrium approach which for a while was the staple of central banks’ internal modelling: “It excludes everything I am interested in”. He was right. It excludes everything relevant to the pursuit of financial stability.
The Bank of England in 2007 faced the onset of the credit crunch with too much Robert Lucas, Michael Woodford and Robert Merton in its intellectual cupboard. A drastic but chaotic re-education took place and is continuing. ...
[Buiter has much more in the original article in support of his arguments.]
I think this is right, but I'd put it differently. Models are built to answer questions, and the models economists have been using do, in fact, help us find answers to some important questions. But the models were not very good (at all) at answering the questions that are important right now. They have been largely stripped of their usefulness for actual policy in a world where markets simply break down.
The reason is that in order to get to mathematical forms that can be solved, the models had to be simplified. And when they are simplified, something must be sacrificed. So what do you sacrifice? Hopefully, it is the ability to answer questions that are the least important, so the modeling choices that are made reveal what the modelers thought was most and least important.
The models we built were very useful for asking whether the federal funds rate should go up or down a quarter point when the economy was hovering in the neighborhood of full employment, or when we found ourselves in mild, "normal" recessions. The models could tell us what type of monetary policy rule is best for stabilizing the economy. But the models had almost nothing to say about a world where markets melt down, where prices depart from fundamentals, or when markets are incomplete. When this crisis hit, I looked into our tool bag of models and policy recommendations and came up empty for the most part. It was disappointing. There was really no choice but to go back to older Keynesian style models for insight.
The reason the Keynesian model is finding new life is that it was specifically built to answer the questions that are important at the moment. The theorists who built modern macro models, those largely in control of where the profession has spent its effort in recent decades, did not even envision that this could happen, let alone build it into their models. Markets work, they don't break down, so why waste time thinking about those possibilities.
So it's not the math: the modeling choices that were made, and the inevitable sacrifices to reality that entails, reflected the importance those making the choices gave to various questions. We weren't forced to this end by the mathematics; we asked the wrong questions and built the wrong models.
New Keynesians have been trying to answer: Can we, using equilibrium models with rational agents and complete markets, add frictions to the model - e.g. sluggish wage and price adjustment - you'll see this called "Calvo pricing" - in a way that allows us to approximate the actual movements in key macroeconomic variables of the last 40 or 50 years?
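For reference, the 'Calvo pricing' device mentioned above underpins the standard New Keynesian Phillips curve. The form below is the textbook one, with lambda standing in for the composite elasticity linking real marginal cost to the output gap (our notation, not Thoma's):

```latex
% With Calvo pricing, each firm may reset its price in a given period only
% with probability 1 - \theta. Aggregating yields the New Keynesian
% Phillips curve:
\pi_t \;=\; \beta\,\mathbb{E}_t[\pi_{t+1}] \;+\; \kappa\,x_t ,
\qquad
\kappa \;=\; \frac{(1-\theta)(1-\beta\theta)}{\theta}\,\lambda ,
% where x_t is the output gap. The stickiness parameter \theta is what lets
% nominal demand have real effects in these models.
```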
Real Business Cycle theorists also use equilibrium models with rational agents and complete markets, and they look at whether supply-side shocks such as shocks to productivity or labor supply can, by themselves, explain movements in the economy. They largely reject demand-side explanations for movements in macro variables.
The fight - and main question in academics - has been about what drives macroeconomic variables in normal times, demand-side shocks (monetary policy, fiscal policy, investment, net exports) or supply-side shocks (productivity, labor supply). And it's been a fairly brutal fight at times - you've seen some of that come out during the current policy debate. That debate within the profession has dictated the research agenda.
What happens in non-normal times, i.e. when markets break down, or when markets are not complete, agents are not rational, etc., was far down the agenda of important questions, partly because those in control of the journals, those who largely dictated the direction of research, did not think those questions were very important (some don't even believe that policy can help the economy, so why put effort into studying it?).
I think that the current crisis has dealt a bigger blow to macroeconomic theory and modeling than many of us realize.
When asked if what Buiter says is true, Brad DeLong says:
Brad DeLong: Yes, it is true. That is all.
Well, actually, that is not all. Buiter is a little bit too mean to us "new Keynesians", who were trying to solve the problem of why it is that markets seem to work very well as social planning, incentivizing, and coordination mechanisms across a range of activities and yet appear to do relatively badly in the things we put under the label of "business cycle." I, at least, always regarded Shiller, Akerlof, and Stiglitz as being fellow "New Keynesians." As Larry Summers put it to a bunch of us graduate students around the end of 1983, Milton Friedman's prediction in 1966 that the post-WWII economic policy order would break down in inflation had come true and that that had given the Chicago School an enormous boost, but that now they had gone too far and were vulnerable, and that our collective intellectual task if we wanted to add to knowledge, do good for the world, and have productive and prominent academic careers was to "math up the General Theory": to take the conclusions reached by John Maynard Keynes in his General Theory of Employment, Interest and Money, and explain how or demonstrate to what degree they survived the genuine insights into expectations formation and asset pricing that the Chicago School had produced. In fact, Buiter's column I read as a commentary on General Theory chapter 12: "The State of Long-Term Expectation." Collectively, I think we made a compelling intellectual case--but we were completely ignored and dismissed by Chicago.
But, yes, on the big things Buiter is right.
Update: More on this topic from Justin Wolfers.
☆☆
Mainstream macroeconomics in a state of ‘intellectual regress’ – Bill Mitchell – Modern Monetary Theory
http://bilbo.economicoutlook.net/blog/?p=35118
At the heart of economic policy making, particularly central bank forecasting, are so-called Dynamic Stochastic General Equilibrium (DSGE) models of the economy, which are a blight on the world and are the most evolved form of the nonsense that economics students are exposed to in their undergraduate studies. Paul Romer recently published an article on his blog (September 14, 2016) – The Trouble With Macroeconomics – which received a fair amount of attention in the media, given that it represented a rather scathing, and at times, personalised (he ‘names names’) attack on the mainstream of my profession. Paul Romer describes mainstream macroeconomics as being in a state of “intellectual regress” for “three decades” culminating in the latest fad of New Keynesian models where the DSGE framework presents a chimera of authority. His attack on mainstream macroeconomics is worth considering and linking with other evidence that the dominant approach in macroeconomics is essentially a fraud.
Romer begins by quoting “a leading macroeconomist” (he doesn’t name him but it is Jesús Fernández-Villaverde from UPenn, who I would hardly label “leading”) as saying that he was “less than totally convinced of the importance of money outside the case of large inflations” (meaning that monetary policy cannot have real effects – cause fluctuations in real GDP etc).
This monetary neutrality is a core presumption of the mainstream approach.
Romer then uses the Volcker hike in real interest rates in the US in the late 1970s, which preceded two sharp recessions in a row and drove unemployment up and real GDP growth down, as a vehicle for highlighting the absurdity of this mainstream approach to macroeconomics.
The Fernández-Villaverde quote, as an aside, came from this awful paper from 2009 – The Econometrics of DSGE Models – where the author claims the developments in macroeconomics “between the early 1970s and the late 1990s” represented a “quantum leap” equivalent to “jumping from the Wright brothers to an Airbus 380 in one generation”.
Well at least the Airbus 380 (and the Wright brothers) actually flew and were designed to work in reality! But then Fernández-Villaverde is typical of the mainstream economists, not short on self-promotion and hubris.
Romer proceeds to eviscerate the elements in what Fernández-Villaverde thinks represents the ‘quantum leap’.
He starts with “real business cycle (RBC)” theory, which claims “that fluctuations in macroeconomic aggregates are caused by imaginary shocks, instead of actions that people take”.
In RBC a collapse in nominal spending has no impact. Only real shocks (these “imaginary” shocks) which are driven by random technology changes and productivity shifts apparently drive market responses and economic cycles.
A recession is characterised as an ‘optimal’ response to a negative productivity shock – where workers, knowing that they are less productive now than they will be in the future, will offer less work (hence choose leisure over labour) and become unemployed, in the knowledge that they can make up the income later when they will be more productive.
While Fernández-Villaverde thinks the RBC approach was a “particular keystone” in the “quantum leap”, any reasonable interpretation of the proposition that recessions are optimal outcomes of individual’s choosing to be unemployed would label RBC absurd.
Romer is highly critical of this approach which he calls “post-real”.
He then provides a detailed critique of DSGE models which he rightfully considers to be the next part of the RBC “menagerie” – the “sticky-price lipstick on this RBC pig”.
I won’t go into all the nuts and bolts (being beyond the interest and probably scope of my readership).
I wrote, at some length about the New Keynesian/DSGE approach in this blog (2009) – Mainstream macroeconomic fads – just a waste of time.
The New Keynesian approach has provided the basis for a new consensus emerging among orthodox macroeconomists. It attempted to merge the so-called Keynesian elements of money, imperfect competition and rigid prices with the so-called Real Business Cycle theory elements of rational expectations, market clearing and optimisation across time, all within a stochastic dynamic model.
That might sound daunting to readers who haven’t suffered years of the propaganda that goes for economics ‘education’ these days but let me assure you all the fancy terminology (like ‘rational expectations, stochastic dynamics, intertemporal optimisation’ and the rest of it) cannot hide the fact that these theories and attempts at application to real world data are a total waste of time.
Yes, they give economists a job. A job is a good thing. But these brains would be far better applied to helping improve the lot of real people in need rather than filling up academic journals with tripe.
New Keynesian theory is actually very easy to understand despite the deliberate complexity of the mathematical techniques that are typically employed by its practitioners.
In other words, like most of the advanced macroeconomics theory it looks to be complex and that perception serves the ideological agenda – to avoid scrutiny but appear authoritative.
Graduate students who undertake advanced macroeconomics become imbued with their own self-importance as they churn out what they think is deep theory that only the cognoscenti can embrace – the rest – stupid (doing Arts or Law or something). If only they knew they were reciting garbage and had, in fact, very little to say (in a meaningful sense) about the topics over which they claim intellectual superiority.
The professors who taught them are worse, if that is possible.
In these blogs – GIGO and OECD – GIGO Part 2, I discussed how Garbage In Garbage Out is that process or mechanism that leads us to be beguiled by what amounts to nothing.
The framework is so deeply flawed and bears no relation at the macroeconomic level to the operational realities of modern monetary economies.
In our 2008 book Full Employment Abandoned: Shifting Sands and Policy Failures, we have a section on the so-called new Keynesian (NK) models and we argue that they are the latest denial of involuntary unemployment, in a long list of efforts that the mainstream has offered over the years.
DSGE/NK models claim virtue (‘scientific rigour’) because they are, in their own terms, ‘microfounded’. What the hell does that mean?
Not much when it comes down to it.
Essentially, as a critique of Keynesian-style macroeconomics which was built by analysing movements in aggregates (output, employment etc), the microeconomists believed that their approach to economics was valid and that macroeconomics was just an aggregate version of what the micro theory believed explained individual choice.
So just as micro theory imposed a particular psychology in individual choice (for example, about decisions to work or consume) – which considered individuals (consumers, firms) used optimising calculations based on ‘rational expectations’ (essentially – perfect foresight with random errors) to make decisions now about behaviour until they died, macroeconomic theory should embrace this approach.
I considered some of these assumptions about behaviour in this blog – The myth of rational expectations.
The Rational Expectations (RATEX) literature which evolved in the late 1970s claimed that government policy attempts to stimulate aggregate demand would be ineffective in real terms but highly inflationary.
People (you and me) anticipate everything the central bank or the fiscal authority is going to do and render it neutral in real terms (that is, policy changes do not have any real effects). But expansionary attempts will lead to accelerating inflation because agents predict this as an outcome of the policy and build it into their own contracts.
In other words, they cannot increase real output with monetary stimulus but always cause inflation. Please read my blog – Central bank independence – another faux agenda – for more discussion on this point.
The RATEX theory argued that it was only reasonable to assume that people act rationally and use all the information available to them.
What information do they possess? Well RATEX theory claims that individuals (you and me) essentially know the true economic model that is driving economic outcomes and make accurate predictions of these outcomes with white noise (random) errors only. The expected value of the errors is zero so on average the prediction is accurate.
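In symbols (our notation, not Mitchell's), the RATEX claim is:

```latex
% Rational expectations: agents' forecasts equal the mathematical
% expectation implied by the true model, so forecast errors are white noise:
x_t \;=\; \mathbb{E}\!\left[x_t \mid \Omega_{t-1}\right] + \varepsilon_t ,
\qquad
\mathbb{E}\!\left[\varepsilon_t \mid \Omega_{t-1}\right] = 0 ,
% where \Omega_{t-1} is the information set, assumed to include the true
% structure of the economy itself.
```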
Everyone is assumed to act in this way and have this capacity. So ‘pre-announced’ policy expansions or contractions will have no effect on the real economy.
For example, if the government announces it will be expanding its fiscal deficit and adding new high powered money, we will all assume immediately that it will be inflationary and will not alter our real demands or supply (so real outcomes remain fixed). Our response will be to simply increase the value of all nominal contracts and thus generate the inflation we predict via our expectations.
The problem is that all this theory has to be put together in a way that can be mathematically solvable (“tractable”). That places particular restrictions on how rich the analysis can be.
It is impossible to achieve a solution based on the alleged ‘micro foundations’ (meaning individual level) so the first dodge we encounter is the introduction of the ‘representative agent’ – a single consumer and single firm that represents the maximising behaviour of all the heterogeneous ‘agents’ in the economy.
Famous (pre-DSGE) macroeconomist Robert Solow gave evidence to the US Congress Committee on Science, Space and Technology – in its sub-committee hearing on Investigations and Oversight – Science of Economics – on July 20, 2010. The evidence is available HERE.
Here is an excerpt relevant to the topic:
Under pressure from skeptics and from the need to deal with actual data, DSGE modellers have worked hard to allow for various market frictions and imperfections like rigid prices and wages, asymmetries of information, time lags, and so on. This is all to the good. But the basic story always treats the whole economy as if it were like a person, trying consciously and rationally to do the best it can on behalf of the representative agent, given its circumstances. This can not be an adequate description of a national economy, which is pretty conspicuously not pursuing a consistent goal. A thoughtful person, faced with the thought that economic policy was being pursued on this basis, might reasonably wonder what planet he or she is on.
So the micro foundations manifest, in the models, as homogenous (aggregated) maximising entities, with the assumption that everybody behaves in the same way when confronted with market information.
This ‘one’ (mythical) consumer is rational and has perfect foresight so that it (as a representation of all of us) seeks to maximise ‘life-time’ income.
Consumption spending in the real world accounts for about 60 per cent of total spending. So the way in which these DSGE models represent consumption will have important implications for their capacity to mimic real world dynamics.
This mythical consumer is assumed to know the opportunities available for income now and for every period into the future and solves what is called an ‘intertemporal maximisation’ problem – which just means they maximise in every period from now until infinity.
So the consumer maximises the present discounted value of consumption over their lifetime given the present value of the resources (wealth) that the person knows he/she will generate.
If you don’t know what that means then you are either not a real person (given that the mainstream consider this is the way we behave) or you are just proving what I am about to say next and should feel good about that.
A basic Post Keynesian presumption, which Modern Monetary Theory (MMT) proponents share, and which is central to Keynes’ own analysis in the 1936 General Theory, is that the future is unknowable and so, at best, we make guesses about it, based on habit, custom, gut-feeling etc.
So this would appear to make any approach that depends on being able to optimise at each point in time problematic to say the least.
Enter some more abstract mathematics in the form of the famous Euler equation for consumption, which provides a way in which the mythical consumer links decisions now with decisions to consume later and achieve maximum utility in each period.
This equation is at the centrepiece of DSGE/NK models.
It simply says that the representative consumer has to be indifferent (doesn’t care) between consuming one more unit today or saving that extra unit of consumption and consuming the compounded value of it in the future (in all periods this must hold).
It is written as:
Marginal Utility from consumption today MUST EQUAL A(1 + R)*Marginal Utility from consumption in the future.
Marginal utility relates to the satisfaction derived from the last unit of consumption. * is a multiply sign. If we save today we can consume (1 + R) units in the future, where R is the compound interest rate.
The weighting parameter A just refers to the valuation that the consumer places on the future relative to today.
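Written compactly in the post's own symbols (with u'(.) for marginal utility), the Euler equation and its familiar constant-relative-risk-aversion (CRRA) special case are:

```latex
% Consumption Euler equation, using the post's A (weight on the future)
% and R (compound interest rate):
u'(c_t) \;=\; A\,(1+R)\,u'(c_{t+1}) .
% With the common CRRA utility u(c) = c^{1-\sigma}/(1-\sigma) this becomes
\frac{c_{t+1}}{c_t} \;=\; \left[A\,(1+R)\right]^{1/\sigma} ,
% so consumption growth is pinned down entirely by A, R and \sigma, in
% every period, for the single representative agent.
```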
The consumer always satisfies the Euler equation for consumption, which means all of us do it individually, if this approach is to reflect the micro foundations of consumption.
Some mathematical gymnastics based upon these assumptions then generates the maximising consumption solution in every period where the representative consumer spends up to the limits of his/her income (both earned and non-labour).
Reflect on your own life first. You will identify many flaws in this conception of your consumption decision making.
1. No consumer is alike in terms of random shocks and uncertainty of income. Some consumers spend every cent of any extra income they receive while others (high-income earners) spend very little of any extra income. Trying to aggregate these differences into a representative agent is impossible.
2. No consumer is alike with respect to access to credit.
3. No consumer really considers what they will be doing at the end of their life in any coherent way. Many, as a result of their low incomes or other circumstances, are forced to live day by day. There is no conception of permanent lifetime income, which is central to DSGE models.
4. Even the concept of the representative agent is incompatible with what we observe in macroeconomic data.
For example, a central notion within the DSGE models is that of Ricardian Equivalence.
This notion claims that individuals (consumers and firms) anticipate policy changes now in terms of their implications for their future incomes and adjust their current behaviour accordingly, which has the effect of rendering the policy change ineffective.
Specifically, say a government announces it will cut its net spending deficit (austerity), the representative agent is claimed to believe that the government will decrease taxes in the future, which increases the present value of permanent income and, as a result, the agent will spend more.
As a result, it is alleged that austerity is not a growth killer because total spending just shifts from public to private.
The reality is quite different and opposite to what the Euler equation predicts. Consumers actually cut back spending in periods of austerity because unemployment increases. Firms cut back on investment spending because sales flag.
5. The research evidence (from behavioural economics) refutes the rationality and forward-looking assumptions built in to DSGE models.
We are not good at solving complex intertemporal financial maximisation mathematical models.
S&P conducted a Global Financial Literacy Survey in 2015 (published November 20, 2015) and found that “57% Of Adults In U.S. Are Financially Literate” (Source).
The survey found that “just one-third of the world’s population is financially literate.”
The questions probed knowledge of interest compounding, risk diversification, and inflation.
You can see the test questions – HERE. I passed, but then I have a PhD in economics.
The knowledge required to pass this literacy test is far less than would be required to compute the intertemporal maximisation solution constrained by the Euler equation, even if there was perfect knowledge of the future.
We tend to satisfice (near enough is good enough) rather than maximise.
We are manipulated by supply-side advertising which distorts any notion that we make rational choices – we binge, impulse buy etc.
In this blog (June 18, 2014) – Why DSGEs crash during crises – the founders of so-called General-to-Specific time series econometric modelling (David Hendry and Grayham Mizon) wrote that we do look into the unknowable future to make guesses when deciding what to do now.
We typically assume that there will be “no unanticipated future changes in the environment pertinent to the decision”. So we assume the future is stationary.
For example, when I ride my motorcycle down the road I assume (based on habit, custom, past experience) that people will stop at the red light, which allows me to accelerate (into the future) through the next green light.
But Hendry and Mizon correctly note that:
The world, however, is far from completely stationary. Unanticipated events occur … In particular, ‘extrinsic unpredictability’ – unpredicted shifts of the distributions of economic variables at unanticipated times – is common. As we shall illustrate, extrinsic unpredictability has dramatic consequences for the standard macroeconomic forecasting models used by governments around the world – models known as ‘dynamic stochastic general equilibrium’ models – or DSGE models …
… the mathematical basis of a DSGE model fails when distributions shift … General equilibrium theories rely heavily on ceteris paribus assumptions – especially the assumption that equilibria do not shift unexpectedly.
So at some point, an unexpected rogue red-light runner will bring my green light equilibrium into question – how many times have I had to swerve without warning to avoid an accident!
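Hendry and Mizon’s point can be illustrated numerically. Here is a toy sketch (my construction, not their example): an AR(1) forecasting model is estimated on a ‘stationary’ sample and then used to forecast through an unanticipated shift in the mean of the process. All parameter values are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulate an AR(1) process whose mean shifts at t = 150
# ('extrinsic unpredictability': a location shift the model never anticipates).
T, break_t, rho, sigma = 300, 150, 0.8, 1.0
mu = np.where(np.arange(T) < break_t, 0.0, 5.0)  # unanticipated mean shift
y = np.zeros(T)
for t in range(1, T):
    y[t] = mu[t] + rho * (y[t - 1] - mu[t - 1]) + sigma * rng.normal()

# Estimate the AR(1) on pre-break data only, assuming stationarity throughout.
Y, X = y[1:break_t], y[:break_t - 1]
X1 = np.column_stack([np.ones_like(X), X])
intercept, slope = np.linalg.lstsq(X1, Y, rcond=None)[0]

# One-step-ahead forecasts over the whole sample from the pre-break model.
err = y[1:] - (intercept + slope * y[:-1])
print("RMSE before the break:", np.sqrt(np.mean(err[:break_t - 1] ** 2)))
print("RMSE after the break: ", np.sqrt(np.mean(err[break_t - 1:] ** 2)))
```

The estimated model is a perfectly good description of the pre-break data, yet its forecasts become systematically biased the moment the distribution shifts – exactly the failure mode Hendry and Mizon describe.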
Hendry and Mizon’s point is illustrated well by the Bank of England (Staff Working Paper No. 538) analysis presented by Nicholas Fawcett, Riccardo Masolo, Lena Koerber, and Matt Waldron (July 31, 2015) – Evaluating UK point and density forecasts from an estimated DSGE model: the role of off-model information over the financial crisis .
They show that “all of the forecasts fared badly during the crisis”. These forecasts were derived from the Bank’s COMPASS model (Central Organising Model for Projection Analysis and Scenario Simulation), which is a “standard New Keynesian Dynamic Stochastic General Equilibrium (DSGE) model”.
They wrote that:
None of the models we evaluated coped well during the financial crisis. This underscores the role that large structural breaks can have in contributing to forecast failure, even if they turn out to be temporary …
Criticism of the so-called ‘micro-founded’ DSGE approach is not new.
Robert Solow (cited above) also said that the DSGE fraternity “has nothing useful to say about anti-recession policy because it has built into its essentially implausible assumptions the “conclusion” that there is nothing for macroeconomic policy to do”.
Even mainstreamers like Willem Buiter described DSGE modelling as “The unfortunate uselessness of most ‘state of the art’ academic monetary economics”. He noted that:
Most mainstream macroeconomic theoretical innovations since the 1970s (the New Classical rational expectations revolution … and the New Keynesian theorizing …) have turned out to be self-referential, inward-looking distractions at best. Research tended to be motivated by the internal logic, intellectual sunk capital and esthetic puzzles of established research programmes rather than by a powerful desire to understand how the economy works – let alone how the economy works during times of stress and financial instability. So the economics profession was caught unprepared when the crisis struck … the Dynamic Stochastic General Equilibrium approach which for a while was the staple of central banks’ internal modelling … excludes everything relevant to the pursuit of financial stability.
The other problem is that these DSGE models are essentially fraudulent.
In my 2008 book cited above we considered the standard DSGE approach in detail.
The alleged advantage of the New Keynesian approach (which incorporates DSGE modelling) is the integration of real business cycle theory elements (intertemporal optimisation, rational expectations, and market clearing) into a stochastic dynamic macroeconomic model. The problem is that the abstract theory does not relate to the empirical world.
To then get some traction (as Solow noted) with data, the ‘theoretical rigour’ is supplanted by a series of ad hoc additions which effectively undermine the claim to theoretical rigour.
You cannot have it both ways. These economists first try to garner credibility by appealing to the theoretical rigour of their models.
But then, confronted with the fact that these models have nothing to say about the real world, the same economists compromise that rigour to introduce structures (and variables) that can relate to the real world data.
But they never let on that in making this compromise the claimed authority is lost, and in any case that authority only ever existed on the terms that this lot themselves set.
No reasonable assessment would associate intellectual authority (knowledge generation) with the theoretical rigour that we see in these models.
This is the fundamental weakness of the New Keynesian approach. The mathematical solution of the dynamic stochastic models as required by the rational expectations approach forces a highly simplified specification in terms of the underlying behavioural assumptions deployed.
Further, the empirical credibility of the abstract DSGE models is highly questionable. There is a substantial literature pointing out that the models do not stack up against the data.
Clearly, the claimed theoretical robustness of the DSGE models has to give way to empirical fixes, which leave the econometric equations indistinguishable from other competing theoretical approaches where inertia is considered important. And then the initial authority of the rigour is gone anyway.
This general ad hoc approach to empirical anomaly cripples the DSGE models and strains their credibility. When confronted with increasing empirical failures, proponents of DSGE models have implemented these ad hoc amendments to the specifications to make them more realistic.
I could provide countless examples, which include studies of habit formation in consumption behaviour, and contrived variations to investment behaviour such as time-to-build, capital adjustment costs or credit rationing.
But the worst examples are those that attempt to explain unemployment. Various authors introduce labour market dynamics and pay specific attention to the wage setting process.
One should not be seduced by DSGE models that include real world concessions such as labour market frictions and wage rigidities in their analysis. Their focus is predominantly on the determinants of inflation with unemployment hardly being discussed.
Of course, the point that the DSGE authors appear unable to grasp is that these ad hoc additions, which aim to fill the gaping empirical cracks in their models, also compromise the underlying rigour provided by the assumptions of intertemporal optimisation and rational expectations.
Paul Romer draws a parallel between ‘string theory’, which claimed to provide a unified theory of particle physics (yet failed dramatically) and post-real macroeconomics.
He cites a particle physicist who listed “seven distinctive characteristics of string theorists”:
1. Tremendous self-confidence
2. An unusually monolithic community
3. A sense of identification with the group akin to identification with a religious faith or political platform
4. A strong sense of the boundary between the group and other experts
5. A disregard for and disinterest in ideas, opinions, and work of experts who are not part of the group
6. A tendency to interpret evidence optimistically, to believe exaggerated or incomplete statements of results, and to disregard the possibility that the theory might be wrong
7. A lack of appreciation for the extent to which a research program ought to involve risk
Readers of my current book – Eurozone Dystopia: Groupthink and Denial on a Grand Scale – will identify those 7 features as definitive of a state of Groupthink, where mob rule within a group usurps reasonable practice and attention to facts.
The Them v Us mode is driven by arrogance and denial (Fernández-Villaverde epitomises the capacity to hype up nothing of substance).
Romer believes that:
… the parallel is that developments in both string theory and post-real macroeconomics illustrate a general failure mode of a scientific field that relies on mathematical theory … In physics as in macroeconomics, the disregard for facts has to be understood as a choice.
A choice to avoid the facts, which are contradictory to one’s pet theory, is equivalent to fraud!
Conclusion
There are New Keynesians who still strut around the literature and the Internet claiming to be relevant. Some are even advising political parties (for example, the British Labour Party).
The problem is that these theories cannot provide insights of any value about the world we live in for the reasons that Paul Romer discusses and other critics have offered.
The DSGE brigade are really captured in their own Groupthink and appear incapable of seeing beyond the fraud.
That is enough for today!
(c) Copyright 2017 William Mitchell. All Rights Reserved.
☆☆☆
Mainstream macroeconomic fads – just a waste of time
The mainstream economics profession is not saying much during the crisis apart from some notable interventions from conservatives and a few not-so-conservative economists. In general, what can they say? Not much at all. The frameworks they use to reason with are deeply flawed and bear no relation at the macroeconomic level to the operational realities of modern monetary economies. Even the debt-deleveraging proponents (progressives) use stylised models which so negate stock-flow consistency that their ability to capture sensible policy options is limited. This blog discusses New Keynesian theory, which is a current fad among mainstream economists and which has been defended strongly by one of its adherents in a recent attack on Paul Krugman. The blog is a bit pointy.
In our recent book Full employment abandoned, we have a section on the so-called new Keynesian (NK) models and we argue that they are the latest denial of involuntary unemployment, in a long list of efforts that the mainstream has offered over the years. Each effort fails but they never give up.
I was reminded of New Keynesian economics this week when I read the vituperative reply to Paul Krugman’s New York Times article How Did Economists Get It So Wrong? The attack on Krugman came from prominent Chicago University New Keynesian John Cochrane. Krugman’s article came out on September 2 while the reply was dated September 12, 2009.
I may write a specific blog about this dispute but many readers have asked me to comment on where New Keynesian models might fit into modern monetary theory of macroeconomics. So I thought I might briefly provide some ideas on that theme in this blog.
The short answer to the question is: Nowhere. New Keynesian models are irrelevant to anything useful.
But I am sure you will want some substance to back up that conclusion. So here it is.
Cochrane opens with this salvo. He says of Krugman’s piece that:
Most of all, it’s sad. Imagine this weren’t economics for a moment. Imagine this were a respected scientist turned popular writer, who says, most basically, that everything everyone has done in his field since the mid 1960s is a complete waste of time. Everything that fills its academic journals, is taught in its PhD programs, presented at its conferences, summarized in its graduate textbooks, and rewarded with the accolades a profession can bestow, including multiple Nobel prizes, is totally wrong. Instead, he calls for a return to the eternal verities of a rather convoluted book written in the 1930s, as taught to our author in his undergraduate introductory courses. If a scientist, he might be a global-warming skeptic, an AIDS-HIV disbeliever, a stalwart that maybe continents don’t move after all, or that smoking isn’t that bad for you really.
If you think about it, modern monetary theory is essentially making the same statements about mainstream macroeconomics as Cochrane constructs Krugman as saying. Most of the macroeconomics written about, taught in universities, and thought about is inapplicable to a fiat monetary system. It is a concoction of the classical theory of value and prices, late C19th marginal theory and monetary theory, and more recent add-ons (Cochrane calls them “frictions” – to the free market models of the classics). Gold Standard reasoning (or the convertibility that followed) is deeply embedded in the framework.
It assumes a government budget constraint works as an ex ante financial constraint on governments, analogous to the textbook microeconomic consumer who faces a spending constraint dictated by known revenue and/or capacity to borrow. It then imposes on this fallacious construction a range of assumptions (read: assertions – assumptions can usually be empirically refuted) about the behaviour of individuals in the system and generally concludes that, even with frictions slowing up market adjustments, free market-like outcomes will prevail unless government distortions are imposed.
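For concreteness, the ‘government budget constraint’ being invoked is the standard textbook accounting statement below (my rendering in generic notation); the dispute is over treating this ex post accounting record as an ex ante financing constraint:

```latex
% Textbook 'government budget constraint' in period t (generic notation):
% G_t = government spending, T_t = tax revenue, i = interest rate on debt,
% B_t = stock of bonds outstanding, M_t = stock of base money.
G_t + i\,B_{t-1} = T_t + (B_t - B_{t-1}) + (M_t - M_{t-1})
```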
That body of analysis and teaching is a disgrace and I would hardly judge the veracity of an idea by the fact that the proponent holds a Nobel Prize in Economics. The awarding institution is hardly an unbiased arbiter of truth and reason.
As an aside, if you go to the Nobel Prize home page you will see the following headings – Nobel Prize in Physics, Nobel Prize in Chemistry, Nobel Prize in Medicine, Nobel Prize in Literature, Nobel Peace Prize, Prize in Economics.
Did you spot the missing attribution in the case of Economics? To understand why you have to go back to the original arrangements that were made in the will of Alfred Nobel. Here is what the organisation tells us:
In his last will and testament, Alfred Nobel specifically designated the institutions responsible for the prizes he wished to be established: The Royal Swedish Academy of Sciences for the Nobel Prize in Physics and Chemistry, Karolinska Institute for the Nobel Prize in Physiology or Medicine, the Swedish Academy for the Nobel Prize in Literature, and a Committee of five persons to be elected by the Norwegian Parliament (Storting) for the Nobel Peace Prize. In 1968, the Sveriges Riksbank established the Sveriges Riksbank Prize in Economics in Memory of Alfred Nobel. The Royal Swedish Academy of Sciences was given the task to select the Economics Prize Laureates starting in 1969.
So the economics award was not even part of the original deal but came from the Swedish central bank who must have been upset that my profession was considered less worthy. The fact is that the economics profession is not worthy of this sort of accolade given the state of its theorising and worth to humanity. Mainstream economists hinder human potential and human progress although they allow some individuals to become extremely wealthy at the expense of millions of others.
Anyway, Cochrane’s last sentence above is classic mis-association. The mainstream has never come to terms with Keynes’ General Theory (that is the “rather convoluted book” he is referring to above). While I do not think much of the General Theory, it does successfully expose the irreconcilable flaws in the existing macroeconomic theory of the day (1930s). I should add that the elements of this discredited theory now form the mainstream core of economic reasoning again.
But then to associate that with global-warming denial, AIDS denial, etc. is poor logic and is suggestive of worse to come. And it surely comes.
Cochrane then continues:
Most of the article is just a calumnious personal attack on an ever-growing enemies list, which now includes “new Keynesians” such as Olivier Blanchard and Greg Mankiw. Rather than source professional writing, he plays gotcha with out-of-context second-hand quotes from media interviews. He makes stuff up, boldly putting words in people’s mouths that run contrary to their written opinions. Even this isn’t enough: he adds cartoons to try to make his “enemies” look silly, and puts them in false and embarrassing situations. He accuses us literally of adopting ideas for pay, selling out for “sabbaticals at the Hoover institution” and fat “Wall street paychecks.” It sounds a bit paranoid.
Okay, without going word-for-word against Cochrane I thought I would show you, by examining the “source professional writing”, why New Keynesian models are not worth considering. Some of what follows is at the pointy end of my blogs. Some of it is taken from my recent book with Joan Muysken noted in the introduction.
I have already written several blogs showing the association between the 1994 OECD policy agenda and the abstract and flawed NAIRU models developed by various economists in the later 1980s and beyond. You will find a host of links HERE
The OECD Jobs Study articulated a microeconomic reform agenda aimed at the supply-side of the labour market, based on the false presumption that unemployment was an attribute of individual failure (poor attitudes or skills, and policy distortions) rather than a systemic failure (not enough jobs due to deficient aggregate demand).
The accompanying macroeconomic policies that emerged in the 1990s (after the monumental failure of the Milton Friedman-inspired Monetarist “monetary targeting” experiment in the 1980s) took the form of inflation targeting, which has seen monetary authorities narrowly focusing on inflation and largely ignoring the consequences of this obsession for the real economy.
In taking such a narrow view of macroeconomic policy, governments have eschewed the use of fiscal policy as the best weapon for reducing unemployment. They have increasingly advocated the virtues of budget surpluses, even though in some cases cyclical events have proven their views to be wrong.
The NAIRU paradigm that was laid out in the late 1980s has dominated this area of the economic literature and provided policy makers with the authority to pursue supply-side activism. However, the mounting empirical anomalies and theoretical critiques have seriously dented its image of respectability within the mainstream profession, particularly in the USA.
However, the orthodox economics paradigm has shown considerable flexibility when confronted with empirical anomaly, somewhat like the Lernean Hydra.
You might find David Gordon’s 1972 book Theories of poverty and underemployment; orthodox, radical, and dual labor market perspectives (Lexington Books) interesting – he traces how orthodoxy keeps reinventing itself when confronted with an anomaly that exposes the theoretical structure to rejection. It is one of my favourite books from my student and postgrad days.
In this context, while the NAIRU paradigm has struggled to survive the policy failures that it had motivated (persistent unemployment and rising poverty), a new theoretical edifice, the NK approach, has emerged.
The NK approach has provided solace to an orthodoxy that continues to deny the existence of involuntary unemployment and instead wishes to reassert the flawed prognostications embedded in the Quantity Theory and Say’s Law.
The NK approach is the most recent orthodox effort to attempt reconciliation between macroeconomic theory and what is alleged to be microeconomic rigour (markets clear to give optimal outcomes based on decentralised decisions by rational and maximising individuals).
I note that in general, the literature that has aimed to develop “microeconomic underpinnings” of extant macroeconomic theory, typically aims to hijack any non-orthodox macroeconomic ideas back into the orthodox market-clearing, long-run neutral framework. The money neutrality framework asserts that in the long-run fiscal policy has no real benefits but causes inflation. It is a highly flawed framework.
The NK theory is a quintessential expression of this tradition. Importantly, from a policy perspective, the NK approach is also the most recent theoretical structure to be co-opted by orthodox policy makers to justify inflation targeting.
Despite the fact the NK approach is fast becoming an industry in academic and policy making circles it has received very little critical scrutiny in the literature.
Fellow Post Keynesian and modern money sympathiser, Marc Lavoie’s 2006 article (‘A post-Keynesian amendment to the new consensus on monetary policy’, Metroeconomica, 57(2), 165-192) is an exception.
Anyway, in the spirit of Heracles and Iolaus, it is necessary to expose some of the glaring anomalies that you will find in the NK models.
There are three major conclusions to be drawn from this literature:
- The so-called microfoundations of New Keynesian models are not as robust as the various authors would like to claim;
- The so-called Keynesian content of the models should be taken with a grain of salt;
- The rationale these models provide to justify their claim that tight inflation control leads to minimal labour market disruption is highly contestable. The only reasonable conclusion is that the approach has no credibility in dealing with the issue of unemployment and cannot reasonably be used to justify aggregate policy settings.
But to understand these conclusions you need to examine the approach in more detail.
New Keynesian models
The NK approach has provided the basis for a new consensus emerging among orthodox macroeconomists. A typical representation of this approach is found in Carlin and Soskice’s 2006 book Macroeconomics: imperfections, institutions and policies, published by Oxford University Press, where you read in their preface that:
Consensus in macroeconomics has often been elusive but the common ground is much wider now than has been the case in previous decades … There is broad agreement that a fully satisfactory macroeconomic model should be based on optimizing behaviour by micro agents, that individual behaviour should satisfy rational expectations and that the model should allow for wage and price rigidities … The three equations … [summarising the model] … are derived from explicit optimizing behaviour on the part of the monetary authority, price setters, and households in imperfect product and labour markets and in the presence of some nominal rigidities.
NK theory thus attempts to merge the so-called Keynesian elements of money, imperfect competition and rigid prices with the real business cycle theory elements of rational expectations, market clearing and optimisation across time, all within a stochastic dynamic model.
Simplifying NK theory is easy despite its deliberate complexity. I am reminded of a beautiful passage by the American economist (and Marxist) Paul Sweezy, who in 1972 wrote an article for the Monthly Review Press entitled Towards a Critique of Economics.
He argued that orthodox (mainstream) economics in recent times had:
“… remained within the same fundamental limits” of the C19th century free market economists. He said they had “therefore tended … to yield diminishing returns. It has concerned itself with smaller and decreasingly significant questions … To compensate for this trivialisation of content, it has paid increasing attention to elaborating and refining its techniques. The consequence is that today we often find a truly stupefying gap between the questions posed and the techniques employed to answer them.”
He then cites a wonderful example of mainstream written reasoning of which the modern NK economists would be proud. It is taken from Debreu’s 1966 mimeo on Preference Functions. Here it is for some light relief:
Given a set of economic agents and a set of coalitions, a non-empty family of subsets of the first set closed under the formation of countable unions and complements, an allocation is a countably additive function from the set of coalitions to the closed positive orthant of the commodity space. To describe preferences in this context, one can either introduce a positive, finite real measure defined on the set of coalitions and specify, for each agent, a relation of preference-or-indifference on the closed positive orthant of the commodity space, or specify, for each coalition, a relation of preference-or-indifference on the set of allocations. This article studies the extent to which these two approaches are equivalent.
In my so-called economics “education” I have read countless articles like this one – saying nothing about anything that will be of any benefit to humanity. I like playing chess. I always thought of my so-called education in economics, with all the mathematics that came with it, as playing chess, although a boring version of the game.
Anyway, NK has all this sort of complexity and obtuseness and more.
But we can simplify it to three basic equations that purport to capture the essential nature of the economic system.
First, the New Keynesian IS equation. This is the relationship that brings investment (I) and saving (S) together to ensure there is full capacity utilisation in the long-run (so Say’s Law holds).
What is not always recognised is that in most NK models, the micro foundations of the IS curve allow neither savings nor investment to play any role. This is usually motivated by the fact that in real business cycle models the capital stock is typically ignored, because any flux in investment, and the resulting changes to the stock of productive capital in response to externally imposed “productivity shocks” (which are the way RBC theories claim business cycles occur), actually have little bearing on the dynamics of their models.
As a consequence of this glaring omission, the so-called intertemporal IS relation (the across time conjunction between investment and saving) is derived using intertemporal utility maximising behaviour by consumers, who face a trade-off between consumption and leisure.
The nominal rate of interest then equates the nominal intertemporal marginal rates of substitution in consumption, such that consumption can be smoothed out over an individual’s lifetime.
It is assumed that individuals can always borrow and lend at the prevailing interest rate to implement their life-time consumption plan. Thus, while savings and investment may take place at the individual level, they are assumed to cancel out at the aggregate level because all income is assumed to be consumed. So there are no capital market constraints on anyone.
To be consistent with this approach, bonds are issued for one-period only and the role of money in this approach is only to facilitate transactions. In other words, the NK approach takes us back to the pre-Keynes, Quantity Theory era where money is used only as a means of payment and a unit of account. Classic contributions in this regard come from Buiter (2006 ‘The elusive welfare economics of price stability as a monetary policy objective. Why new Keynesian central bankers should validate core inflation’, ECB Working Paper Series, no 609) and Woodford (2006 ‘How important is money in the conduct of monetary policy?’, Department of Economics Working Papers No 1104, Queens University, Canada).
The reality is that none of the NK models handle money in a way that remotely corresponds to the dynamics and operational realities of a modern monetary economy based around a fiat currency.
Getting pointy, the NK IS relation is derived from an approximation of the Euler condition for intertemporal optimal consumption around a zero-inflation steady state. It is usually presented in terms of deviations from natural levels and implicitly defines the stabilising interest rate – that is, the Wicksellian natural rate of interest – as the rate r* that equates aggregate demand to the natural level of output y*.
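In its common log-linearised form (the standard textbook rendering), the NK IS relation is:

```latex
% Standard log-linearised NK IS relation (textbook form):
% x_t = output gap (y_t - y_t^*), i_t = nominal policy rate, \pi_t = inflation,
% r_t^* = Wicksellian natural rate, \sigma = intertemporal substitution parameter.
x_t = E_t x_{t+1} - \frac{1}{\sigma}\left( i_t - E_t \pi_{t+1} - r_t^* \right)
```

Setting the real rate equal to r* closes the output gap, which is why r* does all the work in this framework.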
So the Austrian influence creeps in here.
A problem is that when there is a permanent demand shift in this model, r* changes. As a consequence, a temporary demand shock has a different impact from a permanent demand shock, since the latter leads to a change in r* and hence has an impact on monetary policy (Marc Lavoie, 2006, makes this point too).
Second, the New Keynesian Phillips curve is important. The NK Phillips curve bears a close resemblance to the Expectations Augmented Phillips curve, the latter being based on the natural rate theory inherited from Friedman and Phelps. So there is no long-run trade-off between inflation and unemployment (and maybe very little trade-off in the short-run), and any attempt by the government to use fiscal policy to exploit a trade-off, should it consider unemployment too high, will only cause inflation.
However, as a result of the NK Phillips curve being derived from so-called optimising behaviour, its coefficients have a specific interpretation. Firms are assumed to employ so-called Calvo price-setting, which has become the standard NK approach.
Accordingly, under monopolistic competition only a fraction of firms set their prices in the current period. The remainder keep their price at the level of the previous period. Optimal consumer and producer behaviour implies that the (log) deviation of marginal costs plus the mark-up on prices from its normal level is proportionally related to the (log) deviation of output from its natural level.
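The resulting NK Phillips curve is usually written as follows (standard textbook form; ζ here is a generic elasticity mapping real marginal cost into the output gap):

```latex
% Standard NK Phillips curve under Calvo price setting:
% \theta = fraction of firms keeping last period's price, \beta = discount factor,
% x_t = output gap, \zeta = elasticity mapping real marginal cost to x_t.
\pi_t = \beta \, E_t \pi_{t+1} + \kappa \, x_t,
\qquad \kappa = \frac{(1-\theta)(1-\beta\theta)}{\theta}\,\zeta
```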
All this means is that there are adjustment lags imposed on the normal natural rate story and which allow short-run trade-offs between inflation and unemployment to occur.
But Calvo price-setting does not allow lagged inflation to influence current inflation, which was basic to the original Friedman conception. Carlin and Soskice (2006: 608) aptly observed that the:
NKPC brings back rational expectations into the inflationary process, but it throws out the baby (the empirical fact of inflation inertia) with the bath water of non-rationality.
As a result of this anomaly, ad hocery enters the fray. NKs quickly recognised that in the applied world of macroeconomics there is usually a lagged dependence between output and inflation that must be taken into account. The primary justification is empirical.
But trying to build this into their model from the first principles they start with is virtually impossible. No NK economist has picked up this challenge; instead they just introduce lagged inflation anyway. So, like most of the mainstream body of theory, they claim virtue based on so-called microeconomic rigour, but when that “rigour” fails to deliver anything remotely consistent with reality they respond to the anomalies that are pointed out with ad hoc (non-rigorous) tack-ons.
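The tack-on typically takes the form of a ‘hybrid’ Phillips curve (as in the Galí and Gertler style specifications), in which lagged inflation simply appears with a freely estimated weight:

```latex
% 'Hybrid' NK Phillips curve with the ad hoc lagged-inflation term:
% \gamma_b and \gamma_f are backward- and forward-looking weights.
\pi_t = \gamma_b \, \pi_{t-1} + \gamma_f \, E_t \pi_{t+1} + \kappa \, x_t,
\qquad \gamma_b + \gamma_f \approx 1
```

Nothing in the Calvo micro-story delivers the lagged term; it is there because the data demand it.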
So at the end of the process there is no rigour at all – using “rigour” in the way they use it, which is, as an aside, not how I would define tight analysis.
Anyway, it follows from the (ad hoc) model that if you stabilise inflation then you automatically stabilise the output gap. The proponents of the NK approach claim virtue from this constructed logic by asserting that this outcome is also efficient from a welfare perspective because their model is underpinned with optimising microfoundations.
But while Blanchard and Gali (2005) (‘Real wage rigidities and the new Keynesian model’, NBER Working Paper Series, no 11806) claimed that this was a “divine coincidence” in the NK model, it only occurs as a result of the absence of imperfections. Blanchard and Gali (2005: 3) then stated that the:
… optimal design of macroeconomic policy depends very much on the interaction between real imperfections and shocks … [and] … Understanding these interactions should be high on macroeconomists’ research agendas.
So you quickly get the pattern of reasoning. When confronted with a problem the NK economists bring out another ad hoc solution as was proposed by Blanchard and Gali (2005) in the form of real wage rigidities, which clearly also eliminates the divine coincidence at the same time.
Third, the New Keynesian monetary rule completes their system.
Without attempting to understand how central banks actually operate, New Keynesians derive their monetary rule (which is just an interest rate setting reaction function) by assuming that the central bank minimises a loss function in which both deviations of inflation from its target value and deviations of output from its natural level play a role, subject to the Phillips curve discussed above.
They also assume that the central bank can control aggregate demand using the interest rate, through the IS relationship. So they have a sort of Taylor rule, whereby the real interest rate is set equal to the natural interest rate plus some function of the inflation gap plus some function of the output gap (output out of sync with potential), as sketched below.
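Written out (a standard Taylor-type specification; the coefficients are illustrative), the rule is:

```latex
% Taylor-type monetary rule in real terms, as described above:
% r_t = real policy rate, r_t^* = natural rate, \pi^* = inflation target.
r_t = r_t^* + \phi_\pi \,(\pi_t - \pi^*) + \phi_y \,(y_t - y_t^*)
% The nominal instrument is then i_t = r_t plus expected inflation.
```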
So the models are always represented in real terms, whereas the central bank can only set the nominal interest rate. To get around that problem they presume that the central bank can observe both the natural output level and the natural rate of interest correctly.
It follows analytically that in the NK models, the central bank will only achieve its target inflation rate when it correctly estimates both the natural output level and the natural rate of interest – a tall order, one would suspect.
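To see the mechanics of the three-equation system, here is a deliberately crude numerical toy (my sketch, not a rational-expectations solution: expectations are replaced with lagged values, and all parameter values are arbitrary). It reproduces the self-stabilising story the NK authors want to tell: after a one-off demand shock, the rule raises the rate, and inflation and the output gap decay back to target.

```python
import numpy as np

# Illustrative parameters only (not calibrated to any economy).
sigma, kappa, beta = 1.0, 0.1, 0.99  # IS slope, Phillips slope, discount factor
phi_pi, phi_x = 1.5, 0.5             # Taylor rule response coefficients
r_star, pi_star = 2.0, 0.0           # natural real rate, inflation target

T = 20
x = np.zeros(T)    # output gap
pi = np.zeros(T)   # inflation
i = np.zeros(T)    # nominal policy rate
shock = np.zeros(T)
shock[1] = 1.0     # one-off demand shock in period 1

for t in range(1, T):
    # Monetary rule: react to last period's inflation and output gap.
    i[t] = r_star + pi[t-1] + phi_pi * (pi[t-1] - pi_star) + phi_x * x[t-1]
    # IS relation, with expectations replaced by lagged values (naive, not RE).
    x[t] = x[t-1] - (1.0 / sigma) * (i[t] - pi[t-1] - r_star) + shock[t]
    # Phillips curve, again with naive expectations.
    pi[t] = beta * pi[t-1] + kappa * x[t]

for t in range(8):
    print(f"t={t:2d}  gap={x[t]:6.3f}  infl={pi[t]:6.3f}  rate={i[t]:6.3f}")
```

With these (arbitrary) coefficients the system spirals back to the steady state – the ‘divine coincidence’ logic in its crudest form. None of this, of course, tells us anything about an actual economy.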
The deficiencies of the New Keynesian approach
The alleged advantage of the NK approach is the integration of real business cycle theory elements (intertemporal optimisation, rational expectations, and market clearing) into a stochastic dynamic macroeconomic model.
But it is obvious that, notwithstanding the air of rigour, the NK results are still always conjunctions of abstract starting assumptions and ad hoc additions made to gain any traction with reality.
This indicates an important weakness of the NK approach. The mathematical solution of the dynamic stochastic models as required by the rational expectations approach forces a highly simplified specification in terms of the underlying behavioural assumptions as we have already indicated in our description of the standard model.
But the ability of these models to say anything about the actual operations of central banks is severely compromised by the highly simplistic behavioural assumptions employed, notwithstanding Friedman’s long-standing appeal to empiricism.
The empirical credibility of the abstract NK models is questionable. This holds, in particular, for the NK Phillips curve and its potential to represent real world inflation dynamics.
In their survey of the literature on inflation dynamics in the US economy, Rudd and Whelan (page 4) observed:
… the data actually provide very little evidence of an important role for rational forward-looking behavior of the sort implied by these models. (Rudd, J. and K. Whelan (2005) ‘Modelling inflation dynamics: a critical review of recent research’, FEDS working papers¸ no 2005-66)
Further, after finding similar results for the Euro area, Paloviita (page 858) concluded that the
… results obtained suggest that NKPC can capture inflation dynamics in the euro area if the rational expectations hypothesis is not imposed and inflation expectations are measured directly – we find evidence that lagged inflation seems to be needed to properly explain the persistence of European inflation. (Paloviita, M. (2006) ‘Inflation dynamics in the euro area and the role of expectations’, Empirical Economics, 31, 847-860)
There are many similar studies that have exposed these sorts of weaknesses in NK models.
Clearly, the claimed theoretical robustness of the NK models has to give way to empirical fixes, which leave the econometric equations indistinguishable from other competing theoretical approaches where inertia is considered important. And then the initial authority of the rigour is gone anyway.
This general ad hoc approach to empirical anomaly cripples the NK models and strains their credibility. When confronted with increasing empirical failures, proponents of NK models have implemented these ad hoc amendments to the specifications to make them more realistic. I could provide countless examples, which include studies of habit formation in consumption behaviour, and contrived variations to investment behaviour such as time-to-build, capital adjustment costs or credit rationing.
But the worst examples are those that attempt to explain unemployment. Various authors introduce labour market dynamics and pay specific attention to the wage setting process. One should not be seduced by NK models that include real world concessions such as labour market frictions and wage rigidities in their analysis. Their focus is predominantly on the determinants of inflation with unemployment hardly being discussed (for example, Blanchard and Gali, 2005).
Of course, the point that the NK authors appear unable to grasp is that these ad hoc additions, which aim to fill the gaping empirical cracks in their models, also compromise the underlying rigour provided by the assumptions of intertemporal optimisation and rational expectations.
The NK approach is another program of theoretical work designed to justify orthodox approaches to macroeconomic policy, in this case the virtues of inflation targeting. In the orthodox tradition, it also denies the existence of involuntary unemployment. However, it categorically fails to integrate its theoretical structure with empirical veracity.
Conclusion
So after appreciating that, you will be better placed to read and dismiss Cochrane’s response to Krugman. His characterisation of what economists have been doing for the last 30 years is as follows:
Macroeconomists have not spent 30 years admiring the eternal verities of Kydland and Prescott’s 1982 paper. Pretty much all we have been doing for 30 years is introducing flaws, frictions and new behaviors, especially new models of attitudes to risk, and comparing the resulting models, quantitatively, to data. The long literature on financial crises and banking which Krugman does not mention has been doing exactly this bidding for the same time.
The literature that has been produced bears no relation to the modern monetary economies that most of us live in and also categorically failed to see the crisis coming.
A friend calls this literature La-La land. Not a bad description.