Lukas Püttmann

"Manias, Panics and Crashes", by Charles Kindleberger

The German newspaper Süddeutsche Zeitung recently profiled a number of German economists. They were asked to name their two favorite books, and “Manias, Panics and Crashes: A History of Financial Crises” by Charles Kindleberger was mentioned twice.

What causes financial crises?

In the book, Kindleberger shows that there’s a pattern common to these events and that financial crises aren’t all that rare if you zoom out enough:

Speculative excess, referred to concisely as a mania, and revulsion from such excess in the form of a crisis, crash, or panic can be shown to be, if not inevitable, at least historically common. (p4)

A common sequence is followed:

What happens, basically, is that some event changes the economic outlook. New opportunities for profits are seized, and overdone, in ways so closely resembling irrationality as to constitute a mania. Once the excessive character of the upswing is realized, the financial system experiences a sort of “distress,” in the course of which the rush to reverse the expansion process may become so precipitous as to resemble panic. In the manic phase, people of wealth or credit switch out or borrow to buy real or illiquid financial assets. In panic, the reverse movement takes place, from real or financial assets to money, or repayment of debt, with a crash in the prices of […] whatever has been the subject of the mania. (p5)


[…] [I]rrationality may exist insofar as economic actors choose the wrong model, fail to take account of a particular and crucial bit of information, or go so far as to suppress information that does not conform to the model implicitly adopted. (p29)

Kindleberger then writes:

The end of a period of rising prices leads to distress if investors or speculators have become used to rising prices and the paper profits implicit in them. (p103)

Causa remota of the crisis is speculation and extended credit; causa proxima is some incident which snaps the confidence of the system, makes people think of the dangers of failure, and leads them to move [from the object of speculation] back into cash. […] Prices fall. Expectations are reversed. […] The credit system itself appears shaky, and the race for liquidity is on. (p107-108)

How to avoid financial crises or deal with them?

Kindleberger identifies a rise in leverage in the economy as the culprit:

Speculative manias gather speed through an expansion of money and credit or perhaps, in some cases, get started because of an initial expansion of money and credit. (p52)

There’s plenty of research showing that credit plays an important role in financial crises. Kaminsky and Reinhart (1999) and Schularick and Taylor (2012) provide cross-country statistical evidence that financial crises are preceded by credit booms. Mian and Sufi (2009) similarly show that the parts of the United States in which the credit supply to less financially healthy (“subprime”) households expanded most strongly also experienced more mortgage defaults during the financial crisis from 2007 onwards. This leads them to write in their book:

As it turns out, we think debt is dangerous. (p12)

And in their research (pdf) with Emil Verner, they argue it’s the supply of credit rather than demand for it that drives these cycles of debt accumulation. López-Salido, Stein and Zakrajšek also show that optimistic credit conditions predict economic downturns.

So maybe the regulator should stop credit bonanzas before they become dangerous. The central bank could “lean against the wind” in good times by sucking liquidity out of credit markets. George Akerlof and Robert Shiller put it like this:

But financial markets must also be targeted [by the central bank]. (“Animal Spirits”, p96)

Kindleberger’s response (which I found the most interesting thought of the book) is that what counts as money isn’t obvious and that it’s therefore also difficult to control the credit supply:

The problem is that “money” is an elusive construct, difficult to pin down and to fix in some desired quantity for the economy. As a historical generalization, it can be said that every time the authorities stabilize or control some quantity of money, either in absolute volume or growing along a predetermined trend line, in moments of euphoria more will be produced. (p57)

My contention is that the process is endless: fix any and the market will create new forms of money in periods of boom to get around the limit and create the necessity to fix a new variable. (p58)

He goes through a range of possibilities of what he calls the

[…] virtually infinite set of possibilities of expanding credit on a fixed money base. (p68)

Instead, he argues, the central bank should step in when the crisis occurs:

If one cannot control expansion of credit in boom, one should at least try to halt contraction of credit in crisis. (p165)

He argues forcefully that the central bank should act as lender of last resort. This means that the central bank expands the money supply in times of crisis and provides liquidity to banks.

In a word, our conclusion is that money supply should be fixed over the long run but be elastic during the short-run crisis. The lender of last resort should exist, but his presence should be doubted. (p12)

Kindleberger is aware of the moral hazard problem: If banks know that they’ll be bailed out, then they might behave recklessly. But he says there’s no alternative (his emphasis):

The dominant argument against the a priori view that panics can be cured by being left alone is that they almost never are left alone. (p143)

He says that it shouldn’t be certain whether banks will be bailed out:

Ambiguity as to whether there will be a lender of last resort, and who it will be, may be optimal in a close-knit society. (p174)

He thinks central banks should decide on an ad-hoc basis:

The rule is that there is no rule. (p176)

Nothing we can do?

I find it discouraging to think that we live in the 21st century but can’t properly control the money or credit supply and have to resort to ambiguity about whether banks will be saved in order to discipline them. I’m all for bending the rules in times of crisis, but isn’t there more we could do to avoid getting there in the first place?

Anat Admati and Martin Hellwig argue that banks should be required to finance more through stocks and less through deposits and bonds:

Whatever else we do, imposing significant restrictions on banks’ borrowing is a simple and highly cost-effective way to reduce risks to the economy without imposing any significant cost on society. (“The Bankers’ New Clothes”, p10)

The benefit of this is that the owners of the bank’s stock would bear losses, which could avert the danger of a bank going bankrupt or the threat of a bank run.

Similarly, Atif Mian says about nominal debt:

The key characteristic of debt, which makes it so destructive at times for the macroeconomy, is the inability of a debt contract to share risk between the borrower and the lender. And in particular when I say “share risk”, it’s really the downside risk that we’re talking about.


We want to move away from a world where debt is the predominant contract. (link)

Doomed if we do, doomed if we don’t

But Hans-Joachim Voth writes:

“The optimal number of financial crises is not zero” (download pdf)

This is based on the evidence by Romain Rancière, Aaron Tornell and Frank Westermann that countries that experience more drastic contractions in credit tend to do better economically in the long run than countries that cripple their financial institutions and hence have stable but inefficient financial systems.

Other researchers, studying a longer time horizon than Rancière et al., document that we have traded lower real volatility for fewer but more harmful crises.

Rancière et al. also write:

We would like to emphasize that the fact that systemic risk can be good for growth does not mean that it is necessarily good for welfare. (“Systemic crises and growth”, p404)

And that is because we don’t like what follows financial crises if they bring emotional scarring: political polarization, a loss of trust and a lasting unwillingness to bear risks.

Liberalized financial markets were probably good for growth. But if they mean severe rare crises, then maybe it wasn’t a good choice and we should return to a world of more boring, safer banking. And we might even want to give up some of our future prosperity for that.


When this relaxed city on the Rhine became West Germany’s ‘temporary’ capital in 1949 it surprised many, including its own residents. When in 1991 a reunited German government decided to move to Berlin, it shocked many, especially its own residents.

A generation later, Bonn is doing just fine, thank you. It has a healthy economy and lively urban vibe.

This is from the Lonely Planet Germany.

An excellent course on machine learning

Following Olli’s recommendation, I took a crack at Andrew Ng’s machine learning course on Coursera. It’s pedagogically well designed and taught, and I highly recommend it. I’d like to share my code for the course, but I think that would be against the spirit of such an online class.

Some observations:

  • It’s an introductory course, so there are no proofs. The focus is on graphical intuition, implementing the algorithms in Octave/Matlab and the practicalities of large-scale machine learning projects.
  • The goal is not to accurately estimate the parameters and the uncertainty around them, but to be precise in predicting the outcome out of sample. (See also here.)
  • The course is refreshingly different from what we normally study and I think best taken not as a substitute, but as a complement to econometrics classes.
  • It uses a different vocabulary:

    Machine learning        Econometrics
    ----------------        ------------
    example                 observation
    (to) learn              (to) estimate
    hypothesis              estimation equation
    feature/input           variable
    output/outcome          dependent variable
    bias                    constant/intercept
    bias (yeah, twice 🤔)   bias
  • Linear regression is introduced through the cost function and its numerical minimization. Ng shows the analytical solution on the side, but adds that it is only practical in “low-dimensional” problems with up to 10,000 or so variables.
  • I liked the introduction to neural networks in week 4 and the explanation of them as stacked logical operators.
  • Insightful discussion of how to use training and cross-validation set errors plotted against the regularization parameter and against the number of observations to identify whether bias or variance is a problem.
  • The video on error analysis made me realize that in my patent project I had spent a very large amount of time thinking about appropriate error metrics, but little time actually inspecting the mis-classified patents.
  • In a presentation I once attended, Tom Sargent said:

    A little linear algebra goes a long way.

    Similarly here: With just a little clustering, for example, we can compress the file size of this image by a factor of six while preserving a lot of the information (exercise 7):

    Figure: Image compression with k-means clustering

    Compressed tomatoes

  • I hadn’t previously thought of dimension reduction as a debugging method: If you get thousands of features from different sources and you’re not sure if some might be constructed similarly, then dimension reduction weeds out the redundant features.
  • Time series data is suspiciously missing from the course.
  • He mentions this folk wisdom:

    […] [I]t’s not who has the best algorithm that wins. It’s who has the most data.

  • And he recommends asking:

    How much work would it be to get 10x as much data as we currently have?

  • Ng stresses again and again that our intuitions about what to optimize often lead us astray. Instead we should plot error curves, do error analysis and look at where the bottlenecks in the machine learning pipeline are.
  • I also liked the discussion of using map-reduce for parallelization at the end of the class. Hal Varian also discussed that here.
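The linear-regression bullet above contrasts numerical minimization of the cost function with the analytical solution. Here is a minimal sketch of that contrast in NumPy; this is my own generic version with made-up data, not the course's Octave code:

```python
import numpy as np

# Made-up data: 100 observations, two regressors plus an intercept column
# (the "bias" in machine-learning vocabulary).
rng = np.random.default_rng(0)
X = np.c_[np.ones(100), rng.normal(size=(100, 2))]
theta_true = np.array([1.0, 2.0, -0.5])
y = X @ theta_true + 0.1 * rng.normal(size=100)

# Numerical route: minimize the squared-error cost by gradient descent.
theta = np.zeros(3)
alpha = 0.1  # learning rate
for _ in range(2000):
    gradient = X.T @ (X @ theta - y) / len(y)
    theta -= alpha * gradient

# Analytical route (the "normal equation"), feasible while the number
# of features stays small.
theta_exact = np.linalg.solve(X.T @ X, X.T @ y)

# Both routes recover essentially the same coefficients.
```

With many thousands of features, solving the normal equation becomes the bottleneck, which is why the course defaults to the numerical route.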
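The image-compression exercise mentioned above boils down to k-means color quantization. A minimal sketch on made-up pixel data (again not the course's Octave code):

```python
import numpy as np

def kmeans(points, k, iters=20, seed=0):
    """Plain k-means: return (centroids, label of each point)."""
    rng = np.random.default_rng(seed)
    centroids = points[rng.choice(len(points), size=k, replace=False)]
    for _ in range(iters):
        # Assign every point to its nearest centroid ...
        dists = np.linalg.norm(points[:, None, :] - centroids[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        # ... then move each centroid to the mean of its points.
        for j in range(k):
            if (labels == j).any():
                centroids[j] = points[labels == j].mean(axis=0)
    return centroids, labels

# A made-up "image": 1,000 RGB pixels with values in [0, 1].
rng = np.random.default_rng(1)
pixels = rng.random((1000, 3))

# Quantize to 16 colors: store a small palette plus a 4-bit index per
# pixel instead of 24 bits per pixel -- roughly the factor-of-six saving.
palette, labels = kmeans(pixels, k=16)
compressed = palette[labels]  # reconstruction, same shape as `pixels`
```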

Collected links

  1. The Economist on homeopathy in Germany:

    IT MAY not be as ancient as acupuncture, but homeopathy is the closest thing Germany has to a native alternative-medicine tradition.

  2. John Oliver’s Last Week Tonight on “Journalism”

  3. XKCD: “When people say “The Climate has changed before,” these are the kinds of changes they’re talking about.”

  4. German statistical office (in German): Promovierende in Deutschland:
    • They held a survey with 20,000 professors and 20,000 PhD students in Germany.
    • There are currently 33,154 professors in Germany who can graduate a PhD student (“Promotionsrecht”).
    • The immense number of PhD students (“Promovierende”) is 196,200, of which 111,400 are enrolled at a university. And 99% of those finishing their PhDs are enrolled at a university. (These numbers are obviously inflated by German peculiarities such as counting medical doctorates as PhDs.)
    • 11% of professors have no PhD students, 50% have 1-5 students and 3% have more than 20 PhD students. The average is 6 students per professor, and that ratio is highest in the engineering subjects.
    • 44% of PhD students are women.
    • The modal age is 29.
    • 15% are non-German.
    • 23% of students are in structured programs, 23% of students are doing a cumulative dissertation (economics PhD-style).
  5. John D. Cook: “One of my favorite proofs: Lagrange multipliers

  6. Free online book: “Dynamic Discrete Choice Models: Methods, Matlab Code, and Exercises”, by Jaap Abbring and Tobias Klein (through Jason Blevins)

"Philosophy of Science: A Very Short Introduction", by Samir Okasha

Samir Okasha in “Philosophy of Science: A Very Short Introduction” gives a good overview of the concept of science.

Okasha explains the difference between deductive and inductive reasoning. In a deductive argument, the conclusion follows necessarily from the premises. An inductive argument generalizes from observed cases to unobserved ones, so its conclusion can be false even when its premises are true.

At the root of Hume’s problem is the fact that the premisses of an inductive inference do not guarantee the truth of its conclusion.

Philosophers have responded to Hume’s problem in literally dozens of different ways; this is still an active area of research today.

For inductive reasoning to help us make predictions about the future, we need an extra assumption: that in some respects things will remain the same.

This assumption may seem obvious, but as philosophers we want to question it. Why assume that future repetitions of the experiment will yield the same result? How do we know this is true?

A good model is one that is not too crude about what it assumes to remain constant in nature. If you assume that business cycles just mechanically happen every seven or so years, then that’s fairly crude.

Karl Popper thought that scientists should only argue deductively. We all know Karl Popper and we cite him when we say that theories have to be falsifiable. But philosophy of science didn’t stop with Popper. In particular, Popper’s theory of progress in science doesn’t capture what actually happens:

In general, scientists do not just abandon their theories whenever they conflict with the observational data. […] Obviously if a theory persistently conflicts with more and more data, and no plausible ways of explaining away the conflict are found, it will eventually have to be rejected. But little progress would be made if scientists simply abandoned their theories at the first sign of trouble.

Most philosophers think it’s obvious that science relies heavily on inductive reasoning, indeed so obvious that it hardly needs arguing for. But, remarkably, this was denied by the philosopher Karl Popper, […]. Popper claimed that scientists only need to use deductive inferences.

The weakness of Popper’s argument is obvious. For scientists are not only interested in showing that certain theories are false.

In contrast, Thomas Kuhn speaks of paradigm changes:

In short, a paradigm is an entire scientific outlook – a constellation of shared assumptions, beliefs, and values that unite a scientific community and allow normal science to take place.

But over time anomalies are discovered – phenomena that simply cannot be reconciled with the theoretical assumptions of the paradigm, however hard normal scientists try. When anomalies are few in number they tend to just get ignored. But as more and more anomalies accumulate, a burgeoning sense of crisis envelops the scientific community. Confidence in the existing paradigm breaks down, and the process of normal science temporarily grinds to a halt.

In Kuhn’s words, ‘each paradigm will be shown to satisfy the criteria that it dictates for itself and to fall short of a few of those dictated by its opponent’.

Karl Popper is normative, “How should science be done?”, while Thomas Kuhn is descriptive, “How is science done?”

Okasha concludes,

In rebutting the charge that he had portrayed paradigm shifts as non-rational, Kuhn made the famous claim that there is ‘no algorithm’ for theory choice in science. […] Kuhn’s insistence that there is no algorithm for theory choice in science is almost certainly correct.

The moral of his story is not that paradigm shifts are irrational, but rather that a more relaxed, non-algorithmic concept of rationality is required to make sense of them.

Kuhn’s idea of the “theory-ladenness” of data is interesting. Kuhn says that this makes comparisons between theories difficult or impossible. That’s probably exaggerated, but in economics many of the things we measure (like GDP) are abstract concepts, and theory guides how we measure them.

Why does inflation matter at all?

Noah Smith argues that inflation has low costs and that central banks should therefore sometimes trade off higher inflation against better GDP performance. And Olivier Blanchard has made the case for raising the inflation target above the current 2%, to increase the distance to the zero lower bound.

Yet in the minds of many people there’s no place for inflation. People have “money illusion”: they fail to adjust nominal values for overall price changes and feel richer or poorer when really they’re not. Inflation is seen as a bad thing, and George Akerlof and Robert Shiller write:

Inflation itself, particularly when it is increasing, can ultimately create a negative effect on the atmosphere of an economy, akin to the effect of broken windows and graffiti on a city. These lead to a breakdown in the sense of civil society, in the sense that all is right with the world. (p65, Animal Spirits)

For my bachelor thesis, I read Barry Eichengreen’s “Globalizing Capital”. He explains how modern economies changed after World War I. Larger firm conglomerates and unionization made wages of workers less flexible. And this downward wage rigidity was a problem during the Great Depression.

Nominal rigidities are the reason that monetary policy works at all. If prices and wages were flexible, then when the central bank doubles the money in circulation, all prices would also double immediately. So, I thought, the solution is to index all prices. If inflation from this year to the next is 2 percent, then your wage, your rent and every other price should automatically rise by 2 percent. And if for some reason the aggregate price level falls, then all these prices would also adjust downwards.

But indexing all prices is not workable and people wouldn’t accept it. And because money isn’t neutral, there is a role for intentional monetary policy. One of the most important effects of higher inflation is that, if wages adapt slowly, real wages fall for a while. So it’ll be cheaper for firms to hire people and they’ll be more willing to do so.
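A back-of-the-envelope illustration of that mechanism, with invented numbers:

```python
# Suppose inflation runs at 4% a year while sticky nominal wages grow
# by only 2% a year.
nominal_wage = 100.0
price_level = 1.0
for year in range(3):
    nominal_wage *= 1.02  # slow nominal wage growth
    price_level *= 1.04   # inflation

# The real wage (nominal wage deflated by the price level) has fallen
# by about 5.7% after three years, making hiring cheaper for firms.
real_wage = nominal_wage / price_level
```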

Economists are quick to prescribe economics lessons to other people, but understanding inflation and the difference between nominal and real values is a basic skill that I wish more people had.

Tech stocks in Berlin before 1913

In “The Berlin Stock Exchange in Imperial Germany - a Market for New Technology?” (pdf), Sibylle Lehmann-Hasemeyer and Jochen Streb look at how well the financial market assessed firm innovativeness in pre-1913 Germany. They show that the stock market guessed well which companies would continue to innovate after they went public.

Between 1892 and 1919, 474 companies started trading their shares on the Berlin stock exchange. The authors take the change in the price of a stock on its first day of trading as a measure of “underpricing”, which indicates how much asymmetric information there is in the market.
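The underpricing measure is simple to write down; the prices below are invented for illustration:

```python
def underpricing(offer_price, first_day_close):
    """First-day return of an IPO: how far below the market's valuation
    the shares were sold."""
    return (first_day_close - offer_price) / offer_price

# A hypothetical IPO priced at 100 that closes its first day at 112 was
# underpriced by 12%: the firm left that money on the table.
u = underpricing(100.0, 112.0)  # 0.12
```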

Underpricing is bad for a firm, as it receives fewer funds than if it had sold its shares at a higher price. (Google, for example, went to great lengths to determine a good price.) And with more capital a firm can invest more in research and so be granted more patents later, so the authors have to argue that this effect isn’t strong enough to cause reverse causality.

Lehmann-Hasemeyer and Streb control for what investors knew at the time of the initial public offering (IPO) about how innovative firms already were. For this, they count the number of patents a firm had been granted before, so patents serve as a proxy for the innovativeness of a firm. This is an example of using patents as “inputs” to the technological process, in Zvi Griliches’ wording.

Research is a risky activity, so there might be more asymmetric information in the prices of stocks of research-intensive companies. But that’s not what they find: there was little underpricing in the stocks of firms that continued to be innovative after the IPO. This might be due to the screening of banks:

Overall, German universal banks seemed to be well informed about the market value of firms that planned to go public. The comparatively low underpricing that occurred at the Berlin stock exchange during Germany’s high industrialization might therefore indicate that investors’ uncertainty was rather small because they knew that banks brought only those firms to the market that met certain minimum quality requirements.

They conclude that investors must have had more information than patent counts:

[Investors] were capable of distinguishing between permanently innovative firms and firms with sharply declining innovativeness (Buddenbrooks), even though both types of firms looked very similar at the date of the IPO with respect to their patent history. This observation implies that pure patent counts that are often used in cliometric studies of innovation might not be a good proxy for the knowledge that was available at the date of an IPO.

The paper is forthcoming in the American Economic Review.