
Elon Musk

In the sci-fi novel “Seveneves”, the character Sean Probst looks suspiciously like Silicon Valley founder and investor Elon Musk. In the book, Earth is awaiting impending doom and humanity is collectively trying to bring as many people as possible into space to give mankind a chance to survive. The brilliant and eccentric billionaire Sean Probst considers the democratically agreed-upon plan insufficient and sacrifices his life by flying his self-built spaceship to gather a big block of ice that humanity needs as fuel.

Last year’s biography of Elon Musk by Ashlee Vance is a fascinating piece of contemporary history and a window into the character of such a driven person. Vance offers a balanced account of Musk’s charming and brilliant side and the Musk who ruthlessly abandons old companions and pushes out the original founders of a company. What becomes clear from reading this book is just how much risk Musk has repeatedly taken on himself and how hard he worked. Vance cites the investor Antonio Gracias as saying:

I’ve just never seen anything like his [Musk’s] ability to take pain. (Chapter 8: Pain, Suffering, and Survival)

So why does someone who has proven himself, is vastly rich and has five young sons push himself so hard?

In “Seveneves”, the catastrophe is near and Probst would die soon anyway, so he might as well risk his life. Similarly, Vance’s overarching theory is that Musk is not primarily driven by “status cocaine”, but is a hyper-“Effective Altruist” who thinks he’s figured out the most important challenges facing humanity and focuses all his energy on these goals:

Musk’s behavior matches up much more closely with someone who is […] profoundly gifted. […] It’s not uncommon for these children to look out into the world and find flaws — glitches in the system — and construct logical paths in their minds to fix them. For Musk, the call to ensure that mankind is a multiplanetary species partly stems from a life richly influenced by science fiction and technology. […]

Each facet of Musk’s life might be an attempt to soothe a type of existential depression that seems to gnaw at his every fiber. […] The people who suggest bad ideas during meetings or make mistakes at work are getting in the way of all of this and slowing Musk down. He does not dislike them as people. It’s more that he feels pained by their mistakes, which have consigned man to peril that much longer. The perceived lack of emotion is a symptom of Musk sometimes feeling like he’s the only one who really grasps the urgency of his mission. […]

Musk has been pretty up front about these tendencies. He’s implored people to understand that he’s not chasing momentary opportunities in the business world. He’s trying to solve problems that have been consuming him for decades. (Chapter 11: The Unified Field Theory of Elon Musk)


Collected links

  1. FiveThirtyEight: “People are making Super Mario go faster than ever”. I wonder what these players would say about MarI/O.

  2. Simon Kuper in the FT (if paywalled, google for it) on robots and journalism:

    Don’t make your job your identity.

  3. Some behavioral finance economist should exploit this to study the effect of fake news on the stock market.

  4. The first three points in this post by Olivier Blanchard are an interesting summary of what almost all macroeconomists can agree on. (Through Rüdiger Bachmann)

  5. Bundesbank (in German, translated):

    Distributional effects are an object of study for monetary policy, but not a goal of monetary policy.

  6. Vice: “When tourism turns into narcissism”

    I’m a travel writer, which is shorthand for saying that I’m a work-shy dilettante with an overinflated respect for the value of my own experience.

  7. MIT alumni are going into tech

  8. Literary Hub: “There’s still no word for ‘memoir’ in German publishing” (through Dan Wang)

"Manias, Panics and Crashes", by Charles Kindleberger

The German newspaper Süddeutsche Zeitung recently profiled a number of German economists. They were asked to name their two favorite books, and “Manias, Panics and Crashes: A History of Financial Crises” by Charles Kindleberger was mentioned twice.

What causes financial crises?

In the book, Kindleberger shows that there’s a pattern common to these events and that financial crises aren’t all that rare if you zoom out enough:

Speculative excess, referred to concisely as a mania, and revulsion from such excess in the form of a crisis, crash, or panic can be shown to be, if not inevitable, at least historically common. (p4)

A common sequence is followed:

What happens, basically, is that some event changes the economic outlook. New opportunities for profits are seized, and overdone, in ways so closely resembling irrationality as to constitute a mania. Once the excessive character of the upswing is realized, the financial system experiences a sort of “distress,” in the course of which the rush to reverse the expansion process may become so precipitous as to resemble panic. In the manic phase, people of wealth or credit switch out or borrow to buy real or illiquid financial assets. In panic, the reverse movement takes place, from real or financial assets to money, or repayment of debt, with a crash in the prices of […] whatever has been the subject of the mania. (p5)

And:

[…] [I]rrationality may exist insofar as economic actors choose the wrong model, fail to take account of a particular and crucial bit of information, or go so far as to suppress information that does not conform to the model implicitly adopted. (p29)

Kindleberger then writes:

The end of a period of rising prices leads to distress if investors or speculators have become used to rising prices and the paper profits implicit in them. (p103)

Causa remota of the crisis is speculation and extended credit; causa proxima is some incident which snaps the confidence of the system, makes people think of the dangers of failure, and leads them to move [from the object of speculation] back into cash. […] Prices fall. Expectations are reversed. […] The credit system itself appears shaky, and the race for liquidity is on. (p107-108)

How to avoid financial crises or deal with them?

Kindleberger identifies a rise in the leverage in the economy as the culprit:

Speculative manias gather speed through an expansion of money and credit or perhaps, in some cases, get started because of an initial expansion of money and credit. (p52)

There’s plenty of research showing that credit plays an important role in financial crises. Kaminsky and Reinhart (1999) and Schularick and Taylor (2012) provide cross-country statistical evidence that financial crises are preceded by credit booms. Mian and Sufi (2009) similarly show that the parts of the United States in which the credit supply to less financially healthy (“subprime”) households increased most strongly also experienced more mortgage defaults during the financial crisis from 2007 onwards. This leads them to write in their book:

As it turns out, we think debt is dangerous. (p12)

And in their research (pdf) with Emil Verner, they argue that it’s the supply of credit rather than the demand for it that drives these cycles of debt accumulation. López-Salido, Stein and Zakrajšek also show that optimistic credit conditions predict economic downturns.

So maybe the regulator should stop credit bonanzas before they become dangerous. The central bank could “lean against the wind” in good times by sucking liquidity out of credit markets. George Akerlof and Robert Shiller put it like this:

But financial markets must also be targeted [by the central bank]. (“Animal Spirits”, p96)

Kindleberger’s response (which I found the most interesting thought of the book) is that what counts as money isn’t obvious and that it’s therefore also difficult to control the credit supply:

The problem is that “money” is an elusive construct, difficult to pin down and to fix in some desired quantity for the economy. As a historical generalization, it can be said that every time the authorities stabilize or control some quantity of money, either in absolute volume or growing along a predetermined trend line, in moments of euphoria more will be produced. (p57)

My contention is that the process is endless: fix any M_i and the market will create new forms of money in periods of boom to get around the limit and create the necessity to fix a new variable M_j. (p58)

He goes through a range of possibilities of what he calls the

[…] virtually infinite set of possibilities of expanding credit on a fixed money base. (p68)

Instead, he argues, the central bank should step in when the crisis occurs:

If one cannot control expansion of credit in boom, one should at least try to halt contraction of credit in crisis. (p165)

He argues forcefully that the central bank should act as lender of last resort. This means that the central bank expands the money supply in times of crisis and provides liquidity to banks.

In a word, our conclusion is that money supply should be fixed over the long run but be elastic during the short-run crisis. The lender of last resort should exist, but his presence should be doubted. (p12)

Kindleberger is aware of the moral hazard problem: If banks know that they’ll be bailed out, then they might behave recklessly. But he says there’s no alternative (his emphasis):

The dominant argument against the a priori view that panics can be cured by being left alone is that they almost never are left alone. (p143)

He says that it shouldn’t be certain whether banks will be bailed out:

Ambiguity as to whether there will be a lender of last resort, and who it will be, may be optimal in a close-knit society. (p174)

He thinks central banks should decide on an ad-hoc basis:

The rule is that there is no rule. (p176)

Nothing we can do?

I find it discouraging to think that we live in the 21st century, yet we can’t properly control the money or the credit supply and have to resort to ambiguity about whether banks will be saved to keep them in check. I’m all for bending the rules in times of crisis, but isn’t there more we could do to not get there in the first place?

Anat Admati and Martin Hellwig argue that banks should be required to finance more through stocks and less through deposits and bonds:

Whatever else we do, imposing significant restrictions on banks’ borrowing is a simple and highly cost-effective way to reduce risks to the economy without imposing any significant cost on society. (“The Bankers’ New Clothes”, p10)

The benefit of this is that the owners of the bank’s stock would bear losses, which could avert the danger of a bank going bankrupt or the threat of a bank run.
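
A stylized numerical example (my own, not from the book) of why more equity funding helps: suppose a bank holds assets of 100. If it is funded with 5 of equity and 95 of debt, a 6% fall in asset values wipes out the equity and the bank is insolvent. If it is instead funded with 25 of equity and 75 of debt, the same loss leaves 19 of equity and the bank’s depositors and creditors are untouched.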

Similarly, Atif Mian says about nominal debt:

The key characteristic of debt, which makes it so destructive at times for the macroeconomy, is the inability of a debt contract to share risk between the borrower and the lender. And in particular when I say “share risk”, it’s really the downside risk that we’re talking about.

[...]

We want to move away from a world where debt is the predominant contract. (link)

Doomed if we do, doomed if we don’t

But Hans-Joachim Voth writes:

“The optimal number of financial crises is not zero” (download pdf)

This is based on evidence from Roman Rancière, Aaron Tornell and Frank Westermann that countries that experience more drastic contractions in credit tend to do better economically in the long run than countries that cripple their financial institutions and hence have stable but inefficient financial systems.

Other researchers, studying a longer time horizon than Rancière et al., document that we traded lower real volatility for fewer but more harmful crises.

Rancière et al. also write:

We would like to emphasize that the fact that systemic risk can be good for growth does not mean that it is necessarily good for welfare. (“Systemic crises and growth”, p404)

And that is because we don’t like what follows financial crises when they leave emotional scars: political polarization, a loss of trust and a lasting unwillingness to bear risks.

Liberalized financial markets were probably good for growth. But if they come with rare but severe crises, then maybe they weren’t a good choice and we should return to a world of more boring, safer banking. We might even want to give up some of our future prosperity for that.


Bonn

When this relaxed city on the Rhine became West Germany’s ‘temporary’ capital in 1949 it surprised many, including its own residents. When in 1991 a reunited German government decided to move to Berlin, it shocked many, especially its own residents.

A generation later, Bonn is doing just fine, thank you. It has a healthy economy and lively urban vibe.

This is from the Lonely Planet Germany.

An excellent course on machine learning

Following Olli’s recommendation, I took a crack at Andrew Ng’s machine learning course on Coursera. It’s pedagogically well designed and taught, and I highly recommend it. I’d like to share my code for the course, but I think that would be against the spirit of such an online class.

Some observations:

  • It’s an introductory course, so there are no proofs. The focus is on graphical intuition, implementing the algorithms in Octave/Matlab and the practicalities of large-scale machine learning projects.
  • The goal is not to accurately estimate the β and the uncertainty around it, but to be precise in predicting the ŷ out of sample. (See also here.)
  • The course is refreshingly different from what we normally study and I think best taken not as a substitute, but as a complement to econometrics classes.
  • It uses a different vocabulary:

    Machine learning        Econometrics
    example                 observation
    (to) learn              (to) estimate
    hypothesis              estimation equation
    feature/input           variable
    output/outcome          dependent variable
    bias                    constant/intercept
    bias (yeah, twice 🤔)    bias
  • Linear regression is introduced through the cost function and its numerical minimization. Ng shows the analytical solution on the side, but he adds that it would only be useful in a “low-dimensional” problem with up to 10,000 or so variables. (See the first sketch after this list.)
  • I liked the introduction to neural networks in week 4 and the explanation of them as stacked logical operators (see the second sketch after this list).
  • Insightful discussion of how to use training and cross-validation set errors, plotted against the regularization parameter and against the number of observations, to identify whether bias or variance is a problem (a sketch of this diagnostic follows the list).
  • The video on error analysis made me realize that in my patent project I had spent a very large amount of time thinking about appropriate error metrics, but little time actually inspecting the mis-classified patents.
  • In a presentation I once attended, Tom Sargent said:

    A little linear algebra goes a long way.

    Similarly here: With only a little clustering, for example, we can compress the file size of this image by a factor of six but preserve a lot of the information (exercise 7; a sketch follows the list):

    Figure: Image compression via clustering (compressed tomatoes)

  • I hadn’t previously thought of dimension reduction as a debugging method: if you get thousands of features from different sources and you’re not sure whether some are constructed similarly, dimension reduction weeds out the redundant features (see the sketch after this list).
  • Time series data is suspiciously missing from the course.
  • He mentions this folk wisdom:

    […] [I]t’s not who has the best algorithm that wins. It’s who has the most data.

  • And he recommends asking:

    How much work would it be to get 10x as much data as we currently have?

  • Ng stresses again and again that our intuitions about what to optimize often lead us astray. Instead we should plot error curves, do error analysis and look at where the bottlenecks in the machine learning pipeline are.
  • I also liked the discussion of using map-reduce for parallelization at the end of the class (a sketch follows below). Hal Varian also discussed that here.
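
Here are the sketches mentioned in the list. First, a minimal example (my own code, not the course’s) of the two ways linear regression is fit: gradient descent on the squared-error cost versus the analytical normal equation, which Ng notes stops being practical once there are very many features:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100
X = np.column_stack([np.ones(n), rng.uniform(0, 10, n)])  # intercept + one feature
y = 2.0 + 0.5 * X[:, 1] + rng.normal(0, 1, n)

# Gradient descent on the cost J(theta) = ||X @ theta - y||^2 / (2n)
theta = np.zeros(2)
alpha = 0.01  # learning rate
for _ in range(5000):
    theta -= alpha * X.T @ (X @ theta - y) / n

# Normal equation: theta = (X'X)^{-1} X'y, solved directly
theta_exact = np.linalg.solve(X.T @ X, X.T @ y)

print(theta, theta_exact)  # both close to the true (2.0, 0.5)
```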
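
Second, a toy version (mine, with hand-picked weights in the spirit of the week-4 lectures) of neural networks as stacked logical operators: single sigmoid units implement AND and NOR, and stacking them yields XNOR, which no single unit can represent:

```python
import numpy as np

def sigmoid(z):
    return 1 / (1 + np.exp(-z))

def unit(x1, x2, b, w1, w2):
    # One logistic neuron with fixed weights; output is close to 0 or 1.
    return sigmoid(b + w1 * x1 + w2 * x2)

for x1 in (0, 1):
    for x2 in (0, 1):
        a_and = unit(x1, x2, -30, 20, 20)   # fires only if both inputs are 1
        a_nor = unit(x1, x2, 10, -20, -20)  # fires only if both inputs are 0
        xnor = unit(a_and, a_nor, -10, 20, 20)  # OR of the two hidden units
        print(x1, x2, int(xnor > 0.5))  # 1 exactly when x1 == x2
```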
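
Third, a schematic version of the bias/variance diagnostic on synthetic data (my construction, using ridge regression as the regularized model): compare training and cross-validation error across regularization strengths. High error on both sets suggests bias (underfitting); a large gap between them suggests variance (overfitting):

```python
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.metrics import mean_squared_error
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import PolynomialFeatures

rng = np.random.default_rng(1)
x = rng.uniform(-3, 3, 200).reshape(-1, 1)
y = np.sin(x).ravel() + rng.normal(0, 0.3, 200)
x_train, x_cv, y_train, y_cv = train_test_split(x, y, random_state=0)

for lam in [1e-4, 1e-2, 1, 100]:
    model = make_pipeline(PolynomialFeatures(degree=8), Ridge(alpha=lam))
    model.fit(x_train, y_train)
    err_train = mean_squared_error(y_train, model.predict(x_train))
    err_cv = mean_squared_error(y_cv, model.predict(x_cv))
    print(f"lambda={lam:g}  train={err_train:.3f}  cv={err_cv:.3f}")
```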
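
Fourth, a sketch of the exercise-7 idea of compressing an image by clustering: run K-means on the pixel colors and replace each pixel by its cluster centroid. With 16 colors, each pixel needs 4 bits instead of 24, roughly the factor-of-six compression mentioned above. The file name is hypothetical, and this uses scikit-learn rather than the course’s Octave code:

```python
import numpy as np
from PIL import Image
from sklearn.cluster import KMeans

img = np.asarray(Image.open("tomatoes.png").convert("RGB"))  # hypothetical file
pixels = img.reshape(-1, 3).astype(float)

# Cluster the pixel colors into 16 groups and map every pixel to the
# nearest cluster centroid, so the image uses only 16 distinct colors.
kmeans = KMeans(n_clusters=16, n_init=10, random_state=0).fit(pixels)
compressed = kmeans.cluster_centers_[kmeans.labels_]

out = compressed.reshape(img.shape).astype(np.uint8)
Image.fromarray(out).save("tomatoes_16colors.png")
```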
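
Fifth, a small illustration (my own) of dimension reduction as a debugging tool: when one feature is a linear combination of others, PCA reveals it as a principal component with near-zero variance:

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(2)
a = rng.normal(size=500)
b = rng.normal(size=500)
c = 0.5 * a + 2.0 * b  # redundant feature, built from the other two
X = np.column_stack([a, b, c])

pca = PCA().fit(X)
# The last component explains ~0% of the variance: one feature is redundant.
print(pca.explained_variance_ratio_)
```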
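
And finally, a toy sketch of the map-reduce idea: the batch gradient is a sum over observations, so chunks of the data can be processed in parallel (“map”) and the partial gradients added up (“reduce”):

```python
import numpy as np

rng = np.random.default_rng(3)
X = rng.normal(size=(1000, 3))
y = X @ np.array([1.0, -2.0, 0.5]) + rng.normal(0, 0.1, 1000)
theta = np.zeros(3)

def partial_gradient(X_chunk, y_chunk, theta):
    # "Map" step: the gradient contribution of one chunk of the data.
    return X_chunk.T @ (X_chunk @ theta - y_chunk)

# Split the observations across, say, four machines and sum the results.
chunks = np.array_split(np.arange(len(y)), 4)
grad = sum(partial_gradient(X[idx], y[idx], theta) for idx in chunks) / len(y)
# grad now equals the full-batch gradient X'(X @ theta - y) / n.
```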


Collected links

  1. The Economist on homeopathy in Germany:

    It may not be as ancient as acupuncture, but homeopathy is the closest thing Germany has to a native alternative-medicine tradition.

  2. John Oliver’s Last Week Tonight on “Journalism”

  3. XKCD: “When people say “The Climate has changed before,” these are the kinds of changes they’re talking about.”

  4. German statistical office (in German): “Promovierende in Deutschland” (PhD students in Germany):
    • They surveyed 20,000 professors and 20,000 PhD students in Germany.
    • There are currently 33,154 professors in Germany who can graduate a PhD student (“Promotionsrecht”).
    • The immense number of PhD students (“Promovierende”) is 196,200, of which 111,400 are enrolled at a university. And 99% of those finishing their PhDs are enrolled at a university. (These numbers are obviously inflated by German peculiarities such as counting medical doctorates as PhDs.)
    • 11% of professors have no PhD students, 50% have 1–5 students and 3% have more than 20 PhD students. The average is 6 students per professor, and that ratio is highest in the engineering subjects.
    • 44% of PhD students are women.
    • The modal age is 29.
    • 15% are non-German.
    • 23% of students are in structured programs, and 23% are doing a cumulative dissertation (economics PhD-style).
  5. John D. Cook: “One of my favorite proofs: Lagrange multipliers”

  6. Free online book: “Dynamic Discrete Choice Models: Methods, Matlab Code, and Exercises”, by Jaap Abbring and Tobias Klein (through Jason Blevins)

"Philosophy of Science: A Very Short Introduction", by Samir Okasha

Samir Okasha in “Philosophy of Science: A Very Short Introduction” gives a good overview of the concept of science.

Okasha explains the difference between deductive and inductive reasoning. In a deductive argument, the conclusion follows necessarily from the premisses. An inductive argument extrapolates from observed cases to unobserved ones, so its conclusion can be false even when its premisses are true.

At the root of Hume’s problem is the fact that the premisses of an inductive inference do not guarantee the truth of its conclusion.

Philosophers have responded to Hume’s problem in literally dozens of different ways; this is still an active area of research today.

For inductive reasoning to help us make predictions about the future, we need an extra assumption: we have to take as given that, in certain respects, things will remain the same (what philosophers call the uniformity of nature).

This assumption may seem obvious, but as philosophers we want to question it. Why assume that future repetitions of the experiment will yield the same result? How do we know this is true?

A good model is one that isn’t too crude about which regularities of nature it assumes will persist. If you assume that business cycles just mechanically happen every seven or so years, then that’s a fairly crude assumption.

Karl Popper thought that scientists should only argue deductively. We all know Karl Popper, and we cite him when we say that theories have to be falsifiable. But the philosophy of science didn’t stop with Popper. In particular, Popper’s theory of progress in science doesn’t capture what actually happens:

In general, scientists do not just abandon their theories whenever they conflict with the observational data. […] Obviously if a theory persistently conflicts with more and more data, and no plausible ways of explaining away the conflict are found, it will eventually have to be rejected. But little progress would be made if scientists simply abandoned their theories at the first sign of trouble.

Most philosophers think it’s obvious that science relies heavily on inductive reasoning, indeed so obvious that it hardly needs arguing for. But, remarkably, this was denied by the philosopher Karl Popper, […]. Popper claimed that scientists only need to use deductive inferences.

The weakness of Popper’s argument is obvious. For scientists are not only interested in showing that certain theories are false.

In contrast, Thomas Kuhn speaks of paradigm changes:

In short, a paradigm is an entire scientific outlook – a constellation of shared assumptions, beliefs, and values that unite a scientific community and allow normal science to take place.

But over time anomalies are discovered – phenomena that simply cannot be reconciled with the theoretical assumptions of the paradigm, however hard normal scientists try. When anomalies are few in number they tend to just get ignored. But as more and more anomalies accumulate, a burgeoning sense of crisis envelops the scientific community. Confidence in the existing paradigm breaks down, and the process of normal science temporarily grinds to a halt.

In Kuhn’s words, ‘each paradigm will be shown to satisfy the criteria that it dictates for itself and to fall short of a few of those dictated by its opponent’.

Karl Popper is normative (“How should science be done?”), while Thomas Kuhn is descriptive (“How is science done?”).

Okasha concludes:

In rebutting the charge that he had portrayed paradigm shifts as non-rational, Kuhn made the famous claim that there is ‘no algorithm’ for theory choice in science. […] Kuhn’s insistence that there is no algorithm for theory choice in science is almost certainly correct.

The moral of his story is not that paradigm shifts are irrational, but rather that a more relaxed, non-algorithmic concept of rationality is required to make sense of them.

Kuhn’s idea of the “theory-ladenness” of data is interesting. Kuhn says that it makes comparisons between theories difficult or impossible. That’s probably exaggerated, but in economics many of the things we measure (like GDP) are abstract concepts, and theory guides how we measure them.