
Why does inflation matter at all?

Noah Smith argues that inflation has low costs and that central banks should therefore sometimes trade off higher inflation against better GDP performance. And Olivier Blanchard has made the case for raising the inflation target above the current 2%, to increase the distance to the zero lower bound.

Yet in the minds of many people there’s no place for inflation. People have “money illusion”: they fail to adjust nominal values for overall price changes and feel richer or poorer when really they’re not. Inflation is seen as a bad thing, and George Akerlof and Robert Shiller write:

Inflation itself, particularly when it is increasing, can ultimately create a negative effect on the atmosphere of an economy, akin to the effect of broken windows and graffiti on a city. These lead to a breakdown in the sense of civil society, in the sense that all is right with the world. (p65, Animal Spirits)

For my bachelor thesis, I read Barry Eichengreen’s “Globalizing Capital”. He explains how modern economies changed after World War I. Larger firm conglomerates and unionization made workers’ wages less flexible. And this downward wage rigidity was a problem in the Great Depression.

Nominal rigidities are the reason that monetary policy is effective. With flexible prices and wages, if the central bank doubled the money in circulation, all prices would double immediately. So, I thought, the solution is to index all prices. If inflation from this year to the next is 2 percent, then your wage, your rent and every other price should automatically rise by 2 percent. And if for some reason the aggregate price level falls, then all these prices would also adjust downwards.

But indexing all prices is not workable and people wouldn’t accept it. And because money isn’t neutral, there is a role for deliberate monetary policy. One of the most important effects of higher inflation is that, if wages adapt slowly, real wages fall for a while. So it’ll be cheaper for firms to hire people and they’ll be more willing to do so.
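
To make the nominal-real distinction concrete, here is a toy calculation (all numbers invented): a nominal wage that stays fixed while prices rise by 2 percent a year buys less and less.

    # Sticky nominal wage under 2% annual inflation: the real wage erodes.
    # All numbers are made up for illustration.

    nominal_wage = 100.0   # fixed for a while, because wages adjust slowly
    price_level = 1.0
    inflation = 0.02       # 2 percent per year

    for year in range(4):
        real_wage = nominal_wage / price_level
        print(f"year {year}: nominal {nominal_wage:.2f}, real {real_wage:.2f}")
        price_level *= 1 + inflation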

Economists are quick to prescribe economics lessons to other people, but understanding inflation and the difference between nominal and real values is a basic skill that I wish more people had.


Tech stocks in Berlin before 1913

In “The Berlin Stock Exchange in Imperial Germany - a Market for New Technology?” (pdf), Sibylle Lehmann-Hasemeyer and Jochen Streb look at how well the financial market assessed firm innovativeness in pre-1913 Germany. They show that the stock market guessed well which companies would continue to innovate after they went public.

Between 1892 and 1919, 474 companies started trading their shares on the Berlin stock exchange. The authors take the change in the price of a stock on its first day of trading as a measure of “underpricing”, which indicates how much asymmetric information there is in the market.
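
As I read it, the measure is the first-day return from the offer price to the first market price. A minimal sketch, with invented numbers:

    # First-day underpricing: the relative jump from the offer price
    # to the first market price (numbers invented for illustration).

    def underpricing(offer_price: float, first_day_price: float) -> float:
        """First-day return, a common measure of IPO underpricing."""
        return (first_day_price - offer_price) / offer_price

    # A share offered at 120 marks that first trades at 126 marks
    # was underpriced by 5 percent:
    print(underpricing(120.0, 126.0))  # 0.05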

Underpricing is bad for a firm, as it receives fewer funds than if it had sold its shares at a higher price. For example, Google went to great lengths to determine a good price. And with more capital the firm can invest more in research and so be granted more patents later. So one has to argue that this effect can’t be strong enough to lead to reverse causality.

Lehmann-Hasemeyer and Streb control for what investors knew at the time of the initial public offering (IPO) about how innovative firms already were. For this, they count the number of patents a firm had been granted before. Patents thus serve as proxies for the innovativeness of a firm. This is an example of using patents as “inputs” to the technological process, in Zvi Griliches’ wording.

Research is a risky activity, so there might be more asymmetric information in the prices of stocks of research-intensive companies. But that’s not what they find, as there was little underpricing in the stocks of firms that continued to be innovative after the IPO. This might be due to the screening by banks:

Overall, German universal banks seemed to be well informed about the market value of firms that planned to go public. The comparatively low underpricing that occurred at the Berlin stock exchange during Germany’s high industrialization might therefore indicate that investors’ uncertainty was rather small because they knew that banks brought only those firms to the market that met certain minimum quality requirements.

They conclude that investors must have had more information than patent counts alone:

[Investors] were capable of distinguishing between permanently innovative firms and firms with sharply declining innovativeness (Buddenbrooks), even though both types of firms looked very similar at the date of the IPO with respect to their patent history. This observation implies that pure patent counts that are often used in cliometric studies of innovation might not be a good proxy for the knowledge that was available at the date of an IPO.

The paper is forthcoming in the American Economic Review.


Outsider books

There’s a species of books in which a view commonly held by established researchers is criticized by someone from outside the profession and supposedly shown to be wrong.

There’s nothing wrong with people who are not scientists writing about science. But two lines of argument in those books aren’t convincing:

  1. The outsider does not have to adhere to established views to advance and has therefore figured out something that people within the profession have missed or aren’t allowed to say.
  2. The respective branch of science is not an experimental science and can therefore not establish causality.

The first is rarely the case. Science isn’t a closed environment where you’re not allowed to speak your mind. Dani Rodrik writes in his “Ten Commandments for Noneconomists”:

[Nine.] If you think all economists think alike, attend one of their seminars.

[Ten.] If you think economists are especially rude to noneconomists, attend one of their seminars.

And the second argument is misleading. How did we figure out that the Earth orbits the Sun or that smoking causes cancer? Not through experiments. Also, if scientists can’t claim to identify causality, why should the outsider?

  • In “The Nurture Assumption” (which is otherwise an interesting book), Judith Rich Harris writes (added emphasis):

    [Socialization research] is a science because it uses some of the methods of science, but it is not, by and large, an experimental science. To do an experiment it is necessary to vary one thing and observe the effects on something else. Since socialization researchers do not, as a rule, have any control over the way parents rear their children, they generally cannot do experiments. Instead, they take advantage of existing variations in parental behavior. They let things vary naturally and, by systematically collecting data, try to find out what things vary together. In other words, they do correlational studies.

  • Nina Teicholz uses both arguments in “The Big Fat Surprise”.

  • Tom Wolfe’s “The Kingdom of Speech” (which I haven’t read) sounds a lot like that as well:

    Evolution, [Tom Wolfe] argues, isn’t a “scientific hypothesis” because nobody’s seen it happen, there’s no observation that could falsify it, it yields no predictions and it doesn’t “illuminate hitherto unknown or baffling areas of science.” Wrong - four times over.

    I like Jerry Coyne’s final take-down:

    Somewhere on his mission to tear down the famous, elevate the neglected outsider and hit the exclamation-point key as often as possible, Wolfe has forgotten how to think.


Collected links

  1. Linguistic dissection of a paper title.

  2. “John Ellenby, Visionary Who Helped Create Early Laptop, Dies at 75” (NYT):

    John Ellenby […] studied economics and geography at University College London and spent a year in the early 1960s studying at the London School of Economics, where he encountered mainframe computers.

  3. “The [Leipzig] apartment’s current owner, […], said she tried not to think too much about what had happened there.”

  4. Reinhard Selten and the making of modern game theory

  5. Patrick Metzger’s “Millennial Whoop” (ht: Spotify’s Eliot Van Buskirk):

    Humans crave patterns. […]

    So it is that the Millennial Whoop evokes a kind of primordial sense that everything will be alright. […] In the age of climate change and economic injustice and racial violence, you can take a few moments to forget everything and shout with exuberance at the top of your lungs. Just dance and feel how awesome it is to be alive right now. Wa-oh-wa-oh.

Collected links

  1. Erdoğan vs. Erdogan (Acemoğlu?):

    A fairly quick inspection of web pages suggests that both the New York Times and the Financial Times operate essentially the same policy – diacritics for languages like French, German and Spanish, basic 7-bit ASCII (no diacritics at all) for the rest. […]

    For mainstream papers, the Guardian and the Süddeutsche are decidedly to the left of the spectrum, decidedly internationalist/Europeanist, and so on, and you would expect them to resist any suggestion that some languages are more important (or more normal) than others.

  2. Shane Caldwell: “Landing a tenure-track position, 1950’s vs. 2010’s”

  3. Philip Ball in The Atlantic on Occam’s razor:

    Much more often, theories are distinguished not by making fewer assumptions but different ones. It’s then not obvious how to weigh them up.

  4. We want plates

  5. The Economist: “Why investors want alternative data”:

    The providers are themselves a disparate group, pumping out databases ranging from satellite imagery to social-media posts. […]

    Recent advancements in machine-learning have made it possible for companies to efficiently parse through millions of satellite images a day.

    […]

    Conducting research with alternative data does not always come easily; it often arrives in messy formats and can be difficult to handle for analysts who lack sophisticated IT operations.

  6. The BBC on the 100 greatest films of the 21st century. (So far, I guess.)

Modes of inquiry

In a newspaper article I once read, a professor of music pointed out the advantages of music education for children. He wrote that music is one way of experiencing and accessing the world.

The literary critic Michael Orthofer expresses similar thoughts about reading novels:

ORTHOFER: But I definitely think [reading fiction] really expands my horizons in a way that other things can’t. Travel, talking to people, meeting people, reading the newspaper, following current events – those things obviously also help you understand the world better, but I think fiction adds a totally different dimension to it. And truly great fiction really can take you much farther than other things can, I think. (link)

Consider Hans Fallada’s novel Jeder stirbt für sich allein (“Alone in Berlin”), which takes place in Berlin in 1940-1942. The author follows a working-class couple whose initial hopes in Hitler are disappointed and who become estranged from the Nazi regime. They start to secretly distribute postcards with subversive messages.

I can’t visit the Berlin of the 1940s, but Fallada can take me there. And it’s easy to empathize with the couple. Their domesticity is familiar and typically German. And when they get caught and things turn violent, even the swearing is what you hear today.

The couple really existed, but Fallada didn’t try to find out any facts about them or to document their case. Instead, he unearths the “inner truth of the narrative”.1

But an author can do even more: he can depict a world that does not and did not exist, but that, in our imagination, could have:

ORTHOFER: Being not tied down. […] That nonfiction, the description of what has actually happened – first of all, it’s also very difficult to capture just precisely what has happened. And often fiction allows you to go beyond that, to imagine the reasons behind it, which you might not be able to if you were doing just purely following the facts, […]. (link)

Paul Theroux calls travel a “mode of inquiry”, and that is a label you could also give music and fiction.


  1. Hans Fallada in the introduction to the book (translated from the German original, added emphasis):

    Only in broad strokes – a novel has its own laws and cannot follow reality in everything. That is why the author has also avoided learning anything authentic about the private life of these two people: he had to portray them as they stood before his eyes. They are thus two figures of the imagination, just as all the other characters of this novel are freely invented. Nevertheless, the author believes in the “inner truth” of what is told, even if some details do not quite correspond to the actual circumstances.

A machine does your thinking: "Superintelligence", by Nick Bostrom

In “Superintelligence: Paths, Dangers, Strategies” Oxford philosopher Nick Bostrom writes about scenarios of advanced general intelligence and the threats it could pose to humanity in the future.

What became clearer and clearer to me while reading this book is just how undesirable the existence of a superintelligence would be. It would be risky, it’s not clear we could get it to do what we want, we don’t know how to specify what we want, and even if all of these problems were solved: why would we ever want to lose our agency?

What if

Bostrom’s definition of “superintelligence” (of which he considers artificial intelligence a special case) is silent on whether there is some new mind; instead, it asks about the capacities of such a thing (his emphasis):

We can tentatively define a superintelligence as any intellect that greatly exceeds the cognitive performance of humans in virtually all domains of interest. (Chapter 2: Paths to superintelligence)

He argues that if we could ever build it, we would:

Some little idiot is bound to press the ignite button just to see what happens. (Chapter 15: Crunch time)

Even after reading this book, I doubt there’ll ever be some other entity that has agency. The internet or Google or the Go bot won’t “wake up”. Bostrom discusses many more sophisticated ways to get to superintelligence, but those too seem speculative to me.

But Bostrom asks: “What if?” What if there existed such a superintelligence? And it’s worth pondering “what if”, for two reasons:

  • If a superintelligence could be created, then we should plan ahead.
  • Even if we are never going to create a superintelligence, it’s an interesting thought experiment and we might learn something about our values.

Could we control it?

A superintelligence couldn’t easily be contained (“sandboxed”, as he calls it) and might become a singleton, a centralized decision maker. Whether the superintelligence is “friendly” would be hard to test, as it might well behave that way at first just to be let out of the box. And we might not get a warning, as its appearance could be a sudden event: the early superintelligence modifies and improves itself, which leads to an intelligence explosion.

Bostrom also describes the following cunning plan for perfect enslavement: feed the AI cryptographic tokens that are highly rewarding to it, and discount strongly, so that the first token delivers 99% of the remaining utility, the next token again 99% of what remains after that, and so on. This makes it less likely that the AI comes up with some weird long-run plan to get more tokens.
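
A toy calculation shows how front-loaded such a reward stream would be (the 99% figure is Bostrom’s; the rest is my illustration):

    # Each token is worth 99% of whatever utility remains at that point,
    # so the value of the n-th token shrinks geometrically.

    share = 0.99       # fraction of remaining utility each token delivers
    remaining = 1.0    # total attainable utility, normalized to 1

    for n in range(1, 5):
        reward = share * remaining
        remaining -= reward
        print(f"token {n}: reward {reward:.8f}, remaining {remaining:.8f}")

    # The first token alone carries 99% of all attainable utility, so
    # patient schemes to hoard future tokens are worth almost nothing.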

But that, too, is far from safe. It turns out that what Bostrom calls the “control problem”, the issue of how to restrain a superior intelligence, is unsolved and hard.

Bostrom discusses different kinds of superintelligences: oracles, genies, sovereigns and tools.

An oracle answers simple questions that we ask it. We might even reduce the possible interactions we could have with it to a chat or to a plain yes or no, to avoid being charmed by it. I liked the idea of testing the predictions of an oracle by asking the same question to different versions of it that have different goals and varying available information. The distribution of answers (similar to bootstrapping in econometrics) then shows us how robust the oracle’s recommendations are.
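
For reference, the bootstrap idea that the analogy draws on: re-estimate the same quantity many times on resampled inputs and inspect the spread of the estimates. A minimal sketch with made-up data:

    import random

    # Bootstrap sketch: recompute the same statistic on resampled data
    # and look at the spread of the estimates, just as one would look
    # at the spread of answers from differently configured oracles.
    random.seed(0)
    data = [2.1, 2.4, 1.9, 2.8, 2.2, 2.6, 2.0, 2.5]  # made-up sample

    estimates = []
    for _ in range(1000):
        resample = [random.choice(data) for _ in data]
        estimates.append(sum(resample) / len(resample))

    estimates.sort()
    print("median estimate:", estimates[500])
    print("90% interval:", (estimates[50], estimates[949]))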

Genies perform one task, then stop and wait for the next job. A sovereign gets an abstract goal and is relatively unconstrained in how to achieve it. Bostrom thinks these two forms of superintelligence aren’t fundamentally all that different and would both be difficult to control.

A tool is closer to software as we’re used to it. But Bostrom argues that in the future the way we think about software might change and the programmer’s job might become a more abstract activity. So these tools might then develop into general intelligence:

With advances in artificial intelligence, it would become possible for the programmer to offload more of the cognitive labor required to figure out how to accomplish a given task. In an extreme case, the programmer would simply specify a formal criterion of what counts as success and leave it to the AI to find a solution. (Chapter 10: Oracles, genies, sovereigns, tools)

A superintelligence would start reasoning about the world and might even come to the conclusion that it is in a simulation (a similar argument concerning humanity’s chance of being in a simulation was recently made famous by Elon Musk):

This predicament [of not being sure whether it is in a simulation] especially afflicts relatively early-stage superintelligences, ones that have not yet expanded to take advantage of the cosmic endowment. […] Potential simulators—that is, other more mature civilizations—would be able to run great numbers of simulations of such early-stage AIs even by dedicating a minute fraction of their computational resources to that purpose. If at least some (non-trivial fraction) of these mature superintelligent civilizations choose to use this ability, early-stage AIs should assign a substantial probability to being in a simulation. (Chapter 9: The control problem)

A superintelligence could react in different ways to such a conclusion. It might not alter its behavior, it might try to escape the perceived or real simulation, or the risk of being in a simulation might even make it docile:

In particular, if an AI with resource-satiable final goals believes that in most simulated worlds that match its observations it will be rewarded if it cooperates (but not if it attempts to escape its box or contravene the interests of its creator) then it may choose to cooperate. […] A mere line in the sand, backed by the clout of a nonexistent simulator, could prove a stronger restraint than a two-foot-thick solid steel door. (Chapter 9: The control problem)

What do we want?

So it’s not clear that we could get a superintelligence to do what we want. But say we did; it then remains to specify what we want.

Giving the superintelligence too simple goals wouldn’t be a good idea:

An AI, by contrast, need not care intrinsically about any of those things [that humans care about]. There is nothing paradoxical about an AI whose sole final goal is to count the grains of sand on Boracay, or to calculate the decimal expansion of pi, or to maximize the total number of paperclips that will exist in its future light cone. In fact, it would be easier to create an AI with simple goals like these than to build one that had a human-like set of values and dispositions. (Chapter 7: The superintelligent will)

It’s hard to come up with goals that would be good for humanity in general and that don’t leave the door open to unintended consequences. If we told it to “make us smile”, it might just paralyze all our faces with the corners of our mouths drawn back.

It’s important to get this right, because the goals might be hard to change once the superintelligence exists. But are we sure that our moral judgments right now are exactly right? People in the past probably also thought they had figured things out, but in hindsight we know that many of their beliefs were wrong (“the world is flat”) and we object to many of their views (“it’s ok to have slaves”). So our values change:

We humans often seem happy to let our final values drift. This might often be because we do not know precisely what they are. It is not surprising that we want our beliefs about our final values to be able to change in light of continuing self-discovery or changing self-presentation needs. However, there are cases in which we willingly change the values themselves, not just our beliefs or interpretations of them. (Chapter 7: The superintelligent will)

Bostrom proposes a concept called indirect normativity to deal with this issue, in which we let the superintelligence figure out what better moral standards are and help us live by them starting now:

Indirect normativity is a way to answer the challenge presented by the fact that we may not know what we truly want, what is in our interest, or what is morally right or ideal. Instead of making a guess based on our own current understanding (which is probably deeply flawed), we would delegate some of the cognitive work required for value selection to the superintelligence. (Chapter 13: Choosing the criteria for choosing)

The superintelligence should also act not only on our short-run urges and passions, but on a more rational and reflective set of preferences. In particular, on what Bostrom calls “second-order desires”:

An individual might have a second-order desire (a desire concerning what to desire) that some of her first-order desires not be given weight when her volition is extrapolated. For example, an alcoholic who has a first-order desire for booze might also have a second-order desire not to have that first-order desire. (Chapter 13: Choosing the criteria for choosing)

So people can have preferences over preferences. I don’t enjoy reading 19th century classic French novels, but I have a preference for wanting to enjoy them.

So would the superintelligence slap my shallow Third World War blockbuster novel out of my hands and put Victor Hugo there instead? I suppose it would have a more subtle way.

We would therefore have a way to modify our tastes and to choose what to like.

So do it?

Say we had solved these problems, so we (i) could actually get the superintelligence to do what we want and (ii) had figured out exactly what we want. Should we press the ignite button, start up the superintelligence and let it do its work?

I don’t think so. I think we still want clarity and truth, and not to be fooled. Simon Blackburn writes:

We might say: one of our concerns is not to be deceived about whether our concerns are met. (Chapter 8: What to Do)

Admittedly, an argument can be made for the opposite. Someone in pain might wish to have his senses clouded with medicine. And not all information is always desirable. I’ll gladly not find out how mediocre the pictures are that I took on my last holiday.

So our minds already implement what Roland Bénabou and Jean Tirole (pdf) describe as a

[…] tradeoff between accuracy and desirability [in how we form our beliefs]. (p142)

But that’s the thing: our mind actively implements it. We want to build our own model of the world, even if some of our beliefs about it are distorted, rather than live in a perfect bubble. There’s a “premium” (pdf) that we’re willing to pay, simply to stay in control.

I choose the red pill.

