Between 1892 and 1919, 474 companies started trading their shares on the Berlin stock exchange. The authors take the change in a stock’s price on its first day of trading as a measure of “underpricing”, which indicates how much asymmetric information there is in the market.
Underpricing is bad for a firm, as it receives fewer funds than if it had sold its shares at a higher price. (Google, for example, went to great lengths to determine a good price for its IPO.) And with more capital a firm can invest more in research and so be granted more patents later. So one has to argue that this effect can’t be strong enough to cause reverse causality.
Lehmann-Hasemeyer and Streb control for what investors knew at the time of the initial public offering (IPO) about how innovative firms already were. For this, they count the number of patents a firm had been granted before. So patents are proxies for the innovativeness of a firm. This is an example of using patents as “inputs” to the technological process, in Zvi Griliches’ wording.
Research is a risky activity, so there might be more asymmetric information in the price of stocks of research-intensive companies. But that’s not what they find: there was little underpricing in the stocks of firms that continued to be innovative after the IPO. This might be due to the screening of banks:
Overall, German universal banks seemed to be well informed about the market value of firms that planned to go public. The comparatively low underpricing that occurred at the Berlin stock exchange during Germany’s high industrialization might therefore indicate that investors’ uncertainty was rather small because they knew that banks brought only those firms to the market that met certain minimum quality requirements.
They conclude that investors must have had more information than patent counts:
[Investors] were capable of distinguishing between permanently innovative firms and firms with sharply declining innovativeness (Buddenbrooks), even though both types of firms looked very similar at the date of the IPO with respect to their patent history. This observation implies that pure patent counts that are often used in cliometric studies of innovation might not be a good proxy for the knowledge that was available at the date of an IPO.
The paper is forthcoming in the American Economic Review.
There’s a species of books in which a commonly-held view by established researchers is criticized by someone from outside the profession and supposedly shown to be wrong.
There’s nothing wrong with people who are not scientists writing about science. But two lines of argument in those books aren’t convincing:
The outsider does not have to hold to established views to advance and has therefore figured out something that people within the profession have missed or aren’t allowed to say.
The respective branch of science is not an experimental science and can therefore not establish causality.
The first is rarely the case. Science isn’t a closed environment where you’re not allowed to speak your mind. Dani Rodrik writes in his “Ten Commandments for Noneconomists”:
[Nine.] If you think all economists think alike, attend one of their seminars.
[Ten.] If you think economists are especially rude to noneconomists, attend one of their seminars.
And the second argument is misleading. How did we figure out that the Earth orbits the Sun or that smoking causes cancer? Not through experiments. Also, if scientists can’t claim to identify causality, why should the outsider?
In “The Nurture Assumption” (which is otherwise an interesting book), Judith Rich Harris writes (added emphasis):
[Socialization research] is a science because it uses some of the methods of science, but it is not, by and large, an experimental science. To do an experiment it is necessary to vary one thing and observe the effects on something else. Since socialization researchers do not, as a rule, have any control over the way parents rear their children, they generally cannot do experiments. Instead, they take advantage of existing variations in parental behavior. They let things vary naturally and, by systematically collecting data, try to find out what things vary together. In other words, they do correlational studies.
Tom Wolfe’s book (which I haven’t read) sounds a lot like that as well:
Evolution, [Tom Wolfe] argues, isn’t a “scientific hypothesis” because nobody’s seen it happen, there’s no observation that could falsify it, it yields no predictions and it doesn’t “illuminate hitherto unknown or baffling areas of science.” Wrong - four times over.
I like Jerry Coyne’s final take-down:
Somewhere on his mission to tear down the famous, elevate the neglected outsider and hit the exclamation-point key as often as possible, Wolfe has forgotten how to think.
So it is that the Millennial Whoop evokes a kind of primordial sense that everything will be alright. […] In the age of climate change and economic injustice and racial violence, you can take a few moments to forget everything and shout with exuberance at the top of your lungs. Just dance and feel how awesome it is to be alive right now. Wa-oh-wa-oh.
A fairly quick inspection of web pages suggests that both the New York Times and the Financial Times operate essentially the same policy – diacritics for languages like French, German and Spanish, basic 7-bit ASCII (no diacritics at all) for the rest. […]
For mainstream papers, the Guardian and the Süddeutsche are decidedly to the left of the spectrum, decidedly internationalist/Europeanist, and so on, and you would expect them to resist any suggestion that some languages are more important (or more normal) than others.
In a newspaper article I once read, a professor of music pointed out the advantages of music education for children. He wrote that music is one way of experiencing and accessing the world.
The literary critic Michael Orthofer expresses similar thoughts about reading novels:
ORTHOFER: But I definitely think [reading fiction] really expands my horizons in a way that other things can’t. Travel, talking to people, meeting people, reading the newspaper, following current events – those things obviously also help you understand the world better, but I think fiction adds a totally different dimension to it. And truly great fiction really can take you much farther than other things can, I think. (link)
Consider Hans Fallada’s novel “Jeder stirbt für sich allein” (“Alone in Berlin”) which takes place in 1940-1942 Berlin. The author follows a working-class couple whose initial hopes in Hitler are disappointed and who become estranged from the Nazi regime. They start to secretly distribute postcards with subversive messages.
I can’t visit the Berlin of the 1940s, but Fallada can take me there. And it’s easy to empathize with the couple. Their domesticity is familiar and typically German. And when they get caught and things turn violent, even the swearing is what you hear today.
The couple really existed, but Fallada didn’t try to find out any facts about them or to document their case. Instead he unearths the “inner truth of the narrative”.1
But an author can do even more: depict a world that does not and never did exist, but that, in our fantasy, could have:
ORTHOFER: Being not tied down. […] That nonfiction, the description of what has actually happened – first of all, it’s also very difficult to capture just precisely what has happened. And often fiction allows you to go beyond that, to imagine the reasons behind it, which you might not be able to if you were doing just purely following the facts, […]. (link)
Paul Theroux calls travel a “mode of inquiry”, a label that fits music and fiction just as well.
The passage by Hans Fallada, from the introduction to the book (translated from the German; added emphasis):
Only in broad strokes – a novel has its own laws and cannot follow reality in everything. The author has therefore also avoided learning anything authentic about the private life of these two people: he had to portray them as they appeared before his eyes. They are thus two figures of the imagination, just as all the other characters of this novel are freely invented. Nevertheless, the author believes in the “inner truth” of the story, even if some details do not quite correspond to the actual circumstances.
What became more and more clear to me while reading this book is just how undesirable the existence of a superintelligence would be. It would be risky, it’s not clear we could get it to do what we want, we don’t know how to specify what we want, and even if all these conditions were fulfilled: Why should we ever want to lose our agency?
Bostrom’s definition of “superintelligence” (of which he considers artificial intelligence a special case) is silent on whether there is some new mind; instead it asks about the capacities of such a thing (his emphasis):
We can tentatively define a superintelligence as any intellect that greatly exceeds the cognitive performance of humans in virtually all domains of interest. (Chapter 2: Paths to superintelligence)
He argues that if we could ever build one, we would:
Some little idiot is bound to press the ignite button just to see what happens. (Chapter 15: Crunch time)
Even after reading this book, I doubt there’ll ever be some other entity that has agency. The internet or Google or the Go bot won’t “wake up”. Bostrom discusses many more sophisticated ways to get to superintelligence, but those too seem speculative to me.
But Bostrom asks: “What if?” What if there existed such a superintelligence? And it’s worth pondering “what if”, for two reasons:
If a superintelligence could be created, then we should plan ahead.
Even if we are never going to create a superintelligence, it’s an interesting thought experiment and we might learn something about our values.
Could we control it?
A superintelligence couldn’t easily be contained (“sandboxed”, as he calls it) and might become a singleton, a centralized decision maker. Whether the superintelligence is “friendly” would be hard to test, as it might well behave that way at first just to be let out of the box. And we might not get a warning, as its appearance could be a sudden event: the early superintelligence modifies and improves itself, which leads to an intelligence explosion.
Bostrom also describes the following cunning plan of perfect enslavement: Feed the AI cryptographic tokens that are highly rewarding to it. And discount strongly, so that the first token gives 99% of remaining utility and the next token gives 99% of remaining utility again and so on. This makes it less likely that the AI comes up with some weird long-run plan of getting more tokens.
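To make the arithmetic of that reward schedule concrete, here is a minimal sketch (my own illustration, not code from the book) of its geometric structure:

```python
# Toy illustration of the token-discounting scheme: each successive
# token is worth 99% of whatever utility still remains, so almost
# all attainable utility is front-loaded into the first few tokens.

def token_utility(k: int, fraction: float = 0.99) -> float:
    """Utility of the k-th token (1-indexed): `fraction` of the
    utility remaining after the first k-1 tokens were consumed."""
    remaining = (1.0 - fraction) ** (k - 1)
    return fraction * remaining

# The first token is worth 0.99, the second roughly 0.0099, and the
# first two together already capture about 99.99% of all utility,
# leaving little incentive for long-run token-hoarding plans.
first_two = token_utility(1) + token_utility(2)
```

Under such a schedule the present token is always worth far more than all future tokens combined, which is why a weird long-run plan pays so little.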
But that, too, is far from safe. It turns out that what Bostrom calls the “control problem”, the issue of how to restrain a superior intelligence, is unsolved and hard.
Bostrom discusses different kinds of superintelligences: oracles, genies, sovereigns and tools.
An oracle answers simple questions that we ask it. We might even reduce the possible interactions we could have with it to a chat or to only a yes or no statement, to avoid being charmed by it. I liked the idea of testing the predictions of an oracle by asking the same question to different versions of it that have different goals and varying available information. The distribution of answers (similarly as in bootstrapping in econometrics) then shows us how robust the oracle’s recommendations are.
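The ensemble idea can be sketched as follows (my own illustration; `fake_oracle` is a hypothetical stand-in, not anything from the book):

```python
import random
from collections import Counter

def fake_oracle(question: str, variant_seed: int) -> str:
    """Hypothetical stand-in for one oracle variant; the seed
    represents its particular goals and information set."""
    rng = random.Random(variant_seed)
    # A real oracle would reason about the question; here we just
    # return a pseudo-random yes/no per variant for illustration.
    return rng.choice(["yes", "no"])

def answer_distribution(question: str, n_variants: int = 100) -> Counter:
    """Pose the same question to many oracle variants and tally the
    answers, loosely analogous to bootstrapping in econometrics."""
    return Counter(fake_oracle(question, s) for s in range(n_variants))

dist = answer_distribution("Should we open the box?")
# A lopsided tally suggests a robust recommendation; a near 50/50
# split suggests the answer depends on the oracle's goals and
# information, i.e. it is not robust.
```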
Genies perform one task, then stop and wait for the next job. A sovereign gets an abstract goal and is relatively unconstrained in how to achieve it. Bostrom thinks these two forms of superintelligence aren’t fundamentally all that different and would both be difficult to control.
A tool is closer to software as we’re used to it. But Bostrom argues that in the future the way we think about software might change and the programmer’s job might become a more abstract activity. So these tools might then develop into general intelligence:
With advances in artificial intelligence, it would become possible for the programmer to offload more of the cognitive labor required to figure out how to accomplish a given task. In an extreme case, the programmer would simply specify a formal criterion of what counts as success and leave it to the AI to find a solution. (Chapter 10: Oracles, genies, sovereigns, tools)
A superintelligence would start reasoning about the world and might even conclude that it is in a simulation (a similar thought concerning humanity’s chances of being in a simulation was recently made famous by Elon Musk):
This predicament [of not being sure whether it is in a simulation] especially afflicts relatively early-stage superintelligences, ones that have not yet expanded to take advantage of the cosmic endowment. […] Potential simulators—that is, other more mature civilizations—would be able to run great numbers of simulations of such early-stage AIs even by dedicating a minute fraction of their computational resources to that purpose. If at least some (non-trivial fraction) of these mature superintelligent civilizations choose to use this ability, early-stage AIs should assign a substantial probability to being in a simulation. (Chapter 9: The control problem)
A superintelligence could react in different ways to such a conclusion. It might not alter its behavior, it might try to escape the perceived or real simulation or this risk of being in a simulation might even make it docile:
In particular, if an AI with resource-satiable final goals believes that in most simulated worlds that match its observations it will be rewarded if it cooperates (but not if it attempts to escape its box or contravene the interests of its creator) then it may choose to cooperate. […] A mere line in the sand, backed by the clout of a nonexistent simulator, could prove a stronger restraint than a two-foot-thick solid steel door. (Chapter 9: The control problem)
What do we want?
So it’s not clear that we could get a superintelligence to do what we want. But say we did; it then remains to specify what we want.
Giving the superintelligence too simple goals wouldn’t be a good idea:
An AI, by contrast, need not care intrinsically about any of those things [that humans care about]. There is nothing paradoxical about an AI whose sole final goal is to count the grains of sand on Boracay, or to calculate the decimal expansion of pi, or to maximize the total number of paperclips that will exist in its future light cone. In fact, it would be easier to create an AI with simple goals like these than to build one that had a human-like set of values and dispositions. (Chapter 7: The superintelligent will)
It’s hard to come up with goals that are both good for humanity in general and don’t leave the door open to unintended consequences. If we told it to “make us smile”, it might just paralyze our faces with the corners of our mouths drawn back.
It’s important to get it right, because the goals might be hard to change once the superintelligence exists. But are we sure that our moral judgments right now are exactly right? People in the past probably also thought they had figured things out, but in hindsight we know that many of the things they believed were wrong (“the world is flat”) and we object to many of their views (“it’s ok to have slaves”). So our values change:
We humans often seem happy to let our final values drift. This might often be because we do not know precisely what they are. It is not surprising that we want our beliefs about our final values to be able to change in light of continuing self-discovery or changing self-presentation needs. However, there are cases in which we willingly change the values themselves, not just our beliefs or interpretations of them. (Chapter 7: The superintelligent will)
Bostrom proposes a concept called indirect normativity to deal with this issue: we let the superintelligence figure out better moral standards, and it would help us live by them starting now:
Indirect normativity is a way to answer the challenge presented by the fact that we may not know what we truly want, what is in our interest, or what is morally right or ideal. Instead of making a guess based on our own current understanding (which is probably deeply flawed), we would delegate some of the cognitive work required for value selection to the superintelligence. (Chapter 13: Choosing the criteria for choosing)
The superintelligence should also not only act on our short-run urges and passions, but on a more rational and reflective set of preferences. In particular, what Bostrom calls “second-order desires”:
An individual might have a second-order desire (a desire concerning what to desire) that some of her first-order desires not be given weight when her volition is extrapolated. For example, an alcoholic who has a first-order desire for booze might also have a second-order desire not to have that first-order desire. (Chapter 13: Choosing the criteria for choosing)
People can have preferences over preferences. I don’t enjoy reading 19th-century classical French novels, but I have a preference for wanting to enjoy them.
So would the superintelligence slap my shallow Third World War blockbuster novel out of my hands and put Victor Hugo there instead? I suppose it would have a more subtle way.
Say we had solved these problems, so we (i) could actually get the superintelligence to do what we want and (ii) had figured out exactly what we want. Should we press the ignite button, start up the superintelligence and let it do its work?
I don’t think so. I think we still want clarity and truth, and not to be fooled. Simon Blackburn writes:
We might say: one of our concerns is not to be deceived about whether our concerns are met. (Chapter 8: What to Do)
Admittedly, an argument can be made for the opposite. Someone in pain might wish to have their senses clouded with medicine. And not all information is always desirable: I’ll gladly not find out how mediocre the pictures from my last holiday are.
Already now our minds implement what Roland Bénabou and Jean Tirole (pdf) describe as a
[…] tradeoff between accuracy and desirability [in how we form our beliefs]. (p142)
But that’s the thing: our mind actively implements it. We want to build our own model of the world, even if some of our beliefs about it are distorted, rather than live in a perfect bubble. There’s a “premium” (pdf) that we’re willing to pay simply to stay in control.
In “Rent-Seeking in Elite Networks” (pdf), Rainer Haselmann, David Schoenherr and Vikrant Vig study what they call the “dark side of social capital”. They show that members of a German service club tend to give each other more favorable lending conditions.
They collect data on corporate CEOs in 211 service clubs in Southern Germany between 1993 and 2011. The authors cannot provide the name of the club, but I presume it’s either the Lions or the Rotary club. They identify 1091 such CEOs and 352 club bankers.
In Germany, mayors have influence on credit decisions by local savings banks. The authors show that after a club member is elected mayor, banks treat club-affiliated firms favorably.
This misallocation of credit within the club mainly takes the form of continued lending to badly performing firms rather than of outright better conditions such as lower interest rates. It’s a bit surprising that these banks don’t have checks in place to stop such behavior, as the relationship seems to bring no benefit to the bank. Well, maybe they’ll be more careful after hearing about this paper.
People invest quite a lot into status. But status is sensitive to changes in the institutional frame.
In Germany, there’s a late night talk show called “Domian”, after the host, who has run the show for 20 years. One time I listened to the show and a man called in. He grew up in communist East Germany, where he had attended a prestigious elite academy that educated future diplomats. He learned to speak several languages and was on track to become an important public servant in his country.
But then the GDR collapsed and he lost everything. He now holds irregular jobs selling electronics and is poor. He never married, is lonely and depressed.
A lot of our status is specific to institutional settings. With hindsight, it seems obvious that this or that regime could not possibly have survived, so people must have been mistaken to put so much trust in them. But I doubt we can foresee the stability of our own institutions so reliably. Some would even say that chasing status is always doomed to failure, as:
Prestige is just fossilized inspiration.
Yet that seems exaggerated as well. A degree from Oxford was a useful status marker in the 19th century and continues to be one. But compare how the perceived desirability of a job at an investment bank has changed relative to that of working at a start-up or an incubator in just the last ten years. So maybe we should place a higher value on the things that survive institutional change, such as our health, the part of education that’s not signaling, and some assets.
But what probably hurts Domian’s caller most are his unfulfilled hopes. He could reasonably have expected to live among the elite of his country, but instead he lives at the fringes of a society he does not feel part of.
The literature on how happiness varies over the life cycle has documented a U-shape: young and old people report being happier than the middle-aged. Hannes Schwandt argues that this is due to unmet expectations in middle age. At the beginning of our careers we have great plans for ourselves, and often our aspirations exceed our achievements. But later in life this reverses, and we’re pleasantly surprised more often.
I’m not sure there’s much we can do to influence how these things turn out. We could try to care less about status, and we’d want to be able to deal well with setbacks.