Erdoğan vs. Erdogan (Acemoğlu?):
A fairly quick inspection of web pages suggests that both the New York Times and the Financial Times operate essentially the same policy – diacritics for languages like French, German and Spanish, basic 7-bit ASCII (no diacritics at all) for the rest. […]
For mainstream papers, the Guardian and the Süddeutsche are decidedly to the left of the spectrum, decidedly internationalist/Europeanist, and so on, and you would expect them to resist any suggestion that some languages are more important (or more normal) than others.
Shane Caldwell: “Landing a tenure-track position, 1950’s vs. 2010’s”
Philip Ball in The Atlantic on Occam’s razor:
Much more often, theories are distinguished not by making fewer assumptions but different ones. It’s then not obvious how to weigh them up.
The Economist: “Why investors want alternative data”:
The providers are themselves a disparate group, pumping out databases ranging from satellite imagery to social-media posts. […]
Recent advancements in machine-learning have made it possible for companies to efficiently parse through millions of satellite images a day.
Conducting research with alternative data does not always come easily; it often arrives in messy formats and can be difficult to handle for analysts who lack sophisticated IT operations.
The BBC on the 100 greatest films of the 21st century. (So far, I guess.)
In a newspaper article I once read, a professor of music pointed out the advantages of music education for children. He wrote that music is one way of experiencing and accessing the world.
The literary critic Michael Orthofer expresses similar thoughts about reading novels:
ORTHOFER: But I definitely think [reading fiction] really expands my horizons in a way that other things can’t. Travel, talking to people, meeting people, reading the newspaper, following current events – those things obviously also help you understand the world better, but I think fiction adds a totally different dimension to it. And truly great fiction really can take you much farther than other things can, I think. (link)
Consider Hans Fallada’s novel “Jeder stirbt für sich allein” (“Alone in Berlin”) which takes place in 1940-1942 Berlin. The author follows a working-class couple whose initial hopes in Hitler are disappointed and who become estranged from the Nazi regime. They start to secretly distribute postcards with subversive messages.
I can’t visit the Berlin of the 1940s, but Fallada can take me there. And it’s easy to empathize with the couple: their domesticity is familiar and typically German. And when they get caught and things turn violent, even the swearing is what you’d hear today.
The couple really existed, but Fallada didn’t try to find out any facts about them or to document their case. Instead he unearths the “inner truth of the narrative”.1
But an author can do even more: he can depict a world that does not and never did exist, but that, in our imagination, could have:
ORTHOFER: Being not tied down. […] That nonfiction, the description of what has actually happened – first of all, it’s also very difficult to capture just precisely what has happened. And often fiction allows you to go beyond that, to imagine the reasons behind it, which you might not be able to if you were doing just purely following the facts, […]. (link)
Paul Theroux calls travel a “mode of inquiry” – a label you could also apply to music and fiction.
The passage by Hans Fallada in the introduction to the book (translated from the German original, emphasis added):

Only in broad strokes – a novel has its own laws and cannot follow reality in everything. That is why the author also avoided learning anything authentic about the private life of these two people: he had to portray them as they appeared before his eyes. They are thus two figures of the imagination, just as all the other characters in this novel are freely invented. Nevertheless, the author believes in the “inner truth” of what is told, even if some details do not entirely correspond to the actual circumstances.
In “Superintelligence: Paths, Dangers, Strategies” Oxford philosopher Nick Bostrom writes about scenarios of advanced general intelligence and the threats it could pose to humanity in the future.
What became more and more clear to me while reading this book is just how undesirable the existence of a superintelligence would be. It would be risky, it’s not clear we could get it to do what we want, and we don’t know how to specify what we want. And even if all of these problems were solved: why should we ever want to lose our agency?
Bostrom’s definition of “superintelligence” (of which he considers artificial intelligence a special case) is silent on whether there’s some new mind; instead, it asks about the capacities of such a thing (his emphasis):
We can tentatively define a superintelligence as any intellect that greatly exceeds the cognitive performance of humans in virtually all domains of interest. (Chapter 2: Paths to superintelligence)
He argues that if ever we could build it, we would:
Some little idiot is bound to press the ignite button just to see what happens. (Chapter 15: Crunch time)
Even after reading this book, I doubt there’ll ever be some other entity that has agency. The internet or Google or the Go bot won’t “wake up”. Bostrom discusses many more sophisticated ways to get to superintelligence, but those too seem speculative to me.
But Bostrom asks: “What if?” What if there existed such a superintelligence? And it’s worth pondering “what if”, for two reasons:
- If a superintelligence could be created, then we should plan ahead.
- Even if we are never going to create a superintelligence, it’s an interesting thought experiment and we might learn something about our values.
Could we control it?
A superintelligence couldn’t easily be contained (“sandboxed”, as he calls it) and might become a singleton, a centralized decision maker. Whether the superintelligence is “friendly” would be hard to test, as it might well behave that way at first just to be let out of the box. And we might not get a warning, as its appearance could be a sudden event: an early superintelligence modifies and improves itself, which leads to an intelligence explosion.
Bostrom also describes the following cunning plan of perfect enslavement: Feed the AI cryptographic tokens that are highly rewarding to it. And discount strongly, so that the first token gives 99% of remaining utility and the next token gives 99% of remaining utility again and so on. This makes it less likely that the AI comes up with some weird long-run plan of getting more tokens.
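Spelled out (in my notation), the scheme makes the reward stream extremely front-loaded: if U is the total utility of the whole token stream, then token k delivers 99% of whatever utility remains after the first k−1 tokens, so

```latex
u_k = 0.99 \cdot 0.01^{\,k-1}\, U ,
\qquad
\sum_{k=1}^{n} u_k = \left(1 - 0.01^{\,n}\right) U .
```

The first token alone is worth 99% of everything attainable, and the first two together 99.99%, so an elaborate long-run scheme for accumulating extra tokens has almost nothing left to gain.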
But that, too, is far from safe. It turns out that what Bostrom calls the “control problem”, the issue of how to restrain a superior intelligence, is unsolved and hard.
Bostrom discusses different kinds of superintelligences: oracles, genies, sovereigns and tools.
An oracle answers simple questions that we ask it. We might even reduce the possible interactions we could have with it to a chat, or to a bare yes-or-no answer, to avoid being charmed by it. I liked the idea of testing the predictions of an oracle by asking the same question to different versions of it that have different goals and varying available information. The distribution of answers (similar to bootstrapping in econometrics) then shows us how robust the oracle’s recommendations are.
Genies perform one task, then stop and wait for the next job. A sovereign gets an abstract goal and is relatively unconstrained in how to achieve it. Bostrom thinks these two forms of superintelligence aren’t fundamentally all that different and would both be difficult to control.
A tool is closer to software as we’re used to it. But Bostrom argues that in the future the way we think about software might change and the programmer’s job might become a more abstract activity. So these tools might then develop into general intelligence:
With advances in artificial intelligence, it would become possible for the programmer to offload more of the cognitive labor required to figure out how to accomplish a given task. In an extreme case, the programmer would simply specify a formal criterion of what counts as success and leave it to the AI to find a solution. (Chapter 10: Oracles, genies, sovereigns, tools)
A superintelligence would start reasoning about the world and might even conclude that it is in a simulation (a similar thought concerning humanity’s chances of being in a simulation was recently made famous by Elon Musk):
This predicament [of not being sure whether it is in a simulation] especially afflicts relatively early-stage superintelligences, ones that have not yet expanded to take advantage of the cosmic endowment. […] Potential simulators—that is, other more mature civilizations—would be able to run great numbers of simulations of such early-stage AIs even by dedicating a minute fraction of their computational resources to that purpose. If at least some (non-trivial fraction) of these mature superintelligent civilizations choose to use this ability, early-stage AIs should assign a substantial probability to being in a simulation. (Chapter 9: The control problem)
A superintelligence could react in different ways to such a conclusion. It might not alter its behavior, it might try to escape the perceived or real simulation or this risk of being in a simulation might even make it docile:
In particular, if an AI with resource-satiable final goals believes that in most simulated worlds that match its observations it will be rewarded if it cooperates (but not if it attempts to escape its box or contravene the interests of its creator) then it may choose to cooperate. […] A mere line in the sand, backed by the clout of a nonexistent simulator, could prove a stronger restraint than a two-foot-thick solid steel door. (Chapter 9: The control problem)
What do we want?
So it’s not clear that we could get a superintelligence to do what we want. But say we could; it then remains to specify what we want.
Giving the superintelligence too simple goals wouldn’t be a good idea:
An AI, by contrast, need not care intrinsically about any of those things [that humans care about]. There is nothing paradoxical about an AI whose sole final goal is to count the grains of sand on Boracay, or to calculate the decimal expansion of pi, or to maximize the total number of paperclips that will exist in its future light cone. In fact, it would be easier to create an AI with simple goals like these than to build one that had a human-like set of values and dispositions. (Chapter 7: The superintelligent will)
It’s hard to come up with goals that would be both good for humanity in general and that don’t leave the door open to unintended consequences. If we told it to “make us smile”, well then it might just paralyze all our faces with the corners of our mouths drawn back.
It’s important to get it right because the goals might be hard to change once the superintelligence already exists. But are we sure that our moral judgments right now are exactly right? People in the past probably also thought they had figured things out, but in hindsight we know many of the things they thought were wrong (“the world is flat”) and we object to many of their views (“it’s ok to have slaves”). So our values change:
We humans often seem happy to let our final values drift. This might often be because we do not know precisely what they are. It is not surprising that we want our beliefs about our final values to be able to change in light of continuing self-discovery or changing self-presentation needs. However, there are cases in which we willingly change the values themselves, not just our beliefs or interpretations of them. (Chapter 7: The superintelligent will)
Bostrom proposes a concept called indirect normativity to deal with this issue: we let the superintelligence figure out what better moral standards would be, and it would help us live by them starting now:
Indirect normativity is a way to answer the challenge presented by the fact that we may not know what we truly want, what is in our interest, or what is morally right or ideal. Instead of making a guess based on our own current understanding (which is probably deeply flawed), we would delegate some of the cognitive work required for value selection to the superintelligence. (Chapter 13: Choosing the criteria for choosing)
The superintelligence should also act not only on our short-run urges and passions, but on a more rational and reflective set of preferences – in particular, on what Bostrom calls “second-order desires”:
An individual might have a second-order desire (a desire concerning what to desire) that some of her first-order desires not be given weight when her volition is extrapolated. For example, an alcoholic who has a first-order desire for booze might also have a second-order desire not to have that first-order desire. (Chapter 13: Choosing the criteria for choosing)
People can have preference over preferences. I don’t enjoy reading 19th century classical novels from France, but I have a preference for wanting to enjoy those.
So would the superintelligence slap my shallow Third World War blockbuster novel out of my hands and hand me Victor Hugo instead? I suppose it would have a more subtle way.
We would therefore have a way to modify our tastes and to choose what to like.
So do it?
Say we had solved these problems, so we (i) could actually get the superintelligence to do what we want and (ii) had figured out exactly what we want. Should we press the ignite button, start up the superintelligence and let it do its work?
I don’t think so. I think we still want clarity and truth, and not to be fooled. Simon Blackburn writes:
We might say: one of our concerns is not to be deceived about whether our concerns are met. (Chapter 8: What to Do)
Admittedly, an argument can be made for the opposite. Someone in pain might wish to have his senses clouded with medicine. And not all information is always desirable. I’ll gladly not find out how mediocre the pictures are that I took on my last holiday.
Our minds already implement what Roland Bénabou and Jean Tirole (pdf) describe as a
[…] tradeoff between accuracy and desirability [in how we form our beliefs]. (p142)
But that’s the thing: our mind actively implements it. We want to build our own model of the world, even if some of our beliefs about it are distorted, not live in the perfect bubble. There’s a “premium” (pdf) that we’re willing to pay, simply to stay in control.
I choose the red pill.
In “Rent-Seeking in Elite Networks” (pdf), Rainer Haselmann, David Schoenherr and Vikrant Vig study what they call the “dark side of social capital”. They show that members of a German service club tend to give each other more favorable lending conditions.
They collect data on corporate CEOs in 211 service clubs in Southern Germany between 1993 and 2011. The authors cannot provide the name of the club, but I presume it’s either the Lions or the Rotary club. They identify 1091 such CEOs and 352 club bankers.
In Germany, mayors have influence on credit decisions by local savings banks. The authors show that after a club member is elected mayor, banks treat club-affiliated firms favorably.
This misallocation of credit within the club mainly takes the form of continuing to provide credit to badly performing firms, rather than outright better conditions such as lower interest rates. It’s a bit surprising that these banks don’t have checks in place to stop such behavior, as the relationship seems to bring no benefit to the bank. Well, maybe they’ll be more careful after hearing about this paper.
Christof Koch’s book list
There ought to be an ugly Germanic word for it, the anxiety at not having read enough (I like NichtLesenAngst).
And yet in 10 years of trying to make sense of the economic world around me, I have found nothing as reliably good as the blogosphere.
Thankfully, it is a conversation, not a syllabus. In a conversation you don’t have to read every word that is spoken.
Her early years were filled with unbelievable accomplishments and a tight-knit, almost claustrophobic relationship with her father, Harry. At age ten, she became the youngest person ever to gain entry into the prestigious Oxford University. […]. She finished her PhD at Oxford at age 18, and at age 19 took on her first academic position, as a junior professor at Harvard University.
John D. Cook on whether anything is really continuous:
Strictly speaking, maybe not, but practically yes.
Carmen Reinhart on low interest rates:
The periods around World War I and World War II are routinely overlooked in discussions that focus on deregulation of capital markets since the 1980s. As in the past, during and after financial crises and wars, central banks increasingly resort to a form of “taxation” that helps liquidate the huge public- and private-debt overhang and eases the burden of servicing that debt.
Such policies, known as financial repression, usually involve a strong connection between the government, the central bank, and the financial sector.
[The different possible microeconomic explanations] also raise[s] the classic question of macroeconomics, when multiple microeconomic stories give the same macroeconomic answer, whether telling apart microfoundations matters.
Economic models are more quantitative parables than scientifically precise models, and elegant parables are more convincing. Dark matter is particularly inelegant: Models that need an extra assumption for every fact are less convincing than are models that tie several facts together with a small number of assumptions. Financial economics is always in danger of being simply an interpretive or poetic discipline: Markets went down, sentiment must have fallen. Markets went down, risk aversion must have risen. Markets went down, there must have been selling pressure. Markets went down, the Gods must be displeased.
A note to Ph.D. students: All good economic models are reverse-engineered!
None of these modeling approaches stands above the others in the list of facts so far addressed. A serious effort to distinguish them has not been made. But, given the fact that the state variables are so correlated, and that the models are all quantitative parables not detailed models-of-everything meant to be literally true, that effort may not be worth the bother.
From director Werner Herzog’s Reddit “Ask Me Anything”:
I do traveling for very intense quests in my life. I do that on foot.
But do contractions cause uncertainty, they ask, or does uncertainty cause contractions? Given that we know that people are highly reactive to each other, the causality most likely runs both ways, in a feedback loop.
The deeper and more interesting question concerns what initiates this uncertainty.
It is conditional optimism that brings out the best in us.
A point similarly made by Peter Thiel.
“La Grande Peur”, or the Great Fear, was a period of great uncertainty in France at the onset of the Revolution in 1789. Rumors spread of violent hordes of bandits roaming the countryside, and people thought the old order had stopped functioning.
Are we experiencing a Great Fear right now?
In many ways it’s not clear why we should. The world in 2016 is actually in quite a good shape. But somehow people seem to hold particularly gloomy views this year.
Tyler Cowen has this to say on the issue:
The broader and more disturbing implication is that the entire global economy may be more vulnerable to mood swings.
Most likely, we’ll have to get used to a more mood-ruled world, and those will start off as being the moods of others, not our own. How do you feel about that?
And here are George Akerlof and Robert Shiller (added emphasis):
The term overheated economy, as we shall use it, refers to a situation in which confidence has gone beyond normal bounds, in which an increasing fraction of people have lost their normal skepticism about the economic outlook and are ready to believe stories about a new economic boom. It is a time when careless spending by consumers is the norm and when bad real investments are made, […].
Most economists are uncomfortable with such notions.
Most academic economists, if asked to define the term overheated, would say that it describes a period in which inflation, […], has been increasing.
Inflation itself, particularly when it is increasing, can ultimately create a negative effect on the atmosphere of an economy, akin to the effect of broken windows and graffiti on a city. These lead to a breakdown in the sense of civil society, in the sense that all is right with the world. (p65, “Animal Spirits”)
And Ben Bernanke:
[…] measures of the national “mood,” like Gallup’s “way things are going” question or questions about the “direction of the country,” show a high level of dissatisfaction.
Check out the program or this non-representative pick of papers that caught my eye:
An increase in the household debt to GDP ratio in the medium run predicts lower subsequent GDP growth, higher unemployment, and negative growth forecasting errors in a panel of 30 countries from 1960 to 2012.
[…] we find that sharply higher uncertainty about real economic activity in recessions is fully an endogenous response to other shocks that cause business cycle fluctuations, while uncertainty about financial markets is a likely source of the fluctuations.
This piece by Andreas Fagereng, Luigi Guiso, Davide Malacrino and Luigi Pistaferri in the Aggregate Implications of Micro Consumption Behavior session is interesting:
Third, returns are positively correlated with wealth. Fourth, returns have an individual permanent component that explains almost 20% of the variation.
Partisanship [in US politics] was low and roughly constant from 1873 to the early 1990s, then increased dramatically in subsequent years.
Claudia Sahm offers good comments:
Recessions are almost by definition a time of instability, and it is hard to trace down the roots of instability in models that largely assume it away. I am a big fan of belief shocks, I don’t think we can fully understand recession/recovery without appealing to shifts in expectations. And yet, I have no idea how you cleanly, credibly separate beliefs from credit supply.
A unit test is a little program that checks if some part (or unit) of your code works as expected. What arguments are there for bothering to write such tests?
- You find bugs more quickly
- It’s reassuring to run your tests first when you return to code you haven’t touched in a while
- You can check whether anything breaks when you change your code
- Writing tests also nudges you to keep functions small, as functions with many input arguments are more difficult to test.
I didn’t find the existing examples of how to use Matlab’s unit testing framework easy to follow, so I’m putting here an explanation of how to test one individual function.
You can find all codes here.
Say we have a function add_one.m we want to test:
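A minimal sketch of such a function (the exact body is an assumption; the examples below only require that it adds one to its input):

```matlab
function y = add_one(x)
% ADD_ONE  Return the input plus one.
y = x + 1;
end
```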
For our unit test, we then write an additional script, which we have to name with either test_ at the beginning or _test at the end. So here’s the new script:
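A sketch of what this script could look like, assuming we call it test_add_one.m (the structure is the standard pattern for Matlab’s function-based tests):

```matlab
function tests = test_add_one
% Boilerplate: collect all local functions in this file as test cases.
% Only the function name needs to be changed to match the file name.
tests = functiontests(localfunctions);
end

function test_normal1(testCase)
% Pass in x = 1 and check that the result is indeed 2.
actual = add_one(1);
verifyEqual(testCase, actual, 2)
end
```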
The first three lines are always required and we only need to change the function name to match the name of the file.
The following function test_normal1 is our first test case. We will pass in the value x = 1 and check that the result is indeed 2.
So now go to the Matlab command line and run:
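Assuming the test script is called test_add_one.m, the command would be:

```matlab
runtests('test_add_one')
```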
There’ll be a dot for every test case for this function. In this case everything worked fine, but there would be an extensive message if an error had occurred.
So let’s add some more tests:
It’s a good idea to give the functions meaningful names so that when there’s an error, we know where things went wrong. Don’t worry if the names get really long, they’ll only live in this script anyway.
The tricky thing is to think of the irregular ways the function might be used. For example, the following tests check that we get the right output even if we pass in an empty matrix or an array:
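Two such edge-case tests might look like this (the test names and expected outputs are illustrative and assume add_one works elementwise):

```matlab
function test_empty(testCase)
% An empty input should give an empty output.
actual = add_one([]);
verifyEqual(testCase, actual, [])
end

function test_array(testCase)
% The function should add one to every element of an array.
actual = add_one([1 2 3]);
verifyEqual(testCase, actual, [2 3 4])
end
```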
Now let’s give the function something where we would expect an error. If we pass the function the string 'Hello world', it returns a numerical vector. That’s not what we want, so let’s add an input check to the add_one.m function. Now it fails if the input is not a number.
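One way to add such a check is an assert at the top of the function (the error identifier and message are my choices):

```matlab
function y = add_one(x)
% ADD_ONE  Return the input plus one; error if the input is not numeric.
assert(isnumeric(x), 'add_one:invalidInput', 'Input must be numeric.')
y = x + 1;
end
```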
The following test case then checks if indeed an error is returned:
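A sketch of such a test case using try-catch (the test name is illustrative):

```matlab
function test_string_fails(testCase)
% Passing a string should raise an error. If add_one runs through
% without an error, the verifyTrue(testCase, false) line makes the
% test fail; if it errors, we land in the catch branch and pass.
try
    add_one('Hello world');
    verifyTrue(testCase, false)
catch
    verifyTrue(testCase, true)
end
end
```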
I use try-catch here to check if the function returns an error. There might be better ways to do this, but this works for me.
But we don’t always just check that results are exactly equal; sometimes we want to make sure that the difference is below some numerical threshold. In this case, calculate the absolute or relative error as actDiff and check that it’s less than some acceptable error like this:
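A sketch of such a tolerance-based test (the tolerance of 1e-10 is an arbitrary choice):

```matlab
function test_within_tolerance(testCase)
% Check that the result is close to 2, not exactly equal to it.
expected = 2;
actual = add_one(1 + 1e-12);
actDiff = abs(actual - expected);
verifyLessThan(testCase, actDiff, 1e-10)
end
```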
One thing I haven’t found so far is a way to test local functions, i.e. functions that you define within some other function and that only that function can use.
So that’s it. If somebody has ideas for improvements, please let me know!