1. Miles Kimball: “In Praise of Partial Equilibrium”
2. You Draw It (NYT) (Bayerischer Rundfunk). Also, this is great (through Robert Grant).
3. “Does Comey’s Dismissal Fit the Definition of a Constitutional Crisis?” (through Niall Ferguson)
4. Paul Goldsmith-Pinkham: “Do Credit Markets Watch the Waving Flag of Bankruptcy?” (ssrn)
5. Roman Cheplyaka: “Convert time interval to number in R”. See the bits of code that all return 10.
6. Language rules follow usage. (Through Steven Pinker)
7. The FRED Blog: “Newspapers are still more important than cheese”
8. Justin O’Beirne: “A Year of Google & Apple Maps”
9. Good article (in German) on what Germany should do with its current government surpluses.

# Maybe "economic policy uncertainty" is just firms disliking regulation

Why did economic policy uncertainty rise so strongly in 2016 while the stock market kept doing well?

Lubos Pastor and Pietro Veronesi debate (pdf) this:

But that result [from the model in their earlier paper which says that uncertainty is bad for stocks] assumes that the precision of political signals is constant over time. In contrast, we argue here that political signals have become less precise in recent months, especially after the November 2016 election.

They note that Trump says one thing one day and another the next. Firms therefore get noisier signals about the future course of economic policy, and the lack of a signal isn’t the same as uncertainty over outcomes. Their argument was also covered by the German daily FAZ (in German).

I’m not completely convinced. Maybe much of what we refer to as “economic policy uncertainty” is just firms being annoyed at regulation. Regulation, justified or not, is likely not great for corporate profits. Baker, Bloom and Davis (2016) interpret their indices (especially the industry-specific ones) as measuring “regulatory policy uncertainty” (p. 1621). But what if they are more a proxy for “regulatory policy” itself?

It’s like when people say “risk has gone up”: they often mean only downside risk. With Trump, I think, actual “uncertainty” (or what Pastor and Veronesi call “the precision of political signals”) is up, but the expected amount of regulation is far down. So expected profits rise and stocks benefit. But at the same time the newspapers are full of the word “uncertainty”, because there really is uncertainty about the future course of regulatory policy.


# Schelling's segregation model

I had a try at Schelling’s segregation model, as described on quant-econ.

In the model, agents are one of two types and live on (x,y) coordinates. They’re happy if at least half of their 10 closest neighbors are of the same type; otherwise they move to a new location.

My code is simpler than the solutions at the link, but I actually like it that way. In my version, agents just move to a random new location if they’re not happy; in the quant-econ example they keep moving until they’re happy. And I simulate this for a fixed number of cycles, not until everyone is happy.
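That procedure can be sketched as follows. This is a minimal Python version (the original was in Matlab); the 10-neighbor rule and the 50% threshold are from the post, while the agent counts, the unit-square geometry, and the number of cycles are my own choices:

```python
import numpy as np

def mean_same_type_share(pos, types, k=10):
    """Average share of each agent's k nearest neighbors that share its type."""
    d = np.linalg.norm(pos[:, None, :] - pos[None, :, :], axis=2)
    np.fill_diagonal(d, np.inf)             # an agent is not its own neighbor
    nbrs = np.argsort(d, axis=1)[:, :k]     # indices of the k nearest neighbors
    return (types[nbrs] == types[:, None]).mean()

def simulate_schelling(n_per_type=250, k=10, threshold=0.5,
                       n_cycles=20, seed=0):
    """Two agent types on the unit square. Each cycle, agents with fewer
    than `threshold` same-type agents among their k nearest neighbors
    jump to a uniformly random new location. Runs a fixed number of
    cycles rather than until everyone is happy."""
    rng = np.random.default_rng(seed)
    types = np.repeat([0, 1], n_per_type)
    pos = rng.random((2 * n_per_type, 2))
    for _ in range(n_cycles):
        d = np.linalg.norm(pos[:, None, :] - pos[None, :, :], axis=2)
        np.fill_diagonal(d, np.inf)
        nbrs = np.argsort(d, axis=1)[:, :k]
        same_share = (types[nbrs] == types[:, None]).mean(axis=1)
        unhappy = same_share < threshold
        pos[unhappy] = rng.random((int(unhappy.sum()), 2))  # random relocation
    return pos, types
```

Starting from a well-mixed configuration (the same-type share is about 0.5), the mean same-type share rises well above one half within a few cycles, which is the segregation pattern the images below show.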

In Matlab:

Which yields the following sequence of images:

The two groups separate quickly. Most of the action takes place in the first few cycles; afterwards, the remaining minority-type agents slowly move into their own type’s area.

In the paper, Schelling emphasizes the importance of where agents draw their boundaries:

In spatial arrangements, like a neighborhood or a hospital ward, everybody is next to somebody. A neighborhood may be 10 percent black or white; but if you have a neighbor on either side, the minimum nonzero percentage of neighbors of either opposite color is fifty. If people draw their boundaries differently, we can have everybody in a minority: at dinner, with men and women seated alternately, everyone is outnumbered two to one locally by the opposite sex but can join a three-fifths majority if he extends his horizon to the next person on either side.

# New working paper

We have a new working paper out with the title “Benign Effects of Automation: New Evidence from Patent Texts”. You can find it here. Any comments are much appreciated.

# Heckman on Econtalk

James Heckman was recently interviewed by Russ Roberts on Econtalk, which I quite enjoyed. Some bits:

(37:35) Heckman: […] What I worry about is what I think is more general, not just even about empirical work, is kind of the non-cumulative nature of a lot of work in economics.

[...]

In macroeconomics and other parts of economics there’s a practice called calibration. The calibrated models are models that are kind of looking at some old stylized facts that are putting together different pieces of data that are not mutually consistent. I mean, literally: you take estimates of this area, estimates of that area, and you assemble something that’s like a Frankenstein that then stalks the planet and stalks the profession, walking around. It’s got a labor supply parameter from labor economics and it’s got an output analysis study from Ohio, and on and on and on. And then out comes something, and sometimes a compelling story is told. But it’s a story. It’s not the data. And I think there’s a lack of discipline in some areas where people just don’t want to go to primary data sources.

[...]

But back in the 1940s at Chicago, there was a debate that broke out; and it was a debate really between Milton Friedman and Tjalling Koopmans. Although it wasn’t quite stated that way, it ended up that way. And that was this idea of measurement without theory. […] And so, it’s very appealing to say, ‘Let’s not let the theory get in the way. We have all the facts. We should look at facts. We should basically have a structure that is free of a lot of arbitrary theory and a lot of arbitrary structure.’ That’s very appealing. I would like it. The idea that we have is this purely inductive, Francis Bacon-like style–not the painter but the original philosopher. So, but the problem with that is, as Koopmans pointed out, and as people pointed out: that every fact is subject to multiple interpretations. You’ve got to place it in context.

[...]

So, people will say, ‘Let the facts speak for themselves.’ But in fact, the facts almost never fully speak for themselves. But they do speak.

(48:47) Heckman: Well, it’s–I think that’s a general process of aging. If you do empirical work as I do and you get into issues, you inevitably are confronted with your own failures of perception and your own blind sides. And I think–I think the profession as a whole is probably better, much better, now. I mean the whole enterprise is bigger to start with. You are getting a lot of diverse points of view. And the whole capacity of the profession to replicate, to simulate, to check other people’s studies, has become much greater than it was in the past. I think the big development that’s occurred inside economics, and it’s in economics journals and in the professional–that if people put out a study, except for having those studies based on proprietary data–that many studies essentially have to be out there and to be replicated. And it’s literally been the kiss of death for people not to allow others to replicate their data.

[...]

And I think that–yes, I think we’ve all come to recognize the limits of the data. But on the other hand, I think we should also be amazed at how much richer the data base is these days–how much more we can actually investigate. […] So I think the empirical side of economics is much healthier than it was, before–I mean long before, going back to the 1920s and 1930s. That was just a period with no data. So I think we have a better understanding of the economy than we did. And I think that’s still there. And I think we have better interpretive frameworks than we had out there. […]. I think these are things that we shouldn’t underlook, overlook, here, understate where we’ve come from. We’ve come a long way.

I found it interesting that Milton Friedman was apparently more on the “let the data speak” reduced-form side of the spectrum.

For a different perspective on similar issues, I also recommend the podcast with Joshua Angrist.

# German incomes in 2014

Here’s a booklet by the German Statistical Office on incomes in Germany in 2014:

• Mean gross income was 3441 euros for full-time employees. I couldn’t find the median anywhere, but eyeballing the graph it looks to be about 2500 euros.

• Income differences between East and West are still quite pronounced. Compare Hessen and Thüringen, for example. The following shows hourly gross incomes by state:

• The minimum wage is the same across Germany, so how binding it is varies depending on the local income level. Here’s the minimum wage relative to mean income across states:

• 6% of gross hourly income differences between men and women cannot be explained by observable characteristics.
• Incomes for women flatten after childbirth. The following are gross hourly incomes (blue for men, yellow for women, the black line is the average age of the mother at the birth of the first child):

• Germany taxes households, not individuals, which subsidizes families where only one parent works. Singles keep about 60% of their gross income, while for families with two children and one working parent net incomes are about 70% of gross incomes.

1. A Fine Theorem on David Donaldson winning the John Bates Clark Medal:

Donaldson’s CV is a testament to how difficult this style of work is. He spent eight years at LSE before getting his PhD, and published only one paper in a peer reviewed journal in the 13 years following the start of his graduate work. “Railroads of the Raj” has been forthcoming at the AER for literally half a decade, despite the fact that this work is the core of what got Donaldson a junior position at MIT and a tenured position at Stanford. Is it any wonder that so few young economists want to pursue a style of research that is so challenging and so difficult to publish? Let us hope that Donaldson’s award encourages more of us to fully exploit both the incredible data we all now have access to, but also the beautiful body of theory that induces deep insights from that data.

2. Jonathan Taplin in the New York Times: “Is It Time to Break Up Google?”:

At a minimum, these companies should not be allowed to acquire other major firms, like Spotify or Snapchat.

3. Hunter Clark, Maxim Pinkovskiy, and Xavier Sala-i-Martin: “Is Chinese Growth Overstated?”
4. John J. Horton: “A Way to Potentially Harm Many People for Little Benefit”:

I spent 5 years in the Army as a tank platoon leader & company executive officer, after 4 years at West Point. Of my active duty time, 15 months were spent in Iraq (Baghdad and Karbala). It was, without a doubt, the worst experience of my life—nothing else even comes close, and I got off easy.

5. Nate Silver on whether polling errors have become more common and differences between Trump and Le Pen:

Ironically, the same type of sloppy thinking that led people to underestimate the chances for the Trump and Brexit victories may lead them to overestimate Le Pen’s odds.

6. Philip Guo: “Five Years After My Ph.D. Thesis Defense”

# Roy model

In David Autor’s lecture notes on the Roy model he walks us through the migration choice model by Borjas (1987). In this model, agents decide between staying in the source country and migrating to a host country. The log wages in the source country ($w_0$) and in the host country ($w_1$) are given by $w_0 = \mu_0 + \varepsilon_0$ and $w_1 = \mu_1 + \varepsilon_1$.

The wage shocks $\varepsilon_0$ and $\varepsilon_1$ are drawn from a multivariate normal distribution and are correlated. The agents know all of these values and wages don’t adjust.

In Matlab, let’s simulate a number of agents:

We leave the two means $\mu_0$ and $\mu_1$ equal and concentrate on the effect of the relative standard deviations and the correlation. Next, we impose a cost of emigrating that rises with the source-country wage and then check which agents want to emigrate:
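The simulation can be sketched in Python (the post’s own code was in Matlab). The proportional cost form $C = c \cdot w_0$ is my own assumption — the post only says the cost rises with the source-country wage — and the default parameter values are chosen to produce positive hierarchical sorting:

```python
import numpy as np

def simulate_roy(n=5000, mu0=100.0, mu1=100.0,
                 sigma0=30.0, sigma1=100.0, rho=0.7, c=0.2, seed=0):
    """Borjas (1987)-style migration choice.

    Wages: w0 = mu0 + eps0 (source), w1 = mu1 + eps1 (host), with
    (eps0, eps1) jointly normal with correlation rho. The moving cost
    is assumed proportional to the source wage, C = c * w0, so an
    agent migrates iff w1 - w0 - c * w0 > 0.
    """
    rng = np.random.default_rng(seed)
    cov = [[sigma0**2, rho * sigma0 * sigma1],
           [rho * sigma0 * sigma1, sigma1**2]]
    eps = rng.multivariate_normal([0.0, 0.0], cov, size=n)
    w0 = mu0 + eps[:, 0]
    w1 = mu1 + eps[:, 1]
    migrate = w1 - (1 + c) * w0 > 0
    return w0, w1, migrate
```

With these defaults (positively correlated shocks, more dispersion in the host country), the migrants’ mean wage is above 100 in both countries; swapping the standard deviations (sigma0 = 100, sigma1 = 30) pushes it below 100 in both, and setting c = -0.5 with negatively correlated shocks gives the refugee-sorting pattern described below.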

We can then make the following plot:

Every dot is one agent. The x-axis shows their source country wages and the y-axis their host country wages. The cloud of dots is centered on (100, 100).

Agents marked red choose to emigrate and agents marked blue choose to stay. The higher the moving cost we pick, the steeper the line separating the red dots from the blue ones.

Autor shows that there are three cases for migration. With the current settings in the simulation, we get positive hierarchical sorting. This comes about if the wage shocks are sufficiently positively correlated across countries and the wage distribution is more dispersed in the host country than in the source country. Then, only the most productive will migrate. Those who migrate have above-average wages in both the source and the host country.

We get negative hierarchical sorting if we change sigma0 = 100 and sigma1 = 30:

The wage shocks still need to be positively correlated across countries, but now the wages in the host country are more compressed than in the source country. Now, only less productive agents will migrate and emigration acts as insurance. In this case, the mean wage of those who choose to emigrate is below the average of 100 in both countries.

The last case is refugee sorting, where the wage shocks are negatively correlated across countries. Set c = -0.5, sigma0 = 100 and sigma1 = 100 to get:

Here, migrants go from below-average wages in the source country to above-average wages in the host country. This could be the case if highly productive people are suppressed in their home countries.

Autor concludes with:

The growing focus of empirical economists on applying instrumental variables to causal estimation is in large part a response to the realization that self-selection (i.e., optimizing behavior) plagues interpretation of ecological relationships. […] But instrumental variables are not the only answer to testing cause and effect with observed data. Self-selection also points to the existence of equilibrium relationships that should be observed in ecological data […], and these can be tested without an instrument. In fact, there are some natural sciences that proceed almost entirely without experimentation — for example, astrophysics. How do they do it? Models predict nonobvious relationships in data. These implications can be verified or refuted by data, and this evidence strengthens or overturns the hypotheses. Many economists seem to have forgotten this methodology.

1. A question from Chris Blattman’s midterm:

Suppose, in 1900, Nate Silver wanted to build a model for predicting autocracy—that is, which countries in the world would end up more or less democratic in 2000. Knowing everything you know today, what do you think would be the five most influential variables that would help Nate predict dictatorship versus democracy? These can be historical, geographic, cultural, political, economic, or something else—it is entirely up to you. They just have to be 1900 or pre-1900 measures. And you must justify your choice of these five variables and link them to the readings or lecture material.

2. Rachel Laudan: “I’m a Happy Food Waster”:

It would be wonderful if the “don’t waste” value never clashed with other values such as safety, health, taste, choice, respect, and financial sense.

Life’s not like that. Values clash all the time. Behaving well as an adult means making choices about which values are most important.

3. On top of this, asking an active researcher in macroeconomics to consider what is wrong with macroeconomics today is sure to produce a biased answer. The answer is simple: everything is wrong with macroeconomics. […] Researchers are experts at identifying the flaws in our current knowledge and in proposing ways to fix these. That is what research is.

[...]

There is something wrong with a field when bright young minds no longer find its questions interesting, or just reproduce the thoughts of close-minded older members. There is something right with it when the graduate students don’t miss the weekly seminar for work in progress, but are oblivious of the popular books in economics that newspapers and blogs debate furiously and tout as revolutionizing the field.