
Collected links

  1. A Fine Theorem:

    A good rule of thumb is that you will want to read any working paper Melissa Dell puts out. Her main interest is the long-run path-dependent effect of historical institutions, with rigorous quantitative investigation of the subtle conditionality of the past.

  2. Andrew Batson cites Reinhard Bendix:

    Every idea taken from elsewhere can be both an asset to the development of a country and a reminder of its comparative backwardness–that is, both a model to be emulated and a threat to its national identity. What appears desirable from the standpoint of progress often appears dangerous to national independence.

  3. Stop code pollution

  4. Tetlock, Mellers and Scoblic: “Sacred versus Pseudo-Sacred Values: How People Cope with Taboo Trade-Offs”

  5. John Cochrane (see the figure):

    Inflation targets are like constitutions – change them infrequently, and only for very good reasons.

  6. NYT:

    “The key issue is going to be causation, of who actually caused the death,” […].

  7. NYT college essays

Coibion, Gorodnichenko and Koustas (2017)

In a recent paper (pdf), Olivier Coibion, Yuriy Gorodnichenko and Dmitri Koustas argue that how often we shop matters for measuring consumption inequality.

Inequality is much studied at the moment. But usually researchers focus on inequality in income or wealth. Ultimately, people probably care more about consumption than their income, so it would be good to know how consumption inequality has evolved.1 That, however, is more difficult to measure. While for income and wealth researchers can rely on some tax data, administrative data or plausible self-reported numbers, it’s hard to keep track of a person’s consumption.

The two common ways of measuring people’s consumption are (1) monthly interviews and (2) daily diaries. Consumption inequality as measured by (1) has not risen, but has increased strongly as measured by (2).

The authors’ idea of why shopping frequency matters is straightforward: consumption is not the same as expenditure, because some goods are more durable than others. Expenditures are what we can measure; consumption is unobserved.

Some products (like toilet paper) we buy only infrequently and in bulk. A dataset of daily toilet paper expenditures would therefore show zeros for most people and large amounts for a few. At any point in time it would look as if some people consume a great deal of toilet paper and others none at all, which would imply very unequal consumption. On items like food and coffee we spend more frequently, so purchase and consumption happen close together in time.
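
To make the mechanism concrete, here is a small simulation sketch in R (my own illustration, not the authors’ code; the shopping frequencies and window lengths are made up). Every household consumes one unit per day, but households differ in how often they shop, buying enough per trip to cover the days until the next one. Measured over a single day, expenditures look very unequal; summed over a quarter, they are almost identical across households.

set.seed(1)

n_hh   <- 1000       # number of households
n_days <- 360        # observation window in days

# Each household consumes 1 unit per day but shops only every k days,
# buying k units per trip (k differs across households)
k <- sample(c(1, 7, 30), n_hh, replace = TRUE)

# Days-by-households matrix of expenditures
spend <- matrix(0, n_days, n_hh)
for (i in 1:n_hh) {
  first_trip <- sample(1:k[i], 1)                 # random first shopping day
  trips      <- seq(first_trip, n_days, by = k[i])
  spend[trips, i] <- k[i]
}

# Cross-sectional dispersion (coefficient of variation) at different horizons
cv <- function(x) sd(x) / mean(x)
cv(spend[180, ])               # one day: looks very unequal (mostly zeros)
cv(colSums(spend[91:180, ]))   # one quarter: nearly equal, like true consumption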

The authors show that people in the U.S. shop less often than they used to and argue that, once you adjust for this fact, consumption inequality has remained flat. They conclude that it is important to measure expenditures over much longer time spans (not days, but months or quarters).

Coibion et al. attribute the reduced frequency of purchases to the rise of club/warehouse stores (e.g. Walmart). They also discuss other possible reasons why people shop less: if people earn higher wages, then the opportunity cost of shopping might have increased. Also, houses are larger now and fridges and freezers are of better quality, so the cost of storage might have decreased.

With more online shopping, people might start buying things much more frequently again. The authors argue that this might reverse the existing trend in the mismeasurement of consumption inequality.



  1. Ideally, we would measure “utility inequality”, but that’s almost impossible. (Only almost.) 

Global financial cycle

These few lines of Eric’s R code produce the following nice figure:

Banking crises across countries over time

From this figure it becomes apparent that when banking crises happen, they tend to occur in many countries at once. We can see this happening in the early 1930s and in the 1980s and 1990s. (This sample ends in 2008.)

This observation has led some researchers (e.g. Hélène Rey) to argue for the existence of a global financial cycle.
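
Eric’s code isn’t reproduced here, but a minimal sketch of this kind of country-by-year crisis plot in R could look like the following. The data frame crises, its columns and the random 0/1 crisis flags are all placeholders for illustration, not the actual crisis chronology:

library(ggplot2)

# Placeholder panel: country-year observations with a 0/1 banking crisis flag
set.seed(1)
crises <- expand.grid(country = c("USA", "GBR", "DEU", "FRA", "JPN"),
                      year    = 1870:2008)
crises$crisis <- rbinom(nrow(crises), 1, 0.05)

# Dark tiles mark country-years with a banking crisis
ggplot(crises, aes(x = year, y = country, fill = factor(crisis))) +
  geom_tile() +
  scale_fill_manual(values = c("grey90", "black"), guide = "none") +
  labs(x = "Year", y = "Country")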


Tristan Garcia, "The Intense Life: A Modern Obsession"

The French philosopher Tristan Garcia wrote “La vie intense. Une obsession moderne”, which has just been published in German.

I.

The author writes that religion once offered a moral framework. But with the loss of theology, when we judge which life is worth living, we do so ourselves: we have replaced an external morality with an internal one. What counts is to feel intensely that we’re alive. He writes:1

Modern culture is bound to this variable intensity, a sine curve of social electricity, an approximate measure of the collective degree of excitement of individuals.

So we lust for a constant state of excitement and what causes the excitement is less important than the fact that we are excited:

Apparently we belong to the type of humans who have turned away from the contemplation and expectation of an absolute, a transcendence as the ultimate purpose of existence, and turned towards a certain civilization whose majority ethic depends on the unremitting fluctuation of being as its principle of life.

Garcia defines intensity as the principle of systematically comparing a thing with itself. No external rules determine its goodness or beauty. And if values are relative and there’s no objective truth, then you cannot judge a thing’s worth. You can, however, determine how extreme it is. He sounds pessimistic when he writes that we aim for the intensification of what already exists.

And because intensity is not the what, but the how, we can use ploys to spice up our bland existence. These are (1) the variation of our experiences, (2) speeding up our experiences and (3) “primaverism”, indulging in the memories of the first time we did something.

He contrasts the meanings of an ethic and a morality. An ethic is adverbial: it’s how you do something that matters. A morality is adjectival: it’s what you do that matters. To be ethical is to do things in a good way, and to be moral is to do good things.

Garcia argues that striving for intensity was for people first a morality, but now it’s an accepted ethic:

Whether you are a fascist, revolutionary, conservative, petty bourgeois, saint, dandy, gentleman, swindler or a villain – be so energetically. Overall, it is not about being an intense person, but being intensely the person who you are. In this sense, the term succumbed to democratic change.

He explains how things that were once novel and extreme soon become standard. We get used to them, they aren’t special anymore and so we stop feeling and become emotionally numb. Near the end of the book, he invokes the image of a manic party in which people dance increasingly faster. He thinks our search for intensity can lead to fatigue and collapse.

The book’s resolution is then that we should balance rational thinking and emotional search for intensity. And we have to learn to live with the fact that the two won’t always agree.

II.

So, what does Garcia think about this? He promises to develop a morality of intensity. I thought he meant that he wants to put it into perspective, to judge it.

But we don’t find out for a long time what Garcia thinks about it, because he hides behind an impersonal voice that speaks in the present tense like the voice-over narrator in a nature documentary. Are these facts or opinions? We aren’t told. It took until page 158 before I consciously read an “I” for the first time. So I found his argument hard to follow, because it wasn’t clear to me where those statements came from. When he invokes the reactions of people experiencing electricity for the first time, I wished Joseph Henrich would take over and explain the anthropological evidence to us.

I take his view that we’re searching intensely for what we already have to be a criticism of hedonism and complacency in our society. But I don’t see how this squares with his first ploy (aiming for more intensity through variation). If it’s novel experiences and variations that we want, then that might motivate us to innovate and come up with new or improved products and tastes.

As an economist, I’m also asking myself whether he is not just describing people being good at getting what they want. And that seems to me a good thing. Garcia himself brings up the option that we’re simply trying to feel strongly about the things we like and avoid the things we don’t. But that somehow doesn’t fit the idea of an adverbial ethic without higher meaning, so it can’t be as easy as that.

The author writes that you need a routine that you can then break out of. If everything is novel and exciting, then nothing is. The third ploy (fetishizing new things) will not work forever, as there are only so many new things. That’s actually something I worry about. It seems obvious that we should try everything. Our bucket list is full of things we haven’t yet done: a parachute jump or sailing across the Atlantic. Yet experiences really do become duller. As our stock of memories increases, new experiences are less thrilling.

Yet he remains oddly silent on the benefits of intensity. Many things are cumulative and become more intense when you keep at them. Activities such as playing an instrument, doing research or some types of work only become more rewarding the more you do them.


  1. Citations are translated from the German version. 

Collected links

  1. Miles Kimball: “In Praise of Partial Equilibrium”
  2. You Draw It (NYT) (Bayerischer Rundfunk). Also, this is great (through Robert Grant).
  3. “Does Comey’s Dismissal Fit the Definition of a Constitutional Crisis?” (through Niall Ferguson)
  4. Paul Goldsmith-Pinkham: “Do Credit Markets Watch the Waving Flag of Bankruptcy?” (ssrn)
  5. Roman Cheplyaka: “Convert time interval to number in R”. See the bits of code that all return 10.
  6. Language rules follow usage. (Through Steven Pinker)
  7. The FRED Blog: “Newspapers are still more important than cheese”
  8. Justin O’Beirne: “A Year of Google & Apple Maps”
  9. Good article (in German) on what Germany should do with its current government surpluses.

Maybe "economic policy uncertainty" is just firms disliking regulation

Why did economic policy uncertainty rise strongly in 2016 even though the stock market is doing well?

Lubos Pastor and Pietro Veronesi take up this question (pdf):

But that result [from the model in their earlier paper which says that uncertainty is bad for stocks] assumes that the precision of political signals is constant over time. In contrast, we argue here that political signals have become less precise in recent months, especially after the November 2016 election.

They argue that Trump says one thing one day and another thing the next, and that firms therefore receive noisier signals about the future course of economic policy; the lack of a precise signal isn’t the same as uncertainty over outcomes. This even got coverage in the respected German daily FAZ (in German).

I’m not completely convinced. Maybe much of what we refer to as “economic policy uncertainty” is just firms being annoyed at regulation. Regulation, justified or not, is likely not great for corporate profits. Baker, Bloom and Davis (2016) think of their indices (especially the industry-specific ones) as measuring “regulatory policy uncertainty” (p. 1621). But what if they are more a proxy for “regulatory policy” itself?

It’s like when people say “risk has gone up”: they often refer only to downside risk. With Trump, I think, actual “uncertainty” (or what Pastor and Veronesi call “the precision of political signals”) is up, but the expected amount of regulation is far down. So expected profits rise and thus stocks benefit. But at the same time the newspapers are full of the word “uncertainty”, because there really is uncertainty about the future course of regulatory policy.

If you look at the time series of the EPU index, the fact that it jumps up around wars and elections convinces me that it does measure a significant amount of “uncertainty”. My conclusion is that it’s a mixture of both: the subjectively expected future level of regulation and the uncertainty around it.

“Uncertainty” may also have become a fashionable buzzword in the last couple of years, which would mechanically push the Baker et al. indicator up. They don’t correct for long-run changes in word use, and it’s arguably tricky and a bit arbitrary to do so.

Don’t get me wrong: I think the Baker et al. paper is great and the indicator is carefully prepared and tested. It actually inspired a similar paper I’m writing at the moment, in which I measure the use of language indicating financial stress in several newspapers since the 19th century.

But still, counting words in newspapers will obviously yield a noisy indicator and interpreting word frequencies as proxies for the unobserved variable of interest requires strong assumptions.
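
As a stylized illustration of the mechanics (my own sketch in R, not the procedure of Baker et al. or of my paper), one can flag articles that contain at least one term from each of a few term sets and track the monthly share of such articles. Scaling by the number of articles handles changes in newspaper volume, but not the word “uncertainty” simply coming into fashion. The tiny corpus and the term lists below are made up:

library(dplyr)

# Tiny stand-in for a newspaper corpus; the real input would be millions of
# articles, each with a date and its full text
articles <- data.frame(
  date = as.Date(c("1990-01-03", "1990-01-15", "2016-11-20")),
  text = c("The Federal Reserve faces economic uncertainty over new regulation.",
           "Local sports results from the weekend.",
           "Uncertainty about the economy dominates debate in Congress."),
  stringsAsFactors = FALSE
)

has <- function(x, pattern) grepl(pattern, x, ignore.case = TRUE)

# Flag articles containing terms from three sets (roughly in the spirit of
# Baker, Bloom and Davis, not their exact word lists)
epu <- articles %>%
  mutate(month = format(date, "%Y-%m"),
         hit   = has(text, "uncertain") &
                 has(text, "econom") &
                 has(text, "congress|deficit|federal reserve|legislation|regulation|white house")) %>%
  group_by(month) %>%
  summarise(share = mean(hit))   # share of qualifying articles per month

epu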


Schelling's segregation model

I had a try at Schelling’s segregation model, as described on quant-econ.

In the model, agents are one of two types and live on (x, y) coordinates. They’re happy if at least half of their 10 closest neighbors are of the same type; otherwise they move to a new location.

My code is simpler than the solutions at the link, but I actually like it this way. In my version, agents just move to a random new location if they’re not happy. In the quant-econ example they keep moving until they’re happy. And I just simulate this for a fixed number of cycles, not until everyone is happy.

In Matlab:

n     = 1000;          % number of agents of one type
N     = 2*n;           % total number of agents
T     = 10;            % number of cycles

locs  = rand(N, 2);    % initial location
types = [ones(n, 1);   % generate two types
         zeros(n, 1)];
     
figure
set(gca, 'FontSize', 16)
scatter(locs((types == 1), 1), locs((types == 1), 2))
hold on
scatter(locs((types == 0), 1), locs((types == 0), 2))
title('Cycle 0')
print('test0', '-dpng')
hold off

for t = 1:T
    for i = 1:N
        % All other agents and their types (kept aligned with each other)
        others     = [locs(1:(i-1),:);
                      locs((i+1):end,:)];
        otherTypes = [types(1:(i-1));
                      types((i+1):end)];

        % Distance to other agents
        dist = pdist2(locs(i,:), others)';

        % Indices of the 10 nearest other agents
        [~, ix] = sort(dist);
        nearestAgents = ix(1:10);

        % Neighbors of same type
        sameNeighbors = sum(types(i) == otherTypes(nearestAgents));
        
        % Happy if at least 5 of neighbors are same type
        isHappy = (sameNeighbors >= 5);

        % If not happy, then move to random new location
        if not(isHappy)
            locs(i,:) = rand(1, 2);         
        end
    end
    fprintf('Finished cycle %d/%d.\n', t, T)
    
    figure
    set(gca, 'FontSize', 16)
    scatter(locs((types == 1), 1), locs((types == 1), 2))
    hold on
    scatter(locs((types == 0), 1), locs((types == 0), 2))
    title(['Cycle ', num2str(t)])
    print(['test', num2str(t)], '-dpng')
    hold off
end

Which yields the following sequence of images:

Schelling segregation model animated simulation

The two groups separate quickly. Most of the action takes place in the first few cycles; afterwards, the remaining minority types slowly move away into their own type’s area.

In the paper, Schelling emphasizes the importance of where agents draw their boundaries:

In spatial arrangements, like a neighborhood or a hospital ward, everybody is next to somebody. A neighborhood may be 10 percent black or white; but if you have a neighbor on either side, the minimum nonzero percentage of neighbors of either opposite color is fifty. If people draw their boundaries differently, we can have everybody in a minority: at dinner, with men and women seated alternately, everyone is outnumbered two to one locally by the opposite sex but can join a three-fifths majority if he extends his horizon to the next person on either side.

New working paper

We have a new working paper out with the title “Benign Effects of Automation: New Evidence from Patent Texts”. You can find it here. Any comments are much appreciated.

Heckman on EconTalk

James Heckman was recently interviewed by Russ Roberts on EconTalk, which I quite enjoyed. Some bits:

(37:35) Heckman: […] What I worry about is what I think is more general, not just even about empirical work, is kind of the non-cumulative nature of a lot of work in economics.

[...]

In macroeconomics and other parts of economics there’s a practice called calibration. The calibrated models are models that are kind of looking at some old stylized facts that are putting together different pieces of data that are not mutually consistent. I mean, literally: you take estimates of this area, estimates of that area, and you assemble something that’s like a Frankenstein that then stalks the planet and stalks the profession, walking around. It’s got a labor supply parameter from labor economics and it’s got an output analysis study from Ohio, and on and on and on. And then out comes something–and sometimes a compelling story is told. But it’s a story. It’s not the data. And I think there’s a lack of discipline in some areas where people just don’t want to go to primary data sources.

[...]

But back in the 1940s at Chicago, there was a debate that broke out; and it was a debate really between Milton Friedman and Tjalling Koopmans. Although it wasn’t quite stated that way, it ended up that way. And that was this idea of measurement without theory. […] And so, it’s very appealing to say, ‘Let’s not let the theory get in the way. We have all the facts. We should look at facts. We should basically have a structure that is free of a lot of arbitrary theory and a lot of arbitrary structure. That’s very appealing. I would like it. The idea that we have is this purely inductive, Francis Bacon-like style–not the painter but the original philosopher. So, but the problem with that is, as Koopmans pointed out, and as people pointed out: that every fact is subject to multiple interpretations. You’ve got to place it in context.

[...]

So, people will say, ‘Let the facts speak for themselves.’ But in fact, the facts almost never fully speak for themselves. But they do speak.

(48:47) Heckman: Well, it’s–I think that’s a general process of aging. If you do empirical work as I do and you get into issues, you inevitably are confronted with your own failures of perception and your own blind sides. And I think–I think the profession as a whole is probably better, much better, now. I mean the whole enterprise is bigger to start with. You are getting a lot of diverse points of view. And the whole capacity of the profession to replicate, to simulate, to check other people’s studies, has become much greater than it was in the past. I think the big development that’s occurred inside economics, and it’s in economics journals and in the professional–that if people put out a study, except for having those studies based on proprietary data–that many studies essentially have to be out there and to be replicated. And it’s literally been the kiss of death for people not to allow others to replicate their data.

[...]

And I think that–yes, I think we’ve all come to recognize the limits of the data. But on the other hand, I think we should also be amazed at how much richer the data base is these days–how much more we can actually investigate. […] So I think the empirical side of economics is much healthier than it was, before–I mean long before, going back to the 1920s and 1930s. That was just a period with no data. So I think we have a better understanding of the economy than we did. And I think that’s still there. And I think we have better interpretive frameworks than we had out there. […]. I think these are things that we shouldn’t underlook, overlook, here, understate where we’ve come from. We’ve come a long way.

I found it interesting that Milton Friedman was apparently more on the “let the data speak” reduced-form side of the spectrum.

For a different perspective on similar issues, I also recommend the podcast with Joshua Angrist.