The French philosopher Tristan Garcia wrote “La vie intense. Une obsession moderne”, which has just been published in German.
The author writes that religion once offered a moral framework. But with the loss of theology, when we measure which life is worth living, we judge for ourselves, which means we have replaced an external morality with an internal one. What counts is to intensely feel that we’re alive. He writes:
Modern culture is bound to this variable intensity, a sine curve of social electricity, an approximate measure of the collective degree of excitement of individuals.
So we lust for a constant state of excitement and what causes the excitement is less important than the fact that we are excited:
Apparently we belong to the type of humans who have turned away from the contemplation and expectation of an absolute, a transcendence as the final purpose of existence, and turned towards a civilization whose majority ethic depends on the unremitting fluctuation of being as its principle of life.
Garcia defines intensity as the principle of systematically comparing a thing with itself. No external rules determine its goodness or beauty. And if values are relative and there’s no objective truth, then you cannot judge a thing’s worth. You can, however, determine how extreme it is. He sounds pessimistic when he writes that we aim for the intensification of what already exists.
And because intensity is not the what, but the how, we can use ploys to spice up our bland existence. These are (1) the variation of our experiences, (2) speeding up our experiences and (3) “primaverism”, indulging in the memories of the first time we did something.
He compares the meanings of an ethic and a morality. An ethic is adverbial: it’s how you do something. A morality is adjectival: it’s what you do that matters. To be ethical is to do things in a good way and to be moral is to do good things.
Garcia argues that striving for intensity was for people first a morality, but now it’s an accepted ethic:
Whether you are a fascist, revolutionary, conservative, petty bourgeois, saint, dandy, gentleman, swindler or villain – be so energetically. Ultimately, it is not about being an intense person, but about intensely being the person you are. In this sense, the term has undergone democratic change.
He explains how things that were once novel and extreme soon become standard. We get used to them, they aren’t special anymore and so we stop feeling and become emotionally numb. Near the end of the book, he invokes the image of a manic party in which people dance increasingly faster. He thinks our search for intensity can lead to fatigue and collapse.
The book’s resolution is then that we should balance rational thinking and emotional search for intensity. And we have to learn to live with the fact that the two won’t always agree.
So, what does Garcia think about this? He promises to develop a morality of intensity. I thought he meant that he wants to put it into perspective, to judge it.
But we don’t find out for a long time what Garcia thinks about it, because he hides behind an impersonal voice that speaks in the present tense, like the voice-over narrator in a nature documentary. Are these facts or opinions? We aren’t told. It wasn’t until page 158 that I consciously read an “I” for the first time. So I found his argument hard to follow, because it wasn’t clear to me where those statements came from. When he invokes the reactions of people experiencing electricity for the first time, I wished Joseph Henrich would take over and explain the anthropological evidence to us.
I take his view that we’re searching intensely for what we already have to be a criticism of hedonism and complacency in our society. But I don’t see how this squares with his first ploy (aiming for more intensity through variation). If it’s novel experiences and variations that we want, then that might motivate us to innovate and come up with new or improved products and tastes.
As an economist, I’m also asking myself whether he is not just describing people being good at getting what they want. And that seems to me a good thing. Garcia himself brings up the option that we’re simply trying to feel strongly about the things we like and avoid the things we don’t. But that somehow doesn’t fit the idea of an adverbial ethic without higher meaning, so it can’t be as easy as that.
The author writes that you need a routine that you can then break out of. If everything is novel and exciting, then nothing is. The third ploy (fetishizing new things) will not work forever, as there are only so many new things. That’s actually something I worry about. It seems obvious that we should try everything. Our bucket list is full of things we haven’t yet done: a parachute jump or sailing across the Atlantic. Yet experiences really do become duller. As our stock of memories increases, new experiences are less thrilling.
Yet he remains oddly silent on the benefits of intensity. Many things are cumulative and become more intense when you keep at them. Activities such as playing an instrument, doing research or some types of work only become more rewarding the more you do them.
Lubos Pastor and Pietro Veronesi dispute this (pdf):
But that result [from the model in their earlier paper which says that uncertainty is bad for stocks] assumes that the precision of political signals is constant over time. In contrast, we argue here that political signals have become less precise in recent months, especially after the November 2016 election.
They note that Trump says one thing one day and another the next, so firms receive noisier signals about the future course of economic policy; and a lack of a signal isn’t the same as uncertainty over outcomes. Their argument was also covered by the German daily FAZ (in German).
I’m not completely convinced. Maybe much of what we refer to as “economic policy uncertainty” is just firms being annoyed at regulation. Regulation, justified or not, is likely not great for corporate profits. Baker, Bloom and Davis (2016) think of their indices (especially the industry-specific ones) as measuring “regulatory policy uncertainty” (p. 1621). But what if they’re more a proxy for “regulatory policy”?
It’s like when people say “risk has gone up”: they often refer only to downside risk. With Trump, I think, actual “uncertainty” (what Pastor and Veronesi describe as less precise political signals) is up, but the expected amount of regulation is far down. So expected profits rise and stocks benefit. But at the same time the newspapers are full of the word “uncertainty”, because there really is uncertainty about the future course of regulatory policy.
I had a try at Schelling’s segregation model, as described on quant-econ.
In the model, agents are one of two types and live on (x, y) coordinates. They’re happy if at least half of their 10 closest neighbors are of the same type; otherwise they move to a new location.
My code is simpler than the solutions at the link, but I actually like it that way. In my version, agents just move to a random new location if they’re unhappy. In the quant-econ example they keep moving until they’re happy. And I just simulate this for a fixed number of cycles, not until everyone is happy.
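In Python, this simplified version can be sketched as follows (a sketch of my approach, not the quant-econ code; all parameter values here are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate(n_per_type=250, n_neighbors=10, threshold=0.5, cycles=10):
    # Two types of agents, all living on random (x, y) coordinates
    # in the unit square.
    types = np.repeat([0, 1], n_per_type)
    pos = rng.random((2 * n_per_type, 2))
    for _ in range(cycles):
        for i in range(len(types)):
            # Squared distances to all other agents; the agent itself
            # is excluded by setting its own distance to infinity.
            dist = np.sum((pos - pos[i]) ** 2, axis=1)
            dist[i] = np.inf
            nearest = np.argsort(dist)[:n_neighbors]
            # Happy if at least half of the closest neighbors share the
            # agent's type; unhappy agents jump to a random new location.
            if np.mean(types[nearest] == types[i]) < threshold:
                pos[i] = rng.random(2)
    return types, pos

types, pos = simulate()
```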
Which yields the following sequence of images:
The two groups separate quickly. Most of the action takes place in the first few cycles; afterwards, the remaining minority types slowly move away into their own type’s area.
In the paper, Schelling emphasizes the importance of where agents draw their boundaries:
In spatial arrangements, like a neighborhood or a hospital ward, everybody is next to somebody. A neighborhood may be 10 percent black or white; but if you have a neighbor on either side, the minimum nonzero percentage of neighbors of either opposite color is fifty. If people draw their boundaries differently, we can have everybody in a minority: at dinner, with men and women seated alternately, everyone is outnumbered two to one locally by the opposite sex but can join a three-fifths majority if he extends his horizon to the next person on either side.
James Heckman was recently interviewed by Russ Roberts on Econtalk which I quite enjoyed. Some bits:
(37:35) Heckman: […] What I worry about is what I think is more general, not just even about empirical work, is kind of the non-cumulative nature of a lot of work in economics.
In macroeconomics and other parts of economics there’s a practice called calibration. The calibrated models are models that are kind of looking at some old stylized facts that are putting together different pieces of data that are not mutually consistent. I mean, literally: you take estimates of this area, estimates of that area, and you assemble something that’s like a Frankenstein that then stalks the planet and stalks the profession, walking around. It’s got a labor supply parameter from labor economics and it’s got an output analysis study from Ohio, and on and on and on. And then out comes something – and sometimes a compelling story is told. But it’s a story. It’s not the data. And I think there’s a lack of discipline in some areas where people just don’t want to go to primary data sources.
But back in the 1940s at Chicago, there was a debate that broke out; and it was a debate really between Milton Friedman and Tjalling Koopmans. Although it wasn’t quite stated that way, it ended up that way. And that was this idea of measurement without theory. […] And so, it’s very appealing to say, ‘Let’s not let the theory get in the way. We have all the facts. We should look at facts. We should basically have a structure that is free of a lot of arbitrary theory and a lot of arbitrary structure. That’s very appealing. I would like it. The idea that we have is this purely inductive, Francis Bacon-like style–not the painter but the original philosopher. So, but the problem with that is, as Koopmans pointed out, and as people pointed out: that every fact is subject to multiple interpretations. You’ve got to place it in context.
So, people will say, ‘Let the facts speak for themselves.’ But in fact, the facts almost never fully speak for themselves. But they do speak.
(48:47) Heckman: Well, it’s–I think that’s a general process of aging. If you do empirical work as I do and you get into issues, you inevitably are confronted with your own failures of perception and your own blind sides. And I think–I think the profession as a whole is probably better, much better, now. I mean the whole enterprise is bigger to start with. You are getting a lot of diverse points of view. And the whole capacity of the profession to replicate, to simulate, to check other people’s studies, has become much greater than it was in the past. I think the big development that’s occurred inside economics, and it’s in economics journals and in the professional–that if people put out a study, except for having those studies based on proprietary data–that many studies essentially have to be out there and to be replicated. And it’s literally been the kiss of death for people not to allow others to replicate their data.
And I think that–yes, I think we’ve all come to recognize the limits of the data. But on the other hand, I think we should also be amazed at how much richer the data base is these days–how much more we can actually investigate. […] So I think the empirical side of economics is much healthier than it was, before–I mean long before, going back to the 1920s and 1930s. That was just a period with no data. So I think we have a better understanding of the economy than we did. And I think that’s still there. And I think we have better interpretive frameworks than we had out there. […]. I think these are things that we shouldn’t underlook, overlook, here, understate where we’ve come from. We’ve come a long way.
I found it interesting that Milton Friedman was apparently more on the “let the data speak” reduced-form side of the spectrum.
For a different perspective on similar issues, I also recommend the podcast with Joshua Angrist.
Here’s a booklet by the German Statistical Office on incomes in Germany in 2014:
Mean gross income was 3441 euros for full-time employees. I couldn’t find the median anywhere, but eyeballing the graph it looks to be about 2500 euros.
Income differences between East and West are still quite pronounced. Compare Hessen and Thüringen, for example. The following shows hourly gross incomes by states:
The minimum wage is the same across Germany, so how binding it is varies depending on the local income level. Here’s the minimum wage relative to mean income across states:
6% of gross hourly income differences between men and women cannot be explained by observable characteristics.
Incomes for women flatten after childbirth. The following are gross hourly incomes (blue for men, yellow for women, the black line is the average age of the mother at the birth of the first child):
Germany taxes households, not individuals, which subsidizes families where only one parent works. Singles keep about 60% of their gross income; for families with two children and one working parent, net incomes are about 70% of gross incomes.
Donaldson’s CV is a testament to how difficult this style of work is. He spent eight years at LSE before getting his PhD, and published only one paper in a peer reviewed journal in the 13 years following the start of his graduate work. “Railroads of the Raj” has been forthcoming at the AER for literally half a decade, despite the fact that this work is the core of what got Donaldson a junior position at MIT and a tenured position at Stanford. Is it any wonder that so few young economists want to pursue a style of research that is so challenging and so difficult to publish? Let us hope that Donaldson’s award encourages more of us to fully exploit both the incredible data we all now have access to, but also the beautiful body of theory that induces deep insights from that data.
I spent 5 years in the Army as a tank platoon leader & company executive officer, after 4 years at West Point. Of my active duty time, 15 months were spent in Iraq (Baghdad and Karbala). It was, without a doubt, the worst experience of my life—nothing else even comes close, and I got off easy.
Nate Silver on whether polling errors have become more common and differences between Trump and Le Pen:
Ironically, the same type of sloppy thinking that led people to underestimate the chances for the Trump and Brexit victories may lead them to overestimate Le Pen’s odds.
In David Autor’s lecture notes on the Roy model he walks us through the migration choice model by Borjas (1987). In this model, agents decide between staying in the source country or migrating to a host country. The log wages in the source country (w0) and in the host country (w1) are given by:

w0 = μ0 + ε0
w1 = μ1 + ε1
The wage shocks ε0 and ε1 are drawn from a multivariate normal distribution and are correlated. The agents know all of these values and wages don’t adjust.
In Matlab, let’s simulate a number of agents:
We leave the two means μ0 and μ1 equal and concentrate on the effect of the relative standard deviations σ0 and σ1 and the correlation. Next, we impose a cost of emigrating that rises in the source country wage and then check which agents want to emigrate:
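In Python, an equivalent simulation might look like this (the agent count, the 0.3 cost slope and the seed are illustrative choices, not the actual values used for the plots):

```python
import numpy as np

rng = np.random.default_rng(1)

n = 1000                   # number of agents
mu0 = mu1 = 100            # equal mean wages in source (0) and host (1) country
sigma0, sigma1 = 30, 100   # standard deviations of the wage shocks
c = 0.5                    # correlation of the shocks across countries

# Draw correlated wage shocks from a multivariate normal distribution.
cov = [[sigma0**2,           c * sigma0 * sigma1],
       [c * sigma0 * sigma1, sigma1**2]]
eps = rng.multivariate_normal([0, 0], cov, size=n)

w0 = mu0 + eps[:, 0]  # wage in the source country
w1 = mu1 + eps[:, 1]  # wage in the host country

# Moving costs rise in the source country wage; agents emigrate
# when the host wage net of costs beats the source wage.
cost = 0.3 * w0
emigrate = w1 - w0 - cost > 0
```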
We can then make the following plot:
Every dot is one agent. The x-axis shows their source country wages and the y-axis their host country wages. The cloud of dots is centered on (100, 100).
Agents marked red choose to emigrate and agents marked blue choose to stay. The higher the cost of moving we pick, the steeper the slope of the line separating the red and blue dots.
Autor shows that there are three cases for migration. With the current settings in the simulation, we get positive hierarchical sorting. This comes about if the wage shocks are sufficiently positively correlated across countries and the wage distribution is more dispersed in the host country than in the source country. Then, only the most productive will migrate. Those who migrate have above-average wages in both the source and the host country.
We get negative hierarchical sorting if we change to sigma0 = 100 and sigma1 = 30:
The wage shocks still need to be positively correlated across countries, but now wages in the host country are more compressed than in the source country. Now only less productive agents migrate and emigration acts as insurance. In this case, the mean wage of those who choose to emigrate is below the average of 100 in both countries.
The last case is refugee sorting, where the wage shocks are negatively correlated, so migrating agents are below the mean income in the source country but above the mean income in the host country. Set c = -0.5, sigma0 = 100 and sigma1 = 100 to get:
Here, migrants go from below-average wages in the source country to above-average wages in the host country. This could be the case if highly productive people are suppressed in their home countries.
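The three cases can be summarized in a small helper function (a simplified classification following the description above; the exact conditions in Borjas (1987) also compare the correlation to the ratios σ0/σ1 and σ1/σ0):

```python
def sorting_case(c, sigma0, sigma1):
    # c: correlation of the wage shocks across countries
    # sigma0, sigma1: wage dispersion in source and host country
    if c < 0:
        return "refugee sorting"
    if sigma1 > sigma0:
        return "positive hierarchical sorting"
    return "negative hierarchical sorting"

# The parameter settings discussed above:
print(sorting_case(0.5, 30, 100))    # positive hierarchical sorting
print(sorting_case(0.5, 100, 30))    # negative hierarchical sorting
print(sorting_case(-0.5, 100, 100))  # refugee sorting
```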
Autor concludes with:
The growing focus of empirical economists on applying instrumental variables to causal estimation is in large part a response to the realization that self-selection (i.e., optimizing behavior) plagues interpretation of ecological relationships. […] But instrumental variables are not the only answer to testing cause and effect with observed data. Self-selection also points to the existence of equilibrium relationships that should be observed in ecological data […], and these can be tested without an instrument. In fact, there are some natural sciences that proceed almost entirely without experimentation — for example, astrophysics. How do they do it? Models predict nonobvious relationships in data. These implications can be verified or refuted by data, and this evidence strengthens or overturns the hypotheses. Many economists seem to have forgotten this methodology.