Lucrezia Reichlin presented “Big Data and Macro Econometrics” yesterday at the EEA’s annual meetings. (Here are some older slides from a similar talk.)
She recommends using a large number of macroeconomic series combined with dimension reduction through shrinkage methods such as Lasso and Ridge regression. These methods are intuitively appealing and work well. Packages such as glmnet fit a mix of the two (the elastic net) and pick the penalty’s strength by cross-validation.
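As an illustration of that approach – a sketch only, using Python’s scikit-learn rather than the R package glmnet mentioned above, with made-up data standing in for macro series:

```python
import numpy as np
from sklearn.linear_model import ElasticNetCV

rng = np.random.default_rng(0)

# Made-up stand-in for a macro dataset: 200 quarterly observations
# of 50 candidate predictors, only three of which actually matter.
X = rng.standard_normal((200, 50))
beta = np.zeros(50)
beta[:3] = [1.0, -0.5, 0.25]
y = X @ beta + 0.1 * rng.standard_normal(200)

# Elastic net: cross-validate the penalty strength (alpha) over a
# grid of lasso/ridge mixes (l1_ratio).
model = ElasticNetCV(l1_ratio=[0.1, 0.5, 0.9, 1.0], cv=5).fit(X, y)
print(model.l1_ratio_, model.alpha_)
print((np.abs(model.coef_) > 1e-6).sum(), "of 50 coefficients survive")
```

With a pure ridge penalty every coefficient would stay nonzero; it is the lasso part of the mix that zeroes out irrelevant series.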
Unfortunately, there was no discussion of the difficulties of applying resampling methods to aggregate time series. In macro, the time dimension of the data is always shorter than we would like. You might have 50 years of data, and if you’re lucky that comes at quarterly or monthly frequency. Even extending the series back a couple of decades or across countries doesn’t make the number of observations very large.
Instead, it’s becoming easier to find more variables to describe the same economy. We can use consumer surveys, scanner data or scrape the web for a more detailed view of the economy, but the number of observations grows only slowly. And frankly, the opposite would be better: I would rather observe only 10 or 20 variables from one economy over a really long time (or, equivalently, from many similar economies) than hundreds or thousands of variables about only one economy.
The fact that our number of observations grows slowly limits the scope for slicing samples into training, cross-validation and test sets. Thus, the focus in macroeconometrics is a lot more on dimension reduction than it is on an unguided search for patterns.
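One concrete consequence: with so few observations, naive random splits would also leak future information into the training set. A small sketch (scikit-learn, illustrative numbers) of the ordered splits one would use instead:

```python
import numpy as np
from sklearn.model_selection import TimeSeriesSplit

# Roughly 50 years of quarterly data
n_obs = 200
tscv = TimeSeriesSplit(n_splits=5)
for train_idx, test_idx in tscv.split(np.arange(n_obs)):
    # Each fold trains only on the past and evaluates on the block after it.
    print(f"train 0..{train_idx[-1]}, test {test_idx[0]}..{test_idx[-1]}")
```

Even so, five folds of a 200-observation sample leave each test block tiny, which is the limitation described above.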
Sarah Bakewell has written “At the Existentialist Café”, a biography of existentialist philosophers intertwined with an overview of their thought.
The author imagines them like this (her emphasis):
These philosophers [Heidegger and Sartre], together with Simone de Beauvoir, Edmund Husserl, Karl Jaspers, Albert Camus, Maurice Merleau-Ponty and others, seem to me to have participated in a multilingual, multisided conversation that ran from one end of the last century to the other. Many of them never met. Still, I like to imagine them in a big, busy café of the mind, probably a Parisian one, full of life and movement, noisy with talk and thought, and definitely an inhabited café.
Bakewell makes the jargon palatable and this is probably the book where I’ve taken the largest number of notes on my Kindle so far. I found the book gripping and couldn’t put it down. That was helped by the fact that the story takes place during the first half of the 20th century and we follow them as they cope with the catastrophes of those times.
And these philosophers came up with beautiful metaphors for the mind: For Heidegger, it’s a clearing in a forest. And more:
When he [Merleau-Ponty] looks for his own metaphor to describe how he sees consciousness, he comes up with a beautiful one: consciousness, he suggests, is like a ‘fold’ in the world, as though someone had crumpled a piece of cloth to make a little nest or hollow. It stays for a while, before eventually being unfolded and smoothed away. There is something seductive, even erotic, in this idea of my conscious self as an improvised pouch in the cloth of the world. I still have my privacy — my withdrawing room. But I am part of the world’s fabric, and I remain formed out of it for as long as I am here.
Her last chapter “The imponderable bloom” makes a great piece by itself. She synthesizes the phenomenologists’ and existentialists’ theories and explains how their arguments entered our view of the world and our search for “authenticity”. We take pleasure in learning how irrational we are as piles of biases and preferences that can be quantified and predicted. Yet fundamentally our minds are free and constraining ourselves to anything else would be Sartre’s “bad faith”, she writes.
Bakewell finishes with this:
When I first read Sartre and Heidegger, I didn’t think the details of a philosopher’s personality or biography were important. This was the orthodox belief in the field at the time, but it also came from my being too young myself to have much sense of history. I intoxicated myself with concepts, without taking account of their relationship to events and to all the odd data of their inventors’ lives. Never mind lives; ideas were the thing. Thirty years later, I have come to the opposite conclusion. Ideas are interesting, but people are vastly more so.
The Yale undergraduate goes to work at McKinsey for two years, then comes to Harvard Business School, then graduates and goes to work at Goldman Sachs and leaves after several years to work at Blackstone. Optionality abounds!
Historically, when inflation rose, stock market returns fell. This changed after the financial crisis of 2008. Since then, inflation and stocks have been positively correlated. Why?
François Gourio and Phuong Ngo have written a paper (pdf) in which they address this question. I explore some of their results here, but please go to the source for the whole picture.
Stocks are claims to the future profits of firms, and firms are free to change their prices when the price level changes. So why stocks react to inflation at all has long puzzled economists.
Whatever drove this correlation, it stopped holding after the last financial crisis. The change in the correlation from negative to positive coincided with the US economy hitting the zero lower bound (ZLB) in 2008. At the ZLB, nominal interest rates are stuck near zero, which leads to many macroeconomic oddities. According to some economic models, positive demand shocks become beneficial at the ZLB, as the inflation they cause reduces real interest rates, which leads firms to invest and consumers to buy.
Inflation used to be a sign of a negative supply shock (bad), but now it’s a sign of a positive demand shock (good at the ZLB). And as stocks give you a slice of the economy’s future expected output, they react positively to higher inflation at the ZLB.
Start with an agent who receives utility from a stream of consumption:

$$E_0 \sum_{t=0}^{\infty} \beta^t \, \frac{C_t^{1-\gamma}}{1-\gamma},$$

where $\beta$ is the agent’s patience, $\gamma$ determines the agent’s risk aversion and $C_t$ is period consumption.
The agent saves in a one-period bond (in zero net supply) with the safe nominal return $I_t$, and the agent optimally decides to save and consume according to this Euler equation:

$$1 = E_t \left[ \beta \left( \frac{C_{t+1}}{C_t} \right)^{-\gamma} \frac{1 + I_t}{1 + \Pi_{t+1}} \right],$$

where $\Pi_{t+1}$ is the uncertain inflation rate. Rearrange and get:

$$1 = E_t \left[ \exp \left( \log\beta - \gamma \, \Delta c_{t+1} + i_t - \pi_{t+1} \right) \right].$$

Here we defined $\Delta c_{t+1} = \log(C_{t+1}/C_t)$, $i_t = \log(1 + I_t)$ and $\pi_{t+1} = \log(1 + \Pi_{t+1})$, and used that for small values $\log(1+x) \approx x$. Assume that inflation and consumption growth are jointly log-normal, such that $\pi_{t+1}$ and $\Delta c_{t+1}$ are normally distributed with means $\mu_\pi$, $\mu_c$, variances $\sigma_\pi^2$ and $\sigma_c^2$ and covariance $\sigma_{c\pi}$.
Applying the rules of the lognormal distribution, we get:

$$i_t = -\log\beta + \gamma \mu_c - \frac{\gamma^2 \sigma_c^2}{2} + \mu_\pi - \frac{\sigma_\pi^2}{2} - \gamma \sigma_{c\pi}.$$

The first three terms here mean that rates are higher, the less patient the agent, the higher expected consumption growth and the less risky consumption (as this reduces precautionary savings). These first three terms we would get even if there was no inflation in this model; they make up the real rate. Call that inflation-free alternative $r_t = -\log\beta + \gamma \mu_c - \gamma^2 \sigma_c^2 / 2$ and insert it:

$$i_t - r_t = \mu_\pi - \frac{\sigma_\pi^2}{2} - \gamma \sigma_{c\pi}.$$
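As a sanity check on this expression (my own, with illustrative parameter values rather than anything from the paper), one can verify by Monte Carlo that the nominal rate it implies indeed satisfies the Euler equation:

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative quarterly parameter values (not from the paper)
beta, gamma = 0.99, 2.0
mu_c, mu_pi = 0.005, 0.005
sig_c, sig_pi, cov_cpi = 0.01, 0.01, -2e-5

# Closed-form nominal rate from the lognormal Euler equation
i = (-np.log(beta) + gamma * mu_c - gamma**2 * sig_c**2 / 2
     + mu_pi - sig_pi**2 / 2 - gamma * cov_cpi)

# Draw consumption growth and inflation jointly normal and check
# that E[beta * exp(-gamma*dc + i - pi)] comes out as 1.
cov = [[sig_c**2, cov_cpi], [cov_cpi, sig_pi**2]]
dc, pi = rng.multivariate_normal([mu_c, mu_pi], cov, size=1_000_000).T
euler = np.mean(beta * np.exp(-gamma * dc + i - pi))
print(round(euler, 4))  # ≈ 1.0
```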
The breakeven rate is the difference between the returns on a safe nominal and a safe real bond. It increases with expected inflation. But the breakeven rate is not the same as expected inflation if inflation is uncertain: it also contains the compensation the agent demands for taking exposure to inflation risk.
The breakeven rate is also greater when inflation and consumption growth are negatively correlated. That is because inflation hurts more when it’s higher at times when consumption is low.1 This markup could even be negative, if $\sigma_{c\pi}$ is positive and – in combination with $\sigma_\pi^2 / 2$ – high enough to compensate for expected inflation. Then, the nominal bond becomes a hedge.
This paper now argues that $\sigma_{c\pi}$ has become more positive at the ZLB.
To argue why, they assume that consumption growth and inflation are driven by a demand and a supply shock. Consumption growth $\Delta c$ depends positively on both shocks, but inflation $\pi$ rises with demand shocks and falls with supply shocks.
Assuming independent, zero-mean shocks with constant variances, this means that the covariance between both variables, $\sigma_{c\pi}$, can be explained by their sensitivities to the shocks and the magnitudes of the shocks. Demand shocks move consumption and inflation in the same direction (pushing $\sigma_{c\pi}$ up), but supply shocks move them in opposite directions (pushing $\sigma_{c\pi}$ down).
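Written out explicitly (in my notation, not necessarily the paper’s), with demand and supply shocks $\eta_t$ and $\epsilon_t$ independent with zero means and all loadings positive:

```latex
\Delta c_{t+1} = a_{\eta}\,\eta_t + a_{\epsilon}\,\epsilon_t, \qquad
\pi_{t+1} = b_{\eta}\,\eta_t - b_{\epsilon}\,\epsilon_t
\quad\Longrightarrow\quad
\sigma_{c\pi} = a_{\eta} b_{\eta}\,\sigma_{\eta}^2 - a_{\epsilon} b_{\epsilon}\,\sigma_{\epsilon}^2
```

A larger variance of demand shocks (or larger loadings on them) pushes $\sigma_{c\pi}$ up; supply shocks push it down.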
Gourio and Ngo offer a neat explanation for why there might have been a change in the prevalence of demand and supply shocks: in normal times, the central bank can offset demand shocks, but at the ZLB it can’t. So the sensitivity of $\Delta c$ and $\pi$ to demand shocks might have risen, and the sensitivity to supply shocks might even have decreased. On net, this would raise the covariance of consumption growth and inflation.
We can now look at the data and check if the covariance between consumption growth and inflation, $\sigma_{c\pi}$, became more positive when the economy hit the ZLB in 2008.
The ideal data would be $\Delta c_{t+1}$ and $\pi_{t+1}$, from which we could ex post calculate their correlations before and after 2008. Consumption is difficult to measure, so the authors take stock prices instead. These are claims to firms’ profits and as such to a piece of the aggregate cake. If the savings rate and the profit share of output don’t change too much, then taking stock returns (the S&P 500 here) as a proxy for consumption growth might be reasonable.
In principle, we could just use monthly realized inflation for $\pi$. But due to the short time period, the authors take the breakeven rate (the difference between the nominal and real 10-year Treasury yields) as a proxy for inflation expectations.2
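The kind of calculation behind this proxy exercise can be sketched as follows; the two series here are synthetic placeholders (in practice one would use daily S&P 500 closes and the 10-year breakeven rate, e.g. from FRED), so the printed correlation is meaningless:

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
dates = pd.bdate_range("2009-01-01", "2012-12-31")

# Synthetic placeholders for the two series
sp500 = pd.Series(
    100 * np.exp(np.cumsum(0.0003 + 0.01 * rng.standard_normal(len(dates)))),
    index=dates,
)
breakeven = pd.Series(
    2.0 + 0.01 * np.cumsum(rng.standard_normal(len(dates))),
    index=dates,
)

# Correlate daily stock returns with daily changes in the breakeven rate
rho = sp500.pct_change().corr(breakeven.diff())
print(f"rho = {rho:.2f}")
```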
The authors focus on the sample between the two black dotted lines. In that period, inflation expectations and the stock market were firmly positively correlated (ρ = 0.47 ± 0.028).
The story becomes very different after 2013, and I wonder why. The federal funds rate was raised for the first time since the crisis only in December 2015. So something else seems to have been driving stocks up and inflation down in the last four years.
Somewhat counterintuitively, the breakeven rate also depends negatively on the variance of inflation. The authors explain how this “Jensen adjustment” comes about. I’m still a bit puzzled, because if “higher uncertainty about inflation leads to higher expected payoffs [for the nominal bond]” (p.6), then I’d expect the opposite sign. Maybe the effect is again through precautionary savings. The authors also write: “This term is typically small.” (p.6) ↩
That makes the argument strangely circular: we derived a formula for the breakeven rate to learn how $\sigma_{c\pi}$ matters for the breakeven rate. And now we take the breakeven rate as a proxy for expected inflation. But we’ve just argued that the breakeven rate is not a perfect proxy for inflation expectations, so I don’t quite see how we can do this here. ↩
A good rule of thumb is that you will want to read any working paper Melissa Dell puts out. Her main interest is the long-run path-dependent effect of historical institutions, with rigorous quantitative investigation of the subtle conditionality of the past.
Every idea taken from elsewhere can be both an asset to the development of a country and a reminder of its comparative backwardness – that is, both a model to be emulated and a threat to its national identity. What appears desirable from the standpoint of progress often appears dangerous to national independence.
In a recent paper (pdf), Olivier Coibion, Yuriy Gorodnichenko and Dmitri Koustas argue that how often we shop matters for measuring consumption inequality.
Inequality is a focus of researchers at the moment, but usually the focus is on inequality in income or wealth. People probably care more about their consumption than their income, so it would be good to know how consumption inequality has evolved.1 That, however, is more difficult to measure. While for income and wealth researchers can rely on tax data, administrative data or plausible self-reported numbers, it’s hard to keep track of a person’s consumption.
The two common ways of measuring people’s consumption are (1) monthly interviews and (2) daily diaries. Consumption inequality as measured by (1) has not risen, but has increased strongly as measured by (2).
The authors’ idea of why shopping frequency matters is straightforward: consumption is not the same as expenditure, as some goods are more durable than others. Expenditures are what we can measure; consumption is unobserved.
Some products (like toilet paper) we buy only infrequently and in bulk. A dataset of daily toilet paper expenditures would therefore show zeros for most people and large purchases for a few. At any point in time it would look as if some people consume a lot of toilet paper and others none, implying very unequal consumption. On items like food and coffee we spend more frequently, so purchase and consumption happen at times not far apart.
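A toy simulation (mine, not the authors’) makes the mechanism visible: everyone consumes identically, but bulk purchases make a daily snapshot of expenditure look very unequal, while aggregating over the full purchase cycle shows no inequality at all:

```python
import numpy as np

n, days, cycle = 1000, 30, 30

# Everyone consumes 1 unit per day but buys `cycle` units at once,
# on staggered days of the purchase cycle.
offsets = np.arange(n) % cycle
spending = np.zeros((n, days))
spending[np.arange(n), offsets] = cycle  # one bulk purchase per person

# Cross-sectional coefficient of variation of measured spending
daily = spending[:, 0]                      # a single day's snapshot
daily_cv = daily.std() / daily.mean()
monthly = spending.sum(axis=1)              # full-cycle aggregate
monthly_cv = monthly.std() / monthly.mean()

print(f"daily CV: {daily_cv:.2f}, monthly CV: {monthly_cv:.2f}")
```

Measured over the whole purchase cycle, expenditure equals consumption; measured daily, the zeros and spikes masquerade as inequality.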
The authors show that people in the U.S. shop less often than they used to and argue that when you adjust for this fact, then consumption inequality has remained flat. They conclude that measuring expenditures over much longer timespans (so not days but months or quarters) is important.
Coibion et al. attribute the reduced frequency of purchases to the rise of club/warehouse stores (e.g. Walmart). They also discuss other possible reasons why people shop less: if people earn higher wages, then the opportunity costs of shopping might have increased. Also, houses are larger now and fridges and freezers have higher quality, so the cost of storage might have decreased.
With more online shopping, people might start buying things much more frequently again. The authors argue that this might reverse the existing trend in the mismeasurement of consumption inequality.
These few lines of Eric’s R code produce the following nice figure:
From this figure it becomes apparent that when banking crises happen, they tend to occur in many countries at once. We can see this happening in the early 1930s and in the 1980s and 1990s. (This sample ends in 2008.)
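The calculation underlying such a figure can be sketched with a hypothetical country-by-year 0/1 panel of banking crises (the real figure uses historical crisis data); years in which the cross-country share spikes are the clusters visible in the figure:

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)

# Hypothetical panel: rows = years, columns = countries, 1 = crisis
years = np.arange(1900, 2009)
countries = [f"country_{i}" for i in range(20)]
crises = pd.DataFrame(
    (rng.random((len(years), len(countries))) < 0.05).astype(int),
    index=years,
    columns=countries,
)

# Share of countries in a banking crisis, year by year
share_in_crisis = crises.mean(axis=1)
print(share_in_crisis.loc[1930:1935])
```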
This observation has led some researchers (e.g. Hélène Rey) to argue for the existence of a global financial cycle.