
Holes, kinks and corners

As Pudney (1989) has observed, microdata exhibit “holes, kinks and corners.” The holes correspond to nonparticipation in the activity of interest, kinks correspond to the switching behavior, and corners to the incidence of nonconsumption or nonparticipation at specific points of time.

This is from “Microeconometrics” by Colin Cameron and Pravin Trivedi and refers here.

Collected links

  1. FRED: “Of places and patents”. And this.
  2. Language Log: “Replicate vs. reproduce”
  3. A nice quantile regression use case
  4. The Guardian on the Oxford English Dictionary
  5. Geopolitical Hedging as a Service
  6. “How Making Something Better Can Make Everything Worse”, which links here
  7. Cool NYT figure
  8. stop() / return() Early for Shorter Code
  9. Martin Thoma:

    Example 2: This one is actually confusing. Likely also with context.

    EN: Professors say, students are doing well.
    DE: Professoren sagen, Studenten haben es gut.
    DE: Professoren, sagen Studenten, haben es gut.
    EN: Professors, students say, are doing well.

  10. Jason Collins: “Angela Duckworth’s ‘Grit’: The Power of Passion and Perseverance”

Term spreads and business cycles 3: The view from history

In the first installment in this series, I documented that term spreads tend to fall before recessions. In the second part I looked at monetary policy and showed that it’s the endogenous reaction of monetary policy that investors predict. I worked with post-war data for the US in both parts, but we can extend this analysis across countries and use longer time series.

Start a new analysis and load some packages
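
The exact list isn't shown here; inferred from the functions called in this post, it's roughly:

library(haven)      # read_dta() for importing Stata files
library(dplyr)      # data wrangling and the pipe
library(tidyr)      # gather() and spread()
library(tibble)     # tibble()
library(ggplot2)    # plots
library(ggthemes)   # theme_tufte()
library(scales)     # percent() axis labels
library(plm)        # panel regressions
library(stargazer)  # regression tables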


The dataset by Jordà, Schularick and Taylor (2017) is great and provides annual macroeconomic and financial variables for 17 countries since 1870. We can use the Stata dataset from their website like this:

# Path to the Stata file on the authors' website (URL omitted here)
mh <- read_dta("")

# Collect the Stata variable labels in a lookup table
lbls <- data.frame(var = names(mh),
                   label = vapply(mh, attr, FUN.VALUE = character(1),
                                  "label", USE.NAMES = FALSE),
                   stringsAsFactors = FALSE)

mh <- suppressWarnings(tidyr::gather(mh, var, value, -year, -country, -iso, -ifs))
mh <- merge(mh, lbls)
mh <- mh[, c("year", "country", "iso", "ifs", "var", "label", "value")]
mh <- structure(mh, class = c("tbl_mh", class(mh)))
mh$value[is.nan(mh$value)] <- NA

Importing Stata datasets with the haven package is pretty neat. In the RStudio environment, the columns even display the original Stata labels.

There are two interest rate series in the data and the documentation explains that most short-term rates are a mix of money market rates, bank lending rates and government bonds. Long-term rates are mostly government bonds.

Homer and Sylla (2005) explain why we usually study safe rates:

The method of using minimum rates to determine interest rate trends is informative. Today the use of “prime rates” and AAA averages is customary to indicate interest rate trends. There is a very large range of rates higher than minimum rates at all times, and there is no top limit except legal maxima. Averages of rates, if they did exist, might be merely averages of good credits with bad credits. The lowest regularly reported rates, excluding eccentric rates, comprise a practical limit comparable over time. Minimum rates will not show us where most funds were lending, but they should provide a fair index number for measuring long-term interest rate trends. (p.140)


The level of interest rates is a more complex concept than the trend of interest rates. (p.555)

So let’s plot those interest rates:

mh %>% 
  filter(var %in% c("stir", "ltrate")) %>% 
  ggplot(aes(year, value / 100, color = var)) +
  geom_hline(yintercept = 0, size = 0.3, color = "grey80") +
  geom_line() +
  facet_wrap(~country, scales = "free_y", ncol = 3) +
  theme_tufte(base_family = "Helvetica") +
  scale_y_continuous(labels=percent) +
  labs(y = "Interest rate", 
       title = "Long-term and short-term interest rates over the long run",
       subtitle = "1870-2013, nominal rates in percent per year.",
       caption = "Source:",
       color = "Interest rate:") +
  theme(legend.position = c(0.85, 0.05)) +
  scale_color_manual(labels = c("long-term", "short-term"), 
                     values = c("#ef8a62", "#67a9cf"))

Which gets us:

Long-term and short-term interest rates for 17 countries since 1870

Interest rates were high everywhere during the 1980s when inflation ran high. Rates were quite low in the 19th century. There are also some interesting movements in the 1930s.

Next, select GDP and interest rates, calculate the term spread and lag it:

df <- mh %>% 
  select(-label) %>% 
  filter(var %in% c("rgdpmad", "stir", "ltrate")) %>% 
  spread(var, value) %>% 
  mutate(trm_spr = ltrate - stir) %>% 
  arrange(country, year) %>% 
  group_by(country) %>%   # lag within countries, not across country borders
  mutate(trm_spr_l = dplyr::lag(trm_spr, 1),
         gr_real = 100*(rgdpmad - dplyr::lag(rgdpmad, 1)) / dplyr::lag(rgdpmad, 1)) %>% 
  ungroup()

Add a column of 1, 2, … T for numerical dates:

df <- df %>% 
  left_join(tibble(year = min(df$year):max(df$year), 
                   numdate = seq_along(min(df$year):max(df$year))),
            by = "year")

Check for extreme GDP events (to potentially exclude them):

df <- df %>%
  mutate(outlier_gr_real = abs(gr_real) > 15)   # flag |growth| above 15 percent

Scatter the one-year-before term spread against subsequent real GDP growth:

df %>% 
  filter(!outlier_gr_real) %>% 
  ggplot(aes(gr_real / 100, trm_spr_l / 100)) +
  geom_hline(yintercept = 0, size = 0.3, color = "grey50") +
  geom_vline(xintercept = 0, size = 0.3, color = "grey50") +
  geom_smooth(method = "lm", size = 0.3, color = "#b2182b", fill = "#fddbc7") +
  geom_jitter(aes(color = numdate), size = 1, alpha = 0.6, stroke = 0,
              show.legend = FALSE) +
  facet_wrap(~country, scales = "free", ncol = 3) +
  theme_tufte(base_family = "Helvetica") +
  scale_y_continuous(labels=percent) +
  scale_x_continuous(labels=percent) +
  labs(title = "Lagged term spread vs. real GDP growth",
       subtitle = "1870-2013, annually.",
       caption = "Source:",
       x = "Real GDP growth",
       y = "Term spread")

Which creates:

Lagged term spread against real GDP growth

Lighter shades of blue in the markers signal earlier dates. The clouds are quite mixed, so correlations don’t seem to just portray time trends in the variables.

The relationship doesn’t look as clearly positive as in our previous analysis. Let’s dig in further using a panel regression. I estimate two models where the second excludes outliers as defined above. I also control for time and country fixed effects.

r1 <- plm(gr_real ~ trm_spr_l, 
          data = df, 
          index = c("country", "year"), 
          model = "within", 
          effect = 'twoways')

r2 <- df %>% 
  filter(!outlier_gr_real) %>% 
  plm(gr_real ~ trm_spr_l, 
      data = ., 
      index = c("country", "year"), 
      model = "within", 
      effect = 'twoways')

stargazer(r1, r2, type = "html")

This produces:

                       Dependent variable: Real GDP growth
                       (1)                      (2) excluding outliers
------------------------------------------------------------------------
Term spread (lagged)   0.108*                   0.120**
------------------------------------------------------------------------
Adjusted R²            -0.075                   -0.074
F Statistic            3.474* (df = 1; 2097)    6.627** (df = 1; 2055)
------------------------------------------------------------------------
Note: *p<0.1; **p<0.05; ***p<0.01

So this dataset, too, suggests that lower term spreads tend to be followed by recessions.


Homer, S. and R. Sylla (2005). A History of Interest Rates, Fourth Edition. Wiley Finance.

Jordà, Ò., M. Schularick and A. M. Taylor (2017). “Macrofinancial History and the New Business Cycle Facts”. NBER Macroeconomics Annual 2016, volume 31, edited by Martin Eichenbaum and Jonathan A. Parker. Chicago: University of Chicago Press. (link)


Models and Surprise

Hadley Wickham is a statistician and programmer and the creator of popular R packages such as ggplot2 and dplyr. His status in the R community has risen to such mythical levels that the set of packages he created was called the hadleyverse (since renamed the tidyverse).

In a talk, he describes what he considers a sensible workflow and explains the following dichotomy between data visualization and quantitative modeling:

But visualization fundamentally is a human activity. This is making the most of your brain. In visualization, this is both a strength and a weakness […]. You can see something in a visualization that you did not expect and no computer program could have told you about. But because a human is involved in the loop, visualization fundamentally does not scale.

And so to me the complementary tool to visualization is modeling. I think of modeling very broadly. This is data mining, this is machine learning, this is statistical modeling. But basically, whenever you’ve made a question sufficiently precise, you can answer it with some numerical summary, summarize it with some algorithm, I think of this as a model. And models are fundamentally computational tools which means that they can scale much, much better. As your data gets bigger and bigger and bigger, you can keep up with that by using more sophisticated computation or simply just more computation.

But every model makes assumptions about the world and a model – by its very nature – cannot question those assumptions. So that means: on some fundamental level, a model cannot surprise you.

That definition excludes many economic models. I think of the insights of models such as Akerlof’s Lemons and Peaches, Schelling’s segregation model or the “true and non-trivial” theory of comparative advantage as surprising.

Term spreads and business cycles 2: The role of monetary policy

In the first part of this series I showed that term spreads can be used to predict real GDP about a year out. This pattern comes about because investors expect the central bank to lower short-term interest rates.

But we don’t know what’s causing what. Is the central bank driving business cycles or is it just responding to a change in the economic environment?

This matters for how we interpret the pattern we found. Investors could either have expectations about the business cycle or about arbitrary decisions by the central bank.

The central bank’s main tool is changing the interest rate at which banks lend to each other overnight, the federal funds rate. In this post, I will look at how the Fed Funds rate comoves with the term spread and how the unexpected component in that rate (the “shock”) is related to it.


First, run all the code from the previous post.

Get the Fed Funds rate and calculate how it changes between this month and the same month next year:

fd <- fred$series.observations(series_id = "FEDFUNDS") %>%
  mutate(date = as.yearmon(date)) %>% 
  select(date, ff = value) %>% 
  mutate(ff = as.numeric(ff)) %>% 
  full_join(fd) %>% 
  mutate(ff_ch = dplyr::lead(ff, 12) - ff)

Make the same scatterplot as before:

fd %>% 
  filter(date <= 2008) %>% 
  ggplot(., aes(ff_ch, trm_spr)) +
  geom_hline(yintercept = 0, size = 0.3, color = "grey50") +
  geom_vline(xintercept = 0, size = 0.3, color = "grey50") +
  geom_point(alpha = 0.7, stroke = 0, size = 3) +
  geom_smooth(method = "lm", size = 0.2, color = "#ef8a62", fill = "#fddbc7") +
  theme_tufte(base_family = "Helvetica") +
  labs(title = "Term Spread 1 Year Earlier and Change in Fed Funds Rate",
       subtitle = "1953-2017, monthly.",
       x = "Change in Fed Funds (compared to 12 months ago)",
       y = "Term spread (lagged 1 year)")

Which gets:

Term spread against changes in the Fed Funds rate

So the pattern is still there. The term spread drops a year before the Fed Funds rate falls.


Identifying plausibly exogenous variation in monetary policy is the gold standard of monetary economics. A host of approaches have been proposed, but basically every course on empirical macroeconomics starts with the shock series by Romer and Romer (2004).1 This paper filters out the endogenous response of monetary policy to movements in other economic variables by regressing the fed funds rate on variables that matter for the central bank’s decision, such as GDP, inflation and the unemployment rate.
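
To make the idea concrete, here is a stylized sketch of such a regression in R. The variable names and simulated data are mine, and the specification is far simpler than Romer and Romer's actual one, which uses the Fed's internal Greenbook forecasts:

set.seed(1)

# Simulated stand-ins for the forecasts the Fed sees before each meeting
meetings <- data.frame(gdp_forecast       = rnorm(120),
                       inflation_forecast = rnorm(120),
                       unemployment       = rnorm(120))

# Pretend the intended funds rate change responds to the forecasts plus noise
meetings$d_ff <- 0.5 * meetings$inflation_forecast -
  0.3 * meetings$unemployment + rnorm(120)

# The residual is the part of the rate change not explained by the
# economic outlook: the monetary policy "shock"
rr_reg <- lm(d_ff ~ gdp_forecast + inflation_forecast + unemployment,
             data = meetings)
shock  <- resid(rr_reg)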

I won’t reproduce their analysis here, but just take their shock series from the journal page. For this, we also need the following package to read Excel data:

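library(readxl)   # provides read_excel()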

The following code goes to the AER website, downloads the files into a temporary folder (so we don’t have to delete them manually afterwards), unzips the data appendix and reads the relevant sheet:

td <- tempdir()
tf <- tempfile(tmpdir = td, fileext = ".zip")
download.file("", tf)   # URL of the replication zip omitted here
unzip(tf, files = "RomerandRomerDataAppendix.xls", exdir = td, overwrite = TRUE)
fpath <- file.path(td, "RomerandRomerDataAppendix.xls")
rr <- read_excel(fpath, "DATA BY MONTH")

Plot the shock series:

ggplot(rr, aes(DATE, RESID)) +
  geom_line() +
  theme_minimal() +
  labs(title = "Romer and Romer (2004) monetary shock",
       subtitle = "1966-1996, monthly.",
       x = "month",
       y = "Romer-Romer shock")

Romer-Romer monetary policy shocks

Merge the rr dataframe with our previous fd dataset:

fd <- fd %>% 
  left_join(rr %>% 
              gather(var, val, -DATE) %>% 
              mutate(val = ifelse(val == "NA", NA, val),
                     val = as.numeric(val),
                     DATE = as.yearmon(DATE)) %>% 
              filter(var == "RESID") %>% 
              spread(var, val) %>% 
              select(date = DATE, shock = RESID))

Make the plot:

ggplot(fd, aes(shock, trm_spr)) +
  geom_hline(yintercept = 0, size = 0.3, color = "grey50") +
  geom_vline(xintercept = 0, size = 0.3, color = "grey50") +
  geom_point(alpha = 0.7, stroke = 0, size = 3) +
  geom_smooth(method = "lm", size = 0.2, color = "#ef8a62", fill = "#fddbc7") +
  theme_tufte(base_family = "Helvetica") +
  labs(title = "Term Spread 1 Year Earlier and Romer-Romer Monetary Policy Shock",
       subtitle = "1966-1996, monthly.",
       x = "Romer-Romer shock",
       y = "Term spread (lagged 1 year)")

Which creates:

Term spread against Romer-Romer monetary policy shocks

Now the pattern is gone.

What I’m learning from this is that term spreads are informative about the endogenous component of monetary policy. Investors have sensible expectations about when the central bank will lower interest rates due to slowing economic activity.


Bernanke, B., J. Boivin and P. Eliasz (2005). “Measuring the Effects of Monetary Policy: A Factor-Augmented Vector Autoregressive (FAVAR) Approach”, Quarterly Journal of Economics. (link)

Christiano, L., M. Eichenbaum and C. Evans (1996). “The Effects of Monetary Policy Shocks: Some Evidence from the Flow of Funds”, Review of Economics and Statistics. (link)

Nakamura, E. and J. Steinsson (2018). “High-Frequency Identification of Monetary Non-Neutrality: The Information Effect”, Quarterly Journal of Economics. (link)

Romer, C. D. and D. H. Romer (2004). “A New Measure of Monetary Shocks: Derivation and Implications”, American Economic Review. (link)

Uhlig H. (2005). “What Are the Effects of Monetary Policy on Output? Results from an Agnostic Identification Procedure”, Journal of Monetary Economics. (link)

  1. Some other approaches are: orderings in a vector autoregression (VAR), sign restrictions, high-frequency identification and factor-augmented VARs. 

Cosine similarity

A typical problem when analyzing large amounts of text is trying to measure the similarity of documents. An established measure for this is cosine similarity.


It’s the cosine of the angle between two vectors. For the nonnegative word-count vectors we consider here, two vectors have a maximum cosine similarity of 1 if they are parallel and a minimum of 0 if they are perpendicular to each other.

Say you have two documents $d_1$ and $d_2$. Write these documents as vectors $\vec{w}_1, \vec{w}_2 \in \mathbb{N}_0^V$, where $V$ is the length of the pooled dictionary of all words that show up in either document. An entry $w_{i,j}$ is the number of occurrences of word $j$ in document $i$. Cosine similarity is then (Manning et al. 2008):

$$\text{sim}(d_1, d_2) = \frac{\vec{w}_1 \cdot \vec{w}_2}{\lVert \vec{w}_1 \rVert \, \lVert \vec{w}_2 \rVert}$$

Given that entries can't be negative, cosine similarity will always take nonnegative values. The division by $\lVert \vec{w}_1 \rVert \, \lVert \vec{w}_2 \rVert$ in the denominator normalizes for document length and bounds values between 0 and 1.

Cosine similarity is equal to the usual (Pearson’s) correlation coefficient if we first demean the word vectors.
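
We can verify this equivalence with a quick one-off check in R (the examples below use Matlab; this snippet just restates the point):

# Cosine similarity of two vectors
cosine_sim <- function(x, y) sum(x * y) / (sqrt(sum(x^2)) * sqrt(sum(y^2)))

w1 <- c(1, 0, 0)
w3 <- c(1, 0, 10)

# After demeaning, cosine similarity coincides with Pearson correlation
cosine_sim(w1 - mean(w1), w3 - mean(w3))   # -0.4193
cor(w1, w3)                                # -0.4193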


Consider a dictionary of three words. Let’s define (in Matlab) three documents that contain some of these words:

w1 = [1; 0; 0];
w2 = [0; 1; 1];
w3 = [1; 0; 10];

W = [w1, w2, w3];

Calculate the correlation between these:

corr(W)

Which gets us:

ans =

    1.0000   -1.0000   -0.4193
   -1.0000    1.0000    0.4193
   -0.4193    0.4193    1.0000

Documents 1 and 2 have the lowest possible correlation, documents 2 and 3 are positively correlated, and documents 1 and 3 are negatively correlated.

Define a function for cosine similarity:

function cs = cosine_similarity(x1, x2)
  % Euclidean lengths of the two word vectors
  l1 = sqrt(sum(x1 .^ 2));
  l2 = sqrt(sum(x2 .^ 2));

  % Dot product, normalized by the product of the vector lengths
  cs = (x1' * x2) / (l1 * l2);
end

And calculate the values for our word vectors:

cosine_similarity(w1, w2)
cosine_similarity(w2, w3)
cosine_similarity(w1, w3)

Which gets us:

ans =

     0


ans =

    0.7036


ans =

    0.0995

Documents 1 and 2 again have the lowest possible similarity. The association between documents 2 and 3 is especially high, as both contain the third word in the dictionary which also happens to be of particular importance in document 3.

Demean the vectors and then run the same calculation:

cosine_similarity(w1 - mean(w1), w2 - mean(w2))
cosine_similarity(w2 - mean(w2), w3 - mean(w3))
cosine_similarity(w1 - mean(w1), w3 - mean(w3))


Which gets us:

ans =

   -1.0000


ans =

    0.4193


ans =

   -0.4193

They’re indeed the same as the correlations.


Manning, C. D., P. Raghavan and H. Schütze (2008). Introduction to Information Retrieval. Cambridge University Press. (link)

Leonardo da Vinci

In Walter Isaacson’s new Leonardo da Vinci biography:

The more than 7,200 pages now extant probably represent about one-quarter of what Leonardo actually wrote, but that is a higher percentage after five hundred years than the percentage of Steve Jobs’s emails and digital documents from the 1990s that he and I were able to retrieve.

I also liked this:

Leonardo’s Vitruvian Man embodies a moment when art and science combined to allow mortal minds to probe timeless questions about who we are and how we fit into the grand order of the universe. It also symbolizes an ideal of humanism that celebrates the dignity, value, and rational agency of humans as individuals. Inside the square and the circle we can see the essence of Leonardo da Vinci, and the essence of ourselves, standing naked at the intersection of the earthly and the cosmic.

Term spreads and business cycles 1

This is the first part of a series of posts on term spreads and business cycles. There'll probably be three parts.

The term spread is the return differential between a long-term and a short-term safe bond. We can use this to learn about how market participants expect the economy to perform over the next year or so.

My explanation loosely follows Stephen Cecchetti and Kermit Schoenholtz’s book “Money, Banking, and Financial Markets”.


Consider an investor who either buys a long-term bond running for two years that pays $i_{2t}$ per year, or invests in two subsequent short-term bonds. If he chooses the second option, the bond pays $i_{1t}$ in the first year and he expects to earn $i^e_{1,t+1}$ in the second year. If we neglect any risk, it’s plausible to assume that interest rates adjust such that the investor earns the same using either strategy:

$$(1 + i_{2t})^2 = (1 + i_{1t})(1 + i^e_{1,t+1})$$

When we evaluate this expression and omit all terms that multiply two rates (these will typically be small), we get:

$$i_{2t} = \frac{i_{1t} + i^e_{1,t+1}}{2}$$

We can extend this to any maturity of $n$ years:

$$i_{nt} = \frac{i_{1t} + i^e_{1,t+1} + \dots + i^e_{1,t+n-1}}{n}$$

So long-term interest rates are composed of expectations about future short-term interest rates.
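
To see that dropping the cross-terms is harmless at realistic interest rates, here is a quick check with made-up numbers:

i1  <- 0.02   # one-year rate today (made up)
i1e <- 0.04   # expected one-year rate next year (made up)

# Exact two-year rate that equalizes the two strategies
i2_exact <- sqrt((1 + i1) * (1 + i1e)) - 1

# Approximation after dropping products of two rates
i2_approx <- (i1 + i1e) / 2

c(i2_exact, i2_approx)   # 0.0299515 0.0300000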

Combine this with what Cecchetti and Schoenholtz call the Liquidity Premium Theory. Returns that accrue farther into the future are more risky, as the bond issuer may be bankrupt and we don’t know what inflation will be. This means that rates on bonds with longer maturities are usually higher.

The authors add a factor $rp_n$ (the risk premium of a bond running $n$ years) to the original equation:

$$i_{nt} = rp_n + \frac{i_{1t} + i^e_{1,t+1} + \dots + i^e_{1,t+n-1}}{n}$$

As $rp_n$ is higher for greater $n$, interest rates will tend to be higher for longer maturities.

The term spread is then:

$$i_{nt} - i_{1t} = (rp_n - rp_1) + \frac{i_{1t} + i^e_{1,t+1} + \dots + i^e_{1,t+n-1}}{n} - i_{1t}$$

A positive term spread can mean two things: either we expect the average future short-term interest rate to rise, or the difference between the two risk premia ($rp_n - rp_1$) has increased. We would expect this difference to be positive anyway, but it might widen even more when inflation becomes more uncertain or debt becomes riskier. But disentangling the two explanations is difficult.

It’s more interesting when the term spread turns negative. The difference between the risk premia probably stays positive, so investors expect short-term interest rates to decrease.

Short-term interest rates are mostly under the control of the central bank, so this probably means that people expect monetary policy to loosen and that the central bank lends more liberally to banks.

And why would the central bank do that? That’s usually to avert a looming recession and buffer negative shocks. Given that the central bank also responds to changes in the economic environment, it’s not clear what’s causing what here.

But either way, when term spreads turn negative, investors expect bad things. This is why the term spread tells us something about investors’ expectations.


Let’s look at the empirical evidence for this, as argued and presented by the authors. They kindly provide FRED codes with all their plots, so they’re easy to reproduce. First get some packages in R:

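Judging from the functions used in this series, that's roughly the following (FredR is a GitHub package, installable e.g. with devtools):

library(FredR)      # FRED API wrapper, provides FredR()
library(dplyr)      # data wrangling and the pipe
library(tidyr)      # gather(), drop_na()
library(zoo)        # as.yearmon(), as.yearqtr()
library(lubridate)  # year(), quarter()
library(ggplot2)    # plots
library(ggthemes)   # theme_tufte()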

Insert your FRED API key below:

api_key <- "yourkeyhere"
fred    <- FredR(api_key)

Pull data on 10 year and 3 month treasury bill rates:

# Long-term interest rates
fd <- fred$series.observations(series_id = "GS10") %>%
  mutate(date = as.Date(date)) %>% 
  select(date, t10 = value) %>% 
  mutate(t10 = as.numeric(t10))

# Short-term interest rates
fd <- fred$series.observations(series_id = "TB3MS") %>%
  mutate(date = as.Date(date)) %>% 
  select(date, tb3m = value) %>% 
  mutate(tb3m = as.numeric(tb3m)) %>% 
  full_join(fd, by = "date")   # join to the 10-year series, don't overwrite it

# Turn dates into month-years and order by dates
fd <- fd %>% 
  mutate(date = as.yearmon(date)) %>% 
  arrange(date)

Plot the two series:

fd %>% 
  gather(var, val, -date) %>% 
  ggplot(aes(date, val, color = var)) +
  geom_hline(yintercept = 0, size = 0.3, color = "grey80") +
  geom_line() +
  labs(title = "The Term Structure of Treasury Interest Rates",
       subtitle = "1934-2017, monthly",
       x = "month",
       y = "yield",
       color = "Treasury bill:") +
  scale_color_manual(labels = c("10-year", "3-month"),
                     values = c("#ef8a62", "#67a9cf"))  +
  theme_tufte(base_family = "Helvetica")

Which produces:

Treasury bond yields, 10 year and 3 month since 1934 monthly

The series on 3-month Treasury bond yields starts in 1934 and the other series on 10-year yields starts in 1953. Both series peak in the early 80s when inflation ran high. As argued before, long-run interest rates are usually above short-run interest rates. The term spread is the difference between the two:

fd <- fd %>% 
  mutate(trm_spr = t10 - tb3m)

Things become more interesting when we compare the behavior of the term spread with real GDP growth rates:

# Get quarterly GDP
qt <- fred$series.observations(series_id = "GDPC1") %>%
  mutate(yq = as.yearqtr(paste0(year(date), " Q", quarter(date)))) %>% 
  select(yq, rgdp = value) %>% 
  mutate(rgdp = as.numeric(rgdp),
         gr = 100*(rgdp - dplyr::lag(rgdp, 4)) / dplyr::lag(rgdp, 4))

# Aggregate monthly to quarterly and get lag
qt <- fd %>% 
  mutate(yq = as.yearqtr(date)) %>% 
  group_by(yq) %>% 
  summarize(trm_spr = mean(trm_spr)) %>% 
  mutate(trm_spr_l = dplyr::lag(trm_spr, 4)) %>% 
  full_join(qt, by = "yq") %>% 
  filter(yq >= as.yearqtr("1947 Q1"))

Make a plot of the two series:

qt %>% 
  select(-rgdp, -trm_spr_l) %>% 
  drop_na(trm_spr) %>% 
  gather(var, val, -yq) %>% 
  ggplot(aes(yq, val, color = var)) +
  geom_hline(yintercept = 0, size = 0.3, color = "grey80") +
  geom_line() +
  labs(title = "Current Term Spread and GDP Growth",
       subtitle = "1953-2017, quarterly",
       x = "quarter",
       y = "percentage points",
       color = "Variables:") +
  theme_tufte(base_family = "Helvetica") +
  scale_color_manual(labels = c("Real GDP growth", "Term spreads (lagged)"),
                     values = c("#ef8a62", "#67a9cf"))

We get:

Term spread against GDP growth

The term spread often falls before GDP growth does. When the term spread turns negative, recessions tend to happen. Compare the lagged term spread to GDP growth:

qt %>% 
  filter(yq >= 1953) %>% 
  ggplot(aes(gr, trm_spr_l)) +
  geom_hline(yintercept = 0, size = 0.3, color = "grey50") +
  geom_vline(xintercept = 0, size = 0.3, color = "grey50") +
  geom_point(alpha = 0.7, stroke = 0, size = 3) +
  labs(title = "Term Spread 1 Year Earlier and GDP Growth",
       subtitle = paste0("1953-2017, quarterly. Correlation: ", 
                         round(cor(qt$gr, qt$trm_spr_l, 
                                   use = "complete.obs"), 2), "."),
       x = "growth rate of real GDP",
       y = "Term spread (lagged 1 year)") +
  geom_smooth(method = "lm", size = 0.2, color = "#ef8a62", fill = "#fddbc7") +
  theme_tufte(base_family = "Helvetica")

Which gets us:

Term spread 1 year earlier and GDP growth

This is exactly the relationship that makes people think of term spreads as a good predictor of GDP in the near future.


Cecchetti and Schoenholtz summarize it like this (p.180):

[…] [I]nformation on the term structure – particularly the slope of the yield curve – helps us to forecast general economic conditions. Recall that according to the expectations hypothesis, long-term interest rates contain information about expected future short-term interest rates. And according to the liquidity premium theory, the yield curve usually slopes upward. The key statement is usually. On rare occasions, short-term interest rates exceed long-term yields. When they do, the term structure is said to be inverted, and the yield curve slopes downward.

[…] Because the yield curve slopes upward even when short-term yields are expected to remain constant – it’s the average of expected future short-term interest rates plus a risk premium – an inverted yield curve signals an expected fall in short-term interest rates. […] When the yield curve slopes downward, it indicates that [monetary] policy is tight because policymakers are attempting to slow economic growth and inflation.

We still don’t know what’s causing what. Is the central bank the driver of business cycles or is it just responding to a change in the economic environment?

Stay tuned for the next installment in this series in which I’ll look at the role of monetary policy.


Cecchetti, S. G. and K. L. Schoenholtz (2017). “Money, Banking, and Financial Markets”. 5th edition, McGraw-Hill Education. (link)

Presentations at the AEA

I presented at this year’s AEA in a session on automation. Apart from the papers in that session, I also enjoyed the following:

  • Lifecycle of Inventors
  • Susan Athey presented “Online Intermediaries and the Consumption of Polarized and Inaccurate News During the 2016 Presidential Election” in this session. They estimate the political leanings of the media that people consumed during the 2016 presidential election and document that most of it is left of center. But the strongest result is the lack of reliable right-wing media: she explained that if they hadn’t coded Fox News as at least moderately accurate, there would be no reliable right-wing media outlet at all.
  • Automation and the Workforce
  • John Horton’s “Labor Market Equilibration: Evidence from Uber” (pdf).
  • Trade and Innovation
  • “Shopping in Macroeconomics” (best session title!)
  • “Credit Booms, Aggregate Demand, and Financial Crises”, which included a new paper by Matthew Baron, Emil Verner and Wei Xiong. They painstakingly digitized historical bank equity returns to create a new financial crisis indicator for 47 countries since 1800.