The NBER co-incident Recession Model – “confirmation of last resort”

NOTE: AFTER READING THIS, ALSO TAKE A LOOK AT "The effect of data revisions on the NBER recession model" and "Estimating Recession Probabilities using GDP/GDI".

The National Bureau of Economic Research (NBER) is the final arbiter of recession dating in the U.S. It takes a long time to proclaim specific starts and ends of expansions so that all the data revisions can "work their way through" and the dating can be dead accurate. Given that these proclamation lags can run up to 12 months, the announcements are useful for historical, academic and back-testing purposes only. Now, given that many reputable people are claiming we (1) are already in recession or (2) are about to enter one, let us set all our fancy models aside and look hard at what the NBER will be looking at.

The NBER does not define a recession in terms of two consecutive quarters of decline in real GDP. Rather, a recession is a significant decline in economic activity spread across the economy, lasting more than a few months, normally visible in real GDP, real income, employment, industrial production, and wholesale-retail sales.

They will be examining 4 monthly co-incident indicators:

  1. Industrial Production
  2. Real personal income less transfers, deflated by the personal consumption expenditures (PCE) deflator
  3. Non-farm payrolls
  4. Real retail sales, deflated by the consumer price index (CPI)

The NBER gives less weight to items 1 and 4, as they are sectoral measures of manufacturing and retail rather than broad measures of the economy. The NBER also pays close attention to the two broadest measures of the economy, Gross Domestic Product (GDP) and Gross Domestic Income (GDI), which are, however, only published quarterly. We provide a separate analysis, the "GDPI Report", that examines these two components in detail as described in our research note "Estimating Recession Probabilities from GDP/GDI".

The aim of this research note is to apply traditional recession forecasting and probability modelling techniques to the 4 co-incident monthly indicators examined by the NBER so that we can "see what the NBER is seeing." Bear in mind that the 4 components are co-incident, so the recession model we build is likely to lag by at least 1 month in its determinations. The aim here is not "real-time forecasting" of recessions (we use the Recession Forecasting Ensemble for this) but to obtain "confirmation of last resort" that we are indeed in recession.

The data for the 4 monthly co-incident NBER indicators are taken from monthly updated charts at the Federal Reserve Bank of St. Louis, which give "some idea" of whether we are in recession or not, but make it difficult to determine how far from recession we are. The second determination matters rather more to the stock market operator or fund manager than the first.

The model we build and demonstrate here, the "Monthly NBER Model", is combined with our quarterly (but monthly-revised) "GDPI Model" to give subscribers a comprehensive view of ALL the indicators the NBER will be examining in its business cycle determinations. In most cases the models will provide recession signals up to 8 months before the NBER itself proclaims official dates.

1. Industrial Production
In this instance we have found the 6-month smoothed growth rate (as originally defined by Prof. Moore) to work the best for signalling recession:

2. Real Personal Income
In this instance we have found the 3-month smoothed growth rate to work best for signalling recession. This is short enough to be sensitive to rapid changes in personal incomes without whipsawing you with false positives, as shown below:

3. Non-farm payroll employment
In this instance we have found the 3-month smoothed growth rate less 0.415 to work best for signalling recession. This is short enough to be sensitive to rapid changes in employment without whipsawing you with false positives, as shown in the chart below:

4. Real Retail Sales
In this instance we have found the 12-month % change (not smoothed) to work best for signalling recession; a shorter period results in too many false positives. The growth rate is shown below:
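As a rough illustration of the four transforms described in sections 1 to 4, the sketch below computes them with pandas. The smoothed growth rate here uses a common approximation of Prof. Moore's formula (latest value relative to the mean of the preceding 12 months, annualized); the column names, the underlying FRED series and the handling of the 0.415 offset are assumptions for illustration, not the exact production code.

```python
import pandas as pd

def smoothed_growth(series: pd.Series, months: int) -> pd.Series:
    """Approximation of the Moore-style smoothed annualized growth rate:
    latest value relative to the mean of the preceding 12 months,
    annualized over the chosen horizon, in percent. (The classic 6-month
    version uses an exponent of 12/6.5.)"""
    trailing_mean = series.shift(1).rolling(12).mean()
    return 100.0 * ((series / trailing_mean) ** (12.0 / months) - 1.0)

def nber_component_growth(df: pd.DataFrame) -> pd.DataFrame:
    """df holds the four monthly co-incident series, already deflated where
    applicable (industrial production, real income, payrolls, real retail sales)."""
    return pd.DataFrame({
        "indprod":  smoothed_growth(df["indprod"], 6),            # section 1
        "income":   smoothed_growth(df["income"], 3),             # section 2
        "payrolls": smoothed_growth(df["payrolls"], 3) - 0.415,   # section 3
        "retail":   100.0 * df["retail"].pct_change(12),          # section 4
    })
```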

5. Syndrome Diffusion Index
We have also found 4 thresholds below which each co-incident indicator's growth shown above must respectively fall in order to contribute to a recession "syndrome". They are as follows:

  • Industrial Production -1.77%
  • Personal Income -0.25%
  • Payroll Employment +0.89%
  • Retail Sales +0.44%

When an indicator falls below its syndrome trigger, it means nothing on its own and merely counts as a vote toward the Syndrome Index. We then subtract the number of votes from 2 to obtain a Recession Syndrome Diffusion Index recession dating model. When the index falls below zero (more than 2 indicators are below their syndrome triggers) we call recession. The syndrome triggers were obtained from an optimization program that finds the values maximizing the Area Under Curve (AUC, from the ROC curve) of the resulting recession dating model. These are the values for which the model produces the fewest false positives, the fewest false negatives and the highest recession/expansion sorting score. The use of this index considerably enhances the accuracy of the NBER recession dating model we are leading up to:
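A minimal sketch of the syndrome voting logic, assuming the four growth series computed above are columns of a DataFrame named growth:

```python
import pandas as pd

# Syndrome triggers from the optimization described above
TRIGGERS = {"indprod": -1.77, "income": -0.25, "payrolls": 0.89, "retail": 0.44}

def syndrome_diffusion_index(growth: pd.DataFrame) -> pd.Series:
    """Each indicator below its trigger casts one vote; the index is
    2 minus the number of votes, so a reading below zero (3 or 4 votes)
    constitutes a recession call."""
    votes = sum((growth[name] < trigger) for name, trigger in TRIGGERS.items())
    return 2 - votes

# Example usage: recession_called = syndrome_diffusion_index(growth) < 0
```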

6. Building the Weighted Composite NBER Model
We now have 5 components with which to build a multi-factor composite co-incident index:

  1. Industrial Production growth (17% weighting)
  2. Real personal income growth (31% weighting)
  3. Non-farm payrolls growth (30% weighting)
  4. Real retail sales growth (12% weighting)
  5. Syndrome Diffusion Index (10% weighting)
Using an optimization program, we take these 5 time series (shown in the 5 charts above) and combine them into a weighted composite growth index. The weights the program assigns to each component are those that maximize the performance of the resulting composite index in dating recessions. In other words, they are the weights for which the composite index, when a reading below zero is used to trigger recession calls, produces the fewest false positives, the fewest false negatives and maximizes the recession/expansion sorting capability of the model. The weights assigned are shown in brackets above and produce the composite index shown below:
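The weight search can be sketched with an off-the-shelf optimizer. The routine below maximizes the AUC of the composite against a 0/1 NBER recession flag (for example the USREC series on FRED); the optimizer, the positivity/normalization constraints and the starting weights are assumptions for illustration, not the exact program used to obtain the published weights.

```python
import numpy as np
from scipy.optimize import minimize
from sklearn.metrics import roc_auc_score

def fit_composite_weights(components: np.ndarray, usrec: np.ndarray) -> np.ndarray:
    """components: (n_months, 5) matrix of the 5 series above (NaN-free);
    usrec: 1 for NBER recession months, 0 otherwise. Returns weights that
    sum to 1 and maximize the AUC when low composite readings are treated
    as recession signals."""
    def neg_auc(w):
        w = np.abs(w) / np.abs(w).sum()            # positive weights, sum to 1
        composite = components @ w
        return -roc_auc_score(usrec, -composite)   # low composite => recession
    result = minimize(neg_auc, x0=np.full(5, 0.2), method="Nelder-Mead")
    return np.abs(result.x) / np.abs(result.x).sum()

# composite_index = components @ fit_composite_weights(components, usrec)
```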

 

7. Building the Recession Probability Model
We can now take all 6 time series, namely Industrial Production, Retail Sales, Employment, Personal Income, the Syndrome Diffusion Index and the weighted composite, and put them through a Probit statistical process to develop a 6-factor recession probability model, as shown below:
For timely recession dating using the probability model on its own, you would generally call recession when the probability of recession rises above 0.1 (a 10% chance of recession). This has yielded 2 false alarms in the past. To use a threshold that has never produced a false alarm, use 0.4 (40%), although you sacrifice 1-2 months of lead time for this privilege.
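A minimal sketch of the probit step using statsmodels, with a 0/1 NBER recession flag as the dependent variable; the exact specification (constant, lag structure, estimation sample) is an assumption for illustration.

```python
import statsmodels.api as sm

def fit_recession_probit(factors, usrec):
    """factors: DataFrame with the 6 columns (4 growth rates, syndrome
    diffusion index, weighted composite); usrec: 0/1 NBER recession flag."""
    X = sm.add_constant(factors)
    model = sm.Probit(usrec, X).fit(disp=False)
    prob = model.predict(X)           # fitted probability of recession
    timely_call = prob > 0.10         # earlier, but 2 historical false alarms
    conservative_call = prob > 0.40   # no historical false alarms, 1-2 months later
    return model, prob, timely_call, conservative_call
```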
8. Testing model resilience
The recession model weighted composite we built in section 6 was based on an optimization run over the entire data set, namely 1959 through to July 2012. As part of normal rigorous statistical testing we need to ensure that this optimization was not a lucky random configuration, and that the 4 co-incident indicators in an optimized weighted-average model do indeed consistently reflect behavior that accurately identifies recessions.
We achieve this by simulating a person sitting in November 1969, right before the second recession shown on all the above charts, attempting the optimization on the co-incident data known up to that point. We then project that model forward to the present day to see how it would have continued to perform on "out of sample" data. We then redo the exercise, this time including data up to just before the 3rd recession. We repeat again with data up to the 4th recession, the 5th recession and so on, until we have 7 vintages of the weighted composite. In essence, we simulate what kind of models (vintages) a RecessionALERT researcher would have built through time up to the present day, with a view to seeing just how varied the results would have been and how well the models performed into the future.
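The vintage exercise can be sketched as an expanding-window loop: re-run the weight optimization on data known up to a cutoff just before each recession, then project the frozen composite over the full sample. The cutoff dates below are illustrative placeholders, not necessarily the exact dates used:

```python
import pandas as pd

# Illustrative cutoffs just before each NBER recession start in the sample
CUTOFFS = ["1969-11", "1973-10", "1979-12", "1981-06",
           "1990-06", "2000-11", "2007-11"]

def build_vintages(components: pd.DataFrame, usrec: pd.Series) -> dict:
    """Re-optimize the composite weights on data up to each cutoff, then
    apply the frozen weights over the full sample (the out-of-sample part
    of each vintage is everything after its cutoff)."""
    vintages = {}
    for cutoff in CUTOFFS:
        w = fit_composite_weights(components.loc[:cutoff].values,
                                  usrec.loc[:cutoff].values)
        vintages[cutoff] = pd.Series(components.values @ w,
                                     index=components.index)
    return vintages
```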
If the results are very varied, or perform poorly into the future (or both), then we conclude the model is at risk of inaccuracy and/or over-optimization. If, however, each successive model slightly improves results as more data are included in the optimization, and each model built over time accurately signals recessions into the future, then we know we have a robust model. The chart below shows two of the models built, namely the very first one built hypothetically in November 1969 ("1st Optimization") and the 6th one built in November 2000 ("6th Optimization"). The very last, 7th model we built is the one reflected in sections 1 through 7 of this research paper.
We make several remarkable discoveries from these tests. The first model, built hypothetically in 1969 using data from 1 July 1959 through to November 1969 and encompassing one recession and one expansion (red line in the above chart), continued to perform remarkably well into the future. Remember, once this model was constituted in November 1969, it was never modified or altered.
The second important observation is that there is not much difference between the 1st model and the 6th. In fact, the correlation coefficient between the two is 0.90. Although the two models differed, they differed less around business cycle turning points than during expansions. This means their differences would not have had much impact on dating accuracy, as shown in the zoomed view below:
The third remarkable observation is that all 7 models built just before the start of each NBER recession share the same two characteristics described above: they all performed well into the future (on out-of-sample data) and they did not differ much when it came to detecting business cycle turning points (the starts and ends of recessions). The charts below show all 7 model vintages built over time, with the maximum readings, the minimum readings and the average of all 7 readings for each month. You will also note that the average of all 7 models hugs the maximum (high) watermark rather than sitting centrally between the low and high watermarks, suggesting the low readings are outliers among the 7 vintage models.
An obviously interesting point is that even the lowest current reading of all 7 vintage optimization models is not flagging recession; in other words, the most pessimistic of the 7 models is currently not flagging recession.
Visual inspection of the above charts leads us to believe that although the time series of each model may differ, these differences are small at turning points. We should, however, confirm this by looking at the "area under the curve" (AUC, sometimes referred to as ROC) scores for each model, where we rank their ability to sort all months correctly between expansion and recession. A model that perfectly sorts all 580 months into the proper expansion and recession buckets scores 1.0 on this measure. The results are shown in the table below and they indeed confirm that turning-point identification accuracy is highly comparable among all model vintages. We see that the more months included in the sample for a model's optimization, the more accurate the model becomes over the whole time period, but there is not much difference between the models in relative terms.
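The per-vintage AUC comparison in the table can be reproduced in a few lines, again treating low composite readings as recession signals:

```python
from sklearn.metrics import roc_auc_score

def vintage_auc_table(vintages: dict, usrec) -> dict:
    """AUC of each vintage composite over the full sample; a score of 1.0
    means every month is sorted perfectly into expansion vs. recession."""
    return {cutoff: roc_auc_score(usrec, -composite)
            for cutoff, composite in vintages.items()}
```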
We can thus safely conclude that the weighted-average model we developed through optimization with the 4 specified "NBER co-incident indicators" plus the diffusion index does not owe its accuracy to random luck or over-optimization, and that each successive model we build over time with more in-sample data is likely to continue performing well into the future and to improve overall accuracy.
9. Testing the effects of revisions on the model
Economic time series used in measuring business cycles and forecasting recessions are subject to revisions and re-benchmarking. Over time, more up-to-date and accurate data become available and the time series are revised to reflect the updates. Some economic time series are subject to more drastic revisions than others; for example, the unemployment rate is subject to far smaller revisions than a broader time series such as GDP, which is known to undergo very large revisions. Each instance of an update or revision to a time series is referred to as a "vintage". Generally, every time series spawns a new vintage on a weekly, monthly or quarterly basis, depending on how often it is published.
When building leading or co-incident indices to measure business cycles and forecast recessions, the forecaster generally uses the latest vintage available and optimizes his algorithms accordingly. If the data he uses in his model are subject to large revisions, he runs the risk that a currently optimistic view of an indicator may well turn out later to be a pessimistic one (or the converse). There is thus always a niggling concern about how much you can trust your models.
As of June 2012, many forecasters are claiming we are in recession even though this view is not supported by many mainstream leading and co-incident indicators. Their claim is based on the expectation that subsequent downward revisions will eventually show, many months from now, that we were indeed in recession. To research and demonstrate the effects of data revisions and re-benchmarking on the NBER Model, we rebuilt it using real-time data observations taken from the ALFRED archival database of the Federal Reserve Bank of St. Louis.
This allows us to compare real-time observations of each of the 4 NBER Model components, as available to an outside observer at the time, with the latest August 2012 vintage. This gives us a feel for how significant the revisions are, especially around turning points just before and after NBER-declared recessions. We then construct a real-time version of the weighted NBER composite co-incident economic index to see, in turn, how revisions to its components affect recession forecasting. The results are discussed in the "Effects of Revisions on Recession Forecasting" research note.
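A minimal sketch of the revision comparison, assuming the ALFRED vintages have already been exported to a CSV with one column per vintage date (the file name and column labels are placeholders, not actual ALFRED identifiers):

```python
import pandas as pd

def revision_gap(alfred_csv: str, realtime_col: str, latest_col: str) -> pd.DataFrame:
    """Compare what an observer saw in real time with the latest vintage.
    alfred_csv: CSV exported from ALFRED, one column per vintage date.
    Returns real-time values, latest values and the revision between them."""
    vintages = pd.read_csv(alfred_csv, index_col=0, parse_dates=True)
    return pd.DataFrame({
        "real_time": vintages[realtime_col],
        "latest":    vintages[latest_col],
        "revision":  vintages[latest_col] - vintages[realtime_col],
    })
```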
NOTE: RecessionALERT subscribers have all the charts shown in this research note updated once per month in a separate report.

About RecessionALERT

Dwaine has a Bachelor of Science (BSc Hons) university degree majoring in computer science, math & statistics and is a full-time trader and investor. His passion for numbers and keen research & analytic ability has helped grow RecessionALERT into a company used by hundreds of hedge funds, brokerage firms and financial advisers around the world.
