# Payroll Employment Data: You Can’t Believe Your Eyes

The payroll employment report is regarded as the most important economic news of the month. It is also the most misunderstood.

To illustrate, please consider this chart. Does it represent a relatively straight line?

More about that below, but first we need some background.

# Background

Each month the Bureau of Labor Statistics estimates the total number of payroll jobs in the U.S. Subtracting the prior month's estimated total from the current month's yields the net change in payroll jobs. It is this change that commands the attention of financial markets. It is often wrongly cited as the number of jobs created. No! It is the net effect of jobs created and jobs lost. Creation and destruction each run over 2 million per month, so the net effect is relatively small.

The estimates are done via a survey of 142,000 business establishments representing 689,000 work sites. The sample is selected using state-of-the-art methods. There is a regular rotation of the sample so that new businesses are included.

Respondents are asked to report the number of jobs on their payrolls for the pay period that includes the 12th of the month. The employment report is scheduled for the third Friday after the end of that reference week. The first estimate is therefore only a few weeks old, and some firms have not yet responded. The BLS revises the estimate in each of the next two months. These revisions reflect additional survey responses and small changes in seasonal adjustments. The initial estimate is based on about 75% of respondents, the second on about 90%, and the final on about 95%. The response rate has improved over time as automated methods have replaced mailed surveys.

While the process is complicated, it is done in a highly professional fashion and is carefully documented. Part of the documentation covers sampling error, the subject of this post.

# Sampling Error

Taking a sample of the entire working population provides a pretty good estimate, but an estimate it is. Wikipedia has a reasonable, understandable description of sampling error. For our purposes we can imagine that we did many different surveys for a given month. Each would provide a somewhat different result, and the spread of those results is measured by the standard error for a one-month change. Sampling error for the establishment survey is relatively small because the sample size is large, covering one-third of the entire universe.

What is small? For total non-farm payrolls, the standard error is 65,280. That is quite good relative to a work force of over 150 million, but not nearly as helpful when our focus is on the monthly change, where the 90% confidence interval is about ±112,000.
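As a quick sketch of the arithmetic, assuming a roughly normal sampling distribution, a two-sided 90% interval is ±1.645 standard errors. (Note that the quoted ±112K interval corresponds to a slightly larger standard error, about 68,000, than the 65,280 figure.)

```python
# 90% confidence interval from a standard error, assuming the
# sampling distribution of the monthly change is roughly normal.
Z_90 = 1.645  # two-sided 90% critical value of the standard normal


def ci_halfwidth(standard_error: float, z: float = Z_90) -> float:
    """Half-width of the confidence interval: +/- z * SE."""
    return z * standard_error


se_monthly_change = 65_280  # figure quoted above
print(f"90% CI: +/- {ci_halfwidth(se_monthly_change):,.0f} jobs")
```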

Please note that the sampling error calculation is based on complete responses; it is neither reduced nor eliminated by the subsequent monthly revisions.

# Effect of Sampling Error

To illustrate the effect of sampling error, I will borrow a concept from a favorite statistics professor: the distinction between "truth" and the "sample result." Since we never really know the truth, his suggestion involved divine knowledge. For my example, I will start with a rolling two-year average of job changes and use the result for a given month as our best guess at truth. Here is what we see.

This is the “relatively straight line” that I referred to in the introduction. Now suppose we add an error term to each point. These terms are based upon the BLS standard error, randomly selected for each month.

To emphasize, for any given month we do not know the true result, nor do we know the sampling error. If I recalculate my spreadsheet, the pattern of sampling error will change. In each case the errors will exhibit the properties of the underlying distribution. Since we know that the 90% confidence interval is +/- 112K for example, we should expect 5% of the cases in each direction to show even larger errors.
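The thought experiment can be sketched in a few lines of Python. This is a toy version, not the author's spreadsheet: it assumes a flat "truth" of +150K jobs per month as a stand-in for the rolling average, and a normal sampling error whose standard deviation of 68,000 is the value implied by the quoted ±112K 90% interval.

```python
import random

SE = 68_000  # standard deviation implied by the ~±112K 90% interval


def simulated_reports(truth_per_month, seed=None):
    """Add one random sampling error to each month's 'true' job change."""
    rng = random.Random(seed)
    return [t + rng.gauss(0, SE) for t in truth_per_month]


# A flat "truth" of +150K jobs per month over two years.
truth = [150_000] * 24
reported = simulated_reports(truth, seed=1)
```

Rerunning without a fixed seed produces a different error pattern each time, just as recalculating the spreadsheet does, while the spread of the errors always reflects the same underlying distribution.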

Notice also that there is no predictable pattern to the errors. They are random. A large positive error for one month is not offset by a negative one the next. It is only in the long run that the errors balance out.

# Interpreting the Results

If you keep this in mind, you will quickly spot the major errors made by the monthly crew of payroll pundits. Here are some favorite pundit claims, each paired with the reason it fails:

- **"This low result will be revised higher."** There is no reason to expect the late-submitting firms to offset those who made the first deadline.
- **"Two (or three) months show a trend."** The example shows that big sampling errors in the same direction can occur in consecutive months, or even several months running.
- **"A change of 50K jobs is important."** Maybe, but maybe not. We can expect errors of this size or greater nearly half of the time.
- **"The ADP report provides confirmation."** The ADP report is a good second method, but it shows the same properties. I duplicated the analysis. Since the errors are random there as well, they might point in the same direction, or they might not.
- **"We can derive a lot of information from the 'internals' of the report."** It is all drawn from a sample, so there is sampling error for each subgroup. The table below (for the May 2019 report) illustrates this point.
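The "nearly half of the time" figure for 50K errors can be checked directly. This assumes a normal sampling error; the standard error of 68,000 used here is the value implied by the quoted ±112K 90% interval, not a separately published BLS figure.

```python
import math

SE = 68_000  # standard error implied by the ~±112K 90% interval


def prob_abs_error_exceeds(threshold: float, se: float = SE) -> float:
    """P(|error| > threshold) for a normally distributed error
    with mean 0 and standard deviation se."""
    z = threshold / se
    return 1 - math.erf(z / math.sqrt(2))


print(f"{prob_abs_error_exceeds(50_000):.0%}")  # roughly 46%
```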

The emphasis on the employment report reflects an enduring market tendency:

> When a subject is important, the thirst for data overcomes our judgement about its relevance.

When others over-react to this report, do not get caught up in the frenzy. A longer time period and a variety of economic sources are needed for solid decision-making.
