*Sigh* Another month, another Jobs number.
My thinking about all aspects of this monthly exercise has evolved. I have gone from regularly taking "The Under" — a winning bet until we retired it — to something more philosophical (dare I say "profound"?). We have come to recognize that the initial release of the NFP data by BLS is subject to such significant revisions that it is of little use to investors; traders, however, can play for a rally or a fade by identifying when job sentiment in either direction gets excessive.
I used to be surprised/dumbfounded/shocked by the initial reads. Now, I merely shrug and acknowledge that the hard number is of little value. The relative performance to prior cycles and the overall trend (acceleration/deceleration) are all that matter.
The data from both BLS and ADP is not consistent with a strong economy. Remember, it takes about 150k new jobs per month merely to keep up with population growth — in other words, 150k new jobs per month is zero job growth as a percentage of the total population and the labor pool. We were adding 250k-300k per month in the 90s, and even just two years ago, when the housing boom was in full swing, the numbers were in the 175-225k range.
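The break-even arithmetic above can be sketched in a few lines. The ~150k/month figure is from the text; the labor-force size and monthly growth rate below are illustrative assumptions, not official BLS values:

```python
# Rough sketch of the break-even payroll arithmetic described above.
# Both inputs are illustrative assumptions, not official BLS figures.
labor_force = 152_000_000     # assumed civilian labor force
monthly_growth_rate = 0.001   # assumed ~0.1% per month population/labor-pool growth

# Jobs needed each month just to keep employment flat as a share of the labor pool
breakeven_jobs = labor_force * monthly_growth_rate
print(f"Break-even monthly job creation: ~{breakeven_jobs:,.0f}")
```

Under these assumptions the break-even comes out near 150k, which is why a headline NFP print of that size implies roughly zero growth relative to the labor pool.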
So where are we? As the chart below shows, this remains one of the weakest post-recession job creation cycles in the post war era. And as we have seen over the past few quarters, even that modest pace of job creation is now decelerating. Following the ADP report (more on that below), there is very little expectation of much strength. That makes for several potential surprise outcomes (you can do the math yourselves).
Chart Courtesy Spencer England Equity Review (SEER)
Back to NFP: With ADP showing job gains of only 57,000 (the lowest since July 2003), and job growth revised down to 121K last month, the consensus for NFP of 95K might even be high. The trend is decelerating job growth as the economy slows.
Back in January of this year, I gave a quasi-mea culpa about the ADP Report:
"Note: I used to mock the ADP data as being so awful, but since the big BLS revision, ADP turned out to be more accurate than originally appeared. While the jury is still out, I am trying to remain open-minded about their analysis;"
The original ADP data appeared to be wildly off from BLS; once BLS did their revisions, ADP's accuracy improved significantly. One can hope that the competitive pressure from ADP might spur BLS to become more accurate in their earlier releases. So I am grudgingly moving towards giving them a small bump up in credibility.
I still retain my criticism that ADP is not a neutral observer. Indeed, I retain my grave reservations about the entire trend of non-government, non-academic entities with a vested interest in a particular outcome releasing these econ data as a form of marketing/PR. ADP is but one in a series of private firms/associations that do this, including NAR, ATA, NAHB, ICSC, etc. And while much of the data is worthwhile and helpful, some of the bad apples — especially those parties with a potential interest in the outcome — do spoil the entire bushel. (I can show you many ways the NAR data gets gamed.)
Back to ADP: According to a spokesperson for the firm, they have been working to improve their process:
* Quadrupling the size of the sample from the average historical sample size
* Improved outlier detection
* Enhanced seasonal adjustment procedures
* Reporting of industry and firm-size data
As I told them, I am retaining an open mind — more out of frustration with BLS than anything. There is a definite space in the market for better NFP data, and if ADP keeps improving their methodology, they might shake up BLS a bit.
UPDATE: March 9, 2007 2:55pm
Just a statistical correction to note: The Birth Death adjustment goes into the total employment count, not just the monthly totals — do not assume that 118k was tacked on to February, or 175k was subtracted from January monthly changes — I believe these guesstimates are incorporated into the entire Employment total (i.e., 144.6 million), then the monthly numbers are deduced.
Yes, it's somewhat fictional, and creates an inherent bias in the overall model, but it's not like they are merely tacking on 100k per month . . .
Blame the professors: Just as the option backdating scandal started with academic researchers noting mathematical anomalies, so too might the next brewing scandal: the I/B/E/S analyst ratings backdating scandal.
According to a Barron’s article by Bill Alpert (buried on page 39), several professors have discovered what they describe as 54,729 non-random, ex-post changes out of 280,463 observations — a little over 19.5% of analyst recs (abstract below):
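The share quoted above is easy to verify from the two figures in the article:

```python
# Checking the share quoted from the Barron's piece: 54,729 altered
# records out of 280,463 total observations.
changed = 54_729
total = 280_463
share = changed / total
print(f"{share:.1%}")  # a little over 19.5%
```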
"The professors found almost 55,000 changes that had been made in the I/B/E/S database of stock-analyst recommendations maintained by Thomson, the Stamford, Conn., firm that is a leading vendor of financial data. The alterations made Wall Street's record of recommendations look more conservative – hiding Strong Buy recommendations and adding Sell recommendations from 1993 to 2002. That is a period for which Wall Street has drawn heat and government sanctions for touting Internet bubble stocks.

As a result of the changes, the stock picks shown in the database would have created annual gains that were 15% to 42% better than the originally recorded recommendations, using a trading strategy based on analysts' recommendations."
The firms that were the most significant participants in the data backdating were also the firms with the closest relationship between banking and research, and were the hardest hit by the Spitzer-enforced settlement.
Page four of the academic working paper notes exactly how significant this was:
"Why do the historical data now look different than they once did? The contents of the database changed at some point between September 2002 and May 2004, a period that not only coincided with close scrutiny of Wall Street research by regulators, Congress, and the courts, but also saw a substantial downsizing of research departments at most major brokerage firms in the U.S."
The paper outlines four types of data changes: 1) non-random removal of analyst names from historic recommendations (anonymizations); 2) the addition of new records not previously part of the database; 3) the removal of records that had been in the data; and 4) alterations to historical recommendation levels.
The net result of this was to make many specific trading strategies appear better in retrospect than they actually were. Buying top-rated stocks and shorting the lowest-rated stocks now performs 15.9% to 42.4% better on the 2004 revised data than on the 2002 tape, the professors state.
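To make the mechanics concrete, here is a minimal sketch of how such a comparison works: go long the top-rated names, short the bottom-rated, and run the same strategy on both database snapshots. All tickers, ratings, and returns below are entirely synthetic illustrations; none come from the actual study.

```python
# Synthetic illustration of a recommendation-based long/short comparison.
# Rating scale assumed: 1 = Strong Buy ... 5 = Sell (a common I/B/E/S convention).

def long_short_return(recs, returns):
    """Average return of top-rated longs minus average return of bottom-rated shorts."""
    longs = [returns[t] for t, r in recs.items() if r == 1]
    shorts = [returns[t] for t, r in recs.items() if r == 5]
    return sum(longs) / len(longs) - sum(shorts) / len(shorts)

# Hypothetical realized returns for four made-up tickers
returns = {"AAA": 0.12, "BBB": -0.08, "CCC": 0.03, "DDD": -0.15}

# Same stocks, two snapshots: in the "revised" tape, a Strong Buy on a loser
# has been quietly changed to a Sell, flattering the strategy's record.
recs_2002_tape = {"AAA": 1, "BBB": 1, "CCC": 5, "DDD": 5}  # original
recs_2004_tape = {"AAA": 1, "BBB": 5, "CCC": 5, "DDD": 5}  # altered

print(long_short_return(recs_2002_tape, returns))  # strategy on original tape
print(long_short_return(recs_2004_tape, returns))  # strategy on revised tape
```

In this toy example the altered snapshot makes the identical strategy look meaningfully better, which is the pattern the professors measured at scale.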