The Individual Investor Challenge: Can you go it alone?

Here at "A Dash" we have a special concern for individual investors.  This was at the heart of my reason for starting this blog.  Many evenings I imagine such a reader and write directly to him or her.

With this in mind, let us imagine an investor who is unhappy with his or her manager and is ready to embark on a self-directed program.  The TV ads from brokerage companies all suggest that you can beat the performance of your former advisor with nothing more than a few consultations with their helpful telephone staff.  They will help you make a lot of trades to implement your strategy.

The first step is to set up your trading account.  Then you go online to get a lot of information about stock ideas and the economy, starting with the leading websites.

You need to have four attributes:

  1. Intelligence
  2. Knowledge
  3. Training
  4. Discipline

I assume that all readers are intelligent.  I also assume that you are informed, but please keep in mind that most people are not.  My favorite knowledge test is that more people can name the Three Stooges than can name the three branches of government.  I cite many similar examples in this humorous post.  But try this one.

James Fallows writes that 44% of Americans are crazy, since that is the percentage who believe that China is the top global economic power.  I know that you would like to play poker against these people — or take the other side of their stock trades — but the least knowledgeable people do not constitute the market.

Training is different.  Everyone understands that you need knowledge, but when it comes to training, something really strange happens: people do not see the relevance.

Discipline is also a test, and most individual investors fail here as well.  In a recent article that everyone should read, Dr. Brett Steenbarger writes about "weather(ing) setbacks without losing either self-control or self-confidence."  He wisely observes the following:

You don't gain resilience by winning.  Rather, you become resilient by losing – and by seeing that you can learn from (and overcome) those losses.

Two Examples of the Need for Training

Here are two recent examples (more to come in future articles) of errors that stem from the lack of analytical skill that comes with training.

Case one comes from a pundit critical of the recent retail sales report.  Much of the analysis compares the official government data to other sources and notes the margin of error from the survey.  So far, so good.  But the author, seeking something more, writes as follows, first citing the report:

Special Notice – The advance estimates in this report are the first estimates from a new sample. The new sample for the Advance Monthly Retail Trade Survey is selected about once every two and a half years. For further information on the sample revision, see our website at

Now comes the analysis:

Did any pundit or guru actually read the Advance Retail Sales report? This first paragraph in the report warns that a ‘new sample’ technique has been employed.

Ergo, comparisons are futile at this point.

Next, the analysis is picked up by a widely-respected voice, that of Doug Kass.

Perhaps this "new sampling" methodology (and wide sampling error) and message by the U.S. Census Bureau helps to explain the difference between the strength reported late last week and the weakness in private and state tax data. (Hat tip to Bill

Our take?  I am perfectly willing to analyze and embrace data wherever it leads.  I recently questioned the BLS employment data.  But the criticism should be informed and accurate.  This is not.  The Census analysis does not change the "sampling methodology" or the "technique."  It draws a new sample based upon new information.  Many surveys, including the most popular private sources, use a different sample for each poll.  It is not a change in method.
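The distinction is easy to see with a quick simulation.  This is a minimal sketch with made-up numbers — it does not model the Census Bureau's actual procedure — but it shows why redrawing a random sample, period after period, does not break the comparability of a data series:

```python
import random

random.seed(42)

# Hypothetical population of 100,000 "retailers" with skewed sales figures.
# All numbers here are invented purely for illustration.
population = [random.lognormvariate(0, 1) for _ in range(100_000)]
true_mean = sum(population) / len(population)

# Draw a fresh random sample each "month" -- same method, new sample --
# just as many recurring surveys do.
estimates = []
for month in range(24):
    sample = random.sample(population, 5_000)
    estimates.append(sum(sample) / len(sample))

# Every estimate lands close to the true mean; a new sample is not a
# new methodology, and the series remains comparable month to month.
max_error = max(abs(e - true_mean) for e in estimates)
print(f"true mean: {true_mean:.3f}, max sampling error: {max_error:.3f}")
```

Each monthly estimate differs from the truth only by ordinary sampling error, which is exactly what the published margin of error describes.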

It is an analytical error to cite this as a change in method.  It happens when analysts reach too far in trying to make a point.  The original error is lost in the later citations and media coverage.  There is a cottage industry based upon substituting anecdotal information for data.

How many readers have the skill to spot the mistake?  Those who cannot are left with a distorted view of economic reality.

Case Two comes from the academic world, which can also be a source of misinformation. A news report tells us that companies reporting earnings on days when the sun is shining in New York have stronger reactions in their stock prices.

The study, which analyzed earnings announcements for all publicly traded companies from 1982 to 2004, found that companies that announce earnings during sunny days saw their stocks perform better than expected. This held true for companies that announced earnings that were below expectations, said John J. Shon, an assistant professor of accounting and taxation at Fordham University, and one of the authors of the study. “Those companies didn't do as badly as expected.”

[While] the difference in performance wasn't huge — around 50 basis points — it's enough for investors and companies to pay attention to, said Mr. Shon, who worked with Ping Zhou, vice president in the quantitative investment group of Neuberger Berman LLC, on the study.

Does anyone really believe this?  I am going out on a limb here, but only because I have seen this so many times.  I am willing to make a bet with the accounting profs.  This will not work over the next year.

Why not?  It is an obvious case of data mining: taking many variables and all of the old data and searching for a relationship.  When you slice and dice the data first and look for a hypothesis afterward, you can always invent an explanation.  In this case, even the after-the-fact explanation does not seem credible.  Nonetheless, it seems to have been accepted at face value.  My Google search showed no criticism.  And the article was published in an accounting journal.  Wow!

Strong research starts with a hypothesis and then tests it.  I am out on a limb since I am speculating — with some confidence — that the researchers did not begin with the idea that sunshine in NY led to better stock returns for companies announcing earnings.  This looks like one of many possible "causal" variables used in the research.  Readers of "A Dash" might recall this great illustration of the problem.
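The data-mining problem is easy to demonstrate.  In this illustrative sketch — every number is random by construction, and nothing here models the actual sunshine study — we test a couple hundred meaningless "predictors" against pure-noise returns, and the best one still looks impressive in-sample:

```python
import random

random.seed(0)

# 500 days of "returns" that are pure noise: no real predictor exists.
n_days = 500
returns = [random.gauss(0, 0.01) for _ in range(n_days)]

def correlation(xs, ys):
    """Plain Pearson correlation, computed from scratch."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

# Try 200 random candidate "factors" (sunshine, hemlines, ...) and keep
# whichever happens to fit the historical returns best.
best_r = 0.0
for _ in range(200):
    factor = [random.gauss(0, 1) for _ in range(n_days)]
    r = correlation(factor, returns)
    if abs(r) > abs(best_r):
        best_r = r

# With enough candidates, the winner clears the usual ~2/sqrt(n)
# significance bar even though every factor was noise by construction.
print(f"best in-sample correlation: {best_r:.3f}")
```

Searched-for relationships like this routinely "work" in the back test and fail out of sample, which is why hypothesis-first testing matters.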

Anyone who has experience in research methods spots this immediately.  Those without the right training accept the findings at face value.  There are many other similar conclusions to befuddle the average investor, most recently the September effect, which I discussed in timely fashion on September 1st.

Briefly put, I would not be buying stocks about to announce earnings based upon the NY weather report.


In the investment world, there is little respect for the training and education of others.  There is a lot of attention to who has had the hot hand.  This is a serious mistake.

If you needed to get your car fixed, you would look to an expert in car repair.  Not so in research and economic analysis, where everyone is invited to substitute his own opinion.

Here at "A Dash" we take a distinctly different perspective.  We respect all sources of information, but we carefully assess what can be learned from each.  That is the challenge for an investor seeking to "go it alone."



  • Jay Weinstein December 16, 2009  

    It’s a lot easier to say Larry, Moe, and Curly than Legislative, Executive, and Judicial. Plus, what would you rather watch, The Stooges or C-Span?
    The single best thing this country could ever do is require each incoming elected official to take an intense course in elementary statistics.
    It used to drive me ape how much play the old Super Bowl indicator would get every year—it still pops up every now and again…
    Thanks to Jeff–I am also a charter member of the David Merkel fan club.

  • Jeff Miller December 16, 2009  

    Jay — I really teed it up for comments with the Stooges! Nice going.
    I could have also cited the Supreme Court versus the Seven Dwarfs with similar effect:)
    Thanks –

  • stu December 17, 2009  

is there really a difference between the 3 stooges and the 3 layers of government?

  • Robert Simmons December 18, 2009  

    If you’re changing your sample every 2.5 years, clearly you’re not taking a random sample. Doesn’t that suggest that each change in the sample should make us take the results with a few extra grains of salt?

  • Jeff Miller December 18, 2009  

    Robert — The frequency of changing the sample has nothing to do with whether or not it is random. There are many important considerations in survey research design. The Census Bureau has an excellent and expert staff.
    One of the biggest mistakes people make is accepting the opinions of pseudo-experts who are playing on your pre-conceived notions. The critics that I cite in the article have no background in research design and have never conducted a survey. They are typical “market strategists” who have an opinion about every subject.
    If you and I sat in a room with the census experts on one side of a table and these guys on the other, you would soon see the difference. Since I can’t arrange that for you (heh heh), we have to figure it out for ourselves.
    I understand and appreciate your comment, but I think you are throwing the salt over the wrong shoulder!

  • Robert Simmons December 18, 2009  

    Are you saying that they randomly select a sample, then use that sample repeatedly over the course of 2.5 years, then randomly pick a new one? If so, I’m still going to take it with salt. It’s not meaningless like Ritholtz would have us believe, but still

  • Jeff Miller December 18, 2009  

    Robert — Many (most?) polls use a new sample each time. Should we take each new one with a grain of salt? The BLS payroll survey adjusts once a year. The UM consumer confidence survey uses a rolling panel. The various Fed surveys (like the Philly Fed) change frequently as they find new businesses. Who knows what the ISM does?
Do you really think that the Census Bureau, arguably the best on survey techniques, chooses a method that knowingly creates a break in data interpretation every 2 1/2 years? That amounts to saying they are incompetent — that they do not know how to adjust their sample.
    BTW, I did not realize that Barry Ritholtz had any opinion on this subject.
    To summarize, drawing a new sample is not a change in methodology or in the data series. Those who think otherwise never took the first course in methods. They are just blowing smoke, and you seem to be buying it.
    But I am curious about why people are so very confident about things like this. There is such a great willingness to disbelieve actual data — especially from the government.