Developing and Evaluating Trading Systems

Improved technology brings more power.  We would expect this to be good.

In fact, more power can enable us to do exactly the wrong thing.

This happens all of the time with the world’s most powerful computer, the human mind.  A year ago we reviewed analysts who thought the market looked like a replay of the 1987 crash.  This type of analysis crops up all of the time, often using old charts as evidence.  With the power to search among thousands of choices, pick the time frame, and adjust the scales, the human computer can "prove" nearly anything.

Those developing computer-based trading systems face the same problem.  The modern software makes it easy to include many variables — too many!

Some Helpful Illustrations

Bill Rempel missed the Kentucky Derby by a few days, but his story about horse-race handicappers is excellent.  A group of handicappers was tested with gradually increasing amounts of information.  The extra data increased their confidence, but not their performance!  (Read the entire discussion.)

Bill discusses Occam’s Razor and points out the importance of reducing the number of independent variables:

I use this paring down or pruning technique at work as well as when
examining trading strategies or opportunities. My first question, when
faced with complex models, has for a long time been “I wonder how many
of those variables actually do most of the work?”

This is pretty convincing to us, since Bill sounds just like our own Vince Castelli.  It is easy to develop a model using all of the available data and lots of variables.  You will generate a perfect "post-diction" but nothing useful for prediction.

The result:  Over-fitting and over-confidence, a dangerous brew!
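The danger of searching among thousands of choices can be made concrete.  The sketch below is a hypothetical illustration (not anyone's actual system): it "data-mines" 1,000 candidate trading rules against returns that are pure noise, so no real edge exists.  The in-sample winner looks impressive; out of sample, the edge vanishes.

```python
import random

random.seed(7)

# Simulated daily returns: pure noise, so no genuine edge exists.
returns = [random.gauss(0, 1) for _ in range(500)]
in_sample, out_sample = returns[:250], returns[250:]

def rule_pnl(seed, data):
    """A 'trading rule' that is nothing but a seeded coin flip each day."""
    r = random.Random(seed)
    return sum(day if r.random() < 0.5 else -day for day in data)

# Data-mine: try 1,000 candidate rules and keep the in-sample winner.
best_seed = max(range(1000), key=lambda s: rule_pnl(s, in_sample))

best_in = rule_pnl(best_seed, in_sample)    # looks like a great system...
best_out = rule_pnl(best_seed, out_sample)  # ...until it meets fresh data
print(f"in-sample P&L: {best_in:.1f}, out-of-sample P&L: {best_out:.1f}")
```

The winning rule's in-sample profit is a mirage produced entirely by the size of the search, which is exactly how a flexible model with too many variables produces a perfect post-diction.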

Unfortunately, consumers of system strategies, including a few big-time "gatekeepers" we have met, have become accustomed to seeing eye-popping (and unrealistic) results.  They apply an automatic discount, regardless of the methodology employed.

The TCA Model Applied to the S&P 500

For the purposes of comparison, the chart below shows our TCA Model (Trend, Cycle, Anticipation) as applied to the S&P 500.  Without giving away the store, we can say that the model uses a relatively small number of variables — some designed to choose between trend and cycle, and others representing indicators for each.  Much of the power comes from advanced techniques for filtering and smoothing data, thereby improving the signal-to-noise ratio.  The chart below is not a back-test, but the signals actually used in trading during the last year.

[Chart: TCA Model applied to the S&P 500]
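The TCA Model's actual filters are proprietary and not described here.  As a generic stand-in, this sketch uses a plain exponential moving average (a deliberately simple smoother, chosen for illustration only) to show how smoothing improves signal-to-noise on synthetic data:

```python
import math
import random

def ema(series, span):
    """Exponential moving average: a basic smoother that suppresses
    noise at the cost of a small lag behind the underlying signal."""
    alpha = 2.0 / (span + 1)
    out = [series[0]]
    for x in series[1:]:
        out.append(alpha * x + (1 - alpha) * out[-1])
    return out

random.seed(1)
n = 200
# A slow "cycle" buried in heavy noise.
signal = [math.sin(2 * math.pi * i / 100) for i in range(n)]
noisy = [s + random.gauss(0, 0.5) for s in signal]

smoothed = ema(noisy, span=10)
mse_raw = sum((a - b) ** 2 for a, b in zip(noisy, signal)) / n
mse_smooth = sum((a - b) ** 2 for a, b in zip(smoothed, signal)) / n
print(f"MSE vs true signal — raw: {mse_raw:.3f}, smoothed: {mse_smooth:.3f}")
```

The smoothed series tracks the underlying cycle far more closely than the raw data, though the lag it introduces is one reason any smoothing-based model gives back some profit at turning points.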

The overall performance shows a gain of about 6% during a time when the S&P declined by a few percent.  It accomplishes this while reducing risk by staying out of the market for significant periods.

A key point is that the model gets the investor into the market to enjoy the big moves.  The cost?  There are some losses at times of rapid changes or churning.

Finding the big moves is very important.  Some traders have trouble joining in when the market has already made a move.  They are reluctant to "chase."  It is difficult to show gains when missing the big rallies.

Anyone interested in trading systems should join us as regular readers of The Rempel Report, where Bill updates and reports on several interesting trading systems.  One of these is similar to our own sector rotation approach.

TCA-ETF Update

Each Thursday (a day late this week) we share with the investment community a recent report from our ETF ratings.  We have been doing this in real time for eight months.  Our purpose is partly to gain visibility for the approach (free report available on request), partly to inform other ETF traders, and most importantly to provide a laboratory for others trying to develop trading systems.  We discuss the issues surrounding system development in many of the articles in this series.

As we noted last week, we have expanded the ETF universe, and we seek more additions.  Adding more targets is helpful, as long as they can be shown to have characteristics suitable for one’s model.

The current ratings show some dramatic changes from recent weeks, and include one of the new ETFs, KOL.

[Chart: ETF sector ratings update, 05/07/2008]


7 comments

  • Bill aka NO DooDahs! May 9, 2008  

    Well, I’m not a horse-racing fan, even though I did enjoy “Let It Ride” (mostly for the combination of great writing, David Johansen, and that red dress). I had seen the CIA story before, but it recently made Digg and that sparked my post.
    Another “keep it simple” story I like, and one you’ll remember: The Space Pen. Developed by NASA at a cost of millions of dollars, it’s a pen with a specially pressurized ink cartridge that can write at ANY ANGLE!!!!! Buy one now on this special TV offer!
    The Russian Cosmonauts used a pencil. Writes at any angle. Developed over a hundred years ago.
    Thanks for the link love!

  • RB May 12, 2008  

    I think Dubya is the author of the keep it simple policy, but he did not get to read that you are supposed to make it no simpler. Buyer beware, I suppose.

  • Bill aka NO DooDahs! May 12, 2008  

    Good Snopes link! Thanks!

  • Jeff May 16, 2008  

    How timely. I have been setting up a system over the last four months…trial and error make the learning curve steep and risky.
    I started with different indicators and overlays, finding the ones that were the most pertinent to the price moves. After four months of active trading to test my theories I have hit the nail on the head…Modified MACD, some trendlines and a handful of MAs. It doesn’t get much simpler than that.
    I am now making consistent profits for three weeks running and beating my weekly goals.
    Keep it as simple as will produce results.
    Right on!
    Jeff.

  • Bill aka NO DooDahs! May 16, 2008  

    I’m loving the MACD and its percentage-based cousin the PPO more and more as time goes by, just a question of matching the parameters to the timeframes traded.

  • Brian May 31, 2008  

    This makes me think of Malcolm Gladwell’s book Blink, where he entertains the idea of making major decisions on only the most important information data points. Interesting read, and it backs up a bit of your argument.
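[Editor's note: for readers who want to experiment with the indicators mentioned in the comments above, here is a minimal sketch of the standard MACD and its percentage-based cousin, the PPO. The 12/26-period parameters are the common defaults, not a recommendation, and the price series is synthetic.]

```python
import random

def ema(xs, span):
    """Exponential moving average with the standard smoothing factor."""
    alpha = 2.0 / (span + 1)
    out = [xs[0]]
    for x in xs[1:]:
        out.append(alpha * x + (1 - alpha) * out[-1])
    return out

def macd_ppo(prices, fast=12, slow=26):
    """MACD = fast EMA minus slow EMA (in price units);
    PPO expresses the same spread as a percentage of the slow EMA,
    which makes readings comparable across differently priced assets."""
    fast_e, slow_e = ema(prices, fast), ema(prices, slow)
    macd = [f - s for f, s in zip(fast_e, slow_e)]
    ppo = [100.0 * (f - s) / s for f, s in zip(fast_e, slow_e)]
    return macd, ppo

# Demo on a synthetic, gently up-trending price series.
random.seed(3)
prices = [100.0]
for _ in range(99):
    prices.append(prices[-1] * (1 + random.gauss(0.001, 0.01)))

macd, ppo = macd_ppo(prices)
print(f"last MACD: {macd[-1]:.3f}, last PPO: {ppo[-1]:.3f}%")
```

As Bill notes, the real work is matching the fast/slow spans to the timeframe being traded; the defaults above are only a starting point.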