firewalker99
Legendary member
question about backtesting
barjon said:
firewalker
That back testing will then answer questions about expectancy, stop loss, "target" etc.
jon
Hi barjon, this one is not directed at you in particular; it's a general question about backtesting. At what point does your backtesting become curve-fitting or optimizing instead of truly testing a strategy? Suppose I found a setup that has a 50% win ratio. Included in my setup is a fixed stop of 1 point. If I used the setup with a 3 point stop instead, the win ratio would rise to 60%, of course with greater risk. Or if I only had a target of 3 points instead of 5, my ratio would increase by 5%, and so on.
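To put rough numbers on that trade-off, here is a quick Python sketch using only the hypothetical figures above (5 point target as the base case); none of this is tested data, it just illustrates the expectancy arithmetic:

# Expectancy per trade for the hypothetical variants above.
# All figures are the illustrative ones from this post, not real results.
variants = [
    ("1 pt stop / 5 pt target", 0.50, 5.0, 1.0),  # base setup, 50% win ratio
    ("3 pt stop / 5 pt target", 0.60, 5.0, 3.0),  # wider stop, 60% win ratio
    ("1 pt stop / 3 pt target", 0.55, 3.0, 1.0),  # tighter target, +5% win ratio
]

for name, win_rate, target, stop in variants:
    # expectancy (points per trade) = p * reward - (1 - p) * risk
    expectancy = win_rate * target - (1.0 - win_rate) * stop
    print(f"{name}: {expectancy:+.2f} points per trade")

With these particular numbers, the variants with the higher win ratios actually come out with a lower expectancy per trade.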
For the moment I'm defining parameters on sample data covering about two months. That's not a lot, I know, but I was wondering whether going back over a couple of years of data is all that useful, as markets change over time (but over what time frame?). The next phase is comparing my setup against another two months of unseen data, so that every part of the market cycle is included. Is there a reason to assume seasonal or cyclical components that would make analysing a couple of months of data more or less useful than other months? I've calculated that I have to test about three to four hundred "elements" to have results that are statistically significant to within a 5% error. An "element" could be a signal or a day. One possible answer could be that I have to use as much data as possible in order to be confident of the results of the backtesting. However, I'd prefer not to rely on my personal "feeling" of confidence in this matter and instead rely only on the numbers.
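For reference, the three to four hundred figure is roughly what the standard normal-approximation sample size formula for a proportion gives at 95% confidence and a 5% margin of error, assuming a win ratio around 50% (the worst case):

import math

# Sample size needed to estimate a win ratio p to within +/- e
# at 95% confidence (normal approximation for a proportion):
#   n >= z^2 * p * (1 - p) / e^2
z = 1.96   # 95% confidence
p = 0.50   # assumed win ratio; p = 0.5 gives the largest required n
e = 0.05   # margin of error of 5 percentage points

n = math.ceil(z**2 * p * (1 - p) / e**2)
print(n)   # 385, i.e. roughly three to four hundred signals or days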
In essence, what would you define as curve-fitting? Where is the line between optimizing your strategy and simply looking for the optimal elements that make up the best possible setup? I've checked out the thread "Tradestation (or any other type of...) backtesting, optimising, curve-fitting", but there hasn't been much activity there for the last half year.