3rd generation NN, deep learning, deep belief nets and Restricted Boltzmann Machines

I am thinking of doing the backtesting in Chaos Hunter. The problem with Chaos Hunter is that I can only create buy/sell signals, and the CH fitness function is "maximize the equity". One can maximize the equity with large drawdowns, and I don't like large drawdowns. I like the signals with the smallest MAE percentage.

In MC you can write a custom fitness function in JavaScript, so it is much better. On top of that it uses EasyLanguage, for which a lot of ready-made software is already available.
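To illustrate the kind of fitness criterion I mean, here is a rough Python sketch (purely illustrative, not CH or MC code): rank equity curves by net profit penalized by maximum drawdown, instead of by final equity alone.

    # Minimal sketch of a drawdown-aware fitness function (illustration only,
    # not ChaosHunter/MC code). 'equity' is the equity curve per trade or per bar.
    def max_drawdown(equity):
        peak, worst = equity[0], 0.0
        for x in equity:
            peak = max(peak, x)
            worst = max(worst, peak - x)   # largest drop from a running peak
        return worst

    def fitness(equity):
        net_profit = equity[-1] - equity[0]
        return net_profit / (1.0 + max_drawdown(equity))  # smoother curve -> higher fitness

    # Example: same final equity, but the second (smoother) curve scores higher.
    print(fitness([100, 90, 130]), fitness([100, 110, 130]))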

Some years ago I switched from MT4, then NS, then TradeStation to MC, and so far it is the best backtesting tool for me. It also has a built-in walk-forward tester, so with just a few clicks you can get proper OOS results.
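The walk-forward idea itself is simple enough to sketch (again only an illustration in Python, nothing to do with MC internals): optimize on one window, test on the next, slide forward, and stitch the OOS segments together.

    # Rough sketch of walk-forward splitting: consecutive in-sample / out-of-sample
    # windows; only the OOS segments count towards the final result.
    def walk_forward_splits(n_bars, train_len, test_len):
        splits, start = [], 0
        while start + train_len + test_len <= n_bars:
            train = range(start, start + train_len)
            test = range(start + train_len, start + train_len + test_len)
            splits.append((train, test))
            start += test_len              # slide forward by one OOS window
        return splits

    for train, test in walk_forward_splits(n_bars=1000, train_len=600, test_len=100):
        print(train.start, train.stop, '->', test.start, test.stop)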

Krzysztof
 
Fabwa:

So are you going to post some results as well, or are you just interested in others' results??

Krzysztof

Do I hear some complaints there? :cool:
I am very busy at the moment - I just tried to give some, from my point of view, useful tips for now...

@RF: as I stated earlier, your performance analysis is, from my point of view, flawed. And statements like "I tried RF but it didn't perform well" can't be taken seriously. There are so many ways you can use tree ensembles; just clicking a button in some machine learning framework to test their capabilities is not reasonable. Check out headhunter platforms looking for algorithmic traders - many of them list "practical knowledge of RF" as a requirement for a reason...

I can only repeat myself: either you try to "classify" points in time where the probability of a +/- move is statistically > 50%, which involves getting rid of as much bias and noise as possible (including extensive cluster analysis to possibly find any working concept), or you go the regression way (ARMA etc.). Your current approach is simply doomed - sorry to be harsh :love:
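To make the first option concrete, here is a rough sketch of the labeling and the >50% check (Python, purely illustrative): label each bar by the sign of the move over the next k bars, then test whether the bars your filter/cluster selects beat a coin flip.

    # Rough sketch of the "classification" idea: label each bar by the sign of the
    # forward move and test whether a selection of bars beats a coin flip.
    import numpy as np
    from scipy.stats import binomtest   # SciPy >= 1.7

    def forward_labels(close, k):
        close = np.asarray(close, dtype=float)
        fwd = close[k:] - close[:-k]             # move over the next k bars
        return (fwd > 0).astype(int)             # 1 = up, 0 = down/flat

    def edge_test(labels, selected):
        # 'selected' is a boolean mask of the bars your filter/cluster picked
        hits = labels[selected]
        return binomtest(int(hits.sum()), len(hits), p=0.5, alternative='greater')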

fabwa
 
trees and others

Fabwa:

Well, I just stated:

Tree algos are known to describe data very well and perform poorly out of sample.


So is this statement true or false, or don't you know??

Anyway, there are a lot of answers about it, e.g. on Stack Overflow; here is one of them:

http://stackoverflow.com/questions/14714270/what-is-the-best-classifier

Tree algorithms are not even mentioned there.

Regarding your other comments: of course I tried different types of labeling, I just didn't post the results, so here you have one more.

The exit/label signal is based on MACD 12,26,9:

cond(2) = (MACDSign(i)) > 0 && (MAhist(i) > 0);
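For reference, this is roughly what that condition computes (a Python sketch; I am assuming MACDSign is the MACD signal line and MAhist the MACD histogram from MACD 12,26,9):

    # Sketch of the MACD 12,26,9 label condition above, assuming MACDSign is the
    # signal line and MAhist the histogram (MACD line minus signal line).
    import pandas as pd

    def macd_label(close, fast=12, slow=26, signal=9):
        close = pd.Series(close, dtype=float)
        macd_line = (close.ewm(span=fast, adjust=False).mean()
                     - close.ewm(span=slow, adjust=False).mean())
        signal_line = macd_line.ewm(span=signal, adjust=False).mean()
        hist = macd_line - signal_line
        return (signal_line > 0) & (hist > 0)    # per-bar analogue of cond(2)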

So there is no 'nonsense adding of TP on quiet market', as you say, and the performance is:

19.89158537 0.747794118 0.028902439 0.63004902 0.019411765 -4700.661765 PF=0.49 W/L=0.24

compared to fixed SL/TP 15/30 pips:

31.89832402 0.608317308 0.038870056 0.499903846 0.023317308 -10141.17308 PF=0.47 W/L=0.6

So average precision (% profitable) falls from 31% to 19%.

Accuracy rises from 0.6 to 0.74, which means the TN rate rises, but we are not interested in the TN rate!!!

The PFs look similar.

The W/L ratio falls from 0.6 to 0.24!!!

So for me it's clear that this is worse performance.
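Just to be clear about which numbers I am looking at, this is roughly how I read these metrics (Python sketch, illustrative only, not the exact code behind the tables): precision counts only the signals we actually take, accuracy also rewards true negatives (bars we correctly skip), PF is gross profit over gross loss, and W/L is the average winner over the average loser.

    # Illustrative computation of the metrics discussed above.
    import numpy as np

    def precision_accuracy(y_true, y_pred):
        y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
        tp = np.sum((y_pred == 1) & (y_true == 1))
        precision = tp / max(np.sum(y_pred == 1), 1)   # % profitable of taken signals
        accuracy = np.mean(y_pred == y_true)           # also counts true negatives
        return precision, accuracy

    def pf_wl(trade_profits):
        p = np.asarray(trade_profits, dtype=float)
        wins, losses = p[p > 0], -p[p < 0]
        pf = wins.sum() / max(losses.sum(), 1e-9)      # profit factor
        wl = wins.mean() / max(losses.mean(), 1e-9)    # avg win / avg loss
        return pf, wl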

I know you are busy and cannot prove your theories with hard results, but a general question: do you have any PRACTICAL experience in applying AI algos to HF finance data, or is this just your first try on this forum??

Krzysztof
 
(n) We are getting emotional..

I do have a sound background, with two degrees in Machine Learning, Mathematics and Statistics, and research industry experience. Part of that was intensive time series analysis. If you refer to SO questions like "What is the best Classifier [closed]", that tells me that your theoretical understanding of ML could be improved... (btw, there is a reason why that question was closed)

So... stay calm :clover: If you don't want to hear my bits, just ignore me :whistling I do think my suggestions make sense - if you don't think so, that's fine. You keep ignoring my cluster analysis...

cheers

PS: to your question

"Tree algos are known to describe data very well and perform poorly out of sample."

Most definitely FALSE
 

False??? See the attached picture from The Elements of Statistical Learning by Hastie and Tibshirani, who are leading statisticians at Stanford. They claim that the predictive power of trees is poor... so there is a mismatch in opinions...

And here is a question that was not closed, which gave me this reference, from Stack Exchange:

http://stats.stackexchange.com/ques...es-a-comparison-of-the-pros-and-cons-of-diffe

See the 1st answer.

Krzysztof
 

Attachments

  • picture.jpg (174.3 KB)

When I say tree I am talking about ensembles of trees, i.e. Random Forests; further, I would never claim that one classifier is better than another - it is always case specific...

See this IEEE classification challenge from 2013:
http://hyperspectral.ee.uh.edu/?page_id=695

The winning team used random forests.
 
Chirp


And here CHIRP is the winner. According to this comparison, RF is worse than CHIRP, LMT and J48. From my observation they overfit a lot; in sample they sometimes have an almost 100% hit rate... out of sample they die.
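This is easy to see if you compare in-sample and out-of-sample accuracy directly; a minimal sketch of the check I mean (Python/scikit-learn, just an illustration, not my actual MATLAB setup):

    # Minimal sketch of the in-sample vs out-of-sample gap check described above.
    # An unconstrained forest often scores near 100% in sample; the OOS score is
    # what actually matters.
    from sklearn.datasets import make_classification
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import train_test_split

    X, y = make_classification(n_samples=2000, n_features=20, flip_y=0.3, random_state=0)
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, shuffle=False)   # keep time order

    rf = RandomForestClassifier(n_estimators=200, min_samples_leaf=20,  # tame the overfit
                                random_state=0).fit(X_tr, y_tr)
    print("in-sample:", rf.score(X_tr, y_tr), "out-of-sample:", rf.score(X_te, y_te))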
 

Attachments

  • picture1.jpg (341 KB)
  • CHIRP classifier.pdf (741.8 KB)

Do what you have to do :)
 
Unfortunately I don't have control over their training process, so I can't interrupt it with some kind of evaluation to prevent overfitting.

Otherwise I'm trying different things now, e.g. different exit types, the Ehlers Super Smoother and cost-sensitive over/under-sampling.
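For the cost-sensitive part, this is roughly the idea I am experimenting with (a scikit-learn sketch, illustrative only, not my actual MATLAB code): either weight the rare 'take the trade' class up directly, or oversample it before training.

    # Sketch of the cost-sensitive idea: weight or oversample the minority class.
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier

    def fit_cost_sensitive(X, y, loss_cost=1.0, missed_win_cost=3.0):
        # class 1 = "take the trade"; missing a winner is assumed 3x worse here
        clf = RandomForestClassifier(n_estimators=200,
                                     class_weight={0: loss_cost, 1: missed_win_cost},
                                     random_state=0)
        return clf.fit(X, y)

    def oversample_minority(X, y, rng=np.random.default_rng(0)):
        X, y = np.asarray(X), np.asarray(y)
        minority = np.flatnonzero(y == 1)
        n_extra = max(np.sum(y == 0) - len(minority), 0)
        extra = rng.choice(minority, size=n_extra, replace=True)
        idx = np.concatenate([np.arange(len(y)), extra])
        return X[idx], y[idx]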
 
I would say stop trying algorithms and start understanding them!

My last bit for now:
http://de.slideshare.net/Pammy98/using-random-forest-proximity-for-unsupervised-learning-in-tissue
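The core trick in that deck: two samples are 'close' if they land in the same leaf in many trees, and 1 minus that proximity can feed any clustering method. A minimal sketch with scikit-learn (purely illustrative, not the code from those slides):

    # Random forest proximities for unsupervised use: proximity of two samples =
    # fraction of trees in which they fall into the same leaf.
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier

    def rf_proximity(rf, X):
        leaves = rf.apply(X)                     # shape (n_samples, n_trees), leaf ids
        n_samples, n_trees = leaves.shape
        prox = np.zeros((n_samples, n_samples))
        for t in range(n_trees):
            prox += leaves[:, t][:, None] == leaves[:, t][None, :]
        return prox / n_trees                    # use 1 - prox as a distance matrix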

The set of 17 algorithms I have stays the same all the time; I'm changing the data preprocessing, features and label creation methods, and checking the average results from all of them.

There is no need to understand them in depth at the moment; they all behave similarly on average, just some better and some worse. For me they are just a way to detect the proper preprocessing and labeling method.

When I choose the final algo(s), preprocessing, feature set and labeling method, then I will dig more into the details...

Anyway, without proper testing and verification, everything written in the various papers is just paper results on different data sets, so not that applicable for us. In one paper deep nets are the best, in another SVMs, in another CHIRP and maybe RF... so testing, testing and testing - this is my approach.
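In other words, the harness looks roughly like this (a Python/scikit-learn sketch, illustrative only - the real thing is my 17-algorithm MATLAB setup): fix the set of classifiers, swap the preprocessing/labeling, and rank the variants by the averaged score.

    # Illustrative harness: keep a fixed set of classifiers and compare
    # preprocessing/labeling variants by their average cross-validated score.
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import TimeSeriesSplit, cross_val_score
    from sklearn.svm import SVC

    ALGOS = [LogisticRegression(max_iter=1000),
             RandomForestClassifier(n_estimators=100),
             SVC()]

    def average_score(X, y):
        cv = TimeSeriesSplit(n_splits=5)         # keep time order in the folds
        scores = [cross_val_score(algo, X, y, cv=cv).mean() for algo in ALGOS]
        return np.mean(scores)                   # rank labeling/preprocessing variants by this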
 
I have developed several Neural Network strategies with constant retraining (retrain on every bar using information from the past X bars) with very good results on the EUR/USD. Attached you can see the results of an implementation using a committee of 3 different NN implementations (non-compounding simulation, highly linear and profitable). Note that the NN is constantly retrained to make every trading decision, so you could consider the whole back-test as an out-of-sample regarding the NN's predictive powers. So I want to let you know that you can in fact develop Neural Network systems for the FX market that can work really well :)
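The retraining scheme described here boils down to something like the following (a Python/scikit-learn sketch of the general idea only, not the actual F4/FANN implementation): for every bar, fit the model on the previous X bars and predict just the next one, so every prediction is made on data the model has never seen.

    # Sketch of "retrain on every bar": each prediction comes from a model fitted
    # only on the previous 'window' bars, so the whole backtest is effectively OOS.
    import numpy as np
    from sklearn.neural_network import MLPClassifier

    def rolling_predictions(X, y, window=500):
        preds = []
        for t in range(window, len(y)):
            model = MLPClassifier(hidden_layer_sizes=(10,), max_iter=500)
            model.fit(X[t - window:t], y[t - window:t])   # past X bars only
            preds.append(model.predict(X[t:t + 1])[0])    # next trading decision
        return np.array(preds)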
 

Attachments

  • results.png (67.3 KB)


What a coincidence to see you here! I am the one who commented on your blog post two days ago (clustering for mathematical expectation optimization) :) Welcome to the discussion!
 

So is it classification or regression?? Is it on daily charts???

Some years ago I tried this approach with regression, applied to the 1-min EURUSD chart, without much success. The results are somewhere in there:

http://www.trade2win.com/boards/tra...twork-indicator-mt4-using-neuroshell-141.html

Did you back-test it using some trading platform???
 
Hi Krzysiaczek99,

It combines several NNs: two based on classification, one on regression. However, each one is profitable on its own as well. The systems are based on the daily charts; going lower is very hard in this manner (back-tests could take months), since I am constantly retraining the system for each trading decision.

The system is coded in the Asirikuy F4 framework (ANSI C) and the back-test was done on our python back-tester (the NST), however the back-test can also be done in MT4/5 with the same results (I only posted the NST image because it's much nicer in my opinion). The neural networks are implemented in the F4 framework using FANN but I also had success with other machine learning strategies implemented using the Waffles and Shark C libraries. It's very easy for us to do things like retraining machine learning approaches while this can be a nightmare on other platforms, etc.
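Conceptually the committee can be as simple as the following (a Python sketch of the general idea only, not the actual F4 framework logic): require the classifiers to agree on direction and the regression forecast to point the same way before trading.

    # Sketch of a simple committee rule: two classifier votes in {-1, +1} plus a
    # regression forecast of the return. Illustration of the idea, not F4 code.
    def committee_signal(vote_a, vote_b, predicted_return):
        if vote_a == vote_b == 1 and predicted_return > 0:
            return 1        # long
        if vote_a == vote_b == -1 and predicted_return < 0:
            return -1       # short
        return 0            # stay flat when the members disagree

    print(committee_signal(1, 1, 0.003), committee_signal(1, -1, 0.003))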

I hope the above helps out :)

Daniel
 

What is NST?? NeuroShell or something else??

Here I use HF data (1-min EURUSD), running scripts on a MATLAB cluster.

So maybe you could apply your system to 1-min data using the same trading days as me, so we can compare results?? I posted here the results from 6 trading days; you can find them in the Excel sheet I posted. I run a group of 17 different algos on this data.

Krzysztof
 
The NST (New Strategy Tester) is a python back-testing platform we created at Asirikuy to allow us to back-test our systems outside the MT4/5 platforms and in Linux/MacOSX. Here is a link so that you can read more about it if you wish: http://mechanicalforex.com/2014/02/the-f4-programming-framework-and-asirikuy-tester-a-simple-faq.html. The great thing is that our machine learning code is already ready for both back-testing and live trading. The above mentioned strategy is already being traded across several accounts.

Also, these models have no predictive power on such low time frames; I tried it once on the 1H and the models fail completely. They are only successful on higher time frames, because lower time frames have seasonality components (such as daily volatility cycles) that the models don't account for. I believe that in order to have some degree of success on any low TF you need to properly account for all these phenomena, but I would say that the noise at the 1M is too large, and it is extremely difficult to develop a machine learning method that works on this TF.
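If one did want to push toward a lower TF, accounting for those cycles could start with something as simple as time-of-day features and volatility normalization; a Python sketch of that general idea (purely illustrative, not part of any framework mentioned here):

    # Illustrative sketch: encode intraday seasonality (hour of day) and normalize
    # returns by rolling volatility before feeding a low-timeframe model.
    # Assumes a pandas DataFrame with a DatetimeIndex and a 'close' column.
    import numpy as np
    import pandas as pd

    def add_seasonality_features(df, vol_window=120):
        out = df.copy()
        ret = np.log(out['close']).diff()
        vol = ret.rolling(vol_window).std()
        out['ret_norm'] = ret / vol                          # volatility-normalized return
        out['hour_sin'] = np.sin(2 * np.pi * out.index.hour / 24)
        out['hour_cos'] = np.cos(2 * np.pi * out.index.hour / 24)
        return out.dropna()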
 
 