Build a Neural Network Indicator in MT4 using NeuroShell

How NSClassifier compares to PNN/GRNN in NS2:
- It does not require you to set parameters such as learning rate and momentum, as NS2 does.
- Its nets can have only one output, while a PNN in NS2 needs each output specified (0 or 1); it is also said to work better with continuous-valued inputs than with binary inputs.
- It grows hidden neurons as needed and trains quickly, while in NS2 the layer is fixed and the number of hidden neurons is not adjusted automatically.
- There are two optimization options: neural network or GA.
I will post and compare the results later.
Arry
 
I agree, it can't be OOS. But why do you say that PNNs do not generalize? Seems like they can if the data is stationary. So it depends to some extent on the choice of training data and target.

Precisely. But because PNNs are shallow classifiers, the training set must be thoroughly representative of the actual population, even more so than for other types of NN; PNNs memorize well but don't infer new behaviors unless fed all possible variations.
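To make that concrete, here is a minimal PNN sketch in numpy (the function name and the smoothing parameter sigma are my own illustration, not anything from NS2). Note that the "trained" model is literally the stored training set, which is why representative coverage matters so much:

import numpy as np

def pnn_classify(X_train, y_train, x, sigma=0.1):
    # A PNN stores every training sample; "training" is just memorization.
    # Each class score is the average of Gaussian kernels centered on that
    # class's training points (a Parzen density estimate).
    scores = {}
    for c in np.unique(y_train):
        pts = X_train[y_train == c]
        d2 = np.sum((pts - x) ** 2, axis=1)
        scores[c] = np.mean(np.exp(-d2 / (2.0 * sigma ** 2)))
    # A query far from every stored point gets near-zero density for all
    # classes, i.e. the net cannot infer behavior it has never seen.
    return max(scores, key=scores.get)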

Hopefully Arryex's results will be OOS.
 
A couple of MBP nets for EUH1

Some time ago (post #40) I pointed out that using the RMSE as a measure of the quality of a prediction was probably not as good as other possible measures. I now have a reasonable predictor for both the high and the low of EUH1 using MBP nets, and can look at other measures.

Here is an example of the prediction. The predicted high and low have been delayed 1 bar for the display, so they can be easily compared to the high and low that they are trying to predict.
EUH1 prediction.jpg

All predictions are OOS, since the nets were trained on data prior to June 2009, and errors were measured on approximately 5000 samples since then. The nets have a simple 5-node single-hidden-layer topology (11-5-1). The dataset used for training is quasi-stationary, consisting of the lagged differences of H, L, C and the 48-bar averages of H, L, C, plus a couple of other features. With this dataset the errors are also quasi-stationary, in the sense that their mean and stdev are approximately constant.
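For anyone wanting to reproduce this kind of input, here is a rough sketch of how such a quasi-stationary feature set might be built (pandas; the column names and exact feature list are guesses from the description above, not fralo's actual code):

import pandas as pd

def build_features(df, avg_len=48):
    # df holds one row per H1 bar with columns High, Low, Close.
    X = pd.DataFrame(index=df.index)
    for col in ("High", "Low", "Close"):
        # Lagged first differences are roughly stationary, unlike raw prices.
        X["d" + col] = df[col].diff()
        # Distance from a 48-bar moving average, also quasi-stationary.
        X["ma" + col] = df[col] - df[col].rolling(avg_len).mean()
    # Example target: next bar's high as a difference from the current close.
    data = X.assign(target=df["High"].shift(-1) - df["Close"]).dropna()
    return data.drop(columns="target"), data["target"]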

The stdev of pH is 12.81 pips and that of pL is 12.87 pips. But the stdev does not tell the whole story. Probably more important is the error probability distribution shown in the other images, and the cumulative probability distribution (not shown). Although 13 pips seems pretty large for use in a system, using the error distributions one can devise a controlled and profitable trading strategy following the approach I posted previously (post #40).
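As a concrete illustration of working from the distributions rather than the stdev alone, something like the following would print the empirical quantiles (numpy; the sign convention, predicted minus actual, is my assumption):

import numpy as np

def error_report(predicted, actual, pip=0.0001):
    errors = (predicted - actual) / pip              # errors in pips
    print("mean %6.2f   stdev %6.2f" % (errors.mean(), errors.std()))
    # The empirical distribution says how often, and by how much, the
    # prediction misses - which is what a trading rule really needs.
    for q in (0.05, 0.25, 0.50, 0.75, 0.95):
        print("%2.0f%% of errors below %6.2f pips" % (q * 100, np.quantile(errors, q)))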
Low errors.jpg High errors.jpg

To devise such a strategy, one must also know the distribution of the predicted difference between high and low, shown below.
High - Low.jpg

From these perhaps I can devise a reasonable trading system... who knows? :D
fralo
 
Very interesting observation with this error distribution. So for the majority of samples:

Hp > H
Lp < L
Hp - Lp is in the range 22-34 pips

For the first two, for the majority of samples this looks like a kind of extension of the range, so the accuracy of the strategy will be even lower. However, maybe it is possible to offset those errors by adding/subtracting some kind of average value?? (A sketch of that idea is below.)
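That offset idea is easy to test; a minimal sketch (the function and variable names are hypothetical):

import numpy as np

def debias(pred_past, actual_past, pred_new):
    # The systematic part of the error is its past mean; subtracting it
    # re-centers new predictions but leaves their spread unchanged.
    bias = np.mean(np.asarray(pred_past) - np.asarray(actual_past))
    return np.asarray(pred_new) - bias

# e.g. corrected_Hp = debias(past_Hp, past_H, new_Hp)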

The Hp - Lp result I'm not so sure how to use.

Krzysztof
 
Very interesting ...

The Hp - Lp result I'm not so sure how to use.

Krzysztof
Neither am I. But suppose that we try a strategy (say for long trades) that sets a stop (SL) and a pending buy (BP) somewhere below pL, and a TP below pH. Then there will be a threshold value for the estimated range (pH - pL) at which we are willing to trade. Above that threshold we expect a positive return that depends on BP, TP, SL and the distributions.

We could find the optimum settings for SL, BP and TP using NST, or MT4. But that requires us to use fixed offsets from pL and pH for SL, BP and TP, and it finds the best offsets over the training set. It seems that we could do better with adaptive offsets that depend on the predicted range... if the range is large we may want to allow more headroom for SL, or maybe less headroom for TP. Neither NST nor MT4 will find an appropriate adaptation. But perhaps by thinking about the distribution of the range we can find something appropriate.
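To make the idea concrete, a toy version of such a long-trade rule might look like this (the threshold and all the offset fractions are placeholders I made up; the point is only that SL, BP and TP derive from the predicted range instead of being fixed):

def plan_long_trade(pH, pL, threshold=0.0025):
    # Returns (pending_buy, stop_loss, take_profit), or None if no trade.
    rng = pH - pL
    if rng < threshold:              # predicted range too small to trade
        return None
    # Adaptive offsets: more SL headroom and a more conservative TP when
    # the predicted range is large (the fractions here are arbitrary).
    pending_buy = pL + 0.10 * rng
    stop_loss = pL - 0.25 * rng
    take_profit = pH - 0.20 * rng
    return pending_buy, stop_loss, take_profit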

Even though I believe in AI, I still think the human mind can do some things better. :)
fralo
 
Now, training on the same data for the same number of epochs, the results are very different.
MBP seems to be not perfect...

Krzysztof

This is a problem with the CUDA implementation (abscissa errors in the neural outputs on the training graph). When I use CPU calculation (very slow), the results are OK. When I use CUDA (primary GPU GF 7950 GT - no CUDA support; secondary GPU GF 240 GT - CUDA OK, 96 shader engines; graphics driver 197.45, Windows XP SP3; the GF240GT has no monitor connected and is not active in display properties - it is used only for CUDA calculation), then the results are sometimes incorrect when the total number of neurons is greater than about 98. Sometimes the CUDA results are OK after restarting the computer... When I switch to CPU recalculation (Intel Core Duo E6700), the results are OK.
The other problem, in my opinion, is that learning slows down drastically with CUDA: to reach the same RMS error as the CPU calculation, many more epochs are needed. I tried changing the robustness configuration with CUDA, and stopping robustness with CUDA (but the calculation can sometimes still switch to a bad result - abscissa errors on the training graph; calculating on the CPU without CUDA has no such problem), with no success.

Perhaps it is a problem of 32-bit versus 64-bit accuracy on the GPGPU. The CPU's FPU has 80-bit accuracy. But I don't know the implementation - I am only speculating.
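That suspicion is easy to demonstrate in isolation: sequential single-precision accumulation (typical of GPGPU code of that era) drifts measurably from a double-precision reference. A small numpy demo:

import numpy as np

x = np.random.rand(1_000_000)
# Sequential float32 accumulation loses precision relative to float64.
s32 = x.astype(np.float32).cumsum()[-1]
s64 = x.sum()
print("float32: %.6f  float64: %.6f  diff: %.6f" % (s32, s64, abs(s32 - s64)))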

I must contact the developer.
 

Are you a programmer?? I'm just implementing multi-GPU support for MBP/GPUMlib.
Have a look at this - here is the pure CUDA part of MBP:

http://switch.dl.sourceforge.net/project/gpumlib/

The version before last (0.1.2) is fully compilable with the VS2008 project and solution.

Krzysztof
 

No, I only knew Turbo Pascal, BASIC and Z80A assembler, 20 years ago. :)
But I disconnected the monitor from the secondary graphics card (GT240 - CUDA support), set it to NOT attached in Windows display properties, and shut the computer down. After restarting, MBP software 2.2.0 now calculates with CUDA OK. I will do a stress test with a 200-220-110-55-1 NN and see. I have 2 graphics cards, but only the GT240 has CUDA support.

My recommendation for now is to use CUDA graphics cards only for CUDA computing, with no monitors connected. Just to be sure, I set

[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\GraphicsDrivers]
"TdrLevel"=dword:00000000

and will also test connecting the GT240 to a display.
 


First, I'm not sure such a configuration with 200 neurons is allowed; as far as I remember there was a limit on it. Just have a look at the MBP site and videos.

By default MBP makes its calculations on CUDA device 0, so it is best to use another card for the display; then there will be no delays. If you have 2 NVIDIA CUDA-capable cards then, on Vista, you can make a 'headless' setup: display on device 1 and calculate on device 0. I think on XP that is not possible; you can only extend the monitor onto device 1, but it always displays on device 0, the same one that makes the calculations.

So the simplest option is to use a separate card for the display.

Krzysztof
 
Here is my update on stock trading strategy development.
- Stock: BMRI.JK
- Output: optimum trading strategy (from Trading Solutions)
- Inputs: 28 input indicators (those having high correlation with the output)
- Predict the output using NS2 (MBP, GRNN), NS Predictor and Chaos Hunter
- Data range for optimization within NS2, NSPred and CH: Jul 2003 - 28 Feb 2010
- Out-of-sample test: 1 Mar 2010 to date
- The trading strategy is implemented within NSDT without optimization!!
So far the CH result is better than the others when the NS2 and NSPredictor outputs are used as CH inputs.
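For what it's worth, that "high correlation with output" screening step can also be done outside the NeuroShell tools; a minimal sketch (pandas; `indicators` as a DataFrame of candidate inputs and `target` as the desired output are hypothetical names):

import pandas as pd

def top_correlated(indicators, target, n=28):
    # Rank candidate indicators by absolute Pearson correlation with the
    # target and keep the n strongest - the same screening step as above.
    corr = indicators.corrwith(target).abs().sort_values(ascending=False)
    return indicators[corr.head(n).index]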

Arryex
www.ai-traders.org
 

Attachments

  • Optimum signal TS.png
  • BMRI.JK All NS inputs.png
  • BMRI.JK All NS.png
Here is the display after optimization with NSPredictor and Chaos Hunter.

Arryex
www.ai-traders.org
 

Attachments

  • CH Optimization.png
  • NSPred optimization.png
Hello!

I would like to ask where I can download the "kTrend" or "ssa" indicators?

Thanks.

Sugus
 
So if I understand correctly, one can train a neural net on a set of data,
and nets would perform better if trained on a small amount of data with repetitive patterns.

Hence if I were to use NeuroShell and wished to train a net to learn double tops and bottoms, I would need to extract data which has similar repetitive patterns and train my net on that data? Correct me if I am wrong.
 
Hi Supremegizmo,

I think if we can define (or let NSDT do it for you) when double tops and bottoms occur - for example, based on the % change of the Close value between the base and the double top/bottom, and on the differences between those tops/bottoms - then you can trigger your signal; a rough sketch of such a detector is below.

You can also use the indicators available within NSDT for detecting candlestick patterns (bullish/bearish continuation/reversal).
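As a rough illustration of the %-change idea (a sketch only; the window handling and tolerances are made up and would need tuning):

import numpy as np

def is_double_top(close, peak_tol=0.01, base_drop=0.03):
    # Two peaks of similar height with a clear trough between them:
    # the peaks must match within peak_tol (fraction) and the trough must
    # sit at least base_drop (fraction) below the lower peak.
    # `close` is a 1-D array of Close prices covering the pattern window.
    half = len(close) // 2
    i1 = int(np.argmax(close[:half]))
    i2 = half + int(np.argmax(close[half:]))
    p1, p2 = close[i1], close[i2]
    trough = close[i1:i2 + 1].min()
    peaks_match = abs(p1 - p2) / p1 < peak_tol
    deep_trough = (min(p1, p2) - trough) / min(p1, p2) > base_drop
    return peaks_match and deep_trough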

Sorry for the late reply.

Arryex
www.ai-traders.org
 