1. Stock forecasting with an RNN

* Data source: use the tushare module to download the daily k-line data of SH600519 (Kweichow Moutai), and use only the opening-price column (column C) of that data. 60 consecutive days of opening prices are used to predict the opening price of day 61.

* Source code (p37_tushare.py):

```python
import tushare as ts
import matplotlib.pyplot as plt

df1 = ts.get_k_data('600519', ktype='D', start='2010-04-26', end='2020-04-26')
datapath1 = "./SH600519.csv"
df1.to_csv(datapath1)
```
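The 60-day windowing described above can be sketched as follows; the synthetic price series and the helper `make_windows` are stand-ins for illustration, not part of the original source (the real data would be the `open` column of SH600519.csv):

```python
import numpy as np

# Build (60-day window -> day-61 target) training pairs from an
# opening-price series.
def make_windows(prices, window=60):
    x, y = [], []
    for i in range(window, len(prices)):
        x.append(prices[i - window:i])  # 60 consecutive opening prices
        y.append(prices[i])             # the 61st day's opening price
    return np.array(x), np.array(y)

prices = np.arange(100, dtype=float)    # toy stand-in series
x_train, y_train = make_windows(prices)
print(x_train.shape, y_train.shape)     # (40, 60) (40,)
```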

2. LSTM

A traditional RNN realizes short-term memory and can forecast continuous data. But as the sequence of continuous data grows longer, the network unrolls over more and more time steps; when backpropagation updates the parameters, gradients are multiplied across the time steps and the gradient vanishes. To address this problem, Hochreiter et al. proposed the LSTM in 1997:

Hochreiter S, Schmidhuber J. Long Short-Term Memory[J]. Neural Computation, 1997, 9(8):1735-1780.

1) Three gates

The three gates are functions of the current input feature x_t and the short-term memory of the previous moment h_{t-1}. In the three formulas below, W_i, W_f and W_o are trainable parameter matrices, and b_i, b_f and b_o are trainable biases. Each gate passes through a sigmoid activation function, so its value lies between 0 and 1.

* Input gate: i_t = σ(W_i·[h_{t-1}, x_t] + b_i)

* Forget gate: f_t = σ(W_f·[h_{t-1}, x_t] + b_f)

* Output gate: o_t = σ(W_o·[h_{t-1}, x_t] + b_o)
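A minimal numeric sketch of one gate computation; W_i, b_i and the inputs below are made-up toy values (assumptions for illustration, not trained parameters), showing that the sigmoid keeps the gate's value between 0 and 1:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

h_prev = np.array([0.1, -0.2])      # h_{t-1}: previous short-term memory
x_t = np.array([0.5])               # x_t: current input feature
z = np.concatenate([h_prev, x_t])   # [h_{t-1}, x_t]

W_i = np.full((2, 3), 0.1)          # trainable matrix (toy values)
b_i = np.zeros(2)                   # trainable bias (toy values)

i_t = sigmoid(W_i @ z + b_i)        # input gate
print(i_t)                          # every component lies in (0, 1)
```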

2) Cell state (long-term memory)

* What you hold in mind right now is the long-term memory C_t of pages 1 to 45 of today's ppt. It consists of two parts. The first part is the content of pages 1 to 44, i.e. the long-term memory of the previous moment C_{t-1}. You cannot remember it all word for word and will unconsciously forget some of it, so C_{t-1} is multiplied by the forget gate; the product represents the past memory that remains in your mind.

* What I am presenting right now is new knowledge, the present memory that is about to be stored in your mind. This present memory consists of two parts: one is page 45 of the ppt that I am presenting, the input of the current moment x_t; the other is what is retained from page 44, the short-term memory of the previous moment h_{t-1}. Your brain combines the current input x_t and the previous short-term memory h_{t-1} into the candidate state C̃_t.

* The candidate state C̃_t is multiplied by the input gate and added to the retained past memory, and together they are stored as the long-term memory.

* Cell state (long-term memory): C_t = f_t * C_{t-1} + i_t * C̃_t

* Memory output (short-term memory): h_t = o_t * tanh(C_t)

* Candidate state (summarized new knowledge): C̃_t = tanh(W_c·[h_{t-1}, x_t] + b_c)
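Putting the formulas above together, a single LSTM step can be sketched in plain numpy; all weights and inputs below are random toy values (an assumption for illustration, not trained parameters):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(0)
n_h, n_x = 3, 2                                   # hidden size, input size
W_i, W_f, W_o, W_c = [rng.standard_normal((n_h, n_h + n_x)) for _ in range(4)]
b_i = b_f = b_o = b_c = np.zeros(n_h)

h_prev = np.zeros(n_h)                            # h_{t-1}
C_prev = np.zeros(n_h)                            # C_{t-1}
x_t = rng.standard_normal(n_x)                    # current input

z = np.concatenate([h_prev, x_t])                 # [h_{t-1}, x_t]
i_t = sigmoid(W_i @ z + b_i)                      # input gate
f_t = sigmoid(W_f @ z + b_f)                      # forget gate
o_t = sigmoid(W_o @ z + b_o)                      # output gate
C_tilde = np.tanh(W_c @ z + b_c)                  # candidate state
C_t = f_t * C_prev + i_t * C_tilde                # long-term memory
h_t = o_t * np.tanh(C_t)                          # short-term memory
print(h_t.shape)                                  # (3,)
```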

3) Memory output (short-term memory)

* When you retell the story to a friend, you cannot repeat everything word for word; you narrate the long-term memory stored in your brain, outputting filtered content. This is the output of memory, h_t.

* When recurrent layers are stacked, the input x_t of the second layer is the output h_t of the first layer. What the second layer receives is the essence extracted by the first layer.

* For example: I act as the first recurrent layer, extracting the essence of every ppt page and outputting it to you. What you, as the second recurrent layer, receive is my long-term memory passed through the tanh activation function and multiplied by the output gate, i.e. the short-term memory h_t.

* Memory output (short-term memory): h_t = o_t * tanh(C_t)

4) The LSTM layer in TensorFlow

```python
tf.keras.layers.LSTM(units, return_sequences=...)
# units: number of memory cells
# return_sequences=True   output h_t at every time step
# return_sequences=False  output h_t only at the last time step (default)

model = tf.keras.Sequential([
    LSTM(80, return_sequences=True),
    Dropout(0.2),
    LSTM(100),
    Dropout(0.2),
    Dense(1)
])
```
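As a quick check of the stacked model above, it can be fed a batch of random 60-step windows; the batch size of 8 and the random data are assumptions for illustration only:

```python
import numpy as np
import tensorflow as tf
from tensorflow.keras.layers import LSTM, Dropout, Dense

model = tf.keras.Sequential([
    LSTM(80, return_sequences=True),   # h_t at every one of the 60 steps
    Dropout(0.2),
    LSTM(100),                         # h_t at the last step only
    Dropout(0.2),
    Dense(1)                           # the predicted opening price
])

x = np.random.rand(8, 60, 1).astype('float32')  # (batch, steps, features)
y = model(x)
print(y.shape)  # (8, 1)
```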

3. GRU

* In 2014, Cho et al. simplified the LSTM structure. The GRU merges long-term memory and short-term memory into a single memory h_t.

Cho K, Van Merrienboer B, Gulcehre C, et al. Learning Phrase Representations using RNN Encoder-Decoder for Statistical Machine Translation[J]. Computer Science, 2014.

The memory h_t contains past information h_{t-1} and present information, the candidate h̃_t. The present information is determined jointly by the past information h_{t-1}, passed through the reset gate, and the current input; both gates take values between 0 and 1. Applying this memory-update formula directly in forward propagation, the value of h_t at every moment can be computed.
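The update described above can be written out; these are the standard GRU equations in one common convention (note that some papers swap the roles of z_t and 1 − z_t):

```latex
z_t = \sigma(W_z \cdot [h_{t-1}, x_t])                 % update gate
r_t = \sigma(W_r \cdot [h_{t-1}, x_t])                 % reset gate
\tilde{h}_t = \tanh(W \cdot [r_t * h_{t-1},\, x_t])    % candidate (present information)
h_t = (1 - z_t) * h_{t-1} + z_t * \tilde{h}_t          % memory update
```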

* The GRU layer in TensorFlow:

```python
tf.keras.layers.GRU(units, return_sequences=...)
# units: number of memory cells
# return_sequences=True   output h_t at every time step
# return_sequences=False  output h_t only at the last time step (default)

model = tf.keras.Sequential([
    GRU(80, return_sequences=True),
    Dropout(0.2),
    GRU(100),
    Dropout(0.2),
    Dense(1)
])
```
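A single GRU step can likewise be sketched in numpy; the weights and input below are random toy values (an assumption for illustration), wired according to the standard GRU gate equations:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(1)
n_h, n_x = 3, 2                                   # hidden size, input size
W_z, W_r, W_h = [rng.standard_normal((n_h, n_h + n_x)) for _ in range(3)]

h_prev = np.zeros(n_h)                            # h_{t-1}
x_t = rng.standard_normal(n_x)                    # current input

z_t = sigmoid(W_z @ np.concatenate([h_prev, x_t]))            # update gate
r_t = sigmoid(W_r @ np.concatenate([h_prev, x_t]))            # reset gate
h_tilde = np.tanh(W_h @ np.concatenate([r_t * h_prev, x_t]))  # candidate
h_t = (1 - z_t) * h_prev + z_t * h_tilde          # merged memory
print(h_t.shape)  # (3,)
```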
