## 1. Requirement

Given several known stock-market factors (opening price, closing price, daily high, daily low, trading volume, turnover) and a large amount of data for each factor, train a model that predicts the stock's rise-and-fall trend. Given test data, the model should output the next rise-and-fall trend, i.e. the label value shown in the figure below: -1 means falling, 1 means rising.
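As a minimal sketch of what those labels look like, the snippet below derives -1/1 labels from a hypothetical series of closing prices (the prices are made up, and treating a flat day as -1 is an assumption, not something the original task specifies):

```python
import numpy as np

# Hypothetical closing prices; in the real task these come from the dataset.
close = np.array([10.0, 10.5, 10.2, 10.8, 10.8, 11.0])

# Label each step by the direction of the next move:
# 1 for a rise, -1 for a fall (flat days counted as -1 by convention here).
diff = np.diff(close)
labels = np.where(diff > 0, 1, -1)

print(labels.tolist())  # [1, -1, 1, -1, 1]
```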

## 2. Analysis

### 2.1 A brief introduction to LSTM

LSTM is an algorithm designed for data with temporal structure: samples that are not only ordered by increasing time, but also strongly correlated with their neighbours. Personally I find the idea similar to a Markov chain, in that the next value is determined by the previous ones. The task here is to analyse the rise-and-fall trend over a known period of stock-market data and to predict whether the stock will rise or fall next. Stock prediction with LSTM has some problems, though:

(1) A stock's movement does not depend only on the factors listed above; there are many, many others, so the prediction will always deviate somewhat.

(2) The LSTM algorithm only considers the overall, preceding information; it cannot account for sudden events (for example, Bitcoin plummeted after Musk announced that it could no longer be used to buy a Tesla), which introduces further deviation.

(3) The LSTM setup implicitly assumes that all factors move together. If factor A increases over some period while factor B decreases, combining the two can perform worse than using A alone. (This is not an empty claim: I trained and tested the network on each factor individually, and the results varied greatly; in some cases the trend predicted by the model was completely different from the test data.)

However, the algorithm is still good enough to predict the stock trend with a somewhat better-than-chance probability.

### 2.2 The code

The code package is divided into several parts:
1. run.py: the main script
2. core/model.py: the model code
3. core/data_processor.py: the data-processing code
4. config.json: the configuration file

The only modifications this code needs are a few parameters in the configuration file; the meaning of each parameter is annotated in the listing below.

    {
        "data": {
            "filename": "StockData.csv",   // path where the dataset is saved
            "columns": [
                "close_1500"
                // The factors in the dataset that determine the rise and fall.
                // Several can be listed, separated by commas; after changing this
                // list, also change input_dim below to the new number of factors.
            ],
            "sequence_length": 50,         // window size: the data is processed in windows
                                           // of 50, e.g. 100 rows become two windows of 50
            "train_test_split": 0.5,       // ratio for splitting the data into training and test sets
            "normalise": true              // whether to normalise the data
        },
        "training": {
            "epochs": 1,
            "batch_size": 32               // samples loaded at once during training; can be larger
        },
        "model": {
            "loss": "mse",
            "optimizer": "adam",
            "save_dir": "saved_models",
            "layers": [
                {
                    "type": "lstm",
                    "neurons": 100,
                    "input_timesteps": 49, // the window size above minus 1; fed into the LSTM
                    "input_dim": 1,        // number of factors in "columns"
                    "return_seq": true
                },
                { "type": "dropout", "rate": 0.2 },
                { "type": "lstm", "neurons": 100, "return_seq": true },
                { "type": "lstm", "neurons": 100, "return_seq": false },
                { "type": "dropout", "rate": 0.2 },
                { "type": "dense", "neurons": 1, "activation": "linear" }
            ]
        }
    }

(The // comments above are explanatory only; JSON itself does not allow comments, so remove them before use.)
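The effect of sequence_length and train_test_split can be sketched in plain NumPy. This toy version uses a window of 5 instead of 50 and overlapping windows; the project's own loader may differ in whether windows overlap:

```python
import numpy as np

# Toy series standing in for one column (e.g. "close_1500") of the dataset.
data = np.arange(100, dtype=float)

sequence_length = 5   # config.json uses 50; a small value keeps the demo readable
split = 0.5           # the "train_test_split" ratio

# Slice the series into overlapping windows of length sequence_length.
windows = np.array([data[i:i + sequence_length]
                    for i in range(len(data) - sequence_length + 1)])

# Within each window, the first sequence_length - 1 points are the model
# input (input_timesteps = 49 in the real config) and the last is the target.
x, y = windows[:, :-1], windows[:, -1]

n_train = int(split * len(windows))
x_train, x_test = x[:n_train], x[n_train:]

print(windows.shape)  # (96, 5)
```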
Once the parameters are configured, the next step is to run run.py.
Three functions there deserve explanation:

    predictions_multiseq = model.predict_sequences_multiple(
        x_test, configs['data']['sequence_length'], configs['data']['sequence_length'])
    predictions_fullseq = model.predict_sequence_full(
        x_test, configs['data']['sequence_length'])
    predictions_pointbypoint = model.predict_point_by_point(x_test)
(1) model.predict_point_by_point: single-point prediction
The effect is as follows:

The code of this function is as follows:

    def predict_point_by_point(self, data):
        # Predict each timestep given the last sequence of true data;
        # in effect, only 1 step ahead is predicted each time.
        print('[Model] Predicting Point-by-Point...')
        predicted = self.model.predict(data)
        predicted = np.reshape(predicted, (predicted.size,))
        return predicted

Here data is the test data, which is fed directly into the model to obtain predictions. Only one value is predicted at a time, i.e. each next point is predicted from the preceding true data alone. Because stock prices move quickly under far more factors than the ones modelled here, this prediction method has little practical reference value: a trend forecast built from one predicted value at a time is extremely uncertain.
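To make the mechanics concrete, here is a runnable sketch with a stub in place of the trained Keras model (the stub simply averages each window, which is not what the real network does; only the predict-and-reshape logic mirrors the function above):

```python
import numpy as np

class StubModel:
    """Stands in for the trained Keras model; 'predicts' each window's mean."""
    def predict(self, data):
        return data.mean(axis=1)  # one value per input window

def predict_point_by_point(model, data):
    # Each window of true data yields exactly one 1-step-ahead prediction.
    predicted = model.predict(data)
    return np.reshape(predicted, (predicted.size,))

# Shape (2 windows, 3 timesteps, 1 dimension), like the x_test in the project.
x_test = np.array([[[1.0], [2.0], [3.0]],
                   [[2.0], [3.0], [4.0]]])
out = predict_point_by_point(StubModel(), x_test)
print(out)  # [2. 3.]
```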

(2) predict_sequence_full: full-sequence prediction

    def predict_sequence_full(self, data, window_size):
        # Shift the window by 1 new prediction each time,
        # re-running the prediction on the new window.
        print('[Model] Predicting Sequences Full...')
        curr_frame = data[0]
        predicted = []
        for i in range(len(data)):
            predicted.append(self.model.predict(curr_frame[newaxis, :, :])[0, 0])
            curr_frame = curr_frame[1:]
            curr_frame = np.insert(curr_frame, [window_size - 2], predicted[-1], axis=0)
        return predicted
Here window_size is the window size and data is the test data. The function feeds the current window into the model to obtain one predicted value and appends it to the list predicted. The line

    curr_frame = np.insert(curr_frame, [window_size-2], predicted[-1], axis=0)

then writes the latest predicted value into every dimension of the last row of curr_frame (the number of dimensions is the input_dim set in the configuration file).

For example, with a predicted value of 4:
curr_frame = [[5, 6, 7, 8], [9, 10, 11, 12]] becomes [[9, 10, 11, 12], [4, 4, 4, 4]]
so curr_frame now carries the previous prediction, 4. If the next prediction is 5, then:
[[9, 10, 11, 12], [4, 4, 4, 4]] becomes [[4, 4, 4, 4], [5, 5, 5, 5]]
As this goes on, every row of curr_frame ends up holding the same value in every dimension, and the prediction curve flattens towards an average.
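The window update can be checked directly in NumPy. The window_size = 3 below is hypothetical, chosen so that the insert position window_size - 2 matches a two-row window:

```python
import numpy as np

window_size = 3  # hypothetical; the insert position is window_size - 2
curr_frame = np.array([[5, 6, 7, 8],
                       [9, 10, 11, 12]])

predicted_last = 4  # the latest model prediction

# Drop the oldest timestep, then insert the prediction as the new last row.
# np.insert broadcasts the scalar across all dimensions of that row.
curr_frame = curr_frame[1:]
curr_frame = np.insert(curr_frame, [window_size - 2], predicted_last, axis=0)

print(curr_frame)
# [[ 9 10 11 12]
#  [ 4  4  4  4]]
```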

To sum up

This function throws the current test window into the model to predict one value, uses that value to update the window, and repeats for len(data) iterations, yielding an overall rise-and-fall trend. Of course, as the results show, it is terrible!

**(3) predict_sequences_multiple:** multi-sequence prediction

    def predict_sequences_multiple(self, data, window_size, prediction_len):
        # Predict a sequence of 50 steps, then shift the prediction
        # run forward by 50 steps.
        print('[Model] Predicting Sequences Multiple...')
        prediction_seqs = []
        for i in range(int(len(data) / prediction_len)):
            curr_frame = data[i * prediction_len]
            predicted = []
            for j in range(prediction_len):
                predicted.append(self.model.predict(curr_frame[newaxis, :, :])[0, 0])
                curr_frame = curr_frame[1:]
                curr_frame = np.insert(curr_frame, [window_size - 2], predicted[-1], axis=0)
            prediction_seqs.append(predicted)
        return prediction_seqs

Compared with the full-sequence prediction in (2), this adds one partitioning step: the test data is split evenly into segments whose length is the window size from the configuration file, and the method of (2) is applied within each segment. This lets the method capture both local and global trends (single-point prediction uses only one window at a time and is too local; full-sequence prediction rolls forward over all the data and is too global).
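A self-contained sketch of the same loop, again with a stub model standing in for the trained LSTM (toy shapes: 4 windows of 2 timesteps each, prediction_len = 2, and a hypothetical window_size = 3 so the insert index works out):

```python
import numpy as np

class StubModel:
    """Stands in for the trained Keras model; 'predicts' the window's mean."""
    def predict(self, frame):
        return np.array([[frame.mean()]])

def predict_sequences_multiple(model, data, window_size, prediction_len):
    # Split the test set into int(len(data)/prediction_len) segments and
    # roll each segment forward prediction_len steps on its own predictions.
    prediction_seqs = []
    for i in range(int(len(data) / prediction_len)):
        curr_frame = data[i * prediction_len]
        predicted = []
        for j in range(prediction_len):
            predicted.append(model.predict(curr_frame[np.newaxis, :, :])[0, 0])
            curr_frame = curr_frame[1:]
            curr_frame = np.insert(curr_frame, [window_size - 2],
                                   predicted[-1], axis=0)
        prediction_seqs.append(predicted)
    return prediction_seqs

x_test = np.arange(8, dtype=float).reshape(4, 2, 1)  # 4 windows, 2 steps, 1 dim
seqs = predict_sequences_multiple(StubModel(), x_test,
                                  window_size=3, prediction_len=2)
print(len(seqs), len(seqs[0]))  # 2 segments, each 2 predictions long
```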

The effect of this method is as follows:

The blue line in the figure above plots the values of the first factor in the test data. Each coloured segment is a predicted trend: an upward curve means the stock may rise over that range, a downward curve means it may fall after that range, and the amplitude of the curve indicates the size of the rise or fall. One more thing to emphasise:
the piecewise prediction of this model can only forecast the rise and fall over the next short period. It cannot make long-term predictions, and it cannot predict a concrete value.

Remarks:
(1) Given some data, how do I predict its rise and fall?

The window size must be fixed before the model is trained, and it determines how much data you need to supply. For example, if the model's window size is 50, the number of test data points must exceed 50 (otherwise they cannot be fed into the LSTM model, whose required input shape is set under layers in the configuration file). You then get rise-and-fall curves as in the figure. These curves are not values of any factor (they are not prices or trading volumes); they are abstract quantities describing the relationship between the stock's determining factors.
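For instance, with sequence_length = 50 the model consumes windows of 49 timesteps, so a test series must supply at least that many points. Shaping the input might look like this (the random prices are purely illustrative):

```python
import numpy as np

sequence_length = 50                # from config.json
input_timesteps = sequence_length - 1

# 60 hypothetical closing prices: more than sequence_length, as required.
prices = np.random.rand(60)

# The model expects input of shape (batch, input_timesteps, input_dim).
x = prices[-input_timesteps:].reshape(1, input_timesteps, 1)
print(x.shape)  # (1, 49, 1)
```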
(2) The explanation may not be entirely clear; if you have questions, raise them in the comments and I will add answers to these remarks.