In the actual project, a DenseNet model is used for character recognition. To improve the recognition quality, an LSTM network was appended after the DenseNet (implemented in the code below with bidirectional GRU layers). However, a problem appeared during training: both the Loss and the Accuracy stayed constant. To locate the problem, the approach is to print the output of each layer and check where things go wrong.
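As a minimal sketch of this layer-by-layer probing in Keras (the helper name `probe_layers` and the variables below are illustrative, not part of the project code), one can build a probe model whose outputs are all intermediate tensors and print summary statistics for each layer:

```python
import numpy as np
from keras.models import Model

def probe_layers(basemodel, x):
    """Print name, output shape and basic statistics of every layer for one input batch.

    basemodel -- a constructed Keras Model (e.g. the DenseNet + BLSTM network below)
    x         -- a preprocessed input batch matching basemodel's input shape
    """
    layers = basemodel.layers[1:]  # skip the Input layer
    probe = Model(inputs=basemodel.input,
                  outputs=[layer.output for layer in layers])
    outs = probe.predict(x)
    for layer, out in zip(layers, outs):
        # If the statistics stop changing from some layer onward (e.g. std ~ 0),
        # the problem is likely at or before that layer.
        print("{:20s} shape={} mean={:.4f} std={:.4f}".format(
            layer.name, out.shape, float(np.mean(out)), float(np.std(out))))
```

If the printed statistics become constant (for example the standard deviation collapses toward zero) from some layer onward, the problem most likely originates at or before that layer.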

 

Test code:
```python
# -*- coding: utf-8 -*-
try:
    from importlib import reload   # Python 3
except ImportError:
    pass                           # Python 2: reload is a builtin

import numpy as np
from keras.layers import Input, Dense, Bidirectional, GRU
from keras.models import Model

import densenet2   # the project's DenseNet definition
import keys        # the project's character-set module

# For testing
if __name__ == "__main__":
    reload(densenet2)
    characters = keys.alphabet[:]
    characters = characters[1:] + u' A kind of '  # appends the padding/blank character; the original literal was garbled in translation
    nclass = len(characters)

    # DenseNet feature extractor followed by two bidirectional GRU layers
    input = Input(shape=(32, None, 1), name='the_input')
    x = densenet2.dense_cnn(input, nclass)
    rnnunit = 256
    x = Bidirectional(GRU(rnnunit, return_sequences=True, implementation=2), name='blstm1')(x)
    x = Dense(rnnunit, name='blstm1_out', activation='linear')(x)
    x = Bidirectional(GRU(rnnunit, return_sequences=True, implementation=2), name='blstm2')(x)
    y_pred = Dense(nclass, name='out2', activation='softmax')(x)

    # Full network; the probe model's output is chosen from its layers below
    basemodel = Model(inputs=input, outputs=y_pred)
    # basemodel.summary()
    # print("the length of layers: {}".format(len(basemodel.layers)))

    # Choose which layer to inspect by index i (negative values count from the end)
    i = -3
    model = Model(inputs=input, outputs=basemodel.layers[i].output)
    model.load_weights("..\\100_test_50w_weights_densenet-71-0.08.h5", by_name=True)

    out = predict("..\\img1.JPEG", model)   # predict() is the project's custom helper, see the notes below
    name = model.layers[i].name
    print("the net name is: {}\n the out is:\n {}".format(name, np.array(out)))
```
Explanation:

* basemodel is the full network: the LSTM (bidirectional GRU) part appended after the DenseNet.
* model is the final probe model. By modifying the index "i" in basemodel.layers[i].output you choose which layer of the network to inspect; you can also print that layer's name to confirm which one it is.
* predict is a custom function; it mainly wraps model.predict() and also performs the image preprocessing (a hedged sketch of what such a helper might look like is given below).
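
The predict helper itself is not shown in the source. The following is only a plausible sketch of such a function, assuming a grayscale image resized to a fixed height of 32 pixels with variable width (matching Input(shape=(32, None, 1))) and a simple normalization; the resizing and normalization details are assumptions, not the project's actual code:

```python
from PIL import Image
import numpy as np

def predict(img_path, model):
    """Hypothetical version of the project's predict() helper:
    preprocess one image and run it through the (probe) model."""
    img = Image.open(img_path).convert('L')       # grayscale, single channel
    w, h = img.size
    new_w = max(1, int(w * 32.0 / h))             # keep aspect ratio, fixed height of 32
    img = img.resize((new_w, 32), Image.BILINEAR)

    x = np.array(img, dtype=np.float32) / 255.0 - 0.5   # assumed normalization to roughly [-0.5, 0.5]
    x = x.reshape(1, 32, new_w, 1)                       # (batch, height, width, channels)
    return model.predict(x)
```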
