Long Short-Term Memory layer - Hochreiter 1997.

tf.keras.layers.LSTM(units, activation="tanh", recurrent_activation="sigmoid", use_bias=True, kernel_initializer="glorot_uniform", recurrent_initializer="orthogonal", bias_initializer="zeros", unit_forget_bias=True, kernel_regularizer=None, recurrent_regularizer=None, bias_regularizer=None, activity_regularizer=None, kernel_constraint=None, recurrent_constraint=None, bias_constraint=None, dropout=0.0, recurrent_dropout=0.0, return_sequences=False, return_state=False, go_backwards=False, stateful=False, time_major=False, unroll=False, **kwargs)

Based on available runtime hardware and constraints, this layer will choose different implementations (cuDNN-based or pure-TensorFlow) to maximize the performance. If the arguments to the layer meet the requirements of the cuDNN kernel (see below for details), the layer will use a fast cuDNN implementation.

The requirements to use the cuDNN implementation include:
- Inputs, if use masking, are strictly right-padded.
- Eager execution is enabled in the outermost context.

For example:

>>> inputs = tf.random.normal([32, 10, 8])
>>> lstm = tf.keras.layers.LSTM(4)
>>> output = lstm(inputs)
>>> print(output.shape)
(32, 4)
>>> lstm = tf.keras.layers.LSTM(4, return_sequences=True, return_state=True)
>>> whole_seq_output, final_memory_state, final_carry_state = lstm(inputs)
>>> print(whole_seq_output.shape)
(32, 10, 4)
>>> print(final_memory_state.shape)
(32, 4)
>>> print(final_carry_state.shape)
(32, 4)
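To make the return values concrete, the forward pass above can be sketched in plain NumPy. This is a minimal, hypothetical re-implementation (not the Keras or cuDNN code): `lstm_forward` is an assumed helper name, the random weight scale is arbitrary, and only the default `tanh`/`sigmoid` activations are modeled. It shows why `return_sequences`/`return_state` yield a `(batch, timesteps, units)` sequence plus two `(batch, units)` final states.

```python
import numpy as np

def lstm_forward(x, units, seed=0):
    """Hypothetical NumPy sketch of an LSTM forward pass.
    x has shape (batch, timesteps, features)."""
    rng = np.random.default_rng(seed)
    batch, timesteps, features = x.shape
    # Combined kernels for the four gates: input (i), forget (f), cell (g), output (o).
    W = rng.standard_normal((features, 4 * units)) * 0.1  # input kernel
    U = rng.standard_normal((units, 4 * units)) * 0.1     # recurrent kernel
    b = np.zeros(4 * units)
    # unit_forget_bias=True in Keras initializes the forget-gate bias to 1.
    b[units:2 * units] = 1.0

    sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))
    h = np.zeros((batch, units))  # hidden state ("memory state")
    c = np.zeros((batch, units))  # cell state ("carry state")
    seq = []
    for t in range(timesteps):
        z = x[:, t, :] @ W + h @ U + b
        i, f, g, o = np.split(z, 4, axis=1)
        c = sigmoid(f) * c + sigmoid(i) * np.tanh(g)  # update carry state
        h = sigmoid(o) * np.tanh(c)                   # update memory state
        seq.append(h)
    # Whole sequence (return_sequences) plus final states (return_state).
    return np.stack(seq, axis=1), h, c

inputs = np.random.default_rng(1).standard_normal((32, 10, 8))
whole_seq, final_h, final_c = lstm_forward(inputs, units=4)
print(whole_seq.shape)  # (32, 10, 4)
print(final_h.shape)    # (32, 4)
print(final_c.shape)    # (32, 4)
```

With `return_sequences=False` a Keras layer would return only the last row of the sequence, i.e. this sketch's `final_h`.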