LostTech.TensorFlow : API Documentation

Type MaskedLSTMCell

Namespace tensorflow.contrib.model_pruning

Parent LSTMCell

Interfaces IMaskedLSTMCell

Public static methods

MaskedLSTMCell NewDyn(object num_units, ImplicitContainer<T> use_peepholes, object cell_clip, object initializer, object num_proj, object proj_clip, object num_unit_shards, object num_proj_shards, ImplicitContainer<T> forget_bias, ImplicitContainer<T> state_is_tuple, object activation, object reuse)

Initialize the parameters for an LSTM cell with masks for pruning.
Parameters
object num_units
int, the number of units in the LSTM cell.
ImplicitContainer<T> use_peepholes
bool, set True to enable diagonal/peephole connections.
object cell_clip
(optional) A float value. If provided, the cell state is clipped by this value prior to the cell output activation.
object initializer
(optional) The initializer to use for the weight and projection matrices.
object num_proj
(optional) int, the output dimensionality for the projection matrices. If None, no projection is performed.
object proj_clip
(optional) A float value. If `num_proj > 0` and `proj_clip` is provided, then the projected values are clipped elementwise to within `[-proj_clip, proj_clip]`.
object num_unit_shards
Deprecated; will be removed by Jan. 2017. Use a variable_scope partitioner instead.
object num_proj_shards
Deprecated; will be removed by Jan. 2017. Use a variable_scope partitioner instead.
ImplicitContainer<T> forget_bias
Biases of the forget gate are initialized to 1 by default in order to reduce the scale of forgetting at the beginning of training. This must be set manually to `0.0` when restoring from CudnnLSTM-trained checkpoints.
ImplicitContainer<T> state_is_tuple
If True, accepted and returned states are 2-tuples of the `c_state` and `m_state`. If False, they are concatenated along the column axis. This latter behavior will soon be deprecated.
object activation
Activation function of the inner states. Default: `tanh`.
object reuse
(optional) Python boolean describing whether to reuse variables in an existing scope. If not `True`, and the existing scope already has the given variables, an error is raised.

When restoring from CudnnLSTM-trained checkpoints, `CudnnCompatibleLSTMCell` must be used instead.
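
Example

The following is a minimal construction sketch rather than an official sample: it simply mirrors the NewDyn signature above, assumes that the ImplicitContainer<T> parameters accept their wrapped values through implicit conversions, and passes null for the parameters documented as optional.

using tensorflow.contrib.model_pruning;

// Build a masked LSTM cell with 128 units and the documented defaults.
MaskedLSTMCell cell = MaskedLSTMCell.NewDyn(
    num_units: 128,          // number of units in the LSTM cell
    use_peepholes: false,    // no diagonal/peephole connections
    cell_clip: null,         // do not clip the cell state
    initializer: null,       // default weight/projection initializer
    num_proj: null,          // no output projection
    proj_clip: null,
    num_unit_shards: null,   // deprecated; prefer a variable_scope partitioner
    num_proj_shards: null,   // deprecated; prefer a variable_scope partitioner
    forget_bias: 1.0,        // keep the documented default of 1.0
    state_is_tuple: true,    // return (c_state, m_state) 2-tuples
    activation: null,        // defaults to tanh
    reuse: null);            // do not reuse variables from an existing scope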

Public properties

PythonFunctionContainer activity_regularizer get; set;

object activity_regularizer_dyn get; set;

bool built get; set;

object dtype get;

object dtype_dyn get;

bool dynamic get;

object dynamic_dyn get;

object graph get;

object graph_dyn get;

IList<Node> inbound_nodes get;

object inbound_nodes_dyn get;

IList<object> input get;

object input_dyn get;

object input_mask get;

object input_mask_dyn get;

IList<object> input_shape get;

object input_shape_dyn get;

InputSpec input_spec get; set;

object input_spec_dyn get; set;

IList<object> losses get;

object losses_dyn get;

IList<object> metrics get;

object metrics_dyn get;

object name get;

object name_dyn get;

object name_scope get;

object name_scope_dyn get;

IList<object> non_trainable_variables get;

object non_trainable_variables_dyn get;

IList<object> non_trainable_weights get;

object non_trainable_weights_dyn get;

IList<object> outbound_nodes get;

object outbound_nodes_dyn get;

IList<object> output get;

object output_dyn get;

object output_mask get;

object output_mask_dyn get;

object output_shape get;

object output_shape_dyn get;

object output_size get;

object output_size_dyn get;

object PythonObject get;

object rnncell_scope get; set;

string scope_name get;

object scope_name_dyn get;

object state_size get;

object state_size_dyn get;

bool stateful get; set;

ValueTuple<object> submodules get;

object submodules_dyn get;

bool supports_masking get; set;

bool trainable get; set;

object trainable_dyn get; set;

object trainable_variables get;

object trainable_variables_dyn get;

IList<object> trainable_weights get;

object trainable_weights_dyn get;

IList<object> updates get;

object updates_dyn get;

object variables get;

object variables_dyn get;

IList<object> weights get;

object weights_dyn get;
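
As a hedged follow-up to the property list above, the snippet below continues the construction sketch and reads a few of the documented getters; the object-typed properties are assumed to hold printable values, and the *_dyn properties (presumably dynamically-typed counterparts) are not exercised here.

using System;

Console.WriteLine(cell.name);        // name of the cell's scope/layer
Console.WriteLine(cell.scope_name);  // string scope name
Console.WriteLine(cell.state_size);  // depends on state_is_tuple and num_proj
Console.WriteLine(cell.output_size); // num_proj when projecting, otherwise num_units
Console.WriteLine(cell.built);       // false until the layer has been built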