LostTech.TensorFlow : API Documentation

Type rnn

Namespace tensorflow.lite.experimental.examples.lstm.rnn

Public static methods

ValueTuple<object, object> bidirectional_dynamic_rnn(object cell_fw, object cell_bw, IGraphNodeBase inputs, IEnumerable<int> sequence_length, object initial_state_fw, object initial_state_bw, string dtype, object parallel_iterations, bool swap_memory, bool time_major, object scope)

object bidirectional_dynamic_rnn_dyn(object cell_fw, object cell_bw, object inputs, object sequence_length, object initial_state_fw, object initial_state_bw, object dtype, object parallel_iterations, ImplicitContainer<T> swap_memory, ImplicitContainer<T> time_major, object scope)

Creates a dynamic version of a bidirectional recurrent neural network. (deprecated)

Warning: THIS FUNCTION IS DEPRECATED. It will be removed in a future version. Instructions for updating: Please use `keras.layers.Bidirectional(keras.layers.RNN(cell))`, which is equivalent to this API.

Takes input and builds independent forward and backward RNNs. The input_size of forward and backward cell must match. The initial state for both directions is zero by default (but can be set optionally) and no intermediate states are ever returned -- the network is fully unrolled for the given (passed in) length(s) of the sequence(s) or completely unrolled if length(s) is not given.
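A minimal call sketch in C# against the typed overload above. The placeholder setup and the `tf.nn.rnn_cell.LSTMCell` constructor are assumptions for illustration (mirroring the Python API this library binds); any `RNNCell`-compatible objects can be passed as `cell_fw`/`cell_bw`, provided their input sizes match.

```csharp
using System.Linq;
using tensorflow;                                      // LostTech.TensorFlow
using tensorflow.lite.experimental.examples.lstm.rnn;  // the static `rnn` type documented here

// Batch-major inputs: [batch_size=32, max_time=100, depth=64]
// (placeholder/shape construction here is an assumption for illustration).
var inputs = tf.placeholder(tf.float32, new TensorShape(32, 100, 64));

// Forward and backward cells. LSTMCell via `tf.nn.rnn_cell` is an assumption
// mirroring the Python API; the input sizes of both cells must match.
object cellFw = tf.nn.rnn_cell.LSTMCell(num_units: 128);
object cellBw = tf.nn.rnn_cell.LSTMCell(num_units: 128);

// All 32 sequences in this batch are full length (100 steps).
var sequenceLength = Enumerable.Repeat(100, 32).ToArray();

var (outputs, outputStates) = rnn.bidirectional_dynamic_rnn(
    cellFw, cellBw, inputs,
    sequence_length: sequenceLength,
    initial_state_fw: null,        // zero initial state by default
    initial_state_bw: null,
    dtype: "float32",              // required because no initial states are given
    parallel_iterations: 32,
    swap_memory: false,
    time_major: false,             // inputs above are batch-major
    scope: null);                  // variable scope defaults to "bidirectional_rnn"
```

Consistent with the description above, the network is fully unrolled to the given sequence lengths, and only the final states (never intermediate ones) come back in `outputStates`.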
Parameters
object cell_fw
An instance of RNNCell, to be used for forward direction.
object cell_bw
An instance of RNNCell, to be used for backward direction.
object inputs
The RNN inputs. If `time_major == False` (default), this must be a tensor of shape `[batch_size, max_time, ...]`, or a nested tuple of such elements. If `time_major == True`, this must be a tensor of shape `[max_time, batch_size, ...]`, or a nested tuple of such elements.
object sequence_length
(optional) An int32/int64 vector, size `[batch_size]`, containing the actual lengths for each of the sequences in the batch. If not provided, all batch entries are assumed to be full sequences; and time reversal is applied from time `0` to `max_time` for each sequence.
object initial_state_fw
(optional) An initial state for the forward RNN. This must be a tensor of appropriate type and shape `[batch_size, cell_fw.state_size]`. If `cell_fw.state_size` is a tuple, this should be a tuple of tensors having shapes `[batch_size, s] for s in cell_fw.state_size`.
object initial_state_bw
(optional) Same as for `initial_state_fw`, but using the corresponding properties of `cell_bw`.
object dtype
(optional) The data type for the initial states and expected output. Required if initial_states are not provided or RNN states have a heterogeneous dtype.
object parallel_iterations
(Default: 32). The number of iterations to run in parallel. Operations that have no temporal dependency and can be run in parallel will be. This parameter trades off time for space: values >> 1 use more memory but take less time, while smaller values use less memory but make computation take longer.
ImplicitContainer<T> swap_memory
Transparently swap the tensors produced in forward inference but needed for back prop from GPU to CPU. This allows training RNNs which would typically not fit on a single GPU, with very minimal (or no) performance penalty.
ImplicitContainer<T> time_major
The shape format of the `inputs` and `outputs` Tensors. If true, these `Tensors` must be shaped `[max_time, batch_size, depth]`. If false, these `Tensors` must be shaped `[batch_size, max_time, depth]`. Using `time_major = True` is a bit more efficient because it avoids transposes at the beginning and end of the RNN calculation. However, most TensorFlow data is batch-major, so by default this function accepts input and emits output in batch-major form (see the layout-conversion sketch after this parameter list).
object scope
VariableScope for the created subgraph; defaults to "bidirectional_rnn".
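Where the `time_major` fast path is wanted but data arrives batch-major, one explicit transpose converts the layout before (and, symmetrically, after) the call, as referenced in the `time_major` entry above. A sketch, assuming `tf.transpose` is surfaced as in the Python API:

```csharp
// [batch_size, max_time, depth] -> [max_time, batch_size, depth];
// the call can then run with time_major: true and skip internal transposes.
var timeMajorInputs = tf.transpose(inputs, new[] { 1, 0, 2 });
```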
Returns
object
A tuple `(outputs, output_states)` where:

outputs
A tuple `(output_fw, output_bw)` containing the forward and the backward RNN output `Tensor`. If `time_major == False` (default), `output_fw` will be shaped `[batch_size, max_time, cell_fw.output_size]` and `output_bw` will be shaped `[batch_size, max_time, cell_bw.output_size]`. If `time_major == True`, `output_fw` will be shaped `[max_time, batch_size, cell_fw.output_size]` and `output_bw` will be shaped `[max_time, batch_size, cell_bw.output_size]`. Unlike `bidirectional_rnn`, this returns a tuple instead of a single concatenated `Tensor`; if the concatenated form is preferred, the two outputs can be combined with `tf.concat(outputs, 2)`.

output_states
A tuple `(output_state_fw, output_state_bw)` containing the forward and the backward final states of the bidirectional RNN.
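Because `outputs` is a pair rather than one concatenated `Tensor`, callers wanting a single feature vector per timestep concatenate on the last axis, per the `tf.concat(outputs, 2)` note above. A sketch; how the Python tuple is surfaced to .NET (accessed via `dynamic` here) is an assumption:

```csharp
dynamic outs = outputs;  // (output_fw, output_bw) from the call above
// One [batch_size, max_time, fw_size + bw_size] tensor from the two directions.
var merged = tf.concat(new object[] { outs[0], outs[1] }, 2);
```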

Public properties

PythonFunctionContainer bidirectional_dynamic_rnn_fn get;