Type tf.lite.experimental.nn
Namespace tensorflow
Methods
- dynamic_rnn (16 overloads)
- dynamic_rnn_dyn
Public static methods
ValueTuple<object, object> dynamic_rnn(object cell, IEnumerable<object> inputs, IEnumerable<int> sequence_length, object initial_state, string dtype, Nullable<int> parallel_iterations, bool swap_memory, bool time_major, VariableScope scope)
ValueTuple<object, object> dynamic_rnn(object cell, object inputs, IndexedSlices sequence_length, object initial_state, string dtype, Nullable<int> parallel_iterations, bool swap_memory, bool time_major, VariableScope scope)
ValueTuple<object, object> dynamic_rnn(object cell, object inputs, ValueTuple<PythonClassContainer, PythonClassContainer> sequence_length, object initial_state, string dtype, Nullable<int> parallel_iterations, bool swap_memory, bool time_major, VariableScope scope)
ValueTuple<object, object> dynamic_rnn(object cell, object inputs, IEnumerable<int> sequence_length, object initial_state, string dtype, Nullable<int> parallel_iterations, bool swap_memory, bool time_major, VariableScope scope)
ValueTuple<object, object> dynamic_rnn(object cell, PythonClassContainer inputs, IGraphNodeBase sequence_length, object initial_state, string dtype, Nullable<int> parallel_iterations, bool swap_memory, bool time_major, VariableScope scope)
ValueTuple<object, object> dynamic_rnn(object cell, PythonClassContainer inputs, IndexedSlices sequence_length, object initial_state, string dtype, Nullable<int> parallel_iterations, bool swap_memory, bool time_major, VariableScope scope)
ValueTuple<object, object> dynamic_rnn(object cell, PythonClassContainer inputs, ValueTuple<PythonClassContainer, PythonClassContainer> sequence_length, object initial_state, string dtype, Nullable<int> parallel_iterations, bool swap_memory, bool time_major, VariableScope scope)
ValueTuple<object, object> dynamic_rnn(object cell, object inputs, IGraphNodeBase sequence_length, object initial_state, string dtype, Nullable<int> parallel_iterations, bool swap_memory, bool time_major, VariableScope scope)
ValueTuple<object, object> dynamic_rnn(object cell, PythonClassContainer inputs, IEnumerable<int> sequence_length, object initial_state, string dtype, Nullable<int> parallel_iterations, bool swap_memory, bool time_major, VariableScope scope)
ValueTuple<object, object> dynamic_rnn(object cell, IGraphNodeBase inputs, IndexedSlices sequence_length, object initial_state, string dtype, Nullable<int> parallel_iterations, bool swap_memory, bool time_major, VariableScope scope)
ValueTuple<object, object> dynamic_rnn(object cell, IGraphNodeBase inputs, ValueTuple<PythonClassContainer, PythonClassContainer> sequence_length, object initial_state, string dtype, Nullable<int> parallel_iterations, bool swap_memory, bool time_major, VariableScope scope)
ValueTuple<object, object> dynamic_rnn(object cell, IGraphNodeBase inputs, IEnumerable<int> sequence_length, object initial_state, string dtype, Nullable<int> parallel_iterations, bool swap_memory, bool time_major, VariableScope scope)
ValueTuple<object, object> dynamic_rnn(object cell, IEnumerable<object> inputs, IGraphNodeBase sequence_length, object initial_state, string dtype, Nullable<int> parallel_iterations, bool swap_memory, bool time_major, VariableScope scope)
ValueTuple<object, object> dynamic_rnn(object cell, IEnumerable<object> inputs, IndexedSlices sequence_length, object initial_state, string dtype, Nullable<int> parallel_iterations, bool swap_memory, bool time_major, VariableScope scope)
ValueTuple<object, object> dynamic_rnn(object cell, IEnumerable<object> inputs, ValueTuple<PythonClassContainer, PythonClassContainer> sequence_length, object initial_state, string dtype, Nullable<int> parallel_iterations, bool swap_memory, bool time_major, VariableScope scope)
ValueTuple<object, object> dynamic_rnn(object cell, IGraphNodeBase inputs, IGraphNodeBase sequence_length, object initial_state, string dtype, Nullable<int> parallel_iterations, bool swap_memory, bool time_major, VariableScope scope)
object dynamic_rnn_dyn(object cell, object inputs, object sequence_length, object initial_state, object dtype, object parallel_iterations, ImplicitContainer<T> swap_memory, ImplicitContainer<T> time_major, object scope)
Creates a recurrent neural network specified by RNNCell `cell`. (deprecated) Warning: THIS FUNCTION IS DEPRECATED. It will be removed in a future version.
Instructions for updating:
Please use `keras.layers.RNN(cell)`, which is equivalent to this API. Performs fully dynamic unrolling of `inputs`; an example appears under Show Example below.
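A minimal Python sketch of the recommended replacement, using the TensorFlow Python API that this binding wraps; the sizes (`batch_size`, `max_time`, `input_depth`, `hidden_size`) are illustrative placeholders, not part of this reference:

import tensorflow as tf

batch_size, max_time, input_depth, hidden_size = 32, 10, 8, 64  # illustrative sizes only

# keras.layers.RNN wraps a cell and unrolls it dynamically over the time axis,
# which is the migration path named in the deprecation notice above.
cell = tf.keras.layers.SimpleRNNCell(hidden_size)
layer = tf.keras.layers.RNN(cell, return_sequences=True, return_state=True)

inputs = tf.zeros([batch_size, max_time, input_depth])  # batch-major input
outputs, final_state = layer(inputs)
# outputs: [batch_size, max_time, hidden_size]; final_state: [batch_size, hidden_size]

Unlike `dynamic_rnn`, Keras layers handle variable-length sequences through masking (for example a preceding `tf.keras.layers.Masking` layer or an explicit `mask` argument) rather than a `sequence_length` vector.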
Parameters
- cell (object): An instance of RNNCell.
- inputs (object): The RNN inputs. If `time_major == False` (default), this must be a `Tensor` of shape `[batch_size, max_time,...]`, or a nested tuple of such elements. If `time_major == True`, this must be a `Tensor` of shape `[max_time, batch_size,...]`, or a nested tuple of such elements. This may also be a (possibly nested) tuple of Tensors satisfying this property. The first two dimensions must match across all the inputs, but otherwise the ranks and other shape components may differ. In this case, the input to `cell` at each time step will replicate the structure of these tuples, except for the time dimension (from which the time is taken), and will be a `Tensor` or (possibly nested) tuple of Tensors, each with dimensions `[batch_size,...]`.
- sequence_length (object): (optional) An int32/int64 vector sized `[batch_size]`. Used to copy through state and zero out outputs when past a batch element's sequence length. This parameter enables users to extract the last valid state and properly padded outputs, so it is provided for correctness. (See the sketch after this parameter list.)
- initial_state (object): (optional) An initial state for the RNN. If `cell.state_size` is an integer, this must be a `Tensor` of appropriate type and shape `[batch_size, cell.state_size]`. If `cell.state_size` is a tuple, this should be a tuple of tensors having shapes `[batch_size, s] for s in cell.state_size`.
- dtype (object): (optional) The data type for the initial state and expected output. Required if `initial_state` is not provided or the RNN state has a heterogeneous dtype.
- parallel_iterations (object): (Default: 32.) The number of iterations to run in parallel. Operations that have no temporal dependency and can be run in parallel will be. This parameter trades time for space: values >> 1 use more memory but take less time, while smaller values use less memory but take longer to compute.
- swap_memory (ImplicitContainer<T>): Transparently swap the tensors produced in forward inference but needed for back prop from GPU to CPU. This allows training RNNs that would typically not fit on a single GPU, with very minimal (or no) performance penalty.
- time_major (ImplicitContainer<T>): The shape format of the `inputs` and `outputs` Tensors. If true, these `Tensors` must be shaped `[max_time, batch_size, depth]`; if false, `[batch_size, max_time, depth]`. Using `time_major = True` is a bit more efficient because it avoids transposes at the beginning and end of the RNN calculation. However, most TensorFlow data is batch-major, so by default this function accepts input and emits output in batch-major form. (A time-major sketch follows the example at the end of this page.)
- scope (object): VariableScope for the created subgraph; defaults to "rnn".
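A sketch of how `sequence_length` and `initial_state` are typically passed together, again using the underlying TensorFlow Python API; the sizes and the length vector are illustrative assumptions, and the call assumes a `tf.compat.v1`-style environment:

import tensorflow as tf

batch_size, max_time, input_depth, hidden_size = 4, 6, 3, 5  # illustrative sizes only

cell = tf.compat.v1.nn.rnn_cell.BasicRNNCell(hidden_size)
inputs = tf.zeros([batch_size, max_time, input_depth])  # batch-major: [batch, time, depth]
seq_len = tf.constant([6, 4, 2, 5])                     # one valid length per batch element
initial_state = cell.zero_state(batch_size, dtype=tf.float32)

# Past each element's length, outputs are zeroed and the state is copied through,
# so `state` holds the last valid state for every batch element.
outputs, state = tf.compat.v1.nn.dynamic_rnn(
    cell, inputs,
    sequence_length=seq_len,
    initial_state=initial_state)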
Returns
- object: A pair (outputs, state), where `outputs` is the RNN output `Tensor` (shaped `[batch_size, max_time, cell.output_size]`, or `[max_time, batch_size, cell.output_size]` when `time_major == True`) and `state` is the final state of the RNN.
Show Example
# create a BasicRNNCell
rnn_cell = tf.compat.v1.nn.rnn_cell.BasicRNNCell(hidden_size)

# 'outputs' is a tensor of shape [batch_size, max_time, cell_state_size]
# defining initial state
initial_state = rnn_cell.zero_state(batch_size, dtype=tf.float32)

# 'state' is a tensor of shape [batch_size, cell_state_size]
outputs, state = tf.compat.v1.nn.dynamic_rnn(rnn_cell, input_data,
                                             initial_state=initial_state,
                                             dtype=tf.float32)
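For comparison with the batch-major example above, a time-major variant might look like the following sketch (same caveats: underlying Python API, illustrative sizes):

import tensorflow as tf

batch_size, max_time, input_depth, hidden_size = 4, 6, 3, 5  # illustrative sizes only

cell = tf.compat.v1.nn.rnn_cell.BasicRNNCell(hidden_size)
inputs_tm = tf.zeros([max_time, batch_size, input_depth])  # time-major: [time, batch, depth]

# time_major=True avoids the transposes that dynamic_rnn otherwise inserts at the
# start and end of the calculation; outputs come back time-major as well.
outputs_tm, state = tf.compat.v1.nn.dynamic_rnn(
    cell, inputs_tm, dtype=tf.float32, time_major=True)
# outputs_tm: [max_time, batch_size, hidden_size]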