LostTech.TensorFlow : API Documentation

Type tf.io

Namespace tensorflow

Public static methods

object decode_image(PythonClassContainer contents, Nullable<int> channels, ImplicitContainer<T> dtype, string name, bool expand_animations)

Function for `decode_bmp`, `decode_gif`, `decode_jpeg`, and `decode_png`.

Detects whether an image is a BMP, GIF, JPEG, or PNG, and performs the appropriate operation to convert the input bytes `string` into a `Tensor` of type `dtype`.

Note: `decode_gif` returns a 4-D array `[num_frames, height, width, 3]`, as opposed to `decode_bmp`, `decode_jpeg` and `decode_png`, which return 3-D arrays `[height, width, num_channels]`. Make sure to take this into account when constructing your graph if you are intermixing GIF files with BMP, JPEG, and/or PNG files. Alternately, set the `expand_animations` argument of this function to `False`, in which case the op will return 3-dimensional tensors and will truncate animated GIF files to the first frame.
Parameters
PythonClassContainer contents
0-D `string`. The encoded image bytes.
Nullable<int> channels
An optional `int`. Defaults to `0`. Number of color channels for the decoded image.
ImplicitContainer<T> dtype
The desired DType of the returned `Tensor`.
string name
A name for the operation (optional)
bool expand_animations
Controls the shape of the returned op's output. If `True`, the returned op will produce a 3-D tensor for PNG, JPEG, and BMP files, and a 4-D tensor for all GIFs, whether animated or not. If `False`, the returned op will produce a 3-D tensor for all file types and will truncate animated GIFs to the first frame.
Returns
object
`Tensor` with type `dtype` and a 3- or 4-dimensional shape, depending on the file type and the value of the `expand_animations` parameter.
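
For instance, a minimal sketch in Python (the same API form used by the Show Example blocks on this page; the file path is hypothetical) that decodes a file of unknown format while forcing a 3-D result even for animated GIFs:

contents = tf.io.read_file('images/sample_image')   # hypothetical path
image = tf.io.decode_image(contents, channels=3,
                           expand_animations=False)  # 3-D [height, width, 3]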

object decode_image(IGraphNodeBase contents, Nullable<int> channels, ImplicitContainer<T> dtype, string name, bool expand_animations)

Function for `decode_bmp`, `decode_gif`, `decode_jpeg`, and `decode_png`.

Detects whether an image is a BMP, GIF, JPEG, or PNG, and performs the appropriate operation to convert the input bytes `string` into a `Tensor` of type `dtype`.

Note: `decode_gif` returns a 4-D array `[num_frames, height, width, 3]`, as opposed to `decode_bmp`, `decode_jpeg` and `decode_png`, which return 3-D arrays `[height, width, num_channels]`. Make sure to take this into account when constructing your graph if you are intermixing GIF files with BMP, JPEG, and/or PNG files. Alternately, set the `expand_animations` argument of this function to `False`, in which case the op will return 3-dimensional tensors and will truncate animated GIF files to the first frame.
Parameters
IGraphNodeBase contents
0-D `string`. The encoded image bytes.
Nullable<int> channels
An optional `int`. Defaults to `0`. Number of color channels for the decoded image.
ImplicitContainer<T> dtype
The desired DType of the returned `Tensor`.
string name
A name for the operation (optional)
bool expand_animations
Controls the shape of the returned op's output. If `True`, the returned op will produce a 3-D tensor for PNG, JPEG, and BMP files, and a 4-D tensor for all GIFs, whether animated or not. If `False`, the returned op will produce a 3-D tensor for all file types and will truncate animated GIFs to the first frame.
Returns
object
`Tensor` with type `dtype` and a 3- or 4-dimensional shape, depending on the file type and the value of the `expand_animations` parameter.

object decode_image(IEnumerable<object> contents, Nullable<int> channels, ImplicitContainer<T> dtype, string name, bool expand_animations)

Function for `decode_bmp`, `decode_gif`, `decode_jpeg`, and `decode_png`.

Detects whether an image is a BMP, GIF, JPEG, or PNG, and performs the appropriate operation to convert the input bytes `string` into a `Tensor` of type `dtype`.

Note: `decode_gif` returns a 4-D array `[num_frames, height, width, 3]`, as opposed to `decode_bmp`, `decode_jpeg` and `decode_png`, which return 3-D arrays `[height, width, num_channels]`. Make sure to take this into account when constructing your graph if you are intermixing GIF files with BMP, JPEG, and/or PNG files. Alternately, set the `expand_animations` argument of this function to `False`, in which case the op will return 3-dimensional tensors and will truncate animated GIF files to the first frame.
Parameters
IEnumerable<object> contents
0-D `string`. The encoded image bytes.
Nullable<int> channels
An optional `int`. Defaults to `0`. Number of color channels for the decoded image.
ImplicitContainer<T> dtype
The desired DType of the returned `Tensor`.
string name
A name for the operation (optional)
bool expand_animations
Controls the shape of the returned op's output. If `True`, the returned op will produce a 3-D tensor for PNG, JPEG, and BMP files, and a 4-D tensor for all GIFs, whether animated or not. If `False`, the returned op will produce a 3-D tensor for all file types and will truncate animated GIFs to the first frame.
Returns
object
`Tensor` with type `dtype` and a 3- or 4-dimensional shape, depending on the file type and the value of the `expand_animations` parameter.

object decode_image(Byte[] contents, Nullable<int> channels, ImplicitContainer<T> dtype, string name, bool expand_animations)

Function for `decode_bmp`, `decode_gif`, `decode_jpeg`, and `decode_png`.

Detects whether an image is a BMP, GIF, JPEG, or PNG, and performs the appropriate operation to convert the input bytes `string` into a `Tensor` of type `dtype`.

Note: `decode_gif` returns a 4-D array `[num_frames, height, width, 3]`, as opposed to `decode_bmp`, `decode_jpeg` and `decode_png`, which return 3-D arrays `[height, width, num_channels]`. Make sure to take this into account when constructing your graph if you are intermixing GIF files with BMP, JPEG, and/or PNG files. Alternately, set the `expand_animations` argument of this function to `False`, in which case the op will return 3-dimensional tensors and will truncate animated GIF files to the first frame.
Parameters
Byte[] contents
0-D `string`. The encoded image bytes.
Nullable<int> channels
An optional `int`. Defaults to `0`. Number of color channels for the decoded image.
ImplicitContainer<T> dtype
The desired DType of the returned `Tensor`.
string name
A name for the operation (optional)
bool expand_animations
Controls the shape of the returned op's output. If `True`, the returned op will produce a 3-D tensor for PNG, JPEG, and BMP files, and a 4-D tensor for all GIFs, whether animated or not. If `False`, the returned op will produce a 3-D tensor for all file types and will truncate animated GIFs to the first frame.
Returns
object
`Tensor` with type `dtype` and a 3- or 4-dimensional shape, depending on the file type and the value of the `expand_animations` parameter.

object decode_image_dyn(object contents, object channels, ImplicitContainer<T> dtype, object name, ImplicitContainer<T> expand_animations)

Function for `decode_bmp`, `decode_gif`, `decode_jpeg`, and `decode_png`.

Detects whether an image is a BMP, GIF, JPEG, or PNG, and performs the appropriate operation to convert the input bytes `string` into a `Tensor` of type `dtype`.

Note: `decode_gif` returns a 4-D array `[num_frames, height, width, 3]`, as opposed to `decode_bmp`, `decode_jpeg` and `decode_png`, which return 3-D arrays `[height, width, num_channels]`. Make sure to take this into account when constructing your graph if you are intermixing GIF files with BMP, JPEG, and/or PNG files. Alternately, set the `expand_animations` argument of this function to `False`, in which case the op will return 3-dimensional tensors and will truncate animated GIF files to the first frame.
Parameters
object contents
0-D `string`. The encoded image bytes.
object channels
An optional `int`. Defaults to `0`. Number of color channels for the decoded image.
ImplicitContainer<T> dtype
The desired DType of the returned `Tensor`.
object name
A name for the operation (optional)
ImplicitContainer<T> expand_animations
Controls the shape of the returned op's output. If `True`, the returned op will produce a 3-D tensor for PNG, JPEG, and BMP files, and a 4-D tensor for all GIFs, whether animated or not. If `False`, the returned op will produce a 3-D tensor for all file types and will truncate animated GIFs to the first frame.
Returns
object
`Tensor` with type `dtype` and a 3- or 4-dimensional shape, depending on the file type and the value of the `expand_animations` parameter.

Tensor is_jpeg(IGraphNodeBase contents, string name)

Convenience function to check whether `contents` encodes a JPEG image.
Parameters
IGraphNodeBase contents
0-D `string`. The encoded image bytes.
string name
A name for the operation (optional)
Returns
Tensor
A scalar boolean tensor indicating whether `contents` may be a JPEG image. Note that `is_jpeg` is susceptible to false positives.
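
As a minimal sketch in Python (the file path is hypothetical; `tf.cond`, `decode_jpeg`, and `decode_png` are standard TensorFlow ops), branching on the detected format:

contents = tf.io.read_file('images/sample_image')   # hypothetical path
# is_jpeg only inspects the leading bytes, so false positives are possible.
image = tf.cond(tf.io.is_jpeg(contents),
                lambda: tf.io.decode_jpeg(contents, channels=3),
                lambda: tf.io.decode_png(contents, channels=3))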

Tensor is_jpeg(PythonClassContainer contents, string name)

Convenience function to check whether `contents` encodes a JPEG image.
Parameters
PythonClassContainer contents
0-D `string`. The encoded image bytes.
string name
A name for the operation (optional)
Returns
Tensor
A scalar boolean tensor indicating whether `contents` may be a JPEG image. Note that `is_jpeg` is susceptible to false positives.

Tensor is_jpeg(Byte[] contents, string name)

Convenience function to check whether `contents` encodes a JPEG image.
Parameters
Byte[] contents
0-D `string`. The encoded image bytes.
string name
A name for the operation (optional)
Returns
Tensor
A scalar boolean tensor indicating whether `contents` may be a JPEG image. Note that `is_jpeg` is susceptible to false positives.

Tensor is_jpeg(IEnumerable<object> contents, string name)

Convenience function to check whether `contents` encodes a JPEG image.
Parameters
IEnumerable<object> contents
0-D `string`. The encoded image bytes.
string name
A name for the operation (optional)
Returns
Tensor
A scalar boolean tensor indicating whether `contents` may be a JPEG image. Note that `is_jpeg` is susceptible to false positives.

object is_jpeg_dyn(object contents, object name)

Convenience function to check whether `contents` encodes a JPEG image.
Parameters
object contents
0-D `string`. The encoded image bytes.
object name
A name for the operation (optional)
Returns
object
A scalar boolean tensor indicating whether `contents` may be a JPEG image. Note that `is_jpeg` is susceptible to false positives.

Variable match_filenames_once(object pattern, string name)

Save the list of files matching pattern, so it is only computed once.

NOTE: The order of the files returned is deterministic.
Parameters
object pattern
A file pattern (glob), or 1D tensor of file patterns.
string name
A name for the operations (optional).
Returns
Variable
A variable that is initialized to the list of files matching the pattern(s).
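
A minimal sketch in Python (the glob pattern is hypothetical; `match_filenames_once` creates a local variable, so it must be initialized before use in graph mode):

filenames = tf.io.match_filenames_once('/data/*.tfrecord')   # hypothetical pattern
with tf.compat.v1.Session() as sess:
    sess.run(tf.compat.v1.local_variables_initializer())
    print(sess.run(filenames))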

object match_filenames_once_dyn(object pattern, object name)

Save the list of files matching pattern, so it is only computed once.

NOTE: The order of the files returned is deterministic.
Parameters
object pattern
A file pattern (glob), or 1D tensor of file patterns.
object name
A name for the operations (optional).
Returns
object
A variable that is initialized to the list of files matching the pattern(s).

ValueTuple<IDictionary<object, object>, object, object> parse_sequence_example(object serialized, object context_features, object sequence_features, object example_names, string name)

Parses a batch of `SequenceExample` protos.

Parses a vector of serialized [`SequenceExample`](https://www.tensorflow.org/code/tensorflow/core/example/example.proto) protos given in `serialized`.

This op parses serialized sequence examples into a tuple of dictionaries, each mapping keys to `Tensor` and `SparseTensor` objects. The first dictionary contains mappings for keys appearing in `context_features`, and the second dictionary contains mappings for keys appearing in `sequence_features`.

At least one of `context_features` and `sequence_features` must be provided and non-empty.

The `context_features` keys are associated with a `SequenceExample` as a whole, independent of time / frame. In contrast, the `sequence_features` keys provide a way to access variable-length data within the `FeatureList` section of the `SequenceExample` proto. While the shapes of `context_features` values are fixed with respect to frame, the frame dimension (the first dimension) of `sequence_features` values may vary between `SequenceExample` protos, and even between `feature_list` keys within the same `SequenceExample`.

`context_features` contains `VarLenFeature` and `FixedLenFeature` objects. Each `VarLenFeature` is mapped to a `SparseTensor`, and each `FixedLenFeature` is mapped to a `Tensor`, of the specified type, shape, and default value.

`sequence_features` contains `VarLenFeature` and `FixedLenSequenceFeature` objects. Each `VarLenFeature` is mapped to a `SparseTensor`, and each `FixedLenSequenceFeature` is mapped to a `Tensor`, each of the specified type. The shape will be `(B,T,) + df.dense_shape` for `FixedLenSequenceFeature` `df`, where `B` is the batch size, and `T` is the length of the associated `FeatureList` in the `SequenceExample`. For instance, `FixedLenSequenceFeature([])` yields a scalar 2-D `Tensor` of static shape `[None, None]` and dynamic shape `[B, T]`, while `FixedLenSequenceFeature([k])` (for `int k >= 1`) yields a 3-D matrix `Tensor` of static shape `[None, None, k]` and dynamic shape `[B, T, k]`.

Like the input, the resulting output tensors have a batch dimension. This means that the original per-example shapes of `VarLenFeature`s and `FixedLenSequenceFeature`s can be lost. To handle that situation, this op also provides dicts of shape tensors as part of the output. There is one dict for the context features, and one for the feature_list features. Context features of type `FixedLenFeature`s will not be present, since their shapes are already known by the caller. In situations where the input `FixedLenFeature`s are of different lengths across examples, the shorter examples will be padded with default datatype values: 0 for numeric types, and the empty string for string types.

Each `SparseTensor` corresponding to `sequence_features` represents a ragged vector. Its indices are `[time, index]`, where `time` is the `FeatureList` entry and `index` is the value's index in the list of values associated with that time.

`FixedLenFeature` entries with a `default_value` and `FixedLenSequenceFeature` entries with `allow_missing=True` are optional; otherwise, we will fail if that `Feature` or `FeatureList` is missing from any example in `serialized`.

`example_names` may contain descriptive names for the corresponding serialized protos. These may be useful for debugging purposes, but they have no effect on the output. If not `None`, `example_names` must be a 1-D vector of strings.
Parameters
object serialized
A vector (1-D Tensor) of type string containing binary serialized `SequenceExample` protos.
object context_features
A `dict` mapping feature keys to `FixedLenFeature` or `VarLenFeature` values. These features are associated with a `SequenceExample` as a whole.
object sequence_features
A `dict` mapping feature keys to `FixedLenSequenceFeature` or `VarLenFeature` values. These features are associated with data within the `FeatureList` section of the `SequenceExample` proto.
object example_names
A vector (1-D Tensor) of strings (optional), the names of the serialized protos.
string name
A name for this operation (optional).
Returns
ValueTuple<IDictionary<object, object>, object, object>
A tuple of three `dict`s, each mapping keys to `Tensor`s and `SparseTensor`s. The first dict contains the context key/values, the second dict contains the feature_list key/values, and the final dict contains the lengths of any dense feature_list features.
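
A minimal sketch in Python (the feature keys `length` and `tokens` are made up for illustration; `serialized` here is a placeholder fed with serialized protos):

serialized = tf.compat.v1.placeholder(tf.string, shape=[None])   # batch of protos
context, sequences, lengths = tf.io.parse_sequence_example(
    serialized,
    context_features={'length': tf.io.FixedLenFeature([], dtype=tf.int64)},
    sequence_features={'tokens': tf.io.FixedLenSequenceFeature([], dtype=tf.int64)})
# context['length'] has shape [B]; sequences['tokens'] has shape [B, T];
# lengths['tokens'] holds the per-example FeatureList lengths.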

object parse_sequence_example_dyn(object serialized, object context_features, object sequence_features, object example_names, object name)

Parses a batch of `SequenceExample` protos.

Parses a vector of serialized [`SequenceExample`](https://www.tensorflow.org/code/tensorflow/core/example/example.proto) protos given in `serialized`.

This op parses serialized sequence examples into a tuple of dictionaries, each mapping keys to `Tensor` and `SparseTensor` objects. The first dictionary contains mappings for keys appearing in `context_features`, and the second dictionary contains mappings for keys appearing in `sequence_features`.

At least one of `context_features` and `sequence_features` must be provided and non-empty.

The `context_features` keys are associated with a `SequenceExample` as a whole, independent of time / frame. In contrast, the `sequence_features` keys provide a way to access variable-length data within the `FeatureList` section of the `SequenceExample` proto. While the shapes of `context_features` values are fixed with respect to frame, the frame dimension (the first dimension) of `sequence_features` values may vary between `SequenceExample` protos, and even between `feature_list` keys within the same `SequenceExample`.

`context_features` contains `VarLenFeature` and `FixedLenFeature` objects. Each `VarLenFeature` is mapped to a `SparseTensor`, and each `FixedLenFeature` is mapped to a `Tensor`, of the specified type, shape, and default value.

`sequence_features` contains `VarLenFeature` and `FixedLenSequenceFeature` objects. Each `VarLenFeature` is mapped to a `SparseTensor`, and each `FixedLenSequenceFeature` is mapped to a `Tensor`, each of the specified type. The shape will be `(B,T,) + df.dense_shape` for `FixedLenSequenceFeature` `df`, where `B` is the batch size, and `T` is the length of the associated `FeatureList` in the `SequenceExample`. For instance, `FixedLenSequenceFeature([])` yields a scalar 2-D `Tensor` of static shape `[None, None]` and dynamic shape `[B, T]`, while `FixedLenSequenceFeature([k])` (for `int k >= 1`) yields a 3-D matrix `Tensor` of static shape `[None, None, k]` and dynamic shape `[B, T, k]`.

Like the input, the resulting output tensors have a batch dimension. This means that the original per-example shapes of `VarLenFeature`s and `FixedLenSequenceFeature`s can be lost. To handle that situation, this op also provides dicts of shape tensors as part of the output. There is one dict for the context features, and one for the feature_list features. Context features of type `FixedLenFeature`s will not be present, since their shapes are already known by the caller. In situations where the input `FixedLenFeature`s are of different lengths across examples, the shorter examples will be padded with default datatype values: 0 for numeric types, and the empty string for string types.

Each `SparseTensor` corresponding to `sequence_features` represents a ragged vector. Its indices are `[time, index]`, where `time` is the `FeatureList` entry and `index` is the value's index in the list of values associated with that time.

`FixedLenFeature` entries with a `default_value` and `FixedLenSequenceFeature` entries with `allow_missing=True` are optional; otherwise, we will fail if that `Feature` or `FeatureList` is missing from any example in `serialized`.

`example_names` may contain descriptive names for the corresponding serialized protos. These may be useful for debugging purposes, but they have no effect on the output. If not `None`, `example_names` must be a 1-D vector of strings.
Parameters
object serialized
A vector (1-D Tensor) of type string containing binary serialized `SequenceExample` protos.
object context_features
A `dict` mapping feature keys to `FixedLenFeature` or `VarLenFeature` values. These features are associated with a `SequenceExample` as a whole.
object sequence_features
A `dict` mapping feature keys to `FixedLenSequenceFeature` or `VarLenFeature` values. These features are associated with data within the `FeatureList` section of the `SequenceExample` proto.
object example_names
A vector (1-D Tensor) of strings (optional), the names of the serialized protos.
object name
A name for this operation (optional).
Returns
object
A tuple of three `dict`s, each mapping keys to `Tensor`s and `SparseTensor`s. The first dict contains the context key/values, the second dict contains the feature_list key/values, and the final dict contains the lengths of any dense feature_list features.

IEnumerator<object> tf_record_iterator(string path, TFRecordOptions options)

An iterator that reads records from a TFRecords file. (deprecated)

Warning: THIS FUNCTION IS DEPRECATED. It will be removed in a future version. Instructions for updating: Use eager execution and: `tf.data.TFRecordDataset(path)`
Parameters
string path
The path to the TFRecords file.
TFRecordOptions options
(optional) A TFRecordOptions object.
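
A minimal sketch in Python (the file path is hypothetical), with the recommended `tf.data` replacement alongside:

for record in tf.io.tf_record_iterator('/data/train.tfrecord'):   # deprecated
    example = tf.train.Example.FromString(record)                 # raw bytes -> proto
# Preferred replacement:
dataset = tf.data.TFRecordDataset('/data/train.tfrecord')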

IEnumerator<object> tf_record_iterator(IEnumerable<object> path, TFRecordOptions options)

An iterator that reads records from a TFRecords file. (deprecated)

Warning: THIS FUNCTION IS DEPRECATED. It will be removed in a future version. Instructions for updating: Use eager execution and: `tf.data.TFRecordDataset(path)`
Parameters
IEnumerable<object> path
The path to the TFRecords file.
TFRecordOptions options
(optional) A TFRecordOptions object.

object tf_record_iterator_dyn(object path, object options)

An iterator that reads records from a TFRecords file. (deprecated)

Warning: THIS FUNCTION IS DEPRECATED. It will be removed in a future version. Instructions for updating: Use eager execution and: `tf.data.TFRecordDataset(path)`
Parameters
object path
The path to the TFRecords file.
object options
(optional) A TFRecordOptions object.

object write_graph(object graph_or_graph_def, string logdir, string name, bool as_text)

Writes a graph proto to a file.

The graph is written as a text proto unless `as_text` is `False`.
Parameters
object graph_or_graph_def
A `Graph` or a `GraphDef` protocol buffer.
string logdir
Directory where to write the graph. This can refer to remote filesystems, such as Google Cloud Storage (GCS).
string name
Filename for the graph.
bool as_text
If `True`, writes the graph as an ASCII proto.
Returns
object
The path of the output proto file.
Show Example
v = tf.Variable(0, name='my_variable')
sess = tf.compat.v1.Session()
tf.io.write_graph(sess.graph_def, '/tmp/my-model', 'train.pbtxt')

object write_graph(object graph_or_graph_def, Byte[] logdir, string name, bool as_text)

Writes a graph proto to a file.

The graph is written as a text proto unless `as_text` is `False`.
Parameters
object graph_or_graph_def
A `Graph` or a `GraphDef` protocol buffer.
Byte[] logdir
Directory where to write the graph. This can refer to remote filesystems, such as Google Cloud Storage (GCS).
string name
Filename for the graph.
bool as_text
If `True`, writes the graph as an ASCII proto.
Returns
object
The path of the output proto file.
Show Example
v = tf.Variable(0, name='my_variable')
sess = tf.compat.v1.Session()
tf.io.write_graph(sess.graph_def, '/tmp/my-model', 'train.pbtxt')

object write_graph(Graph graph_or_graph_def, string logdir, string name, bool as_text)

Writes a graph proto to a file.

The graph is written as a text proto unless `as_text` is `False`.
Parameters
Graph graph_or_graph_def
A `Graph` or a `GraphDef` protocol buffer.
string logdir
Directory where to write the graph. This can refer to remote filesystems, such as Google Cloud Storage (GCS).
string name
Filename for the graph.
bool as_text
If `True`, writes the graph as an ASCII proto.
Returns
object
The path of the output proto file.
Show Example
v = tf.Variable(0, name='my_variable')
sess = tf.compat.v1.Session()
tf.io.write_graph(sess.graph_def, '/tmp/my-model', 'train.pbtxt')

object write_graph(Graph graph_or_graph_def, Byte[] logdir, string name, bool as_text)

Writes a graph proto to a file.

The graph is written as a text proto unless `as_text` is `False`.
Parameters
Graph graph_or_graph_def
A `Graph` or a `GraphDef` protocol buffer.
Byte[] logdir
Directory where to write the graph. This can refer to remote filesystems, such as Google Cloud Storage (GCS).
string name
Filename for the graph.
bool as_text
If `True`, writes the graph as an ASCII proto.
Returns
object
The path of the output proto file.
Show Example
v = tf.Variable(0, name='my_variable')
sess = tf.compat.v1.Session()
tf.io.write_graph(sess.graph_def, '/tmp/my-model', 'train.pbtxt')

object write_graph_dyn(object graph_or_graph_def, object logdir, object name, ImplicitContainer<T> as_text)

Writes a graph proto to a file.

The graph is written as a text proto unless `as_text` is `False`.
Parameters
object graph_or_graph_def
A `Graph` or a `GraphDef` protocol buffer.
object logdir
Directory where to write the graph. This can refer to remote filesystems, such as Google Cloud Storage (GCS).
object name
Filename for the graph.
ImplicitContainer<T> as_text
If `True`, writes the graph as an ASCII proto.
Returns
object
The path of the output proto file.
Show Example
v = tf.Variable(0, name='my_variable')
sess = tf.compat.v1.Session()
tf.io.write_graph(sess.graph_def, '/tmp/my-model', 'train.pbtxt')

Public properties

PythonFunctionContainer decode_image_fn get;

PythonFunctionContainer is_jpeg_fn get;

PythonFunctionContainer match_filenames_once_fn get;

PythonFunctionContainer parse_sequence_example_fn get;

PythonFunctionContainer tf_record_iterator_fn get;

PythonFunctionContainer write_graph_fn get;