LostTech.TensorFlow : API Documentation

Type tf.keras.experimental

Namespace tensorflow

Public static methods

void export_saved_model(Model model, string saved_model_path, IDictionary<string, object> custom_objects, bool as_text, TensorSpec input_signature, bool serving_only)

Exports a tf.keras.Model as a TensorFlow SavedModel. (deprecated)

Warning: THIS FUNCTION IS DEPRECATED. It will be removed in a future version. Instructions for updating: Please use `model.save(..., save_format="tf")` or `tf.keras.models.save_model(..., save_format="tf")`.

Note that at this time, subclassed models can only be saved using `serving_only=True`.

The exported `SavedModel` is a standalone serialization of TensorFlow objects, and is supported by TF language APIs and the TensorFlow Serving system. To load the model, use the function tf.keras.experimental.load_from_saved_model.

The `SavedModel` contains:

1. A checkpoint containing the model weights.
2. A `SavedModel` proto containing the TensorFlow backend graph. Separate graphs are saved for prediction (serving), training, and evaluation. If the model has not been compiled, then only the graph computing predictions will be exported.
3. The model's JSON config. If the model is subclassed, this will only be included if the model's `get_config()` method is overridden.

Parameters
Model model
A tf.keras.Model to be saved. If the model is subclassed, the flag `serving_only` must be set to True.
string saved_model_path
A string specifying the path to the SavedModel directory.
IDictionary<string, object> custom_objects
Optional dictionary mapping string names to custom classes or functions (e.g. custom loss functions).
bool as_text
`False` by default. Whether to write the `SavedModel` proto in text format. Currently unavailable in serving-only mode.
TensorSpec input_signature
A possibly nested sequence of tf.TensorSpec objects, used to specify the expected model inputs. See tf.function for more details.
bool serving_only
`False` by default. When `True`, only the prediction graph is saved.
Show Example
import tensorflow as tf

# Create a tf.keras model.
model = tf.keras.Sequential()
model.add(tf.keras.layers.Dense(1, input_shape=[10]))
model.summary()

# Save the tf.keras model in the SavedModel format.
path = '/tmp/simple_keras_model'
tf.keras.experimental.export_saved_model(model, path)

# Load the saved keras model back.
new_model = tf.keras.experimental.load_from_saved_model(path)
new_model.summary()
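
As noted above, a subclassed model can only be exported with `serving_only=True`, and it needs an `input_signature` so the exporter knows the expected input shape. The following is a minimal sketch; the `MyModel` class, the path, and the shapes are illustrative, not part of this API:

import tensorflow as tf

# A small subclassed model. Subclassed models carry no static config,
# so only the serving (prediction) graph can be exported.
class MyModel(tf.keras.Model):
    def __init__(self):
        super(MyModel, self).__init__()
        self.dense = tf.keras.layers.Dense(1)

    def call(self, inputs):
        return self.dense(inputs)

model = MyModel()

# serving_only=True is required for subclassed models; the TensorSpec
# tells the exporter what inputs the serving graph should accept.
tf.keras.experimental.export_saved_model(
    model, '/tmp/subclassed_keras_model',
    serving_only=True,
    input_signature=[tf.TensorSpec(shape=[None, 10], dtype=tf.float32)])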

void export_saved_model(Sequential model, string saved_model_path, IDictionary<string, object> custom_objects, bool as_text, TensorSpec input_signature, bool serving_only)

Exports a tf.keras.Model as a TensorFlow SavedModel. (deprecated)

Warning: THIS FUNCTION IS DEPRECATED. It will be removed in a future version. Instructions for updating: Please use `model.save(..., save_format="tf")` or `tf.keras.models.save_model(..., save_format="tf")`.

Note that at this time, subclassed models can only be saved using `serving_only=True`.

The exported `SavedModel` is a standalone serialization of TensorFlow objects, and is supported by TF language APIs and the TensorFlow Serving system. To load the model, use the function tf.keras.experimental.load_from_saved_model.

The `SavedModel` contains:

1. A checkpoint containing the model weights.
2. A `SavedModel` proto containing the TensorFlow backend graph. Separate graphs are saved for prediction (serving), training, and evaluation. If the model has not been compiled, then only the graph computing predictions will be exported.
3. The model's JSON config. If the model is subclassed, this will only be included if the model's `get_config()` method is overridden.

Parameters
Sequential model
A tf.keras.Model to be saved. If the model is subclassed, the flag `serving_only` must be set to True.
string saved_model_path
A string specifying the path to the SavedModel directory.
IDictionary<string, object> custom_objects
Optional dictionary mapping string names to custom classes or functions (e.g. custom loss functions).
bool as_text
`False` by default. Whether to write the `SavedModel` proto in text format. Currently unavailable in serving-only mode.
TensorSpec input_signature
A possibly nested sequence of tf.TensorSpec objects, used to specify the expected model inputs. See tf.function for more details.
bool serving_only
`False` by default. When `True`, only the prediction graph is saved.
Show Example
import tensorflow as tf

# Create a tf.keras model.
model = tf.keras.Sequential()
model.add(tf.keras.layers.Dense(1, input_shape=[10]))
model.summary()

# Save the tf.keras model in the SavedModel format.
path = '/tmp/simple_keras_model'
tf.keras.experimental.export_saved_model(model, path)

# Load the saved keras model back.
new_model = tf.keras.experimental.load_from_saved_model(path)
new_model.summary()

object load_from_saved_model(object saved_model_path, object custom_objects)

Loads a Keras model from a SavedModel created by `export_saved_model()`. (deprecated)

Warning: THIS FUNCTION IS DEPRECATED. It will be removed in a future version. Instructions for updating: The experimental save and load functions have been deprecated. Please switch to tf.keras.models.load_model.

This function reinstantiates model state by:

1. Loading model topology from JSON (this will eventually come from the metagraph).
2. Loading model weights from the checkpoint.

Parameters
object saved_model_path
A string specifying the path to an existing SavedModel.
object custom_objects
Optional dictionary mapping names (strings) to custom classes or functions to be considered during deserialization.
Returns
object
A keras.Model instance.
Show Example
import tensorflow as tf

# Create a tf.keras model.
model = tf.keras.Sequential()
model.add(tf.keras.layers.Dense(1, input_shape=[10]))
model.summary()

# Save the tf.keras model in the SavedModel format.
path = '/tmp/simple_keras_model'
tf.keras.experimental.export_saved_model(model, path)

# Load the saved keras model back.
new_model = tf.keras.experimental.load_from_saved_model(path)
new_model.summary()
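
If the saved model was compiled with custom objects (for example a custom loss), pass them through `custom_objects` so deserialization can resolve them by name. A minimal sketch; `my_loss` is a hypothetical user-defined function, not part of this API:

import tensorflow as tf

# Hypothetical custom loss that the original model was compiled with.
def my_loss(y_true, y_pred):
    return tf.reduce_mean(tf.square(y_true - y_pred))

# Map the name recorded in the model config back to the function.
model = tf.keras.experimental.load_from_saved_model(
    '/tmp/simple_keras_model', custom_objects={'my_loss': my_loss})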

object load_from_saved_model_dyn(object saved_model_path, object custom_objects)

Loads a Keras model from a SavedModel created by `export_saved_model()`. (deprecated)

Warning: THIS FUNCTION IS DEPRECATED. It will be removed in a future version. Instructions for updating: The experimental save and load functions have been deprecated. Please switch to tf.keras.models.load_model.

This function reinstantiates model state by:

1. Loading model topology from JSON (this will eventually come from the metagraph).
2. Loading model weights from the checkpoint.

Parameters
object saved_model_path
A string specifying the path to an existing SavedModel.
object custom_objects
Optional dictionary mapping names (strings) to custom classes or functions to be considered during deserialization.
Returns
object
A keras.Model instance.
Show Example
import tensorflow as tf

# Create a tf.keras model.
model = tf.keras.Sequential()
model.add(tf.keras.layers.Dense(1, input_shape=[10]))
model.summary()

# Save the tf.keras model in the SavedModel format.
path = '/tmp/simple_keras_model'
tf.keras.experimental.export_saved_model(model, path)

# Load the saved keras model back.
new_model = tf.keras.experimental.load_from_saved_model(path)
new_model.summary()

IList<string> terminate_keras_multiprocessing_pools(double grace_period, bool use_sigkill)

Destroy Keras' multiprocessing pools to prevent deadlocks.

In general, multiprocessing.Pool can interact quite badly with other, seemingly unrelated, parts of a codebase due to Pool's reliance on fork. This method cleans up all pools that are known to belong to Keras (and can therefore be safely terminated).
Parameters
double grace_period
Time (in seconds) to wait for process cleanup to propagate.
bool use_sigkill
Whether to perform a final cleanup pass using SIGKILL.
Returns
IList<string>
A list of human-readable strings describing all issues encountered. It is up to the caller to decide whether to treat this as an error condition.
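
A sketch of a typical call, with a short grace period and no SIGKILL pass (the argument values here are illustrative); the returned messages are informational and it is up to the caller to act on them:

import tensorflow as tf

# Ask Keras to tear down its multiprocessing pools, waiting briefly
# for worker processes to exit, without escalating to SIGKILL.
issues = tf.keras.experimental.terminate_keras_multiprocessing_pools(
    grace_period=0.1, use_sigkill=False)

# Each entry describes a problem encountered during cleanup.
for msg in issues:
    print(msg)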

object terminate_keras_multiprocessing_pools_dyn(ImplicitContainer<T> grace_period, ImplicitContainer<T> use_sigkill)

Destroy Keras' multiprocessing pools to prevent deadlocks.

In general, multiprocessing.Pool can interact quite badly with other, seemingly unrelated, parts of a codebase due to Pool's reliance on fork. This method cleans up all pools that are known to belong to Keras (and can therefore be safely terminated).
Parameters
ImplicitContainer<T> grace_period
Time (in seconds) to wait for process cleanup to propagate.
ImplicitContainer<T> use_sigkill
Whether to perform a final cleanup pass using SIGKILL.
Returns
object
A list of human-readable strings describing all issues encountered. It is up to the caller to decide whether to treat this as an error condition.

Public properties

PythonFunctionContainer export_saved_model_fn get;

PythonFunctionContainer load_from_saved_model_fn get;

PythonFunctionContainer terminate_keras_multiprocessing_pools_fn get;