LostTech.TensorFlow : API Documentation

Type tf.keras.models

Namespace tensorflow

Public static methods

object clone_model(Model model, IGraphNodeBase input_tensors, object clone_function)

Clone any `Model` instance.

Model cloning is similar to calling a model on new inputs, except that it creates new layers (and thus new weights) instead of sharing the weights of the existing layers.
Parameters
Model model
Instance of `Model` (could be a functional model or a Sequential model).
IGraphNodeBase input_tensors
Optional list of input tensors or InputLayer objects to build the model upon. If not provided, placeholders will be created.
object clone_function
Callable to be used to clone each layer in the target model (except `InputLayer` instances). It takes as argument the layer instance to be cloned, and returns the corresponding layer instance to be used in the model copy. If unspecified, this callable defaults to the following serialization/deserialization function: `lambda layer: layer.__class__.from_config(layer.get_config())`. By passing a custom callable, you can customize your copy of the model, e.g. by wrapping certain layers of interest (you might want to replace all `LSTM` instances with equivalent `Bidirectional(LSTM(...))` instances, for example).
Returns
object
An instance of `Model` reproducing the behavior of the original model, on top of new input tensors, using newly instantiated weights. The cloned model might behave differently from the original model if a custom `clone_function` modifies a layer.
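
As an illustration, the sketch below shows what a clone call might look like from C#, assuming the usual LostTech.TensorFlow conventions (`using tensorflow;` and the nested `tf.keras.models` class documented here). The `BuildModel` helper and the exact namespaces of `Model` and `IGraphNodeBase` are assumptions, not part of this API reference.

```csharp
using tensorflow;            // exposes the static `tf` class (this binding)
using tensorflow.keras;      // assumed namespace of the Keras `Model` type
using LostTech.Gradient;     // assumed namespace of `IGraphNodeBase`

Model original = BuildModel();   // hypothetical helper returning a functional or Sequential model

// Clone the architecture: new layers and freshly initialized weights.
// Both nulls are assumed to map to Python `None`, so new input placeholders
// are created and the default config-based clone_function is used.
var clone = tf.keras.models.clone_model(
    original,
    input_tensors: (IGraphNodeBase)null,
    clone_function: null);
```

Because the clone starts from newly instantiated weights, copying the original weights (for example via Keras' `get_weights`/`set_weights`, which are not documented in this section) is a separate step.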

object clone_model(Model model, ValueTuple<IEnumerable<object>, object> input_tensors, object clone_function)

Clone any `Model` instance.

Model cloning is similar to calling a model on new inputs, except that it creates new layers (and thus new weights) instead of sharing the weights of the existing layers.
Parameters
Model model
Instance of `Model` (could be a functional model or a Sequential model).
ValueTuple<IEnumerable<object>, object> input_tensors
Optional list of input tensors or InputLayer objects to build the model upon. If not provided, placeholders will be created.
object clone_function
Callable to be used to clone each layer in the target model (except `InputLayer` instances). It takes as argument the layer instance to be cloned, and returns the corresponding layer instance to be used in the model copy. If unspecified, this callable defaults to the following serialization/deserialization function: `lambda layer: layer.__class__.from_config(layer.get_config())`. By passing a custom callable, you can customize your copy of the model, e.g. by wrapping certain layers of interest (you might want to replace all `LSTM` instances with equivalent `Bidirectional(LSTM(...))` instances, for example).
Returns
object
An instance of `Model` reproducing the behavior of the original model, on top of new input tensors, using newly instantiated weights. The cloned model might behave differently from the original model if a custom `clone_function` modifies a layer.

object clone_model(Model model, IEnumerable<IGraphNodeBase> input_tensors, object clone_function)

Clone any `Model` instance.

Model cloning is similar to calling a model on new inputs, except that it creates new layers (and thus new weights) instead of sharing the weights of the existing layers.
Parameters
Model model
Instance of `Model` (could be a functional model or a Sequential model).
IEnumerable<IGraphNodeBase> input_tensors
Optional list of input tensors or InputLayer objects to build the model upon. If not provided, placeholders will be created.
object clone_function
Callable to be used to clone each layer in the target model (except `InputLayer` instances). It takes as argument the layer instance to be cloned, and returns the corresponding layer instance to be used in the model copy. If unspecified, this callable defaults to the following serialization/deserialization function: `lambda layer: layer.__class__.from_config(layer.get_config())`. By passing a custom callable, you can customize your copy of the model, e.g. by wrapping certain layers of interest (you might want to replace all `LSTM` instances with equivalent `Bidirectional(LSTM(...))` instances, for example).
Returns
object
An instance of `Model` reproducing the behavior of the original model, on top of new input tensors, using newly instantiated weights. The cloned model might behave differently from the original model if a custom `clone_function` modifies a layer.

object clone_model(Model model, IDictionary<object, object> input_tensors, object clone_function)

Clone any `Model` instance.

Model cloning is similar to calling a model on new inputs, except that it creates new layers (and thus new weights) instead of sharing the weights of the existing layers.
Parameters
Model model
Instance of `Model` (could be a functional model or a Sequential model).
IDictionary<object, object> input_tensors
Optional list of input tensors or InputLayer objects to build the model upon. If not provided, placeholders will be created.
object clone_function
Callable to be used to clone each layer in the target model (except `InputLayer` instances). It takes as argument the layer instance to be cloned, and returns the corresponding layer instance to be used in the model copy. If unspecified, this callable defaults to the following serialization/deserialization function: `lambda layer: layer.__class__.from_config(layer.get_config())`. By passing a custom callable, you can customize your copy of the model, e.g. by wrapping certain layers of interest (you might want to replace all `LSTM` instances with equivalent `Bidirectional(LSTM(...))` instances, for example).
Returns
object
An instance of `Model` reproducing the behavior of the original model, on top of new input tensors, using newly instantiated weights. The cloned model might behave differently from the original model if a custom `clone_function` modifies a layer.

object clone_model_dyn(object model, object input_tensors, object clone_function)

Clone any `Model` instance.

Model cloning is similar to calling a model on new inputs, except that it creates new layers (and thus new weights) instead of sharing the weights of the existing layers.
Parameters
object model
Instance of `Model` (could be a functional model or a Sequential model).
object input_tensors
Optional list of input tensors or InputLayer objects to build the model upon. If not provided, placeholders will be created.
object clone_function
Callable to be used to clone each layer in the target model (except `InputLayer` instances). It takes as argument the layer instance to be cloned, and returns the corresponding layer instance to be used in the model copy. If unspecified, this callable defaults to the following serialization/deserialization function: `lambda layer: layer.__class__.from_config(layer.get_config())`. By passing a custom callable, you can customize your copy of the model, e.g. by wrapping certain layers of interest (you might want to replace all `LSTM` instances with equivalent `Bidirectional(LSTM(...))` instances, for example).
Returns
object
An instance of `Model` reproducing the behavior of the original model, on top of new input tensors, using newly instantiated weights. The cloned model might behave differently from the original model if a custom `clone_function` modifies a layer.

RevivedModel load_model(object filepath, object custom_objects, bool compile)

Loads a model saved via `save_model`.
Parameters
object filepath
One of the following: a string path to the saved model, or an `h5py.File` object from which to load the model.
object custom_objects
Optional dictionary mapping names (strings) to custom classes or functions to be considered during deserialization.
bool compile
Boolean, whether to compile the model after loading.
Returns
RevivedModel
A Keras model instance. If an optimizer was found as part of the saved model, the model is already compiled. Otherwise, the model is uncompiled and a warning will be displayed. When `compile` is set to False, the compilation is omitted without any warning.
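
For illustration, a minimal load call might look like the sketch below; the file name is hypothetical, and the assumptions are that the binding accepts a plain string for the `object filepath` parameter and maps C# `null` to Python `None`.

```csharp
using tensorflow;

// Load a model previously written by `save_model`. custom_objects: null is
// sufficient when only built-in layers are used; compile: true recompiles the
// model if optimizer state was stored alongside it.
var restored = tf.keras.models.load_model(
    "my_model.h5",          // hypothetical path; an `h5py.File` object also works
    custom_objects: null,
    compile: true);
```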

object load_model_dyn(object filepath, object custom_objects, ImplicitContainer<T> compile)

Loads a model saved via `save_model`.
Parameters
object filepath
One of the following: a string path to the saved model, or an `h5py.File` object from which to load the model.
object custom_objects
Optional dictionary mapping names (strings) to custom classes or functions to be considered during deserialization.
ImplicitContainer<T> compile
Boolean, whether to compile the model after loading.
Returns
object
A Keras model instance. If an optimizer was found as part of the saved model, the model is already compiled. Otherwise, the model is uncompiled and a warning will be displayed. When `compile` is set to False, the compilation is omitted without any warning.

object model_from_config(IEnumerable<object> config, IDictionary<object, object> custom_objects)

Instantiates a Keras model from its config.
Parameters
IEnumerable<object> config
Configuration dictionary.
IDictionary<object, object> custom_objects
Optional dictionary mapping names (strings) to custom classes or functions to be considered during deserialization.
Returns
object
A Keras model instance (uncompiled).
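
As a sketch of the round trip (assuming the binding also surfaces Keras' `get_config` on `Model`, which is outside this section, and that `original` is an existing `Model` instance), rebuilding an architecture from its config might look like:

```csharp
using tensorflow;

// Rebuild the architecture only; weights and compile state are not restored.
object config = original.get_config();    // assumed Keras-style accessor on an existing Model
var rebuilt = tf.keras.models.model_from_config(config, custom_objects: null);
```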

object model_from_config(object config, IDictionary<object, object> custom_objects)

Instantiates a Keras model from its config.
Parameters
object config
Configuration dictionary.
IDictionary<object, object> custom_objects
Optional dictionary mapping names (strings) to custom classes or functions to be considered during deserialization.
Returns
object
A Keras model instance (uncompiled).

object model_from_config_dyn(object config, object custom_objects)

Instantiates a Keras model from its config.
Parameters
object config
Configuration dictionary.
object custom_objects
Optional dictionary mapping names (strings) to custom classes or functions to be considered during deserialization.
Returns
object
A Keras model instance (uncompiled).

object model_from_json(string json_string, object custom_objects)

Parses a JSON model configuration string and returns a model instance.
Parameters
string json_string
JSON string encoding a model configuration.
object custom_objects
Optional dictionary mapping names (strings) to custom classes or functions to be considered during deserialization.
Returns
object
A Keras model instance (uncompiled).
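
A hedged JSON round-trip sketch; `to_json` on `Model` is assumed to exist in this binding (it mirrors the Keras Python API) and is not documented in this section:

```csharp
using tensorflow;

string json = original.to_json();          // assumed Keras-style serializer returning the JSON text
var fromJson = tf.keras.models.model_from_json(json, custom_objects: null);
// `fromJson` has the original architecture but is uncompiled and has fresh weights.
```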

object model_from_json_dyn(object json_string, object custom_objects)

Parses a JSON model configuration string and returns a model instance.
Parameters
object json_string
JSON string encoding a model configuration.
object custom_objects
Optional dictionary mapping names (strings) to custom classes or functions to be considered during deserialization.
Returns
object
A Keras model instance (uncompiled).

object model_from_yaml(object yaml_string, object custom_objects)

Parses a YAML model configuration string and returns a model instance.
Parameters
object yaml_string
YAML string encoding a model configuration.
object custom_objects
Optional dictionary mapping names (strings) to custom classes or functions to be considered during deserialization.
Returns
object
A Keras model instance (uncompiled).
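
`model_from_yaml` follows the same pattern as the JSON variant; the sketch below assumes `to_yaml` is surfaced on `Model` and that PyYAML is available in the hosted Python environment:

```csharp
using tensorflow;

var yaml = original.to_yaml();             // assumed Keras-style serializer
var fromYaml = tf.keras.models.model_from_yaml(yaml, custom_objects: null);
```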

object model_from_yaml_dyn(object yaml_string, object custom_objects)

Parses a YAML model configuration string and returns a model instance.
Parameters
object yaml_string
YAML string encoding a model configuration.
object custom_objects
Optional dictionary mapping names (strings) to custom classes or functions to be considered during deserialization.
Returns
object
A Keras model instance (uncompiled).

void save_model(AutoTrackable model, IGraphNodeBase filepath, bool overwrite, bool include_optimizer, string save_format, object signatures)

Saves a model as a TensorFlow SavedModel or HDF5 file.

The saved model contains the model's configuration (topology), the model's weights, and the model's optimizer's state (if any).

Thus the saved model can be reinstantiated in the exact same state, without any of the code used for model definition or training.

_SavedModel serialization_ (not yet added)

The SavedModel serialization path uses tf.saved_model.save to save the model and all trackable objects attached to the model (e.g. layers and variables). `@tf.function`-decorated methods are also saved. Additional trackable objects and functions are added to the SavedModel to allow the model to be loaded back as a Keras Model object.
Parameters
AutoTrackable model
Keras model instance to be saved.
IGraphNodeBase filepath
One of the following: a string path where the model should be saved, or an `h5py.File` object in which to save the model.
bool overwrite
Whether we should overwrite any existing model at the target location, or instead ask the user with a manual prompt.
bool include_optimizer
If True, the optimizer's state is saved together with the model.
string save_format
Either 'tf' or 'h5', indicating whether to save the model as a TensorFlow SavedModel or in HDF5 format. Defaults to 'tf' in TF 2.X, and 'h5' in TF 1.X.
object signatures
Signatures to save with the SavedModel. Applicable to the 'tf' format only. Please see the `signatures` argument in tf.saved_model.save for details.

void save_model(AutoTrackable model, Byte[] filepath, bool overwrite, bool include_optimizer, string save_format, object signatures)

Saves a model as a TensorFlow SavedModel or HDF5 file.

The saved model contains the model's configuration (topology), the model's weights, and the model's optimizer's state (if any).

Thus the saved model can be reinstantiated in the exact same state, without any of the code used for model definition or training.

_SavedModel serialization_ (not yet added)

The SavedModel serialization path uses tf.saved_model.save to save the model and all trackable objects attached to the model (e.g. layers and variables). `@tf.function`-decorated methods are also saved. Additional trackable objects and functions are added to the SavedModel to allow the model to be loaded back as a Keras Model object.
Parameters
AutoTrackable model
Keras model instance to be saved.
Byte[] filepath
One of the following: a string path where the model should be saved, or an `h5py.File` object in which to save the model.
bool overwrite
Whether we should overwrite any existing model at the target location, or instead ask the user with a manual prompt.
bool include_optimizer
If True, the optimizer's state is saved together with the model.
string save_format
Either 'tf' or 'h5', indicating whether to save the model as a TensorFlow SavedModel or in HDF5 format. Defaults to 'tf' in TF 2.X, and 'h5' in TF 1.X.
object signatures
Signatures to save with the SavedModel. Applicable to the 'tf' format only. Please see the `signatures` argument in tf.saved_model.save for details.

void save_model(Checkpoint model, string filepath, bool overwrite, bool include_optimizer, string save_format, object signatures)

Saves a model as a TensorFlow SavedModel or HDF5 file.

The saved model contains the model's configuration (topology), the model's weights, and the model's optimizer's state (if any).

Thus the saved model can be reinstantiated in the exact same state, without any of the code used for model definition or training.

_SavedModel serialization_ (not yet added)

The SavedModel serialization path uses tf.saved_model.save to save the model and all trackable objects attached to the model (e.g. layers and variables). `@tf.function`-decorated methods are also saved. Additional trackable objects and functions are added to the SavedModel to allow the model to be loaded back as a Keras Model object.
Parameters
Checkpoint model
Keras model instance to be saved.
string filepath
One of the following: a string path where the model should be saved, or an `h5py.File` object in which to save the model.
bool overwrite
Whether we should overwrite any existing model at the target location, or instead ask the user with a manual prompt.
bool include_optimizer
If True, the optimizer's state is saved together with the model.
string save_format
Either 'tf' or 'h5', indicating whether to save the model as a TensorFlow SavedModel or in HDF5 format. Defaults to 'tf' in TF 2.X, and 'h5' in TF 1.X.
object signatures
Signatures to save with the SavedModel. Applicable to the 'tf' format only. Please see the `signatures` argument in tf.saved_model.save for details.

void save_model(Checkpoint model, IGraphNodeBase filepath, bool overwrite, bool include_optimizer, string save_format, object signatures)

Saves a model as a TensorFlow SavedModel or HDF5 file.

The saved model contains the model's configuration (topology), the model's weights, and the model's optimizer's state (if any).

Thus the saved model can be reinstantiated in the exact same state, without any of the code used for model definition or training.

_SavedModel serialization_ (not yet added)

The SavedModel serialization path uses tf.saved_model.save to save the model and all trackable objects attached to the model (e.g. layers and variables). `@tf.function`-decorated methods are also saved. Additional trackable objects and functions are added to the SavedModel to allow the model to be loaded back as a Keras Model object.
Parameters
Checkpoint model
Keras model instance to be saved.
IGraphNodeBase filepath
One of the following: a string path where the model should be saved, or an `h5py.File` object in which to save the model.
bool overwrite
Whether we should overwrite any existing model at the target location, or instead ask the user with a manual prompt.
bool include_optimizer
If True, the optimizer's state is saved together with the model.
string save_format
Either 'tf' or 'h5', indicating whether to save the model as a TensorFlow SavedModel or in HDF5 format. Defaults to 'tf' in TF 2.X, and 'h5' in TF 1.X.
object signatures
Signatures to save with the SavedModel. Applicable to the 'tf' format only. Please see the `signatures` argument in tf.saved_model.save for details.

void save_model(Checkpoint model, Byte[] filepath, bool overwrite, bool include_optimizer, string save_format, object signatures)

Saves a model as a TensorFlow SavedModel or HDF5 file.

The saved model contains the model's configuration (topology), the model's weights, and the model's optimizer's state (if any).

Thus the saved model can be reinstantiated in the exact same state, without any of the code used for model definition or training.

_SavedModel serialization_ (not yet added)

The SavedModel serialization path uses tf.saved_model.save to save the model and all trackable objects attached to the model (e.g. layers and variables). `@tf.function`-decorated methods are also saved. Additional trackable objects and functions are added to the SavedModel to allow the model to be loaded back as a Keras Model object.
Parameters
Checkpoint model
Keras model instance to be saved.
Byte[] filepath
One of the following: a string path where the model should be saved, or an `h5py.File` object in which to save the model.
bool overwrite
Whether we should overwrite any existing model at the target location, or instead ask the user with a manual prompt.
bool include_optimizer
If True, the optimizer's state is saved together with the model.
string save_format
Either 'tf' or 'h5', indicating whether to save the model as a TensorFlow SavedModel or in HDF5 format. Defaults to 'tf' in TF 2.X, and 'h5' in TF 1.X.
object signatures
Signatures to save with the SavedModel. Applicable to the 'tf' format only. Please see the `signatures` argument in tf.saved_model.save for details.

void save_model(AutoTrackable model, string filepath, bool overwrite, bool include_optimizer, string save_format, object signatures)

Saves a model as a TensorFlow SavedModel or HDF5 file.

The saved model contains the model's configuration (topology), the model's weights, and the model's optimizer's state (if any).

Thus the saved model can be reinstantiated in the exact same state, without any of the code used for model definition or training.

_SavedModel serialization_ (not yet added)

The SavedModel serialization path uses tf.saved_model.save to save the model and all trackable objects attached to the model (e.g. layers and variables). `@tf.function`-decorated methods are also saved. Additional trackable objects and functions are added to the SavedModel to allow the model to be loaded back as a Keras Model object.
Parameters
AutoTrackable model
Keras model instance to be saved.
string filepath
One of the following: a string path where the model should be saved, or an `h5py.File` object in which to save the model.
bool overwrite
Whether we should overwrite any existing model at the target location, or instead ask the user with a manual prompt.
bool include_optimizer
If True, the optimizer's state is saved together with the model.
string save_format
Either 'tf' or 'h5', indicating whether to save the model as a TensorFlow SavedModel or in HDF5 format. Defaults to 'tf' in TF 2.X, and 'h5' in TF 1.X.
object signatures
Signatures to save with the SavedModel. Applicable to the 'tf' format only. Please see the `signatures` argument in tf.saved_model.save for details.
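
Putting it together, a hedged save sketch for both formats follows. It assumes string paths are accepted, that `null` maps to Python `None` for `signatures`, and that the hypothetical `GetTrainedModel` helper returns an `AutoTrackable` Keras model so that this overload applies.

```csharp
using tensorflow;

var model = GetTrainedModel();   // hypothetical helper; assumed to return an AutoTrackable Keras model

// HDF5: a single .h5 file with topology, weights, and optimizer state.
tf.keras.models.save_model(
    model, "my_model.h5",
    overwrite: true, include_optimizer: true,
    save_format: "h5", signatures: null);

// SavedModel: writes a directory; `signatures` applies only to this format.
tf.keras.models.save_model(
    model, "saved_model_dir",
    overwrite: true, include_optimizer: true,
    save_format: "tf", signatures: null);
```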

object save_model_dyn(object model, object filepath, ImplicitContainer<T> overwrite, ImplicitContainer<T> include_optimizer, object save_format, object signatures)

Saves a model as a TensorFlow SavedModel or HDF5 file.

The saved model contains the model's configuration (topology), the model's weights, and the model's optimizer's state (if any).

Thus the saved model can be reinstantiated in the exact same state, without any of the code used for model definition or training.

_SavedModel serialization_ (not yet added)

The SavedModel serialization path uses tf.saved_model.save to save the model and all trackable objects attached to the model (e.g. layers and variables). `@tf.function`-decorated methods are also saved. Additional trackable objects and functions are added to the SavedModel to allow the model to be loaded back as a Keras Model object.
Parameters
object model
Keras model instance to be saved.
object filepath
One of the following: a string path where the model should be saved, or an `h5py.File` object in which to save the model.
ImplicitContainer<T> overwrite
Whether we should overwrite any existing model at the target location, or instead ask the user with a manual prompt.
ImplicitContainer<T> include_optimizer
If True, the optimizer's state is saved together with the model.
object save_format
Either 'tf' or 'h5', indicating whether to save the model as a TensorFlow SavedModel or in HDF5 format. Defaults to 'tf' in TF 2.X, and 'h5' in TF 1.X.
object signatures
Signatures to save with the SavedModel. Applicable to the 'tf' format only. Please see the `signatures` argument in tf.saved_model.save for details.

Public properties

PythonFunctionContainer clone_model_fn get;

PythonFunctionContainer load_model_fn get;

PythonFunctionContainer model_from_config_fn get;

PythonFunctionContainer model_from_json_fn get;

PythonFunctionContainer model_from_yaml_fn get;

PythonFunctionContainer save_model_fn get;