Type tf.config
Namespace tensorflow
Methods
- experimental_connect_to_cluster
- experimental_connect_to_cluster
- experimental_connect_to_cluster_dyn
- experimental_connect_to_host
- experimental_connect_to_host_dyn
- experimental_list_devices
- experimental_list_devices_dyn
- experimental_run_functions_eagerly
- experimental_run_functions_eagerly_dyn
- get_soft_device_placement
- get_soft_device_placement_dyn
- set_soft_device_placement
- set_soft_device_placement_dyn
Public static methods
void experimental_connect_to_cluster(ClusterResolver cluster_spec_or_resolver, string job_name, int task_index, object protocol)
Connects to the given cluster, making devices on the cluster available to use. Calling this more than once will work, but will invalidate any tensor handles on the old remote devices. If the given local job name is not present in the cluster specification, it will be added automatically, using an unused port on the localhost.
Parameters
-
ClusterResolver
cluster_spec_or_resolver - A `ClusterSpec` or `ClusterResolver` describing the cluster.
-
string
job_name - The name of the local job.
-
int
task_index - The local task index.
-
object
protocol - The communication protocol, such as `"grpc"`. If unspecified, will use the default from `python/platform/remote_utils.py`.
void experimental_connect_to_cluster(ClusterSpec cluster_spec_or_resolver, string job_name, int task_index, object protocol)
Connects to the given cluster, making devices on the cluster available to use. Calling this more than once will work, but will invalidate any tensor handles on the old remote devices. If the given local job name is not present in the cluster specification, it will be added automatically, using an unused port on the localhost.
Parameters
-
ClusterSpec
cluster_spec_or_resolver - A `ClusterSpec` or `ClusterResolver` describing the cluster.
-
string
job_name - The name of the local job.
-
int
task_index - The local task index.
-
object
protocol - The communication protocol, such as `"grpc"`. If unspecified, will use the default from `python/platform/remote_utils.py`.
object experimental_connect_to_cluster_dyn(object cluster_spec_or_resolver, ImplicitContainer<T> job_name, ImplicitContainer<T> task_index, object protocol)
Connects to the given cluster, making devices on the cluster available to use. Calling this more than once will work, but will invalidate any tensor handles on the old remote devices. If the given local job name is not present in the cluster specification, it will be added automatically, using an unused port on the localhost.
Parameters
-
object
cluster_spec_or_resolver - A `ClusterSpec` or `ClusterResolver` describing the cluster.
-
ImplicitContainer<T>
job_name - The name of the local job.
-
ImplicitContainer<T>
task_index - The local task index.
-
object
protocol - The communication protocol, such as `"grpc"`. If unspecified, will use the default from `python/platform/remote_utils.py`.
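The automatic-local-job behavior described above can be sketched in plain Python. This is an illustrative stand-in (the function name `connect_to_cluster_sketch` and the dict-based cluster spec are assumptions for the example, not the TensorFlow implementation): if the local job name is missing from the cluster spec, it is added on an unused localhost port, which the sketch obtains by binding a socket to port 0.

```python
import socket

def connect_to_cluster_sketch(cluster_spec, job_name="localhost", task_index=0):
    """Sketch of the documented behavior: if `job_name` is missing from the
    cluster spec, add it automatically on an unused port on the localhost."""
    if job_name not in cluster_spec:
        # Ask the OS for an unused port by binding to port 0.
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
            s.bind(("localhost", 0))
            port = s.getsockname()[1]
        cluster_spec = dict(cluster_spec)
        cluster_spec[job_name] = {task_index: "localhost:%d" % port}
    return cluster_spec

# Usage: a spec without a local job gains one automatically.
spec = connect_to_cluster_sketch({"worker": {0: "worker0.example.com:2222"}})
```

Existing jobs in the spec are left untouched; only the missing local job is filled in.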
void experimental_connect_to_host(IEnumerable<object> remote_host, string job_name)
Connects to a single machine to enable remote execution on it, making devices on the remote host available to use. Calling this more than once will work, but will invalidate any tensor handles on the old remote devices. Using the default job_name of `worker`, you can schedule ops to run remotely as follows:
Parameters
-
IEnumerable<object>
remote_host - A single remote server address, or a list of addresses, in host-port format.
-
string
job_name - The job name under which the new server will be accessible.
Show Example
# Enable eager execution, and connect to the remote host.
tf.compat.v1.enable_eager_execution()
tf.contrib.eager.connect_to_remote_host("exampleaddr.com:9876")

with ops.device("job:worker/replica:0/task:1/device:CPU:0"):
  # The following tensors should be resident on the remote device,
  # and the op will also execute remotely.
  x1 = array_ops.ones([2, 2])
  x2 = array_ops.ones([2, 2])
  y = math_ops.matmul(x1, x2)
object experimental_connect_to_host_dyn(object remote_host, ImplicitContainer<T> job_name)
Connects to a single machine to enable remote execution on it, making devices on the remote host available to use. Calling this more than once will work, but will invalidate any tensor handles on the old remote devices. Using the default job_name of `worker`, you can schedule ops to run remotely as follows:
Parameters
-
object
remote_host - A single remote server address, or a list of addresses, in host-port format.
-
ImplicitContainer<T>
job_name - The job name under which the new server will be accessible.
Show Example
# Enable eager execution, and connect to the remote host.
tf.compat.v1.enable_eager_execution()
tf.contrib.eager.connect_to_remote_host("exampleaddr.com:9876")

with ops.device("job:worker/replica:0/task:1/device:CPU:0"):
  # The following tensors should be resident on the remote device,
  # and the op will also execute remotely.
  x1 = array_ops.ones([2, 2])
  x2 = array_ops.ones([2, 2])
  y = math_ops.matmul(x1, x2)
IList<object> experimental_list_devices()
List the names of the available devices.
Returns
-
IList<object>
- Names of the available devices, as a `list`.
object experimental_list_devices_dyn()
List the names of the available devices.
Returns
-
object
- Names of the available devices, as a `list`.
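The names returned by this method are fully qualified device strings such as `/job:worker/replica:0/task:1/device:GPU:0`. As an illustration of that format (the helper `parse_device_name` is hypothetical, not part of the tf.config API), such a name can be split into its components with plain string handling:

```python
def parse_device_name(name):
    """Split a fully qualified device name such as
    '/job:worker/replica:0/task:1/device:GPU:0' into its components.
    Illustrative helper; not part of the tf.config API."""
    parts = {}
    for field in name.strip("/").split("/"):
        # Split only on the first ':' so 'device:GPU:0' keeps 'GPU:0' intact.
        key, _, value = field.partition(":")
        parts[key] = value
    return parts
```

For example, `parse_device_name("/job:worker/replica:0/task:1/device:GPU:0")` yields the job, replica, task, and device fields as a dict.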
void experimental_run_functions_eagerly(bool run_eagerly)
Enables / disables eager execution of `tf.function`s. After calling `tf.config.experimental_run_functions_eagerly(True)`, all invocations of `tf.function` will run eagerly instead of running through a graph function. This can be useful for debugging or profiling. Similarly, calling `tf.config.experimental_run_functions_eagerly(False)` will revert the behavior of all functions to graph functions.
Parameters
-
bool
run_eagerly - Boolean. Whether to run functions eagerly.
object experimental_run_functions_eagerly_dyn(object run_eagerly)
Enables / disables eager execution of `tf.function`s. After calling `tf.config.experimental_run_functions_eagerly(True)`, all invocations of `tf.function` will run eagerly instead of running through a graph function. This can be useful for debugging or profiling. Similarly, calling `tf.config.experimental_run_functions_eagerly(False)` will revert the behavior of all functions to graph functions.
Parameters
-
object
run_eagerly - Boolean. Whether to run functions eagerly.
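The toggle semantics above can be sketched in plain Python: a global flag that every wrapped function consults on each invocation, so flipping the flag immediately affects all functions. Everything here (`run_functions_eagerly_sketch`, the toy `function` decorator, the `"eager"`/`"graph"` tags) is a hypothetical stand-in for illustration, not the TensorFlow mechanism:

```python
_RUN_EAGERLY = False

def run_functions_eagerly_sketch(run_eagerly):
    """Mimics the toggle: flips a global flag consulted on every call."""
    global _RUN_EAGERLY
    _RUN_EAGERLY = bool(run_eagerly)

def function(python_fn):
    """Toy stand-in for tf.function: tags each call with the mode it ran in.

    A real tf.function would dispatch to a traced graph in the 'graph'
    branch; here both branches just run the Python body."""
    def wrapper(*args, **kwargs):
        mode = "eager" if _RUN_EAGERLY else "graph"
        return (mode, python_fn(*args, **kwargs))
    return wrapper

@function
def add(a, b):
    return a + b
```

Because the flag is read at call time rather than at decoration time, functions defined before the toggle change behavior too, which matches the "all invocations" wording above.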
bool get_soft_device_placement()
Gets whether soft device placement is enabled. If enabled, an op will be placed on CPU if any of the following are true:
1. there is no GPU implementation for the op
2. no GPU devices are known or registered
3. the op needs to co-locate with reftype input(s) which are from CPU
Returns
-
bool
- Whether soft placement is enabled.
object get_soft_device_placement_dyn()
Gets whether soft device placement is enabled. If enabled, an op will be placed on CPU if any of the following are true:
1. there is no GPU implementation for the op
2. no GPU devices are known or registered
3. the op needs to co-locate with reftype input(s) which are from CPU
Returns
-
object
- Whether soft placement is enabled.
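The three-condition fallback rule above can be written out as a small decision function. This is a sketch of the documented rule only (the function name and boolean parameters are assumptions for the example; the real placement logic lives inside the TensorFlow runtime):

```python
def should_place_on_cpu(op_has_gpu_kernel, gpu_available,
                        has_cpu_reftype_input, soft_placement_enabled):
    """Sketch of the documented soft-placement rule: with soft placement
    enabled, an op falls back to CPU if any of the three conditions holds."""
    if not soft_placement_enabled:
        # Without soft placement, no silent fallback happens; an invalid
        # placement would surface as an error instead.
        return False
    return (not op_has_gpu_kernel      # 1. no GPU implementation for the op
            or not gpu_available       # 2. no GPU devices known or registered
            or has_cpu_reftype_input)  # 3. co-location with CPU reftype input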
void set_soft_device_placement(bool enabled)
Sets whether soft device placement is enabled. If enabled, an op will be placed on CPU if any of the following are true:
1. there is no GPU implementation for the op
2. no GPU devices are known or registered
3. the op needs to co-locate with reftype input(s) which are from CPU
Parameters
-
bool
enabled - Whether to enable soft placement.
object set_soft_device_placement_dyn(object enabled)
Sets whether soft device placement is enabled. If enabled, an op will be placed on CPU if any of the following are true:
1. there is no GPU implementation for the op
2. no GPU devices are known or registered
3. the op needs to co-locate with reftype input(s) which are from CPU
Parameters
-
object
enabled - Whether to enable soft placement.