Type tf.test
Namespace tensorflow
Methods
- benchmark_config
- benchmark_config_dyn
- compute_gradient
- compute_gradient
- compute_gradient
- compute_gradient
- compute_gradient
- compute_gradient
- compute_gradient
- compute_gradient
- compute_gradient
- compute_gradient
- compute_gradient
- compute_gradient
- compute_gradient
- compute_gradient
- compute_gradient
- compute_gradient
- compute_gradient
- compute_gradient
- compute_gradient
- compute_gradient
- compute_gradient
- compute_gradient
- compute_gradient
- compute_gradient
- compute_gradient_dyn
- compute_gradient_error
- compute_gradient_error
- compute_gradient_error
- compute_gradient_error
- compute_gradient_error
- compute_gradient_error
- compute_gradient_error
- compute_gradient_error
- compute_gradient_error
- compute_gradient_error
- compute_gradient_error
- compute_gradient_error
- compute_gradient_error
- compute_gradient_error
- compute_gradient_error
- compute_gradient_error
- compute_gradient_error
- compute_gradient_error
- compute_gradient_error_dyn
- create_local_cluster
- create_local_cluster_dyn
- gpu_device_name
- gpu_device_name_dyn
- is_built_with_cuda
- is_built_with_cuda_dyn
- is_built_with_gpu_support
- is_built_with_gpu_support_dyn
- is_built_with_rocm
- is_built_with_rocm_dyn
- is_gpu_available
- is_gpu_available_dyn
Properties
Public static methods
object benchmark_config()
Returns a tf.compat.v1.ConfigProto for disabling the dependency optimizer.
Returns
-
object
- A TensorFlow ConfigProto object.
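For orientation, a minimal usage sketch against the underlying TensorFlow 1.x Python API that these bindings mirror; the Python call tf.test.benchmark_config is assumed to correspond to this overload.

import tensorflow.compat.v1 as tf
tf.disable_eager_execution()

x = tf.random.normal([1024, 1024])
y = tf.matmul(x, x)

# ConfigProto with the dependency optimizer disabled, so benchmarked ops
# are not pruned or rewritten away before timing.
config = tf.test.benchmark_config()
with tf.Session(config=config) as sess:
    sess.run(y)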
object benchmark_config_dyn()
Returns a tf.compat.v1.ConfigProto for disabling the dependency optimizer.
Returns
-
object
- A TensorFlow ConfigProto object.
object compute_gradient(IGraphNodeBase x, TensorShape x_shape, object y, TensorShape y_shape, object x_init_value, ImplicitContainer<T> delta, Nullable<ValueTuple<IEnumerable<object>, object>> init_targets, IDictionary<object, object> extra_feed_dict)
Computes and returns the theoretical and numerical Jacobian. (deprecated) Warning: THIS FUNCTION IS DEPRECATED. It will be removed in a future version.
Instructions for updating:
Use tf.test.compute_gradient in 2.0, which has better support for functions. Note that the two versions have different usage, so a code change is needed.
If `x` or `y` is complex, the Jacobian will still be real, but the corresponding Jacobian dimension(s) will be twice as large. This is required even if both the input and the output are complex, since TensorFlow graphs are not necessarily holomorphic and may have gradients that are not expressible as complex numbers. For example, if `x` is complex with shape `[m]` and `y` is complex with shape `[n]`, each Jacobian `J` will have shape `[m * 2, n * 2]` with

J[:m, :n] = d(Re y)/d(Re x)
J[:m, n:] = d(Im y)/d(Re x)
J[m:, :n] = d(Re y)/d(Im x)
J[m:, n:] = d(Im y)/d(Im x)
Parameters
-
IGraphNodeBase
x - a tensor or list of tensors
-
TensorShape
x_shape - the dimensions of x as a tuple or an array of ints. If x is a list, then this is the list of shapes.
-
object
y - a tensor
-
TensorShape
y_shape - the dimensions of y as a tuple or an array of ints.
-
object
x_init_value - (optional) a numpy array of the same shape as "x" representing the initial value of x. If x is a list, this should be a list of numpy arrays. If this is none, the function will pick a random tensor as the initial value.
-
ImplicitContainer<T>
delta - (optional) the amount of perturbation.
-
Nullable<ValueTuple<IEnumerable<object>, object>>
init_targets - list of targets to run to initialize model params.
-
IDictionary<object, object>
extra_feed_dict - dict that allows fixing specified tensor values during the Jacobian calculation.
Returns
-
object
- Two 2-d numpy arrays representing the theoretical and numerical Jacobian for dy/dx. Each has "x_size" rows and "y_size" columns where "x_size" is the number of elements in x and "y_size" is the number of elements in y. If x is a list, returns a list of two numpy arrays.
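As a rough sketch only: the deprecated Python call that these overloads wrap can be exercised as below, assuming the parameters here map one-to-one onto the keyword arguments of tf.compat.v1.test.compute_gradient.

import numpy as np
import tensorflow.compat.v1 as tf
tf.disable_eager_execution()

x = tf.placeholder(tf.float64, shape=[2, 3])
y = tf.nn.softmax(x)

with tf.Session():
    # Both Jacobians are 6 x 6: x has 6 elements (rows), y has 6 elements (columns).
    theoretical, numerical = tf.test.compute_gradient(
        x, [2, 3], y, [2, 3],
        x_init_value=np.random.rand(2, 3), delta=1e-3)
    np.testing.assert_allclose(theoretical, numerical, atol=1e-4)

If x and y were complex with the same element counts, each Jacobian would instead be 12 x 12, laid out in the four real/imaginary blocks described above.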
object compute_gradient(ndarray x, IEnumerable<object> x_shape, object y, TensorShape y_shape, object x_init_value, ImplicitContainer<T> delta, Nullable<ValueTuple<IEnumerable<object>, object>> init_targets, IDictionary<object, object> extra_feed_dict)
Computes and returns the theoretical and numerical Jacobian. (deprecated) Warning: THIS FUNCTION IS DEPRECATED. It will be removed in a future version.
Instructions for updating:
Use tf.test.compute_gradient in 2.0, which has better support for functions. Note that the two versions have different usage, so a code change is needed.
If `x` or `y` is complex, the Jacobian will still be real, but the corresponding Jacobian dimension(s) will be twice as large. This is required even if both the input and the output are complex, since TensorFlow graphs are not necessarily holomorphic and may have gradients that are not expressible as complex numbers. For example, if `x` is complex with shape `[m]` and `y` is complex with shape `[n]`, each Jacobian `J` will have shape `[m * 2, n * 2]` with

J[:m, :n] = d(Re y)/d(Re x)
J[:m, n:] = d(Im y)/d(Re x)
J[m:, :n] = d(Re y)/d(Im x)
J[m:, n:] = d(Im y)/d(Im x)
Parameters
-
ndarray
x - a tensor or list of tensors
-
IEnumerable<object>
x_shape - the dimensions of x as a tuple or an array of ints. If x is a list, then this is the list of shapes.
-
object
y - a tensor
-
TensorShape
y_shape - the dimensions of y as a tuple or an array of ints.
-
object
x_init_value - (optional) a numpy array of the same shape as "x" representing the initial value of x. If x is a list, this should be a list of numpy arrays. If this is none, the function will pick a random tensor as the initial value.
-
ImplicitContainer<T>
delta - (optional) the amount of perturbation.
-
Nullable<ValueTuple<IEnumerable<object>, object>>
init_targets - list of targets to run to initialize model params.
-
IDictionary<object, object>
extra_feed_dict - dict that allows fixing specified tensor values during the Jacobian calculation.
Returns
-
object
- Two 2-d numpy arrays representing the theoretical and numerical Jacobian for dy/dx. Each has "x_size" rows and "y_size" columns where "x_size" is the number of elements in x and "y_size" is the number of elements in y. If x is a list, returns a list of two numpy arrays.
object compute_gradient(ndarray x, TensorShape x_shape, object y, IEnumerable<int> y_shape, object x_init_value, ImplicitContainer<T> delta, Nullable<ValueTuple<IEnumerable<object>, object>> init_targets, IDictionary<object, object> extra_feed_dict)
Computes and returns the theoretical and numerical Jacobian. (deprecated) Warning: THIS FUNCTION IS DEPRECATED. It will be removed in a future version.
Instructions for updating:
Use tf.test.compute_gradient in 2.0, which has better support for functions. Note that the two versions have different usage, so a code change is needed.
If `x` or `y` is complex, the Jacobian will still be real, but the corresponding Jacobian dimension(s) will be twice as large. This is required even if both the input and the output are complex, since TensorFlow graphs are not necessarily holomorphic and may have gradients that are not expressible as complex numbers. For example, if `x` is complex with shape `[m]` and `y` is complex with shape `[n]`, each Jacobian `J` will have shape `[m * 2, n * 2]` with

J[:m, :n] = d(Re y)/d(Re x)
J[:m, n:] = d(Im y)/d(Re x)
J[m:, :n] = d(Re y)/d(Im x)
J[m:, n:] = d(Im y)/d(Im x)
Parameters
-
ndarray
x - a tensor or list of tensors
-
TensorShape
x_shape - the dimensions of x as a tuple or an array of ints. If x is a list, then this is the list of shapes.
-
object
y - a tensor
-
IEnumerable<int>
y_shape - the dimensions of y as a tuple or an array of ints.
-
object
x_init_value - (optional) a numpy array of the same shape as "x" representing the initial value of x. If x is a list, this should be a list of numpy arrays. If this is none, the function will pick a random tensor as the initial value.
-
ImplicitContainer<T>
delta - (optional) the amount of perturbation.
-
Nullable<ValueTuple<IEnumerable<object>, object>>
init_targets - list of targets to run to initialize model params.
-
IDictionary<object, object>
extra_feed_dict - dict that allows fixing specified tensor values during the Jacobian calculation.
Returns
-
object
- Two 2-d numpy arrays representing the theoretical and numerical Jacobian for dy/dx. Each has "x_size" rows and "y_size" columns where "x_size" is the number of elements in x and "y_size" is the number of elements in y. If x is a list, returns a list of two numpy arrays.
object compute_gradient(ndarray x, TensorShape x_shape, object y, TensorShape y_shape, object x_init_value, ImplicitContainer<T> delta, Nullable<ValueTuple<IEnumerable<object>, object>> init_targets, IDictionary<object, object> extra_feed_dict)
Computes and returns the theoretical and numerical Jacobian. (deprecated) Warning: THIS FUNCTION IS DEPRECATED. It will be removed in a future version.
Instructions for updating:
Use tf.test.compute_gradient in 2.0, which has better support for functions. Note that the two versions have different usage, so a code change is needed.
If `x` or `y` is complex, the Jacobian will still be real, but the corresponding Jacobian dimension(s) will be twice as large. This is required even if both the input and the output are complex, since TensorFlow graphs are not necessarily holomorphic and may have gradients that are not expressible as complex numbers. For example, if `x` is complex with shape `[m]` and `y` is complex with shape `[n]`, each Jacobian `J` will have shape `[m * 2, n * 2]` with

J[:m, :n] = d(Re y)/d(Re x)
J[:m, n:] = d(Im y)/d(Re x)
J[m:, :n] = d(Re y)/d(Im x)
J[m:, n:] = d(Im y)/d(Im x)
Parameters
-
ndarray
x - a tensor or list of tensors
-
TensorShape
x_shape - the dimensions of x as a tuple or an array of ints. If x is a list, then this is the list of shapes.
-
object
y - a tensor
-
TensorShape
y_shape - the dimensions of y as a tuple or an array of ints.
-
object
x_init_value - (optional) a numpy array of the same shape as "x" representing the initial value of x. If x is a list, this should be a list of numpy arrays. If this is none, the function will pick a random tensor as the initial value.
-
ImplicitContainer<T>
delta - (optional) the amount of perturbation.
-
Nullable<ValueTuple<IEnumerable<object>, object>>
init_targets - list of targets to run to initialize model params.
-
IDictionary<object, object>
extra_feed_dict - dict that allows fixing specified tensor values during the Jacobian calculation.
Returns
-
object
- Two 2-d numpy arrays representing the theoretical and numerical Jacobian for dy/dx. Each has "x_size" rows and "y_size" columns where "x_size" is the number of elements in x and "y_size" is the number of elements in y. If x is a list, returns a list of two numpy arrays.
object compute_gradient(IDictionary<object, object> x, IEnumerable<object> x_shape, object y, IEnumerable<int> y_shape, object x_init_value, ImplicitContainer<T> delta, Nullable<ValueTuple<IEnumerable<object>, object>> init_targets, IDictionary<object, object> extra_feed_dict)
Computes and returns the theoretical and numerical Jacobian. (deprecated) Warning: THIS FUNCTION IS DEPRECATED. It will be removed in a future version.
Instructions for updating:
Use tf.test.compute_gradient in 2.0, which has better support for functions. Note that the two versions have different usage, so a code change is needed.
If `x` or `y` is complex, the Jacobian will still be real, but the corresponding Jacobian dimension(s) will be twice as large. This is required even if both the input and the output are complex, since TensorFlow graphs are not necessarily holomorphic and may have gradients that are not expressible as complex numbers. For example, if `x` is complex with shape `[m]` and `y` is complex with shape `[n]`, each Jacobian `J` will have shape `[m * 2, n * 2]` with

J[:m, :n] = d(Re y)/d(Re x)
J[:m, n:] = d(Im y)/d(Re x)
J[m:, :n] = d(Re y)/d(Im x)
J[m:, n:] = d(Im y)/d(Im x)
Parameters
-
IDictionary<object, object>
x - a tensor or list of tensors
-
IEnumerable<object>
x_shape - the dimensions of x as a tuple or an array of ints. If x is a list, then this is the list of shapes.
-
object
y - a tensor
-
IEnumerable<int>
y_shape - the dimensions of y as a tuple or an array of ints.
-
object
x_init_value - (optional) a numpy array of the same shape as "x" representing the initial value of x. If x is a list, this should be a list of numpy arrays. If this is none, the function will pick a random tensor as the initial value.
-
ImplicitContainer<T>
delta - (optional) the amount of perturbation.
-
Nullable<ValueTuple<IEnumerable<object>, object>>
init_targets - list of targets to run to initialize model params.
-
IDictionary<object, object>
extra_feed_dict - dict that allows fixing specified tensor values during the Jacobian calculation.
Returns
-
object
- Two 2-d numpy arrays representing the theoretical and numerical Jacobian for dy/dx. Each has "x_size" rows and "y_size" columns where "x_size" is the number of elements in x and "y_size" is the number of elements in y. If x is a list, returns a list of two numpy arrays.
object compute_gradient(IDictionary<object, object> x, IEnumerable<object> x_shape, object y, TensorShape y_shape, object x_init_value, ImplicitContainer<T> delta, Nullable<ValueTuple<IEnumerable<object>, object>> init_targets, IDictionary<object, object> extra_feed_dict)
Computes and returns the theoretical and numerical Jacobian. (deprecated) Warning: THIS FUNCTION IS DEPRECATED. It will be removed in a future version.
Instructions for updating:
Use tf.test.compute_gradient in 2.0, which has better support for functions. Note that the two versions have different usage, so a code change is needed.
If `x` or `y` is complex, the Jacobian will still be real, but the corresponding Jacobian dimension(s) will be twice as large. This is required even if both the input and the output are complex, since TensorFlow graphs are not necessarily holomorphic and may have gradients that are not expressible as complex numbers. For example, if `x` is complex with shape `[m]` and `y` is complex with shape `[n]`, each Jacobian `J` will have shape `[m * 2, n * 2]` with

J[:m, :n] = d(Re y)/d(Re x)
J[:m, n:] = d(Im y)/d(Re x)
J[m:, :n] = d(Re y)/d(Im x)
J[m:, n:] = d(Im y)/d(Im x)
Parameters
-
IDictionary<object, object>
x - a tensor or list of tensors
-
IEnumerable<object>
x_shape - the dimensions of x as a tuple or an array of ints. If x is a list, then this is the list of shapes.
-
object
y - a tensor
-
TensorShape
y_shape - the dimensions of y as a tuple or an array of ints.
-
object
x_init_value - (optional) a numpy array of the same shape as "x" representing the initial value of x. If x is a list, this should be a list of numpy arrays. If this is none, the function will pick a random tensor as the initial value.
-
ImplicitContainer<T>
delta - (optional) the amount of perturbation.
-
Nullable<ValueTuple<IEnumerable<object>, object>>
init_targets - list of targets to run to initialize model params.
-
IDictionary<object, object>
extra_feed_dict - dict that allows fixing specified tensor values during the Jacobian calculation.
Returns
-
object
- Two 2-d numpy arrays representing the theoretical and numerical Jacobian for dy/dx. Each has "x_size" rows and "y_size" columns where "x_size" is the number of elements in x and "y_size" is the number of elements in y. If x is a list, returns a list of two numpy arrays.
object compute_gradient(IDictionary<object, object> x, TensorShape x_shape, object y, IEnumerable<int> y_shape, object x_init_value, ImplicitContainer<T> delta, Nullable<ValueTuple<IEnumerable<object>, object>> init_targets, IDictionary<object, object> extra_feed_dict)
Computes and returns the theoretical and numerical Jacobian. (deprecated) Warning: THIS FUNCTION IS DEPRECATED. It will be removed in a future version.
Instructions for updating:
Use tf.test.compute_gradient in 2.0, which has better support for functions. Note that the two versions have different usage, so a code change is needed.
If `x` or `y` is complex, the Jacobian will still be real, but the corresponding Jacobian dimension(s) will be twice as large. This is required even if both the input and the output are complex, since TensorFlow graphs are not necessarily holomorphic and may have gradients that are not expressible as complex numbers. For example, if `x` is complex with shape `[m]` and `y` is complex with shape `[n]`, each Jacobian `J` will have shape `[m * 2, n * 2]` with

J[:m, :n] = d(Re y)/d(Re x)
J[:m, n:] = d(Im y)/d(Re x)
J[m:, :n] = d(Re y)/d(Im x)
J[m:, n:] = d(Im y)/d(Im x)
Parameters
-
IDictionary<object, object>
x - a tensor or list of tensors
-
TensorShape
x_shape - the dimensions of x as a tuple or an array of ints. If x is a list, then this is the list of shapes.
-
object
y - a tensor
-
IEnumerable<int>
y_shape - the dimensions of y as a tuple or an array of ints.
-
object
x_init_value - (optional) a numpy array of the same shape as "x" representing the initial value of x. If x is a list, this should be a list of numpy arrays. If this is none, the function will pick a random tensor as the initial value.
-
ImplicitContainer<T>
delta - (optional) the amount of perturbation.
-
Nullable<ValueTuple<IEnumerable<object>, object>>
init_targets - list of targets to run to initialize model params.
-
IDictionary<object, object>
extra_feed_dict - dict that allows fixing specified tensor values during the Jacobian calculation.
Returns
-
object
- Two 2-d numpy arrays representing the theoretical and numerical Jacobian for dy/dx. Each has "x_size" rows and "y_size" columns where "x_size" is the number of elements in x and "y_size" is the number of elements in y. If x is a list, returns a list of two numpy arrays.
object compute_gradient(IDictionary<object, object> x, TensorShape x_shape, object y, TensorShape y_shape, object x_init_value, ImplicitContainer<T> delta, Nullable<ValueTuple<IEnumerable<object>, object>> init_targets, IDictionary<object, object> extra_feed_dict)
Computes and returns the theoretical and numerical Jacobian. (deprecated) Warning: THIS FUNCTION IS DEPRECATED. It will be removed in a future version.
Instructions for updating:
Use tf.test.compute_gradient in 2.0, which has better support for functions. Note that the two versions have different usage, so a code change is needed.
If `x` or `y` is complex, the Jacobian will still be real, but the corresponding Jacobian dimension(s) will be twice as large. This is required even if both the input and the output are complex, since TensorFlow graphs are not necessarily holomorphic and may have gradients that are not expressible as complex numbers. For example, if `x` is complex with shape `[m]` and `y` is complex with shape `[n]`, each Jacobian `J` will have shape `[m * 2, n * 2]` with

J[:m, :n] = d(Re y)/d(Re x)
J[:m, n:] = d(Im y)/d(Re x)
J[m:, :n] = d(Re y)/d(Im x)
J[m:, n:] = d(Im y)/d(Im x)
Parameters
-
IDictionary<object, object>
x - a tensor or list of tensors
-
TensorShape
x_shape - the dimensions of x as a tuple or an array of ints. If x is a list, then this is the list of shapes.
-
object
y - a tensor
-
TensorShape
y_shape - the dimensions of y as a tuple or an array of ints.
-
object
x_init_value - (optional) a numpy array of the same shape as "x" representing the initial value of x. If x is a list, this should be a list of numpy arrays. If this is none, the function will pick a random tensor as the initial value.
-
ImplicitContainer<T>
delta - (optional) the amount of perturbation.
-
Nullable<ValueTuple<IEnumerable<object>, object>>
init_targets - list of targets to run to initialize model params.
-
IDictionary<object, object>
extra_feed_dict - dict that allows fixing specified tensor values during the Jacobian calculation.
Returns
-
object
- Two 2-d numpy arrays representing the theoretical and numerical Jacobian for dy/dx. Each has "x_size" rows and "y_size" columns where "x_size" is the number of elements in x and "y_size" is the number of elements in y. If x is a list, returns a list of two numpy arrays.
object compute_gradient(IEnumerable<object> x, IEnumerable<object> x_shape, object y, IEnumerable<int> y_shape, object x_init_value, ImplicitContainer<T> delta, Nullable<ValueTuple<IEnumerable<object>, object>> init_targets, IDictionary<object, object> extra_feed_dict)
Computes and returns the theoretical and numerical Jacobian. (deprecated) Warning: THIS FUNCTION IS DEPRECATED. It will be removed in a future version.
Instructions for updating:
Use tf.test.compute_gradient in 2.0, which has better support for functions. Note that the two versions have different usage, so a code change is needed.
If `x` or `y` is complex, the Jacobian will still be real, but the corresponding Jacobian dimension(s) will be twice as large. This is required even if both the input and the output are complex, since TensorFlow graphs are not necessarily holomorphic and may have gradients that are not expressible as complex numbers. For example, if `x` is complex with shape `[m]` and `y` is complex with shape `[n]`, each Jacobian `J` will have shape `[m * 2, n * 2]` with

J[:m, :n] = d(Re y)/d(Re x)
J[:m, n:] = d(Im y)/d(Re x)
J[m:, :n] = d(Re y)/d(Im x)
J[m:, n:] = d(Im y)/d(Im x)
Parameters
-
IEnumerable<object>
x - a tensor or list of tensors
-
IEnumerable<object>
x_shape - the dimensions of x as a tuple or an array of ints. If x is a list, then this is the list of shapes.
-
object
y - a tensor
-
IEnumerable<int>
y_shape - the dimensions of y as a tuple or an array of ints.
-
object
x_init_value - (optional) a numpy array of the same shape as "x" representing the initial value of x. If x is a list, this should be a list of numpy arrays. If this is none, the function will pick a random tensor as the initial value.
-
ImplicitContainer<T>
delta - (optional) the amount of perturbation.
-
Nullable<ValueTuple<IEnumerable<object>, object>>
init_targets - list of targets to run to initialize model params.
-
IDictionary<object, object>
extra_feed_dict - dict that allows fixing specified tensor values during the Jacobian calculation.
Returns
-
object
- Two 2-d numpy arrays representing the theoretical and numerical Jacobian for dy/dx. Each has "x_size" rows and "y_size" columns where "x_size" is the number of elements in x and "y_size" is the number of elements in y. If x is a list, returns a list of two numpy arrays.
object compute_gradient(IEnumerable<object> x, IEnumerable<object> x_shape, object y, TensorShape y_shape, object x_init_value, ImplicitContainer<T> delta, Nullable<ValueTuple<IEnumerable<object>, object>> init_targets, IDictionary<object, object> extra_feed_dict)
Computes and returns the theoretical and numerical Jacobian. (deprecated) Warning: THIS FUNCTION IS DEPRECATED. It will be removed in a future version.
Instructions for updating:
Use tf.test.compute_gradient in 2.0, which has better support for functions. Note that the two versions have different usage, so a code change is needed.
If `x` or `y` is complex, the Jacobian will still be real, but the corresponding Jacobian dimension(s) will be twice as large. This is required even if both the input and the output are complex, since TensorFlow graphs are not necessarily holomorphic and may have gradients that are not expressible as complex numbers. For example, if `x` is complex with shape `[m]` and `y` is complex with shape `[n]`, each Jacobian `J` will have shape `[m * 2, n * 2]` with

J[:m, :n] = d(Re y)/d(Re x)
J[:m, n:] = d(Im y)/d(Re x)
J[m:, :n] = d(Re y)/d(Im x)
J[m:, n:] = d(Im y)/d(Im x)
Parameters
-
IEnumerable<object>
x - a tensor or list of tensors
-
IEnumerable<object>
x_shape - the dimensions of x as a tuple or an array of ints. If x is a list, then this is the list of shapes.
-
object
y - a tensor
-
TensorShape
y_shape - the dimensions of y as a tuple or an array of ints.
-
object
x_init_value - (optional) a numpy array of the same shape as "x" representing the initial value of x. If x is a list, this should be a list of numpy arrays. If this is none, the function will pick a random tensor as the initial value.
-
ImplicitContainer<T>
delta - (optional) the amount of perturbation.
-
Nullable<ValueTuple<IEnumerable<object>, object>>
init_targets - list of targets to run to initialize model params.
-
IDictionary<object, object>
extra_feed_dict - dict that allows fixing specified tensor values during the Jacobian calculation.
Returns
-
object
- Two 2-d numpy arrays representing the theoretical and numerical Jacobian for dy/dx. Each has "x_size" rows and "y_size" columns where "x_size" is the number of elements in x and "y_size" is the number of elements in y. If x is a list, returns a list of two numpy arrays.
object compute_gradient(IEnumerable<object> x, TensorShape x_shape, object y, IEnumerable<int> y_shape, object x_init_value, ImplicitContainer<T> delta, Nullable<ValueTuple<IEnumerable<object>, object>> init_targets, IDictionary<object, object> extra_feed_dict)
Computes and returns the theoretical and numerical Jacobian. (deprecated) Warning: THIS FUNCTION IS DEPRECATED. It will be removed in a future version.
Instructions for updating:
Use tf.test.compute_gradient in 2.0, which has better support for functions. Note that the two versions have different usage, so a code change is needed.
If `x` or `y` is complex, the Jacobian will still be real, but the corresponding Jacobian dimension(s) will be twice as large. This is required even if both the input and the output are complex, since TensorFlow graphs are not necessarily holomorphic and may have gradients that are not expressible as complex numbers. For example, if `x` is complex with shape `[m]` and `y` is complex with shape `[n]`, each Jacobian `J` will have shape `[m * 2, n * 2]` with

J[:m, :n] = d(Re y)/d(Re x)
J[:m, n:] = d(Im y)/d(Re x)
J[m:, :n] = d(Re y)/d(Im x)
J[m:, n:] = d(Im y)/d(Im x)
Parameters
-
IEnumerable<object>
x - a tensor or list of tensors
-
TensorShape
x_shape - the dimensions of x as a tuple or an array of ints. If x is a list, then this is the list of shapes.
-
object
y - a tensor
-
IEnumerable<int>
y_shape - the dimensions of y as a tuple or an array of ints.
-
object
x_init_value - (optional) a numpy array of the same shape as "x" representing the initial value of x. If x is a list, this should be a list of numpy arrays. If this is none, the function will pick a random tensor as the initial value.
-
ImplicitContainer<T>
delta - (optional) the amount of perturbation.
-
Nullable<ValueTuple<IEnumerable<object>, object>>
init_targets - list of targets to run to initialize model params.
-
IDictionary<object, object>
extra_feed_dict - dict that allows fixing specified tensor values during the Jacobian calculation.
Returns
-
object
- Two 2-d numpy arrays representing the theoretical and numerical Jacobian for dy/dx. Each has "x_size" rows and "y_size" columns where "x_size" is the number of elements in x and "y_size" is the number of elements in y. If x is a list, returns a list of two numpy arrays.
object compute_gradient(IEnumerable<object> x, TensorShape x_shape, object y, TensorShape y_shape, object x_init_value, ImplicitContainer<T> delta, Nullable<ValueTuple<IEnumerable<object>, object>> init_targets, IDictionary<object, object> extra_feed_dict)
Computes and returns the theoretical and numerical Jacobian. (deprecated) Warning: THIS FUNCTION IS DEPRECATED. It will be removed in a future version.
Instructions for updating:
Use tf.test.compute_gradient in 2.0, which has better support for functions. Note that the two versions have different usage, so a code change is needed.
If `x` or `y` is complex, the Jacobian will still be real, but the corresponding Jacobian dimension(s) will be twice as large. This is required even if both the input and the output are complex, since TensorFlow graphs are not necessarily holomorphic and may have gradients that are not expressible as complex numbers. For example, if `x` is complex with shape `[m]` and `y` is complex with shape `[n]`, each Jacobian `J` will have shape `[m * 2, n * 2]` with

J[:m, :n] = d(Re y)/d(Re x)
J[:m, n:] = d(Im y)/d(Re x)
J[m:, :n] = d(Re y)/d(Im x)
J[m:, n:] = d(Im y)/d(Im x)
Parameters
-
IEnumerable<object>
x - a tensor or list of tensors
-
TensorShape
x_shape - the dimensions of x as a tuple or an array of ints. If x is a list, then this is the list of shapes.
-
object
y - a tensor
-
TensorShape
y_shape - the dimensions of y as a tuple or an array of ints.
-
object
x_init_value - (optional) a numpy array of the same shape as "x" representing the initial value of x. If x is a list, this should be a list of numpy arrays. If this is none, the function will pick a random tensor as the initial value.
-
ImplicitContainer<T>
delta - (optional) the amount of perturbation.
-
Nullable<ValueTuple<IEnumerable<object>, object>>
init_targets - list of targets to run to initialize model params.
-
IDictionary<object, object>
extra_feed_dict - dict that allows fixing specified tensor values during the Jacobian calculation.
Returns
-
object
- Two 2-d numpy arrays representing the theoretical and numerical Jacobian for dy/dx. Each has "x_size" rows and "y_size" columns where "x_size" is the number of elements in x and "y_size" is the number of elements in y. If x is a list, returns a list of two numpy arrays.
object compute_gradient(ValueTuple<double, object> x, IEnumerable<object> x_shape, object y, IEnumerable<int> y_shape, object x_init_value, ImplicitContainer<T> delta, Nullable<ValueTuple<IEnumerable<object>, object>> init_targets, IDictionary<object, object> extra_feed_dict)
Computes and returns the theoretical and numerical Jacobian. (deprecated) Warning: THIS FUNCTION IS DEPRECATED. It will be removed in a future version.
Instructions for updating:
Use tf.test.compute_gradient in 2.0, which has better support for functions. Note that the two versions have different usage, so a code change is needed.
If `x` or `y` is complex, the Jacobian will still be real, but the corresponding Jacobian dimension(s) will be twice as large. This is required even if both the input and the output are complex, since TensorFlow graphs are not necessarily holomorphic and may have gradients that are not expressible as complex numbers. For example, if `x` is complex with shape `[m]` and `y` is complex with shape `[n]`, each Jacobian `J` will have shape `[m * 2, n * 2]` with

J[:m, :n] = d(Re y)/d(Re x)
J[:m, n:] = d(Im y)/d(Re x)
J[m:, :n] = d(Re y)/d(Im x)
J[m:, n:] = d(Im y)/d(Im x)
Parameters
-
ValueTuple<double, object>
x - a tensor or list of tensors
-
IEnumerable<object>
x_shape - the dimensions of x as a tuple or an array of ints. If x is a list, then this is the list of shapes.
-
object
y - a tensor
-
IEnumerable<int>
y_shape - the dimensions of y as a tuple or an array of ints.
-
object
x_init_value - (optional) a numpy array of the same shape as "x" representing the initial value of x. If x is a list, this should be a list of numpy arrays. If this is none, the function will pick a random tensor as the initial value.
-
ImplicitContainer<T>
delta - (optional) the amount of perturbation.
-
Nullable<ValueTuple<IEnumerable<object>, object>>
init_targets - list of targets to run to initialize model params.
-
IDictionary<object, object>
extra_feed_dict - dict that allows fixing specified tensor values during the Jacobian calculation.
Returns
-
object
- Two 2-d numpy arrays representing the theoretical and numerical Jacobian for dy/dx. Each has "x_size" rows and "y_size" columns where "x_size" is the number of elements in x and "y_size" is the number of elements in y. If x is a list, returns a list of two numpy arrays.
object compute_gradient(ValueTuple<double, object> x, IEnumerable<object> x_shape, object y, TensorShape y_shape, object x_init_value, ImplicitContainer<T> delta, Nullable<ValueTuple<IEnumerable<object>, object>> init_targets, IDictionary<object, object> extra_feed_dict)
Computes and returns the theoretical and numerical Jacobian. (deprecated) Warning: THIS FUNCTION IS DEPRECATED. It will be removed in a future version.
Instructions for updating:
Use tf.test.compute_gradient in 2.0, which has better support for functions. Note that the two versions have different usage, so a code change is needed.
If `x` or `y` is complex, the Jacobian will still be real, but the corresponding Jacobian dimension(s) will be twice as large. This is required even if both the input and the output are complex, since TensorFlow graphs are not necessarily holomorphic and may have gradients that are not expressible as complex numbers. For example, if `x` is complex with shape `[m]` and `y` is complex with shape `[n]`, each Jacobian `J` will have shape `[m * 2, n * 2]` with

J[:m, :n] = d(Re y)/d(Re x)
J[:m, n:] = d(Im y)/d(Re x)
J[m:, :n] = d(Re y)/d(Im x)
J[m:, n:] = d(Im y)/d(Im x)
Parameters
-
ValueTuple<double, object>
x - a tensor or list of tensors
-
IEnumerable<object>
x_shape - the dimensions of x as a tuple or an array of ints. If x is a list, then this is the list of shapes.
-
object
y - a tensor
-
TensorShape
y_shape - the dimensions of y as a tuple or an array of ints.
-
object
x_init_value - (optional) a numpy array of the same shape as "x" representing the initial value of x. If x is a list, this should be a list of numpy arrays. If this is none, the function will pick a random tensor as the initial value.
-
ImplicitContainer<T>
delta - (optional) the amount of perturbation.
-
Nullable<ValueTuple<IEnumerable<object>, object>>
init_targets - list of targets to run to initialize model params.
-
IDictionary<object, object>
extra_feed_dict - dict that allows fixing specified tensor values during the Jacobian calculation.
Returns
-
object
- Two 2-d numpy arrays representing the theoretical and numerical Jacobian for dy/dx. Each has "x_size" rows and "y_size" columns where "x_size" is the number of elements in x and "y_size" is the number of elements in y. If x is a list, returns a list of two numpy arrays.
object compute_gradient(ValueTuple<double, object> x, TensorShape x_shape, object y, IEnumerable<int> y_shape, object x_init_value, ImplicitContainer<T> delta, Nullable<ValueTuple<IEnumerable<object>, object>> init_targets, IDictionary<object, object> extra_feed_dict)
Computes and returns the theoretical and numerical Jacobian. (deprecated) Warning: THIS FUNCTION IS DEPRECATED. It will be removed in a future version.
Instructions for updating:
Use tf.test.compute_gradient in 2.0, which has better support for functions. Note that the two versions have different usage, so a code change is needed.
If `x` or `y` is complex, the Jacobian will still be real, but the corresponding Jacobian dimension(s) will be twice as large. This is required even if both the input and the output are complex, since TensorFlow graphs are not necessarily holomorphic and may have gradients that are not expressible as complex numbers. For example, if `x` is complex with shape `[m]` and `y` is complex with shape `[n]`, each Jacobian `J` will have shape `[m * 2, n * 2]` with

J[:m, :n] = d(Re y)/d(Re x)
J[:m, n:] = d(Im y)/d(Re x)
J[m:, :n] = d(Re y)/d(Im x)
J[m:, n:] = d(Im y)/d(Im x)
Parameters
-
ValueTuple<double, object>
x - a tensor or list of tensors
-
TensorShape
x_shape - the dimensions of x as a tuple or an array of ints. If x is a list, then this is the list of shapes.
-
object
y - a tensor
-
IEnumerable<int>
y_shape - the dimensions of y as a tuple or an array of ints.
-
object
x_init_value - (optional) a numpy array of the same shape as "x" representing the initial value of x. If x is a list, this should be a list of numpy arrays. If this is none, the function will pick a random tensor as the initial value.
-
ImplicitContainer<T>
delta - (optional) the amount of perturbation.
-
Nullable<ValueTuple<IEnumerable<object>, object>>
init_targets - list of targets to run to initialize model params.
-
IDictionary<object, object>
extra_feed_dict - dict that allows fixing specified tensor values during the Jacobian calculation.
Returns
-
object
- Two 2-d numpy arrays representing the theoretical and numerical Jacobian for dy/dx. Each has "x_size" rows and "y_size" columns where "x_size" is the number of elements in x and "y_size" is the number of elements in y. If x is a list, returns a list of two numpy arrays.
object compute_gradient(ValueTuple<double, object> x, TensorShape x_shape, object y, TensorShape y_shape, object x_init_value, ImplicitContainer<T> delta, Nullable<ValueTuple<IEnumerable<object>, object>> init_targets, IDictionary<object, object> extra_feed_dict)
Computes and returns the theoretical and numerical Jacobian. (deprecated) Warning: THIS FUNCTION IS DEPRECATED. It will be removed in a future version.
Instructions for updating:
Use tf.test.compute_gradient in 2.0, which has better support for functions. Note that the two versions have different usage, so a code change is needed.
If `x` or `y` is complex, the Jacobian will still be real, but the corresponding Jacobian dimension(s) will be twice as large. This is required even if both the input and the output are complex, since TensorFlow graphs are not necessarily holomorphic and may have gradients that are not expressible as complex numbers. For example, if `x` is complex with shape `[m]` and `y` is complex with shape `[n]`, each Jacobian `J` will have shape `[m * 2, n * 2]` with

J[:m, :n] = d(Re y)/d(Re x)
J[:m, n:] = d(Im y)/d(Re x)
J[m:, :n] = d(Re y)/d(Im x)
J[m:, n:] = d(Im y)/d(Im x)
Parameters
-
ValueTuple<double, object>
x - a tensor or list of tensors
-
TensorShape
x_shape - the dimensions of x as a tuple or an array of ints. If x is a list, then this is the list of shapes.
-
object
y - a tensor
-
TensorShape
y_shape - the dimensions of y as a tuple or an array of ints.
-
object
x_init_value - (optional) a numpy array of the same shape as "x" representing the initial value of x. If x is a list, this should be a list of numpy arrays. If this is none, the function will pick a random tensor as the initial value.
-
ImplicitContainer<T>
delta - (optional) the amount of perturbation.
-
Nullable<ValueTuple<IEnumerable<object>, object>>
init_targets - list of targets to run to initialize model params.
-
IDictionary<object, object>
extra_feed_dict - dict that allows fixing specified tensor values during the Jacobian calculation.
Returns
-
object
- Two 2-d numpy arrays representing the theoretical and numerical Jacobian for dy/dx. Each has "x_size" rows and "y_size" columns where "x_size" is the number of elements in x and "y_size" is the number of elements in y. If x is a list, returns a list of two numpy arrays.
object compute_gradient(RaggedTensor x, IEnumerable<object> x_shape, object y, IEnumerable<int> y_shape, object x_init_value, ImplicitContainer<T> delta, Nullable<ValueTuple<IEnumerable<object>, object>> init_targets, IDictionary<object, object> extra_feed_dict)
Computes and returns the theoretical and numerical Jacobian. (deprecated) Warning: THIS FUNCTION IS DEPRECATED. It will be removed in a future version.
Instructions for updating:
Use tf.test.compute_gradient in 2.0, which has better support for functions. Note that the two versions have different usage, so a code change is needed.
If `x` or `y` is complex, the Jacobian will still be real, but the corresponding Jacobian dimension(s) will be twice as large. This is required even if both the input and the output are complex, since TensorFlow graphs are not necessarily holomorphic and may have gradients that are not expressible as complex numbers. For example, if `x` is complex with shape `[m]` and `y` is complex with shape `[n]`, each Jacobian `J` will have shape `[m * 2, n * 2]` with

J[:m, :n] = d(Re y)/d(Re x)
J[:m, n:] = d(Im y)/d(Re x)
J[m:, :n] = d(Re y)/d(Im x)
J[m:, n:] = d(Im y)/d(Im x)
Parameters
-
RaggedTensor
x - a tensor or list of tensors
-
IEnumerable<object>
x_shape - the dimensions of x as a tuple or an array of ints. If x is a list, then this is the list of shapes.
-
object
y - a tensor
-
IEnumerable<int>
y_shape - the dimensions of y as a tuple or an array of ints.
-
object
x_init_value - (optional) a numpy array of the same shape as "x" representing the initial value of x. If x is a list, this should be a list of numpy arrays. If this is none, the function will pick a random tensor as the initial value.
-
ImplicitContainer<T>
delta - (optional) the amount of perturbation.
-
Nullable<ValueTuple<IEnumerable<object>, object>>
init_targets - list of targets to run to initialize model params.
-
IDictionary<object, object>
extra_feed_dict - dict that allows fixing specified tensor values during the Jacobian calculation.
Returns
-
object
- Two 2-d numpy arrays representing the theoretical and numerical Jacobian for dy/dx. Each has "x_size" rows and "y_size" columns where "x_size" is the number of elements in x and "y_size" is the number of elements in y. If x is a list, returns a list of two numpy arrays.
object compute_gradient(RaggedTensor x, IEnumerable<object> x_shape, object y, TensorShape y_shape, object x_init_value, ImplicitContainer<T> delta, Nullable<ValueTuple<IEnumerable<object>, object>> init_targets, IDictionary<object, object> extra_feed_dict)
Computes and returns the theoretical and numerical Jacobian. (deprecated) Warning: THIS FUNCTION IS DEPRECATED. It will be removed in a future version.
Instructions for updating:
Use tf.test.compute_gradient in 2.0, which has better support for functions. Note that the two versions have different usage, so a code change is needed.
If `x` or `y` is complex, the Jacobian will still be real, but the corresponding Jacobian dimension(s) will be twice as large. This is required even if both the input and the output are complex, since TensorFlow graphs are not necessarily holomorphic and may have gradients that are not expressible as complex numbers. For example, if `x` is complex with shape `[m]` and `y` is complex with shape `[n]`, each Jacobian `J` will have shape `[m * 2, n * 2]` with

J[:m, :n] = d(Re y)/d(Re x)
J[:m, n:] = d(Im y)/d(Re x)
J[m:, :n] = d(Re y)/d(Im x)
J[m:, n:] = d(Im y)/d(Im x)
Parameters
-
RaggedTensor
x - a tensor or list of tensors
-
IEnumerable<object>
x_shape - the dimensions of x as a tuple or an array of ints. If x is a list, then this is the list of shapes.
-
object
y - a tensor
-
TensorShape
y_shape - the dimensions of y as a tuple or an array of ints.
-
object
x_init_value - (optional) a numpy array of the same shape as "x" representing the initial value of x. If x is a list, this should be a list of numpy arrays. If this is none, the function will pick a random tensor as the initial value.
-
ImplicitContainer<T>
delta - (optional) the amount of perturbation.
-
Nullable<ValueTuple<IEnumerable<object>, object>>
init_targets - list of targets to run to initialize model params.
-
IDictionary<object, object>
extra_feed_dict - dict that allows fixing specified tensor values during the Jacobian calculation.
Returns
-
object
- Two 2-d numpy arrays representing the theoretical and numerical Jacobian for dy/dx. Each has "x_size" rows and "y_size" columns where "x_size" is the number of elements in x and "y_size" is the number of elements in y. If x is a list, returns a list of two numpy arrays.
object compute_gradient(RaggedTensor x, TensorShape x_shape, object y, IEnumerable<int> y_shape, object x_init_value, ImplicitContainer<T> delta, Nullable<ValueTuple<IEnumerable<object>, object>> init_targets, IDictionary<object, object> extra_feed_dict)
Computes and returns the theoretical and numerical Jacobian. (deprecated) Warning: THIS FUNCTION IS DEPRECATED. It will be removed in a future version.
Instructions for updating:
Use tf.test.compute_gradient in 2.0, which has better support for functions. Note that the two versions have different usage, so a code change is needed.
If `x` or `y` is complex, the Jacobian will still be real, but the corresponding Jacobian dimension(s) will be twice as large. This is required even if both the input and the output are complex, since TensorFlow graphs are not necessarily holomorphic and may have gradients that are not expressible as complex numbers. For example, if `x` is complex with shape `[m]` and `y` is complex with shape `[n]`, each Jacobian `J` will have shape `[m * 2, n * 2]` with

J[:m, :n] = d(Re y)/d(Re x)
J[:m, n:] = d(Im y)/d(Re x)
J[m:, :n] = d(Re y)/d(Im x)
J[m:, n:] = d(Im y)/d(Im x)
Parameters
-
RaggedTensor
x - a tensor or list of tensors
-
TensorShape
x_shape - the dimensions of x as a tuple or an array of ints. If x is a list, then this is the list of shapes.
-
object
y - a tensor
-
IEnumerable<int>
y_shape - the dimensions of y as a tuple or an array of ints.
-
object
x_init_value - (optional) a numpy array of the same shape as "x" representing the initial value of x. If x is a list, this should be a list of numpy arrays. If this is none, the function will pick a random tensor as the initial value.
-
ImplicitContainer<T>
delta - (optional) the amount of perturbation.
-
Nullable<ValueTuple<IEnumerable<object>, object>>
init_targets - list of targets to run to initialize model params.
-
IDictionary<object, object>
extra_feed_dict - dict that allows fixing specified tensor values during the Jacobian calculation.
Returns
-
object
- Two 2-d numpy arrays representing the theoretical and numerical Jacobian for dy/dx. Each has "x_size" rows and "y_size" columns where "x_size" is the number of elements in x and "y_size" is the number of elements in y. If x is a list, returns a list of two numpy arrays.
object compute_gradient(RaggedTensor x, TensorShape x_shape, object y, TensorShape y_shape, object x_init_value, ImplicitContainer<T> delta, Nullable<ValueTuple<IEnumerable<object>, object>> init_targets, IDictionary<object, object> extra_feed_dict)
Computes and returns the theoretical and numerical Jacobian. (deprecated) Warning: THIS FUNCTION IS DEPRECATED. It will be removed in a future version.
Instructions for updating:
Use tf.test.compute_gradient in 2.0, which has better support for functions. Note that the two versions have different usage, so a code change is needed.
If `x` or `y` is complex, the Jacobian will still be real, but the corresponding Jacobian dimension(s) will be twice as large. This is required even if both the input and the output are complex, since TensorFlow graphs are not necessarily holomorphic and may have gradients that are not expressible as complex numbers. For example, if `x` is complex with shape `[m]` and `y` is complex with shape `[n]`, each Jacobian `J` will have shape `[m * 2, n * 2]` with

J[:m, :n] = d(Re y)/d(Re x)
J[:m, n:] = d(Im y)/d(Re x)
J[m:, :n] = d(Re y)/d(Im x)
J[m:, n:] = d(Im y)/d(Im x)
Parameters
-
RaggedTensor
x - a tensor or list of tensors
-
TensorShape
x_shape - the dimensions of x as a tuple or an array of ints. If x is a list, then this is the list of shapes.
-
object
y - a tensor
-
TensorShape
y_shape - the dimensions of y as a tuple or an array of ints.
-
object
x_init_value - (optional) a numpy array of the same shape as "x" representing the initial value of x. If x is a list, this should be a list of numpy arrays. If this is none, the function will pick a random tensor as the initial value.
-
ImplicitContainer<T>
delta - (optional) the amount of perturbation.
-
Nullable<ValueTuple<IEnumerable<object>, object>>
init_targets - list of targets to run to initialize model params.
-
IDictionary<object, object>
extra_feed_dict - dict that allows fixing specified tensor values during the Jacobian calculation.
Returns
-
object
- Two 2-d numpy arrays representing the theoretical and numerical Jacobian for dy/dx. Each has "x_size" rows and "y_size" columns where "x_size" is the number of elements in x and "y_size" is the number of elements in y. If x is a list, returns a list of two numpy arrays.
object compute_gradient(IGraphNodeBase x, IEnumerable<object> x_shape, object y, IEnumerable<int> y_shape, object x_init_value, ImplicitContainer<T> delta, Nullable<ValueTuple<IEnumerable<object>, object>> init_targets, IDictionary<object, object> extra_feed_dict)
Computes and returns the theoretical and numerical Jacobian. (deprecated) Warning: THIS FUNCTION IS DEPRECATED. It will be removed in a future version.
Instructions for updating:
Use tf.test.compute_gradient in 2.0, which has better support for functions. Note that the two versions have different usage, so a code change is needed.
If `x` or `y` is complex, the Jacobian will still be real, but the corresponding Jacobian dimension(s) will be twice as large. This is required even if both the input and the output are complex, since TensorFlow graphs are not necessarily holomorphic and may have gradients that are not expressible as complex numbers. For example, if `x` is complex with shape `[m]` and `y` is complex with shape `[n]`, each Jacobian `J` will have shape `[m * 2, n * 2]` with

J[:m, :n] = d(Re y)/d(Re x)
J[:m, n:] = d(Im y)/d(Re x)
J[m:, :n] = d(Re y)/d(Im x)
J[m:, n:] = d(Im y)/d(Im x)
Parameters
-
IGraphNodeBase
x - a tensor or list of tensors
-
IEnumerable<object>
x_shape - the dimensions of x as a tuple or an array of ints. If x is a list, then this is the list of shapes.
-
object
y - a tensor
-
IEnumerable<int>
y_shape - the dimensions of y as a tuple or an array of ints.
-
object
x_init_value - (optional) a numpy array of the same shape as "x" representing the initial value of x. If x is a list, this should be a list of numpy arrays. If this is none, the function will pick a random tensor as the initial value.
-
ImplicitContainer<T>
delta - (optional) the amount of perturbation.
-
Nullable<ValueTuple<IEnumerable<object>, object>>
init_targets - list of targets to run to initialize model params.
-
IDictionary<object, object>
extra_feed_dict - dict that allows fixing specified tensor values during the Jacobian calculation.
Returns
-
object
- Two 2-d numpy arrays representing the theoretical and numerical Jacobian for dy/dx. Each has "x_size" rows and "y_size" columns where "x_size" is the number of elements in x and "y_size" is the number of elements in y. If x is a list, returns a list of two numpy arrays.
object compute_gradient(IGraphNodeBase x, IEnumerable<object> x_shape, object y, TensorShape y_shape, object x_init_value, ImplicitContainer<T> delta, Nullable<ValueTuple<IEnumerable<object>, object>> init_targets, IDictionary<object, object> extra_feed_dict)
Computes and returns the theoretical and numerical Jacobian. (deprecated) Warning: THIS FUNCTION IS DEPRECATED. It will be removed in a future version.
Instructions for updating:
Use tf.test.compute_gradient in 2.0, which has better support for functions. Note that the two versions have different usage, so a code change is needed.
If `x` or `y` is complex, the Jacobian will still be real, but the corresponding Jacobian dimension(s) will be twice as large. This is required even if both the input and the output are complex, since TensorFlow graphs are not necessarily holomorphic and may have gradients that are not expressible as complex numbers. For example, if `x` is complex with shape `[m]` and `y` is complex with shape `[n]`, each Jacobian `J` will have shape `[m * 2, n * 2]` with

J[:m, :n] = d(Re y)/d(Re x)
J[:m, n:] = d(Im y)/d(Re x)
J[m:, :n] = d(Re y)/d(Im x)
J[m:, n:] = d(Im y)/d(Im x)
Parameters
-
IGraphNodeBase
x - a tensor or list of tensors
-
IEnumerable<object>
x_shape - the dimensions of x as a tuple or an array of ints. If x is a list, then this is the list of shapes.
-
object
y - a tensor
-
TensorShape
y_shape - the dimensions of y as a tuple or an array of ints.
-
object
x_init_value - (optional) a numpy array of the same shape as "x" representing the initial value of x. If x is a list, this should be a list of numpy arrays. If this is none, the function will pick a random tensor as the initial value.
-
ImplicitContainer<T>
delta - (optional) the amount of perturbation.
-
Nullable<ValueTuple<IEnumerable<object>, object>>
init_targets - list of targets to run to initialize model params.
-
IDictionary<object, object>
extra_feed_dict - dict that allows fixing specified tensor values during the Jacobian calculation.
Returns
-
object
- Two 2-d numpy arrays representing the theoretical and numerical Jacobian for dy/dx. Each has "x_size" rows and "y_size" columns where "x_size" is the number of elements in x and "y_size" is the number of elements in y. If x is a list, returns a list of two numpy arrays.
object compute_gradient(ndarray x, IEnumerable<object> x_shape, object y, IEnumerable<int> y_shape, object x_init_value, ImplicitContainer<T> delta, Nullable<ValueTuple<IEnumerable<object>, object>> init_targets, IDictionary<object, object> extra_feed_dict)
Computes and returns the theoretical and numerical Jacobian. (deprecated) Warning: THIS FUNCTION IS DEPRECATED. It will be removed in a future version.
Instructions for updating:
Use tf.test.compute_gradient in 2.0, which has better support for functions. Note that the two versions have different usage, so a code change is needed.
If `x` or `y` is complex, the Jacobian will still be real, but the corresponding Jacobian dimension(s) will be twice as large. This is required even if both the input and the output are complex, since TensorFlow graphs are not necessarily holomorphic and may have gradients that are not expressible as complex numbers. For example, if `x` is complex with shape `[m]` and `y` is complex with shape `[n]`, each Jacobian `J` will have shape `[m * 2, n * 2]` with

J[:m, :n] = d(Re y)/d(Re x)
J[:m, n:] = d(Im y)/d(Re x)
J[m:, :n] = d(Re y)/d(Im x)
J[m:, n:] = d(Im y)/d(Im x)
Parameters
-
ndarray
x - a tensor or list of tensors
-
IEnumerable<object>
x_shape - the dimensions of x as a tuple or an array of ints. If x is a list, then this is the list of shapes.
-
object
y - a tensor
-
IEnumerable<int>
y_shape - the dimensions of y as a tuple or an array of ints.
-
object
x_init_value - (optional) a numpy array of the same shape as "x" representing the initial value of x. If x is a list, this should be a list of numpy arrays. If this is none, the function will pick a random tensor as the initial value.
-
ImplicitContainer<T>
delta - (optional) the amount of perturbation.
-
Nullable<ValueTuple<IEnumerable<object>, object>>
init_targets - list of targets to run to initialize model params.
-
IDictionary<object, object>
extra_feed_dict - dict that allows fixing specified tensor values during the Jacobian calculation.
Returns
-
object
- Two 2-d numpy arrays representing the theoretical and numerical Jacobian for dy/dx. Each has "x_size" rows and "y_size" columns where "x_size" is the number of elements in x and "y_size" is the number of elements in y. If x is a list, returns a list of two numpy arrays.
object compute_gradient(IGraphNodeBase x, TensorShape x_shape, object y, IEnumerable<int> y_shape, object x_init_value, ImplicitContainer<T> delta, Nullable<ValueTuple<IEnumerable<object>, object>> init_targets, IDictionary<object, object> extra_feed_dict)
Computes and returns the theoretical and numerical Jacobian. (deprecated) Warning: THIS FUNCTION IS DEPRECATED. It will be removed in a future version.
Instructions for updating:
Use tf.test.compute_gradient in 2.0, which has better support for functions. Note that the two versions have different usage, so a code change is needed.
If `x` or `y` is complex, the Jacobian will still be real, but the corresponding Jacobian dimension(s) will be twice as large. This is required even if both the input and the output are complex, since TensorFlow graphs are not necessarily holomorphic and may have gradients that are not expressible as complex numbers. For example, if `x` is complex with shape `[m]` and `y` is complex with shape `[n]`, each Jacobian `J` will have shape `[m * 2, n * 2]` with

J[:m, :n] = d(Re y)/d(Re x)
J[:m, n:] = d(Im y)/d(Re x)
J[m:, :n] = d(Re y)/d(Im x)
J[m:, n:] = d(Im y)/d(Im x)
Parameters
-
IGraphNodeBase
x - a tensor or list of tensors
-
TensorShape
x_shape - the dimensions of x as a tuple or an array of ints. If x is a list, then this is the list of shapes.
-
object
y - a tensor
-
IEnumerable<int>
y_shape - the dimensions of y as a tuple or an array of ints.
-
object
x_init_value - (optional) a numpy array of the same shape as "x" representing the initial value of x. If x is a list, this should be a list of numpy arrays. If this is none, the function will pick a random tensor as the initial value.
-
ImplicitContainer<T>
delta - (optional) the amount of perturbation.
-
Nullable<ValueTuple<IEnumerable<object>, object>>
init_targets - list of targets to run to initialize model params.
-
IDictionary<object, object>
extra_feed_dict - dict that allows fixing specified tensor values during the Jacobian calculation.
Returns
-
object
- Two 2-d numpy arrays representing the theoretical and numerical Jacobian for dy/dx. Each has "x_size" rows and "y_size" columns where "x_size" is the number of elements in x and "y_size" is the number of elements in y. If x is a list, returns a list of two numpy arrays.
object compute_gradient_dyn(object x, object x_shape, object y, object y_shape, object x_init_value, ImplicitContainer<T> delta, object init_targets, object extra_feed_dict)
Computes and returns the theoretical and numerical Jacobian. (deprecated) Warning: THIS FUNCTION IS DEPRECATED. It will be removed in a future version.
Instructions for updating:
Use tf.test.compute_gradient in 2.0, which has better support for functions. Note that the two versions have different usage, so a code change is needed. If `x` or `y` is complex, the Jacobian will still be real but the
corresponding Jacobian dimension(s) will be twice as large. This is required
even if both input and output are complex, since TensorFlow graphs are not
necessarily holomorphic and may have gradients not expressible as complex
numbers. For example, if `x` is complex with shape `[m]` and `y` is complex
with shape `[n]`, each Jacobian `J` will have shape `[m * 2, n * 2]` with
J[:m, :n] = d(Re y)/d(Re x)
J[:m, n:] = d(Im y)/d(Re x)
J[m:, :n] = d(Re y)/d(Im x)
J[m:, n:] = d(Im y)/d(Im x)
Parameters
-
object
x - a tensor or list of tensors
-
object
x_shape - the dimensions of x as a tuple or an array of ints. If x is a list, then this is the list of shapes.
-
object
y - a tensor
-
object
y_shape - the dimensions of y as a tuple or an array of ints.
-
object
x_init_value - (optional) a numpy array of the same shape as "x" representing the initial value of x. If x is a list, this should be a list of numpy arrays. If this is none, the function will pick a random tensor as the initial value.
-
ImplicitContainer<T>
delta - (optional) the amount of perturbation.
-
object
init_targets - list of targets to run to initialize model params.
-
object
extra_feed_dict - dict that allows fixing specified tensor values during the Jacobian calculation.
Returns
-
object
- Two 2-d numpy arrays representing the theoretical and numerical Jacobian for dy/dx. Each has "x_size" rows and "y_size" columns where "x_size" is the number of elements in x and "y_size" is the number of elements in y. If x is a list, returns a list of two numpy arrays.
int compute_gradient_error(RaggedTensor x, ValueTuple<int, object, int> x_shape, object y, object y_shape, object x_init_value, double delta, object init_targets, IDictionary<object, object> extra_feed_dict)
Computes the gradient error. (deprecated) Warning: THIS FUNCTION IS DEPRECATED. It will be removed in a future version.
Instructions for updating:
Use tf.test.compute_gradient in 2.0, which has better support for functions. Note that the two versions have different usage, so a code change is needed. Computes the maximum error for dy/dx between the
computed Jacobian and the numerically estimated Jacobian. This function will modify the tensors passed in, as it adds more operations
and hence changes the consumers of the operations of the input tensors. This function adds operations to the current session. To compute the error
using a particular device, such as a GPU, use the standard methods for
setting a device (e.g. using with sess.graph.device() or setting a device
function in the session constructor).
Parameters
-
RaggedTensor
x - a tensor or list of tensors
-
ValueTuple<int, object, int>
x_shape - the dimensions of x as a tuple or an array of ints. If x is a list, then this is the list of shapes.
-
object
y - a tensor
-
object
y_shape - the dimensions of y as a tuple or an array of ints.
-
object
x_init_value - (optional) a numpy array of the same shape as "x" representing the initial value of x. If x is a list, this should be a list of numpy arrays. If this is none, the function will pick a random tensor as the initial value.
-
double
delta - (optional) the amount of perturbation.
-
object
init_targets - list of targets to run to initialize model params.
-
IDictionary<object, object>
extra_feed_dict - dict that allows fixing specified tensor values during the Jacobian calculation.
Returns
-
int
- The maximum error between the two Jacobians.
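A hedged sketch of how this error value is typically consumed in a TF 1.x gradient test (the op under test, shapes, and tolerance below are illustrative assumptions):
import tensorflow.compat.v1 as tf
tf.disable_eager_execution()

x = tf.placeholder(tf.float64, shape=[2, 2])
y = tf.nn.softmax(x)
with tf.Session():
    # Maximum absolute difference between the symbolic and numeric Jacobians.
    err = tf.test.compute_gradient_error(x, [2, 2], y, [2, 2], delta=1e-3)
    assert err < 1e-4, "gradient check failed: %g" % err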
int compute_gradient_error(ValueTuple<double, object> x, ValueTuple<int, object, int> x_shape, object y, object y_shape, object x_init_value, double delta, object init_targets, IDictionary<object, object> extra_feed_dict)
Computes the gradient error. (deprecated) Warning: THIS FUNCTION IS DEPRECATED. It will be removed in a future version.
Instructions for updating:
Use tf.test.compute_gradient in 2.0, which has better support for functions. Note that the two versions have different usage, so a code change is needed. Computes the maximum error for dy/dx between the
computed Jacobian and the numerically estimated Jacobian. This function will modify the tensors passed in, as it adds more operations
and hence changes the consumers of the operations of the input tensors. This function adds operations to the current session. To compute the error
using a particular device, such as a GPU, use the standard methods for
setting a device (e.g. using with sess.graph.device() or setting a device
function in the session constructor).
Parameters
-
ValueTuple<double, object>
x - a tensor or list of tensors
-
ValueTuple<int, object, int>
x_shape - the dimensions of x as a tuple or an array of ints. If x is a list, then this is the list of shapes.
-
object
y - a tensor
-
object
y_shape - the dimensions of y as a tuple or an array of ints.
-
object
x_init_value - (optional) a numpy array of the same shape as "x" representing the initial value of x. If x is a list, this should be a list of numpy arrays. If this is none, the function will pick a random tensor as the initial value.
-
double
delta - (optional) the amount of perturbation.
-
object
init_targets - list of targets to run to initialize model params.
-
IDictionary<object, object>
extra_feed_dict - dict that allows fixing specified tensor values during the Jacobian calculation.
Returns
-
int
- The maximum error between the two Jacobians.
int compute_gradient_error(ValueTuple<double, object> x, TensorShape x_shape, object y, object y_shape, object x_init_value, double delta, object init_targets, IDictionary<object, object> extra_feed_dict)
Computes the gradient error. (deprecated) Warning: THIS FUNCTION IS DEPRECATED. It will be removed in a future version.
Instructions for updating:
Use tf.test.compute_gradient in 2.0, which has better support for functions. Note that the two versions have different usage, so a code change is needed. Computes the maximum error for dy/dx between the
computed Jacobian and the numerically estimated Jacobian. This function will modify the tensors passed in, as it adds more operations
and hence changes the consumers of the operations of the input tensors. This function adds operations to the current session. To compute the error
using a particular device, such as a GPU, use the standard methods for
setting a device (e.g. using with sess.graph.device() or setting a device
function in the session constructor).
Parameters
-
ValueTuple<double, object>
x - a tensor or list of tensors
-
TensorShape
x_shape - the dimensions of x as a tuple or an array of ints. If x is a list, then this is the list of shapes.
-
object
y - a tensor
-
object
y_shape - the dimensions of y as a tuple or an array of ints.
-
object
x_init_value - (optional) a numpy array of the same shape as "x" representing the initial value of x. If x is a list, this should be a list of numpy arrays. If this is none, the function will pick a random tensor as the initial value.
-
double
delta - (optional) the amount of perturbation.
-
object
init_targets - list of targets to run to initialize model params.
-
IDictionary<object, object>
extra_feed_dict - dict that allows fixing specified tensor values during the Jacobian calculation.
Returns
-
int
- The maximum error between the two Jacobians.
int compute_gradient_error(RaggedTensor x, IEnumerable<object> x_shape, object y, object y_shape, object x_init_value, double delta, object init_targets, IDictionary<object, object> extra_feed_dict)
Computes the gradient error. (deprecated) Warning: THIS FUNCTION IS DEPRECATED. It will be removed in a future version.
Instructions for updating:
Use tf.test.compute_gradient in 2.0, which has better support for functions. Note that the two versions have different usage, so a code change is needed. Computes the maximum error for dy/dx between the
computed Jacobian and the numerically estimated Jacobian. This function will modify the tensors passed in, as it adds more operations
and hence changes the consumers of the operations of the input tensors. This function adds operations to the current session. To compute the error
using a particular device, such as a GPU, use the standard methods for
setting a device (e.g. using with sess.graph.device() or setting a device
function in the session constructor).
Parameters
-
RaggedTensor
x - a tensor or list of tensors
-
IEnumerable<object>
x_shape - the dimensions of x as a tuple or an array of ints. If x is a list, then this is the list of shapes.
-
object
y - a tensor
-
object
y_shape - the dimensions of y as a tuple or an array of ints.
-
object
x_init_value - (optional) a numpy array of the same shape as "x" representing the initial value of x. If x is a list, this should be a list of numpy arrays. If this is none, the function will pick a random tensor as the initial value.
-
double
delta - (optional) the amount of perturbation.
-
object
init_targets - list of targets to run to initialize model params.
-
IDictionary<object, object>
extra_feed_dict - dict that allows fixing specified tensor values during the Jacobian calculation.
Returns
-
int
- The maximum error between the two Jacobians.
int compute_gradient_error(IEnumerable<IGraphNodeBase> x, ValueTuple<int, object, int> x_shape, object y, object y_shape, object x_init_value, double delta, object init_targets, IDictionary<object, object> extra_feed_dict)
Computes the gradient error. (deprecated) Warning: THIS FUNCTION IS DEPRECATED. It will be removed in a future version.
Instructions for updating:
Use tf.test.compute_gradient in 2.0, which has better support for functions. Note that the two versions have different usage, so a code change is needed. Computes the maximum error for dy/dx between the
computed Jacobian and the numerically estimated Jacobian. This function will modify the tensors passed in, as it adds more operations
and hence changes the consumers of the operations of the input tensors. This function adds operations to the current session. To compute the error
using a particular device, such as a GPU, use the standard methods for
setting a device (e.g. using with sess.graph.device() or setting a device
function in the session constructor).
Parameters
-
IEnumerable<IGraphNodeBase>
x - a tensor or list of tensors
-
ValueTuple<int, object, int>
x_shape - the dimensions of x as a tuple or an array of ints. If x is a list, then this is the list of shapes.
-
object
y - a tensor
-
object
y_shape - the dimensions of y as a tuple or an array of ints.
-
object
x_init_value - (optional) a numpy array of the same shape as "x" representing the initial value of x. If x is a list, this should be a list of numpy arrays. If this is none, the function will pick a random tensor as the initial value.
-
double
delta - (optional) the amount of perturbation.
-
object
init_targets - list of targets to run to initialize model params.
-
IDictionary<object, object>
extra_feed_dict - dict that allows fixing specified tensor values during the Jacobian calculation.
Returns
-
int
- The maximum error between the two Jacobians.
int compute_gradient_error(RaggedTensor x, TensorShape x_shape, object y, object y_shape, object x_init_value, double delta, object init_targets, IDictionary<object, object> extra_feed_dict)
Computes the gradient error. (deprecated) Warning: THIS FUNCTION IS DEPRECATED. It will be removed in a future version.
Instructions for updating:
Use tf.test.compute_gradient in 2.0, which has better support for functions. Note that the two versions have different usage, so a code change is needed. Computes the maximum error for dy/dx between the
computed Jacobian and the numerically estimated Jacobian. This function will modify the tensors passed in, as it adds more operations
and hence changes the consumers of the operations of the input tensors. This function adds operations to the current session. To compute the error
using a particular device, such as a GPU, use the standard methods for
setting a device (e.g. using with sess.graph.device() or setting a device
function in the session constructor).
Parameters
-
RaggedTensor
x - a tensor or list of tensors
-
TensorShape
x_shape - the dimensions of x as a tuple or an array of ints. If x is a list, then this is the list of shapes.
-
object
y - a tensor
-
object
y_shape - the dimensions of y as a tuple or an array of ints.
-
object
x_init_value - (optional) a numpy array of the same shape as "x" representing the initial value of x. If x is a list, this should be a list of numpy arrays. If this is none, the function will pick a random tensor as the initial value.
-
double
delta - (optional) the amount of perturbation.
-
object
init_targets - list of targets to run to initialize model params.
-
IDictionary<object, object>
extra_feed_dict - dict that allows fixing specified tensor values during the Jacobian calculation.
Returns
-
int
- The maximum error between the two Jacobians.
int compute_gradient_error(IGraphNodeBase x, IEnumerable<object> x_shape, object y, object y_shape, object x_init_value, double delta, object init_targets, IDictionary<object, object> extra_feed_dict)
Computes the gradient error. (deprecated) Warning: THIS FUNCTION IS DEPRECATED. It will be removed in a future version.
Instructions for updating:
Use tf.test.compute_gradient in 2.0, which has better support for functions. Note that the two versions have different usage, so a code change is needed. Computes the maximum error for dy/dx between the
computed Jacobian and the numerically estimated Jacobian. This function will modify the tensors passed in, as it adds more operations
and hence changes the consumers of the operations of the input tensors. This function adds operations to the current session. To compute the error
using a particular device, such as a GPU, use the standard methods for
setting a device (e.g. using with sess.graph.device() or setting a device
function in the session constructor).
Parameters
-
IGraphNodeBase
x - a tensor or list of tensors
-
IEnumerable<object>
x_shape - the dimensions of x as a tuple or an array of ints. If x is a list, then this is the list of shapes.
-
object
y - a tensor
-
object
y_shape - the dimensions of y as a tuple or an array of ints.
-
object
x_init_value - (optional) a numpy array of the same shape as "x" representing the initial value of x. If x is a list, this should be a list of numpy arrays. If this is none, the function will pick a random tensor as the initial value.
-
double
delta - (optional) the amount of perturbation.
-
object
init_targets - list of targets to run to initialize model params.
-
IDictionary<object, object>
extra_feed_dict - dict that allows fixing specified tensor values during the Jacobian calculation.
Returns
-
int
- The maximum error between the two Jacobians.
int compute_gradient_error(IGraphNodeBase x, ValueTuple<int, object, int> x_shape, object y, object y_shape, object x_init_value, double delta, object init_targets, IDictionary<object, object> extra_feed_dict)
Computes the gradient error. (deprecated) Warning: THIS FUNCTION IS DEPRECATED. It will be removed in a future version.
Instructions for updating:
Use tf.test.compute_gradient in 2.0, which has better support for functions. Note that the two versions have different usage, so a code change is needed. Computes the maximum error for dy/dx between the
computed Jacobian and the numerically estimated Jacobian. This function will modify the tensors passed in, as it adds more operations
and hence changes the consumers of the operations of the input tensors. This function adds operations to the current session. To compute the error
using a particular device, such as a GPU, use the standard methods for
setting a device (e.g. using with sess.graph.device() or setting a device
function in the session constructor).
Parameters
-
IGraphNodeBase
x - a tensor or list of tensors
-
ValueTuple<int, object, int>
x_shape - the dimensions of x as a tuple or an array of ints. If x is a list, then this is the list of shapes.
-
object
y - a tensor
-
object
y_shape - the dimensions of y as a tuple or an array of ints.
-
object
x_init_value - (optional) a numpy array of the same shape as "x" representing the initial value of x. If x is a list, this should be a list of numpy arrays. If this is none, the function will pick a random tensor as the initial value.
-
double
delta - (optional) the amount of perturbation.
-
object
init_targets - list of targets to run to initialize model params.
-
IDictionary<object, object>
extra_feed_dict - dict that allows fixing specified tensor values during the Jacobian calculation.
Returns
-
int
- The maximum error between the two Jacobians.
int compute_gradient_error(IGraphNodeBase x, TensorShape x_shape, object y, object y_shape, object x_init_value, double delta, object init_targets, IDictionary<object, object> extra_feed_dict)
Computes the gradient error. (deprecated) Warning: THIS FUNCTION IS DEPRECATED. It will be removed in a future version.
Instructions for updating:
Use tf.test.compute_gradient in 2.0, which has better support for functions. Note that the two versions have different usage, so a code change is needed. Computes the maximum error for dy/dx between the
computed Jacobian and the numerically estimated Jacobian. This function will modify the tensors passed in, as it adds more operations
and hence changes the consumers of the operations of the input tensors. This function adds operations to the current session. To compute the error
using a particular device, such as a GPU, use the standard methods for
setting a device (e.g. using with sess.graph.device() or setting a device
function in the session constructor).
Parameters
-
IGraphNodeBase
x - a tensor or list of tensors
-
TensorShape
x_shape - the dimensions of x as a tuple or an array of ints. If x is a list, then this is the list of shapes.
-
object
y - a tensor
-
object
y_shape - the dimensions of y as a tuple or an array of ints.
-
object
x_init_value - (optional) a numpy array of the same shape as "x" representing the initial value of x. If x is a list, this should be a list of numpy arrays. If this is none, the function will pick a random tensor as the initial value.
-
double
delta - (optional) the amount of perturbation.
-
object
init_targets - list of targets to run to initialize model params.
-
IDictionary<object, object>
extra_feed_dict - dict that allows fixing specified tensor values during the Jacobian calculation.
Returns
-
int
- The maximum error between the two Jacobians.
int compute_gradient_error(ValueTuple<double, object> x, IEnumerable<object> x_shape, object y, object y_shape, object x_init_value, double delta, object init_targets, IDictionary<object, object> extra_feed_dict)
Computes the gradient error. (deprecated) Warning: THIS FUNCTION IS DEPRECATED. It will be removed in a future version.
Instructions for updating:
Use tf.test.compute_gradient in 2.0, which has better support for functions. Note that the two versions have different usage, so a code change is needed. Computes the maximum error for dy/dx between the
computed Jacobian and the numerically estimated Jacobian. This function will modify the tensors passed in, as it adds more operations
and hence changes the consumers of the operations of the input tensors. This function adds operations to the current session. To compute the error
using a particular device, such as a GPU, use the standard methods for
setting a device (e.g. using with sess.graph.device() or setting a device
function in the session constructor).
Parameters
-
ValueTuple<double, object>
x - a tensor or list of tensors
-
IEnumerable<object>
x_shape - the dimensions of x as a tuple or an array of ints. If x is a list, then this is the list of shapes.
-
object
y - a tensor
-
object
y_shape - the dimensions of y as a tuple or an array of ints.
-
object
x_init_value - (optional) a numpy array of the same shape as "x" representing the initial value of x. If x is a list, this should be a list of numpy arrays. If this is none, the function will pick a random tensor as the initial value.
-
double
delta - (optional) the amount of perturbation.
-
object
init_targets - list of targets to run to initialize model params.
-
IDictionary<object, object>
extra_feed_dict - dict that allows fixing specified tensor values during the Jacobian calculation.
Returns
-
int
- The maximum error between the two Jacobians.
int compute_gradient_error(IEnumerable<IGraphNodeBase> x, TensorShape x_shape, object y, object y_shape, object x_init_value, double delta, object init_targets, IDictionary<object, object> extra_feed_dict)
Computes the gradient error. (deprecated) Warning: THIS FUNCTION IS DEPRECATED. It will be removed in a future version.
Instructions for updating:
Use tf.test.compute_gradient in 2.0, which has better support for functions. Note that the two versions have different usage, so a code change is needed. Computes the maximum error for dy/dx between the
computed Jacobian and the numerically estimated Jacobian. This function will modify the tensors passed in, as it adds more operations
and hence changes the consumers of the operations of the input tensors. This function adds operations to the current session. To compute the error
using a particular device, such as a GPU, use the standard methods for
setting a device (e.g. using with sess.graph.device() or setting a device
function in the session constructor).
Parameters
-
IEnumerable<IGraphNodeBase>
x - a tensor or list of tensors
-
TensorShape
x_shape - the dimensions of x as a tuple or an array of ints. If x is a list, then this is the list of shapes.
-
object
y - a tensor
-
object
y_shape - the dimensions of y as a tuple or an array of ints.
-
object
x_init_value - (optional) a numpy array of the same shape as "x" representing the initial value of x. If x is a list, this should be a list of numpy arrays. If this is none, the function will pick a random tensor as the initial value.
-
double
delta - (optional) the amount of perturbation.
-
object
init_targets - list of targets to run to initialize model params.
-
IDictionary<object, object>
extra_feed_dict - dict that allows fixing specified tensor values during the Jacobian calculation.
Returns
-
int
- The maximum error between the two Jacobians.
int compute_gradient_error(IDictionary<object, object> x, TensorShape x_shape, object y, object y_shape, object x_init_value, double delta, object init_targets, IDictionary<object, object> extra_feed_dict)
Computes the gradient error. (deprecated) Warning: THIS FUNCTION IS DEPRECATED. It will be removed in a future version.
Instructions for updating:
Use tf.test.compute_gradient in 2.0, which has better support for functions. Note that the two versions have different usage, so a code change is needed. Computes the maximum error for dy/dx between the
computed Jacobian and the numerically estimated Jacobian. This function will modify the tensors passed in, as it adds more operations
and hence changes the consumers of the operations of the input tensors. This function adds operations to the current session. To compute the error
using a particular device, such as a GPU, use the standard methods for
setting a device (e.g. using with sess.graph.device() or setting a device
function in the session constructor).
Parameters
-
IDictionary<object, object>
x - a tensor or list of tensors
-
TensorShape
x_shape - the dimensions of x as a tuple or an array of ints. If x is a list, then this is the list of shapes.
-
object
y - a tensor
-
object
y_shape - the dimensions of y as a tuple or an array of ints.
-
object
x_init_value - (optional) a numpy array of the same shape as "x" representing the initial value of x. If x is a list, this should be a list of numpy arrays. If this is none, the function will pick a random tensor as the initial value.
-
double
delta - (optional) the amount of perturbation.
-
object
init_targets - list of targets to run to initialize model params.
-
IDictionary<object, object>
extra_feed_dict - dict that allows fixing specified tensor values during the Jacobian calculation.
Returns
-
int
- The maximum error between the two Jacobians.
int compute_gradient_error(IDictionary<object, object> x, ValueTuple<int, object, int> x_shape, object y, object y_shape, object x_init_value, double delta, object init_targets, IDictionary<object, object> extra_feed_dict)
Computes the gradient error. (deprecated) Warning: THIS FUNCTION IS DEPRECATED. It will be removed in a future version.
Instructions for updating:
Use tf.test.compute_gradient in 2.0, which has better support for functions. Note that the two versions have different usage, so a code change is needed. Computes the maximum error for dy/dx between the
computed Jacobian and the numerically estimated Jacobian. This function will modify the tensors passed in, as it adds more operations
and hence changes the consumers of the operations of the input tensors. This function adds operations to the current session. To compute the error
using a particular device, such as a GPU, use the standard methods for
setting a device (e.g. using with sess.graph.device() or setting a device
function in the session constructor).
Parameters
-
IDictionary<object, object>
x - a tensor or list of tensors
-
ValueTuple<int, object, int>
x_shape - the dimensions of x as a tuple or an array of ints. If x is a list, then this is the list of shapes.
-
object
y - a tensor
-
object
y_shape - the dimensions of y as a tuple or an array of ints.
-
object
x_init_value - (optional) a numpy array of the same shape as "x" representing the initial value of x. If x is a list, this should be a list of numpy arrays. If this is none, the function will pick a random tensor as the initial value.
-
double
delta - (optional) the amount of perturbation.
-
object
init_targets - list of targets to run to initialize model params.
-
IDictionary<object, object>
extra_feed_dict - dict that allows fixing specified tensor values during the Jacobian calculation.
Returns
-
int
- The maximum error between the two Jacobians.
int compute_gradient_error(IDictionary<object, object> x, IEnumerable<object> x_shape, object y, object y_shape, object x_init_value, double delta, object init_targets, IDictionary<object, object> extra_feed_dict)
Computes the gradient error. (deprecated) Warning: THIS FUNCTION IS DEPRECATED. It will be removed in a future version.
Instructions for updating:
Use tf.test.compute_gradient in 2.0, which has better support for functions. Note that the two versions have different usage, so a code change is needed. Computes the maximum error for dy/dx between the
computed Jacobian and the numerically estimated Jacobian. This function will modify the tensors passed in, as it adds more operations
and hence changes the consumers of the operations of the input tensors. This function adds operations to the current session. To compute the error
using a particular device, such as a GPU, use the standard methods for
setting a device (e.g. using with sess.graph.device() or setting a device
function in the session constructor).
Parameters
-
IDictionary<object, object>
x - a tensor or list of tensors
-
IEnumerable<object>
x_shape - the dimensions of x as a tuple or an array of ints. If x is a list, then this is the list of shapes.
-
object
y - a tensor
-
object
y_shape - the dimensions of y as a tuple or an array of ints.
-
object
x_init_value - (optional) a numpy array of the same shape as "x" representing the initial value of x. If x is a list, this should be a list of numpy arrays. If this is none, the function will pick a random tensor as the initial value.
-
double
delta - (optional) the amount of perturbation.
-
object
init_targets - list of targets to run to initialize model params.
-
IDictionary<object, object>
extra_feed_dict - dict that allows fixing specified tensor values during the Jacobian calculation.
Returns
-
int
- The maximum error between the two Jacobians.
int compute_gradient_error(ndarray x, TensorShape x_shape, object y, object y_shape, object x_init_value, double delta, object init_targets, IDictionary<object, object> extra_feed_dict)
Computes the gradient error. (deprecated) Warning: THIS FUNCTION IS DEPRECATED. It will be removed in a future version.
Instructions for updating:
Use tf.test.compute_gradient in 2.0, which has better support for functions. Note that the two versions have different usage, so a code change is needed. Computes the maximum error for dy/dx between the
computed Jacobian and the numerically estimated Jacobian. This function will modify the tensors passed in, as it adds more operations
and hence changes the consumers of the operations of the input tensors. This function adds operations to the current session. To compute the error
using a particular device, such as a GPU, use the standard methods for
setting a device (e.g. using with sess.graph.device() or setting a device
function in the session constructor).
Parameters
-
ndarray
x - a tensor or list of tensors
-
TensorShape
x_shape - the dimensions of x as a tuple or an array of ints. If x is a list, then this is the list of shapes.
-
object
y - a tensor
-
object
y_shape - the dimensions of y as a tuple or an array of ints.
-
object
x_init_value - (optional) a numpy array of the same shape as "x" representing the initial value of x. If x is a list, this should be a list of numpy arrays. If this is none, the function will pick a random tensor as the initial value.
-
double
delta - (optional) the amount of perturbation.
-
object
init_targets - list of targets to run to initialize model params.
-
IDictionary<object, object>
extra_feed_dict - dict that allows fixing specified tensor values during the Jacobian calculation.
Returns
-
int
- The maximum error between the two Jacobians.
int compute_gradient_error(ndarray x, ValueTuple<int, object, int> x_shape, object y, object y_shape, object x_init_value, double delta, object init_targets, IDictionary<object, object> extra_feed_dict)
Computes the gradient error. (deprecated) Warning: THIS FUNCTION IS DEPRECATED. It will be removed in a future version.
Instructions for updating:
Use tf.test.compute_gradient in 2.0, which has better support for functions. Note that the two versions have different usage, so a code change is needed. Computes the maximum error for dy/dx between the
computed Jacobian and the numerically estimated Jacobian. This function will modify the tensors passed in, as it adds more operations
and hence changes the consumers of the operations of the input tensors. This function adds operations to the current session. To compute the error
using a particular device, such as a GPU, use the standard methods for
setting a device (e.g. using with sess.graph.device() or setting a device
function in the session constructor).
Parameters
-
ndarray
x - a tensor or list of tensors
-
ValueTuple<int, object, int>
x_shape - the dimensions of x as a tuple or an array of ints. If x is a list, then this is the list of shapes.
-
object
y - a tensor
-
object
y_shape - the dimensions of y as a tuple or an array of ints.
-
object
x_init_value - (optional) a numpy array of the same shape as "x" representing the initial value of x. If x is a list, this should be a list of numpy arrays. If this is none, the function will pick a random tensor as the initial value.
-
double
delta - (optional) the amount of perturbation.
-
object
init_targets - list of targets to run to initialize model params.
-
IDictionary<object, object>
extra_feed_dict - dict that allows fixing specified tensor values during the Jacobian calculation.
Returns
-
int
- The maximum error between the two Jacobians.
int compute_gradient_error(ndarray x, IEnumerable<object> x_shape, object y, object y_shape, object x_init_value, double delta, object init_targets, IDictionary<object, object> extra_feed_dict)
Computes the gradient error. (deprecated) Warning: THIS FUNCTION IS DEPRECATED. It will be removed in a future version.
Instructions for updating:
Use tf.test.compute_gradient in 2.0, which has better support for functions. Note that the two versions have different usage, so a code change is needed. Computes the maximum error for dy/dx between the
computed Jacobian and the numerically estimated Jacobian. This function will modify the tensors passed in, as it adds more operations
and hence changes the consumers of the operations of the input tensors. This function adds operations to the current session. To compute the error
using a particular device, such as a GPU, use the standard methods for
setting a device (e.g. using with sess.graph.device() or setting a device
function in the session constructor).
Parameters
-
ndarray
x - a tensor or list of tensors
-
IEnumerable<object>
x_shape - the dimensions of x as a tuple or an array of ints. If x is a list, then this is the list of shapes.
-
object
y - a tensor
-
object
y_shape - the dimensions of y as a tuple or an array of ints.
-
object
x_init_value - (optional) a numpy array of the same shape as "x" representing the initial value of x. If x is a list, this should be a list of numpy arrays. If this is none, the function will pick a random tensor as the initial value.
-
double
delta - (optional) the amount of perturbation.
-
object
init_targets - list of targets to run to initialize model params.
-
IDictionary<object, object>
extra_feed_dict - dict that allows fixing specified tensor values during the Jacobian calculation.
Returns
-
int
- The maximum error between the two Jacobians.
int compute_gradient_error(IEnumerable<IGraphNodeBase> x, IEnumerable<object> x_shape, object y, object y_shape, object x_init_value, double delta, object init_targets, IDictionary<object, object> extra_feed_dict)
Computes the gradient error. (deprecated) Warning: THIS FUNCTION IS DEPRECATED. It will be removed in a future version.
Instructions for updating:
Use tf.test.compute_gradient in 2.0, which has better support for functions. Note that the two versions have different usage, so a code change is needed. Computes the maximum error for dy/dx between the
computed Jacobian and the numerically estimated Jacobian. This function will modify the tensors passed in, as it adds more operations
and hence changes the consumers of the operations of the input tensors. This function adds operations to the current session. To compute the error
using a particular device, such as a GPU, use the standard methods for
setting a device (e.g. using with sess.graph.device() or setting a device
function in the session constructor).
Parameters
-
IEnumerable<IGraphNodeBase>
x - a tensor or list of tensors
-
IEnumerable<object>
x_shape - the dimensions of x as a tuple or an array of ints. If x is a list, then this is the list of shapes.
-
object
y - a tensor
-
object
y_shape - the dimensions of y as a tuple or an array of ints.
-
object
x_init_value - (optional) a numpy array of the same shape as "x" representing the initial value of x. If x is a list, this should be a list of numpy arrays. If this is none, the function will pick a random tensor as the initial value.
-
double
delta - (optional) the amount of perturbation.
-
object
init_targets - list of targets to run to initialize model params.
-
IDictionary<object, object>
extra_feed_dict - dict that allows fixing specified tensor values during the Jacobian calculation.
Returns
-
int
- The maximum error between the two Jacobians.
object compute_gradient_error_dyn(object x, object x_shape, object y, object y_shape, object x_init_value, ImplicitContainer<T> delta, object init_targets, object extra_feed_dict)
Computes the gradient error. (deprecated) Warning: THIS FUNCTION IS DEPRECATED. It will be removed in a future version.
Instructions for updating:
Use tf.test.compute_gradient in 2.0, which has better support for functions. Note that the two versions have different usage, so a code change is needed. Computes the maximum error for dy/dx between the
computed Jacobian and the numerically estimated Jacobian. This function will modify the tensors passed in, as it adds more operations
and hence changes the consumers of the operations of the input tensors. This function adds operations to the current session. To compute the error
using a particular device, such as a GPU, use the standard methods for
setting a device (e.g. using with sess.graph.device() or setting a device
function in the session constructor).
Parameters
-
object
x - a tensor or list of tensors
-
object
x_shape - the dimensions of x as a tuple or an array of ints. If x is a list, then this is the list of shapes.
-
object
y - a tensor
-
object
y_shape - the dimensions of y as a tuple or an array of ints.
-
object
x_init_value - (optional) a numpy array of the same shape as "x" representing the initial value of x. If x is a list, this should be a list of numpy arrays. If this is none, the function will pick a random tensor as the initial value.
-
ImplicitContainer<T>
delta - (optional) the amount of perturbation.
-
object
init_targets - list of targets to run to initialize model params.
-
object
extra_feed_dict - dict that allows fixing specified tensor values during the Jacobian calculation.
Returns
-
object
- The maximum error between the two Jacobians.
ValueTuple<IList<Server>, object> create_local_cluster(int num_workers, int num_ps, string protocol, object worker_config, object ps_config)
Create and start local servers and return the associated `Server` objects. "PS" stands for "parameter server": a task responsible for storing and
updating the model's parameters. Other tasks send updates to these parameters
as they work on optimizing the parameters. This particular division of labor
between tasks is not required, but is common for distributed training. Read more at https://www.tensorflow.org/guide/extend/architecture ![components](https://www.tensorflow.org/images/diag1.svg "components") The figure illustrates the interaction of these components:
"/job:worker/task:0" and "/job:ps/task:0" are both tasks with worker services. See the example below.
Parameters
-
int
num_workers - Number of worker servers to start.
-
int
num_ps - Number of PS servers to start.
-
string
protocol - Communication protocol. Allowed values are documented in the documentation of tf.distribute.Server.
-
object
worker_config - (optional) tf.ConfigProto to initialize workers. Can be used to instantiate multiple devices etc.
-
object
ps_config - (optional) tf.ConfigProto to initialize PS servers.
Returns
-
ValueTuple<IList<Server>, object>
- A tuple `(worker_servers, ps_servers)`. `worker_servers` is a list of `num_workers` objects of type tf.distribute.Server (all running locally); and `ps_servers` is a list of `num_ps` objects of similar type.
Show Example
workers, _ = tf.test.create_local_cluster(num_workers=2, num_ps=2)
worker_sessions = [tf.compat.v1.Session(w.target) for w in workers]

with tf.device("/job:ps/task:0"):
  ...
with tf.device("/job:ps/task:1"):
  ...

with tf.device("/job:worker/task:0"):
  ...
with tf.device("/job:worker/task:1"):
  ...

worker_sessions[0].run(...)
object create_local_cluster_dyn(object num_workers, object num_ps, ImplicitContainer<T> protocol, object worker_config, object ps_config)
Create and start local servers and return the associated `Server` objects. "PS" stands for "parameter server": a task responsible for storing and
updating the model's parameters. Other tasks send updates to these parameters
as they work on optimizing the parameters. This particular division of labor
between tasks is not required, but is common for distributed training. Read more at https://www.tensorflow.org/guide/extend/architecture ![components](https://www.tensorflow.org/images/diag1.svg "components") The figure illustrates the interaction of these components:
"/job:worker/task:0" and "/job:ps/task:0" are both tasks with worker services. See the example below.
Parameters
-
object
num_workers - Number of worker servers to start.
-
object
num_ps - Number of PS servers to start.
-
ImplicitContainer<T>
protocol - Communication protocol. Allowed values are documented in the documentation of tf.distribute.Server.
-
object
worker_config - (optional) tf.ConfigProto to initialize workers. Can be used to instantiate multiple devices etc.
-
object
ps_config - (optional) tf.ConfigProto to initialize PS servers.
Returns
-
object
- A tuple `(worker_servers, ps_servers)`. `worker_servers` is a list of `num_workers` objects of type tf.distribute.Server (all running locally); and `ps_servers` is a list of `num_ps` objects of similar type.
Show Example
workers, _ = tf.test.create_local_cluster(num_workers=2, num_ps=2)
worker_sessions = [tf.compat.v1.Session(w.target) for w in workers]

with tf.device("/job:ps/task:0"):
  ...
with tf.device("/job:ps/task:1"):
  ...

with tf.device("/job:worker/task:0"):
  ...
with tf.device("/job:worker/task:1"):
  ...

worker_sessions[0].run(...)
string gpu_device_name()
Returns the name of a GPU device if available or the empty string.
object gpu_device_name_dyn()
Returns the name of a GPU device if available or the empty string.
object is_built_with_cuda()
Returns whether TensorFlow was built with CUDA (GPU) support.
object is_built_with_cuda_dyn()
Returns whether TensorFlow was built with CUDA (GPU) support.
object is_built_with_gpu_support()
Returns whether TensorFlow was built with GPU (i.e. CUDA or ROCm) support.
object is_built_with_gpu_support_dyn()
Returns whether TensorFlow was built with GPU (i.e. CUDA or ROCm) support.
object is_built_with_rocm()
Returns whether TensorFlow was built with ROCm (GPU) support.
object is_built_with_rocm_dyn()
Returns whether TensorFlow was built with ROCm (GPU) support.
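The build-introspection helpers above are commonly combined when deciding which device-specific assertions a test should make; a small sketch using the plain TensorFlow Python API:
import tensorflow as tf

# True when the binary was built against either CUDA or ROCm.
print(tf.test.is_built_with_gpu_support())
print(tf.test.is_built_with_cuda())   # CUDA-specific
print(tf.test.is_built_with_rocm())   # ROCm-specific

# Empty string when no GPU device is visible to TensorFlow.
name = tf.test.gpu_device_name()
print(name or "no GPU device found")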
bool is_gpu_available(bool cuda_only, Nullable<ValueTuple<int, int>> min_cuda_compute_capability)
Returns whether TensorFlow can access a GPU. Warning: if a non-GPU version of the package is installed, the function will
also return False. Use tf.test.is_built_with_cuda to validate whether TensorFlow was built with CUDA support.
Parameters
-
bool
cuda_only - limit the search to CUDA GPUs.
-
Nullable<ValueTuple<int, int>>
min_cuda_compute_capability - a (major, minor) pair that indicates the minimum CUDA compute capability required, or None if no requirement. Note that the keyword argument name "cuda_only" is misleading, since the routine will return true when a GPU device is available irrespective of whether TF was built with CUDA support or with ROCm support. It is nonetheless kept as-is, because: changing the name "cuda_only" to something more generic would break backward compatibility; adding an equivalent "rocm_only" would require the implementation to check the build type, which in turn would require doing the same for CUDA and thus potentially break backward compatibility; and adding a new "cuda_or_rocm_only" would not break backward compatibility, but would require most (if not all) callers to update the call to use "cuda_or_rocm_only" instead of "cuda_only".
Returns
-
bool
- True if a GPU device of the requested kind is available.
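For example, a test can gate GPU-only behaviour on this check; the (3, 5) minimum compute capability below is an illustrative assumption, not a requirement of the API:
import tensorflow as tf

if tf.test.is_gpu_available(cuda_only=True,
                            min_cuda_compute_capability=(3, 5)):
    device = tf.test.gpu_device_name()   # e.g. "/device:GPU:0"
else:
    device = "/cpu:0"

with tf.device(device):
    x = tf.random.uniform([4, 4])
    print(tf.matmul(x, x))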
object is_gpu_available_dyn(ImplicitContainer<T> cuda_only, object min_cuda_compute_capability)
Returns whether TensorFlow can access a GPU. Warning: if a non-GPU version of the package is installed, the function will
also return False. Use tf.test.is_built_with_cuda to validate whether TensorFlow was built with CUDA support.
Parameters
-
ImplicitContainer<T>
cuda_only - limit the search to CUDA GPUs.
-
object
min_cuda_compute_capability - a (major, minor) pair that indicates the minimum CUDA compute capability required, or None if no requirement. Note that the keyword argument name "cuda_only" is misleading, since the routine will return true when a GPU device is available irrespective of whether TF was built with CUDA support or with ROCm support. It is nonetheless kept as-is, because: changing the name "cuda_only" to something more generic would break backward compatibility; adding an equivalent "rocm_only" would require the implementation to check the build type, which in turn would require doing the same for CUDA and thus potentially break backward compatibility; and adding a new "cuda_or_rocm_only" would not break backward compatibility, but would require most (if not all) callers to update the call to use "cuda_or_rocm_only" instead of "cuda_only".
Returns
-
object
- True if a GPU device of the requested kind is available.