Type LinearOperatorDiag
Namespace tensorflow.linalg
Parent LinearOperator
Interfaces ILinearOperatorDiag
`LinearOperator` acting like a [batch] square diagonal matrix.

This operator acts like a [batch] diagonal matrix `A` with shape
`[B1,...,Bb, N, N]` for some `b >= 0`. The first `b` indices index a
batch member. For every batch index `(i1,...,ib)`, `A[i1,...,ib, :, :]` is
an `N x N` matrix. This matrix `A` is not materialized, but for
purposes of broadcasting this shape will be relevant.

`LinearOperatorDiag` is initialized with a (batch) vector of diagonal entries.
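For intuition, here is a minimal sketch (assuming TensorFlow's Python API, where this type is exposed as `tf.linalg.LinearOperatorDiag`) showing that the operator built from a diagonal vector behaves like the corresponding dense diagonal matrix without ever materializing it:

```
import tensorflow as tf

# The diagonal entries fully define the operator; the N x N matrix is never built.
diag = tf.constant([1., 2., 3.])
operator = tf.linalg.LinearOperatorDiag(diag)

x = tf.random.normal(shape=[3, 2])

# Applying the operator matches multiplication by the dense diagonal matrix.
dense = tf.linalg.diag(diag)            # shape [3, 3]
print(operator.matmul(x).numpy())
print(tf.matmul(dense, x).numpy())      # same values, up to float rounding
```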
#### Shape compatibility

This operator acts on a [batch] matrix with compatible shape.
`x` is a batch matrix with compatible shape for `matmul` and `solve` if

```
operator.shape = [B1,...,Bb] + [N, N],  with b >= 0
x.shape =        [C1,...,Cc] + [N, R],
and [C1,...,Cc] broadcasts with [B1,...,Bb] to [D1,...,Dd]
```
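As an illustration of this broadcasting rule, a small sketch (again assuming the TensorFlow Python API) where the batch dims `[2, 1]` of `x` broadcast against the operator's batch dims `[2, 3]`:

```
import tensorflow as tf

# operator.shape = [2, 3] + [4, 4], i.e. b = 2 batch dimensions.
diag = tf.random.normal(shape=[2, 3, 4])
operator = tf.linalg.LinearOperatorDiag(diag)

# x.shape = [2, 1] + [4, 5]; batch dims [2, 1] broadcast with [2, 3].
x = tf.random.normal(shape=[2, 1, 4, 5])

y = operator.matmul(x)
print(y.shape)  # (2, 3, 4, 5) -- batch dims broadcast to [2, 3]
```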
#### Performance

Suppose `operator` is a `LinearOperatorDiag` of shape `[N, N]`,
and `x.shape = [N, R]`. Then

* `operator.matmul(x)` involves `N * R` multiplications.
* `operator.solve(x)` involves `N` divisions and `N * R` multiplications.
* `operator.determinant()` involves a size `N` `reduce_prod`.

If instead `operator` and `x` have shape `[B1,...,Bb, N, N]` and
`[B1,...,Bb, N, R]`, every operation increases in complexity by `B1*...*Bb`.

#### Matrix property hints

This `LinearOperator` is initialized with boolean flags of the form `is_X`,
for `X = non_singular, self_adjoint, positive_definite, square`.
These have the following meaning:

* If `is_X == True`, callers should expect the operator to have the
property `X`. This is a promise that should be fulfilled, but is *not* a
runtime assert. For example, finite floating point precision may result
in these promises being violated.
* If `is_X == False`, callers should expect the operator to not have `X`.
* If `is_X == None` (the default), callers should have no expectation either
way.
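For example, a sketch of passing these hints at construction time (keyword names as in the TensorFlow Python constructor; since every diagonal entry here is positive, the promises are actually true):

```
import tensorflow as tf

diag = tf.constant([0.5, 2.0, 3.0])  # strictly positive diagonal
operator = tf.linalg.LinearOperatorDiag(
    diag,
    is_non_singular=True,
    is_self_adjoint=True,
    is_positive_definite=True,
    is_square=True)

print(operator.is_positive_definite)  # True -- a promise, not a runtime check
```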
#### Example

```
# Create a 2 x 2 diagonal linear operator.
diag = [1., -1.]
operator = LinearOperatorDiag(diag)

operator.to_dense()
==> [[1.,  0.]
     [0., -1.]]

operator.shape
==> [2, 2]

operator.log_abs_determinant()
==> scalar Tensor

x = ... Shape [2, 4] Tensor
operator.matmul(x)
==> Shape [2, 4] Tensor

# Create a [2, 3] batch of 4 x 4 linear operators.
diag = tf.random.normal(shape=[2, 3, 4])
operator = LinearOperatorDiag(diag)

# Create a shape [2, 1, 4, 2] vector. Note that this shape is compatible
# since the batch dimensions, [2, 1], are broadcast to
# operator.batch_shape = [2, 3].
y = tf.random.normal(shape=[2, 1, 4, 2])
x = operator.solve(y)
==> operator.matmul(x) = y
```
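A runnable variant of the batched part of the example (assuming `tf.linalg.LinearOperatorDiag`; the uniform diagonal simply keeps the system well conditioned) confirms that `solve` inverts `matmul`:

```
import tensorflow as tf

diag = tf.random.uniform(shape=[2, 3, 4], minval=0.5, maxval=2.0)
operator = tf.linalg.LinearOperatorDiag(diag)

y = tf.random.normal(shape=[2, 1, 4, 2])
x = operator.solve(y)                  # x.shape == [2, 3, 4, 2]

# matmul should reproduce y (broadcast against x's batch shape).
residual = tf.reduce_max(tf.abs(operator.matmul(x) - y))
print(float(residual))                 # ~0, up to floating point error
```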
Properties
- batch_shape
- batch_shape_dyn
- diag
- diag_dyn
- domain_dimension
- domain_dimension_dyn
- dtype
- dtype_dyn
- graph_parents
- graph_parents_dyn
- is_non_singular
- is_non_singular_dyn
- is_positive_definite
- is_positive_definite_dyn
- is_self_adjoint
- is_self_adjoint_dyn
- is_square
- is_square_dyn
- name
- name_dyn
- name_scope
- name_scope_dyn
- PythonObject
- range_dimension
- range_dimension_dyn
- shape
- shape_dyn
- submodules
- submodules_dyn
- tensor_rank
- tensor_rank_dyn
- trainable_variables
- trainable_variables_dyn
- variables
- variables_dyn