Type WALSModel
Namespace tensorflow.contrib.factorization
Parent PythonObjectContainer
Interfaces IWALSModel
A model for Weighted Alternating Least Squares matrix factorization. It minimizes the following loss function over U, V:
$$
\|\sqrt W \odot (A - U V^T)\|_F^2 + \lambda (\|U\|_F^2 + \|V\|_F^2)
$$
where,
A: input matrix,
W: weight matrix. Note that the (element-wise) square root of the weights
is used in the objective function.
U, V: row_factors and column_factors matrices,
\\(\lambda\\): regularization.
We also assume that W has the following special form:
\\( W_{ij} = W_0 + R_i * C_j \\) if \\(A_{ij} \ne 0\\),
\\( W_{ij} = W_0 \\) otherwise,
where,
\\(W_0\\): unobserved_weight,
\\(R_i\\): row_weights,
\\(C_j\\): col_weights.
Note that the current implementation supports two operation modes. The default
mode is for the case where row_factors and col_factors can each fit into the
memory of a single worker; they are then cached on the workers. When this
condition cannot be met, setting use_factors_weights_cache to False allows
larger problem sizes at a slight performance penalty: no worker caches are
created, and the relevant weight and factor values are instead looked up from
the parameter servers at each step.
Loss computation: the loss can be computed efficiently by decomposing it into
a sparse term and a Gramian term; see wals.md.
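To make the objective concrete before discussing normalization, here is a minimal NumPy sketch (not part of the library) that evaluates the loss directly from the definitions above; the 3x2 matrix, weights, and regularization value are illustrative assumptions.

import numpy as np

# Illustrative inputs (assumed values, not from the library).
A = np.array([[5., 0.],
              [0., 3.],
              [1., 2.]])                    # zeros = unobserved entries
w0, lam = 0.1, 0.01                         # unobserved_weight, regularization
R = np.array([1.0, 2.0, 1.5])               # row_weights
C = np.array([1.0, 0.5])                    # col_weights

# W_ij = w0 + R_i * C_j where A_ij != 0, and W_ij = w0 otherwise.
W = np.where(A != 0, w0 + np.outer(R, C), w0)

rng = np.random.default_rng(0)
U = rng.normal(size=(3, 2))                 # row factors, n_components = 2
V = rng.normal(size=(2, 2))                 # column factors

# ||sqrt(W) . (A - U V^T)||_F^2 + lambda * (||U||_F^2 + ||V||_F^2);
# squaring absorbs the element-wise square root of the weights.
loss = np.sum(W * (A - U @ V.T) ** 2) + lam * (np.sum(U**2) + np.sum(V**2))
print(loss)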
The loss is returned by update_{col,row}_factors(sp_input) and is
normalized as follows. Suppose
  _, _, unregularized_loss, regularization, sum_weights =
      update_row_factors(sp_input)
If sp_input contains the rows \\(\{A_i, i \in I\}\\) and the input matrix A
has n total rows, then the minibatch loss = unregularized_loss +
regularization is
$$
(\|\sqrt{W_I} \odot (A_I - U_I V^T)\|_F^2 + \lambda \|U_I\|_F^2) \cdot n / |I| +
\lambda \|V\|_F^2
$$
For example, with n = 100 total rows and a minibatch of |I| = 10 rows, the sparse term is scaled by a factor of 10.
\\(sum(W_I) * n / |I|\\). A typical usage example (pseudocode): with tf.Graph().as_default():
# Set up the model object.
model = tf.contrib.factorization.WALSModel(....) # To be run only once as part of session initialization. In distributed
# training setting, this should only be run by the chief trainer and all
# other trainers should block until this is done.
model_init_op = model.initialize_op # To be run once per worker after session is available, prior to
# the prep_gramian_op for row(column) can be run.
worker_init_op = model.worker_init # To be run once per iteration sweep before the row(column) update
# initialize ops can be run. Note that in the distributed training
# situations, this should only be run by the chief trainer. All other
# trainers need to block until this is done.
row_update_prep_gramian_op = model.row_update_prep_gramian_op
col_update_prep_gramian_op = model.col_update_prep_gramian_op # To be run once per worker per iteration sweep. Must be run before
# any actual update ops can be run.
init_row_update_op = model.initialize_row_update_op
init_col_update_op = model.initialize_col_update_op # Ops to update row(column). This can either take the entire sparse
# tensor or slices of sparse tensor. For distributed trainer, each
# trainer handles just part of the matrix.
_, row_update_op, unreg_row_loss, row_reg, _ = model.update_row_factors(
sp_input=matrix_slices_from_queue_for_worker_shard)
row_loss = unreg_row_loss + row_reg
_, col_update_op, unreg_col_loss, col_reg, _ = model.update_col_factors(
sp_input=transposed_matrix_slices_from_queue_for_worker_shard,
transpose_input=True)
col_loss = unreg_col_loss + col_reg ... # model_init_op is passed to Supervisor. Chief trainer runs it. Other
# trainers wait.
sv = tf.compat.v1.train.Supervisor(is_chief=is_chief,
...,
init_op=tf.group(..., model_init_op,...),...)
... with sv.managed_session(...) as sess:
# All workers/trainers run it after session becomes available.
worker_init_op.run(session=sess) ... while i in iterations: # All trainers need to sync up here.
while not_all_ready:
wait # Row update sweep.
if is_chief:
row_update_prep_gramian_op.run(session=sess)
else:
wait_for_chief # All workers run upate initialization.
init_row_update_op.run(session=sess) # Go through the matrix.
reset_matrix_slices_queue_for_worker_shard
while_matrix_slices:
row_update_op.run(session=sess) # All trainers need to sync up here.
while not_all_ready:
wait # Column update sweep.
if is_chief:
col_update_prep_gramian_op.run(session=sess)
else:
wait_for_chief # All workers run upate initialization.
init_col_update_op.run(session=sess) # Go through the matrix.
reset_transposed_matrix_slices_queue_for_worker_shard
while_transposed_matrix_slices:
col_update_op.run(session=sess)
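For a concrete, self-contained variant of the flow above, here is a hedged single-machine sketch, assuming a TensorFlow 1.x build where tf.contrib.factorization is available; the 4x3 matrix and all hyperparameters are illustrative assumptions, not library defaults.

import numpy as np
import tensorflow as tf

with tf.Graph().as_default():
  # Dense 4x3 example matrix; zeros are treated as unobserved entries.
  a = np.array([[1., 0., 3.],
                [0., 2., 0.],
                [4., 0., 0.],
                [0., 5., 6.]], dtype=np.float32)
  indices = np.array(np.nonzero(a)).T.astype(np.int64)  # observed (row, col)
  sp_a = tf.SparseTensor(indices, a[a != 0], a.shape)
  sp_a_t = tf.sparse.transpose(sp_a)

  model = tf.contrib.factorization.WALSModel(
      input_rows=4, input_cols=3, n_components=2,
      unobserved_weight=0.1, regularization=0.01,
      row_weights=1.0, col_weights=1.0)

  _, row_update_op, unreg_row_loss, row_reg, _ = model.update_row_factors(
      sp_input=sp_a)
  _, col_update_op, unreg_col_loss, col_reg, _ = model.update_col_factors(
      sp_input=sp_a_t, transpose_input=True)
  row_loss = unreg_row_loss + row_reg

  with tf.compat.v1.Session() as sess:
    # Single process, so it plays both chief and worker roles.
    sess.run(model.initialize_op)
    sess.run(model.worker_init)
    for sweep in range(10):
      # Row sweep: prep the Gramian, init worker state, update all rows.
      sess.run(model.row_update_prep_gramian_op)
      sess.run(model.initialize_row_update_op)
      sess.run(row_update_op)
      print('sweep %d row loss: %f' % (sweep, sess.run(row_loss)))
      # Column sweep: same pattern on the transposed input.
      sess.run(model.col_update_prep_gramian_op)
      sess.run(model.initialize_col_update_op)
      sess.run(col_update_op)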
Methods
- NewDyn
- project_col_factors
- project_col_factors_dyn
- project_row_factors
- project_row_factors_dyn
- scatter_update_dyn<TClass>
- scatter_update<TClass>
- update_col_factors
- update_col_factors_dyn
- update_row_factors
- update_row_factors_dyn
Properties
- col_factors
- col_factors_dyn
- col_update_prep_gramian_op
- col_update_prep_gramian_op_dyn
- col_weights
- col_weights_dyn
- initialize_col_update_op
- initialize_col_update_op_dyn
- initialize_op
- initialize_op_dyn
- initialize_row_update_op
- initialize_row_update_op_dyn
- PythonObject
- row_factors
- row_factors_dyn
- row_update_prep_gramian_op
- row_update_prep_gramian_op_dyn
- row_weights
- row_weights_dyn
- worker_init
- worker_init_dyn
Public instance methods
Tensor project_col_factors(SparseTensor sp_input, bool transpose_input, IEnumerable<double> projection_weights)
Projects the column factors. This computes the column embedding \(v_j\) for an observed column
\(a_j\) by solving one iteration of the update equations.
Parameters
-
SparseTensor
sp_input - A SparseTensor representing a set of columns. Please note that the row indices of this SparseTensor must match the model row feature indexing while the column indices are ignored. The returned results will be in the same ordering as the input columns.
-
bool
transpose_input - If true, the input will be logically transposed and the columns corresponding to the transposed input are projected.
-
IEnumerable<double>
projection_weights - The column weights to be used for the projection. If None then 1.0 is used. This can be either a scalar or a rank-1 tensor with the number of elements matching the number of columns to be projected. Note that the row weights will be determined by the underlying WALS model.
Returns
-
Tensor
- Projected column factors.
object project_col_factors_dyn(object sp_input, ImplicitContainer<T> transpose_input, object projection_weights)
Projects the column factors. This computes the column embedding \(v_j\) for an observed column
\(a_j\) by solving one iteration of the update equations.
Parameters
-
object
sp_input - A SparseTensor representing a set of columns. Please note that the row indices of this SparseTensor must match the model row feature indexing while the column indices are ignored. The returned results will be in the same ordering as the input columns.
-
ImplicitContainer<T>
transpose_input - If true, the input will be logically transposed and the columns corresponding to the transposed input are projected.
-
object
projection_weights - The column weights to be used for the projection. If None then 1.0 is used. This can be either a scalar or a rank-1 tensor with the number of elements matching the number of columns to be projected. Note that the row weights will be determined by the underlying WALS model.
Returns
-
object
- Projected column factors.
Tensor project_row_factors(SparseTensor sp_input, bool transpose_input, IEnumerable<double> projection_weights)
Projects the row factors. This computes the row embedding \(u_i\) for an observed row \(a_i\) by
solving one iteration of the update equations.
Parameters
-
SparseTensor
sp_input - A SparseTensor representing a set of rows. Please note that the column indices of this SparseTensor must match the model column feature indexing while the row indices are ignored. The returned results will be in the same ordering as the input rows.
-
bool
transpose_input - If true, the input will be logically transposed and the rows corresponding to the transposed input are projected.
-
IEnumerable<double>
projection_weights - The row weights to be used for the projection. If None then 1.0 is used. This can be either a scalar or a rank-1 tensor with the number of elements matching the number of rows to be projected. Note that the column weights will be determined by the underlying WALS model.
Returns
-
Tensor
- Projected row factors.
object project_row_factors_dyn(object sp_input, ImplicitContainer<T> transpose_input, object projection_weights)
Projects the row factors. This computes the row embedding \(u_i\) for an observed row \(a_i\) by
solving one iteration of the update equations.
Parameters
-
object
sp_input - A SparseTensor representing a set of rows. Please note that the column indices of this SparseTensor must match the model column feature indexing while the row indices are ignored. The returned results will be in the same ordering as the input rows.
-
ImplicitContainer<T>
transpose_input - If true, the input will be logically transposed and the rows corresponding to the transposed input are projected.
-
object
projection_weights - The row weights to be used for the projection. If None then 1.0 is used. This can be either a scalar or a rank-1 tensor with the number of elements matching the number of rows to be projected. Note that the column weights will be determined by the underlying WALS model.
Returns
-
object
- Projected row factors.
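As an illustration (an assumed continuation of the single-machine sketch above, not library sample code), projecting factors for two unseen rows might look like this. The column indices must follow the model's column indexing; the row indices within the SparseTensor are ignored.

# Two hypothetical new rows over the model's 3 columns (values assumed).
new_rows = tf.SparseTensor(
    indices=[[0, 0], [0, 2], [1, 1]],
    values=[2.0, 1.0, 4.0],
    dense_shape=[2, 3])

# One least-squares solve against the current column factors; the row
# weights default to 1.0 when projection_weights is None.
projected_u = model.project_row_factors(sp_input=new_rows)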
ValueTuple<Tensor, object, object, object, double> update_col_factors(SparseTensor sp_input, bool transpose_input)
Updates the column factors.
Parameters
-
SparseTensor
sp_input - A SparseTensor representing a subset of columns of the full input. Please refer to comments for update_row_factors for restrictions.
-
bool
transpose_input - If true, the input will be logically transposed and the columns corresponding to the transposed input are updated.
Returns
-
ValueTuple<Tensor, object, object, object, double>
- A tuple consisting of the following elements: new_values (new values for the column factors), update_op (an op that assigns the newly computed values to the column factors), unregularized_loss (the normalized minibatch loss corresponding to sp_input, without the regularization term), regularization (the normalized regularization term for that minibatch loss), and sum_weights (the normalized sum of weights corresponding to sp_input).
object update_col_factors_dyn(object sp_input, ImplicitContainer<T> transpose_input)
Updates the column factors.
Parameters
-
object
sp_input - A SparseTensor representing a subset of columns of the full input. Please refer to comments for update_row_factors for restrictions.
-
ImplicitContainer<T>
transpose_input - If true, the input will be logically transposed and the columns corresponding to the transposed input are updated.
Returns
-
object
- A tuple consisting of the following elements: new_values (new values for the column factors), update_op (an op that assigns the newly computed values to the column factors), unregularized_loss (the normalized minibatch loss corresponding to sp_input, without the regularization term), regularization (the normalized regularization term for that minibatch loss), and sum_weights (the normalized sum of weights corresponding to sp_input).
ValueTuple<Tensor, object, object, object, double> update_row_factors(SparseTensor sp_input, bool transpose_input)
Updates the row factors.
Parameters
-
SparseTensor
sp_input - A SparseTensor representing a subset of rows of the full input, in any order. Please note that this SparseTensor must retain the same indexing as the original input.
-
bool
transpose_input - If true, the input will be logically transposed and the rows corresponding to the transposed input are updated.
Returns
-
ValueTuple<Tensor, object, object, object, double>
- A tuple consisting of the following elements: new_values (new values for the row factors), update_op (an op that assigns the newly computed values to the row factors), unregularized_loss (the normalized minibatch loss corresponding to sp_input, without the regularization term), regularization (the normalized regularization term for that minibatch loss), and sum_weights (the normalized sum of weights, \\(sum(W_I) * n / |I|\\)).
object update_row_factors_dyn(object sp_input, ImplicitContainer<T> transpose_input)
Updates the row factors.
Parameters
-
object
sp_input - A SparseTensor representing a subset of rows of the full input, in any order. Please note that this SparseTensor must retain the same indexing as the original input.
-
ImplicitContainer<T>
transpose_input - If true, the input will be logically transposed and the rows corresponding to the transposed input are updated.
Returns
-
object
- A tuple consisting of the following elements: new_values (new values for the row factors), update_op (an op that assigns the newly computed values to the row factors), unregularized_loss (the normalized minibatch loss corresponding to sp_input, without the regularization term), regularization (the normalized regularization term for that minibatch loss), and sum_weights (the normalized sum of weights, \\(sum(W_I) * n / |I|\\)).
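As a usage note (a hedged sketch reusing sp_a and the model from the single-machine example above), the returned tuple can be unpacked to monitor training. The weighted-RMSE line is an arithmetic consequence of the normalization described earlier, not a separately documented formula.

# Unpack all five elements (names follow the loss-computation section above).
new_values, update_op, unreg_loss, reg, sum_weights = model.update_row_factors(
    sp_input=sp_a)

minibatch_loss = unreg_loss + reg            # scaled by n / |I|, see above
# The n / |I| factors cancel in the ratio, leaving a weighted mean of the
# squared residuals; its square root is a weighted RMSE over the minibatch.
weighted_rmse = tf.sqrt(unreg_loss / sum_weights)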
Public static methods
WALSModel NewDyn(object input_rows, object input_cols, object n_components, ImplicitContainer<T> unobserved_weight, object regularization, ImplicitContainer<T> row_init, ImplicitContainer<T> col_init, ImplicitContainer<T> num_row_shards, ImplicitContainer<T> num_col_shards, ImplicitContainer<T> row_weights, ImplicitContainer<T> col_weights, ImplicitContainer<T> use_factors_weights_cache, ImplicitContainer<T> use_gramian_cache, ImplicitContainer<T> use_scoped_vars)
Creates a model for WALS matrix factorization.
Parameters
-
object
input_rows - total number of rows for input matrix.
-
object
input_cols - total number of cols for input matrix.
-
object
n_components - number of dimensions to use for the factors.
-
ImplicitContainer<T>
unobserved_weight - weight given to unobserved entries of matrix.
-
object
regularization - weight of L2 regularization term. If None, no regularization is done.
-
ImplicitContainer<T>
row_init - initializer for row factor. Can be a tensor or numpy constant. If set to "random", the value is initialized randomly.
-
ImplicitContainer<T>
col_init - initializer for column factor. See row_init for details.
-
ImplicitContainer<T>
num_row_shards - number of shards to use for row factors.
-
ImplicitContainer<T>
num_col_shards - number of shards to use for column factors.
-
ImplicitContainer<T>
row_weights - Must be in one of the following three formats: None, a list of lists of non-negative real numbers (or equivalent iterables), or a single non-negative real number. - When set to None, \\(w_{ij}\\) = unobserved_weight, which simplifies to ALS. Note that col_weights must also be set to None in this case. - If it is a list of lists of non-negative real numbers, it needs to be in the form [[w_0, w_1,...], [w_k,...], [...]], with the number of inner lists matching the number of row factor shards, and the elements of each inner list being the weights for the rows of the corresponding row factor shard. In this case, \\(w_{ij}\\) = unobserved_weight + row_weights[i] * col_weights[j]. - If it is a single non-negative real number, this value is used for all row weights, and \\(w_{ij}\\) = unobserved_weight + row_weights * col_weights[j]. Note that row_weights may be a list while col_weights is a single number, or vice versa (see the sketch after this parameter list).
-
ImplicitContainer<T>
col_weights - See row_weights.
-
ImplicitContainer<T>
use_factors_weights_cache - When True, the factors and weights will be cached on the workers before the updates start. Defaults to True. Note that the weights cache is initialized through `worker_init`, and the row/col factors cache is initialized through `initialize_{col/row}_update_op`. In the case where the weights are computed externally and set before the training iterations start, it is important to ensure the `worker_init` op is run afterwards for the weights cache to take effect.
-
ImplicitContainer<T>
use_gramian_cache - When True, the Gramians will be cached on the workers before the updates start. Defaults to True.
-
ImplicitContainer<T>
use_scoped_vars - When True, the factor and weight vars will also be nested in a tf.name_scope.
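To illustrate the list-of-lists weight format described above, here is a hedged constructor sketch; the shard count and weight values are assumptions for illustration only.

# With num_row_shards=2 and 4 input rows, row_weights carries one inner list
# per row factor shard; on observed entries,
# w_ij = unobserved_weight + row_weights[i] * col_weights[j].
# col_weights may still be a single number.
model = tf.contrib.factorization.WALSModel(
    input_rows=4, input_cols=3, n_components=2,
    num_row_shards=2,
    row_weights=[[0.5, 1.0], [1.5, 2.0]],  # one inner list per shard
    col_weights=1.0)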
object scatter_update_dyn<TClass>(object factor, object indices, object values, object sharding_func, object name)
Helper function for doing sharded scatter update.
TClass scatter_update<TClass>(IEnumerable<Variable> factor, object indices, IGraphNodeBase values, object sharding_func, string name)
Helper function for doing sharded scatter update.
Public properties
IList<Variable> col_factors get;
Returns a list of tensors corresponding to column factor shards.
object col_factors_dyn get;
Returns a list of tensors corresponding to column factor shards.
object col_update_prep_gramian_op get;
Op to form the Gramian before starting col updates. Must be run before initialize_col_update_op, and should only be run by one
trainer (usually the chief) when doing distributed training.
object col_update_prep_gramian_op_dyn get;
Op to form the Gramian before starting col updates. Must be run before initialize_col_update_op, and should only be run by one
trainer (usually the chief) when doing distributed training.
IList<Variable> col_weights get;
Returns a list of tensors corresponding to col weight shards.
object col_weights_dyn get;
Returns a list of tensors corresponding to col weight shards.
object initialize_col_update_op get;
Op to initialize worker state before starting column updates.
object initialize_col_update_op_dyn get;
Op to initialize worker state before starting column updates.
object initialize_op get;
Returns an op for initializing tensorflow variables.
object initialize_op_dyn get;
Returns an op for initializing tensorflow variables.
object initialize_row_update_op get;
Op to initialize worker state before starting row updates.
object initialize_row_update_op_dyn get;
Op to initialize worker state before starting row updates.
object PythonObject get;
IList<Variable> row_factors get;
Returns a list of tensors corresponding to row factor shards.
object row_factors_dyn get;
Returns a list of tensors corresponding to row factor shards.
object row_update_prep_gramian_op get;
Op to form the Gramian before starting row updates. Must be run before initialize_row_update_op, and should only be run by one
trainer (usually the chief) when doing distributed training.
object row_update_prep_gramian_op_dyn get;
Op to form the Gramian before starting row updates. Must be run before initialize_row_update_op, and should only be run by one
trainer (usually the chief) when doing distributed training.
IList<Variable> row_weights get;
Returns a list of tensors corresponding to row weight shards.
object row_weights_dyn get;
Returns a list of tensors corresponding to row weight shards.
object worker_init get;
Op to initialize worker state once before starting any updates. In particular, this initializes the cache of the row and column
weights on workers when `use_factors_weights_cache` is True. In that case,
if the weights are computed and reset after the object is created,
it is important to ensure this op is run afterwards so that the cache reflects
the correct values.
object worker_init_dyn get;
Op to initialize worker state once before starting any updates. In particular, this initializes the cache of the row and column
weights on workers when `use_factors_weights_cache` is True. In that case,
if the weights are computed and reset after the object is created,
it is important to ensure this op is run afterwards so that the cache reflects
the correct values.