LostTech.TensorFlow : API Documentation

Type ARModel

Namespace tensorflow.contrib.timeseries

Parent TimeSeriesModel

Interfaces IARModel

Auto-regressive model, both linear and non-linear.

The features fed to the model are the times and values of the preceding `input_window_size` timesteps, plus the times of the following `output_window_size` timesteps. These are passed through a configurable prediction model and then fed to a loss function (e.g. squared loss).

Note that this class can also be used to regress against time only by setting the input_window_size to zero.

Each periodicity in the `periodicities` arg is divided into `num_time_buckets` time buckets, which are represented as features added to the model.

A good heuristic for picking an appropriate periodicity for a given data set is the length of cycles in the data. For example, energy usage in a home is typically cyclic each day. If the time feature in a home energy usage dataset is in units of hours, then 24 would be an appropriate periodicity. Similarly, a good heuristic for `num_time_buckets` is how often the data is expected to change within a cycle. For the home energy usage dataset above with a periodicity of 24, 48 would be a reasonable value if usage is expected to change every half hour.

Each feature's value for a given example with time t is the difference between t and the start of the time bucket it falls in. If t does not fall in a feature's associated time bucket, that feature's value is zero.

For example: if `periodicities` = (9, 12) and `num_time_buckets` = 3, then 6 features would be added to the model, 3 for periodicity 9 and 3 for periodicity 12.

For an example data point where t = 17:
- It's in the 3rd time bucket for periodicity 9 (the 2nd period is 9-18 and the 3rd time bucket is 15-18).
- It's in the 2nd time bucket for periodicity 12 (the 2nd period is 12-24 and the 2nd time bucket is 16-20).

Therefore the 6 added features for this row with t = 17 would be:

# Feature name (periodicity#_timebucket#), feature value
P9_T1,  0  # not in the first time bucket
P9_T2,  0  # not in the second time bucket
P9_T3,  2  # 17 - 15, since 15 is the start of the 3rd time bucket
P12_T1, 0  # not in the first time bucket
P12_T2, 1  # 17 - 16, since 16 is the start of the 2nd time bucket
P12_T3, 0  # not in the third time bucket
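The bucketing arithmetic above can be sketched in Python (a standalone illustration of how the feature values are derived, not the library's actual implementation; the function name is hypothetical):

```python
def time_bucket_features(t, periodicities, num_time_buckets):
    """Per-bucket time features for a single timestep t.

    For each periodicity p, one period is split into num_time_buckets
    equal buckets. The feature for the bucket containing t is the offset
    of t from that bucket's start; every other bucket's feature is 0.
    """
    features = {}
    for p in periodicities:
        bucket_width = p / num_time_buckets
        phase = t % p                        # position of t within its period
        bucket = int(phase // bucket_width)  # 0-indexed bucket containing t
        for b in range(num_time_buckets):
            name = "P%d_T%d" % (p, b + 1)
            features[name] = phase - b * bucket_width if b == bucket else 0
    return features

# t = 17 with periodicities (9, 12) and 3 buckets each yields the
# 6 features tabulated above: P9_T3 = 2, P12_T2 = 1, all others 0.
print(time_bucket_features(17, (9, 12), 3))
```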

Public instance methods

IDictionary<object, object> generate(int number_of_series, int series_length, IDictionary<IGraphNodeBase, object> model_parameters, object seed)

Sample synthetic data from model parameters, with optional substitutions.

Returns `number_of_series` possible sequences of future values, sampled from the generative model with each value conditioned on the previous ones. Samples are based on trained parameters, except for those explicitly overridden in `model_parameters`.

For distributions over future observations, see predict().
int number_of_series
Number of time series to create.
int series_length
Length of each time series.
IDictionary<IGraphNodeBase, object> model_parameters
A dictionary mapping model parameters to values, which replace trained parameters when generating data.
object seed
If specified, the generated time series are deterministic for a given seed value.
IDictionary<object, object>
A dictionary with keys TrainEvalFeatures.TIMES (mapping to an array with shape [number_of_series, series_length]) and TrainEvalFeatures.VALUES (mapping to an array with shape [number_of_series, series_length, num_features]).
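To make the shape contract and the "each value conditioned on the previous" sampling concrete, here is a toy autoregressive sampler in Python (an illustrative AR(1) process with a single feature, not the library's actual model; the function name and coefficients are hypothetical):

```python
import numpy as np

def sample_ar_series(number_of_series, series_length, coef=0.8,
                     noise_scale=0.1, seed=None):
    """Toy AR(1) sampler mirroring generate()'s shape contract:
    times  -> [number_of_series, series_length]
    values -> [number_of_series, series_length, num_features] (here 1)
    """
    rng = np.random.default_rng(seed)
    values = np.zeros((number_of_series, series_length, 1))
    for t in range(1, series_length):
        # Each new value is conditioned on the previous one, plus noise.
        values[:, t, 0] = (coef * values[:, t - 1, 0]
                           + noise_scale * rng.standard_normal(number_of_series))
    times = np.tile(np.arange(series_length), (number_of_series, 1))
    return {"times": times, "values": values}

out = sample_ar_series(number_of_series=4, series_length=10, seed=0)
print(out["times"].shape, out["values"].shape)
```

Passing a seed makes the output deterministic, matching the role of the `seed` argument above.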

object get_batch_loss(IDictionary<string, IGraphNodeBase> features, object mode, object state)

Computes predictions and a loss.
IDictionary<string, IGraphNodeBase> features
A dictionary (such as is produced by a chunker) with the following key/value pairs (shapes are given as required for training):
TrainEvalFeatures.TIMES: A [batch size, self.window_size] integer Tensor with times for each observation. To train on longer sequences, the data should first be chunked.
TrainEvalFeatures.VALUES: A [batch size, self.window_size, self.num_features] Tensor with values for each observation.
When evaluating, `TIMES` and `VALUES` must have a window size of at least self.window_size, but may be longer, in which case the last window_size - self.input_window_size times (or fewer, if this is not divisible by self.output_window_size) are evaluated with non-overlapping output windows (and have associated predictions). This is primarily to support qualitative evaluation/plotting, and is not a recommended way to compute evaluation losses (since the output windows do not overlap, which for window-based models introduces an undesirable bias).
object mode
The tf.estimator.ModeKeys mode to use (TRAIN or EVAL).
object state
Unused.
object
A model.ModelOutputs object.
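The evaluation-time windowing described for `features` can be sketched as follows (a standalone illustration of the window count and placement implied by the description above, not the library's exact indexing; the function name is hypothetical):

```python
def eval_window_starts(length, input_window_size, output_window_size):
    """Start indices of the non-overlapping output windows evaluated
    for a sequence of `length` timesteps.

    Times after the initial input window are evaluated in whole,
    non-overlapping chunks of output_window_size, taken from the end
    of the sequence; leftover times that don't fill a chunk are skipped.
    """
    available = length - input_window_size
    num_windows = available // output_window_size
    evaluated = num_windows * output_window_size
    first = length - evaluated
    return [first + k * output_window_size for k in range(num_windows)]

# length 10, input window 3, output window 2 -> 3 windows starting at 4, 6, 8
print(eval_window_starts(10, 3, 2))
```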

Public properties

DType dtype get; set;

IList<object> exogenous_feature_columns get;

object exogenous_feature_columns_dyn get;

int exogenous_size get; set;

int input_window_size get; set;

string loss get; set;

object NORMAL_LIKELIHOOD_LOSS_dyn get; set;

int num_features get; set;

int output_window_size get; set;

object PythonObject get;

object SQUARED_LOSS_dyn get; set;

int window_size get; set;

Public fields

string NORMAL_LIKELIHOOD_LOSS

string SQUARED_LOSS