LostTech.TensorFlow : API Documentation

Type LuongMonotonicAttention

Namespace tensorflow.contrib.seq2seq

Parent _BaseMonotonicAttentionMechanism

Interfaces ILuongMonotonicAttention

Monotonic attention mechanism with Luong-style energy function.

This type of attention enforces a monotonic constraint on the attention distributions; that is, once the model attends to a given point in the memory, it can't attend to any prior points at subsequent output timesteps. It achieves this by using the _monotonic_probability_fn instead of softmax to construct its attention distributions. Otherwise, it is equivalent to LuongAttention. This approach is proposed in

Colin Raffel, Minh-Thang Luong, Peter J. Liu, Ron J. Weiss, Douglas Eck, "Online and Linear-Time Attention by Enforcing Monotonic Alignments." ICML 2017. https://arxiv.org/abs/1704.00784
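The monotonic constraint can be sketched in NumPy using the recursive formulation from the paper above: each memory slot's alignment is the probability of "choosing" that slot times the probability mass that has reached it from earlier slots. The function name and shapes below are illustrative, not part of this binding's API.

```python
import numpy as np

def monotonic_alignments(p_choose, prev_alignments):
    """One decoder step of soft monotonic attention (recursive form).

    p_choose:        per-memory-slot "choose" probabilities (sigmoid of the
                     energies), shape (T,).
    prev_alignments: the previous decoder step's alignments, shape (T,).
    """
    T = len(p_choose)
    alignments = np.zeros(T)
    q = 0.0  # probability mass that has reached the current slot unchosen
    for j in range(T):
        prev_p = p_choose[j - 1] if j > 0 else 0.0
        # mass at slot j = mass carried over from slot j-1 (not chosen there)
        # plus the mass the previous timestep left at slot j
        q = prev_alignments[j] + (1.0 - prev_p) * q
        alignments[j] = p_choose[j] * q
    return alignments

# If every slot is chosen with probability 1, attention cannot move at all:
monotonic_alignments(np.ones(3), np.array([0.0, 1.0, 0.0]))
# Mass can only flow to later slots, never back to earlier ones:
monotonic_alignments(np.array([0.5, 0.5, 1.0]), np.array([1.0, 0.0, 0.0]))
```

Because mass only ever flows forward through the recurrence, a slot before the previously attended position can never regain attention, which is exactly the monotonicity property described above.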


Public properties

object alignments_size get;

object alignments_size_dyn get;

object batch_size get;

object batch_size_dyn get;

object dtype get; set;

object keys get;

object keys_dyn get;

Dense memory_layer get;

object memory_layer_dyn get;

object PythonObject get;

Dense query_layer get;

object query_layer_dyn get;

object state_size get;

object state_size_dyn get;

object values get;

object values_dyn get;
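Since the mechanism is otherwise equivalent to LuongAttention, the energies fed to _monotonic_probability_fn are plain dot products between the processed query (query_layer output) and the processed memory (the keys property), passed through a sigmoid rather than a softmax. A minimal sketch, assuming unscaled Luong scoring; the helper names here are illustrative, not the binding's API:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def luong_choose_probabilities(query, keys, score_bias=0.0):
    """Luong-style energies turned into monotonic "choose" probabilities.

    query: processed decoder query, shape (d,).
    keys:  processed memory, shape (T, d).
    """
    energies = keys @ query + score_bias  # dot-product (Luong) score per slot
    return sigmoid(energies)              # sigmoid instead of softmax

p = luong_choose_probabilities(np.array([1.0, 0.0]),
                               np.array([[2.0, 0.0], [0.0, 2.0]]))
```

Note that, per the paper, the actual mechanism can add pre-sigmoid noise during training to push these probabilities toward 0 or 1, so that attention behaves near-discretely (and thus online and linear-time) at inference.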