Type BahdanauAttention
Namespace tensorflow.contrib.seq2seq
Parent _BaseAttentionMechanism
Interfaces IBahdanauAttention
Implements Bahdanau-style (additive) attention.

This attention has two forms. The first is Bahdanau attention, as described in:

Dzmitry Bahdanau, Kyunghyun Cho, Yoshua Bengio.
"Neural Machine Translation by Jointly Learning to Align and Translate."
ICLR 2015. https://arxiv.org/abs/1409.0473

The second is the normalized form, inspired by the weight normalization article:

Tim Salimans, Diederik P. Kingma.
"Weight Normalization: A Simple Reparameterization to Accelerate Training of Deep Neural Networks."
https://arxiv.org/abs/1602.07868

To enable the second form, construct the object with the parameter `normalize=True`.
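A minimal usage sketch, assuming the underlying TF 1.x Python API (`tf.contrib.seq2seq`); the placeholder shapes, `num_units` value, and decoder cell choice are illustrative only:

```python
import tensorflow as tf  # TF 1.x, where tf.contrib.seq2seq is available

# Encoder outputs serve as the attention "memory":
# shape [batch_size, max_time, encoder_depth] (illustrative depth of 256).
encoder_outputs = tf.placeholder(tf.float32, [None, None, 256])
source_lengths = tf.placeholder(tf.int32, [None])

# First form: plain additive (Bahdanau) attention.
attention = tf.contrib.seq2seq.BahdanauAttention(
    num_units=128,
    memory=encoder_outputs,
    memory_sequence_length=source_lengths)

# Second form: weight-normalized additive attention (normalize=True).
normalized_attention = tf.contrib.seq2seq.BahdanauAttention(
    num_units=128,
    memory=encoder_outputs,
    memory_sequence_length=source_lengths,
    normalize=True)

# Either mechanism can then be attached to a decoder cell via AttentionWrapper.
decoder_cell = tf.contrib.seq2seq.AttentionWrapper(
    tf.nn.rnn_cell.LSTMCell(128),
    normalized_attention,
    attention_layer_size=128)
```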