def __init__(self, attention_rnn_dim, embedding_dim, attention_dim, attention_location_n_filters, attention_location_kernel_size)
def get_alignment_energies(self, query, processed_memory, attention_weights_cat)
def forward(self, attention_hidden_state, memory, processed_memory, attention_weights_cat, mask)
◆ __init__()
def model.Attention.__init__(self, attention_rnn_dim, embedding_dim, attention_dim, attention_location_n_filters, attention_location_kernel_size)
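A minimal sketch of what this constructor wires up, assuming the standard Tacotron2 location-sensitive attention design. The class name AttentionSketch and the plain torch.nn modules are stand-ins; the real file builds the documented query_layer, memory_layer, and location_layer attributes through its own wrapper modules.

import torch
from torch import nn


class AttentionSketch(nn.Module):
    def __init__(self, attention_rnn_dim, embedding_dim, attention_dim,
                 attention_location_n_filters, attention_location_kernel_size):
        super().__init__()
        # query_layer: projects the attention RNN state into the attention space
        self.query_layer = nn.Linear(attention_rnn_dim, attention_dim, bias=False)
        # memory_layer: projects the encoder outputs into the attention space
        self.memory_layer = nn.Linear(embedding_dim, attention_dim, bias=False)
        # location features: convolve the stacked previous/cumulative attention
        # weights (B, 2, max_time) and project them into the attention space.
        # The real code groups these two modules into a single location_layer.
        padding = (attention_location_kernel_size - 1) // 2
        self.location_conv = nn.Conv1d(2, attention_location_n_filters,
                                       kernel_size=attention_location_kernel_size,
                                       padding=padding, bias=False)
        self.location_dense = nn.Linear(attention_location_n_filters,
                                        attention_dim, bias=False)
        # v: collapses each attention-space vector to a scalar energy per time step
        self.v = nn.Linear(attention_dim, 1, bias=False)
        # score_mask_value: fill value for padded positions, so softmax ignores them
        self.score_mask_value = -float("inf")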
◆ get_alignment_energies()
def model.Attention.get_alignment_energies(self, query, processed_memory, attention_weights_cat)
PARAMS
------
query: decoder output (batch, n_mel_channels * n_frames_per_step)
processed_memory: processed encoder outputs (B, T_in, attention_dim)
attention_weights_cat: cumulative and previous attention weights (B, 2, max_time)
RETURNS
-------
alignment (batch, max_time)
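Continuing the sketch above, the alignment energies can be computed as additive (Bahdanau-style) attention over the projected query, the processed memory, and the location features; this is an illustration under the same assumptions, not the file's exact code.

    def get_alignment_energies(self, query, processed_memory, attention_weights_cat):
        # query: (B, attention_rnn_dim) -> (B, 1, attention_dim)
        processed_query = self.query_layer(query.unsqueeze(1))
        # attention_weights_cat: (B, 2, max_time) -> (B, max_time, attention_dim)
        processed_location = self.location_dense(
            self.location_conv(attention_weights_cat).transpose(1, 2))
        # Sum the three projections, squash with tanh, and reduce to one
        # scalar energy per encoder time step: (B, max_time)
        energies = self.v(torch.tanh(
            processed_query + processed_location + processed_memory)).squeeze(-1)
        return energies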
◆ forward()
def model.Attention.forward(self, attention_hidden_state, memory, processed_memory, attention_weights_cat, mask)
PARAMS
------
attention_hidden_state: attention rnn last output
memory: encoder outputs
processed_memory: processed encoder outputs
attention_weights_cat: previous and cumulative attention weights
mask: binary mask for padded data
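Continuing the same sketch, forward plausibly masks the energies at padded positions with score_mask_value, normalizes them with a softmax, and forms the context vector as a weighted sum of the encoder outputs. The hyperparameter values in the usage lines below are assumed, typical Tacotron2 settings.

    def forward(self, attention_hidden_state, memory, processed_memory,
                attention_weights_cat, mask):
        # Score every encoder time step against the current decoder state
        alignment = self.get_alignment_energies(
            attention_hidden_state, processed_memory, attention_weights_cat)
        # Mask out padded encoder positions before normalizing
        if mask is not None:
            alignment = alignment.masked_fill(mask, self.score_mask_value)
        # Normalize to attention weights and form the weighted context vector
        attention_weights = torch.softmax(alignment, dim=1)
        attention_context = torch.bmm(attention_weights.unsqueeze(1), memory).squeeze(1)
        return attention_context, attention_weights


# Example usage with assumed dimensions (hypothetical values):
attn = AttentionSketch(1024, 512, 128, 32, 31)
memory = torch.randn(4, 100, 512)              # encoder outputs (B, T_in, embedding_dim)
processed_memory = attn.memory_layer(memory)   # (B, T_in, attention_dim)
query = torch.randn(4, 1024)                   # attention RNN last output
weights_cat = torch.zeros(4, 2, 100)           # previous + cumulative attention weights
mask = torch.zeros(4, 100, dtype=torch.bool)   # True where the input is padding
context, weights = attn(query, memory, processed_memory, weights_cat, mask)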
◆ query_layer
model.Attention.query_layer
◆ memory_layer
model.Attention.memory_layer
◆ location_layer
model.Attention.location_layer
◆ score_mask_value
model.Attention.score_mask_value
The documentation for this class was generated from the following file:
- demo/Tacotron2/tacotron2/model.py