TensorRT 7.2.1.6
NVIDIA TensorRT
model.Attention Class Reference

Public Member Functions

def __init__ (self, attention_rnn_dim, embedding_dim, attention_dim, attention_location_n_filters, attention_location_kernel_size)
 
def get_alignment_energies (self, query, processed_memory, attention_weights_cat)
 
def forward (self, attention_hidden_state, memory, processed_memory, attention_weights_cat, mask)
 

Public Attributes

 query_layer
 
 memory_layer
 
 v
 
 location_layer
 
 score_mask_value
 

Constructor & Destructor Documentation

◆ __init__()

def model.Attention.__init__(self, attention_rnn_dim, embedding_dim, attention_dim, attention_location_n_filters, attention_location_kernel_size)
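The constructor body is not shown on this page. As orientation, here is a minimal, self-contained sketch of the layers it sets up, modeled on the Tacotron 2 style of location-sensitive attention that this class implements. The plain nn.Linear layers and the LocationLayer helper below are illustrative stand-ins (the source file uses its own wrapper modules), not a verbatim copy of the source:

import torch
from torch import nn
import torch.nn.functional as F


class LocationLayer(nn.Module):
    """Illustrative stand-in: turns stacked attention weights into features."""

    def __init__(self, n_filters, kernel_size, attention_dim):
        super().__init__()
        padding = (kernel_size - 1) // 2
        # Convolve the (B, 2, T) stack of previous + cumulative weights.
        self.location_conv = nn.Conv1d(2, n_filters, kernel_size,
                                       padding=padding, bias=False)
        # Project the convolutional features into the attention space.
        self.location_dense = nn.Linear(n_filters, attention_dim, bias=False)

    def forward(self, attention_weights_cat):
        processed = self.location_conv(attention_weights_cat)  # (B, F, T)
        processed = processed.transpose(1, 2)                  # (B, T, F)
        return self.location_dense(processed)                  # (B, T, attention_dim)


class Attention(nn.Module):
    def __init__(self, attention_rnn_dim, embedding_dim, attention_dim,
                 attention_location_n_filters, attention_location_kernel_size):
        super().__init__()
        # Projects the attention RNN state (the query) into the attention space.
        self.query_layer = nn.Linear(attention_rnn_dim, attention_dim, bias=False)
        # Projects the encoder outputs (the memory) into the attention space.
        self.memory_layer = nn.Linear(embedding_dim, attention_dim, bias=False)
        # Collapses each attention-space vector to a scalar energy.
        self.v = nn.Linear(attention_dim, 1, bias=False)
        # Location features from the previous/cumulative attention weights.
        self.location_layer = LocationLayer(attention_location_n_filters,
                                            attention_location_kernel_size,
                                            attention_dim)
        # Padded positions get -inf energy, i.e. zero weight after softmax.
        self.score_mask_value = -float("inf")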

Member Function Documentation

◆ get_alignment_energies()

def model.Attention.get_alignment_energies(self, query, processed_memory, attention_weights_cat)

PARAMS
------
query: decoder output (batch, n_mel_channels * n_frames_per_step)
processed_memory: processed encoder outputs (B, T_in, attention_dim)
attention_weights_cat: cumulative and previous attention weights (B, 2, max_time)

RETURNS
-------
alignment (batch, max_time)
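Sketched body, assuming the class layout above; this is standard additive (Bahdanau-style) scoring extended with location features, shown as an illustration rather than the exact source:

    # Method of the Attention sketch above.
    def get_alignment_energies(self, query, processed_memory,
                               attention_weights_cat):
        # (B, attention_rnn_dim) -> (B, 1, attention_dim); broadcasts over time.
        processed_query = self.query_layer(query.unsqueeze(1))
        # (B, 2, max_time) -> (B, max_time, attention_dim) location features.
        processed_attention_weights = self.location_layer(attention_weights_cat)
        # Additive scoring: energy_t = v^T tanh(W_q q + W_l l_t + W_m m_t).
        energies = self.v(torch.tanh(
            processed_query + processed_attention_weights + processed_memory))
        return energies.squeeze(-1)  # (B, max_time)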

◆ forward()

def model.Attention.forward(self, attention_hidden_state, memory, processed_memory, attention_weights_cat, mask)

PARAMS
------
attention_hidden_state: last output of the attention RNN
memory: encoder outputs
processed_memory: processed encoder outputs
attention_weights_cat: previous and cumulative attention weights
mask: binary mask for padded data
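Sketched body under the same assumptions: mask the energies at padded positions with score_mask_value, normalize them with a softmax, and take the weighted sum over the encoder outputs:

    # Method of the Attention sketch above.
    def forward(self, attention_hidden_state, memory, processed_memory,
                attention_weights_cat, mask):
        alignment = self.get_alignment_energies(
            attention_hidden_state, processed_memory, attention_weights_cat)
        if mask is not None:
            # Padded timesteps get -inf energy, hence zero attention weight.
            alignment = alignment.masked_fill(mask, self.score_mask_value)
        attention_weights = F.softmax(alignment, dim=1)  # (B, max_time)
        # (B, 1, T) x (B, T, embedding_dim) -> (B, 1, embedding_dim).
        attention_context = torch.bmm(attention_weights.unsqueeze(1), memory)
        return attention_context.squeeze(1), attention_weights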

Member Data Documentation

◆ query_layer

model.Attention.query_layer

Linear projection of the attention RNN state (the query) into the attention space.

◆ memory_layer

model.Attention.memory_layer

Linear projection of the encoder outputs (the memory) into the attention space.

◆ v

model.Attention.v

Linear layer that collapses each combined attention-space vector to a scalar alignment energy.

◆ location_layer

model.Attention.location_layer

Convolutional location layer applied to the stacked previous and cumulative attention weights.

◆ score_mask_value

model.Attention.score_mask_value

Fill value (negative infinity) applied to energies at padded positions so the softmax assigns them zero weight.
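To tie the attributes together, a quick smoke test of the sketch, with the two method bodies above pasted into the Attention class; all dimensions here are hypothetical, not prescribed by the API:

B, T = 2, 50                                  # batch size, encoder timesteps
attn = Attention(attention_rnn_dim=1024, embedding_dim=512,
                 attention_dim=128, attention_location_n_filters=32,
                 attention_location_kernel_size=31)

memory = torch.randn(B, T, 512)               # encoder outputs
processed_memory = attn.memory_layer(memory)  # computed once per utterance
query = torch.randn(B, 1024)                  # attention RNN state
weights_cat = torch.zeros(B, 2, T)            # previous + cumulative weights
mask = torch.zeros(B, T, dtype=torch.bool)    # no padding in this example

context, weights = attn(query, memory, processed_memory, weights_cat, mask)
print(context.shape, weights.shape)           # (2, 512) and (2, 50)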
