TensorRT 7.2.1.6
NVIDIA TensorRT
model.Decoder Class Reference

Public Member Functions

def __init__ (self, n_mel_channels, n_frames_per_step, encoder_embedding_dim, attention_dim, attention_location_n_filters, attention_location_kernel_size, attention_rnn_dim, decoder_rnn_dim, prenet_dim, max_decoder_steps, gate_threshold, p_attention_dropout, p_decoder_dropout, early_stopping)
 
def get_go_frame (self, memory)
 
def initialize_decoder_states (self, memory)
 
def parse_decoder_inputs (self, decoder_inputs)
 
def parse_decoder_outputs (self, mel_outputs, gate_outputs, alignments)
 
def decode (self, decoder_input, attention_hidden, attention_cell, decoder_hidden, decoder_cell, attention_weights, attention_weights_cum, attention_context, memory, processed_memory, mask)
 
def forward (self, memory, decoder_inputs, memory_lengths)
 
def infer (self, memory, memory_lengths)
 

Public Attributes

 n_mel_channels
 
 n_frames_per_step
 
 encoder_embedding_dim
 
 attention_rnn_dim
 
 decoder_rnn_dim
 
 prenet_dim
 
 max_decoder_steps
 
 gate_threshold
 
 p_attention_dropout
 
 p_decoder_dropout
 
 early_stopping
 
 prenet
 
 attention_rnn
 
 attention_layer
 
 decoder_rnn
 
 linear_projection
 
 gate_layer
 

Constructor & Destructor Documentation

◆ __init__()

def model.Decoder.__init__ (   self,
  n_mel_channels,
  n_frames_per_step,
  encoder_embedding_dim,
  attention_dim,
  attention_location_n_filters,
  attention_location_kernel_size,
  attention_rnn_dim,
  decoder_rnn_dim,
  prenet_dim,
  max_decoder_steps,
  gate_threshold,
  p_attention_dropout,
  p_decoder_dropout,
  early_stopping 
)

Member Function Documentation

◆ get_go_frame()

def model.Decoder.get_go_frame (   self,
  memory 
)
Gets an all-zeros frame to use as the first decoder input
PARAMS
------
memory: encoder outputs

RETURNS
-------
decoder_input: all-zeros frame
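Based on the parameter names above, the behavior can be sketched as follows (a minimal sketch, not the exact NVIDIA implementation; the shapes are assumed from the class attributes):

```python
import torch

def get_go_frame(memory, n_mel_channels, n_frames_per_step):
    # memory: encoder outputs, assumed shape (batch, max_time, encoder_embedding_dim)
    batch = memory.size(0)
    # The first decoder input is an all-zeros mel frame (or group of frames),
    # created on the same device/dtype as the encoder outputs
    decoder_input = memory.new_zeros(batch, n_mel_channels * n_frames_per_step)
    return decoder_input

# Hypothetical usage: batch of 2, 80 mel channels, 1 frame per step
go = get_go_frame(torch.randn(2, 7, 512), 80, 1)
```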

◆ initialize_decoder_states()

def model.Decoder.initialize_decoder_states (   self,
  memory 
)
Initializes the attention RNN states, decoder RNN states, attention
weights, cumulative attention weights, and attention context, and
prepares the memory and processed memory
PARAMS
------
memory: encoder outputs
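A sketch of the state initialization, assuming the shapes implied by the class attributes (in the real module, `processed_memory` comes from the attention layer's memory projection; identity is used here only as a stand-in):

```python
import torch

def initialize_decoder_states(memory, attention_rnn_dim, decoder_rnn_dim):
    # memory: encoder outputs (batch, max_time, encoder_embedding_dim)
    B, T, E = memory.size(0), memory.size(1), memory.size(2)
    # Zero-initialize both LSTM cell states
    attention_hidden = memory.new_zeros(B, attention_rnn_dim)
    attention_cell = memory.new_zeros(B, attention_rnn_dim)
    decoder_hidden = memory.new_zeros(B, decoder_rnn_dim)
    decoder_cell = memory.new_zeros(B, decoder_rnn_dim)
    # Zero-initialize the attention weights and their cumulative sum
    attention_weights = memory.new_zeros(B, T)
    attention_weights_cum = memory.new_zeros(B, T)
    attention_context = memory.new_zeros(B, E)
    # Stand-in: the real processed_memory is a learned projection of memory
    processed_memory = memory
    return (attention_hidden, attention_cell, decoder_hidden, decoder_cell,
            attention_weights, attention_weights_cum, attention_context,
            memory, processed_memory)
```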

◆ parse_decoder_inputs()

def model.Decoder.parse_decoder_inputs (   self,
  decoder_inputs 
)
Prepares decoder inputs, i.e. mel outputs
PARAMS
------
decoder_inputs: inputs used for teacher-forced training, i.e. mel-specs

RETURNS
-------
inputs: processed decoder inputs
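The reshaping this method performs can be sketched as follows (shapes assumed from the Tacotron2 convention of channel-first mel-specs; `reshape` is used here so the sketch also works for `n_frames_per_step > 1`):

```python
import torch

def parse_decoder_inputs(decoder_inputs, n_frames_per_step):
    # decoder_inputs: ground-truth mel-specs, shape (batch, n_mel_channels, T_out)
    # -> (batch, T_out, n_mel_channels)
    x = decoder_inputs.transpose(1, 2)
    # Group n_frames_per_step consecutive frames into one decoder step
    x = x.reshape(x.size(0), x.size(1) // n_frames_per_step, -1)
    # -> (T_out / n_frames_per_step, batch, n_mel_channels * n_frames_per_step)
    return x.transpose(0, 1)
```

Making the time axis leading lets the training loop index one decoder step per iteration.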

◆ parse_decoder_outputs()

def model.Decoder.parse_decoder_outputs (   self,
  mel_outputs,
  gate_outputs,
  alignments 
)
Prepares decoder outputs for output
PARAMS
------
mel_outputs: per-step mel outputs
gate_outputs: gate output energies
alignments: per-step attention weights

RETURNS
-------
mel_outputs: mel outputs from the decoder
gate_outputs: gate output energies
alignments: sequence of attention weights from the decoder
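A sketch of the output packing, assuming the decoder loop produces one tensor per step (per-step shapes are assumptions based on the surrounding docs): the lists are stacked time-first, flipped to batch-first, and the mel outputs are returned channel-first.

```python
import torch

def parse_decoder_outputs(mel_outputs, gate_outputs, alignments, n_mel_channels):
    # alignments: list of (B, max_time) -> (B, T_out, max_time)
    alignments = torch.stack(alignments).transpose(0, 1)
    # gate_outputs: list of (B,) -> (B, T_out)
    gate_outputs = torch.stack(gate_outputs).transpose(0, 1).contiguous()
    # mel_outputs: list of (B, n_mel_channels) -> (B, T_out, n_mel_channels)
    mel_outputs = torch.stack(mel_outputs).transpose(0, 1).contiguous()
    # Decouple frames per step, then put channels first: (B, n_mel_channels, T_out)
    mel_outputs = mel_outputs.view(mel_outputs.size(0), -1, n_mel_channels)
    mel_outputs = mel_outputs.transpose(1, 2)
    return mel_outputs, gate_outputs, alignments
```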

◆ decode()

def model.Decoder.decode (   self,
  decoder_input,
  attention_hidden,
  attention_cell,
  decoder_hidden,
  decoder_cell,
  attention_weights,
  attention_weights_cum,
  attention_context,
  memory,
  processed_memory,
  mask 
)
Decoder step using stored states, attention and memory
PARAMS
------
decoder_input: previous mel output
(the remaining arguments carry the attention/decoder state between steps)

RETURNS
-------
mel_output: predicted mel output for this step
gate_output: gate output energies
attention_weights: attention weights for this step
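A simplified sketch of one decode step, using the layer names from the Public Attributes above. The dimensions are hypothetical, and uniform attention weights stand in for the real location-sensitive attention layer; this shows the data flow, not the exact implementation.

```python
import torch
import torch.nn as nn

# Hypothetical dims; the real layers are built in Decoder.__init__
prenet_dim, enc_dim, att_dim, dec_dim, n_mel = 256, 512, 1024, 1024, 80
attention_rnn = nn.LSTMCell(prenet_dim + enc_dim, att_dim)
decoder_rnn = nn.LSTMCell(att_dim + enc_dim, dec_dim)
linear_projection = nn.Linear(dec_dim + enc_dim, n_mel)
gate_layer = nn.Linear(dec_dim + enc_dim, 1)

def decode(decoder_input, attention_hidden, attention_cell,
           decoder_hidden, decoder_cell, attention_context, memory):
    # 1) Attention RNN consumes the prenet output plus previous context
    cell_input = torch.cat((decoder_input, attention_context), -1)
    attention_hidden, attention_cell = attention_rnn(
        cell_input, (attention_hidden, attention_cell))
    # 2) Attend over memory (uniform weights stand in for the real
    #    location-sensitive attention_layer)
    attention_weights = torch.softmax(
        memory.new_zeros(memory.size(0), memory.size(1)), dim=1)
    attention_context = torch.bmm(
        attention_weights.unsqueeze(1), memory).squeeze(1)
    # 3) Decoder RNN, then projections to a mel frame and a stop-gate energy
    decoder_hidden, decoder_cell = decoder_rnn(
        torch.cat((attention_hidden, attention_context), -1),
        (decoder_hidden, decoder_cell))
    hidden_and_context = torch.cat((decoder_hidden, attention_context), 1)
    mel_output = linear_projection(hidden_and_context)
    gate_output = gate_layer(hidden_and_context)
    return mel_output, gate_output, attention_weights
```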

◆ forward()

def model.Decoder.forward (   self,
  memory,
  decoder_inputs,
  memory_lengths 
)
Decoder forward pass for training
PARAMS
------
memory: Encoder outputs
decoder_inputs: Decoder inputs for teacher forcing, i.e. mel-specs
memory_lengths: Encoder output lengths for attention masking.

RETURNS
-------
mel_outputs: mel outputs from the decoder
gate_outputs: gate outputs from the decoder
alignments: sequence of attention weights from the decoder
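The training loop inside forward can be sketched as follows (a hypothetical `decode_step` callable stands in for `Decoder.decode` and its state bookkeeping; at each step the ground-truth frame is fed in, which is what makes this teacher forcing):

```python
import torch

def teacher_forced_loop(decoder_inputs, decode_step):
    # decoder_inputs: (T_out, B, frame_dim), already parsed time-first
    # decode_step: stand-in returning (mel_output, gate_output, attention_weights)
    mel_outputs, gate_outputs, alignments = [], [], []
    for t in range(decoder_inputs.size(0)):
        mel, gate, align = decode_step(decoder_inputs[t])
        mel_outputs.append(mel)
        gate_outputs.append(gate.squeeze(1))
        alignments.append(align)
    # Time-first stacks; the real forward then calls parse_decoder_outputs
    return (torch.stack(mel_outputs), torch.stack(gate_outputs),
            torch.stack(alignments))
```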

◆ infer()

def model.Decoder.infer (   self,
  memory,
  memory_lengths 
)
Decoder inference
PARAMS
------
memory: Encoder outputs
memory_lengths: Encoder output lengths for attention masking

RETURNS
-------
mel_outputs: mel outputs from the decoder
gate_outputs: gate outputs from the decoder
alignments: sequence of attention weights from the decoder
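The inference loop differs from forward in that each prediction is fed back as the next input, and decoding stops when the gate fires or `max_decoder_steps` is reached. A sketch, with a hypothetical `decode_step` standing in for `Decoder.decode` (the batch-wise stopping shown here is a simplification of the per-sequence `early_stopping` logic):

```python
import torch

def greedy_infer_loop(first_frame, decode_step, gate_threshold,
                      max_decoder_steps):
    # first_frame: (B, frame_dim) all-zeros "go" frame from get_go_frame
    mel_outputs = []
    decoder_input = first_frame
    for _ in range(max_decoder_steps):
        mel, gate = decode_step(decoder_input)
        mel_outputs.append(mel)
        # Stop once every sequence's stop probability crosses the threshold
        if (torch.sigmoid(gate) > gate_threshold).all():
            break
        decoder_input = mel  # feed the prediction back in
    return torch.stack(mel_outputs)
```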

Member Data Documentation

◆ n_mel_channels

model.Decoder.n_mel_channels

◆ n_frames_per_step

model.Decoder.n_frames_per_step

◆ encoder_embedding_dim

model.Decoder.encoder_embedding_dim

◆ attention_rnn_dim

model.Decoder.attention_rnn_dim

◆ decoder_rnn_dim

model.Decoder.decoder_rnn_dim

◆ prenet_dim

model.Decoder.prenet_dim

◆ max_decoder_steps

model.Decoder.max_decoder_steps

◆ gate_threshold

model.Decoder.gate_threshold

◆ p_attention_dropout

model.Decoder.p_attention_dropout

◆ p_decoder_dropout

model.Decoder.p_decoder_dropout

◆ early_stopping

model.Decoder.early_stopping

◆ prenet

model.Decoder.prenet

◆ attention_rnn

model.Decoder.attention_rnn

◆ attention_layer

model.Decoder.attention_layer

◆ decoder_rnn

model.Decoder.decoder_rnn

◆ linear_projection

model.Decoder.linear_projection

◆ gate_layer

model.Decoder.gate_layer
