Public Member Functions

def test_basic_forward(self, verbose)
def test_no_quant(self, verbose)
def test_no_quant_input_hidden(self, verbose)
def test_no_quant_all_modes(self, verbose)
def test_against_unquantized(self, verbose)
def test_quant_input_hidden(self, verbose)
def test_quant_input_hidden_bias(self, verbose)
def test_quant_different_prec(self, verbose)
Tests for quant_rnn.QuantLSTM. Default parameters in QuantLSTM: bias=True, quant_weight=True, bits_weight=8, fake_quant=True, quant_mode_weight='channel', quant_input=True, bits_acts=8, quant_mode_input='tensor'. Tests of real quantization mode (non-fake) are disabled, as it is not fully supported yet.
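The 'channel' and 'tensor' quantization modes named in the defaults above can be sketched in plain numpy. This is a hedged illustration of the general idea only; the helper name fake_quant and the symmetric amax/qmax scaling scheme are assumptions for this sketch, not quant_rnn's actual implementation:

```python
import numpy as np

def fake_quant(x, bits=8, axis=None):
    """Symmetric fake quantization: round onto a uniform signed grid, then
    dequantize back to float (assumes x is not all zero).

    axis=None -> per-tensor ('tensor' mode): one scale for the whole array.
    axis=k    -> per-channel ('channel' mode): one scale per slice along k.
    """
    qmax = 2 ** (bits - 1) - 1  # e.g. 127 for 8 bits
    if axis is None:
        amax = np.abs(x).max()
    else:
        reduce_axes = tuple(i for i in range(x.ndim) if i != axis)
        amax = np.abs(x).max(axis=reduce_axes, keepdims=True)
    scale = amax / qmax
    return np.round(x / scale) * scale
```

In 'channel' mode each output channel of a weight matrix gets its own scale, so a channel with small weights is not swamped by the largest weight elsewhere in the tensor; 'tensor' mode uses a single scale, which is cheaper and is what the defaults above apply to activations.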
def tests.quant_rnn_test.TestQuantLSTM.test_basic_forward(self, verbose)
Do a forward pass on the layer module and see if anything catches fire.
def tests.quant_rnn_test.TestQuantLSTM.test_no_quant(self, verbose)
QuantLSTM with quantization disabled vs. pytorch LSTM.
def tests.quant_rnn_test.TestQuantLSTM.test_no_quant_input_hidden(self, verbose)
QuantLSTM with quantization disabled vs. pytorch LSTM for input and hidden inputs.
def tests.quant_rnn_test.TestQuantLSTM.test_no_quant_all_modes(self, verbose)
QuantLSTM with quantization disabled vs. pytorch LSTM for all modes.
def tests.quant_rnn_test.TestQuantLSTM.test_against_unquantized(self, verbose)
Quantization should introduce only bounded error; utils.compare checks this against the pytorch implementation.
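The bounded-error claim can be checked directly: rounding each element to the nearest point of a uniform grid with step `scale` changes it by at most `scale / 2`. A minimal numpy sketch of that bound (the helper here is hypothetical, not the library's API):

```python
import numpy as np

def fake_quant_tensor(x, bits):
    # Per-tensor symmetric fake quantization (assumes x is not all zero).
    qmax = 2 ** (bits - 1) - 1
    scale = np.abs(x).max() / qmax
    return np.round(x / scale) * scale, scale

rng = np.random.default_rng(0)
x = rng.standard_normal((4, 16)).astype(np.float32)
xq, scale = fake_quant_tensor(x, bits=8)

# Nearest-grid-point rounding bounds the per-element error by half a step.
assert np.abs(x - xq).max() <= scale / 2 + 1e-7
```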
def tests.quant_rnn_test.TestQuantLSTM.test_quant_input_hidden(self, verbose)
QuantLSTM vs. manual input quantization + pytorch LSTM.
def tests.quant_rnn_test.TestQuantLSTM.test_quant_input_hidden_bias(self, verbose)
QuantLSTM with bias vs. manual input quantization + pytorch LSTM.
def tests.quant_rnn_test.TestQuantLSTM.test_quant_different_prec(self, verbose)
QuantLSTM at different precisions vs. manual input quantization + pytorch LSTM.
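Why different precisions matter: the quantization step shrinks roughly exponentially with the bit width, so the worst-case per-element rounding error does too. A small arithmetic sketch, assuming the per-tensor symmetric scheme used above:

```python
# Step size for a tensor with amax = 1.0 at several bit widths:
# scale = amax / (2**(bits-1) - 1); worst-case rounding error = scale / 2.
for bits in (4, 8, 16):
    qmax = 2 ** (bits - 1) - 1
    print(f"{bits:2d} bits: qmax={qmax:5d}, scale={1.0 / qmax:.6f}")
```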