TensorRT 7.2.1.6
NVIDIA TensorRT
pytorch_quantization.quant_modules.QuantModuleReplacementHelper Class Reference

Public Member Functions

def __init__ (self)
 
def prepare_state (self, float_module_list=None, custom_map=None)
 
def apply_quant_modules (self)
 
def restore_float_modules (self)
 

Public Attributes

 orginal_func_map
 
 default_quant_map
 
 quant_map
 

Detailed Description

To help replace torch.nn modules with quantized versions.

This class is used to replace (by monkey patching) the torch.nn modules with their
quantized versions, as provided either by the tool's internal implementation or by a
user-supplied custom module.

Attributes:
    orginal_func_map: A dict. Maintains the original torch.nn module mapping.
    quant_support_list: A list. Contains the names of the modules for which a
        quantized version is provided by the tool.
    quant_map: A dict. Maps each module name to its quantized version.
    quant_switch_opt: A dict. Indicates which modules should be left unreplaced
        by their quantized versions. This dict is updated from a user-provided
        list naming the modules to leave out of the monkey patching.
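
Putting the pieces together, the helper's typical lifecycle is prepare, apply,
build the model, restore. A minimal sketch of that flow, assuming the tool's
default quantized module map is sufficient (the model itself is a placeholder):

    import torch.nn as nn
    from pytorch_quantization.quant_modules import QuantModuleReplacementHelper

    helper = QuantModuleReplacementHelper()

    # Build the replacement map from the defaults; nothing excluded, no
    # user-supplied custom modules.
    helper.prepare_state(float_module_list=None, custom_map=None)

    # Monkey patch torch.nn: modules constructed after this point use the
    # quantized versions.
    helper.apply_quant_modules()
    model = nn.Sequential(nn.Linear(16, 8), nn.ReLU())

    # Undo the patch so later code sees the original torch.nn modules again.
    helper.restore_float_modules()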

Constructor & Destructor Documentation

◆ __init__()

def pytorch_quantization.quant_modules.QuantModuleReplacementHelper.__init__(self)

Member Function Documentation

◆ prepare_state()

def pytorch_quantization.quant_modules.QuantModuleReplacementHelper.prepare_state(self, float_module_list=None, custom_map=None)
Prepare the internal variables that will be used by the monkey patching mechanism later.
1. Set up the list of quantized modules that the tool supports for torch.nn.
2. Set up the custom mapping for modules other than torch.nn.
3. Use float_module_list to switch off the monkey patching replacement for the modules the user indicated, as in the sketch below.
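
For example, to keep nn.Linear in float while the other supported modules are
still replaced, its name can be passed in float_module_list. This is a hedged
sketch: matching modules by their torch.nn class name string is an assumption.

    from pytorch_quantization.quant_modules import QuantModuleReplacementHelper

    helper = QuantModuleReplacementHelper()

    # Leave nn.Linear unreplaced; every other module in the tool's default map
    # is still monkey patched. (Assumption: entries in float_module_list are
    # matched against the torch.nn class names, e.g. "Linear".)
    helper.prepare_state(float_module_list=["Linear"], custom_map=None)
    helper.apply_quant_modules()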

◆ apply_quant_modules()

def pytorch_quantization.quant_modules.QuantModuleReplacementHelper.apply_quant_modules(self)
Monkey patch the modules registered in quant_map, storing the original modules so
that they can be restored later.
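
Conceptually, the patching step amounts to rebinding names on the torch.nn
namespace while remembering the originals. A simplified illustration of the
pattern, not the class's actual implementation (quant_nn.QuantLinear is the
tool's quantized counterpart of nn.Linear):

    import torch
    from pytorch_quantization import quant_nn

    # Stand-in for the helper's orginal_func_map attribute.
    original_func_map = {}

    # Remember the original class, then rebind the name on torch.nn.
    original_func_map["Linear"] = torch.nn.Linear
    setattr(torch.nn, "Linear", quant_nn.QuantLinear)

    # Any nn.Linear constructed from here on is actually a QuantLinear.
    layer = torch.nn.Linear(4, 2)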

◆ restore_float_modules()

def pytorch_quantization.quant_modules.QuantModuleReplacementHelper.restore_float_modules(self)
Reverse the effect of the monkey patching by using orginal_func_map to restore the
original modules.
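
Restoring is the mirror image: walk the saved originals and rebind them onto
torch.nn. Continuing the simplified sketch above (original_func_map comes from
the previous snippet):

    import torch

    # Rebind each saved original class, undoing the monkey patch.
    for name, orig_cls in original_func_map.items():
        setattr(torch.nn, name, orig_cls)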

Member Data Documentation

◆ orginal_func_map

pytorch_quantization.quant_modules.QuantModuleReplacementHelper.orginal_func_map

◆ default_quant_map

pytorch_quantization.quant_modules.QuantModuleReplacementHelper.default_quant_map

◆ quant_map

pytorch_quantization.quant_modules.QuantModuleReplacementHelper.quant_map

The documentation for this class was generated from the following file:
quant_modules.py