TensorRT 7.2.1.6
NVIDIA TensorRT
classification_flow Namespace Reference

Functions

def get_parser ()
 
def prepare_model (model_name, data_dir, per_channel_quantization, batch_size_train, batch_size_test, batch_size_onnx, calibrator, pretrained=True, ckpt_path=None, ckpt_url=None)
 
def main (cmdline_args)
 
def evaluate_onnx (onnx_filename, data_loader, criterion, print_freq)
 
def export_onnx (model, onnx_filename, batch_onnx, per_channel_quantization)
 
def calibrate_model (model, model_name, data_loader, num_calib_batch, calibrator, hist_percentile, out_dir)
 
def collect_stats (model, data_loader, num_batches)
 
def compute_amax (model, **kwargs)
 
def finetune_model (model, data_loader)
 
def build_sensitivity_profile (model, criterion, data_loader_test)
 

Variables

res = main(sys.argv[1:])
 

Function Documentation

◆ get_parser()

def classification_flow.get_parser()
Creates an argument parser.
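A minimal usage sketch, assuming the parser is consumed the usual argparse way inside main(); the actual flag names are defined by the sample and are not reproduced here:

    import sys
    from classification_flow import get_parser

    # Illustrative only: parse the command line and hand the namespace to the flow.
    parser = get_parser()
    args = parser.parse_args(sys.argv[1:])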

◆ prepare_model()

def classification_flow.prepare_model(model_name, data_dir, per_channel_quantization, batch_size_train, batch_size_test, batch_size_onnx, calibrator, pretrained=True, ckpt_path=None, ckpt_url=None)
Prepare the model for the classification flow.
Arguments:
    model_name: name to use when accessing the torchvision model dictionary
    data_dir: directory with train and val subdirs prepared "imagenet style"
    per_channel_quantization: if true, use per-channel quantization for weights
                               (note that this isn't currently supported in ONNX-RT/PyTorch)
    batch_size_train: batch size to use when training
    batch_size_test: batch size to use when testing in PyTorch
    batch_size_onnx: batch size to use when testing with ONNX-RT
    calibrator: calibration type to use (max/histogram)

    pretrained: if true, a pretrained model will be loaded from torchvision
    ckpt_path: path to load a model checkpoint from, if not pretrained
    ckpt_url: url to download a model checkpoint from, if not pretrained and no path was given
    * at least one of {pretrained, ckpt_path, ckpt_url} must be valid

The method returns the following list:
    [
        Model object,
        data loader for training,
        data loader for PyTorch testing,
        data loader for ONNX testing
    ]
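A minimal usage sketch with illustrative argument values (the real defaults come from get_parser()):

    from classification_flow import prepare_model

    # Illustrative values only -- any torchvision model name and a local
    # "imagenet style" directory (train/ and val/ subdirs) will do.
    model, train_loader, test_loader, onnx_loader = prepare_model(
        model_name="resnet50",
        data_dir="/path/to/imagenet",
        per_channel_quantization=True,
        batch_size_train=128,
        batch_size_test=128,
        batch_size_onnx=1,
        calibrator="histogram",
        pretrained=True)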

◆ main()

def classification_flow.main(cmdline_args)

◆ evaluate_onnx()

def classification_flow.evaluate_onnx(onnx_filename, data_loader, criterion, print_freq)
Evaluate accuracy on the given ONNX file using the provided data loader and criterion.
   The method returns the average top-1 accuracy on the given dataset.
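The signature suggests an ONNX Runtime evaluation loop; a rough sketch of that pattern (not the sample's actual implementation, and omitting the criterion/print_freq handling) could look like this:

    import numpy as np
    import onnxruntime as ort

    def evaluate_onnx_sketch(onnx_filename, data_loader):
        # Illustrative top-1 accuracy loop; data_loader is assumed to yield
        # (images, labels) as torch tensors, as in the PyTorch loaders above.
        session = ort.InferenceSession(onnx_filename)
        input_name = session.get_inputs()[0].name
        correct, total = 0, 0
        for images, labels in data_loader:
            logits = session.run(None, {input_name: images.numpy()})[0]
            correct += int((np.argmax(logits, axis=1) == labels.numpy()).sum())
            total += labels.shape[0]
        return 100.0 * correct / total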

◆ export_onnx()

def classification_flow.export_onnx(model, onnx_filename, batch_onnx, per_channel_quantization)

◆ calibrate_model()

def classification_flow.calibrate_model(model, model_name, data_loader, num_calib_batch, calibrator, hist_percentile, out_dir)
    Feed data to the network and calibrate.
    Arguments:
        model: classification model
        model_name: name to use when creating state files
        data_loader: calibration data set
        num_calib_batch: number of calibration batches to run
        calibrator: type of calibration to use (max/histogram)
        hist_percentile: percentiles to be used for histogram calibration
        out_dir: directory to save state files in
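Given the other functions in this namespace, calibration presumably chains collect_stats() and compute_amax() and then saves one calibrated state file per setting. A hedged sketch of that flow (file naming and keyword arguments are illustrative):

    import os
    import torch
    from classification_flow import collect_stats, compute_amax

    def calibrate_model_sketch(model, model_name, data_loader, num_calib_batch,
                               calibrator, hist_percentile, out_dir):
        # Illustrative only -- mirrors the documented arguments, not the sample code.
        with torch.no_grad():
            collect_stats(model, data_loader, num_calib_batch)
        if calibrator == "max":
            compute_amax(model, method="max")
            torch.save(model.state_dict(),
                       os.path.join(out_dir, f"{model_name}-max-{num_calib_batch}.pth"))
        else:
            # Histogram calibration: one state file per requested percentile.
            for percentile in hist_percentile:
                compute_amax(model, method="percentile", percentile=percentile)
                torch.save(model.state_dict(),
                           os.path.join(out_dir, f"{model_name}-pct{percentile}.pth"))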

◆ collect_stats()

def classification_flow.collect_stats(model, data_loader, num_batches)
Feed data to the network and collect statistics.
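With the pytorch-quantization toolkit, collecting statistics typically means switching every TensorQuantizer into calibration mode, running a limited number of batches, and then restoring quantized behaviour. A sketch of that pattern, assuming the model lives on a CUDA device:

    import torch
    from pytorch_quantization import nn as quant_nn

    def collect_stats_sketch(model, data_loader, num_batches):
        # Illustrative only. Put every TensorQuantizer into calibration mode.
        for module in model.modules():
            if isinstance(module, quant_nn.TensorQuantizer):
                if module._calibrator is not None:
                    module.disable_quant()
                    module.enable_calib()
                else:
                    module.disable()
        # Feed a limited number of batches through the network.
        with torch.no_grad():
            for i, (images, _) in enumerate(data_loader):
                model(images.cuda())
                if i + 1 >= num_batches:
                    break
        # Restore normal quantized behaviour.
        for module in model.modules():
            if isinstance(module, quant_nn.TensorQuantizer):
                if module._calibrator is not None:
                    module.enable_quant()
                    module.disable_calib()
                else:
                    module.enable()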

◆ compute_amax()

def classification_flow.compute_amax(model, **kwargs)

◆ finetune_model()

def classification_flow.finetune_model(model, data_loader)
Fine-tune the model.
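"Fine-tune" here presumably means a short quantization-aware training pass; a generic sketch (optimizer, learning rate, and epoch count are placeholders, and the model is assumed to be on a CUDA device):

    import torch
    import torch.nn as nn

    def finetune_model_sketch(model, data_loader, epochs=1, lr=1e-4):
        # Illustrative fine-tuning loop; hyperparameters are placeholders.
        model.train()
        criterion = nn.CrossEntropyLoss()
        optimizer = torch.optim.SGD(model.parameters(), lr=lr, momentum=0.9)
        for _ in range(epochs):
            for images, labels in data_loader:
                images, labels = images.cuda(), labels.cuda()
                optimizer.zero_grad()
                loss = criterion(model(images), labels)
                loss.backward()
                optimizer.step()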

◆ build_sensitivity_profile()

def classification_flow.build_sensitivity_profile(model, criterion, data_loader_test)

Variable Documentation

◆ res

classification_flow.res = main(sys.argv[1:])