TensorRT  7.2.1.6
NVIDIA TensorRT
polygraphy.backend.onnx.loader.ModifyOnnx Class Reference

Public Member Functions

def __init__ (self, model, do_shape_inference=None, outputs=None, exclude_outputs=None)
 
def __call__ (self)
 

Public Attributes

 do_shape_inference
 
 outputs
 
 exclude_outputs
 

Private Attributes

 _model
 

Constructor & Destructor Documentation

◆ __init__()

def polygraphy.backend.onnx.loader.ModifyOnnx.__init__(self, model, do_shape_inference=None, outputs=None, exclude_outputs=None)
Functor that modifies an ONNX model.

Args:
    model (Callable() -> onnx.ModelProto): A loader that can supply an ONNX model.

    do_shape_inference (bool):
            Whether to run ONNX shape inference on the model.

    outputs (Sequence[str]):
            Names of tensors to mark as outputs. If provided, this will override the
            existing model outputs.
            If a value of `constants.MARK_ALL` is used instead of a list, all tensors in the network are marked.

    exclude_outputs (Sequence[str]):
            Names of tensors to exclude as outputs. This can be useful in conjunction with
            ``outputs=constants.MARK_ALL`` to omit specific outputs.
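To illustrate the semantics above, here is a minimal, self-contained sketch of the same functor pattern. It deliberately avoids importing ``onnx`` or ``polygraphy``: the model is a plain dict, and ``MARK_ALL`` is a hypothetical stand-in for ``constants.MARK_ALL``.

```python
MARK_ALL = object()  # stand-in sentinel: mark every tensor as an output

class ModifyModel:
    """Hypothetical sketch mirroring ModifyOnnx's loader/functor interface."""

    def __init__(self, model, outputs=None, exclude_outputs=None):
        self._model = model          # a loader: Callable() -> model
        self.outputs = outputs
        self.exclude_outputs = exclude_outputs

    def __call__(self):
        model = self._model()        # invoke the upstream loader
        if self.outputs is MARK_ALL:
            # MARK_ALL overrides existing outputs with every tensor
            model["outputs"] = list(model["tensors"])
        elif self.outputs is not None:
            model["outputs"] = list(self.outputs)
        if self.exclude_outputs is not None:
            model["outputs"] = [o for o in model["outputs"]
                                if o not in self.exclude_outputs]
        return model

def load_model():
    # Stand-in for a real upstream loader that supplies a model
    return {"tensors": ["x", "mid", "y"], "outputs": ["y"]}

loader = ModifyModel(load_model, outputs=MARK_ALL, exclude_outputs=["x"])
print(loader()["outputs"])  # ['mid', 'y']
```

Note how ``exclude_outputs`` pairs with ``MARK_ALL``: first every tensor is marked, then the listed names are filtered back out.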

Member Function Documentation

◆ __call__()

def polygraphy.backend.onnx.loader.ModifyOnnx.__call__(self)
Modifies an ONNX model.

Returns:
    onnx.ModelProto: The modified ONNX model.
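Because a loader is just a zero-argument callable, functors in this style compose: one loader can be passed as the ``model`` argument of the next, and nothing runs until the outermost ``__call__``. The sketch below uses hypothetical stand-in names (``base_loader``, ``rename_outputs``) rather than real Polygraphy loaders.

```python
def base_loader():
    # Stand-in for an upstream loader supplying a model
    return {"tensors": ["a", "b"], "outputs": ["b"]}

def rename_outputs(loader):
    """Wrap a loader in another zero-argument callable, as ModifyOnnx does."""
    def modified():
        model = loader()           # upstream loader is invoked lazily, here
        model["outputs"] = [o.upper() for o in model["outputs"]]
        return model
    return modified

pipeline = rename_outputs(base_loader)
print(pipeline()["outputs"])  # ['B']
```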

Member Data Documentation

◆ _model

polygraphy.backend.onnx.loader.ModifyOnnx._model
private

◆ do_shape_inference

polygraphy.backend.onnx.loader.ModifyOnnx.do_shape_inference

◆ outputs

polygraphy.backend.onnx.loader.ModifyOnnx.outputs

◆ exclude_outputs

polygraphy.backend.onnx.loader.ModifyOnnx.exclude_outputs

The documentation for this class was generated from the following file: