NVIDIA TensorRT 7.2.1.6
polygraphy.backend.trt.runner.TrtRunner Class Reference
Inherits polygraphy.backend.base.runner.BaseRunner.

Public Member Functions

def __init__ (self, engine, name=None)
 
def get_input_metadata (self)
 
def activate_impl (self)
 
def set_shapes_from_feed_dict (self, feed_dict)
 
def infer_impl (self, feed_dict)
 
def deactivate_impl (self)
 
def last_inference_time (self)
 
def __enter__ (self)
 
def __exit__ (self, exc_type, exc_value, traceback)
 
def activate (self)
 
def infer_impl (self)
 
def infer (self, feed_dict)
 
def deactivate (self)
 

Public Attributes

 owns_engine
 
 owns_context
 
 engine
 
 context
 
 host_output_buffers
 
 stream
 
 inference_time
 
 name
 
 is_active
 

Static Public Attributes

 RUNNER_COUNTS = defaultdict(int)
 

Private Attributes

 _engine_or_context
 

Detailed Description

Runs inference using a TensorRT engine.
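A minimal usage sketch (the ONNX path, input name, and shape are placeholders; EngineFromNetwork and NetworkFromOnnxPath are the companion loaders from polygraphy.backend.trt):

    import numpy as np
    from polygraphy.backend.trt import EngineFromNetwork, NetworkFromOnnxPath, TrtRunner

    # Build the engine lazily; the runner invokes this callable when it is activated.
    build_engine = EngineFromNetwork(NetworkFromOnnxPath("model.onnx"))

    with TrtRunner(build_engine) as runner:
        # "x" and its shape are placeholders; use your model's real input names and shapes.
        outputs = runner.infer(feed_dict={"x": np.ones((1, 3, 224, 224), dtype=np.float32)})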

Constructor & Destructor Documentation

◆ __init__()

def polygraphy.backend.trt.runner.TrtRunner.__init__ (   self,
  engine,
  name = None 
)
Args:
    engine (Callable() -> Union[trt.ICudaEngine, trt.IExecutionContext]):
    A callable that supplies either a TensorRT engine or an execution context.
    If an engine is supplied, the runner creates an execution context automatically;
    otherwise, it uses the provided context.
    If the engine or context is provided directly rather than through a callable,
    the runner does *not* take ownership of it and therefore will not destroy it.


    name (str):
    The human-readable name prefix to use for this runner.
    A runner count and timestamp will be appended to this prefix.

Reimplemented from polygraphy.backend.base.runner.BaseRunner.
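As a sketch of the ownership rule above, passing a callable lets the runner create and later destroy the engine, while passing an already-built engine object leaves ownership with the caller. The engine file path and deserialization code below are illustrative only:

    import tensorrt as trt
    from polygraphy.backend.trt import TrtRunner

    def load_engine():
        # Deserialize a previously serialized engine; "model.engine" is a placeholder path.
        with open("model.engine", "rb") as f, trt.Runtime(trt.Logger(trt.Logger.WARNING)) as runtime:
            return runtime.deserialize_cuda_engine(f.read())

    # Callable form: the runner takes ownership of the engine it obtains.
    owning_runner = TrtRunner(load_engine, name="owned")

    # Direct form: the runner uses the engine but does not destroy it.
    engine = load_engine()
    borrowing_runner = TrtRunner(engine, name="borrowed")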

Member Function Documentation

◆ get_input_metadata()

def polygraphy.backend.trt.runner.TrtRunner.get_input_metadata (   self)
Returns information about the inputs of the model.
Shapes here may include dynamic dimensions, represented by ``None``.
Must be called only after activate() and before deactivate().

Returns:
    TensorMetadata: Input names, shapes, and data types.

Reimplemented from polygraphy.backend.base.runner.BaseRunner.
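For instance, the metadata can be used to build a random feed_dict, substituting a concrete size for any dynamic (``None``) dimension. This is a sketch that assumes TensorMetadata maps each input name to its data type and shape, and that build_engine is defined as in the earlier example:

    import numpy as np

    with TrtRunner(build_engine) as runner:
        feed_dict = {}
        for name, (dtype, shape) in runner.get_input_metadata().items():
            # Replace dynamic dimensions (None) with an arbitrary concrete size.
            concrete_shape = [dim if dim is not None else 1 for dim in shape]
            feed_dict[name] = np.random.random_sample(concrete_shape).astype(dtype)
        outputs = runner.infer(feed_dict)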

◆ activate_impl()

def polygraphy.backend.trt.runner.TrtRunner.activate_impl (   self)
Implementation for runner activation. Derived classes should override this function
rather than ``activate()``.

Reimplemented from polygraphy.backend.base.runner.BaseRunner.

◆ set_shapes_from_feed_dict()

def polygraphy.backend.trt.runner.TrtRunner.set_shapes_from_feed_dict (   self,
  feed_dict 
)
Sets context shapes according to the provided feed_dict, then resizes
buffers as needed.

Args:
    feed_dict (OrderedDict[str, numpy.ndarray]): A mapping of input tensor names to corresponding input NumPy arrays.

Returns:
    Tuple[int, int]: The start and end binding indices of the modified bindings.
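A sketch of calling this method directly for a model with dynamic input shapes; the runner must be active first, and the input name, shape, and build_engine loader are placeholders carried over from the earlier examples:

    import numpy as np

    runner = TrtRunner(build_engine)
    runner.activate()
    feed_dict = {"x": np.ones((4, 3, 224, 224), dtype=np.float32)}
    start_binding, end_binding = runner.set_shapes_from_feed_dict(feed_dict)
    print("Modified bindings: {} to {}".format(start_binding, end_binding))
    runner.deactivate()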

◆ infer_impl() [1/2]

def polygraphy.backend.trt.runner.TrtRunner.infer_impl (   self,
  feed_dict 
)

◆ deactivate_impl()

def polygraphy.backend.trt.runner.TrtRunner.deactivate_impl (   self)
Implementation for runner deactivation. Derived classes should override this function
rather than ``deactivate()``.

Reimplemented from polygraphy.backend.base.runner.BaseRunner.

◆ last_inference_time()

def polygraphy.backend.base.runner.BaseRunner.last_inference_time (   self)
inherited
Returns the total inference time required during the last call to ``infer()``.

Returns:
    float: The time in seconds, or None if runtime was not measured by the runner.
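For example, the measured runtime can be read back immediately after an inference (build_engine and feed_dict are assumed from the earlier sketches):

    with TrtRunner(build_engine) as runner:
        runner.infer(feed_dict)
        elapsed = runner.last_inference_time()
        if elapsed is not None:
            print("Inference took {:.4f} s".format(elapsed))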

◆ __enter__()

def polygraphy.backend.base.runner.BaseRunner.__enter__ (   self)
inherited

◆ __exit__()

def polygraphy.backend.base.runner.BaseRunner.__exit__ (   self,
  exc_type,
  exc_value,
  traceback 
)
inherited

◆ activate()

def polygraphy.backend.base.runner.BaseRunner.activate (   self)
inherited
Activate the runner for inference. This may involve allocating GPU buffers, for example.
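When the context-manager form is not convenient, activation and deactivation can be paired manually. A sketch, with build_engine and feed_dict assumed from the earlier examples:

    runner = TrtRunner(build_engine)
    runner.activate()    # e.g. creates the execution context and allocates buffers
    try:
        outputs = runner.infer(feed_dict)
    finally:
        runner.deactivate()    # releases the resources acquired in activate()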

◆ infer_impl() [2/2]

def polygraphy.backend.base.runner.BaseRunner.infer_impl (   self)
inherited
Implementation for runner inference. Derived classes should override this function
rather than ``infer()``.

◆ infer()

def polygraphy.backend.base.runner.BaseRunner.infer (   self,
  feed_dict 
)
inherited
Runs inference using the provided feed_dict.

Args:
    feed_dict (OrderedDict[str, numpy.ndarray]): A mapping of input tensor names to corresponding input NumPy arrays.

Returns:
    OrderedDict[str, numpy.ndarray]:
    A mapping of output tensor names to their corresponding NumPy arrays.
    IMPORTANT: Runners may reuse these output buffers. Thus, if you need to save
    outputs from multiple inferences, you should make a copy with ``copy.deepcopy(outputs)``.
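A short sketch of saving results across multiple inferences; the input name, shape, and build_engine loader are placeholders from the earlier examples:

    import copy
    import numpy as np

    with TrtRunner(build_engine) as runner:
        first = runner.infer(feed_dict={"x": np.zeros((1, 3, 224, 224), dtype=np.float32)})
        saved_first = copy.deepcopy(first)  # the runner may reuse its output buffers
        second = runner.infer(feed_dict={"x": np.ones((1, 3, 224, 224), dtype=np.float32)})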

◆ deactivate()

def polygraphy.backend.base.runner.BaseRunner.deactivate (   self)
inherited
Deactivate the runner.

Member Data Documentation

◆ _engine_or_context

polygraphy.backend.trt.runner.TrtRunner._engine_or_context
private

◆ owns_engine

polygraphy.backend.trt.runner.TrtRunner.owns_engine

◆ owns_context

polygraphy.backend.trt.runner.TrtRunner.owns_context

◆ engine

polygraphy.backend.trt.runner.TrtRunner.engine

◆ context

polygraphy.backend.trt.runner.TrtRunner.context

◆ host_output_buffers

polygraphy.backend.trt.runner.TrtRunner.host_output_buffers

◆ stream

polygraphy.backend.trt.runner.TrtRunner.stream

◆ inference_time

polygraphy.backend.trt.runner.TrtRunner.inference_time

◆ RUNNER_COUNTS

polygraphy.backend.base.runner.BaseRunner.RUNNER_COUNTS = defaultdict(int)
static inherited

◆ name

polygraphy.backend.base.runner.BaseRunner.name
inherited

◆ is_active

polygraphy.backend.base.runner.BaseRunner.is_active
inherited
