TensorRT  7.2.1.6
NVIDIA TensorRT
polygraphy.backend.tf.runner.TfRunner Class Reference
Inherits polygraphy.backend.base.runner.BaseRunner.

Public Member Functions

def __init__ (self, sess, timeline_dir=None, name=None)
 
def activate_impl (self)
 
def get_input_metadata (self)
 
def deactivate_impl (self)
 
def infer_impl (self, feed_dict)
 
def last_inference_time (self)
 
def __enter__ (self)
 
def __exit__ (self, exc_type, exc_value, traceback)
 
def activate (self)
 
def infer_impl (self)
 
def infer (self, feed_dict)
 
def deactivate (self)
 

Public Attributes

 timeline_dir
 
 num_inferences
 
 run_options
 
 run_metadata
 
 sess
 
 inference_time
 
 name
 
 is_active
 

Static Public Attributes

 RUNNER_COUNTS = defaultdict(int)
 

Private Attributes

 _sess
 

Detailed Description

Runs inference using a TensorFlow session.
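Example (a minimal sketch): the frozen-graph path "model.pb", the tensor names "input:0" and "output:0", and the input shape below are placeholders assumed for this example, not part of the API. The ``sess`` argument is any callable returning a ``(tf.Session, output_names)`` tuple, as documented for ``__init__`` below:

    import numpy as np
    import tensorflow as tf

    from polygraphy.backend.tf import TfRunner


    def load_session():
        # Build a tf.compat.v1.Session from a frozen graph and return it
        # together with the output tensor names, matching the callable
        # signature expected by the ``sess`` constructor argument.
        graph_def = tf.compat.v1.GraphDef()
        with open("model.pb", "rb") as f:  # hypothetical frozen graph
            graph_def.ParseFromString(f.read())

        graph = tf.Graph()
        with graph.as_default():
            tf.compat.v1.import_graph_def(graph_def, name="")
        return tf.compat.v1.Session(graph=graph), ["output:0"]  # assumed output name


    with TfRunner(load_session, name="tf-example") as runner:
        feed_dict = {"input:0": np.ones((1, 224, 224, 3), dtype=np.float32)}  # assumed input
        outputs = runner.infer(feed_dict)
        for name, arr in outputs.items():
            print(name, arr.shape)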

Constructor & Destructor Documentation

◆ __init__()

def polygraphy.backend.tf.runner.TfRunner.__init__(self, sess, timeline_dir=None, name=None)
Args:
    sess (Callable() -> Tuple[tf.Session, Sequence[str]]):
            A callable that can supply a tuple containing a
            TensorFlow session and output names.

    timeline_dir (str):
            Path to write a TensorFlow timeline.
            Note that profiling may affect execution time.

    name (str):
            The human-readable name prefix to use for this runner.
            A runner count and timestamp will be appended to this prefix.
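
A minimal constructor sketch using a toy in-memory graph; the ``timeline_dir`` value "timelines" and the runner name "toy-runner" are illustrative only:

    import tensorflow as tf
    from polygraphy.backend.tf import TfRunner


    def make_session():
        # Must return (tf.Session, output_names), as documented above.
        graph = tf.Graph()
        with graph.as_default():
            inp = tf.compat.v1.placeholder(tf.float32, shape=[None, 4], name="x")
            tf.identity(inp * 2.0, name="y")
        return tf.compat.v1.Session(graph=graph), ["y:0"]


    # timeline_dir and name are both optional.
    runner = TfRunner(make_session, timeline_dir="timelines", name="toy-runner")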

Member Function Documentation

◆ activate_impl()

def polygraphy.backend.tf.runner.TfRunner.activate_impl(self)
Implementation for runner activation. Derived classes should override this function
rather than ``activate()``.

Reimplemented from polygraphy.backend.base.runner.BaseRunner.

◆ get_input_metadata()

def polygraphy.backend.tf.runner.TfRunner.get_input_metadata(self)
Returns information about the inputs of the model.
Shapes here may include dynamic dimensions, represented by ``None``.
Must be called only after activate() and before deactivate().

Returns:
    TensorMetadata: Input names, shapes, and data types.

Reimplemented from polygraphy.backend.base.runner.BaseRunner.
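
One way to use the returned metadata is to generate a matching feed_dict. The sketch below assumes each metadata entry unpacks as a ``(dtype, shape)`` pair and reuses the hypothetical ``make_session`` from the ``__init__`` example, substituting 1 for any dynamic (``None``) dimension:

    import numpy as np

    with TfRunner(make_session) as runner:  # metadata requires an active runner
        feed_dict = {}
        for name, (dtype, shape) in runner.get_input_metadata().items():
            concrete_shape = [dim if dim is not None else 1 for dim in shape]
            feed_dict[name] = np.random.rand(*concrete_shape).astype(dtype)
        outputs = runner.infer(feed_dict)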

◆ deactivate_impl()

def polygraphy.backend.tf.runner.TfRunner.deactivate_impl(self)
Implementation for runner deactivation. Derived classes should override this function
rather than ``deactivate()``.

Reimplemented from polygraphy.backend.base.runner.BaseRunner.

◆ infer_impl() [1/2]

def polygraphy.backend.tf.runner.TfRunner.infer_impl(self, feed_dict)

◆ last_inference_time()

def polygraphy.backend.base.runner.BaseRunner.last_inference_time(self)
inherited
Returns the total inference time required during the last call to ``infer()``.

Returns:
    float: The time in seconds, or None if runtime was not measured by the runner.
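
Sketch of typical use, reusing the active ``runner`` and ``feed_dict`` from the earlier examples; the ``None`` check covers runners that do not measure inference time:

    outputs = runner.infer(feed_dict)
    elapsed = runner.last_inference_time()
    if elapsed is not None:
        print("Last inference took {:.4f} seconds".format(elapsed))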

◆ __enter__()

def polygraphy.backend.base.runner.BaseRunner.__enter__(self)
inherited

◆ __exit__()

def polygraphy.backend.base.runner.BaseRunner.__exit__(self, exc_type, exc_value, traceback)
inherited

◆ activate()

def polygraphy.backend.base.runner.BaseRunner.activate(self)
inherited
Activate the runner for inference. This may involve allocating GPU buffers, for example.
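When a ``with`` block is inconvenient, activation can be managed explicitly. A minimal sketch, reusing the hypothetical ``make_session`` and ``feed_dict`` from the earlier examples:

    runner = TfRunner(make_session)
    runner.activate()        # prepare the runner for inference
    try:
        outputs = runner.infer(feed_dict)
    finally:
        runner.deactivate()  # release resources even if inference fails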

◆ infer_impl() [2/2]

def polygraphy.backend.base.runner.BaseRunner.infer_impl(self)
inherited
Implementation for runner inference. Derived classes should override this function
rather than ``infer()``.

◆ infer()

def polygraphy.backend.base.runner.BaseRunner.infer(self, feed_dict)
inherited
Runs inference using the provided feed_dict.

Args:
    feed_dict (OrderedDict[str, numpy.ndarray]): A mapping of input tensor names to corresponding input NumPy arrays.

Returns:
    OrderedDict[str, numpy.ndarray]:
    A mapping of output tensor names to their corresponding NumPy arrays.
    IMPORTANT: Runners may reuse these output buffers. Thus, if you need to save
    outputs from multiple inferences, you should make a copy with ``copy.deepcopy(outputs)``.
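Because output buffers may be reused between calls, results that must outlive the next call to ``infer()`` should be copied. A sketch, assuming an active ``runner`` and some iterable ``feed_dicts`` of input mappings:

    import copy

    all_outputs = []
    for feed_dict in feed_dicts:  # hypothetical sequence of feed_dicts
        outputs = runner.infer(feed_dict)
        all_outputs.append(copy.deepcopy(outputs))  # keep a private copy of each result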

◆ deactivate()

def polygraphy.backend.base.runner.BaseRunner.deactivate(self)
inherited
Deactivate the runner.

Member Data Documentation

◆ _sess

polygraphy.backend.tf.runner.TfRunner._sess
private

◆ timeline_dir

polygraphy.backend.tf.runner.TfRunner.timeline_dir

◆ num_inferences

polygraphy.backend.tf.runner.TfRunner.num_inferences

◆ run_options

polygraphy.backend.tf.runner.TfRunner.run_options

◆ run_metadata

polygraphy.backend.tf.runner.TfRunner.run_metadata

◆ sess

polygraphy.backend.tf.runner.TfRunner.sess

◆ inference_time

polygraphy.backend.tf.runner.TfRunner.inference_time

◆ RUNNER_COUNTS

polygraphy.backend.base.runner.BaseRunner.RUNNER_COUNTS = defaultdict(int)
static inherited

◆ name

polygraphy.backend.base.runner.BaseRunner.name
inherited

◆ is_active

polygraphy.backend.base.runner.BaseRunner.is_active
inherited

The documentation for this class was generated from the following file: