Context for executing inference using an engine, with functionally unsafe features.
Public Member Functions

virtual bool execute(int32_t batchSize, void **bindings) noexcept = 0
    Synchronously execute inference on a batch.

virtual bool enqueue(int32_t batchSize, void **bindings, cudaStream_t stream, cudaEvent_t *inputConsumed) noexcept = 0
    Asynchronously execute inference on a batch.

virtual void setDebugSync(bool sync) noexcept = 0
    Set the debug sync flag.

virtual bool getDebugSync() const noexcept = 0
    Get the debug sync flag.

virtual void setProfiler(IProfiler *) noexcept = 0
    Set the profiler.

virtual IProfiler *getProfiler() const noexcept = 0
    Get the profiler.

virtual const ICudaEngine &getEngine() const noexcept = 0
    Get the associated engine.

virtual void destroy() noexcept = 0
    Destroy this object.

virtual void setName(const char *name) noexcept = 0
    Set the name of the execution context.

virtual const char *getName() const noexcept = 0
    Return the name of the execution context.

virtual void setDeviceMemory(void *memory) noexcept = 0
    Set the device memory for use by this execution context.

virtual Dims getStrides(int32_t bindingIndex) const noexcept = 0
    Return the strides of the buffer for the given binding.

__attribute__((deprecated)) virtual bool setOptimizationProfile(int32_t profileIndex) noexcept = 0
    Select an optimization profile for the current context. (Deprecated: use setOptimizationProfileAsync() instead.)

virtual int32_t getOptimizationProfile() const noexcept = 0
    Get the index of the currently selected optimization profile.

virtual bool setBindingDimensions(int32_t bindingIndex, Dims dimensions) noexcept = 0
    Set the dynamic dimensions of a binding.

virtual Dims getBindingDimensions(int32_t bindingIndex) const noexcept = 0
    Get the dynamic dimensions of a binding.

virtual bool setInputShapeBinding(int32_t bindingIndex, const int32_t *data) noexcept = 0
    Set values of an input tensor required by shape calculations.

virtual bool getShapeBinding(int32_t bindingIndex, int32_t *data) const noexcept = 0
    Get values of an input tensor required for shape calculations, or of an output tensor produced by shape calculations.

virtual bool allInputDimensionsSpecified() const noexcept = 0
    Whether all dynamic dimensions of input tensors have been specified.

virtual bool allInputShapesSpecified() const noexcept = 0
    Whether all input shape bindings have been specified.

virtual void setErrorRecorder(IErrorRecorder *recorder) noexcept = 0
    Set the ErrorRecorder for this interface.

virtual IErrorRecorder *getErrorRecorder() const noexcept = 0
    Get the ErrorRecorder assigned to this interface.

virtual bool executeV2(void **bindings) noexcept = 0
    Synchronously execute inference on a network.

virtual bool enqueueV2(void **bindings, cudaStream_t stream, cudaEvent_t *inputConsumed) noexcept = 0
    Asynchronously execute inference.

virtual bool setOptimizationProfileAsync(int32_t profileIndex, cudaStream_t stream) noexcept = 0
    Select an optimization profile for the current context with async semantics.

Protected Member Functions

virtual ~IExecutionContext() noexcept
Detailed Description

Context for executing inference using an engine, with functionally unsafe features.
Multiple execution contexts may exist for one ICudaEngine instance, allowing the same engine to be used for the execution of multiple batches simultaneously. If the engine supports dynamic shapes, each execution context in concurrent use must use a separate optimization profile.
Constructor & Destructor Documentation

~IExecutionContext()
virtual ~IExecutionContext() noexcept    [inline, protected, virtual]

Member Function Documentation

execute()
virtual bool execute(int32_t batchSize, void **bindings) noexcept = 0    [pure virtual]
Synchronously execute inference on a batch.
This method requires an array of input and output buffers. The mapping from tensor names to indices can be queried using ICudaEngine::getBindingIndex().

Parameters
    batchSize    The batch size. This is at most the value supplied when the engine was built.
    bindings     An array of pointers to input and output buffers for the network.
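For illustration, a minimal sketch of one synchronous batch on an implicit-batch engine. The binding names "input" and "output", the assumption of exactly two bindings, and the buffer sizes are illustrative only and not part of this API.

```cpp
#include "NvInfer.h"
#include <cuda_runtime_api.h>

// Sketch: run one synchronous batch. Assumes `engine` was deserialized
// elsewhere and has exactly two bindings, named "input" and "output"
// (hypothetical names), with known byte sizes.
bool runBatch(nvinfer1::ICudaEngine& engine, const void* hostInput,
              void* hostOutput, size_t inputBytes, size_t outputBytes,
              int32_t batchSize)
{
    nvinfer1::IExecutionContext* context = engine.createExecutionContext();
    if (!context) return false;

    void* dIn{nullptr};
    void* dOut{nullptr};
    cudaMalloc(&dIn, inputBytes);
    cudaMalloc(&dOut, outputBytes);
    cudaMemcpy(dIn, hostInput, inputBytes, cudaMemcpyHostToDevice);

    // The bindings array is indexed by binding index, looked up by tensor name.
    void* bindings[2]{};
    bindings[engine.getBindingIndex("input")] = dIn;
    bindings[engine.getBindingIndex("output")] = dOut;

    // Blocks until inference for this batch has completed.
    const bool ok = context->execute(batchSize, bindings);
    if (ok)
        cudaMemcpy(hostOutput, dOut, outputBytes, cudaMemcpyDeviceToHost);

    cudaFree(dIn);
    cudaFree(dOut);
    context->destroy();
    return ok;
}
```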
enqueue()
virtual bool enqueue(int32_t batchSize, void **bindings, cudaStream_t stream, cudaEvent_t *inputConsumed) noexcept = 0    [pure virtual]
Asynchronously execute inference on a batch.
This method requires an array of input and output buffers. The mapping from tensor names to indices can be queried using ICudaEngine::getBindingIndex().

Parameters
    batchSize        The batch size. This is at most the value supplied when the engine was built.
    bindings         An array of pointers to input and output buffers for the network.
    stream           A CUDA stream on which the inference kernels will be enqueued.
    inputConsumed    An optional event which will be signaled when the input buffers can be refilled with new data.
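A minimal asynchronous sketch, assuming device buffers that are already sized for the batch and a two-binding layout (input first, output second); the event usage shows when the input buffer may be refilled.

```cpp
#include "NvInfer.h"
#include <cuda_runtime_api.h>

// Sketch: asynchronous execution of one batch. The inputConsumed event lets
// the caller refill the input buffer while the rest of the network runs.
// The binding layout (input = 0, output = 1) is an assumption for illustration.
bool enqueueBatch(nvinfer1::IExecutionContext& context, void* dInput,
                  void* dOutput, int32_t batchSize)
{
    cudaStream_t stream{};
    cudaEvent_t inputConsumed{};
    cudaStreamCreate(&stream);
    cudaEventCreate(&inputConsumed);

    void* bindings[] = {dInput, dOutput};

    // Kernels are enqueued on `stream`; the call returns without waiting.
    const bool ok = context.enqueue(batchSize, bindings, stream, &inputConsumed);
    if (ok)
    {
        // Once this event fires, dInput may be overwritten with the next batch.
        cudaEventSynchronize(inputConsumed);
        // ... refill dInput here ...
        // Wait for all enqueued work on the stream to finish.
        cudaStreamSynchronize(stream);
    }

    cudaEventDestroy(inputConsumed);
    cudaStreamDestroy(stream);
    return ok;
}
```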
setDebugSync()
virtual void setDebugSync(bool sync) noexcept = 0    [pure virtual]
Set the debug sync flag.
If this flag is set to true, the engine will log the successful execution for each kernel during execute(). It has no effect when using enqueue().
getDebugSync()
virtual bool getDebugSync() const noexcept = 0    [pure virtual]
Get the debug sync flag.
setProfiler()
virtual void setProfiler(IProfiler *) noexcept = 0    [pure virtual]
Set the profiler.
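A hedged sketch of attaching a per-layer timing profiler. It assumes IProfiler's single callback is reportLayerTime(const char*, float) and that a synchronous execute() call is used so that layer timings are reported; the formatting and accumulation are illustrative.

```cpp
#include "NvInfer.h"
#include <cstdio>

// Sketch: accumulate and print per-layer GPU times reported by TensorRT.
struct SimpleProfiler : public nvinfer1::IProfiler
{
    float totalMs{0.0F};

    void reportLayerTime(const char* layerName, float ms) noexcept override
    {
        totalMs += ms;
        std::printf("%-60s %.3f ms\n", layerName, ms);
    }
};

// Usage sketch: attach the profiler before running inference.
void profileOnce(nvinfer1::IExecutionContext& context, int32_t batchSize, void** bindings)
{
    SimpleProfiler profiler;
    context.setProfiler(&profiler);
    context.execute(batchSize, bindings); // per-layer times arrive via the callback
    std::printf("total: %.3f ms\n", profiler.totalMs);
}
```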
getProfiler()
virtual IProfiler *getProfiler() const noexcept = 0    [pure virtual]
Get the profiler.
getEngine()
virtual const ICudaEngine &getEngine() const noexcept = 0    [pure virtual]
Get the associated engine.
destroy()
virtual void destroy() noexcept = 0    [pure virtual]
Destroy this object.
setName()
virtual void setName(const char *name) noexcept = 0    [pure virtual]

Set the name of the execution context.
getName()
virtual const char *getName() const noexcept = 0    [pure virtual]
Return the name of the execution context.
setDeviceMemory()
virtual void setDeviceMemory(void *memory) noexcept = 0    [pure virtual]
Set the device memory for use by this execution context.
The memory must be aligned with cuda memory alignment property (using cudaGetDeviceProperties()), and its size must be at least that returned by getDeviceMemorySize(). Setting memory to nullptr is acceptable if getDeviceMemorySize() returns 0. If using enqueue() to run the network, the memory is in use from the invocation of enqueue() until network execution is complete. If using execute(), it is in use until execute() returns. Releasing or otherwise using the memory for other purposes during this time will result in undefined behavior.
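A sketch of supplying caller-managed scratch memory, under the assumption that the context is created with ICudaEngine::createExecutionContextWithoutDeviceMemory() so the default per-context allocation is skipped, and that cudaMalloc satisfies the device's alignment requirement.

```cpp
#include "NvInfer.h"
#include <cuda_runtime_api.h>

// Sketch: create a context that does not own its scratch memory, then attach
// a caller-managed allocation of at least getDeviceMemorySize() bytes.
nvinfer1::IExecutionContext* makeContextWithSharedScratch(nvinfer1::ICudaEngine& engine,
                                                          void** scratchOut)
{
    const size_t scratchBytes = engine.getDeviceMemorySize();

    void* scratch{nullptr};
    if (scratchBytes > 0 && cudaMalloc(&scratch, scratchBytes) != cudaSuccess)
        return nullptr;

    nvinfer1::IExecutionContext* context = engine.createExecutionContextWithoutDeviceMemory();
    if (!context)
    {
        cudaFree(scratch);
        return nullptr;
    }

    // The memory must stay valid, and unused elsewhere, while this context executes.
    context->setDeviceMemory(scratch);
    *scratchOut = scratch; // caller frees it after destroying the context
    return context;
}
```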
getStrides()
virtual Dims getStrides(int32_t bindingIndex) const noexcept = 0    [pure virtual]
Return the strides of the buffer for the given binding.
The strides are in units of elements, not components or bytes. For example, for TensorFormat::kHWC8, a stride of one spans 8 scalars.
Note that strides can be different for different execution contexts with dynamic shapes.
If the bindingIndex is invalid or there are dynamic dimensions that have not been set yet, returns Dims with Dims::nbDims = -1.
Parameters
    bindingIndex    The binding index.
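For illustration, a small sketch that turns a coordinate into a linear element offset using the per-context strides; it assumes all dynamic dimensions have already been resolved so getStrides() returns a valid Dims.

```cpp
#include "NvInfer.h"

// Sketch: compute the linear element offset of a coordinate within a binding,
// using strides expressed in elements (not bytes).
int64_t elementOffset(const nvinfer1::IExecutionContext& context,
                      int32_t bindingIndex, const int32_t* coords)
{
    const nvinfer1::Dims strides = context.getStrides(bindingIndex);
    if (strides.nbDims < 0)
        return -1; // invalid binding index or unresolved dynamic dimensions

    int64_t offset = 0;
    for (int32_t i = 0; i < strides.nbDims; ++i)
        offset += static_cast<int64_t>(coords[i]) * strides.d[i];
    return offset; // multiply by sizeof(element type) for a byte offset
}
```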
setOptimizationProfile()
__attribute__((deprecated)) virtual bool setOptimizationProfile(int32_t profileIndex) noexcept = 0    [pure virtual]
Select an optimization profile for the current context.
Parameters
    profileIndex    Index of the profile. It must lie between 0 and getEngine().getNbOptimizationProfiles() - 1.
The selected profile will be used in subsequent calls to execute() or enqueue().
When an optimization profile is switched via this API, TensorRT may enqueue GPU memory copy operations required to set up the new profile during the subsequent enqueue() operations. To avoid these calls during enqueue(), use setOptimizationProfileAsync() instead.
If the associated CUDA engine has dynamic inputs, this method must be called at least once with a unique profileIndex before calling execute or enqueue (i.e. the profile index may not be in use by another execution context that has not been destroyed yet). For the first execution context that is created for an engine, setOptimizationProfile(0) is called implicitly.
If the associated CUDA engine does not have inputs with dynamic shapes, this method need not be called, in which case the default profile index of 0 will be used (this is particularly the case for all safe engines).
setOptimizationProfile() must be called before calling setBindingDimensions() and setInputShapeBinding() for all dynamic input tensors or input shape tensors, which in turn must be called before either execute() or enqueue().
getOptimizationProfile()
virtual int32_t getOptimizationProfile() const noexcept = 0    [pure virtual]
Get the index of the currently selected optimization profile.
If the profile index has not been set yet (implicitly to 0 for the first execution context to be created, or explicitly for all subsequent contexts), an invalid value of -1 will be returned and all calls to enqueue() or execute() will fail until a valid profile index has been set.
setBindingDimensions()
virtual bool setBindingDimensions(int32_t bindingIndex, Dims dimensions) noexcept = 0    [pure virtual]
Set the dynamic dimensions of a binding.
Requires the engine to be built without an implicit batch dimension. The binding must be an input tensor, and all dimensions must be compatible with the network definition (i.e. only the wildcard dimension -1 can be replaced with a new dimension > 0). Furthermore, the dimensions must be in the valid range for the currently selected optimization profile, and the corresponding engine must not be safety-certified.
This method will fail unless a valid optimization profile is defined for the current execution context (getOptimizationProfile() must not be -1).
For all dynamic non-output bindings (which have at least one wildcard dimension of -1), this method needs to be called before either enqueue() or execute() may be called. This can be checked using the method allInputDimensionsSpecified().
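A sketch of the typical dynamic-shape flow for an explicit-batch engine: select a profile, set the input dimensions, verify, then run. The input name "input" and the 1x3x224x224 shape are assumptions and must lie within the selected profile's range.

```cpp
#include "NvInfer.h"
#include <cuda_runtime_api.h>

// Sketch: resolve dynamic input dimensions before running inference.
bool runDynamic(nvinfer1::ICudaEngine& engine, nvinfer1::IExecutionContext& context,
                void** bindings, cudaStream_t stream)
{
    // A valid profile must be selected first (profile 0 is implicit for the
    // first context created from the engine).
    if (!context.setOptimizationProfileAsync(0, stream))
        return false;
    cudaStreamSynchronize(stream);

    const int32_t inputIndex = engine.getBindingIndex("input");
    if (!context.setBindingDimensions(inputIndex, nvinfer1::Dims4{1, 3, 224, 224}))
        return false;

    // All dynamic input dimensions must be resolved before execution.
    if (!context.allInputDimensionsSpecified())
        return false;

    return context.executeV2(bindings);
}
```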
getBindingDimensions()
virtual Dims getBindingDimensions(int32_t bindingIndex) const noexcept = 0    [pure virtual]
Get the dynamic dimensions of a binding.
If the engine was built with an implicit batch dimension, same as ICudaEngine::getBindingDimensions.
If setBindingDimensions() has been called on this binding (or if there are no dynamic dimensions), all dimensions will be positive. Otherwise, it is necessary to call setBindingDimensions() before enqueue() or execute() may be called.
If the bindingIndex is out of range, an invalid Dims with nbDims == -1 is returned. The same invalid Dims will be returned if the engine was not built with an implicit batch dimension and if the execution context is not currently associated with a valid optimization profile (i.e. if getOptimizationProfile() returns -1).
If ICudaEngine::bindingIsInput(bindingIndex) is false, then both allInputDimensionsSpecified() and allInputShapesSpecified() must be true before calling this method.
For backwards compatibility with earlier versions of TensorRT, a bindingIndex that does not belong to the current profile is corrected as described for ICudaEngine::getProfileDimensions.
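A common use is to size an output buffer once the input dimensions (and any shape bindings) are specified; the sketch below assumes a float output type for illustration.

```cpp
#include "NvInfer.h"
#include <cuda_runtime_api.h>

// Sketch: query the resolved output dimensions and allocate a device buffer.
void* allocateOutput(const nvinfer1::IExecutionContext& context, int32_t outputIndex)
{
    const nvinfer1::Dims dims = context.getBindingDimensions(outputIndex);
    if (dims.nbDims < 0)
        return nullptr; // invalid index or no valid optimization profile

    int64_t volume = 1;
    for (int32_t i = 0; i < dims.nbDims; ++i)
        volume *= dims.d[i]; // all dimensions are positive once inputs are specified

    void* dOut{nullptr};
    cudaMalloc(&dOut, static_cast<size_t>(volume) * sizeof(float)); // float output assumed
    return dOut;
}
```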
setInputShapeBinding()
virtual bool setInputShapeBinding(int32_t bindingIndex, const int32_t *data) noexcept = 0    [pure virtual]
Set values of input tensor required by shape calculations.
Parameters
    bindingIndex    Index of an input tensor for which ICudaEngine::isShapeBinding(bindingIndex) and ICudaEngine::bindingIsInput(bindingIndex) are both true.
    data            Pointer to values of the input tensor. The number of values should be the product of the dimensions returned by getBindingDimensions(bindingIndex).

If ICudaEngine::isShapeBinding(bindingIndex) and ICudaEngine::bindingIsInput(bindingIndex) are both true, this method must be called before enqueue() or execute() may be called. This method will fail unless a valid optimization profile is defined for the current execution context (getOptimizationProfile() must not be -1).
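For illustration, a sketch that feeds an input shape tensor; the binding name "shape" and the four values are assumptions, and the value count must match the volume of getBindingDimensions(bindingIndex).

```cpp
#include "NvInfer.h"

// Sketch: set the values of an input shape tensor (for example, the target
// shape consumed by a reshape/resize layer).
bool setTargetShape(const nvinfer1::ICudaEngine& engine,
                    nvinfer1::IExecutionContext& context)
{
    const int32_t idx = engine.getBindingIndex("shape"); // hypothetical name
    if (idx < 0 || !engine.isShapeBinding(idx) || !engine.bindingIsInput(idx))
        return false;

    // A 1-D shape tensor with 4 elements is assumed here.
    const int32_t values[4] = {1, 3, 224, 224};
    if (!context.setInputShapeBinding(idx, values))
        return false;

    // Both dimensions and shape values must be specified before execution.
    return context.allInputShapesSpecified();
}
```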
getShapeBinding()
virtual bool getShapeBinding(int32_t bindingIndex, int32_t *data) const noexcept = 0    [pure virtual]
Get values of an input tensor required for shape calculations or an output tensor produced by shape calculations.
Parameters
    bindingIndex    Index of an input or output tensor for which ICudaEngine::isShapeBinding(bindingIndex) is true.
    data            Pointer to where values will be written. The number of values written is the product of the dimensions returned by getBindingDimensions(bindingIndex).
If ICudaEngine::bindingIsInput(bindingIndex) is false, then both allInputDimensionsSpecified() and allInputShapesSpecified() must be true before calling this method. The method will also fail if no valid optimization profile has been set for the current execution context, i.e. if getOptimizationProfile() returns -1.
allInputDimensionsSpecified()
virtual bool allInputDimensionsSpecified() const noexcept = 0    [pure virtual]
Whether all dynamic dimensions of input tensors have been specified.
Trivially true if network has no dynamically shaped input tensors.
allInputShapesSpecified()
virtual bool allInputShapesSpecified() const noexcept = 0    [pure virtual]
Whether all input shape bindings have been specified.
Trivially true if network has no input shape bindings.
setErrorRecorder()
virtual void setErrorRecorder(IErrorRecorder *recorder) noexcept = 0    [pure virtual]
Set the ErrorRecorder for this interface.
Assigns the ErrorRecorder to this interface. The ErrorRecorder will track all errors during execution. This function will call incRefCount of the registered ErrorRecorder at least once. Setting recorder to nullptr unregisters the recorder with the interface, resulting in a call to decRefCount if a recorder has been registered.
Parameters
    recorder    The error recorder to register with this interface.
getErrorRecorder()
virtual IErrorRecorder *getErrorRecorder() const noexcept = 0    [pure virtual]
Get the ErrorRecorder assigned to this interface.
Retrieves the assigned error recorder object for the given class. A default error recorder does not exist, so a nullptr will be returned if setErrorRecorder has not been called.
executeV2()
virtual bool executeV2(void **bindings) noexcept = 0    [pure virtual]
Synchronously execute inference on a network.
This method requires an array of input and output buffers. The mapping from tensor names to indices can be queried using ICudaEngine::getBindingIndex(). This method only works for execution contexts built with full dimension networks.
Parameters
    bindings    An array of pointers to input and output buffers for the network.
enqueueV2()
virtual bool enqueueV2(void **bindings, cudaStream_t stream, cudaEvent_t *inputConsumed) noexcept = 0    [pure virtual]
Asynchronously execute inference.
This method requires an array of input and output buffers. The mapping from tensor names to indices can be queried using ICudaEngine::getBindingIndex(). This method only works for execution contexts built with full dimension networks.
Parameters
    bindings         An array of pointers to input and output buffers for the network.
    stream           A CUDA stream on which the inference kernels will be enqueued.
    inputConsumed    An optional event which will be signaled when the input buffers can be refilled with new data.
setOptimizationProfileAsync()
virtual bool setOptimizationProfileAsync(int32_t profileIndex, cudaStream_t stream) noexcept = 0    [pure virtual]
Select an optimization profile for the current context with async semantics.
Parameters
    profileIndex    Index of the profile. It must lie between 0 and getEngine().getNbOptimizationProfiles() - 1.
    stream          A CUDA stream on which the cudaMemcpyAsync calls may be enqueued.
When an optimization profile is switched via this API, TensorRT may require that data is copied via cudaMemcpyAsync. It is the application’s responsibility to guarantee that synchronization between the profile sync stream and the enqueue stream occurs.
The selected profile will be used in subsequent calls to execute() or enqueue(). If the associated CUDA engine has inputs with dynamic shapes, the optimization profile must be set with a unique profileIndex before calling execute or enqueue. For the first execution context that is created for an engine, setOptimizationProfile(0) is called implicitly.
If the associated CUDA engine does not have inputs with dynamic shapes, this method need not be called, in which case the default profile index of 0 will be used.
setOptimizationProfileAsync() must be called before calling setBindingDimensions() and setInputShapeBinding() for all dynamic input tensors or input shape tensors, which in turn must be called before either execute() or enqueue().
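A sketch of switching profiles on a stream before running with the new shapes. The profile index, input name, and batch shape are assumptions; a stream synchronize is used here as the simplest way to order the switch before dependent work.

```cpp
#include "NvInfer.h"
#include <cuda_runtime_api.h>

// Sketch: switch the optimization profile asynchronously, then set dimensions
// and enqueue. The chosen profile must not be in use by another live context.
bool switchProfileAndRun(nvinfer1::ICudaEngine& engine,
                         nvinfer1::IExecutionContext& context,
                         int32_t profileIndex, void** bindings, cudaStream_t stream)
{
    // Any copies required for the switch are enqueued on `stream`.
    if (!context.setOptimizationProfileAsync(profileIndex, stream))
        return false;

    // The application is responsible for ordering the switch before later work.
    cudaStreamSynchronize(stream);

    // Dimensions are set after the profile switch and before execution.
    const int32_t inputIndex = engine.getBindingIndex("input"); // hypothetical name
    if (!context.setBindingDimensions(inputIndex, nvinfer1::Dims4{8, 3, 224, 224}))
        return false;

    return context.enqueueV2(bindings, stream, nullptr);
}
```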