TensorRT 7.2.1.6
NVIDIA TensorRT
samplesCommon::HostFree Class Reference
Public Member Functions
void operator() (void *ptr) const
Member Function Documentation
operator()()
void samplesCommon::HostFree::operator() (void *ptr) const [inline]
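Only the call operator's signature is documented here; HostFree itself carries no state. A functor with this shape is typically used as a custom deleter so that host-side buffers are released automatically when they go out of scope. The sketch below is illustrative only: the HostAllocator counterpart and the malloc/free pairing are assumptions made for the example, not details stated on this page.

#include <cstddef>
#include <cstdlib>
#include <memory>

namespace samplesCommon
{
// Assumed counterpart allocator, shown only so HostFree has something to pair with.
class HostAllocator
{
public:
    bool operator()(void** ptr, std::size_t size) const
    {
        *ptr = std::malloc(size);   // assumption: plain host allocation
        return *ptr != nullptr;
    }
};

// Sketch of HostFree: a stateless deleter functor for host memory.
class HostFree
{
public:
    void operator()(void* ptr) const
    {
        std::free(ptr);             // assumption: releases what HostAllocator returned
    }
};
} // namespace samplesCommon

int main()
{
    // HostFree can serve as the deleter of a std::unique_ptr, so the buffer
    // is released automatically when the pointer goes out of scope.
    void* raw = nullptr;
    if (samplesCommon::HostAllocator{}(&raw, 1024))
    {
        std::unique_ptr<void, samplesCommon::HostFree> buffer(raw);
        // ... use the 1 KiB host buffer here ...
    }
    return 0;
}

A stateless deleter like this composes cleanly with std::unique_ptr and with buffer templates that take allocation and release policies as type parameters.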
The documentation for this class was generated from the following file:
buffers.h
Generated on Sat Jan 30 2021 07:35:53 for TensorRT by Doxygen 1.8.17