TensorRT 7.2.1.6
NVIDIA TensorRT
polygraphy.util.cuda.DeviceBuffer Class Reference

Public Member Functions

def __init__ (self, shape=None, dtype=None)
 
def address (self)
 
def allocate (self, nbytes)
 
def free (self)
 
def resize (self, shape)
 
def copy_from (self, host_buffer, stream=None)
 
def copy_to (self, host_buffer, stream=None)
 
def __str__ (self)
 

Public Attributes

 shape
 
 dtype
 
 allocated_nbytes
 

Private Member Functions

def _check_dtype_matches (self, host_buffer)
 

Private Attributes

 _ptr
 

Constructor & Destructor Documentation

◆ __init__()

def polygraphy.util.cuda.DeviceBuffer.__init__(self, shape=None, dtype=None)
Represents a buffer on the GPU.

Args:
    shape (Tuple[int]): The initial shape of the buffer.
    dtype (numpy.dtype): The data type of the buffer.
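The buffer's size in device memory follows directly from its shape and dtype. A minimal sketch of that computation, assuming NumPy conventions (the `nbytes_for` helper is hypothetical, not part of Polygraphy):

```python
import numpy as np

def nbytes_for(shape, dtype):
    """Bytes needed for a buffer of the given shape and dtype (NumPy conventions)."""
    # Volume is the product of the dimensions; an empty shape () denotes
    # a scalar, for which np.prod returns 1.
    volume = int(np.prod(shape, dtype=np.int64))
    return volume * np.dtype(dtype).itemsize

# e.g. a (2, 3) float32 buffer needs 2 * 3 * 4 = 24 bytes
print(nbytes_for((2, 3), np.float32))  # 24
```

This is the quantity a device allocation of the given shape and dtype must cover.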

Member Function Documentation

◆ address()

def polygraphy.util.cuda.DeviceBuffer.address(self)

◆ allocate()

def polygraphy.util.cuda.DeviceBuffer.allocate(self, nbytes)

◆ free()

def polygraphy.util.cuda.DeviceBuffer.free(self)

◆ resize()

def polygraphy.util.cuda.DeviceBuffer.resize(self, shape)

◆ _check_dtype_matches()

def polygraphy.util.cuda.DeviceBuffer._check_dtype_matches(self, host_buffer)
private

◆ copy_from()

def polygraphy.util.cuda.DeviceBuffer.copy_from(self, host_buffer, stream=None)

◆ copy_to()

def polygraphy.util.cuda.DeviceBuffer.copy_to(self, host_buffer, stream=None)
Copies from this device buffer into the provided host buffer.
The host buffer must be contiguous in memory (see np.ascontiguousarray).

Args:
    host_buffer (numpy.ndarray): The host buffer to copy into.
    stream (Stream):
    A Stream instance (see util/cuda.py). Performs a synchronous copy if no stream is provided.

Returns:
    numpy.ndarray: The host buffer, possibly reallocated if the provided buffer was too small.
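The contiguity requirement matters in practice because common NumPy operations such as transposing or slicing produce non-contiguous views. A small illustration of checking and fixing contiguity before passing a host buffer to copy_to() (pure NumPy; no GPU required):

```python
import numpy as np

# A transposed view of a 2-D array is typically not C-contiguous,
# so it cannot receive a raw device-to-host copy directly.
host = np.zeros((4, 8), dtype=np.float32).T
print(host.flags["C_CONTIGUOUS"])  # False

# np.ascontiguousarray returns a C-contiguous copy (or the array
# itself if it is already contiguous), which is safe to pass as
# the host_buffer argument.
contig = np.ascontiguousarray(host)
print(contig.flags["C_CONTIGUOUS"])  # True
```

Note that when a copy is made, writes land in the new contiguous array, not in the original view, so use the returned array afterwards.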

◆ __str__()

def polygraphy.util.cuda.DeviceBuffer.__str__(self)

Member Data Documentation

◆ shape

polygraphy.util.cuda.DeviceBuffer.shape

◆ dtype

polygraphy.util.cuda.DeviceBuffer.dtype

◆ allocated_nbytes

polygraphy.util.cuda.DeviceBuffer.allocated_nbytes

◆ _ptr

polygraphy.util.cuda.DeviceBuffer._ptr
private

The documentation for this class was generated from the following file: