NVIDIA TensorRT 7.2.1.6
polygraphy.backend.onnx.loader.OnnxFromTfGraph Class Reference
Inheritance and collaboration diagrams for polygraphy.backend.onnx.loader.OnnxFromTfGraph (diagrams not reproduced here).

Public Member Functions

def __init__ (self, graph, opset=None, optimize=None, fold_constant=None)
 
def __call__ (self)
 

Public Attributes

 opset
 
 fold_constant
 
 optimize
 

Private Attributes

 _graph
 

Constructor & Destructor Documentation

◆ __init__()

def polygraphy.backend.onnx.loader.OnnxFromTfGraph.__init__(self, graph, opset=None, optimize=None, fold_constant=None)
Functor that loads a TensorFlow graph and converts it to ONNX using the tf2onnx converter.

Args:
    graph (Callable() -> Tuple[tf.Graph, Sequence[str]]):
            A callable that can supply a tuple containing a TensorFlow graph and output names.
    opset (int): The ONNX opset to use during conversion.
    optimize (bool): Whether to use tf2onnx's graph optimization pass.
    fold_constant (bool):
            Whether to fold constants in the TensorFlow Graph.
            Requires that ``optimize`` is also enabled.
            Defaults to True.
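
For orientation, a minimal usage sketch follows. It assumes a frozen TensorFlow model saved at "model.pb" and uses Polygraphy's GraphFromFrozen loader (from polygraphy.backend.tf) to supply the required (graph, output names) tuple; constructing the functor does not itself run the conversion.

    from polygraphy.backend.onnx import OnnxFromTfGraph
    from polygraphy.backend.tf import GraphFromFrozen

    # GraphFromFrozen supplies the (tf.Graph, output names) tuple expected by `graph`.
    # Constructing the functor only stores the arguments; no conversion happens yet.
    load_onnx = OnnxFromTfGraph(GraphFromFrozen("model.pb"), opset=11, fold_constant=True)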

Member Function Documentation

◆ __call__()

def polygraphy.backend.onnx.loader.OnnxFromTfGraph.__call__(self)
Converts a TensorFlow model into ONNX.

Returns:
    onnx.ModelProto: The ONNX model.
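
Continuing the sketch above, calling the functor runs the tf2onnx conversion and returns an onnx.ModelProto, which can then be saved with the standard onnx API:

    import onnx

    # Invoking the loader performs the TensorFlow -> ONNX conversion.
    model = load_onnx()
    onnx.save(model, "model.onnx")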

Member Data Documentation

◆ _graph

polygraphy.backend.onnx.loader.OnnxFromTfGraph._graph
private

◆ opset

polygraphy.backend.onnx.loader.OnnxFromTfGraph.opset

◆ fold_constant

polygraphy.backend.onnx.loader.OnnxFromTfGraph.fold_constant

◆ optimize

polygraphy.backend.onnx.loader.OnnxFromTfGraph.optimize
