This is a general guideline for converting a Keras model to .uff.
Converting .pb to .uff using the UFF converter:

When a TensorFlow node has multiple outputs, other nodes reference them as

```
input: "tf_node"
input: "tf_node:1"
input: "tf_node:2"
```

Although these inputs all come from the same node "tf_node", graphsurgeon regards them as outputs of three different nodes. To avoid this, add "tf_node:1" and "tf_node:2" to the plugin map dict (even though they are not real nodes) and connect them with:

```python
node.input.append(other_node.name)
```

(The example code is in config.py.)

Update the Mask_RCNN model from NHWC to NCHW:
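As a rough illustration of the plugin map idea, here is a minimal config.py sketch in the style used by the convert-to-uff tool. The plugin name, op name, and node names are placeholders, not taken from the actual model:

```python
# Hypothetical config.py sketch: map the extra output tensors
# "tf_node:1" and "tf_node:2" to the same plugin node, so graphsurgeon
# does not treat them as outputs of three distinct nodes.
import graphsurgeon as gs

# Placeholder plugin node; name and op are illustrative only.
my_plugin = gs.create_plugin_node(name="my_plugin", op="MyPlugin_TRT")

namespace_plugin_map = {
    "tf_node": my_plugin,
    "tf_node:1": my_plugin,  # not a real node, added to the map on purpose
    "tf_node:2": my_plugin,  # not a real node, added to the map on purpose
}

def preprocess(dynamic_graph):
    # convert-to-uff calls preprocess() when invoked with -p config.py;
    # collapse_namespaces folds all mapped names into the plugin node.
    dynamic_graph.collapse_namespaces(namespace_plugin_map)
```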
1) Set the default image data format:

```python
import keras.backend as K
K.set_image_data_format('channels_first')
```

2) Change all BN layers from NHWC to NCHW (axis=1):

```python
x = BatchNorm(name=bn_name_base + '2a', axis=1)(x, training=train_bn)
x = KL.TimeDistributed(BatchNorm(axis=1), name='mrcnn_class_bn1')(x, training=train_bn)
```

3) Modify class `PyramidROIAlign` to be compatible with the NCHW format:

- Wrap `tf.image.crop_and_resize` with a permutation, because it only supports the NHWC format:

```python
def NCHW_crop_and_resize(feature_map, level_boxes, box_indices, crop_size, method="bilinear"):
    # NCHW -> NHWC
    feature_map = tf.transpose(feature_map, [0, 2, 3, 1])
    # crop and resize in NHWC
    box_feature = tf.image.crop_and_resize(feature_map, level_boxes,
                                           box_indices, crop_size,
                                           method=method)
    # NHWC -> NCHW
    box_feature = tf.transpose(box_feature, [0, 3, 1, 2])
    return box_feature

pooled.append(NCHW_crop_and_resize(feature_maps[i], level_boxes, box_indices,
                                   self.pool_shape, method="bilinear"))
```
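The two transposes in the wrapper above are exact inverses, so the crop happens in NHWC while the layer's inputs and outputs stay NCHW. A minimal NumPy sketch (demo code, not part of the model) checks this round trip:

```python
import numpy as np

def nchw_layout_roundtrip_demo():
    # Same permutations as NCHW_crop_and_resize, applied to a dummy tensor.
    x = np.arange(2 * 3 * 4 * 5).reshape(2, 3, 4, 5)  # NCHW: (N, C, H, W)
    nhwc = np.transpose(x, (0, 2, 3, 1))              # NCHW -> NHWC
    back = np.transpose(nhwc, (0, 3, 1, 2))           # NHWC -> NCHW
    return nhwc.shape, np.array_equal(back, x)
```

Here `nhwc.shape` is `(2, 4, 5, 3)` and the round trip returns the original tensor unchanged.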
- Change `compute_output_shape` to return the NCHW shape:

```python
return input_shape[0][:2] + (input_shape[2][1], ) + self.pool_shape
```

4) Change the input format in function `build_rpn_model`:

```python
input_feature_map = KL.Input(shape=[depth, None, None], name="input_rpn_feature_map")
```

5) Permute the features in function `rpn_graph` and change the `Lambda` reshapes to `Reshape` layers:

```python
x = KL.Permute((2, 3, 1))(x)
rpn_class_logits = KL.Reshape((-1, 2))(x)
x = KL.Permute((2, 3, 1))(x)
rpn_bbox = KL.Reshape((-1, 4))(x)
```
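The `Permute` must come before the `Reshape`: in NCHW the per-anchor scores are laid out channel-first, while the original NHWC model flattened them position-first. A small NumPy sketch (illustrative shapes, not model code) shows the ordering that the permute-then-reshape sequence produces:

```python
import numpy as np

# Dummy RPN output: (N, 2*anchors_per_location, H, W) with 3 anchors, 2x2 grid.
x_nchw = np.arange(1 * 6 * 2 * 2).reshape(1, 6, 2, 2)
x_hwc = np.transpose(x_nchw, (0, 2, 3, 1))  # equivalent of KL.Permute((2, 3, 1))
rpn_class_logits = x_hwc.reshape(1, -1, 2)  # equivalent of KL.Reshape((-1, 2))
```

The result has shape `(1, 12, 2)`: one (bg, fg) score pair per anchor per spatial position, matching what the NHWC model's `Lambda` reshape produced.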
6) Change the squeeze axes in function `fpn_classifier_graph`:

```python
shared = KL.Lambda(lambda x: K.squeeze(K.squeeze(x, 4), 3), name="pool_squeeze")(x)
```

7) Change the input format in function `build` of class `MaskRCNN`:

```python
input_image = KL.Input(shape=[config.IMAGE_SHAPE[2], 1024, 1024], name="input_image")
```

8) (Optional) Change the input blob for prediction in function `detect` of class `MaskRCNN`:

```python
molded_input_images = np.transpose(molded_images, (0, 3, 1, 2))
detections, _, _, mrcnn_mask, _, _, _ =\
    self.keras_model.predict([molded_input_images, image_metas, anchors], verbose=0)
mrcnn_mask = np.transpose(mrcnn_mask, (0, 1, 3, 4, 2))
```
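Step 8 converts the preprocessed images to NCHW on the way in and the predicted masks back to channels-last on the way out, so the rest of `detect` is unchanged. A shape walk-through with dummy arrays (the 81 classes and 28x28 mask size are illustrative values, not from the text):

```python
import numpy as np

# NHWC batch as produced by mold_inputs (hypothetical sizes).
molded_images = np.zeros((1, 1024, 1024, 3), dtype=np.float32)
molded_input_images = np.transpose(molded_images, (0, 3, 1, 2))  # NHWC -> NCHW for the model

# Mask head output of the NCHW model: (N, rois, classes, h, w) -- dummy stand-in.
mrcnn_mask_nchw = np.zeros((1, 100, 81, 28, 28), dtype=np.float32)
mrcnn_mask = np.transpose(mrcnn_mask_nchw, (0, 1, 3, 4, 2))  # back to (N, rois, h, w, classes)
```

After the transposes, `molded_input_images` is `(1, 3, 1024, 1024)` and `mrcnn_mask` is `(1, 100, 28, 28, 81)`, the layout the unchanged `unmold_detections` code expects.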
- For conversion to UFF, please refer to these instructions.
NOTE: For reference, the successfully converted model should contain 3049 nodes.