Introduction

Overview

After a custom operator has been deployed to the operator library, you can run a TensorFlow network that contains the custom operator to check its execution result.

Operator verification on a network involves all the deliverables generated during operator development, including the implementation files, operator prototype definition files, operator information library, and operator plugins. This section describes only the verification method.

You can verify the custom operator on the network using either of the following methods:

  • Train a model that contains the custom operator.
  • Build a single-operator network that contains only the custom operator at the TensorFlow frontend and verify it.

In MindStudio

For operator development in MindStudio, you can verify your TensorFlow custom operator on a network via model training.

MindStudio allows you to create Ascend Training projects. When a project is created, the necessary deliverable templates and build configuration files are generated automatically. You only need to prepare the dataset and training script; the training project can then be started from the GUI.

During the training process, you can view the runtime logs in real time in the Run window.

For details about how to create and execute an Ascend Training project, see "Model training" in the MindStudio IDE User Guide.

On Command Line

For operator development on the command line, you can verify your custom operator on a network using either of the following methods:

  1. Train the TensorFlow network that contains the custom operator online on the Ascend AI Processor.

    For details about how to execute the training script, see the TensorFlow 1.15 Model Porting Guide or the TensorFlow 2.6.5 Model Porting Guide.

  2. Build a single-operator network that contains only the custom operator and verify its execution result on the Ascend AI Processor.

The following describes how to build a single-operator network for operator verification on the command line.
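The core idea of single-operator verification can be sketched in plain Python: feed the same inputs to the operator under test and to a golden reference computed on the host, then compare the two outputs within a tolerance. The element-wise add operator, the `golden_add` reference, the `device_out` stand-in, and the tolerance values below are all illustrative assumptions for this sketch, not part of the Ascend toolchain or its APIs.

```python
# Hypothetical sketch of the comparison step in single-operator verification.
# In a real flow, `device_out` would be the tensor fetched from the network
# running on the device; here it is a hard-coded stand-in.

def golden_add(xs, ys):
    # Golden reference for an illustrative element-wise "add" operator,
    # computed on the host with ordinary Python arithmetic.
    return [x + y for x, y in zip(xs, ys)]

def verify(actual, expected, rtol=1e-6, atol=1e-6):
    # Element-wise closeness check (same shape assumed), mirroring the
    # rtol/atol semantics commonly used for numeric comparison.
    return len(actual) == len(expected) and all(
        abs(a - e) <= atol + rtol * abs(e) for a, e in zip(actual, expected)
    )

x = [1.0, 2.5, -3.0]
y = [0.5, 0.5, 3.0]
device_out = [1.5, 3.0, 0.0]   # stand-in for the device-side result
print(verify(device_out, golden_add(x, y)))  # prints True when results match
```

The same pattern scales to real operators: run the single-operator network on the device, dump its output, and compare it against a trusted host-side implementation with a tolerance appropriate for the operator's precision.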