Overview

The X2MindSpore tool migrates models and training scripts developed on PyTorch or TensorFlow to code that can run on MindSpore, based on specific adaptation rules. This greatly accelerates script migration and minimizes the workload of developers.

  • X2MindSpore can migrate the models listed in Model List (and others not listed). After a successful migration, some models run directly on MindSpore, while others require minor adaptation.
  • X2MindSpore only supports migration of PyTorch and TensorFlow training scripts.
  • X2MindSpore generates the adaptation layer directory x2ms_adapter in the migrated script directory. This directory stores the MindSpore-implemented APIs that replace the PyTorch/TensorFlow APIs. The tool rewrites the original PyTorch/TensorFlow API calls to the adaptation layer APIs according to a mapping. After the migration, the code depends on the adaptation layer at run time.
  • X2MindSpore guarantees only that the migrated models in Model List train successfully and converge. The final accuracy and performance are not guaranteed.
  • Training projects migrated by the tool require MindSpore 1.7 or later.
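To illustrate the mapping described above, the sketch below shows how a rule-based rewriter can replace framework API names with adaptation layer names. This is a minimal illustration of the idea, not the tool's actual implementation; the mapping entries and the `x2ms_adapter` attribute paths shown here are assumed for the example.

```python
# Minimal sketch of mapping-based API rewriting (illustrative only).
# The mapping entries below are assumptions, not X2MindSpore's real table.
API_MAPPING = {
    "torch.nn.Linear": "x2ms_adapter.nn.Linear",
    "torch.optim.SGD": "x2ms_adapter.optim.SGD",
}

def migrate_line(line: str) -> str:
    """Rewrite framework API calls in one source line to adaptation layer calls."""
    for src_api, dst_api in API_MAPPING.items():
        line = line.replace(src_api, dst_api)
    return line

print(migrate_line("layer = torch.nn.Linear(10, 2)"))
# → layer = x2ms_adapter.nn.Linear(10, 2)
```

The real tool performs this rewriting on the abstract syntax tree rather than on raw text, so aliased imports and nested expressions are also handled; the string version above only conveys the mapping concept.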

Restrictions

  • MindSpore supports two run modes, Graph and PyNative. Due to Python syntax restrictions of Graph mode, only the ResNet and BiT series (PyTorch) models listed in Table 1 can be migrated to Graph mode. Other models can be migrated only to PyNative mode, which delivers lower performance. For details about the differences between the modes, see the MindSpore Documentation.
  • To work around the MindSpore restriction that tensors cannot be created during data processing, the tool sets the run mode to synchronous operator delivery during PyTorch migration. As a result, training performance may deteriorate. To improve performance, remove pynative_synchronize=True from context.set_context to restore asynchronous operator delivery. If an error is then reported, check the data processing code, remove the tensor creation part, and use NumPy ndarray instead.
  • When the current device is occupied by other programs and its available memory is insufficient, add device_id=* to the context.set_context statement, where * is the ID of the device to use.
  • The migrated TensorFlow 1 models cannot run on multiple devices.
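The context options referred to in the restrictions above are all set through one call. A hedged configuration sketch, assuming MindSpore 1.7+ where `mode`, `pynative_synchronize`, and `device_id` are parameters of `context.set_context`:

```python
from mindspore import context

context.set_context(
    mode=context.PYNATIVE_MODE,   # most migrated models; GRAPH_MODE only for the Table 1 models
    pynative_synchronize=True,    # set by the tool; remove to restore asynchronous delivery
    device_id=0,                  # specify when the default device is occupied
)
```

Removing the `pynative_synchronize=True` argument (rather than setting it to False) matches the guidance above; the `device_id` value is only an example.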