This section walks you through the key steps for configuring Pod-level rescheduling. For the feature overview, usage constraints, supported product models, and working principles of Pod-level rescheduling, see "Pod-level Rescheduling". The Dockerfile snippet below shows the changes required in the container image used for training.
# MindCluster resumable training adaptation package. MINDX_ELASTIC_PKG is the path to the Elastic Agent whl package; set it according to your actual environment.
RUN pip install $MINDX_ELASTIC_PKG
# Optional; the following command is required when using graceful fault tolerance, Pod-level rescheduling, or process-level rescheduling
RUN sed -i '/import logging/i import mindx_elastic.api' $(pip3 show torch | grep Location | awk -F ' ' '{print $2}')/torch/distributed/run.py
# Optional; the following command is required for Pod-level rescheduling under the MindSpore framework
RUN pip install $TASKD_WHL
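The RUN commands above reference $MINDX_ELASTIC_PKG and $TASKD_WHL without defining them. A minimal sketch of how they could be supplied, assuming the whl files are copied into the build context and passed as Docker build arguments; the file names, paths, and image tag below are placeholders, not values from the original guide:

# Hedged sketch: these lines would go before the RUN commands above; file names are placeholders
COPY mindx_elastic-*.whl taskd-*.whl /tmp/
ARG MINDX_ELASTIC_PKG=/tmp/mindx_elastic-*.whl
ARG TASKD_WHL=/tmp/taskd-*.whl
# Example build invocation (paths and tag are placeholders):
# docker build --build-arg MINDX_ELASTIC_PKG=/tmp/mindx_elastic-0.0.1-py3-none-any.whl \
#              --build-arg TASKD_WHL=/tmp/taskd-0.0.1-py3-none-any.whl -t train-image:pod-reschedule .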
Next, add the following labels to the training job's YAML under metadata.labels to enable Pod-level rescheduling:

...
metadata:
  labels:
    ...
    pod-rescheduling: "on"
    fault-scheduling: "force"
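After the job is submitted, you can check that the labels actually landed on its pods. A hedged usage example; the namespace "default" is a placeholder for wherever your training job runs:

# List the job's pods with the two labels shown as extra columns
kubectl get pods -n default -L pod-rescheduling,fault-scheduling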
... logger "server id is: ""${server_id}" if [ "${framework}" == "PyTorch" ]; then get_env_for_pytorch_multi_node_job DISTRIBUTED_ARGS="--nproc_per_node $GPUS_PER_NODE --nnodes $NNODES --node_rank $NODE_RANK --master_addr $MASTER_ADDR --master_port $MASTER_PORT --max_restarts 5 --monitor_interval 10 "