This release package works with the 1.0.0_core_r0.6.0 branch of MindSpeed at the link above. For environment, code, and dataset preparation, refer to the relevant instructions in the MindSpeed repository, and verify their security.
The service user must not be {MindIO-install-user}, HwHiAiUser, or hwMindX; which account to use is decided by the user according to the actual situation.
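For example (the account name below is purely illustrative, not mandated by this release), a dedicated service account could be created for running training jobs:

# Hypothetical example: create and switch to a dedicated training user
useradd -m train_user
su - train_user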
cd Megatron-LM/
vim pretrain_gpt.py
Add import torch_mindio to the import section so that the top of the file reads:

import torch
import mindspeed.megatron_adaptor
import torch_mindio
from functools import partial
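Before launching, a quick sanity check (assuming the MindIO package providing torch_mindio is installed in the current Python environment) confirms that the new import resolves:

python -c "import torch_mindio"

If this exits without an ImportError, pretrain_gpt.py will be able to load the module.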
The following example edits the "tests_extend/model_tests/perf_model/llama2/pretrain_llama2_70b_4k.sh" script.
vim tests_extend/model_tests/perf_model/llama2/pretrain_llama2_70b_4k.sh
#!/bin/bash

export CUDA_DEVICE_MAX_CONNECTIONS=1
source "tests_extend/system_tests/env_npu.sh"
# Enable MindIO's automatic patching of Megatron (periodic CheckPoint acceleration)
export MINDIO_AUTO_PATCH_MEGATRON=true
# Set to the NIC actually used for Gloo communication on this host
export GLOO_SOCKET_IFNAME=enp189s0f0
export LD_LIBRARY_PATH=/usr/local/Ascend/driver/lib64/driver:$LD_LIBRARY_PATH
source /usr/local/Ascend/ascend-toolkit/set_env.sh

# Change for multinode config
NPUS_PER_NODE=16
MASTER_ADDR=<master_ip_address>    # IP address of the master node
MASTER_PORT=6000
NNODES=8
NODE_RANK=<local_rank>             # rank of this node, 0 to NNODES-1
WORLD_SIZE=$(($NPUS_PER_NODE*$NNODES))

CKPT_DIR=./ckpt_llama
DATA_PATH="/home/dataset/llama2/alpaca_text_document"
TOKENIZER_MODEL="/home/dataset/model/llama-2-7b-hf/tokenizer.model"

TP=8
PP=4
CP=1

DISTRIBUTED_ARGS="
    --nproc_per_node $NPUS_PER_NODE \
    --nnodes $NNODES \
    --node_rank $NODE_RANK \
    --master_addr $MASTER_ADDR \
    --master_port $MASTER_PORT
"

GPT_ARGS="
    --tensor-model-parallel-size ${TP} \
    --pipeline-model-parallel-size ${PP} \
    --use-fused-rotary-pos-emb \
    --use-fused-swiglu \
    --use-fused-rmsnorm \
    --log-throughput \
    --overlap-grad-reduce \
    --overlap-param-gather \
    --use-ascend-mc2 \
    --num-layers-per-virtual-pipeline-stage 2 \
    --sequence-parallel \
    --use-distributed-optimizer \
    --num-layers 80 \
    --hidden-size 8192 \
    --ffn-hidden-size 28672 \
    --num-attention-heads 64 \
    --tokenizer-type Llama2Tokenizer \
    --tokenizer-model ${TOKENIZER_MODEL} \
    --seq-length 4096 \
    --max-position-embeddings 4096 \
    --micro-batch-size 1 \
    --global-batch-size 32 \
    --make-vocab-size-divisible-by 1 \
    --lr 1.0e-6 \
    --train-iters 5000 \
    --lr-decay-style cosine \
    --untie-embeddings-and-output-weights \
    --attention-dropout 0.0 \
    --init-method-std 0.01 \
    --hidden-dropout 0.0 \
    --position-embedding-type rope \
    --normalization RMSNorm \
    --swiglu \
    --use-flash-attn \
    --no-masked-softmax-fusion \
    --attention-softmax-in-fp32 \
    --min-lr 1.0e-7 \
    --weight-decay 0.1 \
    --clip-grad 1.0 \
    --adam-beta1 0.9 \
    --initial-loss-scale 4096.0 \
    --adam-beta2 0.95 \
    --adam-eps 1e-5 \
    --no-gradient-accumulation-fusion \
    --disable-bias-linear \
    --group-query-attention \
    --num-query-groups 8 \
    --lr-warmup-fraction 0.01 \
    --bf16
"

DATA_ARGS="
    --data-path $DATA_PATH \
    --split 949,50,1
"

OUTPUT_ARGS="
    --log-interval 1 \
    --save-interval 10000 \
    --eval-interval 1000 \
    --eval-iters 10
"

torchrun $DISTRIBUTED_ARGS pretrain_gpt.py \
    $GPT_ARGS \
    $DATA_ARGS \
    $OUTPUT_ARGS \
    --distributed-backend nccl \
    --save $CKPT_DIR

set +x
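As a launch sketch (illustrative only, not part of the release package): replace <master_ip_address> with the real IP of node 0, set NODE_RANK in the script on each of the 8 nodes to that node's own rank (0 through 7), then start the same script on every node:

bash tests_extend/model_tests/perf_model/llama2/pretrain_llama2_70b_4k.sh

Training begins once all NNODES ranks have connected to MASTER_ADDR:MASTER_PORT.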
The parameters related to the periodic CheckPoint acceleration feature are described as follows: