InternLM-7B-PyTorch

Contents

- InternLM-7B
- InternLM-65B

InternLM-7B

Training

The hardware configuration for InternLM-7B training is as follows:

| Hardware | Configuration |
| --- | --- |
| NPU | 8 x Ascend NPUs |

Script

  1. Clone the repository to your local server
git clone https://gitee.com/ascend/AscendSpeed.git 
cd AscendSpeed 
mkdir logs
mkdir ckpt
  2. Set up the environment (a quick install check follows the commands below)
# python3.8
conda create -n test python=3.8
conda activate test
# 安装 torch 和 torch_npu
pip install torch-2.1.0-cp38-cp38-manylinux_2_17_aarch64.manylinux2014_aarch64.whl
pip install torch_npu-2.1.0.post20231124_cp38-cp38-manylinux_2_17_aarch64.manylinux2014_aarch64.whl
# 安装 apex
pip install apex-0.1_ascend_20231124-cp38-cp38-linux_aarch64.whl
# 安装 megatron-core
pip3 install --no-use-pep517 -e git+https://github.com/NVIDIA/Megatron-LM.git@23.05#egg=megatron-core
# 安装 deepspeed 和 deepspeed_npu
pip install deepspeed==0.9.2
git clone https://gitee.com/ascend/DeepSpeed.git -b v0.9.2 deepspeed_npu
cd deepspeed_npu
pip3 install -e ./
cd ..
# 安装其余依赖包
pip install -r requirements.txt 
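Optionally, sanity-check the torch / torch_npu installation before continuing. The snippet below is a minimal, illustrative sketch, assuming the wheels above installed cleanly and at least one Ascend device is visible to the process:
# check_npu.py - quick check that torch_npu is usable (illustrative)
import torch
import torch_npu  # registers the NPU backend with torch

print("torch version:", torch.__version__)
print("NPU available:", torch.npu.is_available())
if torch.npu.is_available():
    print("NPU count:", torch.npu.device_count())
    x = torch.ones(2, 2).npu()   # move a small tensor to the first NPU
    print((x + x).cpu())         # trivial compute on the NPU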
  3. Download the Internlm-7B tokenizer files (a loading check follows the commands below)
#!/bin/bash
mkdir -p dataset/internlm
cd ./dataset/internlm
wget https://huggingface.co/internlm/internlm-7b/resolve/main/config.json
wget https://huggingface.co/internlm/internlm-7b/resolve/main/generation_config.json
wget https://huggingface.co/internlm/internlm-7b/resolve/main/special_tokens_map.json
wget https://huggingface.co/internlm/internlm-7b/resolve/main/tokenization_internlm.py
wget https://huggingface.co/internlm/internlm-7b/resolve/main/tokenizer.model
wget https://huggingface.co/internlm/internlm-7b/resolve/main/tokenizer_config.json
cd ../..
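To confirm that the tokenizer files downloaded completely, you can try loading them with transformers. This is only an illustrative check; it assumes transformers and sentencepiece are available (e.g. pulled in via requirements.txt), and trust_remote_code=True is needed because InternLM ships its own tokenization_internlm.py:
# verify_tokenizer.py - illustrative check of the downloaded vocab files
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained(
    "./dataset/internlm",     # directory populated by the wget commands above
    trust_remote_code=True,   # allow tokenization_internlm.py to be used
    use_fast=False,           # matches --tokenizer-not-use-fast used later
)
print("vocab size:", tokenizer.vocab_size)
print("sample ids:", tokenizer("hello InternLM")["input_ids"])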
  4. Download the Internlm-7B dataset and preprocess it (a check of the output files follows below)
cd dataset/
wget https://huggingface.co/datasets/tatsu-lab/alpaca/resolve/main/data/train-00000-of-00001-a09b74b3ef9c3b56.parquet
cd ..
#!/bin/bash
source /usr/local/Ascend/ascend-toolkit/set_env.sh 
python ./tools/preprocess_data.py \
    --input ./dataset/train-00000-of-00001-a09b74b3ef9c3b56.parquet \
    --tokenizer-name-or-path ./dataset/internlm \
    --output-prefix ./dataset/alpaca \
    --workers 4 \
    --log-interval 1000  \
    --tokenizer-type PretrainedFromHF  \
    --handler-name AlpacaPretrainHandler  \
    --tokenizer-not-use-fast \
    --append-eod
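The preprocessing command writes an indexed dataset under the --output-prefix, i.e. files starting with ./dataset/alpaca (typically a .bin/.idx pair). A small sketch to confirm that output exists; the exact file names are an assumption based on the --output-prefix argument:
# check_dataset.py - illustrative check for the preprocessed dataset files
import glob
import os

files = sorted(glob.glob("./dataset/alpaca*"))
for path in files:
    print(f"{path}: {os.path.getsize(path) / (1024 * 1024):.1f} MiB")
if not any(path.endswith(".bin") for path in files):
    raise SystemExit("no .bin file found - preprocessing may have failed")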
  5. Weight format conversion (helper sketches follow the commands below)

Download the Internlm-7B weights

# Note: to load huggingface pretrained weights, you need to work around a deepspeed bug in checkpoint loading:
# in the `_load_zero_checkpoint` function of `<deepspeed-installed-path>/runtime/engine.py`,
# change `if zero_sd_list is None` to `if zero_sd_list is None or len(zero_sd_list) == 0`

# original deepspeed/runtime/engine.py, around lines 2746-2748
zero_sd_list = self._get_all_zero_checkpoints(load_dir, tag)
if zero_sd_list is None:
    return False

# after the modification
zero_sd_list = self._get_all_zero_checkpoints(load_dir, tag)
if zero_sd_list is None or len(zero_sd_list) == 0:
    return False
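If you are unsure which engine.py to edit, the installed location of deepspeed can be printed with a short, illustrative snippet like this:
# locate_deepspeed.py - print the path of the installed deepspeed engine.py
import os
import deepspeed

engine_py = os.path.join(os.path.dirname(deepspeed.__file__), "runtime", "engine.py")
print("edit this file:", engine_py)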
mkdir model_from_hf
cd ./model_from_hf
# git-lfs must be installed
git clone https://huggingface.co/internlm/internlm-7b
cd ..

Convert the model weights from huggingface format into a format that AscendSpeed can load

mkdir model_weights
SCRIPT_PATH=./tools/ckpt_convert/llama/convert_weights_from_huggingface.py
python $SCRIPT_PATH \
    --input-model-dir ./model_from_hf/internlm-7b/ \
    --output-model-dir ./model_weights \
    --tensor-model-parallel-size 1 \
    --pipeline-model-parallel-size 1 \
    --type 7B \
    --bias \
    --deepspeed
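Because --tensor-model-parallel-size and --pipeline-model-parallel-size are both 1, the conversion writes a single model partition into ./model_weights; the exact file layout depends on the conversion tool. A quick, illustrative way to inspect what was produced:
# list_converted_weights.py - illustrative listing of the converted checkpoint
import os

for root, _, names in os.walk("./model_weights"):
    for name in sorted(names):
        path = os.path.join(root, name)
        print(f"{path}: {os.path.getsize(path) / (1024 * 1024):.1f} MiB")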
  6. Configure the Internlm-7B pre-training script
# modify the ascend-toolkit path
source /usr/local/Ascend/ascend-toolkit/set_env.sh
# modify the paths to the dataset, tokenizer, weights, etc.
TOKENIZER_PATH=./dataset/internlm  #tokenizer path
DATA=./dataset/alpaca_text_document  #processed dataset
CHECKPOINT=./model_weights/
  7. Launch the Internlm-7B pre-training script
bash examples/intern/pretrain_internlm_7b_zero.sh 

Performance

Throughput

Performance comparison of Internlm-7B on Ascend chips and reference chips:

| Device | Model | Total iterations | Sample throughput (samples/p/s) | Token throughput (tokens/p/s) | Time per iteration (s/step) | FLOPs (TFLOPs/s) |
| --- | --- | --- | --- | --- | --- | --- |
| NPUs | Internlm-7B | 2048 | 13.000 | 2943 | 19684.6 | 145.69 |
| Reference | Internlm-7B | - | - | 4078 | - | - |

Accuracy

NPU vs. reference (without pretrained weights): loss comparison and relative error (figure: NPU-Loss-and-Relative-Error)

NPU vs. reference (with pretrained weights): loss comparison and relative error (figure: NPU-Loss-with-weight-and-Relative-Error)

Inference

Inference script: examples/intern/generate_internlm_7b_deepspeed.sh

bash examples/intern/generate_internlm_7b_deepspeed.sh

Inference example (figure: Internlm-7b-inference)

Evaluation

Evaluation script: tasks/evaluation/eval_internlm.sh

bash  tasks/evaluation/eval_internlm.sh

Evaluation results of InternLM-7B on Ascend NPUs:

| Task | Model | Ascend value | Community value |
| --- | --- | --- | --- |
| MMLU | InternLM-7B | 48.8 | 51.0 |

InternLM-65B

Training

The hardware configuration for InternLM-65B training is as follows:

| Hardware | Configuration |
| --- | --- |
| NPU | 32 x Ascend NPUs |

Script

  1. Clone the repository to your local server
git clone https://gitee.com/ascend/AscendSpeed.git 
cd AscendSpeed 
mkdir logs
mkdir ckpt
  2. Set up the environment
# python3.8
conda create -n test python=3.8
conda activate test
# 安装 torch 和 torch_npu
pip install torch-2.1.0-cp38-cp38-manylinux_2_17_aarch64.manylinux2014_aarch64.whl
pip install torch_npu-2.1.0.post20231124_cp38-cp38-manylinux_2_17_aarch64.manylinux2014_aarch64.whl
# 安装 apex
pip install apex-0.1_ascend_20231124-cp38-cp38-linux_aarch64.whl
# 安装 megatron-core
pip3 install --no-use-pep517 -e git+https://github.com/NVIDIA/Megatron-LM.git@23.05#egg=megatron-core
# 安装 deepspeed 和 deepspeed_npu
pip install deepspeed==0.9.2
git clone https://gitee.com/ascend/DeepSpeed.git -b v0.9.2 deepspeed_npu
cd deepspeed_npu
pip3 install -e ./
cd ..
# 安装其余依赖包
pip install -r requirements.txt 
  3. Download the tokenizer files
#!/bin/bash
mkdir -p dataset/internlm
cd ./dataset/internlm
wget https://huggingface.co/internlm/internlm-7b/resolve/main/config.json
wget https://huggingface.co/internlm/internlm-7b/resolve/main/generation_config.json
wget https://huggingface.co/internlm/internlm-7b/resolve/main/special_tokens_map.json
wget https://huggingface.co/internlm/internlm-7b/resolve/main/tokenization_internlm.py
wget https://huggingface.co/internlm/internlm-7b/resolve/main/tokenizer.model
wget https://huggingface.co/internlm/internlm-7b/resolve/main/tokenizer_config.json
cd ../..
  4. Download the Internlm-65B dataset and preprocess it
cd dataset/
wget https://huggingface.co/datasets/tatsu-lab/alpaca/resolve/main/data/train-00000-of-00001-a09b74b3ef9c3b56.parquet
cd ..
#!/bin/bash
source /usr/local/Ascend/ascend-toolkit/set_env.sh 
python ./tools/preprocess_data.py \
    --input ./dataset/train-00000-of-00001-a09b74b3ef9c3b56.parquet \
    --tokenizer-name-or-path ./dataset/internlm \
    --output-prefix ./dataset/alpaca \
    --workers 4 \
    --log-interval 1000  \
    --tokenizer-type PretrainedFromHF  \
    --handler-name AlpacaPretrainHandler  \
    --tokenizer-not-use-fast \
    --append-eod
  5. Configure the Internlm-65B pre-training script
# modify the ascend-toolkit path
source /usr/local/Ascend/ascend-toolkit/set_env.sh
# modify the paths to the dataset, tokenizer, weights, etc.
TOKENIZER_PATH=./dataset/internlm  #tokenizer path
DATA=./dataset/alpaca_text_document  #processed dataset
CHECKPOINT=./model_weights/
  6. Launch the Internlm-65B pre-training script
bash examples/intern/pretrain_internlm_65b_zero.sh 

Performance

Throughput

Performance comparison of Internlm-65B on Ascend chips and reference chips:

| Device | Model | Total iterations | Sample throughput (samples/p/s) | Token throughput (tokens/p/s) | Time per iteration (s/step) | FLOPs (TFLOPs/s) |
| --- | --- | --- | --- | --- | --- | --- |
| NPUs | Internlm-65B | 50000 | 5.33 | 342 | 24 | 137.8 |
| Reference | Internlm-65B | - | - | 414 | - | - |

Accuracy

NPU vs. reference (without pretrained weights): loss comparison and relative error (figure: NPU-Loss-and-Relative-Error)

Before using the model resources and services, please read carefully and make sure you fully understand the Ascend Deep Learning Model License Agreement 3.0 (《昇腾深度学习模型许可协议 3.0》).