
Releases: modelscope/ms-swift

Patch release v3.6.1

11 Jul 02:14

v3.6.0

08 Jul 03:35


New Features

  1. Megatron-SWIFT:
    a. Support for more MoE model architectures, including: DeepseekV3ForCausalLM, Dots1ForCausalLM, and Ernie4_5_MoeForCausalLM. Training script reference: https://github.com/modelscope/ms-swift/tree/main/examples/train/megatron/moe
    b. Support for more Dense model architectures, including: MiMoForCausalLM, InternLM3ForCausalLM, and Ernie4_5_ForCausalLM. Training script reference: https://github.com/modelscope/ms-swift/tree/main/examples/train/megatron/dense
    c. DPO training supported. Training script reference: https://github.com/modelscope/ms-swift/tree/main/examples/train/megatron/rlhf/dpo
    d. FP8 training supported.
    e. More rope scaling types supported, including: default, linear, yarn, dynamic, longrope, llama3, etc.
    f. The --test_convert_precision parameter has been improved, making it easier to verify the precision of weight conversion between mcore and Hugging Face models.
  2. GRPO:
    a. GRPO multi-turn training refactored, supporting accelerated multi-turn inference with AsyncEngine. Documentation: https://swift.readthedocs.io/zh-cn/latest/Instruction/GRPO/DeveloperGuide/%E5%A4%9A%E8%BD%AE%E8%AE%AD%E7%BB%83.html
    b. The offload_model parameter now also offloads the reference model.
    c. Optimized GPU memory management under sleep_level and offload_model parameters.
    d. Added trainer_state as an input parameter to reward_funcs, making it easier to obtain the current and total training steps.
  3. Training:
    a. Reranker training supported. Training script reference: https://github.com/modelscope/ms-swift/tree/main/examples/train/reranker
    b. CPT/SFT/DPO/GRPO pure-text large model training supports ring-attention sequence length partitioning, reducing memory usage. Training script reference: https://github.com/modelscope/ms-swift/tree/main/examples/train/long_text/ring_attention
    c. Channel loss in CPT/SFT training is compatible with padding_free and packing. Thanks to the technical team at China Merchants Bank for their contribution.
    d. Optimized remove_unused_columns parameter. When set to False, extra dataset columns are passed to the Trainer for custom loss functions.
    e. The default value of split_dataset_ratio changed from 0.01 to 0, so a validation split is no longer created by default; set --split_dataset_ratio or --val_dataset manually if you need one.
    f. Fixed a loss-alignment issue with packing/padding_free for multimodal models. For details, see this PR: #4838
    g. SwanLab now supports a Feishu (Lark) notification callback after training completes.
  4. RLHF:
    a. Pure-text and multimodal models now support GKD training, with padding_free and packing supported in some scenarios. Training scripts:
    i. Large models: https://github.com/modelscope/ms-swift/blob/main/examples/train/rlhf/gkd.sh
    ii. Multimodal large models: https://github.com/modelscope/ms-swift/blob/main/examples/train/multimodal/rlhf/gkd.sh
    b. Reward model training now supports the margin parameter. Documentation: https://swift.readthedocs.io/zh-cn/latest/Instruction/%E4%BA%BA%E7%B1%BB%E5%AF%B9%E9%BD%90.html#rm
  5. Full Pipeline:
    a. The SGLang inference engine can now be used to accelerate ms-swift's inference/deployment/evaluation/UI modules by setting --infer_backend sglang (a minimal command sketch appears after this list). Inference script reference: https://github.com/modelscope/ms-swift/tree/main/examples/infer/sglang
    b. FP8 quantization supported. Quantization script reference: https://github.com/modelscope/ms-swift/blob/main/examples/export/quantize/fp8.sh
  6. Web-UI:
    a. Supports SFT/RLHF/GRPO training on separate tab pages and can save the generated training command line.
    b. The Web-UI also supports data sampling.
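
As a minimal illustration of item 5a, the sketch below switches swift infer to the SGLang backend with --infer_backend sglang. It is a sketch under assumptions, not a command from this release: the model ID is a placeholder and the generation flags are chosen for brevity; the linked example scripts are the authoritative reference.

```bash
# Hedged sketch of item 5a: running inference with the SGLang backend.
# The model ID and generation settings are placeholders, not part of the release notes.
CUDA_VISIBLE_DEVICES=0 \
swift infer \
    --model Qwen/Qwen2.5-7B-Instruct \
    --infer_backend sglang \
    --max_new_tokens 2048 \
    --temperature 0
```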

New Models

  1. Multimodal Models:
    a. ZhipuAI/GLM-4.1V-9B-Thinking series
    b. Kwai-Keye/Keye-VL-8B-Preview
    c. moonshotai/Kimi-VL-A3B-Thinking-2506
    d. google/gemma-3n-E2B-it series
  2. Pure Text Models:
    a. PaddlePaddle/ERNIE-4.5-21B-A3B-PT series
    b. rednote-hilab/dots.llm1.inst series
    c. Tencent-Hunyuan/Hunyuan-A13B-Instruct
    d. MiniMax/MiniMax-M1-80k series (inference)
    e. moonshotai/Kimi-Dev-72B
    f. cognitivecomputations/DeepSeek-R1-0528-AWQ


Patch release v3.5.3

27 Jun 05:12

Patch release v3.5.2

20 Jun 14:49

Patch release v3.5.1

13 Jun 14:24

v3.5.0

08 Jun 16:51


New Features

  1. GRPO:
    a. The GRPO code has been refactored; the vLLM integration mode (server or colocate) is now selected via the vllm_mode parameter. For details, refer to the documentation: https://swift.readthedocs.io/en/latest/Instruction/GRPO.html#arguments-and-execution-script:~:text=vllm_mode%20server%20parameter,in%20colocate%20mode.
    b. GRPO long-text optimization with Ulysses sequence parallelism, significantly reducing GPU memory usage during long-text training. Training script: https://github.com/modelscope/ms-swift/blob/main/examples/train/long_text/sequence_parallel_grpo.sh
    c. Added sync_ref_model parameter to synchronize reference model weights during training.
    d. Supports Liger Kernel Loss via use_liger_kernel parameter, reducing GPU memory consumption.
    e. External mode supports move_model_batches to lower peak GPU memory during ZeRO-3 weight synchronization.
    f. Integrated INTELLECT-2’s Two-Sided Clipping algorithm using the delta parameter.
    g. Supports reward functions returning None, applicable for multi-task training. For details, refer to the documentation: https://swift.readthedocs.io/en/latest/Instruction/GRPO.html#multi-task-training
    h. Internal mode supports vllm_server_base_url for passing external vLLM server URLs.
    i. Plugin extension: Added QwenLong-L1 reward model plugin.
    j. Added steps_per_generation and generation_batch_size parameters for customizing sampling batch size.
    k. Web-UI supports GRPO training.
    l. The following parameters will be removed in v3.6: tensor_parallel_size, vllm_device, vllm_max_num_seqs, num_infer_workers.
  2. Training:
    a. CPT/SFT/DPO/GRPO support padding-free training. By flattening batch data to avoid padding, GPU memory usage is reduced and training speed is improved. Script: https://github.com/modelscope/ms-swift/tree/main/examples/train/padding_free
    b. Multimodal training enhancements: supports separate learning rates for the ViT and aligner modules via the vit_lr and aligner_lr parameters, and adds vit_gradient_checkpointing to independently control gradient checkpointing for the ViT module (a combined command sketch appears after this list). Benchmark: https://github.com/modelscope/ms-swift/blob/main/examples/train/multimodal/vit_gradient_checkpointing.sh
    c. CPT/SFT support channel loss, which reports the loss separately for datasets from different channels. Thanks to the contributions from the technical team at China Merchants Bank.
    d. CPT/SFT/DPO support use_logits_to_keep to reduce GPU memory usage and accelerate training.
    e. Qwen2.5-VL/Omni support video training by passing image directories.
  3. Inference & Deployment:
    a. Optimized swift infer batching with new write_batch_size parameter to control inference result write intervals to result_path.
    b. vLLM inference engine now defaults to V1 engine and supports hybrid Tensor Parallelism (TP) and Data Parallelism (DP). Script: https://github.com/modelscope/ms-swift/blob/main/examples/infer/vllm/dp_tp.sh
  4. Megatron-SWIFT:
    a. Non-streaming datasets automatically calculate train_iters via max_epochs.
    b. Added extra_megatron_kwargs to pass Megatron parameters that ms-swift does not expose directly.
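
To make items 2a and 2b concrete, the following is a hedged sketch of a single multimodal SFT command that combines the parameters named above. The model and dataset identifiers are placeholders, and the flag spellings are assumed to match the linked example scripts; verify against those scripts before use.

```bash
# Hedged sketch combining the training parameters from items 2a-2b.
# Model/dataset IDs are placeholders; flag names are assumed from the feature list above.
CUDA_VISIBLE_DEVICES=0 \
swift sft \
    --model Qwen/Qwen2.5-VL-7B-Instruct \
    --dataset <your_multimodal_dataset> \
    --train_type lora \
    --padding_free true \
    --vit_lr 1e-5 \
    --aligner_lr 1e-5 \
    --vit_gradient_checkpointing true
```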

New Models

  1. Qwen/Qwen3-Embedding-0.6B series. Training script reference: https://github.com/modelscope/ms-swift/blob/main/examples/train/embedding/train_emb.sh
  2. deepseek-ai/DeepSeek-R1-0528-Qwen3-8B series. Best practices: https://mp.weixin.qq.com/s/-hhfGiiGTqXUybwPH525gw
  3. iic/QwenLong-L1-32B
  4. XiaomiMiMo/MiMo-7B-RL-0530 & XiaomiMiMo/MiMo-VL-7B-SFT series
  5. OpenBMB/MiniCPM4-0.5B series


v3.4.1.post1

18 May 14:47

v3.4.1

13 May 06:33


New Features

  1. Sequence Parallelism: Supports Ulysses sequence parallelism during the PT/SFT/DPO stages, compatible with training techniques such as DeepSpeed, packing, flash_attn, and streaming (a hedged command sketch appears after this list). Refer to the training script here.
  2. GRPO: Supports custom reward model logic. Includes a built-in example of a generative reward model. Refer to the training script here.
  3. Megatron-SWIFT: Updated megatron-core to version 0.12.0. Added the max_epochs parameter to stop training and save weights when the epoch reaches max_epochs. Added the wandb parameter to log training metrics.
  4. Best Practices: Added best practices for quickly training vision-language models from scratch. Refer to the guide here.
  5. External Contributions: Supports GRPO using judge0 for executing generated code. Allows specifying freeze/activate parameters using regular expressions. Supports defining initialization strategies for uninitialized parameters in the initial model. Thanks to the contributions from the technical team at China Merchants Bank.
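
Below is a hedged sketch of item 1 (Ulysses sequence parallelism for PT/SFT/DPO). The --sequence_parallel_size flag name is an assumption about how the feature is exposed, and the model/dataset IDs and parallelism degree are placeholders; the training script referenced in item 1 is the authoritative source.

```bash
# Hedged sketch of Ulysses sequence parallelism (item 1).
# --sequence_parallel_size is an assumed flag name; model/dataset IDs are placeholders.
NPROC_PER_NODE=4 \
swift sft \
    --model Qwen/Qwen2.5-7B-Instruct \
    --dataset <your_long_text_dataset> \
    --train_type lora \
    --sequence_parallel_size 4 \
    --max_length 32768 \
    --attn_impl flash_attn
```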

New Models

  1. XiaomiMiMo/MiMo-7B-RL Series
  2. deepseek-ai/DeepSeek-Prover-V2-7B Series
  3. OpenGVLab/InternVL3-1B-Pretrained Series


Full Changelog: v3.4.0...v3.4.1

v3.4.0

30 Apr 15:45


New Features

  1. Support for Megatron training (CPT/SFT) of Qwen3/Qwen2-MoE/Qwen3-MoE, with training speeds nearly 10 times faster on MoE models compared to the Transformers implementation (a workflow sketch appears after this item). For best practices on Qwen3-MoE training, refer to: #4030
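
The two-step sketch below shows the Megatron-SWIFT workflow as generally described in its documentation: convert the Hugging Face checkpoint to mcore format, then train with the megatron CLI. Both commands, their flags, and the parallelism settings are assumptions for illustration only; the best-practice issue #4030 linked above is the authoritative recipe.

```bash
# Hedged sketch of the Megatron-SWIFT workflow; all commands and flags are assumptions (see #4030).
# Step 1: convert Hugging Face weights to mcore format.
swift export \
    --model Qwen/Qwen3-30B-A3B \
    --to_mcore true \
    --torch_dtype bf16 \
    --output_dir Qwen3-30B-A3B-mcore

# Step 2: CPT/SFT on the converted checkpoint.
NPROC_PER_NODE=8 \
megatron sft \
    --load Qwen3-30B-A3B-mcore \
    --dataset <your_dataset> \
    --tensor_model_parallel_size 4 \
    --expert_model_parallel_size 2 \
    --save megatron_output/Qwen3-30B-A3B
```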

New Models

  1. Qwen/Qwen3-32B, Qwen/Qwen3-30B-A3B series
  2. Qwen/Qwen2.5-Omni-3B


Full Changelog: v3.3.1...v3.4.0

v3.3.1

26 Apr 08:57


New Features

  1. The Agent training and deployment module introduces agent templates, including more than 10 types such as hermes, glm4_0414, and llama4, so the same agent dataset can be used to train different models by switching templates (a hedged command sketch appears after this list). For documentation, refer to here.
  2. GRPO training now supports calling an external vLLM server, allowing for more flexible allocation of GPU memory during training and deployment. For the training script, refer to here.
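
A hedged sketch of item 1 follows: fine-tuning on an agent dataset while selecting one of the built-in agent templates. The --agent_template flag name and the model/dataset IDs are assumptions for illustration; the agent-template documentation referenced above is the authoritative source.

```bash
# Hedged sketch of agent-template selection (item 1).
# --agent_template is an assumed flag name; model/dataset IDs are placeholders.
swift sft \
    --model Qwen/Qwen2.5-7B-Instruct \
    --dataset <your_agent_dataset> \
    --train_type lora \
    --agent_template hermes
```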

New Models

  1. OpenGVLab/InternVL3-1B series
  2. moonshotai/Kimi-VL-A3B-Instruct series
  3. ZhipuAI/GLM-4-9B-0414, ZhipuAI/GLM-Z1-9B-0414 series


Full Changelog: v3.3.0...v3.3.1