Hao Wang1,2, Limeng Qiao3, Zequn Jie3, Zhijian Huang1, Chengjian Feng3,
Qingfang Zheng1, Lin Ma3, Xiangyuan Lan2📧, Xiaodan Liang1📧
1 Sun Yat-sen University, 2 Peng Cheng Laboratory, 3 Meituan Inc.
📧 Corresponding author
X-SAM is under active development, and we will continue to update the code and documentation.
We recommend that everyone use English to communicate in issues, as this helps developers from around the world discuss, share experiences, and answer questions together.
If you have any questions or would like to collaborate, please feel free to open an issue or reach out to me at [email protected].
- 2025-08-11: Thanks for your great attention to our work! We have deployed another Online Demo2. You can also try it if Online Demo1 is not available.
- 2025-08-11: We released the effective code for Evaluation on All Segmentation Benchmarks. We have updated all code except for Training X-SAM.
- 2025-08-10: We released the detailed instructions for Demo Deployment.
- 2025-08-09: We released the code for Training LLaVA-based MLLMs.
- 2025-08-08: We released the simple code for Evaluation on All VLM Benchmarks.
- 2025-08-06: We are excited to publish the Technical Report, please check it out for more technical details.
- 2025-08-05: We provided the Model Weights on HuggingFace 🤗.
- 2025-07-26: We deployed the Online Demo, you can try it now!
This repository provides the official PyTorch implementation, pre-trained models, training, evaluation, visualization, and demo code of X-SAM:
- X-SAM introduces a unified multimodal large language model (MLLM) framework, extending the segmentation paradigm from segment anything to any segmentation, thereby enhancing pixel-level perceptual understanding.
- X-SAM proposes a novel Visual GrounDed (VGD) segmentation task, which segments all instance objects using interactive visual prompts, empowering the model with visually grounded, pixel-wise interpretative capabilities.
- X-SAM presents a unified training strategy that enables co-training across multiple datasets. Experimental results demonstrate that X-SAM achieves state-of-the-art performance on various image segmentation benchmarks, highlighting its efficiency in multimodal, pixel-level visual understanding.
✨ HIGHLIGHT: This repository provides unified and effective code for training, evaluation, and visualization of segmentation MLLMs, including LLaVA-based MLLMs. We hope this repository will promote further research on MLLMs.
Large Language Models (LLMs) demonstrate strong capabilities in broad knowledge representation, yet they are inherently deficient in pixel-level perceptual understanding. Although the Segment Anything Model (SAM) represents a significant advancement in visual-prompt-driven image segmentation, it exhibits notable limitations in multi-mask prediction and category-specific segmentation tasks, and it cannot integrate all segmentation tasks within a unified model architecture. To address these limitations, we present X-SAM, a streamlined Multimodal Large Language Model (MLLM) framework that extends the segmentation paradigm from segment anything to any segmentation. Specifically, we introduce a novel unified framework that enables more advanced pixel-level perceptual comprehension for MLLMs. Furthermore, we propose a new segmentation task, termed Visual GrounDed (VGD) segmentation, which segments all instance objects with interactive visual prompts and empowers MLLMs with visually grounded, pixel-wise interpretative capabilities. To enable effective training on diverse data sources, we present a unified training strategy that supports co-training across multiple datasets. Experimental results demonstrate that X-SAM achieves state-of-the-art performance on a wide range of image segmentation benchmarks, highlighting its efficiency for multimodal, pixel-level visual understanding.
Please refer to the Benchmark Results for more details.
We provide a detailed project structure for X-SAM. Please follow this structure to organize the project.
📁 Structure (Click to collapse)
X-SAM
├── datas
│   ├── gcg_seg_data
│   ├── gen_seg_data
│   ├── img_conv_data
│   ├── inter_seg_data
│   ├── LMUData
│   ├── ov_seg_data
│   ├── rea_seg_data
│   ├── ref_seg_data
│   └── vgd_seg_data
├── inits
│   ├── huggingface
│   ├── mask2former-swin-large-coco-panoptic
│   ├── Phi-3-mini-4k-instruct
│   ├── sam-vit-large
│   └── xsam
├── xsam
│   ├── docs
│   ├── requirements
│   └── xsam
│       ├── configs
│       ├── dataset
│       ├── demo
│       ├── engine
│       ├── evaluation
│       ├── model
│       ├── structures
│       ├── tools
│       └── utils
├── wkdrs
│   ├── s1_seg_finetune
│   │   └── ...
│   ├── s2_align_pretrain
│   │   └── ...
│   ├── s3_mixed_finetune
│   │   └── ...
│   └── ...
...
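If you are starting from an empty checkout, the top-level folders above can be created up front. This is only a minimal sketch; their contents come from the Dataset Preparing and Model Preparing steps below, and the symlink path is purely illustrative.
cd X-SAM
# create the expected top-level folders (populate them via Dataset/Model Preparing)
mkdir -p datas inits wkdrs
# optionally, symlink an existing data location instead of copying (path is illustrative)
# ln -s /path/to/your/datasets/LMUData datas/LMUData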
We provide a detailed installation guide to create an environment for X-SAM; please refer to the following steps.
⚙️ Installation (Click to collapse)
cd X-SAM
export root_dir=$(realpath ./)
cd $root_dir/xsam
# set CUDA_HOME for CUDA 12.4 (optional).
# X-SAM uses CUDA 12.4 by default; if your default CUDA is not 12.4, export the CUDA_HOME env variable manually first.
export CUDA_HOME="your_cuda12.4_path"
export PATH=$CUDA_HOME/bin:$PATH
export LD_LIBRARY_PATH=$CUDA_HOME/lib64:$LD_LIBRARY_PATH
echo -e "cuda version:\n$(nvcc -V)"
# create conda env for X-SAM
conda create -n xsam python=3.10 -y
conda activate xsam
conda install pytorch==2.5.1 torchvision==0.20.1 torchaudio==2.5.1 pytorch-cuda=12.4 -c pytorch -c nvidia
# install gcc 11 (optional)
conda install gcc=11 gxx=11 -c conda-forge -y
# install xtuner 0.2.0
cd $root_dir
git clone -b v0.2.0 https://github.com/InternLM/xtuner.git
cd xtuner
pip install '.[all]'
cd $root_dir/xsam
# install deepspeed
pip install -r requirements/deepspeed.txt
# install xsam requirements
pip install -r requirements/xsam.txt
# install flash-attention
pip install https://github.com/Dao-AILab/flash-attention/releases/download/v2.7.3/flash_attn-2.7.3+cu12torch2.5cxx11abiFALSE-cp310-cp310-linux_x86_64.whl
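# NOTE: the prebuilt wheel above assumes Python 3.10, PyTorch 2.5, and CUDA 12.x;
# if it does not match your environment, building from source is a common fallback:
# pip install flash-attn==2.7.3 --no-build-isolation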
# install VLMEvalKit for evaluation on VLM benchmarks (optional)
cd $root_dir
git clone -b v0.3rc1 https://github.com/open-compass/VLMEvalKit.git
cd VLMEvalKit
pip install -e .
# install aria2 for downloading datasets and models (optional)
pip install aria2
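After installation, a quick sanity check (a minimal sketch) confirms that PyTorch sees the GPU and that flash-attention imports cleanly:
# sanity check: PyTorch version, CUDA build, GPU availability, and flash-attention import
python -c "import torch; print(torch.__version__, torch.version.cuda, torch.cuda.is_available())"
python -c "import flash_attn; print(flash_attn.__version__)"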
There are many datasets and models to prepare; please refer to Dataset Preparing and Model Preparing for more details.
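For example, the three base models listed under inits in the project structure are hosted on HuggingFace and can be fetched with huggingface-cli. This is only a sketch under the assumption that the layout matches the structure above; follow Model Preparing for the exact sources and layout.
cd $root_dir
# sketch: download the base models into inits/ (follow Model Preparing for the exact layout)
huggingface-cli download microsoft/Phi-3-mini-4k-instruct --local-dir inits/Phi-3-mini-4k-instruct
huggingface-cli download facebook/sam-vit-large --local-dir inits/sam-vit-large
huggingface-cli download facebook/mask2former-swin-large-coco-panoptic --local-dir inits/mask2former-swin-large-coco-panoptic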
✨ One Script for All!
cd $root_dir
bash runs/run.sh --modes MODES --config CONFIG_FILE --work-dir WORK_DIR --suffix WORK_DIR_SUFFIX
# MODES: train, segeval, vlmeval, visualize, demo
# bash runs/run.sh -h  # show the help message.
# Read the runs/run.sh for more details.
Prepare the Datasets and Models, and then refer to the following commands to start training and evaluation.
🔥 Training (Click to collapse)
cd $root_dir
bash runs/run.sh --modes train --config xsam/configs/xsam/phi3_mini_4k_instruct_siglip2_so400m_p14_384/s1_seg_finetune/xsam_sam_large_m2f_e36_gpu16_seg_finetune.py
cd $root_dir
bash runs/run.sh --modes train --config xsam/configs/xsam/phi3_mini_4k_instruct_siglip2_so400m_p14_384/s2_align_pretrain/xsam_phi3_mini_4k_instruct_siglip2_so400m_p14_384_sam_large_e1_gpu16_align_pretrain.py
# 🫣 Coming soon...
# ‼️NOTE: Training for Mixed Fine-tuning will be released once the repository reaches more than 500 🌟.
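The training commands above accept the common run.sh flags shown earlier; for instance, the documented --suffix flag can be appended to keep separate work directories for different runs (the suffix value below is only illustrative):
cd $root_dir
bash runs/run.sh --modes train --config xsam/configs/xsam/phi3_mini_4k_instruct_siglip2_so400m_p14_384/s2_align_pretrain/xsam_phi3_mini_4k_instruct_siglip2_so400m_p14_384_sam_large_e1_gpu16_align_pretrain.py --suffix debug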
🧪 Evaluation (Click to collapse)
cd $root_dir
# Evaluate on all segmentation benchmarks.
# NOTE: ONLY generic segmentation and VGD segmentation are supported NOW.
bash runs/run.sh --modes segeval --config xsam/configs/xsam/phi3_mini_4k_instruct_siglip2_so400m_p14_384/s3_mixed_finetune/xsam_phi3_mini_4k_instruct_siglip2_so400m_p14_384_sam_large_m2f_gpu16_mixed_finetune.py --work-dir $root_dir/inits/X-SAM/s3_mixed_finetune/xsam_phi3_mini_4k_instruct_siglip2_so400m_p14_384_sam_large_m2f_gpu16_mixed_finetune
cd $root_dir
# Evaluate on all VLM benchmarks.
bash runs/run.sh --modes vlmeval --config xsam/configs/xsam/phi3_mini_4k_instruct_siglip2_so400m_p14_384/s3_mixed_finetune/xsam_phi3_mini_4k_instruct_siglip2_so400m_p14_384_sam_large_m2f_gpu16_mixed_finetune.py --work-dir $root_dir/inits/X-SAM/s3_mixed_finetune/xsam_phi3_mini_4k_instruct_siglip2_so400m_p14_384_sam_large_m2f_gpu16_mixed_finetune
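The --work-dir passed above points at inits/X-SAM/..., i.e., evaluation expects the released checkpoint to already be in place. A sketch of fetching it with huggingface-cli is shown below; <hf-repo-id> is a placeholder for the model repository listed under Model Weights.
cd $root_dir
# sketch: place the released checkpoint where --work-dir expects it
# (<hf-repo-id> is a placeholder; use the repository listed under Model Weights)
huggingface-cli download <hf-repo-id> --local-dir inits/X-SAM/s3_mixed_finetune/xsam_phi3_mini_4k_instruct_siglip2_so400m_p14_384_sam_large_m2f_gpu16_mixed_finetune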
🔥 Training (Click to expand)
cd $root_dir
bash runs/run.sh --modes train --config xsam/configs/llava/phi3_mini_4k_instruct_siglip2_so400m_p14_384/s1_pretrain/llava_phi3_mini_4k_instruct_siglip2_so400m_p14_384_e1_gpu16_pretrain.py
cd $root_dir
bash runs/run.sh --modes train --config xsam/configs/llava/phi3_mini_4k_instruct_siglip2_so400m_p14_384/s2_finetune/llava_phi3_mini_4k_instruct_siglip2_so400m_p14_384_e1_gpu16_finetune.py
🧪 Evaluation (Click to expand)
cd $root_dir
bash runs/run.sh --modes vlmeval --config xsam/configs/llava/phi3_mini_4k_instruct_siglip2_so400m_p14_384/s2_finetune/llava_phi3_mini_4k_instruct_siglip2_so400m_p14_384_e1_gpu16_finetune.py
We provide detailed instructions for demo deployment, and a demo video is shown below.
🛠️ Deployment (Click to collapse)
cd $root_dir
bash runs/run.sh --modes demo --config xsam/configs/xsam/phi3_mini_4k_instruct_siglip2_so400m_p14_384/s3_mixed_finetune/xsam_phi3_mini_4k_instruct_siglip2_so400m_p14_384_sam_large_m2f_gpu16_mixed_finetune.py --work-dir $root_dir/inits/X-SAM/s3_mixed_finetune/xsam_phi3_mini_4k_instruct_siglip2_so400m_p14_384_sam_large_m2f_gpu16_mixed_finetune
🎥 Video (Click to collapse)
xsam_demo.mp4
- Release the Online Demo.
- Release the Model Weights.
- Release the Technical Report.
- Release the code for Training LLaVA-based MLLMs.
- Release the code for Evaluation on All VLM Benchmarks.
- Release the code for Demo Deployment.
- Release the code for Evaluation on All Segmentation Benchmarks.
- Release the code for Training X-SAM (more than 500 🌟).
This project references several excellent open-source repos (xtuner, VLMEvalKit, Sa2VA). Thanks for their wonderful work and contributions to the community.
If you find X-SAM helpful for your research or applications, please consider giving us a star 🌟 and citing it with the following BibTeX entry.
@article{wang2025xsam,
title={X-SAM: From Segment Anything to Any Segmentation},
author={Wang, Hao and Qiao, Limeng and Jie, Zequn and Huang, Zhijian and Feng, Chengjian and Zheng, Qingfang and Ma, Lin and Lan, Xiangyuan and Liang, Xiaodan},
journal={arXiv preprint arXiv:2508.04655},
year={2025}
}