
✨X-SAM

From Segment Anything to Any Segmentation

Hao Wang1,2, Limeng Qiao3, Zequn Jie3, Zhijian Huang1, Chengjian Feng3,

Qingfang Zheng1, Lin Ma3, Xiangyuan Lan2📧, Xiaodan Liang1📧

1 Sun Yat-sen University, 2 Peng Cheng Laboratory, 3 Meituan Inc.

📧 Corresponding author

👀 Notice

X-SAM is under active development, and we will continue to update the code and documentation.

We recommend that everyone use English to communicate in issues, as this helps developers from around the world discuss, share experiences, and answer questions together.

If you have any questions or would like to collaborate, please feel free to open an issue or reach out to me at [email protected].

💥 Updates

🚀 Introduction

This repository provides the official PyTorch implementation, pre-trained models, training, evaluation, visualization, and demo code of X-SAM:

  • X-SAM introduces a unified multimodal large language model (MLLM) framework, extending the segmentation paradigm from segment anything to any segmentation, thereby enhancing pixel-level perceptual understanding.

  • X-SAM proposes a novel Visual GrounDed (VGD) segmentation task, which segments all instance objects using interactive visual prompts, empowering the model with visually grounded, pixel-wise interpretative capabilities.

  • X-SAM presents a unified training strategy that enables co-training across multiple datasets. Experimental results demonstrate that X-SAM achieves state-of-the-art performance on various image segmentation benchmarks, highlighting its efficiency in multimodal, pixel-level visual understanding.

✨ HIGHLIGHT: This repository provides unified and effective code for training, evaluation, and visualization of segmentation MLLMs, including LLaVA-based MLLMs. We hope this repository will promote further research on MLLMs.

🔖 Abstract

Large Language Models (LLMs) demonstrate strong capabilities in broad knowledge representation, yet they are inherently deficient in pixel-level perceptual understanding. Although the Segment Anything Model (SAM) represents a significant advancement in visual-prompt-driven image segmentation, it exhibits notable limitations in multi-mask prediction and category-specific segmentation tasks, and it cannot integrate all segmentation tasks within a unified model architecture. To address these limitations, we present X-SAM, a streamlined Multimodal Large Language Model (MLLM) framework that extends the segmentation paradigm from segment anything to any segmentation. Specifically, we introduce a novel unified framework that enables more advanced pixel-level perceptual comprehension for MLLMs. Furthermore, we propose a new segmentation task, termed Visual GrounDed (VGD) segmentation, which segments all instance objects with interactive visual prompts and empowers MLLMs with visually grounded, pixel-wise interpretative capabilities. To enable effective training on diverse data sources, we present a unified training strategy that supports co-training across multiple datasets. Experimental results demonstrate that X-SAM achieves state-of-the-art performance on a wide range of image segmentation benchmarks, highlighting its efficiency for multimodal, pixel-level visual understanding.

πŸ” Overview

πŸ“Š Benchmarks

Please refer to the Benchmark Results for more details.

🏁 Getting Started

1. Structure

We provide a detailed project structure for X-SAM. Please follow this structure to organize the project.

πŸ“ Structure (Click to collapse)
X-SAM
├── datas
│   ├── gcg_seg_data
│   ├── gen_seg_data
│   ├── img_conv_data
│   ├── inter_seg_data
│   ├── LMUData
│   ├── ov_seg_data
│   ├── rea_seg_data
│   ├── ref_seg_data
│   └── vgd_seg_data
├── inits
│   ├── huggingface
│   ├── mask2former-swin-large-coco-panoptic
│   ├── Phi-3-mini-4k-instruct
│   ├── sam-vit-large
│   └── xsam
├── xsam
│   ├── docs
│   ├── requirements
│   ├── xsam
│   │   ├── configs
│   │   ├── dataset
│   │   ├── demo
│   │   ├── engine
│   │   ├── evaluation
│   │   ├── model
│   │   ├── structures
│   │   ├── tools
│   │   └── utils
├── wkdrs
│   ├── s1_seg_finetune
│   │   ├── ...
│   ├── s2_align_pretrain
│   │   ├── ...
│   ├── s3_mixed_finetune
│   │   ├── ...
│   ├── ...
...
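To bootstrap this layout, the top-level directories can be created up front. A minimal sketch (directory names taken from the tree above; their contents are filled in later during Dataset and Model Preparing):

cd X-SAM
mkdir -p inits wkdrs
mkdir -p datas/{gcg_seg_data,gen_seg_data,img_conv_data,inter_seg_data,LMUData,ov_seg_data,rea_seg_data,ref_seg_data,vgd_seg_data}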

2. Installation

We provide a detailed installation guide to create an environment for X-SAM; please refer to the following steps.

βš™οΈ Installation (Click to collapse)
cd X-SAM
export root_dir=$(realpath ./)
cd $root_dir/xsam

# set CUDA_HOME for CUDA 12.4 (optional).
# X-SAM uses CUDA 12.4 by default; if your default CUDA is not 12.4, export CUDA_HOME manually first.
export CUDA_HOME="your_cuda12.4_path"
export PATH=$CUDA_HOME/bin:$PATH
export LD_LIBRARY_PATH=$CUDA_HOME/lib64:$LD_LIBRARY_PATH
echo -e "cuda version:\n$(nvcc -V)"

# create conda env for X-SAM
conda create -n xsam python=3.10 -y
conda activate xsam
conda install pytorch==2.5.1 torchvision==0.20.1 torchaudio==2.5.1 pytorch-cuda=12.4 -c pytorch -c nvidia
# install gcc11(optional)
conda install gcc=11 gxx=11 -c conda-forge -y
# install xtuner 0.2.0 from source
git clone -b v0.2.0 https://github.com/InternLM/xtuner.git
cd xtuner
pip install '.[all]'
# return to the xsam directory before installing the remaining requirements
cd $root_dir/xsam
# install deepspeed
pip install -r requirements/deepspeed.txt
# install xsam requirements
pip install -r requirements/xsam.txt
# install flash-attention
pip install https://github.com/Dao-AILab/flash-attention/releases/download/v2.7.3/flash_attn-2.7.3+cu12torch2.5cxx11abiFALSE-cp310-cp310-linux_x86_64.whl

# install VLMEvalKit for evaluation on VLM benchmarks (optional)
cd $root_dir
git clone -b v0.3rc1 https://github.com/open-compass/VLMEvalKit.git
cd VLMEvalKit
pip install -e .

# install aria2 for downloading datasets and models (optional)
pip install aria2
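# (optional) sanity-check the environment; a minimal check assuming the steps above completed successfully
python -c "import torch; print(torch.__version__, torch.version.cuda, torch.cuda.is_available())"
python -c "import flash_attn, deepspeed; print(flash_attn.__version__, deepspeed.__version__)"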

3. Preparing

There are many datasets and models to prepare; please refer to Dataset Preparing and Model Preparing for more details.
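As a rough sketch of the model side only (the authoritative list and sources are in Model Preparing), the base checkpoints under inits/ can be fetched from the Hugging Face Hub, for example:

cd $root_dir
# assumed Hub repository IDs, inferred from the directory names in the project structure above
huggingface-cli download microsoft/Phi-3-mini-4k-instruct --local-dir inits/Phi-3-mini-4k-instruct
huggingface-cli download facebook/sam-vit-large --local-dir inits/sam-vit-large
huggingface-cli download facebook/mask2former-swin-large-coco-panoptic --local-dir inits/mask2former-swin-large-coco-panoptic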

4. Training & Evaluation

✨ One Script for All!

cd $root_dir
bash runs/run.sh --modes MODES --config CONFIG_FILE --work-dir WORK_DIR --suffix WORK_DIR_SUFFIX
# MODES: train, segeval, vlmeval, visualize, demo
# bash runs/run.sh -h  # print the help message.
# Read runs/run.sh for more details.
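# example: fine-tune the segmentor (Stage 1) and tag the work dir with a custom suffix
# (a sketch: the config path is the Stage 1 config listed below; "debug" is an arbitrary suffix)
bash runs/run.sh --modes train --config xsam/configs/xsam/phi3_mini_4k_instruct_siglip2_so400m_p14_384/s1_seg_finetune/xsam_sam_large_m2f_e36_gpu16_seg_finetune.py --suffix debug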

Prepare the Datasets and Models, and then refer to the following commands to start training and evaluation.

X-SAM

🔥 Training (Click to collapse)
Stage 1: Segmentor Fine-tuning
cd $root_dir
bash runs/run.sh --modes train --config xsam/configs/xsam/phi3_mini_4k_instruct_siglip2_so400m_p14_384/s1_seg_finetune/xsam_sam_large_m2f_e36_gpu16_seg_finetune.py
Stage 2: Alignment Pre-training
cd $root_dir
bash runs/run.sh --modes train --config xsam/configs/xsam/phi3_mini_4k_instruct_siglip2_so400m_p14_384/s2_align_pretrain/xsam_phi3_mini_4k_instruct_siglip2_so400m_p14_384_sam_large_e1_gpu16_align_pretrain.py
Stage 3: Mixed Fine-tuning
# 🫣Coming soon...

# ‼️NOTE: The Mixed Fine-tuning training code will be released once the repository reaches more than 500 stars 🌟.
🧪 Evaluation (Click to collapse)
Evaluate on all segmentation benchmarks
cd $root_dir
# Evaluate on all segmentation benchmarks.
# NOTE: ONLY generic segmentation and VGD segmentation are supported NOW.
bash runs/run.sh --modes segeval --config xsam/configs/xsam/phi3_mini_4k_instruct_siglip2_so400m_p14_384/s3_mixed_finetune/xsam_phi3_mini_4k_instruct_siglip2_so400m_p14_384_sam_large_m2f_gpu16_mixed_finetune.py --work-dir $root_dir/inits/X-SAM/s3_mixed_finetune/xsam_phi3_mini_4k_instruct_siglip2_so400m_p14_384_sam_large_m2f_gpu16_mixed_finetune
Evaluate on all VLM benchmarks
cd $root_dir
# Evaluate on all VLM benchmarks.
bash runs/run.sh --modes vlmeval --config xsam/configs/xsam/phi3_mini_4k_instruct_siglip2_so400m_p14_384/s3_mixed_finetune/xsam_phi3_mini_4k_instruct_siglip2_so400m_p14_384_sam_large_m2f_gpu16_mixed_finetune.py --work-dir $root_dir/inits/X-SAM/s3_mixed_finetune/xsam_phi3_mini_4k_instruct_siglip2_so400m_p14_384_sam_large_m2f_gpu16_mixed_finetune

LLaVA

🔥 Training (Click to expand)
Stage 1: Alignment Pre-training
cd $root_dir
bash runs/run.sh --modes train --config xsam/configs/llava/phi3_mini_4k_instruct_siglip2_so400m_p14_384/s1_pretrain/llava_phi3_mini_4k_instruct_siglip2_so400m_p14_384_e1_gpu16_pretrain.py
Stage 2: Instruction Fine-tuning
cd $root_dir
bash runs/run.sh --modes train --config xsam/configs/llava/phi3_mini_4k_instruct_siglip2_so400m_p14_384/s2_finetune/llava_phi3_mini_4k_instruct_siglip2_so400m_p14_384_e1_gpu16_finetune.py
🧪 Evaluation (Click to expand)
Evaluate on all VLM benchmarks
cd $root_dir
bash runs/run.sh --modes vlmeval --config xsam/configs/llava/phi3_mini_4k_instruct_siglip2_so400m_p14_384/s2_finetune/llava_phi3_mini_4k_instruct_siglip2_so400m_p14_384_e1_gpu16_finetune.py

💻 Demo

We provide detailed instructions for demo deployment, and a demo video is shown below.

πŸ› οΈ Deployment (Click to collapse)
cd $root_dir
bash runs/run.sh --modes demo --config xsam/configs/xsam/phi3_mini_4k_instruct_siglip2_so400m_p14_384/s3_mixed_finetune/xsam_phi3_mini_4k_instruct_siglip2_so400m_p14_384_sam_large_m2f_gpu16_mixed_finetune.py --work-dir $root_dir/inits/X-SAM/s3_mixed_finetune/xsam_phi3_mini_4k_instruct_siglip2_so400m_p14_384_sam_large_m2f_gpu16_mixed_finetune
🎥 Video (Click to collapse)
xsam_demo.mp4

✅ TODO

😊 Acknowledgements

This project references several excellent open-source repositories (xtuner, VLMEvalKit, Sa2VA). Thanks for their wonderful work and contributions to the community.

📌 Citation

If you find X-SAM helpful for your research or applications, please consider giving us a star 🌟 and citing it with the following BibTeX entry.

@article{wang2025xsam,
  title={X-SAM: From Segment Anything to Any Segmentation},
  author={Wang, Hao and Qiao, Limeng and Jie, Zequn and Huang, Zhijian and Feng, Chengjian and Zheng, Qingfang and Ma, Lin and Lan, Xiangyuan and Liang, Xiaodan},
  journal={arXiv preprint arXiv:2508.04655},
  year={2025}
}
