
🤖 RoboSense Track 2: Social Navigation

Official Baseline Implementation for Track 2

Based on Falcon -- "From Cognition to Precognition: A Future-Aware Framework for Social Navigation"
(https://github.com/Zeying-Gong/Falcon)


πŸ† Prize Pool: $2,000 USD for Track 2 Winners

Challenge Overview

Track 2: Social Navigation challenges participants to develop advanced RGBD-based perception and navigation systems that empower autonomous agents to interact safely, efficiently, and socially in dynamic human environments.

Participants will design algorithms that interpret human behaviors and contextual cues to generate navigation strategies that strike a balance between navigation efficiency and social compliance. Submissions must address key challenges such as real-time adaptability, occlusion handling, and ethical decision-making in socially complex settings.

🎯 Objectives

This track evaluates an agent's ability to perform socially compliant navigation in dynamic indoor environments populated with realistic human agents. Participants must design navigation policies based solely on RGBD observations and odometry, without access to global maps or privileged information.

  • Social Norm Compliance: Maintain safe distances, avoid collisions, and demonstrate socially acceptable behaviors.
  • Realistic Benchmarking: Navigate in large-scale, photo-realistic indoor scenes with dynamic, collision-aware humans.
  • Egocentric Perception: Operate from a first-person perspective, simulating how a robot would perceive its surroundings.
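
To make this egocentric setting concrete, below is a minimal, illustrative sketch of a policy that consumes RGB-D observations plus a goal compass and returns a discrete action. The observation keys, array shapes, and action names are assumptions for exposition only, not the exact sensor suite or action space used by the evaluator.

class SocialNavPolicy:
    """Maps egocentric RGB-D (+ goal compass) observations to a discrete action."""

    def act(self, obs: dict) -> str:
        rgb = obs["rgb"]      # (H, W, 3) uint8 first-person image (assumed key)
        depth = obs["depth"]  # (H, W, 1) float32 metric depth (assumed key)
        rho, theta = obs["pointgoal_with_gps_compass"]  # distance / bearing to goal

        # Placeholder heuristic: stop near the goal, turn toward it,
        # and only move forward when the path directly ahead looks clear.
        if rho < 0.2:
            return "stop"
        if abs(theta) > 0.3:
            return "turn_left" if theta > 0 else "turn_right"
        center_depth = depth[depth.shape[0] // 2, depth.shape[1] // 2, 0]
        return "move_forward" if center_depth > 1.0 else "turn_left"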

Competition Details

πŸ† Awards

Prize Award
🥇 1st Place $1000 + Certificate
🥈 2nd Place $600 + Certificate
🥉 3rd Place $400 + Certificate
🌟 Innovation Award Cash Award + Certificate
Participation Certificate

📊 Official Dataset

This track uses the RoboSense Track 2 Social Navigation Dataset, which is based on the Social-HM3D and Social-MP3D benchmarks and provides:

  • Goal-driven Trajectories: Humans navigate with intent, avoiding random or repetitive paths
  • Natural Behaviors: Movement includes walking, pausing, and realistic avoidance via ORCA
  • Balanced Density: Human count is scaled to scene size, avoiding over- or under-crowding
  • Diverse Environments: Includes 844 scenes for Social-HM3D and 72 scenes for Social-MP3D

Dataset Statistics

Dataset Num. of Scenes Scene Types Num. of Humans Natural Motion
Social-HM3D 844 Residence, Office, Shop, etc. 0–6 ✔️
Social-MP3D 72 Residence, Office, Gym, etc. 0–6 ✔️

Baseline Performance (Phase I)

The Falcon baseline achieves the following performance on the Phase I evaluation set using the Social-HM3D dataset (∼1,000 test episodes):

Dataset Success ↑ SPL ↑ PSC ↑ H-Coll ↓
Social-HM3D 55.15 55.15 89.56 42.96

Note: These baseline results are based solely on depth input. As the competition supports full RGB-D modalities, participants are encouraged to explore richer representations and surpass the baseline performance.

🚀 Quick Start

1. Preparing the conda env

Assuming you have conda installed, let's prepare a conda env:

conda_env_name=falcon
conda create -n $conda_env_name python=3.9 cmake=3.14.0
conda activate $conda_env_name

2. Installing habitat-sim & habitat-lab

Following Habitat-lab's instructions:

conda install habitat-sim=0.3.1 withbullet -c conda-forge -c aihabitat

Then clone this repository (a fork of Habitat 3.0) with its submodules and install the necessary Habitat dependencies:

git clone --recurse-submodules https://github.com/robosense2025/track2.git # to download the submodule
cd Falcon
pip install -e habitat-lab
pip install -e habitat-baselines
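
After the editable installs, a quick import check (a minimal sanity sketch, nothing challenge-specific) can confirm the environment is usable before downloading data:

# Run inside the activated conda env to verify the installs above.
import habitat            # from habitat-lab
import habitat_sim        # from the conda package
import habitat_baselines  # from habitat-baselines

print("habitat-sim version:", habitat_sim.__version__)
print("habitat-lab loaded from:", habitat.__file__)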

3. Downloading the Social-HM3D & Social-MP3D datasets

  • Download Scene Datasets

Follow the instructions for HM3D and Matterport3D in Habitat-lab's Datasets.md.

  • Download Episode Datasets

Download social navigation (SocialNav) episodes for the test scenes, which can be found here: Link.

After downloading, unzip and place the datasets in the default location:

unzip <downloaded_episodes>.zip -d data/datasets/pointnav
  • Download the leg animation:
wget https://github.com/facebookresearch/habitat-lab/files/12502177/spot_walking_trajectory.csv -O data/robots/spot_data/spot_walking_trajectory.csv
  • Download the necessary multi-agent data:
python -m habitat_sim.utils.datasets_download --uids hab3-episodes habitat_humanoids hab3_bench_assets hab_spot_arm

The file structure should look like this:

data
├── datasets
│   └── pointnav
│       ├── social-hm3d
│       │   ├── train
│       │   │   ├── content
│       │   │   └── train.json.gz
│       │   └── val
│       │       ├── content
│       │       └── val.json.gz
│       └── social-mp3d
│           ├── train
│           │   ├── content
│           │   └── train.json.gz
│           └── val
│               ├── content
│               └── val.json.gz
├── scene_datasets
├── robots
├── humanoids
├── versioned_data
└── hab3_bench_assets

Note that here the definition of SocialNav is different from the original task in Habitat 3.0.
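
Before launching an evaluation, you can quickly check that the downloaded files landed where the configs expect them. The snippet below is a small sketch that simply mirrors the tree above; adjust the list if you only use one of the two datasets.

from pathlib import Path

# Paths taken from the directory tree above.
expected = [
    "data/datasets/pointnav/social-hm3d/val/val.json.gz",
    "data/datasets/pointnav/social-mp3d/val/val.json.gz",
    "data/robots/spot_data/spot_walking_trajectory.csv",
]

for rel in expected:
    print(("ok      " if Path(rel).is_file() else "MISSING ") + rel)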

4. Evaluating the Falcon Baseline

The pretrained models can be found at this link. Download them to pretrained_model/ under the root directory.

You can evaluate it on the Social-HM3D or Social-MP3D datasets using the following template:

python -u -m habitat_baselines.run \
--config-name=social_nav_v2/falcon_<dataset>.yaml

For example, to run it on the Social-HM3D dataset:

python -u -m habitat_baselines.run \
--config-name=social_nav_v2/falcon_hm3d.yaml

5. Docker Evaluation Environment

We provide a standardized Docker environment for remote evaluation to ensure consistency and reproducibility across all submissions.

🧱 Evaluation Docker Image

All submissions will be evaluated inside the following Docker image:

docker pull zeyinggong/robosense_socialnav:v0.5

Participants are strongly encouraged to develop and test their pipelines locally using this image to ensure compatibility with the evaluation server.

📦 Submission Packaging

Submissions will be evaluated inside the container under /app/Falcon/. You must place all necessary files in the expected structure as described in the Submission Format. The evaluator will unzip your submission.zip or load your actions.json file into /app/Falcon/input/ inside the container.

  • For Code Submissions (submission.zip), your run.sh entry point will be called directly.
  • For Action Submissions (actions.json), the evaluator will automatically invoke:
python -u -m habitat_baselines.eval --config-name=falcon_hm3d_replay.yaml

🧪 Local Testing (Recommended)

You can test your submission locally before uploading to EvalAI:

docker run --rm -it \
    --gpus all \
    --runtime=nvidia \
    -v /path/to/your/submission:/app/Falcon/input:ro \
    -v /path/to/your/data:/app/Falcon/data:ro \
    zeyinggong/robosense_socialnav:v0.5 

Inside the container, navigate to /app/Falcon/ and manually execute your run.sh or replay evaluator command to verify correctness.

Tip: You may refer to the provided Baseline ZIP Submission Example and Baseline Action Submission Example.

⏱ Evaluation Time

  • Minival Phase: typically 5–10 minutes.
  • Phase 1 Full Evaluation: may take 3–5 hours, depending on queue length and inference runtime.

If your submission remains pending for over 48 hours, please open an issue on our GitHub repository. You may also contact us via email at [email protected] if necessary.

🎖️ Challenge Participation

Submission Requirements

  1. Phase 1: Submit results on public test set with reproducible code
  2. Phase 2: Final evaluation on private test set (same size as Phase I)
  3. Code: Submit reproducible code with your final results
  4. Model: Include trained model weights
  5. Report: Technical report describing your approach

πŸ“ Evaluation Metrics

Our benchmark focuses on two key aspects: task completion and social compliance.

Metric Description
SR (Success Rate) Fraction of episodes where the robot successfully reaches the goal.
SPL (Success weighted by Path Length) Penalizes inefficient navigation. Rewards shorter, successful paths.
PSC (Personal Space Compliance) Measures how well the robot avoids violating human personal space. A higher PSC indicates better social behavior. The threshold is set to 1.0m, considering a 0.3m human radius and 0.25m robot radius.
H-Coll (Human Collision Rate) The proportion of episodes involving any human collision. Collisions imply task failure.
Total Score Weighted combination of the core metrics: Total = 0.4 × SR + 0.3 × SPL + 0.3 × PSC. This score reflects overall navigation quality while implicitly penalizing human collisions.
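
For reference, the total score is a direct weighted sum of the three core metrics (all reported in percent); plugging in the Phase I baseline numbers from the table above gives roughly 65.5. A minimal sketch:

def total_score(sr: float, spl: float, psc: float) -> float:
    """Total = 0.4 * SR + 0.3 * SPL + 0.3 * PSC (all values in percent)."""
    return 0.4 * sr + 0.3 * spl + 0.3 * psc

# Falcon baseline on Social-HM3D (Phase I): SR = 55.15, SPL = 55.15, PSC = 89.56
print(total_score(55.15, 55.15, 89.56))  # ≈ 65.47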

Timeline

  • Registration: Google Form
  • Phase 1 Deadline: Public test set evaluation (~190 cases)
  • Phase 2 Deadline: Private test set evaluation (~190 cases)
  • Awards Announcement: IROS 2025

🔗 Resources

📧 Contact & Support

📄 Citation

If you use the code and dataset in your research, please cite:

@article{gong2024cognition,
  title = {From Cognition to Precognition: A Future-Aware Framework for Social Navigation},
  author = {Gong, Zeying and Hu, Tianshuai and Qiu, Ronghe and Liang, Junwei},
  journal = {arXiv preprint arXiv:2409.13244},
  year = {2024}
}

Acknowledgements

RoboSense 2025 Challenge Organizers

RoboSense 2025 Program Committee


🤖 Ready to sense the world robustly? Register now and compete for $2,000!

πŸ“ Register Here | 🌐 Challenge Website | πŸ“§ Contact Us

Made with ❤️ by the RoboSense 2025 Team