
⚡️ FastVGGT: Training-Free Acceleration of Visual Geometry Transformer

Paper PDF Project Page


Media Analytics & Computing Laboratory; AUTOLAB

You Shen, Zhipeng Zhang, Yansong Qu, Liujuan Cao

📰 News

  • [Sep 10, 2025] Added COLMAP outputs.
  • [Sep 8, 2025] Added custom dataset evaluation.
  • [Sep 3, 2025] Paper release.
  • [Sep 2, 2025] Code release.

🔭 Overview

FastVGGT builds on the observation that VGGT's attention maps are strongly similar to one another, and exploits this redundancy to accelerate long-sequence 3D reconstruction without any retraining, achieving up to 4× faster inference with no loss of accuracy.

(Overview figure)

⚙️ Environment Setup

First, create a virtual environment using Conda, clone this repository to your local machine, and install the required dependencies.

conda create -n fastvggt python=3.10
conda activate fastvggt
git clone [email protected]:mystorm16/FastVGGT.git
cd FastVGGT
pip install -r requirements.txt

Next, prepare the ScanNet dataset: http://www.scan-net.org/ScanNet/

Then, download the VGGT checkpoint (we use the checkpoint link provided in https://github.com/facebookresearch/vggt/tree/evaluation/evaluation):

wget https://huggingface.co/facebook/VGGT_tracker_fixed/resolve/main/model_tracker_fixed_e20.pt
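Once downloaded, the checkpoint can be loaded into a VGGT model. A minimal sketch, assuming the upstream vggt package is importable; the import path, the no-argument constructor, and the possible wrapping of the state dict are assumptions, not something this README specifies:

import torch
from vggt.models.vggt import VGGT  # assumed module path from facebookresearch/vggt

model = VGGT()  # assumed default constructor
state = torch.load("./ckpt/model_tracker_fixed_e20.pt", map_location="cpu")
if isinstance(state, dict) and "model" in state:
    state = state["model"]  # some checkpoints wrap the weights
model.load_state_dict(state, strict=False)
model.eval()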

Finally, set the dataset and checkpoint paths, e.g., via the argparse defaults in the evaluation script:

    parser.add_argument(
        "--data_dir", type=Path, default="/data/scannetv2/process_scannet"
    )  # preprocessed ScanNet frames
    parser.add_argument(
        "--gt_ply_dir",
        type=Path,
        default="/data/scannetv2/OpenDataLab___ScanNet_v2/raw/scans",
    )  # ground-truth scan point clouds
    parser.add_argument(
        "--ckpt_path",
        type=str,
        default="./ckpt/model_tracker_fixed_e20.pt",
    )  # VGGT checkpoint downloaded above

💎 Observation

Note: a large --input_frame value can make saving the visualization results very slow; start with a small number.

python eval/eval_scannet.py --input_frame 30 --vis_attn_map --merging 0

We observe that many token-level attention maps are highly similar in each block, motivating our optimization of the Global Attention module.

(Attention-map visualization)
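For a rough numeric counterpart to those visualizations, you can measure the cosine similarity between the attention maps of different query tokens. This is an illustrative sketch, not code from this repo; attn is a hypothetical tensor of attention weights hooked out of a global-attention layer:

import torch
import torch.nn.functional as F

def pairwise_map_similarity(attn: torch.Tensor) -> torch.Tensor:
    """attn: (heads, N, N). Mean cosine similarity between the attention
    maps (rows) of every pair of query tokens, averaged over heads."""
    maps = F.normalize(attn, dim=-1)                 # unit-norm each token's map
    sim = torch.einsum("hic,hjc->hij", maps, maps)   # per-head cosine similarities
    return sim.mean(dim=0)                           # (N, N)

# Random weights as a baseline; maps hooked from real blocks are far more similar.
attn = torch.softmax(torch.randn(8, 256, 256), dim=-1)
print(pairwise_map_similarity(attn).mean().item())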

🏀 Evaluation

Custom Dataset

Please organize your data in the following directory structure:

<data_path>/
├── images/
│   ├── 000000.jpg
│   ├── 000001.jpg
│   └── ...
├── pose/                # Optional: camera poses
│   ├── 000000.txt
│   ├── 000001.txt
│   └── ...
└── gt_ply/              # Optional: GT point cloud
    └── scene_xxx.ply

  • Required: images/
  • Additionally required when --enable_evaluation is passed: pose/ and gt_ply/ (a quick layout check is sketched below)
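A minimal sanity check for this layout (illustrative only; eval_custom.py does its own argument handling):

from pathlib import Path

def check_dataset(data_path: str, enable_evaluation: bool = False) -> None:
    root = Path(data_path)
    assert (root / "images").is_dir(), "images/ is required"
    if enable_evaluation:
        assert (root / "pose").is_dir(), "--enable_evaluation requires pose/"
        assert any((root / "gt_ply").glob("*.ply")), "--enable_evaluation requires gt_ply/*.ply"

check_dataset("/path/to/your_dataset", enable_evaluation=True)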

Inference only:

python eval/eval_custom.py \
  --data_path /path/to/your_dataset \
  --output_path ./eval_results_custom \
  --plot

Inference + Evaluation (requires pose/ and gt_ply/):

python eval/eval_custom.py \
  --data_path /path/to/your_dataset \
  --enable_evaluation \
  --output_path ./eval_results_custom \
  --plot
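The README does not spell out the pose file format; given the repo's ScanNet focus, a reasonable assumption is one 4×4 camera-to-world matrix per frame in plain text, which would be read like this:

import numpy as np

pose = np.loadtxt("pose/000000.txt")   # assumed ScanNet-style (4, 4) camera-to-world
assert pose.shape == (4, 4)
R, t = pose[:3, :3], pose[:3, 3]       # rotation and translation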

If you want the results in COLMAP’s format:

python eval/eval_custom_colmap.py \
  --data_path /path/to/your_dataset \
  --output_path ./eval_results_custom_colmap
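Assuming the script writes a standard COLMAP text model (cameras.txt, images.txt, points3D.txt) under the output path — the exact filenames and location are assumptions — the estimated poses can be read back like this:

import numpy as np

def read_colmap_images(path: str):
    """Parse images.txt: each image contributes two lines; the first is
    IMAGE_ID QW QX QY QZ TX TY TZ CAMERA_ID NAME."""
    poses = {}
    with open(path) as f:
        lines = [l.strip() for l in f if l.strip() and not l.startswith("#")]
    for header in lines[::2]:                     # skip the 2D-point lines
        parts = header.split()
        qvec = np.array(parts[1:5], dtype=float)  # world-from-camera quaternion
        tvec = np.array(parts[5:8], dtype=float)  # world-from-camera translation
        poses[parts[9]] = (qvec, tvec)
    return poses

poses = read_colmap_images("./eval_results_custom_colmap/images.txt")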

ScanNet

Evaluate FastVGGT on the ScanNet dataset with 1,000 input images. The --merging parameter specifies the block index at which the merging strategy is applied:

python eval/eval_scannet.py --input_frame 1000 --merging 0

Evaluate Baseline VGGT on the ScanNet dataset with 1,000 input images:

python eval/eval_scannet.py --input_frame 1000

(Results figure)
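To make the merging idea concrete, here is a deliberately simplified, ToMe-style sketch: merge near-duplicate tokens before a global-attention call, then scatter the outputs back so downstream blocks see the full sequence. This is not FastVGGT's exact algorithm, and attn_fn is a hypothetical attention callable:

import torch
import torch.nn.functional as F

def merged_attention(tokens: torch.Tensor, attn_fn, keep_ratio: float = 0.5):
    """tokens: (N, C); attn_fn maps (M, C) -> (M, C) (hypothetical)."""
    n = tokens.shape[0]
    m = max(1, int(n * keep_ratio))               # tokens kept as "destinations"
    feats = F.normalize(tokens, dim=-1)           # features for cosine similarity
    sim = feats[m:] @ feats[:m].T                 # (N-m, m) source-to-destination
    assign = sim.argmax(dim=-1)                   # each source joins its best match

    # Average each destination with the source tokens assigned to it.
    merged = tokens[:m].clone()
    counts = torch.ones(m, device=tokens.device)
    merged.index_add_(0, assign, tokens[m:])
    counts.index_add_(0, assign, torch.ones_like(assign, dtype=counts.dtype))
    merged = merged / counts[:, None]

    out = attn_fn(merged)                         # attention over the reduced set
    full = torch.empty_like(tokens)               # scatter back to full length
    full[:m] = out
    full[m:] = out[assign]
    return full

# Example: identity "attention" on 1000 tokens of width 64.
x = torch.randn(1000, 64)
print(merged_attention(x, attn_fn=lambda t: t, keep_ratio=0.3).shape)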

7 Scenes & NRGBD

Evaluate across both datasets, sampling one keyframe every 10 frames:

python eval/eval_7andN.py --kf 10
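The --kf flag amounts to uniform subsampling of the frame list; an equivalent sketch (the directory path is hypothetical):

from pathlib import Path

frame_paths = sorted(Path("/path/to/scene/images").glob("*.jpg"))
keyframes = frame_paths[::10]   # one keyframe every 10 frames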

🍺 Acknowledgements

⚖️ License

See the LICENSE file for details about the license under which this code is made available.

Citation

If you find this project helpful, please consider citing the following paper:

@article{shen2025fastvggt,
  title={FastVGGT: Training-Free Acceleration of Visual Geometry Transformer},
  author={Shen, You and Zhang, Zhipeng and Qu, Yansong and Cao, Liujuan},
  journal={arXiv preprint arXiv:2509.02560},
  year={2025}
}

🔍 Explore, Capture, Lead in 3D

