
A Novel Benchmark and Dataset for Efficient 3D Gaussian Splatting with Gaussian Point Cloud Compression

Kangli Wang1, Shihao Li1, Qianxi Yi1,2, Wei Gao1,2*
(* Corresponding author)

1SECE, Peking University
2Peng Cheng Laboratory, Shenzhen, China


πŸ“£ News

  • [25-05-27] πŸ”₯ We initially released code and paper.
  • [25-06-05] πŸ”₯ We released the code support for HAC++, TC-GS and Cat-3DGS.
  • [25-06-28] πŸ”₯ We released the GausPcc-1K dataset.
  • [25-10-24] πŸ”₯ We released AnyPcc, which achieves higher performance in Gaussian compression.

Todo

  • Release Paper, Example Code and Checkpoint
  • Release Dataset
  • Support for more frameworks

Links

Our work on point cloud compression has also been released. We welcome you to check it out.

Our work on compressing point clouds from any source has also been released. Its performance surpasses both UniPCGC and GausPcc. We welcome you to check it out.

πŸ“Œ Introduction

Recently, immersive media and autonomous driving applications have significantly advanced through 3D Gaussian Splatting (3DGS), which offers high-fidelity rendering and computational efficiency. Despite these advantages, 3DGS as a display-oriented representation requires substantial storage due to its numerous Gaussian attributes. Current compression methods have shown promising results but typically neglect the compression of Gaussian spatial positions, creating unnecessary bitstream overhead. We conceptualize Gaussian primitives as point clouds and propose leveraging point cloud compression techniques for more effective storage. AI-based point cloud compression demonstrates superior performance and faster inference compared to MPEG Geometry-based Point Cloud Compression (G-PCC). However, direct application of existing models to Gaussian compression may yield suboptimal results, as Gaussian point clouds tend to exhibit globally sparse yet locally dense geometric distributions that differ from conventional point cloud characteristics. To address these challenges, we introduce GausPcgc for Gaussian point cloud geometry compression along with a specialized training dataset GausPcc-1K. Our work pioneers the integration of AI-based point cloud compression into Gaussian compression pipelines, achieving superior compression ratios. The framework complements existing Gaussian compression methods while delivering significant performance improvements. All code, data, and pre-trained models will be publicly released to facilitate further research advances in this field.


Illustration of the proposed framework.

πŸ”‘ Setup

Run the following commands for general installation:

git clone https://github.com/Wangkkklll/GausPcc.git
conda create -n gauspcc python=3.8
pip install -r requirements.txt
cd src/gs_compress/HAC/submodules
unzip diff-gaussian-rasterization.zip
unzip gridencoder.zip
unzip simple-knn.zip
unzip arithmetic.zip
cd ../..
pip install -e HAC/submodules/diff-gaussian-rasterization
pip install -e HAC/submodules/simple-knn
pip install -e HAC/submodules/gridencoder
pip install -e HAC/submodules/arithmetic

🧩 Dataset Preparation

Please refer to the following links to obtain the data. The full GausPcc-1K dataset will be released gradually at these links, including geometry, attributes, and data both before and after quantization.

  • Testset: Link
  • GausPcc-1K: Link

βŒ› Checkpoint Link

Please refer to the following link to obtain the checkpoint.

  • GausPcgc: Link

πŸš€ Running

To train our point cloud compression framework, run:

./scripts/ai_pcc/run_train_gauspcgc.sh

Run the following script to train our HAC-based Gaussian compression:

./scripts/gs_compress/run_ours_hac.sh

More training scripts are available in /scripts.

πŸ’ͺ Integration with More Frameworks

General Steps

  1. Copy Required Files:

    • Copy GausPcgc/ directory to your framework
    • Copy HAC/utils/pcc_utils.py to your framework's utility directory
  2. Modify Gaussian Model:

    • Locate the file in your framework that handles Gaussian anchor points
    • Add the AI-PCC compression during encoding and decoding phases
  3. Update Import Paths:

    • Ensure the import paths in pcc_utils.py are updated to match your project structure

Example Modifications

For most frameworks, you'll need to modify the encoding and decoding processes similar to how we modified HAC's gaussian_model.py:

Encoding Phase

# Import necessary modules and functions
import os
import torch
from utils.pcc_utils import calculate_morton_order, compress_point_cloud

# During anchor point processing
_anchor_int = torch.round(_anchor / voxel_size)
sorted_indices = calculate_morton_order(_anchor_int)
_anchor_int = _anchor_int[sorted_indices]
npz_path = os.path.join(output_path, 'xyz_pcc.bin')
model_path = os.path.join(model_dir, 'best_model_ue_4stage_conv.pt')
out = compress_point_cloud(_anchor_int, model_path, npz_path)
bits_xyz = out['file_size_bits']
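The `calculate_morton_order` call above sorts the quantized anchors along a Z-order (Morton) space-filling curve so that spatially neighboring points end up adjacent in the stream. The following is an illustrative sketch of how Morton ordering works in principle; the repository's actual implementation operates on torch tensors and may differ in detail.

```python
# Sketch of Z-order (Morton) sorting for integer 3D points.
# Illustrative only -- not the repository's implementation.

def morton_code_3d(x: int, y: int, z: int, bits: int = 10) -> int:
    """Interleave the bits of x, y, z into a single Morton code."""
    code = 0
    for i in range(bits):
        code |= ((x >> i) & 1) << (3 * i)      # x occupies bit positions 0, 3, 6, ...
        code |= ((y >> i) & 1) << (3 * i + 1)  # y occupies bit positions 1, 4, 7, ...
        code |= ((z >> i) & 1) << (3 * i + 2)  # z occupies bit positions 2, 5, 8, ...
    return code

def morton_order(points):
    """Return indices that sort integer points along the Z-order curve."""
    codes = [morton_code_3d(x, y, z) for x, y, z in points]
    return sorted(range(len(points)), key=lambda i: codes[i])

points = [(3, 1, 0), (0, 0, 0), (1, 1, 1)]
order = morton_order(points)  # the origin (0, 0, 0) sorts first
```

Sorting by Morton code preserves spatial locality, which helps the entropy model exploit correlations between neighboring anchors.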

Decoding Phase

# Import necessary modules and functions
import os
from utils.pcc_utils import calculate_morton_order, decompress_point_cloud

# During anchor point reconstruction
npz_path = os.path.join(input_path, 'xyz_pcc.bin')
model_path = os.path.join(model_dir, 'best_model_ue_4stage_conv.pt')
anchor_decoded = decompress_point_cloud(npz_path, model_path)

_anchor_int_dec = anchor_decoded['point_cloud'].to('cuda')
sorted_indices = calculate_morton_order(_anchor_int_dec)
_anchor_int_dec = _anchor_int_dec[sorted_indices]
anchor_decoded = _anchor_int_dec * voxel_size
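Note that rounding anchors to the voxel grid (encoding) and multiplying back by `voxel_size` (decoding) is a lossy step: each coordinate is recovered only up to half a voxel. A minimal plain-Python sketch of this quantize/dequantize round trip (the `voxel_size` value here is chosen for illustration, not taken from the repository):

```python
# Quantize/dequantize round trip for a single anchor coordinate.
# Plain Python for illustration; the repository operates on torch tensors.

voxel_size = 0.01  # example value, not from the repository

def quantize(coord: float, voxel_size: float) -> int:
    # Encoder side: snap a coordinate to its nearest voxel index.
    return round(coord / voxel_size)

def dequantize(idx: int, voxel_size: float) -> float:
    # Decoder side: map the voxel index back to a coordinate.
    return idx * voxel_size

x = 0.1234
x_rec = dequantize(quantize(x, voxel_size), voxel_size)
err = abs(x - x_rec)  # reconstruction error, bounded by voxel_size / 2
```

Choosing a smaller `voxel_size` lowers the quantization error at the cost of more distinct voxel indices, and hence more bits, in the compressed geometry stream.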

πŸ”Ž Contact

If you have any comments or questions, feel free to contact kangliwang@stu.pku.edu.cn.

πŸ‘ Acknowledgement

LICENSE

Please follow the LICENSE of 3D-GS and DL3DV-10K.

πŸ“˜ Citation

Please consider citing our work as follows if it is helpful.

@misc{wang2025novelbenchmarkdatasetefficient,
      title={A Novel Benchmark and Dataset for Efficient 3D Gaussian Splatting with Gaussian Point Cloud Compression}, 
      author={Kangli Wang and Shihao Li and Qianxi Yi and Wei Gao},
      year={2025},
      eprint={2505.18197},
      archivePrefix={arXiv},
      primaryClass={cs.GR},
      url={https://arxiv.org/abs/2505.18197}, 
}

If you use the GausPcc-1K dataset we released, please also consider citing DL3DV.

@inproceedings{ling2024dl3dv,
  title={Dl3dv-10k: A large-scale scene dataset for deep learning-based 3d vision},
  author={Ling, Lu and Sheng, Yichen and Tu, Zhi and Zhao, Wentian and Xin, Cheng and Wan, Kun and Yu, Lantao and Guo, Qianyu and Yu, Zixun and Lu, Yawen and others},
  booktitle={Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition},
  pages={22160--22169},
  year={2024}
}

About

[arXiv 2025] Official Implementation for "A Novel Benchmark and Dataset for Efficient 3D Gaussian Splatting with Gaussian Point Cloud Compression"
