Sparse Single Sweep LiDAR Point Cloud Segmentation via Learning Contextual Shape Priors from Scene Completion (AAAI 2021)
This repository is for JS3C-Net, introduced in the following AAAI 2021 paper [arXiv paper]:
Xu Yan, Jiantao Gao, Jie Li, Ruimao Zhang, Zhen Li*, Rui Huang and Shuguang Cui, "Sparse Single Sweep LiDAR Point Cloud Segmentation via Learning Contextual Shape Priors from Scene Completion".
If you find our work useful in your research, please consider citing:
@inproceedings{yan2021sparse,
title={Sparse Single Sweep LiDAR Point Cloud Segmentation via Learning Contextual Shape Priors from Scene Completion},
author={Yan, Xu and Gao, Jiantao and Li, Jie and Zhang, Ruimao and Li, Zhen and Huang, Rui and Cui, Shuguang},
booktitle={Proceedings of the AAAI Conference on Artificial Intelligence},
volume={35},
number={4},
pages={3101--3109},
year={2021}
}
Clone the repository:
git clone https://github.com/yanx27/JS3C-Net.git
Installation instructions (tested on Ubuntu 16.04):

* Compile the custom ops by running `sh compile.sh` in `lib/`.
* Install `spconv` from `./lib/spconv`. We use the same version as PointGroup, so you can install it following its instructions. Higher versions of spconv may cause issues.

The voxelization settings used for scene completion on SemanticKITTI are:

```
min range: 2.5
max range: 70
future scans: 70
min extent: [0, -25.6, -2]
max extent: [51.2, 25.6, 4.4]
voxel size: 0.2
```
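The extents and voxel size above determine the completion grid resolution. A minimal sketch (using only the values from this README) of how the grid dimensions fall out:

```python
# Derive the completion grid resolution from the voxelizer settings
# above (min/max extent and voxel size are taken from this README).
min_extent = [0.0, -25.6, -2.0]
max_extent = [51.2, 25.6, 4.4]
voxel_size = 0.2

# round() guards against floating-point error in the division
grid_dims = [round((hi - lo) / voxel_size)
             for lo, hi in zip(min_extent, max_extent)]
print(grid_dims)  # -> [256, 256, 32]
```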
```
SemanticKITTI(POSS)
├── dataset
│   ├── sequences
│   │   ├── 00
│   │   │   ├── labels
│   │   │   ├── velodyne
│   │   │   ├── voxels
│   │   │   ├── [OTHER FILES OR FOLDERS]
│   │   ├── 01
│   │   ├── ...
```
Run the following command to start training. Logs will be written to `./logs/JS3C-Net-kitti/`. You can skip this step if you want to use our pretrained model in `./logs/JS3C-Net-kitti/`. An optional `--debug` flag is also available.

$ python train.py --gpu 0 --log_dir JS3C-Net-kitti --config opt/JS3C_default_kitti.yaml
Run the following command to evaluate the segmentation model on the validation or test set:
$ python test_kitti_segment.py --log_dir JS3C-Net-kitti --gpu 0 --dataset [val/test]
Run the following command to evaluate semantic scene completion on the validation or test set:
$ python test_kitti_ssc.py --log_dir JS3C-Net-kitti --gpu 0 --dataset [val/test]
Results on SemanticPOSS can be easily obtained by
$ python train.py --gpu 0 --log_dir JS3C-Net-POSS --config opt/JS3C_default_POSS.yaml
$ python test_poss_segment.py --gpu 0 --log_dir JS3C-Net-POSS
We trained our model on a single NVIDIA Tesla V100 GPU with batch size 6. On a TITAN GPU, use batch size 2 instead. Please modify `dataset_dir` in `args.txt` to point to your dataset path.
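If you prefer to edit `args.txt` programmatically, a minimal sketch is below. The `key: value` layout and the sample contents are assumptions, not the file's confirmed format; check your actual `args.txt` first:

```python
import re

def set_dataset_dir(text, new_path):
    """Replace the dataset_dir entry in an args.txt-style string
    (assumed 'key: value' line format)."""
    return re.sub(r"(?m)^dataset_dir:.*$", f"dataset_dir: {new_path}", text)

# Hypothetical file contents for demonstration only.
sample = "dataset_dir: /old/path\nbatch_size: 6\n"
print(set_dataset_dir(sample, "/data/SemanticKITTI/dataset"))
```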
| Model | #Param | Segmentation | Completion | Checkpoint |
|---|---|---|---|---|
| JS3C-Net | 2.69M | 66.0 | 56.6 | 18.5MB |
Quantitative results on the SemanticKITTI benchmark at submission time.
This project would not be possible without multiple great open-sourced codebases.
This repository is released under the MIT License (see the LICENSE file for details).