This repository is the official implementation of the paper *E-VIO: A Continual Evolving Visual-Inertial Odometry for Drones in Flight*, which has been submitted to the ISPRS Journal of Photogrammetry and Remote Sensing.
- Create a conda environment
- Install g2opy, torch==1.10.0, and torchvision==0.11.1
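The setup steps above can be sketched as follows. The environment name, Python version, and the g2opy source repository are assumptions for illustration; only the torch/torchvision pins come from the instructions above:

```shell
# Hypothetical environment name and Python version — adjust as needed.
conda create -n evio python=3.8 -y
conda activate evio

# Pinned versions from the instructions above.
pip install torch==1.10.0 torchvision==0.11.1

# g2opy has no official PyPI wheel and is typically built from source
# (uoip/g2opy is a commonly used mirror; verify it matches your setup).
git clone https://github.com/uoip/g2opy.git
cd g2opy && mkdir build && cd build
cmake .. && make -j8
cd .. && python setup.py install
```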
To re-train or run the experiments from our paper, please download and pre-process the respective datasets.
Download the following Cityscapes files:
- leftImg8bit_sequence_trainvaltest.zip
- timestamp_sequence.zip
- vehicle_sequence.zip
Download the following Oxford RobotCar files:
- 2015-10-29-12-18-17_stereo_centre.tar, 2015-10-29-12-18-17_gps.tar
- 2015-02-03-08-45-10_stereo_centre.tar, 2015-02-03-08-45-10_gps.tar
- 2015-08-21-10-40-24_stereo_centre.tar, 2015-08-21-10-40-24_gps.tar
Undistort the center images:

```bash
python datasets/robotcar.py <IMG_PATH> <MODELS_PATH>
```

Download the KITTI Odometry dataset:
- odometry data set
- odometry ground truth poses
Extract the raw data matching the odometry dataset:

| Sequence | Raw data |
| -------- | -------- |
| 04 | 2011_09_30_drive_0016 |
| 09 | 2011_09_30_drive_0033 |
| 10 | 2011_09_30_drive_0034 |
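The odometry-to-raw mapping above can be encoded as a small helper for locating the extracted drives. The function name, `root` default, and `_sync` directory layout are assumptions about how the raw data is unpacked, not part of the repository:

```python
# Mapping from KITTI Odometry sequence IDs to the matching raw-data drives
# (taken directly from the table above).
ODOMETRY_TO_RAW = {
    "04": "2011_09_30_drive_0016",
    "09": "2011_09_30_drive_0033",
    "10": "2011_09_30_drive_0034",
}

def raw_drive_dir(sequence: str, root: str = "kitti_raw") -> str:
    """Return a hypothetical raw-data directory for an odometry sequence,
    assuming the usual <root>/<date>/<drive>_sync extraction layout."""
    drive = ODOMETRY_TO_RAW[sequence]
    date = drive[:10]  # date prefix, e.g. "2011_09_30"
    return f"{root}/{date}/{drive}_sync"

print(raw_drive_dir("04"))  # kitti_raw/2011_09_30/2011_09_30_drive_0016_sync
```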
Download the EuRoC MAV dataset:
MH_03, MH_05, V2_02
We pre-trained CoVIO on the Cityscapes Dataset.
```bash
python main_pretrain.py
```

For continual learning, we used the KITTI Odometry Dataset, the Oxford RobotCar Dataset, and the EuRoC MAV Dataset. Then run:
```bash
python main_adapt.py
```

For academic usage, the code is released under the GPLv3 license. For any commercial purpose, please contact the authors.
[1] Continual SLAM: Beyond Lifelong Simultaneous Localization and Mapping Through Continual Learning
[2] Lite-Mono: A Lightweight CNN and Transformer Architecture for Self-Supervised Monocular Depth Estimation