BotVIO: A Lightweight Transformer-Based Visual-Inertial Odometry for Robotics
Wenhui Wei, Yangfan Zhou, Yimin Hu, Zhi Li, Sen Wang, Xin Liu, Jiadong Li
- Create a conda environment
- Install torch==1.12.1, torchvision==0.13.1, and timm==0.4.12
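The setup steps above can be sketched as shell commands; the environment name `botvio` and the Python version are assumptions, not specified by the repository:

```shell
# Create and activate a conda environment (name and Python version are assumptions)
conda create -n botvio python=3.8 -y
conda activate botvio

# Install the pinned dependencies listed above
pip install torch==1.12.1 torchvision==0.13.1 timm==0.4.12
```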
Please refer to Visual-Selective-VIO to prepare your data.
Please download the pretrained models and place them under the pretrain_models directory.
python ./evaluation/eval_odom.py
python ./evaluation/evaluate_pose_vo.py
Please modify '--data_path' in the options.py file to specify your dataset path. Additionally, update the pose embedding data type to float16 in the PositionalEncodingFourier function within the depth_encoder.py file. In addition, comment out the fully connected (FC) layer in pose_encoder.py.
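The float16 change can be illustrated with a minimal NumPy sketch of a Fourier (sine/cosine) positional embedding. The actual PositionalEncodingFourier in depth_encoder.py is a torch module, so everything below (function name, signature, shapes) is an assumption used only to show where the dtype cast goes:

```python
import numpy as np

def fourier_positional_encoding(num_positions, dim, temperature=10000.0,
                                dtype=np.float16):
    """Illustrative Fourier positional embedding (hypothetical helper).

    dtype=np.float16 mirrors the README's instruction to store the pose
    embedding in float16; the real implementation lives in depth_encoder.py.
    """
    positions = np.arange(num_positions)[:, None]        # (N, 1)
    freqs = np.arange(dim // 2)[None, :]                 # (1, dim/2)
    inv_freq = 1.0 / (temperature ** (2 * freqs / dim))  # frequency bands
    angles = positions * inv_freq                        # (N, dim/2)
    # Concatenate sin and cos halves, then cast the embedding to float16.
    emb = np.concatenate([np.sin(angles), np.cos(angles)], axis=-1)
    return emb.astype(dtype)

pe = fourier_positional_encoding(num_positions=8, dim=32)
print(pe.shape, pe.dtype)  # (8, 32) float16
```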
python ./evaluation/evaluate_pose_vio.py
Please modify '--data_path' in the options.py file to specify your dataset path. Additionally, update the pose embedding data type to float16 in the PositionalEncodingFourier function within the depth_encoder.py file.
python ./evaluation/evaluate_depth.py
Please modify '--data_path' in the options.py file to specify your dataset path, and comment out the IMU data reading process in the mono_dataset.py file.
python ./evaluation/evaluate_timing.py
Please modify '--data_path' in the options.py file to specify your dataset path. Additionally, update the pose embedding data type to float16 in the PositionalEncodingFourier function within the depth_encoder.py file.
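The kind of measurement evaluate_timing.py reports can be sketched as wall-clock averaging over repeated forward passes. The helper below is a generic stand-in, not the repository's actual script, and the dummy workload replaces a real model call:

```python
import time

def time_inference(fn, warmup=5, iters=50):
    """Average per-call latency of fn in milliseconds (illustrative sketch).

    For real GPU models you would additionally call torch.cuda.synchronize()
    around the timed region so queued kernels are included in the measurement.
    """
    for _ in range(warmup):          # warm-up calls are excluded from timing
        fn()
    start = time.perf_counter()
    for _ in range(iters):
        fn()
    elapsed = time.perf_counter() - start
    return elapsed / iters * 1000.0

# Dummy workload standing in for a model forward pass
latency_ms = time_inference(lambda: sum(i * i for i in range(10000)))
print(f"{latency_ms:.3f} ms per call")
```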
The manuscript related to this work is currently under review.
