A versatile simulator for advancing research in event-based and RGB-event data fusion.
Automatically generated demo sequences, where objects are randomly selected from a pool and then placed and moved, along with the camera, according to pre-defined rules:
V2 (requires checking out the dev/v2 branch):
- Event simulation: event data simulated from high-frequency rendering data
- Simulation of low dynamic range, motion blur, defocus blur and atmospheric effect
- Dense point tracking: tracking ground truth for every pixel, at any frame and for any object
- Forward/backward optical flow
- Depth maps
Data that are not shown in the demo but are also accessible:
- Normal maps
- Instance segmentation
- Camera poses and intrinsics
- Object poses
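Since both forward and backward flow are exported, occlusion masks can be derived with a forward-backward consistency check. A minimal sketch, assuming dense flow maps stored as (H, W, 2) arrays in pixel units (the actual on-disk format may differ):

```python
import numpy as np

def fb_consistency(fwd, bwd, thresh=1.0):
    """Return a boolean (H, W) mask that is True where forward and
    backward flow agree (i.e. the pixel is likely not occluded)."""
    H, W = fwd.shape[:2]
    ys, xs = np.mgrid[0:H, 0:W]
    # Warp each pixel with the forward flow, then sample the backward
    # flow at the warped location (nearest neighbour for simplicity).
    xt = np.clip(np.round(xs + fwd[..., 0]).astype(int), 0, W - 1)
    yt = np.clip(np.round(ys + fwd[..., 1]).astype(int), 0, H - 1)
    residual = fwd + bwd[yt, xt]  # ~0 where the two flows are consistent
    return np.linalg.norm(residual, axis=-1) < thresh

# Zero flow in both directions is trivially consistent everywhere.
mask = fb_consistency(np.zeros((4, 4, 2)), np.zeros((4, 4, 2)))
```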
- Install Blender (version 3.3 LTS recommended): https://www.blender.org/download/lts/3-3/
- Install the Python dependencies:

```shell
conda env create -f environment.yml
```

- Prepare the data and place it under `data/`. The full data that we used includes:
**texture**
1. ADE20K dataset (download it yourself)
2. Flickr images
3. Pixabay images
4. CC Textures
**object**
1. ShapeNet dataset (download it yourself)
2. Google Scanned Objects dataset (download it yourself)

We crawled some images for uses such as textures and HDR lighting, including from Flickr, Pixabay, CC Textures and so on. Download our prepared data through:

```shell
python scripts/download_hf_data_v2.py
```

We also provide sample data for fast testing. You can download it using the following command:

```shell
python scripts/download_hf_data_example.py
```

- (Optional) If you are running rendering on a headless machine, you will need to start an X server. To do this, run:
```shell
sudo apt-get install xserver-xorg
sudo python3 scripts/start_xserver.py start
export DISPLAY=:0.{id}  # for example, to use GPU card 0, set DISPLAY=:0.0
```

- Run the main script.
If you want to use the default config (requires the full dataset), you can run:

```shell
python main.py
```

Otherwise, to use the sample data, you can run:

```shell
python main.py --config configs/blinkflow_v2_example.yaml
```

If it runs successfully, you will see a result similar to the following under the output folder:
```
output/train/000000
├── events_left
├── forward_flow
├── clean_uint8
├── all_instance.txt
├── dynamic_instance.txt
├── event_ts.txt
├── image_ts.txt
├── metadata.json
└── clean.mp4
```
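The `event_ts.txt` and `image_ts.txt` files can be parsed directly. A minimal sketch, assuming one timestamp per line (demonstrated here on an in-memory stand-in rather than a real output folder):

```python
import io
import numpy as np

def load_timestamps(src):
    """Assumed layout: one timestamp per line, strictly increasing."""
    ts = np.atleast_1d(np.loadtxt(src))
    assert np.all(np.diff(ts) > 0), "timestamps should be strictly increasing"
    return ts

# Stand-in for open("output/train/000000/image_ts.txt")
image_ts = load_timestamps(io.StringIO("0.000\n0.033\n0.066\n"))
```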
In the default config, we disable the rendering and parsing of much of the data, such as stereo data and the ground truth for particle tracking, depth, and so on. You can refer to the config (configs/blinkflow_v2.yaml) and enable them if you need.
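For illustration only — the key names below are hypothetical, so consult configs/blinkflow_v2.yaml for the real option names — enabling the extra ground truth amounts to flipping flags of roughly this shape:

```yaml
# Hypothetical keys; check configs/blinkflow_v2.yaml for the actual names.
render_stereo: true          # stereo pair rendering
render_depth: true           # per-frame depth maps
render_point_tracking: true  # dense particle-tracking ground truth
```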
If you find this code useful for your research, please use the following BibTeX entry.
@inproceedings{blinkflow_iros2023,
title={BlinkFlow: A Dataset to Push the Limits of Event-based Optical Flow Estimation},
author={Yijin Li and Zhaoyang Huang and Shuo Chen and Xiaoyu Shi and Hongsheng Li and Hujun Bao and Zhaopeng Cui and Guofeng Zhang},
booktitle={IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS)},
month = {October},
year = {2023},
}
@inproceedings{blinkvision_eccv2024,
title={BlinkVision: A Benchmark for Optical Flow, Scene Flow and Point Tracking Estimation using RGB Frames and Events},
author={Yijin Li and Yichen Shen and Zhaoyang Huang and Shuo Chen and Weikang Bian and Xiaoyu Shi and Fu-Yun Wang and Keqiang Sun and Hujun Bao and Zhaopeng Cui and Guofeng Zhang and Hongsheng Li},
booktitle={European Conference on Computer Vision (ECCV)},
year={2024}
}
