Hang Su Ling Gao Tao Liu Laurent Kneip
Mobile Perception Lab, ShanghaiTech University
arXiv | Poster | Video | Test Data
This repository contains the official implementation of our RA-L 2025 paper, "Motion-Aware Optical Camera Communication with Event Cameras". The code is open-source and released under the Apache-2.0 license. For commercial use, please contact the authors. If you use this code in your academic work, please cite the following publication:
@article{su2025motion,
author = {Su, Hang and Gao, Ling and Liu, Tao and Kneip, Laurent},
title = {Motion-Aware Optical Camera Communication with Event Cameras},
journal = {IEEE Robotics and Automation Letters},
pages = {1385--1392},
volume = {10},
number = {2},
year = {2025},
doi = {10.1109/LRA.2024.3517292}
}

As the ubiquity of smart mobile devices continues to rise, Optical Camera Communication (OCC) systems have gained attention as a solution for efficient and private data streaming. These systems utilize optical cameras to receive data from digital screens via visible light. Despite their promise, most existing systems are hindered by dynamic factors such as screen refreshing and rapid camera motion. CMOS cameras, which often serve as the receivers, suffer from limited frame rates and motion-induced image blur, which degrade overall performance. To address these challenges, this letter unveils a novel system that utilizes event cameras. We introduce a dynamic visual marker and design event-based tracking algorithms to achieve fast localization and data streaming. Remarkably, the event camera's unique capabilities mitigate issues related to screen refresh rates and camera motion, enabling a high throughput of up to 114 Kbps in static conditions, as well as 1 cm localization accuracy with a 1% bit error rate under various camera motions.
We provide Python scripts to generate our dynamic markers. They require the following packages:
- OpenCV
- NumPy
- click
- alive_progress
- termcolor
python marker_generation/marker_generator.py [data.txt] --output_path [marker.mp4] --cell 16 --fps 60 --duration 30

Please replace [data.txt] and [marker.mp4] with your input and output paths. Note that in our camera-motion experiments, we set the cell number to 16.
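For intuition, here is a minimal, self-contained sketch of how a blinking cell-grid video could be rendered with OpenCV and NumPy. It is not the modulation scheme implemented in marker_generation/marker_generator.py (the actual marker is also designed to support event-based localization); the bit-to-cell mapping below is a simplified assumption.

```python
import cv2
import numpy as np

def render_toy_marker(bits, output_path="toy_marker.mp4", cell=16,
                      fps=60, duration=5, px_per_cell=40):
    """Render a toy blinking-grid video: each frame shows cell x cell
    squares, white for bit 1 and black for bit 0, cycling through the
    payload one grid-full of bits at a time. Illustrative only."""
    side = cell * px_per_cell
    fourcc = cv2.VideoWriter_fourcc(*"mp4v")
    writer = cv2.VideoWriter(output_path, fourcc, fps, (side, side))
    per_frame = cell * cell
    for i in range(fps * duration):
        start = (i * per_frame) % len(bits)
        chunk = [bits[(start + k) % len(bits)] for k in range(per_frame)]
        grid = (np.array(chunk, dtype=np.uint8) * 255).reshape(cell, cell)
        frame = cv2.resize(grid, (side, side),
                           interpolation=cv2.INTER_NEAREST)
        writer.write(cv2.cvtColor(frame, cv2.COLOR_GRAY2BGR))
    writer.release()

if __name__ == "__main__":
    # Turn an ASCII message into a bit stream (LSB first per byte).
    message = "hello event camera"
    bits = [(byte >> k) & 1 for byte in message.encode() for k in range(8)]
    render_toy_marker(bits)
```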
The example data are available on Google Drive and contain a rosbag, a ground-truth (GT) trajectory, and the raw message to transmit.
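To sanity-check the downloaded bag before running the pipeline, you can list its topics with the rosbag Python API; the file name below is a placeholder for the bag from Google Drive.

```python
import rosbag

# List topics, message types, and message counts in the example bag.
# "example.bag" is a placeholder for the downloaded file.
with rosbag.Bag("example.bag") as bag:
    for topic, info in bag.get_type_and_topic_info().topics.items():
        print(f"{topic}: {info.msg_type} ({info.message_count} messages)")
```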
The code has been tested on Ubuntu 20.04 with ROS Noetic and the following dependencies:
- Eigen3 3.3.7
- OpenCV 4.2
- Ceres 2.2
Enter an existing catkin workspace:

cd {your_catkin_workspace}/src/

or create a catkin workspace if you don't have one:

mkdir -p ros_ws/src && cd ros_ws/src

Clone this repository from GitHub:

git clone git@github.com:suhang99/EventOCC.git

Build the catkin package:

source /opt/ros/noetic/setup.bash
catkin build evlc_screen

Make sure you have correctly set the parameters in launch/run.launch and param/default.yaml, then run:

source {your_catkin_workspace}/devel/setup.bash
roslaunch evlc_screen run.launch

The output trajectory is in TUM format (timestamp tx ty tz qx qy qz qw).
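To quickly compare the estimated trajectory against the provided GT trajectory, both in TUM format, a rough position-error check can be done in a few lines of NumPy. File names here are placeholders, and dedicated tools such as evo perform proper alignment and a more rigorous evaluation.

```python
import numpy as np

def load_tum(path):
    """Load a TUM-format trajectory: timestamp tx ty tz qx qy qz qw."""
    data = np.loadtxt(path)
    return data[:, 0], data[:, 1:4]  # timestamps, positions

# Placeholder file names for the estimated and ground-truth trajectories.
t_est, p_est = load_tum("estimate.txt")
t_gt, p_gt = load_tum("groundtruth.txt")

# Match each estimate to the GT pose nearest in time (GT assumed
# time-sorted), then report the raw position RMSE without alignment.
idx = np.searchsorted(t_gt, t_est).clip(1, len(t_gt) - 1)
idx -= (t_est - t_gt[idx - 1]) < (t_gt[idx] - t_est)
err = np.linalg.norm(p_est - p_gt[idx], axis=1)
print(f"Position RMSE: {np.sqrt(np.mean(err ** 2)):.4f} m")
```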
We would like to acknowledge the funding support provided by project 62250610225 of the Natural Science Foundation of China, as well as projects 22DZ1201900, 22ZR1441300, and dfycbj-1 of the Natural Science Foundation of Shanghai.
