UMA Visual-Inertial Dataset
The UMA Visual-Inertial Dataset is a collection of 32 sequences obtained in challenging conditions (changing light, low-textured scenes) with a handheld custom rig composed of a commercial stereo camera, a custom stereo rig and an Inertial Measurement Unit.
The UMA-VI Dataset: Visual-Inertial Odometry in Low-textured and Dynamic Illumination Environments
————————————————-
Contact: David Zuñiga-Noël, Alberto Jaenal, Ruben Gomez-Ojeda, and Javier Gonzalez-Jimenez
Emails: dzuniga@uma.es, alberto.jaenal@uma.es
————————————————-
This paper presents a visual-inertial dataset gathered in indoor and outdoor scenarios with a handheld custom sensor rig, for over 80 min in total. The dataset contains hardware-synchronized data from a commercial stereo camera (Bumblebee®2), a custom stereo rig and an inertial measurement unit. The most distinctive feature of this dataset is the strong presence of low-textured environments and scenes with dynamic illumination, which are recurrent corner cases of visual odometry and SLAM methods. The dataset comprises 32 sequences and is provided with pseudo-ground-truth poses at the beginning and the end of each sequence, thus allowing the accumulated drift to be measured in each case. We also make available open-source tools for evaluation purposes, as well as the intrinsic and extrinsic calibration parameters of all sensors in the rig.
————————————————-
If you use this dataset, please cite the following paper (available here):
@article{zuniga2020vi,
  title     = {The UMA-VI dataset: Visual--inertial odometry in low-textured and dynamic illumination environments},
  author    = {Zu{\~n}iga-No{\"e}l, David and Jaenal, Alberto and Gomez-Ojeda, Ruben and Gonzalez-Jimenez, Javier},
  journal   = {The International Journal of Robotics Research},
  volume    = {39},
  number    = {9},
  pages     = {1052--1060},
  year      = {2020},
  publisher = {SAGE Publications Sage UK: London, England}
}
1. Summary
The dataset contains 32 sequences for the evaluation of VI motion estimation methods, totalling ∼80 min of data. The dataset covers challenging conditions (mainly illumination changes and low-textured environments) in different degrees and a wide range of scenarios (including corridors, parking areas, classrooms, halls, etc.) from two different buildings at the University of Malaga. In general, we provide at least two different sequences within the same scenario, with different illumination conditions or following different trajectories. All sequences were recorded with our VI sensor handheld, except a few that were recorded with it mounted on a car.
2. Hardware
We designed a custom VI sensor unit for data collection purposes. Our VI sensor consists of two stereo rigs and a three-axis inertial unit, as can be seen in the image below:
3. Dataset Sequences
The dataset sequences have been classified depending on the type of challenge addressed. Each sequence is stored as a single .zip file and named following the format:
Name-BuildingNumber(-Reverse)_Date_Class
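As a small illustration (not part of the dataset tools), the naming scheme above can be split into its components with a regular expression; the exact date layout and allowed characters are assumptions here, so adapt the pattern to the actual file names:

```python
import re

# Hypothetical parser for names of the form Name-BuildingNumber(-Reverse)_Date_Class.
SEQUENCE_NAME = re.compile(
    r"^(?P<name>[^-_]+)"        # scenario name, e.g. a hall or corridor
    r"-(?P<building>\d+)"       # building number
    r"(?P<reverse>-Reverse)?"   # optional reversed-trajectory marker
    r"_(?P<date>[^_]+)"         # recording date (layout assumed)
    r"_(?P<class>[^.]+)$"       # challenge class
)

def parse_sequence_name(filename: str) -> dict:
    """Split a sequence file name into its components."""
    stem = filename.rsplit(".", 1)[0]  # drop the .zip extension if present
    match = SEQUENCE_NAME.match(stem)
    if match is None:
        raise ValueError(f"Unexpected sequence name: {filename}")
    return match.groupdict()
```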
All the sequence files contain (the format used is explained in the paper):
- synchronized timestamps for camera and IMU
- the ground-truth poses at the start and the end of each sequence, allowing the drift of the tested algorithm to be measured
- the exposure time of each image
4. Calibration Sequences
The calibration parameters of our VI sensor can be downloaded here. We also provide the corresponding calibration sequence, allowing custom calibration methods to be used with our dataset.
The parameters of the AprilTag grid used in the calibration sequences are shown below. The file can also be downloaded here.
[Table: AprilTag grid parameters]
4.1. Camera intrinsic calibration
We calibrated the intrinsic parameters (the projection parameters of each camera as well as the relative spatial transformation between the two cameras of each stereo setup) for each stereo rig independently. For that purpose, we recorded the calibration pattern (AprilTag grid) while slowly moving the VI sensor unit in front of it. The final calibration parameters were estimated using the calibration toolbox Kalibr. The next table shows three different sequences to calibrate each stereo pair.
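For reference, the sketch below shows how the resulting intrinsic parameters are typically used, assuming the usual pinhole projection with radial-tangential (RadTan) distortion as in Kalibr/OpenCV; it is an illustration, not code shipped with the dataset:

```python
import numpy as np

def project_pinhole_radtan(P, fx, fy, cx, cy, k1, k2, p1, p2):
    """Project a 3D point P = (X, Y, Z) in the camera frame to pixel coordinates,
    using the pinhole model with radial-tangential (RadTan) distortion."""
    x, y = P[0] / P[2], P[1] / P[2]          # normalized image coordinates
    r2 = x * x + y * y
    radial = 1.0 + k1 * r2 + k2 * r2 * r2    # radial distortion factor
    x_d = x * radial + 2.0 * p1 * x * y + p2 * (r2 + 2.0 * x * x)
    y_d = y * radial + p1 * (r2 + 2.0 * y * y) + 2.0 * p2 * x * y
    return np.array([fx * x_d + cx, fy * y_d + cy])  # pixel coordinates (u, v)
```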
4.2. Camera photometric calibration
In order to allow direct VI approaches to be used with our dataset, we also provide calibration sequences for the sensor response function and lens vignetting of the uEye monochrome cameras. To obtain the cameras' response function, we recorded a static scene with different exposure times (ranging from 0.07 ms to 20 ms, with the smallest steps allowed by the sensor). For the vignette calibration, we recorded a known marker (ArUco) on a white planar surface while moving the VI sensor in order to observe it from different angles. In the table below we provide two Response sequences and two Vignette sequences.
The photometric camera calibration (following A Photometrically Calibrated Benchmark For Monocular Visual Odometry) was performed only on the uEye cameras (given that the Bumblebee®2 parameters are continuously auto-adjusted). To perform this calibration, we used the mono_dataset_code tool (edited by Alberto Jaenal). The vignette calibration results are shown in the image below (the cameras were calibrated with a RadTan model):
Our results for the photometric calibration of each uEye camera are:
- Response: cam2_response.txt, cam3_response.txt
- Vignette: cam2_vignette.png, cam3_vignette.png
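As an illustration of how the response and vignette files listed above can be applied, the sketch below undoes the response function and vignetting for a single image. It assumes the file conventions of the TUM mono_dataset_code tool (a 256-entry inverse response in the .txt file and a 16-bit vignette PNG); check the downloaded files before relying on this exact layout:

```python
import numpy as np
import cv2  # OpenCV, used here to read the vignette image

def load_photometric_calibration(response_txt, vignette_png):
    """Load the inverse response (256 values, one per 8-bit intensity) and the
    vignette map, assuming the TUM mono_dataset_code file conventions."""
    inv_response = np.loadtxt(response_txt)                  # shape (256,)
    vignette = cv2.imread(vignette_png, cv2.IMREAD_UNCHANGED).astype(np.float64)
    vignette /= vignette.max()                               # normalize to (0, 1]
    return inv_response, vignette

def photometrically_correct(image_u8, exposure_ms, inv_response, vignette):
    """Undo the response function and vignetting, and normalize by exposure time,
    so that the corrected values are proportional to scene irradiance."""
    irradiance = inv_response[image_u8]                      # apply inverse response per pixel
    irradiance /= vignette                                   # remove lens vignetting
    return irradiance / exposure_ms                          # account for exposure time
```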
4.3. IMU Noise Calibration
The IMU intrinsics were obtained using the approach developed in The TUM VI Benchmark for Evaluating Visual-Inertial Odometry, in which the white-noise and bias random-walk parameters are approximated from Allan deviations. The calibration sequence is available for download as .bin and .npy files with the following formats:
- .bin: unsigned long long timestamp, double gyro_x, gyro_y, gyro_z, accel_x, accel_y, accel_z (little endian)
- .npy: timestamp, gyro_x, gyro_y, gyro_z, accel_x, accel_y, accel_z
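For example, the .bin layout described above can be read directly with NumPy; the grouping of the fields into gyro/accel triplets below is a representation choice for illustration, but the byte layout follows the format given above:

```python
import numpy as np

# Record layout from the .bin format above: one unsigned 64-bit timestamp
# followed by six little-endian doubles (gyro xyz, accel xyz).
IMU_RECORD = np.dtype([
    ("timestamp", "<u8"),
    ("gyro", "<f8", (3,)),
    ("accel", "<f8", (3,)),
])

def read_imu_bin(path):
    """Read the still-IMU calibration sequence from its .bin file."""
    return np.fromfile(path, dtype=IMU_RECORD)

# samples = read_imu_bin("imu_still.bin")   # hypothetical file name
# gyro = samples["gyro"]                    # shape (N, 3)
```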
The calibration was performed with the IMU_calib_still tool. The results of the IMU intrinsic noise calibration are shown below:
[Table: IMU noise calibration results]
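For reference, the sketch below shows a simplified (non-overlapping) Allan deviation computation of the kind that underlies this calibration; the actual IMU_calib_still tool may differ in details such as overlapping clusters:

```python
import numpy as np

def allan_deviation(samples, sample_period, max_clusters=100):
    """Non-overlapping Allan deviation of a 1-D rate signal (e.g. one gyro axis).
    Returns (taus, adevs). A simplified sketch for illustration only."""
    samples = np.asarray(samples, dtype=float)
    n = len(samples)
    # Logarithmically spaced cluster sizes, each small enough to give >= 2 bins.
    ms = np.unique(np.logspace(0, np.log10(n // 2), max_clusters).astype(int))
    taus, adevs = [], []
    for m in ms:
        # Average the signal over consecutive, non-overlapping bins of m samples.
        bins = samples[: (n // m) * m].reshape(-1, m).mean(axis=1)
        # Allan variance: half the mean squared difference of successive bin averages.
        avar = 0.5 * np.mean(np.diff(bins) ** 2)
        taus.append(m * sample_period)
        adevs.append(np.sqrt(avar))
    return np.array(taus), np.array(adevs)

# The white-noise density is then read off near tau = 1 s, and the bias random walk
# from the long-tau part of the curve (see the TUM VI benchmark paper).
```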
4.4. Extrinsic calibration
We calibrated the extrinsics as well as the time-synchronization offsets of each camera with respect to the IMU. For that purpose, we again recorded the calibration pattern (AprilTag grid) while moving the VI sensor unit in front of it, trying to excite all three axes of the IMU with rotational and translational movements. We used Kalibr to estimate the extrinsic parameters and time delays of the cameras.
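As a small usage illustration (not part of the dataset tools), the calibrated extrinsics and time offsets would typically be applied as in the sketch below; note that the sign convention of the time offset depends on the calibration output, so verify it against the provided files:

```python
import numpy as np

def camera_point_to_imu_frame(p_cam, T_imu_cam):
    """Express a 3D point given in the camera frame in the IMU (body) frame,
    using a 4x4 homogeneous extrinsic transform T_imu_cam."""
    p_h = np.append(p_cam, 1.0)        # homogeneous coordinates
    return (T_imu_cam @ p_h)[:3]

def correct_camera_timestamp(t_cam, time_offset):
    """Shift a camera timestamp onto the IMU clock using the calibrated
    camera-IMU time offset (sign convention assumed, check the calibration files)."""
    return t_cam + time_offset
```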
The results for both the intrinsic calibration of each stereo pair and its extrinsic calibration with respect to the IMU are provided below:
Bumblebee®2 – IMU: [calibration results table]
uEye – IMU: [calibration results table]
5. Evaluation
In order to test each method’s performance, we developed evaluation tools, available at (repo). These tools compute the drift error and the alignment error, which are described in our paper, and are implemented in C++ (src/evaluate.cpp) and in Python3 (python/evaluate.py).
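The exact error definitions are given in the paper; purely as an illustration of the underlying idea, a drift between the estimated and pseudo-ground-truth start/end poses could be measured as below (4x4 homogeneous pose matrices assumed; this is not necessarily the metric implemented in the repository):

```python
import numpy as np

def accumulated_drift(T_gt_start, T_gt_end, T_est_start, T_est_end):
    """Illustrative drift measure: compare the estimated motion between the first
    and last poses of a sequence against the ground-truth motion."""
    delta_gt = np.linalg.inv(T_gt_start) @ T_gt_end       # ground-truth relative motion
    delta_est = np.linalg.inv(T_est_start) @ T_est_end    # estimated relative motion
    error = np.linalg.inv(delta_gt) @ delta_est           # residual transform
    trans_drift = np.linalg.norm(error[:3, 3])            # translational drift [m]
    # Rotational drift as the angle of the residual rotation, in radians.
    rot_drift = np.arccos(np.clip((np.trace(error[:3, :3]) - 1.0) / 2.0, -1.0, 1.0))
    return trans_drift, rot_drift
```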
6. Tools
We also provide several tools in our (repo):
- a rosbag sequence creator, developed in Python (python/rosbagCreator.py)
- a TUM format to ASL format .csv sequence converter, developed in C++ (src/tum2asl.cpp)
- the aforementioned IMU calibration toolbox (imu_calibration)
7. License
All data in the UMA Visual-Inertial Dataset is licensed under a Creative Commons Attribution 4.0 International License (CC BY 4.0).