A 3D reconstruction method that obtains the floor plan, including openings (i.e., doors and windows), of multi-room environments from a sequence of RGB-D images captured by a mobile robot.
A dataset of simulated airflows and gas dispersion in detailed 3D models of real houses. Built with robotics in mind, it is fully integrated with ROS.
Open Mobile Robot Architecture (OpenMORA) is a MOOS- and MRPT-based distributed architecture for mobile robots. It provides off-the-shelf modules for common robotic platforms and sensors, dataset logging, Monte Carlo Localization, reactive and planned navigation, and more.
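To illustrate the particle-filter idea behind Monte Carlo Localization in general (this is an educational sketch, not OpenMORA's actual MOOS/MRPT implementation), here is a minimal 1-D predict–weight–resample step:

```python
import math
import random

def mcl_step(particles, control, measurement, motion_noise=0.1, sensor_noise=0.5):
    """One Monte Carlo Localization update for a 1-D robot pose."""
    # Prediction: propagate every particle through the motion model plus noise.
    moved = [p + control + random.gauss(0, motion_noise) for p in particles]
    # Weighting: score each particle by how well it explains the measurement
    # (here the sensor is assumed to observe the pose directly, with Gaussian noise).
    weights = [math.exp(-((measurement - p) ** 2) / (2 * sensor_noise ** 2))
               for p in moved]
    total = sum(weights)
    weights = [w / total for w in weights]
    # Resampling: draw a new particle set in proportion to the weights.
    return random.choices(moved, weights=weights, k=len(moved))

# Toy run: the robot starts with no idea of its pose (uniform particles),
# moves +1.0, and its sensor reads 1.0; the particle cloud should contract
# around pose 1.0.
random.seed(0)
particles = [random.uniform(-2, 2) for _ in range(500)]
particles = mcl_step(particles, control=1.0, measurement=1.0)
estimate = sum(particles) / len(particles)
```

The noise parameters and the direct pose sensor are assumptions chosen to keep the sketch self-contained; a real MCL module would use a laser-scan likelihood over a map, as the 2D laser scanner platforms above suggest.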
The Robot-at-Home dataset (Robot@Home) is a collection of raw and processed data from five domestic settings, collected by a mobile robot equipped with 4 RGB-D cameras and a 2D laser scanner. Its main purpose is to serve as a testbed for semantic mapping algorithms through the categorization of objects and/or rooms.
The UMA Visual Inertial Dataset is a collection of 32 sequences obtained in challenging conditions (changing light, low-textured scenes) with a handheld custom rig composed of a stereo camera, a stereo rig, and an Inertial Measurement Unit.
We have developed a software library called MRPT (Mobile Robot Programming Toolkit) that provides a number of wrappers for the most common robotic needs in our projects: localization, mapping, perception, etc. You can visit the MRPT web site by clicking on the image below.
We have used our BABEL Development System to design and implement all the software of our robotic applications. You can visit the BABEL site by clicking on the image below.
The Object Labeling Toolkit (OLT) consists of a number of software applications for the effortless labeling of datasets, particularly those containing sequences of RGB-D observations.
Undirected Probabilistic Graphical Models in C++ (UPGMpp) is an open-source library for building, training, and managing probabilistic graphical models.
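As a language-agnostic sketch of what such a library manages (this is a generic pairwise Markov network example, not UPGMpp's C++ API), the following computes an exact marginal for a tiny two-node model by enumeration; the potentials are invented for illustration:

```python
import itertools
import math

# Two binary nodes (e.g. two object/room category variables) with unary and
# pairwise log-potentials, as in a pairwise Markov random field.
unary = {"A": [0.2, 1.0], "B": [0.5, 0.3]}   # log phi_A(x_A), log phi_B(x_B)
pairwise = [[1.0, 0.0], [0.0, 1.0]]          # log psi(x_A, x_B): rewards agreement

def joint_logscore(xa, xb):
    """Unnormalized log-probability of the assignment (x_A, x_B)."""
    return unary["A"][xa] + unary["B"][xb] + pairwise[xa][xb]

# Exact inference by enumerating all assignments (fine for tiny graphs).
log_Z = math.log(sum(math.exp(joint_logscore(a, b))
                     for a, b in itertools.product([0, 1], repeat=2)))
marginal_a1 = sum(math.exp(joint_logscore(1, b) - log_Z) for b in [0, 1])
```

Real graphs are far too large for enumeration, which is why libraries in this space implement approximate inference and parameter learning; the example only shows the model structure such code operates on.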
The UMA-Offices dataset was collected in our facilities at the University of Málaga. It consists of 25 scenarios captured with an RGB-D sensor mounted on a robot.