Teams of robots can succeed in situations where a single robot may fail. We investigate multi-robot systems composed of homogeneous or heterogeneous robots, using both centralized and distributed communication. These teams are applied to problems in search and rescue robotics, as well as mapping and navigation.
Decentralized visual simultaneous localization and mapping (SLAM) is a powerful tool for multi-robot applications in environments where absolute positioning systems are not available. It allows a group of robots to know how the current pose of each robot relates to all its previous poses, and to all current and previous poses of the entire group. Being visual, it relies on cameras, which are cheap, lightweight, and versatile sensors; being decentralized, it does not depend on communication with a central ground station. In this project, we develop and integrate state-of-the-art decentralized SLAM components.
We have started this line of research by investigating data-efficient visual place recognition, an essential component of SLAM. Classically, if a robot wants to know whether a place it sees has previously been seen by the other robots, it needs to send data to all other robots, resulting in a data exchange complexity that is linear in the robot count. We have shown that if visual place recognition is cast as a key-value lookup, this complexity can be reduced to constant in the robot count, using a simple technique also used in Distributed Hash Tables (DHTs): deterministically assigning keys to robots. We have shown how to achieve this reduction both for bag-of-words visual place recognition and for place recognition that uses recent, machine-learned full-image descriptors (NetVLAD).
Using this method for place recognition, together with decentralized pose graph optimization, we have developed a first-of-its-kind decentralized visual SLAM system, whose code is available online. The main bandwidth bottleneck of that system is the information exchanged for relative pose estimation between robots, which is why our latest research on this subject focuses on Smart Interest Points that are identified using machine learning.
Data-Efficient Decentralized Visual SLAM. IEEE International Conference on Robotics and Automation (ICRA), 2018. PDF | ICRA18 Video Pitch | Code and Data
Efficient Decentralized Visual Place Recognition From Full-Image Descriptors. 1st International Symposium on Multi-Robot and Multi-Agent Systems (MRS), 2017.
Efficient Decentralized Visual Place Recognition Using a Distributed Inverted Index. IEEE Robotics and Automation Letters (RA-L), 2016.
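The deterministic key-to-robot assignment described above can be sketched in a few lines. The hashing scheme, team size, and function names below are illustrative assumptions, not the published implementation: the point is only that each visual word is owned by exactly one robot, so a query costs the same amount of communication regardless of team size.

```python
import hashlib

NUM_ROBOTS = 4  # hypothetical team size

def responsible_robot(word_id: int, num_robots: int = NUM_ROBOTS) -> int:
    """Deterministically map a visual word to the robot that indexes it,
    as in a Distributed Hash Table: every robot computes the same answer
    without communicating."""
    digest = hashlib.sha1(str(word_id).encode()).digest()
    return int.from_bytes(digest[:4], "big") % num_robots

def partition_query(word_ids, num_robots: int = NUM_ROBOTS):
    """Split a query image's visual words into per-robot sub-queries.
    The total data sent is proportional to the number of words, and thus
    constant in the number of robots."""
    buckets = {}
    for w in word_ids:
        buckets.setdefault(responsible_robot(w, num_robots), []).append(w)
    return buckets
```

Each robot then keeps only the inverted-index entries for the words it owns, and answers sub-queries against that local partition.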
As part of the Mohamed Bin Zayed International Robotics Challenge, we developed a method for collaborative object transport with micro aerial vehicles (MAVs). In our approach, the robots rely only on visual cues for collaboration and do not require explicit communication. The dynamics of the transported object are taken into account, which allows flight at accelerations of up to 0.5 m/s^2.
Dynamic Collaboration without Communication: Vision-Based Cable-Suspended Load Transport with Two Quadrotors. IEEE International Conference on Robotics and Automation (ICRA), 2017.
This project considers the problem of planning a path for a ground robot through unknown terrain, using observations from a flying robot. In search and rescue missions, which are our target scenarios, the time from arrival at the disaster site to the delivery of aid is critically important. Therefore, we propose active exploration of the environment, where the flying robot chooses where to map in order to minimize the total response time of the system (the time from deployment until the ground robot reaches its goal). We use terrain class and elevation to estimate feasible and efficient paths for the ground robot. By exploring the environment actively, we achieve superior response times compared to both exhaustive and greedy exploration strategies.
J. Delmerico, E. Mueggler, J. Nitsch, D. Scaramuzza. Active Autonomous Aerial Exploration for Ground Robot Path Planning. IEEE Robotics and Automation Letters (RA-L), 2016.
Collaborative Localization of Aerial and Ground Robots through Elevation Maps. International Symposium on Safety, Security, and Rescue Robotics (SSRR), Lausanne, 2016.
"On-the-spot Training" for Terrain Classification in Autonomous Air-Ground Collaborative Teams. International Symposium on Experimental Robotics (ISER), Tokyo, 2016.
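The active-exploration idea above can be sketched as follows: the flying robot evaluates each candidate observation by what the ground robot's travel time would become if that cell turned out to be traversable, and flies to the candidate that minimizes the estimated response time. The grid world, the optimistic cost model, and all names are illustrative assumptions rather than the published planner.

```python
import heapq

def shortest_time(grid, start, goal):
    """Dijkstra over a 2D grid of per-cell traversal times.
    None marks a cell that is unknown or infeasible for the ground robot."""
    rows, cols = len(grid), len(grid[0])
    dist = {start: 0.0}
    pq = [(0.0, start)]
    while pq:
        d, (r, c) = heapq.heappop(pq)
        if (r, c) == goal:
            return d
        if d > dist.get((r, c), float("inf")):
            continue
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] is not None:
                nd = d + grid[nr][nc]
                if nd < dist.get((nr, nc), float("inf")):
                    dist[(nr, nc)] = nd
                    heapq.heappush(pq, (nd, (nr, nc)))
    return float("inf")

def pick_next_observation(grid, candidates, start, goal, flight_time):
    """Choose the unknown cell whose observation most reduces the estimated
    response time (flight time to the cell plus the ground robot's travel
    time, optimistically assuming the observed cell is traversable)."""
    best, best_cost = None, float("inf")
    for cell in candidates:
        trial = [row[:] for row in grid]
        trial[cell[0]][cell[1]] = 1.0  # optimistic traversal time
        cost = flight_time(cell) + shortest_time(trial, start, goal)
        if cost < best_cost:
            best, best_cost = cell, cost
    return best
```

In the real system the per-cell times would come from the terrain class and elevation estimates mentioned above; here they are plain numbers.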
Multiple MAVs run a visual odometry algorithm and stream their keyframes to a ground station. The Collaborative Structure from Motion system on the centralized ground station combines all received information in real time and creates a global map of the environment.
We are currently working on a decentralized algorithm in which there is no central ground station, but where the robots collaborate on a shared map using distributed consensus. Efficient Decentralized Visual Place Recognition is our most recent contribution to this endeavor. This place recognition algorithm significantly reduces the bandwidth required for place recognition, compared to previous decentralized approaches.
T. Cieslewski, D. Scaramuzza. Efficient Decentralized Visual Place Recognition Using a Distributed Inverted Index. IEEE Robotics and Automation Letters (RA-L), 2016.
C. Forster, S. Lynen, L. Kneip, D. Scaramuzza. Collaborative Monocular SLAM with Multiple Micro Aerial Vehicles. IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), 2013.
We demonstrate the fully autonomous collaboration of an aerial and a ground robot in a mock-up disaster scenario. Within this collaboration, we make use of the individual capabilities and strengths of both robots. The aerial robot first maps an area of interest, then it computes the fastest mission for the ground robot to reach a spotted victim and deliver a first-aid kit. Such a mission includes driving and removing obstacles in the way while being constantly monitored and commanded by the aerial robot. Our mission-planning algorithm distinguishes between movable and fixed obstacles and considers both the time for driving and removing obstacles. The entire mission is executed without any human interaction once the aerial robot is launched and requires a minimal amount of communication between the robots. We describe both the hardware and software of our system and detail our mission-planning algorithm. We present exhaustive results of both simulation and real experiments. Our system was successfully demonstrated more than 20 times at a trade fair.
E. Mueggler, M. Faessler, F. Fontana, D. Scaramuzza. Aerial-guided Navigation of a Ground Robot among Movable Obstacles.
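The distinction between movable and fixed obstacles can be sketched as a shortest-path search in which entering a cell with a movable obstacle incurs an extra removal cost, so the planner trades off driving around an obstacle against pushing it aside. The grid abstraction and the time constants below are assumptions for illustration, not the actual mission planner.

```python
import heapq

DRIVE_TIME = 1.0    # assumed time to drive into an adjacent cell
REMOVAL_TIME = 5.0  # assumed time to push a movable obstacle aside

def mission_time(cells, start, goal):
    """Shortest mission time on a grid of cell types ('free', 'movable',
    'fixed'). Entering a 'movable' cell costs drive + removal time;
    'fixed' cells are impassable."""
    rows, cols = len(cells), len(cells[0])
    dist = {start: 0.0}
    pq = [(0.0, start)]
    while pq:
        d, (r, c) = heapq.heappop(pq)
        if (r, c) == goal:
            return d
        if d > dist.get((r, c), float("inf")):
            continue
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if not (0 <= nr < rows and 0 <= nc < cols):
                continue
            kind = cells[nr][nc]
            if kind == "fixed":
                continue
            nd = d + DRIVE_TIME + (REMOVAL_TIME if kind == "movable" else 0.0)
            if nd < dist.get((nr, nc), float("inf")):
                dist[(nr, nc)] = nd
                heapq.heappush(pq, (nd, (nr, nc)))
    return float("inf")
```

With these constants, a three-cell detour (cost 3) beats removing one obstacle (cost 6), but the planner falls back to removal when no detour exists.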
We propose a new method for the localization of a Micro Aerial Vehicle (MAV) with respect to a ground robot. We solve the problem of registering the 3D maps computed by the robots using different sensors: a dense 3D reconstruction from the MAV monocular camera is aligned with the map computed from the depth sensor on the ground robot. Once aligned, the dense reconstruction from the MAV is used to augment the map computed by the ground robot, by extending it with the information conveyed by the aerial views. The overall approach is novel, as it builds on recent developments in live dense reconstruction from moving cameras to address the problem of air-ground localization.
C. Forster, M. Pizzoli, D. Scaramuzza. Air-Ground Localization and Map Augmentation Using Monocular Dense Reconstruction. IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Tokyo, 2013.
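Because a monocular reconstruction is only defined up to scale, registering it against the ground robot's metric map requires estimating a similarity transform (scale, rotation, translation). A standard closed-form solution for this, given matched 3D points, is Umeyama's method; the sketch below is a generic illustration of that alignment step, not the registration pipeline of the paper.

```python
import numpy as np

def umeyama_alignment(src, dst):
    """Least-squares similarity transform with dst ≈ s * R @ src + t
    (Umeyama, 1991). src, dst: (N, 3) arrays of matched 3D points,
    e.g. points from the MAV's (scale-free) map and the ground robot's
    metric map."""
    mu_s, mu_d = src.mean(0), dst.mean(0)
    xs, xd = src - mu_s, dst - mu_d
    cov = xd.T @ xs / len(src)          # cross-covariance of the two sets
    U, D, Vt = np.linalg.svd(cov)
    S = np.eye(3)
    if np.linalg.det(U) * np.linalg.det(Vt) < 0:
        S[2, 2] = -1                    # enforce a proper rotation
    R = U @ S @ Vt
    var_s = (xs ** 2).sum() / len(src)  # variance of the source points
    s = np.trace(np.diag(D) @ S) / var_s
    t = mu_d - s * R @ mu_s
    return s, R, t
```

Once the transform is known, every point of the dense aerial reconstruction can be mapped into the ground robot's frame to augment its map.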
The KUKA youBot and the Parrot AR.Drone are helping the Easter bunny save Easter for Roboy. The drone follows the youBot while hovering above it and commanding it to the next egg. All movements are performed autonomously, with no user interaction: egg localization and autonomous grasping are based on the aerial view only.
We present an accurate, efficient, and robust pose estimation system based on infrared LEDs. They are mounted on a target object and are observed by a camera that is equipped with an infrared-pass filter. The correspondences between LEDs and image detections are first determined using a combinatorial approach and then tracked using a constant-velocity model. The pose of the target object is estimated with a P3P (perspective-three-point) algorithm and optimized by minimizing the reprojection error. Since the system works in the infrared spectrum, it is robust to cluttered environments and illumination changes. In a variety of experiments, we show that our system outperforms state-of-the-art approaches. Furthermore, we successfully apply our system to stabilize a quadrotor both indoors and outdoors under challenging conditions. We release our implementation as open-source software.
M. Faessler, E. Mueggler, K. Schwabe, D. Scaramuzza. A Monocular Pose Estimation System based on Infrared LEDs. IEEE International Conference on Robotics and Automation (ICRA), Hong Kong, 2014.
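The final refinement step, minimizing the reprojection error of the detected LEDs, can be sketched with a small Gauss-Newton solver over a 6-DoF pose. The camera intrinsics, LED layout, and numerical Jacobian below are simplifying assumptions for illustration, not the released open-source implementation.

```python
import numpy as np

def skew(w):
    return np.array([[0, -w[2], w[1]],
                     [w[2], 0, -w[0]],
                     [-w[1], w[0], 0]])

def exp_so3(w):
    """Rodrigues' formula: axis-angle vector to rotation matrix."""
    th = np.linalg.norm(w)
    if th < 1e-12:
        return np.eye(3) + skew(w)
    W = skew(w / th)
    return np.eye(3) + np.sin(th) * W + (1 - np.cos(th)) * (W @ W)

def project(K, R, t, pts):
    """Pinhole projection of 3D LED positions (object frame) to pixels."""
    pc = pts @ R.T + t
    uv = pc @ K.T
    return uv[:, :2] / uv[:, 2:3]

def refine_pose(K, R, t, pts, obs, iters=10, eps=1e-6):
    """Gauss-Newton minimization of the reprojection error using a minimal
    6-DoF (axis-angle + translation) update; the Jacobian is computed
    numerically for clarity, not speed. obs: (N, 2) detected LED pixels."""
    for _ in range(iters):
        def residual(x):
            Rp = exp_so3(x[:3]) @ R
            return (project(K, Rp, t + x[3:], pts) - obs).ravel()
        r0 = residual(np.zeros(6))
        J = np.empty((r0.size, 6))
        for j in range(6):
            dx = np.zeros(6)
            dx[j] = eps
            J[:, j] = (residual(dx) - r0) / eps
        step = np.linalg.lstsq(J, -r0, rcond=None)[0]
        R = exp_so3(step[:3]) @ R
        t = t + step[3:]
    return R, t
```

In the full system, the P3P solution provides the initial (R, t) that this refinement polishes.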