
Semantic Mapping and Visual Navigation for Smart Robots

Researchers: Marcus Greiff, Bo Bernhardsson, Anders Robertsson and Zhiyong Sun, with colleagues from the Departments of Mathematics at Lund University and Chalmers University of Technology.

Funding: Swedish Foundation for Strategic Research

Why is it that today’s autonomous systems for visual inference tasks are often restricted to a narrow set of scene types and controlled lab settings? Examining the best-performing perceptual systems reveals that each inference task is solved with a specialized methodology. For instance, object recognition and 3D scene reconstruction, despite being strongly connected problems, are treated independently, and an integrated theory is lacking. We believe that, in order to reach further, it is necessary to develop smart systems capable of integrating the different aspects of vision in a collaborative manner.

We gather expertise from computer vision, machine learning, automatic control and optimization with the ambitious goal of establishing such an integrated framework. The research is structured into four work packages: 1) scene modelling, 2) visual recognition, 3) visual navigation and 4) system integration, with the aim of achieving a perceptual robotic system for exploration and learning in unknown environments.

As a demonstrator, we will construct an autonomous system for visual inspection of a supermarket using small-scale, low-cost quadcopters. The system goes well beyond the current state of the art and will provide a complete solution for semantic mapping and visual navigation. The basic research outcomes are relevant to a wide range of industrial applications, including self-driving cars, unmanned surface vehicles, street-view modelling and flexible inspection in general.

See also

SSF Smart Systems, "Semantisk kartering & visuell navigering för smarta robotar" (Semantic Mapping and Visual Navigation for Smart Robots).