ARCAS: Aerial Robotics Cooperative Assembly System

The ARCAS project proposes the development and experimental validation of the first cooperative free-flying robot system for assembly and structure construction. The project will pave the way for a large number of applications, including the building of platforms for the evacuation of people or for landing aircraft, the inspection and maintenance of facilities, and the construction of structures in inaccessible sites and in space. The detailed scientific and technological objectives are:

1) New methods for motion control of a free-flying robot with a mounted manipulator in contact with a grasped object, as well as for coordinated control of multiple cooperating flying robots with manipulators in contact with the same object (e.g. for precise placement or joint manipulation).

2) New flying robot perception methods to model, identify and recognize the scenario and to guide the assembly operation, including fast generation of 3D models, aerial 3D SLAM, 3D tracking and cooperative perception.

3) New methods for cooperative assembly planning and structure construction by means of multiple flying robots, with application to inspection and maintenance activities.

4) Strategies for operator assistance, including visual and force feedback, in manipulation tasks involving multiple cooperating flying robots.

The above methods and technologies will be integrated in the ARCAS cooperative flying robot system, which will be validated in the following scenarios: a) an indoor testbed with quadrotors, b) an outdoor scenario with helicopters, and c) free-flying simulation using multiple robot arms. The project will be implemented by a high-quality consortium whose partners have already demonstrated cooperative transportation by aerial robots as well as high-performance cooperative ground manipulation. The team has the ability to produce, for the first time, challenging technological demonstrations with a high potential for the generation of industrial products upon project completion.
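To give a flavour of the coordinated-control objective, the following is a minimal sketch (not ARCAS's actual method) of the simplest cooperative-manipulation subproblem: two aerial robots jointly carrying a rigid beam must split a desired net vertical force and torque between their grasp points. The function name, the planar 1-D model and all numbers are illustrative assumptions.

```python
# Toy load-sharing sketch for two aerial robots grasping a rigid beam
# at distances -d and +d from its centre. Solves the 2x2 system:
#   f1 + f2 = total_force
#   d * (f2 - f1) = total_torque
# Purely illustrative; real controllers handle full 6-DoF wrenches.

def share_load(total_force, total_torque, d):
    """Return the vertical force each robot must apply so that the
    pair produces the desired net force and torque on the beam."""
    f2 = (total_force + total_torque / d) / 2.0
    f1 = total_force - f2
    return f1, f2

# Hover a 10 N beam (grasp points 0.5 m from centre) while commanding
# a small corrective torque of 1 N·m about the beam centre.
f1, f2 = share_load(10.0, 1.0, 0.5)
print(f1, f2)  # robot 1 and robot 2 contributions; they sum to 10 N
```

The same least-squares idea generalises to more robots and full wrenches via the pseudo-inverse of the grasp matrix, which is where the real control challenges (contact forces, flight dynamics) enter.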

Multimodal interaction in pattern recognition and computer vision
Social and industrial demands for Multimodal Interactive (MI) technologies and advanced man-machine interfaces are increasing dramatically. Pattern Recognition (PR) and Computer Vision (CV) play a highly relevant role in the development of these MI technologies and interfaces. However, traditional PR and CV technologies have mainly focused on full automation, even though full automation often proves elusive or unnatural in many applications, where technology is expected to assist rather than replace the human agents.
MIPRCV establishes a five-year research programme to develop PR and CV approaches that explicitly deal with the challenges and opportunities entailed by the human-interaction paradigm. Based on these approaches, it also aims to implement actual systems and prototypes for a number of important MI applications.
The ultimate goal is to show how existing PR and CV technologies can naturally evolve to support the development of advanced multimodal interactive systems that will realize the long-standing promise of a seamless synergy between persons and machines.
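The human-interaction paradigm mentioned above is often realised as an interactive-predictive loop: the system proposes a complete hypothesis, the user validates or corrects a prefix, and the system re-predicts the remainder constrained by that prefix. The following toy sketch illustrates the loop only; the corpus, the function name and the frequency-free scoring are illustrative assumptions, not MIPRCV's actual models.

```python
# Toy interactive-predictive loop: prediction constrained by a
# user-validated prefix. A real system would use statistical language
# and vision models; here a tiny corpus stands in for them.

CORPUS = [
    "the quick brown fox",
    "the quick brown dog",
    "the quiet morning",
]

def predict(prefix):
    """Return the first corpus hypothesis consistent with the
    user-validated prefix (empty prefix -> initial full hypothesis)."""
    candidates = [s for s in CORPUS if s.startswith(prefix)]
    return candidates[0] if candidates else prefix

hyp = predict("")            # system's initial hypothesis
hyp = predict("the quiet")   # user corrected the prefix; re-predict suffix
print(hyp)
```

The point of the paradigm is that each user correction is fed back as a hard constraint, so the remaining prediction effort is spent only on the unvalidated suffix.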

URUS 2006-2009
Ubiquitous Networking Robotics in Urban Settings

The general objective of this project is the development of new ways of cooperation between network robots and human beings and/or the environment in urban areas, in order to efficiently accomplish tasks that would otherwise be very complex, time-consuming or too costly. For example, cooperation between robots and video cameras can solve surveillance problems in urban areas, and cooperation between robots and wireless communication devices can help people in several ways. The focus of the project is on urban pedestrian areas, an important topic in Europe, where there is growing interest in reducing the number of cars in the streets and improving the quality of life. Network robots can be an important instrument for addressing these issues in cities.

NRS 2006
Network Robot Systems Research Atelier

The Research Atelier on Network Robot Systems (NRS) was created on 15 December 2005 within EURON II (European Robotics Network) for a period of one year, with three purposes in mind: to generate a roadmap for NRS in Europe; to start an NRS community in Europe; and to disseminate the results of the Research Atelier among research institutions and companies through scientific and technological channels.
Partners: Institut de Robòtica i Informàtica Industrial, Scuola Superiore Sant'Anna, Eidgenössische Technische Hochschule Zürich, Laboratory for Analysis and Architecture of Systems (LAAS), University of Surrey, Instituto Superior Técnico/Institute for Systems and Robotics (IST/ISR), Asociación de Investigación y Cooperación Industrial de Andalucía, and Instituto de Investigación de Ingeniería de Aragón.

Vision and Intelligent Systems Group (VIS)

The Vision and Intelligent Systems Group (VIS) carries out basic and applied research with the aim of understanding and designing intelligent systems that are capable of interacting with the real world in an autonomous and wide-reaching manner. Such intelligent systems must perceive, reason, plan, act and learn from previous experiences. The group works on the following topics: robust colour image segmentation and labelling, pattern recognition, viewpoint invariant object learning and recognition, object tracking, face tracking, biometrics, processing and analysis of medical images for diagnosis, document analysis, mobile robot navigation, simultaneous localisation and map building, visual servoing, and human-computer interaction. The possible areas of application of the VIS research include the automotive and transport industry, the biomedical imaging industry, the space industry, robotics applications, security, home and office automation, the entertainment industry, and future computing environments.

NAVROB 2004-2007
Integration of Robust Perception, Learning, and Navigation Systems in Mobile Robotics

The primary goal of this project is to build an integrated system comprising perception, learning, and navigation subsystems for mobile robots in urban or comparable industrial surroundings. In these environments the ground surface is irregular, illumination conditions vary, and the placement of objects and obstacles changes dynamically, all of which makes it challenging to obtain robust algorithms. By providing different systems for robust perception, the mobile platform should be able to autonomously generate a navigation map of the environment, which will in turn serve to enrich a geographic information system.
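A standard way to turn robust range perception into a navigation map of the kind described above is occupancy-grid mapping. The sketch below is a deliberately minimal 1-D log-odds version (not the project's actual pipeline); the cell size, sensor model weights and readings are illustrative assumptions.

```python
# Minimal 1-D occupancy-grid update from noisy range readings taken at
# the origin, in log-odds form: cells the beam passes through gain
# free-space evidence, the cell where the beam ends gains obstacle
# evidence. All parameters are illustrative.

import math

def update_grid(log_odds, readings, cell_size=1.0, l_occ=0.85, l_free=-0.4):
    """Accumulate per-cell log-odds evidence from a list of ranges."""
    for r in readings:
        hit = int(r / cell_size)                # cell where the beam ended
        for i in range(min(hit, len(log_odds))):
            log_odds[i] += l_free               # beam passed through: free
        if hit < len(log_odds):
            log_odds[hit] += l_occ              # beam stopped here: occupied

def occupancy(l):
    """Convert log-odds back to an occupancy probability."""
    return 1.0 - 1.0 / (1.0 + math.exp(l))

grid = [0.0] * 10                    # 10 unknown cells (p = 0.5 each)
update_grid(grid, [4.2, 4.4, 4.3])   # three readings of a wall ~4 m away
probs = [round(occupancy(l), 2) for l in grid]
print(probs)                         # cell 4 near 1.0, cells 0-3 below 0.5
```

Varying illumination and dynamic obstacles are handled in practice by making the sensor model and the evidence weights robust, which is exactly where the project's perception and learning components come in.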