
Motor Affective Cognitive Scaffolding for the iCub

How can mechanisms for building efficient representations/abstractions, mechanisms for learning manipulation skills, and guidance mechanisms be integrated in the same experimental robotic architecture and reused for different robots?

In challenges 1 and 2, visual representations and motor representations are built separately. So far, we have proceeded as if the control loop were split into two stages: building a representation of the state of the robot from visual and proprioceptive information (the sensory stage), and building a representation of the motor properties of the robot based on the estimated state (the motor stage). Accurate control of a humanoid robot in our context requires a deeper coupling, leading to the construction of visuo-motor representations. Such representations will be necessary, for instance, to perform direct visual servoing when the end effector is close to its target. Furthermore, identifying surrounding objects, discriminating them from the robot's body and binding basic affordances to objects all call for a close interaction between visual and motor representations. For example, an object can be conceptualized as a particular set of visual features that react together when the robot interacts with them, but that cannot be controlled in all motor configurations (i.e. when the robot is not in contact with them), in contrast with the robot's own body. Similarly, binding affordances to objects entails learning a mapping between the visual representation of an object and its reactions on the one hand, and the motor models of the robot's interactions with it on the other.
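To make this body/object distinction concrete, here is a small, self-contained sketch (hypothetical illustration code, not part of the MACSi software) that classifies tracked visual features from the correlation between commanded motor speed and observed feature speed, recorded across several interaction episodes: features that co-vary with the motor commands in every episode are attributed to the body, features that co-vary only in some episodes (when the robot happens to touch them) are candidate objects, and the rest is background. All names and thresholds are illustrative.

```cpp
#include <cmath>
#include <cstdio>
#include <vector>

// Pearson correlation between two equally long time series,
// e.g. commanded motor speed vs. observed feature speed.
double correlation(const std::vector<double>& a, const std::vector<double>& b) {
    const size_t n = a.size();
    double ma = 0.0, mb = 0.0;
    for (size_t i = 0; i < n; ++i) { ma += a[i]; mb += b[i]; }
    ma /= n; mb /= n;
    double cov = 0.0, va = 0.0, vb = 0.0;
    for (size_t i = 0; i < n; ++i) {
        cov += (a[i] - ma) * (b[i] - mb);
        va  += (a[i] - ma) * (a[i] - ma);
        vb  += (b[i] - mb) * (b[i] - mb);
    }
    return cov / (std::sqrt(va * vb) + 1e-12);  // guard against zero variance
}

enum class FeatureClass { Body, Object, Background };

// Classify a feature from its per-episode motor/feature correlations:
// always correlated -> body; sometimes correlated (only in the episodes
// where the robot was in contact) -> object; never correlated -> background.
FeatureClass classify(const std::vector<double>& episodeCorrelations,
                      double threshold = 0.7) {
    size_t correlated = 0;
    for (double c : episodeCorrelations)
        if (c > threshold) ++correlated;
    if (correlated == episodeCorrelations.size()) return FeatureClass::Body;
    if (correlated > 0) return FeatureClass::Object;
    return FeatureClass::Background;
}

int main() {
    // correlation() would produce such per-episode values from recorded traces:
    std::vector<double> motor = {0.0, 0.5, 1.0, 0.5, 0.0};
    std::vector<double> feat  = {0.1, 0.6, 0.9, 0.4, 0.1};
    std::printf("sample correlation: %.2f\n", correlation(motor, feat));

    // Made-up per-episode correlations for three tracked features.
    std::vector<double> hand = {0.95, 0.91, 0.88};  // moves with every command
    std::vector<double> toy  = {0.05, 0.82, 0.03};  // moves only when pushed
    std::vector<double> wall = {0.02, 0.01, 0.04};  // never reacts
    std::printf("hand -> %d, toy -> %d, wall -> %d\n",
                (int)classify(hand), (int)classify(toy), (int)classify(wall));
    return 0;
}
```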

Finally, given the huge space of visual and motor possibilities, guided exploration mechanisms are necessary to orient the robot towards "interesting" objects and to let it discover "interesting" affordances (a toy sketch of such a mechanism is given at the end of this section). As a result, the pieces of work addressing the elementary challenges listed above have to be closely coupled together. This constitutes both a scientific and a technological challenge. Scientifically, the representations and formalisms used to address challenges 1, 2 and 3 must be carefully designed and coordinated right from the start of the project. Technologically, all algorithms must be encapsulated in a common inter-operable software platform and orchestrated together on the complex, high-dimensional (53 DOFs) iCub robot.

The technological objective is not only relevant for the evaluation and monitoring of our results; it also opens the possibility of re-use by other teams. Indeed, the iCub humanoid robot is unique in this respect: its conception was funded by the EU to provide a reference humanoid research platform, with several exemplars disseminated throughout Europe. It features mobile eyes and audio and visual sensors, and it has been developed as an open platform that is easily customizable for future specific needs. Thanks to the associated common software platforms, code developed on the iCub can easily be re-used by other research teams.

Several middlewares for the integration of robotic applications have been proposed in the last decade, including the open-source YARP architecture provided with the iCub robot. These modular software architectures for robotics provide an abstraction of the robot hardware through standard device models, and they usually implement concurrency, real-time capabilities and communication between modules. Each component in these architectures is a complete piece of software providing certain functionalities, seen as services, through a well-defined interface. Such distributed systems enjoy a de facto parallel execution model supported by the operating system, because each component can run as a separate process, possibly executed on a remote computer.

Communication between modules is usually laid out in a relatively fixed topology, sometimes fixed by the modules themselves. In robotics, however, the interactions between services are often complex compared to more traditional business applications: we face event-based, reactive interactions between several motors, several processing algorithms and a set of sensors, all interacting with each other. While most approaches can handle this case, doing so is often a heavy burden, difficult to debug and hard to maintain. With the Urbi platform developed by GOSTAI, building upon the existing iCub YARP architecture and making the two inter-operable, we propose to go beyond traditional modular architectures and to use a dynamic scripting language as a generic way to define the interactions between software components. Furthermore, the level of abstraction provided by Urbi makes it easier to reuse the same software on alternative robotic platforms.
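To make this component model concrete, here is a minimal sketch of a YARP module written against the standard yarp::os API. The port names and the message layout are hypothetical, not the actual MACSi interfaces; the point is that the module exposes its service through a named port and never hard-codes who consumes its output.

```cpp
#include <yarp/os/Network.h>
#include <yarp/os/BufferedPort.h>
#include <yarp/os/Bottle.h>
#include <yarp/os/Time.h>

int main() {
    yarp::os::Network yarp;               // connects this process to the YARP name server
    if (!yarp.checkNetwork()) {
        return 1;                         // no name server reachable
    }

    // The component publishes its service on a named port; other modules
    // subscribe to it without this module knowing who they are.
    yarp::os::BufferedPort<yarp::os::Bottle> out;
    out.open("/macsi/vision/features:o"); // hypothetical port name

    while (true) {                        // run until killed, like a typical YARP module
        yarp::os::Bottle& msg = out.prepare();
        msg.clear();
        msg.addString("feature");         // illustrative message layout
        msg.addDouble(0.42);              // e.g. feature x position
        msg.addDouble(0.17);              // e.g. feature y position
        out.write();
        yarp::os::Time::delay(0.1);       // publish at roughly 10 Hz
    }
    return 0;
}
```

The connection topology is then defined outside the component, for instance from the command line with yarp connect /macsi/vision/features:o /macsi/learner/features:i (an assumed consumer port). In the architecture proposed here, an Urbi script would take over this wiring role and could additionally express reactive, event-based couplings between components, instead of a fixed topology compiled into the modules.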
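And here, as announced above, is a toy illustration of a guided exploration mechanism. It follows a standard intrinsic-motivation heuristic, acting where the empirical learning progress of a predictive model is highest, so that both trivially predictable and purely unpredictable regions are eventually abandoned. This is a self-contained sketch with made-up numbers, not the MACSi exploration algorithm.

```cpp
#include <cmath>
#include <cstdio>
#include <random>
#include <vector>

// One region of the sensorimotor space, with a history of the
// prediction errors made by a forward model in that region.
struct Region {
    std::vector<double> errors;
    // Learning progress = how much the prediction error decreased
    // between the older and the more recent half of the history.
    double progress() const {
        if (errors.size() < 4) return 1.0;  // optimistic init: unexplored looks promising
        size_t half = errors.size() / 2;
        double older = 0.0, recent = 0.0;
        for (size_t i = 0; i < half; ++i) older += errors[i];
        for (size_t i = half; i < errors.size(); ++i) recent += errors[i];
        older /= half; recent /= errors.size() - half;
        return older - recent;              // positive while the model is improving
    }
};

int main() {
    std::mt19937 rng(42);
    std::vector<Region> regions(3);
    // Simulated error levels: region 0 is trivially predictable,
    // region 1 is learnable (error shrinks with practice), region 2 is noise.
    std::vector<double> level = {0.01, 0.8, 0.5};
    std::vector<double> decay = {1.0, 0.8, 1.0};

    for (int t = 0; t < 60; ++t) {
        // Pick the region with the highest learning progress ("interesting").
        size_t best = 0;
        for (size_t r = 1; r < regions.size(); ++r)
            if (regions[r].progress() > regions[best].progress()) best = r;
        // Simulate acting there and observing a prediction error.
        std::normal_distribution<double> noise(0.0, 0.02);
        double err = level[best] + noise(rng);
        level[best] *= decay[best];         // the learnable region improves with practice
        regions[best].errors.push_back(std::fabs(err));
        if (t % 10 == 0)
            std::printf("t=%d -> acting in region %zu\n", t, best);
    }
    return 0;
}
```

With this criterion, "interesting" is defined relative to the learner itself: the learnable region is preferred for as long as practice keeps improving the forward model, and is dropped once its error plateaus.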

Expected contributions:

In MACSi, we will design a logical architecture dedicated to the integration of the work corresponding to the three other challenges, and we will define the underlying generic software architecture, based on Urbi and YARP, that will support the efficient integration of the corresponding functionalities. This will not only allow us to evaluate the final results of MACSi through integrated experiments, but will also allow us to make our code available and easily re-usable by other research teams.

Challenge leader: GOSTAI (Paris).





