
Design and development of perception capabilities

This work package (WP) focuses on the robot's ability to perceive its environment. Signals and sensory data are acquired from embedded sensors and processed on embedded systems, both to increase integration and to transform raw data into salient information from which the objects and features of the environment can be interpreted and perceived. These skills are essential for a robot carrying out tasks while sharing its workspace with people or end users. Perception remains a large and active challenge in signal processing and robotics, with open problems at every stage: it covers a variety of tasks such as the detection and identification of entities, their localization in the environment, and the estimation of their behavior and intention. The challenge is to define and acquire the information needed for appropriate, compliant interaction with the environment, and it will be addressed taking into account the specific needs and characteristics of the other work packages in this project (WP1, WP3 and WP4).

Nowadays, with the development of new kinds of sensors such as 3D lasers, event cameras, RGB-D sensors, and time-of-flight (ToF) sensors, our goals in this field are:

  1. Integrate and study these technologies to develop a perception process usable on our robots. For instance, a 3D laser provides dense point measurements at high speed and with great precision, but each scan generates a huge amount of data. To handle this data volume and achieve a real-time implementation, classical techniques for the detection, tracking and classification of moving objects must be revisited (a minimal data-reduction sketch follows this list).
  2. Use embedded or learned knowledge in the perception process so that it remains applicable to robotic systems operating in real, unstructured environments.
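As an illustration of the data-reduction step mentioned in point 1, the sketch below downsamples a raw 3D laser scan with a voxel grid before any detection or tracking runs on it. This is a generic baseline rather than the project's specific pipeline; the function name `voxel_downsample` and the 0.25 m voxel size are illustrative choices, and only NumPy is assumed.

```python
import numpy as np

def voxel_downsample(points: np.ndarray, voxel_size: float) -> np.ndarray:
    """Reduce a point cloud by keeping one centroid per occupied voxel.

    points: (N, 3) array of x, y, z coordinates from a 3D laser scan.
    voxel_size: edge length of the cubic voxels (same unit as the points).
    """
    # Map each point to the integer index of the voxel containing it.
    voxel_idx = np.floor(points / voxel_size).astype(np.int64)
    # Group points sharing a voxel and average them into a single centroid.
    _, inverse, counts = np.unique(voxel_idx, axis=0,
                                   return_inverse=True, return_counts=True)
    centroids = np.zeros((counts.size, 3))
    np.add.at(centroids, inverse, points)   # sum the points of each voxel
    return centroids / counts[:, None]      # turn the sums into means

# Example: a 100k-point scan reduced before detection/tracking runs on it.
scan = np.random.uniform(-20.0, 20.0, size=(100_000, 3))
reduced = voxel_downsample(scan, voxel_size=0.25)
print(scan.shape, "->", reduced.shape)
```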

In such shared spaces, the robot should be able to perceive information that varies from person to person in order to build situation awareness. For instance, in human-robot collaboration, the robot should perceive and estimate the current situation, recognize it, and adapt its behavior accordingly. At the same time, it should be able to estimate the intention of the human and predict their motion over a short time frame; this prediction is a prerequisite for most robot control techniques, which drive the actuators over a short horizon based on it.
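As a minimal sketch of such short-horizon prediction, the snippet below extrapolates a tracked person's position with a constant-velocity model, one of the simplest baselines for feeding a short-horizon controller. The function `predict_motion`, the 10 Hz sampling rate and the 0.5 s horizon are illustrative assumptions, not the predictor actually used in this work.

```python
import numpy as np

def predict_motion(positions: np.ndarray, dt: float,
                   horizon_steps: int) -> np.ndarray:
    """Extrapolate a tracked human position over a short horizon.

    positions: (T, 2) history of observed x, y positions, sampled every dt s.
    Returns a (horizon_steps, 2) array of predicted positions.
    """
    # Estimate velocity from the last two observations (constant velocity).
    velocity = (positions[-1] - positions[-2]) / dt
    steps = np.arange(1, horizon_steps + 1)[:, None]
    # Propagate the last observed position forward along that velocity.
    return positions[-1] + steps * dt * velocity

# Example: predict the next 0.5 s (5 steps at 10 Hz) from a short track.
track = np.array([[0.00, 0.00], [0.05, 0.01], [0.11, 0.02]])
future = predict_motion(track, dt=0.1, horizon_steps=5)
print(future)
```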

In the context of continuum robots, their reduced dimensions and the medical environment they operate in limit the perception possibilities. Beyond the perception of end-effector position/pose, intensively studied in standard rigid robotics, the current literature shows that perceiving the shape of such slender structures and their strains is of high interest. Indeed, exteroception is highly limited by the technological and applicative complexity of sensor integration, namely the reduced scale and the potential interference with the medical environment (imaging systems, materials, etc.), as well as inopportune occlusions. Our goal is to complement these limited measurements with model knowledge and an adequate sensor distribution by developing enhanced observers, allowing reliable state observation/reconstruction that captures the body motion and stiffness of continuum robots. Beyond this technological challenge, the long-term objective is to improve the reliability of such medical devices and enhance their safety when in contact with, and within, the environment.
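The enhanced-observer idea can be illustrated with a classical Luenberger observer: a model prediction corrected by the innovation between sparse sensor readings (e.g., local strains) and the measurement the model predicts. The sketch below is a toy linear version; the matrices A, B, C would come from a linearized continuum-robot model and the gain L from an observer design, and every value shown here is a placeholder, not a result of this project.

```python
import numpy as np

def observer_step(x_hat, u, y, A, B, C, L, dt):
    """One discrete step of a Luenberger observer.

    x_hat : current state estimate (e.g., modal coordinates of the shape)
    u     : actuation input; y: sparse sensor measurement (e.g., a strain)
    A, B, C: linearized model matrices; L: observer gain (design parameter).
    """
    y_hat = C @ x_hat                        # measurement the model predicts
    # Model prediction corrected by the measurement innovation (y - y_hat).
    x_dot = A @ x_hat + B @ u + L @ (y - y_hat)
    return x_hat + dt * x_dot                # forward-Euler integration

# Example on a toy 2-state system with a single strain-like measurement.
A = np.array([[0.0, 1.0], [-4.0, -0.4]])
B = np.array([[0.0], [1.0]])
C = np.array([[1.0, 0.0]])
L = np.array([[2.0], [3.0]])
x_hat = np.zeros(2)
x_hat = observer_step(x_hat, u=np.array([0.1]), y=np.array([0.02]),
                      A=A, B=B, C=C, L=L, dt=0.01)
print(x_hat)
```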
