SocSMCs: making robots interact without words

When we shake hands with someone or help them move furniture, our body is doing far more computation than we realise. The direction in which our hand moves, how much force is needed, how we keep our balance in the meantime… these are decisions our body makes automatically every day, without us noticing a thing.

Despite its complexity, all of this comes easily to people. For robots, however, it is a whole different matter. Robots need every one of these calculations programmed in order to behave in a way we find “normal” or “socially acceptable”, even at what we would consider a very basic level of motion control. They have no intuition or social education as a basis for their actions. Yet such interactive behaviour is essential if we want robots to adapt to our lives and help us in them.

One of the EU research projects we are involved in, SocSMCs, studies socially relevant action-effect contingencies in human-human and human-robot interaction scenarios. With the goal of achieving robotics technology that is socially competent, some of the SocSMCs experiments are being carried out with PAL Robotics’ humanoid robot REEM-C.

First steps: a friendly and responsive handshake

One of the first capabilities developed with REEM-C for SocSMCs is a proper handshake: the humanoid grasps the other person’s hand with a controlled force and follows the person’s movements.
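To give a flavour of how this kind of compliant following can work, here is a minimal admittance-control sketch in Python. It is purely illustrative: the partner’s hand force is simulated rather than read from a real force-torque sensor, and the gains are invented values for the sketch, not REEM-C’s actual controller.

```python
import numpy as np

# Illustrative admittance controller for a compliant handshake.
# The "sensor" below is simulated and the gains are made up for this
# sketch; they are not values from REEM-C's real controller.

COMPLIANCE = 0.002   # metres/second of yielding per newton of force
DAMPING = 0.85       # low-pass factor that keeps the motion smooth
DT = 0.01            # 100 Hz control loop

def partner_force(t):
    """Simulated wrist force: the partner pumps the hand up and down."""
    return np.array([0.0, 0.0, 8.0 * np.sin(2.0 * np.pi * t)])

def admittance_step(force, prev_velocity):
    """Convert the measured external force into a hand velocity, so the
    robot follows the person's movement instead of resisting it."""
    target = COMPLIANCE * force
    return DAMPING * prev_velocity + (1.0 - DAMPING) * target

velocity = np.zeros(3)
position = np.zeros(3)
for step in range(300):               # three seconds of handshake
    force = partner_force(step * DT)  # a real robot would read its F/T sensor
    velocity = admittance_step(force, velocity)
    position += velocity * DT         # integrate to get the hand trajectory

print("vertical hand offset at the end: %.3f m" % position[2])
```

The underlying idea is what makes the handshake feel responsive rather than rigid: instead of commanding a fixed trajectory, the robot lets the measured forces shape its motion.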

The challenge is to create robots that work in human environments in a way that feels natural to people. The robot should be able to understand the non-verbal cues that humans use all the time and react accordingly.

What is REEM-C learning now?

SocSMCs has more plans for our humanoid robot, such as taking a group selfie or helping someone move a piece of furniture without speaking. REEM-C is currently learning the steps to accomplish the latter task (a rough software sketch follows the list):

  1. See and recognize the object (the piece of furniture)
  2. Approach the object together with the person, using autonomous navigation
  3. Grasp the object and lift it in coordination with the person
  4. Move the object without being told explicitly where to go, “guessing the human’s intentions” from its sensor readings
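
As a rough idea of how these four steps might be organised in software, here is a toy state machine in Python. It is a sketch under stated assumptions: the perception fields and the gain are hypothetical placeholders for this illustration, not the project’s actual interfaces.

```python
from enum import Enum, auto

class Phase(Enum):
    RECOGNIZE = auto()   # step 1: find the piece of furniture
    APPROACH = auto()    # step 2: navigate to it together with the person
    LIFT = auto()        # step 3: grasp and lift in coordination
    CARRY = auto()       # step 4: follow the human's intentions
    DONE = auto()

def next_phase(phase, percept):
    """Advance through the steps. `percept` stands in for whatever the
    robot's vision, navigation and force sensing report; the field names
    are illustrative, not a real REEM-C interface."""
    if phase is Phase.RECOGNIZE and percept["object_found"]:
        return Phase.APPROACH
    if phase is Phase.APPROACH and percept["at_object"]:
        return Phase.LIFT
    if phase is Phase.LIFT and percept["object_lifted"]:
        return Phase.CARRY
    if phase is Phase.CARRY and percept["task_finished"]:
        return Phase.DONE
    return phase

def carry_velocity(wrench, gain=0.0015):
    """During CARRY, 'guess' the human's intention from the forces felt
    through the shared object: move in the direction the partner pulls."""
    return [gain * f for f in wrench]

# Example: the object has been spotted, so the robot starts approaching.
percept = {"object_found": True, "at_object": False,
           "object_lifted": False, "task_finished": False}
print(next_phase(Phase.RECOGNIZE, percept))   # Phase.APPROACH
print(carry_velocity([4.0, 0.0, 0.0]))        # [0.006, 0.0, 0.0] m/s
```

The last function captures the core of step 4: the person never speaks, but their intention arrives through the object as a measurable force, which the robot can follow.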

Studying these cases will help us apply similar developments to the many other situations in which a robot needs to play an assistive, cooperative role in human-robot interaction.
