How is PAL Robotics collaborating in the development of responsible AI in European robotics?

By Séverin Lemaignan, HRI Team Leader at PAL Robotics
AI is shaping the future of nations worldwide, and most European countries have defined their strategic directions for its growth. HispanIA 2040, published in January 2025, is a government-led study that explores AI’s long-term impact on Spain’s economy, society, and sustainability. This study focuses on why and for what purpose AI should be developed. Spain’s vision is centered on responsible and transparent development, ensuring the technology serves the common good while aligning with democratic values and regulatory oversight.
I’m Séverin Lemaignan, leading the Human-Robot Interaction team at PAL Robotics. Artificial Intelligence has been at the core of our work for years, but much of it has remained behind the scenes—embedded in research projects, system integrations, and technical advancements that often go unnoticed. But this is the right moment to highlight our contributions and show how PAL Robotics is actively shaping human-robot interaction for AI applications in Europe.
AI in robotics
Let’s start with this: as the technology advances at an exponential pace, the AI community is somewhat running out of problems to solve in virtual, “disembodied” environments. This is driving AI research into the physical world, where robotics – embodied AI – becomes the next frontier for innovation. This will redefine how machines perceive, interact, and adapt contextually and safely around people.
From foundational advances to our most complex projects, this article describes the initiatives we are developing in-house to contribute to this exciting ecosystem.
AI Embodiment
First, what is so special about ‘embodied AI’? At PAL Robotics, we are conducting important EU-funded research, in projects like ARISE and CoreSense, to develop AI able to make sense of the real, physical world: what we call embodied AI, literally an AI with a (robotic) body. That means creating, developing, and publishing a series of AI technologies that include, for instance, building a full, complex 3D model of the robot’s surroundings and turning it into what we call a semantic description of the world.
A semantic model of the world does not simply position different types of objects on a map; it captures how they relate to one another. A semantic description expresses the meaning behind the observed elements of the world, and it can be built from spatial relationships and pre-existing semantic information.
For example, if someone asks the robot to grab ‘their’ mug and the robot sees a mug on that person’s desk, it will combine semantics (the ownership of the desk) with object positioning (the mug on the desk) to infer additional semantic information: the mug on their desk is probably their mug. Or: a robot might explore a room, detect the objects and humans inside it, and infer what they are doing or what their role is:
“There is an object of type keyboard that is lying on an object of type table and is in a room that seems to be a meeting room. In this room there are also 3 humans that seem to be talking to each other.”
The semantic description can then be fed as part of a prompt to an LLM, so that the robot can use it to answer specific questions about the environment. If we ask the robot “what’s going on here?”, for example, it can draw on its 3D perception model and the symbolic semantic description of the world to answer something like “there’s a meeting in progress”.
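To make this concrete, here is a minimal sketch of how such a semantic description could be serialized into an LLM prompt. The triples and the `ask_llm` helper are hypothetical stand-ins for the output of our perception pipeline and the on-board inference backend; they are not PAL's actual code.

```python
# Hypothetical sketch: turning a semantic world model into an LLM prompt.
# The triples stand in for the output of the robot's 3D perception and
# semantic-description stages; `ask_llm` is a placeholder for whatever
# on-board inference backend the robot runs.

# Semantic facts as (subject, predicate, object) triples.
scene = [
    ("keyboard_1", "lying_on", "table_1"),
    ("table_1", "located_in", "room_1"),
    ("room_1", "classified_as", "meeting_room"),
    ("human_1", "talking_to", "human_2"),
    ("human_2", "talking_to", "human_3"),
]

def describe(triples):
    """Serialize the semantic triples into plain text for the prompt."""
    return "\n".join(f"- {s} {p.replace('_', ' ')} {o}" for s, p, o in triples)

def ask_llm(prompt: str) -> str:
    # Placeholder: replace with a call to the robot's on-board LLM.
    return "(LLM answer would appear here)"

question = "What's going on here?"
prompt = (
    "You are a robot's reasoning module. The robot currently observes:\n"
    f"{describe(scene)}\n\n"
    "Answer the user's question from these observations only.\n"
    f"Question: {question}"
)
# Expected behaviour: an answer such as "There's a meeting in progress."
print(ask_llm(prompt))
```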

In short: it’s the grounding of a semantic question into the real world. Of course, other people and companies are exploring this space, but we are among the first companies with a full pipeline that goes from low-level object perception through 3D modelling and semantic description to LLM integration, all running 100% on-board at interactive speeds.
Conversational robots
When interacting with robots, non-technical people often expect to engage with large language models like ChatGPT. Until now, that was easier said than done due to specific technical limitations. But from 2025 onwards, PAL Robotics is offering TIAGo Pro and TIAGo Head with the option to integrate a high-performance AI accelerator from Nvidia, alongside the first release of our new operating system, PAL OS. This first PAL OS release, expected in Spring 2025, will include full support for LLMs, allowing users to have more natural conversations with our robots.
However, this goes beyond simply connecting an existing chatbot to a robot. Our AI runs on the robot itself (edge computing), making PAL Robotics one of the first, if not the first, to offer fully edge AI on commercial, off-the-shelf platforms. We did not design our system for edge computing by chance: by running all LLM inference on board, you not only avoid the cloud-connectivity issues that are commonplace in robotics, but also ensure compliance with privacy regulations, including GDPR, by keeping data processing local and secure, something few companies are able to do today.
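PAL OS’s actual LLM stack is not described here; purely as an illustration of the edge-inference idea, this is a minimal sketch using the open-source llama-cpp-python bindings, one common way to run a quantized LLM entirely on local hardware. The model path and parameters are hypothetical.

```python
# Illustrative only: a quantized LLM running fully on local (edge)
# hardware with llama-cpp-python, so no data ever leaves the robot.
from llama_cpp import Llama

llm = Llama(
    model_path="/models/assistant.gguf",  # hypothetical on-board model file
    n_gpu_layers=-1,                      # offload all layers to the AI accelerator
    n_ctx=4096,                           # context window for the conversation
)

reply = llm.create_chat_completion(
    messages=[
        {"role": "system", "content": "You are a helpful service robot."},
        {"role": "user", "content": "What can you see around you?"},
    ],
    max_tokens=128,
)
print(reply["choices"][0]["message"]["content"])
# Because inference happens on-board, no user speech or camera data is
# sent to a cloud service, which simplifies GDPR compliance.
```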
API adjustments
To program robots, you interact with their API, a set of functions that control their behavior. We are currently restructuring our API to make it more accessible to AI tools like LLMs: we are formalizing what we call skills (“navigate to”, “take an object”, “look at human”…). Together, these build a large library of small robot actions that an LLM running on the robot can draw on when answering a user’s request. The aim is to bridge the gap between traditional robot control methods and state-of-the-art AI algorithms, while also making the robots easier to use. While this effort is still a work in progress, we already offer all the tools to run on-board code synthesis for robot control.
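As an illustration of the idea, skills can be exposed to an LLM as a library of callable tools. The skill names below follow the article; the Python function names and registry are hypothetical, not PAL’s actual API.

```python
# Hypothetical sketch of a "skills" library exposed to an LLM as tools.
SKILLS = {}

def skill(fn):
    """Register a robot skill so the LLM can discover and invoke it."""
    SKILLS[fn.__name__] = fn
    return fn

@skill
def navigate_to(place: str) -> str:
    return f"navigating to {place}"   # would call the navigation stack

@skill
def take_object(obj: str) -> str:
    return f"grasping {obj}"          # would call the manipulation stack

@skill
def look_at_human(human_id: str) -> str:
    return f"looking at {human_id}"   # would call the gaze controller

def tool_schema():
    """Describe every skill so it can be placed in the LLM's tool prompt."""
    return [{"name": name, "description": fn.__doc__ or name}
            for name, fn in SKILLS.items()]

# When the LLM answers a request such as "bring me my mug", it emits a
# sequence of tool calls which the robot then executes:
plan = [("navigate_to", "desk"), ("take_object", "mug"),
        ("navigate_to", "user"), ("look_at_human", "user")]
for name, arg in plan:
    print(SKILLS[name](arg))
```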
Reinforcement learning
Nowadays, robots seem more alive than ever. Every day, we see humanoids running, jumping, and performing amazing tricks on social media. The most impressive of these skills are the result of Reinforcement Learning (RL), which trains robots through trial and error, like training a pet with rewards. Deep Reinforcement Learning (Deep RL) takes this process further by using neural networks to process large amounts of data and make more complex decisions. Deep RL allows the robot to recognize patterns, adapt to different situations, and handle more sophisticated tasks, such as exploring unfamiliar environments.

That neural network, which we call the “control policy”, is what decides the next action depending on the dynamic state of the robot. Over time, it learns which movements work best.
At PAL Robotics, we are developing a framework compatible with the most popular tools, such as MuJoCo MJX, Brax, and Isaac Lab, to let some of our platforms take advantage of these technologies. Thanks to AI, skills like opening doors will become straightforward for TIAGo Pro to learn in simulation and test on the real robot.
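To give a flavour of what this trial-and-error loop looks like in code, here is a minimal, self-contained sketch of policy-gradient (REINFORCE) training. The toy 1-D environment stands in for a full MuJoCo/Isaac Lab simulation of, say, a door-opening task; only the structure of the loop is representative.

```python
import torch
import torch.nn as nn

class ToyReachEnv:
    """Toy 1-D task: drive position x to the target at 0 (reward = -|x|)."""
    def reset(self):
        self.x = torch.empty(1).uniform_(-1.0, 1.0)
        return self.x.clone()

    def step(self, action):
        self.x = self.x + 0.1 * action      # apply the control command
        reward = -self.x.abs().item()       # closer to the target = better
        done = self.x.abs().item() < 0.05
        return self.x.clone(), reward, done

# The "control policy": a neural network mapping the robot's state to the
# parameters (mean, log std) of a distribution over actions.
policy = nn.Sequential(nn.Linear(1, 32), nn.Tanh(), nn.Linear(32, 2))
optimizer = torch.optim.Adam(policy.parameters(), lr=1e-2)
env = ToyReachEnv()

for episode in range(500):
    state, log_probs, rewards = env.reset(), [], []
    for _ in range(50):                     # one trial-and-error rollout
        mean, log_std = policy(state)
        dist = torch.distributions.Normal(mean, log_std.exp())
        action = dist.sample()
        log_probs.append(dist.log_prob(action))
        state, reward, done = env.step(action.detach())
        rewards.append(reward)
        if done:
            break
    # REINFORCE update: make actions that led to high returns more likely.
    returns = torch.tensor(rewards).flip(0).cumsum(0).flip(0)
    loss = -(torch.stack(log_probs) * returns).sum()
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```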
Social Embedding
Another example of basic research conducted at PAL Robotics, and already published in journals, relates to the new concept of social embedding. Embedding is a very common concept in the AI universe, but its application to the social domain is a completely new field of study that we are pioneering. For the social interaction of humans and robots, we use machine learning to build an automatic description of the scene that does not depend on the local specificities of a place, but on the semantic interactions between the humans, the robot, and the environment. We are applying AI learning to social situations so that robots can process situations like:
“OK, I don’t know this room, but I can recognize that there is something going on with those people that looks very much like something I’ve seen in the past, a situation where people were meeting and discussing something, so maybe I should not enter the room because they seem to be doing something important.”
It sounds simple to humans, but it is a completely new challenge in AI, as complex and sometimes chaotic human social interactions are really hard to model. To accelerate our research on this exciting topic, a new PhD student will soon join us to focus on this question within the EU project SWEET.
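As a rough illustration of the embedding idea (our actual social-embedding models are described in our publications, not here), one can encode a scene’s semantic relations into a vector and compare it with previously seen situations. In this sketch, an off-the-shelf sentence encoder stands in for a learned social encoder.

```python
# Rough illustration: encode a scene's semantic relations into a vector,
# then compare with previously seen situations in embedding space.
from sentence_transformers import SentenceTransformer, util

encoder = SentenceTransformer("all-MiniLM-L6-v2")

# Past situations, described independently of any particular room.
memory = {
    "people sitting around a table, talking to each other": "meeting",
    "one person standing while the others listen": "presentation",
    "people queuing in front of a counter": "queue",
}
memory_vecs = encoder.encode(list(memory.keys()), convert_to_tensor=True)

# The current scene, as produced by the semantic-description pipeline.
scene = "three humans around a table in an unknown room, talking"
scene_vec = encoder.encode(scene, convert_to_tensor=True)

# The nearest neighbour tells the robot what this scene most resembles,
# even though it has never been in this room before.
scores = util.cos_sim(scene_vec, memory_vecs)[0]
best = scores.argmax().item()
print(list(memory.values())[best])  # -> "meeting": maybe don't interrupt
```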
Ethical accessibility
We are using LLMs not only to make the robots more intelligent, but also to deliver adaptable and understandable descriptions of what is going on around them. Based on the user’s prior knowledge, LLMs can generate explanations of the robot’s decisions and behaviours that are tailored to the context.
That not only addresses explainability; it also delivers a more ethical level of accessibility for all types of users. We are ensuring that people truly understand how the robot works without needing technical expertise, leveraging AI’s interactivity to personalize explanations for each user.
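To illustrate (the prompt wording and `ask_llm` stub below are ours, not PAL’s production code), tailoring an explanation can be as simple as conditioning the LLM prompt on the user’s profile:

```python
# Illustrative sketch: conditioning an LLM explanation on the user's profile.

def ask_llm(prompt: str) -> str:
    # Placeholder: replace with the robot's on-board LLM.
    return "(tailored explanation would appear here)"

def explain_decision(decision: str, reason: str, user_profile: str) -> str:
    prompt = (
        f"The robot decided to: {decision}.\n"
        f"Internal reason: {reason}.\n"
        f"Explain this decision to {user_profile}, using only concepts "
        "that person can be expected to know. Be brief and concrete."
    )
    return ask_llm(prompt)

# The same decision, explained at two levels of prior knowledge:
print(explain_decision(
    "wait outside the meeting room",
    "the scene was classified as an ongoing meeting",
    "a roboticist"))
print(explain_decision(
    "wait outside the meeting room",
    "the scene was classified as an ongoing meeting",
    "an elderly user with no technical background"))
```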
In the TRAIL project, funded by the European Commission, we provided expertise in social robotics and human-robot interaction to enhance the ability of our robots (such as ARI or TIAGo Pro) to interact intuitively and naturally with human users.
These advancements will also serve future compliance with the AI Act and other legislation requiring transparent AI systems that can explain their behaviour and decision-making. This is a completely new line of research led by our colleague and PhD student, Ferran Gebellí.
European Commission’s framework
All of PAL Robotics’ AI initiatives align with the European Commission’s priorities for robotics, which focus on:
- Improving robot behavior to ensure autonomous systems make decisions that are explainable, adaptive, and context-aware.
- Enhancing usability so robots can operate effectively in diverse environments, from industrial settings to social interactions.
- Ensuring safety in human-robot interaction by developing responsible AI that adheres to strict ethical and regulatory standards.
The AI Act (2024) and updates to the Machinery Regulation reinforce the need for transparent, compliant, and accountable AI systems—areas where PAL Robotics is already making significant contributions. Our work in Edge AI ensures GDPR-compliant data processing, while semantic understanding and social embedding enhance the way robots interact with people and their surroundings. Additionally, projects like ARISE and CoreSense address AI’s role in explainability, helping robots not only execute tasks but also communicate their reasoning in an accessible way.
By integrating AI responsibly and aligning with EU directives, PAL Robotics is actively contributing to the future of AI-driven robotics in Europe.
Collaborative projects and HispanIA core goals
Since 2004, we have spearheaded a number of public- and private-sector projects in different fields. With over 25 ongoing EU-funded projects, we have contributed to three of the most important goals highlighted by the Spanish government:
Modernize the productive sector
The MANOLO project, funded by the EU, enhances the efficiency and optimization of AI systems across the manufacturing and healthcare industries. It focuses on developing trustworthy AI tools and dynamic, data-efficient algorithms for AI tasks.
In the project, we play an important role in integrating our TIAGo robots into MANOLO’s initiatives, showcasing robot adaptability and compliance in real-world settings.
AGIMUS is another great example. This project focuses on developing robots that can quickly adapt and perform efficiently in manufacturing by using advanced AI in perception, planning, and control. The goal is to equip robots with the skills to handle varied manufacturing tasks with minimal setup and maximum autonomy.
Strengthen the welfare state
Aligned with Industry 5.0 principles, the EU project ARISE seeks to create resilient, sustainable, and human-centric work environments, enhancing efficiency and productivity through seamless collaboration between humans and robots. The project uses TIAGo to help test how robots can work better with humans, making sure the robot’s actions are helpful and clear in real-world situations.
Achieve environmental sustainability
By designing AI to run directly on robots rather than relying on energy-intensive data centers, we process data locally on the robot, reducing the need for constant cloud communication and large-scale infrastructure. By using embedded accelerators with low power consumption, our robots operate efficiently while minimizing energy use and environmental impact. This strategy supports greener AI deployment, aligning with sustainability goals by reducing the overall carbon footprint of robotics and automation.
Responsible AI requires social responsibility
As AI regulations take shape in Europe, similar efforts are emerging for robotics, and we are directly involved. Our HRI team is helping draft key standardization guidelines to ensure robotics is developed responsibly, not just for efficiency but for real societal impact. We have developed a participatory design framework to keep the human at the center of our developments. An example is the SAFE-LY project, where user requirements emerged from a first phase in which participants used a simplified version of the potential solution. Users can then discuss their needs around a tangible concept and start experiencing it in situ, enriching their feedback with specific, valuable insights.
This goes beyond compliance. As general chair of the HRI’25 conference, whose official theme we set to ‘Robots for a More Sustainable World’, and as team leader at PAL Robotics, I’m pushing researchers to focus on projects that drive sustainable robotics development. Inside the company, we advocate for user-centered design: talking to people first, building what they actually need, not what we assume they need. Robotics should empower, not alienate. That’s the future we’re working toward.