Written by Zoe Doulgeri, Fotis Dimeas and Sylvain Calinon on 22 February 2021.

Nowadays there is a trend towards shorter product life cycles, with industries running production for just a few months. Using industrial robots instead of hard automation is one way to achieve this. However, collaborative robots have much more potential to further increase the flexibility of production, and one of the reasons for this is robot programming by demonstration.

What makes collaborative robots different compared to standard industrial robots?

Collaborative robots can safely work alongside humans and benefit from inherent human skills, such as environment perception and decision making. These skills can be fused together with the accuracy and strength provided by the robots to revolutionize production quality, work organization and company automation levels. Moreover, cobots are able to efficiently learn and execute tasks taught by human workers in a straightforward and intuitive manner with the help of robot programming by demonstration.

Traditional robot programming usually takes many hours, whereas a non-expert operator can programme a cobot within minutes by physically guiding it to demonstrate the task. This translates to a reduction in cost and set-up time that is significant for small production cycles and product customization, and hence allows cobots to be employed even by SMEs.

What exactly is robot programming by demonstration?

Programming by Demonstration (PbD) is a method for teaching a robot new behaviours by demonstrating the task. In particular, skills are transferred directly to the robot instead of being explicitly programmed through machine commands. Demonstrating how to achieve the task through examples allows robots to learn skills without each detail being programmed, which can significantly increase efficiency in many applications.

As opposed to simple “record and play”, PbD focuses on maximizing the information extracted from demonstrations, in order to generalize the taught skills to new situations or variations of the same task. In particular, since providing demonstrations can be costly, especially in terms of time, PbD aims to be data-efficient by requiring as few demonstrations as possible.
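One simple way to extract more than a single replayable motion from a handful of demonstrations is to summarize them statistically: the per-timestep variance reveals where the task tolerates deviation and where it must be tracked precisely. The sketch below is a toy illustration of this idea, assuming the demonstrations are already time-aligned; real PbD systems typically use richer models such as Gaussian mixtures or dynamic movement primitives.

```python
import numpy as np

def summarize_demonstrations(demos):
    """Condense a few aligned demonstrations into a mean trajectory
    plus a per-timestep variance.

    demos: list of (T, D) arrays, already time-aligned to equal length T.
    Returns (mean, var), each of shape (T, D).
    """
    stacked = np.stack([np.asarray(d, dtype=float) for d in demos])  # (N, T, D)
    mean = stacked.mean(axis=0)
    var = stacked.var(axis=0)   # high variance = the demos disagree here
    return mean, var
```

Low-variance timesteps can then be treated as hard constraints during execution, while high-variance ones leave the robot free to adapt, which is one sense in which PbD "maximizes the information" in a few demonstrations.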

In this field, the operator often has implicit knowledge of the task to achieve (they know how to do it) but does not usually have the programming skills (or the time) required to reconfigure the robot. Learning the task implies not only a kind of automatic programming of the demonstrated task but also a degree of generalization to variants of that task without the need for further demonstrations.

Why is robot programming by demonstration special?

Traditionally, the programming of an industrial robot is performed either offline in a simulation software or online, using the teaching pendant of the robot. Both cases require expert personnel with knowledge of the robot’s programming language. Depending on the complexity of the task, this process can take many hours or days to complete and fine-tune.

Programming by demonstration is a new, intuitive way to program a robot: it relies on learning from visual or kinesthetic demonstrations performed by a human and involves low development and maintenance costs.

[Figure: Teaching a robot kinesthetically how to place boards on a TV frame]

How can you program the robot by demonstration?

The CoLLaboratE project aims to enable genuine human-robot collaboration with a focus on collaborative assembly tasks and to design a safe production cell that also considers ergonomic and social aspects. In CoLLaboratE, we are developing two main modalities to program a robot by demonstration.

The first one is kinesthetic teaching, where the operator holds the robot arm and guides it through the task without significant effort. In that way, the robot learns the trajectory that it needs to follow and can then execute it autonomously, generalizing to new targets and execution speeds.
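The generalization to new targets can be illustrated with a minimal sketch: the demonstrated trajectory's shape is kept while its endpoints are remapped, and a speed scale adjusts execution time. This is only a toy affine-rescaling example under the assumption of a single demonstrated path; approaches used in practice (e.g. dynamic movement primitives) are considerably more robust.

```python
import numpy as np

def generalize_trajectory(demo, new_start, new_goal, speed_scale=1.0):
    """Affinely remap a demonstrated trajectory to new endpoints.

    demo: (T, D) array of demonstrated positions.
    Returns the adapted positions and a duration factor
    (speed_scale=2.0 means execute in half the demonstrated time).
    """
    demo = np.asarray(demo, dtype=float)
    new_start = np.asarray(new_start, dtype=float)
    new_goal = np.asarray(new_goal, dtype=float)
    start, goal = demo[0], demo[-1]
    span = goal - start
    span[span == 0] = 1.0  # avoid division by zero on flat axes
    progress = (demo - start) / span          # normalized progress per axis
    adapted = new_start + progress * (new_goal - new_start)
    return adapted, 1.0 / speed_scale
```

The remapped trajectory starts exactly at `new_start` and ends at `new_goal`, preserving the demonstrated motion profile in between.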

The second modality uses visual demonstration, where one or more humans are recorded with a 3D camera while performing a task. Tracking the human motion allows the key-frames or trajectories that the robot needs to follow to be automatically extracted and generalized during execution.
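A common heuristic for extracting key-frames from tracked motion is to look for moments where the hand nearly pauses, since pauses often mark contact or alignment events. The sketch below implements that idea on a position sequence; the threshold and spacing parameters are illustrative assumptions, not values from the project.

```python
import numpy as np

def extract_keyframes(positions, speed_threshold=0.05, min_gap=10):
    """Pick key-frames where the tracked motion nearly pauses.

    positions: (T, D) array of tracked hand/end-effector positions,
    sampled at a fixed rate. Returns indices of candidate key-frames,
    keeping at least `min_gap` frames between successive picks.
    """
    positions = np.asarray(positions, dtype=float)
    speeds = np.linalg.norm(np.diff(positions, axis=0), axis=1)
    keyframes = [0]  # always keep the first frame
    for t, s in enumerate(speeds, start=1):
        if s < speed_threshold and t - keyframes[-1] >= min_gap:
            keyframes.append(t)
    if keyframes[-1] != len(positions) - 1:
        keyframes.append(len(positions) - 1)  # always keep the last frame
    return keyframes
```

The selected key-frames can then serve as via-points that the robot interpolates through, which is what allows the extracted plan to be generalized at execution time.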

[Figure: Visual demonstration of placing boards on a TV frame]

Zoe Doulgeri
Aristotle University of Thessaloniki
Fotios Dimeas
Aristotle University of Thessaloniki
Sylvain Calinon
IDIAP Research Institute

This project has received funding from the European Union’s Horizon 2020 research and innovation programme under grant agreement No 820767.

The website reflects only the view of the author(s) and the Commission is not responsible for any use that may be made of the information it contains.

Project Coordinator
Prof. Zoe Doulgeri
Automation & Robotics Lab
Aristotle University of Thessaloniki
Department of Electrical & Computer Engineering
Thessaloniki 54124, Greece
© 2018-2022 All rights reserved.