
Date:
2020-09-04

Abstract:

In this work we study DMP spatial scaling in Cartesian space. The DMP framework is claimed to have the ability to generalize learnt trajectories to new initial and goal positions while maintaining the desired kinematic pattern. However, we show that the existing formulations present problems in trajectory spatial scaling when used in Cartesian space for a wide variety of tasks, and we examine their cause. We then propose a novel formulation that alleviates these problems. Trajectory generalization analysis is performed by deriving the trajectory tracking dynamics. The proposed formulation is compared with the existing ones through simulations and experiments on a KUKA LWR 4+ robot.
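
For reference, a minimal sketch of the classical formulation whose spatial scaling the abstract critiques: in the standard DMP the learned forcing term is multiplied by the goal-to-start distance (g - y0), which is exactly the term that misbehaves in Cartesian space. The sketch below is a generic 1-D DMP, not the paper's proposed formulation; all gains are illustrative.

```python
import numpy as np

# Minimal 1-D DMP rollout (classical formulation, not the paper's fix).
# The forcing term is scaled by (g - y0); the abstract argues this kind
# of spatial scaling misbehaves in Cartesian space for many tasks.

def rollout(w, c, h, y0, g, tau=1.0, dt=0.002, alpha=25.0, beta=6.25, ax=2.0):
    y, z, x = y0, 0.0, 1.0          # position, scaled velocity, phase
    Y = [y]
    while x > 1e-3:
        psi = np.exp(-h * (x - c) ** 2)            # RBF activations
        f = (psi @ w) / (psi.sum() + 1e-10) * x    # learned shape term
        zd = alpha * (beta * (g - y) - z) + f * (g - y0)  # goal scaling
        y += dt * z / tau
        z += dt * zd / tau
        x += dt * (-ax * x) / tau                  # canonical system
        Y.append(y)
    return np.array(Y)

# Demo: the same weights replayed to a new goal; the path is rescaled by (g - y0).
c = np.linspace(0.0, 1.0, 20); h = np.full(20, 100.0)
w = np.random.default_rng(0).normal(0, 50, 20)
path = rollout(w, c, h, y0=0.0, g=0.5)
```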


Date:
2020-03-30

Abstract:

The goal of this work is to build the basis for a smartphone application that provides functionalities for recording human motion data, training machine learning algorithms and recognizing professional gestures. First, we take advantage of new mobile phone cameras, either infrared or stereoscopic, to record RGB-D data. Then, a bottom-up pose estimation algorithm based on deep learning extracts the 2D human skeleton and recovers the third dimension using the depth data. Finally, we use a gesture recognition engine based on K-means and Hidden Markov Models (HMMs). The performance of the machine learning algorithm has been tested on professional gestures using a silk-weaving and a TV-assembly dataset.
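
A rough sketch of the recognition stage under stated assumptions: one hidden Markov model per gesture class is trained directly on skeleton feature sequences with Gaussian emissions (standing in for the paper's K-means discretization step), using the third-party hmmlearn package; classification picks the class with the highest log-likelihood.

```python
import numpy as np
from hmmlearn.hmm import GaussianHMM  # pip install hmmlearn

# Train one HMM per gesture class on skeleton feature sequences and
# classify a new sequence by maximum log-likelihood. A simplified
# stand-in for the paper's K-means + HMM engine.

def train_models(sequences_by_class, n_states=5):
    models = {}
    for label, seqs in sequences_by_class.items():
        X = np.vstack(seqs)                      # stacked (T_i, D) sequences
        lengths = [len(s) for s in seqs]
        m = GaussianHMM(n_components=n_states, covariance_type="diag", n_iter=50)
        m.fit(X, lengths)
        models[label] = m
    return models

def classify(models, seq):
    return max(models, key=lambda lbl: models[lbl].score(seq))

# Toy usage with random 12-D pose features (two fake gesture classes).
rng = np.random.default_rng(0)
data = {"weave": [rng.normal(0, 1, (40, 12)) for _ in range(5)],
        "pull":  [rng.normal(2, 1, (40, 12)) for _ in range(5)]}
models = train_models(data)
print(classify(models, rng.normal(2, 1, (40, 12))))  # expected: "pull"
```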


Date:
2020-03-10

Abstract:

This paper deals with the problem of the recognition of human hand touch by a robot equipped with large-area tactile sensors covering its body. This problem is relevant in the domain of physical human-robot interaction for discriminating between human and non-human contacts, for triggering and driving cooperative tasks or robot motions, and for ensuring a safe interaction. The underlying assumption used in this paper is that voluntary physical interaction tasks involve hand touch over the robot body, and therefore the capability of recognizing hand contacts is a key element in discriminating a purposive human touch from other types of interaction.
The proposed approach is based on a geometric transformation of the tactile data, formed by pressure measurements associated with a non-uniform cloud of 3D points (taxels) spread over a nonlinear manifold corresponding to the robot body, into tactile images representing the contact pressure distribution in 2D. Tactile images can be processed using deep learning algorithms to recognize human hands and to compute the pressure distribution applied by the various hand segments: the palm and the single fingers.
Experiments performed on a real robot covered with robot skin show the effectiveness of the proposed methodology. Moreover, to evaluate its robustness, various types of failures have been simulated. A further analysis concerning the transferability of the system has been performed, considering contacts occurring on a different sensorized robot part.
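
The taxel-cloud-to-image step can be pictured with the following sketch: the 3D taxel positions of one skin patch are flattened to 2D (here via a plain PCA projection, which only approximates the paper's geometric transformation of the manifold) and the pressure values are interpolated onto a fixed-size grid with SciPy's griddata.

```python
import numpy as np
from scipy.interpolate import griddata

# Flatten the 3D taxel positions of a skin patch to 2D, then rasterize
# the pressure values into a fixed-size image suitable for a CNN.

def tactile_image(taxels_3d, pressures, res=32):
    # 2D chart of the patch via PCA (assumes a mildly curved patch).
    p = taxels_3d - taxels_3d.mean(axis=0)
    _, _, vt = np.linalg.svd(p, full_matrices=False)
    uv = p @ vt[:2].T                                # (N, 2) chart coordinates
    gx, gy = np.meshgrid(
        np.linspace(uv[:, 0].min(), uv[:, 0].max(), res),
        np.linspace(uv[:, 1].min(), uv[:, 1].max(), res))
    img = griddata(uv, pressures, (gx, gy), method="linear", fill_value=0.0)
    return img  # (res, res) pressure image

# Toy patch: 200 taxels on a gently curved surface with a pressure blob.
rng = np.random.default_rng(1)
xy = rng.uniform(-1, 1, (200, 2))
pts = np.column_stack([xy, 0.1 * (xy ** 2).sum(axis=1)])
press = np.exp(-((xy - 0.3) ** 2).sum(axis=1) / 0.05)
img = tactile_image(pts, press)
```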


Date:
2019-09-30

Abstract:

During the past few years, probabilistic approaches to imitation learning have earned a relevant place in the robotics literature. One of their most prominent features is that, in addition to extracting a mean trajectory from task demonstrations, they provide a variance estimation. The intuitive meaning of this variance, however, changes across different techniques, indicating either variability or uncertainty. In this paper we leverage kernelized movement primitives (KMP) to provide a new perspective on imitation learning by predicting variability, correlations and uncertainty using a single model. This rich set of information is used in combination with the fusion of optimal controllers to learn robot actions from data, with two main advantages: i) robots become safe when uncertain about their actions, and ii) they are able to leverage partial demonstrations, given as elementary sub-tasks, to optimally perform a higher-level, more complex task. We showcase our approach in a painting task, where a human user and a KUKA robot collaborate to paint a wooden board. The task is divided into two sub-tasks and we show that the robot becomes compliant (hence safe) outside the training regions and executes the two sub-tasks with optimal gains otherwise.
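
The uncertainty-to-compliance mechanism can be illustrated with a generic GP-style kernel regressor (not the exact KMP equations): the predictive variance grows away from the demonstrations, and a gain schedule turns high variance into low stiffness.

```python
import numpy as np

# GP-style kernel regression: predictive variance grows away from the
# demonstrations and is mapped to lower feedback gains (a compliant robot).

def rbf(A, B, ell=0.1):
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / ell ** 2)

def predict(Xtr, Ytr, Xq, noise=1e-2):
    K = rbf(Xtr, Xtr) + noise * np.eye(len(Xtr))
    Ks, Kss = rbf(Xq, Xtr), rbf(Xq, Xq)
    Kinv = np.linalg.inv(K)
    mean = Ks @ Kinv @ Ytr
    var = np.diag(Kss - Ks @ Kinv @ Ks.T)
    return mean, var

# Demonstration covers t in [0, 0.6]; queries beyond it get high variance,
# which a gain schedule turns into low stiffness: k = k_max / (1 + c * var).
t = np.linspace(0, 0.6, 30)[:, None]
y = np.sin(2 * np.pi * t)
tq = np.linspace(0, 1.0, 100)[:, None]
mean, var = predict(t, y, tq)
gains = 500.0 / (1.0 + 50.0 * var)   # stiffness drops where uncertain
```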


Date:
2019-11-05

Abstract:

This paper presents a methodology that enables the exploitation of innovative technologies for collaborative robots through user involvement from the beginning of product development. The methodology will be applied in the EU-funded project CoLLaboratE, which focuses on how industrial robots learn to collaborate with human workers in order to perform new manufacturing tasks. The presented methodology is preliminary and will be improved during the project runtime.


Date:
2019-10-15

Abstract:

This paper is concerned with a methodology for gathering user requirements (URs) to inform a later design process of industrial collaborative robots. The methodology is applied to four use cases from CoLLaboratE, which is a European project focusing on how industrial robots learn to cooperate with human workers in performing new manufacturing tasks. The project follows a User-Centered Design (UCD) approach by involving end-users in the development process. The user requirements are gathered using a mixed methodology, with the purpose of formulating a list of case-specific requirements which can also be generalized. The results presented in this paper consist of the list of user requirements, which will serve as a basis for establishing scenarios and system requirements for the later design of a Human-Robot Collaboration (HRC) system. The described methodology contributes to the field of design of HRC systems by taking a UCD approach. The methodology is aimed at improving the solution performance and users’ acceptance of the technology through early involvement of the users in the design process. It is also adaptable to other development projects where users play an essential role in creating Human-Robot Collaboration solutions.


Date:
2019-09-23

Abstract:

With the rise of collaborative robots, human-robot interaction needs to be as natural as possible. In this work, we present a framework for real-time continuous motion control of a real collaborative robot (cobot) from gestures captured by an RGB camera. Using existing deep learning techniques, we obtain human skeletal pose information in both 2D and 3D. We use it to design a controller that makes the robot mirror the movements of a human arm or hand in real time.
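
Conceptually, the control loop reduces to the sketch below, where `robot.ee_position()` and `robot.command_velocity()` are hypothetical stand-ins for the robot interface (the paper does not name its APIs): a proportional, velocity-saturated command drives the cobot toward the scaled human wrist motion.

```python
import numpy as np

# Each camera frame yields a 3D wrist position from the pose estimator;
# a proportional velocity command makes the cobot mirror the human motion.

KP = 2.0          # proportional gain [1/s]
SCALE = 0.8       # human-to-robot workspace scaling
V_MAX = 0.25      # velocity saturation [m/s], a basic safety measure

def mirror_step(robot, wrist_cam, T_cam_robot, anchor):
    # Map the camera-frame wrist point into the robot base frame.
    target = (T_cam_robot[:3, :3] @ wrist_cam + T_cam_robot[:3, 3] - anchor) * SCALE
    err = target - robot.ee_position()
    v = KP * err
    n = np.linalg.norm(v)
    if n > V_MAX:                 # clamp for safe human-robot interaction
        v *= V_MAX / n
    robot.command_velocity(v)     # hypothetical Cartesian velocity interface
```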


Date:
2020-02-04

Abstract:

In this work, we propose an augmentation to the Dynamic Movement Primitives (DMP) framework which allows the system to generalize to moving goals without the use of any known or approximate model of the goal’s motion. We aim to maintain the demonstrated velocity levels during the execution towards the moving goal, generating motion profiles appropriate for human-robot collaboration. The proposed method employs a modified version of a DMP, learned from a demonstration to a static goal, with adaptive temporal scaling in order to reach the moving goal with the learned kinematic pattern. Only the current position and velocity of the goal are required. The goal reaching error and its derivative are proven to converge to zero via contraction analysis. The theoretical results are verified by simulations and experiments on a KUKA LWR4+ robot.
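
A minimal sketch of the idea, assuming only the goal's measured position and velocity are available at each step. Both the goal-velocity feedforward and the tau update below are illustrative heuristics, not the paper's contraction-based design, and the learned forcing term is omitted for brevity.

```python
import numpy as np

# DMP execution toward a moving goal: g and gdot are read every step, and
# the clock tau is adapted so the executed speed stays near the demo level.

def step(y, z, g, gdot, tau, dt, alpha=25.0, beta=6.25):
    zd = alpha * (beta * (g - y) - z)
    y += dt * (z / tau + gdot)      # feedforward on the goal's motion
    z += dt * zd / tau
    return y, z

y, z, tau, dt, v_demo = 0.0, 0.0, 1.0, 0.002, 0.4
for k in range(4000):
    t = k * dt
    g, gdot = 0.8 + 0.1 * np.sin(t), 0.1 * np.cos(t)   # measured goal state
    v = abs(z) / tau
    tau = max(0.2, tau + 0.5 * dt * (v - v_demo))      # keep demo speed level
    y, z = step(y, z, g, gdot, tau, dt)
print(abs(y - g))   # small reaching error after convergence
```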


Date:
2020-02-04

Abstract:

A control scheme consisting of a novel coupling of a DMP-based virtual reference with a low-stiffness-controlled robot is proposed. The overall system is proven to achieve superior tracking of a DMP-encoded trajectory and accurate target reaching with respect to the conventional scheme in the presence of constant and periodic disturbances owing to unknown task dynamics and robot model uncertainties. It further preserves the desired compliance under contact forces that may arise in human interventions and collisions. Results in simulations and experiments validate the theoretical findings.
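
One way to picture the benefit, as a 1-D toy (not the paper's exact coupling): the commanded reference absorbs the robot's tracking error through an integral-like shift, so a constant disturbance no longer leaves a steady-state offset even though the stiffness stays low. All gains are illustrative.

```python
import numpy as np

# DMP virtual reference y_r + integral-like coupling shift: the robot
# reaches the target despite a constant disturbance and low stiffness k.

alpha, beta, tau, dt = 25.0, 6.25, 1.0, 0.002
k, d, ki = 50.0, 15.0, 25.0        # low-stiffness robot, coupling gain
g, y_r, z = 0.5, 0.0, 0.0          # goal and DMP virtual reference state
p, pd, shift = 0.0, 0.0, 0.0       # robot state and integral shift
f_dist = 2.0                       # constant disturbance force

for i in range(8000):
    zd = alpha * (beta * (g - y_r) - z)
    y_r += dt * z / tau
    z += dt * zd / tau
    shift += dt * ki * (y_r - p)   # coupling: reference absorbs the error
    y_c = y_r + shift / k          # commanded equilibrium for the robot
    pdd = k * (y_c - p) - d * pd + f_dist
    pd += dt * pdd
    p += dt * pd

print(abs(g - p))   # ~0 despite the disturbance and the low stiffness
```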


Date:
2019-06-05

Abstract:

Manual laborers in the industry sector are often subject to critical physical strain that leads to work-related musculoskeletal disorders. Lifting, poor posture and repetitive movements are among the causes of these disorders. In order to prevent them, several rules and methods have been established to identify the ergonomic risks that the worker might be exposed to during his/her activities. However, the ergonomic assessment through these methods is not a trivial task, and a relevant degree of theoretical knowledge on the part of the analyst is necessary. Therefore, in this paper a web-based automatic ergonomic assessment module is proposed. The proposed module uses segment rotations acquired from inertial measurement units for the assessment and provides as feedback RULA scores, color visualisation and limb angles in a simple, intuitive and meaningful way. RULA is one of the most used observational methods for the assessment of occupational risk factors for upper-extremity musculoskeletal disorders. By automating RULA, an interesting perspective is opened for extracting posture analytics for ergonomic assessment, as well as for the inclusion of new features that may complement it. In future work, the use of other features and sensors will be investigated for implementation in the module.
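
As an illustration of the kind of mapping such a module automates, here is a simplified scorer for the RULA upper-arm item: thresholds follow the published RULA worksheet, and only a subset of the adjustment flags is included.

```python
# Simplified RULA-style scoring of the upper arm from an IMU-derived
# elevation angle. Adjustments shown: shoulder raised, arm abducted.

def rula_upper_arm(elevation_deg, shoulder_raised=False, abducted=False):
    a = elevation_deg
    if -20.0 <= a <= 20.0:
        score = 1          # near-neutral posture
    elif a < -20.0 or a <= 45.0:
        score = 2          # extension > 20 deg, or flexion 20-45 deg
    elif a <= 90.0:
        score = 3          # flexion 45-90 deg
    else:
        score = 4          # flexion > 90 deg
    score += int(shoulder_raised) + int(abducted)
    return score

assert rula_upper_arm(10) == 1
assert rula_upper_arm(60) == 3
assert rula_upper_arm(120, abducted=True) == 5
```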


Date:
2019-08-25

Abstract:

This work presents a statistical analysis of professional gestures from household appliance manufacturing. The goal is to investigate the hypothesis that some body segments are more involved than others in professional gestures and thus present a higher ergonomic risk. The gestures were recorded with a full-body Inertial Measurement Unit (IMU) suit and represented by the rotations of each segment. Data dimensions have been reduced with principal component analysis (PCA), permitting us to reveal hidden correlations between the body segments and to extract the ones with the highest variance. This work aims at detecting, among numerous upper-body segments, which ones are overused and, consequently, what is the minimum number of segments sufficient to represent our dataset for ergonomic analysis. To validate the results, a recognition method based on Hidden Markov Models (HMMs) was trained using only the segments selected by the PCA. A recognition accuracy of 95.71% was achieved, confirming the hypothesis.
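
A sketch of the dimensionality-reduction step with scikit-learn, using illustrative segment names and synthetic data in place of the IMU recordings: the explained-variance ratios and the loadings of the first component indicate which segments dominate.

```python
import numpy as np
from sklearn.decomposition import PCA

# Rows are time frames, columns are per-segment rotation features.
# The leading-component loadings reveal the most involved body segments.

segments = ["torso", "r_shoulder", "r_upper_arm", "r_forearm", "r_hand",
            "l_shoulder", "l_upper_arm", "l_forearm", "l_hand"]
rng = np.random.default_rng(0)
X = rng.normal(size=(1000, len(segments)))       # stand-in for rotation data
X[:, 2] += 3 * np.sin(np.linspace(0, 20, 1000))  # overused right upper arm

pca = PCA(n_components=3).fit(X)
print(pca.explained_variance_ratio_)             # variance per component
loads = np.abs(pca.components_[0])
print(segments[int(np.argmax(loads))])           # most involved segment
```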


Date:
2019-11-23

Abstract:

Currently, biomechanics analyses of the upper human body are mostly kinematic, i.e., they are concerned with the positions, velocities, and accelerations of the joints of the human body, with little consideration of the forces required to produce them. Though kinetic analysis can give insight into the torques required by the muscles to generate motion, and therefore provide more information regarding human movements, it is generally used in a relatively small scope (e.g. one joint, or the contact forces the hand applies). The problem is that in order to calculate the joint torques of an articulated body, such as the human arm, the correct shape and weight must be measured. For robot manipulators, this is done by the manufacturer during the design phase; for the human arm, however, direct measurement of the volume and the weight is very difficult and extremely impractical. Methods for indirect estimation of those parameters have been proposed, such as the use of medical imaging or standardized scaling factors (SF). However, there is always a trade-off between accuracy and practicality. This paper uses computer vision (CV) to extract the shape of each body segment and find the inertia parameters. The joint torques are calculated using those parameters and compared to joint torques calculated using SF to establish the inertia properties. The purpose here is to examine a practical method for real-time joint torque calculation that can be both personalized and accurate.
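
To see why the segment parameters matter, consider the static gravity torques of a planar two-link arm (a strong simplification of the paper's kinetic model): the shoulder and elbow torques follow directly from the segment masses and centre-of-mass offsets being estimated, so different parameter sets give different torques for the same posture.

```python
import numpy as np

# Static gravity torques of a planar 2-link arm (upper arm, forearm + hand).
G = 9.81

def gravity_torques(q1, q2, m1, m2, l1, c1, c2):
    """q1, q2: shoulder/elbow angles from vertical [rad];
    m1, m2: segment masses [kg]; l1: upper-arm length [m];
    c1, c2: centre-of-mass offsets along each segment [m]."""
    # Horizontal lever arms of each segment COM about each joint.
    tau_elbow = m2 * G * c2 * np.sin(q1 + q2)
    tau_shoulder = (m1 * G * c1 * np.sin(q1)
                    + m2 * G * (l1 * np.sin(q1) + c2 * np.sin(q1 + q2)))
    return tau_shoulder, tau_elbow

# Vision-based vs. scaling-factor parameter sets for the same 90-degree posture
# give visibly different torques (all numbers illustrative).
print(gravity_torques(np.pi / 2, 0.0, m1=2.1, m2=1.7, l1=0.31, c1=0.14, c2=0.19))
print(gravity_torques(np.pi / 2, 0.0, m1=1.9, m2=1.5, l1=0.31, c1=0.13, c2=0.18))
```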


Date:
2020-01-31

Abstract:

Fast deployment of robot tasks requires appropriate tools that enable efficient reuse of existing robot control policies. Learning from Demonstration (LfD) is a popular tool for the intuitive generation of robot policies, but the question of how to adapt existing policies has not been properly addressed yet. In this work, we propose an incremental LfD framework that efficiently solves this issue. It has been implemented and tested on a number of popular collaborative robots, including the Franka Emika Panda, Universal Robots UR10, and KUKA LWR 4.


Date:
2020-01-31

Abstract:

An assembly task is in many cases just a reverse execution of the corresponding disassembly task. During assembly, the object being assembled passes consecutively from state to state until completed, and the set of possible movements becomes more and more constrained. Based on the observation that autonomous learning of physically constrained tasks can be advantageous, we use information obtained during the learning of disassembly in assembly. For autonomous learning of a disassembly policy we propose to use hierarchical reinforcement learning, where learning is decomposed into high-level decision-making and an underlying lower-level intelligent compliant controller, which exploits the natural motion in a constrained environment. During the reverse execution of the disassembly policy, the motion is further optimized by means of an iterative learning controller. The proposed approach was verified on two challenging tasks: a maze learning problem and autonomous learning of inserting a car bulb into its casing.
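
The high-level layer can be pictured as plain tabular Q-learning over discrete states, with the compliant low-level controller abstracted as the transition function; the sketch below uses a toy 4x4 maze and makes no attempt to reproduce the paper's environments or reward shaping.

```python
import numpy as np

# Tabular Q-learning for the high-level layer: states are discrete maze
# cells, actions are calls to a lower-level controller (abstracted here).

n_states, n_actions = 16, 4          # 4x4 maze, moves: up/down/left/right
goal = 15
rng = np.random.default_rng(0)
Q = np.zeros((n_states, n_actions))

def low_level_move(s, a):
    """Stand-in for the compliant controller: blocked moves keep the state,
    mimicking how physical constraints shape the feasible motion."""
    r, c = divmod(s, 4)
    dr, dc = [(-1, 0), (1, 0), (0, -1), (0, 1)][a]
    r2, c2 = min(max(r + dr, 0), 3), min(max(c + dc, 0), 3)
    return r2 * 4 + c2

for ep in range(500):
    s = 0
    for t in range(100):
        a = rng.integers(n_actions) if rng.random() < 0.2 else int(Q[s].argmax())
        s2 = low_level_move(s, a)
        r = 1.0 if s2 == goal else -0.01
        Q[s, a] += 0.1 * (r + 0.95 * Q[s2].max() - Q[s, a])
        s = s2
        if s == goal:
            break
```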


Date:
2020-01-31

Abstract:

In this paper, we consider the problem of how to exploit task-space motion for lower-priority tasks when the end-effector motion allows some deviation from the motion of the primary task. Using common redundancy resolution methods, self-motion is only possible in the null space. Therefore, we propose a novel combination of controllers in two spaces: in the task space and in a reduced task space, where the DOFs corresponding to the spatial directions allowing the deviations are excluded. The motion generated by the controller in the reduced task space is mapped into the main task, and by properly selecting the controller parameters the resulting motion does not violate the constraints. To demonstrate the effectiveness of the proposed control, we show simulation examples where motion in a constraint region is used to avoid joint limits.
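
A sketch of the two ingredients under stated assumptions: a damped pseudoinverse tracks the reduced task (rows selected by an illustrative matrix S), and a joint-limit-avoidance motion is realized in the remaining directions. The exact mapping back into the main task proposed by the paper is not reproduced here.

```python
import numpy as np

# Reduced-task-space control sketch: relaxed Cartesian directions are
# removed from the Jacobian, leaving extra freedom for a secondary motion.

def damped_pinv(J, lam=1e-2):
    return J.T @ np.linalg.inv(J @ J.T + lam ** 2 * np.eye(J.shape[0]))

def qdot_command(J, xdot_des, q, q_mid, S_keep, k_sec=1.0):
    """S_keep selects the task rows that must be tracked exactly; the
    remaining directions may deviate and are exploited for the secondary
    joint-limit-avoidance motion."""
    Jr = S_keep @ J                      # reduced task space Jacobian
    qdot_sec = -k_sec * (q - q_mid)      # push joints toward mid-range
    Jr_pinv = damped_pinv(Jr)
    # N(Jr) is larger than N(J): it also spans the relaxed directions.
    N = np.eye(len(q)) - Jr_pinv @ Jr
    return Jr_pinv @ (S_keep @ xdot_des) + N @ qdot_sec

# Toy 4-DOF example: keep x and y exactly, relax the z row (index 2).
J = np.random.default_rng(2).normal(size=(3, 4))
S = np.array([[1, 0, 0], [0, 1, 0]], dtype=float)
qd = qdot_command(J, np.array([0.1, 0.0, 0.0]),
                  q=np.ones(4), q_mid=np.zeros(4), S_keep=S)
```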


Date:
2020-01-31

Abstract:

In this paper, we propose a novel unified framework for virtual guides. The human-robot interaction is based on a virtual robot, which is controlled by admittance control. The unified framework combines virtual guides, control of the dynamic behavior, and path tracking. Different virtual guides and active constraints can be realized by using dead-zones in the position part of the admittance controller. The proposed algorithm can act in a changing task space and allows selection of the task space and the redundant degrees-of-freedom during task execution. The admittance control algorithm can be implemented either at the velocity or at the acceleration level. The proposed framework has been validated by an experiment on a KUKA LWR robot performing the buzz-wire task.
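
The dead-zone idea can be shown in one dimension, with illustrative admittance parameters: inside the corridor the virtual robot moves freely under the human force, while outside it a restoring stiffness pulls back toward the guide path.

```python
import numpy as np

# 1-D virtual guide via a dead-zone in the position part of an
# admittance controller: free motion inside the corridor, spring outside.

M, D, K, DT = 2.0, 20.0, 200.0, 0.002
corridor = 0.05                        # half-width of the free zone [m]

def dead_zone(e, w):
    return np.sign(e) * max(abs(e) - w, 0.0)

x, xd, x_path = 0.0, 0.0, 0.0          # virtual robot and guide path
for i in range(3000):
    f_human = 6.0                      # steady push off the path [N]
    e = dead_zone(x - x_path, corridor)
    xdd = (f_human - D * xd - K * e) / M   # admittance dynamics
    xd += DT * xdd
    x += DT * xd

print(x)   # settles at corridor + f_human/K = 0.05 + 0.03 = 0.08 m
```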


Date:
2019-11-01

Abstract:

Dynamic movement primitives (DMP) are an efficient way of learning and reproducing complex robot behaviors. A singularity-free DMP formulation for orientation in Cartesian space was proposed by Ude et al. in 2014 and has been largely adopted by the research community. In this work, we demonstrate the undesired oscillatory behavior that may arise when controlling the robot's orientation with this formulation, producing a motion pattern that deviates highly from the desired one, and we highlight its source. A correct formulation is then proposed that alleviates such problems while guaranteeing the generation of orientation parameters that lie in SO(3). We further show that all aspects and advantages of DMP, including ease of learning, temporal and spatial scaling, and the ability to include coupling terms, are maintained in the proposed formulation. Simulations and experiments with robot control in SO(3) are performed to demonstrate the performance of the proposed formulation and to compare it with the previously adopted one.
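
For context, the quaternion machinery that any such orientation DMP builds on is sketched below: the orientation error enters through the quaternion logarithmic map, and the formulations discussed in the paper differ in how this term enters the dynamics, not in these primitives.

```python
import numpy as np

# Quaternion helpers for Cartesian-orientation DMPs: the error between a
# goal orientation Qg and the current Q is expressed via the log map.

def quat_mul(a, b):
    w1, v1, w2, v2 = a[0], a[1:], b[0], b[1:]
    return np.concatenate([[w1 * w2 - v1 @ v2],
                           w1 * v2 + w2 * v1 + np.cross(v1, v2)])

def quat_conj(q):
    return np.concatenate([[q[0]], -q[1:]])

def quat_log(q):
    """Log map: unit quaternion -> rotation vector (axis * angle)."""
    w, v = q[0], q[1:]
    n = np.linalg.norm(v)
    if n < 1e-12:
        return np.zeros(3)
    return 2.0 * np.arctan2(n, w) * v / n

def orientation_error(Qg, Q):
    return quat_log(quat_mul(Qg, quat_conj(Q)))  # tangent-space error

# Example: 90 deg about z gives an error vector of magnitude pi/2.
Qg = np.array([np.cos(np.pi / 4), 0.0, 0.0, np.sin(np.pi / 4)])
Q = np.array([1.0, 0.0, 0.0, 0.0])
print(orientation_error(Qg, Q))   # ~ [0, 0, 1.5708]
```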


Date:
2019-05-24

Abstract:

In this paper, we present a methodology for gathering user requirements for the design of industrial collaborative robots. The study takes place within CoLLaboratE, a European project focusing on how industrial robots learn to cooperate with human workers in performing new manufacturing tasks, considering four use cases. The project follows a User-Centered Design approach by involving end-users in the development process. The user requirements will thus be gathered by applying a mixed methodology, with the purpose of formulating a list of requirements which can be generalized but which are also case-specific. This methodology is preliminary, and it will be improved during the following months, when the data will be collected and analyzed.


Date:
2019-06-24

Abstract:

This work focuses on the prediction of the human's motion in a collaborative human-robot object transfer, with the aim of assisting the human and minimizing his/her effort. The desired pattern of motion is learned from a human demonstration and encoded as a DMP (Dynamic Movement Primitive). During the object transfer to unknown targets, a model reference with a DMP-based control input and an EKF-based (Extended Kalman Filter) observer predicting the target and the temporal scaling is used. Global boundedness under bounded forces with bounded energy is proven. The object dynamics are assumed known. The proposed approach is validated through experiments using a KUKA LWR4+ robot equipped with an ATI sensor at its end-effector.
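
A simplified stand-in for the observer, assuming a linear constant-velocity target model (the paper's EKF additionally estimates the DMP temporal scaling): a standard Kalman filter over the target's position and velocity.

```python
import numpy as np

# Constant-velocity Kalman filter predicting the hand-over target from
# noisy position measurements; state is [position, velocity] in 3D.

DT = 0.01
F = np.block([[np.eye(3), DT * np.eye(3)],
              [np.zeros((3, 3)), np.eye(3)]])       # process model
H = np.hstack([np.eye(3), np.zeros((3, 3))])        # we measure position
Qn = 1e-4 * np.eye(6)
Rn = 1e-3 * np.eye(3)

x = np.zeros(6)
P = np.eye(6)

def kf_step(x, P, z):
    x = F @ x                                       # predict
    P = F @ P @ F.T + Qn
    S = H @ P @ H.T + Rn                            # update
    K = P @ H.T @ np.linalg.inv(S)
    x = x + K @ (z - H @ x)
    P = (np.eye(6) - K @ H) @ P
    return x, P

rng = np.random.default_rng(3)
for k in range(200):                                # target drifting in +x
    z = np.array([0.3 + 0.2 * k * DT, 0.1, 0.5]) + rng.normal(0, 0.02, 3)
    x, P = kf_step(x, P, z)
print(x[:3], x[3:])   # estimated target position and velocity
```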



This project has received funding from the European Union’s Horizon 2020 research and innovation programme under grant agreement No 820767.

The website reflects only the view of the author(s) and the Commission is not responsible for any use that may be made of the information it contains.

Contact Information
Prof. Zoe Doulgeri
Automation & Robotics Lab
Aristotle University of Thessaloniki
Department of Electrical & Computer Engineering
Thessaloniki 54124, Greece
info(at)collaborate-project(dot)eu