During an eight-hour shift, an industrial worker inevitably cycles through specific postures. These postures can cause microtrauma to the musculoskeletal system that accumulates and, in turn, can lead to chronic injury. To assess how problematic a posture is, the Rapid Upper Limb Assessment (RULA) scoring system is widely employed in industry.
Even though it is a very quick and efficient method of assessment, RULA is not a biomechanics-based measurement that is
anchored in a physical parameter of the human body. As such, RULA does not give a detailed description of the impact each
posture has on the human joints but rather, an overarching, simplified assessment of a posture. To address this issue, this
paper proposes the use of joint angles and torques as an alternative way of ergonomics evaluation. The cumulative motion
and torque throughout a trial are compared with the average motions and torques for the same task. This allows each joint's kinematic and kinetic performance to be evaluated while still assessing a task "at a glance". To do this, an upper human body model was created and the mass of each segment was assigned. The joint torques and the RULA scores were
calculated for simple range of motion (ROM) tasks, as well as actual tasks from a TV assembly line. The joint angles and
torques series were integrated and then normalized to give the kinematic and kinetic contribution of each joint during a
task as a percentage. This made it possible to examine each joint's strain during each task and to highlight joints that need
to be more closely examined. Results show how the joint angles and torques can identify which joint is moving more and
which one is under the most strain during a task. It was also possible to compare the performance of a task with the average
performance and identify deviations that may imply improper execution. Even though RULA is a very fast and concise
assessment tool, it leaves little room for further analyses. However, the proposed work suggests a richer alternative without
sacrificing the benefit of a quick evaluation. The biggest limitation of this work is that a pool of proper executions needs to
be recorded for each task before individual comparisons can be done.
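The integrate-and-normalize step described above can be sketched as follows; the sampling rate, joint names and torque profiles below are hypothetical placeholders, not data from the paper:

```python
import numpy as np

# Hypothetical torque series for three joints sampled at 100 Hz.
# Rows: time samples, columns: joints (shoulder, elbow, wrist).
dt = 0.01
t = np.arange(0.0, 2.0, dt)
torques = np.column_stack([
    5.0 * np.abs(np.sin(2 * np.pi * t)),   # shoulder: large periodic load
    2.0 * np.abs(np.sin(4 * np.pi * t)),   # elbow: smaller, faster load
    0.5 * np.ones_like(t),                 # wrist: constant low load
])

# Integrate |torque| over the trial (cumulative kinetic load per joint) ...
cumulative = np.abs(torques).sum(axis=0) * dt

# ... then normalize so the per-joint contributions sum to 100 %.
contribution_pct = 100.0 * cumulative / cumulative.sum()

for name, pct in zip(["shoulder", "elbow", "wrist"], contribution_pct):
    print(f"{name}: {pct:.1f} %")
```

The same computation applied to joint-angle series gives the kinematic contribution of each joint.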
Learning from demonstration (LfD) is an intuitive framework allowing non-expert users to easily (re-)program robots. However, the quality and quantity of demonstrations have a great influence on the generalization performance of LfD approaches. In this paper, we introduce a novel active learning framework in order to improve the generalization capabilities of control policies. The proposed approach is based on the epistemic uncertainties of Bayesian Gaussian mixture models (BGMMs). We determine the new query point location by optimizing a closed-form information-density cost based on the quadratic Rényi entropy. Furthermore, to better represent uncertain regions and to avoid the local-optima problem, we propose to approximate the active learning cost with a Gaussian mixture model (GMM). We demonstrate our active learning framework in the context of a reaching task in a cluttered environment, with an illustrative toy example and a real experiment with a Panda robot.
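The quadratic Rényi entropy of a GMM indeed admits a closed form via the Gaussian product identity; a minimal sketch (the two-component mixture below is an illustrative stand-in, not the paper's BGMM posterior):

```python
import numpy as np
from scipy.stats import multivariate_normal

def renyi2_entropy_gmm(weights, means, covs):
    """Closed-form quadratic Renyi entropy H2 = -log int p(x)^2 dx of a GMM.

    Uses the Gaussian product identity:
    int N(x; m_i, S_i) N(x; m_j, S_j) dx = N(m_i; m_j, S_i + S_j).
    """
    total = 0.0
    for wi, mi, Si in zip(weights, means, covs):
        for wj, mj, Sj in zip(weights, means, covs):
            total += wi * wj * multivariate_normal.pdf(mi, mean=mj, cov=Si + Sj)
    return -np.log(total)

# Hypothetical 2-component GMM in 1D.
w = [0.5, 0.5]
mu = [np.array([0.0]), np.array([3.0])]
cov = [np.eye(1), np.eye(1)]
print(renyi2_entropy_gmm(w, mu, cov))
```

For a single unit-variance Gaussian this reduces to the known value 0.5·log(4π), which makes the closed form easy to sanity-check.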
Humans exhibit outstanding learning, planning and adaptation capabilities while performing different types of industrial tasks. Given some knowledge about the task requirements, humans are able to plan their limb motions in anticipation of the execution of specific skills. For example, when an operator needs to drill a hole in a surface, the posture of her limbs varies to guarantee a stable configuration that is compatible with the drilling task specifications, e.g. exerting a force orthogonal to the surface. Therefore, we are interested in analyzing human arm motion patterns in industrial activities. To do so, we build our analysis on the so-called manipulability ellipsoid, which captures a posture-dependent ability to perform motion and exert forces along different task directions. Through a thorough analysis of human movement manipulability, we found that the ellipsoid shape is task dependent and often provides more information about the human motion than classical manipulability indices. Moreover, we show how manipulability patterns can be transferred to robots by learning a probabilistic model and employing a manipulability tracking controller that acts on the task planning and execution according to predefined control hierarchies.
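The velocity manipulability ellipsoid underlying this analysis is computed from the Jacobian as M = J Jᵀ; a minimal planar sketch with made-up link lengths and joint angles:

```python
import numpy as np

def planar_2link_jacobian(q1, q2, l1=0.3, l2=0.25):
    """Position Jacobian of a planar 2-link arm (illustrative link lengths)."""
    s1, c1 = np.sin(q1), np.cos(q1)
    s12, c12 = np.sin(q1 + q2), np.cos(q1 + q2)
    return np.array([
        [-l1 * s1 - l2 * s12, -l2 * s12],
        [ l1 * c1 + l2 * c12,  l2 * c12],
    ])

# Velocity manipulability ellipsoid M = J J^T; its eigenvectors give the
# directions along which motion is easy, its eigenvalues the squared gains.
J = planar_2link_jacobian(0.4, 1.2)
M = J @ J.T
eigvals, eigvecs = np.linalg.eigh(M)
print("ellipsoid axis lengths:", np.sqrt(eigvals))
```

The shape (ratio and orientation of the axes) is what varies with posture, which is the task-dependent information the abstract refers to.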
Whether in factory or household scenarios, rhythmic movements play a crucial role in many daily-life tasks. In this paper we propose a Fourier movement primitive (FMP) representation to learn such skills from human demonstrations. Our approach takes inspiration from the probabilistic movement primitives (ProMP) framework, and is grounded in signal processing theory through the Fourier transform. It works with minimal preprocessing, as it requires neither demonstration alignment nor finding the frequency of the demonstrated signals. Additionally, it does not entail the careful choice and parameterization of basis functions that typically occurs in most forms of movement primitive representations. Indeed, its basis functions are the Fourier series, which can approximate any periodic signal. This makes FMP an excellent choice for tasks that involve a superposition of different frequencies. Finally, FMP shows interesting extrapolation capabilities, as the system has the property of smoothly returning to the demonstrations (e.g. the limit cycle) when faced with a new situation, making it safe for real-world robotic tasks. We validate FMP in several experimental cases with real-world data from polishing and 8-shape drawing tasks, as well as on a 7-DoF, torque-controlled Panda robot.
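The core idea, encoding a periodic demonstration through its Fourier coefficients, can be sketched with a plain FFT; the two-frequency signal is a toy stand-in for a demonstration, and this omits the probabilistic (ProMP-style) treatment of weight distributions:

```python
import numpy as np

# Hypothetical periodic demonstration: superposition of two frequencies.
N = 400
t = np.linspace(0.0, 1.0, N, endpoint=False)     # one period, unit length
signal = np.sin(2 * np.pi * 3 * t) + 0.4 * np.cos(2 * np.pi * 7 * t)

# The Fourier coefficients act as basis-function weights: keep the K
# lowest harmonics and resynthesize the movement primitive.
K = 10
coeffs = np.fft.rfft(signal)
coeffs[K + 1:] = 0.0
reconstruction = np.fft.irfft(coeffs, n=N)

print("max reconstruction error:", np.max(np.abs(reconstruction - signal)))
```

Because the basis is the Fourier series itself, signals mixing several frequencies are captured without choosing kernel centers or widths.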
We propose to formulate the problem of representing a distribution of robot configurations (e.g. joint angles) as that of approximating a product of experts. Our approach uses variational inference, a popular method in Bayesian computation, which has several practical advantages over sampling-based techniques. To represent complex and multimodal distributions of configurations, mixture models are used as the approximate distribution. We show that the problem of approximating a distribution of robot configurations while satisfying multiple objectives arises in a wide range of robotics problems, for which the properties of the proposed approach have relevant consequences. Several applications are discussed, including learning objectives from demonstration, planning, and warm-starting inverse kinematics problems. Simulated experiments are presented with a 7-DoF Panda arm and a 28-DoF Talos humanoid.
A common approach to learning robotic skills is to imitate a demonstrated policy. Due to the compounding of small errors and perturbations, this approach may cause the robot to drift away from the states in which the demonstrations were provided. This requires additional strategies to guarantee that the robot will behave appropriately when facing unknown states. We propose to use a Bayesian method to quantify the action uncertainty at each state. The proposed Bayesian method is simple to set up, computationally efficient, and can adapt to a wide range of problems. Our approach exploits the estimated uncertainty to fuse the imitation policy with additional policies. It is validated on a Panda robot with the imitation of three manipulation tasks in the continuous domain using different control input/state pairs.
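The uncertainty-driven fusion can be illustrated with a precision-weighted (product-of-Gaussians) combination of two scalar action estimates; the numbers are illustrative, and the actual fusion in the paper operates on full control policies:

```python
import numpy as np

def fuse_policies(mu_imit, var_imit, mu_safe, var_safe):
    """Precision-weighted fusion of two Gaussian action estimates.

    When the imitation policy is uncertain (large variance), the fused
    action leans toward the safer fallback policy, and vice versa.
    """
    w_imit = 1.0 / var_imit
    w_safe = 1.0 / var_safe
    mu = (w_imit * mu_imit + w_safe * mu_safe) / (w_imit + w_safe)
    var = 1.0 / (w_imit + w_safe)
    return mu, var

# Certain imitation policy dominates the fused action:
print(fuse_policies(1.0, 0.01, 0.0, 1.0))
# Uncertain imitation policy yields to the fallback:
print(fuse_policies(1.0, 100.0, 0.0, 1.0))
```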
The work presented in this paper proposes a gesture operational model (GOM) that describes how the body parts cooperate to perform a situated professional gesture. The model is built upon several assumptions that determine the dynamic relationship between the body entities within the execution of the human movement. The model is based on the state-space (SS) representation: a simultaneous equation system for all the body entities is generated, composed of a set of first-order differential equations. The coefficients of the equation system are estimated using the maximum likelihood estimation (MLE) method, and its dynamic simulation generates a dynamic tolerance of the spatial variance of the movement over time. The GOM is evaluated through its ability to improve the recognition accuracy of gestural time series that are modeled using continuous hidden Markov models (HMMs) in 5 different use cases.
In this work we study DMP spatial scaling in the Cartesian space. The DMP framework is claimed to have the ability to generalize learnt trajectories to new initial and goal positions while maintaining the desired kinematic pattern. However, we show that the existing formulations present problems in trajectory spatial scaling when used in the Cartesian space for a wide variety of tasks, and we examine their cause. We then propose a novel formulation alleviating these problems. Trajectory generalization analysis is performed by deriving the trajectory tracking dynamics. The proposed formulation is compared with the existing ones through simulations and experiments on a KUKA LWR 4+ robot.
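For reference, a minimal textbook 1-D DMP (not the corrected formulation proposed here) shows where the goal-scaled forcing term enters, which is the term whose Cartesian-space scaling behavior is at issue; all gains are standard illustrative values:

```python
import numpy as np

def dmp_rollout(y0, g, tau=1.0, dt=0.002, alpha=25.0, beta=6.25):
    """Minimal 1-D DMP transformation system (no learned forcing term).

    In the classical formulation the forcing term is scaled by (g - y0);
    this scaling is the source of the Cartesian-space problems discussed
    above, e.g. when g is close to y0 along one axis.
    """
    y, z = y0, 0.0
    x = 1.0  # canonical phase variable
    traj = [y]
    for _ in range(int(3 * tau / dt)):
        x += dt * (-2.0 * x / tau)
        f = 0.0 * (g - y0) * x   # a learned forcing term would appear here
        z += dt * (alpha * (beta * (g - y) - z) + f) / tau
        y += dt * z / tau
        traj.append(y)
    return np.array(traj)

traj = dmp_rollout(y0=0.2, g=1.0)
print("final position:", traj[-1])
```

With the forcing term set to zero the system is a critically damped spring and converges to the goal, which is the baseline behavior any reformulation must preserve.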
With the rise of collaborative robots in industry, this paper proposes a human-robot collaborative gripper for a windshield assembly and visual inspection application. The collaborative interface, which acts as a haptic feedback device, is mounted on the gripper using a deployable mechanism. The kinematics of a reconfigurable mechanism are analyzed to illustrate the advantages of using it as a unit mechanism, and the concept is extended to a parallelogram-based deployable four-bar mechanism. A novel threefold reconfigurable four-bar mechanism is developed by creating adjacent units orthogonally, and the connections between such units are investigated. The proposed mechanism can be deployed and stowed in three directions. Locking of the mechanism is proposed by exploiting mechanism singularity. Kinematic simulations are performed to validate the proposed designs and analyses.
Over the last decades, Learning from Demonstration (LfD) has become a widely accepted solution for the problem of robot programming. According to LfD, the kinematic behavior is “taught” to the robot, based on a set of motion demonstrations performed by the human-teacher. The demonstrations can be captured either via kinesthetic teaching or via external sensors, e.g., a camera. In this work, a controller for providing haptic cues of the robot’s kinematic behavior to the human-teacher is proposed. Guidance is provided in procedures of kinesthetic coaching during inspection and partial modification of encoded motions. The proposed controller is based on an artificial potential field, designed to adjust the intensity of the haptic communication automatically according to the human intentions. The control scheme is proved to be passive with respect to the robot’s velocity, and its effectiveness is experimentally evaluated on a KUKA LWR4+ robotic manipulator.
This paper presents a teaching-by-demonstration method for contact tasks with periodic movement on planar surfaces of unknown pose. To learn the motion on the plane, we utilize frequency oscillators with periodic movement primitives, and we propose modified adaptation rules along with a method for extracting the task's fundamental frequency by automatically discarding near-zero frequency components. Additionally, we utilize an online estimate of the normal vector to the plane, so that the robot is able to quickly adapt to rotated hinged surfaces such as a window or a door. Using the framework of progressive automation for compliance adaptation, the robot transitions seamlessly and bi-directionally between hand guidance and autonomous operation within a few repetitions of the task. As the level of automation increases, a hybrid force/position controller is progressively engaged for the autonomous operation of the robot. Our methodology is verified experimentally on surfaces of different orientations, with the robot being able to adapt to surface orientation perturbations.
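The frequency-adaptation idea can be sketched with a standard adaptive phase oscillator in the style of Righetti et al.; the gain, the 1.5 Hz input and the tail-averaging are illustrative choices, and the paper's modified adaptation rules and near-zero-component handling are not reproduced here:

```python
import numpy as np

def adaptive_frequency_oscillator(signal, dt, omega0=3.0, k=20.0):
    """Adaptive phase oscillator:
    phi' = omega - k * e * sin(phi),  omega' = -k * e * sin(phi),
    where e is the input signal; omega converges to the input frequency.
    """
    phi, omega = 0.0, omega0
    history = []
    for e in signal:
        corr = k * e * np.sin(phi)
        phi += dt * (omega - corr)
        omega += dt * (-corr)
        history.append(omega)
    # Average the tail to smooth out the residual ripple of the estimate.
    return float(np.mean(history[-len(history) // 10:]))

dt = 0.001
t = np.arange(0.0, 60.0, dt)
target_omega = 2 * np.pi * 1.5          # hypothetical 1.5 Hz task frequency
estimate = adaptive_frequency_oscillator(np.cos(target_omega * t), dt)
print("estimated omega:", estimate)
```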
The goal of this work is to build the basis for a smartphone application that provides functionalities for recording human motion data, training machine learning algorithms, and recognizing professional gestures. First, we take advantage of new mobile phone cameras, either infrared or stereoscopic, to record RGB-D data. Then, a bottom-up pose estimation algorithm based on deep learning extracts the 2D human skeleton and recovers the third dimension using the depth. Finally, we use a gesture recognition engine based on K-means and Hidden Markov Models (HMMs). The performance of the machine learning algorithm has been tested with professional gestures using a silk-weaving and a TV-assembly dataset.
This paper deals with the problem of recognizing human hand touch by a robot equipped with large-area tactile sensors covering its body. This problem is relevant in the domain of physical human-robot interaction for discriminating between human and non-human contacts, for triggering and driving cooperative tasks or robot motions, and for ensuring a safe interaction. The underlying assumption in this paper is that voluntary physical interaction tasks involve hand touch over the robot body, and therefore the capability of recognizing hand contacts is a key element in discriminating a purposive human touch from other types of interaction.
The proposed approach is based on a geometric transformation of the tactile data, formed by pressure measurements associated with a non-uniform cloud of 3D points (taxels) spread over a nonlinear manifold corresponding to the robot body, into tactile images representing the contact pressure distribution in 2D. Tactile images can be processed using deep learning algorithms to recognize human hands and to compute the pressure distribution applied by the various hand segments: palm and single fingers.
Experiments performed on a real robot covered with robot skin show the effectiveness of the proposed methodology. Moreover, to evaluate its robustness, various types of failures have been simulated. A further analysis concerning the transferability of the system has been performed, considering contacts occurring on a different sensorized robot part.
During the past few years, probabilistic approaches to imitation learning have earned a relevant place in the robotics literature. One of their most prominent features is that, in addition to extracting a mean trajectory from task demonstrations, they provide a variance estimation. The intuitive meaning of this variance, however, changes across different techniques, indicating either variability or uncertainty. In this paper we leverage kernelized movement primitives (KMP) to provide a new perspective on imitation learning by predicting variability, correlations and uncertainty using a single model. This rich set of information is used in combination with the fusion of optimal controllers to learn robot actions from data, with two main advantages: i) robots become safe when uncertain about their actions, and ii) they are able to leverage partial demonstrations, given as elementary sub-tasks, to optimally perform a higher-level, more complex task. We showcase our approach in a painting task, where a human user and a KUKA robot collaborate to paint a wooden board. The task is divided into two sub-tasks, and we show that the robot becomes compliant (hence safe) outside the training regions and executes the two sub-tasks with optimal gains otherwise.
This paper presents a methodology that enables the exploitation of innovative technologies for collaborative robots through user involvement from the beginning of product development. The methodology will be applied in the EU-funded project CoLLaboratE that focuses on how industrial robots learn to collaborate with human workers in order to perform new manufacturing tasks. The presented methodology is preliminary and will be improved during the project runtime.
This paper is concerned with a methodology for gathering user requirements (URs) to inform a later design process of industrial collaborative robots. The methodology is applied to four use cases from CoLLaboratE, which is a European project focusing on how industrial robots learn to cooperate with human workers in performing new manufacturing tasks. The project follows a User-Centered Design (UCD) approach by involving end-users in the development process. The user requirements are gathered using a mixed methodology, with the purpose of formulating a list of case-specific requirements which can also be generalized. The results presented in this paper consist of the list of user requirements, which will serve as a basis in establishing scenarios and system requirements for the later design of a Human-Robot Collaboration (HRC) system. The described methodology contributes to the field of design of HRC systems by taking a UCD approach. The methodology is aimed at improving the solution performance and users’ acceptance of the technology, by early involvement of the users in the design process. It is also adaptable to other development projects, where users play an essential role in creating Human-Robot Collaboration solutions.
With the rise of collaborative robots, human-robot interaction needs to be as natural as possible. In this work, we present a framework for real-time continuous motion control of a real collaborative robot (cobot) from gestures captured by an RGB camera. Using existing deep learning techniques, we obtain human skeletal pose information in both 2D and 3D. We use it to design a controller that makes the robot mirror the movements of a human arm or hand in real time.
In this work, we propose an augmentation to the Dynamic Movement Primitives (DMP) framework which allows the system to generalize to moving goals without the use of any known or approximate model of the goal's motion. We aim to maintain the demonstrated velocity levels during the execution towards the moving goal, generating motion profiles appropriate for human-robot collaboration. The proposed method employs a modified version of a DMP, learned from a demonstration towards a static goal, with adaptive temporal scaling in order to achieve reaching of the moving goal with the learned kinematic pattern. Only the current position and velocity of the goal are required. The goal's reaching error and its derivative are proved to converge to zero via contraction analysis. The theoretical results are verified by simulations and experiments on a KUKA LWR4+ robot.
A control scheme consisting of a novel coupling of a DMP-based virtual reference with a low-stiffness-controlled robot is proposed. The overall system is proved to achieve superior tracking of a DMP-encoded trajectory and accurate target reaching with respect to the conventional scheme under the presence of constant and periodic disturbances owing to unknown task dynamics and robot model uncertainties. It further preserves the desired compliance under contact forces that may arise in human interventions and collisions. Results in simulations and experiments validate the theoretical findings.
Manual laborers in the industry sector are often subject to critical physical strain that leads to work-related musculoskeletal disorders. Lifting, poor posture and repetitive movements are among the causes of these disorders. In order to prevent them, several rules and methods have been established to identify ergonomic risks that the worker might be exposed to during his/her activities. However, ergonomic assessment through these methods is not a trivial task, and a relevant degree of theoretical knowledge is required on the part of the analyst. Therefore, in this paper, a web-based automatic ergonomic assessment module is proposed. The proposed module uses segment rotations acquired from inertial measurement units for the assessment and provides as feedback RULA scores, color visualisation and limb angles in a simple, intuitive and meaningful way. RULA is one of the most used observational methods for the assessment of occupational risk factors for upper-extremity musculoskeletal disorders. Automating RULA opens an interesting perspective for extracting posture analytics for ergonomic assessment, as well as for including new features that may complement it. In future work, the use of other features and sensors will be investigated for implementation in the module.
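As an illustration of what automating an observational method involves, the upper-arm step of RULA reduces to a small angle-to-score lookup; this simplified sketch covers only that one step with the standard published thresholds, and a complete module combines many such tables:

```python
def rula_upper_arm_score(flexion_deg, shoulder_raised=False, abducted=False):
    """Simplified RULA step 1: upper-arm score from the flexion/extension
    angle in degrees (negative = extension), with the usual +1 adjustments.
    A full RULA assessment combines several such per-region scores
    through lookup tables."""
    a = abs(flexion_deg)
    if a <= 20:                          # near-neutral posture
        score = 1
    elif flexion_deg < -20 or a <= 45:   # extension > 20 deg, or 20-45 flexion
        score = 2
    elif a <= 90:                        # 45-90 deg flexion
        score = 3
    else:                                # > 90 deg flexion
        score = 4
    if shoulder_raised:
        score += 1
    if abducted:
        score += 1
    return score

print(rula_upper_arm_score(15))                  # neutral range
print(rula_upper_arm_score(70, abducted=True))   # raised and abducted arm
```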
This work presents a statistical analysis of professional gestures from household appliance manufacturing. The goal is to investigate the hypothesis that some body segments are more involved than others in professional gestures and thus present a higher ergonomic risk. The gestures were recorded with a full-body Inertial Measurement Unit (IMU) suit and represented with rotations of each segment. Data dimensions have been reduced with principal component analysis (PCA), permitting us to reveal hidden correlations between the body segments and to extract the ones with the highest variance. This work aims at detecting, among numerous upper body segments, which ones are overused and, consequently, what minimum number of segments is sufficient to represent our dataset for ergonomic analysis. To validate the results, a recognition method based on Hidden Markov Models (HMMs) has been used and trained only with the segments from the PCA. A recognition accuracy of 95.71% was achieved, confirming this hypothesis.
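The PCA step, reducing segment-rotation data and ranking segments by explained variance, can be sketched as follows; the synthetic dataset with two dominant segments is a stand-in for the IMU recordings:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dataset: 500 frames x 8 body-segment rotation features,
# where two segments drive most of the motion.
n = 500
latent = rng.normal(size=(n, 2))
mixing = np.zeros((2, 8))
mixing[0, 0], mixing[1, 1] = 3.0, 2.0         # segments 0 and 1 dominate
data = latent @ mixing + 0.1 * rng.normal(size=(n, 8))

# PCA via SVD of the centered data matrix.
centered = data - data.mean(axis=0)
_, s, vt = np.linalg.svd(centered, full_matrices=False)
explained = s**2 / np.sum(s**2)

print("variance explained by first two PCs: %.1f%%" % (100 * explained[:2].sum()))
print("dominant segment of PC1:", np.argmax(np.abs(vt[0])))
```

The loading vectors (rows of `vt`) indicate which segments carry each component, which is how the overused segments are identified.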
Currently, biomechanics analyses of the upper human body are mostly kinematic, i.e., they are concerned with the positions, velocities, and accelerations of the joints of the human body, with little consideration of the forces required to produce them. Though kinetic analysis can give insight into the torques required by the muscles to generate motion, and therefore provide more information regarding human movements, it is generally used in a relatively small scope (e.g. one joint or the contact forces the hand applies). The problem is that in order to calculate the joint torques on an articulated body, such as the human arm, the correct shape and weight must be measured. For robot manipulators, this is done by the manufacturer during the design phase; however, on the human arm, direct measurement of the volume and the weight is very difficult and extremely impractical. Methods for indirect estimation of those parameters have been proposed, such as the use of medical imaging or standardized scaling factors (SF). However, there is always a trade-off between accuracy and practicality. This paper uses computer vision (CV) to extract the shape of each body segment and find the inertia parameters. The joint torques are calculated using those parameters and compared to joint torques calculated using SF to establish the inertia properties. The purpose here is to examine a practical method for real-time joint torque calculation that can be personalized and accurate.
Fast deployment of robot tasks requires appropriate tools that enable efficient reuse of existing robot control policies. Learning from Demonstration (LfD) is a popular tool for the intuitive generation of robot policies, but the question of how to adapt existing policies has not been properly addressed yet. In this work, we propose an incremental LfD framework that efficiently solves this issue. It has been implemented and tested on a number of popular collaborative robots, including the Franka Emika Panda, Universal Robots UR10, and KUKA LWR 4.
An assembly task is in many cases just a reverse execution of the corresponding disassembly task. During the assembly, the object being assembled passes consecutively from state to state until completed, and the set of possible movements becomes more and more constrained. Based on the observation that autonomous learning of physically constrained tasks can be advantageous, we use information obtained during the learning of disassembly in assembly. For autonomous learning of a disassembly policy we propose to use hierarchical reinforcement learning, where learning is decomposed into high-level decision-making and an underlying lower-level intelligent compliant controller, which exploits the natural motion in a constrained environment. During the reverse execution of the disassembly policy, the motion is further optimized by means of an iterative learning controller. The proposed approach was verified on two challenging tasks: a maze learning problem and autonomous learning of inserting a car bulb into its casing.
In this paper, we consider the problem of how to exploit task-space motion for lower-priority tasks when the end-effector motion allows some deviation from the primary task. With common redundancy resolution methods, self-motion is only possible in the null space. Therefore, we propose a novel combination of controllers in two spaces: in the task space and in the reduced task space, where the DOFs corresponding to spatial directions allowing the deviations are excluded. The motion generated by the controller in the reduced task space is mapped into the main task, and by properly selecting the controller parameters the resulting motion does not violate the constraints. To demonstrate the effectiveness of the proposed control, we show simulation examples where motion within the constraint region is used to avoid joint limits.
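The classical null-space projection that restricts the secondary motion, which the reduced-task-space controller is designed to relax, can be sketched as follows (the Jacobian and velocities are arbitrary illustrative numbers):

```python
import numpy as np

def redundancy_resolution(J, xdot, qdot_secondary):
    """Classical resolved-velocity control with null-space projection:
    qdot = J# xdot + (I - J# J) qdot0, so the secondary motion cannot
    disturb the primary task at all. Reduced-task-space control instead
    permits deviations along explicitly tolerated task directions."""
    J_pinv = np.linalg.pinv(J)
    N = np.eye(J.shape[1]) - J_pinv @ J   # null-space projector
    return J_pinv @ xdot + N @ qdot_secondary

# 2-D task, 4-DoF arm (hypothetical Jacobian).
J = np.array([[1.0, 0.5, 0.2, 0.1],
              [0.0, 1.0, 0.3, 0.2]])
xdot = np.array([0.1, -0.2])
qdot = redundancy_resolution(J, xdot, np.array([1.0, 0.0, 0.0, 0.0]))
print("resulting task velocity:", J @ qdot)
```

Note that the resulting task velocity equals the commanded one exactly, illustrating why null-space-only self-motion cannot exploit tolerated task-space deviations.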
In this paper, we propose a novel unified framework for virtual guides. The human–robot interaction is based on a virtual robot, which is controlled by admittance control. The unified framework combines virtual guides, control of the dynamic behavior, and path tracking. Different virtual guides and active constraints can be realized by using dead-zones in the position part of the admittance controller. The proposed algorithm can act in a changing task space and allows selection of the task space and redundant degrees-of-freedom during the task execution. The admittance control algorithm can be implemented either at the velocity or at the acceleration level. The proposed framework has been validated by an experiment on a KUKA LWR robot performing the Buzz-Wire task.
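The dead-zone mechanism can be illustrated on a 1-DoF admittance model; the gains and corridor width are made-up values, not those used on the KUKA LWR:

```python
import numpy as np

def deadzone(x, width):
    """Zero inside the dead-zone, shifted magnitude outside it."""
    return np.sign(x) * np.maximum(np.abs(x) - width, 0.0)

def admittance_step(pos, vel, force, pos_ref, dt,
                    mass=1.0, damping=20.0, stiffness=100.0, zone=0.05):
    """One Euler step of a 1-DoF admittance model
    m*a + d*v + k*deadzone(pos - pos_ref) = force.
    The dead-zone in the position term leaves the virtual robot free
    within +/- zone of the reference, a simple virtual-guide corridor."""
    acc = (force - damping * vel
           - stiffness * deadzone(pos - pos_ref, zone)) / mass
    vel = vel + dt * acc
    pos = pos + dt * vel
    return pos, vel

# Push with a constant force: the virtual robot moves freely inside the
# corridor and is held back by the guide once it leaves it.
pos, vel = 0.0, 0.0
for _ in range(5000):
    pos, vel = admittance_step(pos, vel, force=2.0, pos_ref=0.0, dt=0.001)
print("steady-state position:", pos)
```

At steady state the spring term balances the force only outside the corridor, so the position settles just beyond the dead-zone edge.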
Dynamic movement primitives (DMP) are an efficient way of learning and reproducing complex robot behaviors. A singularity-free DMP formulation for orientation in the Cartesian space was proposed by Ude et al. in 2014 and has been largely adopted by the research community. In this work, we demonstrate the undesired oscillatory behavior that may arise when controlling the robot's orientation with this formulation, producing a motion pattern highly deviant from the desired one, and we highlight its source. A correct formulation is then proposed that alleviates such problems while guaranteeing generation of orientation parameters that lie in SO(3). We further show that all aspects and advantages of DMP, including ease of learning, temporal and spatial scaling and the ability to include coupling terms, are maintained in the proposed formulation. Simulations and experiments with robot control in SO(3) are performed to demonstrate the performance of the proposed formulation and compare it with the previously adopted one.
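A key ingredient of singularity-free orientation DMPs is the quaternion logarithmic map and its inverse, which express orientation errors in R^3 while keeping the motion on SO(3); a minimal sketch (conventions vary across papers; this uses the w-first, 2*atan2 form):

```python
import numpy as np

def quat_log(q):
    """Logarithmic map of a unit quaternion q = (w, x, y, z) to R^3.
    Orientation DMPs encode the goal/current orientation error through
    this map instead of subtracting quaternion components, which keeps
    the generated motion on SO(3)."""
    w, v = q[0], np.asarray(q[1:])
    norm_v = np.linalg.norm(v)
    if norm_v < 1e-12:
        return np.zeros(3)
    return 2.0 * np.arctan2(norm_v, w) * v / norm_v

def quat_exp(r):
    """Inverse map from R^3 back to a unit quaternion."""
    angle = np.linalg.norm(r)
    if angle < 1e-12:
        return np.array([1.0, 0.0, 0.0, 0.0])
    axis = r / angle
    return np.concatenate([[np.cos(angle / 2.0)], np.sin(angle / 2.0) * axis])

# Round trip: 90-degree rotation about the z-axis.
q = np.array([np.cos(np.pi / 4), 0.0, 0.0, np.sin(np.pi / 4)])
print(quat_exp(quat_log(q)))
```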
In this paper, we present a methodology for gathering user requirements to design industrial collaborative robots. The study takes place within CoLLaboratE, a European project focusing on how industrial robots learn to cooperate with human workers in performing new manufacturing tasks, considering four use cases. The project follows a User-Centered Design approach by involving end-users in the development process. The user requirements will thus be gathered by applying a mixed methodology, with the purpose of formulating a list of requirements which are case-specific but can also be generalized. This methodology is preliminary, and it will be improved during the following months, as the data are collected and analyzed.
This work focuses on the prediction of the human’s motion in a collaborative human-robot object transfer with the aim of assisting the human and minimizing his/her effort. The desired pattern of motion is learned from a human demonstration and is encoded with a DMP (Dynamic Movement Primitive). During the object transfer to unknown targets, a model reference with a DMP-based control input and an EKF-based (Extended Kalman Filter) observer for predicting the target and temporal scaling is used. Global boundedness under the emergence of bounded forces with bounded energy is proved. The object dynamics are assumed known. The validation of the proposed approach is performed through experiments using a Kuka LWR4+ robot equipped with an ATI sensor at its end-effector.