2023-05-31
In this work a control scheme is proposed to enforce dynamic obstacle avoidance constraints on the full body of actively compliant robots. We argue that both compliance and accuracy are necessary to build safe collaborative robotic systems; obstacle avoidance alone is usually not enough, due to the reliance on perception systems, which exhibit delays and errors. Our scheme is able to successfully avoid obstacles while remaining compliant throughout the executed task. Therefore, in case of unexpected collisions due to perception system errors, the robot remains safe for humans and its environment. Our approach is validated through experiments with simulated and real obstacles utilizing a 7-DoF KUKA LBR iiwa robotic manipulator.
2020-01-31
In this paper, we propose a novel unified framework for virtual guides. The human–robot interaction is based on a virtual robot, which is controlled by admittance control. The unified framework combines virtual guides, control of the dynamic behavior, and path tracking. Different virtual guides and active constraints can be realized by using dead-zones in the position part of the admittance controller. The proposed algorithm can act in a changing task space and allows selection of the task space and redundant degrees-of-freedom during task execution. The admittance control algorithm can be implemented either on a velocity or on an acceleration level. The proposed framework has been validated by an experiment on a KUKA LWR robot performing the Buzz-Wire task.
2020-01-31
In this paper, we consider the problem of how to exploit task-space motion for lower-priority tasks when the end-effector motion allows some deviation from the primary task. Using common redundancy resolution methods, self-motion is only possible in the null-space. Therefore, we propose a novel combination of controllers in two spaces: the task space and the reduced task space, where the DOFs corresponding to spatial directions that allow deviations are excluded. The motion generated by the controller in the reduced task space is mapped into the main task, and by properly selecting the controller parameters the resulting motion does not violate the constraints. To demonstrate the effectiveness of the proposed control, we show simulation examples where motion in a constraint region is used to avoid joint limits.
2020-01-31
An assembly task is in many cases just a reverse execution of the corresponding disassembly task. During assembly, the object being assembled passes consecutively from state to state until completed, and the set of possible movements becomes more and more constrained. Based on the observation that autonomous learning of physically constrained tasks can be advantageous, we use information obtained during learning of disassembly in assembly. For autonomous learning of a disassembly policy, we propose to use hierarchical reinforcement learning, where learning is decomposed into high-level decision-making and an underlying lower-level intelligent compliant controller, which exploits the natural motion in a constrained environment. During the reverse execution of the disassembly policy, the motion is further optimized by means of an iterative learning controller. The proposed approach was verified on two challenging tasks: a maze learning problem and autonomous learning of inserting a car bulb into its casing.
2020-01-31
Fast deployment of robot tasks requires appropriate tools that enable efficient reuse of existing robot control policies. Learning from Demonstration (LfD) is a popular tool for the intuitive generation of robot policies, but the question of how to adapt existing policies has not been properly addressed yet. In this work, we propose an incremental LfD framework that efficiently solves this issue. It has been implemented and tested on a number of popular collaborative robots, including the Franka Emika Panda, Universal Robots UR10, and KUKA LWR 4.
2022-05-20
To prevent work-related musculoskeletal disorders (WMSD), ergonomists apply manual heuristic methods to determine when the worker is exposed to risk factors. However, these methods require an observer and the results can be subjective. This paper proposes a method to automatically evaluate the ergonomic risk factors when performing a set of postures from the ergonomic assessment worksheet (EAWS). Joint angle motion data have been recorded with a full-body motion capture system. These data modeled the motion patterns of four different risk factors, with the use of hidden Markov models (HMMs). Based on the EAWS, automated scores were assigned by the HMMs and were compared to the scores calculated manually. Because the method proposed here is intrusive and requires expensive equipment, kinematic data from a reduced set of two sensors was also evaluated.
2021-09-01
General Info:
This benchmark provides motion capture (MoCap) files in .bvh form. The recordings were made between May 2019 and January 2020 for the needs of the CoLLaboratE and MINGEI H2020 projects funded by the European Commission. The tasks included are:
- TV assembling
- Airplane component manufacturing
- High ergonomic hazard motions
- Silk-Weaving
- Glassblowing
- Mastic Cultivation
The TV assembly and airplane component manufacturing tasks were recorded in real-world conditions inside the factory during the actual production of the items. The high ergonomic hazard motions were recorded in a controlled lab environment and serve as baseline/prototype motions for ergonomic risk assessment.
The silk-weaving, glassblowing, and mastic cultivation data sets were created, corresponding to movements performed by skilled craftsmen and mastic farmers. These data sets were produced in order to extract the expert's gestural knowledge and analyze their dexterity while doing their crafts.
Naming Convention:
All files in this benchmark follow a strict naming convention to allow for easier parsing by scripts. The names have a total of 12 or 13 characters that convey the following information:
- The first three or four characters label the recording session (e.g., LAB, PLN, GBBC, MCSN, etc.)
- The next three characters label the subject number (e.g., S01, S02, S03, etc.)
- The next three characters label the posture or gesture number (e.g., P01, P02, G01, G02, etc.)
- The final three characters label the repetition number (e.g., R01, R02, R03, etc.)
For example, LABS02P03R01 denotes a lab recording of the second subject, performing the third posture for the first time.
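As an illustration, the convention above can be parsed with a few lines of Python. This sketch is not part of the benchmark itself and assumes only the rules stated above (session labels of 3 or 4 characters, possibly containing an underscore as in "TV_"):

```python
import re

def parse_mocap_name(name):
    """Split a benchmark file name like 'LABS02P03R01' into its fields.

    The session label is 3 or 4 characters, followed by the subject (Sxx),
    posture/gesture (Pxx or Gxx), and repetition (Rxx).
    """
    m = re.fullmatch(r"([A-Z_]{3,4})(S\d{2})([PG]\d{2})(R\d{2})", name)
    if m is None:
        raise ValueError(f"unexpected file name: {name}")
    session, subject, motion, repetition = m.groups()
    return {"session": session, "subject": subject,
            "motion": motion, "repetition": repetition}

print(parse_mocap_name("LABS02P03R01"))
# {'session': 'LAB', 'subject': 'S02', 'motion': 'P03', 'repetition': 'R01'}
```

The regex backtracks automatically, so both 12-character names (3-letter sessions such as LAB) and 13-character names (4-letter sessions such as GBBC) are handled by the same pattern.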
Recording Sessions:
There are six recording sessions in this benchmark: the ergonomic risk motions recorded in the lab (denoted as "LAB"), the construction of an airplane component (denoted as "PLN"), the assembling and packaging of TVs (denoted as "TV*"), silk weaving (denoted as "SW*"), glassblowing (denoted as "GB*"), and mastic cultivation (denoted as "MC*").
The postures are the following:
LAB:
- Standing:
- P01: The subject stays in I-pose
- P02: The subject rotates his/her torso to the left as far as the person can
- P03: The subject laterally bends his/her torso to the left for 6 seconds
- P04: The subject bends more than 20° but less than 60°
- P05: The subject bends more than 20° but less than 60° while rotating and laterally bending the torso to the left
- P06: The subject stretches his/her arms, and bends forward more than 20° but less than 60° while rotating and laterally bending the torso to the left
- P07: The subject bends more than 60°
- P08: The subject bends more than 60° while rotating and laterally bending the torso to the left
- P09: The subject stretches his/her arms, and bends forward more than 60° while rotating and laterally bending the torso to the left
- P10: The subject, while upright, raises the elbows above the shoulder level with the forearms bent 90°
- P11: The subject raises the elbows above the shoulder level with the forearms bent 90° while rotating and laterally bending the torso to the left
- P12: The subject raises the elbows above the shoulder level with the arms stretched while rotating and laterally bending the torso to the left
- P13: The subject, while upright, raises the hands above the head
- P14: The subject raises the hands above the head with the arms stretched while rotating and laterally bending the torso to the left
- Sitting on a chair:
- P15: The subject sits upright
- P16: The subject bends forward more than 60°
- P17: The subject bends forward more than 60° while rotating and laterally bending the torso to the left
- P18: The subject stretches the arms, and bends forward more than 60° while rotating and laterally bending the torso to the left
- P19: The subject raises the hands above the head with arms stretched
- P20: The subject raises the hands above the head with the arms stretched while rotating and laterally bending the torso to the left
- Kneeling:
- P21: The subject stays upright
- P22: The subject rotates the torso to the left as far as he/she can
- P23: The subject laterally bends the torso to the left for 6 seconds
- P24: The subject bends more than 60°
- P25: The subject bends more than 60° while rotating and laterally bending the torso to the left
- P26: The subject stretches the arms, and bends forward more than 60° while rotating and laterally bending the torso to the left
- P27: The subject, while upright, raises the elbows to the shoulder level with the arms stretched
- P28: The subject raises the elbows to the shoulder level with the arms stretched while rotating and laterally bending the torso to the left
The TV assembling tasks are further divided. The subtasks are: packing the TVs on a stack for shipping (denoted as "TVP" for medium-sized TVs and "TVL" for larger TVs), assembling and placing electronic circuit boards on the chassis (denoted as "TVB"), and screwing the boards onto the TV chassis (denoted as "TV_"). Each task comprises a number of postures.
TV Assembling:
- Assembling the board and placing it on the TV chassis (TVB):
- P01: Reaching high, above the shoulder level, to pick one component
- P02: Reaching low, below the knee level, to pick up the second component
- P03: Connecting the components and placing the board on the chassis to be screwed
- Screwing an electrical circuit board on the TV chassis (TV_):
- P01: A screw is placed on a power tool and screwed onto the chassis. The process is repeated four times
- Preparing TVs for Shipping (TVP & TVL):
- P01: Placing TVs on a wooden pallet (bottom level)
- P02: Preparing to wrap the bottom level with a membrane
- P03: Wrapping the bottom level
- P04: Placing TVs on top of the bottom level (second level)
- P05: Placing TVs on top of the second level (third level)
- P06: Wrapping the second level with a plastic membrane
- P07: Wrapping the third level with a plastic membrane
- P08: Placing TVs on top of the third level (fourth level)
- P09: Wrapping the fourth level with a plastic membrane
Riveting of an airplane floater (PLN):
- P01: Rivet with the pneumatic hammer.
- P02: Prepare the pneumatic hammer and grab rivets.
- P03: Place the bucking bar to counteract the incoming rivet.
The tasks recorded for the silk weaving, glassblowing, and mastic cultivation data sets were segmented by gestures (e.g., G01, G02, etc.). The tasks recorded for these three data sets are the following:
Silk weaving (SW*):
- The creation of the punch cards (SWPC).
- Preparation of the beam (SWPB).
- Wrapping of the beam (SWWB).
- Jacquard weaving with small loom (SWSL).
- Jacquard weaving with medium size loom (SWML).
- Jacquard weaving with large loom (SWLL).
Glassblowing (GB*):
- Beak cutting (GBBC).
- Blowing and shaping (GBBS).
- Cervix refining (GBCR).
- Cord laying (GBCL).
- Finish details (GBFD).
- Handle laying (GBHL).
- Transfer to punty (GBTP).
- Leg and foot laying (GBLF).
Mastic Cultivation (MC*):
- Scraping with new tool (MCSN).
- Scraping with old tool (MCSO).
- Sweeping (MCSW).
- Dusting (MCDU).
- Embroidery A (MCEA).
- Embroidery B (MCEB).
- Embroidery with an axe (MCEX).
- Gathering (MCGA).
- Harvesting (MCHA).
- Wiping (MCWI).
- Shifting A (MCSA).
- Shifting B (MCSB).
- Cleaning with the wind (MCCW).
The motion capture files were processed and segmented with a 3D character animation software (MotionBuilder, Autodesk Inc., San Rafael, CA, USA) and exported to Biovision Hierarchy (BVH) files.
2022-03-28
The dataset consists of color images of the fixture with inserted copper sliding rings, which is used to evaluate and validate the process of assembling an object with low tolerances, supported by multi-modal exception strategy learning and ergodic control. This dataset is used to classify the insertion process into three states: OK, NotOK, or NoPart. The dataset consists of two main classes, Valid Insertion and Invalid Insertion, for each of the four insertion slots.
2022-03-17
The dataset consists of color images of different outcomes of the copper ring insertion task, as well as the corresponding Cartesian pose of the robot end-effector and force-torque data measured at the robot wrist. Data is organized into folders, each representing one of four possible insertion slots (1 to 4). In each folder, the data is further split into 13 cases:
- error in the insertion target position ranging from -3 to 3 mm in the x direction,
- error in the insertion target position ranging from -3 to 3 mm in the y direction,
- no positional error.
Multiple attempts were made for each case.
Each entry has a unique date-time tag and comprises five files: an RGB image (.jpg) and four .csv files with the robot reference and measured target pose in Cartesian space (positions and quaternions), the raw force-torque sensor data, and the force-torque data transformed to the tool frame.
The total number of entries is 300.
The experiments were performed with a Franka Emika Panda collaborative robot. For the acquisition of image data, an Intel RealSense D435 RGB-D camera was utilized. Force-torque measurements were made using an ATI Nano25 sensor.
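The per-slot, per-case layout described above can be traversed with a short script. The sketch below groups an entry's five files by their shared date-time tag; all folder and file names in it are hypothetical, since the dataset's exact naming is not specified here:

```python
from pathlib import Path

def collect_entries(root):
    """Group the five files of each entry by their shared date-time tag.

    Assumed (hypothetical) layout: root/slot_<n>/<case>/<tag>.jpg plus
    .csv files whose names start with the same <tag>.
    """
    entries = {}
    for img in Path(root).rglob("*.jpg"):
        tag = img.stem                       # the entry's date-time tag
        csvs = sorted(img.parent.glob(f"{tag}*.csv"))
        entries[tag] = {"image": img, "csv": csvs}
    return entries
```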
2022-03-16
The dataset consists of a set of computer-generated images of the four objects (TV frame, green PCB, yellow PCB, and screwdriver) involved in the TV assembly task of CoLLaboratE. For every frame, a pair of RGB and depth images, the mask for each object, the object poses, and the corresponding camera pose are stored. In total there are 3700 different poses for every object; in addition, 17 different backgrounds were used to augment the RGB image collection, resulting in a total of 62900 RGB images.
Regarding the acquisition of the data, the virtual camera was set to simulate the real camera (an Astra RGB-D camera) initially used in the CoLLaboratE TV assembly task. Thus, the RGB and depth images (640x480) and the corresponding segmentation masks were extracted for all the animation frames (3700 different camera positions x 17 different backgrounds).
2022-03-09
This dataset consists of color and depth videos of human worker motions and the corresponding Cartesian trajectories of the worker's hand, captured using an OptiTrack motion capture system. Each motion sample comprises four files: an RGB video, a depth video, a Cartesian trajectory of the worker's hand, and a label of the motion representing one of four possible goals from 1 to 4. The total number of motion samples is 811.
Structure: MPEG-4 videos of human worker motion and corresponding NumPy array files containing the Cartesian trajectories and the motion labels. The dataset is in a .zip archive; extract it using 7-Zip or similar software. The video files (.avi) can be opened using VLC media player or any other video player that supports the MPEG-4 codec. The .npy files can be loaded using Python (>=3.7) and the NumPy library (>=1.19.2).
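As a minimal sketch of the .npy loading step mentioned above (the file names in the example are placeholders; the archive's actual names may differ):

```python
import numpy as np

def load_sample(traj_path, label_path):
    """Load one motion sample: the Cartesian hand trajectory and its goal label.

    Only the np.load step follows the dataset description; the paths and
    array shapes are assumptions for illustration.
    """
    traj = np.load(traj_path)                            # e.g. shape (T, 3)
    label = int(np.asarray(np.load(label_path)).item())  # goal in {1, 2, 3, 4}
    return traj, label
```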
2020-02-12
In this article, a passive physical human-robot interaction (pHRI) controller is proposed to enhance pHRI performance in terms of precision, cognitive load, and user effort in cases where partial knowledge of the task is available. Partial knowledge refers to a subspace of SE(3) determined by the desired task, generally mixing both position and orientation variables, and is mathematically approximated by parametric expressions. The proposed scheme, which utilizes the notion of virtual constraints and the prescribed performance control methodology, is proven to be passive with respect to the interaction force, while guaranteeing constraint satisfaction in all cases. The control scheme is experimentally validated and compared with a dissipative control scheme utilizing a KUKA LWR4+ robot in a master-slave task; the experiments also include an application to a robotic assembly case.
2020-03-30
The goal of this work is to build the basis for a smartphone application that provides functionalities for recording human motion data, training machine learning algorithms, and recognizing professional gestures. First, we take advantage of new mobile phone cameras, either infrared or stereoscopic, to record RGB-D data. Then, a bottom-up pose estimation algorithm based on deep learning extracts the 2D human skeleton and recovers the third dimension using the depth. Finally, we use a gesture recognition engine based on K-means and hidden Markov models (HMMs). The performance of the machine learning algorithm has been tested on professional gestures using silk-weaving and TV-assembly datasets.
2020-05-31
Assembly tasks performed with a robot often fail due to unforeseen situations, even when the assembly policy has been carefully learned and optimized. This problem is even more present in humanoid robots acting in unstructured environments, where it is not possible to anticipate all factors that might lead to failure of the given task. In this work, we propose a concurrent LfD framework that associates demonstrated exception strategies with the given context. Whenever a failure occurs, the proposed algorithm generalizes past experience with respect to the current context and generates an appropriate policy that resolves the assembly issue. For this purpose, we applied PCA to force/torque data, which generates a low-dimensional descriptor of the current context. The proposed framework was validated in a peg-in-hole (PiH) task using a Franka Emika Panda robot.
2021-06-16
The distinguishing property of Reconfigurable Manufacturing Systems (RMS) is that they can rapidly and efficiently adapt to new production requirements, both in terms of their capacity and their functionalities. For this type of system to achieve the desired efficiency, it should be possible to easily and quickly set up and reconfigure all of its components. This includes the fixturing jigs that are used to hold workpieces firmly in place to enable a robot to carry out the desired production processes.
In this paper, we formulate a constrained nonlinear optimization problem that must be solved to determine an optimal layout of reconfigurable fixtures for a given set of workpieces. The optimization problem takes into account the kinematic limitations of the fixtures, which are built in the shape of Stewart platforms, and the characteristics of the workpieces that need to be fastened into the fixturing system. Experimental results are presented that demonstrate that the automatically computed fixturing system layouts satisfy different constraints typically imposed in production environments.
2021-04-14
This paper addresses the problem of imposing pre-defined performance characteristics (by means of maximum steady-state error and minimum convergence rate) on the output tracking errors for a class of uncertain multi-input multi-output (MIMO) nonlinear systems in the presence of state quantization implemented by uniform-hysteretic quantizers. A low-complexity control design that requires reduced system knowledge and utilizes only quantized measurements of the state is proposed. The desired performance is achieved by assuming knowledge of the step size of the quantizers involved. Simulation results verify the theoretical findings.
2019-10-20
During the past few years, probabilistic approaches to imitation learning have earned a relevant place in the robotics literature. One of their most prominent features is that, in addition to extracting a mean trajectory from task demonstrations, they provide a variance estimation. The intuitive meaning of this variance, however, changes across different techniques, indicating either variability or uncertainty. In this paper we leverage kernelized movement primitives (KMP) to provide a new perspective on imitation learning by predicting variability, correlations and uncertainty using a single model. This rich set of information is used in combination with the fusion of optimal controllers to learn robot actions from data, with two main advantages: i) robots become safe when uncertain about their actions and ii) they are able to leverage partial demonstrations, given as elementary sub-tasks, to optimally perform a higher level, more complex task. We showcase our approach in a painting task, where a human user and a KUKA robot collaborate to paint a wooden board. The task is divided into two sub-tasks and we show that the robot becomes compliant (hence safe) outside the training regions and executes the two sub-tasks with optimal gains otherwise.
2021-09-22
Probability distributions are key components of many learning from demonstration (LfD) approaches, with the spaces chosen to represent tasks playing a central role. Although the robot configuration is defined by its joint angles, end-effector poses are often best explained within several task spaces. In many approaches, distributions within relevant task spaces are learned independently and only combined at the control level. This simplification implies several problems that are addressed in this work. We show that the fusion of models in different task spaces can be expressed as products of experts (PoE), where the probabilities of the models are multiplied and renormalized so that it becomes a proper distribution of joint angles. Multiple experiments are presented to show that learning the different models jointly in the PoE framework significantly improves the quality of the final model. The proposed approach particularly stands out when the robot has to learn hierarchical objectives that arise when a task requires the prioritization of several sub-tasks (e.g. in a humanoid robot, keeping balance has a higher priority than reaching for an object). Since training the model jointly usually relies on contrastive divergence, which requires costly approximations that can affect performance, we propose an alternative strategy using variational inference and mixture model approximations. In particular, we show that the proposed approach can be extended to PoE with a nullspace structure (PoENS), where the model is able to recover secondary tasks that are masked by the resolution of tasks of higher-importance.
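The core fusion step described above, multiplying the experts' probabilities and renormalizing, has a closed form when the experts are Gaussian. The following is a minimal sketch of that step for Gaussian experts over a shared variable (it ignores the task-space mappings, joint training, and nullspace structure of the full method):

```python
import numpy as np

def gaussian_poe(mus, covs):
    """Fuse Gaussian experts by multiplying their densities and renormalizing.

    The product of Gaussians is again Gaussian: its precision is the sum of
    the experts' precisions, and its mean is the precision-weighted average
    of the experts' means.
    """
    precisions = [np.linalg.inv(c) for c in covs]
    prec = sum(precisions)
    cov = np.linalg.inv(prec)
    mean = cov @ sum(p @ m for p, m in zip(precisions, mus))
    return mean, cov
```

For example, two equally confident experts at [0, 0] and [2, 2] fuse to a Gaussian centred at [1, 1] with half the variance of either expert.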
2021-02-23
In the context of learning from demonstration (LfD), trajectory policy representations such as probabilistic movement primitives (ProMPs) allow for rich modeling of demonstrated skills. To reproduce a learned skill with a real robot, a feedback controller is required to cope with perturbations and to react to dynamic changes in the environment. In this letter, we propose a generalized probabilistic control approach that merges the probabilistic modeling of the demonstrated movements and the feedback control action for reproducing the demonstrated behavior. We show that our controller can be easily employed, outperforming both the original controller and a controller with constant feedback gains. Furthermore, we show that the proposed approach is able to solve dynamically changing tasks by modeling the demonstrated behavior as Gaussian mixtures and by introducing context variables. We demonstrate the capability of the approach with experiments in simulation and by teaching a 7-axis Franka Emika Panda robot to drop a ball into a moving box with only a few demonstrations.
2020-10-10
In learning from demonstrations, many generative models of trajectories make simplifying assumptions of independence. Correctness is sacrificed in the name of tractability and speed of the learning phase. The ignored dependencies, which often are the kinematic and dynamic constraints of the system, are then only restored when synthesizing the motion, which introduces possibly heavy distortions. In this work, we propose to use those approximate trajectory distributions as close-to-optimal discriminators in the popular generative adversarial framework to stabilize and accelerate the learning procedure. The two problems of adaptability and robustness are addressed with our method. In order to adapt the motions to varying contexts, we propose to use a product of Gaussian policies defined in several parametrized task spaces. Robustness to perturbations and varying dynamics is ensured with the use of stochastic gradient descent and ensemble methods to learn the stochastic dynamics. Two experiments are performed on a 7-DoF manipulator to validate the approach.
2021-02-03
Humans use their limbs to perform various movements to interact with the external environment. Thanks to the limbs' variable and adaptive stiffness, humans can adapt their movements to unstable external dynamics. The underlying adaptive mechanism has previously been investigated employing a simple planar device perturbed by external 2D force patterns. In this work, we employ a more advanced, compliant robot arm to extend previous work to a more realistic 3D setting. We study the adaptive mechanism and use machine learning to capture the human adaptation behavior. In order to model the human's adaptive stiffness skill, we give human subjects the task of reaching for a target by moving a handle assembled on the end-effector of a compliant robotic arm. The arm is force controlled, and the human is required to navigate the handle inside a non-visible, virtual maze and explore it only through robot force feedback when contacting the maze's virtual walls. By sampling the hand's position and force data, a computational model based on a combination of model predictive control and nonlinear regression is used to predict participants' successful trials. Our study shows that participants selectively increased the stiffness along the axis of uncertainty to compensate for instability caused by a divergent external force field. The learned controller was able to successfully mimic this behavior. When deployed on the robot for the navigation task, the robot arm successfully adapts to the unstable dynamics in the virtual maze, in a similar manner as observed in the participants' adaptation skill.
2021-07-02
In robotics, ergodic control extends the tracking principle by specifying a probability distribution over an area to cover instead of a trajectory to track. The original problem is formulated as a spectral multiscale coverage problem, typically requiring the spatial distribution to be decomposed as a Fourier series. This approach does not scale well to control problems requiring exploration in a search space of more than two dimensions. To address this issue, we propose the use of tensor trains, a recent low-rank tensor decomposition technique from the field of multilinear algebra. The proposed solution is efficient, both computationally and storage-wise, hence making it suitable for online implementation in robotic systems. The approach is applied to a peg-in-hole insertion task requiring full 6-D end-effector poses, implemented with a seven-axis Franka Emika Panda robot. In this experiment, ergodic exploration allows the task to be achieved without requiring the use of force/torque sensors.
2021-03-18
Learning from Demonstration permits non-expert users to easily and intuitively reprogram robots. Among approaches embracing this paradigm, probabilistic movement primitives (ProMPs) are a well-established and widely used method to learn trajectory distributions. However, providing or requesting useful demonstrations is not easy, as quantifying what constitutes a good demonstration in terms of generalization capabilities is not trivial. In this letter, we propose an active learning method for contextual ProMPs for addressing this problem. More specifically, we learn the trajectory distributions using a Bayesian Gaussian mixture model (BGMM) and then leverage the notion of epistemic uncertainties to iteratively choose new context query points for demonstrations. We show that this approach reduces the required number of human demonstrations. We demonstrate the effectiveness of the approach on a pouring task, both in simulation and on a real 7-DoF Franka Emika robot.
2021-04-03
In industry, ergonomists apply heuristic methods to determine workers’ exposure to ergonomic risks; however, current methods are limited to evaluating postures or measuring the duration and frequency of professional tasks. The work described here aims to deepen ergonomic analysis by using joint angles computed from inertial sensors to model the dynamics of professional movements and the collaboration between joints. This work is based on the hypothesis that with these models, it is possible to forecast workers’ posture and identify the joints contributing to the motion, which can later be used for ergonomic risk prevention. The modeling was based on the Gesture Operational Model, which uses autoregressive models to learn the dynamics of the joints by assuming associations between them. Euler angles were used for training to avoid forecasting errors such as bone stretching and invalid skeleton configurations, which commonly occur with models trained with joint positions. The statistical significance of the assumptions of each model was computed to determine the joints most involved in the movements. The forecasting performance of the models was evaluated, and the selection of joints was validated, by achieving a high gesture recognition performance. Finally, a sensitivity analysis was conducted to investigate the response of the system to disturbances and their effect on the posture.
2021-10-21
In this paper, we consider the problem of generating smooth Cartesian paths for robots passing through a sequence of waypoints. For interpolation between waypoints, we propose to use radial basis functions (RBFs). First, we describe RBFs based on Gaussian kernel functions and how the weights are calculated. The path generation also considers boundary conditions for velocities and accelerations. Then we present how the RBF parameters influence the shape of the generated path. The proposed RBF method is compared with paths generated by spline and linear interpolation. The results demonstrate the advantages of the proposed method, which offers a good alternative for generating smooth Cartesian paths.
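The interpolation idea can be sketched in a few lines: one Gaussian kernel is centred at every waypoint, and the weights are obtained by solving a linear system so the path passes exactly through the waypoints. This is a minimal illustration only; the boundary conditions on velocity and acceleration treated in the paper are omitted:

```python
import numpy as np

def rbf_interpolate(t_way, x_way, t_query, sigma=0.5):
    """Interpolate waypoints x_way, given at times t_way, with Gaussian RBFs.

    A minimal sketch of RBF path generation: solve K w = x for the kernel
    weights, then evaluate the weighted kernel sum at the query times.
    """
    t_way = np.asarray(t_way, dtype=float)
    x_way = np.asarray(x_way, dtype=float)
    t_query = np.asarray(t_query, dtype=float)
    # Gaussian kernel matrix between waypoint times
    K = np.exp(-((t_way[:, None] - t_way[None, :]) ** 2) / (2 * sigma**2))
    w = np.linalg.solve(K, x_way)   # one weight per waypoint
    # Kernel evaluations between query times and waypoint times
    Kq = np.exp(-((t_query[:, None] - t_way[None, :]) ** 2) / (2 * sigma**2))
    return Kq @ w
```

Evaluating the result at the waypoint times returns the waypoints themselves; the kernel width sigma controls how the path bends between them, mirroring the parameter study in the paper.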
2021-10-21
In this paper we discuss a methodology for learning human-robot collaboration tasks through human guidance. In the proposed framework, the robot learns the task over multiple repetitions by comparing and adapting the performed trajectories, so that the robot's performance naturally evolves into a collaborative behavior. When comparing the trajectories of two learning cycles, accurate phase determination becomes a problem, since an imprecise phase estimate degrades the precision of the learned collaborative behavior. To solve this issue, we propose a new projection algorithm for measuring the similarity of two trajectories. The proposed algorithm was experimentally verified and compared to dynamic time warping in learning human-robot collaboration tasks with a Franka Emika Panda collaborative robot.
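For context, the dynamic time warping baseline that the proposed projection algorithm is compared against aligns two trajectories by minimizing cumulative cost over admissible warping paths. A textbook O(nm) implementation (not the paper's projection algorithm) looks like this:

```python
import numpy as np

def dtw_distance(a, b):
    """Classic dynamic time warping distance between two 1-D series,
    computed by dynamic programming over an (n+1) x (m+1) cost table."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            # Allowed steps: match, insertion, deletion.
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

t = np.linspace(0, 2 * np.pi, 100)
ref = np.sin(t)
shifted = np.sin(t - 0.3)  # same shape, different phase
# Warping absorbs most of the phase shift, so DTW is below the
# pointwise (unwarped) distance.
print(dtw_distance(ref, shifted) < np.sum(np.abs(ref - shifted)))
```

The imprecision the abstract mentions comes from exactly this warping freedom: DTW can align samples many ways, so the recovered phase is not unique.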
2021-04-15
Traditional robot programming is often not feasible in small-batch production, as it is time-consuming, inefficient, and
expensive. To shorten the time necessary to deploy robot tasks, we need appropriate tools to enable efficient reuse of existing robot control policies. Incremental Learning from Demonstration (iLfD) and reversible Dynamic Movement Primitives (DMP) provide a framework for efficient policy demonstration and adaptation. In this paper, we extend our previously proposed framework with improvements that provide better performance and lower the algorithm’s computational burden. Further, we analyse the learning stability and evaluate the proposed framework with a comprehensive user study. The proposed methods have been evaluated on two popular collaborative robots, Franka Emika Panda and Universal Robot UR10.
2021-09-23
The Industry 4.0 paradigm is boosting the use of mobile robots in industrial applications. These robots must travel in areas shared with humans, obstacles and other vehicles, and are equipped with sensors such as lidars that allow them to perceive nearby obstacles. This information can be exploited to avoid losses of performance. To do so, obstacle avoidance algorithms must adjust the path of the robots while maintaining a certain safety distance from the obstacles. In this work, an iterative obstacle avoidance method for mobile robots is presented. The core idea is to enclose the obstacles in bounding boxes in an iterative way and to use selected corners of those bounding boxes to define the path. The algorithm also decides whether each obstacle is avoided from the left or from the right. The algorithm has been extensively validated in simulation with positive results.
2021-09-23
Automated Guided Vehicles (AGVs) and autonomous robots share their workspace with humans and other manned industrial vehicles. This may not only cause unexpected stops and losses of performance but, more importantly, compromise the safety of people and other vehicles. To prevent AGVs from colliding with people or objects, it is possible to define restricted zones through which they cannot circulate in any case. In this work, an architecture to update the restricted areas of an AGV trajectory is designed. This safety system is based on machine learning techniques; specifically, different clustering methods have been applied, with the clusters shaped as ellipses by a Gaussian mixture model distribution. Three clustering methods are compared on metrics such as wasted space and places not covered by the forbidden zones. Results show that the best performance is obtained with the Gaussian method.
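The ellipse-shaped zones come from the Gaussian components: each component's mean gives the ellipse center and its covariance eigendecomposition gives the axes and orientation. A minimal NumPy sketch of this covariance-to-ellipse step, on synthetic data rather than real AGV detections, could look like:

```python
import numpy as np

def covariance_ellipse(points, n_sigma=2.0):
    """Fit a Gaussian ellipse (center, semi-axes, orientation) to a
    2-D point cloud; n_sigma sets the boundary (illustrative choice)."""
    pts = np.asarray(points, float)
    center = pts.mean(axis=0)
    cov = np.cov(pts.T)
    # Eigenvalues give axis lengths, eigenvectors give orientation.
    evals, evecs = np.linalg.eigh(cov)          # ascending order
    semi_axes = n_sigma * np.sqrt(evals)
    angle = np.arctan2(evecs[1, 1], evecs[0, 1])  # major-axis direction
    return center, semi_axes, angle

rng = np.random.default_rng(0)
# Synthetic detections: elongated cluster around (2, 1).
cloud = rng.normal([2.0, 1.0], [0.3, 1.0], size=(500, 2))
center, axes, angle = covariance_ellipse(cloud)
```

A full GMM additionally assigns points softly to several such ellipses; the per-component step shown here is the same.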
2021-10-18
In this work, the problem of human-robot collaborative object transfer to unknown target poses is addressed. The desired pattern of the end-effector pose trajectory to a known target pose is encoded using DMPs (Dynamic Movement Primitives). For the transportation of the object to new unknown targets, a DMP-based reference model and an EKF (Extended Kalman Filter) for estimating the target pose and time duration of the human's intended motion are proposed. A stability analysis of the overall scheme is provided. Experiments using a Kuka LWR4+ robot equipped with an ATI sensor at its end-effector validate its efficacy with respect to the required human effort and compare it with an admittance control scheme.
2021-05-31
In this work, a novel Dynamic Movement Primitive (DMP) formulation is proposed which supports reversibility, i.e. backwards reproduction of a learned trajectory. Apart from sharing all favourable properties of the original DMP, it also supports decoupled teaching of position and velocity profiles and bidirectional drivability along the encoded path. The original DMP has been extensively used for encoding and reproducing a desired motion pattern in several robotic applications. However, it lacks reversibility, which is a useful and expedient property that can be leveraged in many scenarios. The proposed formulation is analyzed theoretically and its practical usefulness is showcased in an assembly-by-insertion experimental scenario.
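For readers unfamiliar with DMPs, the standard (non-reversible) formulation that this paper extends integrates a goal-attracting second-order system modulated by a phase-gated forcing term. A minimal 1-D rollout with conventional gains (illustrative values, not the paper's reversible variant) is:

```python
import numpy as np

def dmp_rollout(y0, g, T=1.0, dt=0.002, alpha_z=25.0, beta_z=6.25,
                alpha_x=4.0, forcing=lambda x: 0.0):
    """Integrate a standard 1-D discrete DMP from y0 toward goal g.
    `forcing` encodes the learned shape; zero gives a plain reach."""
    tau = T
    y, z, x = y0, 0.0, 1.0        # position, scaled velocity, phase
    ys = []
    for _ in range(int(T / dt)):
        f = forcing(x) * x * (g - y0)           # phase-gated, goal-scaled
        dz = (alpha_z * (beta_z * (g - y) - z) + f) / tau
        dy = z / tau
        dx = -alpha_x * x / tau                  # canonical system
        z += dz * dt; y += dy * dt; x += dx * dt
        ys.append(y)
    return np.array(ys)

traj = dmp_rollout(y0=0.0, g=1.0)
print(abs(traj[-1] - 1.0) < 0.05)  # the rollout converges to the goal
```

Because the phase `x` only decays forward in time, naively integrating this system backwards does not retrace the path, which is the gap the reversible formulation addresses.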
2021-08-23
In this work, the problem of cooperative human-robot manipulation of an object with large inertia is addressed, considering the availability of a kinematically controlled industrial robot. In particular, a variable admittance control scheme is proposed, where the damping is adjusted based on the power transmitted from the human to the robot, with the aim of minimizing the energy injected by the human while also allowing her/him to have control over the task. The proposed approach is evaluated via a human-in-the-loop setup and compared to a generic variable damping state-of-the-art method. The proposed approach is shown to achieve significant reduction of the human’s effort and minimization of unintended overshoots and oscillations, which may deteriorate the user’s feeling of control over the task.
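The idea of adjusting damping from the human-to-robot power transfer can be sketched with a 1-D admittance model. All gains here are illustrative placeholders, and the switching rule is a crude stand-in for the paper's variable admittance law:

```python
def admittance_step(v, F, dt, m=10.0, d_low=5.0, d_high=60.0):
    """One integration step of m*a + d*v = F with power-based damping:
    low damping while the human injects power (F*v > 0), high otherwise
    to suppress overshoots and oscillations. Gains are illustrative."""
    power = F * v
    d = d_low if power > 0 else d_high
    a = (F - d * v) / m
    return v + a * dt, d

v = 0.0
for _ in range(500):               # constant 10 N push for 0.5 s
    v, d = admittance_step(v, 10.0, 0.001)
# While pushing, the system settles into the low-damping mode,
# so the human moves the (virtually) heavy object with little effort.
```

The actual method adapts damping continuously from the transmitted power rather than switching between two values.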
2021-01-29
The CERTH dataset is a single-view dataset comprising 2 actors performing a collaborative task; specifically, it shows the actors assembling an LCD TV. An Orbbec Astra (0.6m-8m) was used to record both RGB (1280 x 720) and depth (640 x 480) data at 30 fps. The final dataset includes 700 frames and can be utilized as a testing dataset, with 1 frame out of every 10 manually annotated for evaluation purposes. Given that the CERTH dataset is single-view, we managed to manually annotate 70% of the total number of joints, whereas the rest are deemed occluded.
2020-03-26
During an eight-hour shift, an industrial worker will inevitably cycle through specific postures. Those postures can cause
microtrauma on the musculoskeletal system that accumulates, which in turn can lead to chronic injury. To assess how
problematic a posture is, the rapid upper limb assessment (RULA) scoring system is widely employed by the industry.
Even though it is a very quick and efficient method of assessment, RULA is not a biomechanics-based measurement that is
anchored in a physical parameter of the human body. As such, RULA does not give a detailed description of the impact each posture has on the human joints, but rather an overarching, simplified assessment of a posture. To address this issue, this
paper proposes the use of joint angles and torques as an alternative way of ergonomics evaluation. The cumulative motion
and torque throughout a trial are compared with the average motions and torques for the same task. This allows the evaluation of each joint's kinematic and kinetic performance while still being able to assess a task "at-a-glance". To do this, an upper human body model was created and a mass was assigned to each segment. The joint torques and the RULA scores were
calculated for simple range of motion (ROM) tasks, as well as actual tasks from a TV assembly line. The joint angles and
torques series were integrated and then normalized to give the kinematic and kinetic contribution of each joint during a
task as a percentage. This made it possible to examine each joint's strain during each task, as well as to highlight joints that need
to be more closely examined. Results show how the joint angles and torques can identify which joint is moving more and
which one is under the most strain during a task. It was also possible to compare the performance of a task with the average
performance and identify deviations that may imply improper execution. Even though the RULA is a very fast and concise
assessment tool, it leaves little room for further analyses. However, the proposed work suggests a richer alternative without
sacrificing the benefit of a quick evaluation. The biggest limitation of this work is that a pool of proper executions needs to
be recorded for each task before individual comparisons can be done.
2020-09-15
Learning from demonstration (LfD) is an intuitive framework allowing non-expert users to easily (re-)program robots. However, the quality and quantity of demonstrations have a great influence on the generalization performance of LfD approaches. In this paper, we introduce a novel active learning framework to improve the generalization capabilities of control policies. The proposed approach is based on the epistemic uncertainties of Bayesian Gaussian mixture models (BGMMs). We determine the location of a new query point by optimizing a closed-form information-density cost based on the quadratic Rényi entropy. Furthermore, to better represent uncertain regions and to avoid the local optima problem, we propose to approximate the active learning cost with a Gaussian mixture model (GMM). We demonstrate our active learning framework in the context of a reaching task in a cluttered environment, with an illustrative toy example and a real experiment with a Panda robot.
2020-09-15
Humans exhibit outstanding learning, planning and adaptation capabilities while performing different types of industrial tasks. Given some knowledge about the task requirements, humans are able to plan their limbs motion in anticipation of the execution of specific skills. For example, when an operator needs to drill a hole on a surface, the posture of her limbs varies to guarantee a stable configuration that is compatible with the drilling task specifications, e.g. exerting a force orthogonal to the surface. Therefore, we are interested in analyzing the human arms motion patterns in industrial activities. To do so, we build our analysis on the so-called manipulability ellipsoid, which captures a posture-dependent ability to perform motion and exert forces along different task directions. Through thorough analysis of the human movement manipulability, we found that the ellipsoid shape is task dependent and often provides more information about the human motion than classical manipulability indices. Moreover, we show how manipulability patterns can be transferred to robots by learning a probabilistic model and employing a manipulability tracking controller that acts on the task planning and execution according to predefined control hierarchies.
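The manipulability ellipsoid underlying this analysis is computed from the Jacobian J: the velocity ellipsoid's axes are the eigenvectors of J J^T and its semi-axis lengths the square roots of the eigenvalues. A minimal sketch on a planar 2-link arm (link lengths and joint angles are illustrative, not from the human-arm study):

```python
import numpy as np

def jacobian_2link(q, l1=1.0, l2=1.0):
    """Analytic Jacobian of a planar 2-link arm."""
    q1, q2 = q
    return np.array([
        [-l1*np.sin(q1) - l2*np.sin(q1+q2), -l2*np.sin(q1+q2)],
        [ l1*np.cos(q1) + l2*np.cos(q1+q2),  l2*np.cos(q1+q2)],
    ])

def manipulability_ellipsoid(J):
    """Semi-axis lengths and directions of the velocity ellipsoid
    defined by xdot^T (J J^T)^-1 xdot = 1."""
    M = J @ J.T
    evals, evecs = np.linalg.eigh(M)
    return np.sqrt(evals), evecs

J = jacobian_2link([0.3, 1.2])
axes, dirs = manipulability_ellipsoid(J)
# The product of the semi-axes equals the classical Yoshikawa
# manipulability index |det J| -- a single number, whereas the
# ellipsoid's shape retains directional information.
w = np.prod(axes)
print(np.isclose(w, abs(np.linalg.det(J))))
```

This contrast (scalar index versus full ellipsoid shape) is exactly the point the abstract makes about classical manipulability indices.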
2020-06-01
Whether in factory or household scenarios, rhythmic movements play a crucial role in many daily-life tasks. In this paper we propose a Fourier movement primitive (FMP) representation to learn such type of skills from human demonstrations. Our approach takes inspiration from the probabilistic movement primitives (ProMP) framework, and is grounded in signal processing theory through the Fourier transform. It works with minimal preprocessing, as it does not require demonstration alignment nor finding the frequency of demonstrated signals. Additionally, it does not entail the careful choice/parameterization of basis functions, that typically occurs in most forms of movement primitive representations. Indeed, its basis functions are the Fourier series, which can approximate any periodic signal. This makes FMP an excellent choice for tasks that involve a superposition of different frequencies. Finally, FMP shows interesting extrapolation capabilities as the system has the property of smoothly returning back to the demonstrations (e.g. the limit cycle) when faced with a new situation, being safe for real-world robotic tasks. We validate FMP in several experimental cases with real-world data from polishing and 8-shape drawing tasks as well as on a 7-DoF, torque-controlled, Panda robot.
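The essence of a Fourier basis representation, encoding a periodic demonstration as weights over sines and cosines fitted by least squares, can be sketched as follows. The period, series order and synthetic signal are illustrative; the actual FMP is probabilistic, in the spirit of ProMP:

```python
import numpy as np

def fourier_basis(t, period, order):
    """Design matrix of a truncated Fourier series: a constant term
    plus cos/sin pairs up to the given order."""
    w = 2 * np.pi / period
    cols = [np.ones_like(t)]
    for k in range(1, order + 1):
        cols += [np.cos(k * w * t), np.sin(k * w * t)]
    return np.stack(cols, axis=1)

t = np.linspace(0, 4 * np.pi, 400)          # two periods of the motion
demo = np.sin(t) + 0.3 * np.cos(3 * t)      # synthetic periodic demonstration
Phi = fourier_basis(t, period=2 * np.pi, order=5)
weights, *_ = np.linalg.lstsq(Phi, demo, rcond=None)
recon = Phi @ weights
# The demonstration lies in the span of the basis, so reconstruction
# is exact up to numerical precision.
print(np.max(np.abs(recon - demo)) < 1e-8)
```

Because the basis functions are fixed by the Fourier series, no per-task choice or parameterization of basis functions is needed, which is the property the abstract highlights.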
2020-09-15
We propose to formulate the problem of representing a distribution of robot configurations (e.g. joint angles) as that of approximating a product of experts. Our approach uses variational inference, a popular method in Bayesian computation, which has several practical advantages over sampling-based techniques. To be able to represent complex and multimodal distributions of configurations, mixture models are used as approximate distribution. We show that the problem of approximating a distribution of robot configurations while satisfying multiple objectives arises in a wide range of problems in robotics, for which the properties of the proposed approach have relevant consequences. Several applications are discussed, including learning objectives from demonstration, planning, and warm-starting inverse kinematics problems. Simulated experiments are presented with a 7-DoF Panda arm and a 28-DoF Talos humanoid.
2019-10-01
A common approach to learn robotic skills is to imitate a demonstrated policy. Due to the compounding of small errors and perturbations, this approach may let the robot leave the states in which the demonstrations were provided. This requires the consideration of additional strategies to guarantee that the robot will behave appropriately when facing unknown states. We propose to use a Bayesian method to quantify the action uncertainty at each state. The proposed Bayesian method is simple to set up, computationally efficient, and can adapt to a wide range of problems. Our approach exploits the estimated uncertainty to fuse the imitation policy with additional policies. It is validated on a Panda robot with the imitation of three manipulation tasks in the continuous domain using different control input/state pairs.
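The fusion step described above, blending the imitation policy with an additional policy according to state-dependent uncertainty, can be illustrated with a toy sketch. The Bayesian uncertainty model itself is abstracted away here; `uncertainty` is a hypothetical stand-in, and the linear blend is one simple fusion rule, not necessarily the paper's:

```python
import numpy as np

def fused_action(state, imitation, fallback, uncertainty):
    """Blend two policies: high epistemic uncertainty shifts weight
    from the imitation policy to a conservative fallback."""
    w = 1.0 / (1.0 + uncertainty(state))
    return w * imitation(state) + (1.0 - w) * fallback(state)

imitation = lambda s: np.array([1.0, 0.0])   # demonstrated action
fallback = lambda s: np.array([0.0, 0.0])    # conservative: stay still

# Near the demonstrations: uncertainty ~ 0, the imitation policy dominates.
near = fused_action(np.zeros(2), imitation, fallback, lambda s: 0.0)
# Far from the demonstrations: high uncertainty, the fallback dominates.
far = fused_action(np.ones(2) * 5, imitation, fallback, lambda s: 9.0)
```

The practical effect is that compounding errors that push the robot into unfamiliar states automatically attenuate the imitated behavior.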
2020-08-13
The work presented in this paper proposes a gesture operational model (GOM) that describes how the body parts cooperate, to perform a situated professional gesture. The model is built upon several assumptions that determine the dynamic relationship between the body entities within the execution of the human movement. The model is based on the state-space (SS) representation, as a simultaneous equation system for all the body entities is generated, composed of a set of first-order differential equations. The coefficients of the equation system are estimated using the maximum likelihood estimation (MLE) method, and its dynamic simulation generates a dynamic tolerance of the spatial variance of the movement over time. The scientific evidence of the GOM is evaluated through its ability to improve the recognition accuracy of gestural time series that are modeled using continuous hidden Markov models (HMMs) in 5 different use cases.
2020-09-04
In this work we study DMP spatial scaling in the Cartesian space. The DMP framework is claimed to have the ability to generalize learnt trajectories to new initial and goal positions while maintaining the desired kinematic pattern. However, we show that the existing formulations present problems in trajectory spatial scaling when used in the Cartesian space for a wide variety of tasks, and we examine their cause. We then propose a novel formulation alleviating these problems. Trajectory generalization analysis is performed by deriving the trajectory tracking dynamics. The proposed formulation is compared with the existing ones through simulations and experiments on a KUKA LWR 4+ robot.
2020-07-18
With the rise of collaborative robots in industry, this paper proposes a human-robot collaborative gripper for a windshield assembly and visual inspection application. The collaborative interface, which acts as a haptic feedback device, is mounted on the gripper using a deployable mechanism. The kinematics of a reconfigurable mechanism are analyzed to illustrate the advantages of using it as a unit mechanism, and the concept is extended to a parallelogram-based deployable four-bar mechanism. A novel threefold reconfigurable four-bar mechanism is developed by creating adjacent units orthogonally, and the connections between such units are investigated. The proposed mechanism can be deployed and stowed in three directions. Locking of the mechanism is achieved by exploiting mechanism singularity. Kinematic simulations are performed to validate the proposed designs and analyses.
2020-10-25
Over the last decades, Learning from Demonstration (LfD) has become a widely accepted solution for the problem of robot programming. According to LfD, the kinematic behavior is "taught" to the robot based on a set of motion demonstrations performed by the human teacher. The demonstrations can be captured either via kinesthetic teaching or via external sensors, e.g., a camera. In this work, a controller for providing haptic cues of the robot's kinematic behavior to the human teacher is proposed. Guidance is provided during kinesthetic coaching for inspection and partial modification of encoded motions. The proposed controller is based on an artificial potential field designed to automatically adjust the intensity of the haptic communication according to the human's intentions. The control scheme is proven to be passive with respect to the robot's velocity, and its effectiveness is experimentally evaluated on a KUKA LWR4+ robotic manipulator.
2020-10-26
This paper presents a teaching by demonstration method for contact tasks with periodic movement on planar
surfaces of unknown pose. To learn the motion on the plane, we utilize frequency oscillators with periodic movement primitives, and we propose modified adaptation rules along with a method for extracting the task's fundamental frequency by automatically discarding near-zero frequency components. Additionally, we utilize an online estimate of the normal vector to the plane, so that the robot is able to quickly adapt to rotated hinged surfaces such as a window or a door. Using the framework of progressive automation for compliance adaptation, the robot transitions seamlessly and bi-directionally between hand guidance and autonomous operation within a few repetitions of the task. While the level of automation increases, a hybrid force/position controller is progressively engaged for the autonomous operation of the robot. Our methodology is verified experimentally on surfaces of different orientations, with the robot being able to adapt to surface orientation perturbations.
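The fundamental-frequency extraction with rejection of near-zero components can be illustrated with a simple FFT-based sketch. The cutoff, sampling rate and synthetic drift-plus-oscillation signal are illustrative assumptions, not the paper's adaptive-oscillator implementation:

```python
import numpy as np

def fundamental_frequency(signal, dt, f_min=0.5):
    """Estimate the dominant frequency of a demonstrated signal,
    discarding near-zero frequency content (DC offset and slow drift)
    below the illustrative cutoff f_min [Hz]."""
    freqs = np.fft.rfftfreq(len(signal), dt)
    mag = np.abs(np.fft.rfft(signal - np.mean(signal)))
    mag[freqs < f_min] = 0.0          # automatically discard near-zero bins
    return freqs[np.argmax(mag)]

dt = 0.01
t = np.arange(0, 10, dt)
# Hand-guided demonstration: slow drift across the surface plus a
# 1.5 Hz periodic wiping motion.
demo = 0.5 * t + np.sin(2 * np.pi * 1.5 * t)
f0 = fundamental_frequency(demo, dt)
print(abs(f0 - 1.5) < 0.05)
```

Without the cutoff, the drift's low-frequency energy could be mistaken for the task frequency, which is why discarding near-zero components matters.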
2020-03-30
The goal of this work is to build the basis for a smartphone application that provides functionalities for recording human motion data, training machine learning algorithms and recognizing professional gestures. First, we take advantage of the new mobile phone cameras, either infrared or stereoscopic, to record RGB-D data. Then, a bottom-up pose estimation algorithm based on deep learning extracts the 2D human skeleton and recovers the third dimension using the depth data. Finally, we use a gesture recognition engine based on K-means and Hidden Markov Models (HMMs). The performance of the machine learning algorithm has been tested on professional gestures using a silk-weaving and a TV-assembly dataset.
2020-03-10
This paper deals with the problem of the recognition of human hand touch by a robot equipped with large area tactile
sensors covering its body. This problem is relevant in the domain of physical human-robot interaction, for discriminating between human and non-human contacts, for triggering and driving cooperative tasks or robot motions, and for ensuring a
safe interaction. The underlying assumption, used in this paper, is that voluntary physical interaction tasks involve hand
touch over the robot body, and therefore the capability of recognizing hand contacts is a key element to discriminate a
purposive human touch from other types of interaction.
The proposed approach is based on a geometric transformation of the tactile data, formed by pressure measurements
associated with a non-uniform cloud of 3D points (taxels) spread over a non-linear manifold corresponding to the robot
body, into tactile images representing the contact pressure distribution in 2D. Tactile images can be processed using
deep learning algorithms to recognize human hands and to compute the pressure distribution applied by the various
hand segments: palm and single fingers.
Experimental results, performed on a real robot covered with robot skin, show the effectiveness of the proposed
methodology. Moreover, to evaluate its robustness, various types of failures have been simulated. A further analysis
concerning the transferability of the system has been performed, considering contacts occurring on a different
sensorized robot part.
2019-11-05
This paper presents a methodology that enables the exploitation of innovative technologies for collaborative robots through user involvement from the beginning of product development. The methodology will be applied in the EU-funded project CoLLaboratE that focuses on how industrial robots learn to collaborate with human workers in order to perform new manufacturing tasks. The presented methodology is preliminary and will be improved during the project runtime.
2019-10-15
This paper is concerned with a methodology for gathering user requirements (URs) to inform the later design process of industrial collaborative robots. The methodology is applied to four use cases from CoLLaboratE, a European project focusing on how industrial robots learn to cooperate with human workers in performing new manufacturing tasks. The project follows a User-Centered Design (UCD) approach by involving end-users in the development process. The user requirements are gathered using a mixed methodology, with the purpose of formulating a list of case-specific requirements which can also be generalized. The results presented in this paper consist of the list of user requirements, which will serve as a basis for establishing scenarios and system requirements for the later design of a Human-Robot Collaboration (HRC) system. The described methodology contributes to the field of HRC system design by taking a UCD approach. It aims at improving solution performance and users' acceptance of the technology through early involvement of the users in the design process. It can also be adapted to other development projects where users play an essential role in creating Human-Robot Collaboration solutions.
2019-09-23
With the rise of collaborative robots, human-robot interaction needs to be as natural as possible. In this work, we present a framework for real-time continuous motion control of a real collaborative robot (cobot) from gestures captured by an RGB camera. Using existing deep learning techniques, we obtain human skeletal pose information in both 2D and 3D, and we use it to design a controller that makes the robot mirror the movements of a human arm or hand in real time.
2020-02-04
In this work, we propose an augmentation to the Dynamic Movement Primitives (DMP) framework which allows the system to generalize to moving goals without the use of any known or approximation model for estimating the goal’s motion. We aim to maintain the demonstrated velocity levels during the execution to the moving goal, generating motion profiles appropriate for human robot collaboration. The proposed method employs a modified version of a DMP, learned by a demonstration to a static goal, with adaptive temporal scaling in order to achieve reaching of the moving goal with the learned kinematic pattern. Only the current position and
velocity of the goal are required. The goal-reaching error and its derivative are proven to converge to zero via contraction analysis. The theoretical results are verified by simulations and
experiments on a KUKA LWR4+ robot.
2020-02-04
A control scheme consisting of a novel coupling of a DMP based virtual reference with a low stiffness controlled robot is proposed. The overall system is proved to achieve superior tracking of a DMP encoded trajectory and accurate target reaching with respect to the conventional scheme under the presence of constant and periodic disturbances owing to unknown task dynamics and robot model uncertainties. It further preserves the desired compliance under contact forces that may arise in human interventions and collisions. Results in simulations and experiments validate the theoretical findings.
2019-06-05
Manual laborers in the industry sector are often subject to critical physical strain that leads to work-related musculoskeletal disorders. Lifting, poor posture and repetitive movements are among the causes of these disorders. In order to prevent them, several rules and methods have been established to identify the ergonomic risks that workers might be exposed to during their activities. However, ergonomic assessment through these methods is not a trivial task, and a considerable degree of theoretical knowledge on the part of the analyst is necessary. Therefore, in this paper, a web-based automatic ergonomic assessment module is proposed. The proposed module uses segment rotations acquired from inertial measurement units for the assessment and provides as feedback RULA scores, color visualisations and limb angles in a simple, intuitive and meaningful way. RULA is one of the most used observational methods for the assessment of occupational risk factors for upper-extremity musculoskeletal disorders. Automating RULA opens an interesting perspective for extracting posture analytics for ergonomic assessment, as well as for adding new features that may complement it. For future work, the use of other features and sensors will be investigated for implementation in the module.
2019-08-25
This work presents a statistical analysis of professional gestures from household appliance manufacturing. The goal is to investigate the hypothesis that some body segments are more involved than others in professional gestures and thus present higher ergonomic risk. The gestures were recorded with a full-body Inertial Measurement Unit (IMU) suit and represented by the rotations of each segment. Data dimensionality was reduced with principal component analysis (PCA), permitting us to reveal hidden correlations between the body segments and to extract the ones with the highest variance. This work aims at detecting, among numerous upper-body segments, which ones are overused and, consequently, the minimum number of segments that is sufficient to represent our dataset for ergonomic analysis. To validate the results, a recognition method based on Hidden Markov Models (HMMs) was trained only with the segments selected by the PCA. A recognition accuracy of 95.71% was achieved, confirming the hypothesis.
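The PCA-based segment selection can be sketched with synthetic joint-rotation time series (the data, segment count and dominant-segment index below are made up for illustration; the study used real IMU recordings):

```python
import numpy as np

rng = np.random.default_rng(1)
n_frames, n_segments = 1000, 6
# Columns = body segments; all carry small noise, but segment 2
# additionally performs the actual motion.
X = rng.normal(0, 0.1, size=(n_frames, n_segments))
X[:, 2] += np.sin(np.linspace(0, 20, n_frames))

# PCA via SVD of the centered data matrix.
Xc = X - X.mean(axis=0)
_, s, Vt = np.linalg.svd(Xc, full_matrices=False)
explained = s**2 / np.sum(s**2)    # variance ratio per principal component
loadings = np.abs(Vt[0])           # how much each segment loads on PC1
print(int(np.argmax(loadings)))    # identifies segment 2 as the dominant one
```

Segments with large loadings on the leading components are the "overused" ones; keeping only those columns gives the reduced representation used to retrain the HMM recognizer.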
2019-11-23
Currently, biomechanics analyses of the upper human body are mostly kinematic, i.e., they are concerned with the positions, velocities, and accelerations of the joints of the human body, with little consideration of the forces required to produce them. Though kinetic analysis can give insight into the torques required by the muscles to generate motion, and therefore provide more information regarding human movements, it is generally used in a relatively small scope (e.g. one joint, or the contact forces the hand applies). The problem is that in order to calculate the joint torques on an articulated body, such as the human arm, the correct shape and weight must be measured. For robot manipulators, this is done by the manufacturer during the design phase; however, for the human arm, direct measurement of the volume and the weight is very difficult and extremely impractical. Methods for indirect estimation of those parameters have been proposed, such as the use of medical imaging or standardized scaling factors (SF). However, there is always a trade-off between accuracy and practicality. This paper uses computer vision (CV) to extract the shape of each body segment and find the inertia parameters. The joint torques are calculated using those parameters and compared to joint torques calculated using SF to establish the inertia properties. The purpose here is to examine a practical method for real-time joint torque calculation that can be personalized and accurate.
2019-11-01
Dynamic movement primitives (DMP) are an efficient way for learning and reproducing complex robot behaviors. A singularity-free DMP formulation for orientation in the Cartesian space was proposed by Ude et al. in 2014 and has been largely adopted by the research community. In this work, we demonstrate the undesired oscillatory behavior that may arise when controlling the robot's orientation with this formulation, producing a motion pattern highly deviant from the desired one, and we highlight its source. A correct formulation is then proposed that alleviates such problems while guaranteeing generation of orientation parameters that lie in SO(3). We further show that all aspects and advantages of DMP, including ease of learning, temporal and spatial scaling, and the ability to include coupling terms, are maintained in the proposed formulation. Simulations and experiments with robot control in SO(3) are performed to demonstrate the performance of the proposed formulation and compare it with the previously adopted one.
2019-05-24
In this paper, we present a methodology for gathering user requirements to design industrial collaborative robots. The study takes place within CoLLaboratE, a European project focusing on how industrial robots learn to cooperate with human workers in performing new manufacturing tasks, considering four use cases. The project follows a User-Centered Design approach by involving end-users in the development process. The user requirements will thus be gathered using a mixed methodology, with the purpose of formulating a list of requirements which can be generalized but is also case-specific. This methodology is preliminary and will be improved during the following months, as the data are collected and analyzed.
2019-06-24
This work focuses on the prediction of the human's motion in a collaborative human-robot object transfer, with the aim of assisting the human and minimizing his/her effort. The desired pattern of motion is learned from a human demonstration and is encoded with a DMP (Dynamic Movement Primitive). During the object transfer to unknown targets, a model reference with a DMP-based control input and an EKF-based (Extended Kalman Filter) observer for predicting the target and temporal scaling is used. Global boundedness under bounded forces with bounded energy is proven. The object dynamics are assumed known. The proposed approach is validated through experiments using a Kuka LWR4+ robot equipped with an ATI sensor at its end-effector.