
Abstract

On Motion Control and Machine Learning for Robotic Assembly

Martin Karlsson, Dept. of Automatic Control, Lund University

Industrial robots typically require highly structured and predictable working environments, as well as explicit programming, in order to perform well. Expensive and time-consuming engineering work is therefore a key obstacle when transferring tasks to robots. This thesis presents methods that reduce the engineering work required for robot programming and increase the ability of robots to handle unforeseen events. This has two main benefits: programming can be done faster, and it becomes accessible to users without engineering experience. Although these methods could be applied to various types of robot applications, this thesis focuses on robotic assembly tasks.

Two main topics are explored. In the first part, we consider adjustment of robot trajectories generated by dynamical movement primitives (DMPs). The DMP framework for robot trajectory generation has received much attention in robotics research, owing to its convergence properties and its emphasis on easy modification; for instance, the time scale and the goal state can each be adjusted through a single parameter, commonly without further considerations. In this thesis, the DMP framework is extended with a method that allows a robot operator to adjust DMPs by demonstration, without any traditional computer programming or other engineering work. Given a generated trajectory whose last part is faulty, the operator can use lead-through programming to demonstrate a corrective trajectory. A modified DMP is then formed from the first part of the faulty trajectory and the last part of the corrective one. Further, a method for handling perturbations during execution of DMPs on robots is considered, in which two-degree-of-freedom control is combined with temporal coupling to achieve practically realizable reference-trajectory tracking and perturbation recovery.
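To illustrate the convergence and one-parameter goal adjustment mentioned above, the following is a minimal sketch of a one-dimensional DMP transformation system with the forcing term set to zero. The formulation and the parameter names (`alpha`, `beta`, `tau`) follow the common DMP literature and are assumptions for illustration, not necessarily the notation or implementation used in the thesis.

```python
import numpy as np

def dmp_rollout(y0, g, tau=1.0, alpha=25.0, beta=6.25, dt=0.001, T=2.0):
    """Integrate the transformation system
        tau * dv/dt = alpha * (beta * (g - y) - v),  tau * dy/dt = v,
    with zero forcing term, by forward Euler. With beta = alpha / 4 the
    system is critically damped, so y converges to the goal g without
    overshoot. Returns the position trajectory as an array."""
    y, v = float(y0), 0.0
    traj = []
    for _ in range(int(T / dt)):
        dv = alpha * (beta * (g - y) - v) / tau  # spring-damper toward g
        dy = v / tau
        v += dv * dt
        y += dy * dt
        traj.append(y)
    return np.array(traj)

# Goal and time scale are each a single parameter: changing g (or tau)
# re-targets (or re-times) the whole motion without re-programming.
traj = dmp_rollout(y0=0.0, g=1.0)
```

In a full DMP, a learned forcing term shaped by a demonstrated trajectory would be added to the spring-damper dynamics; the convergence to `g` is retained because that term vanishes as the motion completes.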

In the second part of the thesis, a method is presented that enables robots to learn to recognize contact force/torque transients acting on the end-effector, without using a force/torque sensor. A recurrent neural network (RNN), with robot joint torques as input, is used for transient detection, and a machine learning approach for determining the parameters of the RNN is presented.
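The detection idea can be sketched as a recurrent network that maps a sequence of joint-torque samples to a per-step transient score. The simple Elman-style architecture, the layer sizes, and the random placeholder weights below are illustrative assumptions; in the thesis the parameters are determined by machine learning from data.

```python
import numpy as np

rng = np.random.default_rng(0)
n_joints, n_hidden = 7, 16  # e.g. a 7-joint arm; sizes are illustrative

# Placeholder weights; in practice these would be learned from
# labeled sequences of joint torques (an assumption for this sketch).
W_in = rng.normal(0.0, 0.1, (n_hidden, n_joints))
W_rec = rng.normal(0.0, 0.1, (n_hidden, n_hidden))
w_out = rng.normal(0.0, 0.1, n_hidden)

def transient_scores(torques):
    """torques: (T, n_joints) array of joint-torque measurements.
    Returns a length-T array of scores in (0, 1), where a high score
    would indicate a likely contact transient at that time step."""
    h = np.zeros(n_hidden)
    scores = []
    for tau_t in torques:
        h = np.tanh(W_in @ tau_t + W_rec @ h)      # recurrent state update
        scores.append(1.0 / (1.0 + np.exp(-w_out @ h)))  # sigmoid readout
    return np.array(scores)

scores = transient_scores(rng.normal(size=(50, n_joints)))
```

The recurrent state lets the detector respond to the temporal shape of a transient rather than to instantaneous torque values, which is what distinguishes a contact event from slow load variations.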

Each of the methods presented in this thesis is implemented in a real-time application, and verified experimentally on a robot.