Implementation methodology


ProgHRC will be implemented in four stages, organized into interdependent, time-overlapping work modules that together produce the expected results and bridge the gap between research and practical application in industry.

The 1st stage starts from the identification of industry needs: the specifications of the robotic system will be set, and the usage scenarios to be tested at the potential end-user's facilities will be defined.

Based on these scenarios and the end-user's requirements, integrated solutions beyond the state of the art will be developed in the 2nd stage, building on the literature and on research already carried out by AUTh.

The main strands of that research are:

  • Human demonstration of movement via kinesthetic guidance of the arm; recognition and encoding of the movement in a way that allows generalization; and subsequent gradual automation with a continuous shift of roles between human and robot.
  • Workplace recognition with machine vision and automatic adaptation of the robot's motion to changes in the work objects.
  • Safety of the collaboration by avoiding obstacles, distinguishing desirable from unwanted contacts, and recognizing collisions.

In the field of programming by demonstration, modern collaborative robots allow "zero gravity" control, where a human can grasp the arm and kinesthetically guide it to any desired configuration. Demonstrating in this way the waypoints of a trajectory that the robot must then reproduce is the simplest form of kinesthetic teaching; it is, however, quite restrictive, as it offers no generalization or adaptability. A method that can learn a motion from a single demonstration while providing these capabilities is Dynamic Movement Primitives (DMP). A DMP can reproduce any point-to-point movement towards a given target. Unlike simple trajectory recording, DMPs produce smooth robot motion close to the demonstrated one, even towards a new target. They can also be adjusted temporally, e.g. for synchronization, and spatially, e.g. to avoid obstacles. In addition to motion, a DMP can encode force profiles for tasks in which the robot is in contact with its environment, or it can be extended by introducing force/torque feedback into the trajectory it generates.
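As a concrete illustration, a minimal one-dimensional discrete DMP can be sketched as follows. The formulation is the standard textbook one (a second-order transformation system driven by a learned forcing term and an exponential canonical system); the parameter values and class interface are illustrative assumptions, not the project's implementation.

```python
import numpy as np

# Minimal 1-D discrete DMP sketch (illustrative parameters, not ProgHRC code).
# Canonical system:       tau * dx = -ax * x
# Transformation system:  tau * dz = az * (bz * (g - y) - z) + f(x) * (g - y0)
#                         tau * dy = z
class DMP:
    def __init__(self, n_basis=30, az=25.0, ax=4.0):
        self.az, self.bz, self.ax = az, az / 4.0, ax
        self.c = np.exp(-ax * np.linspace(0.0, 1.0, n_basis))  # RBF centres in x
        d = np.abs(np.diff(self.c))
        self.h = 1.0 / np.append(d, d[-1]) ** 2                # RBF widths
        self.w = np.zeros(n_basis)

    def _psi(self, x):
        return np.exp(-self.h * (x - self.c) ** 2)

    def fit(self, y, dt):
        """Learn the forcing term from a single demonstrated trajectory y(t)."""
        tau = (len(y) - 1) * dt
        self.y0, self.g = y[0], y[-1]
        yd = np.gradient(y, dt)
        ydd = np.gradient(yd, dt)
        x = np.exp(-self.ax * np.arange(len(y)) * dt / tau)
        f_t = (tau**2 * ydd - self.az * (self.bz * (self.g - y) - tau * yd))
        f_t /= (self.g - self.y0)                              # spatial scaling
        psi = np.array([self._psi(xi) for xi in x])            # (T, n_basis)
        # Locally weighted regression: one weight per basis function
        self.w = (psi * (x * f_t)[:, None]).sum(0) / \
                 ((psi * (x**2)[:, None]).sum(0) + 1e-10)

    def rollout(self, g, tau, dt, steps):
        """Reproduce the motion towards a (possibly new) goal g."""
        y, z, x = self.y0, 0.0, 1.0
        traj = [y]
        for _ in range(steps):
            psi = self._psi(x)
            f = (psi @ self.w) / (psi.sum() + 1e-10) * x       # forcing term
            z += dt / tau * (self.az * (self.bz * (g - y) - z)
                             + f * (g - self.y0))
            y += dt / tau * z
            x += dt / tau * (-self.ax * x)
            traj.append(y)
        return np.array(traj)

# Usage: learn a minimum-jerk demonstration from 0 to 1, reproduce towards 2
t = np.linspace(0.0, 1.0, 501)
dt = t[1] - t[0]
demo = 10 * t**3 - 15 * t**4 + 6 * t**5
dmp = DMP()
dmp.fit(demo, dt)
traj = dmp.rollout(g=2.0, tau=1.0, dt=dt, steps=1000)
```

Because the forcing term decays with the canonical variable x, the spring-damper part of the transformation system guarantees convergence to the new goal, which is what makes DMPs generalize where raw trajectory recording does not.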

The AUTh research team has developed kinesthetically guided programming methods with DMPs for both position and orientation, enabling the robot to synchronize with humans. A methodology has been developed that contains the basic mathematical tools to allow progressive automation of demonstrated tasks. In addition, this methodology includes ways for the robot to impose virtual constraints on the human during the demonstration, in the form of forces and moments, in order to allow easier handling and reduce the operator's mental load. Within the framework of the proposed project, these methods will be extended so that the robot can perform a wide range of tasks, automatically recognizing the type of movement (periodic or not) and selecting the appropriate type of DMP for encoding it. In this regard, representations of task-space coordinates that combine position and orientation will be studied, as will the superposition of DMPs for composing complex motions.
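The idea of progressive automation can be illustrated with a deliberately simple blending rule, in which the robot's share of control grows after each accurate repetition and shrinks otherwise. The function names, thresholds, and update law below are hypothetical and only hint at the richer AUTh methodology described above.

```python
import numpy as np

# Illustrative sketch of progressive automation via role blending
# (hypothetical rule; the actual methodology is more elaborate).
# alpha in [0, 1] is the robot's level of autonomy.

def blended_command(v_human, v_dmp, alpha):
    """Convex blend of human kinesthetic guidance and the learned DMP output."""
    return (1.0 - alpha) * np.asarray(v_human) + alpha * np.asarray(v_dmp)

def update_autonomy(alpha, tracking_error, err_threshold=0.05, gain=0.2):
    """Raise autonomy after an accurate repetition, lower it otherwise."""
    if tracking_error < err_threshold:
        return min(1.0, alpha + gain)   # robot gradually takes over
    return max(0.0, alpha - gain)       # hand control back to the human
```

With such a rule the role shift is continuous rather than binary: early repetitions are human-led, later ones robot-led, and a poor repetition automatically returns authority to the human.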

The use of machine vision with depth cameras in robotics has applications in, among other things, recognizing the position and orientation of objects and directly adjusting DMP parameters with convolutional neural networks according to the image seen by the camera. Although machine learning can provide end-to-end solutions to the problem of robot motion generation, taking the camera image as the only input and omitting intermediate stages (such as object recognition and pose estimation), it cannot be applied directly at this stage, because it requires long training with many repetitions, which makes it impractical. Alternatively, the use of fiducial markers (tags) on objects, or of 3D object models, allows immediate identification with very good accuracy. As the goal is to minimize programming time and user interaction with graphical environments, a scene-recognition study will be performed on the robot workplace with 3D cameras (for wide coverage and avoidance of blind spots), associating the robot's movements with the objects present in the scene by means of inference methods. By continuously estimating the positions of objects in space, the robot will then be able to adapt its movement to their new positions.
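When markers or 3D models give matched 3-D points (e.g. marker corners seen by the depth camera against their known positions on the object), the object pose can be recovered with the standard Kabsch/Umeyama algorithm, and a learned grasp point can then be mapped into the camera frame to re-target the motion. The marker geometry and helper names below are illustrative assumptions, one plausible realisation of the adaptation described above.

```python
import numpy as np

# Sketch: object-pose estimation from matched 3-D point sets (Kabsch algorithm),
# then re-targeting a learned grasp point. Names and geometry are illustrative.

def estimate_pose(model_pts, observed_pts):
    """Return rotation R and translation t with observed ≈ R @ model + t."""
    cm, co = model_pts.mean(0), observed_pts.mean(0)
    H = (model_pts - cm).T @ (observed_pts - co)        # cross-covariance
    U, _, Vt = np.linalg.svd(H)
    # Correction matrix to rule out a reflection
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T
    t = co - R @ cm
    return R, t

def retarget_goal(grasp_in_object, R, t):
    """Map a grasp point from the object frame into the camera/world frame."""
    return R @ grasp_in_object + t

# Usage: hypothetical 5-point marker model, rotated 30° about z and translated
model = np.array([[0.0, 0.0, 0.0], [0.05, 0.0, 0.0], [0.05, 0.05, 0.0],
                  [0.0, 0.05, 0.0], [0.0, 0.0, 0.02]])
c, s = np.cos(np.pi / 6), np.sin(np.pi / 6)
R_true = np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])
t_true = np.array([0.3, -0.1, 0.5])
observed = model @ R_true.T + t_true                    # simulated camera data
R_est, t_est = estimate_pose(model, observed)
new_goal = retarget_goal(np.array([0.025, 0.025, 0.0]), R_est, t_est)
```

The recovered pose can feed directly into the goal parameter of the DMP, which is exactly the mechanism that lets the encoded motion follow an object to a new position without reprogramming.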

The issue of safety in human-robot collaboration is usually approached in the literature either at the stage of avoiding a possible collision with obstacles while the robot is moving, using depth cameras, or at the stage of reacting after a collision recognized by force/torque sensors. Two levels of safety will be implemented in ProgHRC. At the higher level, the depth camera will detect dynamic obstacles that appear in the workspace but are not related to the task being performed, and the robot will avoid them by introducing coupling terms into the DMP that generates the arm's motion. At the same level, an innovative method will be developed to distinguish between voluntary and involuntary impending contacts of the robot with the human body, according to the body part that is closest to the robot. If camera recognition fails (image loss), the lower level of safety will guarantee rapid contact detection by the force/torque sensors built into the arm, distinguishing the type of contact so that the robot responds appropriately, either preventing the development of large collision forces or switching roles to allow human intervention.
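Very schematically, the two safety levels could be organised as below: a camera-based check that classifies impending contacts before they happen, backed by a force/torque check that reacts once contact occurs. The thresholds, body-part labels, and return values are illustrative assumptions; the ProgHRC method will be considerably richer than this sketch.

```python
import numpy as np

# Illustrative two-level safety monitor (hypothetical thresholds and names).

def high_level_check(obstacle_dist, contact_part, intended_parts=("hand",),
                     safe_dist=0.15):
    """Camera level: classify an impending contact by the nearest body part."""
    if obstacle_dist > safe_dist:
        return "continue"
    if contact_part in intended_parts:   # e.g. a deliberate hand-over
        return "allow_contact"
    return "avoid"                       # trigger DMP coupling term / evasion

def low_level_check(wrench, force_limit=20.0):
    """Sensor level: react fast to large measured forces after contact.
    wrench = [Fx, Fy, Fz, Tx, Ty, Tz] from the built-in force/torque sensor."""
    force = np.linalg.norm(np.asarray(wrench)[:3])
    return "compliant_stop" if force > force_limit else "continue"
```

The key design point mirrored here is redundancy: the low-level check needs no camera input, so it still guards the human if image-based recognition is lost.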

The 3rd stage of the project will be the integration of the overall system (hardware and software), its testing in a real industrial environment, and its final evaluation with standard methods. For the first three stages, an iterative approach has been chosen, so that the first results emerge by the middle of the project, are tested early in real applications, and can then be redesigned according to the first evaluation.

The 4th stage will cover the procedures for protecting the intellectual property of the final architecture and the methodology that will emerge as a result of the project.


This project has received funding from the European Union