TY - GEN
T1 - Deep Learning for Safe Human-Robot Collaboration
AU - Duque-Suárez, Nicolás
AU - Amaya-Mejía, Lina María
AU - Martinez, Carol
AU - Jaramillo-Ramirez, Daniel
N1 - Publisher Copyright:
© 2022, The Author(s), under exclusive license to Springer Nature Switzerland AG.
PY - 2022
Y1 - 2022
N2 - Recent advances in computer vision and deep learning have led to implementations in different industrial applications, such as collaborative robotics, enabling robots to perform harder tasks, giving them awareness of their environment, and easing interaction with humans. With the objective of eliminating physical barriers between humans and robots, a safety system for industrial collaborative robots based on computer vision and deep learning is proposed, in which an RGBD camera is used to detect and track people located inside the robot’s workspace. Detection is performed with a previously trained convolutional neural network. The position of every detection is fed to the tracker, which identifies the subjects in the scene and keeps a record of them in case the detector fails. Each detected subject’s 3D position and height are represented in a simulation of the workspace, where the robot’s speed changes depending on the subject’s distance to the manipulator, following international safety guidelines. This paper presents the implementation of the detector and tracker algorithms, the estimation of the subject’s 3D position, the definition of the security zones, and the integration of the vision system with the robot and workspace. Results show the system’s ability to detect and track subjects in the scene, and the robot’s capacity to change its speed depending on the subject’s location.
AB - Recent advances in computer vision and deep learning have led to implementations in different industrial applications, such as collaborative robotics, enabling robots to perform harder tasks, giving them awareness of their environment, and easing interaction with humans. With the objective of eliminating physical barriers between humans and robots, a safety system for industrial collaborative robots based on computer vision and deep learning is proposed, in which an RGBD camera is used to detect and track people located inside the robot’s workspace. Detection is performed with a previously trained convolutional neural network. The position of every detection is fed to the tracker, which identifies the subjects in the scene and keeps a record of them in case the detector fails. Each detected subject’s 3D position and height are represented in a simulation of the workspace, where the robot’s speed changes depending on the subject’s distance to the manipulator, following international safety guidelines. This paper presents the implementation of the detector and tracker algorithms, the estimation of the subject’s 3D position, the definition of the security zones, and the integration of the vision system with the robot and workspace. Results show the system’s ability to detect and track subjects in the scene, and the robot’s capacity to change its speed depending on the subject’s location.
UR - http://www.scopus.com/inward/record.url?scp=85121633122&partnerID=8YFLogxK
U2 - 10.1007/978-3-030-90033-5_26
DO - 10.1007/978-3-030-90033-5_26
M3 - Conference contribution
AN - SCOPUS:85121633122
SN - 9783030900328
T3 - Lecture Notes in Networks and Systems
SP - 239
EP - 251
BT - Advances in Automation and Robotics Research - Proceedings of the 3rd Latin American Congress on Automation and Robotics, LACAR 2021
A2 - Moreno, Héctor A.
A2 - Carrera, Isela G.
A2 - Ramírez-Mendoza, Ricardo A.
A2 - Baca, José
A2 - Banfield, Ilka A.
PB - Springer Science and Business Media Deutschland GmbH
T2 - 3rd Latin American Congress on Automation and Robotics, LACAR 2021
Y2 - 17 November 2021 through 19 November 2021
ER -