The human operator will be equipped with a set of sensors and wearable devices, mainly taken off-the-shelf from the market with minor adaptations, which provide the technological support for the human-platform interaction.
The scope is twofold: on one side, these devices augment the situation awareness of the human operator; on the other side, they augment the human capabilities and presence on the rescue scene. In particular, to improve situation awareness, some of the following off-the-shelf products/components can be taken into account:
Human capabilities and presence on the rescue scene can be improved through sensors that fit properly on the rescuer's arms. More specifically, 3D motion tracking and 3D motion capture systems, such as those produced by, e.g., Xsens (http://www.xsens.com/), will be integrated into the human equipment.
For motion tracking, Xsens provides the MTw, a small, highly accurate wireless inertial 3D motion tracker. The MTw sets a new standard for ambulatory 3D kinematic measurement (motion and orientation) by using a radio protocol that implements very accurate (≤ 10 μs difference) time synchronization between multiple MTw units and third-party devices in a wireless body-area network.
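The practical benefit of the ≤ 10 μs synchronization is that samples coming from several wireless trackers can be grouped into consistent body-wide frames. The sketch below illustrates this idea only: it is a hypothetical host-side alignment routine (the function name, data layout, and tolerance handling are assumptions, not part of the Xsens API), which pairs each reference sample with the closest sample from every other tracker within a small timestamp tolerance.

```python
from bisect import bisect_left

def align_frames(streams, tolerance_us=10):
    """Group samples from multiple trackers into synchronized frames.

    streams: dict tracker_id -> list of (timestamp_us, sample),
             sorted by timestamp.
    Samples whose timestamps agree within `tolerance_us` are treated
    as one frame, mirroring the <= 10 us synchronization of the
    wireless body-area network. Returns a list of (timestamp, frame)
    pairs; frames missing a tracker are dropped.
    """
    reference_id = next(iter(streams))
    frames = []
    for t_ref, ref_sample in streams[reference_id]:
        frame = {reference_id: ref_sample}
        for tid, samples in streams.items():
            if tid == reference_id:
                continue
            ts = [t for t, _ in samples]
            i = bisect_left(ts, t_ref)
            # Check the two neighbouring samples for the closest timestamp.
            best = None
            for j in (i - 1, i):
                if 0 <= j < len(ts) and abs(ts[j] - t_ref) <= tolerance_us:
                    if best is None or abs(ts[j] - t_ref) < abs(ts[best] - t_ref):
                        best = j
            if best is not None:
                frame[tid] = samples[best][1]
        if len(frame) == len(streams):
            frames.append((t_ref, frame))
    return frames
```

With two trackers whose clocks agree within a few microseconds, every sample pair collapses into a single frame; a tracker drifting beyond the tolerance simply drops out of that frame, making the desynchronization visible downstream.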
The multiple MTw devices can be fixed to the human body with simple body straps. For motion capture, Xsens provides the MTx, a measurement unit for the orientation of human body segments. The MTx uses three rate gyroscopes to track rapid changes in 3D orientation, and it measures the directions of gravity (via accelerometers, up to 18 g) and magnetic north to provide a stable reference. A real-time algorithm fuses all sensor information to calculate an accurate 3D orientation with a stable and highly dynamic response.
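Xsens's on-board fusion algorithm is proprietary, but the underlying principle can be illustrated with a simplified complementary filter (an assumption of this sketch, not the actual MTx algorithm): gyroscope rates are integrated for a fast, dynamic response, while the gravity and magnetic-north directions supply the slow, drift-free absolute reference.

```python
import math

def complementary_filter(gyro, accel, mag, dt, prev, alpha=0.98):
    """One step of a simplified complementary filter.

    gyro:  angular rates (rad/s) around x, y, z
    accel: accelerometer reading (m/s^2); gravity gives roll/pitch
    mag:   magnetometer reading; magnetic north gives yaw (flat approx.)
    prev:  previous (roll, pitch, yaw) estimate in radians
    alpha: trust in the gyro integration vs. the absolute reference
    """
    # Fast path: integrate the gyroscope rates (drifts over time).
    roll_g = prev[0] + gyro[0] * dt
    pitch_g = prev[1] + gyro[1] * dt
    yaw_g = prev[2] + gyro[2] * dt

    # Slow path: absolute attitude from gravity and magnetic north.
    roll_a = math.atan2(accel[1], accel[2])
    pitch_a = math.atan2(-accel[0], math.hypot(accel[1], accel[2]))
    yaw_m = math.atan2(-mag[1], mag[0])

    # Blend: gyro dynamics corrected by the stable reference.
    return (alpha * roll_g + (1 - alpha) * roll_a,
            alpha * pitch_g + (1 - alpha) * pitch_a,
            alpha * yaw_g + (1 - alpha) * yaw_m)
```

At rest with gravity along z and north along x, the estimate stays at zero; under rotation, the gyro term dominates the short-term response while the reference term bounds the long-term drift.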
The Xsens MVN Motion Capture solution consists of a set of MTx measurement units attached to the body by a lycra suit or by straps. This flexible and portable motion capture system can be used both indoors and outdoors. These sensors provide the required information on the dynamics of the rescuer's gestures (e.g., arm position/velocity, impedance), which can be collected and fused by the cognitive algorithms in order to instruct and guide the robotic actors in terms of, e.g., actions to be performed or paths to be followed.
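To make the gesture-to-command idea concrete, the following is a minimal, hypothetical detector (the rule, thresholds, and function name are illustrative assumptions, not the project's cognitive algorithms): it flags a "pointing" gesture when the arm pitch, derived from the motion capture stream, stays nearly constant for a short hold time, and returns the held angle that could then be translated into a heading for a robotic actor.

```python
def detect_pointing(pitch_series, dt, hold_s=0.5, motion_thresh=0.05):
    """Hypothetical gesture rule: report a 'pointing' gesture when the
    arm pitch (radians, sampled every dt seconds) stays nearly constant,
    i.e. its angular rate remains below motion_thresh rad/s for hold_s
    seconds. Returns the held pitch, or None if no gesture is detected.
    """
    need = int(hold_s / dt)  # number of intervals that must be steady
    if len(pitch_series) < need + 1:
        return None
    recent = pitch_series[-(need + 1):]
    # Finite-difference angular rates over the holding window.
    rates = [abs(b - a) / dt for a, b in zip(recent, recent[1:])]
    if max(rates) < motion_thresh:
        return sum(recent) / len(recent)
    return None
```

A steadily held arm yields its mean pitch as the command input, while a moving arm yields no command, so spurious instructions are suppressed during normal motion.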
The requirements and performance metrics for the human-platform interaction technology will be derived so as to facilitate the communication between the rescuer and the robotic part of the SHERPA team.