Motion analysis for identification of overused body segments: the packaging task in Industry 4.0

This work presents a statistical analysis of professional gestures from household appliance manufacturing. The goal is to investigate the hypothesis that some body segments are more involved than others in professional gestures and thus present a higher ergonomic risk. The gestures were recorded with a full-body Inertial Measurement Unit (IMU) suit and represented as rotations of each segment. The dimensionality of the data was reduced with principal component analysis (PCA), allowing us to reveal hidden correlations between body segments and to extract those with the highest variance. This work aims at detecting, among numerous upper-body segments, which ones are overused and, consequently, what is the minimum number of segments sufficient to represent our dataset for ergonomic analysis. To validate the results, a Hidden Markov Model (HMM)-based recognition method was trained using only the segments selected by the PCA. A recognition accuracy of 95.71% was achieved, confirming this hypothesis.


Introduction
In an industrial context, workers' health is directly linked to a company's productivity. Ergonomists apply various methods to assess professional postures and gestures and to prevent Musculo-Skeletal Disorders (MSD). Most of these methods are based on observations and a qualitative posture evaluation [1]. One of the most widely used is RULA, in which the positions of individual body segments are observed and the greater the deviation from the neutral posture, the higher the score, which represents the level of MSD risk [2]. The use of motion capture (mocap) technology may bring significant added value to this analysis and complete it with parameters such as precise information about the movement's biomechanics. However, the data provided by mocap may be too complex and, in some cases, redundant for ergonomic analysis. In this work, our goal is to validate that only a few body segments form groups of potentially overused body parts. Similar studies of body-segment categorisation have been done in the field of expressive gestures [3], but also of handicraft movements [4]. The conclusions of this analysis could be used to define the minimum number of segments that need to be recorded and analysed.

Method
The dataset used for the analysis was captured with an Inertial Measurement Unit (IMU) full-body suit from Nansense Inc. [5] under real conditions in a factory. One worker was recorded performing the "packaging" task, which consists of grasping boxes of TVs from a conveyor and placing them on a pallet in 4 different levels. Each level holds 8 boxes of TVs. Once the worker completed one level, he moved on to the next until finishing the pallet with the 4th level. The suit is composed of 52 sensors placed throughout the body. Through the inverse kinematics solver provided by Nansense Studio, the body segments' rotations (Euler angles) on the 3 axes X, Y and Z were computed. Fig. 1 illustrates the worker placing a box on the 4th level. This study focused only on the upper body of the worker, excluding the fingers recorded with the gloves. The dataset included rotations on the 3 XYZ axes from 17 sensors, resulting in 51 variables in total. This dataset was separated into 4 subsets corresponding to the 4 levels. Each subset thus included the gestures of grasping and placing a box on the corresponding level (from 1st to 4th), repeated 8 times (for 8 boxes).
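To make the data layout concrete, the following minimal sketch shows how such a recording can be organised as a frames-by-variables array (17 sensors × 3 Euler angles = 51 columns) and partitioned into the 4 per-level subsets. All array names, sizes and the random placeholder data are hypothetical; the real split would follow the annotated level boundaries in the recording.

```python
import numpy as np

# Hypothetical stand-in for the recording: one row per frame,
# 17 upper-body sensors x 3 rotation axes (X, Y, Z) = 51 columns.
n_frames, n_sensors, n_axes = 1000, 17, 3
rotations = np.random.default_rng(0).normal(size=(n_frames, n_sensors * n_axes))

# Partition the recording into 4 subsets, one per pallet level.
subsets = np.array_split(rotations, 4, axis=0)
print(len(subsets), subsets[0].shape)  # 4 subsets of shape (250, 51)
```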

Dimension reduction with PCA
Before applying principal component analysis (PCA), Factor Analysis was used to preprocess and fuse the 3-axis rotations from the 17 sensors, in order to facilitate the interpretation of the results and to obtain one variable per sensor. The weight of each XYZ variable was calculated, and each rotation was multiplied by its weight and divided by the sum of the weights, as explained in [4]. By analysing the PCA results, a different group of variables can be detected in each component. In C1, the variables with the highest eigenvalues were the spine and shoulders, which are generally linked to the back, whereas in C2 the highest were the variables related to the arms. The identified body segments appear consistent with the segments that, according to RULA, mainly cause the high ergonomic risk of the gesture: the back and arms, which receive the highest RULA scores. From each component, only the variables with the highest mean eigenvalues per body segment were selected for gesture recognition. For example, as the back is covered by more than three variables for the same body segment (Spine, Spine 1, Spine 2, Spine 3), Spine 1 was selected since it had the highest mean eigenvalues.
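The fuse-then-reduce pipeline described above can be sketched as follows. This is a simplified reconstruction, not the authors' exact procedure: the per-axis weights are assumed here to come from a one-factor Factor Analysis loading per sensor, and the data are random placeholders.

```python
import numpy as np
from sklearn.decomposition import FactorAnalysis, PCA

rng = np.random.default_rng(1)
X = rng.normal(size=(800, 51))      # frames x (17 sensors * 3 axes), placeholder data
X3 = X.reshape(800, 17, 3)          # regroup into per-sensor XYZ triples

# Per-sensor fusion: weight each axis (here by the absolute loading of a
# one-factor FA -- an assumption) and take the weighted average of X, Y, Z,
# yielding one variable per sensor.
fused = np.empty((800, 17))
for s in range(17):
    fa = FactorAnalysis(n_components=1).fit(X3[:, s, :])
    w = np.abs(fa.components_[0])   # one weight per axis
    fused[:, s] = X3[:, s, :] @ w / w.sum()

# PCA on the 17 fused variables; inspecting the loadings of C1 and C2
# reveals which sensor groups (e.g. back vs. arms) carry the most variance.
pca = PCA(n_components=2).fit(fused)
print(pca.components_.shape)        # (2, 17): one loading per sensor per component
```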

Gesture recognition with hidden Markov models
For gesture recognition, Hidden Markov Models (HMMs) have proved to be a prominent tool [4]; hence they were used in this study. The XYZ rotations of the variables from C1 and C2, highlighted in Table 1, were used separately for gesture recognition. The HMMs were trained with 4 classes, where each class corresponded to the gesture of placing a box on 1 of the 4 levels of the pallet. The dataset used in this section therefore has 4 classes, one per pallet level, with 8 repetitions per class.

Results
To evaluate the proposed method, the dataset was split into an 80% training set and a 20% test set to estimate the accuracy of the gesture/level recognition. This evaluation was repeated 10 times, each time with a different training and test set, as the samples were selected randomly in each iteration. The results showed an accuracy of 81.43% for the C1 variables and 95.71% for C2. Consequently, the 4 variables contained in C2 are sufficient to recognise high-ergonomic-risk gestures, since only 2 gestures from Level 3 and 1 from Level 4 were misclassified over the whole evaluation.
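The repeated random-split protocol can be sketched as below. The data are synthetic placeholders and, to keep the example self-contained, a nearest-centroid classifier stands in for the per-class HMMs; only the evaluation loop (10 random stratified 80/20 splits, averaged accuracy) mirrors the text.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neighbors import NearestCentroid

rng = np.random.default_rng(3)

# Toy stand-in: 32 gesture samples (4 levels x 8 repetitions), each
# summarised here as a single 4-dimensional feature vector.
X = np.vstack([lv * 2.0 + rng.normal(size=(8, 4)) for lv in range(4)])
y = np.repeat(np.arange(4), 8)

# Protocol: 10 different random 80/20 splits, report the mean accuracy.
accs = []
for seed in range(10):
    Xtr, Xte, ytr, yte = train_test_split(
        X, y, test_size=0.2, random_state=seed, stratify=y)
    clf = NearestCentroid().fit(Xtr, ytr)
    accs.append((clf.predict(Xte) == yte).mean())

print(round(float(np.mean(accs)), 3))
```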

Conclusion
In an industrial context, workers perform complex professional gestures that contain essential information about ergonomic risks. In this work we formulated the hypothesis that some body segments are more involved than others in "packaging" professional gestures and thus present a higher risk of injury. PCA highlighted groups of variables corresponding to those with the highest RULA scores (back and arms). When these variables were used separately for gesture recognition, better accuracy was achieved with the variables of C2, confirming that they best represent our data. Being able to identify these segments could enable a faster and more efficient ergonomic analysis of workers' gestures. At the same time, since the use of a full-body mocap suit in an industrial context presents several difficulties, this analysis could contribute to identifying the minimum number of segments to record using more acceptable technologies such as a smartphone (for the back) or a smartwatch (for the arms). To generalise these first results, future work will consist of performing a similar analysis on a bigger dataset, including recordings from more than one worker, as well as on different types of features.