abstract
- The analysis of sensor data from smart devices for classifying human activities has attracted increasing attention from industry and motion research. With the proliferation of smartwatches, such data has become available to everyone. Accelerometer and gyroscope data are conventionally analyzed as multivariate time series to obtain reliable information about the user's activity at a given moment. Because each device exhibits its own sampling-rate instabilities, previous approaches rely mainly on feature extraction to generalize across hardware, which demands considerable time and expertise. To overcome this problem, we present an end-to-end model for activity classification based on convolutional neural networks of different dimensionalities that requires no extensive feature extraction. The data preprocessing is computationally lightweight, and the model copes with the irregularities of the data. By representing the input twofold, as interpolated 1D time series and as time series encoded into images with Gramian Angular Summation Fields, the model can leverage computer vision techniques. In addition, online prediction is possible, and the accuracy is comparable to that of feature extraction approaches. The model is validated with random 10-fold and leave-one-user-out cross-validation, showing improved generalization on the task.
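
To illustrate the image-based input branch mentioned above, the sketch below shows a minimal Gramian Angular Summation Field encoding in NumPy. It is not the authors' implementation; the function name, rescaling choices, and example window are assumptions for illustration only.

```python
import numpy as np

def gasf(series: np.ndarray) -> np.ndarray:
    """Encode a 1D time series as a Gramian Angular Summation Field image.

    The series is rescaled to [-1, 1], mapped to angles via arccos,
    and entry (i, j) of the GASF is cos(phi_i + phi_j).
    """
    x = np.asarray(series, dtype=float)
    # Min-max rescale to [-1, 1]; a constant series maps to zeros (assumption).
    x_min, x_max = x.min(), x.max()
    if x_max > x_min:
        x = 2.0 * (x - x_min) / (x_max - x_min) - 1.0
    else:
        x = np.zeros_like(x)
    phi = np.arccos(np.clip(x, -1.0, 1.0))      # polar-coordinate angles
    return np.cos(phi[:, None] + phi[None, :])  # pairwise angular sums

# Example: encode a short sensor-like window into a 64x64 image.
window = np.sin(np.linspace(0, 4 * np.pi, 64))
image = gasf(window)
print(image.shape)  # (64, 64)
```

Such an image can then be fed to a 2D convolutional branch, while the interpolated raw window feeds the 1D branch.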