Performance Analysis of Smartphone-Sensor Behavior for Human Activity Recognition

Abstract:  

The proliferation of smartphones has significantly facilitated people's daily lives, and the diverse, powerful sensors embedded in them make the smartphone a ubiquitous platform for acquiring and analyzing data, offering great potential for efficient human activity recognition. This paper presents a systematic performance analysis of motion-sensor behavior for human activity recognition via smartphones. Sensory data sequences are collected via smartphones while participants perform typical daily human activities. A cycle detection algorithm is applied to segment each data sequence into activity units, which are then characterized by time-, frequency-, and wavelet-domain features. Both personalized and generalized models using diverse classification algorithms are then developed and implemented to perform activity recognition. Analyses are conducted on 27,681 sensory samples from 10 subjects, and performance is measured by F-score under various placement settings, as well as in terms of sensitivity to user space, stability across combinations of motion sensors, and the impact of data imbalance. Extensive results show that each individual has specific and discriminative movement patterns, and the F-scores for the personalized and generalized models reach 95.95% and 96.26%, respectively, indicating that our approach is accurate and efficient enough for practical implementation.
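
To make the pipeline concrete, the following minimal Python sketch segments an acceleration-magnitude sequence into cycles and extracts a few time- and frequency-domain features per activity unit. The peak-based segmentation, sampling rate, and feature set are illustrative assumptions rather than the authors' exact configuration; wavelet-domain features (e.g., via PyWavelets) are omitted for brevity.

    import numpy as np
    from scipy.signal import find_peaks
    from scipy.fft import rfft

    def segment_cycles(magnitude, fs=50, min_period_s=0.4):
        # Stand-in for the paper's cycle detection algorithm: split the
        # magnitude sequence at detected peaks (assumed 50 Hz sampling).
        peaks, _ = find_peaks(magnitude, distance=int(min_period_s * fs))
        return [magnitude[a:b] for a, b in zip(peaks[:-1], peaks[1:])]

    def extract_features(cycle):
        # Time- and frequency-domain features for one activity unit.
        spectrum = np.abs(rfft(cycle))
        return np.array([
            cycle.mean(), cycle.std(),           # time domain
            np.abs(np.diff(cycle)).mean(),       # mean first difference
            spectrum.argmax(),                   # dominant frequency bin
            (spectrum ** 2).sum() / len(cycle),  # spectral energy
        ])

    # Usage: acc is an (N, 3) accelerometer stream; the magnitude is
    # orientation-invariant, which helps with varying phone placement.
    acc = np.random.randn(500, 3)                # placeholder signal
    mag = np.linalg.norm(acc, axis=1)
    X = np.array([extract_features(c) for c in segment_cycles(mag) if len(c) > 1])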

 

Existing System:  

 

Among recent studies focusing on sensor-based human activity recognition, the analysis of accelerometer data has attracted the most attention, and most of these studies chose the waist as the position at which to carry the smartphone [7], [8]. However, unlike wearable devices (e.g., smartwatches and smartbands), smartphones are carried in varied and uncertain positions in daily life. We may hold the phone in a hand and then put it into a jacket or trousers pocket; when jogging or doing other sports, the phone is often strapped to the upper arm. This is one of the main reasons why human activity recognition using smartphone sensors is difficult, yet so far only a few studies have considered the impact of smartphone placement. Vision-based techniques appeared earlier than sensor-based ones; their core processing stages include data preprocessing, object segmentation, feature extraction, and classifier implementation [11], [12], [13]. Although many efficient techniques have been proposed over the past few decades, vision-based HAR remains challenging: the position and angle of the observer, the subject's body size and clothing, the background color, and the light intensity all affect accuracy.

 

Proposed System:

 

With the rapid development of MEMS (micro-electromechanical systems), inertial sensors have become smaller, lighter, less expensive, and more accurate. Compared with video-based methods, sensor-based methods are more robust across environments, and the devices are cheaper and lighter. Moreover, many smartphones and smart wearable devices are equipped with multiple sensors (e.g., an accelerometer and a gyroscope), making sensor data easy to acquire. With these advantages, sensor-based activity recognition has attracted growing attention in recent years. Previous studies usually used more than one sensor to recognize human activities: Farringdon et al. [14], for example, used a sensor jacket equipped with several accelerometers to discriminate three types of static activities (sitting, standing, and lying) and two types of dynamic activities (walking and running). These studies have shown that HAR is a complex task in which many factors affect accuracy, and their frameworks and evaluation procedures differ. To our knowledge, there are only a few systematic studies like [29] on smartphone-sensor-based activity recognition, and few works present a systematic performance evaluation in this field.
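
As a concrete illustration of how readings from multiple embedded sensors can be combined, the sketch below concatenates simple per-axis statistics from accelerometer and gyroscope windows into a single feature vector for a classifier. The 2-second window at 50 Hz and the chosen statistics are assumptions for illustration only.

    import numpy as np

    def window_stats(window_3axis):
        # Per-axis mean and standard deviation for one sensor window.
        return np.concatenate([window_3axis.mean(axis=0),
                               window_3axis.std(axis=0)])

    def fuse_sensors(acc_win, gyro_win):
        # Concatenate accelerometer and gyroscope statistics so one
        # classifier sees both motion modalities at once.
        return np.concatenate([window_stats(acc_win), window_stats(gyro_win)])

    # Usage: 2-second windows (100 samples at 50 Hz) from both sensors.
    acc_win = np.random.randn(100, 3)     # placeholder accelerometer data
    gyro_win = np.random.randn(100, 3)    # placeholder gyroscope data
    feature_vector = fuse_sensors(acc_win, gyro_win)   # 12-dimensional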

 

Conclusion:

 

Smartphones' sensory and computing abilities have improved significantly, naturally making the smartphone an ideal platform for human activity recognition. In this paper, we provide a brief review of related work in the human activity recognition field, focusing mainly on smartphone-sensor-based approaches. From this summary, two main challenges emerge: one is the variety of smartphone positions and orientations, and the other is the limited accuracy of embedded sensors. In view of these problems, we proposed a detailed HAR framework (see Figure 7) that performs human activity recognition with high accuracy via smartphone motion sensors. Time-, frequency-, and wavelet-domain features were extracted to precisely characterize a subject's activity pattern. We then employed the two-sample Kolmogorov-Smirnov (K-S) test to analyze motion-sensor behavior and performed feature selection according to the resulting p-values. Our approach was evaluated on a dataset consisting of 27,681 samples from 10 subjects. To make the evaluation systematic, we implemented four multi-class classifiers: Random Forest (200 trees), Support Vector Machines with linear and RBF kernels, and k-Nearest Neighbors (k = 3). We compared performance across phone-placement settings and user spaces, assessed the contribution of each sensor, and further investigated the efficiency of data resampling.
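
The evaluation procedure can be sketched in Python with SciPy and scikit-learn; the per-feature K-S screening against a 0.05 p-value threshold and the synthetic data below are illustrative assumptions, while the four classifier configurations follow the settings stated above.

    import numpy as np
    from scipy.stats import ks_2samp
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.svm import SVC
    from sklearn.neighbors import KNeighborsClassifier
    from sklearn.model_selection import cross_val_score

    # Placeholder data: rows are activity units, columns are features.
    rng = np.random.default_rng(0)
    X = rng.normal(size=(1000, 20))
    y = rng.integers(0, 6, size=1000)     # six illustrative activity classes

    # Two-sample K-S test per feature (one class vs. the rest); keep
    # features whose distributions differ significantly (assumed alpha).
    keep = [j for j in range(X.shape[1])
            if ks_2samp(X[y == 0, j], X[y != 0, j]).pvalue < 0.05]
    X_sel = X[:, keep] if keep else X

    # The four multi-class classifiers used in the evaluation.
    classifiers = {
        "RF (200 trees)": RandomForestClassifier(n_estimators=200),
        "SVM (linear)": SVC(kernel="linear"),
        "SVM (RBF)": SVC(kernel="rbf"),
        "3-NN": KNeighborsClassifier(n_neighbors=3),
    }
    for name, clf in classifiers.items():
        scores = cross_val_score(clf, X_sel, y, cv=5, scoring="f1_macro")
        print(f"{name}: macro F-score = {scores.mean():.4f}")

Here the macro-averaged F-score stands in for the F-score metric used in the evaluation above.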

 

References:

 

[1] W. Kang and Y. Han, “SmartPDR: Smartphone-Based Pedestrian Dead Reckoning for Indoor Localization,” IEEE Sensors J., vol. 15, pp. 2906-2916, May 2015.

 

[2] K. Park, H. Shin, and H. Cha, “Smartphone-based pedestrian tracking in indoor corridor environments,” Personal and Ubiquitous Computing, vol. 17, pp. 359-370, Feb. 2013.

 

[3] D. Anguita, A. Ghio, L. Oneto, X. Parra, and J. L. Reyes-Ortiz, “Human Activity Recognition on Smartphones Using a Multiclass Hardware-Friendly Support Vector Machine,” in Proc. 4th Int. Workshop Ambient Assisted Living and Home Care (IWAAL 2012), Vitoria-Gasteiz, Spain, Dec. 3-5, 2012, pp. 216-223.

 

[4] Y. Kwon, K. Kang, and C. Bae, “Unsupervised learning for human activity recognition using smartphone sensors,” Expert Systems with Applications, vol. 41, pp. 6067-6074, Oct. 2014.

 

[5] W.-H. Lee and R. B. Lee, “Multi-sensor authentication to improve smartphone security,” in Proc. 1st Int. Conf. Information Systems Security and Privacy (ICISSP), Angers, France, 2015, pp. 1-11.

 

[6] L. Li, X. Zhao, and G. Xue, “Unobservable Re-authentication for Smartphones,” in Proc. 20th Annual Network & Distributed System Security Symposium (NDSS), San Diego, CA, USA, Feb. 24-27, 2013.

 

[7] L. Bao and S. S. Intille, “Activity recognition from user-annotated acceleration data,” in Proc. Int. Conf. Pervasive Computing, Springer, 2004, pp. 1-17.

 

[8] M. Ermes, J. Parkka, J. Mantyjarvi, and I. Korhonen, “Detection of daily activities and sports with wearable sensors in controlled and uncontrolled conditions,” IEEE Trans. Inf. Technol. Biomed., vol. 12, no. 1, pp. 20-26, Jan. 2008.

 

[9] Y. Xue and L. Jin, “Discrimination between upstairs and downstairs based on accelerometer,” IEICE Trans. Inf. Syst., vol. E94-D, no. 6, pp. 1173-1177, Jun. 2011.

 

[10] Z. He, Z. Liu, L. Jin, L. Zhen, and J. Huang, “Weightlessness feature—a novel feature for single tri-axial accelerometer based activity recognition,” in Proc. 19th Int. Conf. Pattern Recognition (ICPR), Tampa, FL, USA, Dec. 8-11, 2008.