Self-Intelligence with Human Activities Recognition Based in Convolutional Neural Network

2020 ◽  
Vol 17 (8) ◽  
pp. 3484-3490
Author(s):  
M. S. Roobini ◽  
Tumu Kusal Kedar ◽  
A. SivaSangari ◽  
R. Vignesh ◽  
D. Deepa ◽  
...  

Deep learning is a subset of machine learning in which neural network algorithms, inspired by the human brain, learn from large amounts of data through several layers of nonlinear transformation. Deep learning can process a huge number of features to increase the accuracy of the result. Real-world applications of deep learning include face recognition, handwriting recognition, speech recognition, translation from one human language to another, and the control of robots such as self-driving vehicles. The existing framework depends on sensors and devices to collect time-series signals, which are generated in both the time and frequency domains. To gather the required data, each subject carries a smart device for a few hours and performs several activities. In the proposed application, five basic activities are implemented: walking, limping, working out, walking upstairs, and walking downstairs. Human Activity Recognition (HAR) has grown considerably as a research field, particularly in context-aware computing and multimedia, on account of its pervasiveness in human life and our continually growing computational capacity. It is widely sought after for a broad range of applications such as smart homes, human behavior analysis, sports, and even security systems. The proposed Human Activity Recognition application is based on deep learning, which is used to recognize and verify human activities from images. Deep learning algorithms leverage large datasets of past human activities, learn a rich set of features, train the models, and eventually recognize the human activities. The proposed application involves Feature Detection, Feature Alignment, and Feature Extraction.
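A minimal sketch of the kind of classifier the abstract describes: a small 1D convolutional network that maps fixed-length windows of tri-axial sensor data to the five listed activities. The window length, layer sizes, and class names are illustrative assumptions, not the authors' implementation.

```python
# Sketch only: a 1D CNN over windowed tri-axial sensor signals for the five
# activities named in the abstract. All hyperparameters are assumptions.
import torch
import torch.nn as nn

ACTIVITIES = ["walking", "limping", "working_out", "walking_upstairs", "walking_downstairs"]

class HARConvNet(nn.Module):
    def __init__(self, n_channels=3, window_len=128, n_classes=len(ACTIVITIES)):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(n_channels, 32, kernel_size=5, padding=2),  # local temporal filters
            nn.ReLU(),
            nn.MaxPool1d(2),
            nn.Conv1d(32, 64, kernel_size=5, padding=2),
            nn.ReLU(),
            nn.MaxPool1d(2),
        )
        self.classifier = nn.Linear(64 * (window_len // 4), n_classes)

    def forward(self, x):  # x: (batch, 3, window_len)
        z = self.features(x)
        return self.classifier(z.flatten(1))

# Example: classify one random window standing in for real sensor readings.
model = HARConvNet()
window = torch.randn(1, 3, 128)
pred = model(window).argmax(dim=1)
print(ACTIVITIES[pred.item()])
```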

Sensors ◽  
2021 ◽  
Vol 21 (8) ◽  
pp. 2760
Author(s):  
Seungmin Oh ◽  
Akm Ashiquzzaman ◽  
Dongsu Lee ◽  
Yeonggwang Kim ◽  
Jinsul Kim

In recent years, various studies have begun to use deep learning models to conduct research in the field of human activity recognition (HAR). However, the development of such models has lagged because training deep learning models requires a lot of labeled data. In fields such as HAR, data are difficult to collect, and manual labeling involves high costs and effort. Existing methods rely heavily on manual data collection and proper labeling of the data by human administrators, which makes the data-gathering process slow and prone to human-biased labeling. To address these problems, we propose a new solution to the existing data-gathering methods that reduces the labeling required for new data by leveraging already-learned data through a semi-supervised active transfer learning method. This method achieved 95.9% performance while also reducing labeling compared to the random sampling or active transfer learning methods.
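One building block behind this kind of labeling reduction is uncertainty-based sample selection: a model pretrained on a source domain (the transfer step) scores unlabeled windows, the least confident ones are routed to a human annotator, and the rest can be pseudo-labeled (the semi-supervised step). The sketch below shows only that selection step under assumed inputs; it does not reproduce the paper's exact procedure or thresholds.

```python
# Illustrative uncertainty-sampling step for active (transfer) learning.
# `probabilities` is assumed to come from a model pretrained on labeled source data.
import numpy as np

def select_for_labeling(probabilities: np.ndarray, budget: int) -> np.ndarray:
    """Pick the `budget` unlabeled samples the model is least confident about.

    probabilities: (n_samples, n_classes) softmax outputs.
    Returns indices to send to a human annotator; the remaining samples can be
    pseudo-labeled with the model's own predictions.
    """
    confidence = probabilities.max(axis=1)   # highest class probability per sample
    return np.argsort(confidence)[:budget]   # least confident first

# Example with dummy predictions over 6 unlabeled windows and a budget of 2.
probs = np.array([[0.90, 0.10], [0.55, 0.45], [0.60, 0.40],
                  [0.95, 0.05], [0.51, 0.49], [0.80, 0.20]])
print(select_for_labeling(probs, budget=2))  # indices of the two most uncertain windows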


Sensors ◽  
2018 ◽  
Vol 18 (11) ◽  
pp. 3910 ◽  
Author(s):  
Taeho Hur ◽  
Jaehun Bang ◽  
Thien Huynh-The ◽  
Jongwon Lee ◽  
Jee-In Kim ◽  
...  

The most significant barrier to success in human activity recognition is extracting and selecting the right features. In traditional methods, the features are chosen by humans, which requires the user to have expert knowledge or to do a large amount of empirical study. Newly developed deep learning technology can automatically extract and select features. Among the various deep learning methods, convolutional neural networks (CNNs) have the advantages of local dependency and scale invariance and are suitable for temporal data such as accelerometer (ACC) signals. In this paper, we propose an efficient human activity recognition method, namely Iss2Image (Inertial sensor signal to Image), a novel encoding technique for transforming an inertial sensor signal into an image with minimum distortion and a CNN model for image-based activity classification. Iss2Image converts real number values from the X, Y, and Z axes into three color channels to precisely infer correlations among successive sensor signal values in three different dimensions. We experimentally evaluated our method using several well-known datasets and our own dataset collected from a smartphone and smartwatch. The proposed method shows higher accuracy than other state-of-the-art approaches on the tested datasets.
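A simplified sketch of the idea the abstract describes, not the exact Iss2Image encoding: map the X, Y, and Z accelerometer axes to the R, G, and B channels of an image after per-axis min-max scaling, so that an image CNN can consume the inertial signal. The image height and scaling scheme here are assumptions for illustration.

```python
# Simplified signal-to-image encoding (assumed scheme, not the paper's exact method).
import numpy as np

def signal_to_image(signal: np.ndarray, height: int = 32) -> np.ndarray:
    """signal: (n_samples, 3) tri-axial readings; returns (height, width, 3) uint8 image."""
    lo, hi = signal.min(axis=0), signal.max(axis=0)
    scaled = (signal - lo) / np.where(hi - lo == 0, 1, hi - lo)  # per-axis min-max scaling
    pixels = (scaled * 255).astype(np.uint8)                     # one RGB pixel per sample
    width = pixels.shape[0] // height
    return pixels[: height * width].reshape(height, width, 3)    # fold the sequence into 2-D

# Example: encode 1024 simulated ACC samples into a 32x32 RGB image for a CNN.
acc = np.random.randn(1024, 3)
img = signal_to_image(acc)
print(img.shape)  # (32, 32, 3)
```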


IEEE Access ◽  
2021 ◽  
pp. 1-1
Author(s):  
Fahd N. Al-Wesabi ◽  
Amani Abdulrahman Albraikan ◽  
Anwer Mustafa Hilal ◽  
Asma Abdulghani Al-Shargabi ◽  
Saleh Alhazbi ◽  
...  
