Fall Recognition System to Determine the Point of No Return in Real-Time

2021 · Vol 11 (18) · pp. 8626
Author(s): Bae Sun Kim, Yong Ki Son, Joonyoung Jung, Dong-Woo Lee, Hyung Cheol Shin

In this study, we collected data on human falls occurring in four directions while walking or standing and developed a fall recognition system based on the center of mass (COM). Fall data were collected from a lower-body motion data acquisition device comprising five inertial measurement unit (IMU) sensors sampled at 100 Hz and were labeled based on the COM-norm. Models were then trained to classify the stage of the fall to which a particular instance belongs. Both a representative convolutional neural network model and a long short-term memory model ran within 10 ms on the embedded platform (Jetson TX2), with recognition rates exceeding 94%. Accordingly, by emitting the fall recognition model's output every 10 ms, the progress of a fall can be tracked through the unbalanced and falling stages, which are obtained by subdividing the critical stage in which a real-time fall proceeds. In addition, we confirmed that a real-time fall can be judged by specifying the point of no return (PONR) near the entry point of the falling-down stage.
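The staging idea described above can be sketched as a threshold rule on the COM displacement norm, evaluated once per 10 ms sample. The thresholds, stage names, and reference-point convention below are illustrative assumptions, not the paper's calibrated values:

```python
import math

# Illustrative stage thresholds on the COM displacement norm (assumed values,
# not the paper's calibrated ones).
STABLE_MAX = 0.05      # metres: below this, posture is considered stable
UNBALANCED_MAX = 0.15  # metres: between the two bounds the subject is
                       # unbalanced; beyond this, the fall is under way

def com_norm(com, com_ref):
    """Euclidean distance of the current COM from its reference position."""
    return math.sqrt(sum((c - r) ** 2 for c, r in zip(com, com_ref)))

def fall_stage(com, com_ref):
    """Label one 10 ms sample as 'stable', 'unbalanced', or 'falling'."""
    n = com_norm(com, com_ref)
    if n < STABLE_MAX:
        return "stable"
    if n < UNBALANCED_MAX:
        return "unbalanced"
    return "falling"  # past the point of no return (PONR) in this sketch

def detect_ponr(samples, com_ref):
    """Index of the first sample classified as 'falling', or None."""
    for i, com in enumerate(samples):
        if fall_stage(com, com_ref) == "falling":
            return i
    return None
```

Since the sensors run at 100 Hz, a returned index `i` corresponds to a PONR estimate at `i * 10` ms into the sequence.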

Author(s): Zhe Xiao, Xin Chen, Li Zhou, ...

Traditional optical music recognition (OMR) is an important technology that automatically recognizes scanned paper music sheets. In this study, traditional OMR is combined with robotics, and a real-time OMR system for a dulcimer musical robot is proposed. This system gives the musical robot a stronger ability to perceive and understand music. The proposed OMR system reads music scores and converts the recognized information into a standard electronic music file for the dulcimer musical robot, thus achieving real-time performance. During recognition, note groups and isolated notes are treated separately. Specially structured note groups are identified by primitive decomposition and structural analysis: each group is decomposed into three fundamental elements, the note stem, the note head, and the note beams. Isolated music symbols are recognized using shape model descriptors. Tests on real pictures taken live by a camera show that the proposed method achieves a high recognition rate.
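One building block of the primitive decomposition above, locating a note stem, can be sketched as a column-projection test on a binary glyph: a stem shows up as a thin column of black pixels spanning most of the glyph height. The grid representation and the height fraction are illustrative assumptions, not the paper's actual method:

```python
# Minimal sketch of one primitive-decomposition step: find a note stem as a
# tall, thin run of black pixels using column projections. The binary-grid
# input and the 0.8 height fraction are assumptions for illustration only.

def column_heights(glyph):
    """Count black pixels (1s) in each column of a binary glyph grid."""
    return [sum(row[c] for row in glyph) for c in range(len(glyph[0]))]

def find_stem(glyph, min_fraction=0.8):
    """Return the index of the first column whose black-pixel count covers
    at least min_fraction of the glyph height (a stem candidate), or None."""
    h = len(glyph)
    for c, height in enumerate(column_heights(glyph)):
        if height >= min_fraction * h:
            return c
    return None
```

In a fuller pipeline, removing the detected stem column would leave the head and beam blobs, which connected-component analysis could then separate.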


Sensors · 2020 · Vol 20 (21) · pp. 6126
Author(s): Tae Hyong Kim, Ahnryul Choi, Hyun Mu Heo, Hyunggun Kim, Joung Hwan Mun

Pre-impact fall detection can detect a fall before a body segment hits the ground. When integrated with a protective system, it can directly prevent injury from the impact. The impact acceleration peak magnitude is one of the key measurement factors affecting the severity of an injury, and it can serve as a design parameter for wearable protective devices. In this study, a novel method is proposed to predict the impact acceleration magnitude after loss of balance using a single inertial measurement unit (IMU) sensor and a sequential deep learning model. Twenty-four healthy participants took part in fall experiments covering falls in five directions. Each participant wore a single IMU sensor on the waist to collect tri-axial accelerometer and angular velocity data. A deep learning method, bi-directional long short-term memory (LSTM) regression, is applied to predict a fall's impact acceleration magnitude prior to impact. To improve prediction performance, data augmentation techniques were applied to enlarge the training dataset. Our proposed model showed a mean absolute percentage error (MAPE) of 6.69 ± 0.33% with an r value of 0.93 when all three types of data augmentation were applied. Additionally, MAPE fell significantly, by 45.2%, when the number of training datasets was increased four-fold. These results show that the impact acceleration magnitude can be used as an activation parameter for fall prevention, such as in a wearable airbag system, by optimizing the deployment process to minimize fall injury in real time.
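The evaluation metric and the dataset-growth step can both be sketched briefly. The MAPE formula below is standard; the jitter augmentation (additive Gaussian noise) is one common technique for inertial data and is an assumption here, as the abstract does not name its three techniques, and the noise level is illustrative:

```python
import random

def mape(actual, predicted):
    """Mean absolute percentage error, in percent."""
    return 100.0 * sum(abs(a - p) / abs(a)
                       for a, p in zip(actual, predicted)) / len(actual)

def jitter(signal, sigma=0.01, rng=None):
    """One common augmentation for inertial signals: add Gaussian noise to
    each sample (sigma is an assumed value, not the paper's)."""
    rng = rng or random.Random(0)
    return [x + rng.gauss(0.0, sigma) for x in signal]

def augment(dataset, copies=3, rng=None):
    """Grow the training set: the originals plus `copies` jittered versions
    of each, i.e. a 4-fold increase when copies=3 as in the abstract."""
    rng = rng or random.Random(0)
    out = list(dataset)
    for sig in dataset:
        for _ in range(copies):
            out.append(jitter(sig, rng=rng))
    return out
```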


2020 · Vol 29 (12) · pp. 2050190
Author(s): Amel Ben Mahjoub, Mohamed Atri

Action recognition is a very active area of computer vision. In the last few years, there has been growing interest in deep learning networks such as Long Short-Term Memory (LSTM) architectures, owing to their efficiency in processing long time sequences. In light of these developments, there is now considerable interest in developing accurate action recognition approaches with low complexity. This paper introduces a method for learning from depth activity videos based on LSTM and classification fusion. The first step extracts compact depth video features: we start by calculating Depth Motion Maps (DMM) from each sequence, then encode and concatenate contour and texture DMM characteristics using the histogram-of-oriented-gradients and local-binary-patterns descriptors. The second step classifies the depth videos using a naive Bayes fusion approach. Three classifiers, a collaborative representation classifier, a kernel-based extreme learning machine, and an LSTM, are trained separately to obtain classification scores. Finally, we fuse the classification scores of all classifiers with the naive Bayes method to obtain the final predicted label. Our proposed method achieves a significant improvement in recognition rate compared to previous work on the Kinect v2 and UTD-MHAD human action datasets.
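The fusion step can be sketched as treating each classifier's normalized per-class scores as independent likelihoods and multiplying them class-by-class; this naive-independence product is the core of naive Bayes fusion, though the paper's exact normalization may differ:

```python
# Sketch of naive Bayes score fusion over several classifiers. Assumes each
# classifier emits non-negative per-class scores; the normalization choice
# here is an assumption, not necessarily the paper's.

def normalize(scores):
    """Turn raw non-negative per-class scores into probabilities."""
    total = sum(scores)
    return [s / total for s in scores]

def naive_bayes_fusion(score_lists):
    """Fuse per-class score vectors from several classifiers by treating
    them as independent likelihoods: multiply per class, renormalize."""
    fused = [1.0] * len(score_lists[0])
    for scores in score_lists:
        probs = normalize(scores)
        fused = [f * p for f, p in zip(fused, probs)]
    return normalize(fused)

def predict(score_lists):
    """Index of the class with the highest fused probability."""
    fused = naive_bayes_fusion(score_lists)
    return max(range(len(fused)), key=fused.__getitem__)
```

In practice one would sum log-scores rather than multiply raw probabilities to avoid underflow with many classifiers; the direct product keeps the idea visible here.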


2021 · Vol 2021 · pp. 1-12
Author(s): Lifang He, Gaimin Jin, Sang-Bing Tsai

This article uses a Field-Programmable Gate Array (FPGA) as a carrier and IP cores to build a System on Programmable Chip (SOPC) English speech recognition system. The SOPC system follows a modular hardware design method: apart from the independently developed hardware acceleration module and its control module, the other modules are implemented in software or with IP provided by the Xilinx development tools. The hardware acceleration IP adopts a top-down design, provides multiple operation components running in parallel, and uses pipelining, which speeds up data processing so that only one operation cycle is required to obtain a result. On the recognition side, a more effective training algorithm is proposed, the Genetic Continuous Hidden Markov Model (GA_CHMM), which uses a genetic algorithm to train the CHMM directly. The optimal model is found by encoding the CHMM parameter values and applying selection, crossover, and mutation operations according to a fitness function; the decoded optimal parameter values define the CHMM, and English speech recognition is then performed with the CHMM algorithm. This approach saves substantial training time, thereby improving recognition rate and speed. The paper also studies the optimization of the embedded system software: by converting the software algorithms to fixed-point arithmetic and optimizing system storage, the real-time response of the system was reduced from about 10 seconds to an average of 220 milliseconds. Optimizing the CHMM algorithm improved real-time performance further, significantly shortening the average recognition time. The system achieves a recognition rate of over 90% when the English vocabulary contains fewer than 200 words.
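The encode/select/crossover/mutate loop described above can be sketched generically. Here the fitness function is a stand-in (distance to a known target vector) rather than the CHMM likelihood the paper optimizes, and all hyperparameters are illustrative:

```python
import random

rng = random.Random(42)

# Stand-in fitness: the paper would score a decoded CHMM by the likelihood of
# the training utterances; here we simply reward closeness to a target vector
# so the select/crossover/mutate loop is runnable in isolation.
TARGET = [0.2, 0.5, 0.8]

def fitness(genome):
    return -sum((g - t) ** 2 for g, t in zip(genome, TARGET))

def crossover(a, b):
    """Single-point crossover between two parent genomes."""
    cut = rng.randrange(1, len(a))
    return a[:cut] + b[cut:]

def mutate(genome, rate=0.2, scale=0.1):
    """Perturb each gene with probability `rate` by Gaussian noise."""
    return [g + rng.gauss(0.0, scale) if rng.random() < rate else g
            for g in genome]

def evolve(pop_size=30, genes=3, generations=40):
    """Truncation selection: keep the fitter half, refill with offspring."""
    pop = [[rng.random() for _ in range(genes)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[:pop_size // 2]
        children = [mutate(crossover(rng.choice(parents), rng.choice(parents)))
                    for _ in range(pop_size - len(parents))]
        pop = parents + children
    return max(pop, key=fitness)

best = evolve()
```

Because the fitter half survives unchanged each generation, the best fitness is monotonically non-decreasing, which is the property that makes this loop usable as a CHMM trainer.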


2021
Author(s): Mohammed Y. Alzahrani, Alwi M. Bamhdi

Abstract In recent years, the use of the internet of things (IoT) has increased dramatically, and cybersecurity concerns have grown in tandem. Cybersecurity has become a major challenge for institutions and companies of all sizes, with threats growing in number and developing at a rapid pace. Artificial intelligence (AI) can to a large extent help meet this challenge, since it provides a powerful framework that allows organisations to stay one step ahead of sophisticated cyber threats. AI provides real-time feedback, helping daily alerts to be investigated and analysed, effective decisions to be made, and quick responses to be mounted. AI-based capabilities make attack detection, security, and mitigation more accurate for intelligence gathering and analysis, and they enable proactive protective countermeasures against attacks. In this study, we propose a robust system specifically designed to detect botnet attacks on IoT devices. It innovatively combines a convolutional neural network with a long short-term memory mechanism to detect two common and serious IoT attacks (BASHLITE and Mirai) on four types of security camera. The datasets, which contained both normal and malicious network packets, were collected from real-time lab-connected camera devices in IoT environments. The experiments showed that the proposed system performed strongly according to the evaluation metrics. For detecting the botnet on the Provision PT-737E camera, the system gave weighted-average results of 88% precision, 87% recall, and 83% F1 score; for classifying botnet attacks and normal packets on the Provision PT-838 camera, the results were 94% precision, 89% recall, and 85% F1 score.
The intelligent security system using this advanced deep learning model thus successfully detected botnet attacks infecting camera devices connected to IoT applications.
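The weighted-average metrics reported above are standard: per-class precision, recall, and F1, each averaged with weights proportional to class support. A minimal sketch of that computation, independent of any particular classifier:

```python
from collections import Counter

def weighted_prf(y_true, y_pred):
    """Weighted-average precision, recall, and F1 over the classes present
    in y_true, each class weighted by its support (fraction of true labels),
    matching the usual 'weighted average' reporting convention."""
    support = Counter(y_true)
    n = len(y_true)
    precision = recall = f1 = 0.0
    for cls, sup in support.items():
        tp = sum(1 for t, p in zip(y_true, y_pred) if t == cls and p == cls)
        pred_pos = sum(1 for p in y_pred if p == cls)
        p = tp / pred_pos if pred_pos else 0.0
        r = tp / sup
        f = 2 * p * r / (p + r) if (p + r) else 0.0
        w = sup / n
        precision += w * p
        recall += w * r
        f1 += w * f
    return precision, recall, f1
```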


Micromachines · 2021 · Vol 12 (10) · pp. 1219
Author(s): Qingyang Yu, Peng Zhang, Yucheng Chen

Human motion state recognition technology based on flexible, wearable sensor devices has been widely applied in human–computer interaction and health monitoring. In this study, a new type of flexible capacitive pressure sensor is designed and applied to the recognition of human motion states. The electrode layers use multi-walled carbon nanotubes (MWCNTs) as the conductive material, with microstructured polydimethylsiloxane (PDMS) as the flexible substrate. A composite film of PDMS and barium titanate (BaTiO3), which has a high dielectric constant and low dielectric loss, serves as the intermediate dielectric layer. The sensor offers high sensitivity (2.39 kPa−1), a wide pressure range (0–120 kPa), fine pressure resolution (6.8 Pa), fast response time (16 ms), fast recovery time (8 ms), low hysteresis, and good stability. A human motion state recognition system is designed around a multi-layer back-propagation neural network, which collects, processes, and recognizes the sensor signals of different motion states (sitting, standing, walking, and running). The results indicate that the overall recognition rate of the system reaches 94%, demonstrating the feasibility of human motion state recognition based on the flexible wearable sensor. The system therefore has high application potential in wearable motion detection.
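The classification stage of such a system reduces to a forward pass through a small multi-layer network ending in a softmax over the four motion states. The layer sizes, features, and (random, untrained) weights below are assumptions purely to show the computation; a real system would obtain the weights by back-propagation training on labeled sensor windows:

```python
import math
import random

CLASSES = ["sitting", "standing", "walking", "running"]
rng = random.Random(7)

# Toy weights for a 3-input, 8-hidden, 4-output network (assumed sizes).
W1 = [[rng.uniform(-1, 1) for _ in range(3)] for _ in range(8)]
W2 = [[rng.uniform(-1, 1) for _ in range(8)] for _ in range(4)]

def forward(features):
    """One forward pass: tanh hidden layer, softmax output layer."""
    hidden = [math.tanh(sum(w * x for w, x in zip(row, features)))
              for row in W1]
    logits = [sum(w * h for w, h in zip(row, hidden)) for row in W2]
    m = max(logits)                      # subtract max for numeric stability
    exps = [math.exp(z - m) for z in logits]
    total = sum(exps)
    return [e / total for e in exps]

def classify(features):
    """Motion state with the highest softmax probability."""
    probs = forward(features)
    return CLASSES[max(range(len(CLASSES)), key=probs.__getitem__)]
```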


Author(s): Doreen Jirak, Stephan Tietz, Hassan Ali, Stefan Wermter

Abstract Recent developments in sensors that track human movements and gestures have enabled rapid progress in domains like medical rehabilitation and robotic control. The inertial measurement unit (IMU) in particular is an excellent device for real-time scenarios, as it delivers data input rapidly. A computational model must therefore be able to learn gesture sequences in a fast yet robust way. We recently introduced an echo state network (ESN) framework for continuous gesture recognition (Tietz et al., 2019), including novel approaches for gesture spotting, i.e., the automatic detection of the start and end phases of a gesture. Although our results showed good classification performance, we identified factors that significantly degrade performance, such as subgestures and gesture variability. To address these issues, we include experiments with Long Short-Term Memory (LSTM) networks, a state-of-the-art model for sequence processing, to compare the results with our framework and to evaluate robustness against pitfalls in the recognition process. We analyze how the two conceptually different approaches process continuous, variable-length gesture sequences, yielding interesting comparisons across the distinct gesture executions. In addition, our results demonstrate that the ESN framework achieves performance comparable to the LSTM network with significantly lower training times. We conclude that ESNs are viable models for continuous gesture recognition, delivering reasonable performance for applications requiring real-time behavior, as in robotic or rehabilitation tasks. From our discussion of this comparative study, we suggest prospective improvements at both the experimental and the network architecture level.
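The training-time advantage of ESNs comes from their structure: the input and reservoir weights stay fixed and random, and only a linear readout is trained. A minimal reservoir-update sketch (sizes and scaling are assumptions; a real ESN scales the reservoir matrix to a chosen spectral radius rather than the crude per-row scaling used here):

```python
import math
import random

rng = random.Random(1)
N_IN, N_RES = 3, 20          # assumed sizes, e.g. 3 IMU channels in
SPECTRAL_SCALE = 0.9         # crude stand-in for spectral-radius scaling

# Fixed random weights: never trained in an ESN (only the readout is).
W_in = [[rng.uniform(-0.5, 0.5) for _ in range(N_IN)] for _ in range(N_RES)]
W = [[rng.uniform(-1, 1) * SPECTRAL_SCALE / N_RES for _ in range(N_RES)]
     for _ in range(N_RES)]

def step(state, u):
    """One reservoir update: x' = tanh(W_in u + W x)."""
    return [math.tanh(sum(wi * ui for wi, ui in zip(W_in[i], u)) +
                      sum(wr * xr for wr, xr in zip(W[i], state)))
            for i in range(N_RES)]

def run(inputs):
    """Drive the reservoir with an input sequence; collect all states.
    A trained linear readout would map these states to gesture labels."""
    state = [0.0] * N_RES
    states = []
    for u in inputs:
        state = step(state, u)
        states.append(state)
    return states
```

Fitting the readout is a single linear regression over collected states, which is why ESN training is so much cheaper than back-propagating through an LSTM.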


2019 · Vol 2019 · pp. 1-8
Author(s): Hee-Un Kim, Tae-Suk Bae

Navigation over the last several decades has largely been aided by the global navigation satellite system (GNSS), and with the advent of the multi-GNSS era, more and more satellites are available for navigation purposes. However, navigation is generally carried out by point positioning based on pseudoranges. Real-time kinematic (RTK) positioning and its extension, network RTK (NRTK), were introduced for better positioning and navigation, and further improvement has been investigated by combining other sensors such as the inertial measurement unit (IMU). Meanwhile, deep learning techniques have recently been evolving in many fields, including autonomous vehicle navigation, because deep learning can combine various sensors without complicated analytical modeling of each individual sensor. In this study, we structured multilayer recurrent neural networks (RNN) to improve the accuracy and stability of GNSS absolute solutions for autonomous vehicle navigation. Specifically, long short-term memory (LSTM) is an especially useful algorithm for time series data such as navigation at moderate platform speeds. In an experiment conducted in a testing area, the LSTM algorithm improved the positioning accuracy by about 40% compared to GNSS-only navigation without any external bias information. Once the bias is accounted for, the accuracy improves by up to a factor of eight over the GNSS absolute positioning results. The bias terms of the solution need to be estimated within the model by optimizing the number of layers as well as the nodes in each layer, which remains for further research.
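The LSTM recurrence at the heart of such a model is a small, fixed set of gate equations per time step. A bare-bones cell update (biases omitted for brevity, weights random and untrained, sizes assumed; the paper's full model adds layers and a position readout on top):

```python
import math
import random

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

rng = random.Random(3)
N_IN, N_HID = 4, 6           # assumed: 4 input features, 6 hidden units

def rand_mat(rows, cols):
    return [[rng.uniform(-0.3, 0.3) for _ in range(cols)] for _ in range(rows)]

# One weight matrix per gate over the concatenated [input, previous hidden].
Wf, Wi, Wo, Wc = (rand_mat(N_HID, N_IN + N_HID) for _ in range(4))

def lstm_step(x, h_prev, c_prev):
    """One LSTM cell update: forget (f), input (i), and output (o) gates
    plus the tanh candidate (g); c and h are the new cell and hidden state."""
    z = x + h_prev                     # concatenated input vector
    def gate(W, act):
        return [act(sum(w * v for w, v in zip(row, z))) for row in W]
    f = gate(Wf, sigmoid)
    i = gate(Wi, sigmoid)
    o = gate(Wo, sigmoid)
    g = gate(Wc, math.tanh)
    c = [ff * cc + ii * gg for ff, cc, ii, gg in zip(f, c_prev, i, g)]
    h = [oo * math.tanh(cc) for oo, cc in zip(o, c)]
    return h, c
```

Running `lstm_step` over a sequence of per-epoch GNSS features and regressing the final hidden state onto position corrections is, in outline, how such a time-series model is applied.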


Sensors · 2019 · Vol 19 (14) · pp. 3170
Author(s): Zhang, Yang, Qian, Zhang

In recent years, surface electromyography (sEMG) signals have been increasingly used in pattern recognition and rehabilitation. In this paper, a real-time hand gesture recognition model using sEMG is proposed. We use an armband to acquire sEMG signals and apply a sliding window approach to segment the data for feature extraction. A feedforward artificial neural network (ANN) is built and trained on the training dataset. At test time, a gesture is recognized once the number of matching labels output by the ANN classifier reaches an activation threshold. In the experiment, we collected real sEMG data from twelve subjects and used a set of five gestures from each subject to evaluate our model, obtaining an average recognition rate of 98.7% and an average response time of 227.76 ms, only one-third of the gesture duration. The pattern recognition system may therefore be able to recognize a gesture before the gesture is completed.
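The segmentation and activation-threshold logic can be sketched directly. The consecutive-label interpretation of the threshold and the "rest" label below are assumptions, since the abstract does not spell out the exact rule:

```python
def sliding_windows(signal, width, stride):
    """Segment a signal stream into overlapping fixed-width windows, the
    unit on which features are extracted and the classifier is run."""
    return [signal[i:i + width]
            for i in range(0, len(signal) - width + 1, stride)]

def recognize(window_labels, activation_times):
    """Emit a gesture once the classifier has produced the same label
    `activation_times` times in a row (assumed interpretation of the
    activation threshold); a 'rest' label never triggers."""
    run_label, run_len = None, 0
    for label in window_labels:
        if label == run_label:
            run_len += 1
        else:
            run_label, run_len = label, 1
        if run_label != "rest" and run_len >= activation_times:
            return run_label
    return None
```

Because the decision fires as soon as the threshold is met, recognition can complete well before the gesture ends, which is the behavior behind the one-third response-time result.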


Author(s): Ziyang Xie, Li Li, Xu Xu

Objective We propose a method for recognizing driver distraction in real time using a wrist-worn inertial measurement unit (IMU). Background Distracted driving results in thousands of fatal vehicle accidents every year. Recognizing distraction using body-worn sensors may help mitigate driver distraction and consequently improve road safety. Methods Twenty participants performed common behaviors associated with distracted driving while operating a driving simulator. Acceleration data collected from an IMU secured to each driver's right wrist were used to detect potential manual distractions from 2-s streaming windows. Three deep neural network-based classifiers were compared on their ability to recognize the type of distracting behavior using F1-scores, a measure of accuracy considering both recall and precision. Results The results indicated that a convolutional long short-term memory (ConvLSTM) deep neural network outperformed a convolutional neural network (CNN) and a recurrent neural network with long short-term memory (LSTM) at recognizing distracted driving behaviors. The within-participant F1-scores for the ConvLSTM, CNN, and LSTM were 0.87, 0.82, and 0.82, respectively; the between-participant F1-scores were 0.87, 0.76, and 0.85, respectively. Conclusion The results of this pilot study indicate that the proposed distraction mitigation system, using a wrist-worn IMU and a ConvLSTM classifier, has potential for improving transportation safety.
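The between-participant evaluation above is typically run as leave-one-subject-out cross-validation: each fold trains on all participants but one and tests on the held-out one. A small split generator illustrating that protocol (the tuple layout of the samples is an assumption for the sketch):

```python
def leave_one_subject_out(samples):
    """Yield (held_out_id, train, test) splits for between-participant
    evaluation: each fold tests on one participant and trains on the rest.
    `samples` is assumed to be a list of (participant_id, features, label)."""
    ids = sorted({pid for pid, _, _ in samples})
    for held_out in ids:
        train = [s for s in samples if s[0] != held_out]
        test = [s for s in samples if s[0] == held_out]
        yield held_out, train, test
```

Within-participant scores, by contrast, come from splitting each participant's own data into train and test portions, which is why they are usually higher.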

