Helping hearing-impaired in emergency situations: A deep learning-based approach

IEEE Access ◽  
2022 ◽  
pp. 1-1
Author(s):  
Qazi Mohammad Areeb ◽  
Ms. Maryam ◽  
Mohammad Nadeem ◽  
Roobaea Alroobaea ◽  
Faisal Anwer
Sensors ◽  
2020 ◽  
Vol 20 (21) ◽  
pp. 6256
Author(s):  
Boon Giin Lee ◽  
Teak-Wei Chong ◽  
Wan-Young Chung

Sign language was designed to allow hearing-impaired people to interact with others. Nonetheless, knowledge of sign language is uncommon in society, which creates a communication barrier with the hearing-impaired community. Many studies of sign language recognition using computer vision (CV) have been conducted worldwide to reduce this barrier. However, the CV approach is restricted by the camera's viewing angle and is highly affected by environmental factors. In addition, CV usually involves machine learning, which requires a team of experts and costly hardware; this increases the cost of real-world deployment. Thus, this study aims to design and implement a smart wearable American Sign Language (ASL) interpretation system using deep learning, with sensor fusion across six inertial measurement units (IMUs). The IMUs are attached to the five fingertips and the back of the hand to recognize sign language gestures, so the proposed method is not restricted by a camera's field of view. The study shows that the model achieves an average recognition rate of 99.81% for dynamic ASL gestures. Moreover, the proposed ASL recognition system can be further integrated with ICT and IoT technology to provide a feasible solution that helps hearing-impaired people communicate with others and improves their quality of life.
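The abstract does not spell out the network architecture, so the following Python sketch only illustrates the feature-level IMU fusion idea: readings from the six IMUs are concatenated per timestep and fed to a small LSTM classifier. The channel counts, window length, and gesture vocabulary size are assumptions, not values from the paper.

```python
# Illustrative sketch only: feature-level fusion of six wearable IMUs
# followed by an LSTM gesture classifier. All sizes are hypothetical.
import torch
import torch.nn as nn

NUM_IMUS = 6          # five fingertips + back of the hand
CHANNELS_PER_IMU = 6  # 3-axis accelerometer + 3-axis gyroscope (assumed)
NUM_CLASSES = 27      # hypothetical ASL gesture vocabulary size

class IMUFusionClassifier(nn.Module):
    def __init__(self, hidden=128):
        super().__init__()
        # Fuse by concatenating every IMU's channels at each timestep.
        self.lstm = nn.LSTM(NUM_IMUS * CHANNELS_PER_IMU, hidden,
                            num_layers=2, batch_first=True)
        self.head = nn.Linear(hidden, NUM_CLASSES)

    def forward(self, x):
        # x: (batch, time, NUM_IMUS * CHANNELS_PER_IMU)
        out, _ = self.lstm(x)
        return self.head(out[:, -1])  # classify from the last timestep

model = IMUFusionClassifier()
window = torch.randn(8, 100, NUM_IMUS * CHANNELS_PER_IMU)  # 8 windows, 100 steps
logits = model(window)                                      # (8, NUM_CLASSES)
```

Feature-level fusion keeps the model simple; a per-IMU encoder with late fusion would be an equally plausible reading of "sensor fusion" here.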


2021 ◽  
Vol 11 (14) ◽  
pp. 6340
Author(s):  
Michal Ptaszynski ◽  
Fumito Masui ◽  
Yuuto Fukushima ◽  
Yuuto Oikawa ◽  
Hiroshi Hayakawa ◽  
...  

In this paper, we present a Deep Learning-based system to support information triaging on Twitter during emergency situations, such as disasters, and other influential events, such as political elections. The system is based on the assumption that different types of information are required immediately after an event and some time after it occurs. In a preliminary study, we analyze the language behavior of Twitter users during two kinds of influential events, namely, natural disasters and political elections. In the study, we analyze the credibility of the information users include in tweets in the above-mentioned situations by classifying it into two kinds: Primary Information (first-hand reports) and Secondary Information (second-hand reports, retweets, etc.). We also perform sentiment analysis of the data to check user attitudes toward the occurring events. Next, we present the structure of the system and compare a number of classifiers, including the proposed one based on Convolutional Neural Networks. Finally, we validate the system by performing an in-depth analysis of information obtained after a number of additional events, including the eruption of the Japanese volcano Ontake on 27 September 2014, as well as heavy rains and typhoons that occurred in 2020. We confirm that the method works sufficiently well even when trained on data from nearly 10 years ago, which strongly suggests that the model generalizes well and captures the important aspects of each type of classified information.
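As an illustration of the kind of convolutional classifier the paper names, here is a minimal Kim-style 1D CNN over token embeddings that separates Primary from Secondary Information. The vocabulary size, embedding dimension, and filter settings are assumptions; tokenization and training are omitted.

```python
# Minimal sketch of a convolutional tweet classifier in the spirit of the
# system described above. All hyperparameters are assumed, not taken
# from the paper.
import torch
import torch.nn as nn

class TweetCNN(nn.Module):
    def __init__(self, vocab_size=30000, embed_dim=128, num_classes=2):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim, padding_idx=0)
        # Parallel convolutions over 2-, 3-, and 4-token windows.
        self.convs = nn.ModuleList(
            nn.Conv1d(embed_dim, 64, kernel_size=k) for k in (2, 3, 4)
        )
        self.head = nn.Linear(64 * 3, num_classes)  # primary vs. secondary

    def forward(self, token_ids):
        # token_ids: (batch, seq_len)
        x = self.embed(token_ids).transpose(1, 2)  # (batch, embed_dim, seq_len)
        # Max-pool each feature map over time, then concatenate.
        pooled = [conv(x).relu().max(dim=2).values for conv in self.convs]
        return self.head(torch.cat(pooled, dim=1))

model = TweetCNN()
batch = torch.randint(1, 30000, (16, 40))  # 16 tweets, 40 token ids each
logits = model(batch)                       # (16, 2)
```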


Author(s):  
Mohammed Abid Abrar ◽  
A. N. M. Nafiul Islam ◽  
Mohammad Muntasir Hassan ◽  
Mohammad Tariqul Islam ◽  
Celia Shahnaz ◽  
...  
