Integrated and Configurable Voice Activation and Speaker Verification System for a Robotic Exoskeleton Glove

Author(s):  
Yunfei Guo ◽  
Wenda Xu ◽  
Sarthak Pradhan ◽  
Cesar Bravo ◽  
Pinhas Ben-Tzvi

Abstract: Efficient human-machine interfaces (HMIs) for exoskeletons remain an active research topic, and several methods have been proposed, including computer vision, EEG (electroencephalogram), and voice recognition. However, some of these methods lack sufficient accuracy, security, and portability. This paper proposes an HMI referred to as the integrated trigger-word configurable voice activation and speaker verification system (CVASV). The CVASV system is designed for embedded systems with limited computing power and can be applied to any exoskeleton platform. It consists of two main sections: an API-based voice activation section and a deep-learning-based text-independent speaker verification section. These two sections are combined into a system that allows the user to configure the activation trigger word and verify the user's commands in real time.
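
The minimal Python sketch below illustrates the kind of two-stage flow the abstract describes: a command is accepted only if the configured trigger word is detected and a text-independent speaker-verification check passes. The keyword spotter, embedding network, and similarity threshold here are hypothetical placeholders chosen for illustration, not the paper's actual components.

# Minimal sketch of a two-stage CVASV-style pipeline (illustrative only).
# The trigger-word detector and speaker-embedding model are stand-ins
# (hypothetical components), not the authors' implementation.
import numpy as np


def detect_trigger_word(audio: np.ndarray, trigger_word: str) -> bool:
    """Placeholder for an API-based keyword spotter."""
    # A real system would call a speech-recognition API and check whether
    # the configured trigger word appears in the transcript.
    return True  # assume the trigger word was detected for this sketch


def speaker_embedding(audio: np.ndarray) -> np.ndarray:
    """Placeholder for a deep text-independent speaker-embedding network."""
    # A real system would run a neural network over acoustic features;
    # here we fake a deterministic fixed-length embedding per utterance.
    rng = np.random.default_rng(abs(int(audio.sum() * 1e6)) % (2**32))
    return rng.standard_normal(256)


def verify_speaker(audio: np.ndarray, enrolled: np.ndarray, threshold: float = 0.7) -> bool:
    """Cosine-similarity check between the test utterance and the enrolled profile."""
    emb = speaker_embedding(audio)
    score = float(np.dot(emb, enrolled) / (np.linalg.norm(emb) * np.linalg.norm(enrolled)))
    return score >= threshold


def process_command(audio: np.ndarray, trigger_word: str, enrolled: np.ndarray) -> bool:
    """Accept a command only if the trigger word is heard AND the speaker matches."""
    return detect_trigger_word(audio, trigger_word) and verify_speaker(audio, enrolled)


if __name__ == "__main__":
    enrolled_profile = speaker_embedding(np.ones(16000))  # enrollment utterance
    command_audio = np.ones(16000)                        # incoming command
    print("Command accepted:", process_command(command_audio, "open hand", enrolled_profile))

Gating the actuator command on both checks is what gives the reported combination of configurability (the trigger word) and security (the speaker match); the threshold would be tuned on enrollment data in practice.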

Sensors ◽  
2021 ◽  
Vol 21 (17) ◽  
pp. 5878 ◽  
Author(s):  
Fares Bougourzi ◽  
Riccardo Contino ◽  
Cosimo Distante ◽  
Abdelmalik Taleb-Ahmed

Since the appearance of the COVID-19 pandemic (at the end of 2019 in Wuhan, China), the recognition of COVID-19 from medical imaging has become an active research topic for the machine learning and computer vision community. This paper is based on the results obtained in the 2021 COVID-19 SPGC challenge, which aims to classify volumetric CT scans into normal, COVID-19, or community-acquired pneumonia (CAP) classes. To this end, we proposed a deep-learning-based approach (CNR-IEMN) that consists of two main stages. In the first stage, we trained four deep learning architectures with a multi-task strategy for slice-level classification. In the second stage, we used the previously trained models with an XGBoost classifier to classify the whole CT scan as normal, COVID-19, or CAP. Our approach achieved good results on the validation set, with an overall accuracy of 87.75% and sensitivities of 96.36%, 52.63%, and 95.83% for COVID-19, CAP, and normal, respectively. On the three SPGC test datasets, our approach placed fifth overall in the COVID-19 challenge while achieving the best result for COVID-19 sensitivity, and it placed second on two of the three test sets.
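
As a rough illustration of the second stage, the sketch below pools slice-level class probabilities from several slice classifiers into a fixed-length per-scan feature vector and feeds it to an XGBoost classifier for the scan-level decision. The pooling statistics, synthetic data, and hyperparameters are assumptions for illustration, not the CNR-IEMN configuration.

# Illustrative two-stage aggregation: slice-level probabilities -> scan-level XGBoost.
import numpy as np
from xgboost import XGBClassifier

N_CLASSES = 3          # normal, COVID-19, CAP
N_MODELS = 4           # number of slice-level CNNs
RNG = np.random.default_rng(0)


def scan_features(slice_probs: np.ndarray) -> np.ndarray:
    """Pool slice probabilities of shape (n_models, n_slices, n_classes) into one vector."""
    # Simple per-model/per-class statistics over slices; the original paper
    # may use a different aggregation.
    return np.concatenate([
        slice_probs.mean(axis=1).ravel(),   # mean probability per model/class
        slice_probs.max(axis=1).ravel(),    # peak probability per model/class
    ])


def fake_scan(label: int, n_slices: int = 40) -> np.ndarray:
    """Generate synthetic slice probabilities biased toward the given label."""
    alpha = np.ones(N_CLASSES) + 4 * np.eye(N_CLASSES)[label]
    return RNG.dirichlet(alpha, size=(N_MODELS, n_slices))


# Build a small synthetic training set of scans.
labels = RNG.integers(0, N_CLASSES, size=200)
X = np.stack([scan_features(fake_scan(y)) for y in labels])

clf = XGBClassifier(n_estimators=100, max_depth=3, eval_metric="mlogloss")
clf.fit(X, labels)

test_scan = fake_scan(label=1)  # a scan biased toward the COVID-19 class
pred = clf.predict(scan_features(test_scan)[None, :])[0]
print("Predicted scan-level class:", ["normal", "COVID-19", "CAP"][pred])

Pooling over slices makes the scan-level classifier independent of the number of slices per CT volume, which is one common motivation for this kind of two-stage design.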


2020 ◽  
Vol 2 ◽  
pp. 58-61 ◽  
Author(s):  
Syed Junaid ◽  
Asad Saeed ◽  
Zeili Yang ◽  
Thomas Micic ◽  
Rajesh Botchu

The advances in deep learning algorithms, exponential growth in computing power, and unprecedented availability of digital patient data have led to a wave of interest and investment in artificial intelligence in health care. No radiology conference is complete without a substantial dedication to AI. Many radiology departments are keen to get involved but are unsure of where and how to begin. This short article provides a simple road map to help departments get involved with the technology, demystify key concepts, and pique interest in the field. We have broken the journey down into seven steps: problem, team, data, kit, neural network, validation, and governance.


2020 ◽  
Author(s):  
Ying Tong ◽  
Wei Xue ◽  
Shanluo Huang ◽  
Lu Fan ◽  
Chao Zhang ◽  
...  

2020 ◽  
Author(s):  
Kong Aik Lee ◽  
Koji Okabe ◽  
Hitoshi Yamamoto ◽  
Qiongqiong Wang ◽  
Ling Guo ◽  
...  

Author(s):  
Soonshin Seo ◽  
Daniel Jun Rim ◽  
Minkyu Lim ◽  
Donghyun Lee ◽  
Hosung Park ◽  
...  
