A stress recognition system using HRV parameters and machine learning techniques

Author(s):  
Giorgos Giannakakis ◽  
Kostas Marias ◽  
Manolis Tsiknakis

Shorthand is employed to take notes of speeches delivered by VIPs in a short time. The two main shorthand systems are Pitman and Teeline. An automatic shorthand recognition system is essential for using handheld devices to convert shorthand quickly back into the original text. This paper addresses and compares the recognition of the Teeline alphabet using machine learning (SVM and KNN) and deep learning (CNN) techniques. The dataset was prepared using a digital pen and then processed and stored with an Android application. The prepared dataset is fed to the proposed system and the recognition accuracies are compared; the deep learning technique achieved higher accuracy than the machine learning techniques. The experimental setup was implemented on the MATLAB 2018b platform.
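
A minimal Python sketch of the kind of comparison described (the paper itself used MATLAB 2018b): SVM and KNN classifiers evaluated on flattened glyph images. The array shapes, placeholder random data, and hyperparameters are illustrative assumptions, not the paper's setup.

```python
# Hypothetical sketch: comparing SVM and KNN on a Teeline glyph dataset.
# Assumes glyphs have been rasterized to 28x28 grayscale images and flattened.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC
from sklearn.neighbors import KNeighborsClassifier
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)
# Placeholder data standing in for the pen-captured Teeline alphabet images.
X = rng.random((520, 28 * 28))        # 20 samples per letter, flattened pixels
y = np.repeat(np.arange(26), 20)      # 26 Teeline alphabet classes

X_tr, X_te, y_tr, y_te = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=0)

for name, clf in [("SVM", SVC(kernel="rbf", C=10.0)),
                  ("KNN", KNeighborsClassifier(n_neighbors=3))]:
    clf.fit(X_tr, y_tr)
    print(name, "accuracy:", accuracy_score(y_te, clf.predict(X_te)))
```

A CNN baseline would follow the same train/test protocol, operating directly on the 2D glyph images rather than flattened vectors.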


2018 ◽  
Vol 32 (19) ◽  
pp. 1850212 ◽  
Author(s):  
Sahil Sharma ◽  
Vijay Kumar

Face recognition is a vastly researched topic in the field of computer vision. A lot of work has been done on facial recognition in both two and three dimensions, but work on face recognition that is invariant to image processing attacks remains very limited. This paper presents three classes of image processing attacks on a face recognition system, namely image enhancement attacks, geometric attacks, and image noise attacks. Well-known machine learning techniques have been used to train and test the face recognition system on two databases: the Bosphorus Database and the University of Milano Bicocca three-dimensional (3D) Face Database (UMBDB). Three classes of classification models, namely discriminant analysis, support vector machine, and k-nearest neighbor, have been implemented along with ensemble techniques. The significance of the machine learning techniques is discussed, and visual verification has been performed under multiple image processing attacks.
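
A hedged sketch of how the three attack classes named above could be generated with Pillow and NumPy before probing a trained face classifier. The specific parameter values (contrast factor, rotation angle, noise sigma) are illustrative assumptions, not the paper's settings.

```python
# Hypothetical implementations of the three attack classes on a probe image.
import numpy as np
from PIL import Image, ImageEnhance

def enhancement_attack(img: Image.Image, factor: float = 1.8) -> Image.Image:
    """Image enhancement attack: change contrast."""
    return ImageEnhance.Contrast(img).enhance(factor)

def geometric_attack(img: Image.Image, angle: float = 15.0) -> Image.Image:
    """Geometric attack: in-plane rotation."""
    return img.rotate(angle)

def noise_attack(img: Image.Image, sigma: float = 20.0) -> Image.Image:
    """Image noise attack: additive Gaussian noise."""
    arr = np.asarray(img, dtype=np.float32)
    noisy = arr + np.random.normal(0.0, sigma, arr.shape)
    return Image.fromarray(np.clip(noisy, 0, 255).astype(np.uint8))

# Usage (path is hypothetical):
# probe = Image.open("face.png").convert("L")
# attacked = [enhancement_attack(probe), geometric_attack(probe), noise_attack(probe)]
# Each attacked image would then be passed to the trained discriminant / SVM / KNN model.
```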


Electronics ◽  
2020 ◽  
Vol 9 (3) ◽  
pp. 518 ◽  
Author(s):  
Jacopo Sini ◽  
Antonio Costantino Marceddu ◽  
Massimo Violante

The development of autonomous driving cars is a complex activity, which poses challenges about ethics, safety, cybersecurity, and social acceptance. The latter, in particular, poses new problems since passengers are used to manually driven vehicles; hence, they need to transfer their trust from a person to a computer. To smooth the transition towards autonomous vehicles, a delicate calibration of the driving functions should be performed, making the automation's decisions as close as possible to the passengers' expectations. The complexity of this calibration lies in the presence of a person in the loop: different settings of a given algorithm should be evaluated by assessing the human reaction to the vehicle's decisions. With this work, we propose an objective method to classify people's reactions to vehicle decisions. By adopting machine learning techniques, it is possible to analyze the passengers' emotions while driving with alternative vehicle calibrations. Through the analysis of these emotions, it is possible to obtain an objective metric of the passengers' comfort. As a result, we developed a proof-of-concept implementation of a simple, yet effective, emotion recognition system. It can be deployed either in real vehicles or in simulators during the calibration of the driving functions.
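
One way such an objective comfort metric could be derived, shown as a minimal sketch: frame-level emotion predictions from an emotion recognition model are aggregated into a single score per calibration. The emotion labels, valence weights, and aggregation rule are assumptions for illustration, not the authors' implementation.

```python
# Hypothetical aggregation of per-frame emotion labels into a comfort score.
from collections import Counter

# Emotion labels that an emotion recognition model might emit for one passenger
# riding with one calibration of the driving function (placeholder data).
frames = ["neutral", "neutral", "happy", "fear", "neutral", "surprise", "fear"]

# Assumed valence weights: positive emotions raise the score, negative ones lower it.
weights = {"happy": 1.0, "neutral": 0.5, "surprise": 0.0, "fear": -1.0, "anger": -1.0}

counts = Counter(frames)
comfort = sum(weights[e] * n for e, n in counts.items()) / len(frames)
print(f"comfort score for this calibration: {comfort:+.2f}")  # higher = closer to expectations
```

Comparing such scores across calibrations would give the objective basis for tuning that the text describes.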


Author(s):  
Hong Lee ◽  
Brijesh Verma ◽  
Michael Li ◽  
Ashfaqur Rahman

Handwriting recognition is the process of recognizing handwritten text, on paper in the case of offline handwriting recognition or on a tablet in the case of online handwriting recognition, and converting it into editable text. In this chapter, the authors focus on offline handwriting recognition, meaning that the recognition system accepts a scanned handwritten page as input and outputs editable recognized text. Handwriting recognition has been an active research area for more than four decades, but some of the major problems remain unsolved. Many techniques, including machine learning techniques, have been used to improve accuracy. This chapter describes the problems of handwriting recognition and presents solutions using machine learning techniques for the major ones. It also reviews state-of-the-art techniques with their results and outlines future research for improving handwriting recognition.
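
The scanned-page-to-editable-text flow described above can be summarized as a pipeline; the sketch below is a high-level illustration with placeholder stages, not the chapter's method. All function names and thresholds are assumptions.

```python
# Hypothetical offline handwriting recognition pipeline: scanned page in, text out.
from typing import List
import numpy as np

def binarize(page: np.ndarray, threshold: int = 128) -> np.ndarray:
    """Convert a grayscale scan into a black-and-white image."""
    return (page < threshold).astype(np.uint8)

def segment_characters(binary_page: np.ndarray) -> List[np.ndarray]:
    """Placeholder: split the page into per-character sub-images
    (connected-component or projection-based segmentation would go here)."""
    return []

def classify_character(glyph: np.ndarray) -> str:
    """Placeholder: a trained machine learning model would map the glyph to a character."""
    return "?"

def recognize(page: np.ndarray) -> str:
    return "".join(classify_character(g) for g in segment_characters(binarize(page)))
```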


Sensors ◽  
2020 ◽  
Vol 20 (8) ◽  
pp. 2326
Author(s):  
Ayesha Pervaiz ◽  
Fawad Hussain ◽  
Huma Israr ◽  
Muhammad Ali Tahir ◽  
Fawad Riasat Raja ◽  
...  

The advent of new devices, technology, machine learning techniques, and the availability of large free speech corpora has resulted in rapid and accurate speech recognition. In the last two decades, extensive research has been carried out by researchers and various organizations to experiment with new techniques and their applications in speech processing systems. There are several speech-command-based applications in the areas of robotics, IoT, ubiquitous computing, and various human-computer interfaces. Various researchers have worked on improving the efficiency of speech-command-based systems using the speech command dataset; however, none of them has addressed noise in this dataset. Noise is one of the major challenges in any speech recognition system, as real-time noise is a highly variable and unavoidable factor that degrades the performance of speech recognition systems, particularly those that have not learned the noise efficiently. We thoroughly analyse the latest trends in speech recognition and evaluate the speech command dataset with different machine-learning-based and deep-learning-based techniques. A novel technique is proposed for noise robustness by augmenting the training data with noise. Our proposed technique is tested on clean and noisy data along with locally generated data and achieves much better results than existing state-of-the-art techniques, thus setting a new benchmark.
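
A minimal sketch of the kind of noise augmentation described above: mixing a noise recording into each clean training utterance at a randomly chosen signal-to-noise ratio. The SNR range, sample rate, and placeholder waveforms are assumptions, not the paper's exact settings.

```python
# Hypothetical noise augmentation for speech-command training data.
import numpy as np

def add_noise(clean: np.ndarray, noise: np.ndarray, snr_db: float) -> np.ndarray:
    """Mix `noise` into `clean` so that the result has the requested SNR (in dB)."""
    noise = np.resize(noise, clean.shape)          # tile/trim the noise to match length
    clean_power = np.mean(clean ** 2)
    noise_power = np.mean(noise ** 2) + 1e-12
    scale = np.sqrt(clean_power / (noise_power * 10 ** (snr_db / 10)))
    return clean + scale * noise

rng = np.random.default_rng(0)
utterance = rng.standard_normal(16000)             # 1 s placeholder waveform at 16 kHz
background = rng.standard_normal(16000)            # placeholder noise recording
augmented = add_noise(utterance, background, snr_db=rng.uniform(0, 20))
```

Training on a mix of clean and augmented utterances is what lets the model "learn the noise" and stay robust at test time.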


2021 ◽  
Vol 2021 ◽  
pp. 1-17
Author(s):  
Gamal Tharwat ◽  
Abdelmoty M. Ahmed ◽  
Belgacem Bouallegue

In recent years, the role of pattern recognition in systems based on human-computer interaction (HCI) has grown in computer vision applications and machine learning. One of the most important of these applications is recognizing the hand gestures used in communicating with deaf people, in particular recognizing the dashed letters that open surahs of the Quran. In this paper, we propose an Arabic Alphabet Sign Language Recognition System (AArSLRS) using a vision-based approach. The proposed system consists of four stages: data acquisition, preprocessing, feature extraction, and classification. The system deals with three types of datasets: bare hands against a dark background, bare hands against a light background, and hands wearing dark-colored gloves. AArSLRS begins by obtaining an image of the alphabet gesture, then detects the hand in the image and isolates it from the background using one of the proposed methods, after which the hand features are extracted according to the chosen extraction method. For the classification stage, we used supervised learning techniques to classify the 28-letter Arabic alphabet using 9240 images, focusing on the 14 alphabetic letters that open Quran surahs in Quranic Sign Language (QSL). AArSLRS achieved an accuracy of 99.5% with the K-Nearest Neighbor (KNN) classifier.
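
A hedged sketch of the final two stages (feature extraction and classification): HOG-style features computed from segmented hand images and fed to a K-Nearest Neighbor classifier. The feature choice, image size, and placeholder data are illustrative assumptions; the paper's own feature-extraction methods may differ.

```python
# Hypothetical feature extraction + KNN classification for hand-gesture images.
import numpy as np
from skimage.feature import hog
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(1)
# Placeholder 64x64 grayscale hand images for the 28 Arabic alphabet signs.
images = rng.random((28 * 30, 64, 64))
labels = np.repeat(np.arange(28), 30)

features = np.array([hog(img, pixels_per_cell=(8, 8), cells_per_block=(2, 2))
                     for img in images])
X_tr, X_te, y_tr, y_te = train_test_split(
    features, labels, test_size=0.2, stratify=labels, random_state=1)

knn = KNeighborsClassifier(n_neighbors=3).fit(X_tr, y_tr)
print("KNN accuracy:", accuracy_score(y_te, knn.predict(X_te)))
```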

