Interpreting Brain Waves

Author(s):  
Noran Magdy El-Kafrawy ◽  
Doaa Hegazy ◽  
Mohamed F. Tolba

BCI (Brain-Computer Interface) gives you the power to manipulate things around you just by thinking of what you want to do. It allows your thoughts to be interpreted by the computer and acted upon. This could be utilized in helping disabled people, remotely controlling robots, or even building personalized systems that adapt to your mood. The most important part of any BCI application is interpreting the brain signals, as there are many mental tasks to be considered. In this chapter, the authors focus on interpreting motor imagery tasks, specifically imagining the left hand, right hand, foot, and tongue. Interpreting the signal consists of two main steps: feature extraction and classification. For feature extraction, Empirical Mode Decomposition (EMD) was used, and for classification, a Support Vector Machine (SVM) with a Radial Basis Function (RBF) kernel. The authors evaluated this system using the BCI competition IV dataset and reached a very promising accuracy.
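The RBF kernel at the heart of the classifier used in this and the following studies measures similarity between two feature vectors. A minimal sketch (the `gamma` value below is illustrative, not taken from the chapter):

```python
import math

def rbf_kernel(x, y, gamma=0.5):
    """Gaussian (RBF) kernel: K(x, y) = exp(-gamma * ||x - y||^2)."""
    sq_dist = sum((a - b) ** 2 for a, b in zip(x, y))
    return math.exp(-gamma * sq_dist)

# Identical feature vectors give maximal similarity (1.0);
# similarity decays toward 0 as the vectors move apart.
print(rbf_kernel([1.0, 2.0], [1.0, 2.0]))  # -> 1.0
```

The SVM then classifies a new signal by a weighted sum of kernel evaluations against its support vectors, so the choice of `gamma` controls how locally the decision boundary bends around the training data.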

2020 ◽  
Vol 8 (2) ◽  
pp. 66-71
Author(s):  
Aveen J. Mohammed ◽  
Hasan S.M. Al-Khaffaf

This paper presents a system for recognizing English fonts from character images. The distance profile is the feature of choice used in this paper. The system extracts a vector of 106 features and feeds it into a support vector machine (SVM) classifier with a radial basis function (RBF) kernel. The experiment is divided into three phases. In the first phase, the system trains the SVM with different Gamma and C parameters. In the second phase, the validation phase, we validate and select the pair of Gamma and C values that yields the best recognition rates. In the final phase, the testing phase, the images are tested and the recognition rate is reported. Experimental results based on 27,620 character glyph images from three English fonts show a 94.82% overall recognition rate.
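As a rough illustration of the distance-profile idea (the paper's exact 106-feature layout is not specified here), a left-edge profile over a binary character image records, per row, how far the first foreground pixel sits from the left margin:

```python
def left_distance_profile(image):
    """For each row of a binary image (lists of 0/1 pixels), return the
    distance from the left edge to the first foreground (1) pixel;
    blank rows map to the full row width."""
    profile = []
    for row in image:
        profile.append(row.index(1) if 1 in row else len(row))
    return profile

# A tiny 4x5 glyph: the profile traces the character's left contour.
glyph = [
    [0, 0, 1, 1, 0],
    [0, 1, 0, 0, 0],
    [1, 1, 1, 0, 0],
    [0, 0, 0, 0, 0],
]
print(left_distance_profile(glyph))  # -> [2, 1, 0, 5]
```

Profiles taken from several directions (left, right, top, bottom) and concatenated would yield a fixed-length vector of the kind fed to the SVM.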


Author(s):  
Belindha Ayu Ardhani ◽  
Nur Chamidah ◽  
Toha Saifudin

Background: The introduction of the Kartu Prakerja (Pre-employment Card) Programme, henceforth KPP, which was claimed to have been launched in order to improve the quality of the workforce, spurred controversy among members of the public. Discussion of the budget, the training materials, and the operations brought out various reactions. Opinions could be largely divided into two groups: positive and negative sentiments.
Objective: This research aims to propose an automated sentiment analysis that focuses on KPP. The findings are expected to be useful in evaluating the services and facilities provided.
Methods: For the sentiment analysis, a Support Vector Machine (SVM) with a Radial Basis Function (RBF) kernel was used in text mining. The data consisted of 500 tweets from July to October 2020, divided into two sets: 80% for training and 20% for testing, with five-fold cross validation.
Results: Descriptive analysis shows that of the 500 tweets, 60% carried negative sentiment and 40% positive sentiment. On the testing data, the average accuracy, sensitivity, specificity, negative-sentiment prediction, and positive-sentiment prediction values were 85.20%, 91.68%, 75.75%, 85.03%, and 86.04%, respectively.
Conclusion: The classification results show that SVM with an RBF kernel performs well in opinion classification. The method can be reused for similar sentiment analyses, and in the KPP case the findings can inform stakeholders in improving the programme.
Keywords: Kartu Prakerja, Sentiment Analysis, Support Vector Machine, Text Mining, Radial Basis Function
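The reported accuracy, sensitivity, and specificity follow directly from a binary confusion matrix. A minimal sketch, assuming the labeling convention 1 = positive sentiment and 0 = negative sentiment (an assumption for illustration, not stated in the abstract):

```python
def binary_metrics(y_true, y_pred):
    """Compute accuracy, sensitivity (recall on the positive class),
    and specificity (recall on the negative class) from binary labels."""
    tp = sum(t == 1 and p == 1 for t, p in zip(y_true, y_pred))
    tn = sum(t == 0 and p == 0 for t, p in zip(y_true, y_pred))
    fp = sum(t == 0 and p == 1 for t, p in zip(y_true, y_pred))
    fn = sum(t == 1 and p == 0 for t, p in zip(y_true, y_pred))
    return {
        "accuracy": (tp + tn) / len(y_true),
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
    }

# Toy fold: 8 of 10 tweets classified correctly.
m = binary_metrics([1, 1, 1, 1, 1, 0, 0, 0, 0, 0],
                   [1, 1, 1, 1, 0, 0, 0, 0, 0, 1])
print(m)  # accuracy 0.8, sensitivity 0.8, specificity 0.8
```

Under five-fold cross validation, these metrics would be computed once per fold and averaged, which is how the 85.20% average accuracy in the abstract is obtained.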


2020 ◽  
Vol 5 (2) ◽  
pp. 504
Author(s):  
Matthias Omotayo Oladele ◽  
Temilola Morufat Adepoju ◽  
Olaide Abiodun Olatoke ◽  
Oluwaseun Adewale Ojo

Yorùbá is one of the three main languages spoken in Nigeria. It is a tonal language that carries accents on the vowel alphabets. There are twenty-five (25) alphabets in the Yorùbá language, one of which is a digraph (GB). Due to the difficulty of typing handwritten Yorùbá documents, there is a need for a handwritten recognition system that can convert handwritten text to digital format. This study discusses an offline Yorùbá handwritten word recognition system (OYHWR) that recognizes Yorùbá uppercase alphabets. Handwritten characters and words were obtained from different writers using the Paint application and M708 graphics tablets. The characters were used for training and the words were used for testing. Pre-processing was done on the images, and the geometric features of the images were extracted using zoning and gradient-based feature extraction. Geometric features are the different line types that form a particular character, such as vertical, horizontal, and diagonal lines. The geometric features used are the number of horizontal lines, number of vertical lines, number of right diagonal lines, number of left diagonal lines, total length of all horizontal lines, total length of all vertical lines, total length of all right-slanting lines, total length of all left-slanting lines, and the area of the skeleton. The characters are divided into 9 zones, and gradient feature extraction was used to extract the horizontal and vertical components and geometric features in each zone. The words were fed into the support vector machine classifier, and the performance was evaluated based on recognition accuracy. Support vector machine is a two-class classifier; hence, a multiclass SVM classifier, least squares support vector machine (LSSVM), was used for word recognition. The one-vs-one strategy and RBF kernel were used, and recognition accuracies of 66.7%, 83.3%, 85.7%, 87.5%, and 100% were obtained for the tested words. The low recognition rate for some of the words could be a result of similarity in the extracted features.
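The zoning step described above can be illustrated with a small sketch that splits a binary character image into a 3x3 grid (the 9-zone layout follows the text; counting foreground pixels per zone is a simplification of the gradient and geometric features actually extracted):

```python
def zone_pixel_counts(image, zones=3):
    """Split a binary image (list of equal-length 0/1 rows) into a
    zones x zones grid and count foreground pixels in each zone,
    row-major from the top-left zone."""
    h, w = len(image), len(image[0])
    counts = []
    for zr in range(zones):
        for zc in range(zones):
            r0, r1 = zr * h // zones, (zr + 1) * h // zones
            c0, c1 = zc * w // zones, (zc + 1) * w // zones
            counts.append(sum(image[r][c]
                              for r in range(r0, r1)
                              for c in range(c0, c1)))
    return counts

# 6x6 image with the top-left 2x2 corner filled: only zone 0 is non-empty.
img = [[1 if r < 2 and c < 2 else 0 for c in range(6)] for r in range(6)]
print(zone_pixel_counts(img))  # -> [4, 0, 0, 0, 0, 0, 0, 0, 0]
```

In the full system, per-zone gradient components and line counts (rather than raw pixel counts) would be concatenated across the 9 zones to form the feature vector passed to the LSSVM.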

