Brain Computing Interface using Deep Learning for Blind People

2019 ◽  
Vol 8 (4) ◽  
pp. 8227-8230

The developing area of research in Brain Computer Interface (BCI) is used to enhance the quality of human-computer applications. A BCI decodes an individual's brain signals and converts them into commands, bridging the human neural world and the outer physical world. Under normal circumstances the brain uses the body to interact with the external world, but it can also be deprived of some of its sensing abilities, as in blindness or deafness. In this study, we analyze the brain's behavior in a spatial activity using BCI for blind people. A common belief is that blind people compensate for their lack of vision by using their other senses. Understanding the brain's activity with a BCI system remains a very difficult challenge; therefore we propose a data mining technique. In this research work, a deep learning approach based on the framework of Convolutional Neural Networks (CNN) with Long Short-Term Memory (LSTM) helps us to discover the brain activity of blind people.
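The abstract names a CNN combined with LSTM but gives no architectural details, so the following is only a minimal sketch of such a hybrid in tf.keras; the channel count, window length, class count, and every layer size are assumptions chosen purely for illustration.

```python
# Hypothetical CNN + LSTM sketch for classifying windowed EEG signals.
# Shapes and layer sizes are illustrative; the paper does not specify them.
import tensorflow as tf
from tensorflow.keras import layers, models

N_CHANNELS = 14      # assumed number of EEG electrodes
WINDOW_LEN = 256     # assumed samples per analysis window
N_CLASSES = 2        # assumed number of response classes

model = models.Sequential([
    # 1-D convolutions extract local temporal features from each window
    layers.Conv1D(32, kernel_size=7, activation='relu',
                  input_shape=(WINDOW_LEN, N_CHANNELS)),
    layers.MaxPooling1D(pool_size=2),
    layers.Conv1D(64, kernel_size=5, activation='relu'),
    layers.MaxPooling1D(pool_size=2),
    # LSTM models longer-range dependencies across the pooled feature sequence
    layers.LSTM(64),
    layers.Dense(N_CLASSES, activation='softmax'),
])

model.compile(optimizer='adam',
              loss='sparse_categorical_crossentropy',
              metrics=['accuracy'])
model.summary()
```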

2020 ◽  
Vol 6 (1) ◽  
pp. 4
Author(s):  
Puspad Kumar Sharma ◽  
Nitesh Gupta ◽  
Anurag Shrivastava

In image processing applications, one of the main preprocessing phases is image enhancement, which is used to produce a higher-quality, enhanced image from the original input image. These enhanced images can be used in many applications, such as remote sensing and geo-satellite imagery. The quality of an image is affected by several conditions, such as poor illumination, atmospheric conditions, a wrong lens aperture setting of the camera, noise, etc. [2]. Such degraded/low-exposure images therefore need to be enhanced by increasing their brightness as well as their contrast, which is possible through image enhancement methods. In this research work, different image enhancement techniques are discussed and reviewed along with their results. The aim of this study is to determine how deep learning approaches have been used for image enhancement. Deep learning is a machine learning approach that is currently revolutionizing a number of disciplines, including image processing and computer vision. This paper attempts to apply deep learning to image filtering, specifically low-light image enhancement. The review given in this paper should help future researchers overcome these problems and design efficient algorithms that enhance image quality.
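The review does not commit to one network, so the sketch below is only a minimal fully convolutional baseline for low-light enhancement in tf.keras: it maps a dark RGB image to an enhanced one and would be trained on paired dark/well-exposed images. Every layer size and the loss choice are assumptions.

```python
# Minimal fully convolutional sketch for low-light image enhancement.
# Trained on pairs (low_light_image, well_exposed_image); sizes are illustrative.
import tensorflow as tf
from tensorflow.keras import layers, models

def build_enhancer():
    inp = layers.Input(shape=(None, None, 3))          # RGB image, any resolution
    x = layers.Conv2D(32, 3, padding='same', activation='relu')(inp)
    x = layers.Conv2D(32, 3, padding='same', activation='relu')(x)
    x = layers.Conv2D(32, 3, padding='same', activation='relu')(x)
    out = layers.Conv2D(3, 3, padding='same', activation='sigmoid')(x)  # enhanced image in [0, 1]
    return models.Model(inp, out)

model = build_enhancer()
model.compile(optimizer='adam', loss='mae')  # L1 loss against the reference image
model.summary()
```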


2020 ◽  
Vol 10 (1) ◽  
Author(s):  
Sofia B. Dias ◽  
Sofia J. Hadjileontiadou ◽  
José Diniz ◽  
Leontios J. Hadjileontiadis

The Coronavirus (Covid-19) pandemic has imposed a complete shut-down of face-to-face teaching on universities and schools, forcing a crash course in online learning plans and technology for students and faculty. In the midst of this unprecedented crisis, video conferencing platforms (e.g., Zoom, WebEx, MS Teams) and learning management systems (LMSs), like Moodle, Blackboard and Google Classroom, are being adopted and heavily used as online learning environments (OLEs). However, as such media solely provide the platform for e-interaction, effective methods are needed to predict the learner's behavior in the OLEs and should be available as supportive tools to educators and metacognitive triggers to learners. Here we show, for the first time, that Deep Learning techniques can be used to handle LMS users' interaction data and form a novel predictive model, namely DeepLMS, that can forecast the quality of interaction (QoI) with the LMS. Using Long Short-Term Memory (LSTM) networks, DeepLMS achieves an average testing Root Mean Square Error (RMSE) < 0.009 and an average correlation coefficient between ground truth and predicted QoI values of r ≥ 0.97 (p < 0.05), when tested on QoI data from one database collected before and two collected during the Covid-19 pandemic. DeepLMS personalized QoI forecasting scaffolds users' online learning engagement and provides educators with an evaluation path, in addition to content-related assessment, enriching the overall view of learners' motivation and participation in the learning process.
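DeepLMS is described only at the level of "LSTM networks forecasting QoI"; the sketch below shows one plausible reading of that, a univariate LSTM regressor over a sliding window of past QoI values, evaluated with RMSE as in the paper. The window length, layer sizes, and file name are assumptions, not the published configuration.

```python
# Sketch of an LSTM regressor that forecasts the next QoI value from the
# previous LOOKBACK values; hyperparameters are illustrative only.
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers, models

LOOKBACK = 30  # assumed number of past QoI observations per input window

def make_windows(series, lookback=LOOKBACK):
    """Turn a 1-D QoI series into (window, next-value) training pairs."""
    X, y = [], []
    for i in range(len(series) - lookback):
        X.append(series[i:i + lookback])
        y.append(series[i + lookback])
    return np.array(X)[..., None], np.array(y)

model = models.Sequential([
    layers.LSTM(64, input_shape=(LOOKBACK, 1)),
    layers.Dense(1),                       # predicted QoI for the next time step
])
model.compile(optimizer='adam',
              loss='mse',
              metrics=[tf.keras.metrics.RootMeanSquaredError()])

# qoi_series = np.loadtxt('qoi.csv')       # hypothetical QoI time series
# X, y = make_windows(qoi_series)
# model.fit(X, y, epochs=100, validation_split=0.2)
```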


2020 ◽  
Vol 9 (2) ◽  
pp. 1049-1054

In this paper, we have tried to predict flight delays using different machine learning and deep learning techniques. With such a model it becomes easier to predict whether a flight will be delayed or not. Factors like 'WeatherDelay', 'NASDelay', 'Destination' and 'Origin' play a vital role in this model. Using machine learning algorithms like Random Forest, Support Vector Machine (SVM) and K-Nearest Neighbors (KNN), the f1-score, precision, recall, support and accuracy have been computed. To extend the model, a Long Short-Term Memory (LSTM) RNN architecture has also been employed. In the paper, the Pittsburgh dataset from the Bureau of Transportation Statistics (BTS) is used. The results computed from the above-mentioned algorithms have been compared. Further, the results were visualized for various airlines to find the maximum delay, and an AUC-ROC curve has been plotted for the Random Forest algorithm. The aim of our research work is to predict delays so as to minimize losses and increase customer satisfaction.
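The comparison of Random Forest, SVM and KNN with precision/recall/f1 reporting maps directly onto scikit-learn; a hedged sketch follows, where the CSV path, the feature columns and the 'Delayed' label are placeholders standing in for whatever the BTS-derived dataset actually contains.

```python
# Sketch: comparing Random Forest, SVM and KNN on a flight-delay dataset
# with scikit-learn; the file name and column names are placeholders.
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.ensemble import RandomForestClassifier
from sklearn.svm import SVC
from sklearn.neighbors import KNeighborsClassifier
from sklearn.metrics import classification_report

df = pd.read_csv('pittsburgh_flights.csv')                      # hypothetical file
X = df[['WeatherDelay', 'NASDelay', 'Origin', 'Destination']]   # assumed feature set
X = pd.get_dummies(X, columns=['Origin', 'Destination'])        # one-hot encode airports
y = df['Delayed']                                               # assumed binary label

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42)

classifiers = {
    'Random Forest': RandomForestClassifier(n_estimators=100, random_state=42),
    'SVM': SVC(),
    'KNN': KNeighborsClassifier(n_neighbors=5),
}
for name, clf in classifiers.items():
    clf.fit(X_train, y_train)
    print(name)
    print(classification_report(y_test, clf.predict(X_test)))   # precision, recall, f1, support
```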


2016 ◽  
Vol 2016 ◽  
pp. 1-10 ◽  
Author(s):  
Narusci S. Bastos ◽  
Diana F. Adamatti ◽  
Cleo Z. Billa

Even with emerging technologies, such as Brain-Computer Interface (BCI) systems, understanding how our brains work is a very difficult challenge, so we propose to use a data mining technique to help us in this task. As a case study, we analyzed the brain behaviour of blind people and sighted people in a spatial activity. There is a common belief that blind people compensate for their lack of vision using the other senses. If an object is given to sighted people and they are asked to identify it, the sense of vision will probably be the most determinant one; if the same experiment is repeated with blind people, they will have to use other senses to identify the object. In this work, we propose a methodology that uses decision trees (DT) to investigate the difference in how the brains of blind people and people with vision react to a spatial problem. We chose the DT algorithm because it can discover patterns in the brain signal and its output is human interpretable. Our results show that using DT to analyze brain signals can help us to understand the brain's behaviour.
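Since the methodology is a decision tree over brain-signal features whose rules stay human-readable, a minimal scikit-learn sketch is given below; the precomputed feature files (e.g. per-channel band powers), the label coding and the tree depth are all assumptions.

```python
# Sketch: training an interpretable decision tree on EEG-derived features
# (e.g. band powers per electrode); file names and labels are placeholders.
import numpy as np
from sklearn.tree import DecisionTreeClassifier, export_text
from sklearn.model_selection import cross_val_score

# X: one row per trial, one column per extracted feature (assumed layout)
X = np.load('eeg_features.npy')          # hypothetical precomputed features
y = np.load('group_labels.npy')          # 0 = sighted, 1 = blind (assumed coding)

tree = DecisionTreeClassifier(max_depth=4, random_state=0)   # shallow tree stays readable
print('CV accuracy:', cross_val_score(tree, X, y, cv=5).mean())

tree.fit(X, y)
print(export_text(tree))                 # human-interpretable decision rules
```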


Biomimetics ◽  
2021 ◽  
Vol 6 (1) ◽  
pp. 12
Author(s):  
Marvin Coto-Jiménez

Statistical parametric speech synthesis based on Hidden Markov Models (HMM) has been an important technique for the production of artificial voices, due to its ability to produce intelligible results and sophisticated features such as voice conversion and accent modification with a small footprint, particularly for low-resource languages where deep learning-based techniques remain unexplored. Despite this progress, the quality of HMM-based results does not reach that of the predominant approaches based on unit selection of speech segments or on deep learning. One proposal to improve the quality of HMM-based speech has been to incorporate postfiltering stages, which aim to increase quality while preserving the advantages of the process. In this paper, we present a new approach to postfiltering synthesized voices through the application of discriminative postfilters built from several long short-term memory (LSTM) deep neural networks. Our motivation stems from modeling a specific mapping from synthesized to natural speech on segments corresponding to voiced or unvoiced sounds, given the different qualities of those sounds and the distinct degradation HMM-based voices can present on each. The paper analyses the discriminative postfilters obtained using five voices, evaluated with three objective measures, including Mel cepstral distance, and with subjective tests. The results indicate the advantages of the discriminative postfilters in comparison with the HTS voice and the non-discriminative postfilters.
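The postfilter is described as an LSTM network that maps synthesized speech towards natural speech, trained separately for voiced and unvoiced segments; the sketch below shows one frame-level reading of that idea in tf.keras. The feature dimensionality (e.g. Mel-cepstral coefficients) and layer sizes are assumptions, not the published postfilter.

```python
# Sketch: an LSTM postfilter that maps frames of HMM-synthesized speech
# features to the corresponding natural-speech features. One such model
# would be trained per segment class (voiced / unvoiced); sizes are assumed.
import tensorflow as tf
from tensorflow.keras import layers, models

N_COEFFS = 40     # assumed Mel-cepstral coefficients per frame

postfilter = models.Sequential([
    layers.Input(shape=(None, N_COEFFS)),              # variable-length utterances
    layers.Bidirectional(layers.LSTM(128, return_sequences=True)),
    layers.LSTM(128, return_sequences=True),
    layers.TimeDistributed(layers.Dense(N_COEFFS)),    # enhanced coefficients per frame
])
postfilter.compile(optimizer='adam', loss='mse')
# Training pairs: (synthesized_utterance_features, natural_utterance_features)
```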


2019 ◽  
Vol 2 (3) ◽  
pp. 786-797
Author(s):  
Feyza Cevik ◽  
Zeynep Hilal Kilimci

Parkinson's disease is a common neurodegenerative neurological disorder which affects the patient's quality of life, has significant social and economic effects, and is difficult to diagnose early due to the gradual appearance of symptoms. Examining the discussion of Parkinson's disease on social media platforms such as Twitter provides a setting where patients communicate with each other during both the diagnosis and treatment stages of the disease. The purpose of this work is to evaluate and compare the sentiment analysis of people about Parkinson's disease by using deep learning and word embedding models. To the best of our knowledge, this is the very first study to analyze Parkinson's disease from social media by using word embedding models and deep learning algorithms. In this study, Word2Vec, GloVe, and FastText are employed as word embedding models for the purpose of enriching tweets in terms of semantics, context, and syntax. Convolutional Neural Networks (CNNs), Recurrent Neural Networks (RNNs), and Long Short-Term Memory networks (LSTMs) are implemented for the classification task. This study demonstrates the efficiency of using word embedding models and deep learning algorithms to understand the needs of patients and provides a valuable contribution to the treatment process by analyzing their sentiments, with 93.63% accuracy.
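As one concrete reading of the word-embedding plus deep-learning pipeline, the following sketch trains Word2Vec embeddings with gensim (4.x API) and feeds the embedded tweets to a small LSTM classifier. The toy tweets, vocabulary handling, sequence length and layer sizes are illustrative assumptions, not the paper's configuration.

```python
# Sketch: Word2Vec embeddings (gensim 4.x) feeding an LSTM tweet classifier.
# The toy data, sequence length and layer sizes are illustrative assumptions.
import numpy as np
from gensim.models import Word2Vec
import tensorflow as tf
from tensorflow.keras import layers, models
from tensorflow.keras.preprocessing.sequence import pad_sequences

tweets = [["parkinson", "tremor", "worse", "today"],
          ["new", "medication", "helping", "a", "lot"]]    # toy tokenized tweets
labels = np.array([0, 1])                                   # 0 = negative, 1 = positive
MAXLEN = 30

w2v = Word2Vec(sentences=tweets, vector_size=100, window=5, min_count=1)
vocab = {w: i + 1 for i, w in enumerate(w2v.wv.index_to_key)}   # 0 reserved for padding

emb_matrix = np.zeros((len(vocab) + 1, 100))
for word, idx in vocab.items():
    emb_matrix[idx] = w2v.wv[word]

X = pad_sequences([[vocab[w] for w in t] for t in tweets], maxlen=MAXLEN)

model = models.Sequential([
    layers.Input(shape=(MAXLEN,)),
    layers.Embedding(len(vocab) + 1, 100, trainable=False,
                     embeddings_initializer=tf.keras.initializers.Constant(emb_matrix)),
    layers.LSTM(64),
    layers.Dense(1, activation='sigmoid'),
])
model.compile(optimizer='adam', loss='binary_crossentropy', metrics=['accuracy'])
model.fit(X, labels, epochs=2, verbose=0)
```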


2021 ◽  
Vol ahead-of-print (ahead-of-print) ◽  
Author(s):  
Veerraju Gampala ◽  
Praful Vijay Nandankar ◽  
M. Kathiravan ◽  
S. Karunakaran ◽  
Arun Reddy Nalla ◽  
...  

Purpose – The purpose of this paper is to analyze and build a deep learning model that can furnish statistics of COVID-19 and forecast the pandemic outbreak using the Kaggle open research COVID-19 data set. As governments keep COVID-19 data collection up to date, deep learning techniques can be used to predict future outbreaks of coronavirus. The existing long short-term memory (LSTM) model is fine-tuned to forecast the outbreak of COVID-19 with better accuracy, and an empirical data exploration with advanced visualization has been made to comprehend the outbreak of coronavirus.

Design/methodology/approach – This research work presents a fine-tuned LSTM deep learning model using three hidden layers, 200 LSTM unit cells per layer, the ReLU activation function, the Adam optimizer, a mean square error loss function, 200 epochs and, finally, one dense layer to predict one value at a time.

Findings – LSTM is found to be effective in forecasting future values; hence, the fine-tuned LSTM model produces accurate predictions when applied to the COVID-19 data set.

Originality/value – To the authors' knowledge, the fine-tuned LSTM model is developed and tested for the first time on the COVID-19 data set to forecast the outbreak of the pandemic.
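The Design section is specific enough to sketch almost directly: three stacked LSTM layers of 200 units with ReLU activation, the Adam optimizer, mean-square-error loss, 200 epochs, and a single dense output. The look-back window length over the case-count series is the one detail the abstract does not state, so it is assumed below.

```python
# Sketch of the fine-tuned LSTM described in the abstract: three stacked
# LSTM layers of 200 units with ReLU activation, Adam optimizer, MSE loss,
# 200 epochs, and one dense output predicting one value at a time.
# The look-back window length is an assumption not given in the abstract.
import tensorflow as tf
from tensorflow.keras import layers, models

LOOKBACK = 14   # assumed number of past days used to predict the next day

model = models.Sequential([
    layers.LSTM(200, activation='relu', return_sequences=True,
                input_shape=(LOOKBACK, 1)),
    layers.LSTM(200, activation='relu', return_sequences=True),
    layers.LSTM(200, activation='relu'),
    layers.Dense(1),    # one predicted value (e.g. next day's confirmed cases)
])
model.compile(optimizer='adam', loss='mse')
# model.fit(X_train, y_train, epochs=200)   # 200 epochs, as specified in the abstract
```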


Sensors ◽  
2020 ◽  
Vol 20 (9) ◽  
pp. 2498 ◽  
Author(s):  
Robert D. Chambers ◽  
Nathanael C. Yoder

In this paper, we present and benchmark FilterNet, a flexible deep learning architecture for time series classification tasks, such as activity recognition via multichannel sensor data. It adapts popular convolutional neural network (CNN) and long short-term memory (LSTM) motifs which have excelled in activity recognition benchmarks, implementing them in a many-to-many architecture to markedly improve frame-by-frame accuracy, event segmentation accuracy, model size, and computational efficiency. We propose several model variants, evaluate them alongside other published models using the Opportunity benchmark dataset, demonstrate the effect of model ensembling and of altering key parameters, and quantify the quality of the models’ segmentation of discrete events. We also offer recommendations for use and suggest potential model extensions. FilterNet advances the state of the art in all measured accuracy and speed metrics when applied to the benchmarked dataset, and it can be extensively customized for other applications.
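FilterNet's defining trait in the abstract is a many-to-many CNN + LSTM that emits a label for every input frame; the sketch below is a generic stand-in for that pattern, not the published architecture, and the channel count, class count, and layer sizes are assumptions.

```python
# Generic many-to-many CNN + LSTM sketch for frame-by-frame activity
# recognition (not the published FilterNet architecture); sizes are assumed.
import tensorflow as tf
from tensorflow.keras import layers, models

N_SENSOR_CHANNELS = 113   # e.g. a channel count in the spirit of Opportunity (assumed)
N_CLASSES = 18            # assumed number of activity classes, including a null class

model = models.Sequential([
    layers.Input(shape=(None, N_SENSOR_CHANNELS)),          # variable-length windows
    layers.Conv1D(64, kernel_size=5, padding='same', activation='relu'),
    layers.Conv1D(64, kernel_size=5, padding='same', activation='relu'),
    layers.LSTM(128, return_sequences=True),                 # keep one output per frame
    layers.TimeDistributed(layers.Dense(N_CLASSES, activation='softmax')),
])
model.compile(optimizer='adam', loss='sparse_categorical_crossentropy',
              metrics=['accuracy'])
```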


Author(s):  
Rene Avalloni de Morais ◽  
Baidya Nath Saha

Deep learning algorithms have made dramatic progress in the areas of natural language processing and automatic human speech recognition. However, the accuracy of deep learning algorithms depends on the amount and quality of the data, and training deep models requires high-performance computing resources. Against this backdrop, this paper addresses an end-to-end speech recognition system in which we fine-tune the Mozilla DeepSpeech architecture using two different datasets: the LibriSpeech clean dataset and the Harvard speech dataset. We train Long Short-Term Memory (LSTM) based deep Recurrent Neural Network (RNN) models on the Google Colab platform using its GPU resources. Extensive experimental results demonstrate that the Mozilla DeepSpeech model can be fine-tuned on different audio datasets to recognize speech successfully.
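Fine-tuning itself is carried out with Mozilla's training scripts, which the abstract does not detail; for illustration, the sketch below only shows how an exported (fine-tuned) model would then be exercised through the deepspeech Python package. The model, scorer, and audio file names are placeholders.

```python
# Sketch: running inference with a (fine-tuned) DeepSpeech model via the
# deepspeech Python package; file names are placeholders.
import wave
import numpy as np
from deepspeech import Model

model = Model('fine_tuned_model.pbmm')            # exported acoustic model (placeholder name)
model.enableExternalScorer('kenlm.scorer')        # optional language-model scorer (placeholder)

with wave.open('harvard_sentence.wav', 'rb') as w:    # 16 kHz, 16-bit mono audio expected
    audio = np.frombuffer(w.readframes(w.getnframes()), dtype=np.int16)

print(model.stt(audio))                           # decoded transcript
```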


2020 ◽  
Vol 2020 ◽  
pp. 1-29 ◽  
Author(s):  
Dima Suleiman ◽  
Arafat Awajan

In recent years, the volume of textual data has rapidly increased, which has generated a valuable resource for extracting and analysing information. To retrieve useful knowledge within a reasonable time period, this information must be summarised. This paper reviews recent approaches for abstractive text summarisation using deep learning models. In addition, existing datasets for training and validating these approaches are reviewed, and their features and limitations are presented. The Gigaword dataset is commonly employed for single-sentence summary approaches, while the Cable News Network (CNN)/Daily Mail dataset is commonly employed for multisentence summary approaches. Furthermore, the measures that are utilised to evaluate the quality of summarisation are investigated, and Recall-Oriented Understudy for Gisting Evaluation 1 (ROUGE1), ROUGE2, and ROUGE-L are determined to be the most commonly applied metrics. The challenges that are encountered during the summarisation process and the solutions proposed in each approach are analysed. The analysis of the several approaches shows that recurrent neural networks with an attention mechanism and long short-term memory (LSTM) are the most prevalent techniques for abstractive text summarisation. The experimental results show that text summarisation with a pretrained encoder model achieved the highest values for ROUGE1, ROUGE2, and ROUGE-L (43.85, 20.34, and 39.9, respectively). Furthermore, it was determined that most abstractive text summarisation models faced challenges such as the unavailability of a golden token at testing time, out-of-vocabulary (OOV) words, summary sentence repetition, inaccurate sentences, and fake facts.
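Since ROUGE-1, ROUGE-2, and ROUGE-L are the metrics this review keeps returning to, the short sketch below shows how they are typically computed with the rouge-score package; the two example sentences are placeholders, not data from the surveyed systems.

```python
# Sketch: computing ROUGE-1, ROUGE-2 and ROUGE-L with the rouge-score package.
from rouge_score import rouge_scorer

reference = "the cabinet approved the new budget on tuesday"    # gold summary (placeholder)
candidate = "cabinet approves new budget"                       # system summary (placeholder)

scorer = rouge_scorer.RougeScorer(['rouge1', 'rouge2', 'rougeL'], use_stemmer=True)
scores = scorer.score(reference, candidate)
for name, result in scores.items():
    print(f"{name}: precision={result.precision:.3f} "
          f"recall={result.recall:.3f} f1={result.fmeasure:.3f}")
```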

