A Swallowing Decoder Based on Deep Transfer Learning: AlexNet Classification of the Intracranial Electrocorticogram

Author(s):  
Hiroaki Hashimoto ◽  
Seiji Kameda ◽  
Hitoshi Maezawa ◽  
Satoru Oshino ◽  
Naoki Tani ◽  
...  

To realize a brain–machine interface to assist swallowing, neural signal decoding is indispensable. Eight participants with temporal-lobe intracranial electrode implants for epilepsy were asked to swallow during electrocorticogram (ECoG) recording. Raw ECoG signals, or the ECoG power in selected frequency bands, were converted into images whose vertical axis was electrode number and whose horizontal axis was time in milliseconds; these images were used as training data. The data were classified with four labels (Rest, Mouth open, Water injection, and Swallowing). Deep transfer learning was carried out using AlexNet, with power in the high-γ band (75–150 Hz) as the training set. Accuracy reached 74.01%, sensitivity 82.51%, and specificity 95.38%. However, using the raw ECoG signals, the accuracy obtained was 76.95%, comparable to that of the high-γ power. We demonstrated that a version of AlexNet pre-trained on visually meaningful images can be used for transfer learning on visually meaningless images made up of ECoG signals. Moreover, we achieved high decoding accuracy with the raw ECoG signals, allowing us to dispense with the conventional extraction of high-γ power. Thus, for deep transfer learning, the images derived from the raw ECoG signals were equivalent to those derived from the high-γ band.
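The image-construction step the abstract describes (electrodes on the vertical axis, time on the horizontal axis) can be sketched in a few lines. The window length, channel count, and function name below are illustrative choices, not the authors' code:

```python
import numpy as np

def ecog_to_image(ecog, start, width):
    """Cut one time window from a (channels x samples) ECoG array and
    rescale it to an 8-bit grayscale image whose rows are electrodes and
    whose columns are time, as the abstract describes."""
    window = ecog[:, start:start + width].astype(float)
    lo, hi = window.min(), window.max()
    scaled = (window - lo) / (hi - lo + 1e-12)     # normalize to [0, 1]
    return np.rint(scaled * 255).astype(np.uint8)  # 8-bit image for the CNN

# Hypothetical recording: 20 electrodes, 1 s at 1 kHz, one 500 ms window
rng = np.random.default_rng(0)
ecog = rng.standard_normal((20, 1000))
img = ecog_to_image(ecog, start=0, width=500)

LABELS = ["Rest", "Mouth open", "Water injection", "Swallowing"]
```

Each such image, paired with one of the four labels, would then be fed to an AlexNet whose final layer is replaced with a four-way classifier for fine-tuning.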

Author(s):  
Xiaoming Li ◽  
Yan Sun ◽  
Qiang Zhang

In this paper, we focus on developing a novel method to extract sea ice cover (i.e., to discriminate sea ice from open water) using Sentinel-1 (S1) cross-polarization (vertical-horizontal, VH, or horizontal-vertical, HV) data in extra-wide (EW) swath mode, based on the support vector machine (SVM) machine learning algorithm. The classification basis comprises the S1 radar backscatter coefficients and texture features calculated from the S1 data using the gray-level co-occurrence matrix (GLCM). Unlike previous methods, in which appropriate samples are manually selected to train the SVM to classify sea ice and open water, we propose unsupervised generation of the training samples based on two GLCM texture features, entropy and homogeneity, which have contrasting characteristics on sea ice and open water. This eliminates most of the uncertainty of selecting training samples and achieves automatic classification of sea ice and open water from S1 EW data. A comparison shows good agreement between the SAR-derived sea ice cover obtained with the proposed method and visual inspection, with an accuracy of approximately 90–95% based on a few cases. In addition, compared with the Ice Mapping System (IMS) sea ice cover analysis over 728 S1 EW images, the accuracy of the extracted sea ice cover exceeds 80%.
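The contrast the method relies on can be illustrated with a minimal GLCM for a single horizontal offset; the patches, gray-level count, and thresholds below are made up, and the real pipeline computes GLCM features from quantized S1 backscatter:

```python
import numpy as np

def glcm(patch, levels):
    """Normalized gray-level co-occurrence matrix for the single
    horizontal offset (0, 1); `patch` must already be quantized to
    integers in [0, levels). A simplified stand-in for the paper's
    GLCM computation, not the authors' code."""
    m = np.zeros((levels, levels))
    np.add.at(m, (patch[:, :-1].ravel(), patch[:, 1:].ravel()), 1.0)
    return m / m.sum()

def glcm_entropy(p):
    nz = p[p > 0]
    return float(-(nz * np.log2(nz)).sum())

def glcm_homogeneity(p):
    i, j = np.indices(p.shape)
    return float((p / (1.0 + np.abs(i - j))).sum())

# The unsupervised sample-selection idea, on made-up patches: smooth open
# water gives low entropy / high homogeneity, textured sea ice the reverse,
# so thresholding the two features yields training samples automatically.
rng = np.random.default_rng(1)
water = np.full((32, 32), 3)               # uniform "open water" patch
ice = rng.integers(0, 8, size=(32, 32))    # rough "sea ice" patch
e_w, h_w = glcm_entropy(glcm(water, 8)), glcm_homogeneity(glcm(water, 8))
e_i, h_i = glcm_entropy(glcm(ice, 8)), glcm_homogeneity(glcm(ice, 8))
```

Patches whose entropy and homogeneity fall clearly on one side or the other would be taken as automatic training samples for the SVM, replacing manual selection.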


Sensors ◽  
2022 ◽  
Vol 22 (1) ◽  
pp. 380
Author(s):  
Ha-Yeong Yoon ◽  
Jung-Hwa Kim ◽  
Jin-Woo Jeong

The demand for wheelchairs has increased recently as the population of elderly people and patients with disorders grows. However, society still pays little attention to infrastructure that can threaten wheelchair users, such as sidewalks with cracks and potholes. Although various studies have been proposed to recognize such hazards, they mainly depend on RGB images or IMU sensors, which are sensitive to outdoor conditions such as low illumination, bad weather, and unavoidable vibrations, resulting in unsatisfactory and unstable performance. In this paper, we introduce a novel system based on various convolutional neural networks (CNNs) to automatically classify the condition of sidewalks using images captured in depth and infrared modalities. Moreover, we compare the performance of training CNNs from scratch with a transfer learning approach, in which weights learned on the natural image domain (e.g., ImageNet) are fine-tuned to the depth and infrared image domains. In particular, we propose applying a ResNet-152 model pre-trained with self-supervised learning during transfer learning to leverage better image representations. Performance evaluation on the classification of sidewalk condition was conducted with 100% and with 10% of the training data. The experimental results validate the effectiveness and feasibility of the proposed approach and suggest future research directions.
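The 100%-versus-10% evaluation protocol can be illustrated with a deliberately simple stand-in: synthetic two-class features and a nearest-centroid classifier. The study itself fine-tunes CNNs on real depth and infrared images; everything below is a hypothetical sketch of the protocol only:

```python
import numpy as np

rng = np.random.default_rng(0)

def make_split(n_per_class):
    """Synthetic two-class features standing in for sidewalk-condition
    images (hypothetical data; the paper uses depth/infrared captures)."""
    a = rng.normal(-1.0, 1.0, size=(n_per_class, 16))
    b = rng.normal(+1.0, 1.0, size=(n_per_class, 16))
    return np.vstack([a, b]), np.array([0] * n_per_class + [1] * n_per_class)

def nearest_centroid_acc(Xtr, ytr, Xte, yte):
    """Classify each test point by the nearer class centroid."""
    c0, c1 = Xtr[ytr == 0].mean(0), Xtr[ytr == 1].mean(0)
    pred = (np.linalg.norm(Xte - c1, axis=1)
            < np.linalg.norm(Xte - c0, axis=1)).astype(int)
    return float((pred == yte).mean())

Xtr, ytr = make_split(200)
Xte, yte = make_split(100)

acc_full = nearest_centroid_acc(Xtr, ytr, Xte, yte)     # 100% of training data
keep = rng.random(len(ytr)) < 0.10                      # random 10% subset
acc_small = nearest_centroid_acc(Xtr[keep], ytr[keep], Xte, yte)
```

The interesting question the paper asks is how much the low-data accuracy recovers when the model starts from pre-trained (especially self-supervised) weights rather than from scratch.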


Author(s):  
Guokai Liu ◽  
Liang Gao ◽  
Weiming Shen ◽  
Andrew Kusiak

Condition monitoring and fault diagnosis are of great interest to the manufacturing industry. Deep learning algorithms have shown promising results in equipment prognostics and health management, but their success has been hindered by excessive training time. In addition, deep learning algorithms face the domain adaptation dilemma encountered in dynamic application environments. The emerging concept of broad learning addresses both the training time and the domain adaptation issues. In this paper, a broad transfer learning algorithm is proposed for the classification of bearing faults. Data of the same frequency are used to construct one- and two-dimensional training data sets to analyze the performance of the broad transfer and deep learning algorithms. A broad learning algorithm contains two main layers, an augmented feature layer and a classification layer; here, a sparse auto-encoder is employed within the broad learning algorithm to extract features. The optimal solution of a redefined cost function, with the sample size limited to ten per class in the target domain, gives the broad learning classifier its domain adaptation capability. The effectiveness of the proposed algorithm is demonstrated on a benchmark dataset. Computational experiments demonstrate the superior efficiency and accuracy of the proposed algorithm over the deep learning algorithms tested.
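The two-layer structure of a broad learning system can be sketched with random feature and enhancement nodes and a closed-form (ridge-regression) output layer, which is why it trains much faster than backpropagated deep networks. This sketch omits the paper's sparse auto-encoder feature training and its redefined transfer cost function; all sizes and data are made up:

```python
import numpy as np

rng = np.random.default_rng(0)

def broad_learning_fit(X, Y, n_feature=40, n_enhance=60, reg=1e-3):
    """Minimal broad-learning sketch: a random mapped-feature layer plus a
    tanh enhancement layer feed one linear classification layer solved in
    closed form by ridge regression (no iterative training)."""
    Wf = rng.standard_normal((X.shape[1], n_feature))
    Z = X @ Wf                                   # mapped feature nodes
    We = rng.standard_normal((n_feature, n_enhance))
    H = np.tanh(Z @ We)                          # enhancement nodes
    A = np.hstack([Z, H])                        # augmented feature layer
    W = np.linalg.solve(A.T @ A + reg * np.eye(A.shape[1]), A.T @ Y)
    return Wf, We, W

def broad_learning_predict(X, Wf, We, W):
    Z = X @ Wf
    A = np.hstack([Z, np.tanh(Z @ We)])
    return (A @ W).argmax(axis=1)

# Hypothetical bearing-fault data: 3 fault classes, 60 samples, 8 features
X = rng.standard_normal((60, 8))
y = np.repeat(np.arange(3), 20)
X[y == 1] += 2.0
X[y == 2] -= 2.0
Y = np.eye(3)[y]                                 # one-hot labels
params = broad_learning_fit(X, Y)
acc = float((broad_learning_predict(X, *params) == y).mean())
```

Because only the last linear layer is solved, retraining under a changed cost (as the paper does for domain adaptation with ten target samples per class) is a single linear solve rather than a full retraining run.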


2020 ◽  
Vol ahead-of-print (ahead-of-print) ◽  
Author(s):  
Lokesh Singh ◽  
Rekh Ram Janghel ◽  
Satya Prakash Sahu

Purpose
The study aims to cope with the problems confronted in skin lesion datasets with little training data for the classification of melanoma. The vital, challenging issue is the insufficiency of training data when classifying lesions as melanoma or non-melanoma.

Design/methodology/approach
In this work, a transfer learning (TL) framework, Transfer Constituent Support Vector Machine (TrCSVM), is designed for melanoma classification based on feature-based domain adaptation (FBDA), leveraging the support vector machine (SVM) and Transfer AdaBoost (TrAdaBoost). The working of the framework is twofold: first, SVM is utilized for domain adaptation to learn a highly transferable representation between the source and target domains. In the first phase, for homogeneous domain adaptation, it augments features by transforming the data from the source and target (different but related) domains into a shared subspace. In the second phase, for heterogeneous domain adaptation, it leverages knowledge by augmenting features from the source to the target (different and unrelated) domains in a shared subspace. Second, TrAdaBoost is utilized to adjust the weights of wrongly classified data in the newly generated source and target datasets.

Findings
The experimental results empirically demonstrate the superiority of TrCSVM over state-of-the-art TL methods on small datasets, with an accuracy of 98.82%.

Originality/value
Experiments are conducted on six skin lesion datasets, and performance is compared on the basis of accuracy, precision, sensitivity, and specificity. The effectiveness of TrCSVM is evaluated on ten other datasets to test its generalization behavior. Its performance is also compared with two existing TL frameworks (TrResampling, TrAdaBoost) for the classification of melanoma.
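The TrAdaBoost reweighting step can be sketched in a simplified form: misclassified source instances are down-weighted (they look unlike the target domain) while misclassified target instances are up-weighted AdaBoost-style. The paper's constituent learner is an SVM; a decision stump is substituted here to keep the sketch dependency-free, and the toy 1-D domain shift is entirely made up:

```python
import numpy as np

def stump_fit(X, y, w):
    """Exhaustive weighted threshold stump over all features/thresholds."""
    best = (1.0, 0, 0.0, 1)            # (error, feature, threshold, sign)
    for j in range(X.shape[1]):
        for t in np.unique(X[:, j]):
            for s in (1, -1):
                pred = np.where(s * (X[:, j] - t) > 0, 1, 0)
                err = w[pred != y].sum() / w.sum()
                if err < best[0]:
                    best = (err, j, t, s)
    return best

def stump_predict(X, j, t, s):
    return np.where(s * (X[:, j] - t) > 0, 1, 0)

def tradaboost(Xs, ys, Xt, yt, rounds):
    """Simplified TrAdaBoost over a combined source + target sample."""
    X, y = np.vstack([Xs, Xt]), np.concatenate([ys, yt])
    n = len(ys)
    w = np.ones(len(y))
    beta_src = 1.0 / (1.0 + np.sqrt(2.0 * np.log(n) / rounds))
    learners = []
    for _ in range(rounds):
        _, j, t, s = stump_fit(X, y, w)
        pred = stump_predict(X, j, t, s)
        miss = pred != y
        et = w[n:][miss[n:]].sum() / w[n:].sum()   # error on target only
        et = min(max(et, 1e-10), 0.49)
        beta_t = et / (1.0 - et)
        w[:n][miss[:n]] *= beta_src                # shrink bad source weights
        w[n:][miss[n:]] *= 1.0 / beta_t            # boost hard target samples
        learners.append((j, t, s, np.log(1.0 / beta_t)))
    return learners

def tr_predict(X, learners):
    votes = sum(a * (2 * stump_predict(X, j, t, s) - 1)
                for j, t, s, a in learners)
    return (votes > 0).astype(int)

# Toy domain shift: the source decision boundary is at 0, the target's at 1.
Xs = np.linspace(-2, 2, 100).reshape(-1, 1)
ys = (Xs[:, 0] > 0).astype(int)
Xt = np.linspace(-1, 3, 21).reshape(-1, 1)
yt = (Xt[:, 0] > 1).astype(int)
model = tradaboost(Xs, ys, Xt, yt, rounds=30)
acc_t = float((tr_predict(Xt, model) == yt).mean())
```

Over the boosting rounds the ensemble drifts from the source boundary toward the target boundary, which is the adaptation effect TrCSVM exploits after its feature-augmentation phases.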


2019 ◽  
Vol 8 (2S11) ◽  
pp. 3677-3680

Dog breed identification is a specific application of convolutional neural networks (CNNs). Although image classification with a CNN is an efficient method, it still has drawbacks: a CNN requires a large number of images as training data and considerable time to train in order to achieve high classification accuracy. To reduce this substantial training time, we use transfer learning. In computer vision, transfer learning refers to the use of pre-trained models to train a CNN. Through transfer learning, a pre-trained model is adapted to solve a classification problem similar to the one it was originally trained on. In this project, we use various pre-trained models, such as VGG16, Xception, and InceptionV3, over 1400 images covering 120 breeds, of which 16 breeds were used as classes for training, and we obtain bottleneck features from these pre-trained models. Finally, logistic regression, a multiclass classifier, is used to identify the breed of the dog from the images, obtaining 91%, 94%, and 95% validation accuracy for the pre-trained models VGG16, Xception, and InceptionV3, respectively.
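The final stage, multiclass logistic regression on bottleneck features, can be sketched as softmax regression trained by plain gradient descent. The feature vectors below are random stand-ins; in the real pipeline they would come from passing each image through a frozen VGG16, Xception, or InceptionV3 backbone:

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)   # numerical stability
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def train_logreg(X, y, n_classes, lr=0.5, epochs=300):
    """Multiclass (softmax) logistic regression on bottleneck features,
    mirroring the final classifier described in the abstract."""
    W = np.zeros((X.shape[1], n_classes))
    b = np.zeros(n_classes)
    Y = np.eye(n_classes)[y]
    for _ in range(epochs):
        P = softmax(X @ W + b)
        G = (P - Y) / len(y)               # cross-entropy gradient
        W -= lr * X.T @ G
        b -= lr * G.sum(axis=0)
    return W, b

# Hypothetical stand-in: 16 breeds, 32 images each, 128-d bottleneck features
n_classes, per_class, dim = 16, 32, 128
centers = rng.standard_normal((n_classes, dim))
y = np.repeat(np.arange(n_classes), per_class)
X = centers[y] + 0.3 * rng.standard_normal((len(y), dim))
W, b = train_logreg(X, y, n_classes)
acc = float((softmax(X @ W + b).argmax(axis=1) == y).mean())
```

Because only this small linear classifier is trained while the CNN backbone stays frozen, the approach avoids the long training time that motivates transfer learning in the first place.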


Author(s):  
Saleh Alaraimi ◽  
Kenneth E. Okedu ◽  
Hugo Tianfield ◽  
Richard Holden ◽  
Omair Uthmani
