average classification accuracy
Recently Published Documents

TOTAL DOCUMENTS: 46 (FIVE YEARS: 31)
H-INDEX: 4 (FIVE YEARS: 1)
Neurology ◽  
2021 ◽  
Vol 98 (1 Supplement 1) ◽  
pp. S5.2-S6
Author(s):  
Gina Dumkrieger ◽  
Catherine Daniela Chong ◽  
Katherine Ross ◽  
Visar Berisha ◽  
Todd J. Schwedt

Objective: The objective was to develop classification models differentiating persistent post-traumatic headache (PPTH) and migraine using clinical data and MRI-based measures of brain structure and functional connectivity.
Background: Post-traumatic headache (PTH) and migraine commonly have similar phenotypes. Furthermore, migraine is a risk factor for developing PTH, sometimes making it difficult to differentiate PTH from an exacerbation of migraine symptoms.
Design/Methods: Thirty-four individuals with migraine and no history of TBI and 48 individuals with PPTH attributed to mild TBI but without a history of migraine or prior frequent tension-type headache were included. Subjects completed questionnaires assessing headache characteristics, mood, sensory hypersensitivities, and cognitive function, and underwent MRI on the same day. Clinical features, structural brain measures from T1-weighted imaging, diffusion tensor imaging measures, and resting-state functional measures were included as potential variables. A classifier using ridge logistic regression of principal components (PCs) was fit. Because PCs can hinder the identification of significant variables in a model, a second regression model was fit directly to the data. In the non-PC-based model, input variables were selected by lowest t-test or chi-square p-value within each modality. Average accuracy was calculated using leave-one-out cross-validation. The importance of variables to the classifier was examined.
Results: The PC-based classifier achieved an average classification accuracy of 85%. The non-PC-based classifier achieved an average classification accuracy of 74.4%. Both classifiers were more accurate at classifying migraine subjects than PPTH subjects. The PC-based model incorrectly classified 9/48 (18.8%) PPTH subjects compared to 3/34 (8.8%) migraine subjects, whereas the non-PC classifier incorrectly classified 16/48 (33.3%) PPTH subjects vs 5/34 (14.7%) migraine subjects. Important variables in the non-PC model included static and dynamic functional connectivity values, several questions from the Beck Depression Inventory, and worsening of symptoms and headaches with mental activity.
Conclusions: Multivariate models including clinical characteristics, functional connectivity, and brain structural data accurately classify and differentiate PPTH vs migraine.
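A minimal sketch of the PC-based classifier described above: L2-penalised (ridge) logistic regression fitted to principal components, with accuracy estimated by leave-one-out cross-validation. The feature matrix, number of components, and regularisation strength are placeholders, not the study's actual variables.

```python
# Hypothetical sketch: ridge logistic regression on principal components,
# evaluated with leave-one-out cross-validation (LOOCV).
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import LeaveOneOut, cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(82, 200))           # 82 subjects x clinical + MRI features (placeholder)
y = np.r_[np.zeros(34), np.ones(48)]     # 0 = migraine, 1 = PPTH

pc_ridge = make_pipeline(
    StandardScaler(),
    PCA(n_components=20),                                      # principal components
    LogisticRegression(penalty="l2", C=1.0, max_iter=1000),    # ridge logistic regression
)

acc = cross_val_score(pc_ridge, X, y, cv=LeaveOneOut()).mean()
print(f"LOO average classification accuracy: {acc:.3f}")
```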


Mathematics ◽  
2021 ◽  
Vol 10 (1) ◽  
pp. 29
Author(s):  
Jersson X. Leon-Medina ◽  
Núria Parés ◽  
Maribel Anaya ◽  
Diego A. Tibaduiza ◽  
Francesc Pozo

The classification and use of robust methodologies in sensor array applications of electronic noses (ENs) remain an open problem. Among the several steps used in the developed methodologies, data preprocessing improves the classification accuracy of this type of sensor. Data preprocessing methods, such as data transformation and data reduction, enable the treatment of data with anomalies, such as outliers and features that do not provide quality information; in addition, they reduce the dimensionality of the data, thereby facilitating the tasks of a machine learning classifier. To help solve this problem, this study introduces a machine learning methodology to improve signal processing and classification when an EN is used. The proposed methodology involves a normalization stage to scale the data from the sensors, using both the well-known min−max approach and the more recent mean-centered unitary group scaling (MCUGS). Next, a manifold learning algorithm, uniform manifold approximation and projection (UMAP), is applied for data reduction. The dimensionality of the data at the input of the classifier is thereby reduced, and an extreme learning machine (ELM) is used as the machine learning classification algorithm. To validate the EN classification methodology, three EN datasets were used. The first dataset was composed of 3600 measurements of 6 volatile organic compounds obtained with 16 metal-oxide gas sensors. The second dataset was composed of 235 measurements of 3 different qualities of wine, namely high, average, and low, as evaluated using an EN sensor array composed of 6 different sensors. The third dataset was composed of 309 measurements of 3 different gases obtained using an EN sensor array of 2 sensors. A 5-fold cross-validation approach was used to evaluate the proposed methodology. A test set consisting of 25% of the data was used to validate the methodology with unseen data. The results showed a fully correct average classification accuracy of 1 when the MCUGS, UMAP, and ELM methods were used. Finally, the effect of changing the number of target dimensions in the data reduction step was assessed based on the highest average classification accuracy.
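A rough sketch of the preprocessing/classification chain described above, assuming min−max scaling, UMAP data reduction, and a basic extreme learning machine (random hidden layer with a least-squares readout). The MCUGS step is omitted, the dataset is a synthetic placeholder, and the umap-learn package is assumed to be available.

```python
# Sketch: min-max normalisation -> UMAP reduction -> extreme learning machine.
import numpy as np
from sklearn.preprocessing import MinMaxScaler
from sklearn.model_selection import train_test_split
import umap  # from the umap-learn package

rng = np.random.default_rng(0)
X = rng.normal(size=(3600, 16))        # e.g. 3600 measurements x 16 gas sensors (placeholder)
y = rng.integers(0, 6, size=3600)      # 6 volatile organic compounds (placeholder labels)

X = MinMaxScaler().fit_transform(X)                               # min-max normalisation
X = umap.UMAP(n_components=3, random_state=0).fit_transform(X)    # manifold data reduction

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)

# Extreme learning machine: random hidden weights, closed-form output weights.
n_hidden = 200
W = rng.normal(size=(X.shape[1], n_hidden))
b = rng.normal(size=n_hidden)
H_tr = np.tanh(X_tr @ W + b)
T = np.eye(6)[y_tr]                          # one-hot targets
beta = np.linalg.pinv(H_tr) @ T              # least-squares readout
pred = np.argmax(np.tanh(X_te @ W + b) @ beta, axis=1)
print("test accuracy:", (pred == y_te).mean())
```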


Sensors ◽  
2021 ◽  
Vol 21 (22) ◽  
pp. 7560
Author(s):  
Idongesit Ekerete ◽  
Matias Garcia-Constantino ◽  
Yohanca Diaz-Skeete ◽  
Chris Nugent ◽  
James McLaughlin

The ability to monitor Sprained Ankle Rehabilitation Exercises (SPAREs) in home environments can help therapists ascertain if exercises have been performed as prescribed. Whilst wearable devices have been shown to provide advantages such as high accuracy and precision when monitoring activities, disadvantages such as limited battery life and users forgetting to charge and wear the devices often challenge their usage. In addition, video cameras, which are notable for high frame rates and granularity, are not privacy-friendly. Therefore, this paper proposes the use and fusion of privacy-friendly and Unobtrusive Sensing Solutions (USSs) for data collection and processing during SPAREs in home environments. The present work aims to monitor SPAREs such as dorsiflexion, plantarflexion, inversion, and eversion using radar and thermal sensors. The main contributions of this paper include (i) privacy-friendly monitoring of SPAREs in a home environment, (ii) fusion of SPAREs data from homogeneous and heterogeneous USSs, and (iii) analysis and comparison of results from single, homogeneous, and heterogeneous USSs. Experimental results indicated the advantages of using heterogeneous USSs and data fusion. Cluster-based analysis of data gleaned from the sensors indicated an average classification accuracy of 96.9% with Neural Network, AdaBoost, and Support Vector Machine, amongst others.
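An illustrative sketch of feature-level fusion of heterogeneous sensor data (radar and thermal) followed by the classifiers named above. The feature arrays, window counts, and labels are synthetic placeholders, not the study's data or its exact fusion scheme.

```python
# Sketch: fuse radar and thermal feature vectors, then compare classifiers.
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC
from sklearn.ensemble import AdaBoostClassifier
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
radar = rng.normal(size=(400, 32))      # radar features per exercise window (placeholder)
thermal = rng.normal(size=(400, 16))    # thermal features per exercise window (placeholder)
y = rng.integers(0, 4, size=400)        # dorsiflexion, plantarflexion, inversion, eversion

X = np.hstack([radar, thermal])         # simple feature-level fusion of heterogeneous USSs

for name, clf in [("SVM", SVC()),
                  ("AdaBoost", AdaBoostClassifier()),
                  ("Neural Network", MLPClassifier(max_iter=500))]:
    acc = cross_val_score(clf, X, y, cv=5).mean()
    print(f"{name}: average classification accuracy {acc:.3f}")
```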


2021 ◽  
Vol 2112 (1) ◽  
pp. 012011
Author(s):  
Luquan Wang ◽  
Junxing Lao ◽  
Lingfeng Yang ◽  
Yaguang Zeng ◽  
Yong Chen

Abstract: Primary angle closure glaucoma (PACG) is primarily diagnosed by ophthalmologists through morphological analysis of the iris in ultrasonic biomicroscopy (UBM). In recent years, deep convolutional neural networks (CNNs) have shown potential for quick category definition in eye disease. Based on the characteristics of the iris in UBM images, we propose a network, DA-M2Det (DenseNet and Attention gate M2Det), for automatic classification of iris morphology. First, within the M2Det framework, we replaced the VGG backbone of M2Det with a DenseNet backbone for better extraction of the base feature layers. Second, attention gates (AG) at three scales were added to the Thinned U-shape Module (TUM), enabling the network to pay more attention to the iris region. Finally, a retraining method was used to further improve the accuracy of iris classification. The classification results of the VGG-16, M2Det, ResNet-50, and DA-M2Det networks were compared experimentally. The results show that, across three different iris shapes (arch, flat, and depression), DA-M2Det achieves an average classification accuracy of 85%, which is higher than that of the other three networks. The experimental results show that DA-M2Det can accurately classify irises into three categories, thereby assisting ophthalmologists in quickly diagnosing the cause of glaucoma and performing accurate clinical treatment.
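A hedged sketch of an additive attention gate of the kind described above, which re-weights feature maps so the network attends to the iris region. This is a generic attention-gate formulation, not the paper's exact module; channel sizes and spatial dimensions are illustrative assumptions.

```python
# Sketch: additive attention gate that re-weights skip features with a gating signal.
import torch
import torch.nn as nn

class AttentionGate(nn.Module):
    def __init__(self, in_ch, gate_ch, inter_ch):
        super().__init__()
        self.theta = nn.Conv2d(in_ch, inter_ch, kernel_size=1)   # projects the input features
        self.phi = nn.Conv2d(gate_ch, inter_ch, kernel_size=1)   # projects the gating signal
        self.psi = nn.Conv2d(inter_ch, 1, kernel_size=1)         # produces the attention map

    def forward(self, x, g):
        # x: features to attend over, g: gating signal (same spatial size assumed here)
        attn = torch.sigmoid(self.psi(torch.relu(self.theta(x) + self.phi(g))))
        return x * attn                                           # attended features

x = torch.randn(1, 64, 80, 80)    # placeholder feature map
g = torch.randn(1, 128, 80, 80)   # placeholder gating signal
out = AttentionGate(64, 128, 32)(x, g)
print(out.shape)                  # torch.Size([1, 64, 80, 80])
```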


2021 ◽  
Vol 7 (11) ◽  
pp. 220
Author(s):  
Filip Bajić ◽  
Josip Job

When recovering information from a chart image, the first step should be chart type classification. Throughout history, many approaches have been used, and some of them achieve better results than others. The latest articles use a Support Vector Machine (SVM) in combination with a Convolutional Neural Network (CNN), which achieves almost perfect results with datasets of a few thousand images per class. The datasets containing chart images are primarily synthetic and lack real-world examples. To overcome the problem of small datasets, we use a Siamese CNN architecture for chart type classification; to our knowledge, this is the first report of its use for this task. Multiple network architectures are tested, and the results for different dataset sizes are compared. The network verification is conducted using Few-shot learning (FSL). Many of the described advantages of Siamese CNNs are shown in examples. In the end, we show that the Siamese CNN can work with one image per class, and a 100% average classification accuracy is achieved with 50 images per class, whereas a plain CNN achieves an average classification accuracy of only 43% on the same dataset.
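A rough sketch of the Siamese idea discussed above: twin encoders with shared weights embed two chart images, and the distance between the embeddings drives the similarity decision used in few-shot verification. The layer sizes and image resolution are illustrative, not the architectures tested in the paper.

```python
# Sketch: Siamese CNN with a shared encoder and an embedding-distance output.
import torch
import torch.nn as nn

class Encoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(32, 64),                      # embedding vector
        )

    def forward(self, x):
        return self.net(x)

class SiameseNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.encoder = Encoder()                    # shared weights for both branches

    def forward(self, a, b):
        za, zb = self.encoder(a), self.encoder(b)
        return torch.norm(za - zb, dim=1)           # small distance -> same chart type

model = SiameseNet()
d = model(torch.randn(4, 3, 128, 128), torch.randn(4, 3, 128, 128))
print(d.shape)                                      # torch.Size([4])
```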


2021 ◽  
Vol 12 (4) ◽  
pp. 79-97
Author(s):  
Zengkai Wang

Video classification has been an active research field of computer vision in the last few years. Its main purpose is to produce a label that is relevant to a video given its frames. Unlike image classification, which takes still pictures as input, the input of video classification is a sequence of images. The complex spatial and temporal structures of a video sequence incur understanding and computation difficulties, which should be modeled to improve video classification performance. This work focuses on sports video classification but can be expanded to other applications. In this paper, the authors propose a novel sports video classification method that processes the video data using a convolutional neural network (CNN) with a spatial attention mechanism and a deep bidirectional long short-term memory (BiLSTM) network with a temporal attention mechanism. The method first extracts 28 frames from each input video and uses a classical pre-trained CNN to extract deep features, and the spatial attention mechanism is applied to the CNN features to decide 'where' to look. Then the BiLSTM is utilized to model the long-term temporal dependence between video frame sequences, and the temporal attention mechanism is employed to decide 'when' to look. Finally, the label of the input video is given by the classification network. In order to evaluate the feasibility and effectiveness of the proposed method, an extensive experimental investigation was conducted on the challenging open sports video datasets Sports8 and Olympic16; the results show that the proposed CNN-BiLSTM network with spatial-temporal attention mechanism can effectively model the spatial-temporal characteristics of video sequences. The average classification accuracy on Sports8 is 98.8%, which is 6.8% higher than the existing method. An average classification accuracy of 90.46% is achieved on Olympic16, which is about 18% higher than existing methods. The proposed approach outperforms the state-of-the-art methods, and the experimental results demonstrate its effectiveness.
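A simplified sketch of the temporal half of the pipeline described above: per-frame deep features (assumed to come from a pretrained CNN) are fed to a bidirectional LSTM, and a soft temporal attention layer weights the frame outputs before classification. The spatial attention branch is omitted for brevity, and all dimensions are assumptions rather than the paper's configuration.

```python
# Sketch: BiLSTM over 28 frame features with soft temporal attention ("when to look").
import torch
import torch.nn as nn

class TemporalAttnClassifier(nn.Module):
    def __init__(self, feat_dim=2048, hidden=256, n_classes=8):
        super().__init__()
        self.bilstm = nn.LSTM(feat_dim, hidden, batch_first=True, bidirectional=True)
        self.attn = nn.Linear(2 * hidden, 1)        # temporal attention scores
        self.fc = nn.Linear(2 * hidden, n_classes)  # video-level classifier

    def forward(self, feats):                       # feats: (batch, 28 frames, feat_dim)
        h, _ = self.bilstm(feats)                   # (batch, 28, 2*hidden)
        w = torch.softmax(self.attn(h), dim=1)      # attention weight per frame
        ctx = (w * h).sum(dim=1)                    # attention-pooled video descriptor
        return self.fc(ctx)

logits = TemporalAttnClassifier()(torch.randn(2, 28, 2048))
print(logits.shape)                                 # torch.Size([2, 8])
```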


2021 ◽  
Vol 11 (15) ◽  
pp. 6828
Author(s):  
Jeroen M. A. van der Burgt ◽  
Saskia M. Camps ◽  
Maria Antico ◽  
Gustavo Carneiro ◽  
Davide Fontanarosa

This work presents an algorithm based on weak supervision to automatically localize an arthroscope on 3D ultrasound (US). The ultimate goal of this application is to combine 3D US with the 2D arthroscope view during knee arthroscopy, to provide the surgeon with a comprehensive view of the surgical site. The implemented algorithm consisted of a weakly supervised neural network, which was trained on 2D US images of different phantoms mimicking the imaging conditions during knee arthroscopy. Image-based classification was performed, and the resulting class activation maps were used to localize the arthroscope. The localization performance was evaluated visually by three expert reviewers and by the calculation of objective metrics. Finally, the algorithm was also tested on a human cadaver knee. The algorithm achieved an average classification accuracy of 88.6% on phantom data and 83.3% on cadaver data. The localization of the arthroscope based on the class activation maps was correct in 92–100% of all true positive classifications for both phantom and cadaver data. These results are relevant because they show the feasibility of automatic arthroscope localization in 3D US volumes, which is paramount to combining the multiple image modalities that are available during knee arthroscopies.
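An illustrative sketch of class-activation-map (CAM) localization from an image-level classifier, the weak-supervision idea described above: the last convolutional features are weighted by the classifier weights of the predicted class to highlight where the arthroscope is likely to be. The network, channel counts, and input size are placeholders, not the paper's model.

```python
# Sketch: classification CNN with global average pooling, plus CAM computation.
import torch
import torch.nn as nn
import torch.nn.functional as F

class CamNet(nn.Module):
    def __init__(self, n_classes=2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
        )
        self.fc = nn.Linear(64, n_classes)

    def forward(self, x):
        fmap = self.features(x)                          # (B, 64, H, W)
        logits = self.fc(fmap.mean(dim=(2, 3)))          # global average pooling + classifier
        return logits, fmap

model = CamNet()
x = torch.randn(1, 1, 96, 96)                            # one 2D US slice (placeholder)
logits, fmap = model(x)
cls = logits.argmax(dim=1)
# Class activation map: conv features weighted by the predicted class's weights.
cam = torch.einsum("c,bchw->bhw", model.fc.weight[cls[0]], fmap)
cam = F.interpolate(cam.unsqueeze(1), size=x.shape[-2:], mode="bilinear")
print(cam.shape)                                          # heat map locating the arthroscope
```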


Mekatronika ◽  
2021 ◽  
Vol 3 (1) ◽  
pp. 27-31
Author(s):  
Ken-ji Ee ◽  
Ahmad Fakhri Bin Ab. Nasir ◽  
Anwar P. P. Abdul Majeed ◽  
Mohd Azraai Mohd Razman ◽  
Nur Hafieza Ismail

The animal classification system is a technology for classifying animal classes (types) automatically and is useful in many applications. Many types of learning models have recently been applied to this technology. Nonetheless, it is worth noting that the extraction and classification of animal features are non-trivial, particularly in the deep learning approach, for a successful animal classification system. The use of Transfer Learning (TL) has been demonstrated to be a powerful tool for the extraction of essential features. However, the employment of such a method in animal classification applications is somewhat limited. The present study aims to determine a suitable TL-conventional classifier pipeline for animal classification. VGG16 and VGG19 were used to extract features, which were then coupled with either a k-Nearest Neighbour (k-NN) or a Support Vector Machine (SVM) classifier. A total of 4000 images was gathered, consisting of five classes: cows, goats, buffalos, dogs, and cats. The data were split in a ratio of 80:20 for training and testing. The classifiers' hyperparameters were tuned by a grid search approach using five-fold cross-validation. It was demonstrated from the study that the best TL pipeline identified is VGG16 along with an optimised SVM, as it was able to yield an average classification accuracy of 0.975. The findings of the present investigation could facilitate animal classification applications, e.g., monitoring animals in the wild.
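A minimal sketch of the TL-conventional classifier pipeline identified as best above: VGG16 without its classification head extracts deep features, and an SVM tuned by grid search with five-fold cross-validation performs the classification. The image array, labels, and hyperparameter grid are placeholders; tensorflow/keras and scikit-learn are assumed to be available.

```python
# Sketch: VGG16 feature extraction + grid-searched SVM classifier.
import numpy as np
from tensorflow.keras.applications import VGG16
from sklearn.model_selection import train_test_split, GridSearchCV
from sklearn.svm import SVC

images = np.random.rand(100, 224, 224, 3).astype("float32")   # placeholder animal images
labels = np.random.randint(0, 5, size=100)                     # cows, goats, buffalos, dogs, cats

extractor = VGG16(weights="imagenet", include_top=False, pooling="avg")
features = extractor.predict(images, verbose=0)                # (100, 512) deep features

X_tr, X_te, y_tr, y_te = train_test_split(features, labels, test_size=0.2, random_state=0)

grid = GridSearchCV(SVC(), {"C": [0.1, 1, 10], "kernel": ["linear", "rbf"]}, cv=5)
grid.fit(X_tr, y_tr)
print("test accuracy:", grid.score(X_te, y_te))
```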


Author(s):  
Cara Murphy ◽  
John Kerekes

The classification of trace chemical residues through active spectroscopic sensing is challenging due to the lack of physics-based models that can accurately predict spectra. To overcome this challenge, we leveraged the field of domain adaptation to translate data from the simulated to the measured domain for training a classifier. We developed the first 1D conditional generative adversarial network (GAN) to perform spectrum-to-spectrum translation of reflectance signatures. We applied the 1D conditional GAN to a library of simulated spectra and quantified the improvement in classification accuracy on real data using the translated spectra for training the classifier. Using the GAN-translated library, the average classification accuracy increased from 0.622 to 0.723 on real chemical reflectance data, including data from chemicals not included in the GAN training set.
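A conceptual sketch of a 1D conditional GAN for spectrum-to-spectrum translation: the generator maps a simulated reflectance spectrum to a "measured-style" spectrum, and the discriminator judges (input, output) pairs, analogous to pix2pix but with 1D convolutions. The layer sizes, spectrum length, and architecture are assumptions, not the network from the paper.

```python
# Sketch: 1D conditional GAN components for simulated-to-measured spectrum translation.
import torch
import torch.nn as nn

SPEC_LEN = 256  # assumed number of spectral bands

generator = nn.Sequential(                              # translates a simulated spectrum
    nn.Conv1d(1, 32, 9, padding=4), nn.ReLU(),
    nn.Conv1d(32, 32, 9, padding=4), nn.ReLU(),
    nn.Conv1d(32, 1, 9, padding=4),
)

discriminator = nn.Sequential(                          # scores (simulated, translated) pairs
    nn.Conv1d(2, 32, 9, stride=2, padding=4), nn.LeakyReLU(0.2),
    nn.Conv1d(32, 64, 9, stride=2, padding=4), nn.LeakyReLU(0.2),
    nn.Flatten(), nn.Linear(64 * (SPEC_LEN // 4), 1),
)

sim = torch.randn(8, 1, SPEC_LEN)                       # simulated spectra (placeholder)
fake_measured = generator(sim)
score = discriminator(torch.cat([sim, fake_measured], dim=1))
print(fake_measured.shape, score.shape)
```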


2021 ◽  
Vol 11 (10) ◽  
pp. 4614
Author(s):  
Xiaofei Chao ◽  
Xiao Hu ◽  
Jingze Feng ◽  
Zhao Zhang ◽  
Meili Wang ◽  
...  

The fast and accurate identification of apple leaf diseases is beneficial for disease control and management of apple orchards. An improved network for apple leaf disease classification and a lightweight model for mobile terminal usage were designed in this paper. First, we proposed the SE-DEEP block to fuse the Squeeze-and-Excitation (SE) module with the Xception network to obtain the SE_Xception network, where the SE module is inserted between the depth-wise convolution and the point-wise convolution of the depth-wise separable convolution layer. Therefore, the feature channels from the lower layers can be directly weighted, which makes the model more sensitive to the principal features of the classification task. Second, we designed a lightweight network, named SE_miniXception, by reducing the depth and width of SE_Xception. Experimental results show that the average classification accuracy of SE_Xception is 99.40%, which is 1.99% higher than that of Xception. The average classification accuracy of SE_miniXception is 97.01%, which is 1.60% and 1.22% higher than that of MobileNetV1 and ShuffleNet, respectively, while its number of parameters is smaller than those of MobileNetV1 and ShuffleNet. The minimized network decreases memory usage and FLOPs, and accelerates recognition speed from 15 to 7 milliseconds per image. Our proposed SE-DEEP block provides an option for improving network accuracy, and our network compression scheme provides ideas for making existing networks lightweight.
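A minimal sketch of the SE-DEEP idea described above: a squeeze-and-excitation block placed between the depth-wise and point-wise convolutions of a depth-wise separable convolution layer, so the depth-wise channels are re-weighted before being mixed. Channel counts and the reduction ratio are illustrative assumptions, not the paper's settings.

```python
# Sketch: depth-wise separable conv with an SE block between DW and PW convolutions.
import torch
import torch.nn as nn

class SEDeepSeparableConv(nn.Module):
    def __init__(self, in_ch, out_ch, reduction=16):
        super().__init__()
        self.depthwise = nn.Conv2d(in_ch, in_ch, 3, padding=1, groups=in_ch)
        self.se = nn.Sequential(                      # squeeze-and-excitation
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(in_ch, in_ch // reduction, 1), nn.ReLU(),
            nn.Conv2d(in_ch // reduction, in_ch, 1), nn.Sigmoid(),
        )
        self.pointwise = nn.Conv2d(in_ch, out_ch, 1)

    def forward(self, x):
        x = self.depthwise(x)
        x = x * self.se(x)            # channel re-weighting between DW and PW conv
        return self.pointwise(x)

y = SEDeepSeparableConv(64, 128)(torch.randn(1, 64, 56, 56))
print(y.shape)                        # torch.Size([1, 128, 56, 56])
```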

