Performance evaluation of linear discriminant analysis and support vector machines to classify cesarean section

2021 ◽  
Vol 5 (2 (113)) ◽  
pp. 37-43
Author(s):  
Abdul Azis Abdillah ◽  
Azwardi Azwardi ◽  
Sulaksana Permana ◽  
Iwan Susanto ◽  
Fuad Zainuri ◽  
...  

Currently, hospitals are places that are highly vulnerable to the transmission of Covid-19, so giving birth in a hospital is very risky. In addition, hospitals currently accept only cesarean deliveries, while mothers who can give birth vaginally are advised to deliver with a midwife, because the chance of being exposed to Covid-19 there is much lower. In general, this study aims to examine the performance of the LDA-SVM method in predicting whether a prospective mother needs to undergo a C-section or can simply give birth vaginally. The aims of this study are: 1) to determine the best parameters for building the detection model; 2) to determine the best accuracy of the model; 3) to compare its accuracy with that of other methods. The data used in this study is the caesarean section dataset, which contains the caesarean section results of 80 pregnant women together with the most important characteristics of delivery problems in the clinical field. Based on the experiments carried out, the parameter values that give the best results for building the detection model are σ=–5.9 for 70 % training data; σ=4, –6.1 and –6.6 for 80 % training data; and σ=4 and 16 for 90 % training data. In addition, the results show that the LDA-SVM method is able to classify the C-section cases properly, with an accuracy of up to 100 %, surpassing the methods used in previous studies; for this case study, LDA-SVM achieves an accuracy of 100.00 %. This method has great potential to be used by doctors as an early detection tool to determine whether a mother needs to undergo a C-section or can simply give birth vaginally, so that mothers can avoid exposure to Covid-19 in the hospital.
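A minimal sketch of such an LDA + RBF-SVM pipeline in scikit-learn is shown below; the placeholder records, the 80/20 split, and the gamma/C grid are illustrative assumptions, not the paper's settings, and the gamma–sigma relation in the comment is one common convention.

```python
# Sketch: LDA projection followed by an RBF-kernel SVM, with a grid search over
# the kernel width. Placeholder data; not the paper's dataset or parameter values.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import train_test_split, GridSearchCV
from sklearn.pipeline import Pipeline
from sklearn.svm import SVC

# X: clinical features of 80 delivery records, y: 0 = vaginal, 1 = C-section (placeholders)
X, y = np.random.rand(80, 5), np.random.randint(0, 2, 80)

# 70/30, 80/20, or 90/10 splits as in the experiments; 80/20 shown here
X_train, X_test, y_train, y_test = train_test_split(
    X, y, train_size=0.8, stratify=y, random_state=0)

pipe = Pipeline([
    ("lda", LinearDiscriminantAnalysis(n_components=1)),  # project onto the discriminant axis
    ("svm", SVC(kernel="rbf")),                            # RBF-kernel SVM classifier
])

# Search the kernel width; gamma = 1 / (2 * sigma**2) is one way to relate gamma to sigma
grid = GridSearchCV(pipe,
                    {"svm__gamma": np.logspace(-3, 2, 12), "svm__C": [1, 10, 100]},
                    cv=5)
grid.fit(X_train, y_train)
print("best params:", grid.best_params_, "CV accuracy:", grid.best_score_)
print("held-out accuracy:", grid.score(X_test, y_test))
```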

Author(s):  
Clyde Coelho ◽  
Aditi Chattopadhyay

This paper proposes a computationally efficient methodology for classifying damage in structural hotspots. Data collected from a sensor-instrumented lug joint subjected to fatigue loading was preprocessed using linear discriminant analysis (LDA) to extract features that are relevant for classification and to reduce the dimensionality of the data. The data is then reduced in the feature space by analyzing the structure of the mapped clusters and removing the data points that do not affect the construction of the interclass separating hyperplanes. The reduced data set is used to train a support vector machine (SVM)-based classifier, and the results of the classification problem are compared to those obtained when the entire data set is used for training. To further improve the efficiency of the classification scheme, the SVM classifiers are arranged in a binary tree format to reduce the number of comparisons that are necessary. The experimental results show that the data reduction does not reduce the ability of the classifier to distinguish between classes while providing a nearly fourfold decrease in the amount of training data processed.
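A minimal sketch of the LDA-projection and data-reduction step follows; the lug-joint features, class counts, and the distance-based reduction heuristic are assumptions standing in for the paper's cluster-structure analysis, and the binary-tree arrangement of SVMs is not shown.

```python
# Sketch: LDA feature extraction, then discard points far from other classes,
# since only boundary-region points shape the separating hyperplanes.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.svm import SVC

X, y = np.random.rand(1000, 20), np.random.randint(0, 4, 1000)  # placeholder sensor features, 4 damage classes

# Project onto the LDA feature space (at most n_classes - 1 dimensions)
lda = LinearDiscriminantAnalysis(n_components=3)
Z = lda.fit_transform(X, y)

# Crude stand-in for the cluster-structure reduction: for each class, keep the
# quarter of its points lying closest to the other classes' means.
means = {c: Z[y == c].mean(axis=0) for c in np.unique(y)}
keep = []
for c in np.unique(y):
    idx = np.where(y == c)[0]
    d = np.min([np.linalg.norm(Z[idx] - means[o], axis=1) for o in means if o != c], axis=0)
    keep.extend(idx[np.argsort(d)[: len(idx) // 4]])

svm_full = SVC(kernel="rbf").fit(Z, y)                         # trained on all data
svm_reduced = SVC(kernel="rbf").fit(Z[keep], y[np.array(keep)])  # trained on the reduced set
```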


2021 ◽  
Author(s):  
Ali Mobaien ◽  
Negar Kheirandish ◽  
Reza Boostani

Visual P300 mind speller is a brain-computer interface that allows an individual to type through their mind. For this goal, the subject sits in front of a screen full of characters, and when the desired one is highlighted, a P300 response (a positive deflection nearly 300 ms after the stimulus) appears in the brain signals. Due to the very low signal-to-noise ratio (SNR) of the P300 against the background activity of the brain, detection of this component is challenging. Principal ERP reduction (pERP-RED) is a newly developed method that can effectively extract the underlying templates of event-related potentials (ERPs) by employing a three-step spatial filtering procedure. In this research, we investigate the performance of pERP-RED in conjunction with linear discriminant analysis (LDA) to classify P300 data. The proposed method is examined on a real P300 dataset and compared to the state-of-the-art LDA and support vector machines. The results demonstrate that the proposed method achieves higher classification accuracy at low SNRs and with small amounts of training data.
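A minimal sketch of the downstream LDA classification of P300 epochs is given below; the epoch dimensions and the random spatial-filter placeholder are assumptions, and pERP-RED itself is not reimplemented here.

```python
# Sketch: spatially filter EEG epochs (placeholder for pERP-RED), flatten the
# component-by-time features, and classify target vs. non-target with shrinkage LDA.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

n_epochs, n_channels, n_samples = 300, 8, 128            # placeholder EEG dimensions
epochs = np.random.randn(n_epochs, n_channels, n_samples)
labels = np.random.randint(0, 2, n_epochs)               # 1 = target (P300), 0 = non-target

# Stand-in for the pERP-RED spatial filtering: project channels onto a few components
W = np.random.randn(n_channels, 3)                       # would come from pERP-RED in practice
filtered = np.einsum("ecs,ck->eks", epochs, W)

# Flatten component x time features and classify with shrinkage LDA
X = filtered.reshape(n_epochs, -1)
clf = LinearDiscriminantAnalysis(solver="lsqr", shrinkage="auto")
print("CV accuracy:", cross_val_score(clf, X, labels, cv=5).mean())
```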


2020 ◽  
Vol 13 (1) ◽  
pp. 23
Author(s):  
Wei Zhao ◽  
William Yamada ◽  
Tianxin Li ◽  
Matthew Digman ◽  
Troy Runge

In recent years, precision agriculture has been researched as a promising means to increase crop production with fewer inputs and meet the growing demand for agricultural products. Computer vision-based crop detection with unmanned aerial vehicle (UAV)-acquired images is a critical tool for precision agriculture. However, object detection using deep learning algorithms relies on a significant amount of manually prelabeled training data as ground truth. Field object detection, such as bale detection, is especially difficult because of (1) long-period image acquisition under different illumination conditions and seasons; (2) limited existing prelabeled data; and (3) few pretrained models and studies available as references. This work increases bale detection accuracy based on limited data collection and labeling by building an innovative algorithm pipeline. First, an object detection model is trained using 243 images captured under good illumination conditions in the fall from croplands. In addition, domain adaptation (DA), a kind of transfer learning, is applied to synthesize training data under diverse environmental conditions with automatic labels. Finally, the object detection model is optimized with the synthesized datasets. The case study shows the proposed method improves bale detection performance, including the recall, mean average precision (mAP), and F measure (F1 score), from averages of 0.59, 0.7, and 0.7 (object detection alone) to averages of 0.93, 0.94, and 0.89 (object detection + DA), respectively. This approach could easily be scaled to many other crop field objects and will significantly contribute to precision agriculture.
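For reference, the detection metrics quoted above can be computed as sketched below; the IoU threshold, the greedy matching, and the box format are assumptions, and mAP would additionally require averaging precision over confidence thresholds (not shown).

```python
# Sketch: precision, recall, and F1 for detected boxes against ground truth,
# using a simple greedy IoU matching. Illustrative only.
def iou(a, b):
    """Intersection over union of two [x1, y1, x2, y2] boxes."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    return inter / (area(a) + area(b) - inter)

def detection_scores(pred_boxes, gt_boxes, thr=0.5):
    matched, tp = set(), 0
    for p in pred_boxes:
        hits = [i for i, g in enumerate(gt_boxes) if i not in matched and iou(p, g) >= thr]
        if hits:                      # count the first unmatched ground-truth box as a hit
            matched.add(hits[0])
            tp += 1
    precision = tp / max(len(pred_boxes), 1)
    recall = tp / max(len(gt_boxes), 1)
    f1 = 2 * precision * recall / max(precision + recall, 1e-9)
    return precision, recall, f1
```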


2019 ◽  
Vol 6 (5) ◽  
pp. 190001 ◽  
Author(s):  
Katherine E. Klug ◽  
Christian M. Jennings ◽  
Nicholas Lytal ◽  
Lingling An ◽  
Jeong-Yeol Yoon

A straightforward method for classifying heavy metal ions in water is proposed using statistical classification and clustering techniques on non-specific microparticle scattering data. A set of carboxylated polystyrene microparticles of sizes 0.91, 0.75 and 0.40 µm was mixed with solutions of nine heavy metal ions and two control cations, and scattering measurements were collected at two angles optimized for scattering from non-aggregated and aggregated particles. Classification of these observations was conducted and compared among several machine learning techniques, including linear discriminant analysis, support vector machine analysis, K-means clustering and K-medians clustering. This study found the highest classification accuracy using linear discriminant and support vector machine analyses, each reporting high classification rates for heavy metal ions with respect to the model. This may be attributed to moderate correlation between detection angle and particle size. These classification models provide reasonable discrimination between most ion species, with the highest distinction seen for Pb(II), Cd(II), Ni(II) and Co(II), followed by Fe(II) and Fe(III), potentially due to their known sorption with carboxyl groups. The support vector machine analysis was also applied to three different mixture solutions representing leaching from pipes and mine tailings, and showed good correlation with single-species data, specifically with Pb(II) and Ni(II). With more expansive training data and further processing, this method shows promise for low-cost and portable heavy metal identification and sensing.
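A minimal sketch of the classifier comparison described above follows; the feature layout (two scattering angles for each of three particle sizes), the class balance, and the SVM settings are assumptions rather than the study's measurements.

```python
# Sketch: compare LDA and an RBF-SVM on scattering features, plus an
# unsupervised K-means alternative. Placeholder data.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.svm import SVC
from sklearn.cluster import KMeans
from sklearn.model_selection import cross_val_score

X = np.random.rand(110, 6)             # scattering intensities: 2 angles x 3 particle sizes
y = np.repeat(np.arange(11), 10)       # 9 heavy-metal ions + 2 control cations, 10 samples each

for name, clf in [("LDA", LinearDiscriminantAnalysis()),
                  ("SVM", SVC(kernel="rbf", C=10))]:
    print(name, "CV accuracy:", cross_val_score(clf, X, y, cv=5).mean())

# Unsupervised alternative: cluster and inspect how clusters align with ion species
clusters = KMeans(n_clusters=11, n_init=10, random_state=0).fit_predict(X)
```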


Sensors ◽  
2019 ◽  
Vol 19 (20) ◽  
pp. 4523 ◽  
Author(s):  
Carlos Cabo ◽  
Celestino Ordóñez ◽  
Fernando Sánchez-Lasheras ◽  
Javier Roca-Pardiñas ◽  
Javier de Cos-Juez

We analyze the utility of multiscale supervised classification algorithms for object detection and extraction from laser scanning or photogrammetric point clouds. Only the geometric information (the point coordinates) was considered, thus making the method independent of the systems used to collect the data. A maximum of five features (input variables) was used, four of them related to the eigenvalues obtained from a principal component analysis (PCA). PCA was carried out at six scales, defined by the diameter of a sphere around each observation. Four multiclass supervised classification models were tested (linear discriminant analysis, logistic regression, support vector machines, and random forest) in two different scenarios, urban and forest, formed by artificial and natural objects, respectively. The results obtained were accurate (overall accuracy over 80% for the urban dataset, and over 93% for the forest dataset), in the range of the best results found in the literature, regardless of the classification method. For both datasets, the random forest algorithm provided the best results when discrimination capacity, computing time, and the ability to estimate the relative importance of each variable are considered together.
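A minimal sketch of how such multiscale PCA-eigenvalue features can be built from raw point coordinates follows; the number of scales, the sphere radii, and the normalization are illustrative assumptions (the paper defines its six scales by sphere diameter).

```python
# Sketch: local PCA eigenvalues at several neighborhood scales as geometric
# features for a point cloud, ready to feed into LDA, SVM, or random forest.
import numpy as np
from scipy.spatial import cKDTree

points = np.random.rand(5000, 3)        # placeholder point cloud (x, y, z)
tree = cKDTree(points)
radii = [0.05, 0.1, 0.2]                # neighborhood-sphere radii defining the scales

features = []
for r in radii:
    eigvals = np.zeros((len(points), 3))
    for i, p in enumerate(points):
        idx = tree.query_ball_point(p, r)
        if len(idx) >= 3:
            cov = np.cov(points[idx].T)                      # local covariance -> PCA
            eigvals[i] = np.sort(np.linalg.eigvalsh(cov))[::-1]
    s = eigvals.sum(axis=1, keepdims=True) + 1e-12
    features.append(eigvals / s)                             # normalized eigenvalues per scale

# Stack all scales plus point height as the classifier input matrix
features = np.hstack(features + [points[:, 2:3]])
```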


2011 ◽  
Vol 2011 ◽  
pp. 1-28 ◽  
Author(s):  
Zhongqiang Chen ◽  
Zhanyan Liang ◽  
Yuan Zhang ◽  
Zhongrong Chen

Grayware encyclopedias collect known species to provide information for incident analysis; however, their lack of categorization and generalization capability renders them ineffective in the development of defense strategies against clustered strains. A grayware categorization framework is therefore proposed here to not only classify grayware according to diverse taxonomic features but also facilitate evaluation of the risk grayware poses to cyberspace. Armed with Support Vector Machines, the framework builds learning models based on training data extracted automatically from grayware encyclopedias and visualizes categorization results with Self-Organizing Maps. The features used in the learning models are selected with information gain, and the high dimensionality of the feature space is reduced by word stemming and stopword removal. The grayware categorizations on diversified features reveal that grayware typically attempts to improve its penetration rate by resorting to multiple installation mechanisms and reduced code footprints. The framework also shows that grayware evades detection by attacking victims' security applications and resists removal by enhancing its clotting capability with infected hosts. Our analysis further points out that species in the categories Spyware and Adware continue to dominate the grayware landscape and impose extremely critical threats to the Internet ecosystem.
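A minimal sketch of such a text-based SVM categorization pipeline is shown below; the toy encyclopedia entries, label names, and feature count are assumptions, mutual information stands in for information gain, and the stemming and Self-Organizing Map steps are omitted for brevity.

```python
# Sketch: bag-of-words features from encyclopedia descriptions, feature selection,
# and a linear SVM categorizer. Placeholder documents and labels.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.feature_selection import SelectKBest, mutual_info_classif
from sklearn.pipeline import Pipeline
from sklearn.svm import LinearSVC

docs = ["toolbar installs silently and tracks browsing",       # placeholder entries
        "adware pops up banners and resists uninstall",
        "keylogger records keystrokes and hides its process"]
labels = ["Spyware", "Adware", "Spyware"]

pipe = Pipeline([
    ("bow", CountVectorizer(stop_words="english")),             # stopword removal + bag of words
    ("select", SelectKBest(mutual_info_classif, k=10)),         # information-gain-style selection
    ("svm", LinearSVC()),                                        # linear SVM categorizer
])
pipe.fit(docs, labels)
print(pipe.predict(["banner ads injected into every page"]))
```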

