Automated Cell Selection Using Support Vector Machine for Application to Spectral Nanocytology

2016 ◽  
Vol 2016 ◽  
pp. 1-10 ◽  
Author(s):  
Qin Miao ◽  
Justin Derbas ◽  
Aya Eid ◽  
Hariharan Subramanian ◽  
Vadim Backman

Partial wave spectroscopy (PWS) enables quantification of the statistical properties of cell structures at the nanoscale and has been used to identify patients harboring premalignant tumors by interrogating easily accessible sites distant from the location of the lesion. Because of its high sensitivity, only well-preserved cells should be selected from the smear images for further analysis. To date, such cell selection has been done manually, which is time-consuming, labor-intensive, vulnerable to bias, and subject to considerable inter- and intraoperator variability. In this study, we developed a classification scheme to identify and remove corrupted cells and debris of no diagnostic value from raw smear images. The smear slide is digitized by acquiring and stitching low-magnification transmission images. Objects are then extracted from these images with segmentation algorithms. A training set is created by manually classifying objects as suitable or unsuitable, and a feature set is created by quantifying a large number of features for each object. The training set and feature set are used to train a selection algorithm based on Support Vector Machine (SVM) classifiers. We show that the selection algorithm achieves an error rate of 93% with a sensitivity of 95%.
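As a rough illustration of the training step described above, the sketch below fits an SVM classifier to per-object feature vectors and reports held-out performance. The placeholder features, labels, and scikit-learn pipeline are illustrative assumptions, not the authors' actual implementation.

```python
# Hypothetical sketch: train an SVM to separate well-preserved cells from
# corrupted cells or debris. Features and labels are synthetic placeholders.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC
from sklearn.metrics import classification_report

# X: one row per segmented object (e.g., area, eccentricity, mean intensity,
#    texture statistics); y: 1 = suitable cell, 0 = debris (manual labels).
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 12))                       # placeholder feature set
y = (X[:, 0] + 0.5 * X[:, 3] > 0).astype(int)        # placeholder training labels

X_train, X_test, y_train, y_test = train_test_split(X, y, stratify=y, random_state=0)

# RBF-kernel SVM with feature standardization, a common default choice
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0, gamma="scale"))
clf.fit(X_train, y_train)
print(classification_report(y_test, clf.predict(X_test)))
```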

2017 ◽  
Vol 116 ◽  
pp. 58-73 ◽  
Author(s):  
Chuan Liu ◽  
Wenyong Wang ◽  
Meng Wang ◽  
Fengmao Lv ◽  
Martin Konan

2013 ◽  
Vol 706-708 ◽  
pp. 613-617
Author(s):  
Fu Cheng Liu ◽  
Zhao Hui Liu ◽  
Wen Liu ◽  
Dong Sheng Liang ◽  
Kai Cui ◽  
...  

A navigation star catalog (NSC) selection algorithm based on the support vector machine (SVM) is proposed in this paper. The sphere spiral method is used to generate the sampling boresight directions so as to obtain uniformly sampled data. Regression analysis is then applied to extract the NSC, yielding an evenly distributed catalog of small capacity. Two criteria, one global and one local, are defined as uniformity measures to evaluate the generated NSC. Simulations show that, compared with the MFM, the magnitude weighted method (MWM), and the self-organizing algorithm (S-OA), the SVM selection algorithm (SVM-SA) achieves the lowest Boltzmann entropy (B.e), at 0.00207. Under the same conditions, including an identical field of view (FOV) and the elimination of holes, SVM-SA also gives the smallest number of guide stars (NGS) and standard deviation (std), at 7668 and 2.17, respectively. Consequently, SVM-SA is optimal in terms of the NGS and the uniformity of the distribution, and it also shows strong adaptability.
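The sphere spiral sampling step can be pictured with the short sketch below, which spreads boresight directions nearly uniformly over the unit sphere. It uses a Fibonacci-spiral formulation as a stand-in; the paper's exact spiral parameterization and the subsequent SVM regression stage are not reproduced here.

```python
# Hypothetical sketch of the sphere spiral idea: generate roughly uniform
# boresight directions on the unit sphere (Fibonacci-spiral variant).
import numpy as np

def spiral_directions(n: int) -> np.ndarray:
    """Return n unit vectors spread nearly uniformly over the sphere."""
    k = np.arange(n)
    golden = (1 + 5 ** 0.5) / 2
    z = 1 - (2 * k + 1) / n                  # latitudes from +1 down to -1
    theta = 2 * np.pi * k / golden           # longitudes along the spiral
    r = np.sqrt(1 - z ** 2)
    return np.column_stack((r * np.cos(theta), r * np.sin(theta), z))

dirs = spiral_directions(2000)
# Each direction defines one simulated field of view; the stars visible in
# that FOV would form one sample for the regression-based catalog selection.
print(dirs.shape, np.allclose(np.linalg.norm(dirs, axis=1), 1.0))
```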


Author(s):  
SAEID SANEI

Segmentation of natural textures has been investigated by developing a novel semi-supervised support vector machine (S3VM) algorithm with multiple constraints. Unlike conventional segmentation algorithms, the proposed method does not classify the textures themselves but instead classifies uniform-texture regions and boundary regions. Moreover, the overall algorithm does not require a predefined training set of the kind used by other learning algorithms such as conventional SVMs. During the process, the images are first restored by removing high-spatial-frequency noise, and various-order statistics of the textures are measured within a sliding two-dimensional window. The K-means algorithm is used to initialise the clustering procedure by labelling part of the class members and the classifier parameters, so that at this stage both a training set and a working set are available. A non-linear S3VM is then developed to exploit both sets and classify all the regions. The convex algorithm maximises a defined cost function subject to a number of constraints. The algorithm has been applied to combinations of several natural textures. It is demonstrated that the algorithm is robust, with negligible misclassification error, although for complex textures there may be minor misplacement of the edges.
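The initialisation stage described above can be sketched as follows: simple per-window texture statistics are computed, and K-means labels an initial subset of windows that would then seed the semi-supervised classifier. The window size, the choice of statistics, and the use of scikit-learn are illustrative assumptions, not the author's formulation.

```python
# Hypothetical sketch of the pre-clustering stage: per-window texture
# statistics followed by K-means initialisation of the labels.
import numpy as np
from sklearn.cluster import KMeans

def window_stats(img: np.ndarray, win: int = 15) -> np.ndarray:
    """Mean, variance and a skewness proxy for each non-overlapping window."""
    feats = []
    for i in range(0, img.shape[0] - win + 1, win):
        for j in range(0, img.shape[1] - win + 1, win):
            patch = img[i:i + win, j:j + win].ravel()
            m, v = patch.mean(), patch.var()
            skew = ((patch - m) ** 3).mean() / (v ** 1.5 + 1e-9)
            feats.append((m, v, skew))
    return np.asarray(feats)

img = np.random.rand(256, 256)        # placeholder texture image
F = window_stats(img)
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(F)
# Confidently clustered windows would serve as the provisional training set;
# the remaining windows form the working set classified by the S3VM.
```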


2018 ◽  
Vol 57 (05/06) ◽  
pp. 253-260 ◽  
Author(s):  
J. Patel ◽  
Z. Siddiqui ◽  
A. Krishnan ◽  
T. Thyvalikakath

Background: Smoking is an established risk factor for oral diseases, and dental clinicians therefore routinely assess and record their patients' detailed smoking status. Researchers have successfully extracted smoking history from electronic health records (EHRs) using text mining methods, but they could not retrieve patients' smoking intensity because of its limited availability in the EHR. The electronic dental record (EDR) often holds detailed smoking information in a separate section, which allows it to be retrieved with less preprocessing.
Objective: To determine patients' detailed smoking status, based on smoking intensity, from the EDR.
Methods: First, the authors created a reference standard of 3,296 unique patients' smoking histories from the EDR, classifying patients by their smoking intensity. Next, they trained three machine learning classifiers (support vector machine, random forest, and naïve Bayes) on the training set (2,176 histories) and evaluated their performance on the test set (1,120 histories) using precision (P), recall (R), and F-measure (F). Finally, they applied the best classifier to an additional 3,114 patients' smoking histories.
Results: The support vector machine performed best at classifying patients into smokers, nonsmokers, and unknowns (P, R, F: 98%); intermittent smokers (P: 95%, R: 98%, F: 96%); past smokers (P, R, F: 89%); light smokers (P, R, F: 87%); smokers with unknown intensity (P: 76%, R: 86%, F: 81%); and intermediate smokers (P: 90%, R: 88%, F: 89%). It performed moderately at differentiating heavy smokers (P: 90%, R: 44%, F: 60%).
Conclusion: EDR data could serve as a valuable source of patients' detailed smoking information, based on smoking intensity, that may not be readily available in the EHR.
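The classification step lends itself to a standard text pipeline. The sketch below trains a linear SVM on TF-IDF features of free-text smoking-history snippets; the snippets, labels, and pipeline choices are made up for illustration and do not reflect the authors' EDR data or preprocessing.

```python
# Hypothetical sketch: linear SVM over TF-IDF features of smoking-history text.
from sklearn.pipeline import make_pipeline
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.svm import LinearSVC
from sklearn.metrics import classification_report

# Toy snippets and intensity labels, replicated to form a small data set.
texts = [
    "patient smokes 1 pack per day for 20 years",
    "denies tobacco use",
    "quit smoking 5 years ago",
    "smokes occasionally on weekends",
] * 25
labels = ["heavy", "nonsmoker", "past", "intermittent"] * 25

clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LinearSVC())
clf.fit(texts[:80], labels[:80])                                     # training split
print(classification_report(labels[80:], clf.predict(texts[80:])))   # test split
```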


2019 ◽  
Vol 47 (3) ◽  
pp. 154-170
Author(s):  
Janani Balakumar ◽  
S. Vijayarani Mohan

Purpose: Owing to the huge volume of documents available on the internet, text classification is a necessary task for handling them. To achieve optimal classification results, feature selection is used as an important stage to curtail the dimensionality of text documents by choosing suitable features. The main purpose of this research is to classify personal computer documents based on their content.
Design/methodology/approach: This paper proposes a new feature selection algorithm based on the artificial bee colony (ABCFS) to enhance text classification accuracy. The proposed ABCFS algorithm is evaluated on real and benchmark data sets and compared against existing feature selection approaches such as information gain and the χ2 statistic. To assess the efficiency of the proposed algorithm, the support vector machine (SVM) and an improved SVM classifier are used.
Findings: The experiments were conducted on real and benchmark data sets. The real data set consists of documents stored on a personal computer, and the benchmark data sets were collected from the Reuters and 20 Newsgroups corpora. The results demonstrate that the proposed feature selection algorithm improves text document classification accuracy.
Originality/value: This paper proposes the new ABCFS algorithm for feature selection, evaluates its efficiency, and improves the support vector machine. Existing work applies artificial bee colony methods to select features from structured data and offers no text feature selection algorithm of this kind, whereas here the ABCFS algorithm is used to select features from unstructured text documents. The proposed algorithm classifies documents automatically based on their content.
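To make the wrapper idea concrete, the sketch below scores candidate feature subsets with a cross-validated SVM, the kind of fitness function a bee-colony search such as ABCFS would optimize. The search loop shown is a greatly simplified random neighbourhood search standing in for the ABC phases, and the corpus, vocabulary size, and parameters are assumptions; it downloads the 20 Newsgroups data through scikit-learn.

```python
# Hypothetical sketch: SVM-based fitness for feature-subset selection.
# The loop below is NOT the authors' ABC algorithm, only a simple stand-in.
import numpy as np
from sklearn.datasets import fetch_20newsgroups
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.model_selection import cross_val_score
from sklearn.svm import LinearSVC

cats = ["sci.space", "rec.autos"]
data = fetch_20newsgroups(subset="train", categories=cats,
                          remove=("headers", "footers"))
X = TfidfVectorizer(max_features=300).fit_transform(data.data)
y = data.target

def fitness(mask: np.ndarray) -> float:
    """Cross-validated SVM accuracy using only the selected feature columns."""
    cols = np.flatnonzero(mask)
    if cols.size == 0:
        return 0.0
    return cross_val_score(LinearSVC(), X[:, cols], y, cv=3).mean()

rng = np.random.default_rng(0)
best = rng.random(X.shape[1]) < 0.5          # random initial feature subset
best_fit = fitness(best)
for _ in range(20):                          # stand-in for the bee phases
    cand = best.copy()
    flip = rng.integers(X.shape[1], size=10) # perturb a few features
    cand[flip] = ~cand[flip]
    f = fitness(cand)
    if f > best_fit:
        best, best_fit = cand, f
print(f"selected {best.sum()} features, CV accuracy {best_fit:.3f}")
```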


2019 ◽  
Vol 10 (21) ◽  
pp. 5090-5098 ◽  
Author(s):  
Wei Wang ◽  
Mingcui Ding ◽  
Xiaoran Duan ◽  
Xiaolei Feng ◽  
Pengpeng Wang ◽  
...  
