Automated MS-Lesion Segmentation by K-Nearest Neighbor Classification

2008 ◽  
Author(s):  
Petronella Anbeek ◽  
Koen L. Vincken ◽  
Max A. Viergever

This paper proposes a new method for fully automated multiple sclerosis (MS) lesion segmentation in cranial magnetic resonance (MR) imaging. The algorithm uses the T1-weighted and fluid-attenuated inversion recovery (FLAIR) scans and is based on the K-Nearest Neighbor (KNN) classification technique. The data were acquired at the Children's Hospital Boston (CHB) and the University of North Carolina (UNC). Manual segmentations composed by a human expert at CHB were used to train the KNN classifier. The method uses voxel location and signal intensity information to determine, for each voxel, the probability of being a lesion, thus generating probabilistic segmentation images. Binary segmentations are derived by applying a threshold to the probabilistic images. Automatic segmentations were performed on a set of test images and compared with manual segmentations from a CHB and a UNC expert rater. Furthermore, a combined segmentation was composed from the segmentations of different algorithms and used for evaluation. The proposed method shows good resemblance to the segmentations of the CHB rater. High specificity but lower sensitivity was observed in comparison with the combined segmentations. Over- and undersegmentation can easily be corrected in this procedure by varying the threshold on the probabilistic segmentation image. The proposed method offers an automated and fully reproducible approach that is accurate and applicable to standard clinical MR images.
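The probabilistic-then-thresholded workflow described above maps naturally onto a standard KNN classifier. The following is a minimal sketch, assuming per-voxel feature rows of spatial coordinates plus T1 and FLAIR intensities with binary lesion labels from a manual segmentation; the feature set, k, and threshold are illustrative, not the authors' exact settings.

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

def train_lesion_knn(features_train, labels_train, k=15):
    """Fit KNN on per-voxel rows: (x, y, z, T1 intensity, FLAIR intensity)."""
    knn = KNeighborsClassifier(n_neighbors=k)
    knn.fit(features_train, labels_train)   # labels: 1 = lesion, 0 = background
    return knn

def segment(knn, features_test, volume_shape, threshold=0.5):
    """Probabilistic map from neighbor voting, then a thresholded binary map."""
    prob = knn.predict_proba(features_test)[:, 1]          # P(lesion) per voxel
    prob_map = prob.reshape(volume_shape)                   # back onto the image grid
    binary_map = (prob_map >= threshold).astype(np.uint8)   # vary threshold to trade over-/undersegmentation
    return prob_map, binary_map
```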

2013 ◽  
Vol 3 ◽  
pp. 462-469 ◽  
Author(s):  
Martijn D. Steenwijk ◽  
Petra J.W. Pouwels ◽  
Marita Daams ◽  
Jan Willem van Dalen ◽  
Matthan W.A. Caan ◽  
...  

Author(s):  
Aldi Nugroho ◽  
Osvaldo Richie Riady ◽  
Alexander Calvin ◽  
Derwin Suhartono

Students are an important asset of an educational institution, so attention must be paid to whether they graduate on time. The rise and fall in the percentage of students who succeed in classroom learning is one important element in assessing university accreditation. It is therefore necessary to monitor and evaluate teaching and learning activities, here using KNN classification. By processing student complaint data and examining previous learning results, institutions can obtain information important for higher education needs. To predict graduation rates from complaints, this study applies the K-Nearest Neighbor classification algorithm, grouping the data with k = 1, k = 2, and k = 3 using the smallest value possible. Experiments with the KNN method gave clearly interpretable results and fairly good accuracy. The experiments suggest that fewer complaints per student correspond to a lower rate of non-graduating students at the university, which ultimately supports good accreditation.
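As a rough illustration of the k = 1, 2, 3 comparison described above, the sketch below assumes a tabular dataset in which each row holds a student's complaint counts and prior learning results together with a graduated-on-time label; the file name, column names, and split are hypothetical.

```python
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.metrics import accuracy_score

df = pd.read_csv("student_complaints.csv")        # hypothetical file
X = df.drop(columns=["graduated_on_time"])        # complaint and learning features
y = df["graduated_on_time"]                       # 1 = graduated on time, 0 = did not
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)

for k in (1, 2, 3):                               # compare the three k values
    model = KNeighborsClassifier(n_neighbors=k).fit(X_tr, y_tr)
    print(f"k={k}: accuracy={accuracy_score(y_te, model.predict(X_te)):.3f}")
```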


2016 ◽  
Vol 13 (5) ◽  
Author(s):  
Malik Yousef ◽  
Waleed Khalifa ◽  
Loai AbdAllah

Summary: The performance of many learning and data mining algorithms depends critically on suitable metrics to assess efficiency over the input space. Learning a suitable metric from examples may therefore be the key to successful application of these algorithms. We have demonstrated that k-nearest neighbor (kNN) classification can be significantly improved by learning a distance metric from labeled examples. A clustering ensemble is used to define the distance between points according to how often they co-cluster. This distance is then used within the framework of the kNN algorithm to define a classifier named the ensemble clustering kNN classifier (EC-kNN). In many instances in our experiments we achieved the highest accuracy, while SVM failed to perform as well. In this study, we compare the performance of a two-class classifier using EC-kNN with different one-class and two-class classifiers. The comparison was applied to seven different plant microRNA species considering eight feature selection methods. The averaged results show that EC-kNN outperforms all other methods employed here as well as previously published results for the same data. In conclusion, this study shows that the chosen classifier achieves high performance when the distance metric is carefully chosen.
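The co-clustering distance can be sketched as follows: run several clusterings, count how often each pair of points lands in the same cluster, and feed the resulting dissimilarity to a kNN classifier with a precomputed metric. This is only an approximation of the idea, not the authors' exact EC-kNN; the use of k-means, the number of runs, and clustering train and test points together are assumptions.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.neighbors import KNeighborsClassifier

def co_association_distance(X, n_runs=20, n_clusters=8, seed=0):
    """1 - (fraction of clusterings in which two points share a cluster)."""
    n = X.shape[0]
    co = np.zeros((n, n))
    for r in range(n_runs):
        labels = KMeans(n_clusters=n_clusters, n_init=10,
                        random_state=seed + r).fit_predict(X)
        co += (labels[:, None] == labels[None, :])   # count co-cluster events
    return 1.0 - co / n_runs

def ec_knn_predict(X_train, y_train, X_test, k=5):
    """kNN over the co-association distance (illustrative, transductive sketch)."""
    X_all = np.vstack([X_train, X_test])
    D = co_association_distance(X_all)
    n_tr = X_train.shape[0]
    knn = KNeighborsClassifier(n_neighbors=k, metric="precomputed")
    knn.fit(D[:n_tr, :n_tr], y_train)       # train-to-train distances
    return knn.predict(D[n_tr:, :n_tr])     # test-to-train distances
```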


2019 ◽  
Vol 1280 ◽  
pp. 022025
Author(s):  
W Uriawan ◽  
A Kodir ◽  
A R Atmadja ◽  
F Fathurrahman ◽  
M A Ramdhani

2021 ◽  
Vol 12 (2) ◽  
pp. 91
Author(s):  
Zilvanhisna Emka Fitri ◽  
Lalitya Nindita Sahenda ◽  
Pramuditha Shinta Dewi Puspitasari ◽  
Prawidya Destarianto ◽  
Dyah Laksito Rukmi ◽  
...  

Acute Respiratory Infection (ARI) is an infectious disease. One of the performance indicators of infectious disease control and handling programs is disease discovery. However, common problems are the limited number of medical analysts, the large number of patients, and the analysts' varying experience in identifying bacteria, so that examination takes relatively long. Based on these problems, an automatic and accurate classification system for the bacteria that cause Acute Respiratory Infection (ARI) was created. The processing pipeline consists of image preprocessing (color conversion and contrast stretching), segmentation, feature extraction, and KNN classification. The parameters used are bacterial count, area, perimeter, and shape factor. The best training-to-test data ratio is 90%:10% of 480 samples. The KNN classification method is very good for classifying bacteria. The highest accuracy is 91.67%, precision is 92.4%, and recall is 91.7% across three variations of K values, namely K = 3, K = 5, and K = 7.
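A minimal sketch of the described pipeline (color conversion, contrast stretching, segmentation, shape features, then KNN) is given below; the Otsu thresholding, the averaging of region features, and the library choices are assumptions rather than the paper's exact steps.

```python
import numpy as np
from skimage import color, exposure, filters, measure
from sklearn.neighbors import KNeighborsClassifier

def bacteria_features(rgb_image):
    """Return [count, mean area, mean perimeter, mean shape factor] for one image."""
    gray = color.rgb2gray(rgb_image)                        # color conversion
    stretched = exposure.rescale_intensity(gray)            # contrast stretching
    mask = stretched > filters.threshold_otsu(stretched)    # simple segmentation (assumed)
    regions = measure.regionprops(measure.label(mask))      # assumes at least one region
    areas = [r.area for r in regions]
    perims = [r.perimeter for r in regions]
    shape_factors = [4 * np.pi * a / (p ** 2 + 1e-9) for a, p in zip(areas, perims)]
    return [len(regions), np.mean(areas), np.mean(perims), np.mean(shape_factors)]

# X = np.array([bacteria_features(img) for img in images]); y = bacterial class labels
# With a 90%:10% split, the k = 3, 5, 7 comparison reduces to:
# for k in (3, 5, 7):
#     KNeighborsClassifier(n_neighbors=k).fit(X_train, y_train).score(X_test, y_test)
```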


2020 ◽  
Vol 9 (2) ◽  
pp. 277
Author(s):  
Ayu Made Surya Indra Dewi ◽  
Ida Bagus Gede Dwidasmara

Obesity or overweight is a health problem that can affect anyone. Research reported in several journals has found that obesity can be influenced by many factors, the most dominant being lifestyle and diet. Obesity should not only be considered a consequence of an unhealthy lifestyle; it is a disease that can lead to other dangerous diseases. It is therefore important to know the level of obesity in order to enable early prevention. To determine the level of obesity, the K-Nearest Neighbor (KNN) classification method is used. In this study, classification was carried out with 16 test parameters, namely Gender, Age, Height, Weight, Family History With Overweight, FAVC, FCVC, NCP, CAEC, Smoke, CH2O, SCC, FAF, TUE, CALC, and Mtrans, and one class attribute, namely NObesity. Tests carried out using the KNN algorithm yielded 78.98% accuracy with a value of k = 2.
Keywords: Obesity, KNN, Classification
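A hedged sketch of this classification setup follows, assuming a CSV with the 16 listed attributes plus the NObesity class column; the column handling, encoding, and scaling choices are illustrative and not necessarily those of the study.

```python
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler, OrdinalEncoder
from sklearn.pipeline import make_pipeline
from sklearn.compose import make_column_transformer
from sklearn.neighbors import KNeighborsClassifier

df = pd.read_csv("obesity_levels.csv")                     # hypothetical file
X, y = df.drop(columns=["NObesity"]), df["NObesity"]
categorical = X.select_dtypes(include="object").columns    # e.g. Gender, CAEC, CALC, Mtrans
numeric = X.columns.difference(categorical)                 # e.g. Age, Height, Weight

pipe = make_pipeline(
    make_column_transformer(
        (OrdinalEncoder(handle_unknown="use_encoded_value", unknown_value=-1), categorical),
        (StandardScaler(), numeric),
    ),
    KNeighborsClassifier(n_neighbors=2),                    # k = 2 as reported
)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
print("accuracy:", pipe.fit(X_tr, y_tr).score(X_te, y_te))
```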


2019 ◽  
Vol 6 (6) ◽  
pp. 665
Author(s):  
Aditya Hari Bawono ◽  
Ahmad Afif Supianto

Classification is one of the important methods in data mining. One of the most popular and fundamental classification methods is k-nearest neighbor (kNN). In kNN, the relationship between samples is measured by their degree of similarity, represented as a distance. In many cases, especially on large data, some samples will have the same distance yet may not be selected as neighbors, so the choice of the parameter k greatly affects the kNN classification result. In addition, the sorting phase of kNN becomes a computational problem when performed on large data. Classifying large data therefore calls for a more accurate and efficient method. Dependent Nearest Neighbor (dNN), the method proposed in this study, uses no parameter k and has no sample-sorting phase. The experimental results show that dNN runs about 3 times faster than kNN, and its accuracy is 13% better than kNN.
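The abstract does not spell out how dNN selects neighbors, so the sketch below only illustrates the kNN sorting cost it targets: a full sort orders every distance, whereas a partial selection such as np.argpartition separates the k smallest without ordering the rest, which is the kind of work dNN aims to remove entirely. Shapes and k are illustrative.

```python
import numpy as np

def knn_vote(X_train, y_train, x, k=5):
    """Classify x by majority vote among its k nearest training points."""
    dists = np.linalg.norm(X_train - x, axis=1)
    # A full np.argsort(dists)[:k] orders every distance (O(n log n));
    # np.argpartition only separates the k smallest (roughly O(n)).
    nearest = np.argpartition(dists, k)[:k]
    labels, counts = np.unique(y_train[nearest], return_counts=True)
    return labels[np.argmax(counts)]
```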

