Implementasi K-Nearest Neighbor untuk Klasifikasi Bunga Dengan Ekstraksi Fitur Warna RGB

2020 ◽  
Vol 7 (6) ◽  
pp. 1129
Author(s):  
Lia Farokhah

<p class="Judul"><em>IMPLEMENTATION OF K-NEAREST NEIGHBOR FOR FLOWER CLASSIFICATION WITH EXTRACTION OF RGB COLOR FEATURES</em></p><p class="Abstrak">The era of computer vision is one in which computers are trained to see, identify and classify much as human intelligence does. Classification algorithms range from the simplest, such as K-Nearest Neighbor (KNN), to Convolutional Neural Networks. KNN is the simplest algorithm for classifying an image into a label, and it is easier to understand than other methods because it classifies based on the closest distance to other objects (neighbors). The purpose of this research is to demonstrate the weakness of the KNN method with RGB color feature extraction for objects with specific characteristics. The first experiment was conducted on two objects with similar shapes but with a striking color on one side of one object. The second experiment was conducted on two objects that have different shape characteristics despite similar colors. The four objects are the coltsfoot, daisy, dandelion and sunflower. The dataset contains 360 samples in total and poses the challenges of varying viewpoints, lighting and background clutter. The results show that combining the KNN classification method with RGB color feature extraction is weak in the first experiment, with an accuracy of about 50-60% at K = 5; the second experiment reaches an accuracy of around 90-100% at K = 5. Accuracy, precision and recall all increase when K is raised from K = 1 to K = 3 and K = 5.</p><p><strong>Keywords</strong>: <em>k-nearest neighbour, RGB, weakness, similarity, flower</em></p>
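The weakness probed in the first experiment can be sketched with a toy kNN classifier over mean-RGB features. All feature values below are invented for illustration; the paper's actual feature extraction and dataset are not reproduced here. Color-only features place the yellow dandelion and sunflower samples close together, which is exactly the confusion the study demonstrates:

```python
import math
from collections import Counter

def knn_predict(train, query, k=5):
    """Majority vote among the k training samples closest to `query`.

    `train` is a list of (feature_vector, label) pairs; the distance
    is plain Euclidean distance between mean-RGB feature vectors.
    """
    neighbours = sorted(train, key=lambda s: math.dist(s[0], query))[:k]
    votes = Counter(label for _, label in neighbours)
    return votes.most_common(1)[0][0]

# Hypothetical mean-RGB features for two yellow-ish flower classes.
# Because the colors overlap, RGB features alone separate them poorly.
train = [
    ((230, 200, 40), "dandelion"), ((225, 195, 50), "dandelion"),
    ((235, 205, 45), "dandelion"), ((228, 198, 42), "dandelion"),
    ((240, 190, 30), "sunflower"), ((238, 188, 35), "sunflower"),
    ((242, 192, 28), "sunflower"), ((236, 186, 33), "sunflower"),
]
print(knn_predict(train, (231, 199, 41), k=5))
```

Raising k from 1 to 5, as in the paper, smooths out single noisy neighbours but cannot fix features that do not separate the classes in the first place.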

Author(s):  
Wahyu Wijaya Widiyanto ◽  
Eko Purwanto ◽  
Kusrini Kusrini

Grading mango fruit quality conventionally, by the human eye, has several weaknesses: it requires more labor for sorting, perceptions of mango quality differ from person to person, and human consistency in judging quality cannot be guaranteed because people tire. This research aims to classify mango fruit quality into three grades, Super, A and B, using computer vision and the k-Nearest Neighbor algorithm. Testing with k = 9 neighbors showed an accuracy of 88.88%. Keywords: classification, GLCM, k-Nearest Neighbour, mango
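The GLCM named in the keywords (grey-level co-occurrence matrix) can be sketched in a few lines. The 4-level image below is made up for illustration, not taken from the paper's mango data; the sketch computes co-occurrence counts for a horizontal offset and one classic Haralick texture feature (contrast) that could feed a kNN classifier:

```python
def glcm(image, levels):
    """Grey-level co-occurrence matrix for a horizontal (0 deg, distance 1) offset."""
    m = [[0] * levels for _ in range(levels)]
    for row in image:
        for a, b in zip(row, row[1:]):   # each horizontally adjacent pixel pair
            m[a][b] += 1
    return m

def contrast(m):
    """Haralick contrast: (i - j)^2 weighted by the co-occurrence counts."""
    n = len(m)
    return sum((i - j) ** 2 * m[i][j] for i in range(n) for j in range(n))

# A tiny hypothetical image quantized to 4 grey levels.
img = [
    [0, 0, 1, 1],
    [0, 0, 1, 1],
    [0, 2, 2, 2],
    [2, 2, 3, 3],
]
print(contrast(glcm(img, levels=4)))
```

In practice the counts are usually normalized to probabilities and several offsets/angles are averaged before extracting features.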


2018 ◽  
Vol 8 (11) ◽  
pp. 2086 ◽  
Author(s):  
Antonio-Javier Gallego ◽  
Antonio Pertusa ◽  
Jorge Calvo-Zaragoza

We present a hybrid approach to improve the accuracy of convolutional neural networks (CNNs) without retraining the model. The proposed architecture replaces the softmax layer with a k-Nearest Neighbor (kNN) algorithm at inference time. Although this is a common technique in transfer learning, we apply it to the same domain for which the network was trained. Previous works show that neural codes (neuron activations of the last hidden layers) can benefit from the inclusion of classifiers such as support vector machines or random forests. In this work, the proposed hybrid CNN + kNN architecture is evaluated using several image datasets, network topologies and label noise levels. The results show significant accuracy improvements in the inference stage with respect to the standard CNN with noisy labels, especially on relatively large datasets such as CIFAR-100. We also verify that applying the ℓ2 norm to the neural codes is statistically beneficial for this approach.
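The hybrid inference step described above, replacing softmax with a kNN vote over ℓ2-normalized neural codes, can be sketched as follows. The activation vectors are invented for illustration; in the paper they would be penultimate-layer activations from the trained CNN:

```python
import math
from collections import Counter

def l2_normalize(v):
    """Scale a vector to unit Euclidean length (zero vectors are left as-is)."""
    n = math.sqrt(sum(x * x for x in v))
    return [x / n for x in v] if n else list(v)

def knn_infer(codes, labels, query, k=3):
    """Label a query neural code by majority vote among its k nearest codes."""
    q = l2_normalize(query)
    order = sorted(range(len(codes)),
                   key=lambda i: math.dist(l2_normalize(codes[i]), q))
    votes = Counter(labels[i] for i in order[:k])
    return votes.most_common(1)[0][0]

# Hypothetical 2-D "neural codes" for two classes, with noisy magnitudes;
# l2 normalization makes the comparison depend on direction, not scale.
codes = [[2.0, 0.1], [4.0, 0.3], [0.2, 3.0], [0.1, 5.0], [0.3, 2.5]]
labels = ["cat", "cat", "dog", "dog", "dog"]
print(knn_infer(codes, labels, [3.0, 0.2], k=3))
```

Because the codes are normalized, a mislabeled but directionally distant sample is outvoted by nearer same-class codes, which is one intuition for the robustness to label noise reported above.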


2021 ◽  
Vol 10 (8) ◽  
pp. 501
Author(s):  
Ruichen Zhang ◽  
Shaofeng Bian ◽  
Houpu Li

The digital elevation model (DEM) is one of the most significant fundamental geographical data models. The theory, methods and applications of DEMs are active research issues in geography, especially in geomorphology, hydrology, soil science and other related fields. In this paper, we improve the efficient sub-pixel convolutional neural network (ESPCN) and propose recursive sub-pixel convolutional neural networks (RSPCN) to generate higher-resolution DEMs (HRDEMs) from low-resolution DEMs (LRDEMs). Firstly, the structure of RSPCN is described in detail based on recursion theory. This paper explores the effects of different training datasets, with the self-adaptive-learning-rate Adam algorithm optimizing the model. Furthermore, the adding-“zero” boundary method is introduced into the RSPCN algorithm as a data preprocessing step, which improves the method's accuracy and convergence. Extensive experiments are conducted to train the method to optimality. Finally, comparisons are made with traditional interpolation methods, such as the bicubic, nearest-neighbor and bilinear methods. The results show that our method offers clear improvements in both accuracy and robustness, further illustrating the feasibility of deep learning methods in DEM data processing.
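The sub-pixel convolution at the core of ESPCN and RSPCN ends with a pixel-shuffle rearrangement: the last convolution emits r×r low-resolution channels, which are interleaved into one grid upscaled by factor r. A minimal plain-Python sketch with r = 2 (the networks' learned convolutions are omitted, and the values are arbitrary):

```python
def pixel_shuffle(channels, r):
    """Rearrange r*r low-resolution channels into one (H*r) x (W*r) grid.

    `channels` holds r*r arrays of shape H x W; output pixel (y, x) is read
    from channel (y % r) * r + (x % r) at low-resolution position (y//r, x//r).
    """
    h, w = len(channels[0]), len(channels[0][0])
    out = [[0] * (w * r) for _ in range(h * r)]
    for y in range(h * r):
        for x in range(w * r):
            out[y][x] = channels[(y % r) * r + (x % r)][y // r][x // r]
    return out

# Four hypothetical 2x2 channels, as the last convolution would emit for r = 2.
chans = [
    [[1, 2], [3, 4]],      # fills even rows, even columns
    [[5, 6], [7, 8]],      # even rows, odd columns
    [[9, 10], [11, 12]],   # odd rows, even columns
    [[13, 14], [15, 16]],  # odd rows, odd columns
]
for row in pixel_shuffle(chans, 2):
    print(row)
```

Applying this shuffle recursively, stage by stage, is the rough idea behind the "recursive" part of RSPCN.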


Animals ◽  
2020 ◽  
Vol 11 (1) ◽  
pp. 50
Author(s):  
Jennifer Salau ◽  
Jan Henning Haas ◽  
Wolfgang Junge ◽  
Georg Thaller

Machine learning methods have become increasingly important in animal science, and the success of an automated application using machine learning often depends on the right choice of method for the respective problem and data set. The recognition of objects in 3D data is still a widely studied topic and is especially challenging when it comes to partitioning objects into predefined segments. In this study, two machine learning approaches were utilized for the recognition of body parts of dairy cows from 3D point clouds, i.e., sets of data points in space. The low-cost off-the-shelf depth sensor Microsoft Kinect V1 has been used in various studies related to dairy cows. The 3D data were gathered from a multi-Kinect recording unit designed to record freely walking Holstein Friesian cows from both sides at three different camera positions. For the determination of the body parts head, rump, back, legs and udder, five properties of the pixels in the depth maps (row index, column index, depth value, variance, mean curvature) were used as features in the training data set. For each camera position, a k-nearest-neighbour classifier and a neural network were trained and then compared. Both methods showed small Hamming losses (between 0.007 and 0.027 for k-nearest-neighbour (kNN) classification and between 0.045 and 0.079 for neural networks) and could be considered successful regarding the classification of pixels to body parts. However, the kNN classifier was superior, reaching overall accuracies of 0.888 to 0.976, varying with the camera position. Precision and recall values associated with individual body parts ranged from 0.84 to 1 and from 0.83 to 1, respectively. Once trained, however, kNN classification incurs higher runtime costs in computational time and memory than the neural networks. The cost vs. accuracy ratio of each methodology needs to be taken into account when deciding which method should be implemented in the application.
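The Hamming loss reported above can be sketched for this kind of multi-label pixel classification. The indicator vectors below are invented and simply mark which of the five body parts (head, rump, back, legs, udder) a pixel was assigned to:

```python
def hamming_loss(y_true, y_pred):
    """Fraction of label entries that disagree, averaged over all samples."""
    wrong = total = 0
    for t, p in zip(y_true, y_pred):
        wrong += sum(a != b for a, b in zip(t, p))
        total += len(t)
    return wrong / total

# Hypothetical per-pixel indicators over the five body-part labels.
y_true = [[1, 0, 0, 0, 0], [0, 1, 0, 0, 0], [0, 0, 1, 0, 0]]
y_pred = [[1, 0, 0, 0, 0], [0, 0, 1, 0, 0], [0, 0, 1, 0, 0]]  # one pixel misassigned
print(hamming_loss(y_true, y_pred))
```

One misassigned pixel flips two indicator entries (the missed label and the wrongly predicted one), which is why losses near zero indicate near-perfect per-label agreement.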


2018 ◽  
Vol 7 (2.7) ◽  
pp. 614 ◽  
Author(s):  
M Manoj krishna ◽  
M Neelima ◽  
M Harshali ◽  
M Venu Gopala Rao

Image classification is a classical problem in image processing, computer vision and machine learning. In this paper we study image classification using deep learning, employing the AlexNet convolutional neural network architecture. Four test images are selected from the ImageNet database for the classification task. We cropped the images to various portions and conducted experiments. The results show the effectiveness of deep-learning-based image classification using AlexNet.


2019 ◽  
Vol 29 (2) ◽  
pp. 393-405 ◽  
Author(s):  
Magdalena Piotrowska ◽  
Gražina Korvel ◽  
Bożena Kostek ◽  
Tomasz Ciszewski ◽  
Andrzej Czyżewski

Abstract: Automatic classification methods, such as artificial neural networks (ANNs), the k-nearest neighbor (kNN) algorithm and self-organizing maps (SOMs), are applied to allophone analysis based on recorded speech. A list of 650 words was created for that purpose, containing positionally and/or contextually conditioned allophones. Each word was audio-video recorded by a group of 16 native and non-native speakers, from which the speech of seven native speakers and phonology experts was selected for analysis. For the purpose of the present study, a sub-list of 103 words containing the English alveolar lateral phoneme /l/ was compiled. The list includes ‘dark’ (velarized) allophonic realizations (which occur before a consonant or at the end of a word before silence) and 52 ‘clear’ allophonic realizations (which occur before a vowel), as well as voicing variants. The recorded signals were segmented into allophones and parametrized using a set of descriptors originating from the MPEG-7 standard, plus dedicated time-based parameters as well as modified MFCC features proposed by the authors. ANNs, the kNN and the SOM were employed to automatically detect the two types of allophones, and various sets of features were tested to achieve the best performance of the automatic methods. In the final experiment, a selected set of features was used for automatic evaluation of the pronunciation of dark /l/ by non-native speakers.


2016 ◽  
Vol 13 (5) ◽  
Author(s):  
Malik Yousef ◽  
Waleed Khalifa ◽  
Loai AbdAllah

Summary: The performance of many learning and data mining algorithms depends critically on suitable metrics for assessing similarity over the input space. Learning a suitable metric from examples may, therefore, be the key to successful application of these algorithms. We have demonstrated that k-nearest neighbor (kNN) classification can be significantly improved by learning a distance metric from labeled examples. A clustering ensemble is used to define the distance between points according to how often they co-cluster. This distance is then used within the framework of the kNN algorithm to define a classifier named the ensemble-clustering kNN classifier (EC-kNN). In many instances in our experiments, this approach achieved the highest accuracy while SVM failed to perform as well. In this study, we compare the performance of a two-class EC-kNN classifier with different one-class and two-class classifiers. The comparison was applied to seven different plant microRNA species considering eight feature selection methods. The averaged results show that EC-kNN outperforms all other methods employed here as well as previously published results for the same data. In conclusion, this study shows that the chosen classifier achieves high performance when the distance metric is carefully chosen.
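The co-cluster distance at the heart of EC-kNN can be sketched as follows: the fraction of ensemble runs in which two points land in the same cluster defines their similarity, and one minus that fraction serves as the distance fed to kNN. The clusterings below are hypothetical:

```python
def cocluster_distance(assignments, i, j):
    """1 minus the fraction of ensemble runs in which points i and j co-cluster.

    `assignments` is a list of clusterings; each clustering maps a point
    index to a cluster id. Points that often co-cluster get a small distance.
    """
    together = sum(run[i] == run[j] for run in assignments)
    return 1.0 - together / len(assignments)

# Three hypothetical clusterings of five points from different ensemble runs
# (e.g., k-means restarts with different seeds or k values).
runs = [
    [0, 0, 1, 1, 1],
    [0, 0, 0, 1, 1],
    [1, 1, 0, 0, 1],
]
print(cocluster_distance(runs, 0, 1))  # points 0 and 1 co-cluster in every run
print(cocluster_distance(runs, 0, 3))  # points 0 and 3 never co-cluster
```

A kNN classifier then simply ranks training points by this learned distance instead of a raw Euclidean metric over the input features.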


2018 ◽  
Vol 8 (1) ◽  
pp. 1-207 ◽  
Author(s):  
Salman Khan ◽  
Hossein Rahmani ◽  
Syed Afaq Ali Shah ◽  
Mohammed Bennamoun
