Klasifikasi Untuk Menentukan Tingkat Kematangan Buah Pisang Sunpride (Classification for Determining the Ripeness Level of Sunpride Bananas)

2016 · Vol 2 · pp. 109
Author(s): Gracelia Adelaida Bere, Elizabeth Nurmiyati Tamtjita, Anggraini Kusumaningrum

YIQ (luma, in-phase, quadrature) is a color space used to transmit analog TV signals. This research tests the feasibility of using YIQ values as color features for fruit ripeness classification, evaluated on Sunpride bananas. Classification is performed with the k-NN algorithm on the YIQ values of several ripeness stages. The classification process consists of two steps: training and testing. In the training step, the RGB values of the training images are converted to YIQ and extracted as features to form the classes; in the testing step, the test image goes through the same conversion and feature extraction and is then classified with k-NN against the class features, using k=3 and k=1. A total of 120 Sunpride banana images were used as test objects. The results show that with k=3 the classification performance for the Sangat Matang (very ripe) class is 100%, the Busuk (rotten) class 66.67%, the Mengkal (half-ripe) class 60%, and the Matang (ripe) class 60%. With k=1, the Sangat Matang class reaches 100%, the Busuk class 66.67%, the Mengkal class 66.7%, and the Matang class 56.67%.
Keywords: Classification, YIQ Color Space, Sunpride Banana, Euclidean Distance, k-Nearest Neighbors
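As a minimal sketch of the pipeline described above, assuming each image is summarized by its mean Y, I, and Q values (the abstract does not state the exact feature aggregation), and using the standard NTSC RGB-to-YIQ matrix with a plain Euclidean-distance k-NN:

```python
import numpy as np

# Standard NTSC RGB -> YIQ transform (luma, in-phase, quadrature).
RGB_TO_YIQ = np.array([
    [0.299,  0.587,  0.114],
    [0.596, -0.274, -0.322],
    [0.211, -0.523,  0.312],
])

def yiq_features(rgb_image):
    """Mean Y, I, Q of an HxWx3 RGB image (floats in 0..1) as a 3-element feature vector."""
    yiq = rgb_image.reshape(-1, 3) @ RGB_TO_YIQ.T
    return yiq.mean(axis=0)

def knn_classify(test_feature, train_features, train_labels, k=3):
    """Majority vote among the k nearest training samples by Euclidean distance."""
    distances = np.linalg.norm(train_features - test_feature, axis=1)
    nearest = np.argsort(distances)[:k]
    votes = [train_labels[i] for i in nearest]
    return max(set(votes), key=votes.count)
```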

2021 · Vol 2 (02) · pp. 83-87
Author(s): Dhian Satria Yudha Kartika, Hendra Maulana

Research in digital images is expanding widely and includes several sectors. One sector in which research is currently being carried out is insects; specifically, butterflies are used as the dataset. A total of 890 butterfly images, divided into ten classes, were used as the dataset and classified based on color. The ten species are Danaus plexippus, Heliconius charitonius, Heliconius erato, Junonia coenia, Lycaena phlaeas, Nymphalis antiopa, Papilio cresphontes, Pieris rapae, Vanessa atalanta, and Vanessa cardui. Color features of the butterfly wings are extracted by converting the RGB images to the HSV color space with color quantization (CQ). CQ is added so that computation runs faster without reducing the information in the image. In the color feature extraction process, the images are converted into three pixel sizes and normalized. The dataset is normalized so that its value ranges are on the same scale. The 890 butterfly images were classified using the Support Vector Machine (SVM) method. Based on this research, the accuracy at 256x160 pixels is 72%, at 420x315 pixels 75%, and at 768x576 pixels 75%. The test at 768x576 pixels gives the highest results, with a precision of 74.6%, a recall of 72%, and an f-measure of 73.2%.
Keywords: image processing; classification; butterflies; color features; feature extraction
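A rough sketch of a color-based pipeline of this kind, under stated assumptions: a coarse joint HSV histogram stands in for the color quantization step, and the bin counts, resize policy, and SVM kernel are illustrative choices, not values from the paper:

```python
import cv2
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

def hsv_histogram_features(bgr_image, size=(768, 576), bins=(8, 4, 4)):
    """Resize, convert BGR -> HSV, and build a coarse (quantized) joint color histogram."""
    hsv = cv2.cvtColor(cv2.resize(bgr_image, size), cv2.COLOR_BGR2HSV)
    hist = cv2.calcHist([hsv], [0, 1, 2], None, list(bins), [0, 180, 0, 256, 0, 256])
    hist = hist.flatten()
    return hist / (hist.sum() + 1e-9)  # normalize so all images share the same value range

def train_color_svm(train_images, train_labels):
    """Fit an RBF SVM on the quantized HSV color features."""
    X = np.array([hsv_histogram_features(img) for img in train_images])
    scaler = StandardScaler().fit(X)
    return SVC(kernel="rbf").fit(scaler.transform(X), train_labels), scaler
```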


2021 · Vol 2074 (1) · pp. 012065
Author(s): Liujun Lin

Abstract Traditionally, the color grading of sapphire is mainly based on the naked-eye judgment of the appraiser. This judgment standard is not clear, and the result is strongly subjective, which affects the accuracy of the classification. In this study, the GEM-3000 ultraviolet-visible spectrophotometer was selected, and the color features of 180 sapphire samples were extracted and classified using the device's CIE1976 color space. The K-means algorithm was used for cluster analysis of 140 samples, the separability of the color-space features of different color grades was verified, and the center sample of each color grade was obtained. The Euclidean distances between the remaining 40 samples and these centers were then calculated, the color-grade prediction label of each sample was determined, and the sapphire color was automatically classified on this basis. The experimental results show that the accuracy of sapphire color classification using the above method is 97.5%, which confirms the effectiveness and accuracy of this artificial intelligence method for sapphire color classification.
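A compact sketch of the clustering-then-nearest-center scheme; the number of color grades (n_grades=5 here) is a placeholder, since the abstract does not state how many grades were used:

```python
import numpy as np
from sklearn.cluster import KMeans

# lab_train: (140, 3) CIE L*a*b* coordinates from the spectrophotometer
# lab_test:  (40, 3) held-out samples to be graded
def grade_by_color(lab_train, lab_test, n_grades=5):
    """Cluster the training colors into grade centers, then assign each test sample
    to the grade whose center is nearest in Euclidean distance."""
    km = KMeans(n_clusters=n_grades, n_init=10, random_state=0).fit(lab_train)
    centers = km.cluster_centers_
    dists = np.linalg.norm(lab_test[:, None, :] - centers[None, :, :], axis=2)
    return dists.argmin(axis=1)  # predicted grade index for each test sample
```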


Diagnostics · 2021 · Vol 11 (3) · pp. 574
Author(s): Gennaro Tartarisco, Giovanni Cicceri, Davide Di Pietro, Elisa Leonardi, Stefania Aiello, ...

In the past two decades, several screening instruments were developed to detect toddlers who may be autistic, in both clinical and unselected samples. Among others, the Quantitative CHecklist for Autism in Toddlers (Q-CHAT) is a quantitative and normally distributed measure of autistic traits that demonstrates good psychometric properties in different settings and cultures. Recently, machine learning (ML) has been applied to behavioral science to improve the classification performance of autism screening and diagnostic tools, but mainly in children, adolescents, and adults. In this study, we used ML to investigate the accuracy and reliability of the Q-CHAT in discriminating young autistic children from non-autistic children. Five different ML algorithms (random forest (RF), naïve Bayes (NB), support vector machine (SVM), logistic regression (LR), and K-nearest neighbors (KNN)) were applied to investigate the complete set of Q-CHAT items. Our results showed that ML achieved an overall accuracy of 90%, and the SVM was the most effective, being able to classify autism with 95% accuracy. Furthermore, using the SVM–recursive feature elimination (RFE) approach, we selected a subset of 14 items ensuring 91% accuracy, while 83% accuracy was obtained from the 3 best discriminating items common to ours and the previously reported Q-CHAT-10. This evidence confirms the high performance and cross-cultural validity of the Q-CHAT and supports the application of ML to create shorter and faster versions of the instrument, maintaining high classification accuracy, to be used as a quick, easy, and high-performance tool in primary-care settings.
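A sketch of an SVM-RFE item-selection step of the kind described, assuming X holds the per-child Q-CHAT item scores as a NumPy array and y the diagnostic labels; the cross-validation scheme is an illustrative choice, not the study's protocol:

```python
from sklearn.feature_selection import RFE
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

def select_qchat_items(X, y, n_items=14):
    """Rank Q-CHAT items with a linear SVM and keep the n_items most discriminative ones."""
    svm = SVC(kernel="linear")
    rfe = RFE(estimator=svm, n_features_to_select=n_items).fit(X, y)
    mask = rfe.support_                       # boolean mask of retained items
    accuracy = cross_val_score(svm, X[:, mask], y, cv=5).mean()
    return mask, accuracy
```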


Agriculture · 2020 · Vol 11 (1) · pp. 6
Author(s): Ewa Ropelewska

The aim of this study was to evaluate the usefulness of the texture and geometric parameters of the endocarp (pit) for distinguishing different cultivars of sweet cherries using image analysis. Textures were computed from images converted to individual color channels, together with the geometric parameters of the endocarp (pits) of sweet cherry ‘Kordia’, ‘Lapins’, and ‘Büttner’s Red’. For the set combining the selected textures from all color channels, the accuracy reached 100% when comparing ‘Kordia’ vs. ‘Lapins’ and ‘Kordia’ vs. ‘Büttner’s Red’ for all classifiers. The pits of ‘Kordia’ and ‘Lapins’, as well as ‘Kordia’ and ‘Büttner’s Red’, were also 100% correctly discriminated by discriminative models built separately for the RGB, Lab and XYZ color spaces, the G, L and Y color channels, and models combining selected textural and geometric features. For discriminating ‘Lapins’ and ‘Büttner’s Red’ pits, slightly lower accuracies were determined: up to 93% for models built on textures selected from all color channels, 91% for the RGB color space, 92% for the Lab and XYZ color spaces, 84% for the G and L color channels, 83% for the Y channel, 94% for geometric features, and 96% for combined textural and geometric features.
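A sketch of one way to build such per-channel texture descriptors, assuming gray-level co-occurrence matrix (GLCM) statistics computed per RGB channel plus two illustrative geometric parameters; the actual texture set and color channels used in the study are broader:

```python
import numpy as np
from skimage.feature import graycomatrix, graycoprops

def channel_textures(channel_8bit):
    """GLCM texture statistics for a single 8-bit color channel of a pit image."""
    glcm = graycomatrix(channel_8bit, distances=[1], angles=[0, np.pi / 2],
                        levels=256, symmetric=True, normed=True)
    props = ["contrast", "dissimilarity", "homogeneity", "energy", "correlation"]
    return np.array([graycoprops(glcm, p).mean() for p in props])

def pit_feature_vector(rgb_image_uint8, area, perimeter):
    """Concatenate per-channel textures with simple geometric parameters of the pit."""
    textures = np.concatenate([channel_textures(rgb_image_uint8[:, :, c]) for c in range(3)])
    return np.concatenate([textures, [area, perimeter]])
```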


2021 · Vol 13 (6) · pp. 1211
Author(s): Pan Fan, Guodong Lang, Bin Yan, Xiaoyan Lei, Pengju Guo, ...

In recent years, many agriculture-related problems have been evaluated with the integration of artificial intelligence techniques and remote sensing systems. The rapid and accurate identification of apple targets under variable illumination in an unstructured natural orchard is still a key challenge for the picking robot's vision system. In this paper, by combining local image features and color information, we propose a pixel patch segmentation method based on a gray-centered red–green–blue (RGB) color space to address this issue. Different from existing methods, this approach introduces a novel color feature selection scheme that accounts for the influence of illumination and shadow in apple images. By exploring both color features and local variation in apple images, the proposed method can effectively distinguish apple fruit pixels from other pixels. Compared with classical segmentation methods and conventional clustering algorithms, as well as popular deep-learning segmentation algorithms, the proposed method segments apple images more accurately and effectively. The proposed method was tested on 180 apple images. It achieved an average accuracy of 99.26%, a recall of 98.69%, a false positive rate of 0.06%, and a false negative rate of 1.44%. Experimental results demonstrate the outstanding performance of the proposed method.
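A deliberately simplified sketch of segmentation in a gray-centered RGB space: each pixel's gray level is subtracted so the achromatic axis maps to the origin, and red-dominant offsets are kept as candidate fruit pixels. The threshold and the decision rule are illustrative assumptions; the published method additionally works on pixel patches and uses local variation:

```python
import numpy as np

def gray_centered(rgb):
    """Shift RGB pixels so the gray (achromatic) axis passes through the origin."""
    rgb = rgb.astype(np.float32)
    return rgb - rgb.mean(axis=2, keepdims=True)  # per-pixel (R', G', B') color offsets

def apple_mask(rgb, red_offset_thresh=20.0):
    """Keep pixels whose gray-centered red component is strongly positive."""
    offsets = gray_centered(rgb)
    return offsets[:, :, 0] > red_offset_thresh
```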


2021 · Vol 13 (4) · pp. 547
Author(s): Wenning Wang, Xuebin Liu, Xuanqin Mou

For both traditional classification and currently popular deep learning methods, the limited-sample classification problem is very challenging, and the lack of samples is an important factor affecting classification performance. Our work includes two aspects. First, unsupervised data augmentation of all hyperspectral samples not only greatly improves the classification accuracy through the newly added training samples, but also further improves the accuracy of the classifier by optimizing the augmented test samples. Second, an effective spectral structure extraction method is designed, and the extracted spectral structure features yield better classification accuracy than the raw spectral features.
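The abstract does not detail the augmentation itself, so the sketch below shows only a generic, unsupervised spectral augmentation for illustration: each pixel's spectrum is averaged with its spatial neighbors to create an extra view of the data. This stands in for, and should not be read as, the authors' method:

```python
import numpy as np

def neighbor_averaged_view(cube, radius=1):
    """Create an augmented copy of a hyperspectral cube (H x W x bands) in which every
    pixel's spectrum is replaced by the mean spectrum of its (2*radius+1)^2 neighborhood."""
    h, w, _ = cube.shape
    padded = np.pad(cube, ((radius, radius), (radius, radius), (0, 0)), mode="edge")
    out = np.zeros_like(cube, dtype=np.float64)
    for dy in range(2 * radius + 1):
        for dx in range(2 * radius + 1):
            out += padded[dy:dy + h, dx:dx + w, :]
    return out / (2 * radius + 1) ** 2
```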


2021 · Vol 13 (5) · pp. 939
Author(s): Yongan Xue, Jinling Zhao, Mingmei Zhang

To accurately extract cultivated land boundaries from high-resolution remote sensing imagery, an improved watershed segmentation algorithm was proposed herein based on a combination of pre- and post-improvement procedures. Image contrast enhancement was used as the pre-improvement, while the color distance in the Commission Internationale de l'Éclairage (CIE) color spaces, including Lab and Luv, was used as the regional similarity measure for region merging as the post-improvement. Furthermore, the area relative error criterion (δA), the pixel quantity error criterion (δP), and the consistency criterion (Khat) were used to evaluate the image segmentation accuracy. Region merging in the Red–Green–Blue (RGB) color space was selected as the baseline against which the proposed algorithm was compared for extracting cultivated land boundaries. The validation experiments were performed on a subset of a Chinese Gaofen-2 (GF-2) remote sensing image with a coverage area of 0.12 km2. The results showed the following: (1) The contrast-enhanced image exhibited an obvious gain in segmentation effect and time efficiency using the improved algorithm; the time efficiency increased by 10.31%, 60.00%, and 40.28% in the RGB, Lab, and Luv color spaces, respectively. (2) The optimal segmentation and merging scale parameters in the RGB, Lab, and Luv color spaces were a minimum area C of 2000, 1900, and 2000 and a color difference D of 1000, 40, and 40, respectively. (3) The algorithm improved the time efficiency of cultivated land boundary extraction in the Lab and Luv color spaces by 35.16% and 29.58%, respectively, compared to the RGB color space. Compared to the RGB color space, the extraction accuracy measured by δA, δP, and Khat improved by 76.92%, 62.01%, and 16.83% in the Lab color space, and by 55.79%, 49.67%, and 13.42% in the Luv color space. (4) In terms of visual comparison, time efficiency, and segmentation accuracy, the overall extraction effect of the proposed algorithm was clearly better than that of the RGB color space-based algorithm, and the established accuracy evaluation indicators were shown to be consistent with the visual evaluation. (5) The proposed method showed satisfactory transferability on a wider test area with a coverage area of 1 km2. In addition, the proposed method performs region merging in the CIE color space on the simulated immersion watershed segmentation results after image contrast enhancement. It is a useful attempt to extend the watershed segmentation algorithm to cultivated land boundary extraction and provides a reference for enhancing the watershed algorithm.
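A rough sketch of the overall idea under stated assumptions: a watershed over the gradient image, followed by one pass of merging adjacent regions whose mean CIE Lab colors are closer than a threshold. The marker generation, the merge threshold, and the single-pass union-find merge are illustrative simplifications, not the paper's exact procedure:

```python
import numpy as np
from scipy import ndimage as ndi
from skimage import color, filters, segmentation

def watershed_then_lab_merge(rgb, merge_thresh=10.0):
    """Watershed segmentation followed by merging of adjacent regions with similar Lab color."""
    gradient = filters.sobel(color.rgb2gray(rgb))
    markers, _ = ndi.label(gradient < 0.5 * gradient.mean())   # crude flat-area markers
    labels = segmentation.watershed(gradient, markers)

    lab = color.rgb2lab(rgb)
    regions = np.unique(labels)
    means = {int(r): lab[labels == r].mean(axis=0) for r in regions}  # mean Lab per region

    # Collect horizontally and vertically adjacent region pairs.
    pairs = set(zip(labels[:, :-1].ravel(), labels[:, 1:].ravel()))
    pairs |= set(zip(labels[:-1, :].ravel(), labels[1:, :].ravel()))

    parent = {int(r): int(r) for r in regions}
    def find(r):
        while parent[r] != r:
            r = parent[r]
        return r

    # One merge pass: union regions whose current representatives are close in Lab.
    for a, b in pairs:
        ra, rb = find(int(a)), find(int(b))
        if ra != rb and np.linalg.norm(means[ra] - means[rb]) < merge_thresh:
            parent[ra] = rb
    return np.vectorize(lambda r: find(int(r)))(labels)
```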


Author(s): HUA YANG, MASAAKI KASHIMURA, NORIKADU ONDA, SHINJI OZAWA

This paper describes a new system for extracting and classifying bibliography regions from the color image of a book cover. The system consists of three major components: preprocessing, color space segmentation, and text region extraction and classification. Preprocessing extracts the edge lines of the book, geometrically corrects the input image, and segments it into front cover, spine, and back cover. As in all color image processing research, color space segmentation is an essential and important step here. Instead of the RGB color space, the HSI color space is used in this system. The color space is first segmented into achromatic and chromatic regions, and both regions are then segmented further to complete the color space segmentation. Text region extraction and classification follow. After detecting fundamental features (stroke width and local label width), text regions are determined. By comparing the text regions on the front cover with those on the spine, all extracted text regions are classified into suitable bibliography categories: author, title, publisher, and other information, without applying OCR.
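A small sketch of the HSI conversion and the achromatic/chromatic split, assuming the standard geometric HSI formulas and a saturation threshold picked purely for illustration:

```python
import numpy as np

def rgb_to_hsi(rgb):
    """Convert normalized RGB (floats in 0..1) to hue (degrees), saturation, intensity."""
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    intensity = (r + g + b) / 3.0
    saturation = 1.0 - np.minimum(np.minimum(r, g), b) / np.maximum(intensity, 1e-9)
    num = 0.5 * ((r - g) + (r - b))
    den = np.sqrt((r - g) ** 2 + (r - b) * (g - b)) + 1e-9
    theta = np.degrees(np.arccos(np.clip(num / den, -1.0, 1.0)))
    hue = np.where(b <= g, theta, 360.0 - theta)
    return hue, saturation, intensity

def achromatic_mask(rgb, sat_thresh=0.1):
    """Low-saturation pixels are treated as achromatic (black/gray/white regions)."""
    _, saturation, _ = rgb_to_hsi(rgb)
    return saturation < sat_thresh
```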


2013 · Vol 2013 · pp. 1-8
Author(s): Teng Li, Huan Chang, Jun Wu

This paper presents a novel algorithm to numerically decompose mixed signals in a collaborative way, given supervision of the labels that each signal contains. The decomposition is formulated as an optimization problem incorporating a nonnegativity constraint. A nonnegative data factorization solution is presented to yield the decomposed results. It is shown that the optimization is efficient and decreases the objective function monotonically. Such a decomposition algorithm can be applied to multilabel training samples for pattern classification. Experimental results on real data show that the proposed algorithm can significantly improve multilabel image classification performance under weak supervision.
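A minimal sketch of the supervised, nonnegative flavor of such a decomposition, assuming a fixed nonnegative dictionary D with one basis column per label; the dictionary learning step and the collaborative formulation of the paper are not reproduced here:

```python
import numpy as np
from scipy.optimize import nnls

def decompose_with_labels(x, D, active_labels):
    """Express the mixed signal x as a nonnegative combination of only the dictionary
    columns that belong to its known (supervised) labels."""
    D_active = D[:, active_labels]            # restrict to the labels present in x
    coeffs, residual = nnls(D_active, x)      # nonnegative least squares
    contributions = D_active * coeffs         # per-label contribution to the mixture
    return coeffs, contributions, residual
```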

