Neural Network based on GLCM, and CIE L*a*b* Color Space to Classify Tomatoes Maturity

Author(s): Eri Eli Lavindi, Edi Jaya Kusuma, Guruh Fajar Shidik, Ricardus Anggi Pramunendar, Ahmad Zainul Fanani, ...


2019, Vol 2019 (1), pp. 153-158
Author(s): Lindsay MacDonald

We investigated how well a multilayer neural network could implement the mapping between two trichromatic color spaces, specifically from camera R,G,B to tristimulus X,Y,Z. For training the network, a set of 800,000 synthetic reflectance spectra was generated. For testing the network, a set of 8,714 real reflectance spectra was collated from instrumental measurements on textiles, paints and natural materials. Various network architectures were tested, with both linear and sigmoidal activations. Results show that over 85% of all test samples had color errors of less than 1.0 ΔE2000 units, much more accurate than could be achieved by regression.
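As a minimal sketch of this kind of mapping (not the authors' setup), the snippet below trains a small multilayer network with one sigmoidal hidden layer to map R,G,B triples to X,Y,Z; the 3x3 matrix and mild nonlinearity that generate the toy training pairs are assumptions standing in for the 800,000 synthetic reflectance spectra used in the paper.

```python
# A minimal sketch, assuming toy data: a small MLP with sigmoidal hidden units
# learning an R,G,B -> X,Y,Z mapping. The 3x3 matrix and the mild nonlinearity
# below are placeholders standing in for the paper's synthetic spectra.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)

# Hypothetical camera-to-tristimulus relationship used only to generate toy pairs.
M = np.array([[0.49, 0.31, 0.20],
              [0.18, 0.81, 0.01],
              [0.00, 0.01, 0.99]])
rgb = rng.uniform(0.0, 1.0, size=(20_000, 3))
xyz = (rgb ** 1.1) @ M.T                       # toy nonlinear "ground truth"

# One sigmoidal hidden layer, one of the architecture families the study tested.
net = MLPRegressor(hidden_layer_sizes=(30,), activation="logistic",
                   max_iter=500, random_state=0)
net.fit(rgb, xyz)

print(np.abs(net.predict(rgb[:5]) - xyz[:5]).max())   # rough sanity check
```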


Author(s): Amit Kumar Gorai, Simit Raval, Ashok Kumar Patel, Snehamoy Chatterjee, Tarini Gautam

Coal is heterogeneous in nature, so its characterization is essential before it is used for a specific purpose. The current study therefore aims to develop a machine vision system for automated coal characterization. The model was calibrated using 80 images captured from different coal samples at different angles. All the images were captured in the RGB color space and converted into five other color spaces (HSI, CMYK, L*a*b*, XYZ, grayscale) for feature extraction. The intensity component of the HSI color space was further transformed into four frequency-domain representations (discrete cosine transform, discrete wavelet transform, discrete Fourier transform, and Gabor filter) for texture feature extraction. A total of 280 image features were extracted and optimized using a step-wise linear regression-based algorithm for model development. The datasets of the optimized features were used as inputs to the model, and the corresponding coal characteristics (analyzed in the laboratory) were used as outputs. The R-squared values were found to be 0.89, 0.92, 0.92, and 0.84 for fixed carbon, ash content, volatile matter, and moisture content, respectively. The performance of the proposed artificial neural network model was also compared with those of Gaussian process regression, support vector regression, and radial basis neural network models. The study demonstrates the potential of the machine vision system for automated coal characterization.
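A hedged sketch of the multi-color-space feature extraction step (not the authors' code): one RGB image is converted into several of the listed spaces, simple per-channel first-order statistics are collected, and a single DCT of the grayscale image stands in for the four frequency transforms used in the paper; the placeholder image and the choice of statistics are assumptions for illustration only.

```python
# A rough sketch of color-space and frequency-domain feature extraction;
# HSV is used as a stand-in for HSI, and only a DCT is shown for brevity.
import numpy as np
from skimage import color
from scipy.fft import dctn

def first_order_stats(channel):
    c = np.asarray(channel, dtype=float).ravel()
    return [c.mean(), c.std(), np.median(c)]

def extract_features(rgb_img):
    feats = []
    hsv = color.rgb2hsv(rgb_img)               # stand-in for the HSI transform
    lab = color.rgb2lab(rgb_img)
    xyz = color.rgb2xyz(rgb_img)
    gray = color.rgb2gray(rgb_img)
    for space in (rgb_img / 255.0, hsv, lab, xyz):
        for ch in range(space.shape[2]):
            feats += first_order_stats(space[..., ch])
    feats += first_order_stats(gray)
    # Frequency-domain summary on the intensity image (DCT only, for brevity).
    feats += first_order_stats(np.abs(dctn(gray, norm="ortho")[:8, :8]))
    return np.array(feats)

img = (np.random.rand(64, 64, 3) * 255).astype(np.uint8)   # placeholder coal image
print(extract_features(img).shape)
```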


Author(s): Neeta Pradeep Gargote, Savitha Devaraj, Shravani Shahapure

Color image segmentation is probably the most important task in image analysis and understanding. A novel human-perception-based color image segmentation system is presented in this paper. The system uses a neural network architecture in which the neurons use a multisigmoid activation function; this activation function is the key to segmentation. The number of steps, i.e., thresholds, in the multisigmoid function depends on the number of clusters in the image. The threshold values for detecting the clusters and their labels are found automatically from the first-order derivative of the histograms of saturation and intensity in the HSI color space. The main use of the neural network here is to detect the number of objects in an image automatically and to label the objects with their mean colors. The algorithm is found to be reliable and works satisfactorily on different kinds of color images.
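As an illustration of the idea only, and under the assumption (not spelled out in the abstract) that the multisigmoid is a sum of shifted sigmoids, the snippet below builds a staircase-like activation whose steps correspond to thresholds; in the paper those thresholds come from the histogram derivatives, whereas fixed values are used here.

```python
# An illustrative guess at the "multisigmoid" construction: a sum of shifted
# sigmoids, so inputs between two thresholds settle on one step (one cluster).
import numpy as np

def multisigmoid(x, thresholds, steepness=50.0):
    """Staircase-like activation: one sigmoid step per threshold."""
    x = np.asarray(x, dtype=float)
    y = np.zeros_like(x)
    for t in thresholds:
        y += 1.0 / (1.0 + np.exp(-steepness * (x - t)))
    return y                                   # value near k means "above k thresholds"

# In the paper the thresholds come from the first-order derivative of the
# saturation and intensity histograms; fixed values are used here for brevity.
print(np.round(multisigmoid(np.linspace(0, 1, 11), [0.25, 0.5, 0.75]), 2))
```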


2021, Vol 8 (3), pp. 619
Author(s): Candra Dewi, Andri Santoso, Indriati Indriati, Nadia Artha Dewi, Yoke Kusuma Arbawa

The increasing number of people with diabetes is one of the factors behind the high number of people with diabetic retinopathy. One of the images used by ophthalmologists to identify diabetic retinopathy is a retinal photograph. In this research, diabetic retinopathy is identified automatically using retinal fundus images and the Convolutional Neural Network (CNN) algorithm, a variant of deep learning. An obstacle in the recognition process is that the color of the retina tends toward yellowish red, so the RGB color space does not produce optimal accuracy. Therefore, various color spaces were tested in this research to obtain better results. Trials using 1000 images in the RGB, HSI, YUV, and L*a*b* color spaces gave suboptimal results on balanced data, where the best accuracy was still below 50%. On unbalanced data, however, the accuracy was fairly high: 83.53% on the training data in the YUV color space, and 74.40% on the test data in all color spaces.
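A toy sketch of the preprocessing idea only, not the study's network: fundus images (random placeholders here) are converted from RGB to YUV with the BT.601 matrix before training a small Keras CNN; the architecture, image size, and labels are assumptions made purely for illustration.

```python
# Sketch: RGB -> YUV conversion followed by a small placeholder CNN.
import numpy as np
import tensorflow as tf

def rgb_to_yuv(batch):
    """BT.601 RGB -> YUV for float images in [0, 1]."""
    m = np.array([[0.299, 0.587, 0.114],
                  [-0.14713, -0.28886, 0.436],
                  [0.615, -0.51499, -0.10001]])
    return batch @ m.T

x_rgb = np.random.rand(8, 128, 128, 3).astype("float32")    # placeholder fundus images
y = np.random.randint(0, 2, size=(8, 1)).astype("float32")  # placeholder labels
x_yuv = rgb_to_yuv(x_rgb).astype("float32")

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(128, 128, 3)),
    tf.keras.layers.Conv2D(16, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Conv2D(32, 3, activation="relu"),
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.fit(x_yuv, y, epochs=1, verbose=0)
```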


Foods, 2020, Vol 9 (2), pp. 113
Author(s): Razieh Pourdarbani, Sajad Sabzi, Davood Kalantari, José Luis Hernández-Hernández, Juan Ignacio Arribas

Since different crop varieties have specific applications, it is important to properly identify each cultivar in order to avoid fake varieties being sold as genuine, i.e., fraud. Although properly trained human experts can accurately identify and classify crop varieties, computer vision systems are needed because factors such as fatigue and limited reproducibility can influence an expert's judgment and assessment. Chickpea (Cicer arietinum L.) is an important legume worldwide and has several varieties. Three chickpea varieties with a rather similar visual appearance were studied here: Adel, Arman, and Azad. The purpose of this paper is to present a computer vision system for the automatic classification of those chickpea varieties. First, segmentation was performed using a Hue Saturation Intensity (HSI) color space threshold. Next, color and textural (from the gray level co-occurrence matrix, GLCM) features were extracted from the chickpea sample images. Then, using the hybrid artificial neural network-cultural algorithm (ANN-CA), the sub-optimal combination of the five most effective features (mean of the RGB color space components, mean of the HSI color space components, entropy of the GLCM matrix at 90°, standard deviation of the GLCM matrix at 0°, and mean of the third component in the YCbCr color space) was selected as the set of discriminant features. Finally, an ANN-PSO/ACO/HS majority voting (MV) ensemble merging three different classifier outputs, namely the hybrid artificial neural network-particle swarm optimization (ANN-PSO), hybrid artificial neural network-ant colony optimization (ANN-ACO), and hybrid artificial neural network-harmony search (ANN-HS) classifiers, was used. Results showed that the ensemble ANN-PSO/ACO/HS-MV classifier reached an average classification accuracy of 99.10 ± 0.75% over the test set, after averaging 1000 random iterations.
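A small sketch of the two GLCM texture features named above (entropy at 90° and standard deviation at 0°), computed with scikit-image on a placeholder grayscale patch; the pixel distance of 1 and the normalization choices are assumptions, and the full pipeline additionally uses color-space means and the ANN-CA feature selector.

```python
# GLCM entropy at 90 degrees and standard deviation at 0 degrees (sketch).
import numpy as np
from skimage.feature import graycomatrix

def glcm_entropy_and_std(gray_uint8):
    glcm = graycomatrix(gray_uint8, distances=[1], angles=[0, np.pi / 2],
                        levels=256, symmetric=True, normed=True)
    p0, p90 = glcm[:, :, 0, 0], glcm[:, :, 0, 1]   # 0-degree and 90-degree matrices
    entropy_90 = -np.sum(p90[p90 > 0] * np.log2(p90[p90 > 0]))
    std_0 = p0.std()
    return entropy_90, std_0

patch = (np.random.rand(64, 64) * 255).astype(np.uint8)    # placeholder chickpea patch
print(glcm_entropy_and_std(patch))
```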


2010, Vol 174, pp. 28-31
Author(s): Cong Jun Cao, Qiang Jun Liu

The conversion of color spaces is a core technique of modern ICC color management, and the study of color space conversion algorithms between L*a*b* and CMYK is valuable both in theory and in application. In this paper, firstly, ECI2002 standard color target data are uniformly selected, including modeling data and testing data; secondly, models for the color space conversions from CMYK to L*a*b* and from L*a*b* to CMYK are built based on a Radial Basis Function (RBF) neural network; finally, the precision of the models is evaluated. This research indicates that the RBF neural network is suitable for color space conversion between CMYK and L*a*b*. The model-building process is simple and convenient, and the network trains quickly and yields good results. With further improvement of the modeling method, this approach to color space conversion will find broader application.
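A toy RBF-network sketch for a CMYK to L*a*b* mapping, assuming the common construction of k-means centers, Gaussian hidden units, and a linear least-squares output layer; the random training pairs and the basis width below stand in for the ECI2002 target data and whatever training scheme the paper actually uses.

```python
# Illustrative RBF network: k-means centers + Gaussian hidden layer + linear output.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
cmyk = rng.uniform(0, 1, size=(500, 4))         # placeholder CMYK patch values
lab = rng.uniform(0, 1, size=(500, 3))          # placeholder measured L*a*b* (scaled)

def rbf_design(X, centers, width):
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
    return np.exp(-d2 / (2 * width ** 2))

centers = KMeans(n_clusters=40, n_init=10, random_state=0).fit(cmyk).cluster_centers_
width = 0.3                                     # assumed basis width
H = rbf_design(cmyk, centers, width)
W, *_ = np.linalg.lstsq(H, lab, rcond=None)     # linear output-layer weights

print((rbf_design(cmyk[:3], centers, width) @ W).round(3))
```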


2020, Vol 9 (4), pp. 403-413
Author(s): Sandhopi, Lukman Zaman P.C.S.W, Yosi Kristian

As carving motifs develop, their forms and variations become increasingly diverse, which makes it difficult to determine whether a carving carries a Jepara motif. In this paper, a transfer learning method with a modified fully connected (FC) layer is used to identify characteristic Jepara motifs in a carving. The dataset is prepared in three color spaces: LUV, RGB, and YCrCb. In addition, sliding windows, non-max suppression, and heat maps are used to locate carved object regions and identify Jepara motifs. Test results across all trained weights show that Xception achieves the highest accuracy for Jepara motif classification, namely 0.95, 0.95, and 0.94 for the LUV, RGB, and YCrCb color space datasets, respectively. However, when all of these model weights are applied to the Jepara motif identification system, ResNet50 outperforms all other networks, with motif identification rates of 84%, 79%, and 80% for the LUV, RGB, and YCrCb color spaces, respectively. These results show that the system can help determine whether a carving is a Jepara carving by identifying the characteristic Jepara motifs it contains.
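A rough sketch of the detection loop described above: windows slide over the carving image, each patch gets a score (a placeholder mean-intensity score here, where the paper scores patches with a fine-tuned CNN), and overlapping hits are merged with non-max suppression; the window size, stride, and thresholds are assumptions.

```python
# Sliding-window scoring followed by simple IoU-based non-max suppression.
import numpy as np

def sliding_windows(h, w, size=64, stride=32):
    for y in range(0, h - size + 1, stride):
        for x in range(0, w - size + 1, stride):
            yield (x, y, x + size, y + size)

def iou(a, b):
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

def non_max_suppression(boxes, scores, thresh=0.3):
    keep = []
    for i in np.argsort(scores)[::-1]:          # highest-scoring boxes first
        if all(iou(boxes[i], boxes[j]) < thresh for j in keep):
            keep.append(i)
    return [boxes[i] for i in keep]

rng = np.random.default_rng(0)
img = rng.random((256, 256))                    # placeholder carving image
boxes, scores = [], []
for box in sliding_windows(*img.shape):
    score = img[box[1]:box[3], box[0]:box[2]].mean()   # stand-in for a CNN score
    if score > 0.52:
        boxes.append(box)
        scores.append(score)
print(len(non_max_suppression(boxes, np.array(scores))))
```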

