color channels
Recently Published Documents

Total documents: 109 (five years: 40)
H-index: 11 (five years: 1)

Author(s): Ewa Ropelewska, Wioletta Popińska, Kadir Sabanci, Muhammet Fatih Aslan

Abstract: The aim of this study was to build discriminative models for distinguishing the flesh of the pumpkin cultivars ‘Bambino’, ‘Butternut’, ‘Uchiki Kuri’ and ‘Orange’ based on selected textures of the outer surface of cube images. The novelty of the research lay in the use of about 2000 different textures per image. The highest total accuracy (98%) of discrimination between ‘Bambino’, ‘Butternut’, ‘Uchiki Kuri’ and ‘Orange’ was obtained for models built on textures selected from the Lab color space with the IBk classifier, and some of the individual cultivars were classified with 100% correctness. Total accuracies of up to 96% were observed for the RGB color space and 97.5% for the XYZ color space. For individual color channels, the total accuracies reached 91% for channel b, 89.5% for channel X and 89% for channel Z.
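
As a concrete illustration of this kind of per-channel texture pipeline, the sketch below converts an image to the Lab and XYZ color spaces and computes gray-level co-occurrence (GLCM) statistics for each of the nine channels. This is a minimal sketch assuming scikit-image; the file name, GLCM settings and the handful of properties are placeholders for the roughly 2000 textures used in the study.

```python
import numpy as np
from skimage import color, io
from skimage.feature import graycomatrix, graycoprops

def channel_textures(channel):
    """Compute a few GLCM texture statistics for a single color channel."""
    # Rescale the channel to the 8-bit levels expected by graycomatrix.
    ch = np.uint8(255 * (channel - channel.min()) / (np.ptp(channel) + 1e-9))
    glcm = graycomatrix(ch, distances=[1, 2], angles=[0, np.pi / 2],
                        levels=256, symmetric=True, normed=True)
    return {prop: graycoprops(glcm, prop).mean()
            for prop in ("contrast", "homogeneity", "energy", "correlation")}

rgb = io.imread("pumpkin_cube.png")[..., :3]       # hypothetical file name
lab = color.rgb2lab(rgb)                           # L, a, b channels
xyz = color.rgb2xyz(rgb)                           # X, Y, Z channels

features = {}
channels = (list(np.moveaxis(rgb.astype(float), -1, 0)) +
            list(np.moveaxis(lab, -1, 0)) +
            list(np.moveaxis(xyz, -1, 0)))
for name, channel in zip("RGBLabXYZ", channels):
    features.update({f"{name}_{p}": v for p, v in channel_textures(channel).items()})
# Feature rows from many cube images would then feed a classifier such as IBk (k-NN).
```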


Entropy, 2021, Vol. 23 (12), pp. 1644
Author(s): Huy D. Le, Tuyen Ngoc Le, Jing-Wein Wang, Yu-Shan Liang

In video processing, background initialization aims to obtain a scene without foreground objects. Recently, the background initialization problem has attracted the attention of researchers because of its real-world applications, such as video segmentation, computational photography and video surveillance. However, the problem remains challenging because of complex variations in illumination, intermittent motion, camera jitter, shadow, etc. This paper proposes a novel and effective background initialization method using singular spectrum analysis. Firstly, we extract the video’s color frames and split them into RGB color channels. Next, the RGB color channels of the video are stored as color-channel spatio-temporal data. After decomposing the color-channel spatio-temporal data by singular spectrum analysis, we obtain stable and dynamic components from different eigentriple groups. Our study indicates that the stable component contains the background image and the dynamic component contains the foreground. Finally, the color background image is reconstructed by merging the RGB channel images obtained by reshaping the stable component data. Experimental results on public scene background initialization databases show that the proposed method produces good color background images compared with those of state-of-the-art methods.
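
As a rough illustration of the stable/dynamic split described above, the sketch below stacks each color channel of a clip into a pixels-by-frames matrix and keeps the leading eigentriple as the stable (background) component. Treating this as a plain rank-1 SVD split is an assumption standing in for the paper's full singular spectrum analysis and eigentriple grouping.

```python
import numpy as np

def background_from_frames(frames):
    """frames: (T, H, W, 3) uint8 video clip -> estimated (H, W, 3) background."""
    T, H, W, C = frames.shape
    background = np.zeros((H, W, C))
    for c in range(C):
        # Spatio-temporal matrix: each column is one flattened frame of channel c.
        X = frames[..., c].reshape(T, H * W).astype(float).T      # (H*W, T)
        U, s, Vt = np.linalg.svd(X, full_matrices=False)
        # Leading eigentriple ~ stable component; remaining triples ~ dynamic part.
        stable = s[0] * np.outer(U[:, 0], Vt[0])
        background[..., c] = stable.mean(axis=1).reshape(H, W)
    return np.clip(background, 0, 255).astype(np.uint8)
```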


Water, 2021, Vol. 13 (23), pp. 3470
Author(s): Fayadh Alenezi, Ammar Armghan, Sachi Nandan Mohanty, Rutvij H. Jhaveri, Prayag Tiwari

Underwater image enhancement has received too little attention, which leaves room for further research in the field. In particular, the global background light has not been adequately addressed in the presence of backscattering. This paper presents a technique based on pixel differences between global and local patches for scene depth estimation. The pixel variance is computed from the green-red, green-blue and red-blue channel pairs in addition to the absolute mean intensity functions. The global background light is extracted from a moving average of the contribution of suspended light and the brightest pixels within the image color channels. We introduce a block-greedy algorithm in a novel Convolutional Neural Network (CNN) proposed to normalize the attenuation ratios of the different color channels and to select regions with the lowest variance. We address the discontinuity associated with underwater images by transforming both local and global pixel values. We minimize the energy in the proposed CNN via a novel Markov random field to smooth edges and improve the features of the final underwater image. A comparison of the proposed technique against existing state-of-the-art algorithms using entropy, Underwater Color Image Quality Evaluation (UCIQE), Underwater Image Quality Measure (UIQM), Underwater Image Colorfulness Measure (UICM) and Underwater Image Sharpness Measure (UISM) indicates better performance of the proposed approach in terms of both average values and consistency. On average, UICM is higher for the proposed technique than for the reference methods, which explains its better color balance. The mean (μ) values of UCIQE, UISM and UICM of the proposed method exceed those of the existing techniques. The proposed method shows improvements of 0.4%, 4.8%, 9.7%, 5.1% and 7.2% in entropy, UCIQE, UIQM, UICM and UISM, respectively, over the best existing techniques. Consequently, the dehazed images have sharp, colorful and clear features in most cases when compared with those produced by the existing state-of-the-art methods. Stable σ values reflect the consistency of the visual results in terms of color sharpness and clarity of features in most of the proposed image results when compared with the reference methods. In our own assessment, the only weakness of the proposed technique is that it applies solely to underwater images. Future research could seek to strengthen edges without enhancing color saturation.
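
One ingredient of the abstract, estimating the global background light from the brightest pixels of each color channel smoothed with a moving average, can be sketched as follows. The patch fraction, window size and averaging scheme are assumptions, not the paper's exact formulation.

```python
import numpy as np

def global_background_light(img, top_fraction=0.001, window=5):
    """img: float image in [0, 1], shape (H, W, 3). Returns one value per channel."""
    H, W, _ = img.shape
    k = max(window, int(top_fraction * H * W))
    light = []
    for c in range(3):
        channel = img[..., c].ravel()
        brightest = np.sort(channel)[-k:]            # brightest pixels of the channel
        # Moving average over the sorted brightest values to damp outliers.
        kernel = np.ones(window) / window
        smoothed = np.convolve(brightest, kernel, mode="valid")
        light.append(smoothed.mean())
    return np.array(light)
```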


2021, Vol. 2 (2), pp. 330-338
Author(s): Abdullah Beyaz

Colorimetry is of paramount importance to the agricultural industry. From a marketing point of view, agricultural products are processed and classified to meet consumer needs, and the industry therefore spends a great deal of money and time classifying each product. In the past, agricultural professionals had to use program code that was difficult to learn, and even the most basic image analysis for agricultural product classification required mastering different program libraries. Today, the LabVIEW platform offers a flexible, fast, easy-to-learn and complete image analysis infrastructure with various useful modules. For this reason, this study presents a method for color perception with a simple USB webcam and software developed for real-time color analysis on the LabVIEW platform, and evaluates its success in basic color analysis. The application was developed in LabVIEW v2019 using the NI Vision Development Module v19 and NI IMAQ v19 modules. A common assumption is that LabVIEW image analysis requires expensive IEEE 1394 cameras, but such analyses can also be carried out with USB webcams; for this purpose, the application includes a USB webcam driver that integrates seamlessly. Based on USB webcam and colorimeter measurements, the ƔR factors for the R, G and B color channels are 1.161232, 0.506287 and 0.432229; the ƔG factors are 0.519619, 1.025383 and 1.201444; and the ƔB factors are 0.600362, 0.714016 and 1.413406, respectively.
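
The reported Ɣ factors can be read, with some caution, as a 3x3 channel-mixing matrix relating webcam RGB readings to colorimeter values; the sketch below applies that interpretation. Both the interpretation and the correction step are assumptions and not the paper's LabVIEW implementation.

```python
import numpy as np

# Assumed interpretation: each row collects the Ɣ factors of one output channel.
GAMMA = np.array([
    [1.161232, 0.506287, 0.432229],   # ƔR factors for the R, G, B channels
    [0.519619, 1.025383, 1.201444],   # ƔG factors
    [0.600362, 0.714016, 1.413406],   # ƔB factors
])

def correct_rgb(webcam_rgb):
    """Map a webcam RGB reading toward colorimeter-like values (sketch only)."""
    return GAMMA @ np.asarray(webcam_rgb, dtype=float)

print(correct_rgb([120, 95, 60]))     # hypothetical webcam reading
```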


Author(s): Ewa Ropelewska, Jan Piecko

Abstract: This study aimed to develop discriminant models for distinguishing tomato seeds based on texture parameters of the outer surface of the seeds, calculated from images (scans) converted to the individual color channels R, G, B, L, a, b, X, Y, Z. The seeds of the tomatoes ‘Green Zebra’, ‘Ożarowski’, ‘Pineapple’, Sacher F1 and Sandoline F1 were discriminated in pairs. The best results were observed for models built on sets of textures selected individually from the color channels R, L and X and on sets of textures selected from all color channels. In all cases, the tomato seeds ‘Green Zebra’ and ‘Ożarowski’ were discriminated with the highest average accuracy: 97% for the Multilayer Perceptron classifier and 96.25% for Random Forest for color channel R; 95.25% (Multilayer Perceptron) and 95% (Random Forest) for color channel L; 93% (Multilayer Perceptron) and 95% (Random Forest) for color channel X; and 99.75% (Multilayer Perceptron) and 99.5% (Random Forest) for a set of textures selected from all color channels (R, G, B, L, a, b, X, Y, Z). The highest average accuracies for the other pairs of cultivars, for models built on textures selected from all color channels, reached 98.25% for ‘Ożarowski’ vs. Sacher F1, 95.75% for ‘Pineapple’ vs. Sandoline F1, 97.5% for ‘Green Zebra’ vs. Sandoline F1 and 97.25% for Sacher F1 vs. Sandoline F1. The results may be used in practice for identifying the cultivar of tomato seeds. The developed models make it possible to distinguish tomato seed cultivars in an objective and fast way using digital image processing. The results confirm the usefulness of texture parameters of the outer surface of tomato seeds for classification purposes. The discriminant models achieve a very high probability of correct classification and may be applied to authenticate seeds and detect seed adulteration.
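
A minimal sketch of the classification stage described above, using scikit-learn's Multilayer Perceptron and Random Forest as stand-ins for the classifiers actually used in the study, is given below; the feature matrix and labels are random placeholders for per-channel texture features and cultivar labels.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 50))          # hypothetical texture features per seed image
y = rng.integers(0, 2, size=200)        # e.g. 'Green Zebra' vs. 'Ożarowski'

for name, clf in [
    ("Multilayer Perceptron", make_pipeline(StandardScaler(),
                                            MLPClassifier(max_iter=2000, random_state=0))),
    ("Random Forest", RandomForestClassifier(n_estimators=200, random_state=0)),
]:
    scores = cross_val_score(clf, X, y, cv=5)
    print(f"{name}: mean accuracy {scores.mean():.3f}")
```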


Electronics, 2021, Vol. 10 (19), pp. 2385
Author(s): Jiangtao Huang, Shanshan Shi, Zhouyan He, Ting Luo

This paper presents a zero-watermarking method for high dynamic range (HDR) images based on the dual-tree complex wavelet transform (DT-CWT) and quaternions. To be robust against tone mapping (TM), DT-CWT is applied to each of the three RGB color channels of the HDR image to obtain the low-pass sub-bands, since DT-CWT extracts the contour of the HDR image and this contour changes little after TM. Because the HDR image covers a wide dynamic range, the correlations between the three color channels are stronger than the relationships within each channel, so a quaternion representation is used to transform the three color channels as a whole. Quaternion fast Fourier transform (QFFT) and quaternion singular value decomposition (QSVD) are applied to decompose the HDR image and obtain robust features, which are fused with a binary watermark to generate a zero watermark for copyright protection. Furthermore, the binary watermark is scrambled for security using the Arnold transform. Experimental results show that the proposed zero-watermarking method is robust to TM and other image processing attacks and can protect the HDR image efficiently.
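
The zero-watermarking idea can be sketched at a high level: robust feature bits are derived from the image and XORed with the binary watermark, so the image itself is never modified. In the sketch below, block-wise SVD on a simple luminance proxy stands in for the paper's DT-CWT plus quaternion QFFT/QSVD feature extraction, and the Arnold scrambling step is omitted.

```python
import numpy as np

def feature_bits(img, block=32):
    """Robust bits: compare each block's leading singular value to the median."""
    gray = img.astype(float).mean(axis=2)            # simple luminance proxy
    H, W = gray.shape
    s1 = [np.linalg.svd(gray[i:i + block, j:j + block], compute_uv=False)[0]
          for i in range(0, H - block + 1, block)
          for j in range(0, W - block + 1, block)]
    s1 = np.array(s1)
    return (s1 > np.median(s1)).astype(np.uint8)

def generate_zero_watermark(img, watermark_bits):
    return np.bitwise_xor(feature_bits(img)[:watermark_bits.size], watermark_bits)

def recover_watermark(img, zero_watermark):
    return np.bitwise_xor(feature_bits(img)[:zero_watermark.size], zero_watermark)
```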


Author(s): Franziska Schlenker, Elena Kipf, Nadine Borst, Tobias Hutzenlaub, Roland Zengerle, ...

Author(s): Ewa Ropelewska, Krzysztof P. Rutkowski

Abstract: Peaches belonging to different cultivars can differ in their properties. The aim of this study was to evaluate the usefulness of individual parts of the fruit (skin, flesh, stone and seed) for cultivar discrimination of peaches based on textures determined using image analysis. Discriminant analysis was performed with the Bayes net, logistic, SMO, multi-class classifier and random forest classifiers, based on a set of combined textures selected from all color channels R, G, B, L, a, b, X, Y, Z and on textures selected separately for the RGB, Lab and XYZ color spaces. For the sets of textures selected from all color channels (R, G, B, L, a, b, X, Y, Z), an accuracy of 100% was observed for flesh, stones and seeds with selected classifiers. The sets of textures selected from the RGB color space produced 100% correctness for the flesh and seeds of peaches. For the Lab and XYZ color spaces, slightly lower accuracies than for the RGB color space were obtained, and 100% accuracy was noted only for the discrimination of peach seeds. The research proved the usefulness of selected texture parameters of fruit flesh, stones and seeds for successful discrimination of peach cultivars with an accuracy of 100%. Distinguishing between cultivars may be important for breeders, consumers and the peach industry to ensure adequate processing conditions and equipment parameters. Cultivar identification of fruit by humans can be prone to large errors, and molecular or chemical methods may require special equipment or be time-consuming. Image analysis offers an objective, rapid and relatively inexpensive procedure with high accuracy for peach cultivar discrimination.
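
A minimal sketch of the per-color-space comparison, grouping texture columns by the color space they came from and scoring each subset with a few scikit-learn classifiers (stand-ins for the classifiers named above), might look as follows; the feature layout and labels are random placeholders.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

rng = np.random.default_rng(1)
channel_order = "RGBLabXYZ"                         # assume 10 features per channel
X = rng.normal(size=(150, 10 * len(channel_order)))  # hypothetical texture features
y = rng.integers(0, 5, size=150)                    # placeholder peach cultivar labels

def columns_for(channels):
    """Indices of the feature columns belonging to the given color channels."""
    return [10 * channel_order.index(ch) + i for ch in channels for i in range(10)]

subsets = {"RGB": "RGB", "Lab": "Lab", "XYZ": "XYZ", "all": channel_order}
classifiers = {"logistic": LogisticRegression(max_iter=5000),
               "SMO (SVC stand-in)": SVC(),
               "random forest": RandomForestClassifier(random_state=0)}
for space, chans in subsets.items():
    cols = columns_for(chans)
    for name, clf in classifiers.items():
        acc = cross_val_score(clf, X[:, cols], y, cv=5).mean()
        print(f"{space:>3} / {name}: {acc:.3f}")
```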


Symmetry, 2021, Vol. 13 (5), pp. 901
Author(s): Shen Shi, Bing Xiangli, Zengshan Yin

Color images have a wider range of applications than gray images. There are two common ways to extend traditional super-resolution reconstruction methods to color images: reconstruct each channel of the color image individually, or convert the RGB color bands to YCrCb and then super-resolve the luminance component while interpolating the chrominance components. These approaches cannot effectively exploit the property that edges and textures are similar across the RGB channels, and their results may show color artifacts. To address these problems, we propose a new super-resolution method based on a cross channel prior. First, a cross channel prior is proposed to describe the similarity of gradients across the RGB channels. Then, a new super-resolution method for color images is obtained by combining the cross channel prior with traditional super-resolution methods. Finally, the proposed method reconstructs the color channels alternately. The experimental results show that the proposed method effectively suppresses color artifacts and improves the quality of the reconstructed images.
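
The cross channel prior can be illustrated with a small penalty term that compares the gradients of the R, G and B channels, reflecting the stated assumption that edges and textures are shared across channels. The exact functional form below is an assumption, not the paper's formulation.

```python
import numpy as np

def gradients(channel):
    """Forward-difference gradients of a single channel, same shape as the input."""
    gx = np.diff(channel, axis=1, append=channel[:, -1:])
    gy = np.diff(channel, axis=0, append=channel[-1:, :])
    return gx, gy

def cross_channel_prior(img):
    """img: float array (H, W, 3). Returns a scalar cross-channel gradient penalty."""
    grads = [gradients(img[..., c]) for c in range(3)]
    penalty = 0.0
    for a in range(3):
        for b in range(a + 1, 3):
            penalty += np.sum((grads[a][0] - grads[b][0]) ** 2 +
                              (grads[a][1] - grads[b][1]) ** 2)
    return penalty
# In a reconstruction loop this term would be added to the data-fidelity energy and
# minimized jointly, updating the color channels alternately as the abstract describes.
```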

