Color Information: Recently Published Documents

TOTAL DOCUMENTS: 721 (five years: 146)
H-INDEX: 32 (five years: 5)

Coatings ◽  
2022 ◽  
Vol 12 (1) ◽  
pp. 79
Author(s):  
Jingjing Mao ◽  
Zhihui Wu ◽  
Xinhao Feng

Subjective and objective color differences always exist between digital wood grain and real wood grain, making it difficult to replicate the color of natural timber. We therefore describe a novel method for correcting the chromatic aberration of scanned wood grain so as to maximally restore the objective color information of the real wood grain. A point-to-point correction model of the chromatic aberration between the scanned wood grain and the measured wood grain was established based on Circle 1 by adjusting the three channels (sR, sG, and sB) of the scanned images. Color-space conversion was conducted using the standard mutual conversion formulas. The color change of the scanned images before and after correction was evaluated through the L*a*b*-color-space-based ΔE* and the lαβ-color-model-based CIQI (Color Image Quality Index) and CQE (Color Quality Enhancement). The experimental results showed that, after correction, the chromatic aberration ΔE* between the scanned wood grain and the measured wood grain decreased and the colorfulness index CIQI of the scanned wood grain increased for most wood specimens. The ΔE* values of the twenty kinds of wood specimens decreased by an average of 3.1 in Circle 1 and 2.3 in Circle 2, confirming that the correction model established based on Circle 1 is effective. After correction, the color of the scanned wood grain was more consistent with that of the originals, providing more accurate color information for the reproduction of wood grain, which has important practical significance.
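As a rough illustration of the evaluation and correction steps described above, the sketch below computes the mean L*a*b*-based ΔE* between a scanned and a measured wood-grain image and applies a simple per-channel offset correction. The function names, the linear offset model, and the assumption that the measured reference is available as a same-sized image are our own simplifications, not the authors' code.

```python
# A minimal sketch, assuming RGB images as float arrays in [0, 1].
import numpy as np
from skimage import color

def mean_delta_e(scanned_rgb: np.ndarray, measured_rgb: np.ndarray) -> float:
    """Average CIE76 color difference dE* between two RGB images."""
    scanned_lab = color.rgb2lab(scanned_rgb)
    measured_lab = color.rgb2lab(measured_rgb)
    # dE* = sqrt((dL*)^2 + (da*)^2 + (db*)^2), averaged over all pixels
    return float(color.deltaE_cie76(scanned_lab, measured_lab).mean())

def correct_channels(scanned: np.ndarray, measured: np.ndarray) -> np.ndarray:
    """Shift each of the sR, sG, sB channels by the mean offset between
    measured and scanned values (a deliberately simple linear assumption)."""
    offset = measured.mean(axis=(0, 1)) - scanned.mean(axis=(0, 1))
    return np.clip(scanned + offset, 0.0, 1.0)
```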


2022 ◽  
Vol 22 (1&2) ◽  
pp. 17-37
Author(s):  
Xiao Chen ◽  
Zhihao Liu ◽  
Hanwu Chen ◽  
Liang Wang

Quantum image representation has a significant impact on quantum image processing. In this paper, a bit-plane representation for log-polar quantum images (BRLQI) is proposed, which utilizes (n + 4) or (n + 6) qubits to store and process a grayscale or RGB color image of 2^n pixels. Compared with the quantum log-polar image representation (QUALPI), BRLQI improves storage capacity by a factor of 16. Moreover, several quantum operations based on BRLQI are proposed, including a color-information complement operation, a bit-plane reversing operation, a bit-plane translation operation, and conditional exchange operations between bit-planes. Combining these operations, we design an image-scrambling circuit suited to the BRLQI model. Furthermore, comparison of the scrambling circuits indicates that the BRLQI-based operations have a lower quantum cost than their QUALPI counterparts. In addition, simulation experiments illustrate that the proposed scrambling algorithm is effective and efficient.
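To make the qubit accounting concrete, here is a one-function sketch of the storage cost as stated in the abstract; the function name is an illustrative assumption.

```python
def brlqi_qubit_count(n: int, rgb: bool = False) -> int:
    """Qubits BRLQI needs to store an image of 2**n pixels:
    n + 4 for a grayscale image, n + 6 for an RGB color image
    (counts taken directly from the abstract)."""
    return n + (6 if rgb else 4)

# e.g., a 1024-pixel (n = 10) grayscale image needs 14 qubits
```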


2021 ◽  
Vol 2021 ◽  
pp. 1-9
Author(s):  
Lina Zhang ◽  
Yu Sang ◽  
Donghai Dai

Polar harmonic transforms (PHTs) have been applied in pattern recognition and image analysis, but the current computational framework of PHTs has two main drawbacks. First, conventional methods may lose significant color information when processing color images, because they rely on RGB decomposition or graying. Second, PHTs are affected by geometric errors and numerical integration errors, which manifest as image reconstruction errors. This paper presents a novel computational framework of quaternion polar harmonic transforms (QPHTs), namely accurate QPHTs (AQPHTs). First, to handle color images holistically, quaternion-based PHTs are introduced using the algebra of quaternions. Second, Gaussian numerical integration is adopted to reduce the geometric and numerical errors. When compared with convolutional neural network (CNN)-based methods (i.e., VGG16) on the Oxford5K dataset, our AQPHT achieves better scaling-invariant representation. Moreover, when evaluated on standard image retrieval benchmarks, our AQPHT achieves results comparable to CNN-based methods while using a lower-dimensional feature vector, and outperforms hand-crafted methods by 9.6% in mAP on the Holidays dataset.
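The error-reduction idea can be pictured with a generic quadrature sketch: instead of sampling the transform kernel once per pixel, integrate it over each pixel with a Gauss-Legendre rule. This is a minimal sketch under our own assumptions, not the authors' implementation.

```python
import numpy as np

def pixel_integral(kernel, x0, y0, h, order=3):
    """Integrate kernel(x, y) over one h-by-h pixel centered at (x0, y0)
    using a tensor-product Gauss-Legendre rule instead of a single sample,
    which is the source of the reduced numerical integration error."""
    nodes, weights = np.polynomial.legendre.leggauss(order)
    # Map the quadrature nodes from [-1, 1] to the pixel extent
    xs = x0 + 0.5 * h * nodes
    ys = y0 + 0.5 * h * nodes
    wx, wy = np.meshgrid(weights, weights)
    gx, gy = np.meshgrid(xs, ys)
    # Jacobian of the affine map is (h/2)^2
    return (0.25 * h * h) * np.sum(wx * wy * kernel(gx, gy))
```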


Author(s):  
Dawa Chyophel Lepcha ◽  
Bhawna Goyal ◽  
Ayush Dogra

Image matting plays a key role in image and video editing and in image composition. It has been widely used in significant real-world applications such as film production, for visual effects, virtual zoom, image translation, image editing, and video editing. With recent advancements in digital cameras, both professionals and consumers have become increasingly involved in matting techniques to facilitate image editing activities. Given an input image and a corresponding trimap that marks the known foreground, known background, and unknown regions, image matting estimates the alpha matte in the unknown region in order to separate the foreground from the background. Numerous image matting techniques have been proposed recently to extract a high-quality matte from images and video sequences. This paper gives a systematic overview of current image and video matting techniques, with emphasis on recent and advanced algorithms. In general, image matting techniques are categorized according to their underlying approach: sampling-based, propagation-based, combined sampling- and propagation-based, and deep learning-based algorithms. The traditional algorithms (sampling-based, propagation-based, or their combination) depend primarily on color information to predict the alpha matte. However, these techniques mostly use low-level features and tend to produce unwanted artifacts when the foreground and background colors are similar or the foreground object is semi-transparent. Deep learning-based matting techniques have recently been introduced to address these shortcomings: rather than depending solely on color information, they learn to estimate the alpha matte from the input image and its trimap. A comprehensive survey of recent image matting algorithms and an in-depth comparative analysis of these algorithms are provided in this paper.
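For readers new to matting, a small sketch of the underlying model may help: every pixel is assumed to satisfy the compositing equation I = αF + (1 − α)B, and the trimap pins down α in the known regions. The 0/128/255 trimap encoding and the function names below are common conventions we assume for illustration.

```python
import numpy as np

def composite(fg: np.ndarray, bg: np.ndarray, alpha: np.ndarray) -> np.ndarray:
    """Blend foreground over background: I = alpha*F + (1 - alpha)*B."""
    a = alpha[..., None]  # broadcast the matte over the color channels
    return a * fg + (1.0 - a) * bg

def seed_alpha_from_trimap(trimap: np.ndarray) -> np.ndarray:
    """Initialize alpha: 1 in known foreground (255), 0 in known
    background (0), 0.5 in the unknown band (128) that a matting
    algorithm must resolve."""
    alpha = np.full(trimap.shape, 0.5, dtype=np.float64)
    alpha[trimap == 255] = 1.0
    alpha[trimap == 0] = 0.0
    return alpha
```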


Symmetry ◽  
2021 ◽  
Vol 13 (12) ◽  
pp. 2325
Author(s):  
Xinyu Hu ◽  
Qi Chen ◽  
Xuhui Ye ◽  
Daode Zhang ◽  
Yuxuan Tang ◽  
...  

Silkworm microparticle disease is covered by legal quarantine standards for silkworm disease detection worldwide. The common detection method, Pasteur manual microscopy, has low detection efficiency, which makes the application of machine vision technology to detect microparticle spores an important step in advancing silkworm disease research. To address the low contrast, varying illumination conditions, and complex backgrounds of microscopic images of the ellipsoidally symmetric silkworm microparticle spores collected in the detection solution, a region-growing segmentation method based on microparticle color and grayscale information is proposed. In this method, a fuzzy contrast enhancement algorithm is used to enhance the color information of the microparticles and improve their discrimination from the background. In the HSV color space, where color is stable, the color information of the microparticles is extracted as seed points, eliminating the influence of lighting, reducing the interference of impurities, and accurately locating the distribution area of the microparticles. Combined with a neighborhood gamma transformation, the highlight feature of the microparticle target in the grayscale image is enhanced for region growing. Meanwhile, the accurate and complete microparticle target is segmented from the complex background, which reduces the background-impurity segmentation caused by relying on a single feature. To evaluate the segmentation performance, we compute the IoU between the microparticle images segmented by this method and their corresponding ground-truth images; the experiments show that combining color and grayscale features in the region-growing technique can accurately and completely segment microparticle targets in complex backgrounds, with a segmentation accuracy (IoU) as high as 83.1%.
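A simplified sketch of the two core steps, region growing from color-derived seed points and IoU scoring, is given below; the intensity tolerance, the 4-neighborhood, and the function names are illustrative assumptions rather than the paper's exact procedure.

```python
from collections import deque
import numpy as np

def region_grow(gray: np.ndarray, seeds, tol: float = 0.1) -> np.ndarray:
    """Flood-fill from seed pixels, accepting 4-neighbors whose intensity
    stays within `tol` of the growing region's running mean."""
    h, w = gray.shape
    mask = np.zeros((h, w), dtype=bool)
    queue, total, count = deque(), 0.0, 0
    for r, c in seeds:
        if not mask[r, c]:
            mask[r, c] = True
            queue.append((r, c))
            total += float(gray[r, c]); count += 1
    while queue:
        r, c = queue.popleft()
        for nr, nc in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)):
            if (0 <= nr < h and 0 <= nc < w and not mask[nr, nc]
                    and abs(float(gray[nr, nc]) - total / count) < tol):
                mask[nr, nc] = True
                queue.append((nr, nc))
                total += float(gray[nr, nc]); count += 1
    return mask

def iou(pred: np.ndarray, truth: np.ndarray) -> float:
    """Intersection over union between two boolean masks."""
    union = np.logical_or(pred, truth).sum()
    return float(np.logical_and(pred, truth).sum() / union) if union else 0.0
```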


Sensors ◽  
2021 ◽  
Vol 21 (23) ◽  
pp. 8090
Author(s):  
Joel Vidal ◽  
Chyi-Yeu Lin ◽  
Robert Martí

Recently, 6D pose estimation methods have shown robust performance on highly cluttered scenes and under varying illumination conditions. However, occlusions remain challenging, with recognition rates decreasing to less than 10% for half-visible objects in some datasets. In this paper, we propose using top-down visual attention and color cues to boost the performance of a state-of-the-art method on occluded scenarios. More specifically, color information is employed to detect potential points in the scene, improve feature matching, and compute more precise fitting scores. The proposed method is evaluated on the Linemod occluded (LM-O), TUD light (TUD-L), Tejani (IC-MI), and Doumanoglou (IC-BIN) datasets, as part of the SiSo BOP benchmark, which includes challenging highly occluded cases, illumination-changing scenarios, and multiple instances. The method is analyzed and discussed for different parameters, color spaces, and metrics. The presented results show the validity of the proposed approach and its robustness to illumination changes and multiple-instance scenarios, especially boosting performance on highly occluded cases. The proposed solution provides an absolute improvement of up to 30% for occlusion levels between 40% and 50%, outperforming other approaches with a best overall recall of 71% for LM-O, 92% for TUD-L, 99.3% for IC-MI, and 97.5% for IC-BIN.
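The way a color cue can sharpen a fitting score may be pictured with the toy function below; this is our own illustrative sketch, not the paper's scoring function, and the thresholds are assumptions.

```python
import numpy as np

def color_aware_score(dists, scene_hues, model_hues,
                      dist_tol=0.005, hue_tol=0.1):
    """Fraction of correspondences that are geometric inliers (distance
    below dist_tol) AND hue-consistent, so a pose hypothesis is rewarded
    only when geometry and color agree. Hue is normalized to [0, 1)."""
    dists = np.asarray(dists)
    dh = np.abs(np.asarray(scene_hues) - np.asarray(model_hues))
    dh = np.minimum(dh, 1.0 - dh)  # circular (wrap-around) hue distance
    inlier = (dists < dist_tol) & (dh < hue_tol)
    return float(inlier.mean())
```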


2021 ◽  
Vol 923 (1) ◽  
pp. 16
Author(s):  
R. Li ◽  
N. R. Napolitano ◽  
C. Spiniello ◽  
C. Tortora ◽  
K. Kuijken ◽  
...  

We present 97 new high-quality strong lensing candidates found in the final ∼350 deg² that completes the full ∼1350 deg² area of the Kilo-Degree Survey (KiDS). Together with our previous findings, the final list of high-quality candidates from KiDS totals 268 systems. The new sample is assembled using a new convolutional neural network (CNN) classifier applied separately to r-band (best-seeing) images and to g, r, and i color-composited images. This exploits the complementarity of morphology and color information for identifying strong lensing candidates. We apply the new classifiers to a sample of luminous red galaxies (LRGs) and a sample of bright galaxies (BGs) and select candidates to which the CNN assigns a high lens probability (P_CNN). In particular, setting P_CNN > 0.8 for the LRGs, the one-band CNN predicts 1213 candidates while the three-band classifier yields 1299 candidates, with only ∼30% overlap. For the BGs, in order to minimize false positives, we adopt a more conservative threshold, P_CNN > 0.9, for both CNN classifiers, which results in 3740 newly selected objects. The candidates from the two samples are visually inspected by seven coauthors to finally select 97 "high-quality" lens candidates that received mean scores above 6 (on a scale from 0 to 10). We conclude by discussing the effect of seeing on the accuracy of CNN classification and possible avenues for increasing the efficiency of multiband classifiers, in preparation for next-generation surveys from the ground and space.
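Schematically, the candidate selection reduces to a thresholded union of the two classifiers' outputs, as in the hedged sketch below; the dict layout, names, and use of plain Python sets are our assumptions for illustration.

```python
def select_candidates(p_one_band, p_three_band, sample="LRG"):
    """Threshold both CNN classifiers (P_CNN > 0.8 for LRGs, > 0.9 for BGs)
    and return (union for visual inspection, overlap between classifiers)."""
    thr = 0.8 if sample == "LRG" else 0.9
    one = {gid for gid, p in p_one_band.items() if p > thr}
    three = {gid for gid, p in p_three_band.items() if p > thr}
    return one | three, one & three
```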


2021 ◽  
Vol 13 (23) ◽  
pp. 4811
Author(s):  
Rudolf Urban ◽  
Martin Štroner ◽  
Lenka Línková

Lately, affordable unmanned aerial vehicle (UAV) lidar systems have started to appear on the market, highlighting the need for methods to properly verify their accuracy. However, the dense point cloud produced by such systems makes it difficult to identify individual points that could serve as reference points. In this paper, we propose such a method, utilizing accurately georeferenced targets covered with high-reflectivity foil that can be easily extracted from the cloud; their centers are determined and used to calculate the systematic shift of the lidar point cloud. Subsequently, the lidar point cloud is corrected for this systematic shift and compared with a dense SfM point cloud, yielding the residual accuracy. We successfully applied this method to the evaluation of an affordable DJI ZENMUSE L1 scanner mounted on the UAV DJI Matrice 300 and found that the accuracy of this system (3.5 cm in all directions after removal of the global georeferencing error) is better than the manufacturer-declared values (10 cm horizontal / 5 cm vertical). However, evaluation of the color information revealed a relatively large (approx. 0.2 m) systematic shift.
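The shift-removal step can be sketched in a few lines, assuming (our simplification, for illustration) that the systematic error is well modeled by the mean 3D offset between the extracted target centers and their georeferenced coordinates.

```python
import numpy as np

def remove_systematic_shift(cloud, target_centers, reference_coords):
    """Estimate the mean XYZ offset of the extracted target centers from
    their georeferenced reference coordinates, then subtract that offset
    from the whole lidar point cloud. Returns (corrected cloud, shift)."""
    shift = (np.asarray(target_centers)
             - np.asarray(reference_coords)).mean(axis=0)
    return np.asarray(cloud) - shift, shift
```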

