color space conversion
Recently Published Documents

TOTAL DOCUMENTS: 94 (five years: 20)
H-INDEX: 6 (five years: 2)

2022 ◽  
pp. 004051752110672
Author(s):  
Zebin Su ◽  
Jinkai Yang ◽  
Pengfei Li ◽  
Junfeng Jing ◽  
Huanhuan Zhang

Neural networks have been widely used for color space conversion in the digital printing process. Shallow neural networks, however, easily fall into local optima when establishing multi-dimensional nonlinear mappings. In this paper, an improved high-precision deep belief network (DBN) algorithm is proposed to achieve color space conversion from CMYK to L*a*b*. First, the PANTONE TCX color card is used as sample data, in which the CMYK values of the color blocks serve as input and the L*a*b* values as output; then, the conversion model from CMYK to L*a*b* color space is established using a DBN. To obtain better weights and thresholds, the DBN is optimized with a particle swarm optimization algorithm. Experimental results show that the proposed method achieves higher conversion accuracy than the Back Propagation Neural Network, the Generalized Regression Neural Network, and traditional DBN color space conversion methods. It can also meet the practical production demands of color management in digital printing.
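As a baseline for what such a learned model must approximate, the classical device-independent route goes CMYK → sRGB → XYZ → L*a*b*. The sketch below is a naive analytic approximation (simple subtractive CMYK model, D65 white point), not the paper's DBN or a real press characterization:

```python
def cmyk_to_lab(c, m, y, k):
    """Naive CMYK -> sRGB -> XYZ -> L*a*b* conversion (D65 white point).

    A device-independent approximation; a real press needs a measured
    characterization model such as the DBN described in the abstract.
    All CMYK inputs are in [0, 1].
    """
    # CMYK -> sRGB via a simple subtractive model
    r = (1 - c) * (1 - k)
    g = (1 - m) * (1 - k)
    b = (1 - y) * (1 - k)

    # sRGB -> linear RGB (inverse gamma)
    def lin(u):
        return u / 12.92 if u <= 0.04045 else ((u + 0.055) / 1.055) ** 2.4
    r, g, b = lin(r), lin(g), lin(b)

    # linear RGB -> XYZ (sRGB matrix, D65)
    x = 0.4124 * r + 0.3576 * g + 0.1805 * b
    yy = 0.2126 * r + 0.7152 * g + 0.0722 * b
    z = 0.0193 * r + 0.1192 * g + 0.9505 * b

    # XYZ -> L*a*b* (D65 reference white)
    xn, yn, zn = 0.95047, 1.0, 1.08883
    def f(t):
        return t ** (1 / 3) if t > 0.008856 else 7.787 * t + 16 / 116
    fx, fy, fz = f(x / xn), f(yy / yn), f(z / zn)
    return 116 * fy - 16, 500 * (fx - fy), 200 * (fy - fz)
```

Solid black (K = 1) maps to L* = 0 and blank paper to L* ≈ 100, which is a quick sanity check for any replacement model.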


2021 ◽  
Vol 20 (1) ◽  
Author(s):  
Shenming Hu ◽  
Xinze Luan ◽  
Hong Wu ◽  
Xiaoting Wang ◽  
Chunhong Yan ◽  
...  

Abstract Purpose A real-time automatic cataract-grading algorithm based on cataract video is proposed. Materials and methods In this retrospective study, we set video of the eye lens section as the research target. The proposed method uses YOLOv3 to assist in positioning, automatically identifying the position of the lens and classifying the cataract after color space conversion. The data set consists of cataract video files of 76 eyes from 38 people, collected with a slit lamp. Data were collected in five randomized manners to reduce the influence of the collection procedure on algorithm accuracy. Each video is under 10 s long, and the classified picture data are extracted from the video files. A total of 1520 images are extracted, and the data set is divided into training, validation, and test sets at a ratio of 7:2:1. Results We verified the method on the test set drawn from the 76 clinical video segments and achieved an accuracy of 0.9400, an AUC of 0.9880, and an F1 score of 0.9388. In addition, because of the color space recognition method, detection of each frame completes within 29 microseconds, so detection efficiency is improved significantly. Conclusion Given the efficiency and effectiveness of this algorithm, using the lens scan video as the research object improves screening accuracy. The approach is closer to the actual cataract diagnosis and treatment process and can effectively improve the cataract inspection ability of non-ophthalmologists. For cataract screening in underserved areas, it also increases the accessibility of ophthalmic medical care.
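The 7:2:1 split described above can be sketched as a deterministic shuffle-and-slice; the seed and the use of integer IDs as stand-ins for image files are illustrative, not the authors' code:

```python
import random

def split_dataset(items, ratios=(0.7, 0.2, 0.1), seed=42):
    """Shuffle items and split them into train/val/test by the given ratios."""
    items = list(items)
    random.Random(seed).shuffle(items)  # fixed seed -> reproducible split
    n = len(items)
    n_train = int(n * ratios[0])
    n_val = int(n * ratios[1])
    train = items[:n_train]
    val = items[n_train:n_train + n_val]
    test = items[n_train + n_val:]
    return train, val, test

# 1520 extracted frames, as in the abstract
train, val, test = split_dataset(range(1520))
# -> 1064 / 304 / 152 images
```

In practice the split should be done per patient rather than per frame, so that frames from one eye never appear in both training and test sets.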


Processes ◽  
2021 ◽  
Vol 9 (7) ◽  
pp. 1128
Author(s):  
Chern-Sheng Lin ◽  
Yu-Ching Pan ◽  
Yu-Xin Kuo ◽  
Ching-Kun Chen ◽  
Chuen-Lin Tien

In this study, machine vision and artificial intelligence algorithms were used to rapidly check the degree of cooking of foods and avoid over-cooking. Using a smart induction cooker for heating, the image processing program automatically recognizes the color of the food before and after cooking. New cooking parameters were used to identify the cooking conditions of the food when it is undercooked, cooked, and overcooked. A camera was used in combination with software developed for this purpose, and real-time image processing was used to obtain color information about the food; from the calculated parameters, the cooking status of the food was monitored. In the second year of the project, using color space conversion, a novel algorithm, and artificial intelligence, foreground segmentation was used to separate the vegetables from the background, and the cooking ripeness, cooking unevenness, oil glossiness, and sauce absorption were calculated. The image color difference and its distribution were used to judge the cooking conditions of the food, so that the cooking system can decide whether to adopt partial tumbling or to end a cooking operation. A novel artificial intelligence algorithm is applied in this field, and the error rate can be reduced to 3%. This work will significantly help researchers working on advanced cooking devices.
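The idea of judging doneness from the image color difference between the raw and current frames can be sketched as follows; the images are represented as flat lists of RGB tuples, and the threshold values are placeholders, not the paper's parameters:

```python
def mean_color_shift(before, after):
    """Mean per-pixel Euclidean RGB distance between two same-size images,
    each given as a flat list of (r, g, b) tuples in [0, 255]."""
    assert len(before) == len(after)
    total = 0.0
    for (r1, g1, b1), (r2, g2, b2) in zip(before, after):
        total += ((r1 - r2) ** 2 + (g1 - g2) ** 2 + (b1 - b2) ** 2) ** 0.5
    return total / len(before)

def cooking_state(shift, cooked_at=60.0, overcooked_at=120.0):
    """Classify doneness from the color shift.

    The two thresholds are illustrative placeholders; a real system would
    calibrate them per food type, as the study's parameters do."""
    if shift < cooked_at:
        return "undercooked"
    if shift < overcooked_at:
        return "cooked"
    return "overcooked"
```

The paper additionally uses the spatial distribution of the color difference (cooking unevenness) to trigger partial tumbling, which this per-image mean does not capture.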


2021 ◽  
Vol 6 (3) ◽  
pp. 137-145
Author(s):  
Jia-Shing Sheu ◽  
Chun-Kang Tsai ◽  
Po-Tong Wang

In this study, a simple technology for a self-driving system called a "driver assistance system" is developed based on embedded image identification. The system consists of a camera, a Raspberry Pi board, and OpenCV. The camera captures lane images, and image noise is suppressed through color space conversion, grayscale conversion, Otsu thresholding, binarization, erosion, and dilation. Subsequently, two horizontal lines parallel to the X-axis, with a fixed range and interval, are used to detect the left and right lane lines. The intersection points between the lane lines and the two horizontal lines are obtained and used to calculate the slopes of the left and right lanes. Finally, the slope change of the left and right lanes and the offset of the lane intersection are evaluated to detect deviation. When the angle of the lanes changes drastically, the driver receives a deviation warning. The results of this study suggest that the proposed algorithm is 1.96 times faster than the conventional algorithm.
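The slope-and-intersection geometry above can be sketched in a few lines: each lane line is fitted from its two scanline intersection points, and the offset of the crossing point of the two lanes from the image center indicates deviation. The coordinates in the usage example are made up:

```python
def line_through(p, q):
    """Slope m and intercept b of the line y = m*x + b through points p, q."""
    (x1, y1), (x2, y2) = p, q
    m = (y2 - y1) / (x2 - x1)  # assumes the lane is not perfectly vertical
    return m, y1 - m * x1

def lane_offset(left_pts, right_pts, image_center_x):
    """x-offset of the left/right lane crossing point from the image center.

    left_pts / right_pts: ((x_top, y_top), (x_bottom, y_bottom)), the
    intersections of each lane line with the two horizontal scanlines.
    """
    ml, bl = line_through(*left_pts)
    mr, br = line_through(*right_pts)
    x_cross = (br - bl) / (ml - mr)  # where the two lane lines meet
    return x_cross - image_center_x

# Symmetric lanes in a 320-pixel-wide frame: crossing sits at the center,
# so the offset is zero and no deviation warning fires.
offset = lane_offset(((100, 100), (50, 200)), ((220, 100), (270, 200)), 160)
```

Tracking this offset and the two slopes across frames, as the paper does, is what distinguishes a momentary jitter from a genuine lane departure.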


2021 ◽  
Vol 58 (5) ◽  
pp. 0533001-533001340
Author(s):  
杨金锴 Yang Jinkai ◽  
李鹏飞 Li Pengfei ◽  
苏泽斌 Su Zebin ◽  
景军锋 Jing Junfeng

2020 ◽  
Vol 2020 (28) ◽  
pp. 193-198
Author(s):  
Hoang Le ◽  
Mahmoud Afifi ◽  
Michael S. Brown

Color space conversion is the process of converting color values in an image from one color space to another. It is challenging because different color spaces have differently sized gamuts. For example, when converting an image encoded in a medium-sized color gamut (e.g., AdobeRGB or Display-P3) to a small color gamut (e.g., sRGB), color values may need to be compressed in a many-to-one manner (i.e., multiple colors in the source gamut map to a single color in the target gamut). If we try to convert this sRGB-encoded image back to a wider-gamut color encoding, it can be challenging to recover the original colors due to the loss of color fidelity. We propose a method to address this problem by embedding wide-gamut metadata inside saved images captured by a camera. Our key insight is that in the camera hardware, a captured image is converted to an intermediate wide-gamut color space (i.e., ProPhoto) as part of the processing pipeline. This wide-gamut image representation is then converted to a display color space and saved in an image format such as JPEG or HEIC. Our method includes a small sub-sampling of the color values from the ProPhoto image state in the camera in the final saved JPEG/HEIC image. We demonstrate that having this additional wide-gamut metadata available during color space conversion greatly assists in constructing a color mapping function to convert between color spaces. Our experiments show that our metadata-assisted color mapping method provides a notable improvement (up to 60% in terms of ΔE) over conventional color space methods using perceptual rendering intent. In addition, we show how to extend our approach to perform color space conversion adaptively over the image for additional improvements.
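The many-to-one compression described above can be illustrated with the simplest possible gamut mapping, per-channel clipping (real converters use smarter rendering intents, but the information loss is the same in kind):

```python
def to_srgb_gamut(rgb):
    """Clip an (r, g, b) triple, expressed in sRGB coordinates, into [0, 1].

    Wide-gamut colors that fall outside sRGB collapse onto the gamut
    boundary, so distinct source colors can map to one sRGB value.
    """
    return tuple(min(1.0, max(0.0, c)) for c in rgb)

# Two distinct saturated greens from a wider gamut (negative/over-range
# values in sRGB coordinates) collapse to the same in-gamut color:
a = to_srgb_gamut((-0.2, 1.1, 0.0))
b = to_srgb_gamut((-0.1, 1.3, 0.0))
# a == b == (0.0, 1.0, 0.0): the originals are unrecoverable from the
# sRGB image alone, which is exactly why the paper embeds ProPhoto samples.
```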


Author(s):  
Sherif Sherif ◽  
Jordan Kralev ◽  
Tsonyo Slavov

Object detection in a cluttered scene is one of the main tasks in computer vision. Much research has focused on optimizing this process with machine learning, for cases where writing algorithms with explicit instructions for solving the problem is not feasible. Most embedded systems for object detection are based on algorithms that use monochrome (intensity) images. This article therefore builds models for color space conversion of images, discusses the main stages of the object detection algorithm, and presents the MATLAB functions through which this is done.
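The monochrome (intensity) images mentioned above are typically produced by a weighted luma sum; a minimal Python equivalent of the ITU-R BT.601 weighting used by MATLAB's rgb2gray is:

```python
def rgb_to_gray(r, g, b):
    """Per-pixel ITU-R BT.601 luma, the weighting MATLAB's rgb2gray applies.

    Inputs and output are on the same scale (e.g., 0-255); real pipelines
    would round and clip to the integer image type.
    """
    return 0.2989 * r + 0.5870 * g + 0.1140 * b
```

Green dominates the sum because the human eye is most sensitive to it, which is why intensity-based detectors still work well on natural scenes.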

