Diurnal and nocturnal cloud segmentation of ASI images using enhancement fully convolutional networks

2019 ◽  
Vol 12 (9) ◽  
pp. 4713-4724
Author(s):  
Chaojun Shi ◽  
Yatong Zhou ◽  
Bo Qiu ◽  
Jingfei He ◽  
Mu Ding ◽  
...  

Abstract. Cloud segmentation plays a very important role in astronomical observatory site selection. At present, few researchers have segmented clouds in nocturnal all-sky imager (ASI) images. This paper proposes a new automatic cloud segmentation algorithm that exploits the advantages of deep-learning fully convolutional networks (FCNs) to segment cloud pixels from diurnal and nocturnal ASI images; it is called the enhancement fully convolutional network (EFCN). Firstly, all the ASI images in the data set from the Key Laboratory of Optical Astronomy at the National Astronomical Observatories of the Chinese Academy of Sciences (CAS) are converted from the red–green–blue (RGB) color space to the hue-saturation-intensity (HSI) color space. Secondly, the I channel of the HSI color space is enhanced by histogram equalization. Thirdly, all the ASI images are converted back from the HSI color space to the RGB color space. Then, after 100 000 training iterations on the ASI images in the training set, the optimum parameters of the EFCN-8s model are obtained. Finally, the trained EFCN-8s is used to segment the cloud pixels of the ASI images in the test set. In the experiments the proposed EFCN-8s was compared with four other algorithms (OTSU, FCN-8s, EFCN-32s, and EFCN-16s) using four evaluation metrics. The experiments show that the EFCN-8s is much more accurate in cloud segmentation for diurnal and nocturnal ASI images than the other four algorithms.
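
A minimal Python sketch of the preprocessing stage described above (convert to HSI, equalize the I channel, convert back to RGB); this is illustrative, not the authors' code, and the file name is a placeholder. It relies on the fact that rescaling all three RGB channels by the ratio of equalized to original intensity changes I while leaving H and S unchanged (up to clipping at the gamut boundary), so the explicit HSI round trip can be skipped.

```python
import cv2
import numpy as np

def equalize_intensity(rgb):
    """Histogram-equalize the HSI intensity (I) of an 8-bit RGB image."""
    rgb = rgb.astype(np.float32)
    i_old = rgb.mean(axis=2)                              # I = (R + G + B) / 3
    i_eq = cv2.equalizeHist(i_old.astype(np.uint8)).astype(np.float32)
    gain = i_eq / np.maximum(i_old, 1e-6)                 # per-pixel intensity gain
    out = rgb * gain[..., None]                           # preserves H and S
    return np.clip(out, 0, 255).astype(np.uint8)

# placeholder file name for an all-sky image
img = cv2.cvtColor(cv2.imread("asi_sample.png"), cv2.COLOR_BGR2RGB)
enhanced = equalize_intensity(img)
```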


Author(s):  
Akira Taguchi

There are many color systems. Some correspond to the human visual system, such as the Munsell color system; others are formulated to ease data processing in machines, such as the RGB color space. This paper first introduces the Munsell color system. Next, the RGB color system and the hue-saturation-intensity (HSI) color system, which is derived from RGB, are reviewed. The HSI color system is important because it is closely related to the Munsell color system. We discuss the advantages and drawbacks of the conventional HSI color space, and then introduce an improved HSI color system. In the second half of the paper, we survey a range of color image enhancement methods based on histogram equalization or differential histogram equalization. Since hue preservation is necessary for color image processing, intensity processing methods that use both intensity and saturation in the HSI color space are reviewed. Finally, hue-preserving color image enhancement methods in the RGB color system are explained.
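
For reference, the conventional RGB-to-HSI conversion that the review builds on can be written down directly; the numpy sketch below assumes RGB values already normalized to [0, 1] and is purely illustrative.

```python
import numpy as np

def rgb_to_hsi(rgb):
    """Convert an HxWx3 RGB array in [0, 1] to hue, saturation, intensity."""
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    i = (r + g + b) / 3.0                                             # intensity
    s = 1.0 - np.minimum(np.minimum(r, g), b) / np.maximum(i, 1e-12)  # saturation
    num = 0.5 * ((r - g) + (r - b))
    den = np.sqrt((r - g) ** 2 + (r - b) * (g - b)) + 1e-12
    theta = np.arccos(np.clip(num / den, -1.0, 1.0))                  # angle in [0, pi]
    h = np.where(b <= g, theta, 2.0 * np.pi - theta)                  # hue in [0, 2*pi)
    return h, s, i
```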


Author(s):  
Hua Yang ◽  
Masaaki Kashimura ◽  
Norikadu Onda ◽  
Shinji Ozawa

This paper describes a new system for extracting and classifying bibliography regions from the color image of a book cover. The system consists of three major components: preprocessing, color space segmentation, and text region extraction and classification. Preprocessing extracts the edge lines of the book, geometrically corrects the input image, and segments it into the front cover, spine, and back cover. As in most color image processing research, the segmentation of the color space is an essential step here. Instead of the RGB color space, the HSI color space is used in this system. The color space is first segmented into achromatic and chromatic regions; both regions are then segmented further to complete the color space segmentation. Text region extraction and classification follow. After detecting fundamental features (stroke width and local label width), text regions are determined. By comparing the text regions on the front cover with those on the spine, all extracted text regions are classified into suitable bibliography categories (author, title, publisher, and other information) without applying OCR.
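
The first stage of that color-space segmentation can be sketched as a simple threshold rule in HSI: pixels with very low saturation, or with extreme intensity, are treated as achromatic and the rest as chromatic. The thresholds below are illustrative values, not the ones used by the authors.

```python
import numpy as np

def split_achromatic(s, i, s_thresh=0.1, i_low=0.08, i_high=0.95):
    """Return boolean masks (achromatic, chromatic) from S and I channels in [0, 1]."""
    achromatic = (s < s_thresh) | (i < i_low) | (i > i_high)
    return achromatic, ~achromatic
```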


2021 ◽  
Vol 7 (8) ◽  
pp. 150
Author(s):  
Kohei Inoue ◽  
Minyao Jiang ◽  
Kenji Hara

This paper proposes a method for improving saturation in the context of hue-preserving color image enhancement. The proposed method handles colors in the RGB color space, which has the form of a cube, and enhances the contrast of a given image by histogram manipulation of the intensity image, such as histogram equalization or histogram specification. The color corresponding to a target intensity is then determined in a hue-preserving manner, where the gamut problem must be taken into account. We first project each color onto a surface that bisects the RGB color cube, which increases saturation without a gamut problem. We then adjust the intensity of the saturation-enhanced color to the target intensity given by the histogram manipulation. Experimental results demonstrate that the proposed method achieves higher saturation than related hue-preserving color image enhancement methods.
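
The gamut problem can be made concrete with a classic hue-preserving mapping of a color to a target intensity: scale toward black when darkening, blend toward white when brightening. This is a baseline sketch, not the authors' bisecting-surface projection.

```python
import numpy as np

def to_target_intensity(rgb, target):
    """Map an RGB color (floats in 0..255) to the target intensity, preserving hue."""
    rgb = np.asarray(rgb, dtype=np.float64)
    l = rgb.mean()
    if target <= l:                           # darken: pure scaling stays in the cube
        return rgb * (target / max(l, 1e-12))
    k = (255.0 - target) / max(255.0 - l, 1e-12)
    return 255.0 - k * (255.0 - rgb)          # brighten: blend toward white
```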


2016 ◽  
Vol 2016 ◽  
pp. 1-17 ◽  
Author(s):  
Li Zhou ◽  
Du Yan Bi ◽  
Lin Yuan He

Foggy images taken in bad weather inevitably suffer from contrast loss and color distortion. Existing defogging methods focus on extracting an accurate scene transmission while overlooking the distortion they introduce and their high computational complexity. Different from previous works, we propose a simple but powerful method based on histogram equalization and the physical degradation model. By revising two constraints in a variational histogram equalization framework, the intensity component of the fog-free image can be estimated in the HSI color space, once the airlight has been inferred in advance through a color attenuation prior. To cut down the time consumption, a general variation filter is proposed to obtain a numerical solution from the revised framework. Once the intensity component is estimated, the saturation component can readily be inferred from the physical degradation model in the saturation channel. Accordingly, the fog-free image can be restored from the estimated intensity and saturation components. Finally, the proposed method is tested on several foggy images and assessed by two no-reference indices. Experimental results reveal that our method is superior to three groups of relevant state-of-the-art defogging methods.
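
The physical degradation model these methods build on is I = J * t + A * (1 - t); once the transmission t and the airlight A have been estimated (here via the variational framework and the color attenuation prior), the fog-free radiance J follows by inverting the model. Below is a minimal numpy sketch of that inversion, applied per channel for simplicity (the paper itself works on the intensity and saturation channels); the estimation of t and A is not reproduced.

```python
import numpy as np

def restore(hazy, transmission, airlight, t_min=0.1):
    """Invert I = J * t + A * (1 - t). hazy: HxWx3 floats in [0, 1]."""
    t = np.clip(transmission, t_min, 1.0)[..., None]   # lower bound avoids blow-up
    clear = (hazy - airlight) / t + airlight
    return np.clip(clear, 0.0, 1.0)
```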


2010 ◽  
Vol 26-28 ◽  
pp. 48-54
Author(s):  
Jin Ling Wei ◽  
Jun Meng ◽  
Wei Song

Analysis of the grey-level images of each feature element in the RGB and HSI color spaces shows that each element carries different information about the color image. Histogram analysis of color images shows that the value range of the hue H remains essentially stable, and experiments confirm that hue is the most stable and representative feature. Finally, application examples illustrate that recognition and tracking of a target mobile robot based on the hue feature H is practicable.
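
A small OpenCV sketch of the hue-based idea: threshold the hue channel to obtain a mask of the target color and track its centroid. The hue range and the saturation/value bounds are illustrative and would be tuned to the actual robot marker.

```python
import cv2

def track_by_hue(bgr, h_lo=100, h_hi=130):
    """Return (cx, cy) of the pixels whose hue lies in [h_lo, h_hi], plus the mask."""
    hsv = cv2.cvtColor(bgr, cv2.COLOR_BGR2HSV)              # OpenCV hue range: 0..179
    mask = cv2.inRange(hsv, (h_lo, 60, 40), (h_hi, 255, 255))
    m = cv2.moments(mask)
    if m["m00"] == 0:                                       # target color not present
        return None, mask
    return (m["m10"] / m["m00"], m["m01"] / m["m00"]), mask
```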


Sensors ◽  
2019 ◽  
Vol 20 (1) ◽  
pp. 141
Author(s):  
Jianguang Li ◽  
Wen Li ◽  
Cong Jin ◽  
Lijuan Yang ◽  
Hui He

The segmentation of buildings in remote-sensing (RS) images plays an important role in monitoring landscape changes. Quantification of these changes can be used to balance economic and environmental benefits and, most importantly, to support sustainable urban development. Deep learning has been advancing the techniques for RS image analysis, but it requires a large-scale data set for hyper-parameter optimization. To address this issue, the concept of "one view per city" is proposed: a single RS image is used for parameter tuning, and the trained model then handles the remaining images of the same city. The concept stems from the observation that buildings of the same city in single-source RS images exhibit similar intensity distributions. To verify its feasibility, a proof-of-concept study is conducted and five fully convolutional networks are evaluated on five cities in the Inria Aerial Image Labeling database. Experimental results suggest that the concept can reduce the number of images needed for model training and achieves competitive building segmentation performance with reduced time consumption. With further model optimization and a more universal image representation, there is considerable potential to improve the segmentation performance, enhance the generalization capacity, and extend the application of the concept in RS image analysis.
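
A hedged sketch of how a single "view" can be turned into a training set: the one RS image and its building mask are cut into fixed-size tiles that feed a fully convolutional network. Tile size and stride are illustrative.

```python
import numpy as np

def tile(image, mask, size=256, stride=256):
    """Yield (image_tile, mask_tile) pairs from one large RS image and its mask."""
    h, w = image.shape[:2]
    for y in range(0, h - size + 1, stride):
        for x in range(0, w - size + 1, stride):
            yield image[y:y + size, x:x + size], mask[y:y + size, x:x + size]
```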


2021 ◽  
Author(s):  
Haibin Sun ◽  
Haiwei Liu

Abstract. To improve the visual effect and quality of haze images after fog removal, a model for color correction and restoration of haze images in the hue-saturation-intensity (HSI) color space, combined with machine learning, is proposed. First, the haze imaging model is constructed according to atmospheric scattering theory. Second, a color enhancement and fog removal model for haze images is formulated in the HSI color space, and a gallery of haze images and their transmittance maps is constructed. Third, a visual dictionary of the transmittance map is obtained by training a k-means clustering algorithm with density-based parameter optimization and a support vector machine optimized by a genetic algorithm. Fourth, based on the visual dictionary and the atmospheric scattering model, the haze image is restored and defogged, and the subjective visual effects and objective evaluation indices of color enhancement and fog removal are compared. It is concluded that the algorithm effectively preserves the detail and clarity of the image after defogging.
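
As an illustration of the visual-dictionary step, the sklearn sketch below clusters flattened transmittance patches with plain k-means; the density-based parameter optimization and the genetically optimized SVM stage described by the authors are omitted.

```python
import numpy as np
from sklearn.cluster import KMeans

def build_dictionary(patches, n_words=64, seed=0):
    """patches: (N, D) array of flattened transmittance patches; returns the visual words."""
    km = KMeans(n_clusters=n_words, random_state=seed, n_init=10).fit(patches)
    return km.cluster_centers_
```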


2018 ◽  
Vol 8 (12) ◽  
pp. 2670 ◽  
Author(s):  
Hao Guo ◽  
Guo Wei ◽  
Jubai An

The damping of Bragg scattering from the ocean surface is the basic principle underlying synthetic aperture radar (SAR) oil slick detection: damped regions appear as dark spots on SAR images. Dark spot detection is the first step in oil spill detection and affects its overall accuracy. However, natural phenomena (such as waves, ocean currents, and low wind belts) as well as human factors may change the backscatter intensity of the sea surface, resulting in uneven intensity, high noise, and blurred boundaries of oil slicks or lookalikes. In this paper, SegNet is used as a semantic segmentation model to detect dark spots in oil spill areas. The proposed method is applied to a data set of 4200 samples derived from five original SAR images of an oil spill. The effectiveness of the method is demonstrated through comparison with fully convolutional networks (FCNs), a pioneering family of semantic segmentation models, and several other segmentation methods. The proposed method not only accurately identifies dark spots in SAR images but also shows higher robustness under high noise and fuzzy boundary conditions.
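
The defining SegNet idea, reusing the encoder's max-pooling indices for unpooling in the decoder, can be sketched in a few lines of PyTorch. The toy network below is far shallower than the real SegNet and is meant only as an illustration on single-channel SAR input.

```python
import torch
import torch.nn as nn

class TinySegNet(nn.Module):
    def __init__(self, n_classes=2):
        super().__init__()
        self.enc = nn.Sequential(nn.Conv2d(1, 16, 3, padding=1), nn.ReLU())
        self.pool = nn.MaxPool2d(2, stride=2, return_indices=True)
        self.unpool = nn.MaxUnpool2d(2, stride=2)
        self.dec = nn.Conv2d(16, n_classes, 3, padding=1)

    def forward(self, x):
        f = self.enc(x)
        p, idx = self.pool(f)          # keep the pooling indices
        u = self.unpool(p, idx)        # restore spatial detail in the decoder
        return self.dec(u)             # per-pixel class scores

logits = TinySegNet()(torch.randn(1, 1, 64, 64))   # shape (1, 2, 64, 64)
```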

