Red Tide Detection Method for HY-1D Coastal Zone Imager Based on U-Net Convolutional Neural Network

2021 · Vol 14 (1) · pp. 88
Author(s): Xin Zhao, Rongjie Liu, Yi Ma, Yanfang Xiao, Jing Ding, et al.

Existing red tide detection methods have mainly been developed for ocean color satellite data with low spatial resolution and high spectral resolution. Higher spatial resolution satellite images are required for red tides with a fine scale and scattered distribution. However, red tide detection methods for ocean color satellite data cannot be applied directly to medium-high spatial resolution satellite data owing to the shortage of red-tide-responsive bands. Therefore, a new red tide detection method for medium-high spatial resolution satellite data is required. This study proposes the red tide detection U-Net (RDU-Net) model, taking the HY-1D Coastal Zone Imager (HY-1D CZI) as an example. RDU-Net employs a channel attention module to derive the inter-channel relationships of red tide information and thereby reduce the influence of the marine environment on red tide detection. Moreover, the boundary and binary cross entropy (BBCE) loss function, which incorporates a boundary loss, is used to obtain clear and accurate red tide boundaries. In addition, a multi-feature dataset including the HY-1D CZI radiance and the Normalized Difference Vegetation Index (NDVI) is employed to enhance the spectral difference between red tides and seawater and thus improve detection accuracy. Experimental results show that RDU-Net can detect red tides accurately without a predefined threshold. Precision and Recall of 87.47% and 86.62%, respectively, are achieved, while the F1-score and Kappa are both 0.87. Compared with existing methods, the F1-score is improved by 0.07-0.21. Furthermore, the proposed method detects red tides accurately even under interference from clouds and fog, and it performs well on red tide edges and scattered distribution areas. It also shows good applicability and can be applied successfully to other satellite data with high spatial resolution and large bandwidth, such as GF-1 Wide Field of View 2 (WFV2) images.
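Two ingredients of the abstract can be sketched concretely: the NDVI feature appended to the radiance bands, and a boundary-augmented binary cross entropy. The following is a minimal NumPy sketch, not the authors' implementation; the 4-neighbour morphological gradient used to find boundary pixels and the weight `w` are assumptions for illustration.

```python
import numpy as np

def ndvi(red, nir, eps=1e-6):
    # Normalized Difference Vegetation Index, used to widen the spectral
    # gap between red tide and surrounding seawater.
    return (nir - red) / (nir + red + eps)

def bce(pred, target, eps=1e-7):
    # Plain binary cross entropy on per-pixel probabilities.
    pred = np.clip(pred, eps, 1 - eps)
    return -np.mean(target * np.log(pred) + (1 - target) * np.log(1 - pred))

def boundary_mask(mask):
    # Hypothetical boundary extractor: a pixel is a boundary pixel if any
    # 4-neighbour has a different label (the paper's exact formulation is
    # not given in the abstract).
    pad = np.pad(mask, 1, mode="edge")
    nb = np.stack([pad[:-2, 1:-1], pad[2:, 1:-1],
                   pad[1:-1, :-2], pad[1:-1, 2:]])
    return (nb != mask).any(axis=0)

def bbce_loss(pred, target, w=1.0):
    # BCE plus an extra BCE term restricted to boundary pixels, so blurry
    # red tide edges are penalised more heavily.
    b = boundary_mask(target)
    edge = bce(pred[b], target[b]) if b.any() else 0.0
    return bce(pred, target) + w * edge
```

The boundary term only re-weights edge pixels; swapping in a distance-transform-based boundary loss would be a straightforward variation.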

Sensors · 2021 · Vol 21 (13) · pp. 4447
Author(s): Jisun Shin, Young-Heon Jo, Joo-Hyung Ryu, Boo-Keun Khim, Soo Mee Kim

Red tides caused by Margalefidinium polykrikoides occur continuously along the southern coast of Korea, where there are many aquaculture cages; prompt monitoring of bloom waters is therefore required to prevent considerable damage. Satellite-based ocean-color sensors are widely used for detecting red tide blooms, but their low spatial resolution restricts coastal observations. In contrast, terrestrial sensors with a high spatial resolution are good candidates, despite lacking the spectral resolution and dedicated bands for red tide detection. In this study, we developed a U-Net deep learning model for detecting M. polykrikoides blooms along the southern coast of Korea from PlanetScope imagery with a high spatial resolution of 3 m. The U-Net model was trained with four different datasets constructed from randomly or non-randomly chosen patches consisting of different ratios of red tide and non-red tide pixels. The qualitative and quantitative assessments of the conventional red tide index (RTI) and the four U-Net models suggest that the U-Net model trained on non-randomly chosen patches, including non-red tide patches, outperformed the RTI in sensitivity, precision, and F-measure, with increases of 19.84%, 44.84%, and 28.52%, respectively. The M. polykrikoides map derived from U-Net provides the most reasonable red tide patterns in all water areas. Combining high spatial resolution images with deep learning approaches represents a good solution for monitoring red tides over coastal regions.
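The patch-construction idea, pools of patches with different ratios of red tide and non-red tide pixels, can be illustrated with a short sketch. The non-overlapping tiling and the 5% positive-pixel threshold below are assumptions for illustration, not values taken from the paper.

```python
import numpy as np

def tile_patches(label, size):
    # Cut a per-pixel red tide label map (1 = red tide) into
    # non-overlapping size x size patches.
    h, w = label.shape
    return [label[i:i + size, j:j + size]
            for i in range(0, h - size + 1, size)
            for j in range(0, w - size + 1, size)]

def split_by_ratio(patches, thresh=0.05):
    # Partition patches into red tide vs non-red-tide pools by the
    # fraction of positive pixels; mixing the two pools in chosen ratios
    # yields training datasets like the four compared in the study.
    red = [p for p in patches if p.mean() >= thresh]
    non = [p for p in patches if p.mean() < thresh]
    return red, non
```

A "non-random" dataset in this spirit would draw deliberately from both pools instead of sampling patch locations uniformly.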


2019 · Vol 90 (sp1) · pp. 120
Author(s): Rong-Jie Liu, Jie Zhang, Bin-Ge Cui, Yi Ma, Ping-Jian Song, et al.

2021 · Vol 13 (10) · pp. 1944
Author(s): Xiaoming Liu, Menghua Wang

The Visible Infrared Imaging Radiometer Suite (VIIRS) onboard the Suomi National Polar-orbiting Partnership (SNPP) satellite has been a reliable source of ocean color data products, including normalized water-leaving radiance spectra nLw(λ) at five moderate (M) bands and one imagery (I) band. The spatial resolutions of the M-band and I-band nLw(λ) are 750 m and 375 m, respectively. With a convolutional neural network (CNN), the M-band nLw(λ) imagery can be super-resolved from 750 m to 375 m spatial resolution by leveraging the high spatial resolution features of the I1-band nLw(λ) data. However, it is also important to enhance the spatial resolution of the VIIRS-derived chlorophyll-a (Chl-a) concentration and the water diffuse attenuation coefficient at the wavelength of 490 nm (Kd(490)), as well as other biological and biogeochemical products. In this study, we describe our effort to derive high-resolution Kd(490) and Chl-a data based on super-resolved nLw(λ) images at the five VIIRS M-bands. To improve network performance over extremely turbid coastal oceans and inland waters, the networks are retrained with a training dataset including ocean color data from the Bohai Sea, Baltic Sea, and La Plata River Estuary, covering water types from clear open oceans to moderately and highly turbid waters. The evaluation results show that the super-resolved Kd(490) image is much sharper than the original and has more detailed fine spatial structures. A similar enhancement of finer structures is found in the super-resolved Chl-a images: Chl-a filaments are much sharper and thinner, and some very fine spatial features that do not appear in the original images emerge in the super-resolved Chl-a images. The networks are also applied to four other coastal and inland water regions. The results show that super-resolution occurs mainly on pixels of Chl-a and Kd(490) features, especially on feature edges and at locations with a large spatial gradient. The biases between the original M-band images and the super-resolved high-resolution images are small for both Chl-a and Kd(490) in moderately to extremely turbid coastal oceans and inland waters, indicating that the super-resolution process does not change the mean values of the original images.
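The mean-preservation property reported at the end can be checked mechanically: averaging each 2x2 block of a 375 m super-resolved image back onto the 750 m grid should reproduce the original M-band values. The sketch below uses nearest-neighbour upsampling as a trivial stand-in for the trained CNN, which additionally injects fine structure learned from the I1 band.

```python
import numpy as np

def upsample_nn(img, factor=2):
    # Naive nearest-neighbour stand-in for the CNN super-resolution step
    # (750 m -> 375 m); not the paper's network, just a placeholder.
    return np.repeat(np.repeat(img, factor, axis=0), factor, axis=1)

def mean_bias(lowres, sr, factor=2):
    # Mean-preservation check: pool each factor x factor block of the
    # super-resolved image back to the low-resolution grid and report
    # the mean difference against the original image.
    h, w = lowres.shape
    pooled = sr.reshape(h, factor, w, factor).mean(axis=(1, 3))
    return float(np.mean(pooled - lowres))
```

For nearest-neighbour upsampling the bias is exactly zero; for a trained network the abstract reports it is small but not necessarily zero.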


1998 · Vol 16 (3) · pp. 331-341
Author(s): J. Massons, D. Domingo, J. Lorente

Abstract. A cloud-detection method was used to retrieve cloudy pixels from Meteosat images. A high spatial resolution (one pixel), monthly averaged cloud-cover distribution was obtained for a one-year period, and the seasonal cycle of cloud amount was analyzed. Cloud parameters obtained include the total cloud amount and the percentage of occurrence of clouds at three altitudes. Hourly variations of cloud cover are also analyzed. The cloud properties determined are coherent with those obtained in previous studies.
Key words: Cloud cover · Meteosat


2020 · Vol 2020 · pp. 1-9
Author(s): Liang Huang, Qiuzhi Peng, Xueqin Yu

In order to improve the change detection accuracy of multitemporal high spatial resolution remote-sensing (HSRRS) images, a change detection method based on saliency detection and spatial intuitionistic fuzzy C-means (SIFCM) clustering is proposed. First, the cluster-based saliency cue method is used to obtain the saliency maps of the two temporal remote-sensing images; then, the saliency difference image is obtained by subtracting the two saliency maps; finally, the SIFCM clustering algorithm is used to classify the saliency difference image into changed and unchanged regions. Two data sets of multitemporal high spatial resolution remote-sensing images are selected as the experimental data. The detection accuracy of the proposed method is 96.17% and 97.89% on the two data sets, respectively. The results show that the proposed method is a feasible and better-performing method for multitemporal remote-sensing image change detection.
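The three-step pipeline (saliency maps, saliency difference, fuzzy clustering into changed/unchanged) can be sketched as follows. Plain 1-D fuzzy C-means is used here as a simplified stand-in for SIFCM, whose spatial and intuitionistic terms are omitted; the deterministic centre initialization is also an assumption of this sketch.

```python
import numpy as np

def fcm_1d(x, k=2, m=2.0, iters=50):
    # Plain fuzzy C-means on 1-D values: alternate membership and centre
    # updates; centres start evenly spread over the data range.
    c = np.linspace(x.min(), x.max(), k)
    for _ in range(iters):
        d = np.abs(x[:, None] - c[None, :]) + 1e-9
        u = 1.0 / d ** (2.0 / (m - 1.0))
        u /= u.sum(axis=1, keepdims=True)
        c = (u ** m * x[:, None]).sum(axis=0) / (u ** m).sum(axis=0)
    return c, u.argmax(axis=1)

def change_map(sal_t1, sal_t2):
    # Step 2: subtract the two saliency maps; step 3: cluster the
    # difference values into two classes and call the cluster with the
    # larger centre "changed".
    diff = np.abs(sal_t1 - sal_t2).ravel()
    centres, labels = fcm_1d(diff)
    changed = labels == int(np.argmax(centres))
    return changed.reshape(sal_t1.shape)
```

The saliency-map computation itself (the cluster-based saliency cue) is not reproduced here; any per-pixel saliency score in [0, 1] would slot in.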


2019 · Vol 12 (1) · pp. 44
Author(s): Haojie Ma, Yalan Liu, Yuhuan Ren, Jingxian Yu

An important and effective means for preliminary earthquake mitigation and relief is the rapid estimation of building damage via high spatial resolution remote sensing technology. Traditional object detection methods rely on artificially designed shallow features of post-earthquake remote sensing images, which are unreliable in complex background environments and require time-consuming feature selection, so satisfactory results are often difficult to obtain. Therefore, this study applies the object detection method You Only Look Once (YOLOv3), based on a convolutional neural network (CNN), to locate collapsed buildings in post-earthquake remote sensing images. Moreover, YOLOv3 was improved to obtain more effective detection results. First, we replaced the Darknet53 CNN in YOLOv3 with the lightweight CNN ShuffleNet v2. Second, the prediction-box center-point (XY) loss and width-height (WH) loss in the loss function were replaced with the generalized intersection over union (GIoU) loss. Experiments with the improved YOLOv3 model, performed on high spatial resolution aerial remote sensing images at a resolution of 0.5 m after the Yushu and Wenchuan earthquakes, show a significant reduction in the number of parameters, a detection speed of up to 29.23 f/s, and a target precision of 90.89%. Compared with the general YOLOv3, the detection speed improved by 5.21 f/s and the precision by 5.24%. Moreover, the improved model had stronger noise immunity, indicating a significant improvement in generalization. Therefore, the improved YOLOv3 model is effective for detecting collapsed buildings in post-earthquake high-resolution remote sensing images.
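The GIoU term that replaces the separate XY and WH losses has a simple closed form: IoU minus the fraction of the smallest enclosing box not covered by the union, which keeps a useful gradient even when the predicted and ground-truth boxes do not overlap. A minimal sketch for axis-aligned (x1, y1, x2, y2) boxes:

```python
def giou(box_a, box_b):
    # Generalized IoU in [-1, 1]; equals plain IoU when one box's
    # enclosing rectangle adds no extra area.
    ax1, ay1, ax2, ay2 = box_a
    bx1, by1, bx2, by2 = box_b
    # intersection
    iw = max(0.0, min(ax2, bx2) - max(ax1, bx1))
    ih = max(0.0, min(ay2, by2) - max(ay1, by1))
    inter = iw * ih
    # union
    area_a = (ax2 - ax1) * (ay2 - ay1)
    area_b = (bx2 - bx1) * (by2 - by1)
    union = area_a + area_b - inter
    iou = inter / union
    # smallest enclosing box
    cw = max(ax2, bx2) - min(ax1, bx1)
    ch = max(ay2, by2) - min(ay1, by1)
    c = cw * ch
    return iou - (c - union) / c

def giou_loss(box_a, box_b):
    # Regression loss used in place of the XY and WH terms.
    return 1.0 - giou(box_a, box_b)
```

For identical boxes the loss is 0; for disjoint boxes it exceeds 1, growing as the boxes move apart, which is what gives the optimizer a signal that plain IoU cannot.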

