Evaluating cloud liquid detection using cloud radar Doppler spectra in a pre-trained artificial neural network against Cloudnet liquid detection

2021 ◽  
Author(s):  
Heike Kalesse-Los ◽  
Willi Schimmel ◽  
Edward Luke ◽  
Patric Seifert

Abstract. Detection of liquid-containing cloud layers in thick mixed-phase clouds or multi-layer cloud situations from ground-based remote sensing instruments still poses observational challenges, yet improvements are crucial since the existence of multiple liquid layers in mixed-phase cloud situations influences cloud radiative effects, cloud lifetime, and precipitation formation processes. Hydrometeor target classifications such as Cloudnet that require a lidar signal for the classification of liquid are limited to the maximum height of lidar signal penetration and thus often lead to underestimations of liquid-containing cloud layers. Here we evaluate the Cloudnet liquid detection against the approach of Luke et al. (2010), which extracts morphological features from cloud-penetrating cloud radar Doppler spectra measurements in an artificial neural network (ANN) approach to classify liquid beyond full lidar signal attenuation, based on the simulation of the two lidar parameters particle backscatter coefficient and particle depolarization ratio. We show that the ANN of Luke et al. (2010), which was trained in Arctic conditions, can successfully be applied to observations in the mid-latitudes obtained during the seven-week-long ACCEPT field experiment in Cabauw, the Netherlands, in 2014. In a sensitivity study covering the whole duration of the ACCEPT campaign, different liquid-detection thresholds for ANN-predicted lidar variables are applied and evaluated against the Cloudnet target classification. Independent validation of the liquid mask from the standard Cloudnet target classification against the ANN-based technique is realized by comparisons to observations of microwave radiometer liquid water path, ceilometer liquid-layer base altitude, and radiosonde relative humidity. Four conclusions were drawn from the investigation. First, the threshold selection criteria of liquid-related lidar backscatter and depolarization alone control the liquid detection considerably. Second, all threshold values used in the ANN framework nevertheless outperformed the Cloudnet target classification for deep or multi-layer cloud situations where the lidar signal is fully attenuated within low liquid layers and the cloud reflectivity in higher cloud layers is sufficiently high to be detectable by the cloud radar. Third, in convective situations for which lidar data is available and for which the imprint of cloud microphysics on the radar Doppler spectrum is decreased, Cloudnet outperforms the ANN retrieval. Fourth, in high-level clouds both approaches (Cloudnet and the ANN technique) are limited.
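The threshold step described in the sensitivity study can be sketched as follows. This is a minimal illustration, not the paper's actual implementation: the function name `liquid_mask` and the specific threshold values are assumptions, and the physical rationale (liquid droplets are spherical, hence high backscatter and low depolarization) is the only part taken from the abstract's premise.

```python
import numpy as np

def liquid_mask(pred_backscatter, pred_depol,
                beta_thresh=1e-5, depol_thresh=0.1):
    """Flag liquid pixels from ANN-predicted lidar variables.

    A pixel counts as liquid when the predicted particle backscatter
    coefficient exceeds beta_thresh and the predicted particle
    depolarization ratio stays below depol_thresh (spherical liquid
    droplets backscatter strongly and depolarize weakly).
    Threshold values here are illustrative placeholders.
    """
    pred_backscatter = np.asarray(pred_backscatter, dtype=float)
    pred_depol = np.asarray(pred_depol, dtype=float)
    return (pred_backscatter > beta_thresh) & (pred_depol < depol_thresh)

# Example: three range gates; only the first satisfies both criteria
beta = [2e-5, 5e-7, 3e-5]
depol = [0.05, 0.02, 0.3]
print(liquid_mask(beta, depol))  # [ True False False]
```

Varying `beta_thresh` and `depol_thresh` over a grid and comparing each resulting mask against the Cloudnet target classification mirrors the structure of the sensitivity study, though the paper's exact threshold ranges are not reproduced here.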

2020 ◽  
Vol 39 (6) ◽  
pp. 8463-8475
Author(s):  
Palanivel Srinivasan ◽  
Manivannan Doraipandian

Rare event detection is performed using spatial-domain and frequency-domain procedures. The volume of footage from ubiquitous surveillance cameras is increasing exponentially over time, and monitoring all events manually is impractical and time-consuming. An automated rare event detection mechanism is therefore required to make this process manageable. In this work, a Context-Free Grammar (CFG) is developed for detecting rare events from a video stream, and an Artificial Neural Network (ANN) is trained on the CFG output. A set of dedicated algorithms performs frame splitting, edge detection, and background subtraction, and converts the processed data into the CFG. The developed CFG is converted into nodes and edges to form a graph, which is given to the input layer of an ANN to classify normal and rare event classes. The graph derived from the CFG of the input video stream is used to train the ANN. The performance of the developed Artificial Neural Network Based Context-Free Grammar Rare Event Detection (ACFG-RED) is then compared with other existing techniques using performance metrics such as accuracy, precision, sensitivity, recall, average processing time, and average processing power. Better metric values were observed for the ANN-CFG model than for the other techniques. The developed model provides a better solution for detecting rare events in video streams.
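The CFG-to-graph-to-ANN pipeline can be sketched as below. Everything here is a hypothetical illustration: the feature encoding (`graph_features`), the network shape, the event names, and the random weights are assumptions standing in for the paper's unspecified details; only the overall structure (event graph in, class probability out) follows the abstract.

```python
import numpy as np

def graph_features(nodes, edges):
    """Encode a CFG-derived event graph as a fixed-length vector for
    the ANN input layer: node count, edge count, mean and max degree.
    (A hypothetical encoding; the paper's exact one is not given.)"""
    degree = {v: 0 for v in nodes}
    for u, v in edges:
        degree[u] += 1
        degree[v] += 1
    degs = np.array(list(degree.values()), dtype=float)
    n, m = len(nodes), len(edges)
    return np.array([n, m,
                     degs.mean() if n else 0.0,
                     degs.max() if n else 0.0])

def ann_forward(x, W1, b1, W2, b2):
    """One-hidden-layer feed-forward pass; the sigmoid output is the
    probability of the 'rare event' class."""
    h = np.maximum(0.0, W1 @ x + b1)   # ReLU hidden layer
    z = W2 @ h + b2
    return 1.0 / (1.0 + np.exp(-z))    # sigmoid output

# Toy event graph from a grammar derivation: 4 nodes, 3 transitions
nodes = ["enter", "walk", "drop_bag", "exit"]
edges = [("enter", "walk"), ("walk", "drop_bag"), ("drop_bag", "exit")]
x = graph_features(nodes, edges)       # -> [4. 3. 1.5 2.]

rng = np.random.default_rng(0)         # untrained, random weights
W1, b1 = rng.normal(size=(8, 4)), np.zeros(8)
W2, b2 = rng.normal(size=(1, 8)), np.zeros(1)
p = ann_forward(x, W1, b1, W2, b2)     # a probability in [0, 1]
```

In a real system the weights would of course be learned from labeled normal/rare sequences rather than drawn at random; the sketch only shows how a grammar-derived graph can be flattened into an ANN input.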

