Automated multi-classifier recognition of atmospheric turbulent structures obtained by Doppler lidar

2020 ◽  
Vol 223 ◽  
pp. 03013
Author(s):  
Anton Sokolov ◽  
Egor Dmitriev ◽  
Ioannis Cheliotis ◽  
Hervé Delbarre ◽  
Elsa Dieudonne ◽  
...  

We present algorithms and results from the automated processing of LiDAR measurements obtained during the VEGILOT measurement campaign in Paris in autumn 2014, aimed at studying horizontal turbulent atmospheric regimes at urban scales. To process the images obtained by horizontal atmospheric scanning with a Doppler LiDAR, a method based on texture analysis and classification with supervised machine learning algorithms is proposed. The results of parallel classification by several classifiers were combined using a majority-voting strategy. The resulting accuracy estimates demonstrate the efficiency of the proposed method for the remote sensing of regional-scale turbulent patterns in the atmosphere.
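A minimal sketch of the majority-voting idea described above, combining several supervised classifiers by hard vote. The feature matrix and labels are random placeholders standing in for the texture descriptors extracted from the lidar scans; this is not the authors' exact pipeline.

import numpy as np
from sklearn.ensemble import VotingClassifier, RandomForestClassifier
from sklearn.svm import SVC
from sklearn.discriminant_analysis import QuadraticDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(150, 4))          # placeholder texture features
y = rng.integers(0, 3, size=150)       # placeholder class labels

vote = VotingClassifier(
    estimators=[
        ("qda", QuadraticDiscriminantAnalysis()),
        ("svm", SVC()),
        ("rf", RandomForestClassifier(n_estimators=100, random_state=0)),
    ],
    voting="hard",                      # majority vote over the individual predictions
)
print(cross_val_score(vote, X, y, cv=5).mean())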

2021 ◽  
Vol 13 (13) ◽  
pp. 2433
Author(s):  
Shu Yang ◽  
Fengchao Peng ◽  
Sibylle von Löwis ◽  
Guðrún Nína Petersen ◽  
David Christian Finger

Doppler lidars are used worldwide for wind monitoring and, recently, also for the detection of aerosols. Automatic algorithms that classify the signals retrieved from lidar measurements are very useful to users. In this study, we explore the value of machine learning for classifying backscattered signals from Doppler lidars using data from Iceland. We combined supervised and unsupervised machine learning algorithms with conventional lidar data processing methods and trained two models to filter out noise signals and to classify Doppler lidar observations into different classes, including clouds, aerosols and rain. The results reveal high accuracy for noise identification and for aerosol and cloud classification; precipitation, however, is underestimated. The method was tested on data sets from two instruments under different weather conditions, including three dust storms during the summer of 2019. Our results show that the method can provide an efficient, accurate and real-time classification of lidar measurements. Accordingly, we conclude that machine learning can open new opportunities for lidar data end-users, such as aviation safety operators, to monitor dust in the vicinity of airports.
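A generic sketch of the idea of combining conventional processing, unsupervised clustering and a supervised classifier; the feature names, noise threshold and cluster count below are assumptions, not the paper's values.

import numpy as np
from sklearn.cluster import KMeans
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(1)
snr = rng.uniform(-10, 20, size=1000)            # per-observation signal-to-noise ratio (dB)
features = rng.normal(size=(1000, 3))            # e.g. backscatter, Doppler width, velocity

valid = snr > -5                                  # assumed noise-filtering threshold
clusters = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(features[valid])

# In practice the clusters would be relabelled by an expert (cloud / aerosol / rain);
# here the raw cluster indices stand in for those labels when training a supervised model.
clf = GradientBoostingClassifier().fit(features[valid], clusters)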


2020 ◽  
Author(s):  
Ioannis Cheliotis ◽  
Elsa Dieudonné ◽  
Hervé Delbarre ◽  
Anton Sokolov ◽  
Egor Dmitriev ◽  
...  

Abstract. Turbulent structures can be observed using horizontal scans from single Doppler lidar or radar systems. Although the structures can be detected manually on the images, this approach would be time-consuming on large datasets, thus limiting the possibility of studying the properties of turbulent structures over more than a few days. To overcome this problem, an automated classification method was developed, based on observations recorded by a scanning Doppler lidar (Leosphere WLS100) installed atop a 75 m tower in the Paris city centre (France) during a 2-month campaign (September–October 2014). The lidar recorded 4577 quasi-horizontal scans, for which the turbulent component of the radial wind speed was determined using the velocity-azimuth display method. Three types of turbulent structures were identified by visual examination of the wind fields: unaligned thermals, rolls and streaks. A learning ensemble of 150 turbulent patterns was classified manually, relying on in situ and satellite data. The differences between the three types of structures were highlighted by enhancing the contrast of the images and computing four texture parameters (correlation, contrast, homogeneity and energy) that were provided to the supervised machine learning algorithm (quadratic discriminant analysis). Using 10-fold cross-validation, the classification error was estimated at about 9.2 % for the training ensemble, and at only 3.3 % for streaks in particular. The trained algorithm applied to the whole scan ensemble detected turbulent structures on 54 % of the scans, among which 34 % were coherent turbulent structures (rolls, streaks).
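A hedged sketch of the texture-feature and classification step described above: grey-level co-occurrence features (correlation, contrast, homogeneity, energy) computed per scan, then classified with quadratic discriminant analysis under 10-fold cross-validation. The synthetic "scans" are placeholders for the contrast-enhanced wind fields.

import numpy as np
from skimage.feature import graycomatrix, graycoprops   # scikit-image >= 0.19
from sklearn.discriminant_analysis import QuadraticDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
scans = rng.integers(0, 256, size=(150, 64, 64), dtype=np.uint8)  # placeholder images
labels = rng.integers(0, 3, size=150)                             # thermals / rolls / streaks

def texture_features(img):
    glcm = graycomatrix(img, distances=[1], angles=[0], levels=256,
                        symmetric=True, normed=True)
    return [graycoprops(glcm, p)[0, 0]
            for p in ("correlation", "contrast", "homogeneity", "energy")]

X = np.array([texture_features(s) for s in scans])
qda = QuadraticDiscriminantAnalysis()
print(cross_val_score(qda, X, labels, cv=10).mean())   # 10-fold cross-validation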


Author(s):  
Dilara Gerdan ◽  
Abdullah Beyaz ◽  
Mustafa Vatandaş

Colour is an essential parameter in product quality control and, ultimately, in consumers' purchasing decisions. Products can be damaged at any stage between harvest and storage. It is well known that cold storage conditions protect fruits from the negative effects of deformation; nevertheless, most consumers keep fruits in open packs at room temperature during consumption, which also shortens storage time. This study aims to determine the behaviour of damaged fruits under room-temperature and ambient-humidity conditions. To this end, the colour change of damaged pears, that is, the shift from red to green and from yellow to blue together with the lightness values, was determined using an image analysis technique and evaluated with data mining methods. For the experiments, 100 "Akça" and 100 "Deveci" local pear cultivar fruits were used. The fruits were damaged equally by means of a pendulum mechanism and then kept at room temperature. The colour-change areas on the fruits were evaluated with an X-Rite Ci60 spectrophotometer, and the hardness of the fruits was measured with a fruit penetrometer. The colour values (L, a, b) and ΔE were analysed for both cultivars, and the relationship between fruit hardness and colour change was also demonstrated. Predictions were made with supervised machine learning algorithms (Decision Tree and Neural Networks combined with the meta-learning techniques Majority Voting and Random Forest) in the KNIME Analytics software. The classifier performance metrics (accuracy, error, F-measure, Cohen's kappa, recall, precision, true positive (TP), false positive (FP), true negative (TN) and false negative (FN) values) are given in the conclusion section. The best prediction, with 98.458 % success, was obtained using the Majority Voting method (MAVL) and a 70 % training partition.
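A sketch of the colour-difference computation mentioned above (CIE76 ΔE from L*a*b* readings) followed by a simple Random Forest prediction with a 70 % training partition. The measurement values and labels are invented placeholders; the study itself used an X-Rite Ci60 spectrophotometer and KNIME rather than scikit-learn.

import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

def delta_e_cie76(lab1, lab2):
    """Euclidean distance between two L*a*b* readings (CIE76 colour difference)."""
    return np.linalg.norm(np.asarray(lab1) - np.asarray(lab2))

rng = np.random.default_rng(0)
lab_initial = rng.uniform([20, -20, -20], [90, 20, 60], size=(200, 3))   # placeholder L, a, b
lab_later = lab_initial + rng.normal(0, 3, size=(200, 3))                # after storage at room temperature
delta_e = np.array([delta_e_cie76(a, b) for a, b in zip(lab_initial, lab_later)])

X = np.column_stack([lab_later, delta_e])
y = rng.integers(0, 2, size=200)            # placeholder class labels (e.g. cultivar or damage state)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, train_size=0.7, random_state=0)  # 70 % partition
print(RandomForestClassifier(random_state=0).fit(X_tr, y_tr).score(X_te, y_te))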


2021 ◽  
Vol 11 (23) ◽  
pp. 11199
Author(s):  
Irati Rasines ◽  
Miguel Prada ◽  
Viacheslav Bobrov ◽  
Dhruv Agrawal ◽  
Leire Martinez ◽  
...  

This study aims to evaluate different combinations of features and algorithms for the control of a prosthetic hand in which both the configuration of the fingers and the gripping forces can be controlled. This requires identifying machine learning algorithms and feature sets that detect both intended force variation and hand gestures in EMG signals recorded from upper-limb amputees. However, despite decades of research into pattern recognition techniques, each new problem requires researchers to find a suitable classification algorithm, as there is no universal 'best' solution. Considering different techniques and data representations is therefore fundamental to achieving the most effective results. To this end, we use a publicly available database recorded from amputees to evaluate different combinations of features and classifiers. Analysis of data from nine individuals shows that, both for classic features and for time-dependent power spectrum descriptors (TD-PSD), the proposed logarithmically scaled version of the current window plus the previous window achieves the highest classification accuracy. Using linear discriminant analysis (LDA) as the classifier and applying a majority-voting strategy to stabilise the individual window classifications, we obtain 88 % accuracy with classic features and 89 % with TD-PSD features.
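A minimal sketch (placeholder data) of the scheme described above: per-window features fed to LDA, then a majority vote over consecutive windows to stabilise the gesture decision. The feature set here is simplified and is not the paper's exact TD-PSD descriptors.

import numpy as np
from collections import Counter
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(0)
X_train = rng.normal(size=(500, 8))      # placeholder per-window EMG features
y_train = rng.integers(0, 6, size=500)   # placeholder gesture labels
lda = LinearDiscriminantAnalysis().fit(X_train, y_train)

stream = rng.normal(size=(5, 8))         # five consecutive windows from one movement
window_preds = lda.predict(stream)
decision = Counter(window_preds).most_common(1)[0][0]   # majority vote over windows
print(window_preds, "->", decision)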


2020 ◽  
Vol 13 (12) ◽  
pp. 6579-6592
Author(s):  
Ioannis Cheliotis ◽  
Elsa Dieudonné ◽  
Hervé Delbarre ◽  
Anton Sokolov ◽  
Egor Dmitriev ◽  
...  

Abstract. Medium-to-large fluctuations and coherent structures (mlf-cs's) can be observed using horizontal scans from single Doppler lidar or radar systems. Despite the ability to detect the structures visually on the images, this method would be time-consuming on large datasets, thus limiting the possibilities to perform studies of the structures' properties over more than a few days. In order to overcome this problem, an automated classification method was developed, based on the observations recorded by a scanning Doppler lidar (Leosphere WLS100) installed atop a 75 m tower in Paris's city centre (France) during a 2-month campaign (September–October 2014). The mlf-cs's of the radial wind speed are estimated using the velocity–azimuth display method over 4577 quasi-horizontal scans. Three structure types were identified by visual examination of the wind fields: unaligned thermals, rolls and streaks. A learning ensemble of 150 mlf-cs patterns was classified manually relying on in situ and satellite data. The differences between the three types of structures were highlighted by enhancing the contrast of the images and computing four texture parameters (correlation, contrast, homogeneity and energy) that were provided to the supervised machine-learning algorithm, namely the quadratic discriminant analysis. The algorithm was able to classify successfully about 91 % of the cases based solely on the texture analysis parameters. The algorithm performed best for the streak structures with a classification error equivalent to 3.3 %. The trained algorithm applied to the whole scan ensemble detected structures on 54 % of the scans, among which 34 % were coherent structures (rolls and streaks).
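A hedged sketch of the velocity–azimuth display (VAD) step mentioned above: the radial velocity over one horizontal scan is fitted with a first-order sinusoid in azimuth, and the residual is taken as the fluctuation field. The data are synthetic and the scan geometry and fitting order are assumptions, not the paper's exact configuration.

import numpy as np

azimuth = np.deg2rad(np.arange(0, 360, 2.0))          # beam azimuth angles of one scan
true_u, true_v = 3.0, -1.5                             # background horizontal wind components
v_radial = true_u * np.sin(azimuth) + true_v * np.cos(azimuth) \
           + 0.4 * np.random.default_rng(0).normal(size=azimuth.size)  # plus fluctuations

# Least-squares fit of v_r(theta) = a0 + a1*sin(theta) + b1*cos(theta)
A = np.column_stack([np.ones_like(azimuth), np.sin(azimuth), np.cos(azimuth)])
coeffs, *_ = np.linalg.lstsq(A, v_radial, rcond=None)
fluctuation = v_radial - A @ coeffs                    # residual = fluctuating component
print(coeffs[1:], fluctuation.std())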


2020 ◽  
Author(s):  
Ghazal Farhani ◽  
Robert J. Sica ◽  
Mark Joseph Daley

Abstract. While it is relatively straightforward to automate the processing of lidar signals, it is more difficult to choose periods of "good" measurements to process. Groups use various ad hoc procedures involving either very simple (e.g. signal-to-noise ratio) or more complex procedures (e.g. Wing et al., 2018) to perform a task which is easy to train humans to perform but is time-consuming. Here, we use machine learning techniques to train the machine to sort the measurements before processing. The presented method is generic and can be applied to most lidars. We test the techniques using measurements from the Purple Crow Lidar (PCL) system located in London, Canada. The PCL has over 200,000 raw scans in Rayleigh and Raman channels available for classification. We classify raw (level-0) lidar measurements as "clear" sky scans with strong lidar returns, "bad" scans, and scans which are significantly influenced by clouds or aerosol loads. We examined different supervised machine learning algorithms, including the random forest, the support vector machine and gradient boosting trees, all of which can successfully classify scans. The algorithms were trained using about 1500 scans for each PCL channel, selected randomly from different nights of measurements in different years. The success rate of identification for all channels is above 95 %. We also used the t-distributed stochastic neighbour embedding (t-SNE) method, an unsupervised algorithm, to cluster our lidar scans. Because t-SNE is a data-driven method in which no labelling of the training set is needed, it is an attractive algorithm for finding anomalies in lidar scans. The method has been tested on several nights of measurements from the PCL. The t-SNE can successfully cluster the PCL data scans into meaningful categories. To demonstrate the use of the technique, we have used the algorithm to identify stratospheric aerosol layers due to wildfires.
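An illustrative comparison (placeholder features) of the three supervised classifiers named above for sorting raw lidar scans into "clear", "bad" and cloud/aerosol classes; real inputs would be summary features of each level-0 scan, not random numbers.

import numpy as np
from sklearn.ensemble import RandomForestClassifier, GradientBoostingClassifier
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(1500, 6))                 # e.g. per-scan summary statistics
y = rng.integers(0, 3, size=1500)              # clear / bad / cloud-aerosol labels

for name, clf in [("random forest", RandomForestClassifier(random_state=0)),
                  ("SVM", SVC()),
                  ("gradient boosting", GradientBoostingClassifier(random_state=0))]:
    print(name, cross_val_score(clf, X, y, cv=5).mean())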


Author(s):  
N. Tatar ◽  
M. Saadatseresht ◽  
H. Arefi ◽  
A. Hadavand

In this paper a new object-based framework to detect shadow areas in high-resolution satellite images is proposed. To produce a shadow map at the pixel level, state-of-the-art supervised machine learning algorithms are employed. Automatic ground-truth generation based on Otsu thresholding of shadow and non-shadow indices is used to train the classifiers. This is followed by segmenting the image scene to create image objects. To detect shadow objects, a majority vote over the pixel-based shadow detection results is applied within each object. A GeoEye-1 multispectral image over an urban area in the city of Qom, Iran, is used in the experiments. The results show the superiority of the proposed method over traditional pixel-based approaches, both visually and quantitatively.
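A sketch of the object-level majority vote described above: Otsu thresholding of a shadow index yields pixel labels, and each image segment then takes the majority label of its pixels. The index image and segmentation are random placeholders, not the paper's exact choices.

import numpy as np
from skimage.filters import threshold_otsu

rng = np.random.default_rng(0)
shadow_index = rng.random((100, 100))                        # placeholder shadow-index image
pixel_labels = shadow_index < threshold_otsu(shadow_index)   # True = shadow candidate pixel

segments = rng.integers(0, 50, size=(100, 100))              # placeholder segmentation (50 objects)
object_is_shadow = np.zeros(50, dtype=bool)
for seg_id in range(50):
    votes = pixel_labels[segments == seg_id]
    object_is_shadow[seg_id] = votes.mean() > 0.5            # majority vote within the object

shadow_map = object_is_shadow[segments]                      # object-based shadow map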


2021 ◽  
Vol 14 (1) ◽  
pp. 391-402
Author(s):  
Ghazal Farhani ◽  
Robert J. Sica ◽  
Mark Joseph Daley

Abstract. While it is relatively straightforward to automate the processing of lidar signals, it is more difficult to choose periods of “good” measurements to process. Groups use various ad hoc procedures involving either very simple (e.g. signal-to-noise ratio) or more complex procedures (e.g. Wing et al., 2018) to perform a task that is easy to train humans to perform but is time-consuming. Here, we use machine learning techniques to train the machine to sort the measurements before processing. The presented method is generic and can be applied to most lidars. We test the techniques using measurements from the Purple Crow Lidar (PCL) system located in London, Canada. The PCL has over 200 000 raw profiles in Rayleigh and Raman channels available for classification. We classify raw (level-0) lidar measurements as “clear” sky profiles with strong lidar returns, “bad” profiles, and profiles which are significantly influenced by clouds or aerosol loads. We examined different supervised machine learning algorithms including the random forest, the support vector machine, and the gradient boosting trees, all of which can successfully classify profiles. The algorithms were trained using about 1500 profiles for each PCL channel, selected randomly from different nights of measurements in different years. The success rate of identification for all the channels is above 95 %. We also used the t-distributed stochastic neighbour embedding (t-SNE) method, which is an unsupervised algorithm, to cluster our lidar profiles. Because the t-SNE is a data-driven method in which no labelling of the training set is needed, it is an attractive algorithm to find anomalies in lidar profiles. The method has been tested on several nights of measurements from the PCL. The t-SNE can successfully cluster the PCL data profiles into meaningful categories. To demonstrate the use of the technique, we have used the algorithm to identify stratospheric aerosol layers due to wildfires.
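A minimal sketch of the unsupervised t-SNE step described above: project per-profile features into two dimensions and inspect the resulting clusters, with no labels required. The placeholder features stand in for the PCL profiles.

import numpy as np
from sklearn.manifold import TSNE

rng = np.random.default_rng(0)
profiles = rng.normal(size=(2000, 20))         # e.g. binned and normalised return signals
embedding = TSNE(n_components=2, perplexity=30, random_state=0).fit_transform(profiles)
print(embedding.shape)                          # (2000, 2) coordinates for clustering or plotting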


2020 ◽  
Vol 14 (2) ◽  
pp. 140-159
Author(s):  
Anthony-Paul Cooper ◽  
Emmanuel Awuni Kolog ◽  
Erkki Sutinen

This article builds on previous research exploring the content of church-related tweets. It does so by examining whether the qualitative thematic coding of such tweets can, in part, be automated through machine learning. It compares three supervised machine learning algorithms to understand how well each performs at a classification task, based on a dataset of human-coded church-related tweets. The study finds that one such algorithm, Naïve Bayes, performs better than the other algorithms considered, returning precision, recall and F-measure values which each exceed an acceptable threshold of 70 %. This has far-reaching consequences at a time when the high volume of social media data, in this case Twitter data, means that the resource intensity of manual coding approaches can act as a barrier to understanding how the online community interacts with, and talks about, church. The findings presented in this article offer a way forward for scholars of digital theology to better understand the content of online church discourse.
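A hedged sketch of the kind of Naïve Bayes classification task described above: bag-of-words features from tweet text, a multinomial Naïve Bayes model, and precision/recall/F-measure reporting. The example tweets and thematic labels are invented placeholders, not the study's data or coding scheme.

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline
from sklearn.metrics import classification_report

tweets = ["Sunday service was wonderful", "Bake sale at the church hall",
          "Watching the match tonight", "New sermon series starts next week"]
labels = ["worship", "community", "other", "worship"]          # placeholder human-coded themes

model = make_pipeline(TfidfVectorizer(), MultinomialNB()).fit(tweets, labels)
preds = model.predict(tweets)
print(classification_report(labels, preds, zero_division=0))   # precision, recall, F-measure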

