Detecting turbulent structures on single Doppler lidar large datasets: an automated classification method for horizontal scans

Author(s):  
Ioannis Cheliotis ◽  
Elsa Dieudonné ◽  
Hervé Delbarre ◽  
Anton Sokolov ◽  
Egor Dmitriev ◽  
...  

Abstract. Turbulent structures can be observed using horizontal scans from single Doppler lidar or radar systems. Although the structures can be detected manually on the images, doing so is time-consuming on large datasets, which limits studies of turbulent structure properties to a few days at most. To overcome this problem, an automated classification method was developed, based on the observations recorded by a scanning Doppler lidar (Leosphere WLS100) installed atop a 75 m tower in Paris city centre (France) during a 2-month campaign (September–October 2014). The lidar recorded 4577 quasi-horizontal scans, for which the turbulent component of the radial wind speed was determined using the velocity azimuth display method. Three types of turbulent structures were identified by visual examination of the wind fields: unaligned thermals, rolls and streaks. A learning ensemble of 150 turbulent patterns was classified manually, relying on in situ and satellite data. The differences between the three types of structures were highlighted by enhancing the contrast of the images and computing four texture parameters (correlation, contrast, homogeneity and energy), which were provided to a supervised machine-learning algorithm (quadratic discriminant analysis). Using 10-fold cross-validation, the classification error was estimated at about 9.2 % for the training ensemble, falling to 3.3 % for streaks. The trained algorithm, applied to the whole scan ensemble, detected turbulent structures on 54 % of the scans, among which 34 % were coherent turbulent structures (rolls, streaks).
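As an illustration of the velocity azimuth display step described above, here is a minimal sketch (not the authors' code; the function name and the single-ring simplification are ours) that removes the mean-flow harmonic from one azimuth ring of a quasi-horizontal scan and keeps the residual as the turbulent component:

```python
import numpy as np

def vad_fluctuations(azimuth_deg, v_radial):
    """Remove the mean-flow signature from one range ring of a scan.

    The velocity azimuth display (VAD) method fits the first-order
    harmonic a0 + a1*cos(az) + b1*sin(az) to the radial wind speed;
    the residual is the turbulent / fluctuation component. In practice
    the fit is repeated for every range ring of the scan.
    """
    az = np.deg2rad(azimuth_deg)
    # Design matrix for the harmonic fit (offset + one Fourier mode)
    A = np.column_stack([np.ones_like(az), np.cos(az), np.sin(az)])
    coeffs, *_ = np.linalg.lstsq(A, v_radial, rcond=None)
    fitted = A @ coeffs        # mean-flow projection on the beam
    return v_radial - fitted   # turbulent component

# Example: synthetic ring with a 5 m/s wind from 270 deg plus noise
az = np.arange(0.0, 360.0, 1.0)
v_r = 5.0 * np.cos(np.deg2rad(az - 270.0)) + 0.3 * np.random.randn(az.size)
residual = vad_fluctuations(az, v_r)
print(residual.std())  # close to the 0.3 m/s noise level
```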

2020 ◽  
Vol 13 (12) ◽  
pp. 6579-6592
Author(s):  
Ioannis Cheliotis ◽  
Elsa Dieudonné ◽  
Hervé Delbarre ◽  
Anton Sokolov ◽  
Egor Dmitriev ◽  
...  

Abstract. Medium-to-large fluctuations and coherent structures (mlf-cs's) can be observed using horizontal scans from single Doppler lidar or radar systems. Although the structures can be detected visually on the images, doing so is time-consuming on large datasets, which limits studies of the structures' properties to a few days at most. To overcome this problem, an automated classification method was developed, based on the observations recorded by a scanning Doppler lidar (Leosphere WLS100) installed atop a 75 m tower in Paris city centre (France) during a 2-month campaign (September–October 2014). The mlf-cs's of the radial wind speed were estimated using the velocity–azimuth display method over 4577 quasi-horizontal scans. Three structure types were identified by visual examination of the wind fields: unaligned thermals, rolls and streaks. A learning ensemble of 150 mlf-cs patterns was classified manually, relying on in situ and satellite data. The differences between the three types of structures were highlighted by enhancing the contrast of the images and computing four texture parameters (correlation, contrast, homogeneity and energy), which were provided to the supervised machine-learning algorithm, namely quadratic discriminant analysis. The algorithm successfully classified about 91 % of the cases based solely on the texture-analysis parameters. It performed best for the streak structures, with a classification error of 3.3 %. The trained algorithm, applied to the whole scan ensemble, detected structures on 54 % of the scans, among which 34 % were coherent structures (rolls and streaks).
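The texture-and-classification pipeline described in the abstract can be sketched as follows, assuming scikit-image and scikit-learn; `training_scans` and `y` stand in for the 150 manually labelled patterns and their labels, and the grey-level quantisation is our assumption, not a detail from the paper:

```python
import numpy as np
from skimage.feature import graycomatrix, graycoprops
from sklearn.discriminant_analysis import QuadraticDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

def texture_features(scan, levels=32):
    """Four GLCM texture parameters of one contrast-enhanced scan.

    `scan` is a 2-D wind-fluctuation field rescaled to [0, 1]; it is
    quantised to `levels` grey levels before computing the grey-level
    co-occurrence matrix (GLCM), averaged over four directions.
    """
    img = np.uint8(np.clip(scan, 0, 1) * (levels - 1))
    glcm = graycomatrix(img, distances=[1],
                        angles=[0, np.pi / 4, np.pi / 2, 3 * np.pi / 4],
                        levels=levels, symmetric=True, normed=True)
    return [graycoprops(glcm, p).mean()
            for p in ("correlation", "contrast", "homogeneity", "energy")]

# X: (150, 4) feature matrix, y: structure labels of the training ensemble
X = np.array([texture_features(s) for s in training_scans])
qda = QuadraticDiscriminantAnalysis()
scores = cross_val_score(qda, X, y, cv=10)  # 10-fold cross-validation
print(f"accuracy: {scores.mean():.1%}")     # the paper reports about 91 %
```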


2020 ◽  
Vol 223 ◽  
pp. 03013
Author(s):  
Anton Sokolov ◽  
Egor Dmitriev ◽  
Ioannis Cheliotis ◽  
Hervé Delbarre ◽  
Elsa Dieudonne ◽  
...  

We present algorithms and results from the automated processing of LiDAR measurements obtained during the VEGILOT measurement campaign in Paris in autumn 2014, aimed at studying horizontal turbulent atmospheric regimes on urban scales. To process the images obtained by horizontal atmospheric scanning with a Doppler LiDAR, a method is proposed based on texture analysis and classification using supervised machine learning algorithms. The results of parallel classification by various classifiers were combined using a majority-voting strategy. The obtained accuracy estimates demonstrate the efficiency of the proposed method for the remote sensing of regional-scale turbulent patterns in the atmosphere.
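The majority-voting step can be sketched as below; the member classifiers are illustrative stand-ins (the abstract does not name them), and `X_train`, `y_train` and `X_scans` are placeholders for the texture features and labels:

```python
from sklearn.discriminant_analysis import QuadraticDiscriminantAnalysis
from sklearn.ensemble import VotingClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC

# Parallel classifiers combined by hard (majority) voting.
ensemble = VotingClassifier(
    estimators=[("qda", QuadraticDiscriminantAnalysis()),
                ("knn", KNeighborsClassifier(n_neighbors=5)),
                ("svm", SVC(kernel="rbf"))],
    voting="hard",  # each model casts one vote; the majority label wins
)
ensemble.fit(X_train, y_train)      # texture features of the labelled scans
labels = ensemble.predict(X_scans)  # classify the full scan ensemble
```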


2020 ◽  
Author(s):  
Ioannis Cheliotis ◽  
Elsa Dieudonné ◽  
Hervé Delbarre ◽  
Anton Sokolov ◽  
Egor Dmitriev ◽  
...  

Pulsed Doppler wind lidars (PDWL) have been used extensively to study atmospheric turbulence. Their ability to scan large areas in a short period of time is a substantial advantage over in situ measurements. Furthermore, PDWL can scan horizontally as well as vertically, thus providing observations throughout the atmospheric boundary layer (ABL). By analysing PDWL observations it is possible to identify large turbulent structures in the ABL such as thermals, rolls and streaks. Although several studies have analysed such turbulent structures, they examine particular cases spanning short periods of time.

For this study we analysed the turbulent structures (thermals, rolls, streaks) over Paris during a two-month period (4 September – 6 October 2014, VEGILOT campaign) observed with a PDWL installed on a 70 m tower in Paris city centre. The turbulent component of the radial wind field was extracted from the horizontal scans (1° elevation angle) using the velocity azimuth display method. The VEGILOT campaign provided 4577 horizontal scans, so for the classification of the turbulent structures we developed an automatic method based on texture analysis and machine learning of the turbulent radial wind fields. Thirty characteristic cases of each turbulent structure type were selected at the learning step after an extensive examination of the meteorological parameters: roll cases were selected when cloud streets were visible on satellite images, streak cases during high wind-shear development near the surface, and thermal cases when solar radiation measurements in the area were high. In addition, sixty cases of "others", representing any other type of turbulence, were added to the training ensemble. The analysis of errors estimated by cross-validation shows that the K-nearest-neighbours algorithm was able to classify 96.3% of these 150 cases accurately. Subsequently the algorithm was applied to the whole dataset of 4577 scans. The results show 52% of the scans classified as containing turbulent structures, with 33% being coherent turbulent structures (22% streaks, 11% rolls).

Based on this classification, the physical parameters associated with the different types of turbulent structures were determined, e.g. structure size, ABL height, synoptic wind speed and vertical wind speed. Range-height indicator and line-of-sight scans provided vertical observations that illustrate the presence of vertical motions during the observation of turbulent structures. The structure sizes were retrieved from the spectral analysis in the transverse direction relative to the synoptic wind, and are in agreement with commonly observed sizes (a few hundred metres for streaks, a few kilometres for rolls), as sketched below.
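The structure-size retrieval mentioned in the last paragraph can be sketched as a peak search in the one-sided power spectrum of a transect taken transverse to the synoptic wind; this is a generic illustration under our own assumptions, not the authors' code:

```python
import numpy as np

def dominant_wavelength(transect, dx):
    """Dominant structure size along a transverse transect.

    `transect` is the wind fluctuation sampled every `dx` metres
    perpendicular to the synoptic wind; the dominant wavelength is
    read from the peak of the one-sided power spectrum.
    """
    transect = transect - transect.mean()       # remove the mean offset
    power = np.abs(np.fft.rfft(transect)) ** 2  # one-sided spectrum
    freqs = np.fft.rfftfreq(transect.size, d=dx)
    k = power[1:].argmax() + 1                  # skip the zero frequency
    return 1.0 / freqs[k]                       # wavelength in metres

# Example: 400 m streak spacing sampled every 50 m over 8 km
x = np.arange(0, 8000, 50.0)
signal = np.sin(2 * np.pi * x / 400.0)
print(dominant_wavelength(signal, dx=50.0))     # ~400 m
```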


This paper presents a comparative study of two supervised machine learning techniques for classification problems. Owing to its real-time processing ability, the neural network has numerous applications in many fields. The SVM is also a very popular supervised learning algorithm because of its good generalization power. The paper offers a thorough study of the presented classification algorithms and compares their accuracy and speed, which should help other researchers develop novel algorithms for their applications. The comparative study showed that SVM performs better when dealing with high-dimensional, continuous features, and that the selection and settings of the kernel function are essential for SVM optimality.
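A hedged sketch of the kind of comparison the paper describes, using scikit-learn stand-ins (the paper's datasets and exact models are not given): an SVM with two kernel choices against a small neural network, timed and scored on synthetic high-dimensional continuous data:

```python
import time
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Synthetic continuous, high-dimensional data: the regime in which the
# paper finds SVM competitive. Kernel choice is the key SVM setting.
X, y = make_classification(n_samples=2000, n_features=40,
                           n_informative=20, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

models = {
    "SVM (linear)": SVC(kernel="linear"),
    "SVM (RBF)":    SVC(kernel="rbf", gamma="scale"),
    "MLP":          MLPClassifier(hidden_layer_sizes=(50,), max_iter=500),
}
for name, model in models.items():
    clf = make_pipeline(StandardScaler(), model)
    t0 = time.perf_counter()
    clf.fit(X_tr, y_tr)
    print(f"{name}: acc={clf.score(X_te, y_te):.3f}, "
          f"train time={time.perf_counter() - t0:.2f}s")
```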


2019 ◽  
Vol 23 (1) ◽  
pp. 12-21 ◽  
Author(s):  
Shikha N. Khera ◽  
Divya

The information technology (IT) industry in India has been facing a systemic issue of high attrition in the past few years, resulting in monetary and knowledge-based losses to companies. The aim of this research is to develop a model to predict employee attrition and give organizations the opportunity to address issues and improve retention. A predictive model was developed based on a supervised machine learning algorithm, the support vector machine (SVM). Archival employee data (consisting of 22 input features) were collected from the human resource databases of three IT companies in India, including each employee's employment status (the response variable) at the time of collection. Accuracy results from the confusion matrix for the SVM model showed an accuracy of 85 per cent. The results also show that the model performs better at predicting who will leave the firm than at predicting who will not leave the company.
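A minimal sketch of the modelling step, assuming scikit-learn; `X` and `y` are placeholders for the 22-feature employee data and employment status, which are not public. The per-class recalls illustrate the paper's point that the model is better at spotting leavers than stayers:

```python
from sklearn.metrics import confusion_matrix
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# X: 22 input features per employee; y: employment status (1 = left).
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)
model = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
model.fit(X_tr, y_tr)

cm = confusion_matrix(y_te, model.predict(X_te))
accuracy = cm.trace() / cm.sum()          # overall accuracy (paper: ~85 %)
recall_leavers = cm[1, 1] / cm[1].sum()   # sensitivity for "will leave"
recall_stayers = cm[0, 0] / cm[0].sum()   # specificity for "will stay"
print(accuracy, recall_leavers, recall_stayers)
```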


2020 ◽  
Vol 36 (6) ◽  
pp. 439-442
Author(s):  
Alissa Jell ◽  
Christina Kuttler ◽  
Daniel Ostler ◽  
Norbert Hüser

Introduction: Esophageal motility disorders have a severe impact on patients' quality of life. While high-resolution manometry (HRM) is the gold standard in the diagnosis of esophageal motility disorders, intermittently occurring muscular deficiencies often remain undiscovered if they do not cause an intense level of discomfort or suffering in patients. Ambulatory long-term HRM allows us to study the circadian (dys)function of the esophagus in a unique way. With the prolonged examination period of 24 h, however, there is an immense increase in data, which requires personnel and time for evaluation not available in clinical routine. Artificial intelligence (AI) might contribute here by performing an autonomous analysis. Methods: On the basis of 40 previously performed and manually tagged long-term HRM recordings in patients with suspected temporary esophageal motility disorders, we implemented a supervised machine learning algorithm for automated swallow detection and classification. Results: For a 24 h long-term HRM recording, this algorithm reduced the evaluation time from 3 days to a core evaluation time of 11 min for automated swallow detection and clustering, plus an additional 10–20 min of evaluation time, depending on the complexity and diversity of motility disorders in the examined patient. In 12.5% of patients with suspected esophageal motility disorders, AI-enabled long-term HRM revealed new findings relevant to subsequent therapy. Conclusion: This new approach paves the way to the clinical use of long-term HRM in patients with temporary esophageal motility disorders and might serve as an ideal and clinically relevant application of AI.
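The paper does not publish its algorithm; the following sketch only illustrates the general pattern of such a pipeline (candidate detection on a pressure trace, then supervised classification of fixed-length windows). The thresholds, the random-forest classifier and all names are assumptions, not the authors' method:

```python
import numpy as np
from scipy.signal import find_peaks
from sklearn.ensemble import RandomForestClassifier

def candidate_swallows(pressure, fs, channel=0, min_gap_s=4.0):
    """Flag candidate swallow onsets on one HRM channel.

    A swallow appears as a pressure peak on a proximal sensor;
    `find_peaks` with a prominence threshold and a refractory period
    yields crude onsets that the classifier then accepts or rejects.
    """
    peaks, _ = find_peaks(pressure[channel],
                          prominence=30.0,              # mmHg, assumed
                          distance=int(min_gap_s * fs))  # refractory gap
    return peaks

def window_features(pressure, onset, fs, length_s=10.0):
    """Flatten a fixed-length multichannel window around one onset."""
    w = pressure[:, onset:onset + int(length_s * fs)]
    return w.flatten()

# Supervised step: windows and labels from the 40 manually tagged
# recordings (X_windows, y_labels are placeholders).
clf = RandomForestClassifier(n_estimators=200, random_state=0)
clf.fit(X_windows, y_labels)  # labels: swallow type or artefact
```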


Friction ◽  
2021 ◽  
Author(s):  
Vigneashwara Pandiyan ◽  
Josef Prost ◽  
Georg Vorlaufer ◽  
Markus Varga ◽  
Kilian Wasmer

Abstract. Functional surfaces in relative contact and motion are prone to wear and tear, resulting in loss of efficiency and performance of the workpieces/machines. Wear occurs in the form of adhesion, abrasion, scuffing, galling and scoring between contacts; the wear rate, however, depends primarily on the physical properties of the materials and the surrounding environment. Monitoring the integrity of surfaces by offline inspections leads to significant wasted machine time. A potential alternative to the offline inspection currently practised in industry is the analysis of sensor signatures capable of capturing the wear state and correlating it with the wear phenomenon, followed by in situ classification using a state-of-the-art machine learning (ML) algorithm. Although this technique is better than offline inspection, it possesses inherent disadvantages for training the ML models. Ideally, supervised training of ML models requires the classes in the dataset to be of equal weight to avoid biasing; collecting such a dataset is very cumbersome and expensive in practice, as in real industrial applications the malfunction period is minimal compared with normal operation. Furthermore, classification models cannot separate new, unfamiliar wear phenomena from the normal regime. As a promising alternative, in this work we propose a methodology able to differentiate the abnormal regimes, i.e., wear phenomenon regimes, from the normal regime. This is carried out by familiarizing the ML algorithm only with the distribution of the acoustic emission (AE) signals, captured using a microphone, that correspond to the normal regime. As a result, the algorithm can detect whether a new, unseen signal overlaps the learnt distribution. To achieve this goal, a generative convolutional neural network (CNN) architecture based on a variational autoencoder (VAE) was built and trained. During the validation of the proposed architecture, we were able to identify acoustic signals corresponding to the normal and abnormal wear regimes with accuracies of 97% and 80%, respectively. Hence, our approach shows very promising results for in situ and real-time condition monitoring or even wear prediction in tribological applications.
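A compact PyTorch sketch of a 1-D convolutional VAE of the kind described; the layer sizes, window length and scoring rule are our assumptions, not the paper's architecture:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class AcousticVAE(nn.Module):
    """1-D convolutional VAE trained only on normal-regime AE windows.

    At test time, windows whose reconstruction error exceeds a
    threshold fitted on normal data are flagged as wear regimes.
    """
    def __init__(self, sig_len=1024, latent=16):
        super().__init__()
        self.enc = nn.Sequential(
            nn.Conv1d(1, 16, 9, stride=4, padding=4), nn.ReLU(),
            nn.Conv1d(16, 32, 9, stride=4, padding=4), nn.ReLU(),
            nn.Flatten())
        feat = 32 * (sig_len // 16)
        self.mu = nn.Linear(feat, latent)
        self.logvar = nn.Linear(feat, latent)
        self.dec_in = nn.Linear(latent, feat)
        self.dec = nn.Sequential(
            nn.Unflatten(1, (32, sig_len // 16)),
            nn.ConvTranspose1d(32, 16, 8, stride=4, padding=2), nn.ReLU(),
            nn.ConvTranspose1d(16, 1, 8, stride=4, padding=2))

    def forward(self, x):
        h = self.enc(x)
        mu, logvar = self.mu(h), self.logvar(h)
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)  # reparameterize
        return self.dec(self.dec_in(z)), mu, logvar

def vae_loss(recon, x, mu, logvar):
    # Reconstruction term plus KL divergence to the unit Gaussian prior
    rec = F.mse_loss(recon, x, reduction="sum")
    kld = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())
    return rec + kld

model = AcousticVAE()
x = torch.randn(8, 1, 1024)                  # a batch of AE signal windows
recon, mu, logvar = model(x)
score = ((recon - x) ** 2).mean(dim=(1, 2))  # higher score = more anomalous
```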


Genes ◽  
2021 ◽  
Vol 12 (4) ◽  
pp. 527
Author(s):  
Eran Elhaik ◽  
Dan Graur

In the last 15 years or so, soft selective sweep mechanisms have been catapulted from a curiosity of little evolutionary importance to a ubiquitous mechanism claimed to explain most adaptive evolution and, in some cases, most evolution. This transformation was aided by a series of articles by Daniel Schrider and Andrew Kern. Within this series, a paper entitled “Soft sweeps are the dominant mode of adaptation in the human genome” (Schrider and Kern, Mol. Biol. Evolut. 2017, 34(8), 1863–1877) attracted a great deal of attention, in particular in conjunction with another paper (Kern and Hahn, Mol. Biol. Evolut. 2018, 35(6), 1366–1371), for purporting to discredit the Neutral Theory of Molecular Evolution (Kimura 1968). Here, we address an alleged novelty in Schrider and Kern’s paper, i.e., the claim that their study involved an artificial intelligence technique called supervised machine learning (SML). SML is predicated upon the existence of a training dataset in which the correspondence between the input and output is known empirically to be true. Curiously, Schrider and Kern did not possess a training dataset of genomic segments known a priori to have evolved either neutrally or through soft or hard selective sweeps. Thus, their claim of using SML is thoroughly and utterly misleading. In the absence of legitimate training datasets, Schrider and Kern used: (1) simulations that employ many manipulatable variables and (2) a system of data cherry-picking rivaling the worst excesses in the literature. These two factors, in addition to the lack of negative controls and the irreproducibility of their results due to incomplete methodological detail, lead us to conclude that all evolutionary inferences derived from so-called SML algorithms (e.g., S/HIC) should be taken with a huge shovel of salt.

