Detecting turbulent structures on single Doppler lidar large datasets: an automated classification method for horizontal scans

2020 ◽  
Vol 13 (12) ◽  
pp. 6579-6592
Author(s):  
Ioannis Cheliotis ◽  
Elsa Dieudonné ◽  
Hervé Delbarre ◽  
Anton Sokolov ◽  
Egor Dmitriev ◽  
...  

Abstract. Medium-to-large fluctuations and coherent structures (mlf-cs's) can be observed using horizontal scans from single Doppler lidar or radar systems. Although the structures can be detected visually in the images, doing so is time-consuming on large datasets, thus limiting the possibility of studying the structures' properties over more than a few days. In order to overcome this problem, an automated classification method was developed, based on the observations recorded by a scanning Doppler lidar (Leosphere WLS100) installed atop a 75 m tower in Paris's city centre (France) during a 2-month campaign (September–October 2014). The mlf-cs's of the radial wind speed are estimated using the velocity–azimuth display method over 4577 quasi-horizontal scans. Three structure types were identified by visual examination of the wind fields: unaligned thermals, rolls and streaks. A learning ensemble of 150 mlf-cs patterns was classified manually relying on in situ and satellite data. The differences between the three types of structures were highlighted by enhancing the contrast of the images and computing four texture parameters (correlation, contrast, homogeneity and energy) that were provided to the supervised machine-learning algorithm, namely quadratic discriminant analysis. The algorithm successfully classified about 91 % of the cases based solely on the texture-analysis parameters. It performed best for the streak structures, with a classification error of 3.3 %. The trained algorithm applied to the whole scan ensemble detected structures on 54 % of the scans, among which 34 % were coherent structures (rolls and streaks).

2020 ◽  
Author(s):  
Ioannis Cheliotis ◽  
Elsa Dieudonné ◽  
Hervé Delbarre ◽  
Anton Sokolov ◽  
Egor Dmitriev ◽  
...  

Abstract. Turbulent structures can be observed using horizontal scans from single Doppler lidar or radar systems. Although the structures can be detected manually in the images, doing so is time-consuming on large datasets, thus limiting the possibility of studying the turbulent structures' properties over more than a few days. In order to overcome this problem, an automated classification method was developed, based on the observations recorded by a scanning Doppler lidar (Leosphere WLS100) installed atop a 75-m tower in Paris's city centre (France) during a 2-month campaign (September–October 2014). The lidar recorded 4577 quasi-horizontal scans, for which the turbulent component of the radial wind speed was determined using the velocity–azimuth display method. Three turbulent structure types were identified by visual examination of the wind fields: unaligned thermals, rolls and streaks. A learning ensemble of 150 turbulent patterns was classified manually relying on in situ and satellite data. The differences between the three types of structures were highlighted by enhancing the contrast of the images and computing four texture parameters (correlation, contrast, homogeneity and energy) that were provided to the supervised machine-learning algorithm (quadratic discriminant analysis). Using the 10-fold cross-validation method, the classification error was estimated to be about 9.2 % for the training ensemble, and 3.3 % in particular for streaks. The trained algorithm applied to the whole scan ensemble detected turbulent structures on 54 % of the scans, among which 34 % were coherent turbulent structures (rolls and streaks).
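The 10-fold cross-validation error estimate mentioned above works by rotating which tenth of the learning ensemble is held out. A minimal scikit-learn sketch, with synthetic stand-in data (the real 150 labelled texture-feature vectors are not reproduced here):

```python
from sklearn.datasets import make_classification
from sklearn.discriminant_analysis import QuadraticDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

# Synthetic stand-in for the 150 manually labelled 4-feature texture vectors
X, y = make_classification(n_samples=150, n_features=4, n_informative=3,
                           n_redundant=0, n_classes=3, n_clusters_per_class=1,
                           random_state=0)

# 10-fold CV: train on 9 folds, score the held-out fold, rotate 10 times
scores = cross_val_score(QuadraticDiscriminantAnalysis(), X, y, cv=10)
print(f"estimated classification error: {1 - scores.mean():.1%}")
```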


2020 ◽  
Vol 223 ◽  
pp. 03013
Author(s):  
Anton Sokolov ◽  
Egor Dmitriev ◽  
Ioannis Cheliotis ◽  
Hervé Delbarre ◽  
Elsa Dieudonne ◽  
...  

We present algorithms and results of the automated processing of LiDAR measurements obtained during the VEGILOT measurement campaign in Paris in autumn 2014, aimed at studying horizontal turbulent atmospheric regimes on urban scales. To process the images obtained by horizontal atmospheric scanning with a Doppler LiDAR, a method is proposed based on texture analysis and classification using supervised machine-learning algorithms. The results of parallel classification by various classifiers were combined using a majority-voting strategy. The obtained accuracy estimates demonstrate the efficiency of the proposed method for the remote sensing of regional-scale turbulent patterns in the atmosphere.
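The majority-voting step can be illustrated as follows. This is a sketch under assumptions: the abstract does not list the individual classifiers, so generic scikit-learn estimators stand in.

```python
from sklearn.datasets import make_classification
from sklearn.discriminant_analysis import QuadraticDiscriminantAnalysis
from sklearn.ensemble import VotingClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC

# Synthetic stand-in for the texture-feature vectors (4 features per scan)
X, y = make_classification(n_samples=300, n_features=4, n_informative=3,
                           n_redundant=0, n_classes=3, n_clusters_per_class=1,
                           random_state=0)

# Hard voting: each classifier casts one vote and the majority label wins
ensemble = VotingClassifier(
    estimators=[("qda", QuadraticDiscriminantAnalysis()),
                ("svm", SVC()),
                ("knn", KNeighborsClassifier())],
    voting="hard")
ensemble.fit(X, y)
print(f"training accuracy: {ensemble.score(X, y):.2f}")
```

Hard voting requires no probability calibration, which is why it is a common way to combine heterogeneous classifiers.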


In this paper, a comparative study of two supervised machine learning techniques for classification problems is presented. Owing to its real-time processing ability, the neural network has numerous applications in many fields. The SVM is also a very popular supervised learning algorithm because of its good generalization power. This paper presents a thorough study of the two classification algorithms and a comparison of their accuracy and speed, which may help other researchers develop novel algorithms for their applications. The comparative study showed that the SVM performs better when dealing with multidimensional and continuous features. The selection and settings of the kernel function are essential for SVM optimality.
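A minimal accuracy-and-speed comparison in the spirit of such a study can be sketched as follows. The dataset, network size and kernel settings below are illustrative assumptions, not those of the paper.

```python
import time

from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.svm import SVC

X, y = make_classification(n_samples=600, n_features=20, random_state=0)
Xtr, Xte, ytr, yte = train_test_split(X, y, test_size=0.3, random_state=0)

results = {}
for name, model in [
    ("SVM (RBF kernel)", SVC(kernel="rbf", C=1.0, gamma="scale")),
    ("neural network", MLPClassifier(hidden_layer_sizes=(32,),
                                     max_iter=1000, random_state=0)),
]:
    t0 = time.perf_counter()
    model.fit(Xtr, ytr)  # training time is the main speed difference
    results[name] = (model.score(Xte, yte), time.perf_counter() - t0)

for name, (acc, secs) in results.items():
    print(f"{name}: accuracy={acc:.2f}, train time={secs:.3f}s")
```

Swapping `kernel="rbf"` for `"linear"` or `"poly"` (and tuning `C`/`gamma`) is exactly the kernel-selection sensitivity the paper refers to.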


2019 ◽  
Vol 23 (1) ◽  
pp. 12-21 ◽  
Author(s):  
Shikha N. Khera ◽  
Divya

The information technology (IT) industry in India has been facing a systemic issue of high attrition in the past few years, resulting in monetary and knowledge-based losses to the companies. The aim of this research is to develop a model to predict employee attrition and give organizations the opportunity to address any issues and improve retention. A predictive model was developed based on a supervised machine learning algorithm, the support vector machine (SVM). Archival employee data (consisting of 22 input features) were collected from the human resource databases of three IT companies in India, including each employee's employment status (the response variable) at the time of collection. Accuracy results from the confusion matrix for the SVM model showed an accuracy of 85 per cent. The results also show that the model performs better at predicting who will leave the firm than at predicting who will stay.
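How per-class performance is read off a confusion matrix can be illustrated with a hedged sketch; the 22 synthetic features, labels and linear kernel below are assumptions, not the study's archival HR data.

```python
import numpy as np
from sklearn.metrics import confusion_matrix
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

rng = np.random.default_rng(42)
# Hypothetical stand-in for 22 HR features; label 1 = employee left
n = 400
X = rng.normal(size=(n, 22))
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=n) > 0).astype(int)

Xtr, Xte, ytr, yte = train_test_split(X, y, test_size=0.25, random_state=0)
model = SVC(kernel="linear").fit(Xtr, ytr)

# Rows: true class (0 = stayed, 1 = left); columns: predicted class
cm = confusion_matrix(yte, model.predict(Xte))
accuracy = cm.trace() / cm.sum()
recall_leavers = cm[1, 1] / cm[1].sum()  # fraction of true leavers caught
recall_stayers = cm[0, 0] / cm[0].sum()  # fraction of true stayers caught
print(cm)
print(accuracy, recall_leavers, recall_stayers)
```

Comparing the two recall values, rather than overall accuracy alone, is what supports a claim like "the model is better at predicting who will leave".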


2020 ◽  
Vol 36 (6) ◽  
pp. 439-442
Author(s):  
Alissa Jell ◽  
Christina Kuttler ◽  
Daniel Ostler ◽  
Norbert Hüser

<b><i>Introduction:</i></b> Esophageal motility disorders have a severe impact on patients’ quality of life. While high-resolution manometry (HRM) is the gold standard in the diagnosis of esophageal motility disorders, intermittently occurring muscular deficiencies often remain undiscovered if they do not lead to an intense level of discomfort or cause suffering in patients. Ambulatory long-term HRM allows us to study the circadian (dys)function of the esophagus in a unique way. With the prolonged examination period of 24 h, however, there is an immense increase in data, which requires evaluation personnel and time that are not available in clinical routine. Artificial intelligence (AI) might contribute here by performing an autonomous analysis. <b><i>Methods:</i></b> On the basis of 40 previously performed and manually tagged long-term HRM recordings from patients with suspected temporary esophageal motility disorders, we implemented a supervised machine learning algorithm for automated swallow detection and classification. <b><i>Results:</i></b> By means of this algorithm, the evaluation time for a 24-h long-term HRM recording could be reduced from 3 days to a core evaluation time of 11 min for automated swallow detection and clustering, plus an additional 10–20 min of evaluation time, depending on the complexity and diversity of motility disorders in the examined patient. In 12.5% of patients with suspected esophageal motility disorders, AI-enabled long-term HRM was able to reveal new and relevant findings for subsequent therapy. <b><i>Conclusion:</i></b> This new approach paves the way for the clinical use of long-term HRM in patients with temporary esophageal motility disorders and might serve as an ideal and clinically relevant application of AI.
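The swallow-detection step can be caricatured as a threshold-and-merge pass over a pressure trace. This is a hypothetical sketch, not the authors' algorithm: the sampling rate, the 30 mmHg threshold and the 4 s merge window are invented for illustration.

```python
import numpy as np

def detect_swallows(pressure, fs=50, threshold=30.0, min_gap_s=4.0):
    """Flag candidate swallows as pressure excursions above a threshold,
    merging samples closer than min_gap_s into one event (simplified sketch)."""
    above = np.flatnonzero(pressure > threshold)
    if above.size == 0:
        return []
    gaps = np.diff(above) > min_gap_s * fs       # breaks between events
    starts = np.r_[above[0], above[1:][gaps]]    # first sample of each event
    return list(starts / fs)                     # onset times in seconds

# Synthetic 60 s trace: baseline noise plus two swallow-like pressure peaks
fs = 50
t = np.arange(0, 60, 1 / fs)
trace = 5 * np.random.default_rng(1).normal(size=t.size)
for onset in (10.0, 35.0):
    trace += 60 * np.exp(-((t - onset) ** 2) / 0.5)
print(detect_swallows(trace, fs=fs))
```

A real pipeline would then cluster the detected events by contraction pattern, which is the classification half the authors describe.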


Friction ◽  
2021 ◽  
Author(s):  
Vigneashwara Pandiyan ◽  
Josef Prost ◽  
Georg Vorlaufer ◽  
Markus Varga ◽  
Kilian Wasmer

Abstract. Functional surfaces in relative contact and motion are prone to wear and tear, resulting in loss of efficiency and performance of the workpieces/machines. Wear occurs in the form of adhesion, abrasion, scuffing, galling, and scoring between contacts. However, the rate of the wear phenomenon depends primarily on the physical properties and the surrounding environment. Monitoring the integrity of surfaces by offline inspections leads to significant wasted machine time. A potential alternative to the offline inspection currently practiced in industries is the analysis of sensor signatures capable of capturing the wear state and correlating it with the wear phenomenon, followed by in situ classification using a state-of-the-art machine learning (ML) algorithm. Though this technique is better than offline inspection, it possesses inherent disadvantages for training the ML models. Ideally, supervised training of ML models requires the datasets considered for the classification to be of equal weightage to avoid biasing. The collection of such a dataset is very cumbersome and expensive in practice, as in real industrial applications the malfunction period is minimal compared to normal operation. Furthermore, classification models would not recognize new wear phenomena outside the normal regime if they are unfamiliar. As a promising alternative, in this work, we propose a methodology able to differentiate the abnormal regimes, i.e., wear phenomenon regimes, from the normal regime. This is carried out by familiarizing the ML algorithms only with the distribution of the acoustic emission (AE) signals, captured using a microphone, related to the normal regime. As a result, the ML algorithms are able to detect whether some overlap exists with the learnt distributions when a new, unseen signal arrives. To achieve this goal, a generative convolutional neural network (CNN) architecture based on a variational autoencoder (VAE) is built and trained. During the validation procedure of the proposed CNN architecture, we were able to identify acoustic signals corresponding to the normal and abnormal wear regimes with accuracies of 97% and 80%, respectively. Hence, our approach shows very promising results for in situ, real-time condition monitoring and even wear prediction in tribological applications.
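The core idea, learning only the normal regime and flagging signals the model cannot reconstruct, does not depend on the specific network. As a simpler stand-in for the authors' CNN-based VAE, the sketch below uses PCA reconstruction error on synthetic signals; all signal shapes, dimensions and the threshold rule are assumptions.

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)

def make_signals(n, anomalous=False):
    """Toy AE-like signals: normal = one tone; abnormal adds a second tone."""
    t = np.linspace(0, 1, 128)
    sig = np.sin(2 * np.pi * 5 * t) * rng.uniform(0.8, 1.2, (n, 1))
    if anomalous:
        sig += np.sin(2 * np.pi * 23 * t) * rng.uniform(0.8, 1.2, (n, 1))
    return sig + 0.05 * rng.normal(size=(n, 128))

normal_train = make_signals(200)
model = PCA(n_components=5).fit(normal_train)  # learn the normal regime only

def recon_error(x):
    return np.linalg.norm(x - model.inverse_transform(model.transform(x)), axis=1)

# Threshold: a generous upper bound of errors seen on normal training data
thresh = recon_error(normal_train).max() * 1.1
flags_normal = recon_error(make_signals(50)) > thresh
flags_abnormal = recon_error(make_signals(50, anomalous=True)) > thresh
print(flags_normal.mean(), flags_abnormal.mean())
```

Because the abnormal component lies outside the subspace learnt from normal data, its reconstruction error is large; this is the same one-class logic the VAE applies in distribution space.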


Genes ◽  
2021 ◽  
Vol 12 (4) ◽  
pp. 527
Author(s):  
Eran Elhaik ◽  
Dan Graur

In the last 15 years or so, soft selective sweep mechanisms have been catapulted from a curiosity of little evolutionary importance to a ubiquitous mechanism claimed to explain most adaptive evolution and, in some cases, most evolution. This transformation was aided by a series of articles by Daniel Schrider and Andrew Kern. Within this series, a paper entitled “Soft sweeps are the dominant mode of adaptation in the human genome” (Schrider and Kern, Mol. Biol. Evolut. 2017, 34(8), 1863–1877) attracted a great deal of attention, in particular in conjunction with another paper (Kern and Hahn, Mol. Biol. Evolut. 2018, 35(6), 1366–1371), for purporting to discredit the Neutral Theory of Molecular Evolution (Kimura 1968). Here, we address an alleged novelty in Schrider and Kern’s paper, i.e., the claim that their study involved an artificial intelligence technique called supervised machine learning (SML). SML is predicated upon the existence of a training dataset in which the correspondence between the input and output is known empirically to be true. Curiously, Schrider and Kern did not possess a training dataset of genomic segments known a priori to have evolved either neutrally or through soft or hard selective sweeps. Thus, their claim of using SML is thoroughly and utterly misleading. In the absence of legitimate training datasets, Schrider and Kern used: (1) simulations that employ many manipulatable variables and (2) a system of data cherry-picking rivaling the worst excesses in the literature. These two factors, in addition to the lack of negative controls and the irreproducibility of their results due to incomplete methodological detail, lead us to conclude that all evolutionary inferences derived from so-called SML algorithms (e.g., S/HIC) should be taken with a huge shovel of salt.


Hypertension ◽  
2021 ◽  
Vol 78 (5) ◽  
pp. 1595-1604
Author(s):  
Fabrizio Buffolo ◽  
Jacopo Burrello ◽  
Alessio Burrello ◽  
Daniel Heinrich ◽  
Christian Adolf ◽  
...  

Primary aldosteronism (PA) is the cause of arterial hypertension in 4% to 6% of patients, and 30% of patients with PA are affected by unilateral and surgically curable forms. Current guidelines recommend screening for PA in ≈50% of patients with hypertension on the basis of individual factors, while some experts suggest screening all patients with hypertension. To define the risk of PA and tailor the diagnostic workup to the individual risk of each patient, we developed a conventional scoring system and supervised machine learning algorithms using a retrospective cohort of 4059 patients with hypertension. On the basis of 6 widely available parameters, we developed a numerical score and 308 machine learning-based models, selecting the one with the highest diagnostic performance. After validation, we obtained high predictive performance with our score (optimized sensitivity of 90.7% for PA and 92.3% for unilateral PA [UPA]). The machine learning-based model provided the highest performance, with an area under the curve of 0.834 for PA and 0.905 for diagnosis of UPA, with optimized sensitivity of 96.6% for PA, and 100.0% for UPA, at validation. The application of the predicting tools allowed the identification of a subgroup of patients with very low risk of PA (0.6% for both models) and null probability of having UPA. In conclusion, this score and the machine learning algorithm can accurately predict the individual pretest probability of PA in patients with hypertension and circumvent screening in up to 32.7% of patients using a machine learning-based model, without omitting patients with surgically curable UPA.
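The sensitivity-first design described above (accepting more false positives so that surgically curable cases are not missed) can be sketched by tuning a decision threshold for a target sensitivity. Everything below is a hypothetical stand-in: the six features, labels and model are simulated, not the cohort data.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(3)
# Hypothetical stand-in for 6 screening parameters; label 1 = has PA
n = 2000
X = rng.normal(size=(n, 6))
y = (X[:, 0] - X[:, 1] + rng.normal(scale=1.0, size=n) > 1.5).astype(int)

Xtr, Xte, ytr, yte = train_test_split(X, y, test_size=0.3, random_state=0)
model = LogisticRegression().fit(Xtr, ytr)

# Pick the decision threshold on training data so sensitivity is ~95 %
p_tr = model.predict_proba(Xtr)[:, 1]
pos_probs = np.sort(p_tr[ytr == 1])
thresh = pos_probs[int(0.05 * pos_probs.size)]  # 5th percentile of positives

p_te = model.predict_proba(Xte)[:, 1]
pred = (p_te >= thresh).astype(int)
sensitivity = pred[yte == 1].mean()
ruled_out = (pred == 0).mean()  # fraction who could skip further workup
print(sensitivity, ruled_out)
```

Lowering the threshold trades specificity for sensitivity; the fraction ruled out is the quantity the abstract reports as patients who can safely skip screening.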

