Can machine learning improve the model representation of TKE dissipation rate in the boundary layer for complex terrain?

2020 ◽  
Author(s):  
Nicola Bodini ◽  
Julie K. Lundquist ◽  
Mike Optis

Abstract. Current turbulence parameterizations in numerical weather prediction models at the mesoscale assume a local equilibrium between production and dissipation of turbulence. As this assumption does not hold at fine horizontal resolutions, improved ways to represent turbulent kinetic energy (TKE) dissipation rate (ε) are needed. Here, we use a 6-week data set of turbulence measurements from 184 sonic anemometers in complex terrain at the Perdigão field campaign to suggest improved representations of dissipation rate. First, we demonstrate that a widely used Mellor, Yamada, Nakanishi, and Niino (MYNN) parameterization of TKE dissipation rate leads to a large inaccuracy and bias in the representation of ε. Next, we assess the potential of machine-learning techniques to predict TKE dissipation rate from a set of atmospheric and terrain-related features. We train and test several machine-learning algorithms using the data at Perdigão, and we find that multivariate polynomial regressions and random forests can eliminate the bias MYNN currently shows in representing ε, while also reducing the average error by up to 30 %. Of all the variables included in the algorithms, TKE is the variable responsible for most of the variability of ε, and a strong positive correlation exists between the two. These results suggest further consideration of machine-learning techniques to enhance parameterizations of turbulence in numerical weather prediction models.
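For context, the local-equilibrium closure criticized here diagnoses the dissipation rate directly from TKE and a mixing length. Below is a sketch of the standard MYNN-type relation; the closure constant B1 and the master length scale L are implementation-dependent and are not reproduced from the paper:

```latex
% MYNN-type local-equilibrium dissipation (sketch; B_1 and L denote the
% closure constant and master length scale of a given implementation)
\varepsilon = \frac{q^{3}}{B_{1}\,L}, \qquad q = \sqrt{2\,\mathrm{TKE}}
```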

2020 ◽  
Vol 13 (9) ◽  
pp. 4271-4285
Author(s):  
Nicola Bodini ◽  
Julie K. Lundquist ◽  
Mike Optis

Abstract. Current turbulence parameterizations in numerical weather prediction models at the mesoscale assume a local equilibrium between production and dissipation of turbulence. As this assumption does not hold at fine horizontal resolutions, improved ways to represent turbulent kinetic energy (TKE) dissipation rate (ϵ) are needed. Here, we use a 6-week data set of turbulence measurements from 184 sonic anemometers in complex terrain at the Perdigão field campaign to suggest improved representations of dissipation rate. First, we demonstrate that the widely used Mellor, Yamada, Nakanishi, and Niino (MYNN) parameterization of TKE dissipation rate leads to a large inaccuracy and bias in the representation of ϵ. Next, we assess the potential of machine-learning techniques to predict TKE dissipation rate from a set of atmospheric and terrain-related features. We train and test several machine-learning algorithms using the data at Perdigão, and we find that the models eliminate the bias MYNN currently shows in representing ϵ, while also reducing the average error by up to almost 40 %. Of all the variables included in the algorithms, TKE is the variable responsible for most of the variability of ϵ, and a strong positive correlation exists between the two. These results suggest further consideration of machine-learning techniques to enhance parameterizations of turbulence in numerical weather prediction models.
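As an illustration of the kind of regression described above, here is a minimal sketch of a random-forest model of the dissipation rate using scikit-learn; the feature names, their ranges, and the synthetic target are assumptions for demonstration only, not data or settings from the study:

```python
# Hedged sketch: random-forest regression of log10(epsilon) on atmospheric
# and terrain features, in the spirit of the approach described above.
# All features and the synthetic target below are illustrative assumptions.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_absolute_error

rng = np.random.default_rng(0)
n = 5000
X = np.column_stack([
    rng.lognormal(mean=-1.0, sigma=1.0, size=n),   # TKE (m^2 s^-2)
    rng.uniform(1.0, 15.0, size=n),                # wind speed (m s^-1)
    rng.uniform(-0.1, 0.1, size=n),                # terrain slope proxy
    rng.uniform(10.0, 100.0, size=n),              # height above ground (m)
])
# Synthetic target: epsilon scaling roughly with TKE^(3/2), plus noise
y = np.log10(X[:, 0] ** 1.5 / 50.0) + 0.2 * rng.standard_normal(n)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
rf = RandomForestRegressor(n_estimators=100, random_state=0)
rf.fit(X_tr, y_tr)
print("MAE (log10 eps):", mean_absolute_error(y_te, rf.predict(X_te)))
print("feature importances:", rf.feature_importances_)
```

In such a model, `feature_importances_` is one way to see that TKE accounts for most of the explained variability, as the abstract reports.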


2020 ◽  
Vol 16 ◽  
Author(s):  
Nitigya Sambyal ◽  
Poonam Saini ◽  
Rupali Syal

Background and Introduction: Diabetes mellitus is a metabolic disorder that has emerged as a serious public health issue worldwide. According to the World Health Organization (WHO), without interventions, the number of diabetes cases is expected to reach at least 629 million by 2045. Uncontrolled diabetes gradually leads to progressive damage to the eyes, heart, kidneys, blood vessels and nerves. Method: The paper presents a critical review of existing statistical and Artificial Intelligence (AI) based machine learning techniques with respect to DM complications, namely retinopathy, neuropathy and nephropathy. The statistical and machine learning analytic techniques are used to structure the subsequent content review. Result: It has been inferred that statistical analysis can help only in inferential and descriptive analysis, whereas AI-based machine learning models can also provide actionable prediction models for faster and more accurate diagnosis of complications associated with DM. Conclusion: The integration of AI-based analytics techniques, such as machine learning and deep learning, in clinical medicine will result in improved disease management through faster disease detection and cost reduction for disease treatment.
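As a hedged illustration of the contrast the review draws between statistical and ML analytics, the sketch below fits a logistic-regression baseline and a random-forest classifier to synthetic clinical features for a hypothetical retinopathy outcome; the variables and data are placeholders, not taken from the reviewed studies:

```python
# Hedged sketch: a statistical baseline (logistic regression) vs. an ML model
# (random forest) for predicting a diabetes complication from routine clinical
# features. The features and outcome below are synthetic placeholders.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(1)
n = 2000
hba1c = rng.normal(7.5, 1.5, n)          # glycated hemoglobin (%)
duration = rng.uniform(0, 30, n)         # years since diagnosis
sbp = rng.normal(135, 15, n)             # systolic blood pressure (mmHg)
X = np.column_stack([hba1c, duration, sbp])
# Synthetic outcome: risk rises with HbA1c and disease duration
p = 1 / (1 + np.exp(-(0.6 * (hba1c - 7.5) + 0.08 * duration - 1.5)))
y = rng.binomial(1, p)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=1)
for model in (LogisticRegression(max_iter=1000),
              RandomForestClassifier(n_estimators=200, random_state=1)):
    model.fit(X_tr, y_tr)
    auc = roc_auc_score(y_te, model.predict_proba(X_te)[:, 1])
    print(type(model).__name__, "AUC:", round(auc, 3))
```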


Atmosphere ◽  
2021 ◽  
Vol 12 (1) ◽  
pp. 89
Author(s):  
Harel B. Muskatel ◽  
Ulrich Blahak ◽  
Pavel Khain ◽  
Yoav Levi ◽  
Qiang Fu

Parametrization of radiation transfer through clouds is an important factor in the ability of Numerical Weather Prediction models to correctly describe the weather evolution. Here we present a practical parameterization of both liquid droplet and ice optical properties in the longwave and shortwave radiation. An advanced spectral averaging method is used to calculate the extinction coefficient, single scattering albedo, forward scattered fraction and asymmetry factor (bext, ω, f, g), taking into account the nonlinear effects of light attenuation in the spectral averaging. An ensemble of particle size distributions was used for the ice optical property calculations, which enables the effective size range to be extended up to 570 μm and thus be applicable to larger hydrometeor categories such as snow, graupel, and rain. The new parameterization was applied both in the COSMO limited-area model and in the ICON global model and was evaluated by using the COSMO model to simulate stratiform ice and water clouds. Numerical weather prediction models usually determine the asymmetry factor as a function of effective size. For the first time in an operational numerical weather prediction (NWP) model, the asymmetry factor is parametrized as a function of aspect ratio. The method is generalized and is available online to be readily applied to any optical properties dataset and spectral intervals of a wide range of radiation transfer models and applications.
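The "nonlinear effects of light attenuation in the spectral averaging" can be illustrated with a toy calculation: averaging the band transmission and then inverting Beer's law yields a different effective extinction than a simple spectral mean. The spectral grid, weights, and path length below are arbitrary assumptions, not values or the actual method from the paper:

```python
# Hedged sketch: transmission-weighted (nonlinear) spectral averaging of an
# extinction coefficient over a band vs. a simple weighted mean. All values
# are toy placeholders.
import numpy as np

wavelength = np.linspace(0.3, 3.0, 200)             # micrometres (toy band)
b_ext = 0.05 + 0.04 * np.sin(4 * wavelength)        # spectral extinction (1/m), toy
weight = np.exp(-(wavelength - 1.0) ** 2)            # spectral source weighting, toy
weight /= weight.sum()
u = 500.0                                             # optical path length (m), toy

# Linear average ignores how attenuation saturates within the band
b_linear = np.sum(weight * b_ext)

# Nonlinear average: average the band transmission, then invert Beer's law
T_band = np.sum(weight * np.exp(-b_ext * u))
b_nonlinear = -np.log(T_band) / u

print(f"linear mean b_ext:   {b_linear:.4f} 1/m")
print(f"nonlinear-effective: {b_nonlinear:.4f} 1/m")
```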


2005 ◽  
Vol 32 (14-15) ◽  
pp. 1841-1863 ◽  
Author(s):  
Mark S. Roulston ◽  
Jerome Ellepola ◽  
Jost von Hardenberg ◽  
Leonard A. Smith

2020 ◽  
Author(s):  
Georgios Kantidakis ◽  
Hein Putter ◽  
Carlo Lancia ◽  
Jacob de Boer ◽  
Andries E Braat ◽  
...  

Abstract Background: Predicting survival of recipients after liver transplantation is regarded as one of the most important challenges in contemporary medicine. Hence, improving on current prediction models is of great interest. Nowadays, there is a strong discussion in the medical field about machine learning (ML) and whether it has greater potential than traditional regression models when dealing with complex data. Criticism of ML is related to unsuitable performance measures and lack of interpretability, which is important for clinicians. Methods: In this paper, ML techniques such as random forests and neural networks are applied to a large data set of 62,294 patients from the United States, with 97 predictors selected on clinical/statistical grounds from more than 600, to predict survival from transplantation. Of particular interest is also the identification of potential risk factors. A comparison is performed between 3 different Cox models (with all variables, backward selection and LASSO) and 3 machine learning techniques: a random survival forest and 2 partial logistic artificial neural networks (PLANNs). For PLANNs, novel extensions to their original specification are tested. Emphasis is given to the advantages and pitfalls of each method and to the interpretability of the ML techniques. Results: Well-established predictive measures are employed from the survival field (C-index, Brier score and Integrated Brier Score) and the strongest prognostic factors are identified for each model. The clinical endpoint is overall graft survival, defined as the time between transplantation and the date of graft failure or death. The random survival forest shows slightly better predictive performance than Cox models based on the C-index. Neural networks show better performance than both Cox models and random survival forest based on the Integrated Brier Score at 10 years. Conclusion: In this work, it is shown that machine learning techniques can be a useful tool for both prediction and interpretation in the survival context. From the ML techniques examined here, PLANN with 1 hidden layer predicts survival probabilities the most accurately, being as calibrated as the Cox model with all variables.
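A minimal sketch of the kind of comparison described (a Cox proportional hazards model vs. a random survival forest, scored by concordance index), written with scikit-survival; the synthetic covariates and outcomes stand in for the transplant registry data and are not from the study:

```python
# Hedged sketch: Cox PH vs. random survival forest compared by C-index.
# Covariate names, the synthetic survival times, and the censoring scheme
# are illustrative assumptions only.
import numpy as np
from sklearn.model_selection import train_test_split
from sksurv.util import Surv
from sksurv.linear_model import CoxPHSurvivalAnalysis
from sksurv.ensemble import RandomSurvivalForest
from sksurv.metrics import concordance_index_censored

rng = np.random.default_rng(2)
n = 1000
X = np.column_stack([
    rng.normal(50, 12, n),      # recipient age (years), hypothetical
    rng.normal(27, 5, n),       # donor BMI, hypothetical
    rng.uniform(0, 12, n),      # cold ischemia time (hours), hypothetical
])
risk = 0.03 * (X[:, 0] - 50) + 0.1 * (X[:, 2] - 6)
time = rng.exponential(scale=np.exp(-risk) * 8.0)      # years to graft failure
censor_time = rng.uniform(0, 10, n)
event = time <= censor_time
y = Surv.from_arrays(event=event, time=np.minimum(time, censor_time))

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=2)
for model in (CoxPHSurvivalAnalysis(),
              RandomSurvivalForest(n_estimators=200, min_samples_leaf=15,
                                   random_state=2)):
    model.fit(X_tr, y_tr)
    cindex = concordance_index_censored(y_te["event"], y_te["time"],
                                        model.predict(X_te))[0]
    print(type(model).__name__, "C-index:", round(cindex, 3))
```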

