probability distribution functions
Recently Published Documents


TOTAL DOCUMENTS: 371 (FIVE YEARS: 70)
H-INDEX: 32 (FIVE YEARS: 2)

2021
Author(s): Thibault Vaillant de Guélis, Gérard Ancellet, Anne Garnier, Laurent C.-Labonnote, Jacques Pelon, ...

Abstract. The features detected in monolayer atmospheric columns sounded by the Cloud and Aerosol Lidar with Orthogonal Polarization (CALIOP) and classified as cloud or aerosol layers by the CALIOP version 4 (V4) cloud and aerosol discrimination (CAD) algorithm are reassessed using perfectly collocated brightness temperatures measured by the Imaging Infrared Radiometer (IIR) onboard the same satellite. Using the IIR's three-wavelength measurements of layers that are confidently classified by the CALIOP CAD algorithm, we calculate two-dimensional (2-D) probability distribution functions (PDFs) of IIR brightness temperature differences (BTDs) for different cloud and aerosol types. We then compare these PDFs with 1-D radiative transfer simulations for ice and water clouds and for dust and marine aerosols. Using these IIR 2-D BTD signature PDFs, we develop and deploy a new IIR-based CAD algorithm and compare the classifications obtained to the results reported by the CALIOP-only V4 CAD algorithm. IIR observations are shown to identify clouds with good accuracy: the IIR cloud identifications agree very well (88 %) with layers classified as confident clouds by the V4 CAD algorithm. More importantly, simultaneous use of IIR information reduces the ambiguity in a notable fraction of "not confident" V4 cloud classifications: 28 % and 14 % of the ambiguous V4 cloud classifications are confirmed by the IIR observations in the tropics and in the midlatitudes, respectively. IIR observations are of relatively little help in deriving high-confidence classifications for most aerosols, as the low altitudes and small optical depths of aerosol layers yield IIR signatures similar to those of clear skies. However, misclassifications of aerosol layers, such as dense dust or elevated smoke layers, by the V4 CAD algorithm can be corrected to cloud layer classifications by including IIR information: 10 %, 16 %, and 6 % of the ambiguous V4 dust, polluted dust, and tropospheric elevated smoke classifications, respectively, are found by the IIR measurements to be misclassified cloud layers.
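As a rough illustration of the 2-D BTD signature step described in this abstract (a minimal sketch, not the authors' code: the channel pairing, bin grid, and synthetic inputs are assumptions), a density-normalized 2-D histogram can be built as follows.

```python
import numpy as np

# Synthetic brightness temperatures (K) for the three IIR channels of a set of
# layers confidently classified as one type (e.g. ice cloud); real inputs would
# come from the collocated IIR/CALIOP data described above.
rng = np.random.default_rng(0)
bt_08, bt_10, bt_12 = (rng.normal(mu, 3.0, 5000) for mu in (235.0, 238.0, 240.0))

# Two brightness temperature differences (BTDs) form the 2-D signature.
btd_12_10 = bt_12 - bt_10   # e.g. BTD(12.05 um - 10.6 um)
btd_08_10 = bt_08 - bt_10   # e.g. BTD(8.65 um - 10.6 um)

# 2-D probability distribution function: a density-normalized histogram on a
# fixed BTD grid (bin range and width are illustrative choices).
edges = np.linspace(-10.0, 10.0, 81)
pdf, _, _ = np.histogram2d(btd_12_10, btd_08_10, bins=[edges, edges], density=True)

# Repeating this per class (ice/water cloud, dust, marine aerosol, ...) gives the
# per-class PDFs against which a new layer's BTD pair can be scored.
```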


2021 · Vol 9 (12) · pp. 1322
Author(s): Aikaterini P. Kyprioti, Ehsan Adeli, Alexandros A. Taflanidis, Joannes J. Westerink, Hendrik L. Tolman

During landfalling tropical storms, predictions of the expected storm surge are critical for guiding evacuation and emergency response/preparedness decisions at both regional and national levels. Forecast errors related to storm track, intensity, and size affect these predictions and should therefore be explicitly accounted for. The Probabilistic tropical storm Surge (P-Surge) model is the established approach of the National Weather Service (NWS) for achieving this objective. Historical forecast errors are used to specify probability distribution functions for different storm features, ultimately quantifying the uncertainty in the National Hurricane Center (NHC) advisories. Surge statistics are estimated from the predictions across a storm ensemble generated by sampling features from these probability distribution functions. P-Surge currently relies on a full factorial sampling scheme to create this storm ensemble, combining representative values for each of the storm features. This work investigates an alternative formulation that can be viewed as a seamless extension of the current NHC framework, adopting a quasi-Monte Carlo (QMC) sampling implementation with the ultimate goal of reducing the computational burden and providing surge predictions with the same degree of statistical reliability while using a smaller number of sample storms. The definition of forecast errors adopted here directly follows published NWS practices, and different uncertainty levels are considered in the examined case studies in order to offer a comprehensive validation. This validation, covering several historical storms, clearly demonstrates the advantages QMC can offer.
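A minimal sketch of the quasi-Monte Carlo alternative discussed above, assuming a scrambled Sobol sequence and placeholder Gaussian forecast-error distributions for four storm features (the operational error statistics and feature set follow published NWS practice and are not reproduced here).

```python
import numpy as np
from scipy.stats import norm, qmc

# Placeholder forecast-error magnitudes for four storm features
# (cross-track, along-track, intensity, size); the operational P-Surge values
# follow the published NWS/NHC error statistics and grow with lead time.
error_scales = np.array([35.0, 40.0, 7.0, 10.0])   # illustrative standard deviations

# Quasi-Monte Carlo: a scrambled Sobol sequence in [0, 1]^4 mapped through the
# inverse normal CDF gives a low-discrepancy storm-perturbation ensemble.
sampler = qmc.Sobol(d=4, scramble=True, seed=1)
u = sampler.random_base2(m=7)                       # 2**7 = 128 sample storms
perturbations = norm.ppf(u) * error_scales          # one row per sample storm

# Surge statistics (e.g. exceedance probabilities at each coastal node) are then
# estimated over the simulations driven by these perturbed storms, as with the
# full-factorial ensemble but with far fewer members.
print(perturbations.shape)                          # (128, 4)
```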


2021 · pp. 1-58
Author(s): Tianbao Zhao, Aiguo Dai

Abstract. Drought is projected to become more severe and widespread as global warming continues in the 21st century, but hydroclimatic changes and their drivers are not well examined in the latest projections from Phase 6 of the Coupled Model Intercomparison Project (CMIP6). Here, precipitation (P), evapotranspiration (E), soil moisture (SM), and runoff (R) from 25 CMIP6 models, together with the self-calibrated Palmer Drought Severity Index with Penman-Monteith potential evapotranspiration (scPDSIpm), are analyzed to quantify hydroclimatic and drought changes in the 21st century and their underlying causes. Results confirm consistent drying in these hydroclimatic metrics across most of the Americas (including the Amazon), Europe and the Mediterranean region, southern Africa, and Australia, although the magnitude differs among metrics, with the drying being more severe and widespread in surface SM than in total SM. Global drought frequency based on surface SM and scPDSIpm increases by ~25%–100% (50%–200%) under the SSP2-4.5 (SSP5-8.5) scenario in the 21st century, together with large increases in drought duration and area, which result from a decrease in the mean and a flattening of the probability distribution functions of SM and scPDSIpm; the R-based drought changes are relatively small. Changes in both P and E contribute to the SM change, whereas the scPDSIpm decreases result from ubiquitous PET increases and P decreases over subtropical areas. The R changes are determined primarily by P changes, while the PET change explains most of the E increase. Inter-model spreads in surface SM and R changes are large, leading to large uncertainties in the drought projections.
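As an illustration of how a soil-moisture-based drought frequency shifts when the PDF's mean decreases and its spread widens, the toy calculation below uses a percentile-of-baseline drought threshold; the threshold choice and synthetic series are assumptions, not the paper's exact definitions or CMIP6 data.

```python
import numpy as np

# Illustrative monthly soil-moisture anomalies: a historical baseline plus a
# projected series with a lower mean and a wider spread (a flatter PDF),
# mimicking the kind of 21st-century drying described above.
rng = np.random.default_rng(42)
historical = rng.normal(0.0, 1.0, 12 * 50)    # e.g. 50 years of monthly values
projected  = rng.normal(-0.4, 1.3, 12 * 80)   # e.g. 80 years of monthly values

# One common drought definition: a month counts as drought when soil moisture
# falls below the 10th percentile of the historical baseline.
threshold = np.percentile(historical, 10.0)
freq_hist = np.mean(historical < threshold)   # ~0.10 by construction
freq_proj = np.mean(projected < threshold)    # larger: drought becomes more frequent
print(f"baseline frequency {freq_hist:.2f}, projected frequency {freq_proj:.2f}")
```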


Vibration · 2021 · Vol 4 (4) · pp. 787-804
Author(s): Zahra Sotoudeh, Tyler Lyman, Leslie Montes Lucano, Natallia Urieva

In this paper, we use Monte Carlo simulation to study aeroelastic behavior caused by non-random uncertain free-stream velocity. For sampling, we use the interval process method, in which each family of samples is defined by a correlation function and upper and lower bounds. With this sampling method there is no need to construct precise probability distribution functions, which makes the approach suitable for practical engineering applications. We study the aeroelastic behavior of an airfoil and of a high-aspect-ratio wing.
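The sketch below is only a loose illustration of the general idea of drawing bounded, time-correlated free-stream velocity samples without assuming a precise PDF; the correlation function, bounds, and rescaling are placeholder choices, not the paper's interval process construction.

```python
import numpy as np

# Rough, assumed illustration: free-stream velocity histories that are
# correlated in time and confined to prescribed lower/upper bounds, without
# committing to a precise probability distribution function.
def bounded_correlated_samples(t, lower, upper, corr_length, n_samples, seed=0):
    rng = np.random.default_rng(seed)
    # Exponential correlation function between time points (placeholder choice).
    corr = np.exp(-np.abs(t[:, None] - t[None, :]) / corr_length)
    chol = np.linalg.cholesky(corr + 1e-10 * np.eye(len(t)))
    z = chol @ rng.standard_normal((len(t), n_samples))
    # Rescale each correlated history into the interval [lower, upper].
    z = (z - z.min(axis=0)) / (z.max(axis=0) - z.min(axis=0))
    return lower + (upper - lower) * z        # shape: (len(t), n_samples)

t = np.linspace(0.0, 10.0, 200)               # time grid (s), illustrative
u_inf = bounded_correlated_samples(t, lower=20.0, upper=30.0,
                                   corr_length=2.0, n_samples=50)
# Each column is one candidate free-stream velocity history to feed into a
# Monte Carlo aeroelastic simulation of the airfoil or high-aspect-ratio wing.
```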


Author(s): Amr Khaled Khamees, Almoataz Y. Abdelaziz, Ziad M. Ali, Mosleh M. Alharthi, Sherif S.M. Ghoneim, ...

2021 · Vol 507 (4) · pp. 4764-4778
Author(s): Christopher T Garling, Annika H G Peter, Christopher S Kochanek, David J Sand, Denija Crnojević

ABSTRACT We present results from a resolved stellar population search for dwarf satellite galaxies of six nearby (D < 5 Mpc), sub-Milky Way mass hosts using deep (m ∼ 27 mag) optical imaging from the Large Binocular Telescope. We perform image simulations to quantify our detection efficiency for dwarfs over a large range in luminosity and size, and develop a fast catalogue-based emulator that includes a treatment of unresolved photometric blending. We discover no new dwarf satellites, but we recover two previously known dwarfs (DDO 113 and LV J1228+4358) with MV < −12 that lie in our survey volume. We preview a new theoretical framework to predict satellite luminosity functions using analytical probability distribution functions and apply it to our sample, finding that we predict one fewer classical dwarf and one more faint dwarf (MV ∼ −7.5) than we find in our observational sample (i.e. the observational sample is slightly top-heavy). However, the overall number of dwarfs in the observational sample (2) is in good agreement with the theoretical expectations. Interestingly, DDO 113 shows signs of environmental quenching and LV J1228+4358 is tidally disrupting, suggesting that low-mass hosts may affect their satellites more severely than previously believed.
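As a loose, assumed sketch of how a predicted luminosity function and a detection-efficiency curve combine into an expected satellite count (all functional forms and numbers below are placeholders, not the paper's framework):

```python
import numpy as np

# Hedged sketch of the bookkeeping behind comparing a predicted satellite
# luminosity function with observations: expected detections per magnitude bin
# are the predicted counts times the survey detection efficiency.
mv_bins = np.arange(-14.0, -5.0, 1.0)                    # absolute V magnitudes
predicted_counts = 0.3 * np.exp(0.25 * (mv_bins + 14))   # toy rising faint-end LF
detection_eff = 1.0 / (1.0 + np.exp(mv_bins + 7.0))      # efficiency drops for faint dwarfs

expected = predicted_counts * detection_eff
print(f"expected detectable satellites: {expected.sum():.1f}")

# Treating the counts per bin as Poisson draws then lets the two observed
# satellites be compared against this expectation bin by bin.
```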


Electricity · 2021 · Vol 2 (3) · pp. 300-315
Author(s): Marco Bosi, Albert Miguel Sanchez, Francisco Javier Pajares, Lorenzo Peretto

This paper presents a study of, and proposes a new methodology for, analyzing, evaluating, and reducing the overall uncertainty of instrumentation for EMC measurements. For the scope of this work, the front end of a commercial EMI receiver is chosen, and variations due to tolerances, temperature, and the frequency response of the system are evaluated. The paper illustrates in detail how to treat each block of the model by analyzing each discrete component and how to evaluate its influence on the measurand. Since a model can have hundreds or even thousands of parameters, the probability distribution functions (PDFs) of some variables may be unknown. A correct design therefore requires a method that can quickly and easily obtain the measurement uncertainty despite the large number of variables, so that the influence of each component on the measurand can then be evaluated. In this way, it becomes possible to identify which discrete components have the most influence on the measurand, set the maximum allowed tolerances accordingly, and design a cost-effective solution. Furthermore, this work presents a methodology that can easily be extended and applied to estimate the uncertainty for electromagnetic interference, energy storage systems (ESS), energy production, electric machines, electric transport, and power plants in general.
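A minimal Monte Carlo sketch of the kind of block-by-block uncertainty propagation described above, assuming a toy front-end chain of three blocks with placeholder tolerances (the actual component models and values are those of the paper and are not reproduced here):

```python
import numpy as np

# The receiver front end is reduced here to a chain of gain/attenuation blocks,
# each with a tolerance. Component values, tolerances, and distributions are
# illustrative placeholders, not those of the commercial EMI receiver studied.
rng = np.random.default_rng(7)
n = 100_000                                           # Monte Carlo samples

attenuator_db = -10.0 + rng.uniform(-0.3, 0.3, n)     # +/-0.3 dB tolerance
amplifier_db  = rng.normal(20.0, 0.2, n)              # 0.2 dB spread from temperature
filter_db     = -1.0 + rng.uniform(-0.5, 0.1, n)      # asymmetric insertion-loss spread

chain_gain_db = attenuator_db + amplifier_db + filter_db
print(f"mean gain {chain_gain_db.mean():.2f} dB, std {chain_gain_db.std():.3f} dB")

# Ranking the variance contributed by each block (e.g. by freezing one input at
# its nominal value at a time) shows where tighter tolerances pay off.
```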


2021 · Vol 9 (3) · pp. 587-606
Author(s): Saeid Pourmand, Ashkan Shabbak, Mojtaba Ganjali

Due to the extensive use of high-dimensional data and its application in a wide range of scientific fields of research, dimensionality reduction has become a major part of the preprocessing step in machine learning. Feature selection is one procedure for reducing dimensionality: instead of using the whole set of features, a subset is selected to be used in the learning model. Feature selection (FS) methods are divided into three main categories: filters, wrappers, and embedded approaches. Filter methods depend only on the characteristics of the data and do not rely on the learning model at hand. Divergence functions, as measures of the difference between probability distribution functions, can be used as filter methods for feature selection. In this paper, the performance of a few divergence functions, such as the Jensen-Shannon (JS) divergence and the Exponential divergence (EXP), is compared with that of some of the best-known filter feature selection methods, such as Information Gain (IG) and Chi-Squared (CHI). The comparison is based on the accuracy and F1-score of classification models trained after applying these feature selection methods.
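A minimal sketch of a Jensen-Shannon-based filter score, assuming binary classes and histogram estimates of the class-conditional distributions (the binning and smoothing choices are illustrative):

```python
import numpy as np
from scipy.spatial.distance import jensenshannon

def js_feature_score(x, y, bins=20):
    """Score one feature by the Jensen-Shannon distance between its
    class-conditional histograms (binary classes assumed here)."""
    edges = np.histogram_bin_edges(x, bins=bins)
    p, _ = np.histogram(x[y == 0], bins=edges, density=True)
    q, _ = np.histogram(x[y == 1], bins=edges, density=True)
    return jensenshannon(p + 1e-12, q + 1e-12)   # larger = more class-discriminative

# Toy data: feature 0 separates the two classes, feature 1 is pure noise.
rng = np.random.default_rng(3)
y = rng.integers(0, 2, 1000)
X = np.column_stack([rng.normal(y * 1.5, 1.0), rng.normal(0.0, 1.0, 1000)])

scores = [js_feature_score(X[:, j], y) for j in range(X.shape[1])]
ranking = np.argsort(scores)[::-1]   # filter step: keep the top-ranked features
print(np.round(scores, 3), ranking)
```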

