Auto-extraction of stratified interface in the underground space based on Bayesian detection algorithm with statistical fitting of probability density by actual data

2019 ◽  
Vol 46 ◽  
pp. 101430
Author(s):  
Shanquan Gui ◽  
Renzhou Gui
2019 ◽  
Vol 892 ◽  
pp. 284-291
Author(s):  
Ahmed S.A. Badawi ◽  
Nurul Fadzlin Hasbullah ◽  
Siti Hajar Yusoff ◽  
Sheroz Khan ◽  
Aisha Hashim ◽  
...  

The need for clean and renewable energy, the power shortage in the Gaza Strip, and the scarcity of wind energy studies conducted in Palestine motivate this paper. Probability density functions are commonly used to represent wind speed frequency distributions when evaluating the wind energy potential of a specific area, and selecting a suitable probability density function reduces the wind power estimation error. This study analyzes the climatology of the wind profile over the State of Palestine. A Weibull probability density function is fitted to average daily wind speed data recorded in the Gaza Strip over 10 years (January 1996 to December 2005), and the fitted distribution is used to characterize the potential for wind energy conversion. The Weibull parameters, determined in the authors' previous study using seven numerical methods, are a shape factor of 1.7848 and a scale factor of 4.3642 m/s; the average wind speed over the 10 years of actual data is 2.95 m/s. The behavior of wind velocity described by the fitted probability density function indicates that wind energy can be produced in the Gaza Strip.
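As a minimal sketch of the distribution described above, the code below evaluates the two-parameter Weibull density f(v) = (k/c)(v/c)^(k-1) exp(-(v/c)^k) with the reported shape factor k = 1.7848 and scale factor c = 4.3642 m/s, and computes the Weibull-implied mean speed c·Γ(1 + 1/k). The numerical check is an addition for illustration, not part of the paper.

```python
import math

def weibull_pdf(v, k=1.7848, c=4.3642):
    """Two-parameter Weibull density for wind speed v (m/s)."""
    if v < 0:
        return 0.0
    return (k / c) * (v / c) ** (k - 1) * math.exp(-((v / c) ** k))

# Theoretical mean wind speed implied by the fitted parameters:
# E[V] = c * Gamma(1 + 1/k)
mean_speed = 4.3642 * math.gamma(1 + 1 / 1.7848)

# Crude Riemann-sum check that the density integrates to ~1
# over 0..40 m/s (the tail beyond that is negligible).
dv = 0.01
total = sum(weibull_pdf(i * dv) * dv for i in range(4000))
```

The Weibull-implied mean c·Γ(1 + 1/k) can then be compared against the 10-year sample average reported in the abstract.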


2019 ◽  
Vol 28 (3) ◽  
pp. 1257-1267 ◽  
Author(s):  
Priya Kucheria ◽  
McKay Moore Sohlberg ◽  
Jason Prideaux ◽  
Stephen Fickas

Purpose: An important predictor of postsecondary academic success is an individual's reading comprehension skills. Postsecondary readers apply a wide range of behavioral strategies to process text for learning purposes. Currently, no tools exist to detect a reader's use of strategies. The primary aim of this study was to develop Read, Understand, Learn, & Excel, an automated tool designed to detect reading strategy use and explore its accuracy in detecting strategies when students read digital, expository text.
Method: An iterative design was used to develop the computer algorithm for detecting 9 reading strategies. Twelve undergraduate students read 2 expository texts that were equated for length and complexity. A human observer documented the strategies employed by each reader, whereas the computer used digital sequences to detect the same strategies. Data were then coded and analyzed to determine agreement between the 2 sources of strategy detection (i.e., the computer and the observer).
Results: Agreement between the computer- and human-coded strategies was 75% or higher for 6 out of the 9 strategies. Only 3 out of the 9 strategies (previewing content, evaluating the amount of remaining text, and periodic review and/or iterative summarizing) had less than 60% agreement.
Conclusion: Read, Understand, Learn, & Excel provides proof of concept that a reader's approach to engaging with academic text can be objectively and automatically captured. Clinical implications and suggestions to improve the sensitivity of the code are discussed.
Supplemental Material: https://doi.org/10.23641/asha.8204786
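The agreement statistic reported above is a simple percent agreement between the two coding sources. The sketch below shows one way such a figure can be computed per strategy; the detection vectors are hypothetical, invented purely for illustration, and are not the study's data.

```python
def percent_agreement(computer_codes, human_codes):
    """Percentage of readers for whom the computer and the human
    observer coded a given strategy the same way (present/absent)."""
    assert len(computer_codes) == len(human_codes)
    matches = sum(c == h for c, h in zip(computer_codes, human_codes))
    return 100.0 * matches / len(computer_codes)

# Hypothetical detections of one strategy across 12 readers
# (1 = strategy detected, 0 = not detected).
computer = [1, 0, 1, 1, 0, 1, 1, 0, 1, 1, 0, 1]
human    = [1, 0, 1, 0, 0, 1, 1, 0, 1, 1, 1, 1]
agreement = percent_agreement(computer, human)  # 10/12, i.e. ~83.3%
```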


2020 ◽  
pp. 9-13
Author(s):  
A. V. Lapko ◽  
V. A. Lapko

An original technique is justified for the fast selection of kernel-function bandwidths in a nonparametric estimate of a multidimensional probability density of the Rosenblatt–Parzen type. Compared with traditional approaches, the proposed method significantly increases the computational efficiency of the bandwidth optimization procedure for kernel probability density estimates under large-volume statistical data. The approach is based on an analysis of the formula for the optimal bandwidths of a multidimensional kernel probability density estimate. Dependencies are found between a nonlinear functional of the probability density and its derivatives, up to the second order inclusive, and the antikurtosis coefficients of the random variables. The bandwidth for each random variable is represented as the product of an undefined parameter and that variable's mean square deviation. The influence of the error in restoring the established functional dependencies on the approximation properties of the kernel probability density estimate is determined. The results are implemented as a method for the synthesis and analysis of fast bandwidth selection for a kernel estimate of the two-dimensional probability density of independent random variables, using data on the quantitative characteristics of a family of lognormal distribution laws.
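For context, the sketch below implements the classical two-dimensional Rosenblatt–Parzen estimator whose bandwidths the abstract's method optimizes, using the bandwidth parameterization described above: each coordinate's bandwidth is the product of a single undefined parameter and that variable's standard deviation. The choice of that parameter here (a Scott-style rule of thumb) is an assumption for illustration, not the authors' fast selection technique.

```python
import math
import random

def parzen_density_2d(x, sample, lam):
    """Rosenblatt–Parzen estimate of a 2-D density at point x,
    with a Gaussian product kernel and bandwidths h_j = lam * sigma_j."""
    n = len(sample)
    sigma = []
    for j in range(2):
        col = [p[j] for p in sample]
        mu = sum(col) / n
        sigma.append(math.sqrt(sum((v - mu) ** 2 for v in col) / n))
    h = [lam * s for s in sigma]

    def gauss(u):
        return math.exp(-0.5 * u * u) / math.sqrt(2 * math.pi)

    total = sum(gauss((x[0] - p[0]) / h[0]) * gauss((x[1] - p[1]) / h[1])
                for p in sample)
    return total / (n * h[0] * h[1])

# Synthetic check on a standard bivariate normal sample, where the
# true density at the origin is 1 / (2*pi) ~= 0.159.
random.seed(0)
data = [(random.gauss(0, 1), random.gauss(0, 1)) for _ in range(500)]
est = parzen_density_2d((0.0, 0.0), data, lam=500 ** (-1 / 6))
```

The fast selection problem addressed in the abstract amounts to choosing `lam` without a full cross-validation sweep, which is what makes the single-parameter bandwidth form computationally attractive.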


1999 ◽  
Vol 5 (2) ◽  
pp. 29-35
Author(s):  
Hiroyuki Ikuse ◽  
Shuji Hashimoto ◽  
Masafumi Yamamoto ◽  
Katsuhide Matsumura
