log likelihood
Recently Published Documents

TOTAL DOCUMENTS: 703 (five years: 150)
H-INDEX: 33 (five years: 4)

2021, Vol 4, pp. 1-8
Author(s): Esther Akoto Amoako

Abstract. Crime has an inherent geographical quality: when a crime occurs, it happens within a particular space, making spatiality an essential component of crime studies. To prevent and respond to crime, it is first essential to identify the factors that trigger it and then design policy and strategy around each factor. This project investigates the spatial dimension of violent crime rates in the city of Detroit for 2019. Crime data were obtained from the City of Detroit Data Portal, and demographic data relating to social disorganization theory were obtained from the Census Bureau. In the presence of spatial spillover and spatial dependence, the assumptions of classical statistics are violated, and Ordinary Least Squares (OLS) estimation is inefficient for explaining the spatial dimensions of crime. This paper uses explanatory variables drawn from the social disorganization theory of crime together with spatial autoregressive models to determine the predictors of violent crime in the city for that period. Using the GeoDa 1.18 and ArcGIS Desktop 10.7.1 software packages, Spatial Lag Models (SLM) and Spatial Error Models (SEM) were fitted to determine which model performs better in identifying predictors of violent crime. SLM outperformed SEM in terms of efficiency (SLM: AIC 5268.52, Breusch-Pagan test 9.8402, R² 16%, log likelihood −2627.26; SEM: AIC 5275.24, Breusch-Pagan test 9.7601, R² 15%, log likelihood −2630.6194). Strong support is found for the social disorganization theory of crime. A high percentage of ethnic heterogeneity (% black) and a high percentage of college graduates are the strongest predictors of violent crime in the study area.
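A minimal sketch of the model comparison described in this abstract, assuming the PySAL spreg and libpysal APIs; the shapefile name, dependent variable, and covariates (detroit_tracts.shp, violent_rate, pct_black, pct_college, pct_poverty) are hypothetical placeholders rather than the paper's actual data.

# Fit a spatial lag model (SLM) and a spatial error model (SEM) and rank them
# by AIC / log likelihood, as in the abstract; lower AIC and higher log
# likelihood indicate the better-fitting model.
import geopandas as gpd
from libpysal.weights import Queen
from spreg import ML_Lag, ML_Error

gdf = gpd.read_file("detroit_tracts.shp")                      # tracts with crime + covariates
y = gdf[["violent_rate"]].values                               # dependent variable (n x 1)
X = gdf[["pct_black", "pct_college", "pct_poverty"]].values    # illustrative covariates

w = Queen.from_dataframe(gdf)    # queen-contiguity spatial weights
w.transform = "r"                # row-standardize

slm = ML_Lag(y, X, w=w, name_y="violent_rate")
sem = ML_Error(y, X, w=w, name_y="violent_rate")

for name, model in [("SLM", slm), ("SEM", sem)]:
    print(f"{name}: logL = {model.logll:.2f}, AIC = {model.aic:.2f}")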


2021, Vol 2021, pp. 1-9
Author(s): Yong Chen

An improved nonlinear weighted extreme gradient boosting (XGBoost) technique is developed to forecast length of stay for patients with imbalanced data. The algorithm first chooses an effective technique for fitting the length of stay and determining its distribution law, and then optimizes the negative log likelihood loss function using a heuristic nonlinear weighting method based on sample percentage. Theoretical and practical results reveal that, compared to existing algorithms, the XGBoost method based on nonlinear weighting may achieve higher classification accuracy and better prediction performance, which is beneficial for treating more patients with fewer hospital beds.
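The abstract does not give the exact weighting formula, so the sketch below substitutes an assumed inverse-frequency power weighting for the "heuristic nonlinear weighting method based on sample percentage", applied through XGBoost's sample_weight argument; the synthetic data and hyperparameters are illustrative only.

import numpy as np
from xgboost import XGBClassifier

def nonlinear_class_weights(y, gamma=0.75):
    """Weight each sample by (1 / class proportion) ** gamma (gamma < 1 softens the correction)."""
    classes, counts = np.unique(y, return_counts=True)
    prop = dict(zip(classes, counts / counts.sum()))
    return np.array([(1.0 / prop[c]) ** gamma for c in y])

# Synthetic imbalanced length-of-stay classes standing in for real patient data
rng = np.random.default_rng(0)
X_train = rng.normal(size=(1000, 8))
y_train = rng.choice(4, size=1000, p=[0.7, 0.2, 0.07, 0.03])

model = XGBClassifier(
    objective="multi:softprob",   # multiclass negative log-likelihood (softmax) loss
    n_estimators=300,
    max_depth=6,
    learning_rate=0.1,
)
model.fit(X_train, y_train, sample_weight=nonlinear_class_weights(y_train))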


Energies, 2021, Vol 14 (22), pp. 7630
Author(s): Joonho Seon, Youngghyu Sun, Soohyun Kim, Jinyoung Kim

In this paper, a time-lapse image method is proposed to improve the classification accuracy for multistate appliances with complex patterns based on nonintrusive load monitoring (NILM). A log-likelihood ratio detector with a maxima algorithm was applied to construct a real-time event detector for home appliances. Moreover, a novel image-combining method was employed to extract information from the data based on the Gramian angular field (GAF) and recurrence plot (RP) transformations. The simulation results confirmed that the classification accuracy can be increased by up to 30% with the proposed method compared with conventional approaches for classifying multistate appliances.
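A rough sketch of the event-detection step only, assuming a Gaussian change-in-mean model for the aggregate power signal; the window length and threshold are illustrative, and the GAF/RP image construction is not reproduced here.

import numpy as np

def llr_change_score(x, w=30):
    """Log-likelihood ratio of 'mean changes at t' vs 'no change' over a 2w-sample window."""
    scores = np.zeros(len(x))
    for t in range(w, len(x) - w):
        left, right, both = x[t - w:t], x[t:t + w], x[t - w:t + w]
        var = both.var() + 1e-9
        ll_split = -0.5 * (((left - left.mean())**2).sum() + ((right - right.mean())**2).sum()) / var
        ll_joint = -0.5 * ((both - both.mean())**2).sum() / var
        scores[t] = ll_split - ll_joint
    return scores

def detect_events(scores, threshold=20.0):
    """Keep local maxima of the LLR score that exceed the threshold."""
    peaks = (scores[1:-1] > scores[:-2]) & (scores[1:-1] > scores[2:]) & (scores[1:-1] > threshold)
    return np.where(peaks)[0] + 1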


2021
Author(s): Mehmet Niyazi Cankaya, Roberto Vila

Abstract. The maximum $\log_q$ likelihood estimation method is a generalization of the well-known maximum log likelihood method that overcomes the problem of modeling non-identical observations (inliers and outliers). The parameter $q$ is a tuning constant that manages the modeling capability. The Weibull distribution is flexible and popular for problems in engineering. In this study, the method is used to estimate the parameters of the Weibull distribution when non-identical observations exist. Since the main idea is based on the modeling capability of the objective function $p(x; \theta) = \log_q[f(x; \theta)]$, we observe that the finiteness of the score functions cannot play a role in robust estimation for inliers. The properties of the Weibull distribution are examined. In the numerical experiments, the parameters of the Weibull distribution are estimated by the $\log_q$ likelihood method and its special form, the log likelihood method, under different designs of contamination applied to the underlying Weibull distribution. The optimization is performed via a genetic algorithm. The modeling competence of $p(x; \theta)$ and its insensitivity to non-identical observations are assessed by Monte Carlo simulation. The value of $q$ can be chosen using the mean squared error in simulation and the $p$-value of the Kolmogorov-Smirnov test statistic used to evaluate fitting competence. Thus, we can overcome the problem of determining the value of $q$ for real data sets.
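A minimal sketch of the estimator discussed above, assuming the standard q-logarithm $\log_q(u) = (u^{1-q} - 1)/(1-q)$, which reduces to the ordinary log as $q \to 1$; the contaminated sample, the value $q = 0.9$, and the Nelder-Mead optimizer are illustrative choices, whereas the paper itself uses a genetic algorithm.

import numpy as np
from scipy.optimize import minimize
from scipy.stats import weibull_min

def log_q(u, q):
    """q-logarithm; the ordinary logarithm is recovered at q = 1."""
    return np.log(u) if np.isclose(q, 1.0) else (u**(1.0 - q) - 1.0) / (1.0 - q)

def neg_logq_likelihood(params, x, q):
    shape, scale = params
    if shape <= 0 or scale <= 0:
        return np.inf
    pdf = weibull_min.pdf(x, c=shape, scale=scale)
    return -np.sum(log_q(pdf + 1e-300, q))

# Weibull sample contaminated with a few large outliers
rng = np.random.default_rng(1)
x = np.concatenate([weibull_min.rvs(c=2.0, scale=3.0, size=200, random_state=1),
                    rng.uniform(20, 30, size=10)])

for q in (1.0, 0.9):   # q = 1 is the ordinary MLE; q < 1 bounds the influence of outliers
    res = minimize(neg_logq_likelihood, x0=[1.0, 1.0], args=(x, q), method="Nelder-Mead")
    print(f"q = {q}: shape = {res.x[0]:.2f}, scale = {res.x[1]:.2f}")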


Author(s): Yana Lyakhova, Evgeny Alexandrovich Polyakov, Alexey N Rubtsov

Abstract. In recent years, there has been intensive research on how to exploit the quantum laws of nature in machine learning. Models have been put forward that employ spins, photons, and cold atoms. In this work we study the possibility of using lattice fermions to learn classical data. We propose an alternative to the quantum Boltzmann Machine, the so-called Spin-Fermion Machine (SFM), in which the spins represent the degrees of freedom of the observable data (to be learned), and the fermions represent the correlations between the data. The coupling is linear in spins and quadratic in fermions. The fermions are allowed to tunnel between the lattice sites. The training of the SFM can be efficiently implemented since there are closed expressions for the log-likelihood gradient. We find that the SFM is more powerful than the classical Restricted Boltzmann Machine (RBM) with the same number of physical degrees of freedom. The reason is that the SFM has additional freedom due to the rotation of the Fermi sea. We show examples for several data sets.
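The closed-form SFM gradients are not given in the abstract and are not reproduced here; for context, the sketch below shows the log-likelihood gradient of the RBM baseline it is compared against, approximated with one Gibbs step (CD-1). All sizes and the data batch are illustrative.

import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def cd1_gradient(v_data, W, b_v, b_h):
    """Approximate d log L / d(W, b_v, b_h) for a binary RBM via contrastive divergence."""
    # Positive phase: hidden activations driven by the data
    p_h_data = sigmoid(v_data @ W + b_h)
    # Negative phase: one Gibbs step v -> h -> v' -> h'
    h_sample = (rng.random(p_h_data.shape) < p_h_data).astype(float)
    p_v_model = sigmoid(h_sample @ W.T + b_v)
    p_h_model = sigmoid(p_v_model @ W + b_h)
    n = v_data.shape[0]
    dW = (v_data.T @ p_h_data - p_v_model.T @ p_h_model) / n
    db_v = (v_data - p_v_model).mean(axis=0)
    db_h = (p_h_data - p_h_model).mean(axis=0)
    return dW, db_v, db_h

# Toy usage: 6 visible units, 4 hidden units, one random binary batch
W, b_v, b_h = rng.normal(0, 0.1, (6, 4)), np.zeros(6), np.zeros(4)
v = (rng.random((32, 6)) < 0.5).astype(float)
dW, db_v, db_h = cd1_gradient(v, W, b_v, b_h)
W += 0.1 * dW   # one ascent step on the approximate log-likelihood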


2021, Vol 3 (2), pp. 95-108
Author(s): Auwalu Ibrahim, Ahmad Abubakar Suleiman, Usman Aliyu Abdullahi, Suleiman Abubakar Suleiman

Groundwater is the water present beneath the earth's surface in soil pore spaces and the fractures of rock formations. Establishing a probability distribution that provides a good fit to groundwater quality has recently become a topic of interest in fields such as hydrology and meteorology. In this paper, three groundwater datasets, covering calcium, magnesium, and chloride, are fitted to the normal, lognormal, gamma, Weibull, logistic, and log-logistic distributions to select the best groundwater model. Goodness-of-fit measures such as the Akaike information criterion (AIC), Bayesian information criterion (BIC), and log-likelihood are computed to compare the fitted models. The results show that the gamma distribution gives better fits for the calcium and magnesium datasets, while the lognormal distribution provides a better fit for the chloride dataset than other competing models. This research describes an application of probability distributions, and of the best-fitted distribution, to a practical problem involving groundwater data analysis. By assuming the distribution of the data, analysts can utilize the characteristics of the distribution to make predictions about outcomes.
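A minimal sketch of the model-selection step described above: fit the six candidate distributions with scipy.stats and compare them by log-likelihood, AIC, and BIC. The synthetic gamma sample stands in for the paper's calcium, magnesium, and chloride data.

import numpy as np
from scipy import stats

data = stats.gamma.rvs(a=3.0, scale=10.0, size=500, random_state=0)  # placeholder water-quality sample

candidates = {
    "normal": stats.norm,
    "lognormal": stats.lognorm,
    "gamma": stats.gamma,
    "weibull": stats.weibull_min,
    "logistic": stats.logistic,
    "log-logistic": stats.fisk,
}

for name, dist in candidates.items():
    params = dist.fit(data)                          # maximum likelihood fit
    loglik = np.sum(dist.logpdf(data, *params))
    k = len(params)
    aic = 2 * k - 2 * loglik                         # lower is better
    bic = k * np.log(len(data)) - 2 * loglik
    print(f"{name:12s} logL={loglik:9.2f}  AIC={aic:9.2f}  BIC={bic:9.2f}")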


2021, pp. 002202212110447
Author(s): Renzhong Peng, Chongguang Zhu, Weiping Wu

As acculturation research has become more interdisciplinary and dynamic over the last 20 years, it is necessary to explore its emerging trends. We collected 10,039 research articles on acculturation research from 2000 to 2020 from the Web of Science (WoS) database and utilized the CiteSpace tool to visualize emerging trends. During the data analysis, we extracted noun phrases from the abstracts of the retrieved articles to identify clusters, and the log-likelihood ratio (LLR) algorithm was used to generate cluster labels in the co-citation network. Based on the size of the clusters, the five largest clusters were chosen and analyzed: “Asian cultural value,” “Suicide attempt,” “Unhealthy behavior,” “Host country identification,” and “Emerging adulthood”. These findings may help researchers and scholars gain useful insight and explore topics related to the research trends in acculturation.
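As a rough illustration of the cluster-labeling step, the sketch below computes Dunning's log-likelihood ratio (G²) for a candidate phrase from a 2x2 contingency table of in-cluster versus out-of-cluster term counts; the counts are invented, and CiteSpace's actual implementation may differ in detail.

import numpy as np

def llr(k11, k12, k21, k22):
    """G^2 = 2 * sum O * ln(O / E) over the 2x2 table [[k11, k12], [k21, k22]]."""
    table = np.array([[k11, k12], [k21, k22]], dtype=float)
    expected = np.outer(table.sum(axis=1), table.sum(axis=0)) / table.sum()
    with np.errstate(divide="ignore", invalid="ignore"):
        terms = np.where(table > 0, table * np.log(table / expected), 0.0)
    return 2.0 * terms.sum()

# k11: candidate phrase inside the cluster, k12: other terms inside the cluster,
# k21: the phrase elsewhere, k22: other terms elsewhere (illustrative counts)
print(llr(120, 880, 300, 8700))   # higher score -> stronger label candidate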


2021, Vol 81 (9)
Author(s): P. Adhikari, R. Ajaj, M. Alpízar-Venegas, P.-A. Amaudruz, D. J. Auty, ...

Abstract. The DEAP-3600 detector searches for the scintillation signal from dark matter particles scattering on a 3.3 tonne liquid argon target. The largest background comes from $^{39}$Ar beta decays and is suppressed using pulse-shape discrimination (PSD). We use two types of PSD estimator: the prompt-fraction, which considers the fraction of the scintillation signal in a narrow and a wide time window around the event peak, and the log-likelihood-ratio, which compares the observed photon arrival times to a signal and a background model. We furthermore use two algorithms to determine the number of photons detected at a given time: (1) simply dividing the charge of each PMT pulse by the mean single-photoelectron charge, and (2) a likelihood analysis that considers the probability to detect a certain number of photons at a given time, based on a model for the scintillation pulse shape and for afterpulsing in the light detectors. The prompt-fraction performs approximately as well as the log-likelihood-ratio PSD algorithm if the photon detection times are not biased by detector effects. We explain this result using a model for the information carried by scintillation photons as a function of the time when they are detected.
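A simplified sketch of the two PSD estimators described above, applied to a list of photon detection times relative to the event peak; the window lengths and the exponential signal/background time constants are illustrative values, not DEAP-3600 parameters.

import numpy as np

def prompt_fraction(times, prompt_window=60.0, full_window=10000.0):
    """Fraction of detected photons falling in the narrow (prompt) window, in ns."""
    times = np.asarray(times)
    in_full = times <= full_window
    return np.count_nonzero(times[in_full] <= prompt_window) / max(np.count_nonzero(in_full), 1)

def log_likelihood_ratio(times, tau_sig=7.0, tau_bkg=1500.0):
    """Sum of log[p_signal(t) / p_background(t)] using exponential pulse-shape models."""
    t = np.asarray(times)
    log_p_sig = -t / tau_sig - np.log(tau_sig)
    log_p_bkg = -t / tau_bkg - np.log(tau_bkg)
    return np.sum(log_p_sig - log_p_bkg)

times = np.random.default_rng(2).exponential(7.0, size=200)  # toy fast-scintillation event
print(prompt_fraction(times), log_likelihood_ratio(times))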

