likelihood function
Recently Published Documents


TOTAL DOCUMENTS

998
(FIVE YEARS 315)

H-INDEX

45
(FIVE YEARS 7)

2022 ◽  
Vol 15 (3) ◽  
pp. 1-32
Author(s):  
Nikolaos Alachiotis ◽  
Panagiotis Skrimponis ◽  
Manolis Pissadakis ◽  
Dionisios Pnevmatikatos

Disaggregated computer architectures eliminate resource fragmentation in next-generation datacenters by enabling virtual machines to employ resources such as CPUs, memory, and accelerators that are physically located on different servers. While this paves the way for highly compute- and/or memory-intensive applications to potentially deploy all CPU and/or memory resources in a datacenter, it poses a major challenge to the efficient deployment of hardware accelerators: input/output data can reside on different servers than the ones hosting accelerator resources, requiring time- and energy-consuming remote data transfers that diminish the gains of hardware acceleration. Targeting a disaggregated datacenter architecture similar to the IBM dReDBox prototype, the present work explores the potential of deploying custom acceleration units, implemented in FPGA technology, adjacent to the disaggregated-memory controller on memory bricks (in dReDBox terminology) to reduce data movement and improve performance and energy efficiency when reconstructing large phylogenies (evolutionary relationships among organisms). A fundamental computational kernel is the Phylogenetic Likelihood Function (PLF), which dominates the total execution time (up to 95%) of widely used maximum-likelihood methods. Numerous efforts to boost PLF performance over the years have focused on accelerating computation; since the PLF is a data-intensive, memory-bound operation, however, performance remains limited by data movement, and memory disaggregation only exacerbates the problem.
We describe two near-memory processing models, one that addresses the problem of workload distribution to memory bricks, which is particularly tailored toward larger genomes (e.g., plants and mammals), and one that reduces overall memory requirements through memory-side data interpolation transparently to the application, thereby allowing the phylogeny size to scale to a larger number of organisms without requiring additional memory.
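As background for the PLF kernel discussed above, the following is a minimal sketch of the pruning recursion that the PLF evaluates at each internal node of the phylogeny: each child's conditional likelihood vector is propagated through its branch transition matrix, and the results are multiplied element-wise. The two-state alphabet, matrix values, and function names are illustrative assumptions, not from the paper.

```python
# Pruning-step sketch for the Phylogenetic Likelihood Function (PLF).
# All values below are illustrative; real PLF kernels use a 4-state
# (DNA) or 20-state (protein) alphabet and per-branch matrices.

def propagate(P, child):
    """Sum over the child's states for each parent state."""
    return [sum(P[s][i] * child[i] for i in range(len(child)))
            for s in range(len(P))]

def plf_node(P_left, left, P_right, right):
    """Conditional likelihood vector at an internal node."""
    a = propagate(P_left, left)
    b = propagate(P_right, right)
    return [x * y for x, y in zip(a, b)]

P = [[0.9, 0.1],
     [0.1, 0.9]]                    # per-branch transition probabilities
parent = plf_node(P, [1.0, 0.0], P, [0.0, 1.0])   # one-hot leaf vectors
```

Because every node's vector depends on long streams of child vectors that are read once and discarded, the recursion is memory-bound, which is why the abstract's near-memory placement of this kernel matters.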


Author(s):  
Isaac Sim ◽  
Young Ghyu Sun ◽  
Soo Hyun Kim ◽  
SangWoon Lee ◽  
Cheong Ghil Kim ◽  
...  

In this letter, we study a scenario based on the degenerate unmixing estimation technique (DUET) that separates original signals from a mixture of frequency-hopping spread spectrum (FHSS) signals received with two antennas. We show that the assumptions DUET makes for separating mixed signals apply to drone-based digital signage recognition signals, and we propose a DUET-based separation scheme (DBSS) that classifies the mixed drone recognition signals by extracting the delay and attenuation components of the mixture through the likelihood function and the short-time Fourier transform (STFT). In addition, we propose an iterative algorithm for signal separation based on the conventional DUET scheme. Numerical results show that the proposed algorithm is more separation-efficient than baseline schemes: DBSS can separate all signals within about 0.56 seconds when there are fewer than nine signage signals.
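To make the DUET idea above concrete: in a time-frequency bin where one source dominates, the ratio of the two antennas' STFT values yields that source's inter-antenna attenuation and delay. The sketch below is illustrative only (a single synthetic bin, made-up parameters), not the paper's full DBSS pipeline.

```python
import cmath

# DUET-style feature extraction from one STFT bin pair (illustrative).

def duet_features(x1, x2, omega):
    """Attenuation and delay estimated from one STFT bin pair."""
    r = x2 / x1
    attenuation = abs(r)
    delay = -cmath.phase(r) / omega    # phase difference maps to delay
    return attenuation, delay

# Synthetic bin: the source reaches antenna 2 attenuated by 0.5 and
# delayed by 2 samples.
omega = 0.3                            # bin frequency (rad/sample)
x1 = 1.0 + 0.0j
x2 = 0.5 * cmath.exp(-1j * omega * 2) * x1
a, d = duet_features(x1, x2, omega)    # recovers (0.5, 2.0)
```

Clustering these (attenuation, delay) pairs over all bins is what lets DUET assign each bin to a source; DBSS adds a likelihood function over these features to classify the drone signals.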


Sensors ◽  
2022 ◽  
Vol 22 (2) ◽  
pp. 462
Author(s):  
Hong Anh Nguyen ◽  
Van Khang Nguyen ◽  
Klaus Witrisal

Ultra-Wide Bandwidth (UWB) and mm-wave radio systems can resolve specular multipath components (SMCs) from estimated channel impulse responses. A geometric model can describe the delays, angles-of-arrival, and angles-of-departure of these SMCs, allowing for a prediction of these channel features. For modeling the amplitudes of the SMCs, a data-driven approach has been proposed recently, using Gaussian Process Regression (GPR) to map and predict the SMC amplitudes. In this paper, the applicability of the proposed multipath-resolved, GPR-based channel model is analyzed by studying features of the propagation channel from a set of channel measurements. The features analyzed include the energy capture of the modeled SMCs, the number of resolvable SMCs, and the ranging information that can be extracted from the SMCs. The second contribution of the paper concerns the potential applicability of the channel model to a multipath-resolved, single-anchor positioning system. The predicted channel knowledge is used to evaluate the measurement likelihood function at candidate positions throughout the environment. It is shown that the environmental awareness created by the multipath-resolved, GPR-based channel model yields higher robustness against outliers in the position estimate.
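A minimal sketch of the GPR amplitude-prediction step described above: an SMC amplitude is learned as a function of (here, one-dimensional) position and predicted at a query position via the GP posterior mean. The RBF kernel, its hyperparameters, and the data values are illustrative assumptions, not the paper's model.

```python
import numpy as np

# GP posterior-mean prediction of an SMC amplitude (illustrative sketch).

def rbf(a, b, length=1.0):
    return np.exp(-0.5 * ((a - b) / length) ** 2)

def gpr_mean(x_train, y_train, x_query, noise=1e-6):
    K = rbf(x_train[:, None], x_train[None, :])          # train-train kernel
    k_star = rbf(x_query, x_train)                       # query-train kernel
    alpha = np.linalg.solve(K + noise * np.eye(len(x_train)), y_train)
    return float(k_star @ alpha)                         # posterior mean

x = np.array([0.0, 1.0, 2.0])      # measurement positions
y = np.array([1.0, 0.5, 0.2])      # observed SMC amplitudes
pred = gpr_mean(x, y, 1.0)         # interpolates the middle observation
```

Evaluating such predicted amplitudes at candidate positions is what feeds the measurement likelihood function in the paper's single-anchor positioning application.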


2022 ◽  
Vol 2022 ◽  
pp. 1-12
Author(s):  
Wei Zhou

In this paper, a stochastic traffic assignment model for networks is proposed for the study of discrete dynamic Bayesian algorithms, together with a feasible method and theoretical framework for implementing traffic engineering in networks based on Bayesian algorithm theory. The implementation of traffic assignment engineering is studied in conjunction with the stochastic network model. First, the Bayesian algorithm model of control-layer stripping in the network is studied on the basis of discrete dynamic Bayesian algorithm theory, and the resource-sharing mechanism under different queuing rules is analyzed. Second, the extraction and evaluation of traffic assignment for the global view obtained by the network's control layer are studied, and a Bayesian analysis model based on the traffic assignment is established. Subsequently, routing with bandwidth and delay guarantees is studied based on the Bayesian algorithm model and the theory of random network traffic allocation. A Bayesian estimation model is then constructed with randomly observed network traffic assignments as input data. The model assumes that the roadway traffic distribution follows the network random principle; under this assumption, the likelihood function of the roadway link traffic is derived, the prior distribution of the roadway traffic is obtained from the maximum entropy principle, and the posterior distribution of the roadway traffic is solved by combining the likelihood function and the prior distribution. A corresponding algorithm is designed for the model with roadway traffic as input, and its reliability is verified in a numerical example.
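The estimation step above follows the standard Bayes update: posterior ∝ likelihood × prior. The sketch below illustrates it with a Poisson likelihood for an observed link count and a maximum-entropy (uniform) prior over a finite set of candidate flow levels; the likelihood family and candidate set are illustrative assumptions, not the paper's network-random derivation.

```python
import math

# Discrete Bayes update: uniform (max-entropy) prior x Poisson likelihood.

def poisson_like(count, rate):
    return math.exp(-rate) * rate ** count / math.factorial(count)

def posterior(observed, candidates):
    prior = [1.0 / len(candidates)] * len(candidates)   # max entropy on a finite set
    unnorm = [poisson_like(observed, c) * p
              for c, p in zip(candidates, prior)]
    z = sum(unnorm)
    return [u / z for u in unnorm]

# Posterior over candidate flow levels after observing a link count of 4.
post = posterior(observed=4, candidates=[2, 4, 6, 8])
```

With a uniform prior the posterior simply re-weights the candidates by their likelihoods, so the posterior mode sits at the candidate that best explains the observed count.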


2022 ◽  
Author(s):  
Angélica Maria Tortola Ribeiro ◽  
Paulo Justiniano Ribeiro ◽  
Wagner Hugo Bonat

Abstract We propose a covariance specification for modeling spatially continuous multivariate data. The model is based on a reformulation of the Kronecker product of covariance matrices for Gaussian random fields, illustrated here with the Matérn function specifying the marginal covariances. The structure holds for other choices of covariance functions with parameters varying in their usual domains, which makes the estimation process more accessible. The reduced computational time and flexible generalization to an increasing number of variables make it an attractive alternative for modeling spatially continuous data. Theoretical results for the likelihood function and the derivatives of the covariance matrix are presented. The proposed model is fitted to the soil250 dataset from the literature, and adequacy measures, forecast errors, and estimation times are compared with those obtained from classical models. Furthermore, the model is fitted to the classic meuse dataset to illustrate its flexibility in a four-variate analysis. A simulation study considering different parametric scenarios evaluates the asymptotic properties of the maximum likelihood estimators. The satisfactory results, simpler structure, and reduced estimation time make the proposed model a strong candidate for multivariate analysis of spatial data.
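A sketch of the Kronecker structure described above: the joint covariance of a bivariate Gaussian random field factors as the Kronecker product of a between-variable covariance and a spatial correlation matrix. The Matérn ν = 1/2 (exponential) correlation and all parameter values below are illustrative choices, not the paper's specification.

```python
import numpy as np

# Kronecker-structured covariance for a bivariate field at 3 locations.

coords = np.array([0.0, 1.0, 2.5])              # spatial locations
D = np.abs(coords[:, None] - coords[None, :])   # pairwise distances
R = np.exp(-D / 2.0)                            # Matern nu = 1/2, range = 2
Sigma_v = np.array([[1.0, 0.3],
                    [0.3, 0.5]])                # between-variable covariance
Sigma = np.kron(Sigma_v, R)                     # joint 6 x 6 covariance
```

Because the Kronecker factors are both positive definite, the joint matrix is too, and likelihood evaluations can exploit the factorization (determinants and inverses of the small factors) instead of working with the full matrix, which is the source of the reduced estimation time the abstract reports.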


2022 ◽  
Vol 19 (1) ◽  
pp. 2-24
Author(s):  
Mohamed Abd Elhamed Sabry ◽  
Hiba Zeyada Muhammed ◽  
Mostafa Shaaban ◽  
Abd El Hady Nabih

In this paper, the likelihood function for parameter estimation based on double ranked set sampling (DRSS) schemes is introduced. The proposed likelihood function is used to estimate the Weibull distribution parameters. The maximum likelihood estimators (MLEs) are investigated and compared to the corresponding ones based on simple random sampling (SRS) and ranked set sampling (RSS) schemes. A Monte Carlo simulation is conducted, and the absolute relative biases, mean square errors, and efficiencies are compared across the different schemes. It is found that the MLEs based on DRSS are more efficient than those based on SRS and RSS for estimating the two parameters of the Weibull distribution (WD).
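For reference, the sketch below shows the SRS baseline that the paper compares against: the Weibull log-likelihood maximized numerically (here by a coarse grid search standing in for a proper optimizer). The DRSS likelihood weights ranked observations differently; that weighting is omitted. Sample size, grid, and true parameters are illustrative choices.

```python
import math
import random

# Weibull log-likelihood under simple random sampling (SRS baseline).

def weibull_loglik(data, shape, scale):
    return sum(math.log(shape / scale)
               + (shape - 1) * math.log(x / scale)
               - (x / scale) ** shape
               for x in data)

random.seed(0)
true_shape, true_scale = 2.0, 3.0
# Inverse-CDF sampling: X = scale * (-ln(1 - U)) ** (1 / shape)
data = [true_scale * (-math.log(1.0 - random.random())) ** (1.0 / true_shape)
        for _ in range(500)]

# Coarse grid search over (shape, scale) stands in for a numerical optimizer.
grid = [(k / 10.0, s / 10.0) for k in range(10, 40) for s in range(10, 50)]
mle_shape, mle_scale = max(grid, key=lambda p: weibull_loglik(data, *p))
```

The DRSS comparison in the paper amounts to replacing the simple product likelihood here with one built from the densities of order statistics, which concentrates the likelihood and yields more efficient estimators.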


2021 ◽  
Vol 15 (1) ◽  
pp. 280-288
Author(s):  
Mahdi Rezapour ◽  
Khaled Ksaibati

Background: Kernel-based methods have gained popularity because a model's residual distribution might not follow any classical parametric distribution. Kernel-based methods have been extended to estimate conditional densities, rather than conditional distributions, when the data incorporate both discrete and continuous attributes. The method typically relies on smoothing parameters, with an optimal value chosen for each attribute. Thus, if an explanatory variable is independent of the dependent variable, the nonparametric method effectively drops that attribute by assigning it a large smoothing parameter, giving it a uniform distribution so that its contribution to the model's variance is minimal. Objectives: The objective of this study was to identify factors contributing to the severity of pedestrian crashes based on an unbiased method. In particular, this study evaluates the applicability of semi- and nonparametric kernel-based techniques to the crash dataset by means of confusion matrices. Methods: In this study, two non- and semi-parametric kernel-based methods were implemented to model the severity of pedestrian crashes. Estimation of the semi-parametric densities is based on adaptive local smoothing and maximization of a quasi-likelihood function, which resembles the likelihood of the binary logit model. The nonparametric method, on the other hand, is based on selecting optimal smoothing parameters for the conditional probability density estimate so as to minimize the mean integrated squared error (MISE). The performance of these models is evaluated by their predictive power; as a benchmark, standard logistic regression was also employed. Although these methods have been used in other fields, this is one of the earliest studies to employ them in the context of traffic safety.
Results: The results highlight that the nonparametric kernel-based method outperforms the semi-parametric (single-index) model and the standard logit model based on the confusion matrices. To examine how the bandwidth selection method removes irrelevant attributes in the nonparametric approach, we added noisy predictors to the models and compared the outcomes. The methodological approach of the models is discussed extensively in the study. Conclusion: In summary, alcohol and drug involvement, driving on a non-level grade, and bad lighting conditions are among the factors that increase the likelihood of severe pedestrian crashes. This is one of the earliest studies to apply these methods to transportation problems. The nonparametric method is especially recommended in traffic safety applications when the importance of predictors is uncertain, as the technique automatically drops unimportant predictors.
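The bandwidth mechanism described above can be illustrated with a Nadaraya-Watson estimate of the conditional probability of a severe crash given one continuous predictor: a very large bandwidth flattens the kernel weights toward the sample mean, which is how an irrelevant predictor is effectively dropped. Data and bandwidths below are illustrative, not from the crash dataset.

```python
import math

# Nadaraya-Watson conditional probability of a binary outcome.

def gauss(u):
    return math.exp(-0.5 * u * u)

def nw_probability(x0, xs, ys, h):
    """Kernel-weighted average of the binary outcome around x0."""
    w = [gauss((x0 - x) / h) for x in xs]
    return sum(wi * yi for wi, yi in zip(w, ys)) / sum(w)

xs = [0.0, 0.5, 1.0, 1.5, 2.0]     # predictor (e.g. a risk score)
ys = [0, 0, 1, 1, 1]               # 1 = severe crash
p_local = nw_probability(1.8, xs, ys, h=0.3)   # informative bandwidth
p_flat = nw_probability(1.8, xs, ys, h=1e6)    # huge bandwidth: ~ sample mean
```

With `h=0.3` the estimate tracks the local data, while with the huge bandwidth every point gets nearly equal weight and the predictor carries no information, mirroring the uniform-distribution behavior the abstract describes.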


2021 ◽  
Vol 10 (3) ◽  
pp. 413-422
Author(s):  
Nur Azizah ◽  
Sugito Sugito ◽  
Hasbi Yasin

Hospital service facilities cannot be separated from queuing events. Queues are an unavoidable part of life, but they can be minimized with a good system. The purpose of this study was to characterize the queuing system at Dr. Kariadi Hospital. The Bayesian method is used to combine previous research with this research in order to obtain new information: the sample distribution and the prior distribution obtained from previous studies are combined with the sample likelihood function to obtain a posterior distribution. After calculating the posterior distribution, it was found that the queuing model in the outpatient installation at Dr. Kariadi Semarang is (G/G/c):(GD/∞/∞), where each polyclinic has met steady-state conditions and the busy level is greater than the idle level, so the queuing system at Dr. Kariadi is categorized as good, except in the internal medicine polyclinic.


2021 ◽  
Vol 10 (3) ◽  
pp. 337-345
Author(s):  
Dini Febriani ◽  
Sugito Sugito ◽  
Alan Prahutama

A high traffic growth rate results in congestion on the road network system. One of the government's efforts to address this issue is building highways to reduce congestion, especially in large cities. One queuing phenomenon that often occurs in the city of Semarang is the queue at the Muktiharjo Toll Gate, where arriving vehicles queue to make toll payments. This study aims to determine how the service system at the Muktiharjo Toll Gate performs, which can be done by deriving a queuing system model and system performance measures from the arrival and service distributions. These distributions are determined by finding the posterior distribution using the Bayesian method, which combines the likelihood function of the sample with the prior distribution; here the likelihood function is negative binomial and the prior distribution is discrete uniform. Based on the calculations and analysis, it can be concluded that the queuing system model at the Muktiharjo Toll Gate is (Beta/Beta/5):(GD/∞/∞). The queue simulation shows that the service system at the Muktiharjo Toll Gate is optimal according to the system performance measures, because the busy probability is higher than the idle probability.
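The steady-state and busy-probability checks above can be sketched with textbook multi-server queue formulas: steady state requires utilization ρ = λ/(cμ) < 1, and the long-run busy probability of a server is then ρ. The M/M/c formulas and rates below are illustrative stand-ins for the fitted (Beta/Beta/5) model, not the paper's actual estimates.

```python
import math

# Steady-state measures for a c-server queue (M/M/c stand-in).

def mmc_empty_prob(lam, mu, c):
    """Probability the M/M/c system is empty (Erlang-C normalizer)."""
    a = lam / mu                       # offered load
    rho = a / c                        # per-server utilization
    assert rho < 1.0, "no steady state"
    total = sum(a ** n / math.factorial(n) for n in range(c))
    total += a ** c / (math.factorial(c) * (1.0 - rho))
    return 1.0 / total

lam, mu, c = 4.0, 1.0, 5          # arrivals/min, services/min per booth, booths
rho = lam / (c * mu)
busy, idle = rho, 1.0 - rho       # "busy" should exceed "idle" for a good fit
p0 = mmc_empty_prob(lam, mu, c)
```

With these illustrative rates, ρ = 0.8, so the busy probability exceeds the idle probability, the criterion the abstract uses to call the toll-gate system optimal.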


2021 ◽  
Vol 14 (1) ◽  
pp. 26
Author(s):  
Weixin Li ◽  
Ming Li ◽  
Lei Zuo ◽  
Hao Sun ◽  
Hongmeng Chen ◽  
...  

Traditional forward-looking super-resolution methods mainly concentrate on enhancing the resolution in scenes with ground clutter or no clutter. However, sea clutter exists in sea-surface target imaging, together with ground clutter when the imaging scene is a seacoast. Meanwhile, restoring the contour information of the target is important, for example, for autonomous landing on a ship. This paper aims to realize forward-looking imaging of a sea-surface target. A multi-prior Bayesian method is proposed that accounts for the environment and fuses the contour information and the sparsity of the sea-surface target. First, because more than one kind of clutter exists in the imaging environment, we introduce the Gaussian mixture model (GMM) as prior information to describe the interference of clutter and noise. Second, we fuse the total variation (TV) prior and the Laplace prior into a multi-prior that models the contour information and sparsity of the target. Third, we introduce a latent variable to simplify the log-likelihood function. Finally, the maximum a posteriori expectation-maximization (MAP-EM) method is used to solve for the optimal parameters. Experimental results illustrate that the multi-prior Bayesian method can enhance the azimuth resolution while preserving the contour information of the sea-surface target.
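The latent-variable simplification above is the standard EM trick: introducing an indicator of which mixture component generated a sample turns the GMM log-likelihood into a tractable expected complete-data log-likelihood. The sketch below shows one E-step on scalar data; the mixture parameters are illustrative and unrelated to the paper's clutter model.

```python
import math

# E-step of EM for a 1-D Gaussian mixture: posterior over the latent label.

def normal_pdf(x, mean, var):
    return math.exp(-(x - mean) ** 2 / (2.0 * var)) / math.sqrt(2.0 * math.pi * var)

def e_step(x, weights, means, variances):
    """Responsibilities: posterior probability of each component given x."""
    joint = [w * normal_pdf(x, m, v)
             for w, m, v in zip(weights, means, variances)]
    z = sum(joint)
    return [j / z for j in joint]

# A sample near the first component's mean is assigned to it almost surely.
r = e_step(0.1, weights=[0.5, 0.5], means=[0.0, 5.0], variances=[1.0, 1.0])
```

In a MAP-EM scheme like the paper's, these responsibilities weight the complete-data log-likelihood in the M-step, to which the TV and Laplace priors are added before maximizing.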

