Generative Visual Dialogue System via Weighted Likelihood Estimation

Author(s):  
Heming Zhang ◽  
Shalini Ghosh ◽  
Larry Heck ◽  
Stephen Walsh ◽  
Junting Zhang ◽  
...  

The key challenge of generative Visual Dialogue (VD) systems is to respond to human queries with informative answers in a natural and continuous conversation flow. Traditional maximum likelihood estimation (MLE)-based methods only learn from positive responses but ignore negative responses, and consequently tend to yield safe or generic responses. To address this issue, we propose a novel training scheme in conjunction with a weighted likelihood estimation method. Furthermore, an adaptive multi-modal reasoning module is designed to accommodate various dialogue scenarios automatically and select relevant information accordingly. The experimental results on the VisDial benchmark demonstrate the superiority of our proposed algorithm over other state-of-the-art approaches, with an improvement of 5.81% on recall@10.
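
As a rough illustration of the weighted-likelihood idea (not the authors' exact objective), the sketch below rewards the likelihood of the positive response while penalizing sampled negative responses through negative weights; the weight values are assumptions.

```python
# Minimal sketch (not the authors' exact objective): a weighted likelihood
# loss in which the ground-truth response receives weight +1 and sampled
# negative responses receive a small negative weight, so the model is pushed
# away from them instead of ignoring them as in plain MLE.
import numpy as np

def weighted_likelihood_loss(log_probs, weights):
    """log_probs: per-response sequence log-likelihoods under the model.
    weights:   +1 for the ground-truth (positive) response, e.g. -0.1 for
               negative/distractor responses (illustrative values)."""
    log_probs = np.asarray(log_probs, dtype=float)
    weights = np.asarray(weights, dtype=float)
    # Maximizing sum(w_i * log p_i) is the same as minimizing its negation.
    return -np.sum(weights * log_probs)

# Toy usage: one positive answer and two distractors from the candidate pool.
loss = weighted_likelihood_loss(log_probs=[-3.2, -1.1, -0.8],
                                weights=[1.0, -0.1, -0.1])
print(loss)
```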

2021 ◽  
Vol 12 ◽  
Author(s):  
Xuemei Xue ◽  
Jing Lu ◽  
Jiwei Zhang

In this paper, a new item-weighted scheme is proposed to assess examinees' growth in longitudinal analyses. A multidimensional Rasch model for measuring learning and change (MRMLC) and its polytomous extension are used to fit the longitudinal item response data. The new item-weighted likelihood estimation method is suitable not only for complex longitudinal IRT models but also for unidimensional IRT models, for example, the combination of the two-parameter logistic (2PL) model and the partial credit model (PCM; Masters, 1982) with a varying number of categories. Two simulation studies are carried out to further illustrate the advantages of the item-weighted likelihood estimation method over the traditional maximum a posteriori (MAP) estimation method, the maximum likelihood estimation (MLE) method, Warm's (1989) weighted likelihood estimation (WLE) method, and the type-weighted maximum likelihood estimation (TWLE) method. Simulation results indicate that the item-weighted likelihood estimation method recovers examinees' true ability levels better than the existing likelihood estimation methods (MLE, WLE, and TWLE) and the MAP estimation method for both complex longitudinal IRT models and unidimensional IRT models, with smaller bias, root-mean-square error, and root-mean-square difference, especially at low and high ability levels.
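
For intuition only, the sketch below shows an item-weighted log-likelihood for ability estimation under the unidimensional 2PL model; the item weights and parameter values are illustrative assumptions, not the estimator defined in the paper.

```python
# Minimal sketch: item-weighted likelihood ability estimation under the 2PL
# model, maximized over theta with a bounded scalar optimizer.
import numpy as np
from scipy.optimize import minimize_scalar

def p_2pl(theta, a, b):
    # 2PL item response function.
    return 1.0 / (1.0 + np.exp(-a * (theta - b)))

def neg_weighted_loglik(theta, responses, a, b, w):
    p = p_2pl(theta, a, b)
    ll = responses * np.log(p) + (1 - responses) * np.log(1 - p)
    return -np.sum(w * ll)          # item weights w_j enter the likelihood here

a = np.array([1.2, 0.8, 1.5]); b = np.array([-0.5, 0.0, 1.0])   # item parameters
x = np.array([1, 1, 0])            # observed item responses
w = np.array([1.0, 1.3, 0.7])      # illustrative item weights
res = minimize_scalar(neg_weighted_loglik, bounds=(-4, 4), method="bounded",
                      args=(x, a, b, w))
print(res.x)                       # item-weighted ability estimate
```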


2014 ◽  
Vol 530-531 ◽  
pp. 768-772
Author(s):  
Guo Ping Tan ◽  
Lin Feng Tan ◽  
Lei Cao ◽  
Mei Yan Ju

To study the application of the partial network coding based real-time multicast protocol (PNCRM) in mobile ad hoc networks, the probability distribution of its delay needs to be investigated. In this paper, NS2 is used to obtain the delay of data packets through simulations. Because the delay does not strictly follow a normal distribution, the maximum likelihood estimation method based on the lognormal distribution is used to process the data. MATLAB is used to obtain the empirical distribution of the natural logarithm of the delay, and the delay distribution is then drawn with the maximum likelihood estimation method based on the lognormal distribution; the two distributions are found to be basically consistent. Therefore, the delay distribution of PNCRM follows a lognormal distribution, and the characteristics of its delay probability distribution can be estimated.
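
A minimal sketch of the lognormal maximum-likelihood fit described above, using synthetic delays in place of the NS2 simulation output:

```python
# Lognormal MLE: fit mu and sigma to the natural logarithm of the delays.
import numpy as np

rng = np.random.default_rng(0)
delays = rng.lognormal(mean=-3.0, sigma=0.5, size=1000)  # placeholder for NS2 output

log_d = np.log(delays)
mu_hat = log_d.mean()            # MLE of the log-mean
sigma_hat = log_d.std(ddof=0)    # MLE of the log-standard deviation
print(mu_hat, sigma_hat)

# Fitted lognormal density, e.g. for overlaying on the empirical histogram.
def lognormal_pdf(x, mu, sigma):
    return np.exp(-(np.log(x) - mu) ** 2 / (2 * sigma ** 2)) / (x * sigma * np.sqrt(2 * np.pi))
```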


2020 ◽  
Vol 9 (1) ◽  
pp. 61-81
Author(s):  
Lazhar BENKHELIFA

A new lifetime model with four positive parameters, called the Weibull Birnbaum-Saunders distribution, is proposed. The proposed model extends the Birnbaum-Saunders distribution and provides great flexibility in modeling data in practice. Some mathematical properties of the new distribution are obtained, including expansions for the cumulative and density functions, moments, generating function, mean deviations, order statistics and reliability. Estimation of the model parameters is carried out by the maximum likelihood estimation method. A simulation study is presented to show the performance of the maximum likelihood estimates of the model parameters. The flexibility of the new model is examined by applying it to two real data sets.
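
A hedged sketch of such a maximum-likelihood fit is given below, assuming the Weibull Birnbaum-Saunders density is built from the Weibull-G generator applied to the Birnbaum-Saunders baseline; the paper's exact parametrization may differ, and the lifetimes here are synthetic.

```python
# Hedged sketch: numerical MLE of a four-parameter model, assuming the
# Weibull-G construction F(t) = 1 - exp(-a * (G(t)/(1-G(t)))**b) with G the
# Birnbaum-Saunders CDF (an assumption about the paper's parametrization).
import numpy as np
from scipy.stats import norm
from scipy.optimize import minimize

def bs_cdf(t, alpha, beta):
    return norm.cdf((np.sqrt(t / beta) - np.sqrt(beta / t)) / alpha)

def bs_pdf(t, alpha, beta):
    z = (np.sqrt(t / beta) - np.sqrt(beta / t)) / alpha
    dz = (1.0 / np.sqrt(t * beta) + np.sqrt(beta) / t ** 1.5) / (2.0 * alpha)
    return norm.pdf(z) * dz

def neg_loglik(params, t):
    a, b, alpha, beta = np.exp(params)                 # keep parameters positive
    G = np.clip(bs_cdf(t, alpha, beta), 1e-12, 1 - 1e-12)
    g = bs_pdf(t, alpha, beta)
    logf = (np.log(a * b) + np.log(g) + (b - 1) * np.log(G)
            - (b + 1) * np.log(1 - G) - a * (G / (1 - G)) ** b)
    return -np.sum(logf)

t = np.random.default_rng(1).weibull(1.5, size=200) + 0.05   # synthetic lifetimes
fit = minimize(neg_loglik, x0=np.zeros(4), args=(t,), method="Nelder-Mead")
print(np.exp(fit.x))    # estimated (a, b, alpha, beta)
```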


Author(s):  
Shuguang Song ◽  
Hanlin Liu ◽  
Mimi Zhang ◽  
Min Xie

In this paper, we propose and study a new bivariate Weibull model, called the Bi-level Weibull Model, which arises when one failure occurs after the other. Under some specific regularity conditions, the reliability function of the second event can be above the reliability function of the first event, and is always above the reliability function of the transformed first event, which is a univariate Weibull random variable. This model is motivated by a common physical feature that arises from several real applications. The two marginal distributions are a Weibull distribution and a generalized three-parameter Weibull mixture distribution. Some useful properties of the model are derived, and we also present the maximum likelihood estimation method. A real example is provided to illustrate the application of the model.


2006 ◽  
Vol 3 (4) ◽  
pp. 1603-1627 ◽  
Author(s):  
W. Wang ◽  
P. H. A. J. M. van Gelder ◽  
J. K. Vrijling ◽  
X. Chen

Abstract. Lo's R/S test (Lo, 1991), the GPH test (Geweke and Porter-Hudak, 1983) and the maximum likelihood estimation method implemented in S-Plus (S-MLE) are evaluated through intensive Monte Carlo simulations for detecting the existence of long memory. It is shown that it is difficult to find an appropriate lag q for Lo's test for different AR and ARFIMA processes, which makes the use of Lo's test very tricky. In general, the GPH test outperforms Lo's test, but in cases with strong autocorrelation (e.g., AR(1) processes with φ = 0.97 or even 0.99), the GPH test is totally useless, even for time series of large data size. Although the S-MLE method does not provide a statistical test for the existence of long memory, the estimates of d given by S-MLE seem to give a good indication of whether or not long memory is present. Data size has a significant impact on the power of all three methods. Generally, the power of Lo's test and the GPH test increases with data size, and the estimates of d from the GPH test and S-MLE converge as the data size increases. According to the results of Lo's R/S test (Lo, 1991), the GPH test (Geweke and Porter-Hudak, 1983) and the S-MLE method, all daily flow series exhibit long memory. The intensity of long memory in daily streamflow processes has only a very weak positive relationship with the scale of the watershed.
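
For reference, a minimal sketch of the GPH log-periodogram regression used above; the bandwidth choice m = n**0.5 is a common default and not necessarily the one adopted in the paper.

```python
# GPH estimator: regress the log periodogram on log(4 sin^2(w/2)) over the
# lowest m Fourier frequencies; the negative slope estimates the memory
# parameter d.
import numpy as np

def gph_estimate(x, power=0.5):
    x = np.asarray(x, dtype=float)
    n = len(x)
    freqs = 2 * np.pi * np.arange(1, n // 2 + 1) / n
    periodogram = np.abs(np.fft.fft(x - x.mean())[1:n // 2 + 1]) ** 2 / (2 * np.pi * n)
    m = int(n ** power)                      # number of low frequencies used
    y = np.log(periodogram[:m])
    z = np.log(4 * np.sin(freqs[:m] / 2) ** 2)
    slope = np.polyfit(z, y, 1)[0]
    return -slope                            # estimate of d

x = np.random.default_rng(0).standard_normal(2048)   # white noise: d should be near 0
print(gph_estimate(x))
```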


2019 ◽  
Vol 9 (22) ◽  
pp. 4921 ◽  
Author(s):  
Chen ◽  
Wu ◽  
Liu ◽  
Wang

Time-of-flight (ToF)-based 3-D target localization is a very challenging topic because of the pseudo-targets introduced by ToF measurement errors in traditional ToF-based methods. Although the influence of errors in ToF measurement can be reduced by the probability-based ToF method, the accuracy of localization is not very high. This paper proposes a new 3-D target localization method, Iterative Maximum Weighted Likelihood Estimation (IMWLE), that takes into account the spatial distribution of pseudo-targets. In our method, each pseudo-target is initially assigned an equal weight. At each iteration, Maximum Weighted Likelihood Estimation (MWLE) is adopted to fit a Gaussian distribution to all pseudo-target positions and assign new weight factors to them. The weight factors of the pseudo-targets, which are far from the target, are reduced to minimize their influence on localization. Therefore, IMWLE can reduce the influence of pseudo-targets that are far from the target and improve the accuracy of localization. The experiments were carried out in a water tank to test the performance of the IMWLE method. Results revealed that the estimated target area can be narrowed down to the target using IMWLE and a point estimate of target location can also be obtained, which shows that IMWLE has a higher degree of accuracy than the probability-based ToF method.
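
A hedged sketch of the iterative reweighting idea described above: fit a weighted Gaussian to the pseudo-target positions, then refresh the weights from that Gaussian's density so that distant pseudo-targets lose influence; the exact weight-update rule of IMWLE may differ.

```python
# Iterative weighted Gaussian fit over pseudo-target positions (illustrative).
import numpy as np

def iterative_weighted_fit(points, n_iter=10):
    points = np.asarray(points, dtype=float)          # (N, 3) pseudo-target positions
    w = np.full(len(points), 1.0 / len(points))       # start from equal weights
    for _ in range(n_iter):
        mu = np.average(points, axis=0, weights=w)    # weighted estimate of the mean
        diff = points - mu
        cov = (w[:, None, None] * np.einsum('ni,nj->nij', diff, diff)).sum(0) / w.sum()
        cov += 1e-9 * np.eye(3)                       # numerical safeguard
        inv = np.linalg.inv(cov)
        dens = np.exp(-0.5 * np.einsum('ni,ij,nj->n', diff, inv, diff))
        w = dens / dens.sum()                         # new weights: far points shrink
    return mu                                         # point estimate of target location

pts = np.random.default_rng(2).normal([1.0, 2.0, 0.5], 0.1, size=(200, 3))
print(iterative_weighted_fit(pts))
```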


2020 ◽  
Vol 2020 ◽  
pp. 1-11
Author(s):  
Mohammed Haiek ◽  
Youness El Ansari ◽  
Nabil Ben Said Amrani ◽  
Driss Sarsri

In this paper, we propose a stochastic model to describe the evolution over time of the stress in a bolted mechanical structure, depending on different thicknesses of a joint elastic piece. First, the studied structure and the numerical simulation of the experiment are presented. Next, we statistically validate our proposed stochastic model and use the maximum likelihood estimation method based on the Euler–Maruyama scheme to estimate the parameters of this model. Thereafter, we use the estimated model to compare the stresses, peak times, and extinction times for different thicknesses of the elastic piece. Some numerical simulations are carried out to illustrate the different results.
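
A minimal sketch of Euler–Maruyama-based maximum-likelihood estimation is given below, with an illustrative mean-reverting drift standing in for the paper's stress model.

```python
# Euler-Maruyama MLE sketch: transitions X_{k+1} | X_k are treated as Gaussian
# with mean X_k + a(X_k; theta) * dt and variance b(X_k; theta)**2 * dt.
# The drift/diffusion below are placeholders, not the model of the paper.
import numpy as np
from scipy.optimize import minimize

def neg_loglik(theta, x, dt):
    kappa, mu, sigma = theta
    if kappa <= 0 or sigma <= 0:
        return np.inf
    mean = x[:-1] + kappa * (mu - x[:-1]) * dt
    var = sigma ** 2 * dt
    resid = x[1:] - mean
    return 0.5 * np.sum(np.log(2 * np.pi * var) + resid ** 2 / var)

# Synthetic path simulated with the same Euler-Maruyama scheme.
rng = np.random.default_rng(3)
dt, n = 0.01, 2000
x = np.empty(n); x[0] = 0.0
for k in range(n - 1):
    x[k + 1] = x[k] + 2.0 * (1.0 - x[k]) * dt + 0.3 * np.sqrt(dt) * rng.standard_normal()

fit = minimize(neg_loglik, x0=[1.0, 0.5, 0.5], args=(x, dt), method="Nelder-Mead")
print(fit.x)   # estimated (kappa, mu, sigma)
```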


2020 ◽  
Vol 2020 ◽  
pp. 1-10
Author(s):  
Yifan Sun ◽  
Xiang Xu

As a widely used inertial device, a MEMS triaxial accelerometer has zero-bias error, nonorthogonal error, and scale-factor error due to technical defects. Raw readings without calibration might seriously affect the accuracy of an inertial navigation system. Therefore, it is necessary to carry out calibration before using a MEMS triaxial accelerometer. This paper presents a MEMS triaxial accelerometer calibration method based on the maximum likelihood estimation method. The error model of the MEMS triaxial accelerometer is taken into account, and the optimal estimation function is established. The calibration parameters are obtained by the Newton iteration method, which is more efficient and accurate. Compared with the least-squares method, which estimates the parameters of a suboptimal estimation function built on the assumption that the random noise has zero mean, the parameters calibrated by the maximum likelihood estimation method are more accurate and stable. Moreover, the proposed method has a low computational cost, which makes it more practical. Simulation and experimental results using a consumer-grade low-cost MEMS triaxial accelerometer are presented to support the abovementioned advantages of the maximum likelihood estimation method. The proposed method has the potential to be applied to other triaxial inertial sensors.
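
A hedged sketch of a norm-based calibration objective of the kind described above: static measurements should have magnitude g after applying a scale/non-orthogonality matrix and a bias, and under i.i.d. Gaussian residual noise the likelihood maximization reduces to the least-squares problem below. The paper's exact error model and its Newton iteration are not reproduced; a general-purpose optimizer is used instead, and the data are synthetic.

```python
# Static-pose calibration sketch: estimate an upper-triangular scale/
# non-orthogonality matrix S and a bias b so that ||S (a_meas - b)|| = g.
import numpy as np
from scipy.optimize import minimize

G = 9.80665  # local gravity magnitude (m/s^2)

def residuals(params, meas):
    s = params[:6]                                # upper-triangular entries of S
    b = params[6:]
    S = np.array([[s[0], s[1], s[2]],
                  [0.0,  s[3], s[4]],
                  [0.0,  0.0,  s[5]]])
    cal = (meas - b) @ S.T                        # calibrated specific force
    return np.linalg.norm(cal, axis=1) - G

def cost(params, meas):
    r = residuals(params, meas)
    return np.sum(r ** 2)

# Synthetic static measurements in random orientations (placeholder data).
rng = np.random.default_rng(4)
dirs = rng.standard_normal((50, 3)); dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)
meas = dirs * G * 1.02 + np.array([0.1, -0.05, 0.2]) + 0.01 * rng.standard_normal((50, 3))

x0 = np.array([1.0, 0, 0, 1.0, 0, 1.0, 0, 0, 0])   # identity scale, zero bias
fit = minimize(cost, x0, args=(meas,), method="BFGS")
print(fit.x)   # estimated scale/non-orthogonality and bias parameters
```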


2014 ◽  
Vol 1070-1072 ◽  
pp. 2073-2078
Author(s):  
Xiu Ji ◽  
Hui Wang ◽  
Chuan Qi Zhao ◽  
Xu Ting Yan

It is difficult to estimate the parameters of the Weibull distribution model using maximum likelihood estimation based on particle swarm optimization (PSO) theory, because PSO easily falls into premature convergence and requires more variables. Therefore, ant colony algorithm theory was introduced into the maximum likelihood method, and a parameter estimation method based on the ant colony algorithm was proposed. A simulated example verifies the feasibility and effectiveness of this method by comparing it with the ant colony algorithm and PSO.
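
For context, the objective that such a metaheuristic search minimizes is the Weibull negative log-likelihood sketched below; scipy's differential evolution is used here only as a stand-in global optimizer and is not the ant colony algorithm of the paper.

```python
# Two-parameter Weibull negative log-likelihood minimized by a global
# optimizer (stand-in for the ant colony / PSO searches discussed above).
import numpy as np
from scipy.optimize import differential_evolution

def weibull_neg_loglik(params, x):
    k, lam = params                                # shape, scale
    z = x / lam
    return -np.sum(np.log(k / lam) + (k - 1) * np.log(z) - z ** k)

x = np.random.default_rng(5).weibull(2.0, size=500) * 3.0   # synthetic sample
fit = differential_evolution(weibull_neg_loglik,
                             bounds=[(0.1, 10), (0.1, 10)], args=(x,))
print(fit.x)   # estimated (shape, scale), close to (2.0, 3.0)
```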

