The Research Based on Bayesian Behavior Recognition Technology

2014 ◽  
Vol 543-547 ◽  
pp. 2167-2170 ◽  
Author(s):  
Jiang Wu ◽  
Dong Wang

The prior distribution of the incidence of crime is based on a large-scale investigation of historical data on criminals' psychological problems, and the new sample data come from a psychological survey of the people being assessed. The incidence of crime among the assessed people is the posterior distribution that needs to be predicted. Applying Bayesian statistical methods, we can compute the incidence of crime among the assessed and thereby provide a basis for judging whether a suspect is a criminal. The paper verifies the feasibility of the method and points out its limitations and conditions of application.
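A minimal numerical sketch of the idea, assuming a conjugate beta-binomial model (the paper's exact model is not specified here, and all numbers are hypothetical): a prior for the incidence rate is fit from historical survey data, then updated with the new sample to give the posterior incidence.

```python
from scipy import stats

# Hypothetical prior from historical data: roughly 5% incidence.
prior_a, prior_b = 5, 95                  # Beta(5, 95) has mean 0.05

# Hypothetical new sample: 3 positives among 40 assessed people.
positives, n = 3, 40

# Conjugate update: the posterior is Beta(a + successes, b + failures).
posterior = stats.beta(prior_a + positives, prior_b + n - positives)
print(f"posterior mean incidence: {posterior.mean():.3f}")
print(f"95% credible interval: {posterior.ppf([0.025, 0.975]).round(3)}")
```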

Author(s):  
Edward P. Herbst ◽  
Frank Schorfheide

This chapter discusses the most widely used method for generating draws from the posterior distribution of a DSGE model: the random walk Metropolis-Hastings (RWMH) algorithm. In many applications, the DSGE model likelihood function combined with the prior distribution leads to a posterior distribution with a fairly regular elliptical shape, and draws from a simple RWMH algorithm can then be used to obtain an accurate numerical approximation of posterior moments. In many other applications, however, particularly those involving medium- and large-scale DSGE models, the posterior distribution can be very non-elliptical. Irregularly shaped posterior distributions are often caused by identification problems or misspecification. In light of the difficulties caused by irregularly shaped posterior surfaces, the chapter reviews various alternative MH samplers that use alternative proposal distributions.
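For reference, a generic RWMH sampler looks as follows. This is a minimal sketch run on a toy bivariate normal target standing in for a DSGE posterior, not the authors' implementation; the proposal scaling is arbitrary.

```python
import numpy as np

def rwmh(log_post, theta0, cov, n_draws, rng=np.random.default_rng(0)):
    """Random walk Metropolis-Hastings with a Gaussian proposal."""
    theta = np.asarray(theta0, dtype=float)
    lp = log_post(theta)
    chol = np.linalg.cholesky(cov)
    draws = np.empty((n_draws, theta.size))
    for i in range(n_draws):
        prop = theta + chol @ rng.standard_normal(theta.size)
        lp_prop = log_post(prop)
        # Accept with probability min(1, posterior ratio).
        if np.log(rng.uniform()) < lp_prop - lp:
            theta, lp = prop, lp_prop
        draws[i] = theta
    return draws

# Toy elliptical target: a correlated bivariate normal log density.
Sigma = np.array([[1.0, 0.8], [0.8, 1.0]])
Sigma_inv = np.linalg.inv(Sigma)
log_post = lambda th: -0.5 * th @ Sigma_inv @ th

draws = rwmh(log_post, np.zeros(2), 0.5 * Sigma, 20_000)
print(draws.mean(axis=0))   # posterior means, approx. [0, 0]
print(np.cov(draws.T))      # posterior covariance, approx. Sigma
```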


2021 ◽  
Vol 13 (5) ◽  
pp. 2950
Author(s):  
Su-Kyung Sung ◽  
Eun-Seok Lee ◽  
Byeong-Seok Shin

Climate change increases the frequency of localized heavy rains and typhoons. As a result, mountain disasters, such as landslides and earthworks, continue to occur, causing damage to roads and residential areas downstream. Moreover, large-scale civil engineering works, including dam construction, cause rapid changes in the terrain, which harm the stability of residential areas. Disasters such as landslides and earthworks occur over wide areas, and field investigation has its limits; thus, many studies model terrain geometrically and observe how it changes under external factors. However, conventional topographic methods express their results in a way that only people with specialized knowledge can interpret, with little consideration for three-dimensional visualization that helps non-experts understand. A way is needed to express changes in terrain in real time and make them intuitive to non-experts. In conventional height-based terrain modeling and simulation, some of the sampled data are irregularly distorted and do not show the exact terrain shape. The proposed method uses a hierarchical vertex cohesion map to correct terrain modeled inaccurately by uniform height sampling, and compensates for geometric errors using Hausdorff distances rather than considering only the elevation differences of the terrain. Mesh reconstruction, which triangulates the three vertices placed at each location into the smallest unit of 3D model data, can be done at high speed on graphics processing units (GPUs). Our experiments confirm that changes in terrain can be expressed accurately and quickly compared with existing methods. These capabilities can improve the sustainability of residential spaces by predicting the damage caused by mountain disasters or civil engineering works around a city, in a form that non-experts can easily understand.
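The hierarchical vertex cohesion map itself is not reproduced here; as a hedged sketch, the Hausdorff error measure the method relies on can be computed between a dense reference point set and a coarse, noisy resample as follows (all data synthetic).

```python
import numpy as np
from scipy.spatial.distance import directed_hausdorff

rng = np.random.default_rng(1)
# Dense "ground truth" terrain points and a coarse, slightly noisy resample.
reference = rng.uniform(size=(500, 3))
sampled = reference[::10] + rng.normal(scale=0.01, size=(50, 3))

# Symmetric Hausdorff distance: worst-case deviation in either direction,
# capturing geometric error beyond a simple elevation difference.
d_ab = directed_hausdorff(reference, sampled)[0]
d_ba = directed_hausdorff(sampled, reference)[0]
print("Hausdorff error:", max(d_ab, d_ba))
```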


2017 ◽  
Vol 27 (9) ◽  
pp. 2872-2882 ◽  
Author(s):  
Zhuozhao Zhan ◽  
Geertruida H de Bock ◽  
Edwin R van den Heuvel

Clinical trials may use a sequential introduction of a new treatment to determine its efficacy or effectiveness with respect to a control treatment. The reasons for choosing a particular switch design have different origins: they may be implemented for ethical or logistic reasons, or for studying disease-modifying effects. Large-scale pragmatic trials with complex interventions often use stepped wedge designs (SWDs), where all participants start in the control group and, during the trial, the control treatment is switched to the new intervention at different moments. They typically use cross-sectional data and cluster randomization. On the other hand, new drugs for inhibition of cognitive decline in Alzheimer's or Parkinson's disease typically use delayed start designs (DSDs). Here, participants start in a parallel group design, and at a certain moment in the trial (part of) the control group switches to the new treatment. These studies are longitudinal in nature, and individuals are randomized. Statistical methods for these unidirectional switch designs (USDs) are quite complex and difficult to compare, as they have been developed by various authors under different terminologies, model specifications, and assumptions. This imposes unnecessary barriers for researchers to compare results or choose the most appropriate method for their own needs. This paper provides an overview of past and current statistical developments for the USDs (SWD and DSD). All designs are formulated in a unified framework of treatment patterns to make comparisons between switch designs easier. The focus is primarily on statistical models, methods of estimation, sample size calculation, and optimal designs for estimation of the treatment effect. Other relevant open issues are discussed as well to provide suggestions for future research on USDs.
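As an illustration of the treatment-pattern view (a hypothetical sketch, not the authors' notation), a standard stepped wedge design can be written as a 0/1 matrix with clusters as rows and periods as columns:

```python
import numpy as np

def stepped_wedge_pattern(n_clusters, n_periods):
    """0/1 treatment indicators: cluster i switches at period i + 1."""
    X = np.zeros((n_clusters, n_periods), dtype=int)
    for i in range(n_clusters):
        X[i, i + 1:] = 1   # control first, intervention after the switch
    return X

print(stepped_wedge_pattern(4, 5))
# [[0 1 1 1 1]
#  [0 0 1 1 1]
#  [0 0 0 1 1]
#  [0 0 0 0 1]]
```

A delayed start design fits the same framework: a parallel design whose control rows switch to 1 partway through, while the treatment rows are 1 throughout.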


2019 ◽  
Author(s):  
Johnny van Doorn ◽  
Dora Matzke ◽  
Eric-Jan Wagenmakers

Sir Ronald Fisher's venerable experiment "The Lady Tasting Tea" is revisited from a Bayesian perspective. We demonstrate how a similar tasting experiment, conducted in a classroom setting, can familiarize students with several key concepts of Bayesian inference, such as the prior distribution, the posterior distribution, the Bayes factor, and sequential analysis.
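A minimal sketch of such a classroom analysis, assuming a beta-binomial model with a uniform prior and hypothetical data (9 correct classifications out of 10 cups):

```python
from math import comb
from scipy.stats import beta

# Hypothetical data: the lady correctly classifies 9 of 10 cups.
k, n = 9, 10

# Posterior under a uniform Beta(1, 1) prior on her success rate theta.
posterior = beta(1 + k, 1 + n - k)
print(f"posterior mean: {posterior.mean():.3f}")

# Bayes factor BF10: H1 (theta ~ Beta(1, 1)) vs H0 (theta = 0.5, guessing).
p_h0 = comb(n, k) * 0.5**n   # likelihood of the data under pure guessing
p_h1 = 1 / (n + 1)           # marginal likelihood under the uniform prior
print(f"BF10 = {p_h1 / p_h0:.2f}")   # approx. 9.3 in favor of H1
```

Running the update cup by cup, rather than once on the full data, illustrates the sequential-analysis point: the posterior after each cup serves as the prior for the next.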


2021 ◽  
Vol 13 (19) ◽  
pp. 10741
Author(s):  
Ovidija Eičaitė ◽  
Gitana Alenčikienė ◽  
Ingrida Pauliukaitytė ◽  
Alvija Šalaševičienė

More than half of food waste is generated at the household level; it is therefore important to tackle the problem of consumer food waste. This study aimed to identify factors differentiating high food wasters from low food wasters. A large-scale survey was conducted in Lithuania: a total of 1001 respondents, selected using a multi-stage probability sample, participated. Data were collected through face-to-face interviews using a structured questionnaire. Binary logistic regression modelling was used to analyse the effect of socio-demographics, food-related behaviours, attitudes towards food waste, and knowledge of date labelling on levels of food waste. Impulse buying, inappropriate food preparation practices, non-consumption of leftovers, lack of concern about food waste, and worry about food poisoning were related to higher food waste. On the other hand, correct planning practices and knowledge of date labelling were related to lower food waste. The findings have practical implications for designing interventions aimed at reducing consumer food waste.
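A hedged sketch of the kind of binary logistic regression described, using simulated data and hypothetical variable names rather than the study's survey:

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

# Simulated stand-in data; variable names are hypothetical.
rng = np.random.default_rng(0)
n = 1001
df = pd.DataFrame({
    "impulse_buying": rng.integers(0, 2, n),
    "plans_meals": rng.integers(0, 2, n),
    "knows_date_labels": rng.integers(0, 2, n),
})
logit = (0.8 * df["impulse_buying"]
         - 0.6 * df["plans_meals"]
         - 0.5 * df["knows_date_labels"])
df["high_waster"] = (rng.uniform(size=n) < 1 / (1 + np.exp(-logit))).astype(int)

# Binary logistic regression of high vs low food waster status.
X = sm.add_constant(df[["impulse_buying", "plans_meals", "knows_date_labels"]])
model = sm.Logit(df["high_waster"], X).fit(disp=0)
print(np.exp(model.params))   # odds ratios: >1 raises, <1 lowers the odds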


2021 ◽  
Vol 2021 ◽  
pp. 1-7
Author(s):  
Jie Zhao

With the continuous development of multimedia social networks, online public opinion information is becoming more and more widespread. The rule extraction matrix algorithm can effectively improve the probability of detecting abnormal information data. Abnormality detection of network information data is realized through probability calculation: the prior probability is calculated in order to detect abnormal network data. Practical results show that the rule extraction matrix algorithm can effectively control the false positive rate on sample data, improve detection accuracy, and provide efficient detection performance.
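The rule extraction matrix algorithm itself is not specified in the abstract; as a hedged stand-in, the prior-to-posterior step it builds on is just Bayes' rule applied to a binary "abnormal" label (all rates here hypothetical):

```python
# Hypothetical rates for a rule that flags suspicious network data.
prior_abnormal = 0.02          # prior probability that a data item is abnormal
p_flag_given_abnormal = 0.95   # rule sensitivity
p_flag_given_normal = 0.05     # rule false positive rate

# Bayes' rule: posterior probability of abnormality given a flag.
p_flag = (p_flag_given_abnormal * prior_abnormal
          + p_flag_given_normal * (1 - prior_abnormal))
posterior = p_flag_given_abnormal * prior_abnormal / p_flag
print(f"P(abnormal | flagged) = {posterior:.3f}")   # approx. 0.279
```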


2017 ◽  
Vol 76 (3) ◽  
pp. 213-219 ◽  
Author(s):  
Johanna Conrad ◽  
Ute Nöthlings

Valid estimation of usual dietary intake in epidemiological studies is a topic of present interest. The aim of the present paper is to review recent literature on innovative approaches, focussing on: (1) the requirements to assess usual intake and (2) the application in large-scale settings. Recently, a number of technology-based self-administered tools have been developed, including short-term instruments such as web-based 24-h recalls, mobile food records or simple closed-ended questionnaires that assess the food intake of the previous 24 h. Due to their advantages in terms of feasibility and cost-effectiveness, these tools may be superior to conventional assessment methods in large-scale settings. New statistical methods have been developed to combine dietary information from repeated 24-h dietary recalls and FFQ. Conceptually, these statistical methods presume that the usual intake of a food equals the probability of consuming that food on a given day, multiplied by the average amount consumed on a typical consumption day. Repeated 24-h recalls from the same individual provide information on consumption probability and amount. In addition, the FFQ can add information on the intake frequency of rarely consumed foods. It has been suggested that this combined approach may provide high-quality dietary information. A promising direction for estimating usual intake in large-scale settings is the integration of both statistical methods and new technologies. Studies are warranted to assess the validity of estimated usual intake in comparison with biomarkers.
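The stated decomposition is easy to make concrete. A minimal sketch for one subject's repeated 24-h recalls (hypothetical amounts in grams; zero means the food was not consumed that day):

```python
import numpy as np

# Hypothetical repeated 24-h recalls for one subject and one food.
recalls = np.array([0.0, 150.0, 0.0, 0.0, 200.0, 0.0])

p_consume = np.mean(recalls > 0)        # consumption probability: 2/6
amount = recalls[recalls > 0].mean()    # mean amount on consumption days: 175 g

# Usual intake = consumption probability x typical consumption-day amount.
usual_intake = p_consume * amount
print(f"usual intake: {usual_intake:.1f} g/day")   # approx. 58.3
```

In the methods reviewed, these two components are modeled statistically across the population (with the FFQ informing the frequency of rarely consumed foods) rather than taken as raw per-subject means.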


1978 ◽  
Vol 3 (2) ◽  
pp. 179-188
Author(s):  
Robert K. Tsutakawa

The comparison of two regression lines is often meaningful or of interest over a finite interval I of the independent variable. When the prior distribution of the parameters is a natural conjugate, the posterior distribution of the distances between two regression lines at the end points of I is bivariate t. The posterior probability that one regression line lies above the other uniformly over I is numerically evaluated using this distribution.
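A hedged Monte Carlo sketch of the final step, with hypothetical posterior parameters: since the lines are straight, one lies above the other uniformly over I exactly when their difference is positive at both endpoints, so the required probability is the bivariate-t mass of the positive quadrant.

```python
import numpy as np
from scipy import stats

# Hypothetical posterior for the distances d(a), d(b) between the lines
# at the endpoints of I (the paper evaluates this probability numerically).
loc = np.array([0.4, 1.1])             # posterior means of d(a), d(b)
shape = np.array([[0.30, 0.10],
                  [0.10, 0.50]])        # posterior scale matrix
df = 12                                 # degrees of freedom

dist = stats.multivariate_t(loc=loc, shape=shape, df=df)
draws = dist.rvs(size=100_000, random_state=0)
print("P(line 1 above line 2 over I):",
      np.mean((draws > 0).all(axis=1)))
```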

