A Bayesian Belief Network Methodology for Modeling Social Systems in Virtual Communities

Author(s):  
Ben K. Daniel ◽  
Juan-Diego Zapata-Rivera ◽  
Gordon I. McCalla

Bayesian belief networks (BBNs) are increasingly used for understanding and simulating computational models in many domains. Although BBN techniques are an elegant way of capturing uncertainty, the knowledge engineering effort required to create and initialize a network has prevented many researchers from using them. Even though the structure of the network and its conditional and initial probabilities can be learned from data, data is not always available or is too costly to obtain. Further, current algorithms for learning the relationships among variables and the initial and conditional probabilities from data are often complex and cumbersome to employ. Qualitative approaches to the creation of graphical models can be used to build initial computational models that help researchers analyze complex problems and provide guidance and support for decision-making. Once created, initial BBN models can be refined as appropriate data is obtained. This chapter extends the use of BBNs to help experts make sense of complex social systems (e.g., social capital in virtual communities) by using a Bayesian model as an interactive simulation tool. Scenarios are used to update the model and to find out whether the model is consistent with the experts' beliefs. A sensitivity analysis was conducted to help explain how the model reacted to different sets of evidence. We are currently refining the initial probability values in the model using empirical data and developing more authentic scenarios to further validate it. We also elaborate on how database technologies were used to support the current approach and describe opportunities for future database tools needed to support this type of work.
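The scenario-based updating the abstract describes can be illustrated with a minimal two-node belief network. All variable names (trust, interaction) and probability values below are hypothetical, chosen only to show the mechanics of revising a belief when scenario evidence is entered; they are not taken from the chapter's model.

```python
# Minimal two-node belief-network update: observing evidence on one
# variable ("interaction") revises belief in a hidden one ("trust").

# Hypothetical prior belief that trust in the community is high
p_trust = {"high": 0.4, "low": 0.6}

# Hypothetical conditional probability table: P(interaction | trust)
p_interaction_given_trust = {
    "high": {"frequent": 0.8, "rare": 0.2},
    "low":  {"frequent": 0.3, "rare": 0.7},
}

def posterior_trust(observed_interaction):
    """Bayes' rule: P(trust | interaction) is proportional to
    P(interaction | trust) * P(trust), then normalized."""
    unnormalized = {
        t: p_interaction_given_trust[t][observed_interaction] * p_trust[t]
        for t in p_trust
    }
    z = sum(unnormalized.values())
    return {t: v / z for t, v in unnormalized.items()}

# Scenario: frequent interaction is observed, raising belief in high trust.
post = posterior_trust("frequent")
print(post)
```

Entering the opposite evidence (`"rare"`) shifts the posterior toward low trust, which is exactly the kind of consistency check against expert beliefs the abstract describes.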


Author(s):  
Ben Kei Daniel

Bayesian belief networks (BBNs) are increasingly used for understanding different problems in many domains. Although BBN techniques are an elegant way of capturing uncertainty, the knowledge engineering effort required to create and initialize a network has prevented many researchers from using them. Even though the structure of the network and its conditional and initial probabilities can be learned from data, data is not always available or is too costly to obtain. Furthermore, current algorithms used to learn the relationships among variables and the initial and conditional probabilities from data are often complex and cumbersome to employ. A qualitative Bayesian network approach was introduced to address some of the difficulties in building models that depend mainly on quantitative data. Building BBN models from quantitative data presupposes that relationships among the variables or concepts of interest are known and that the variables can be correlated, causally related, or independent of one another. The interdependencies among the variables enable more reliable inferences, which in turn help in making informed decisions about the results of the model. This chapter presents qualitative techniques and algorithms for creating Bayesian belief network models, simplifying the construction of Bayesian models into a few steps. The goal of the chapter is to introduce the reader to the basic principles underlying the construction of Bayesian belief networks.
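One step of the qualitative construction the abstract alludes to is eliciting influence strengths from experts as labels rather than numbers, then mapping those labels onto conditional probabilities. The label scale and the linear mapping below are assumptions for illustration, not the chapter's actual calibration.

```python
# Sketch: turn a qualitative influence label elicited from an expert
# into a row of a conditional probability table (CPT).

# Hypothetical five-point qualitative scale with integer weights
QUALITATIVE_SCALE = {
    "very weak": 1, "weak": 2, "moderate": 3, "strong": 4, "very strong": 5,
}

def cpt_row(label):
    """Map a qualitative influence label to P(effect = true | cause)
    by normalizing the label's weight against the maximum weight."""
    w = QUALITATIVE_SCALE[label]
    p_true = w / max(QUALITATIVE_SCALE.values())
    return {"true": p_true, "false": 1.0 - p_true}

# Hypothetical expert judgement: shared values strongly influence
# a sense of community.
row = cpt_row("strong")
print(row)
```

A refinement pass, as the abstract notes, would later replace these elicited values with probabilities estimated from empirical data.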


Author(s):  
Mythili K. ◽  
Manish Narwaria

Quality assessment of audiovisual (AV) signals is important from the perspective of system design, optimization, and management of a modern multimedia communication system. However, automatic prediction of AV quality via computational models remains challenging. In this context, machine learning (ML) appears to be an attractive alternative to traditional approaches, especially when such assessment must be made in a no-reference (i.e., the original signal is unavailable) fashion. While the development of ML-based quality predictors is desirable, we argue that proper assessment and validation of such predictors is also crucial before they can be deployed in practice. To this end, we raise some fundamental questions about the current approach to ML-based model development for AV quality assessment, and for signal processing in multimedia communication in general. We also identify specific limitations of the current validation strategy that have implications for the analysis and comparison of ML-based quality predictors. These include a lack of consideration of: (a) data uncertainty, (b) domain knowledge, (c) the explicit learning ability of the trained model, and (d) the interpretability of the resultant model. The primary goal of this article is therefore to shed some light on these factors. Our analysis and recommendations are of particular importance in light of the significant interest in ML methods for multimedia signal processing (specifically where human-labeled data is used) and the lack of discussion of these issues in the existing literature.


Author(s):  
Н. Н. Смирнов ◽  
В. В. Тюренкова ◽  
В. Ф. Никитин

The development of algorithms and software for computing multiscale combustion processes is a relevant interdisciplinary field of fundamental research that combines methods from information technology, the mechanics of multicomponent continua, combustion chemistry, and mathematical modeling. It gains relevance every year due to the intensive development of computational methods and models and the growth of supercomputing performance. The practical applications of the proposed computational models and methods include power engineering, engine manufacturing, explosion and fire safety, and the intensification of mineral recovery by thermochemical stimulation of the reservoir. The key simulation problems are: (a) multiscale behavior, which prevents all of the processes involved from being simulated on a single grid, even a scalable one; and (b) the stiffness and large dimensionality of the system of differential equations describing the chemical kinetics, whose solution may take up to 80% of the processor time. This paper surveys the research already conducted at the Scientific Research Institute for System Analysis (SRISA) and analyzes the difficulties faced by the researchers. It also proposes new ways of overcoming the computational difficulties and outlines their implementation. The multiscale problem can be addressed with multi-level modeling approaches, in which a detailed solution of a smaller-scale problem is processed and introduced as a component of a larger-scale model. To reduce the integration time of the multi-stage chemical kinetics equations, a current trend is to apply neural network approaches and methods within the computational models under development. This approach is currently being developed at the Department of Computing Systems in collaboration with the Center for Optical-Neural Technologies, SRISA.
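The stiffness problem named in the abstract can be shown with a deliberately tiny example: for the toy kinetics equation y' = -k·y with a large rate constant k, an explicit Euler step diverges while an implicit (backward) Euler step of the same size stays stable. This is a generic illustration of why stiff kinetics forces expensive implicit integration, not the paper's actual solver.

```python
# Toy stiff kinetics: y' = -k*y with a fast rate constant k.
# With step h such that |1 - k*h| > 1, forward Euler is unstable,
# while backward Euler damps the solution as the exact decay does.

k = 1000.0   # fast reaction rate (the stiff term)
h = 0.01     # step size, far too large for an explicit method
steps = 100

y_explicit = 1.0
y_implicit = 1.0
for _ in range(steps):
    y_explicit = y_explicit * (1.0 - k * h)   # forward Euler: amplifies
    y_implicit = y_implicit / (1.0 + k * h)   # backward Euler: damps

print(y_explicit)  # astronomically large: the explicit scheme blew up
print(y_implicit)  # essentially zero: the implicit scheme tracked the decay
```

Real kinetics systems couple hundreds of such equations with rate constants spanning many orders of magnitude, which is why the paper reports that their integration can consume up to 80% of the processor time and motivates neural network surrogates.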


Author(s):  
Volkan Ustun ◽  
Paul S. Rosenbloom

Realism is required not only in how synthetic characters look but also in how they behave. Many applications, such as simulations, virtual worlds, and video games, require computational models of intelligence that generate realistic and credible behavior for the participating synthetic characters. Sigma (Σ) is being built as a computational model of general intelligence with the long-term goal of understanding and replicating the architecture of the mind, i.e., the fixed structure underlying intelligent behavior. Sigma leverages probabilistic graphical models toward a uniform grand unification of not only traditional cognitive capabilities but also key non-cognitive aspects, creating unique opportunities for the construction of new kinds of non-modular behavioral models. These ambitions strive for complete control of synthetic characters that behave as humanly as possible. In this paper, Sigma is introduced along with two disparate proof-of-concept virtual humans, one conversational and the other a pair of ambulatory agents, that demonstrate its diverse capabilities.


2014 ◽  
Vol 35 ◽  
pp. 259-276 ◽  
Author(s):  
Francisco Campuzano ◽  
Teresa Garcia-Valverde ◽  
Emilio Serrano ◽  
Juan A. Botía

2019 ◽  
Author(s):  
Donald Ray Williams ◽  
Philippe Rast ◽  
Luis Pericchi ◽  
Joris Mulder

Gaussian graphical models (GGMs) are commonly used to characterize the conditional independence structures (i.e., networks) of psychological constructs. Recently, attention has shifted from estimating single networks to estimating networks from various sub-populations, primarily to detect differences or demonstrate replicability. We introduce two novel Bayesian methods for comparing networks that explicitly address these aims. The first is based on the posterior predictive distribution, with Kullback-Leibler divergence as the discrepancy measure, and tests for differences between two multivariate normal distributions. The second approach makes use of Bayesian model selection, with the Bayes factor, and allows for gaining evidence for invariant network structures. This overcomes a limitation of current approaches in the literature that use classical hypothesis testing, where it is only possible to determine whether groups differ significantly from each other. With simulations we show that the posterior predictive method is approximately calibrated under the null hypothesis ($\alpha = 0.05$) and has more power to detect differences than alternative approaches. We then examine the sample sizes necessary for detecting invariant network structures with Bayesian hypothesis testing, and how this is influenced by the choice of prior distribution. The methods are applied to post-traumatic stress disorder symptoms measured in four groups. We end by summarizing our major contribution, namely two novel methods for comparing GGMs, whose applicability extends beyond the social-behavioral sciences. The methods have been implemented in the R package BGGM.
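The discrepancy measure named in the abstract, Kullback-Leibler divergence between normal distributions, has a closed form. The sketch below shows the univariate case for simplicity; the paper works with multivariate normals, where the means become vectors and the variances covariance matrices, but the structure of the formula is the same.

```python
import math

def kl_normal(mu0, sigma0, mu1, sigma1):
    """Closed-form KL( N(mu0, sigma0^2) || N(mu1, sigma1^2) ):
    log(sigma1/sigma0) + (sigma0^2 + (mu0 - mu1)^2) / (2*sigma1^2) - 1/2."""
    return (math.log(sigma1 / sigma0)
            + (sigma0**2 + (mu0 - mu1)**2) / (2.0 * sigma1**2)
            - 0.5)

# Identical distributions have zero divergence; any difference is positive.
print(kl_normal(0.0, 1.0, 0.0, 1.0))  # 0.0
print(kl_normal(0.0, 1.0, 1.0, 1.0))  # 0.5
```

In the posterior predictive test described above, such a divergence between the group-specific distributions serves as the discrepancy statistic, with large observed values indicating that the networks differ.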

