Bayesian Probability
Recently Published Documents

TOTAL DOCUMENTS: 323 (FIVE YEARS: 81)
H-INDEX: 27 (FIVE YEARS: 5)

2022, Vol 31 (2), pp. 1-39
Author(s): Olawole Oni, Emmanuel Letier

Release planning—deciding what features to implement in upcoming releases of a software system—is a critical activity in iterative software development. Many release planning methods exist, but most ignore the inevitable uncertainty in estimating software development effort and business value. The article’s objective is to study whether analyzing uncertainty during release planning generates better release plans than if uncertainty is ignored. To study this question, we have developed a novel release planning method under uncertainty, called BEARS, that models uncertainty using Bayesian probability distributions and recommends release plans that maximize expected net present value and expected punctuality. We then compare release plans recommended by BEARS to those recommended by methods that ignore uncertainty on 32 release planning problems. The experiment shows that BEARS recommends release plans with higher expected net present value and expected punctuality than methods that ignore uncertainty, thereby indicating the harmful effects of ignoring uncertainty during release planning. These results highlight the importance of eliciting and analyzing uncertainty in software effort and value estimations and call for increased research in these areas.
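The effect BEARS measures can be illustrated with a minimal Monte Carlo sketch (all feature names, numbers, and the scoring rule are hypothetical, not the BEARS implementation): sampling effort uncertainty shows how a plan's expected punctuality differs from what mean estimates alone would suggest.

```python
import random

random.seed(0)

# Hypothetical features: (business value, effort mean, effort std. dev.).
features = {
    "search": (100.0, 4.0, 1.5),
    "export": (60.0, 2.0, 0.5),
    "sso":    (80.0, 5.0, 2.0),
}

CAPACITY = 8.0      # person-weeks available in the release
N_SAMPLES = 10_000  # Monte Carlo samples

def expected_outcome(plan):
    """Estimate expected delivered value and punctuality of a plan
    (a list of feature names) under Gaussian effort uncertainty."""
    total_value = sum(features[f][0] for f in plan)
    on_time = 0
    for _ in range(N_SAMPLES):
        effort = sum(max(0.0, random.gauss(features[f][1], features[f][2]))
                     for f in plan)
        if effort <= CAPACITY:
            on_time += 1
    punctuality = on_time / N_SAMPLES
    # Crude expected value: full value if delivered on time, none otherwise.
    return total_value * punctuality, punctuality

# Mean effort is 4 + 2 = 6 < 8, yet a non-trivial fraction of sampled
# futures still miss the deadline once uncertainty is accounted for.
value, punctuality = expected_outcome(["search", "export"])
print(f"expected value {value:.1f}, punctuality {punctuality:.2f}")
```

A deterministic planner looking only at mean efforts would call this plan certain to ship on time; sampling the effort distributions reveals the residual risk that expected-punctuality scoring penalizes.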


Author(s): Kalyana Saravanan, Angamuthu Tamilarasi

Big data refers to very large collections of data from which similar data points must be extracted. Clustering is an essential data mining technique for examining such volumes of data. Several techniques have been developed for handling big datasets, but they consume considerable time and space, and accuracy is compromised. To improve clustering accuracy with less complexity, the Sørensen-Dice Indexing based Weighted Iterative X-means Clustering (SDI-WIXC) technique is introduced. SDI-WIXC groups similar data points with higher clustering accuracy and minimal time. First, data points are collected from the big dataset. Then, using weight values, the dataset is partitioned into 'X' clusters. Next, Weighted Iterative X-means Clustering (WIXC) is applied to cluster the data points based on a similarity measure: the Sørensen-Dice index, which measures the similarity between each cluster's weight value and a data point. When a data point is found similar to a cluster's weight value, it is assigned to that cluster. WIXC further improves the cluster assignments through repeated subdivision using a Bayesian probability criterion, which helps group all data points and thereby improves clustering accuracy. Experimental evaluation considers clustering accuracy, clustering time, and space complexity with respect to the number of data points. The results show that the proposed SDI-WIXC technique achieves high clustering accuracy with minimal time and space complexity.
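For illustration, the Sørensen-Dice index that the similarity step relies on can be sketched as follows (a minimal set-based example with hypothetical clusters, not the SDI-WIXC implementation):

```python
def dice_coefficient(a, b):
    """Sørensen-Dice similarity between two sets: 2|A∩B| / (|A| + |B|)."""
    a, b = set(a), set(b)
    if not a and not b:
        return 1.0  # two empty sets are conventionally identical
    return 2 * len(a & b) / (len(a) + len(b))

# Assign a data point to the cluster whose representative it most resembles.
clusters = {"c1": {1, 2, 3, 4}, "c2": {7, 8, 9}}
point = {2, 3, 4, 5}
best = max(clusters, key=lambda c: dice_coefficient(clusters[c], point))
print(best)  # c1: Dice = 2*3/(4+4) = 0.75, versus 0.0 for c2
```

The index ranges from 0 (disjoint) to 1 (identical), which makes it directly usable as the assignment criterion described above.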


Author(s): Vahid Badeli, Sascha Ranftl, Gian Marco Melito, Alice Reinbacher-Köstinger, Wolfgang Von Der Linden, ...

Purpose: This paper aims to introduce a non-invasive and convenient method to detect a life-threatening disease called aortic dissection. Bayesian inference based on an enhanced multi-sensor impedance cardiography (ICG) method is applied to classify signals from healthy and sick patients. Design/methodology/approach: A 3D numerical model consisting of simplified organ geometries is used to simulate the electrical impedance changes in the ICG-relevant domain of the human torso. Bayesian probability theory is used to detect an aortic dissection, providing the probabilities of both cases, a dissected and a healthy aorta. Thus, the reliability and the uncertainty of the disease identification are quantified by this method and may indicate the need for further diagnostic clarification. Findings: The Bayesian classification shows that the enhanced multi-sensor ICG is more reliable in detecting aortic dissection than conventional ICG. Bayesian probability theory allows a rigorous quantification of all uncertainties, so that reliable conclusions can be drawn for the medical treatment of aortic dissection. Originality/value: This paper presents a non-invasive and reliable method, based on numerical simulation, that could be beneficial for the medical management of aortic dissection patients. With this method, clinicians would be able to monitor the patient's status and make better decisions in the treatment procedure for each patient.
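The Bayesian classification step can be illustrated with a minimal sketch of Bayes' rule for two hypotheses (the likelihood and prior values below are hypothetical, not taken from the paper's simulations):

```python
def posterior(likelihood_sick, likelihood_healthy, prior_sick=0.5):
    """Posterior probability of a dissected aorta, given the likelihoods
    of the observed signal under each hypothesis, via Bayes' rule."""
    prior_healthy = 1.0 - prior_sick
    evidence = likelihood_sick * prior_sick + likelihood_healthy * prior_healthy
    return likelihood_sick * prior_sick / evidence

# Hypothetical likelihoods of an observed impedance signal under each model:
p = posterior(likelihood_sick=0.9, likelihood_healthy=0.2, prior_sick=0.5)
print(f"P(dissection | signal) = {p:.3f}")  # 0.818
```

Because the output is a probability rather than a hard label, a value near 0.5 directly signals the diagnostic uncertainty that, as the abstract notes, may call for further clarification.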


2021, Vol 21 (68)
Author(s): Miguel Zapata Ros, Yamil Buenaño Palacios

In its simplest sense, computational thinking is considered as a series of specific skills that programmers use to do their job, but that are also useful to people in their professional and personal lives as a way of organizing the resolution of their problems and of representing the reality around them. In a more elaborate scheme, this complex of skills constitutes a new literacy, or the most substantial part of one, and an inculturation for operating in a new culture: the digital culture of the knowledge society. We have seen how Bayesian probability is used in epidemiology to model the evolution of data on contagion and deaths during COVID, and in natural language processing. The same could be seen in a multitude of cases across the most varied fields of science and process analysis. In this way, with the automation of Bayesian methods and the use of probabilistic graphical models, it is possible to identify patterns and anomalies in voluminous datasets in fields as diverse as linguistic corpora and astronomical maps, or to add functionality to magnetic resonance imaging or to the analysis of purchasing habits with cards, online, or on smartphones. In this new way of proceeding, big data analysis and Bayesian theory come together.
If we regard Bayesian thinking, this way of proceeding, as a further and relevant element of computational thinking, then to what has been said on previous occasions we must now add the idea of generalized computational thinking, which goes beyond education. It is no longer a matter of skills purely associated with ordinary professional or everyday practice, as what we have called computational thinking has been until now, but of preparation for basic research and for research methodology in almost every discipline, because, thus defined, computational thinking is influencing research in almost all areas, both in the sciences and in the humanities. Instruction centered on this component of computational thinking, Bayesian thinking, or including it at an early stage, in Secondary (K-12) education, together with the inverse probability formula, would, following Merrill's First Principles of Instruction and in particular the activation principle, allow these learnings to be activated as very valuable and very complex components at a later stage of professional or research activity, or in the training phase, the undergraduate and postgraduate degrees, of these professions or of those that prepare for these activities and professions.


2021, Vol 3 (1), pp. 6
Author(s): Sascha Ranftl, Wolfgang von der Linden

The quantification of uncertainties of computer simulations due to input parameter uncertainties is paramount to assess a model’s credibility. For computationally expensive simulations, this is often feasible only via surrogate models that are learned from a small set of simulation samples. The surrogate models are commonly chosen and deemed trustworthy based on heuristic measures, and substituted for the simulation in order to approximately propagate the simulation input uncertainties to the simulation output. In the process, the contribution of the uncertainties of the surrogate itself to the simulation output uncertainties is usually neglected. In this work, we specifically address the case of doubtful surrogate trustworthiness, i.e., non-negligible surrogate uncertainties. We find that Bayesian probability theory yields a natural measure of surrogate trustworthiness, and that surrogate uncertainties can easily be included in simulation output uncertainties. For a Gaussian likelihood for the simulation data, with unknown surrogate variance and given a generalized linear surrogate model, the resulting formulas reduce to simple matrix multiplications. The framework contains Polynomial Chaos Expansions as a special case, and is easily extended to Gaussian Process Regression. Additionally, we show a simple way to implicitly include spatio-temporal correlations. Lastly, we demonstrate a numerical example where surrogate uncertainties are in part negligible and in part non-negligible.
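For a generalized linear surrogate with a Gaussian likelihood, the predictive variance indeed reduces to matrix multiplications. A minimal sketch (a toy polynomial surrogate of sin(x) standing in for an expensive simulation; the priors and precisions are assumed, not the paper's setup):

```python
import numpy as np

# Toy "expensive simulation": y = sin(x), run at a few design points only.
X_train = np.linspace(0.0, np.pi, 6)
y_train = np.sin(X_train)

def features(x, degree=3):
    """Polynomial feature map of a generalized linear surrogate."""
    return np.vander(np.atleast_1d(x), degree + 1, increasing=True)

alpha, beta = 1e-2, 1e4  # assumed prior precision and noise precision
Phi = features(X_train)
# Posterior over surrogate weights: covariance S and mean (ridge-like formulas).
S = np.linalg.inv(alpha * np.eye(Phi.shape[1]) + beta * Phi.T @ Phi)
mean_w = beta * S @ Phi.T @ y_train

def predict(x):
    """Predictive mean and variance; the phi·S·phi term is the surrogate's
    own uncertainty, added on top of the noise floor 1/beta."""
    phi = features(x)
    mean = phi @ mean_w
    var = 1.0 / beta + np.einsum("ij,jk,ik->i", phi, S, phi)
    return mean, var

m_in, v_in = predict(np.pi / 2)    # inside the training range
m_out, v_out = predict(2 * np.pi)  # extrapolation: surrogate less trustworthy
print(v_in[0], v_out[0])
```

The predictive variance grows sharply outside the sampled region, which is exactly the "surrogate trustworthiness" signal the abstract describes: neglecting the phi·S·phi term would report the same (small) uncertainty everywhere.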


2021
Author(s): Miguel Zapata-Ros

In its simplest sense, computational thinking is considered as a series of specific skills that help programmers do their job, but that are also useful to people in their professional and personal lives as a way of organizing the resolution of their problems and of representing the reality that surrounds them. In a more elaborate scheme, this complex of skills constitutes a new literacy, or the most substantial part of one, and an inculturation to deal with a new culture, the digital culture of the knowledge society. We have seen how Bayesian probability is used in epidemiology to model the evolution of data on contagion and deaths during COVID, and in natural language processing. The same could be seen in a multitude of cases across the most varied scientific and process-analysis fields. In this way, with the automation of Bayesian methods and the use of probabilistic graphical models, it is possible to identify patterns and anomalies in voluminous datasets in fields as diverse as linguistic corpora and astronomical maps, or to add functionality to magnetic resonance imaging or to card, online, or smartphone purchasing habits. In this new way of proceeding, big data analysis and Bayesian theory are associated. If we regard Bayesian thinking, this way of proceeding, as a further and increasingly relevant element of computational thinking, then to what has been said on previous occasions we must now add the idea of generalized computational thinking, which goes beyond education. It is no longer about aspects purely associated with ordinary professional or everyday practice, as what we have called computational thinking has been until now, but about preparation for basic research and research methodology in almost all disciplines. Because, thus defined, computational thinking is influencing research in almost all areas, both in the sciences and in the humanities.
Instruction focused on this component of computational thinking, Bayesian thinking, or including it at an early stage, in Secondary (K-12) education, together with the inverse probability formula, would, following Merrill's First Principles of Instruction and in particular the activation principle, allow these learnings to be activated as very valuable and very complex components at a later stage of professional or research activity, or in the training phase, the undergraduate and postgraduate degrees, of these professions or of those that prepare for these activities and professions.
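The "inverse probability formula" referred to here is Bayes' theorem, which in its standard two-hypothesis form reads:

```latex
P(H \mid E) \;=\; \frac{P(E \mid H)\,P(H)}{P(E)}
            \;=\; \frac{P(E \mid H)\,P(H)}{P(E \mid H)\,P(H) + P(E \mid \neg H)\,P(\neg H)},
```

where $P(H)$ is the prior probability of a hypothesis and $P(H \mid E)$ its probability after observing the evidence $E$; it "inverts" the readily available likelihood $P(E \mid H)$ into the quantity one actually wants.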


Molecules, 2021, Vol 26 (19), pp. 5987
Author(s): Pier Luigi Gentili

Human interaction with the world is dominated by uncertainty, and probability theory is a valuable tool for facing it. According to the Bayesian definition, probabilities are personal beliefs. Experimental evidence supports the notion that human behavior is highly consistent with Bayesian probabilistic inference in the sensory, motor, and cognitive domains. All the higher-level psychophysical functions of our brain are believed to take the activities of interconnected and distributed networks of neurons in the neocortex as their physiological substrate. Neurons in the neocortex are organized in cortical columns that behave as fuzzy sets. Fuzzy set theory has embraced uncertainty modeling since membership functions were reinterpreted as possibility distributions. The terms of Bayes' formula are conceivable as fuzzy sets, and Bayes' inference becomes a fuzzy inference. According to QBism, quantum probabilities are also Bayesian: they are logical constructs rather than physical realities. It follows that the Born rule is nothing but a kind of quantum law of total probability. Wavefunctions and measurement operators are viewed epistemically; both are similar to fuzzy sets. The new link established between fuzzy logic, neuroscience, and quantum mechanics through Bayesian probability could spark new ideas for the development of artificial intelligence and unconventional computing.
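The idea that the terms of Bayes' formula can be read as fuzzy sets can be sketched loosely as follows (an illustration only: Gaussian membership functions stand in for likelihoods, and the concepts and numbers are hypothetical, not the paper's formalism):

```python
import math

def membership(x, center, width):
    """Gaussian membership function of a fuzzy set (used here as a likelihood)."""
    return math.exp(-((x - center) / width) ** 2)

def fuzzy_bayes(x, sets, priors):
    """Bayes' formula with fuzzy memberships playing the role of likelihoods:
    posterior degree of belief over which fuzzy concept the observation fits."""
    joint = {name: membership(x, *params) * priors[name]
             for name, params in sets.items()}
    total = sum(joint.values())
    return {name: p / total for name, p in joint.items()}

# Two fuzzy concepts, e.g. "cold" and "warm" temperatures (hypothetical):
sets = {"cold": (5.0, 8.0), "warm": (25.0, 8.0)}
priors = {"cold": 0.5, "warm": 0.5}
post = fuzzy_bayes(12.0, sets, priors)
print(post)  # "cold" dominates at 12 degrees
```

The membership values need not sum to one over the domain, which is the possibilistic reading; normalizing the products, as Bayes' rule does, recovers a probabilistic posterior over the fuzzy concepts.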

