Scalable Algorithms for Inverse and Uncertainty Modelling in Hydrology

Author(s):  
V. Vondrak ◽  
S. Kuchar ◽  
M. Golasowski ◽  
R. Vavrik ◽  
J. Martinovic ◽  
...  
2021 ◽  
pp. 1-11

Author(s):  
V.S. Anoop ◽  
P. Deepak ◽  
S. Asharaf

Online social networks are among the most disruptive platforms on which people communicate about topics ranging from funny cat videos to cancer support. The widespread diffusion of mobile platforms such as smartphones causes the number of messages shared on these platforms to grow rapidly, so more intelligent and scalable algorithms are needed to extract useful information efficiently. This paper proposes a method for retrieving relevant information from social network messages using a distributional-semantics-based framework powered by topic modeling. The proposed framework combines Latent Dirichlet Allocation with a distributional representation of phrases (Phrase2Vec) for effective information retrieval from online social networks. Extensive and systematic experiments on messages collected from Twitter (tweets) show that this approach outperforms several state-of-the-art approaches in precision and accuracy, and that better information retrieval is possible using the proposed method.
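The retrieval idea can be illustrated with a minimal sketch: fit a topic model over short messages and rank messages by similarity in topic space. This uses scikit-learn's LDA as a stand-in; the paper's combined LDA + Phrase2Vec framework, and the toy tweets below, are assumptions for illustration only.

```python
import numpy as np
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

# toy corpus of short social-network messages (illustrative, not the paper's data)
tweets = [
    "funny cat video made my day",
    "new cancer support group meets online",
    "that cat video is hilarious",
    "chemotherapy side effects and support resources",
]

counts = CountVectorizer().fit_transform(tweets)
lda = LatentDirichletAllocation(n_components=2, random_state=0)
doc_topics = lda.fit_transform(counts)      # one topic-mixture row per message

# retrieve messages most similar (cosine in topic space) to the first one
q = doc_topics[0]
sims = doc_topics @ q / (np.linalg.norm(doc_topics, axis=1) * np.linalg.norm(q))
ranking = np.argsort(-sims)
print([tweets[i] for i in ranking])
```

In the paper's framework, phrase embeddings (Phrase2Vec) would additionally capture distributional similarity between multi-word expressions that a bag-of-words topic model alone misses.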


2015 ◽  
Vol 22 (1) ◽  
pp. 21-36 ◽  
Author(s):  
R. N. Kimber ◽  
M. D. Curtis ◽  
F. O. Boundy ◽  
P. H. Diamond ◽  
A. O. Uwaga

2021 ◽  
Vol 47 (2) ◽  
pp. 1-34
Author(s):  
Umberto Villa ◽  
Noemi Petra ◽  
Omar Ghattas

We present an extensible software framework, hIPPYlib, for the solution of large-scale deterministic and Bayesian inverse problems governed by partial differential equations (PDEs) with (possibly) infinite-dimensional parameter fields (which are high-dimensional after discretization). hIPPYlib overcomes the prohibitively expensive nature of Bayesian inversion for this class of problems by implementing state-of-the-art scalable algorithms for PDE-based inverse problems that exploit the structure of the underlying operators, notably the Hessian of the log-posterior. The key property of the algorithms implemented in hIPPYlib is that the solution of the inverse problem is computed at a cost, measured in linearized forward PDE solves, that is independent of the parameter dimension. The mean of the posterior is approximated by the MAP point, which is found by minimizing the negative log-posterior with an inexact matrix-free Newton-CG method. The posterior covariance is approximated by the inverse of the Hessian of the negative log-posterior evaluated at the MAP point; its construction is made tractable by invoking a low-rank approximation of the Hessian of the log-likelihood. Scalable tools for sample generation are also discussed. hIPPYlib makes all of these advanced algorithms easily accessible to domain scientists and provides an environment that expedites the development of new algorithms.
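The low-rank posterior-covariance construction described above can be sketched in dense linear algebra. This is a toy stand-in only: hIPPYlib itself works matrix-free with PDE-based operators, whereas the synthetic prior covariance `Gamma_pr` and data-misfit Hessian `H_m` below are assumptions. The identity used is the standard one: with generalized eigenpairs H_m v_i = λ_i Γ_pr⁻¹ v_i, the posterior covariance is Γ_post = Γ_pr − Σ_i (λ_i/(1+λ_i)) v_i v_iᵀ, truncated at the r dominant modes.

```python
import numpy as np
from scipy.linalg import eigh

rng = np.random.default_rng(0)
n = 6
# synthetic SPD prior covariance and low-rank data-misfit Hessian (stand-ins
# for the discretized PDE-based operators)
A = rng.standard_normal((n, n))
Gamma_pr = A @ A.T + n * np.eye(n)   # prior covariance
Prec_pr = np.linalg.inv(Gamma_pr)    # prior precision
B = rng.standard_normal((n, 2))
H_m = B @ B.T                        # data-misfit Hessian of rank 2

# generalized eigenproblem H_m v = lam * Prec_pr v;
# scipy returns V with V.T @ Prec_pr @ V = I
lam, V = eigh(H_m, Prec_pr)
lam, V = lam[::-1], V[:, ::-1]       # sort descending

r = 2                                # retain the r dominant modes
D = lam[:r] / (1.0 + lam[:r])
Gamma_post = Gamma_pr - (V[:, :r] * D) @ V[:, :r].T

# compare with the exact inverse of the posterior Hessian
exact = np.linalg.inv(H_m + Prec_pr)
print(np.max(np.abs(Gamma_post - exact)))
```

Because `H_m` has rank 2 here, truncating at r = 2 recovers the exact posterior covariance; in the PDE setting the data-misfit Hessian is only numerically low-rank, and the truncation incurs a controlled error.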


2016 ◽  
Vol 107 ◽  
pp. 22-33 ◽  
Author(s):  
Svetlana Afanasyeva ◽  
Jussi Saari ◽  
Martin Kalkofen ◽  
Jarmo Partanen ◽  
Olli Pyrhönen

Axioms ◽  
2021 ◽  
Vol 10 (4) ◽  
pp. 312
Author(s):  
Haiyang Hou ◽  
Chunyu Zhao

D numbers theory is an extension of Dempster–Shafer evidence theory. It removes the mutual-exclusivity and completeness constraints that Dempster–Shafer theory places on the frame of discernment, so it has been widely used for uncertainty modelling; however, it cannot effectively handle missing information, and unreasonable conclusions are sometimes drawn. This paper proposes a new type of integration representation of D numbers, which compares the data of multiple evaluation items horizontally and can reasonably fill in missing information. We apply this method to the user-experience evaluation of an online live-course platform to verify its effectiveness.
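For orientation, a minimal sketch of the classical integration representation of a D number, I(D) = Σ_i b_i·v_i, on which such extensions build. The paper's new horizontal-comparison variant for filling missing information is not reproduced here, and the example evaluation data are assumptions.

```python
def integrate(d):
    """Classical integration representation of a D number.

    d: list of (b, v) pairs, where b is a real-valued assessment and v its
    belief weight; the weights must sum to at most 1 (incompleteness is allowed).
    """
    total = sum(v for _, v in d)
    assert total <= 1.0 + 1e-12, "D number weights must sum to at most 1"
    return sum(b * v for b, v in d)

# example: an evaluation item assessed 0.7 with belief 0.6 and 0.9 with
# belief 0.3; the remaining 0.1 of belief mass is missing information
d = [(0.7, 0.6), (0.9, 0.3)]
print(integrate(d))  # approximately 0.69
```

The unassigned belief mass (here 0.1) is exactly the missing information that the paper's horizontal comparison across evaluation items is designed to fill in.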


Author(s):  
Hector Geffner

During the 1960s and 1970s, AI researchers explored intuitions about intelligence by writing programs that displayed intelligent behavior. Many good ideas came out of this work, but programs written by hand were not robust or general. After the 1980s, research increasingly shifted to the development of learners capable of inferring behavior and functions from experience and data, and solvers capable of tackling well-defined but intractable models such as SAT, classical planning, Bayesian networks, and POMDPs. The learning approach has achieved considerable success but results in black boxes that lack the flexibility, transparency, and generality of their model-based counterparts. Model-based approaches, on the other hand, require models and scalable algorithms. Model-free learners and model-based solvers indeed have close parallels with Systems 1 and 2 in current theories of the human mind: the first a fast, opaque, and inflexible intuitive mind; the second a slow, transparent, and flexible analytical mind. In this paper, I review developments in AI and draw on these theories to discuss the gap between model-free learners and model-based solvers, a gap that needs to be bridged in order to have intelligent systems that are robust and general.

