Probabilistic Processes
Recently Published Documents


TOTAL DOCUMENTS: 97 (FIVE YEARS: 8)

H-INDEX: 24 (FIVE YEARS: 2)

2021 ◽ Vol 145 ◽ pp. 179-198
Author(s): Igor Panasiuk

This paper deals with a psycholinguistic method for investigating the cognitive process of text reception and the process of translation: the free associative experiment. The psycholinguistic experiment can be seen as a complex method of investigation. Its application is therefore embedded in a translational experiment on the polyvariety of translation: two German translations of Mikhail Bulgakov's novel The Master and Margarita and of Boris Pasternak's novel Doctor Zhivago are analyzed within the framework of the psycholinguistic experiment. The source of the polyvariety of translation is the subjective character of the interpretation of meaning, which rests on probabilistic processes and associative meanings. Emotions play an important role here. The empirical data obtained will be used for didactic purposes in the training of prospective translators.


Quantum ◽ 2020 ◽ Vol 4 ◽ pp. 280
Author(s): Elie Wolfe ◽ David Schmid ◽ Ana Belén Sainz ◽ Ravi Kunjwal ◽ Robert W. Spekkens

We take a resource-theoretic approach to the problem of quantifying nonclassicality in Bell scenarios. The resources are conceptualized as probabilistic processes from the setting variables to the outcome variables having a particular causal structure, namely, one wherein the wings are only connected by a common cause. We term them "common-cause boxes". We define the distinction between classical and nonclassical resources in terms of whether or not a classical causal model can explain the correlations. One can then quantify the relative nonclassicality of resources by considering their interconvertibility relative to the set of operations that can be implemented using a classical common cause (which correspond to local operations and shared randomness). We prove that the set of free operations forms a polytope, which in turn allows us to derive an efficient algorithm for deciding whether one resource can be converted to another. We moreover define two distinct monotones with simple closed-form expressions in the two-party binary-setting binary-outcome scenario, and use these to reveal various properties of the pre-order of resources, including a lower bound on the cardinality of any complete set of monotones. In particular, we show that the information contained in the degrees of violation of facet-defining Bell inequalities is not sufficient for quantifying nonclassicality, even though it is sufficient for witnessing nonclassicality. Finally, we show that the convexly extremal quantumly realizable correlations, which form a continuous set, all lie at the top of the pre-order of quantumly realizable correlations. In addition to providing new insights on Bell nonclassicality, our work also sets the stage for quantifying nonclassicality in more general causal networks.
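
As a minimal, self-contained illustration of the polytope idea in this abstract (a sketch under stated assumptions, not the authors' algorithm), the following Python snippet uses a linear program to decide whether a two-party, binary-setting, binary-outcome behavior p(a,b|x,y) lies in the local polytope, i.e. admits a classical common-cause explanation. It assumes numpy and scipy are available; the flat indexing convention is invented for the example.

```python
# Sketch: local-polytope membership test for a behavior p(a,b|x,y)
# in the two-party, binary-setting, binary-outcome Bell scenario.
import numpy as np
from itertools import product
from scipy.optimize import linprog

def local_vertices():
    """Return the 16 deterministic behaviors as columns of a matrix.

    A deterministic strategy is a pair of response functions a = f(x),
    b = g(y) with f, g: {0,1} -> {0,1}; its behavior vector is indexed
    flatly by (a, b, x, y).
    """
    cols = []
    for f in product((0, 1), repeat=2):      # Alice's response function
        for g in product((0, 1), repeat=2):  # Bob's response function
            v = np.zeros(16)
            for a, b, x, y in product((0, 1), repeat=4):
                idx = ((a * 2 + b) * 2 + x) * 2 + y
                v[idx] = 1.0 if (f[x] == a and g[y] == b) else 0.0
            cols.append(v)
    return np.column_stack(cols)             # shape (16, 16)

def is_classical(p):
    """True iff p is a convex mixture of deterministic vertices,
    decided by a feasibility LP: find w >= 0, V w = p, sum(w) = 1."""
    V = local_vertices()
    n = V.shape[1]
    A_eq = np.vstack([V, np.ones((1, n))])
    b_eq = np.concatenate([p, [1.0]])
    res = linprog(c=np.zeros(n), A_eq=A_eq, b_eq=b_eq,
                  bounds=[(0, None)] * n, method="highs")
    return res.success

# Example: the PR box, p(a,b|x,y) = 1/2 if a XOR b == x*y, else 0.
pr = np.zeros(16)
for a, b, x, y in product((0, 1), repeat=4):
    if (a ^ b) == (x & y):
        pr[((a * 2 + b) * 2 + x) * 2 + y] = 0.5
print(is_classical(pr))  # expected: False
```

The PR box maximally violates the CHSH inequality, so it falls outside the local polytope and the LP is infeasible; any local behavior would instead yield a feasible weight vector w.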


2020 ◽ Vol 813 ◽ pp. 20-69
Author(s): Valentina Castiglioni ◽ Michele Loreti ◽ Simone Tini

2019 ◽ Vol 25 (5) ◽ pp. 1085-1100
Author(s): Ossi Ylijoki ◽ Jari Porras

Purpose: The purpose of this paper is to present a process-theory-based model of big data value creation in a business context. The authors approach the topic from the viewpoint of a single firm.
Design/methodology/approach: The authors reflect the current big data literature in two widely used value creation frameworks and arrange the results from a process theory perspective.
Findings: The model, consisting of four probabilistic processes, provides a "recipe" for converting big data investments into firm performance. The recipe helps practitioners understand the ingredients and complexities that may promote or hinder the performance impact of big data in a business context.
Practical implications: The model acts as a framework for understanding the necessary conditions in the conversion process and the relationships between them. This helps to focus on the success factors that promote positive performance.
Originality/value: Using well-established frameworks and process components, the authors synthesize papers related to big data value creation into a holistic model that explains how big data investments translate into economic performance, and why the conversion sometimes fails. While the authors rely on existing theories and frameworks, they claim that the arrangement and application of these elements to the big data context is novel.
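
The abstract describes the model only at a conceptual level. As a hypothetical illustration of why a chain of four probabilistic processes makes conversion fragile (the stage names and probabilities below are invented for the example, not taken from the paper), a short Monte Carlo sketch in Python:

```python
# Hypothetical sketch: big data value creation as four chained
# probabilistic stages; the overall conversion rate is the product
# of the per-stage success probabilities.
import random

STAGES = [
    ("data acquisition", 0.9),
    ("analytics capability", 0.7),
    ("insight-to-decision", 0.6),
    ("decision-to-performance", 0.5),
]

def simulate_conversion(n_firms=100_000, rng=random.Random(42)):
    """Estimate the share of big data investments that survive all
    four stages and translate into firm performance."""
    successes = 0
    for _ in range(n_firms):
        if all(rng.random() < p for _, p in STAGES):
            successes += 1
    return successes / n_firms

print(f"estimated conversion rate: {simulate_conversion():.3f}")
# Analytically: 0.9 * 0.7 * 0.6 * 0.5 = 0.189
```

Even with individually plausible stage probabilities, fewer than a fifth of investments convert, which is consistent with the paper's observation that the conversion sometimes fails.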


2019
Author(s): Lluis Oviedo

There has been extensive work on understanding belief from psychological, philosophical and neurobiological perspectives. Meanwhile, artificial intelligence has produced compelling developments that can enrich and update the brain-as-a-computer metaphor, and it has tried to better represent beliefs as cognitive probabilistic processes. In parallel, there has been a surge of research in the complexity sciences, with applications ranging from medicine to finance. Some authors have already linked the connected nature of belief to the behaviour of complex networks. We would like to expand this approach and understand belief as a complex system whose main functions are to provide a model of the world (including the individual and her surroundings) and to produce guidelines for action. The complex-system perspective allows us to understand, in a comprehensive manner, properties of belief systems that many authors have begun to study in isolation. Notably, it provides a framework for studying the important phenomena of belief formation and change as processes of emergence and adaptation.
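
As a minimal sketch of the "beliefs as cognitive probabilistic processes" idea mentioned above (illustrative only, not the author's model; the numbers are invented), Bayesian updating shows a degree of belief changing gradually as evidence accumulates:

```python
# Sketch: belief in a hypothesis H updated by Bayes' rule as
# successive pieces of confirming evidence arrive.
def bayes_update(prior, likelihood_h, likelihood_not_h):
    """Posterior P(H|E) from prior P(H) and likelihoods P(E|H), P(E|~H)."""
    evidence = likelihood_h * prior + likelihood_not_h * (1 - prior)
    return likelihood_h * prior / evidence

belief = 0.2                       # initial degree of belief in H
for _ in range(3):                 # three pieces of confirming evidence
    belief = bayes_update(belief, likelihood_h=0.8, likelihood_not_h=0.3)
    print(f"updated belief: {belief:.3f}")  # 0.400, 0.640, 0.826
```

Belief rises gradually rather than flipping at once, matching the view of belief change as an adaptive process rather than a discrete switch.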

