Quantum Propensity in Economics

2022 ◽  
Vol 4 ◽  
Author(s):  
David Orrell ◽  
Monireh Houshmand

This paper describes an approach to economics that is inspired by quantum computing, and is motivated by the need to develop a consistent quantum mathematical framework for economics. The traditional neoclassical approach assumes that rational utility-optimisers drive market prices to a stable equilibrium, subject to external perturbations or market failures. While this approach has been highly influential, it has come under increasing criticism following the financial crisis of 2007/8. The quantum approach, in contrast, is inherently probabilistic and dynamic. Decision-makers are described, not by a utility function, but by a propensity function which specifies the probability of transacting. We show how a number of cognitive phenomena such as preference reversal and the disjunction effect can be modelled by using a simple quantum circuit to generate an appropriate propensity function. Conversely, a general propensity function can be quantized, via an entropic force, to incorporate effects such as interference and entanglement that characterise human decision-making. Applications to some common problems and topics in economics and finance, including the use of quantum artificial intelligence, are discussed.
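
As an illustration of the propensity idea, the following sketch encodes a transaction probability in a single-qubit rotation and shows how a relative phase between two decision contexts produces interference. The parameterization is hypothetical, not the authors' exact circuit.

```python
import numpy as np

def propensity(theta):
    """Probability of transacting encoded in a single-qubit rotation.

    Hypothetical parameterization: the decision state is |psi> = Ry(theta)|0>,
    and the propensity is the probability of measuring the "transact" outcome |1>.
    """
    psi = np.array([np.cos(theta / 2), np.sin(theta / 2)])  # Ry(theta)|0>
    return abs(psi[1]) ** 2

def interference_propensity(theta_a, theta_b, phi):
    """Toy two-context propensity exhibiting quantum interference.

    Amplitudes from two decision contexts are superposed with relative phase phi;
    the cross term pushes the combined propensity above or below the classical
    average, mimicking phenomena such as the disjunction effect.
    """
    amp_a = np.sin(theta_a / 2)
    amp_b = np.sin(theta_b / 2)
    return 0.5 * abs(amp_a + np.exp(1j * phi) * amp_b) ** 2

print(propensity(np.pi / 3))                                 # 0.25
print(interference_propensity(np.pi / 2, np.pi / 2, 0.0))    # constructive: 1.0
print(interference_propensity(np.pi / 2, np.pi / 2, np.pi))  # destructive: 0.0
```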

Author(s):  
Francesco Galofaro

The paper presents a semiotic interpretation of the phenomenological debate on the notion of person, focusing in particular on Edmund Husserl, Max Scheler, and Edith Stein. The semiotic interpretation lets us identify the categories that orient the debate: collective/individual and subject/object. As we will see, the phenomenological analysis of the relation between person and social units such as the community, the association, and the mass shows similarities to contemporary socio-semiotic models. The difference between community, association, and mass provides an explanation for the establishment of legal systems. The notion of person we inherit from phenomenology can also be useful in facing juridical problems raised by the use of non-human decision-makers such as machine learning algorithms and artificial intelligence applications.


Author(s):  
Matthew Coudron ◽  
Jalex Stark ◽  
Thomas Vidick

The generation of certifiable randomness is the most fundamental information-theoretic task that meaningfully separates quantum devices from their classical counterparts. We propose a protocol for exponential certified randomness expansion using a single quantum device. The protocol calls for the device to implement a simple quantum circuit of constant depth on a 2D lattice of qubits. The output of the circuit can be verified classically in linear time, and is guaranteed to contain a polynomial number of certified random bits assuming that the device used to generate the output operated using a (classical or quantum) circuit of sub-logarithmic depth. This assumption contrasts with the locality assumption used for randomness certification based on Bell inequality violation and with more recent proposals for randomness certification based on computational assumptions. Furthermore, to demonstrate randomness generation it is sufficient for a device to sample from the ideal output distribution within constant statistical distance. Our procedure is inspired by recent work of Bravyi et al. (Science 362(6412):308–311, 2018), who introduced a relational problem that can be solved by a constant-depth quantum circuit, but provably cannot be solved by any classical circuit of sub-logarithmic depth. We develop the discovery of Bravyi et al. into a framework for robust randomness expansion. Our results lead to a new proposal for a demonstration of quantum advantage that has some advantages compared to existing proposals. First, our proposal does not rest on any complexity-theoretic conjectures, but relies on the physical assumption that the adversarial device being tested implements a circuit of sub-logarithmic depth. Second, success on our task can easily be verified in classical linear time. Finally, our task is more noise-tolerant than most other existing proposals, which can only tolerate multiplicative error or require additional conjectures from complexity theory; in contrast, we are able to allow a small constant additive error in total variation distance between the sampled and ideal distributions.
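
To make the noise criterion concrete, the sketch below estimates the total variation distance between a device's samples and the ideal output distribution. It illustrates only the constant-additive-error condition mentioned above, not the protocol's actual verification test; the example distribution is hypothetical.

```python
from collections import Counter

def total_variation_distance(samples, ideal_probs):
    """Empirical total variation distance between observed samples and an ideal
    distribution given as a dict mapping outcomes to probabilities."""
    n = len(samples)
    empirical = Counter(samples)
    outcomes = set(empirical) | set(ideal_probs)
    return 0.5 * sum(abs(empirical.get(x, 0) / n - ideal_probs.get(x, 0.0))
                     for x in outcomes)

# A slightly noisy device sampling a two-outcome ideal distribution
ideal = {"00": 0.5, "11": 0.5}
samples = ["00"] * 480 + ["11"] * 500 + ["01"] * 20
print(total_variation_distance(samples, ideal))  # 0.02, a small additive error
```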


Author(s):  
Jan H. Kwapisz ◽  
Leszek Z. Stolarczyk

The equilibrium carbon-carbon (C-C) bond lengths in π-electron hydrocarbons are very sensitive to the characteristics of the electronic ground state. In two recent papers by Stolarczyk and Krygowski (J Phys Org Chem, 34:e4154, e4153, 2021) a simple quantum approach, the Augmented Hückel Molecular Orbital (AugHMO) model, is proposed for the qualitative, as well as quantitative, study of this phenomenon. The simplest realization of the AugHMO model is the Hückel-Su-Schrieffer-Heeger (HSSH) method, in which the resonance integral β of the HMO model is a linear function of the bond length. In the present paper, the HSSH method is applied in a study of C-C bond lengths in a set of 34 selected polycyclic aromatic hydrocarbons (PAHs). This is exactly the set of molecules analyzed by Riegel and Müllen (J Phys Org Chem, 23:315, 2010) in the context of their electronic-excitation spectra. These PAHs have been obtained by chemical synthesis, but in most cases no diffraction data (by X-rays or neutrons) of sufficient quality are available to provide us with their geometry. On the other hand, these PAHs are rather big (up to 96 carbon atoms), and ab initio methods of quantum chemistry are too expensive for a reliable geometry optimization. That makes the HSSH method a very attractive alternative. Our HSSH calculations uncover a modular architecture of certain classes of PAHs. For the studied molecules (and their fragments, i.e. modules), we calculate the values of the aromaticity index HOMA.
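
For reference, the HOMA aromaticity index cited above has a simple closed form, HOMA = 1 - (α/n) Σᵢ (R_opt - Rᵢ)². The sketch below uses the standard C-C parameterization (R_opt ≈ 1.388 Å, α ≈ 257.7 Å⁻²), which may differ slightly from the exact values adopted in the paper.

```python
def homa(bond_lengths, r_opt=1.388, alpha=257.7):
    """Aromaticity index HOMA for a ring, given its C-C bond lengths in Angstrom.

    HOMA = 1 - (alpha / n) * sum_i (r_opt - r_i)**2
    HOMA is 1 for a perfectly aromatic ring and decreases with bond-length
    alternation or bond elongation.
    """
    n = len(bond_lengths)
    return 1.0 - (alpha / n) * sum((r_opt - r) ** 2 for r in bond_lengths)

# Benzene-like ring (all bonds ~1.39 A) versus an alternating Kekule-like geometry
print(homa([1.39] * 6))        # ~0.999
print(homa([1.35, 1.46] * 3))  # ~0.15, much less aromatic
```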


Energies ◽  
2018 ◽  
Vol 11 (6) ◽  
pp. 1357 ◽  
Author(s):  
Simon Hirzel ◽  
Tim Hettesheimer ◽  
Peter Viebahn ◽  
Manfred Fischedick

New energy technologies may fail to make the transition to the market once research funding has ended, due to a lack of private engagement to conclude their development. Extending public funding to cover such experimental developments could be one way to improve this transition. However, identifying promising research and development (R&D) proposals for this purpose is a difficult task for the following reasons: close-to-market implementations regularly require substantial resources while public budgets are limited; the allocation of public funds needs to be fair, open, and documented; and the evaluation is complex and subject to public sector regulations for public engagement in R&D funding. This calls for a rigorous evaluation process. This paper proposes an operational three-stage decision support system (DSS) to assist decision-makers in public funding institutions in the ex-ante evaluation of R&D proposals for large-scale, close-to-market projects in energy research. The system was developed based on a review of the literature and related approaches from practice, combined with a series of workshops with practitioners from German public funding institutions. The results confirm that the decision-making process is a complex one that is not limited to simply scoring R&D proposals. Decision-makers also have to deal with various additional issues such as determining the state of technological development, verifying market failures, or considering existing funding portfolios. The DSS suggested in this paper is unique in the sense that it goes beyond mere multi-criteria aggregation procedures and addresses these issues as well, to help guide decision-makers in public institutions through the evaluation process.
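
For context, the multi-criteria aggregation step that the proposed DSS goes beyond can be pictured as a simple weighted-scoring baseline; the criteria and weights in the sketch below are hypothetical and are not those used in the authors' three-stage system.

```python
def score_proposal(ratings, weights):
    """Aggregate criterion ratings (0-10) into a single weighted score."""
    assert set(ratings) == set(weights), "every criterion needs a weight"
    total_weight = sum(weights.values())
    return sum(ratings[c] * weights[c] for c in ratings) / total_weight

# Hypothetical criteria and weights, for illustration only
weights = {"technology_readiness": 0.3, "market_failure_evidence": 0.3,
           "portfolio_fit": 0.2, "budget_realism": 0.2}
proposal = {"technology_readiness": 7, "market_failure_evidence": 5,
            "portfolio_fit": 8, "budget_realism": 6}
print(score_proposal(proposal, weights))  # 6.4
```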


2021 ◽  
Vol ahead-of-print (ahead-of-print) ◽  
Author(s):  
Pooya Tabesh

Purpose: While it is evident that the introduction of machine learning and the availability of big data have revolutionized various organizational operations and processes, existing academic and practitioner research within the decision process literature has mostly ignored the nuances of these influences on human decision-making. Building on existing research in this area, this paper aims to define these concepts from a decision-making perspective and elaborates on the influences of these emerging technologies on human analytical and intuitive decision-making processes.
Design/methodology/approach: The authors first provide a holistic understanding of important drivers of digital transformation. The authors then conceptualize the impact that analytics tools built on artificial intelligence (AI) and big data have on intuitive and analytical human decision processes in organizations.
Findings: The authors discuss similarities and differences between machine learning and two human decision processes, namely, analysis and intuition. While it is difficult to jump to any conclusions about the future of machine learning, human decision-makers seem likely to continue to monopolize the majority of intuitive decision tasks, which will help them keep the upper hand (vis-à-vis machines), at least in the near future.
Research limitations/implications: The work contributes to research on rational (analytical) and intuitive processes of decision-making at the individual, group and organization levels by theorizing about the way these processes are influenced by advanced AI algorithms such as machine learning.
Practical implications: Decisions are building blocks of organizational success. Therefore, a better understanding of the way human decision processes can be impacted by advanced technologies will prepare managers to better use these technologies and make better decisions. By clarifying the boundaries/overlaps among concepts such as AI, machine learning and big data, the authors contribute to their successful adoption by business practitioners.
Social implications: The work suggests that human decision-makers will not be replaced by machines if they continue to invest in what they do best: critical thinking, intuitive analysis and creative problem-solving.
Originality/value: The work elaborates on important drivers of digital transformation from a decision-making perspective and discusses their practical implications for managers.


Author(s):  
Diederik Aerts ◽  
Massimiliano Sassoli de Bianchi ◽  
Sandro Sozzo ◽  
Tomas Veloz

Author(s):  
Paul W. Glimcher

In the early twentieth century, neoclassical economic theorists began to explore mathematical models of maximization. The theories of human behavior that they produced explored how optimal human agents, who were subject to no internal computational resource constraints of any kind, should make choices. During the second half of the twentieth century, empirical work laid bare the limitations of this approach. Human decision makers were often observed to fail to achieve maximization in domains ranging from health to happiness to wealth. Psychologists responded to these failures by largely abandoning holistic theory in favor of large-scale multi-parameter models that retained many of the key features of the earlier models. Over the last two decades, scholars combining neurobiology, psychology, economics, and evolutionary approaches have begun to examine alternative theoretical approaches. Their data suggest explanations for some of the failures of neoclassical approaches and reveal new theoretical avenues for exploration. While neurobiologists have largely validated the economic and psychological assumption that decision makers compute and represent a single decision variable for every option considered during choice, their data also make clear that the human brain faces severe computational resource constraints which force it to rely on very specific modular approaches to the processes of valuation and choice.


Complexity ◽  
2019 ◽  
Vol 2019 ◽  
pp. 1-26
Author(s):  
Friederike Wall

Coordination among the decision-makers of an organization, each responsible for a certain partition of an overall decision-problem, is of crucial relevance for the overall performance obtained. Among the challenges of coordination in distributed decision-making systems (DDMS) is to understand how environmental conditions such as the complexity of the decision-problem to be solved, the problem's predictability and its dynamics shape the adaptation of coordination mechanisms. These challenges apply to DDMS populated by human decision-makers, like firms, as well as to systems of artificial agents as studied in the domain of multiagent systems (MAS). It is well known that coordination of growing decision-problems and, accordingly, growing organizations faces a particular tension between shaping the search for new solutions and setting appropriate constraints to deal with increasing size and intraorganizational complexity. Against this background, the paper studies the adaptation of coordination in the course of growing decision-making organizations. For this, an agent-based simulation model based on the framework of NK fitness landscapes is employed. The study controls for different levels of complexity of the overall decision-problem, different strategies of search for new solutions, and different levels of cost of effort to implement new solutions. The results suggest that, with respect to the emerging coordination mode, complexity subtly interacts with the search strategy employed and the cost of effort. In particular, the results support the conjecture that increasing complexity leads to more hierarchical coordination. However, the search strategy can shift the predominance of hierarchy in favor of granting more autonomy to decentralized decision-makers. Moreover, the study reveals that the cost of effort for implementing new solutions, in conjunction with the search strategy, may remarkably affect the emerging form of coordination. This could explain differences in prevailing coordination modes across different branches or technologies, or could explain the emergence of contextually inferior modes of coordination.
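
A minimal sketch of the NK fitness landscape framework named above is given below: it generates a random landscape with N binary decisions, each interacting with K others, and evaluates the fitness of a configuration. It omits the agents, coordination modes, search strategies and cost-of-effort mechanics of the full agent-based model.

```python
import numpy as np

def make_nk_landscape(N, K, rng):
    """Return a fitness function on binary configurations of length N, where each
    decision's contribution depends on its own state and the states of K randomly
    chosen other decisions; contributions are drawn uniformly from [0, 1]."""
    neighbours = [rng.choice([j for j in range(N) if j != i], size=K, replace=False)
                  for i in range(N)]
    tables = [{} for _ in range(N)]  # lazily filled contribution tables

    def fitness(config):
        total = 0.0
        for i in range(N):
            key = (config[i],) + tuple(config[j] for j in neighbours[i])
            if key not in tables[i]:
                tables[i][key] = rng.random()
            total += tables[i][key]
        return total / N

    return fitness

rng = np.random.default_rng(0)
f = make_nk_landscape(N=10, K=3, rng=rng)
print(f(tuple(rng.integers(0, 2, size=10))))  # fitness of one random configuration
```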


Author(s):  
Huseyin Avunduk

This study designs a linear programming model for optimising the mixtures of flour that come from different sieve passages in the flour milling industry. There are different kinds of flour on the market; they have different market prices, and each has different properties and is used for different purposes. Flour is obtained from the sieve passages, of which there can be more than 100 in a flour milling factory, and the characteristics of the flour in each sieve passage are different. The main problem is to find a flour mixture plan for the factory's sieve passages that maximises the company's sales revenue, taking into account the market demand for flour, market prices of flour, sieve passage flour flows and the flour properties of each sieve passage. In this study, a linear programming model has been developed to support decision-makers in sieve passage flour mixture optimisation. Keywords: linear programming, flour mill industry, sieve passage, flour mixture optimisation.
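
A minimal sketch of such a blending model is given below, using scipy's linprog. The passages, prices, demand figures and the single ash-content quality constraint are hypothetical; a real model would cover far more passages and the factory's actual flour specifications.

```python
import numpy as np
from scipy.optimize import linprog

# Hypothetical data: 3 sieve passages, 2 flour products
supply = np.array([40.0, 30.0, 20.0])   # tonnes available per passage
price = np.array([320.0, 280.0])        # sales price per tonne of product
demand = np.array([50.0, 60.0])         # maximum market demand per product
ash = np.array([0.45, 0.60, 0.90])      # ash content (%) of each passage
max_ash = np.array([0.55, 0.80])        # ash limit for each product

P, F = len(supply), len(price)
# Decision variable x[p, f]: tonnes of passage p blended into product f,
# flattened row-wise; linprog minimizes, hence the negated prices.
c = -np.repeat(price[np.newaxis, :], P, axis=0).flatten()

A_ub, b_ub = [], []
for p in range(P):                      # passage supply limits
    row = np.zeros(P * F); row[p * F:(p + 1) * F] = 1.0
    A_ub.append(row); b_ub.append(supply[p])
for f in range(F):                      # market demand limits
    row = np.zeros(P * F); row[f::F] = 1.0
    A_ub.append(row); b_ub.append(demand[f])
for f in range(F):                      # quality: blended ash content <= limit
    row = np.zeros(P * F)
    for p in range(P):
        row[p * F + f] = ash[p] - max_ash[f]
    A_ub.append(row); b_ub.append(0.0)

res = linprog(c, A_ub=np.array(A_ub), b_ub=np.array(b_ub), bounds=(0, None))
print(res.x.reshape(P, F))  # optimal blend: tonnes of each passage per product
print(-res.fun)             # maximum sales revenue
```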

