Machine learning and social theory: Collective machine behaviour in algorithmic trading

2021 ◽  
pp. 136843102110560
Author(s):  
Christian Borch

This article examines what the rise in machine learning (ML) systems might mean for social theory. Focusing on financial markets, in which algorithmic securities trading founded on ML-based decision-making is gaining traction, I discuss the extent to which established sociological notions remain relevant or demand a reconsideration when applied to an ML context. I argue that ML systems have some capacity for agency and for engaging in forms of collective machine behaviour, in which ML systems interact with other machines. However, ML-based collective machine behaviour is irreducible to human decision-making and thereby challenges established sociological notions of financial markets (including that of embeddedness). I argue that such behaviour can nonetheless be analysed through an adaptation of sociological theories of interaction and collective behaviour.
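
As a toy illustration of the kind of machine-machine interaction the article has in mind, the sketch below (entirely hypothetical, not from Borch's article; all names and parameters are invented) simulates adaptive trading agents that interact only through the price they collectively produce, so the resulting dynamics belong to the machine collective rather than to any single human decision:

```python
"""Hypothetical sketch: adaptive agents interacting through a shared price."""
import random

random.seed(1)

class ToyTrader:
    """A toy adaptive agent standing in for an ML trading system."""
    def __init__(self):
        self.weight = random.uniform(-1.0, 1.0)  # learned reaction to the last return

    def order(self, last_return):
        # Buy (+1) or sell (-1) according to the learned trend reaction.
        return 1 if self.weight * last_return > 0 else -1

    def update(self, last_return, pnl):
        # Crude online learning: reinforce reactions that paid off.
        self.weight += 0.1 * pnl * last_return

agents = [ToyTrader() for _ in range(100)]
price, last_return = 100.0, 0.01
for t in range(60):
    orders = [a.order(last_return) for a in agents]
    excess_demand = sum(orders) / len(orders)
    new_price = price * (1 + 0.05 * excess_demand)  # simple linear price impact
    realized = new_price / price - 1
    for a, o in zip(agents, orders):
        a.update(last_return, pnl=o * realized)
    price, last_return = new_price, realized
    if t % 10 == 0:
        print(f"t={t:2d}  price={price:7.2f}  net demand={excess_demand:+.2f}")
```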

2021 ◽  
Vol ahead-of-print (ahead-of-print) ◽  
Author(s):  
Pooya Tabesh

Purpose: While it is evident that the introduction of machine learning and the availability of big data have revolutionized various organizational operations and processes, existing academic and practitioner research within the decision-process literature has mostly ignored the nuances of these influences on human decision-making. Building on existing research in this area, this paper aims to define these concepts from a decision-making perspective and elaborates on the influences of these emerging technologies on human analytical and intuitive decision-making processes.

Design/methodology/approach: The authors first provide a holistic understanding of important drivers of digital transformation. The authors then conceptualize the impact that analytics tools built on artificial intelligence (AI) and big data have on intuitive and analytical human decision processes in organizations.

Findings: The authors discuss similarities and differences between machine learning and two human decision processes, namely analysis and intuition. While it is difficult to jump to any conclusions about the future of machine learning, human decision-makers seem likely to continue monopolizing the majority of intuitive decision tasks, which will help them keep the upper hand (vis-à-vis machines), at least in the near future.

Research limitations/implications: The work contributes to research on rational (analytical) and intuitive processes of decision-making at the individual, group and organization levels by theorizing about the way these processes are influenced by advanced AI algorithms such as machine learning.

Practical implications: Decisions are building blocks of organizational success. Therefore, a better understanding of the way human decision processes can be impacted by advanced technologies will prepare managers to better use these technologies and make better decisions. By clarifying the boundaries and overlaps among concepts such as AI, machine learning and big data, the authors contribute to their successful adoption by business practitioners.

Social implications: The work suggests that human decision-makers will not be replaced by machines if they continue to invest in what they do best: critical thinking, intuitive analysis and creative problem-solving.

Originality/value: The work elaborates on important drivers of digital transformation from a decision-making perspective and discusses their practical implications for managers.


Author(s):  
Jørgen Vitting Andersen ◽  
Naji Massad

We introduce tools to capture the dynamics of three different pathways in which the synchronization of human decision making can lead to turbulent periods and contagion phenomena in financial markets. The first pathway arises when stock market indices, seen as a set of coupled integrate-and-fire oscillators, synchronize in frequency. The integrate-and-fire dynamics arise from "change blindness", a trait of human decision making whereby people tend to ignore small changes but take action when a large change occurs. The second pathway arises through feedback mechanisms between market performance and the use of certain (decoupled) trading strategies. The third pathway operates through communication and its impact on human decision making. We introduce a model in which financial market performance affects decision making through communication between people; conversely, the sentiment created by that communication feeds back into financial market performance.
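
To make the first pathway concrete, here is a minimal sketch (our illustration, not the authors' code; the threshold and coupling strength are invented) of market indices as pulse-coupled integrate-and-fire oscillators: each index ignores small accumulated changes until a threshold is crossed, then "fires" a large adjustment and nudges the others, which tends to synchronize the firing over time:

```python
"""Illustrative sketch: pulse-coupled integrate-and-fire 'indices'."""
import random

random.seed(0)
N, THRESHOLD, COUPLING = 5, 1.0, 0.12
state = [random.uniform(0, THRESHOLD) for _ in range(N)]  # accumulated, unacted-on change

for t in range(200):
    # Small, continuously accumulating price pressure that traders ignore.
    for i in range(N):
        state[i] += random.uniform(0.01, 0.03)
    # An index 'fires' once the accumulated change is too large to ignore.
    fired = [i for i in range(N) if state[i] >= THRESHOLD]
    for i in fired:
        state[i] = 0.0                        # the index resets after adjusting
        for j in range(N):
            if j not in fired:
                state[j] += COUPLING          # pulse-coupling pushes others toward firing
    if fired:
        print(f"t={t:3d}  fired indices: {fired}")
```

Over repeated firings the coupling pulls the oscillators' phases together, so increasingly many indices fire in the same tick, which is the synchronization-in-frequency mechanism the abstract describes (in a deliberately simplified form).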


2021 ◽  
Vol 3 ◽  
Author(s):  
Nikolaus Poechhacker ◽  
Severin Kacianka

The increasing use of automated decision making (ADM) and machine learning has sparked an ongoing discussion about algorithmic accountability. Within computer science, a new way of producing accountability has recently been discussed: causality as an expression of algorithmic accountability, formalized using structural causal models (SCMs). However, causality itself is a concept that needs further exploration. In this contribution we therefore confront ideas from SCMs with insights from social theory, more specifically pragmatism, and argue that formal expressions of causality must always be seen in the context of the social system in which they are applied. This results in the formulation of further research questions and directions.
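
For readers unfamiliar with SCMs, the following minimal sketch (plain Python, with invented variable names) shows the formal object under discussion: each variable is set by a structural equation of its parents plus noise, and an intervention do(X = x) replaces that variable's equation outright, which is what separates the interventional from the merely observational distribution:

```python
"""Minimal SCM sketch with an explicit do-intervention; names are invented."""
import random

def sample(do=None):
    """Draw one unit from the SCM; `do` optionally overrides 'training'."""
    do = do or {}
    motivation = random.gauss(0, 1)                        # exogenous noise term
    training = do.get("training", motivation > 0)          # structural equation
    score = 1.0 * training + 0.5 * motivation + random.gauss(0, 0.1)
    return {"training": training, "score": score}

# Observational vs interventional distributions differ because
# do(training=True) severs the arrow motivation -> training.
obs = [sample() for _ in range(10_000)]
inter = [sample(do={"training": True}) for _ in range(10_000)]
obs_mean = sum(u["score"] for u in obs if u["training"]) / sum(u["training"] for u in obs)
int_mean = sum(u["score"] for u in inter) / len(inter)
print(f"E[score | training=1]     = {obs_mean:.2f}  (confounded by motivation)")
print(f"E[score | do(training=1)] = {int_mean:.2f}  (causal effect)")
```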


2015 ◽  
Vol 38 ◽  
Author(s):  
Marco Verweij ◽  
Timothy J. Senior

Pessoa's (2013) arguments imply that various leading approaches in the social sciences have not adequately conceptualized how emotion and cognition influence human decision making and social behavior. This is particularly unfortunate, as these approaches have been central to the efforts to build bridges between neuroscience and the social sciences. We argue that it would be better to base these efforts on other social theories that appear more compatible with Pessoa's analysis of the brain.


Science ◽  
2021 ◽  
Vol 372 (6547) ◽  
pp. 1209-1214
Author(s):  
Joshua C. Peterson ◽  
David D. Bourgin ◽  
Mayank Agrawal ◽  
Daniel Reichman ◽  
Thomas L. Griffiths

Predicting and understanding how people make decisions has been a long-standing goal in many fields, with quantitative models of human decision-making informing research in both the social sciences and engineering. We show how progress toward this goal can be accelerated by using large datasets to power machine-learning algorithms that are constrained to produce interpretable psychological theories. Conducting the largest experiment on risky choice to date and analyzing the results using gradient-based optimization of differentiable decision theories implemented through artificial neural networks, we were able to recapitulate historical discoveries, establish that there is room to improve on existing theories, and discover a new, more accurate model of human decision-making in a form that preserves the insights from centuries of research.
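
The following toy sketch (far simpler than the paper's neural-network machinery; the data, parameter values, and finite-difference gradients are our own stand-ins) conveys the core idea of fitting a differentiable decision theory to risky-choice data by gradient-based optimization:

```python
"""Toy sketch: fitting a prospect-theory-style choice model by gradient descent."""
import math, random

random.seed(0)

def value(x, alpha, lam):
    # Prospect-theory-style value function: concave in gains, loss-averse.
    return x ** alpha if x >= 0 else -lam * ((-x) ** alpha)

def p_choose_a(ga, gb, params):
    alpha, lam, beta = params
    va = sum(p * value(x, alpha, lam) for p, x in ga)   # expected subjective value
    vb = sum(p * value(x, alpha, lam) for p, x in gb)
    return 1.0 / (1.0 + math.exp(-beta * (va - vb)))   # logistic choice rule

def mean_nll(data, params):
    eps, total = 1e-9, 0.0
    for ga, gb, chose_a in data:
        p = min(max(p_choose_a(ga, gb, params), eps), 1 - eps)
        total -= math.log(p if chose_a else 1 - p)
    return total / len(data)

# Synthetic choices from a 'true' decision-maker (alpha=0.8, lam=2.2, beta=1).
def gamble():
    return [(0.5, random.uniform(-10, 10)), (0.5, random.uniform(-10, 10))]

true_params = (0.8, 2.2, 1.0)
data = []
for _ in range(1000):
    ga, gb = gamble(), gamble()
    data.append((ga, gb, random.random() < p_choose_a(ga, gb, true_params)))

# Gradient descent with finite-difference gradients (real work would use autodiff).
params, lr, h = [1.0, 1.0, 0.5], 0.5, 1e-4
for _ in range(300):
    base = mean_nll(data, params)
    grad = []
    for i in range(3):
        bumped = params[:]
        bumped[i] += h
        grad.append((mean_nll(data, bumped) - base) / h)
    params = [max(p - lr * g, 0.05) for p, g in zip(params, grad)]  # keep params positive

print("fitted (alpha, lam, beta):", [round(p, 2) for p in params])
```

With enough data the fitted parameters should drift toward the data-generating ones; the paper's contribution is to do this at scale, with flexible neural-network components constrained so the result remains an interpretable psychological theory.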


2020 ◽  
pp. 1-21
Author(s):  
Justin B. Biddle

Recent scholarship in philosophy of science and technology has shown that scientific and technological decision making are laden with values, including values of a social, political, and/or ethical character. This paper examines the role of value judgments in the design of machine-learning (ML) systems generally and in recidivism-prediction algorithms specifically. Drawing on work on inductive and epistemic risk, the paper argues that ML systems are value laden in ways similar to human decision making, because the development and design of ML systems requires human decisions that involve tradeoffs that reflect values. In many cases, these decisions have significant—and, in some cases, disparate—downstream impacts on human lives. After examining an influential court decision regarding the use of proprietary recidivism-prediction algorithms in criminal sentencing, Wisconsin v. Loomis, the paper provides three recommendations for the use of ML in penal systems.
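
A minimal numerical illustration (ours, not Biddle's; the scores and labels are synthetic) of the inductive-risk point: given the same hypothetical risk scores, the choice of decision threshold trades false positives against false negatives, so the design decision itself encodes a value judgment:

```python
"""Illustrative sketch: threshold choice as a value-laden tradeoff."""
import random

random.seed(3)
# Hypothetical scores: reoffenders (label 1) score somewhat higher on average.
cases = [(random.gauss(0.6, 0.2), 1) for _ in range(500)] + \
        [(random.gauss(0.4, 0.2), 0) for _ in range(500)]

def error_rates(threshold):
    fp = sum(1 for s, y in cases if s >= threshold and y == 0)  # flagged, would not reoffend
    fn = sum(1 for s, y in cases if s < threshold and y == 1)   # not flagged, reoffends
    return fp / 500, fn / 500

for threshold in (0.3, 0.5, 0.7):
    fp, fn = error_rates(threshold)
    print(f"threshold={threshold}: false-positive rate={fp:.2f}, false-negative rate={fn:.2f}")
```

No threshold minimizes both error rates at once; deciding which error type to tolerate is exactly the kind of tradeoff the paper argues reflects values rather than purely technical considerations.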


Queue ◽  
2021 ◽  
Vol 19 (6) ◽  
pp. 28-56
Author(s):  
Valerie Chen ◽  
Jeffrey Li ◽  
Joon Sik Kim ◽  
Gregory Plumb ◽  
Ameet Talwalkar

The emergence of machine learning as a society-changing technology in the past decade has triggered concerns about people's inability to understand the reasoning of increasingly complex models. The field of IML (interpretable machine learning) grew out of these concerns, with the goal of empowering various stakeholders to tackle use cases, such as building trust in models, performing model debugging, and generally informing real human decision-making.
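
As one concrete example of what IML methods do, the sketch below (a toy model and invented data, not from the article) implements permutation feature importance: a feature matters to the model if shuffling its values degrades accuracy:

```python
"""Illustrative sketch: permutation feature importance on a toy 'black box'."""
import random

random.seed(0)

# Toy 'black box': a fixed linear scorer standing in for a complex model.
def model(x):
    return 1 if 2.0 * x[0] + 0.1 * x[1] > 1.0 else 0

data = [[random.random(), random.random()] for _ in range(1000)]
labels = [model(x) for x in data]  # the model is perfect on its own outputs

def accuracy(perturbed):
    return sum(model(x) == y for x, y in zip(perturbed, labels)) / len(labels)

for feature in (0, 1):
    shuffled_col = [x[feature] for x in data]
    random.shuffle(shuffled_col)
    perturbed = [list(x) for x in data]
    for row, v in zip(perturbed, shuffled_col):
        row[feature] = v  # break the feature-label link for this column only
    print(f"feature {feature}: accuracy after shuffling = {accuracy(perturbed):.2f}")
```

Shuffling feature 0 collapses accuracy while shuffling feature 1 barely moves it, surfacing which input the model actually relies on without inspecting its internals.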


Author(s):  
Micah N. Villarreal ◽  
Alexander J. Kamrud ◽  
Brett J. Borghetti

Cognitive biases are known to affect human decision making and can have disastrous effects in the fast-paced environments of military operators. Traditionally, post-hoc behavioral analysis is used to measure the level of bias in a decision. However, such techniques can be hindered by subjective factors and cannot be applied in real time. This pilot study collects behavior patterns and physiological signals present during biased and unbiased decision-making. Supervised machine learning models are trained to find the relationship between electroencephalography (EEG) signals and behavioral evidence of cognitive bias. Once trained, the models should infer the presence of confirmation bias during decision-making using only EEG, without the interruptions or the subjective nature of traditional confirmation-bias estimation techniques.
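
The overall shape of such a pipeline might look like the following sketch (synthetic stand-in data and invented feature dimensions; the study's actual features and models may differ): supervised learning that maps EEG-derived features to behaviorally labeled biased/unbiased decisions, evaluated by cross-validation:

```python
"""Illustrative sketch of an EEG -> bias-label classification pipeline."""
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

# Stand-in features, e.g. per-channel band powers (theta, alpha, beta, ...).
n_trials, n_features = 200, 16
X = rng.normal(size=(n_trials, n_features))
y = rng.integers(0, 2, size=n_trials)   # behavioral label: biased vs unbiased
X[y == 1, 0] += 0.8                     # pretend one band power shifts under bias

# Cross-validated accuracy of the classifier on the synthetic features.
clf = LogisticRegression(max_iter=1000)
scores = cross_val_score(clf, X, y, cv=5)
print(f"mean CV accuracy: {scores.mean():.2f}")
```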


2021 ◽  
Vol 21 (1) ◽  
Author(s):  
Mariem Gandouz ◽  
Hajo Holzmann ◽  
Dominik Heider

Machine learning and artificial intelligence have entered biomedical decision-making for diagnostics, prognostics, and therapy recommendations. However, these methods need to be interpreted with care because of the severe consequences for patients. In contrast to human decision-making, computational models typically render a decision even when their confidence is low. Machine learning with abstention better reflects human decision-making by introducing a reject option for samples with low confidence. The abstention intervals are typically symmetric intervals around the decision boundary. In the current study, we use asymmetric abstention intervals, which we demonstrate to be better suited for biomedical data that are typically highly imbalanced. We evaluate symmetric and asymmetric abstention on three real-world biomedical datasets and show that both approaches can significantly improve classification performance. However, asymmetric abstention rejects as many or fewer samples than symmetric abstention and should thus be preferred for imbalanced data.
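
A minimal sketch (not the authors' implementation; the thresholds are invented) of classification with a reject option, contrasting a symmetric abstention interval around the decision boundary with an asymmetric one of the kind proposed for imbalanced data:

```python
"""Illustrative sketch: symmetric vs asymmetric abstention intervals."""
def decide(prob_positive, lower=0.4, upper=0.6):
    """Abstain when the predicted probability falls inside (lower, upper)."""
    if prob_positive >= upper:
        return "positive"
    if prob_positive <= lower:
        return "negative"
    return "abstain"

probs = [0.05, 0.35, 0.45, 0.55, 0.65, 0.95]

# Symmetric interval around the 0.5 boundary: (0.4, 0.6).
print([decide(p) for p in probs])

# Asymmetric interval, e.g. demanding more confidence before predicting
# 'negative' when the positive class is rare and false negatives are costly.
print([decide(p, lower=0.2, upper=0.6) for p in probs])
```

Shifting the interval's endpoints independently is what lets the method tune the two error types separately, which is the advantage the abstract claims for imbalanced biomedical data.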

