Are We Too Smart for Our Own Good?: How Large-Scale Machine Learning Systems Can Vastly Exceed Human Level Decision-Making Abilities

Author(s):  
Jennifer Neville
2021 ◽  
pp. 1-14
Author(s):  
Cagatay Ozdemir ◽  
Sezi Cevik Onar ◽  
Selami Bagriyanik ◽  
Cengiz Kahraman ◽  
Burak Zafer Akalin ◽  
...  

Companies have started to base their strategies on intelligent data analysis due to the steady increase in data production. Literature reviews show that studies applying demand estimation, location analysis, and decision-making techniques together with machine learning methods are scarce in all sectors and almost absent in the shopping mall domain. Within this study’s scope, a new hybrid fuzzy prediction method has been developed to estimate customer numbers for shopping malls. This new methodology is applied to predict the number of visitors to three shopping malls on the Anatolian side of Istanbul. The forecasting study for the corresponding shopping malls is made using daily signaling data from the indoor base stations of a large-scale technology and telecommunications services provider, and the features to be used in the machine learning models are determined by a fuzzy multi-criteria decision-making method. The output of the fuzzy multi-criteria decision-making method enables the prioritization of features.
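The paper does not specify which fuzzy multi-criteria method it uses, but the core step it describes, ranking candidate features by fuzzy importance weights, can be sketched as follows. The feature names and weights below are invented for illustration; triangular fuzzy numbers and centroid defuzzification are one common choice, not necessarily the authors'.

```python
# A minimal sketch of fuzzy feature prioritization: each feature gets a
# triangular fuzzy importance weight (l, m, u), which is defuzzified by
# its centroid and used to rank the features.

def centroid(tfn):
    """Defuzzify a triangular fuzzy number (l, m, u) by its centroid."""
    l, m, u = tfn
    return (l + m + u) / 3.0

# Hypothetical expert ratings of each candidate feature's importance.
fuzzy_weights = {
    "daily_signal_count": (0.6, 0.8, 1.0),
    "day_of_week":        (0.4, 0.6, 0.8),
    "weather_index":      (0.2, 0.4, 0.6),
}

ranking = sorted(fuzzy_weights, key=lambda f: centroid(fuzzy_weights[f]),
                 reverse=True)
print(ranking)  # → ['daily_signal_count', 'day_of_week', 'weather_index']
```

The ranked list would then decide which features enter the downstream machine learning models.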


2020 ◽  
Author(s):  
Ben Buchanan

One sentence summarizes the complexities of modern artificial intelligence: Machine learning systems use computing power to execute algorithms that learn from data. This AI triad of computing power, algorithms, and data offers a framework for decision-making in national security policy.


2017 ◽  
Author(s):  
Michael Veale

Presented as a talk at the 4th Workshop on Fairness, Accountability and Transparency in Machine Learning (FAT/ML 2017), Halifax, Nova Scotia, Canada. Machine learning systems are increasingly used to support public sector decision-making across a variety of sectors. Given concerns around accountability in these domains, and amidst accusations of intentional or unintentional bias, there have been increased calls for transparency of these technologies. Few, however, have considered how logics and practices concerning transparency are understood by those involved in the machine learning systems already being piloted and deployed in public bodies today. This short paper distils insights about transparency on the ground from interviews with 27 such actors, largely public servants and relevant contractors, across 5 OECD countries. Considering transparency and opacity in relation to trust and buy-in, better decision-making, and the avoidance of gaming, it seeks to provide insights for those hoping to develop socio-technical approaches to transparency that are useful to practitioners on the ground.


Science ◽  
2021 ◽  
Vol 372 (6547) ◽  
pp. 1209-1214
Author(s):  
Joshua C. Peterson ◽  
David D. Bourgin ◽  
Mayank Agrawal ◽  
Daniel Reichman ◽  
Thomas L. Griffiths

Predicting and understanding how people make decisions has been a long-standing goal in many fields, with quantitative models of human decision-making informing research in both the social sciences and engineering. We show how progress toward this goal can be accelerated by using large datasets to power machine-learning algorithms that are constrained to produce interpretable psychological theories. Conducting the largest experiment on risky choice to date and analyzing the results using gradient-based optimization of differentiable decision theories implemented through artificial neural networks, we were able to recapitulate historical discoveries, establish that there is room to improve on existing theories, and discover a new, more accurate model of human decision-making in a form that preserves the insights from centuries of research.
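The key idea, treating a classic decision theory as a differentiable model whose parameters are fit by gradient-based optimization, can be illustrated with a toy version. Everything below (the gambles, observed choice rates, power value function, and logistic choice rule) is an invented sketch of the general approach, not the authors' model or data.

```python
# Sketch: fit the risk-aversion parameter alpha of a power value
# function v(x) = x**alpha so that a logistic choice rule matches
# observed choice proportions, using finite-difference gradient descent.
import math

# Each problem: gamble A, gamble B (lists of (payoff, probability)),
# and the (made-up) fraction of participants choosing A.
problems = [
    ([(100, 0.5), (0, 0.5)], [(45, 1.0)], 0.40),
    ([(200, 0.25), (0, 0.75)], [(40, 1.0)], 0.35),
]

def utility(gamble, alpha):
    return sum(p * (x ** alpha) for x, p in gamble)

def loss(alpha):
    """Squared error between predicted and observed choice rates."""
    err = 0.0
    for a, b, obs in problems:
        diff = utility(a, alpha) - utility(b, alpha)
        p_a = 1.0 / (1.0 + math.exp(-diff))  # logistic choice rule
        err += (p_a - obs) ** 2
    return err

alpha, lr, eps = 1.0, 0.01, 1e-5
for _ in range(200):
    grad = (loss(alpha + eps) - loss(alpha - eps)) / (2 * eps)
    alpha -= lr * grad

print(round(alpha, 3))  # fitted risk-aversion parameter
```

At scale, the same loop runs over millions of choices with the value and probability-weighting functions expressed as neural networks, which is what lets the authors search a much larger space of interpretable theories.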


2020 ◽  
Vol 127 ◽  
pp. 106368 ◽  
Author(s):  
Lucy Ellen Lwakatare ◽  
Aiswarya Raj ◽  
Ivica Crnkovic ◽  
Jan Bosch ◽  
Helena Holmström Olsson

2019 ◽  
Vol 2019 ◽  
Author(s):  
Joanne Gray ◽  
Nicolas Suzor

This paper presents the results of an investigation of algorithmic copyright enforcement on YouTube. We use digital and computational methods to help understand the operation of automated decision-making at scale. We argue that in order to understand complex, automated systems, we require new methods and research infrastructure to understand their operation at scale, over time, and across platforms and jurisdictions. We use YouTube takedowns as a case study to develop and test an innovative methodology for evaluating automated decision-making. First, we built technical infrastructure to obtain a random sample of 59 million YouTube videos and tested their availability two weeks after they were first published. We then used topic modeling to identify categories of videos for further analysis, and trained a machine learning classifier to categorise videos across the entire dataset. Finally, we used statistical analysis (multinomial logistic regression) to examine the characteristics of videos that are most likely to be removed through DMCA notices, Content ID removals, and Terms of Service enforcement. This interdisciplinary work provides the methodological base for further experimentation with the use of deep neural nets to enable large-scale analysis of the operation of automated systems in the realm of digital media. We hope that this work will improve understanding of a useful and fruitful set of methods to interrogate pressing public policy research questions in the context of content moderation and automated decision-making.
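The final modeling step the abstract describes, a multinomial logistic regression relating video characteristics to the type of removal, can be sketched as below. The features, outcome categories, and data are all invented stand-ins; only the model class comes from the abstract.

```python
# Sketch: multinomial logistic regression over toy video features,
# predicting one of four removal outcomes. With the default lbfgs
# solver, scikit-learn fits a single multinomial model.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
# Hypothetical features: [video_age_days, is_music, channel_size_log]
X = rng.normal(size=(300, 3))
# Hypothetical outcome: 0 = still up, 1 = DMCA, 2 = Content ID, 3 = ToS
y = rng.integers(0, 4, size=300)

model = LogisticRegression(max_iter=500)
model.fit(X, y)
print(model.coef_.shape)  # one coefficient vector per outcome class
```

In the paper's setting, the fitted coefficients are what let the authors say which video characteristics raise the odds of each enforcement route.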


2022 ◽  
Vol 54 (9) ◽  
pp. 1-35
Author(s):  
José Mena ◽  
Oriol Pujol ◽  
Jordi Vitrià

Decision-making based on machine learning systems, especially when this decision-making can affect human lives, is a subject of great interest in the Machine Learning community. It is, therefore, necessary to equip these systems with a means of estimating the uncertainty in the predictions they emit, in order to help practitioners make more informed decisions. In the present work, we introduce the topic of uncertainty estimation and analyze the peculiarities of such estimation when applied to classification systems. We analyze different methods that have been designed to provide classification systems based on deep learning with mechanisms for measuring the uncertainty of their predictions. We examine how this uncertainty can be modeled and measured using different approaches, as well as practical considerations for different applications of uncertainty. Moreover, we review some of the properties that should be borne in mind when developing such metrics. All in all, the present survey aims at providing a pragmatic overview of the estimation of uncertainty in classification systems that can be very useful for both academic research and deep learning practitioners.
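One family of measures such surveys typically cover is predictive entropy over an ensemble of stochastic forward passes (as in MC dropout), decomposed into aleatoric and epistemic parts. The sketch below simulates the ensemble outputs with random probability vectors; it illustrates the bookkeeping, not any specific method from the survey.

```python
# Sketch: total predictive uncertainty (entropy of the mean prediction)
# decomposed into an aleatoric part (mean per-pass entropy) and an
# epistemic part (their difference, a mutual-information estimate).
import numpy as np

def entropy(p):
    """Shannon entropy of a probability vector (natural log)."""
    p = np.clip(p, 1e-12, 1.0)
    return float(-(p * np.log(p)).sum())

# Ten simulated stochastic forward passes for one input, 3 classes.
rng = np.random.default_rng(1)
passes = rng.dirichlet([5.0, 1.0, 1.0], size=10)

mean_pred = passes.mean(axis=0)          # averaged prediction
total_unc = entropy(mean_pred)           # total predictive uncertainty
aleatoric = float(np.mean([entropy(p) for p in passes]))
epistemic = total_unc - aleatoric        # disagreement between passes

print(round(total_unc, 3), round(epistemic, 3))
```

By Jensen's inequality the epistemic term is non-negative: the passes can only disagree, never "anti-agree", so the entropy of the average is at least the average entropy.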


Author(s):  
K. Ravikumar ◽  
M. Maheswaran

A computation expressed using TensorFlow can be executed with little or no change on a wide variety of heterogeneous systems, ranging from mobile devices such as phones and tablets up to large-scale distributed systems of hundreds of machines and thousands of computational devices such as GPU cards. Even after deployment, it is common to discover limitations of the model, or changes in the target concept, that necessitate changes to the training data and parameters. However, to date there is no common understanding of what these iterations involve, or what debugging tools are required to support the investigative process. As more data becomes available, more ambitious problems can be tackled. As a result, machine learning is widely used in computer science and other fields. However, developing successful machine learning applications requires a substantial amount of "black art" that is hard to find in textbooks. This article summarizes a dozen key lessons that machine learning researchers and practitioners have learned. These algorithms are used for numerous applications such as data mining, image processing, and predictive analytics, to name a few. The main advantage of using machine learning is that, once an algorithm learns what to do with data, it can do its work automatically.
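The closing claim, that once an algorithm has learned from data it can act on new inputs automatically, is the whole of supervised learning in miniature. The toy perceptron below, learning logical AND, is purely illustrative and does not come from the article.

```python
# Sketch: a perceptron learns the AND function from labelled examples,
# then classifies inputs automatically with the learned weights.
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b, lr = [0.0, 0.0], 0.0, 0.1

for _ in range(20):                      # a few passes over the data
    for (x1, x2), target in data:
        pred = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
        err = target - pred              # perceptron update rule
        w[0] += lr * err * x1
        w[1] += lr * err * x2
        b += lr * err

predict = lambda x1, x2: 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
print([predict(*x) for x, _ in data])  # → [0, 0, 0, 1]
```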


2018 ◽  
Vol 12 ◽  
pp. 85-98
Author(s):  
Bojan Kostadinov ◽  
Mile Jovanov ◽  
Emil Stankov

Data collection and machine learning are changing the world. Whether it is medicine, sports or education, companies and institutions are investing a lot of time and money in systems that gather, process and analyse data. Likewise, to improve competitiveness, a lot of countries are making changes to their educational policy by supporting STEM disciplines. Therefore, it is important to put effort into using various data sources to help students succeed in STEM. In this paper, we present a platform that can analyse students’ activity on various contest and e-learning systems, combine and process the data, and then present it in ways that are easy to understand. This in turn enables teachers and organizers to recognize talented and hardworking students, identify issues, and/or motivate students to practice and work on areas where they are weaker.
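The combine-and-flag step such a platform performs can be sketched in a few lines. The student names, topics, scores, and threshold below are all hypothetical; the point is only the pattern of merging per-student activity from several sources.

```python
# Sketch: merge per-student topic scores from two hypothetical sources
# (a contest system and an e-learning system), average them, and flag
# topics where a student's combined score is low.
contest = {"ana": {"graphs": 0.9, "dp": 0.3}, "ben": {"graphs": 0.5, "dp": 0.8}}
elearning = {"ana": {"dp": 0.4}, "ben": {"graphs": 0.6}}

def combined_scores(student):
    """Average a student's scores per topic across both systems."""
    merged = {}
    for source in (contest, elearning):
        for topic, score in source.get(student, {}).items():
            merged.setdefault(topic, []).append(score)
    return {t: sum(s) / len(s) for t, s in merged.items()}

def weak_topics(student, threshold=0.5):
    return sorted(t for t, s in combined_scores(student).items() if s < threshold)

print(weak_topics("ana"))  # → ['dp']
```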

