An Empirical Comparison of Bias Reduction Methods on Real-World Problems in High-Stakes Policy Settings

2021 ◽  
Vol 23 (1) ◽  
pp. 69-85
Author(s):  
Hemank Lamba ◽  
Kit T. Rodolfa ◽  
Rayid Ghani

Applications of machine learning (ML) to high-stakes policy settings - such as education, criminal justice, healthcare, and social service delivery - have grown rapidly in recent years, sparking important conversations about how to ensure fair outcomes from these systems. The machine learning research community has responded to this challenge with a wide array of proposed fairness-enhancing strategies for ML models, but despite the large number of methods that have been developed, little empirical work exists evaluating these methods in real-world settings. Here, we seek to fill this research gap by investigating the performance of several methods that operate at different points in the ML pipeline across four real-world public policy and social good problems. Across these problems, we find a wide degree of variability and inconsistency in the ability of many of these methods to improve model fairness, but postprocessing by choosing group-specific score thresholds consistently removes disparities, with important implications for both the ML research community and practitioners deploying machine learning to inform consequential policy decisions.
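The postprocessing strategy the abstract singles out, choosing a separate score threshold per group, can be sketched in a few lines. The data, group names, and the choice of equalizing recall across groups are illustrative assumptions for this sketch, not details taken from the paper:

```python
# Sketch of postprocessing with group-specific score thresholds.
# Hypothetical scores and labels; here we equalize recall across groups
# by picking, per group, the lowest threshold that hits a target recall.

def threshold_for_recall(scores, labels, target_recall):
    """Lowest threshold whose recall over the true positives meets target_recall."""
    positives = sorted((s for s, y in zip(scores, labels) if y == 1), reverse=True)
    k = max(1, round(target_recall * len(positives)))
    return positives[k - 1]  # classify score >= threshold as positive

# Toy example: group B's scores are systematically lower, so a single
# global threshold would under-select group B's true positives.
data = {
    "A": ([0.9, 0.8, 0.7, 0.3, 0.2], [1, 1, 1, 0, 0]),
    "B": ([0.6, 0.5, 0.4, 0.2, 0.1], [1, 1, 1, 0, 0]),
}

thresholds = {
    g: threshold_for_recall(scores, labels, target_recall=1.0)
    for g, (scores, labels) in data.items()
}
print(thresholds)  # → {'A': 0.7, 'B': 0.4}
```

With a single threshold of 0.7, group B would recover none of its true positives; the group-specific thresholds remove that disparity, which is the effect the paper reports for this family of methods.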

2021 ◽  
pp. 338-354
Author(s):  
Ute Schmid

With the growing number of applications of machine learning in complex real-world domains, machine learning research has to meet new requirements: dealing with the imperfections of real-world data and with the legal as well as ethical obligations to make classifier decisions transparent and comprehensible. In this contribution, arguments for interpretable and interactive approaches to machine learning are presented. It is argued that visual explanations are often not expressive enough to convey critical information that relies on relations between different aspects or sub-concepts. Consequently, inductive logic programming (ILP) and the generation of verbal explanations from Prolog rules are advocated. Interactive learning in the context of ILP is illustrated with the Dare2Del system, which helps users manage their digital clutter. It is shown that verbal explanations overcome the explanatory one-way street from AI system to user. Interactive learning with mutual explanations allows the learning system to take into account not only class corrections but also corrections of explanations to guide learning. We propose mutual explanations as a building block for human-like computing and an important ingredient for human-AI partnership.
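The step from a learned logical rule to a verbal explanation can be sketched as simple template filling. The rule, predicate names, and templates below are hypothetical stand-ins, loosely in the spirit of Dare2Del's digital-clutter setting; the actual system generates its explanations from Prolog rules:

```python
# Minimal sketch: rendering a learned rule (head :- body) as a verbal
# explanation by filling natural-language templates per body predicate.
# All names here are illustrative, not taken from the Dare2Del system.

rule = {
    "head": ("irrelevant", "File"),
    "body": [("not_accessed_for_months", "File", "6"),
             ("newer_version_exists", "File")],
}

templates = {
    "not_accessed_for_months": "{0} has not been accessed for {1} months",
    "newer_version_exists": "a newer version of {0} exists",
}

def verbalize(rule, binding):
    reasons = []
    for pred, *args in rule["body"]:
        filled = [binding.get(a, a) for a in args]   # substitute bound variables
        reasons.append(templates[pred].format(*filled))
    head_pred, head_var = rule["head"]
    return f"{binding[head_var]} is {head_pred} because " + " and ".join(reasons)

explanation = verbalize(rule, {"File": "report_draft_v1.docx"})
print(explanation)
```

A user correction could then target either the classification ("this file is not irrelevant") or a specific reason in the explanation, which is the mutual-explanation loop the abstract describes.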


2020 ◽  
Vol 39 (5) ◽  
pp. 6579-6590
Author(s):  
Sandy Çağlıyor ◽  
Başar Öztayşi ◽  
Selime Sezgin

The motion picture industry is one of the largest industries worldwide and has significant importance in the global economy. Considering the high stakes and high risks in the industry, forecast models and decision support systems are gaining importance. Several attempts have been made to estimate the theatrical performance of a movie before or at the early stages of its release. Nevertheless, these models are mostly used for predicting domestic performance, and the industry still struggles to predict box office performance in overseas markets. In this study, the aim is to design a forecast model using different machine learning algorithms to estimate the theatrical success of US movies in Turkey. A dataset of 1559 movies is constructed from various sources. First, the independent variables are grouped as pre-release, distributor type, and international distribution based on their characteristics. The number of attendances is discretized into three classes. Four popular machine learning algorithms (artificial neural networks, decision tree regression, gradient boosting trees, and random forests) are employed, and the impact of each variable group is observed by comparing the performance of the resulting models. The number of target classes is then increased to five and eight, and the results are compared with models previously developed in the literature.
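The discretization step, turning a continuous attendance count into three classes, can be sketched with tertile cut-points. The attendance figures and the tertile-based binning are illustrative assumptions; the paper does not report its exact bin boundaries:

```python
# Sketch of discretizing a continuous target (attendance) into three
# classes using tertile cut-points. Values below are toy data.

def tertile_cutpoints(values):
    s = sorted(values)
    n = len(s)
    return s[n // 3], s[(2 * n) // 3]

def discretize(value, low_cut, high_cut):
    if value < low_cut:
        return "low"
    elif value < high_cut:
        return "medium"
    return "high"

attendance = [1200, 54000, 890000, 3100, 470000, 26000, 150000, 7800, 62000]
low_cut, high_cut = tertile_cutpoints(attendance)
classes = [discretize(a, low_cut, high_cut) for a in attendance]
print(classes)
```

Increasing the number of target classes to five or eight, as the study does later, amounts to choosing more cut-points in the same way.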


2017 ◽  
Author(s):  
James Gibson

Despite what we learn in law school about the “meeting of the minds,” most contracts are merely boilerplate—take-it-or-leave-it propositions. Negotiation is nonexistent; we rely on our collective market power as consumers to regulate contracts’ content. But boilerplate imposes certain information costs because it often arrives late in the transaction and is hard to understand. If those costs get too high, then the market mechanism fails. So how high are boilerplate’s information costs? A few studies have attempted to measure them, but they all use a “horizontal” approach—i.e., they sample a single stratum of boilerplate and assume that it represents the whole transaction. Yet real-world transactions often involve multiple layers of contracts, each with its own information costs. What is needed, then, is a “vertical” analysis, a study that examines fewer contracts of any one kind but tracks all the contracts the consumer encounters, soup to nuts. This Article presents the first vertical study of boilerplate. It casts serious doubt on the market mechanism and shows that existing scholarship fails to appreciate the full scale of the information cost problem. It then offers two regulatory solutions. The first works within contract law’s unconscionability doctrine, tweaking what the parties need to prove and who bears the burden of proving it. The second, more radical solution involves forcing both sellers and consumers to confront and minimize boilerplate’s information costs—an approach I call “forced salience.” In the end, the boilerplate experience is as deep as it is wide. Our empirical work should reflect that fact, and our policy proposals should too.


Author(s):  
Tausifa Jan Saleem ◽  
Mohammad Ahsan Chishti

The rapid progress in domains like machine learning and big data has created plenty of opportunities for data-driven applications, particularly in healthcare. Incorporating machine intelligence in healthcare can result in breakthroughs like precise disease diagnosis, novel methods of treatment, remote healthcare monitoring, drug discovery, and curtailment of healthcare costs. The implementation of machine intelligence algorithms on massive healthcare datasets is computationally expensive. However, substantial progress in computational power in recent years has facilitated the deployment of machine intelligence algorithms in healthcare applications. Motivated to explore these applications, this paper presents a review of research works dedicated to the implementation of machine learning on healthcare datasets. The reviewed studies are categorized into the following groups: (a) disease diagnosis and detection, (b) disease risk prediction, (c) health monitoring, (d) healthcare-related discoveries, and (e) epidemic outbreak prediction. The objective of this review is to help researchers in the field gain a comprehensive overview of machine learning applications in healthcare. Apart from revealing the potential of machine learning in healthcare, this paper will serve as a motivation to foster advanced research in the domain of machine intelligence-driven healthcare.


2020 ◽  
Author(s):  
Dianbo Liu

BACKGROUND Applications of machine learning (ML) to health care can have a great impact on people's lives. At the same time, medical data is usually big, requiring a significant amount of computational resources. Although this might not be a problem for the wide adoption of ML tools in developed nations, the availability of computational resources can very well be limited in developing nations and on mobile devices. This can prevent many people from benefiting from advances in ML applications for healthcare. OBJECTIVE In this paper we explored three methods to increase the computational efficiency of either a recurrent neural network (RNN) or a feedforward (deep) neural network (DNN) without compromising accuracy. We used in-patient mortality prediction on an intensive care dataset as our case study. METHODS We reduced the size of the RNN and DNN by pruning "unused" neurons. Additionally, we modified the RNN structure by adding a hidden layer to the RNN cell while reducing the total number of recurrent layers, thereby reducing the total number of parameters in the network. Finally, we implemented quantization on the DNN, forcing the weights to be 8 bits instead of 32 bits. RESULTS We found that all methods increased implementation efficiency (training speed, memory size, and inference speed) without reducing the accuracy of mortality prediction. CONCLUSIONS These improvements allow the implementation of sophisticated NN algorithms on devices with lower computational resources.
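The quantization step, mapping 32-bit float weights to 8-bit integers, can be sketched as symmetric linear quantization with a per-tensor scale. The scheme and toy weight values are assumptions for illustration; the paper may use a different quantization variant:

```python
# Sketch of 8-bit weight quantization: each float weight is mapped to an
# integer in [-127, 127] via a shared scale, and dequantized for use.
# Symmetric per-tensor quantization is assumed here for simplicity.

def quantize_int8(weights):
    scale = max(abs(w) for w in weights) / 127.0
    q = [round(w / scale) for w in weights]  # integers in [-127, 127]
    return q, scale

def dequantize(q, scale):
    return [x * scale for x in q]

weights = [0.52, -1.27, 0.003, 0.98, -0.4]
q, scale = quantize_int8(weights)
restored = dequantize(q, scale)
max_err = max(abs(w - r) for w, r in zip(weights, restored))
print(q, scale, max_err)
```

The rounding error is bounded by half the scale, which is why accuracy can survive the 4x reduction in weight storage the paper reports.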


2021 ◽  
Vol 186 (Supplement_1) ◽  
pp. 445-451
Author(s):  
Yifei Sun ◽  
Navid Rashedi ◽  
Vikrant Vaze ◽  
Parikshit Shah ◽  
Ryan Halter ◽  
...  

ABSTRACT Introduction Early prediction of the acute hypotensive episode (AHE) in critically ill patients has the potential to improve outcomes. In this study, we apply different machine learning algorithms to the MIMIC-III PhysioNet dataset, containing more than 60,000 real-world intensive care unit records, to test commonly used machine learning technologies and compare their performances. Materials and Methods Five classification methods (K-nearest neighbors, logistic regression, support vector machine, random forest, and a deep learning method called long short-term memory) are applied to predict an AHE 30 minutes in advance. An analysis comparing model performance when including versus excluding invasive features was conducted. To further study the pattern of the underlying mean arterial pressure (MAP), we apply linear regression to predict continuous MAP values over the next 60 minutes. Results The support vector machine yields the best performance in terms of recall (84%). Including the invasive features in the classification improves performance significantly, with both recall and precision increasing by more than 20 percentage points. We were able to predict the MAP 60 minutes in the future with a root mean square error (a frequently used measure of the differences between predicted and observed values) of 10 mmHg. After converting the continuous MAP predictions into binary AHE predictions, we achieve 91% recall and 68% precision. In addition to predicting AHE, the MAP predictions provide clinically useful information regarding the timing and severity of the AHE occurrence. Conclusion We were able to predict AHE with precision and recall above 80%, 30 minutes in advance, on this large real-world dataset. The regression model's predictions can provide a more fine-grained, interpretable signal to practitioners. Model performance is improved by the inclusion of invasive features when compared to predicting AHE from only the available, restricted set of noninvasive technologies. This demonstrates the importance of exploring more noninvasive technologies for AHE prediction.
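The conversion from continuous MAP predictions to binary AHE alerts can be sketched as a simple threshold plus recall/precision scoring. The 60 mmHg cutoff is a commonly used MAP threshold for hypotension used here for illustration, and the predicted values and labels are toy data, not results from the paper:

```python
# Sketch: threshold predicted MAP values into binary AHE alerts, then
# score recall and precision against ground truth. Toy data throughout.

AHE_CUTOFF = 60  # mmHg; illustrative hypotension threshold

def to_alerts(map_predictions, cutoff=AHE_CUTOFF):
    return [p < cutoff for p in map_predictions]

def recall_precision(pred, truth):
    tp = sum(p and t for p, t in zip(pred, truth))
    fn = sum((not p) and t for p, t in zip(pred, truth))
    fp = sum(p and (not t) for p, t in zip(pred, truth))
    return tp / (tp + fn), tp / (tp + fp)

predicted_map = [72, 58, 65, 49, 80, 55, 61, 90]   # mmHg, 60 min ahead
actual_ahe    = [False, True, True, True, False, True, False, False]

alerts = to_alerts(predicted_map)
recall, precision = recall_precision(alerts, actual_ahe)
print(alerts, recall, precision)
```

Because the alert is derived from a continuous MAP trajectory, the same predictions also indicate how far below the cutoff the pressure is expected to fall, which is the fine-grained signal the abstract highlights.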


2021 ◽  
Vol 51 (3) ◽  
pp. 9-16
Author(s):  
José Suárez-Varela ◽  
Miquel Ferriol-Galmés ◽  
Albert López ◽  
Paul Almasan ◽  
Guillermo Bernárdez ◽  
...  

During the last decade, Machine Learning (ML) has increasingly become a hot topic in the field of Computer Networks, and it is expected to be gradually adopted for a plethora of control, monitoring, and management tasks in real-world deployments. This creates the need for new generations of students, researchers, and practitioners with a solid background in ML applied to networks. During 2020, the International Telecommunication Union (ITU) organized the "ITU AI/ML in 5G Challenge", an open global competition that introduced some of the main current challenges in ML for networks to a broad audience. This large-scale initiative gathered 23 different challenges proposed by network operators, equipment manufacturers, and academia, and attracted a total of 1300+ participants from 60+ countries. This paper narrates our experience organizing one of the proposed challenges: the "Graph Neural Networking Challenge 2020". We describe the problem presented to participants, the tools and resources provided, some organizational aspects and participation statistics, an outline of the top-3 awarded solutions, and a summary of lessons learned along the way. As a result, this challenge leaves a curated set of educational resources openly available to anyone interested in the topic.

