Human Dignity, the Right to be Heard, and Algorithmic Judges

Author(s):  
André Dao

Abstract: This article examines the requirements of the right to a fair trial in the context of the use of machine-learning algorithms (MLAs) in judicial proceedings, with a focus on a core component of this right, the right to be heard. Though NGOs and scholars have begun to note that the right to a fair trial may be the best framework to address the challenges raised by MLAs, the actual requirements of the right in this novel context are underdeveloped. This article evaluates two normative approaches to filling this gap. The first approach, the argument from fairness, produces three broad categories of measures for ensuring fairness: measures for increasing the transparency and accountability of MLAs, measures for ensuring the participation of litigants, and measures for securing the impartiality of the human judge. However, this article argues that the argument from fairness cannot provide the necessary normative grounding for the right to a fair trial in the context of MLAs, as it collapses into the concept of ‘algorithmic fairness’. The second approach is based on the concept of human dignity as a status. The primary argument of this article is that the concept of human dignity as a status can provide better normative grounding for the right to a fair trial because it offers an account of human personhood that resists the de-humanization of data subjectification. That richer account of human personhood allows us to think of the trial not only as a vehicle for accurate outcomes, but also as a forum for the expression of human dignity.

Author(s):  
Prince Nathan S

Abstract: The Travelling Salesman Problem (TSP) is a very popular problem in the world of computer programming. It deals with the optimization of algorithms in an ever-changing scenario, as it gets more and more complex as the number of variables increases. The solutions that exist for this problem are optimal only for a small and definite number of cases. They cannot take into consideration the various factors involved when the problem is solved for the real world, where things change continuously. There is a need to adapt to these changes and find optimized solutions as the application runs. The ability to adapt to any kind of data, whether static or ever-changing, and to understand and solve it is a quality shown by machine learning algorithms. As advances in machine learning take place, there has been a good amount of research into how to solve NP-hard problems using machine learning. This report is a survey of the types of machine learning algorithms that can be used to solve the TSP. Different approaches, such as Ant Colony Optimization and Q-learning, are explored and compared. Ant Colony Optimization uses the concept of ants following pheromone levels, which lets them know where the most food is; applied to the TSP, the path with the highest pheromone level is chosen. Q-learning uses the concept of rewarding an agent for taking the right action in its current state and compounding those rewards. It is based on the exploitation concept, where the agent keeps learning on its own to maximize its reward. For the TSP, an agent is rewarded for finding a short path and is rewarded most when the chosen path is the shortest. Keywords: LINEAR REGRESSION, LASSO REGRESSION, RIDGE REGRESSION, DECISION TREE REGRESSOR, MACHINE LEARNING, HYPERPARAMETER TUNING, DATA ANALYSIS
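The Q-learning formulation described in the abstract (reward the agent for short tours) can be made concrete on a toy instance. Everything below is an illustrative assumption rather than the report's setup: the four unit-square cities, the tabular state of (current city, visited set), the negative-distance reward, and the hyperparameters.

```python
import math
import random

random.seed(0)

# Hypothetical four cities on a unit square; the optimal tour is the perimeter.
cities = [(0, 0), (0, 1), (1, 1), (1, 0)]

def dist(a, b):
    return math.dist(cities[a], cities[b])

def tour_length(tour):
    return sum(dist(tour[i], tour[(i + 1) % len(tour)]) for i in range(len(tour)))

# Tabular Q-learning: state = (current city, visited set), action = next
# unvisited city, reward = negative step distance, so shorter paths earn more.
Q = {}
alpha, gamma, eps = 0.5, 0.9, 0.2

for episode in range(2000):
    tour, visited = [0], {0}
    while len(visited) < len(cities):
        state = (tour[-1], frozenset(visited))
        actions = [c for c in range(len(cities)) if c not in visited]
        if random.random() < eps:
            a = random.choice(actions)  # explore
        else:
            a = max(actions, key=lambda c: Q.get((state, c), 0.0))  # exploit
        r = -dist(tour[-1], a)
        if len(visited) + 1 == len(cities):
            r -= dist(a, tour[0])  # close the tour on the final step
        nxt_state = (a, frozenset(visited | {a}))
        nxt_actions = [c for c in range(len(cities)) if c not in visited | {a}]
        best_next = max((Q.get((nxt_state, c), 0.0) for c in nxt_actions), default=0.0)
        old = Q.get((state, a), 0.0)
        Q[(state, a)] = old + alpha * (r + gamma * best_next - old)
        tour.append(a)
        visited.add(a)

# Greedy rollout of the learned policy.
tour, visited = [0], {0}
while len(visited) < len(cities):
    state = (tour[-1], frozenset(visited))
    a = max((c for c in range(len(cities)) if c not in visited),
            key=lambda c: Q.get((state, c), 0.0))
    tour.append(a)
    visited.add(a)
```

On this tiny instance the greedy rollout recovers the perimeter tour of length 4; real TSP instances need far richer state representations than an enumerable visited-set table.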


2021
Vol. ahead-of-print
Author(s):
Mona Bokharaei Nia
Mohammadali Afshar Kazemi
Changiz Valmohammadi
Ghanbar Abbaspour

Purpose
The increase in the number of healthcare wearable Internet of Things (IoT) options is making it difficult for individuals, healthcare experts and physicians to find the right smart device that best matches their requirements or treatments. The purpose of this research is to propose a framework for a recommender system that advises on the best device for the patient using machine learning algorithms and social media sentiment analysis. This approach will provide great value for patients, doctors, medical centers and hospitals, enabling them to provide the best advice and guidance in allocating the device for that particular time in the treatment process.

Design/methodology/approach
This data-driven approach comprises multiple stages that lead to classifying the diseases that a patient is currently facing, or is at risk of facing, by using and comparing the results of various machine learning algorithms. The proposed recommender framework then aggregates the specifications of wearable IoT devices along with the image of the wearable product, i.e. the user perception extracted from social media posts after applying sentiment analysis. Lastly, a proposed computation based on a genetic algorithm combines all the collected data and recommends a wearable IoT device for the patient.

Findings
The proposed conceptual framework illustrates how health record data, diseases, wearable devices, social media sentiment analysis and machine learning algorithms are interrelated to recommend the relevant wearable IoT devices for each patient. With the consultation of 15 physicians, each a specialist in their area, the proof-of-concept implementation shows an accuracy rate of up to 95% using 17 settings of machine learning algorithms over multiple disease-detection stages. Social media sentiment analysis was computed at 76% accuracy. To reach the final optimized result for each patient, the proposed formula using a genetic algorithm has been tested and its results presented.

Research limitations/implications
The research data were limited to recommendations for the best wearable devices for five types of patient diseases. The authors could not compare the results of this research with other studies because of the novelty of the proposed framework and, as such, the lack of available relevant research.

Practical implications
The emerging trend of wearable IoT devices is having a significant impact on people's lifestyles, with interest in healthcare and well-being a major driver of this growth. This framework can help accelerate the transformation of smart hospitals and can assist doctors in finding and suggesting the right wearable IoT device for their patients smartly and efficiently during treatment for various diseases. Furthermore, wearable device manufacturers can use the outcome of the proposed platform to develop personalized wearable devices for patients in the future.

Originality/value
In this study, by considering patient health, a disease-detection algorithm, wearable and IoT social media sentiment analysis, and a healthcare wearable device dataset, we were able to propose and test a framework for the intelligent recommendation of wearable and IoT devices, helping healthcare professionals and patients find wearable devices with a better understanding of their demands and experiences.
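The genetic-algorithm step can be pictured with a toy, mutation-only evolutionary sketch. Everything here is hypothetical: the device names, the two illustrative scores per device (specification match and social-media sentiment), and the single physician-labelled "best" pick used as the training signal; the paper's actual formula and data are not reproduced.

```python
import random

random.seed(1)

# Hypothetical candidate wearables with illustrative scores in [0, 1]:
# (specification match to the patient's profile, social-media sentiment).
devices = {
    "tracker_a": (0.9, 0.6),
    "tracker_b": (0.7, 0.9),
    "tracker_c": (0.4, 0.8),
}

def fitness(weights, dev):
    w_spec, w_sent = weights
    spec, sent = devices[dev]
    return w_spec * spec + w_sent * sent

# Stand-in training signal: a physician-labelled best pick for this patient.
labelled_best = "tracker_b"

def score(weights):
    top = max(devices, key=lambda d: fitness(weights, d))
    return 1.0 if top == labelled_best else 0.0

def mutate(w):
    return tuple(min(1.0, max(0.0, x + random.uniform(-0.2, 0.2))) for x in w)

# Mutation-only evolutionary loop: keep the better half, mutate to refill.
pop = [(random.random(), random.random()) for _ in range(20)]
for generation in range(30):
    pop.sort(key=score, reverse=True)
    survivors = pop[:10]
    pop = survivors + [mutate(random.choice(survivors)) for _ in range(10)]

best_weights = max(pop, key=score)
recommended = max(devices, key=lambda d: fitness(best_weights, d))
```

A production genetic algorithm would add crossover and a graded fitness over many labelled patients; the loop above only shows how evolved weights turn aggregated scores into a single recommendation.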


2020
Vol. 12 (18)
pp. 7642
Author(s):
Michael J. Ryoba
Shaojian Qu
Ying Ji
Deqiang Qu

Only a small percentage of crowdfunding projects succeed in securing funds, a fact that puts the sustainability of crowdfunding platforms at risk. Researchers have examined the influence of phased aspects of communication, drawn from updates and comments, on the success of crowdfunding campaigns, but in most cases they have focused on the combined effects of the aspects. This paper investigated the contribution to campaign success of various combinations of phased communication aspects from updates and comments; identifying the best combinations can help creators manage campaigns successfully by focusing on the communication aspects that matter. Metaheuristic and machine learning algorithms were used to search for and evaluate the best combination of phased communication aspects for predicting success using a Kickstarter dataset. The study found that the number of updates in phase one, the polarity of comments in phase two, the readability of updates and polarity of comments in phase three, and the polarity of comments in phase five are the most important communication aspects in predicting campaign success. Moreover, the success prediction accuracy with the aspects identified after phasing is higher than that of the baseline model without phasing. Our findings can help crowdfunding actors focus on the important communication aspects, leading to an improved likelihood of success.
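The search-and-evaluate loop described above can be sketched with a simple metaheuristic (hill climbing) over feature-inclusion masks, scored by a nearest-centroid classifier. The synthetic "campaigns", the six features, and the choice of which features carry signal are all invented for illustration; the paper's Kickstarter features and models are not reproduced.

```python
import random

random.seed(2)

# Synthetic stand-in for phased communication aspects: six features per
# campaign, of which only features 0 and 3 actually carry success signal.
def make_campaign():
    success = random.random() < 0.5
    x = [random.gauss(0, 1) for _ in range(6)]
    if success:
        x[0] += 2.0
        x[3] += 2.0
    return x, int(success)

data = [make_campaign() for _ in range(400)]
train, test = data[:300], data[300:]

def accuracy(mask):
    """Nearest-centroid success classifier restricted to the masked features."""
    cols = [i for i in range(6) if mask[i]]
    if not cols:
        return 0.0
    cent = {}
    for label in (0, 1):
        rows = [x for x, y in train if y == label]
        cent[label] = [sum(r[i] for r in rows) / len(rows) for i in cols]
    hits = 0
    for x, y in test:
        d = {L: sum((x[i] - c) ** 2 for i, c in zip(cols, cent[L])) for L in (0, 1)}
        hits += int(min(d, key=d.get) == y)
    return hits / len(test)

# Hill climbing over feature-inclusion masks: flip one feature at a time,
# keep the flip whenever accuracy does not get worse.
mask = [random.randint(0, 1) for _ in range(6)]
best = accuracy(mask)
for _ in range(200):
    cand = mask[:]
    cand[random.randrange(6)] ^= 1
    a = accuracy(cand)
    if a >= best:
        mask, best = cand, a
```

On this synthetic data the climb settles on masks that include the two informative features, mirroring how a metaheuristic can surface the communication aspects that actually predict success.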


Detection of spam reviews is an important task for present-day e-commerce websites and apps. We address the issue of fake review detection in user reviews in e-commerce applications, which is important for implementing anti-opinion spam. First, we analyze the characteristics of fake reviews, and then we apply machine learning algorithms to that data. Spam or fake reviews of items reduce the reliability of decision making and competitive analysis. The presence of fake reviews prevents customers from making the right decisions about sellers, and can also diminish the goodwill of the platform. Spammers can leave appraisals via social media networks, either praising or harming a specific item or firm; by recognizing these spammers, as well as their spam, we can better understand the opinions expressed on social networking sites. We present a framework called NetSpam, which uses spam features to model review datasets as heterogeneous information networks and to map the spam detection procedure into a classification problem in such networks.
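As a hedged illustration of the "extract review characteristics, then apply a machine learning algorithm" step, the sketch below trains a plain logistic regression (by gradient descent, no external libraries) on synthetic reviews. The three features (rating deviation, review length, same-day review burst) and all numbers are assumptions chosen for illustration; NetSpam itself uses a heterogeneous-network formulation that is not reproduced here.

```python
import math
import random

random.seed(3)

# Invented review features often discussed in fake-review detection: rating
# deviation from the item mean, review length (scaled), same-day burst flag.
def make_review(spam):
    dev = abs(random.gauss(2.5 if spam else 0.5, 0.5))
    length = max(1.0, random.gauss(20 if spam else 60, 10)) / 100.0
    burst = 1.0 if random.random() < (0.7 if spam else 0.1) else 0.0
    return [dev, length, burst], 1.0 if spam else 0.0

data = [make_review(i % 2 == 0) for i in range(600)]
train, test = data[:500], data[500:]

def predict(w, b, x):
    # Sigmoid of the linear score: probability the review is spam.
    z = sum(wi * xi for wi, xi in zip(w, x)) + b
    return 1.0 / (1.0 + math.exp(-z))

# Logistic regression trained by batch gradient descent.
w, b, lr = [0.0, 0.0, 0.0], 0.0, 0.5
for epoch in range(300):
    gw, gb = [0.0, 0.0, 0.0], 0.0
    for x, y in train:
        err = predict(w, b, x) - y
        for i in range(3):
            gw[i] += err * x[i]
        gb += err
    w = [wi - lr * gi / len(train) for wi, gi in zip(w, gw)]
    b -= lr * gb / len(train)

acc = sum((predict(w, b, x) > 0.5) == (y == 1.0) for x, y in test) / len(test)
```

The point is the pipeline shape, handcrafted review features feeding a classifier, not the specific model; NetSpam replaces this flat feature view with a network of reviews, reviewers and items.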


2021
Vol. 8
Author(s):
Yoshihiko Raita
Carlos A. Camargo
Liming Liang
Kohei Hasegawa

Clinicians handle a growing amount of clinical, biometric, and biomarker data. In this “big data” era, there is an emerging faith that the answer to all clinical and scientific questions resides in “big data” and that data will transform medicine into precision medicine. However, data by themselves are useless. It is the algorithms encoding causal reasoning and domain (e.g., clinical and biological) knowledge that prove transformative. The recent introduction of (health) data science presents an opportunity to re-think this data-centric view. For example, while precision medicine seeks to provide the right prevention and treatment strategy to the right patients at the right time, its realization cannot be achieved by algorithms that operate exclusively in data-driven prediction modes, as most machine learning algorithms do. Better understanding of data science and its tasks is vital to interpret findings and translate new discoveries into clinical practice. In this review, we first discuss the principles and major tasks of data science by organizing it into three defining tasks: (1) association and prediction, (2) intervention, and (3) counterfactual causal inference. Second, we review commonly used data science tools with examples in the medical literature. Lastly, we outline current challenges and future directions in the fields of medicine, elaborating on how data science can enhance clinical effectiveness and inform medical practice. As machine learning algorithms become ubiquitous tools to handle quantitatively “big data,” their integration with causal reasoning and domain knowledge is instrumental to qualitatively transform medicine, which will, in turn, improve health outcomes of patients.
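The gap between task (1), association and prediction, and tasks (2)-(3), intervention and counterfactual inference, can be made concrete with a small simulation. The scenario is invented: a confounder (say, disease severity) drives both treatment assignment and the outcome, while the treatment itself does nothing; purely data-driven association then manufactures a treatment "effect" that causal (back-door) adjustment removes.

```python
import random

random.seed(5)

# Invented scenario: severity z raises both treatment use t and bad outcome y,
# while t itself has no effect on y at all.
n = 20000
rows = []
for _ in range(n):
    z = random.random() < 0.5                    # confounder (severity)
    t = random.random() < (0.8 if z else 0.2)    # sicker patients treated more
    y = random.random() < (0.7 if z else 0.1)    # outcome driven by z only
    rows.append((z, t, y))

def p_y(subset):
    return sum(y for _, _, y in subset) / len(subset)

# Purely data-driven association: treatment looks strongly "harmful".
naive_diff = p_y([r for r in rows if r[1]]) - p_y([r for r in rows if not r[1]])

# Back-door adjustment: stratify on z, then average; the effect vanishes.
adjusted = 0.0
for z_val in (True, False):
    stratum = [r for r in rows if r[0] == z_val]
    weight = len(stratum) / n
    adjusted += weight * (p_y([r for r in stratum if r[1]])
                          - p_y([r for r in stratum if not r[1]]))
```

A prediction-only algorithm would happily exploit the naive association; only the encoded causal knowledge (that z confounds t and y) licenses the adjusted, and here null, effect estimate.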


2022
pp. 21-28
Author(s):
Dijana Oreški

The ability to generate data has never been as powerful as today, when quintillions of bytes of data are generated daily. In the field of machine learning, a large number of algorithms have been developed that can be used for intelligent data analysis and to solve prediction and descriptive problems in different domains. The developed algorithms perform differently on different problems: if one algorithm works better on one dataset, the same algorithm may work worse on another dataset. The reason is that each dataset has different features in terms of local and global characteristics. It is therefore imperative to know the intrinsic behavior of algorithms on different types of datasets and to choose the right algorithm for the problem at hand. To address this problem, this paper makes a scientific contribution to the meta-learning field by proposing a framework for identifying the specific characteristics of datasets in two domains of the social sciences, education and business, and develops meta-models based on: ranking algorithms, calculating the correlation of ranks, developing a multi-criteria model, a two-component index, and prediction based on machine learning algorithms. Each of the meta-models serves as the basis for the development of a version of an intelligent system. Application of such a framework should include a comparative analysis of a large number of machine learning algorithms on a large number of datasets from the social sciences.
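One of the proposed meta-model ingredients, ranking algorithms and calculating the correlation of ranks, can be sketched as follows. The four algorithms and their accuracies on one "education" and one "business" dataset are made-up numbers for illustration only.

```python
# Rank algorithms by accuracy on each dataset, then compare the two rankings
# with Spearman's rank correlation (no ties in this toy example).
def rank(scores):
    ordered = sorted(scores, key=scores.get, reverse=True)
    return {alg: i + 1 for i, alg in enumerate(ordered)}  # best algorithm = 1

def spearman(scores_a, scores_b):
    ra, rb = rank(scores_a), rank(scores_b)
    n = len(ra)
    d2 = sum((ra[alg] - rb[alg]) ** 2 for alg in ra)
    return 1 - 6 * d2 / (n * (n * n - 1))

# Made-up accuracies of four algorithms on an education and a business dataset.
edu = {"svm": 0.85, "dt": 0.81, "knn": 0.74, "nb": 0.70}
bus = {"svm": 0.83, "knn": 0.79, "dt": 0.77, "nb": 0.66}

rho = spearman(edu, bus)
```

A high rank correlation across two datasets suggests their characteristics favour the same algorithms, which is exactly the kind of signal a meta-learning framework aggregates over many datasets.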


Stroke
2020
Vol. 51 (Suppl_1)
Author(s):
Katherine Poinsatte
Denise M Ramirez
Apoorva Ajay
Dene Betz
Erik Plautz
...  

Background: It is a challenge to characterize post-stroke changes in neural connectivity at microscopic scale across the entire rodent brain. Serial two-photon tomography (STPT) is an advanced laser-scanning microscopy technique that collects serial fluorescence images across the brain and reconstructs 3D datasets. We examined changes in motor connectivity after cortical infarcts in mice, using retrograde viral tract tracing, STPT imaging, an automated registration workflow, and machine learning algorithms. Methods: Young male C57/B6 mice received a photothrombotic motor cortex (M1) stroke (n=3) or sham surgery (n=3). 15 days later, a retrograde pseudorabies trans-synaptic virus encoding fluorescent protein was injected into the left forelimb flexor muscles to label motor system projections. Mice were sacrificed 3 weeks post-stroke. STPT images were analyzed using supervised machine learning (pixel-wise random forest via the “ilastik” software package) and datasets were mapped to the Allen Mouse Brain Atlas for region-specific visualization and quantification of fluorescent signals. Results: Machine learning algorithms successfully identified neuronal cell bodies, neuronal processes, and ischemic tissue throughout the brain. The fluorescent signal of cells and neuronal processes was higher in the right M1 and SS of uninjured mice than in the left M1 and SS. After stroke, this signal was diminished in the right M1 and SS. Labeled neurons were also reduced in the left M1, suggesting the presence of secondary transcortical connections. Conclusions: STPT generates whole-brain datasets that, when analyzed with machine learning algorithms, reveal early alterations in post-stroke neural connectivity in the corticospinal tract. Further studies utilizing monosynaptic and conditional viral tracers will better assess the full spectrum of connectivity changes during post-stroke functional recovery.


2018
Vol. 7 (3.4)
pp. 197
Author(s):
Deepali Vora
Kamatchi Iyer

Predictive modelling is a statistical technique to predict future behaviour, and machine learning is one of the most popular methods for doing so. From the plethora of algorithms available, it is always interesting to find out which algorithm or technique is most suitable for the data under consideration. Educational data mining is the area of research where predictive modelling is most useful. Predicting the grades of undergraduate students accurately can help students as well as educators in many ways, and early prediction can help motivate students to select their future endeavours wisely. This paper presents the results of various machine learning algorithms applied to data collected from undergraduate studies and evaluates their effectiveness on that data. Two major challenges are addressed: choosing the right features and choosing the right algorithm for prediction.


2021
Vol. 266
pp. 02001
Author(s):
Li Eckart
Sven Eckart
Margit Enke

Machine learning is a popular way to find patterns and relationships in highly complex datasets. With today's advancements in storage and computational capabilities, some machine learning techniques are becoming suitable for real-world applications. The aim of this work is to conduct a comparative analysis of machine learning algorithms and conventional statistical techniques. These methods have long been used for clustering large amounts of data and extracting knowledge in a wide variety of science fields. However, a lack of central knowledge of the different methods, their specific requirements for the dataset, and the limitations of the individual methods is an obstacle to their correct use. New machine learning algorithms could be integrated even more strongly into current evaluation practice if the right choice of methods were easier to make. In the present work, several machine learning algorithms are listed. Four methods (artificial neural network, regression method, self-organizing map, k-means algorithm) are compared in detail and possible selection criteria are pointed out. Finally, an assessment of the fields of work and application and possible limitations is provided, which should help in making choices for specific interdisciplinary analyses.
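As a minimal sketch of one of the four compared methods, the k-means algorithm below is implemented from scratch on two invented, well-separated 2-D blobs. The data, the deterministic two-point initialisation, and the fixed iteration count are simplifying assumptions (the initialisation shown only supports k ≤ 2); library implementations use smarter seeding such as k-means++.

```python
import math
import random

random.seed(4)

# Two well-separated 2-D blobs (illustrative data only).
pts = [(random.gauss(0, 0.3), random.gauss(0, 0.3)) for _ in range(50)] \
    + [(random.gauss(5, 0.3), random.gauss(5, 0.3)) for _ in range(50)]

def kmeans(points, k, iters=20):
    # Deterministic initialisation from the data, for reproducibility (k <= 2).
    centers = [points[0], points[-1]][:k]
    for _ in range(iters):
        # Assignment step: each point joins the cluster of its nearest center.
        clusters = [[] for _ in range(k)]
        for p in points:
            i = min(range(k), key=lambda c: math.dist(p, centers[c]))
            clusters[i].append(p)
        # Update step: move each center to its cluster mean.
        centers = [
            (sum(x for x, _ in cl) / len(cl), sum(y for _, y in cl) / len(cl))
            if cl else centers[i]
            for i, cl in enumerate(clusters)
        ]
    return centers

centers = kmeans(pts, 2)
```

The two-step structure (assign, then re-center) is the whole algorithm; the selection-criteria question raised above is precisely whether such a centroid-based method suits a given dataset, e.g. k-means assumes roughly spherical, similarly sized clusters.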

