Transparency in Predictive Algorithms: A Judicial Perspective

Author(s):  
Md. Abdul Malek

Notwithstanding the hyperbole surrounding AI's promises for judicial modernization, deep concerns have arisen, spanning unfairness, privacy invasion, bias, and discrimination to the lack of transparency and legitimacy. Critics have branded the application of such systems in judicial precincts as ethically, legally, and technically distressing. Against the backdrop of an ongoing transparency debate, this paper attempts to revisit, extend, and contribute to that debate from a judicial perspective. Since preserving and promoting trust and confidence in the judiciary as a whole is imperative, it explores how and why justice algorithms ought to be transparent as to their training data, methods, and outcomes. The paper concludes by delineating tentative paths for dispelling black-box effects and suggesting a way forward for the use of algorithms in high-stakes areas such as judicial settings.

2021 ◽  
Author(s):  
Md. Abdul Malek

Although the hyperbole surrounding the promises of AI algorithms has carried them into judicial precincts, it has also generated robust concerns, spanning unfairness, privacy invasion, bias, discrimination, and the lack of legitimacy, transparency, and explainability. Notably, critics have already denounced the current use of predictive algorithms in the judicial decision-making process in many ways, branding them as ethically, legally, and technically distressing. Against the backdrop of an ongoing transparency debate, this paper attempts to revisit, extend, and contribute to that debate from a judicial perspective. Since there is good cause to preserve and promote trust and confidence in the judiciary as a whole, it explores how and why justice algorithms ought to be transparent as to their outcomes, with a sufficient level of explainability, interpretability, intelligibility, and contestability. The paper concludes by delineating tentative paths for dispelling black-box effects and suggesting a way forward for the use of algorithms in high-stakes areas such as judicial settings.


Entropy ◽  
2020 ◽  
Vol 23 (1) ◽  
pp. 18
Author(s):  
Pantelis Linardatos ◽  
Vasilis Papastefanopoulos ◽  
Sotiris Kotsiantis

Recent advances in artificial intelligence (AI) have led to its widespread industrial adoption, with machine learning systems demonstrating superhuman performance in a significant number of tasks. However, this surge in performance has often been achieved through increased model complexity, turning such systems into "black box" approaches and causing uncertainty regarding the way they operate and, ultimately, the way they come to decisions. This ambiguity has made it problematic for machine learning systems to be adopted in sensitive yet critical domains where their value could be immense, such as healthcare. As a result, scientific interest in the field of Explainable Artificial Intelligence (XAI), which is concerned with the development of new methods that explain and interpret machine learning models, has been strongly reignited in recent years. This study focuses on machine learning interpretability methods; more specifically, it presents a literature review and taxonomy of these methods, along with links to their programming implementations, in the hope that the survey will serve as a reference point for both theorists and practitioners.
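To make the kind of method the survey catalogues concrete, the following is a minimal sketch of one model-agnostic interpretability technique, permutation feature importance, using scikit-learn. The dataset and model choice are illustrative assumptions, not drawn from the study.

```python
# A minimal sketch of permutation feature importance, one of the
# model-agnostic interpretability methods a survey like this covers.
# Dataset and model are illustrative assumptions.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# An opaque ("black box") model: hundreds of trees, no simple closed form.
model = RandomForestClassifier(n_estimators=200, random_state=0)
model.fit(X_train, y_train)

# Shuffle one feature at a time and measure how much held-out accuracy
# drops; a large drop means the model relies heavily on that feature.
result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)
ranked = sorted(zip(X.columns, result.importances_mean), key=lambda t: -t[1])
for name, imp in ranked[:5]:
    print(f"{name}: {imp:.3f}")
```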


2020 ◽  
Vol 10 (22) ◽  
pp. 8079
Author(s):  
Sanglee Park ◽  
Jungmin So

State-of-the-art neural network models are actively used in various fields, but it is well known that they are vulnerable to adversarial example attacks. Despite sustained efforts, making models robust against such attacks has proven to be a very difficult task. While many defense approaches have been shown to be ineffective, adversarial training remains one of the promising methods. In adversarial training, the training data are augmented with "adversarial" samples generated using an attack algorithm. If the attacker uses a similar attack algorithm to generate adversarial examples, the adversarially trained network can be quite robust to the attack. However, there are numerous ways of creating adversarial examples, and the defender does not know which algorithm the attacker may use. A natural question is: can we use adversarial training to train a model robust to multiple types of attack? Previous work has shown that, when a network is trained with adversarial examples generated from multiple attack methods, it is still vulnerable to white-box attacks, where the attacker has complete access to the model parameters. In this paper, we study this question in the context of black-box attacks, a more realistic assumption for practical applications. Experiments with the MNIST dataset show that adversarially training a network with one attack method helps defend against that particular method but has limited effect against other methods. In addition, even if the defender trains a network with multiple types of adversarial examples and the attacker attacks with one of those methods, the network can still lose accuracy if the attacker uses a different data augmentation strategy on the target network. These results show that it is very difficult to build a robust network using adversarial training, even in black-box settings where the attacker has restricted information about the target network.
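As an illustration of the adversarial-training procedure described above, here is a minimal PyTorch sketch using FGSM (the fast gradient sign method) as the attack. The epsilon value, the assumption that inputs lie in [0, 1] (as for MNIST), and the loss weighting are illustrative choices, not the authors' exact configuration.

```python
# A minimal sketch of one adversarial-training step with FGSM as the
# attack; eps and the equal loss weighting are assumptions.
import torch
import torch.nn.functional as F

def fgsm(model, x, y, eps):
    """Fast gradient sign method: x_adv = x + eps * sign(grad_x loss)."""
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    loss.backward()
    # Step in the direction that increases the loss, then clip to the
    # valid pixel range assumed for the data (MNIST-style [0, 1]).
    return (x + eps * x.grad.sign()).clamp(0.0, 1.0).detach()

def adversarial_training_step(model, optimizer, x, y, eps=0.3):
    """Train on the clean batch augmented with FGSM adversarial samples."""
    x_adv = fgsm(model, x, y, eps)
    optimizer.zero_grad()
    loss = F.cross_entropy(model(x), y) + F.cross_entropy(model(x_adv), y)
    loss.backward()
    optimizer.step()
    return loss.item()
```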


Author(s):  
Mark Darius Juszczak

Latour’s Black Box is a dynamic gateway that segregates the continuum of scientific processes into those that occur before and after a certain critical point: the shift from ‘science in the making’ to ‘ready made science’. For reasons that will be explored in this paper, the field of data science does not appear to follow a unidirectional heuristic in the way that technologies transition from ‘in the making’ to ‘ready made’. This paper is a theoretical analysis of the extent to which the fundamental technologies of data science either violate or adhere to this heuristic. If data science does not follow a unidirectional heuristic, is there any evidence as to the causes of this dynamic and, furthermore, are the limitations, as they may exist, a function of technological advancement or a function of the theoretical limits of the field of data science itself?


2020 ◽  
Author(s):  
Linda Cambon ◽  
François ALLA

Abstract
Background: A better understanding of what happens inside the "black box" of population health interventions is needed because of their inherent complexity. The theory-driven intervention/evaluation paradigm is one approach used for this purpose. However, barriers related to semantic or practical issues stand in the way of its complete integration into evaluation designs.
Methods and discussion: In this study, we aimed to clarify how various theories, models, and frameworks could contribute to conceiving a grounded theory, called interventional system theory (ISyT), suitable for understanding the black box of population health interventions and acknowledging their complexity. We suggest that ISyT could guide evaluation processes, whatever evaluation design is applied.
Conclusion: We believe that such clarification could encourage the use of theories in complex intervention evaluations and help identify ways to consider the transferability and scalability of interventions.


2021 ◽  
Vol 4 (1) ◽  
pp. 116-125
Author(s):  
Indah Purwitasari Ihsan ◽  
Sukriyah Buwarda ◽  
Hilda Novianty ◽  
Ifsan Aditya Putra

The use of manual keys to lock and unlock doors remains suboptimal: owners often forget where they stored a key, or lose it altogether. Because voice patterns are a biometric with distinct characteristics for every person, voice can serve as an alternative solution, acting as a key that opens the door automatically and more efficiently. The automated door-lock system was built using an Elechouse V3 module for voice recognition and a solenoid door lock as the automatic latch. Functional testing using the black-box method showed that all assembled components worked as intended. The system's success rate was tested under noise, no-noise, and distance conditions. On the training data, the success rate without noise was 100%, while with noise of 50.0 dB to 70 dB the average success rate was 56.2%. For distances of 30 cm to 180 cm, the average success rate was 40.51%; the farthest working distance was 150 cm, with a success rate of 5%. On the testing data, the success rate without noise was 0%, while with noise of 50.0 dB to 70 dB the average success rate was 1.9%; for distances of 30 cm to 180 cm, the average success rate was 0%.
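The published system runs on an Arduino with the Elechouse V3 module driving a solenoid lock. Purely as a hypothetical sketch of the same control flow (not the authors' code), the logic can be expressed in Python with stand-in functions for the recognizer and the solenoid output; every name and constant below is an assumption for illustration.

```python
# Hypothetical control-flow sketch of a voice-activated solenoid lock.
# recognize_command() and set_solenoid() are stand-ins, not a real
# driver API for the Elechouse V3 hardware.
import time

OPEN_COMMAND_ID = 1   # index of the enrolled "open" voice command (assumption)
UNLOCK_SECONDS = 5    # how long the latch stays released (assumption)

def recognize_command():
    """Stand-in for the voice module: return the index of a recognized
    trained command, or None if nothing matched this polling cycle."""
    return None  # a real system would query the recognition module here

def set_solenoid(released):
    """Stand-in for the GPIO output that energizes the solenoid driver."""
    print("unlocked" if released else "locked")

def main():
    while True:
        if recognize_command() == OPEN_COMMAND_ID:
            set_solenoid(True)          # release the latch
            time.sleep(UNLOCK_SECONDS)  # hold the door open briefly
            set_solenoid(False)         # re-engage the lock
        time.sleep(0.1)                 # poll the recognizer ~10x per second

if __name__ == "__main__":
    main()
```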


2020 ◽  
pp. 146144482093255
Author(s):  
Francesco D’Amato ◽  
Milena Cassella

Although a great amount of research has addressed the growing relevance of crowdfunding for cultural production, little is known about how the actual functioning of crowdfunding platforms affects both the way crowdfunding is conceived and practiced and the financing opportunities and performance of different projects. The article illustrates how this occurs in the case of an Italian crowdfunding platform, through its activities of project classification, evaluation, and campaign consulting, which are not visible from the outside. It also shows how these activities are shaped by a constant search for balance between meritocratic principles and company sustainability. By opening what is usually treated as an organizational black box, the article provides an original contribution that enriches our understanding of how crowdfunding platforms can influence the production of culture, as well as subjectivities characterized by the neoliberal ethos of self-management and self-entrepreneurship.


Author(s):  
Yonatan Mendel ◽  
Abeer AlNajjar

The Introduction, co-written by Yonatan Mendel and Abeer AlNajjar, highlights that language – including one's verbal and written expression, terminology selected or forgotten, vowels pronounced or not pronounced, and locations written on road signs – offers researchers a nuanced, honest, and deep analysis of society, of what it tells us and of what it keeps from us. The authors highlight that for Yasir Suleiman, to whom the book is dedicated, and as seen in his extensive academic research, language deeply and profoundly exposes social and political realities. The authors therefore refer to language in its larger context, including the words that form its building blocks, the context in which those words are written, and the phraseology selected, and to the way that the study of language may emerge as the 'black box' of human social and political journeys.


2019 ◽  
Vol 26 (12) ◽  
pp. 1651-1654 ◽  
Author(s):  
Ben Van Calster ◽  
Laure Wynants ◽  
Dirk Timmerman ◽  
Ewout W Steyerberg ◽  
Gary S Collins

Abstract There is increasing awareness that the methodology and findings of research should be transparent. This includes studies using artificial intelligence to develop predictive algorithms that make individualized diagnostic or prognostic risk predictions. We argue that it is paramount to make the algorithm behind any prediction publicly available. This allows independent external validation, assessment of performance heterogeneity across settings and over time, and algorithm refinement or updating. Online calculators and apps may aid uptake if accompanied by sufficient information. For algorithms based on "black box" machine learning methods, software for algorithm implementation is a must. Hiding algorithms for commercial exploitation is unethical, because there is no way to assess whether they work as advertised or to monitor when and how they are updated. Journals and funders should demand maximal transparency for publications on predictive algorithms, and clinical guidelines should recommend only publicly available algorithms.
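As a minimal sketch of what publishing "the algorithm behind any prediction" can look like in practice, consider a logistic regression risk model whose intercept and coefficients are fully disclosed. The coefficients and predictors below are hypothetical, chosen only to show that a published formula can be independently re-implemented and validated, with no black box in the way.

```python
# Hypothetical example of a fully disclosed prediction model: a logistic
# regression whose intercept, coefficients, and predictors are published.
# All values are invented for illustration.
import math

INTERCEPT = -3.2
COEFFICIENTS = {                 # log-odds change per unit of each predictor
    "age_decades": 0.45,         # age in decades
    "log_biomarker": 0.80,       # log-transformed biomarker level
    "prior_event": 1.10,         # 1 if a prior event occurred, else 0
}

def predicted_risk(age_decades, log_biomarker, prior_event):
    """Individualized risk = 1 / (1 + exp(-(b0 + sum(b_i * x_i))))."""
    lp = (INTERCEPT
          + COEFFICIENTS["age_decades"] * age_decades
          + COEFFICIENTS["log_biomarker"] * log_biomarker
          + COEFFICIENTS["prior_event"] * prior_event)
    return 1.0 / (1.0 + math.exp(-lp))

# An external team can now compute predictions on its own cohort and
# assess calibration and discrimination against observed outcomes.
print(f"Predicted risk: {predicted_risk(6.5, 1.2, 1):.1%}")
```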

