Towards Intelligent Regulation of Artificial Intelligence

2019 ◽  
Vol 10 (1) ◽  
pp. 41-59 ◽  
Author(s):  
Miriam C BUITEN

Artificial intelligence (AI) is becoming a part of our daily lives at a fast pace, offering myriad benefits for society. At the same time, there is concern about the unpredictability and uncontrollability of AI. In response, legislators and scholars call for more transparency and explainability of AI. This article considers what it would mean to require transparency of AI. It advocates looking beyond the opaque concept of AI, focusing on the concrete risks and biases of its underlying technology: machine-learning algorithms. The article discusses the biases that algorithms may produce through the input data, the testing of the algorithm and the decision model. Any transparency requirement for algorithms should result in explanations of these biases that are both understandable for the prospective recipients, and technically feasible for producers. Before asking how much transparency the law should require from algorithms, we should therefore consider if the explanation that programmers could offer is useful in specific legal contexts.

Author(s):  
M. A. Fesenko ◽  
G. V. Golovaneva ◽  
A. V. Miskevich

A new model, «Prognosis of men's reproductive function disorders», was developed. Machine learning algorithms (artificial intelligence) were used for this purpose, and the model shows high prognostic accuracy. The aim of applying the model is to prioritize diagnostic and preventive measures so as to minimize complications of reproductive system diseases and preserve workers' health and efficiency.


2020 ◽  
Vol 237 (12) ◽  
pp. 1430-1437
Author(s):  
Achim Langenbucher ◽  
Nóra Szentmáry ◽  
Jascha Wendelstein ◽  
Peter Hoffmann

Abstract Background and Purpose Over the last decade, artificial intelligence and machine learning algorithms have become increasingly established for the screening and detection of diseases and pathologies, as well as for describing interactions between measures where classical methods are too complex or fail. The purpose of this paper is to model the measured postoperative position of an intraocular lens implant after cataract surgery, based on preoperatively assessed biometric effect sizes, using machine learning techniques. Patients and Methods In this study, we enrolled 249 eyes of patients who underwent elective cataract surgery at Augenklinik Castrop-Rauxel. Eyes were measured preoperatively with the IOLMaster 700 (Carl Zeiss Meditec), as well as preoperatively and postoperatively with the Casia 2 OCT (Tomey). Based on the preoperative effect sizes (axial length, corneal thickness, internal anterior chamber depth, thickness of the crystalline lens, mean corneal radius and corneal diameter), a selection of 17 machine learning algorithms was tested for prediction performance in calculating the internal anterior chamber depth (AQD_post) and the axial position of the equatorial plane of the lens in the pseudophakic eye (LEQ_post). Results The 17 machine learning algorithms (from 4 families) varied in root mean squared/mean absolute prediction error between 0.187/0.139 mm and 0.255/0.204 mm (AQD_post) and between 0.183/0.135 mm and 0.253/0.206 mm (LEQ_post), using 5-fold cross-validation. The Gaussian process regression model with an exponential kernel showed the best performance in terms of root mean squared error for prediction of AQD_post and LEQ_post. If the entire dataset is used (without splitting into training and validation data), a simple multivariate linear regression model yields a root mean squared prediction error for AQD_post/LEQ_post of 0.188/0.187 mm, versus 0.166/0.159 mm for the best-performing Gaussian process regression model. Conclusion In this paper we wanted to show the principles of supervised machine learning applied to predicting the measured physical postoperative axial position of intraocular lenses. Based on our limited data pool and the algorithms used in our setting, the benefit of machine learning algorithms appears limited compared to a standard multivariate regression model.
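The comparison the abstract describes can be sketched in a few lines of NumPy. This is a toy illustration only, assuming synthetic data (not the study's 249-eye dataset, length scales, or noise levels): 5-fold cross-validated root mean squared error of a plain multivariate linear regression versus a Gaussian process posterior mean with an exponential kernel.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for the six preoperative biometric predictors
# (axial length, corneal thickness, ...) and one positional target.
X = rng.normal(size=(249, 6))
y = X @ rng.normal(size=6) + 0.2 * rng.normal(size=249)

def exp_kernel(A, B, length=1.0):
    # Exponential kernel: k(a, b) = exp(-||a - b|| / length)
    d = np.linalg.norm(A[:, None, :] - B[None, :, :], axis=-1)
    return np.exp(-d / length)

def gpr_predict(Xtr, ytr, Xte, noise=0.1):
    # Gaussian process posterior mean with observation noise on the diagonal.
    K = exp_kernel(Xtr, Xtr) + noise * np.eye(len(Xtr))
    alpha = np.linalg.solve(K, ytr)
    return exp_kernel(Xte, Xtr) @ alpha

def linreg_predict(Xtr, ytr, Xte):
    # Ordinary multivariate linear regression with an intercept column.
    A = np.c_[np.ones(len(Xtr)), Xtr]
    coef, *_ = np.linalg.lstsq(A, ytr, rcond=None)
    return np.c_[np.ones(len(Xte)), Xte] @ coef

def cv_rmse(predict, X, y, k=5):
    # k-fold cross-validated root mean squared prediction error.
    folds = np.array_split(rng.permutation(len(X)), k)
    mses = []
    for i in range(k):
        te = folds[i]
        tr = np.concatenate([folds[j] for j in range(k) if j != i])
        pred = predict(X[tr], y[tr], X[te])
        mses.append(np.mean((pred - y[te]) ** 2))
    return float(np.sqrt(np.mean(mses)))

rmse_lin = cv_rmse(linreg_predict, X, y)
rmse_gpr = cv_rmse(gpr_predict, X, y)
print(f"linear regression RMSE: {rmse_lin:.3f}, GPR RMSE: {rmse_gpr:.3f}")
```

On data generated by a linear model, as here, the linear regression is the right inductive bias; the study's point is similar in spirit, in that the flexible GPR beat the linear baseline only modestly on their real data.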


2021 ◽  
Vol 10 (2) ◽  
pp. 205846012199029
Author(s):  
Rani Ahmad

Background The scope and productivity of artificial intelligence applications in health science and medicine, particularly in medical imaging, are rapidly progressing, with relatively recent developments in big data and deep learning and increasingly powerful computer algorithms. Accordingly, there are a number of opportunities and challenges for the radiological community. Purpose To provide a review of the challenges and barriers experienced in diagnostic radiology on the basis of the key clinical applications of machine learning techniques. Material and Methods Studies published in 2010–2019 that report on the efficacy of machine learning models were selected. A single contingency table was selected for each study to report the highest accuracy of radiology professionals and machine learning algorithms, and a meta-analysis of the studies was conducted based on these contingency tables. Results The specificity of the deep learning models ranged from 39% to 100%, whereas sensitivity ranged from 85% to 100%. The pooled sensitivity and specificity were 89% and 85% for the deep learning algorithms for detecting abnormalities, compared to 75% and 91% for radiology experts, respectively. The pooled specificity and sensitivity for the comparison between radiology professionals and deep learning algorithms were 91% and 81% for the deep learning models and 85% and 73% for the radiology professionals (p < 0.001), respectively. The pooled detection sensitivity was 82% for health-care professionals and 83% for deep learning algorithms (p < 0.005). Conclusion Radiomic information extracted through machine learning programs from images may not be discernible through visual examination, and thus may improve the prognostic and diagnostic value of data sets.
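The pooling step underlying results like these can be illustrated with a minimal sketch. The contingency-table counts below are invented, and the aggregation is a simple summation across tables; real meta-analyses typically weight studies (e.g. by inverse variance) rather than pooling raw counts.

```python
# Each study contributes one 2x2 contingency table: (TP, FP, FN, TN).
# Counts are hypothetical placeholders, not data from the review.
tables = [
    (90, 10, 12, 88),
    (45, 8, 5, 60),
    (120, 20, 15, 150),
]

tp = sum(t[0] for t in tables)
fp = sum(t[1] for t in tables)
fn = sum(t[2] for t in tables)
tn = sum(t[3] for t in tables)

sensitivity = tp / (tp + fn)  # true-positive rate among truly abnormal cases
specificity = tn / (tn + fp)  # true-negative rate among truly normal cases
print(f"pooled sensitivity: {sensitivity:.1%}, pooled specificity: {specificity:.1%}")
```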


Author(s):  
Joel Weijia Lai ◽  
Candice Ke En Ang ◽  
U. Rajendra Acharya ◽  
Kang Hao Cheong

Artificial intelligence in healthcare employs machine learning algorithms to emulate human cognition in the analysis of complicated or large sets of data. Specifically, artificial intelligence taps the ability of computer algorithms and software, within allowable thresholds, to reach deterministic approximate conclusions. In comparison to traditional technologies in healthcare, artificial intelligence enhances the process of data analysis without the need for human input, producing nearly equally reliable, well-defined output. Schizophrenia is a chronic mental health condition that affects millions worldwide, with impairments in thinking and behaviour that may be significantly disabling to daily living. Multiple artificial intelligence and machine learning algorithms have been used to analyze different components of schizophrenia, such as prediction of the disease and assessment of current prevention methods, in the hope of assisting with diagnosis and providing viable options for affected individuals. In this paper, we review progress in the use of artificial intelligence in schizophrenia.


2020 ◽  
Vol 5 (19) ◽  
pp. 32-35
Author(s):  
Anand Vijay ◽  
Kailash Patidar ◽  
Manoj Yadav ◽  
Rishi Kushwah

This paper presents and discusses an analytical survey of the role of machine learning algorithms in intrusion detection. It covers the analytical aspects of developing an efficient intrusion detection system (IDS), and presents the related work on such systems in terms of computational methods: data mining, artificial intelligence and machine learning. These methods are discussed along with attack parameters and attack types. The paper also elaborates on the impact of different attacks and their handling mechanisms, based on previous papers.


2021 ◽  
Vol 1 (1) ◽  
pp. 76-87
Author(s):  
Alexander Buhmann ◽  
Christian Fieseler

Organizations increasingly delegate agency to artificial intelligence. However, such systems can yield unintended negative effects, as they may produce biases against users or reinforce social injustices. What pronounces them as a unique grand challenge, however, is not their potentially problematic outcomes but their fluid design: machine learning algorithms are continuously evolving, and as a result their functioning frequently remains opaque to humans. In this article, we apply recent work on tackling grand challenges through robust action to assess the potential of, and obstacles to, managing the challenge of algorithmic opacity. We stress that although this approach is fruitful, it can be gainfully complemented by a discussion of the accountability and legitimacy of solutions. In our discussion, we extend the robust action approach by linking it to a set of principles that can serve to evaluate organizational approaches to tackling grand challenges with respect to their ability to foster accountable outcomes under the intricate conditions of algorithmic opacity.


Sensors ◽  
2020 ◽  
Vol 20 (15) ◽  
pp. 4332
Author(s):  
Daniel Jancarczyk ◽  
Marcin Bernaś ◽  
Tomasz Boczar

The paper proposes a method for the automatic detection of the parameters of a distribution transformer (model, type, and power) from a distance, based on its low-frequency noise spectra. The spectra are registered by sensors and processed by a method based on evolutionary algorithms and machine learning. As input data, the method uses the frequency spectra of sound pressure levels generated by transformers operating in a real environment. The model also uses the background characteristic to take the changing working conditions of the transformers into consideration. The method searches for frequency intervals and their resolution using both a classic genetic algorithm and particle swarm optimization. The interval selection was verified using five state-of-the-art machine learning algorithms. The research was conducted on 16 different distribution transformers. As a result, a method was proposed that allows the detection of a specific transformer model, its type, and its power with accuracies greater than 84%, 99%, and 87%, respectively. The proposed optimization process using the genetic algorithm increased the accuracy by up to 5% while significantly reducing the input data set (by 80% up to 98%). Machine learning algorithms that proved efficient for this task were selected.
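The core idea of evolving a subset of frequency bins and scoring it with a classifier can be sketched as a small genetic algorithm. This is a toy illustration, not the paper's pipeline: the spectra are synthetic, the classifier is a plain nearest-centroid model rather than the five algorithms the authors used, and all population sizes and rates are invented.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic stand-in for low-frequency noise spectra: several recordings
# per transformer class over 64 frequency bins.
n_bins, n_classes, per_class = 64, 4, 20
centers = rng.normal(size=(n_classes, n_bins))
X = np.repeat(centers, per_class, axis=0) + 0.8 * rng.normal(size=(n_classes * per_class, n_bins))
y = np.repeat(np.arange(n_classes), per_class)

def fitness(mask):
    # Nearest-centroid accuracy on the selected frequency bins, lightly
    # penalized by the number of bins kept (rewarding a smaller input set).
    if not mask.any():
        return 0.0
    Xs = X[:, mask]
    cents = np.stack([Xs[y == c].mean(axis=0) for c in range(n_classes)])
    pred = np.argmin(((Xs[:, None, :] - cents[None]) ** 2).sum(-1), axis=1)
    return (pred == y).mean() - 0.001 * mask.sum()

pop = rng.random((30, n_bins)) < 0.5  # population of binary bin masks
for gen in range(40):
    scores = np.array([fitness(m) for m in pop])
    parents = pop[np.argsort(scores)[::-1][:10]]  # truncation selection
    children = []
    for _ in range(len(pop) - len(parents)):
        a, b = parents[rng.integers(10, size=2)]
        cut = rng.integers(1, n_bins)               # one-point crossover
        child = np.concatenate([a[:cut], b[cut:]])
        child ^= rng.random(n_bins) < 0.02          # bit-flip mutation
        children.append(child)
    pop = np.vstack([parents, children])

best = max(pop, key=fitness)
print(f"bins kept: {int(best.sum())}/{n_bins}, fitness: {fitness(best):.3f}")
```

The bin-count penalty plays the role of the paper's input-data reduction: among masks with equal accuracy, evolution prefers the one using fewer frequency intervals.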


2020 ◽  
Vol 7 (2) ◽  
pp. 129-134
Author(s):  
Takudzwa Fadziso

In modern times, collecting data is not a big deal, but using it in a meaningful way is a challenging task. Different organizations are using artificial intelligence and machine learning to collect and utilize data. These techniques should also be used in medicine, because different diseases require prediction. One such disease is asthma, which is continuously increasing and affecting more and more people. The major issue is that it is difficult to diagnose in children. Machine learning algorithms can help in diagnosing it early so that doctors can start treatment early. Since machine learning algorithms can perform this prediction, this study will be helpful for both doctors and patients. Different predictive machine learning algorithms are available and have been used for this purpose.

