iRDMA: Efficient Use of RDMA in Distributed Deep Learning Systems

Author(s):
Yufei Ren
Xingbo Wu
Li Zhang
Yandong Wang
Wei Zhang
...

Sensors
2021
Vol 21 (7)
pp. 2514
Author(s):
Tharindu Kaluarachchi
Andrew Reis
Suranga Nanayakkara

After Deep Learning (DL) regained popularity, the Artificial Intelligence (AI) and Machine Learning (ML) fields have undergone rapid growth in both research and real-world application development. Deep Learning has introduced considerable algorithmic complexity, and researchers and users have raised concerns regarding the usability and adoptability of Deep Learning systems. These concerns, coupled with increasing human-AI interaction, have created the emerging field of Human-Centered Machine Learning (HCML). We present this review paper as an overview and analysis of existing work in HCML related to DL. First, we collaborated with field domain experts to develop a working definition of HCML. Second, through a systematic literature review, we analyze and classify 162 publications that fall within HCML. Our classification is based on aspects including contribution type, application area, and the human categories in focus. Finally, we analyze the topology of the HCML landscape by identifying research gaps, highlighting conflicting interpretations, addressing current challenges, and presenting future HCML research opportunities.


2021
Author(s):
Mizuho Mori
Yoshiko Ariji
Motoki Fukuda
Tomoya Kitano
Takuma Funakoshi
...

Abstract. Objectives: The aim of the present study was to create and test an automatic system for assessing the technical quality of positioning in periapical radiography of the maxillary canines, using deep learning classification and segmentation techniques. Methods: We created and tested two deep learning systems using 500 periapical radiographs (250 each of good- and bad-quality images). We assigned 350, 70, and 80 images to the training, validation, and test datasets, respectively. The model of system 1 consisted of the classification process only, whereas system 2 comprised both segmentation and classification models. Each model was trained for 500 epochs, using AlexNet for classification and U-Net for segmentation. Segmentation results were evaluated by the intersection-over-union method, with values of 0.6 or more considered a success. Classification results were compared between the two systems. Results: System 2 achieved segmentation recall, precision, and F-measure of 0.937, 0.961, and 0.949, respectively. System 2 also showed better classification performance than system 1: the area under the receiver operating characteristic curve differed significantly between system 1 (0.649) and system 2 (0.927). Conclusions: The deep learning systems we created appear to offer potential benefits in evaluating the technical positioning quality of periapical radiographs through the combined use of segmentation and classification functions.
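The intersection-over-union criterion used above (success at IoU of 0.6 or more) can be computed directly from a pair of binary segmentation masks; a minimal sketch (function names and the mask representation are illustrative, not taken from the study):

```python
import numpy as np

def iou(pred_mask: np.ndarray, true_mask: np.ndarray) -> float:
    """Intersection over union of two boolean segmentation masks."""
    pred = pred_mask.astype(bool)
    true = true_mask.astype(bool)
    intersection = np.logical_and(pred, true).sum()
    union = np.logical_or(pred, true).sum()
    # Two empty masks agree perfectly; avoid division by zero.
    return float(intersection) / union if union > 0 else 1.0

def is_success(pred_mask: np.ndarray, true_mask: np.ndarray,
               threshold: float = 0.6) -> bool:
    """Apply the study's success criterion: IoU of 0.6 or more."""
    return iou(pred_mask, true_mask) >= threshold
```

For example, a prediction that covers half the ground-truth region and nothing else yields an IoU of 0.5 and would be counted as a failure under the 0.6 threshold.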


2021
pp. 161-176
Author(s):
Xinle Liu
Akinori Mitani
Terry Spitz
Derek J. Wu
Joseph R. Ledsam

Author(s):
Swagath Venkataramani
Vijayalakshmi Srinivasan
Jungwook Choi
Philip Heidelberger
Leland Chang
...

2020
Vol 216
pp. 140-146
Author(s):
Hee Kyung Yang
Young Jae Kim
Jae Yun Sung
Dong Hyun Kim
Kwang Gi Kim
...

Author(s):
Mary E. Webb
Andrew Fluck
Johannes Magenheim
Joyce Malyn-Smith
Juliet Waters
...

Abstract. Machine learning systems are infiltrating our lives and are beginning to become important in our education systems. This article, developed from a synthesis and analysis of previous research, examines the implications of recent developments in machine learning for human learners and learning. In this article we first compare deep learning in computers and humans to examine their similarities and differences. Deep learning is identified as a subset of machine learning, which is itself a component of artificial intelligence. Deep learning often depends on backward propagation in weighted neural networks and is therefore non-deterministic: the system adapts and changes through practical experience or training. This adaptive behaviour predicates the need for explainability and accountability in such systems. Accountability is the reverse of explainability: explainability flows through the system from inputs to output (decision), whereas accountability flows backwards, from a decision to the person taking responsibility for it. Both explainability and accountability should be incorporated into machine learning system design from the outset to meet social, ethical and legislative requirements. For students to understand the nature of the systems that may be supporting their own learning, and to act as responsible citizens in contemplating the ethical issues that machine learning raises, they need to understand key aspects of machine learning systems and have opportunities to adapt and create such systems. Therefore, some changes are needed to school curricula. The article concludes with recommendations about machine learning for teachers, students, policymakers, developers and researchers.
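The adaptive behaviour described above, a weighted network changing through training via backward propagation, can be illustrated with a toy sketch: a single linear neuron whose one weight is updated by the gradient of a squared error (the function and data here are illustrative, not from the article):

```python
# Toy illustration of weight adaptation: one linear neuron y = w * x,
# trained by gradient descent on squared error (y - target)**2.
# The weight changes with the data it sees, which is the adaptive,
# experience-dependent behaviour the article describes.

def train(samples, w=0.0, lr=0.1, epochs=50):
    for _ in range(epochs):
        for x, target in samples:
            y = w * x                     # forward pass
            grad = 2 * (y - target) * x   # backward pass: d(error)/dw
            w -= lr * grad                # weight update
    return w

# Data generated by the rule y = 3x; training recovers w close to 3.
data = [(1.0, 3.0), (2.0, 6.0), (-1.0, -3.0)]
```

Train the neuron on a differently generated dataset and it settles on a different weight, which is why the behaviour of such systems is a product of their training experience rather than of fixed, inspectable rules.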


Author(s):
Tom Eelbode
Pieter Sinonquel
Frederik Maes
Raf Bisschops

2020
Vol 101
pp. 107184
Author(s):
Jie Hang
Keji Han
Hui Chen
Yun Li

Author(s):
Zhan Wei Lim
Mong Li Lee
Wynne Hsu
Tien Yin Wong

Though deep learning systems have achieved high accuracy in detecting diseases from medical images, few such systems have been deployed in highly automated disease screening settings, owing to a lack of trust in how well these systems generalize to out-of-distribution datasets. We propose using uncertainty estimates of the deep learning system's predictions to decide when to accept or disregard them. We evaluate the effectiveness of such estimates in a real-life application: screening for diabetic retinopathy. We also generate visual explanations of the deep learning system to convey which pixels in the image influence its decision. Together, these reveal the deep learning system's competency and limits to the human, and in turn the human can learn when to trust the deep learning system.
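The accept-or-disregard workflow described above can be sketched as an uncertainty gate over the model's predicted class distribution. Predictive entropy is one common uncertainty estimate; the function names and the threshold below are illustrative, not the study's actual method:

```python
import math

def predictive_entropy(probs):
    """Entropy of a predicted class distribution (nats).
    Higher entropy means the model is less certain."""
    return -sum(p * math.log(p) for p in probs if p > 0)

def accept_or_refer(probs, threshold=0.5):
    """Accept the model's prediction when its uncertainty is low;
    otherwise refer the case to a human grader."""
    return "accept" if predictive_entropy(probs) < threshold else "refer"
```

A confident prediction such as (0.99, 0.01) has near-zero entropy and is accepted automatically, while an ambiguous (0.5, 0.5) prediction exceeds the threshold and is routed to a human, which matches the screening workflow the abstract describes.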

