Finding High-Value Training Data Subset Through Differentiable Convex Programming

2021 ◽  
pp. 666-681
Author(s):  
Soumi Das ◽  
Arshdeep Singh ◽  
Saptarshi Chatterjee ◽  
Suparna Bhattacharya ◽  
Sourangshu Bhattacharya
Author(s):  
Kashyap Chitta ◽  
Jose M. Alvarez ◽  
Elmar Haussmann ◽  
Clement Farabet

2011 ◽  
Vol 131 (8) ◽  
pp. 1459-1466
Author(s):  
Yasunari Maeda ◽  
Hideki Yoshida ◽  
Masakiyo Suzuki ◽  
Toshiyasu Matsushima

2016 ◽  
Vol 136 (12) ◽  
pp. 898-907
Author(s):  
Joao Gari da Silva Fonseca Junior ◽  
Hideaki Ohtake ◽  
Takashi Oozeki ◽  
Kazuhiko Ogimoto

2011 ◽  
Vol 9 (2) ◽  
pp. 99
Author(s):  
Alex J Auseon ◽  
Albert J Kolibash ◽  

Background: Educating trainees during cardiology fellowship is a process in constant evolution, with program directors regularly adapting to increasing demands and regulations as they strive to prepare graduates for practice in today’s healthcare environment. Methods and Results: In a 10-year follow-up to a previous manuscript regarding fellowship education, we reviewed the literature regarding the most topical issues facing training programs in 2010, describing our approach at The Ohio State University. Conclusion: In the midst of challenges posed by the increasing complexity of training requirements and documentation, work hour restrictions, and the new definitions of quality and safety, we propose methods of curricula revision and collaboration that may serve as an example to other medical centers.


2020 ◽  
Vol 2020 (10) ◽  
pp. 310-1-310-7
Author(s):  
Khalid Omer ◽  
Luca Caucci ◽  
Meredith Kupinski

This work reports on convolutional neural network (CNN) performance on an image texture classification task as a function of linear image processing and the number of training images. Detection performance of single- and multi-layer CNNs (sCNN/mCNN) is compared to that of optimal observers. Performance is quantified by the area under the receiver operating characteristic (ROC) curve, also known as the AUC; AUC = 1.0 corresponds to perfect detection and AUC = 0.5 to guessing. The Ideal Observer (IO) maximizes AUC but is prohibitive in practice because it depends on high-dimensional image likelihoods. IO performance is invariant to any full-rank, invertible linear image processing. This work demonstrates the existence of full-rank, invertible linear transforms that can degrade both sCNN and mCNN performance even in the limit of large quantities of training data. A subsequent invertible linear transform changes the images’ correlation structure again and can improve the AUC. Stationary textures sampled from zero-mean, unequal-covariance Gaussian distributions allow closed-form analytic expressions for the IO and for optimal linear compression. Linear compression is a mitigation technique for high-dimension, low-sample-size (HDLSS) applications. By definition, compression strictly decreases or maintains IO detection performance. For small quantities of training data, linear image compression prior to the sCNN architecture can increase AUC from 0.56 to 0.93. Results indicate an optimal compression ratio for CNNs that depends on task difficulty, compression method, and number of training images.
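As an illustrative, hedged sketch (not the authors' code), the snippet below computes the Ideal Observer test statistic for the zero-mean, unequal-covariance Gaussian setting described above and scores it with AUC; the Toeplitz covariances, image size, and sample counts are assumptions chosen purely for demonstration.

```python
# Hypothetical sketch: Ideal Observer AUC for a two-class texture task where
# each class is zero-mean Gaussian with a different covariance.
# Covariances, image size, and sample counts are illustrative assumptions.
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n, n_samples = 16, 2000          # 16-pixel "images" (flattened), per class

def stationary_cov(n, rho):
    """Toeplitz covariance of a stationary process: rho**|i-j|."""
    idx = np.arange(n)
    return rho ** np.abs(idx[:, None] - idx[None, :])

K0 = stationary_cov(n, 0.3)      # class-0 texture covariance (assumed)
K1 = stationary_cov(n, 0.7)      # class-1 texture covariance (assumed)

x0 = rng.multivariate_normal(np.zeros(n), K0, n_samples)
x1 = rng.multivariate_normal(np.zeros(n), K1, n_samples)

# For zero-mean Gaussians the log-likelihood ratio reduces to a quadratic form,
# t(x) = x^T (K0^{-1} - K1^{-1}) x, up to constants that do not affect AUC.
A = np.linalg.inv(K0) - np.linalg.inv(K1)
t = lambda X: np.einsum('ij,jk,ik->i', X, A, X)

scores = np.concatenate([t(x0), t(x1)])
labels = np.concatenate([np.zeros(n_samples), np.ones(n_samples)])
print("Ideal Observer AUC:", roc_auc_score(labels, scores))
```

Because the IO statistic is a quadratic form in x, substituting y = Bx for any invertible B simply replaces A with B^{-T} A B^{-1} and leaves the statistic, and hence the AUC, unchanged, consistent with the invariance property noted in the abstract.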


2020 ◽  
Vol 2020 (8) ◽  
pp. 188-1-188-7
Author(s):  
Xiaoyu Xiang ◽  
Yang Cheng ◽  
Jianhang Chen ◽  
Qian Lin ◽  
Jan Allebach

Image aesthetic assessment has always been regarded as a challenging task because of the variability of subjective preference. Moreover, the assessment of a photo is also related to its style, semantic content, and other attributes. Conventionally, aesthetic score estimation and style classification are treated as separate problems. In this paper, we explore the inter-relatedness between aesthetics and image style, and design a neural network that jointly categorizes images by style and predicts an aesthetic score distribution. To this end, we propose a multi-task network (MTNet) with an aesthetic column serving as a score predictor and a style column serving as a style classifier. The angular-softmax loss is applied in training the primary style classifiers to maximize the margin among classes in single-label training data; a semi-supervised method is applied to iteratively improve the network’s generalization ability. We combine a regression loss and a classification loss when training the aesthetic score predictor. Experiments on the AVA dataset show the superiority of our network on both image attribute classification and aesthetic ranking tasks.
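As a hedged, minimal sketch (not the authors' MTNet implementation), the PyTorch code below shows the shared-backbone, two-headed multi-task pattern described above; the backbone, layer sizes, number of style classes, and loss weighting are illustrative assumptions, and a plain cross-entropy stands in for the angular-softmax loss.

```python
# Hypothetical sketch of a two-column multi-task network: one head predicts a
# 10-bin aesthetic score distribution, the other classifies style.
# Backbone, layer sizes, and loss weights are illustrative assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class MTNetSketch(nn.Module):
    def __init__(self, n_styles=14, n_score_bins=10):
        super().__init__()
        self.backbone = nn.Sequential(          # stand-in for a CNN backbone
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten())
        self.aesthetic_head = nn.Linear(64, n_score_bins)  # score distribution
        self.style_head = nn.Linear(64, n_styles)          # style classifier

    def forward(self, x):
        feats = self.backbone(x)
        return self.aesthetic_head(feats), self.style_head(feats)

def joint_loss(score_logits, style_logits, score_target, style_target, alpha=0.5):
    # Score head: KL divergence to the target score distribution (a common
    # choice for AVA-style label distributions); style head: cross-entropy,
    # standing in here for the angular-softmax loss used in the paper.
    score_loss = F.kl_div(F.log_softmax(score_logits, dim=1),
                          score_target, reduction='batchmean')
    style_loss = F.cross_entropy(style_logits, style_target)
    return score_loss + alpha * style_loss

# Usage with random tensors, just to show the shapes involved:
model = MTNetSketch()
imgs = torch.randn(4, 3, 64, 64)
score_logits, style_logits = model(imgs)
score_target = torch.softmax(torch.randn(4, 10), dim=1)  # fake distributions
style_target = torch.randint(0, 14, (4,))
loss = joint_loss(score_logits, style_logits, score_target, style_target)
loss.backward()
```

Sharing one backbone across both heads is what lets the style supervision regularize the aesthetic predictor, which is the inter-relatedness the abstract exploits.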

