A novel exponential loss function for pathological lymph node image classification

Author(s): Guoping Xu, Hanqiang Cao, Jayaram K. Udupa, Chunyi Yue, Youli Dong, ...
2021, Vol 2021, pp. 1-8

Author(s): Chenrui Wen, Xinhao Yang, Ke Zhang, Jiahui Zhang

An improved loss function that dispenses with sampling procedures is proposed to address the poor classification performance caused by sample shortage. Adjustable parameters, added to the cross-entropy loss and the softmax loss, expand the loss range, reduce the weight of easily classified samples, and thereby substitute for the sampling function. Experimental results show that the proposed loss function improves classification performance across various network architectures and datasets. In summary, compared with traditional loss functions, the improved version not only raises classification performance but also eases network training.
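The abstract does not give the exact form of the loss, but a focal-loss-style modulation of cross-entropy illustrates the core idea: adjustable parameters that down-weight easily classified samples so that no explicit sampling is needed. The sketch below is an illustration under that assumption, not the authors' formula; the parameters alpha and gamma are hypothetical.

```python
import numpy as np

def modulated_cross_entropy(probs, labels, alpha=1.0, gamma=2.0, eps=1e-12):
    """Cross-entropy with an adjustable modulating factor (1 - p_t)^gamma that
    down-weights easily classified samples (a focal-loss-style sketch; alpha
    and gamma stand in for the paper's adjustable parameters)."""
    p_t = probs[np.arange(len(labels)), labels]   # predicted prob of the true class
    weight = alpha * (1.0 - p_t) ** gamma         # small for confident (easy) samples
    return -(weight * np.log(p_t + eps)).mean()

# An easy sample (p_t = 0.95) contributes far less than a hard one (p_t = 0.3):
probs = np.array([[0.05, 0.95], [0.70, 0.30]])
labels = np.array([1, 1])
print(modulated_cross_entropy(probs, labels))
```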


Deep learning has advanced considerably in recent years; since 2014 the field has grown rapidly, driven by the vast amounts of data now available. In this work, I use a convolutional neural network for an image classification task with two classes, male and female: the model classifies each input image and predicts whether the subject is male or female. The network is built from Sequential, Convolution2D, max-pooling, flattening, and Dense layers, connected in sequence, with one extra Convolution2D and max-pooling stage added for better classification. I use the Adam optimizer, which performed well in my experiments with these two classes, and binary cross-entropy as the loss function; with more than two classes, a categorical loss function would be used instead. Images are resized to 64×64 before prediction. The model outputs 1 for male and 0 for female.
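A minimal Keras sketch of the architecture described above: two convolution/max-pooling stages, flattening, and dense layers, compiled with the Adam optimizer and binary cross-entropy. The filter counts (32) and dense width (128) are assumptions not stated in the abstract.

```python
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Conv2D, MaxPooling2D, Flatten, Dense

model = Sequential([
    Conv2D(32, (3, 3), activation="relu", input_shape=(64, 64, 3)),  # 64x64 RGB input
    MaxPooling2D(pool_size=(2, 2)),
    Conv2D(32, (3, 3), activation="relu"),   # the "extra" conv + pooling stage
    MaxPooling2D(pool_size=(2, 2)),
    Flatten(),
    Dense(128, activation="relu"),
    Dense(1, activation="sigmoid"),          # sigmoid output: 1 = male, 0 = female
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
```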


2019
Author(s): Tingying Peng, Melanie Boxberg, Wilko Weichert, Nassir Navab, Carsten Marr

Deep neural networks have achieved tremendous success in image recognition, classification and object detection. However, deep learning is often criticised for its lack of transparency and its general inability to rationalize its predictions. Poor model interpretability becomes critical in medical applications, as a model that is not understood and trusted by physicians is unlikely to be used in daily clinical practice. In this work, we develop a novel multi-task deep learning framework for simultaneous histopathology image classification and retrieval, leveraging the classic concept of k-nearest neighbours to improve model interpretability. For a test image, we retrieve the most similar images from our training databases. These retrieved nearest neighbours can be used to classify the test image with a confidence score and to provide a human-interpretable explanation of the classification. Our framework can be built on top of any existing classification network (and therefore benefits from pretrained models) by (i) adding a triplet loss function, with a novel triplet sampling strategy, to compare distances between samples, and (ii) adding a Cauchy hashing loss function to accelerate neighbour searching. We evaluate our method on colorectal cancer histology slides and show that the confidence estimates are strongly correlated with model performance. The explanations provided by the nearest neighbours are intuitive and useful for expert evaluation, giving insights into possible model failures, and can support clinical decision making by comparing archived images and patient records with the actual case.
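As a rough illustration of the retrieval component, the sketch below implements the standard triplet loss on embedding vectors, which pulls an anchor toward a same-class positive and pushes it away from a different-class negative. It is a generic sketch: the paper's novel triplet sampling strategy and its Cauchy hashing loss are not reproduced here.

```python
import numpy as np

def triplet_loss(anchor, positive, negative, margin=1.0):
    """Standard triplet loss over batches of embedding vectors.
    anchor/positive share a class; negative comes from a different class."""
    d_pos = np.sum((anchor - positive) ** 2, axis=-1)  # squared distance to positive
    d_neg = np.sum((anchor - negative) ** 2, axis=-1)  # squared distance to negative
    # Hinge: penalize triplets where the negative is not at least `margin` farther
    return np.maximum(d_pos - d_neg + margin, 0.0).mean()

# Toy embeddings: the loss is zero once negatives are pushed beyond the margin.
a = np.array([[0.0, 0.0]])
p = np.array([[0.1, 0.0]])
n = np.array([[2.0, 0.0]])
print(triplet_loss(a, p, n))
```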


2021, Vol 3, pp. 3-10
Author(s): Valeria Andreieva, Nadiia Shvai

Classification is one of the most common tasks in machine learning. This supervised learning problem consists of assigning each input to one of a finite number of discrete categories. Classification arises naturally in numerous applications, such as medical image processing, speech recognition, maintenance systems, accident detection, and autonomous driving.

In the last decade, deep learning methods have proven extremely efficient in many machine learning problems, including classification. Whereas the neural network architecture may depend heavily on the data type and on restrictions posed by the nature of the problem (for example, real-time applications), the process of training it (i.e., finding the model's parameters) is almost always posed as a loss function optimization problem. Cross-entropy is a loss function often used for multiclass classification problems, as it yields high-accuracy results.

Here we propose a generalized version of this loss based on the Rényi divergence and entropy. We note that for binary (one-hot) labels the proposed generalization reduces to cross-entropy, so we work in the context of soft labels. Specifically, we consider an image classification problem solved by convolutional neural networks with a mixup regularizer. The latter expands the training set by taking convex combinations of pairs of data samples and their corresponding labels. Consequently, labels are no longer binary (corresponding to a single class) but take the form of probability vectors. In this setting, cross-entropy and the proposed Rényi generalization are distinct, and their comparison is meaningful.

To measure the effectiveness of the proposed loss function, we consider image classification on the benchmark CIFAR-10 dataset. This dataset consists of 60,000 colour images of size 32×32 belonging to 10 classes; the training set contains 50,000 images and the test set 10,000 images. For the convolutional neural network, we follow [1], where the same classification task was studied with respect to different loss functions, and we use the same architecture in order to obtain comparable results.

Experiments demonstrate the superiority of the proposed method over cross-entropy for loss-function parameter values α < 1. For α > 1, the proposed method performs worse than the cross-entropy loss. Finally, α = 1 corresponds to cross-entropy itself.
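One plausible form of such a generalization, sketched below, replaces the divergence term of cross-entropy with the Rényi divergence D_α(p‖q) = (α−1)⁻¹ log Σᵢ pᵢ^α qᵢ^(1−α). This exact form is an assumption, since the abstract does not spell the loss out; it does, however, match the stated behaviour: for one-hot labels the expression reduces to −log q_t (ordinary cross-entropy), and α → 1 recovers the KL divergence.

```python
import numpy as np

def renyi_loss(soft_labels, probs, alpha=0.5, eps=1e-12):
    """Rényi divergence D_alpha(p || q) between soft labels p and predictions q.
    (An illustrative assumption about the paper's loss, not its stated formula.)
    For one-hot p this equals -log q_t, i.e. the usual cross-entropy."""
    p, q = soft_labels + eps, probs + eps
    inner = np.sum(p ** alpha * q ** (1.0 - alpha), axis=-1)
    return (np.log(inner) / (alpha - 1.0)).mean()

# Mixup produces soft labels as convex combinations of one-hot vectors:
lam = 0.7
y = lam * np.array([1.0, 0.0]) + (1 - lam) * np.array([0.0, 1.0])  # [0.7, 0.3]
q = np.array([0.6, 0.4])                                           # model output
print(renyi_loss(y[None], q[None], alpha=0.5))
```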

