Semiotic Aggregation in Deep Learning

Entropy ◽  
2020 ◽  
Vol 22 (12) ◽  
pp. 1365
Author(s):  
Bogdan Muşat ◽  
Răzvan Andonie

Convolutional neural networks utilize a hierarchy of neural network layers. The statistical aspects of information concentration in successive layers can provide insight into the feature abstraction process. We analyze the saliency maps of these layers from the perspective of semiotics, also known as the study of signs and sign-using behavior. In computational semiotics, this aggregation operation (known as superization) is accompanied by a decrease of spatial entropy: signs are aggregated into supersigns. Using spatial entropy, we compute the information content of the saliency maps and study the superization processes which take place between successive layers of the network. In our experiments, we visualize the superization process and show how the obtained knowledge can be used to explain the neural decision model. In addition, we attempt to optimize the architecture of the neural model employing a semiotic greedy technique. To the best of our knowledge, this is the first application of computational semiotics to the analysis and interpretation of deep neural networks.
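
The paper's central quantity is the spatial entropy of a saliency map. As a rough illustration only (not the authors' exact formulation, which involves aggregating signs into supersigns), a minimal sketch that treats a saliency map as a probability distribution and computes its Shannon entropy:

```python
import numpy as np

def spatial_entropy(saliency: np.ndarray, eps: float = 1e-12) -> float:
    """Shannon entropy (bits) of a saliency map treated as a 2-D
    probability mass: pixels are normalized to sum to 1."""
    s = np.clip(saliency, 0.0, None)       # saliency is non-negative
    p = s / (s.sum() + eps)
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

# A diffuse map has higher entropy than a concentrated one, mirroring
# the claim that superization is accompanied by an entropy decrease.
diffuse = np.ones((8, 8))
peaked = np.zeros((8, 8)); peaked[3, 3] = 1.0
print(spatial_entropy(diffuse))  # 6.0 bits (uniform over 64 cells)
print(spatial_entropy(peaked))   # 0.0 bits
```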

2020 ◽  
Author(s):  
Albahli Saleh ◽  
Ali Alkhalifah

BACKGROUND: To diagnose cardiothoracic diseases, a chest x-ray (CXR) is examined by a radiologist. As more people are affected, doctors are becoming scarce, especially in developing countries. With the advent of image-processing tools, however, the task of diagnosing these cardiothoracic diseases has seen great progress, and many researchers have investigated how the problems associated with medical images can be mitigated using neural networks.

OBJECTIVE: Previous works used state-of-the-art techniques and achieved effective results on one or two cardiothoracic diseases but remained prone to misclassification. In our work, we adopt generative adversarial networks (GANs) to synthesize chest radiographs, augmenting the training set across multiple cardiothoracic diseases so that chest diseases in different classes can be diagnosed efficiently. Our major contributions are: classifying various cardiothoracic diseases to detect a specific chest disease based on CXR; using GANs to overcome the shortage of small training datasets; addressing the problem of imbalanced data; and implementing an optimal deep neural network architecture with different hyperparameters to obtain the best accuracy.

METHODS: We do not build a model from scratch, owing to the computational restraints this would impose. Rather, we use a convolutional neural network (CNN), a class of deep neural networks, and propose a GAN-based model to generate synthetic data for training, since the amount of available data is limited. We use pre-trained models, i.e., models that were trained on a large benchmark dataset to solve a problem similar to the one we want to solve. For example, the ResNet-152 model we used was initially trained on the ImageNet dataset.

RESULTS: After successful training and validation of the models we developed, ResNet-152 with image augmentation proved to be the best model for the automatic detection of cardiothoracic disease. One of the main problems in radiographic deep-learning research, however, is the scarcity of sufficiently large datasets, a key requirement since deep-learning models need a lot of data for training. This is why some of our models used image augmentation to increase the number of images without duplication. As more data are collected in the field of chest radiology, the models could be retrained to improve their accuracy, since deep-learning models improve with more data.

CONCLUSIONS: This research employs the advantages of computer vision and medical image analysis to develop an automated model with clinical potential for early detection of the disease. Using deep-learning models, the research evaluates the effectiveness and accuracy of different convolutional neural network models in the automatic diagnosis of cardiothoracic diseases from x-ray images, compared against diagnosis by experts in the medical community.
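
The METHODS section describes fine-tuning an ImageNet-pretrained ResNet-152 instead of training from scratch. A minimal PyTorch sketch of that setup; the class count, freezing policy, and optimizer are assumptions, not the authors' exact configuration:

```python
import torch
import torch.nn as nn
from torchvision import models

NUM_CLASSES = 14  # assumption: number of cardiothoracic disease classes

# Load a ResNet-152 pretrained on ImageNet and replace its classifier head.
model = models.resnet152(weights=models.ResNet152_Weights.IMAGENET1K_V1)
model.fc = nn.Linear(model.fc.in_features, NUM_CLASSES)

# Optionally freeze the backbone and train only the new head at first.
for name, param in model.named_parameters():
    if not name.startswith("fc."):
        param.requires_grad = False

optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()

# One training step on a dummy batch of preprocessed CXR images (3x224x224).
x = torch.randn(4, 3, 224, 224)
y = torch.randint(0, NUM_CLASSES, (4,))
loss = criterion(model(x), y)
loss.backward()
optimizer.step()
```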


mSphere ◽  
2020 ◽  
Vol 5 (5) ◽  
Author(s):  
Artur Yakimovich ◽  
Moona Huttunen ◽  
Jerzy Samolej ◽  
Barbara Clough ◽  
Nagisa Yoshida ◽  
...  

ABSTRACT: The use of deep neural networks (DNNs) for analysis of complex biomedical images shows great promise but is hampered by a lack of large verified data sets for rapid network evolution. Here, we present a novel strategy, termed "mimicry embedding," for rapid application of neural network architecture-based analysis of pathogen imaging data sets. Embedding of a novel host-pathogen data set, such that it mimics a verified data set, enables efficient deep learning using high expressive capacity architectures and seamless architecture switching. We applied this strategy across various microbiological phenotypes, from superresolved viruses to in vitro and in vivo parasitic infections. We demonstrate that mimicry embedding enables efficient and accurate analysis of two- and three-dimensional microscopy data sets. The results suggest that transfer learning from pretrained network data may be a powerful general strategy for analysis of heterogeneous pathogen fluorescence imaging data sets.

IMPORTANCE: In biology, the use of deep neural networks (DNNs) for analysis of pathogen infection is hampered by a lack of large verified data sets needed for rapid network evolution. Artificial neural networks detect handwritten digits with high precision thanks to large data sets, such as MNIST, that allow nearly unlimited training. Here, we developed a novel strategy we call mimicry embedding, which allows artificial intelligence (AI)-based analysis of variable pathogen-host data sets. We show that deep learning can be used to detect and classify single pathogens based on small differences.
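
The exact mimicry-embedding procedure is specific to the paper, but its general idea, reshaping a small pathogen data set so that architectures (and pretrained weights) proven on a large verified set such as MNIST can be reused, can be sketched roughly as below. Every preprocessing choice here is an illustrative assumption:

```python
import numpy as np
from skimage.transform import resize  # pip install scikit-image

def mimic_mnist(image: np.ndarray) -> np.ndarray:
    """Illustrative embedding: map a fluorescence image into an MNIST-like
    form (28x28, single channel, values in [0, 1]) so that MNIST-proven
    architectures can be reused. Not the authors' exact procedure."""
    img = image.astype(np.float64)
    img = (img - img.min()) / (img.max() - img.min() + 1e-12)
    return resize(img, (28, 28), anti_aliasing=True)

# A synthetic stand-in for a cropped virus particle becomes a 28x28 input.
crop = np.random.rand(96, 96)
x = mimic_mnist(crop)
print(x.shape, x.min(), x.max())  # (28, 28), values within [0, 1]
```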


2020 ◽  
Author(s):  
Raju Singh

This report is an insight into the world of deep learning and convolutional neural networks (CNNs). It is an attempt to perform classification using neural networks and deep learning on a given dataset (a subset of the MNIST dataset). The MNIST dataset contains 70,000 images of handwritten digits, divided into 60,000 training images and 10,000 testing images.
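
For context, a minimal convolutional classifier of the kind such a report would train on an MNIST subset might look like the following PyTorch sketch; the architecture is an assumption, and the report's own model may differ:

```python
import torch
import torch.nn as nn

# A small CNN for 28x28 grayscale digits (10 classes).
model = nn.Sequential(
    nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),   # 28 -> 14
    nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),  # 14 -> 7
    nn.Flatten(),
    nn.Linear(64 * 7 * 7, 128), nn.ReLU(),
    nn.Linear(128, 10),
)

x = torch.randn(8, 1, 28, 28)  # dummy batch of digit images
print(model(x).shape)          # torch.Size([8, 10])
```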


2022 ◽  
Vol 13 (1) ◽  
Author(s):  
Tianyu Wang ◽  
Shi-Yuan Ma ◽  
Logan G. Wright ◽  
Tatsuhiro Onodera ◽  
Brian C. Richard ◽  
...  

Abstract: Deep learning has become a widespread tool in both science and industry. However, continued progress is hampered by the rapid growth in energy costs of ever-larger deep neural networks. Optical neural networks provide a potential means to solve the energy-cost problem faced by deep learning. Here, we experimentally demonstrate an optical neural network based on optical dot products that achieves 99% accuracy on handwritten-digit classification using ~3.1 detected photons per weight multiplication and ~90% accuracy using ~0.66 photons (~2.5 × 10⁻¹⁹ J of optical energy) per weight multiplication. The fundamental principle enabling our sub-photon-per-multiplication demonstration, noise reduction from the accumulation of scalar multiplications in dot-product sums, is applicable to many different optical-neural-network architectures. Our work shows that optical neural networks can achieve accurate results using extremely low optical energies.
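
The stated principle, that noise on individual multiplications averages out when scalar products are accumulated into a dot-product sum, can be checked numerically. A sketch under simple Poisson shot-noise assumptions (this is not a model of the actual optical hardware):

```python
import numpy as np

rng = np.random.default_rng(0)

def noisy_dot(x, w, photons_per_mult):
    """Simulate a dot product in which each scalar multiplication is read
    out with Poisson (shot) noise at a given mean photon budget."""
    ideal = x * w                               # per-element products
    scale = photons_per_mult / max(ideal.mean(), 1e-12)
    counts = rng.poisson(ideal * scale)         # photon counts per product
    return counts.sum() / scale                 # accumulate, then rescale

x = rng.random(1000)
w = rng.random(1000)
exact = np.dot(x, w)
for budget in [0.66, 3.1, 100.0]:               # photons per multiplication
    est = noisy_dot(x, w, budget)
    print(f"{budget:6.2f} photons/mult: relative error "
          f"{abs(est - exact) / exact:.4f}")
```

Even at a sub-photon budget per multiplication, the 1000-element sum collects hundreds of photons in total, so the relative error of the accumulated dot product stays small.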


2021 ◽  
Vol 118 (43) ◽  
pp. e2103091118
Author(s):  
Cong Fang ◽  
Hangfeng He ◽  
Qi Long ◽  
Weijie J. Su

In this paper, we introduce the Layer-Peeled Model, a nonconvex, yet analytically tractable, optimization program, in a quest to better understand deep neural networks that are trained for a sufficiently long time. As the name suggests, this model is derived by isolating the topmost layer from the remainder of the neural network, followed by imposing certain constraints separately on the two parts of the network. We demonstrate that the Layer-Peeled Model, albeit simple, inherits many characteristics of well-trained neural networks, thereby offering an effective tool for explaining and predicting common empirical patterns of deep-learning training. First, when working on class-balanced datasets, we prove that any solution to this model forms a simplex equiangular tight frame, which, in part, explains the recently discovered phenomenon of neural collapse [V. Papyan, X. Y. Han, D. L. Donoho, Proc. Natl. Acad. Sci. U.S.A. 117, 24652–24663 (2020)]. More importantly, when moving to the imbalanced case, our analysis of the Layer-Peeled Model reveals a hitherto-unknown phenomenon that we term Minority Collapse, which fundamentally limits the performance of deep-learning models on the minority classes. In addition, we use the Layer-Peeled Model to gain insights into how to mitigate Minority Collapse. Interestingly, this phenomenon is first predicted by the Layer-Peeled Model before being confirmed by our computational experiments.
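
A hedged numerical check of the balanced-case prediction: in the closely related unconstrained-features formulation (with weight decay standing in for the paper's explicit norm constraints), directly optimizing last-layer features and classifier weights drives the class means toward a simplex equiangular tight frame, whose pairwise cosine similarity is -1/(K-1):

```python
import torch
import torch.nn.functional as F

torch.manual_seed(0)
K, n, d = 4, 32, 16  # classes, samples per class, feature dimension

# Freely optimize features H and classifier W (the "peeled" top layer);
# weight decay plays the role of the norm constraints in the paper.
H = torch.randn(K, n, d, requires_grad=True)
W = torch.randn(K, d, requires_grad=True)
labels = torch.arange(K).repeat_interleave(n)
opt = torch.optim.SGD([H, W], lr=0.5, weight_decay=5e-3)

for _ in range(2000):
    logits = H.reshape(K * n, d) @ W.T
    loss = F.cross_entropy(logits, labels)
    opt.zero_grad()
    loss.backward()
    opt.step()

# Neural collapse check: normalized class means should form a simplex ETF,
# i.e., off-diagonal cosine similarities near -1/(K-1) = -0.33 here.
means = F.normalize(H.detach().mean(dim=1), dim=1)
print((means @ means.T).round(decimals=2))
```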


Author(s):  
Dong-Dong Chen ◽  
Wei Wang ◽  
Wei Gao ◽  
Zhi-Hua Zhou

Deep neural networks have witnessed great successes in various real applications, but they require a large number of labeled examples for training. In this paper, we propose tri-net, a deep neural network that is able to use massive unlabeled data to help learning with limited labeled data. We consider model initialization, diversity augmentation, and pseudo-label editing simultaneously. In our work, we utilize output smearing to initialize modules, use fine-tuning on labeled data to augment diversity, and eliminate unstable pseudo-labels to alleviate the influence of suspicious pseudo-labeled data. Experiments show that our method achieves the best performance in comparison with state-of-the-art semi-supervised deep learning methods. In particular, it achieves an 8.30% error rate on CIFAR-10 using only 4,000 labeled examples.
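
Output smearing, which tri-net uses to initialize diverse modules, perturbs the training targets independently for each module. A minimal sketch for one-hot classification targets; the noise form and scale are assumptions rather than the paper's exact recipe:

```python
import numpy as np

rng = np.random.default_rng(0)

def smear_targets(y: np.ndarray, num_classes: int, std: float = 0.1):
    """Output smearing: add non-negative Gaussian noise to one-hot targets
    so that modules trained on differently smeared copies start diverse."""
    onehot = np.eye(num_classes)[y]
    smeared = onehot + np.abs(rng.normal(0.0, std, onehot.shape))
    return smeared / smeared.sum(axis=1, keepdims=True)  # renormalize

y = np.array([0, 2, 1])
print(smear_targets(y, num_classes=3))
# Each of the three modules would train on an independently smeared copy.
```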


Author(s):  
Joan Serrà

Deep learning is an undeniably hot topic, not only within academia and industry, but also in society and the media. The reasons for the advent of its popularity are manifold: unprecedented availability of data and computing power, some innovative methodologies, minor but significant technical tricks, etc. Interestingly, however, the current success and practice of deep learning seems to be uncorrelated with its theoretical, more formal understanding. As a result, deep learning's state of the art presents a number of unintuitive properties or situations. In this note, I highlight some of these unintuitive properties, try to show relevant recent work, and expose the need to gain insight into them, by either formal or more empirical means.


Author(s):  
P.V.G.D. Prasad Reddy

Age-Related Macular Degeneration (ARMD) is a medical condition resulting in blurred or no vision in the center of the visual field. Though this disease does not make the person completely blind, it makes it very difficult to perform day-to-day activities such as reading, driving, and recognizing people. This paper aims to detect ARMD through Optical Coherence Tomography (OCT) scans, in which drusen in the macula are detected to identify affected patients. The images are first passed through Directional Total Variation (DTV) denoising, followed by an active-contour algorithm to mark the boundaries of the layers in the macula. The images are then categorized as healthy or infected using a convolutional neural network (CNN), a class of deep neural networks most commonly applied to the analysis of visual imagery. Different CNN variants, such as AlexNet, VGGNet, and GoogLeNet, were compared in the experiments, and the results obtained are better than those of traditional methods.
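
A rough sketch of such a pipeline with common open-source stand-ins: scikit-image ships an isotropic TV denoiser rather than the paper's directional TV, and its active-contour (snake) implementation is used for the layer boundary. The CNN classification stage is omitted, and all parameters are assumptions:

```python
import numpy as np
from skimage import filters
from skimage.restoration import denoise_tv_chambolle  # isotropic TV stand-in
from skimage.segmentation import active_contour

# Synthetic stand-in for a single OCT B-scan.
scan = np.random.rand(256, 256)

# 1) Denoise. The paper uses directional TV; this uses isotropic TV.
den = denoise_tv_chambolle(scan, weight=0.1)

# 2) Trace a retinal-layer boundary with an open snake, initialized as a
#    horizontal line across the scan (coordinates are (row, col) pairs).
init = np.column_stack([np.full(100, 128.0), np.linspace(0, 255, 100)])
snake = active_contour(filters.gaussian(den, sigma=3), init,
                       alpha=0.01, beta=1.0, w_line=0.5,
                       boundary_condition="fixed")

# 3) The region around the traced boundary would then be cropped and
#    classified healthy vs. ARMD by a CNN (AlexNet/VGGNet/GoogLeNet).
print(snake.shape)  # (100, 2) boundary coordinates
```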


Author(s):  
Mohammad Khalid Pandit ◽  
Roohie Naaz Mir ◽  
Mohammad Ahsan Chishti

Background: Deep neural networks have become the state-of-the-art technology for real-world classification tasks due to their ability to learn better feature representations at each layer. However, the added accuracy associated with deeper layers comes at a huge cost in computation, energy, and latency.

Objective: Implementing such architectures on resource-constrained IoT devices is computationally prohibitive due to their computational and memory requirements; these factors are particularly severe in the IoT domain. In this paper, we propose the Adaptive Deep Neural Network (ADNN), which is split across the hierarchical compute layers, i.e., edge, fog, and cloud, with each split having one or more exit locations.

Methods: At every exit location, a data sample adaptively chooses to exit the network (based on a confidence criterion) or to be fed into the deeper layers housed across the different compute layers. We design ADNN, an adaptive deep neural network that results in fast and energy-efficient decision making (inference), and jointly optimize all the exit points in ADNN such that the overall loss is minimized.

Results: Experiments on the MNIST dataset show that 41.9% of samples exit at the edge location (correctly classified) and 49.7% of samples exit at the fog layer. Similar results are obtained on the Fashion-MNIST dataset, with only 19.4% of the samples requiring the entire neural network. With this architecture, most data samples are processed and classified locally while maintaining classification accuracy and keeping in check the communication, energy, and latency requirements of time-sensitive IoT applications.

Conclusion: We investigated the approach of distributing the layers of a deep neural network across edge, fog, and cloud computing devices, wherein data samples adaptively choose exit points to classify themselves based on a confidence criterion (threshold). The results show that the majority of data samples are classified within the private network of the user (edge, fog), while only a few samples require the entire ADNN for classification.
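
The exit rule described in the abstract, leave the network early when a confidence criterion is met and otherwise escalate the sample to deeper splits on fog or cloud, can be sketched as follows. The entropy-based confidence measure and the threshold value are assumptions:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class Split(nn.Module):
    """One compute tier (edge, fog, or cloud): a body plus a local exit."""
    def __init__(self, d_in: int, d_out: int, num_classes: int = 10):
        super().__init__()
        self.body = nn.Sequential(nn.Linear(d_in, d_out), nn.ReLU())
        self.exit = nn.Linear(d_out, num_classes)

    def forward(self, x):
        h = self.body(x)
        return h, self.exit(h)

def confident(logits: torch.Tensor, threshold: float = 0.5) -> bool:
    """Assumed criterion: exit when normalized softmax entropy is low."""
    p = F.softmax(logits, dim=-1)
    ent = -(p * p.clamp_min(1e-12).log()).sum()
    return bool(ent / torch.log(torch.tensor(float(p.numel()))) < threshold)

edge, fog, cloud = Split(784, 128), Split(128, 64), Split(64, 32)

def adnn_infer(x):
    """Run each split in turn; stop at the first confident exit."""
    for name, split in [("edge", edge), ("fog", fog), ("cloud", cloud)]:
        x, logits = split(x)
        if name == "cloud" or confident(logits):
            return logits, name

logits, tier = adnn_infer(torch.randn(784))
print(tier)  # which tier the sample exited at
```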


Author(s):  
Jeffrey A. Ruffolo ◽  
Carlos Guerra ◽  
Sai Pooja Mahajan ◽  
Jeremias Sulam ◽  
Jeffrey J. Gray

Abstract: Antibody structure is largely conserved, except for a complementarity-determining region featuring six variable loops. Five of these loops adopt canonical folds which can typically be predicted with existing methods, while the remaining loop (CDR H3) remains a challenge due to its highly diverse set of observed conformations. In recent years, deep neural networks have proven to be effective at capturing the complex patterns of protein structure. This work proposes DeepH3, a deep residual neural network that learns to predict inter-residue distances and orientations from antibody heavy and light chain sequence. The output of DeepH3 is a set of probability distributions over distances and orientation angles between pairs of residues. These distributions are converted to geometric potentials and used to discriminate between decoy structures produced by RosettaAntibody. When evaluated on the Rosetta Antibody Benchmark dataset of 49 targets, DeepH3-predicted potentials identified better, same, and worse structures (measured by root-mean-squared distance [RMSD] from the experimental CDR H3 loop structure) than the standard Rosetta energy function for 30, 13, and 6 targets, respectively, and improved the average RMSD of predictions by 21.3% (0.48 Å). Analysis of individual geometric potentials revealed that inter-residue orientations were more effective than inter-residue distances for discriminating near-native CDR H3 loop structures.
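
Converting a predicted inter-residue distance distribution into a geometric potential is commonly done via the negative log of the binned probabilities. A hedged sketch of that conversion and of scoring one residue pair of a decoy; the bin layout is an assumption, not DeepH3's exact recipe:

```python
import numpy as np

# Assumed binning: 36 distance bins spanning 4-20 Å for one residue pair.
bins = np.linspace(4.0, 20.0, 37)

def potential_from_distribution(probs: np.ndarray, eps: float = 1e-8):
    """Negative log probability per bin; lower values are more favorable."""
    return -np.log(probs + eps)

def score_pair(distance: float, probs: np.ndarray) -> float:
    """Look up the potential at the bin containing the decoy's distance."""
    pot = potential_from_distribution(probs)
    idx = np.clip(np.digitize(distance, bins) - 1, 0, len(probs) - 1)
    return float(pot[idx])

# Toy distribution peaked near 8 Å: a decoy with d = 8.2 Å scores better
# (lower) than one with d = 15 Å, so near-native geometries are favored.
centers = 0.5 * (bins[:-1] + bins[1:])
probs = np.exp(-0.5 * ((centers - 8.0) / 1.0) ** 2)
probs /= probs.sum()
print(score_pair(8.2, probs), score_pair(15.0, probs))
```

Summing such terms over all residue pairs (and over the orientation angles, which the paper found even more discriminative) would give a decoy's total score.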

