Zero-Keep Filter Pruning for Energy/Power Efficient Deep Neural Networks

Electronics ◽  
2021 ◽  
Vol 10 (11) ◽  
pp. 1238
Author(s):  
Yunhee Woo ◽  
Dongyoung Kim ◽  
Jaemin Jeong ◽  
Young-Woong Ko ◽  
Jeong-Gun Lee

Recent deep learning models achieve high accuracy and fast inference, but their large number of parameters demands high-performance computing resources. Not all systems have such hardware, however: a deep learning model sometimes needs to run on edge devices such as IoT devices or smartphones, where computing resources are limited and the amount of computation must be reduced before the model can be deployed. Pruning is one of the well-known approaches for deriving light-weight models by eliminating weights, channels or filters. In this work, we propose “zero-keep filter pruning” for energy-efficient deep neural networks. The proposed method maximizes the number of zero elements in filters by replacing small values with zero and pruning the filters that have the lowest number of zeros; in the conventional approach, by contrast, the filters with the highest number of zeros are pruned. As a result, zero-keep filter pruning leaves a model whose remaining filters contain many zeros. We compared the proposed method with random filter pruning and showed that it achieves better performance, with far fewer non-zero elements and only a marginal drop in accuracy. Finally, we discuss a possible multiplier architecture, the zero-skip multiplier circuit, which skips multiplications by zero to accelerate inference and reduce energy consumption.
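The two-step procedure described above (threshold small weights to zero, then prune the filters with the fewest zeros) can be sketched in NumPy; the function name, threshold, and tensor shapes below are illustrative assumptions, not the authors' implementation:

```python
import numpy as np

def zero_keep_prune(filters, threshold, n_prune):
    """Sketch of zero-keep filter pruning (illustrative, not the
    authors' code): threshold small weights to zero, then prune the
    filters with the FEWEST zeros, keeping the zero-rich ones."""
    # Step 1: maximize zero elements by zeroing small magnitudes.
    sparse = np.where(np.abs(filters) < threshold, 0.0, filters)
    # Step 2: count zeros per filter (filters stacked along axis 0).
    zero_counts = (sparse == 0).reshape(sparse.shape[0], -1).sum(axis=1)
    # Drop the n_prune filters with the lowest zero counts.
    keep = np.sort(np.argsort(zero_counts)[n_prune:])
    return sparse[keep]

rng = np.random.default_rng(0)
w = rng.normal(size=(4, 3, 3, 3))    # 4 filters of shape 3x3x3
pruned = zero_keep_prune(w, threshold=0.5, n_prune=1)
print(pruned.shape)                  # (3, 3, 3, 3)
```

The surviving filters are exactly the zero-rich ones a zero-skip multiplier could exploit, since every remaining non-zero weight has magnitude at or above the threshold.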

2021 ◽  
Vol 6 (5) ◽  
pp. 10-15
Author(s):  
Ela Bhattacharya ◽  
D. Bhattacharya

COVID-19 has emerged as the latest worrisome pandemic, whose outbreak was first reported in Wuhan, China. The infection spreads through human contact and, as a result, has caused massive numbers of infections across 200 countries around the world. Artificial intelligence has likewise contributed to managing the COVID-19 pandemic in various aspects within a short span of time. The deep neural networks explored in this paper have contributed to the detection of COVID-19 from imaging sources. This paper investigates the datasets, pre-processing, segmentation, feature extraction, classification and test results of these approaches, which can be useful for identifying future directions in automatic, artificial-intelligence-based diagnosis of the disease.


2021 ◽  
pp. 27-38
Author(s):  
Rafaela Carvalho ◽  
João Pedrosa ◽  
Tudor Nedelcu

Abstract Skin cancer is one of the most common types of cancer and, with its increasing incidence, accurate early diagnosis is crucial to improve the prognosis of patients. During visual inspection, dermatologists follow specific dermoscopic algorithms and identify important features to provide a diagnosis. This process can be automated, as such characteristics can be extracted by computer vision techniques. Although deep neural networks can extract useful features from digital images for skin lesion classification, performance can be improved by providing additional information. The extracted pseudo-features can be used as input (multimodal) or output (multi-tasking) to train a robust deep learning model. This work investigates multimodal and multi-tasking techniques for more efficient training, given the joint optimization of several related tasks in the latter, and for generating better diagnosis predictions. Additionally, the role of lesion segmentation is studied. Results show that multi-tasking improves the learning of beneficial features, which leads to better predictions, and that pseudo-features inspired by the ABCD rule provide readily available, helpful information about the skin lesion.
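The multi-tasking setup can be sketched minimally: one shared feature vector feeds both a diagnosis head and an auxiliary head predicting ABCD-style pseudo-features, so both losses shape the shared encoder. All shapes and class counts below are illustrative assumptions, not the paper's architecture:

```python
import numpy as np

def multitask_heads(features, w_diag, w_abcd):
    """Sketch of a multi-tasking head (hypothetical shapes): a shared
    embedding feeds a diagnosis head and an auxiliary head predicting
    ABCD-style pseudo-features."""
    diag_logits = features @ w_diag   # main task: lesion-class scores
    abcd_pred = features @ w_abcd     # auxiliary task: pseudo-features
    return diag_logits, abcd_pred

rng = np.random.default_rng(3)
f = rng.normal(size=128)              # shared CNN embedding (assumed size)
d, a = multitask_heads(f, rng.normal(size=(128, 7)), rng.normal(size=(128, 4)))
print(d.shape, a.shape)               # (7,) (4,)
```

During training, a weighted sum of the two heads' losses would be backpropagated through the shared encoder, which is what makes the auxiliary task useful to the main one.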


2020 ◽  
Author(s):  
Albahli Saleh ◽  
Ali Alkhalifah

BACKGROUND To diagnose cardiothoracic diseases, a chest x-ray (CXR) is examined by a radiologist. As more people are affected, doctors are becoming scarce, especially in developing countries. However, with the advent of image processing tools, the task of diagnosing cardiothoracic diseases has seen great progress. Many researchers have studied how the problems associated with medical images can be mitigated by using neural networks. OBJECTIVE Previous works used state-of-the-art techniques and obtained effective results with one or two cardiothoracic diseases, but could lead to misclassification. In our work, we adopted GANs to synthesize chest radiographs to augment the training set for multiple cardiothoracic diseases and efficiently diagnose chest diseases across different classes, as shown in Figure 1. Our major contributions are: classifying various cardiothoracic diseases to detect a specific chest disease based on CXR; using GANs to overcome the shortage of small training datasets and address the problem of imbalanced data; and implementing an optimal deep neural network architecture with different hyper-parameters to obtain the model with the best accuracy. METHODS We do not build a model from scratch, due to computational restraints, as doing so requires very high-end computers. Rather, we use a convolutional neural network (CNN), a class of deep neural networks, within a generative adversarial network (GAN)-based model that generates synthetic data for training, since the amount of available data is limited. We use pre-trained models, i.e., models trained on a large benchmark dataset to solve a problem similar to ours; for example, the ResNet-152 model we used was initially trained on the ImageNet dataset.
RESULTS After successful training and validation of the models we developed, ResNet-152 with image augmentation proved to be the best model for the automatic detection of cardiothoracic disease. However, one of the main problems in radiographic deep learning research is the scarcity of sufficiently large datasets, a key requirement since deep learning models need a lot of data for training. This is why some of our models used image augmentation to increase the number of images without duplication. As more data are collected in the field of chest radiology, the models could be retrained to improve their accuracy, as deep learning models improve with more data. CONCLUSIONS This research employs the advantages of computer vision and medical image analysis to develop an automated model with clinical potential for early detection of disease. Using deep learning models, the research aims to evaluate the effectiveness and accuracy of different convolutional neural network models in the automatic diagnosis of cardiothoracic diseases from x-ray images, compared with diagnosis by experts in the medical community.
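Augmentation of the kind mentioned above (expanding the image set without exact duplication) can be sketched with simple geometric transforms; this is a generic illustration, not the GAN-based synthesis the paper actually uses:

```python
import numpy as np

def augment(images):
    """Generic augmentation sketch (flips and shifts, NOT the paper's
    GAN synthesis): expands a batch of images without exact duplicates."""
    flipped = images[:, :, ::-1]                  # horizontal flip
    shifted = np.roll(images, shift=2, axis=2)    # 2-pixel horizontal shift
    return np.concatenate([images, flipped, shifted], axis=0)

rng = np.random.default_rng(1)
x = rng.random((8, 64, 64))     # 8 grayscale CXR-like images (toy size)
x_aug = augment(x)
print(x_aug.shape)              # (24, 64, 64)
```

A GAN-based pipeline replaces these fixed transforms with a generator trained to synthesize new radiographs, which also helps with class imbalance.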


Author(s):  
Kosuke Takagi

Abstract Despite the recent success of deep learning models in solving various problems, their ability is still limited compared with human intelligence, which has the flexibility to adapt to a changing environment. Obtaining a model that is adaptable to a wide range of problems and tasks is a challenging problem. To achieve this, one issue that must be addressed is the identification of the similarities and differences between the human brain and deep neural networks. In this article, inspired by the human flexibility that might suggest the existence of a common mechanism for solving different kinds of tasks, we consider a general learning process in neural networks on which no specific conditions or constraints are imposed. We then theoretically show that, as learning progresses, the network structure converges to a state characterized by a unique distribution model with respect to network quantities such as connection weight and node strength. Noting that empirical data indicate this state emerges in the large-scale network of the human brain, we show that the same state can be reproduced in a simple example of a deep learning model. Although further research is needed, our findings provide insight into a common inherent mechanism underlying the human brain and deep learning, and suggest directions for designing efficient learning algorithms that solve a wide variety of tasks in the future.


2020 ◽  
Author(s):  
Wesley Wei Qian ◽  
Nathan T. Russell ◽  
Claire L. W. Simons ◽  
Yunan Luo ◽  
Martin D. Burke ◽  
...  

<div>Accurate <i>in silico</i> models for the prediction of novel chemical reaction outcomes can be used to guide the rapid discovery of new reactivity and enable novel synthesis strategies for newly discovered lead compounds. Recent advances in machine learning, driven by deep learning models and data availability, have shown utility throughout synthetic organic chemistry as a data-driven method for reaction prediction. Here we present a machine-intelligence approach to predict the products of an organic reaction by integrating deep neural networks with a probabilistic and symbolic inference that flexibly enforces chemical constraints and accounts for prior chemical knowledge. We first train a graph convolutional neural network to estimate the likelihood of changes in covalent bonds, hydrogen counts, and formal charges. These estimated likelihoods govern a probability distribution over potential products. Integer Linear Programming is then used to infer the most probable products from the probability distribution subject to heuristic rules such as the octet rule and chemical constraints that reflect a user's prior knowledge. Our approach outperforms previous graph-based neural networks by predicting products with more than 90% accuracy, demonstrates intuitive chemical reasoning through a learned attention mechanism, and provides generalizability across various reaction types. Furthermore, we demonstrate the potential for even higher model accuracy when complemented by expert chemists contributing to the system, boosting both machine and expert performance. The results show the advantages of empowering deep learning models with chemical intuition and knowledge to expedite the drug discovery process.</div>
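The inference step described above (pick the most probable product subject to constraints) can be illustrated with a toy brute-force analogue; the bond labels, probabilities, and cap on simultaneous changes below are invented for illustration, and the authors' actual method uses Integer Linear Programming rather than enumeration:

```python
from itertools import combinations

def most_probable_edit_set(probs, max_changes):
    """Toy analogue of constrained product inference (not the authors'
    ILP): enumerate bond-edit subsets and return the most probable one
    subject to a cap on simultaneous changes."""
    bonds = list(probs)
    best, best_p = frozenset(), 1.0
    for b in bonds:                       # baseline: no bond changes
        best_p *= 1.0 - probs[b]
    for r in range(1, max_changes + 1):
        for subset in combinations(bonds, r):
            chosen = set(subset)
            p = 1.0
            for b in bonds:
                p *= probs[b] if b in chosen else 1.0 - probs[b]
            if p > best_p:
                best, best_p = frozenset(chosen), p
    return best, best_p

# Hypothetical per-bond change likelihoods from a trained network.
probs = {"C1-C2": 0.9, "C2-O3": 0.2, "O3-H4": 0.7}
edits, p = most_probable_edit_set(probs, max_changes=2)
print(sorted(edits), round(p, 3))   # ['C1-C2', 'O3-H4'] 0.504
```

An ILP formulation expresses the same maximization over binary edit variables, which scales to real molecules and admits richer constraints such as the octet rule.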


2021 ◽  
Vol 118 (43) ◽  
pp. e2103091118
Author(s):  
Cong Fang ◽  
Hangfeng He ◽  
Qi Long ◽  
Weijie J. Su

In this paper, we introduce the Layer-Peeled Model, a nonconvex, yet analytically tractable, optimization program, in a quest to better understand deep neural networks that are trained for a sufficiently long time. As the name suggests, this model is derived by isolating the topmost layer from the remainder of the neural network, followed by imposing certain constraints separately on the two parts of the network. We demonstrate that the Layer-Peeled Model, albeit simple, inherits many characteristics of well-trained neural networks, thereby offering an effective tool for explaining and predicting common empirical patterns of deep-learning training. First, when working on class-balanced datasets, we prove that any solution to this model forms a simplex equiangular tight frame, which, in part, explains the recently discovered phenomenon of neural collapse [V. Papyan, X. Y. Han, D. L. Donoho, Proc. Natl. Acad. Sci. U.S.A. 117, 24652–24663 (2020)]. More importantly, when moving to the imbalanced case, our analysis of the Layer-Peeled Model reveals a hitherto-unknown phenomenon that we term Minority Collapse, which fundamentally limits the performance of deep-learning models on the minority classes. In addition, we use the Layer-Peeled Model to gain insights into how to mitigate Minority Collapse. Interestingly, this phenomenon is first predicted by the Layer-Peeled Model before being confirmed by our computational experiments.
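For reference, the simplex equiangular tight frame mentioned above has a standard closed form (textbook definition, not the paper's notation): K class-mean vectors of equal norm whose pairwise angles are all maximally and equally separated.

```latex
% Columns of M^* are the K class means; P has orthonormal columns.
\mathbf{M}^{\star} \;=\; \sqrt{\tfrac{K}{K-1}}\;
  \mathbf{P}\!\left(\mathbf{I}_{K}-\tfrac{1}{K}\mathbf{1}_{K}\mathbf{1}_{K}^{\top}\right),
\qquad \mathbf{P}^{\top}\mathbf{P}=\mathbf{I}_{K},
% so every pair of distinct means meets at the same angle:
\qquad
\frac{\langle \mathbf{m}_{i},\mathbf{m}_{j}\rangle}{\|\mathbf{m}_{i}\|\,\|\mathbf{m}_{j}\|}
  \;=\; -\frac{1}{K-1}, \quad i \neq j .
```

The cosine of $-1/(K-1)$ is the most negative value achievable by $K$ equal-norm vectors simultaneously, which is why this configuration characterizes neural collapse on balanced data.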



Electronics ◽  
2020 ◽  
Vol 10 (1) ◽  
pp. 17
Author(s):  
Soha A. Nossier ◽  
Julie Wall ◽  
Mansour Moniri ◽  
Cornelius Glackin ◽  
Nigel Cannings

Recent speech enhancement research has shown that deep learning techniques are very effective at removing background noise. Many deep neural networks have been proposed, showing promising results for improving overall speech perception. The deep multilayer perceptron, convolutional neural networks, and the denoising autoencoder are well-established architectures for speech enhancement; however, choosing between different deep learning models has been mainly empirical. Consequently, a comparative analysis of these three architecture types is needed to show the factors affecting their performance. In this paper, this analysis is presented by comparing seven deep learning models that belong to these three categories. The comparison evaluates performance in terms of the overall quality of the output speech, using five objective evaluation metrics and a subjective evaluation with 23 listeners; the ability to deal with challenging noise conditions; generalization ability; complexity; and processing time. Further analysis is then provided using two approaches. The first investigates how performance is affected by changing network hyperparameters and the structure of the data, including the Lombard effect. The second interprets the results by visualizing the spectrogram of the output layer of all investigated models and the spectrograms of the hidden layers of the convolutional neural network architecture. Finally, a general evaluation of supervised deep learning-based speech enhancement is performed using SWOC analysis to discuss the technique’s Strengths, Weaknesses, Opportunities, and Challenges. The results of this paper contribute to the understanding of how different deep neural networks perform the speech enhancement task, highlight the strengths and weaknesses of each architecture, and provide recommendations for achieving better performance. This work facilitates the development of better deep neural networks for speech enhancement in the future.
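A common training target in supervised mask-based speech enhancement, which networks like those compared above often learn to approximate, is the ideal ratio mask over magnitude spectrograms; the shapes below are illustrative, and the additive-magnitude mixing is a simplifying assumption:

```python
import numpy as np

def ideal_ratio_mask(clean_mag, noise_mag):
    """Ideal ratio mask, a standard training target in mask-based
    speech enhancement; values lie in [0, 1] per time-frequency bin."""
    return clean_mag / (clean_mag + noise_mag + 1e-8)

rng = np.random.default_rng(5)
clean = np.abs(rng.normal(size=(257, 100)))   # |STFT| magnitudes, freq x frames
noise = np.abs(rng.normal(size=(257, 100)))
noisy = clean + noise                          # additive-magnitude simplification
mask = ideal_ratio_mask(clean, noise)
enhanced = mask * noisy                        # recovers approximately `clean`
print(mask.min() >= 0.0, mask.max() <= 1.0)    # True True
```

At inference time a trained network predicts the mask from the noisy spectrogram alone; multiplying and inverting the STFT yields the enhanced waveform.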


Entropy ◽  
2021 ◽  
Vol 23 (2) ◽  
pp. 143
Author(s):  
Domjan Barić ◽  
Petar Fumić ◽  
Davor Horvatić ◽  
Tomislav Lipic

The adoption of deep learning models within safety-critical systems cannot rely only on good prediction performance but needs to provide interpretable and robust explanations for their decisions. When modeling complex sequences, attention mechanisms are regarded as the established approach for endowing deep neural networks with intrinsic interpretability. This paper focuses on the emerging trend of specifically designing diagnostic datasets for understanding the inner workings of attention-mechanism-based deep learning models for multivariate forecasting tasks. We design a novel benchmark of synthetic datasets with transparent underlying generating processes of multiple time-series interactions of increasing complexity. The benchmark enables empirical evaluation of attention-based deep neural networks in three different aspects: (i) prediction performance, (ii) interpretability correctness, and (iii) sensitivity analysis. Our analysis shows that although most models have satisfactory and stable prediction performance, they often fail to give correct interpretations. The only model with both a satisfactory performance score and correct interpretability is IMV-LSTM, which captures both autocorrelations and cross-correlations between multiple time series. Interestingly, when evaluating IMV-LSTM on simulated data from statistical and mechanistic models, the correctness of interpretability increases with more complex datasets.
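A diagnostic dataset with a transparent generating process can be sketched as follows: the target depends on one input series at a known lag, so a correct attention map should concentrate there. The design (two series, lag 2, the coefficients) is a hypothetical illustration, not the paper's benchmark:

```python
import numpy as np

def synthetic_pair(n, rng):
    """Sketch of a diagnostic series with a transparent generating
    process (hypothetical design): y depends only on x1 at lag 2, so a
    correct attention map should concentrate on x1 two steps back."""
    x1 = rng.normal(size=n)
    x2 = rng.normal(size=n)                       # pure distractor
    y = np.zeros(n)
    y[2:] = 0.8 * x1[:-2] + 0.05 * rng.normal(size=n - 2)
    return np.stack([x1, x2], axis=1), y

rng = np.random.default_rng(42)
X, y = synthetic_pair(200, rng)
# Ground truth is checkable: y correlates strongly with lagged x1 only.
c1 = np.corrcoef(y[2:], X[:-2, 0])[0, 1]
c2 = np.corrcoef(y[2:], X[:-2, 1])[0, 1]
print(round(c1, 2), round(c2, 2))
```

Because the generating process is known exactly, a model's attention weights can be scored against it, which is what makes interpretability correctness measurable.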


2020 ◽  
Vol 10 (24) ◽  
pp. 8934
Author(s):  
Yan He ◽  
Bin Fu ◽  
Jian Yu ◽  
Renfa Li ◽  
Rucheng Jiang

Wireless and mobile health applications promote the development of smart healthcare. Effective diagnosis and feedback on remote health data pose significant challenges due to streaming data, high noise, network latency and user privacy. We therefore explore an efficient edge-and-cloud design that maintains electrocardiogram classification performance while reducing communication cost. Our contributions include: (1) We introduce a hybrid smart medical architecture named edge convolutional neural networks (EdgeCNN), which balances the capabilities of edge and cloud computing to enable agile learning of healthcare data from IoT devices. (2) We present an effective deep learning model for electrocardiogram (ECG) inference that can be deployed to run on edge smart devices for low-latency diagnosis. (3) We design a data enhancement method for ECG based on a deep convolutional generative adversarial network to expand the ECG data volume. (4) We carry out experiments on two representative datasets to evaluate the effectiveness of the EdgeCNN-based deep learning model for ECG classification. EdgeCNN proves superior to traditional cloud medical systems in terms of network Input/Output (I/O) pressure, architecture cost and system availability. The deep learning model not only ensures high diagnostic accuracy, but also has advantages in inference time, storage, running memory and power consumption.
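The kind of low-footprint edge inference described above can be illustrated with a minimal 1-D CNN forward pass in plain NumPy; the architecture, sampling rate, and class count are illustrative assumptions, not the paper's EdgeCNN:

```python
import numpy as np

def tiny_ecg_forward(signal, kernels, w_out):
    """Minimal 1-D CNN forward pass (an illustrative stand-in for an
    edge ECG classifier): conv -> ReLU -> global average pool -> linear."""
    # Valid 1-D convolution of the signal with each temporal kernel.
    feats = np.array([np.convolve(signal, k, mode="valid") for k in kernels])
    feats = np.maximum(feats, 0.0)          # ReLU
    pooled = feats.mean(axis=1)             # global average pooling
    return pooled @ w_out                   # (n_kernels,) @ (n_kernels, n_classes)

rng = np.random.default_rng(7)
ecg = rng.normal(size=360)                  # one second of signal (assumed 360 Hz)
kernels = rng.normal(size=(4, 16))          # 4 small temporal filters
w_out = rng.normal(size=(4, 2))             # e.g. normal vs. abnormal beat
print(tiny_ecg_forward(ecg, kernels, w_out).shape)   # (2,)
```

A model of this size fits comfortably in the memory and power budget of an edge device, while the cloud side can retain heavier retraining and data-aggregation duties.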

