Deep Convolutional Neural Networks for Image Classification: A Comprehensive Review

2017 ◽  
Vol 29 (9) ◽  
pp. 2352-2449 ◽  
Author(s):  
Waseem Rawat ◽  
Zenghui Wang

Convolutional neural networks (CNNs) have been applied to visual tasks since the late 1980s. However, despite a few scattered applications, they lay dormant until the mid-2000s, when developments in computing power and the advent of large amounts of labeled data, supplemented by improved algorithms, contributed to their advancement and brought them to the forefront of a neural network renaissance that has seen rapid progression since 2012. In this review, which focuses on the application of CNNs to image classification tasks, we cover their development, from their predecessors up to recent state-of-the-art deep learning systems. Along the way, we analyze (1) their early successes, (2) their role in the deep learning renaissance, (3) selected seminal works that have contributed to their recent popularity, and (4) several improvement attempts by reviewing contributions and challenges of over 300 publications. We also introduce some of their current trends and remaining challenges.

Author(s):  
Zilong Zhong ◽  
Jonathan Li

The prevailing framework for hyperspectral image classification has consisted of complex feature extractors followed by conventional classifiers. Nevertheless, the high spatial and spectral dimensionality of each pixel in hyperspectral imagery hinders progress in this task. Fortunately, since 2012, deep learning models, which can extract hierarchical features from large amounts of everyday three-channel optical images, have emerged as a better alternative to their shallow-learning counterparts. Among deep learning models, convolutional neural networks (CNNs) show a remarkable ability to process massive amounts of data. In this paper, CNNs are adopted as an end-to-end pixelwise scheme to classify the pixels of hyperspectral imagery, in which each pixel contains hundreds of contiguous spectral bands. Preliminary qualitative and quantitative results show that existing CNN models achieve promising classification accuracy and perform effectively and robustly on the University of Pavia dataset.
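
As a rough illustration of such an end-to-end pixelwise scheme (the exact architecture is not given in the abstract), the sketch below treats each pixel as a 1-D sequence of spectral bands and classifies it with a small spectral CNN; PyTorch and the 103 bands / 9 classes commonly associated with the University of Pavia scene are assumptions.

```python
# Minimal sketch of a pixelwise spectral CNN (assumptions: PyTorch; 103 bands
# and 9 classes as in the University of Pavia scene; not the paper's exact model).
import torch
import torch.nn as nn

class SpectralCNN(nn.Module):
    def __init__(self, n_bands=103, n_classes=9):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(1, 32, kernel_size=7, padding=3),   # convolve along the spectral axis
            nn.ReLU(),
            nn.MaxPool1d(2),
            nn.Conv1d(32, 64, kernel_size=5, padding=2),
            nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),                       # collapse the spectral dimension
        )
        self.classifier = nn.Linear(64, n_classes)

    def forward(self, x):                                  # x: (batch, n_bands)
        x = x.unsqueeze(1)                                 # -> (batch, 1, n_bands)
        x = self.features(x).squeeze(-1)                   # -> (batch, 64)
        return self.classifier(x)

model = SpectralCNN()
logits = model(torch.randn(8, 103))                        # 8 pixels, 103 bands each
print(logits.shape)                                        # torch.Size([8, 9])
```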


2019 ◽  
Vol 7 (3) ◽  
pp. SF27-SF40 ◽  
Author(s):  
Rafael Pires de Lima ◽  
Fnu Suriamin ◽  
Kurt J. Marfurt ◽  
Matthew J. Pranter

Artificial intelligence methods have a very wide range of applications. From speech recognition to self-driving cars, the development of modern deep-learning architectures is helping researchers to achieve new levels of accuracy in different fields. Although deep convolutional neural networks (CNNs) (a kind of deep-learning technique) have reached or surpassed human-level performance in image recognition tasks, little has been done to transport this new image classification technology to geoscientific problems. We have developed what we believe to be the first use of CNNs to identify lithofacies in cores. We use highly accurate models (trained with millions of images) and transfer learning to classify images of cored carbonate rocks. We found that different modern CNN architectures can achieve high levels of lithologic image classification accuracy (approximately 90%) and can aid in the core description task. This core image classification technique has the potential to greatly standardize and accelerate the description process. We also provide the community with a new set of labeled data that can be used for further geologic/data science studies.
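
The transfer-learning setup described above can be sketched as follows; the PyTorch/torchvision ResNet-50 backbone and the lithofacies class count are illustrative assumptions, not the authors' exact configuration.

```python
# Sketch of transfer learning for core-image classification (assumptions:
# PyTorch / torchvision, a ResNet-50 backbone pretrained on ImageNet, and a
# hypothetical number of lithofacies classes).
import torch
import torch.nn as nn
from torchvision import models

n_lithofacies = 6                                   # hypothetical class count

backbone = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V2)
for p in backbone.parameters():                     # freeze the ImageNet features
    p.requires_grad = False
backbone.fc = nn.Linear(backbone.fc.in_features, n_lithofacies)  # new trainable head

optimizer = torch.optim.Adam(backbone.fc.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()

# One training step on a (hypothetical) batch of core photographs:
images = torch.randn(4, 3, 224, 224)
labels = torch.randint(0, n_lithofacies, (4,))
loss = criterion(backbone(images), labels)
loss.backward()
optimizer.step()
```

Freezing the pretrained features and retraining only the final layer is the simplest form of transfer learning; fine-tuning deeper layers is a natural extension once a labeled core-image set is available.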


2020 ◽  
Vol 2020 (10) ◽  
pp. 28-1-28-7 ◽  
Author(s):  
Kazuki Endo ◽  
Masayuki Tanaka ◽  
Masatoshi Okutomi

Classification of degraded images is very important in practice because images are usually degraded by compression, noise, blurring, etc. Nevertheless, most research on image classification focuses only on clean images without any degradation. Some papers have already proposed deep convolutional neural networks that combine an image restoration network with a classification network to classify degraded images. This paper proposes an alternative approach in which a degraded image and an additional degradation parameter are used for classification. The proposed classification network has two inputs: the degraded image and the degradation parameter. An estimation network for the degradation parameter is also incorporated for cases where the degradation parameters of the degraded images are unknown. The experimental results show that the proposed method outperforms a straightforward approach in which the classification network is trained with degraded images only.
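
A minimal sketch of the two-input idea is shown below; PyTorch and the fusion strategy (concatenating the degradation parameter with pooled image features) are assumptions, since the paper's exact design is not detailed here.

```python
# Minimal sketch of a degradation-aware, two-input classifier (assumptions:
# PyTorch; a single scalar degradation parameter per image, fused by
# concatenation -- the paper's actual fusion scheme may differ).
import torch
import torch.nn as nn

class DegradationAwareClassifier(nn.Module):
    def __init__(self, n_classes=10):
        super().__init__()
        self.cnn = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Sequential(
            nn.Linear(64 + 1, 128), nn.ReLU(),       # +1 for the degradation parameter
            nn.Linear(128, n_classes),
        )

    def forward(self, image, degradation_param):
        feat = self.cnn(image).flatten(1)            # (batch, 64)
        fused = torch.cat([feat, degradation_param], dim=1)
        return self.head(fused)

model = DegradationAwareClassifier()
imgs = torch.randn(4, 3, 32, 32)                     # degraded images
sigma = torch.rand(4, 1)                             # e.g. a noise-level parameter
print(model(imgs, sigma).shape)                      # torch.Size([4, 10])
```

When the degradation parameter is unknown, a separate estimation network can predict it from the degraded image and feed its output into `degradation_param`, mirroring the pipeline described in the abstract.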


2019 ◽  
Vol 277 ◽  
pp. 02024 ◽  
Author(s):  
Lincan Li ◽  
Tong Jia ◽  
Tianqi Meng ◽  
Yizhe Liu

In this paper, an accurate two-stage deep learning method is proposed to detect vulnerable plaques in intravascular optical coherence tomography (IVOCT) images of the cardiovascular system. First, a fully convolutional network (FCN), U-Net, is used to segment the original IVOCT cardiovascular images. We experiment with different threshold values to find the best threshold for removing noise and background from the original images. Second, a modified Faster R-CNN is adopted for precise detection. The modified Faster R-CNN uses six anchor scales (12², 16², 32², 64², 128², 256²) instead of the conventional one-scale or three-scale approaches. We first present three problems in cardiovascular vulnerable plaque diagnosis and then demonstrate how our method solves them. The proposed method applies deep convolutional neural networks to the whole diagnostic procedure. Test results show that the recall, precision, IoU (intersection over union), and total score are 0.94, 0.885, 0.913, and 0.913, respectively, higher than those of the first-place team in the CCCV2017 Cardiovascular OCT Vulnerable Plaque Detection Challenge. The AP of the designed Faster R-CNN is 83.4%, higher than that of conventional approaches using one-scale or three-scale anchors. These results demonstrate the superior performance of the proposed method and the power of deep learning approaches in diagnosing cardiovascular vulnerable plaques.
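
The anchor modification can be illustrated with torchvision's Faster R-CNN; the MobileNetV2 single-feature-map backbone and the rest of the configuration below are assumptions for the sketch, not the authors' exact pipeline.

```python
# Sketch of a Faster R-CNN whose region proposal network uses six anchor scales
# (assumptions: PyTorch / torchvision, a MobileNetV2 backbone with one feature
# map, two classes; the paper's backbone and training setup are not given here).
import torch
import torchvision
from torchvision.models.detection import FasterRCNN
from torchvision.models.detection.rpn import AnchorGenerator

backbone = torchvision.models.mobilenet_v2(weights="DEFAULT").features
backbone.out_channels = 1280                      # FasterRCNN requires this attribute

# Six anchor sizes (areas 12^2 .. 256^2 pixels) at three aspect ratios.
anchor_generator = AnchorGenerator(
    sizes=((12, 16, 32, 64, 128, 256),),
    aspect_ratios=((0.5, 1.0, 2.0),),
)
roi_pooler = torchvision.ops.MultiScaleRoIAlign(
    featmap_names=["0"], output_size=7, sampling_ratio=2
)

model = FasterRCNN(
    backbone,
    num_classes=2,                                # background + vulnerable plaque
    rpn_anchor_generator=anchor_generator,
    box_roi_pool=roi_pooler,
)

model.eval()
with torch.no_grad():
    detections = model([torch.rand(3, 512, 512)])  # one dummy IVOCT frame
print(detections[0].keys())                        # boxes, labels, scores
```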


Author(s):  
Sheng Shen ◽  
M. K. Sadoughi ◽  
Xiangyi Chen ◽  
Mingyi Hong ◽  
Chao Hu

Over the past two decades, the safety and reliability of lithium-ion (Li-ion) rechargeable batteries have received considerable attention from both industry and academia. To guarantee safe and reliable operation of a Li-ion battery pack and to build failure resilience into the pack, battery management systems (BMSs) should be capable of monitoring, in real time, the state of health (SOH) of the individual cells in the pack. This paper presents a deep learning method, based on deep convolutional neural networks, for cell-level SOH assessment using the capacity, voltage, and current measurements taken during a charge cycle. The distinguishing features of deep convolutional neural networks, local connectivity and shared weights, enable the model to estimate battery capacity accurately from these charge measurements. To our knowledge, this is the first attempt to apply deep learning to online SOH assessment of Li-ion batteries. Ten years of daily cycling data from implantable Li-ion cells are used to verify the performance of the proposed method. Compared with traditional machine learning methods such as the relevance vector machine and shallow neural networks, the proposed method is demonstrated to produce higher accuracy and robustness in capacity estimation.
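
A rough sketch of such a convolutional capacity regressor is given below; PyTorch, the resampling of each charge cycle to 128 points, and the three-channel (capacity, voltage, current) layout are assumptions rather than the authors' exact design.

```python
# Sketch of a 1-D convolutional capacity/SOH regressor (assumptions: PyTorch,
# charge cycles resampled to 128 points with three channels -- capacity,
# voltage, current; not the paper's exact network or preprocessing).
import torch
import torch.nn as nn

class SOHRegressor(nn.Module):
    def __init__(self, n_points=128):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv1d(3, 16, kernel_size=5, padding=2), nn.ReLU(), nn.MaxPool1d(2),
            nn.Conv1d(16, 32, kernel_size=5, padding=2), nn.ReLU(), nn.MaxPool1d(2),
        )
        self.fc = nn.Sequential(
            nn.Flatten(),
            nn.Linear(32 * (n_points // 4), 64), nn.ReLU(),
            nn.Linear(64, 1),                      # estimated capacity (or SOH fraction)
        )

    def forward(self, x):                          # x: (batch, 3, n_points)
        return self.fc(self.conv(x))

model = SOHRegressor()
charge_curves = torch.randn(16, 3, 128)            # 16 charge cycles
print(model(charge_curves).shape)                  # torch.Size([16, 1])
```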


2018 ◽  
Vol 7 (2.7) ◽  
pp. 614 ◽  
Author(s):  
M Manoj krishna ◽  
M Neelima ◽  
M Harshali ◽  
M Venu Gopala Rao

Image classification is a classical problem in image processing, computer vision, and machine learning. In this paper, we study image classification using deep learning, employing the AlexNet convolutional neural network architecture. Four test images are selected from the ImageNet database for classification. We crop the images to various regions and conduct experiments on each. The results show the effectiveness of deep learning-based image classification using AlexNet.
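
Classifying a test image with a pretrained AlexNet can be sketched as follows; torchvision and the placeholder file name "test.jpg" are assumptions for the example.

```python
# Minimal sketch of AlexNet-based classification (assumptions: torchvision's
# ImageNet-pretrained AlexNet; "test.jpg" stands in for one of the test images).
import torch
from torchvision import models
from torchvision.models import AlexNet_Weights
from PIL import Image

weights = AlexNet_Weights.IMAGENET1K_V1
model = models.alexnet(weights=weights).eval()
preprocess = weights.transforms()                  # resize, center-crop, normalize

image = Image.open("test.jpg").convert("RGB")
with torch.no_grad():
    logits = model(preprocess(image).unsqueeze(0))

top5 = logits.softmax(dim=1).topk(5)               # five most likely ImageNet classes
for prob, idx in zip(top5.values[0], top5.indices[0]):
    print(f"{weights.meta['categories'][idx]}: {prob:.3f}")
```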

