Multiclassification of license plate based on deep convolution neural networks

Author(s):  
Masar Abed Uthaib ◽  
Muayad Sadik Croock

License plate classification faces several challenges, such as the varying sizes of plate numbers, the plates' backgrounds, and the limited size of available plate datasets. In this paper, a multiclass classification model is established using a deep convolutional neural network (CNN) to classify license plates from three countries (Armenia, Belarus, Hungary) with a dataset of 600 images, 200 per class (160 for training and 40 for validation). Because the dataset is small, it is preprocessed using pixel normalization and image data augmentation techniques (rotation, horizontal flip, zoom range) to increase the number of samples. The augmented images are then fed into the convolutional model, which consists of four blocks of convolutional layers. To compute and optimize the efficiency of the classification model, categorical cross-entropy loss and the Adam optimizer are used with a learning rate of 0.0001. The model achieved 99.17% and 97.50% accuracy on the training and validation sets, respectively, with a total classification accuracy of 96.66%. Training lasted 12 minutes. Anaconda Python 3.7 and Keras with the TensorFlow backend were used.
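The preprocessing the abstract describes can be illustrated in a few lines. This is a minimal numpy sketch, not the authors' Keras pipeline: discrete 90-degree rotation stands in for Keras' continuous rotation range, and the function names are illustrative.

```python
import numpy as np

def normalize_pixels(img):
    """Scale 8-bit pixel values into [0, 1] (pixel normalization)."""
    return img.astype(np.float32) / 255.0

def augment(img, rng):
    """Randomly apply the augmentations named in the abstract:
    horizontal flip and rotation (here quantized to multiples of 90 degrees)."""
    if rng.random() < 0.5:
        img = img[:, ::-1]          # horizontal flip
    k = rng.integers(0, 4)          # rotate by k * 90 degrees
    return np.rot90(img, k)

rng = np.random.default_rng(0)
plate = np.arange(12, dtype=np.uint8).reshape(3, 4)   # toy "plate" image
x = normalize_pixels(plate)
y = augment(x, rng)
```

In a real pipeline each training image would pass through such transforms once per epoch, effectively multiplying the 160 training samples per class.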

Mathematics ◽  
2021 ◽  
Vol 9 (6) ◽  
pp. 624
Author(s):  
Stefan Rohrmanstorfer ◽  
Mikhail Komarov ◽  
Felix Mödritscher

With the ever-increasing amount of image data, it has become a necessity to automatically look for and process information in these images. As fashion is captured in images, the fashion sector provides the perfect foundation to be supported by the integration of a service or application built on an image classification model. In this article, the state of the art for image classification is analyzed and discussed. Based on the elaborated knowledge, four different approaches are implemented to successfully extract features from fashion data. For this purpose, a human-worn fashion dataset with 2567 images was created and significantly enlarged through image augmentation operations. The results show that convolutional neural networks are the undisputed standard for classifying images, and that TensorFlow is the best library to build them. Moreover, through the introduction of dropout layers, data augmentation and transfer learning, model overfitting was successfully prevented, and it was possible to incrementally improve the validation accuracy on the created dataset from an initial 69% to a final validation accuracy of 84%. More distinct apparel like trousers, shoes and hats were better classified than other upper body clothes.
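Of the anti-overfitting measures the abstract lists, dropout is the simplest to show in isolation. A minimal numpy sketch of inverted dropout follows (this is the standard technique, not the authors' TensorFlow code; shapes and the 0.5 rate are illustrative):

```python
import numpy as np

def dropout(activations, rate, rng, training=True):
    """Inverted dropout: zero a fraction `rate` of units during training
    and rescale the survivors so the expected activation is unchanged."""
    if not training or rate == 0.0:
        return activations
    keep = (rng.random(activations.shape) >= rate).astype(activations.dtype)
    return activations * keep / (1.0 - rate)

rng = np.random.default_rng(42)
a = np.ones((4, 1000))      # toy layer activations
d = dropout(a, 0.5, rng)    # roughly half the units zeroed, rest doubled
```

Because surviving units are rescaled by 1/(1 - rate), no change is needed at inference time, which is why frameworks disable dropout simply by passing training=False.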


2021 ◽  
Vol 11 (9) ◽  
pp. 842
Author(s):  
Shruti Atul Mali ◽  
Abdalla Ibrahim ◽  
Henry C. Woodruff ◽  
Vincent Andrearczyk ◽  
Henning Müller ◽  
...  

Radiomics converts medical images into mineable data via a high-throughput extraction of quantitative features used for clinical decision support. However, these radiomic features are susceptible to variation across scanners, acquisition protocols, and reconstruction settings. Various investigations have assessed the reproducibility and validation of radiomic features across these discrepancies. In this narrative review, we combine systematic keyword searches with prior domain knowledge to discuss various harmonization solutions to make the radiomic features more reproducible across various scanners and protocol settings. Different harmonization solutions are discussed and divided into two main categories: image domain and feature domain. The image domain category comprises methods such as the standardization of image acquisition, post-processing of raw sensor-level image data, data augmentation techniques, and style transfer. The feature domain category consists of methods such as the identification of reproducible features and normalization techniques such as statistical normalization, intensity harmonization, ComBat and its derivatives, and normalization using deep learning. We also reflect upon the importance of deep learning solutions for addressing variability across multi-centric radiomic studies especially using generative adversarial networks (GANs), neural style transfer (NST) techniques, or a combination of both. We cover a broader range of methods especially GANs and NST methods in more detail than previous reviews.
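The feature-domain category above includes statistical normalization. A minimal numpy sketch of per-scanner z-scoring follows; it is a simplified stand-in for the location/scale alignment underlying ComBat-style harmonization, not a full ComBat implementation, and the data are synthetic:

```python
import numpy as np

def standardize_per_batch(features, batch_ids):
    """Give each scanner/batch zero mean and unit variance per feature,
    removing batch-level location and scale shifts."""
    out = np.empty_like(features, dtype=np.float64)
    for b in np.unique(batch_ids):
        mask = batch_ids == b
        block = features[mask]
        out[mask] = (block - block.mean(axis=0)) / block.std(axis=0)
    return out

# Two radiomic features from two scanners with very different scales
feats = np.array([[1.0, 10.0], [3.0, 14.0], [100.0, 0.0], [104.0, 8.0]])
batches = np.array([0, 0, 1, 1])
harmonized = standardize_per_batch(feats, batches)
```

Full ComBat additionally pools information across features with an empirical Bayes step, which makes it more robust when each batch has few samples.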


Camera traps are used to recover images of animals in their habitats to help in the conservation of fauna. Millions of images are captured by camera traps, but extracting information from these data is slow and resource-intensive, so millions of images sometimes go unused for lack of resources. That is why researchers have proposed solution approaches using Convolutional Neural Networks (CNNs) and object detection models to automate the retrieval of information from these images. We used Faster R-CNN and data augmentation techniques on the Gold Standard Snapshot Serengeti Dataset to detect animals in images and count them. The performances of the two models (one trained on the original dataset and one trained on the augmented dataset) were compared to show the importance of having more data for this task. Using the augmented dataset, we trained our model, which reached an accuracy of 98.26% for classification of the proposed regions, an accuracy of 79.55% for counting the species present in the images, and a mAP of 95.3%. For future work, the model can be trained to recognize the actions and characteristics of animals and tuned to be more efficient for the counting task.
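The counting step downstream of the detector can be sketched with the standard library. This is a simplified stand-in for post-processing Faster R-CNN output: detections are modeled as (label, score) pairs and the 0.5 threshold is illustrative.

```python
from collections import Counter

def count_species(detections, score_threshold=0.5):
    """Count animals per species from detector output, keeping only
    detections above a confidence threshold."""
    kept = [label for label, score in detections if score >= score_threshold]
    return Counter(kept)

# Toy detections for one camera-trap image
dets = [("zebra", 0.97), ("zebra", 0.81), ("wildebeest", 0.66), ("zebra", 0.31)]
counts = count_species(dets)   # the 0.31 zebra box is discarded
```

In practice non-maximum suppression would first merge overlapping boxes so the same animal is not counted twice.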


Author(s):  
Y. Mohana Roopa et al.

Brain tumors are among the most hazardous and lethal cancers, and their diagnosis requires effective tumor detection, making medical image information extremely essential. The most commonly used images are magnetic resonance imaging (MRI) images, which provide greater differentiation of the body's assorted soft tissues. In this paper we propose a deep learning architecture, specifically a convolutional neural network (CNN) combined with augmentation techniques, for the automatic supervised classification of MRI images into tumor or no tumor. The proposed system has three stages: first, brain tumor images are resized (normalized) to a uniform size for effective model training; next, extensive data augmentation is employed to mitigate the lack-of-data problem in classification; finally, a CNN model is built for image classification.


2021 ◽  
Vol 22 (1) ◽  
Author(s):  
Olarik Surinta ◽  
Narong Boonsirisumpun

Vehicle type recognition faces a significant problem when users need to search for vehicle data in a video surveillance system but no license plate appears in the image. This paper proposes to solve this problem with a deep learning technique called the Convolutional Neural Network (CNN), one of the latest advanced machine learning techniques. In the experiments, the researchers collected two datasets of Vehicle Type Image Data (VTID I & II), containing 1,310 and 4,356 images, respectively. The first experiment was performed with 5 CNN architectures (MobileNets, VGG16, VGG19, Inception V3, and Inception V4), and the second experiment with another 5 CNNs (MobileNetV2, ResNet50, Inception ResNet V2, Darknet-19, and Darknet-53) including several data augmentation methods. The results showed that MobileNets, when combined with the brightness augmentation method, significantly outperformed the other CNN architectures, producing the highest accuracy rate at 95.46%. It was also the fastest model when compared to the other CNN networks.
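The brightness augmentation that gave the best result above amounts to scaling pixel intensities and clipping to the valid 8-bit range. A minimal numpy sketch (the factor and toy image are illustrative, not the paper's settings):

```python
import numpy as np

def adjust_brightness(img, factor):
    """Scale pixel intensities by `factor` and clip to the 8-bit range,
    producing a brighter (factor > 1) or darker (factor < 1) copy."""
    out = img.astype(np.float32) * factor
    return np.clip(out, 0, 255).astype(np.uint8)

vehicle = np.array([[10, 200], [128, 250]], dtype=np.uint8)  # toy image
brighter = adjust_brightness(vehicle, 1.5)                   # 200 and 250 saturate at 255
```

During training, the factor would be drawn at random per image so the network sees the same vehicle under many lighting conditions.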


Sensors ◽  
2020 ◽  
Vol 20 (9) ◽  
pp. 2504
Author(s):  
Marlies Lauwers ◽  
Benny De Cauwer ◽  
David Nuyttens ◽  
Simon R. Cool ◽  
Jan G. Pieters

Cyperus esculentus (yellow nutsedge) is one of the world’s worst weeds as it can cause great damage to crops and crop production. To eradicate C. esculentus, early detection is key—a challenging task as it is often confused with other Cyperaceae and displays wide genetic variability. In this study, the objective was to classify C. esculentus clones and morphologically similar weeds. Hyperspectral reflectance between 500 and 800 nm was tested as a measure to discriminate between (I) C. esculentus and morphologically similar Cyperaceae weeds, and between (II) different clonal populations of C. esculentus using three classification models: random forest (RF), regularized logistic regression (RLR) and partial least squares–discriminant analysis (PLS–DA). RLR performed better than RF and PLS–DA, and was able to adequately classify the samples. The possibility of creating an affordable multispectral sensing tool, for precise in-field recognition of C. esculentus plants based on fewer spectral bands, was tested. Results of this study were compared against simulated results from a commercially available multispectral camera with four spectral bands. The model created with customized bands performed almost equally well as the original PLS–DA or RLR model, and much better than the model describing multispectral image data from a commercially available camera. These results open up the opportunity to develop a dedicated robust tool for C. esculentus recognition based on four spectral bands and an appropriate classification model.
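Simulating a four-band multispectral sensor from full 500-800 nm reflectance spectra, as described above, comes down to averaging reflectance within each band window. A numpy sketch follows; the band edges and synthetic spectrum here are illustrative, not the customized bands identified in the study.

```python
import numpy as np

def band_means(spectrum, wavelengths, bands):
    """Collapse a full reflectance spectrum to a few bands by averaging
    reflectance within each (low, high) wavelength window."""
    return np.array([
        spectrum[(wavelengths >= lo) & (wavelengths < hi)].mean()
        for lo, hi in bands
    ])

wl = np.arange(500, 800, 10)           # 500-790 nm at 10 nm steps
spec = np.linspace(0.1, 0.4, wl.size)  # synthetic, smoothly rising reflectance
four = band_means(spec, wl, [(500, 575), (575, 650), (650, 725), (725, 800)])
```

The resulting four values per plant would then feed the PLS-DA or RLR classifier in place of the full spectrum.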


2020 ◽  
Vol 10 (24) ◽  
pp. 8833
Author(s):  
Álvaro Acción ◽  
Francisco Argüello ◽  
Dora B. Heras

Deep learning (DL) has been shown to obtain superior results for classification tasks in the field of remote sensing hyperspectral imaging. Superpixel-based techniques can be applied to DL, significantly decreasing training and prediction times, but the results are usually far from satisfactory due to overfitting. Data augmentation techniques alleviate the problem by synthetically generating new samples from an existing dataset in order to improve the generalization capabilities of the classification model. In this paper we propose a novel data augmentation framework in the context of superpixel-based DL called dual-window superpixel (DWS). With DWS, data augmentation is performed over patches centered on the superpixels obtained by the application of simple linear iterative clustering (SLIC) superpixel segmentation. DWS is based on dividing the input patches extracted from the superpixels into two regions and independently applying transformations over them. As a result, four different data augmentation techniques are proposed that can be applied to a superpixel-based CNN classification scheme. An extensive comparison in terms of classification accuracy with other data augmentation techniques from the literature using two datasets is also shown. One of the datasets consists of small hyperspectral scenes commonly found in the literature. The other consists of large multispectral vegetation scenes of river basins. The experimental results show that the proposed approach increases the overall classification accuracy for the selected datasets. In particular, two of the data augmentation techniques introduced, namely, dual-flip and dual-rotate, obtained the best results.
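The dual-window idea of transforming two regions of a patch independently can be sketched in numpy. This is only an illustration of the concept under assumed details (square patches, a centered inner window, horizontal flip outside and vertical flip inside), not the paper's exact dual-flip definition.

```python
import numpy as np

def dual_flip(patch, inner):
    """Flip the outer region of a superpixel-centred patch horizontally
    and the centered inner window vertically, transforming the two
    regions independently."""
    h, w = patch.shape
    i0, i1 = (h - inner) // 2, (h + inner) // 2
    out = patch[:, ::-1].copy()                       # horizontal flip everywhere
    out[i0:i1, i0:i1] = patch[i0:i1, i0:i1][::-1, :]  # overwrite inner window: vertical flip
    return out

p = np.arange(16).reshape(4, 4)   # toy 4x4 patch
q = dual_flip(p, 2)               # outer ring and 2x2 center flipped differently
```

Applying independent transforms to the two regions yields more distinct synthetic samples than a single whole-patch flip, which is the motivation behind the dual-window variants.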

