Classification of surface water objects in visible spectrum images

Author(s):  
A. A. Artemyev ◽  
E. A. Kazachkov ◽  
S. N. Matyugin ◽  
V. V. Sharonov

This paper considers the problem of classifying surface water objects, e.g. ships of different classes, in visible spectrum images using convolutional neural networks. A technique for building a database of images of surface water objects and a dedicated training dataset for classifier construction is presented. A method for constructing and training a convolutional neural network is described. The dependence of the probability of correct recognition on the number and selection of specific classes of surface water objects is analysed. The results of recognizing different sets of classes are presented.

2019 ◽  
Vol 8 (4) ◽  
pp. 160 ◽  
Author(s):  
Bingxin Liu ◽  
Ying Li ◽  
Guannan Li ◽  
Anling Liu

Spectral characteristics play an important role in the classification of oil films, but the presence of too many bands can lead to information redundancy and reduced classification accuracy. In this study, a classification model that combines spectral-index-based band selection (SIs) with one-dimensional convolutional neural networks was proposed to automatically classify oil films in hyperspectral remote sensing images. For comparison, minimum Redundancy Maximum Relevance (mRMR) was also tested for reducing the number of bands. A support vector machine (SVM), random forest (RF), and Hu’s convolutional neural network (CNN) were trained and tested. The results show that the one-dimensional convolutional neural network (1D CNN) models surpassed the accuracy of the other machine learning algorithms, such as SVM and RF. The SIs+1D CNN model could produce a more accurate oil film distribution map in less time than the other models.
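As an illustration of the core operation behind the 1D CNN models compared above, the following NumPy sketch applies a bank of one-dimensional kernels to a single spectrum with a ReLU activation. The kernel widths, values, and band count are illustrative only and are not taken from the paper.

```python
import numpy as np

def conv1d_valid(spectrum, kernels, bias):
    """'Valid' 1-D convolution of one spectrum with a bank of kernels,
    followed by ReLU -- the basic building block of a 1-D CNN on bands."""
    n_k, k_len = kernels.shape
    out_len = spectrum.size - k_len + 1
    out = np.empty((n_k, out_len))
    for i in range(n_k):
        for j in range(out_len):
            out[i, j] = spectrum[j:j + k_len] @ kernels[i] + bias[i]
    return np.maximum(out, 0.0)  # ReLU

# Toy example: 8 selected bands, 2 kernels of width 3 (illustrative values)
spectrum = np.array([0.1, 0.4, 0.35, 0.2, 0.5, 0.45, 0.3, 0.25])
kernels = np.array([[1.0, -1.0, 0.0], [0.5, 0.5, 0.5]])
bias = np.zeros(2)
features = conv1d_valid(spectrum, kernels, bias)  # shape (2, 6)
```

In a full model, several such layers would be stacked and followed by a dense softmax classifier over the oil film classes.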


Author(s):  
Luis Fernando de Mingo López ◽  
Clemencio Morales Lucas ◽  
Nuria Gómez Blas ◽  
Krassimira Ivanova

This paper presents a study and implementation of a convolutional neural network to identify and recognize humpback whale specimens from the unique patterns of their tails. Starting from a dataset of whale tail images, all phases of creating and training a neural network are detailed, from image analysis and pre-processing to generating predictions, using the TensorFlow and Keras frameworks. Alternative approaches to the problem are also discussed, along with the complications that arose in the course of this work.
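The pre-processing stage mentioned above can be sketched minimally: the block below downsamples an image by block-averaging and scales pixel values to [0, 1]. The block size and normalisation scheme are assumptions for illustration; the paper's actual TensorFlow/Keras pre-processing is not reproduced here.

```python
import numpy as np

def preprocess(img, block):
    """Downsample a grayscale image by block-averaging and scale pixel
    values to [0, 1] -- a minimal stand-in for a resize/normalise step."""
    h, w = img.shape
    h2, w2 = h // block * block, w // block * block   # crop to a multiple
    small = img[:h2, :w2].reshape(h2 // block, block,
                                  w2 // block, block).mean(axis=(1, 3))
    return small / 255.0

# Toy 4x4 all-white image reduced to 2x2
img = np.full((4, 4), 255.0)
tile = preprocess(img, 2)
```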


Author(s):  
Glen Williams ◽  
Nicholas A. Meisel ◽  
Timothy W. Simpson ◽  
Christopher McComb

Abstract The widespread growth of additive manufacturing (AM), a field with a complex informatic “digital thread”, has helped fuel the creation of design repositories, where multiple users can upload, distribute, and download candidate designs for a variety of situations. Additionally, advancements in additive manufacturing process development, design frameworks, and simulation are expanding what can be fabricated with AM, further growing the richness of such repositories. Machine learning offers new opportunities to combine these design repositories’ rich geometric data with the associated process and performance data to train predictive models capable of automatically assessing build metrics related to AM part manufacturability. Although design repositories that can be used to train these machine learning constructs are expanding, our understanding of what makes a particular design repository useful as a machine learning training dataset is minimal. In this study we use a metamodel to predict the extent to which individual design repositories can train accurate convolutional neural networks. To facilitate the creation and refinement of this metamodel, we constructed a large artificial design repository and subsequently split it into sub-repositories. We then analyzed metadata on the size, complexity, and diversity of the sub-repositories as independent variables predicting the accuracy, and the required training computational effort, of convolutional neural networks. The networks each predict one of three additive manufacturing build metrics: (1) part mass, (2) support material mass, and (3) build time. Our results suggest that metamodels predicting the convolutional neural network coefficient of determination, as opposed to computational effort, were most accurate. Moreover, the size of a design repository, the average complexity of its constituent designs, and the average and spread of design spatial diversity were the best predictors of convolutional neural network accuracy.
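The repository metadata used as predictors can be sketched as follows. The complexity and spatial-diversity measures below are simple stand-ins (a per-design complexity score, and pairwise Euclidean distances between design descriptor vectors), not the study's exact definitions.

```python
import numpy as np

def repository_metadata(complexities, descriptors):
    """Summarise a (sub-)repository by the kinds of predictors the study
    found useful: size, mean design complexity, and the mean and spread
    of a spatial-diversity proxy (pairwise descriptor distances)."""
    descriptors = np.asarray(descriptors, dtype=float)
    diffs = descriptors[:, None, :] - descriptors[None, :, :]
    dists = np.sqrt((diffs ** 2).sum(-1))                 # pairwise distances
    upper = dists[np.triu_indices(len(descriptors), k=1)] # each pair once
    return {
        "size": len(complexities),
        "mean_complexity": float(np.mean(complexities)),
        "diversity_mean": float(upper.mean()),
        "diversity_spread": float(upper.std()),
    }

# Toy repository of three designs with 2-D descriptors
meta = repository_metadata([3.1, 2.8, 4.0],
                           [[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])
```

A metamodel would then regress CNN accuracy (e.g. the coefficient of determination) on such features across many sub-repositories.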


2019 ◽  
Vol 24 (3-4) ◽  
pp. 107-113
Author(s):  
Kondratiuk S.S. ◽  

A technology, implemented with cross-platform tools, is proposed for modelling the gesture units of sign language and animating the transitions between gesture-unit states so that gestures can be combined into words. The implemented technology simulates sequences of gestures using a virtual spatial hand model and recognizes dactyl (fingerspelling) items from camera input using a convolutional neural network trained on a collected training dataset. Thanks to the cross-platform tools, the technology can run on multiple platforms without being re-implemented for each one.
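Animating between two gesture-unit states of the virtual hand model can be sketched as keyframe interpolation. Assuming the hand pose is parameterized by a vector of joint angles (an assumption; the paper's hand-model parameterization is not given), the in-between frames are:

```python
import numpy as np

def interpolate_gesture(angles_a, angles_b, steps):
    """Linearly interpolate the joint angles of a virtual hand model
    between two gesture-unit states, yielding the animation frames.
    One row per frame; the joint parameterization is illustrative."""
    t = np.linspace(0.0, 1.0, steps)[:, None]
    return (1.0 - t) * angles_a + t * angles_b

# Toy 5-joint hand moving from a flat pose to 90-degree flexion in 3 frames
frames = interpolate_gesture(np.zeros(5), np.full(5, 90.0), 3)
```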


2019 ◽  
Vol 24 (1-2) ◽  
pp. 94-100
Author(s):  
Kondratiuk S.S. ◽  

A technology, implemented with cross-platform tools, is proposed for modelling the gesture units of sign language and animating the transitions between gesture-unit states so that gestures can be combined into words. The implemented technology simulates sequences of gestures using a virtual spatial hand model and recognizes dactyl (fingerspelling) items from camera input using a convolutional neural network based on the MobileNetV3 architecture, with an optimized configuration of layers and network parameters, trained on a collected training dataset. On the collected test dataset, an accuracy of over 98% is achieved.


2021 ◽  
Author(s):  
Blessy Babu ◽  
Hari V Sreeniva

Abstract This paper summarizes the intelligent detection of the modulation scheme of an incoming signal, built on a convolutional neural network (CNN). It describes the creation of the training dataset, the realization of the CNN, and testing and validation. The raw modulated signals are converted into 2D representations and fed to the network for training. The resulting prototype is adopted for detection. The results show that the proposed approach yields good predictions when identifying modulated signals, without the need for any selective feature extraction. The system's performance under noise is also evaluated and modelled.
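The conversion of raw modulated samples into a 2D network input can be sketched as below. Stacking the real and imaginary parts of row-reshaped I/Q samples is one common layout; it is assumed here because the paper's exact mapping is not stated.

```python
import numpy as np

def signal_to_image(iq_samples, rows):
    """Reshape a 1-D complex baseband signal into a 2-channel 2-D array
    (real and imaginary parts), suitable as input to a 2-D CNN."""
    n = (iq_samples.size // rows) * rows      # drop any trailing samples
    grid = iq_samples[:n].reshape(rows, -1)
    return np.stack([grid.real, grid.imag])   # shape: (2, rows, cols)

# Toy QPSK-like burst: 64 random unit-magnitude symbols
rng = np.random.default_rng(0)
symbols = np.exp(1j * (np.pi / 4 + np.pi / 2 * rng.integers(0, 4, 64)))
image = signal_to_image(symbols, rows=8)      # a (2, 8, 8) "image"
```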


The Analyst ◽  
2017 ◽  
Vol 142 (21) ◽  
pp. 4067-4074 ◽  
Author(s):  
Jinchao Liu ◽  
Margarita Osadchy ◽  
Lorna Ashton ◽  
Michael Foster ◽  
Christopher J. Solomon ◽  
...  

Classification of unprocessed Raman spectra using a convolutional neural network.


2020 ◽  
Author(s):  
Leandro Silva ◽  
Jocival D. Júnior ◽  
Jean Santos ◽  
João Fernando Mari ◽  
Maurício Escarpinati ◽  
...  

Currently, the use of unmanned aerial vehicles (UAVs) for acquiring images in precision agriculture is becoming ever more common, whether to identify characteristics of interest or to make estimates over plantations. Despite this growth, however, processing these images usually requires specialized techniques and software. During flight, UAVs are subject to disturbances, such as wind interference and small altitude variations, which directly influence the captured images. To address this problem, we propose a Convolutional Neural Network (CNN) architecture for classifying three linear distortions common in UAV flight: rotation, translation, and perspective transformations. To train and test our CNN, we used two mosaics that were divided into smaller individual images and then artificially distorted. The results demonstrate the potential of CNNs for identifying distortions introduced into images during UAV flight, making this a promising area of exploration.
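All three distortion classes can be expressed as 3x3 homographies, which is also how such distortions can be applied artificially to training tiles. The parameter values below are illustrative, not the paper's.

```python
import numpy as np

def rotation(theta):
    """3x3 homogeneous rotation about the image origin."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])

def translation(tx, ty):
    """3x3 homogeneous translation."""
    return np.array([[1.0, 0.0, tx], [0.0, 1.0, ty], [0.0, 0.0, 1.0]])

def perspective(px, py):
    """Simple perspective distortion controlled by two parameters."""
    return np.array([[1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [px, py, 1.0]])

def apply_homography(H, points):
    """Apply a 3x3 homography to an (N, 2) array of pixel coordinates."""
    pts = np.hstack([points, np.ones((len(points), 1))])
    mapped = pts @ H.T
    return mapped[:, :2] / mapped[:, 2:3]

# Distort two tile corners by a 90-degree rotation followed by translation
H = translation(5.0, -3.0) @ rotation(np.pi / 2)
out = apply_homography(H, np.array([[0.0, 0.0], [10.0, 0.0]]))
```

An image-warping routine would then resample each tile under such a matrix to generate labelled training examples for the three classes.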


Author(s):  
R. Niessner ◽  
H. Schilling ◽  
B. Jutzi

In recent years, there has been significant improvement in the detection, identification, and classification of objects in images using Convolutional Neural Networks. To study the potential of Convolutional Neural Networks, this paper investigates three approaches to training classifiers based on them. These approaches allow Convolutional Neural Networks to be trained on datasets containing only a few hundred training samples and still achieve successful classification. Two of the approaches are based on the concept of transfer learning: in the first, features created by a pretrained Convolutional Neural Network are classified using a support vector machine; in the second, a pretrained Convolutional Neural Network is fine-tuned on a different dataset. The third approach covers the design and training of flat Convolutional Neural Networks from scratch. The evaluation of the proposed approaches is based on a dataset provided by the IEEE Geoscience and Remote Sensing Society (GRSS) which contains RGB and LiDAR data of an urban area. This work shows that these Convolutional Neural Networks achieve classification results with high accuracy on both RGB and LiDAR data. Features derived from RGB data and transferred to LiDAR data by transfer learning lead to better classification results than RGB data alone. Using a neural network that contains fewer layers than common neural networks leads to the best classification results. Furthermore, it is shown that in practice LiDAR images provide a better data basis for the classification of vehicles than RGB images.
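The first transfer-learning approach (frozen pretrained features fed to a separate classifier) can be sketched as below. Two simplifications are assumptions: a fixed random projection with ReLU stands in for the pretrained Convolutional Neural Network, and a nearest-centroid rule stands in for the support vector machine.

```python
import numpy as np

def extract_features(images, W):
    """Stand-in for a frozen pretrained CNN: fixed projection + ReLU.
    In the paper's first approach, real pretrained CNN features are used."""
    flat = images.reshape(len(images), -1)
    return np.maximum(flat @ W, 0.0)

def nearest_centroid_fit(features, labels):
    """Simple linear stand-in for the SVM classifier of the paper."""
    classes = np.unique(labels)
    return classes, np.stack([features[labels == c].mean(0) for c in classes])

def nearest_centroid_predict(features, classes, centroids):
    d = ((features[:, None, :] - centroids[None, :, :]) ** 2).sum(-1)
    return classes[np.argmin(d, axis=1)]

rng = np.random.default_rng(1)
W = rng.normal(size=(16, 8))                    # frozen "network" weights
train = np.vstack([rng.normal(0.0, 1.0, (20, 16)),   # class 0 samples
                   rng.normal(5.0, 1.0, (20, 16))])  # class 1 samples
y = np.array([0] * 20 + [1] * 20)
feats = extract_features(train, W)
classes, centroids = nearest_centroid_fit(feats, y)
pred = nearest_centroid_predict(feats, classes, centroids)
```

The point of the approach is that only the small downstream classifier is trained, which is why a few hundred labelled samples can suffice.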

