Utilizing the Road Mark Training Set from Ground-Based Mapping System to Airborne Imagery in Deep Learning Framework

2019 ◽  
Vol 1 ◽  
pp. 1-1
Author(s):  
Tee-Ann Teo

Abstract. Deep learning is a branch of machine learning that uses deep neural networks to learn a promising model from a large training data set. Convolutional neural networks (CNNs) have been successfully applied to image segmentation and classification with highly accurate results. A CNN applies multiple kernels (also called filters) to extract image features via convolution and determines multiscale features through multiple layers of convolution and pooling. The variety of the training data plays an important role in obtaining a reliable CNN model. Benchmark training data for road mark extraction, such as the KITTI Vision Benchmark Suite, focus mainly on close-range imagery because close-range images are easier to obtain than airborne images. This study aims to transfer road mark training data from a mobile lidar system to aerial orthoimagery in a Fully Convolutional Network (FCN). Transferring the training data from a ground-based system to an airborne system may reduce the effort of producing a large training data set.

This study uses FCN technology and aerial orthoimagery to localize road marks within road regions. The road regions are first extracted from a 2-D large-scale vector map. The input aerial orthoimage has a spatial resolution of 10 cm, and non-road regions are masked out before road mark localization. The training data are road mark polygons originally digitized from ground-based mobile lidar and prepared for road mark extraction with a mobile mapping system. This study reuses these training data and applies them to road mark extraction from aerial orthoimagery. The digitized road marks are transformed to road polygons based on mapping coordinates. Because the level of detail of ground-based lidar is much better than that of the airborne system, parking lots partially occluded in the aerial orthoimage can also be obtained from the ground-based system. The labels (also called annotations) for the FCN comprise road regions, non-road regions, and road marks. Each training batch is 500 pixels by 500 pixels (50 m by 50 m on the ground), and 75 batches are used for training. After the FCN training stage, an independent aerial orthoimage (Figure 1a) is used to predict the road marks. The FCN results provide initial road mark regions (Figure 1b). Road marks usually show higher reflectance than road asphalt, so this study uses this characteristic to refine the road marks (Figure 1c) by binary classification inside each initial road mark region.

Comparing the automatically extracted road marks (Figure 1c) with manually digitized road marks (Figure 1d) shows that most road marks can be extracted using the training set from the ground-based system. An area of 600 m × 200 m was also selected for quantitative analysis. Of the 371 reference road marks, 332 were extracted by the proposed scheme, for a completeness of 89%. The preliminary experiment demonstrated that most road marks can be successfully extracted by the proposed scheme; therefore, training data from a ground-based mapping system can be utilized with airborne orthoimagery of similar spatial resolution.
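
As a concrete illustration of the refinement step, the sketch below keeps only the brighter pixels inside each FCN-predicted region, exploiting the higher reflectance of road paint over asphalt. This is a minimal reconstruction under assumed inputs; the function name, the Otsu-based split, and the grayscale input are illustrative choices, not the authors' exact procedure.

```python
# Minimal sketch: binary refinement inside initial FCN road-mark regions.
import numpy as np
from skimage.filters import threshold_otsu

def refine_road_marks(gray_ortho: np.ndarray, fcn_mask: np.ndarray) -> np.ndarray:
    """gray_ortho: grayscale orthoimage; fcn_mask: boolean initial regions."""
    refined = np.zeros_like(fcn_mask, dtype=bool)
    candidates = gray_ortho[fcn_mask]
    if candidates.size == 0:
        return refined
    t = threshold_otsu(candidates)                  # binary split inside the region
    refined[fcn_mask] = gray_ortho[fcn_mask] > t    # bright pixels kept as paint
    return refined
```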

2020 ◽  
Vol 2020 ◽  
pp. 1-13 ◽  
Author(s):  
Suxia Cui ◽  
Yu Zhou ◽  
Yonghui Wang ◽  
Lujun Zhai

Recently, human curiosity has expanded from the land to the sky and the sea. Besides sending people to explore the ocean and outer space, robots are designed for tasks too dangerous for living creatures. Take ocean exploration as an example: many projects and competitions on the design of Autonomous Underwater Vehicles (AUVs) have attracted wide interest. The authors of this article learned the necessity of a platform upgrade from a previous AUV design project and would like to share the experience of one task extension in the area of fish detection. Most embedded systems have been improved by fast-growing computing and sensing technologies, which makes it possible for them to incorporate increasingly complicated algorithms. In an AUV, after acquiring surrounding information from sensors, one of the challenges is how to perceive and analyse that information for better judgement. The processing procedure can mimic human learning routines, and an advanced system with more computing power can support deep learning, which exploits neural network algorithms to simulate human brains. In this paper, a convolutional neural network (CNN) based fish detection method is proposed. The training data set was collected from the Gulf of Mexico with a digital camera. To fit this unique need, three optimization approaches were applied to the CNN: data augmentation, network simplification, and training process speed-up. Data augmentation transformations provided more learning samples; the network architecture was simplified; and the training speed-up makes the training process more time-efficient. Experimental results show that the proposed model is promising and has the potential to be extended to other underwater objects.
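
The data augmentation step lends itself to a short sketch. The snippet below shows one plausible on-the-fly augmentation pipeline using tf.keras preprocessing layers; the specific transforms and their parameters are assumptions, not the authors' exact choices.

```python
# Hedged sketch of on-the-fly data augmentation for the fish-detection CNN.
import tensorflow as tf

augment = tf.keras.Sequential([
    tf.keras.layers.RandomFlip("horizontal"),   # a mirrored fish is still a fish
    tf.keras.layers.RandomRotation(0.1),        # slight camera roll underwater
    tf.keras.layers.RandomZoom(0.2),            # variation in subject distance
])

def augmented_batches(dataset: tf.data.Dataset) -> tf.data.Dataset:
    """Apply augmentation on the fly so each epoch sees fresh variants."""
    return dataset.map(lambda x, y: (augment(x, training=True), y),
                       num_parallel_calls=tf.data.AUTOTUNE)
```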


2021 ◽  
Author(s):  
Yuqi Wang ◽  
Tianyuan Liu ◽  
Di Zhang

Abstract. Research on the supercritical carbon dioxide (S-CO2) Brayton cycle has gradually become a hot spot in recent years. The off-design performance of the turbine is an important reference for analyzing the cycle under variable operating conditions. With the development of deep learning technology, surrogate models based on neural networks have received extensive attention. To improve on the inefficiency of traditional off-design analyses, this research establishes a data-driven deep learning off-design aerodynamic prediction model for an S-CO2 centrifugal turbine, based on a deep convolutional neural network. The network rapidly and adaptively provides aerodynamic performance predictions for varying blade profiles and operating conditions, and it can illustrate the underlying mechanism through field reconstruction results for the predicted aerodynamic performance. The training results show that the off-design aerodynamic prediction convolutional neural network (OAP-CNN) reduces the mean and maximum efficiency prediction errors compared with traditional Gaussian Process Regression (GPR) and an Artificial Neural Network (ANN). For off-design conditions, pressure and temperature distributions with acceptable error can be obtained without a CFD calculation. Besides, the influence of off-design parameters on efficiency and power can be conveniently acquired, providing a reference for an optimized operation strategy. Analyzing the sensitivity of the OAP-CNN to training data set size shows that the prediction accuracy is acceptable when the proportion of training samples exceeds 50%; the minimum error appears at a training fraction of 0.8, with mean and maximum errors of 1.46% and 6.42%, respectively. In summary, this research provides a precise and fast aerodynamic performance prediction model for off-design analyses of S-CO2 turbomachinery and the Brayton cycle.
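
A surrogate of this kind can be outlined in a few lines. The sketch below maps a gridded blade-profile/operating-condition input to a scalar efficiency with a small convolutional network; the input encoding, layer sizes, and loss are illustrative assumptions, and the paper's OAP-CNN additionally reconstructs full pressure and temperature fields.

```python
# Minimal sketch of a CNN surrogate in the spirit of the OAP-CNN.
import tensorflow as tf

def build_surrogate(input_shape=(64, 64, 2)) -> tf.keras.Model:
    # Two channels assumed: one encoding geometry, one the operating condition.
    inp = tf.keras.Input(shape=input_shape)
    x = tf.keras.layers.Conv2D(16, 3, activation="relu", padding="same")(inp)
    x = tf.keras.layers.MaxPooling2D()(x)
    x = tf.keras.layers.Conv2D(32, 3, activation="relu", padding="same")(x)
    x = tf.keras.layers.GlobalAveragePooling2D()(x)
    out = tf.keras.layers.Dense(1, name="efficiency")(x)  # scalar performance target
    model = tf.keras.Model(inp, out)
    model.compile(optimizer="adam", loss="mse")  # regression on CFD-labelled samples
    return model
```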


Mathematics ◽  
2020 ◽  
Vol 8 (9) ◽  
pp. 1606
Author(s):  
Daniela Onita ◽  
Adriana Birlutiu ◽  
Liviu P. Dinu

Images and text are types of content used together to convey a message. Mapping images to text can provide very useful information and can be included in many applications, from the medical domain to applications for blind people, social networking, etc. In this paper, we investigate an approach for mapping images to text using a Kernel Ridge Regression model. We considered two types of features: simple RGB pixel-value features and image features extracted with deep learning approaches. We investigated several neural network architectures for image feature extraction: VGG16, Inception V3, ResNet50, and Xception. The experimental evaluation was performed on three data sets from different domains. The texts associated with the images are objective descriptions for two of the three data sets and subjective descriptions for the other. The experimental results show that the more complex deep learning features perform better than simple RGB pixel-value features. Moreover, the ResNet50 architecture performs best among the four deep architectures considered: the model error obtained with ResNet50 features is approximately 0.30 lower than with the other architectures. We extracted natural-language descriptors of images and compared the original and generated descriptive words. Furthermore, we investigated whether there is a difference in performance between the types of text associated with the images: subjective or objective. The proposed model generated descriptions more similar to the originals for the data set containing objective descriptions, whose vocabulary is simpler, larger, and clearer.
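
The core pipeline, pretrained-CNN features feeding a Kernel Ridge Regression, can be sketched as follows. The kernel choice, regularization strength, and the vectorized text target representation are assumptions for illustration.

```python
# Hedged sketch: ResNet50 features -> Kernel Ridge Regression -> text vectors.
import numpy as np
import tensorflow as tf
from sklearn.kernel_ridge import KernelRidge

backbone = tf.keras.applications.ResNet50(include_top=False, pooling="avg")

def image_features(images: np.ndarray) -> np.ndarray:
    """images: (n, 224, 224, 3) RGB batch -> (n, 2048) feature matrix."""
    x = tf.keras.applications.resnet50.preprocess_input(images.astype("float32"))
    return backbone.predict(x, verbose=0)

# Fit KRR from image features to vectorized text descriptions
# (e.g., bag-of-words vectors); X/Y names below are placeholders.
krr = KernelRidge(kernel="rbf", alpha=1.0)
# krr.fit(image_features(train_images), Y_train)
# Y_pred = krr.predict(image_features(test_images))
```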


2019 ◽  
Vol 8 (2) ◽  
pp. 5073-5081

Prediction of student performance is a significant part of processing educational data, and machine learning algorithms play a leading role in this process. Deep learning is one of the important branches of machine learning. In this paper, we applied deep learning to predict the academic excellence of students using R. The Keras and TensorFlow libraries were utilized to build the model as a neural network on a Kaggle dataset. The data were separated into training and testing sets. The neural network model was plotted with the neuralnet method, and the deep learning model was created with two hidden layers using the ReLU activation function and one output layer using the softmax activation function. After fine-tuning until the results stabilized, this model produced an accuracy of 85%.
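
The described network (two ReLU hidden layers plus a softmax output) is easy to sketch. The paper uses the R interface to Keras/TensorFlow; an equivalent Python Keras version is shown below, with the layer widths and class count as assumptions.

```python
# Hedged sketch of the two-hidden-layer classifier described in the abstract.
import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.layers.Dense(64, activation="relu"),    # hidden layer 1 (width assumed)
    tf.keras.layers.Dense(32, activation="relu"),    # hidden layer 2 (width assumed)
    tf.keras.layers.Dense(3, activation="softmax"),  # one unit per performance class (count assumed)
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
# model.fit(X_train, y_train, validation_data=(X_test, y_test), epochs=50)
```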


2018 ◽  
Vol 7 (4.11) ◽  
pp. 198 ◽  
Author(s):  
Mohamad Hazim Johari ◽  
Hasliza Abu Hassan ◽  
Ahmad Ihsan Mohd Yassin ◽  
Nooritawati Md Tahir ◽  
Azlee Zabidi ◽  
...  

This project presents a method to detect diabetic retinopathy in fundus images using a deep learning neural network. The AlexNet Convolutional Neural Network (CNN) was used to ease the learning process. The data set was retrieved from the MESSIDOR database and contains 1200 fundus images. The images were filtered according to the project's needs: after filtering, 580 .tif images were used, divided into two classes, exudate images and normal images. For training and testing, the 580 mixed exudate and normal fundus images were divided into a training set and a testing set, and the results were summarized in a confusion matrix. The results show that the accuracy of the CNN was 99.3% on the training set and 88.3% on the testing set.
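
Reusing AlexNet for a two-class fundus task typically means swapping its final classifier layer, as in the hedged torchvision sketch below; the pretrained weights, optimizer, and hyperparameters are standard choices, not necessarily the authors' setup.

```python
# Minimal sketch: AlexNet adapted to exudates-vs-normal classification.
import torch
import torch.nn as nn
from torchvision import models

model = models.alexnet(weights=models.AlexNet_Weights.IMAGENET1K_V1)
model.classifier[6] = nn.Linear(4096, 2)   # replace 1000-way head with 2 classes

criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.SGD(model.parameters(), lr=1e-3, momentum=0.9)
# Standard loop: forward fundus batches, compute loss, backprop, step.
```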


2017 ◽  
Vol 14 (4) ◽  
pp. 172988141770357 ◽  
Author(s):  
Lei Tai ◽  
Shaohua Li ◽  
Ming Liu

The exploration problem for mobile robots concerns enabling a robot to explore an unknown environment. We describe an indoor exploration algorithm for mobile robots that uses a hierarchical structure fusing several convolutional neural network layers with a decision-making process. The whole system is trained end to end, taking only visual information (RGB-D data) as input and generating a sequence of main moving directions as output, so that the robot achieves autonomous exploration ability. The robot is a TurtleBot with a Kinect mounted on it. The model is trained and tested in a real-world environment, and the training data set is provided for download. The outputs on the test data are compared with human decisions. We use a Gaussian process latent variable model to visualize the feature map of the last convolutional layer, which demonstrates the effectiveness of this deep convolutional neural network model. We also present libcnn, a novel and lightweight deep learning library designed especially for deep learning in robotics tasks.
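
The perception-to-action mapping can be sketched as a small CNN that consumes a 4-channel RGB-D frame and emits one of a few discrete moving directions. The architecture below and its five-way action space are illustrative assumptions, not the paper's exact network.

```python
# Hedged sketch: RGB-D frame in, discrete moving direction out.
import tensorflow as tf

NUM_DIRECTIONS = 5   # e.g. hard left, left, straight, right, hard right (assumed)

policy = tf.keras.Sequential([
    tf.keras.layers.Conv2D(32, 5, strides=2, activation="relu",
                           input_shape=(120, 160, 4)),   # RGB + depth channels
    tf.keras.layers.Conv2D(64, 3, strides=2, activation="relu"),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(128, activation="relu"),       # decision-making layers
    tf.keras.layers.Dense(NUM_DIRECTIONS, activation="softmax"),
])
policy.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
# Trained end to end against human-labelled directions, as the paper describes.
```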


2020 ◽  
Vol 8 ◽  
Author(s):  
Huiying Ren ◽  
Z. Jason Hou ◽  
Bharat Vyakaranam ◽  
Heng Wang ◽  
Pavel Etingov

Detection and timely identification of power system disturbances are essential for situational awareness and reliable electricity grid operation. Because records of actual events in the system are limited, ensemble simulation-based events are needed to provide adequate data for building event-detection models through deep learning, e.g., a convolutional neural network (CNN). An ensemble numerical simulation-based training data set has been generated through dynamic simulations performed on the Polish system with various types of faults in different locations. Such data augmentation is proven to provide adequate data for deep learning. The synchronous generators' frequency signals are encoded into images for developing and evaluating CNN models that classify fault types and locations. With a time-domain stacked image set as the benchmark, two further time-series encoding approaches, wavelet decomposition-based frequency-domain stacking and polar coordinate-based Gramian Angular Field (GAF) stacking, are adopted to evaluate and compare CNN model performance and applicability. The various encoding approaches suit different fault types and spatial zonations. With optimized settings of the developed CNN models, the classification and localization accuracies can exceed 84% and 91%, respectively.
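
The Gramian Angular Field encoding mentioned above follows a standard definition that is short enough to show directly: rescale the series to [-1, 1], map values to polar angles, and form the pairwise angular-sum matrix. The GASF variant is shown; the function name is illustrative.

```python
# Standard Gramian Angular Summation Field (GASF) encoding in NumPy.
import numpy as np

def gramian_angular_field(series: np.ndarray) -> np.ndarray:
    """Encode a 1-D signal as a GASF image: cos(phi_i + phi_j)."""
    lo, hi = series.min(), series.max()
    x = 2 * (series - lo) / (hi - lo) - 1        # rescale to [-1, 1]
    x = np.clip(x, -1.0, 1.0)                    # guard against rounding
    phi = np.arccos(x)                           # polar-coordinate angle
    return np.cos(phi[:, None] + phi[None, :])   # pairwise angular sums

# Example: one generator frequency trace becomes one (T, T) image channel;
# stacking traces from several generators yields the CNN input tensor.
```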


2021 ◽  
pp. 1-11
Author(s):  
Yaning Liu ◽  
Lin Han ◽  
Hexiang Wang ◽  
Bo Yin

Papillary thyroid carcinoma (PTC) is a common carcinoma of the thyroid. Many benign thyroid nodules have a papillary structure that can easily be confused with PTC morphologically, so pathologists must spend considerable time on the differential diagnosis of PTC; relying on personal diagnostic experience is inevitably subjective, and consistency among observers is difficult to achieve. To address this issue, we applied deep learning to the differential diagnosis of PTC and propose a histological image classification method based on an Inception Residual convolutional neural network (IRCNN) and a support vector machine (SVM). First, to expand the dataset and solve the problem of color inconsistency among histological images, a pre-processing module was constructed that includes color transfer and mirror transforms. Then, to alleviate overfitting of the deep learning model, we optimized the convolutional neural network by combining an Inception network and a residual network to extract image features. Finally, the SVM was trained on the image features extracted by the IRCNN to perform the classification task. Experimental results show the effectiveness of the proposed method for the classification of PTC histological images.
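
The feature-extraction-plus-SVM stage can be sketched with off-the-shelf parts. Below, InceptionResNetV2 stands in for the authors' IRCNN as a readily available Inception-Residual backbone; the SVM kernel and preprocessing are assumptions.

```python
# Hedged sketch: Inception-Residual features feeding an SVM classifier.
import numpy as np
import tensorflow as tf
from sklearn.svm import SVC

backbone = tf.keras.applications.InceptionResNetV2(include_top=False,
                                                   pooling="avg")

def features(batch: np.ndarray) -> np.ndarray:
    x = tf.keras.applications.inception_resnet_v2.preprocess_input(
        batch.astype("float32"))
    return backbone.predict(x, verbose=0)

svm = SVC(kernel="rbf")
# svm.fit(features(train_patches), train_labels)   # PTC vs. benign
# preds = svm.predict(features(test_patches))
```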


2021 ◽  
Vol 13 (3) ◽  
pp. 364
Author(s):  
Han Gao ◽  
Jinhui Guo ◽  
Peng Guo ◽  
Xiuwan Chen

Recently, deep learning has become the most innovative trend for a variety of high-spatial-resolution remote sensing imaging applications. However, large-scale land cover classification via traditional convolutional neural networks (CNNs) with sliding windows is computationally expensive and produces coarse results. Additionally, although such supervised learning approaches have performed well, collecting and annotating datasets for every task is extremely laborious, especially in fully supervised cases where the pixel-level ground-truth labels are dense. In this work, we propose a new object-oriented deep learning framework that leverages residual networks with different depths to learn adjacent feature representations by embedding a multibranch architecture in the deep learning pipeline. The idea is to exploit limited training data at different neighboring scales to make a tradeoff between weak semantics and strong feature representations for operational land cover mapping tasks. We draw on established geographic object-based image analysis (GEOBIA) as an auxiliary module to reduce the computational burden of spatial reasoning and optimize the classification boundaries. We evaluated the proposed approach on two subdecimeter-resolution datasets covering both urban and rural landscapes. It achieved better classification accuracy (88.9%) than traditional object-based deep learning methods and an excellent inference time (11.3 s/ha).
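
The multibranch idea, two branches seeing the same object at different neighborhood scales with their features fused before classification, can be sketched as follows; the branch depths, patch sizes, and class count are illustrative, and plain convolutional stacks stand in for the residual backbones.

```python
# Minimal sketch of multiscale branch fusion for object-oriented classification.
import tensorflow as tf

def branch():
    # Small convolutional stack standing in for a residual backbone.
    return tf.keras.Sequential([
        tf.keras.layers.Conv2D(32, 3, activation="relu", padding="same"),
        tf.keras.layers.MaxPooling2D(),
        tf.keras.layers.Conv2D(64, 3, activation="relu", padding="same"),
        tf.keras.layers.GlobalAveragePooling2D(),
    ])

tight = tf.keras.Input(shape=(32, 32, 3))   # close crop around one object
wide = tf.keras.Input(shape=(96, 96, 3))    # wider neighborhood context
fused = tf.keras.layers.Concatenate()([branch()(tight), branch()(wide)])
out = tf.keras.layers.Dense(6, activation="softmax")(fused)  # land-cover classes (count assumed)
model = tf.keras.Model([tight, wide], out)
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
```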


2021 ◽  
Vol 11 (15) ◽  
pp. 7148
Author(s):  
Bedada Endale ◽  
Abera Tullu ◽  
Hayoung Shi ◽  
Beom-Soo Kang

Unmanned aerial vehicles (UAVs) are widely utilized for various missions in both the civilian and military sectors. Many of these missions require UAVs to perceive the environments they navigate, and this perception can be realized by training a computing machine to classify objects in the environment. One well-known machine training approach is supervised deep learning, which enables a machine to classify objects but comes at a large cost in time and computational resources: collecting big input data, pre-training processes such as labeling training data, and the need for a high-performance computer for training are some of the challenges that supervised deep learning poses. To address these setbacks, this study proposes mission-specific input data augmentation techniques and the design of a lightweight deep neural network architecture capable of real-time object classification. Semi-direct visual odometry (SVO) data of augmented images are used to train the network for object classification. Ten classes of 10,000 different images per class were used as input data, of which 80% were used for training the network and the remaining 20% for validation. For the optimization of the designed deep neural network, a sequential gradient descent algorithm was implemented; this algorithm has the advantage of handling redundancy in the data more efficiently than other algorithms.
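
A lightweight classifier trained with sequential (mini-batch) gradient descent, in line with the ten-class setup described above, might look like the sketch below; the input resolution and layer sizes are assumptions, and the 80/20 split mirrors the data division in the abstract.

```python
# Hedged sketch: lightweight CNN with SGD for ten-class object classification.
import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.layers.Conv2D(16, 3, activation="relu", input_shape=(64, 64, 3)),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Conv2D(32, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(10, activation="softmax"),   # ten object classes
])
model.compile(optimizer=tf.keras.optimizers.SGD(learning_rate=0.01),
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
# model.fit(x_train, y_train, validation_split=0.2, epochs=30)
# The 80/20 train/validation split mirrors the data division in the abstract.
```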

