Semantic Segmentation of Aerial Images With Shuffling Convolutional Neural Networks

2018 ◽  
Vol 15 (2) ◽  
pp. 173-177 ◽  
Author(s):  
Kaiqiang Chen ◽  
Kun Fu ◽  
Menglong Yan ◽  
Xin Gao ◽  
Xian Sun ◽  
...  
Mathematics ◽  
2020 ◽  
Vol 8 (9) ◽  
pp. 1456
Author(s):  
Gabriel Martinez-Soltero ◽  
Alma Y. Alanis ◽  
Nancy Arana-Daniel ◽  
Carlos Lopez-Franco

Mobile robots commonly have to traverse rough terrain. One way to find the most easily traversable path is to determine the types of terrain in the environment; the result of this process can then be used by path-planning algorithms to find the best traversable path. In this work, we present an approach for terrain classification from aerial images using a Convolutional Neural Network at the pixel level. The segmented images can be used in robot mapping and navigation tasks. The performance of two different Convolutional Neural Networks is analyzed in order to choose the best architecture.
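
The abstract above describes pixel-level terrain classification with a CNN but does not specify the two architectures that were compared. The sketch below is a minimal PyTorch illustration of the general idea, a small fully convolutional network that predicts a terrain class for every pixel; the class count, layer sizes, and framework are assumptions for illustration, not the authors' networks.

```python
# Minimal sketch (not the authors' architectures): a small fully convolutional
# network that assigns a terrain class to every pixel of an aerial image.
import torch
import torch.nn as nn

class TinyTerrainSegNet(nn.Module):
    def __init__(self, num_classes=4):  # e.g. grass, road, water, rock (illustrative)
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(inplace=True),
            nn.MaxPool2d(2),                           # 1/2 resolution
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(inplace=True),
            nn.MaxPool2d(2),                           # 1/4 resolution
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(64, 32, 2, stride=2), nn.ReLU(inplace=True),
            nn.ConvTranspose2d(32, num_classes, 2, stride=2),  # back to full resolution
        )

    def forward(self, x):
        return self.decoder(self.encoder(x))  # per-pixel class scores

# Example: one 256x256 RGB aerial tile -> a per-pixel terrain label map
model = TinyTerrainSegNet()
image = torch.rand(1, 3, 256, 256)
labels = model(image).argmax(dim=1)  # shape (1, 256, 256)
```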


Author(s):  
S. Azimi ◽  
E. Vig ◽  
F. Kurz ◽  
P. Reinartz

Abstract. High-resolution aerial imagery can provide detailed and in some cases even real-time information about traffic-related objects. Vehicle localization and counting using aerial imagery play an important role in a broad range of applications. Recently, convolutional neural networks (CNNs) with atrous convolution layers have shown better performance for semantic segmentation compared to conventional convolutional approaches. In this work, we propose a joint vehicle segmentation and counting method based on atrous convolutional layers. This method uses a multi-task loss function to simultaneously reduce pixel-wise segmentation and vehicle counting errors. In addition, the rectangular shapes of vehicle segmentations are refined using morphological operations. In order to evaluate the proposed methodology, we apply it to the public “DLR 3K” benchmark dataset, which contains aerial images with a ground sampling distance of 13 cm. Results show that our proposed method reaches 81.58% mean intersection over union in vehicle segmentation and an accuracy of 91.12% in vehicle counting, outperforming the baselines.
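
As a rough illustration of the method described above, the sketch below combines an atrous (dilated) convolution block, a multi-task loss that adds a vehicle-count error term to the pixel-wise segmentation loss, and a morphological opening step for mask refinement. The loss weighting, count head, and all layer sizes are assumptions made for illustration only, not the published implementation.

```python
# Illustrative sketch only: atrous convolution, a joint segmentation + counting
# loss, and morphological refinement, in the spirit of the method above.
import torch
import torch.nn as nn
import torch.nn.functional as F

atrous_block = nn.Sequential(
    nn.Conv2d(64, 64, kernel_size=3, padding=2, dilation=2),  # atrous (dilated) convolution
    nn.ReLU(inplace=True),
)

def multi_task_loss(seg_logits, seg_target, count_pred, count_target, alpha=0.1):
    """Pixel-wise cross-entropy plus a squared-error term on the vehicle count.

    The weighting factor alpha is an assumption, not the paper's value.
    """
    seg_loss = F.cross_entropy(seg_logits, seg_target)   # segmentation term
    count_loss = F.mse_loss(count_pred, count_target)    # counting term
    return seg_loss + alpha * count_loss

# Example with dummy data: 2-class segmentation (background / vehicle)
logits = torch.randn(1, 2, 64, 64)            # network output
target = torch.randint(0, 2, (1, 64, 64))     # ground-truth labels
count_pred = torch.tensor([7.0])
count_true = torch.tensor([8.0])
loss = multi_task_loss(logits, target, count_pred, count_true)

# Morphological opening (assumed post-processing) to clean up the vehicle mask
from scipy import ndimage
import numpy as np
mask = (logits.argmax(dim=1)[0].numpy() == 1)
refined = ndimage.binary_opening(mask, structure=np.ones((3, 3), dtype=bool))
```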


IEEE Access ◽  
2021 ◽  
pp. 1-1
Author(s):  
Vishal Singh ◽  
Pradeeba Sridar ◽  
Jinman Kim ◽  
Ralph Nanan ◽  
N. Poornima ◽  
...  

2021 ◽  
Vol 40 (1) ◽  
Author(s):  
David Müller ◽  
Andreas Ehlen ◽  
Bernd Valeske

Abstract. Convolutional neural networks were used for multiclass segmentation in thermal infrared face analysis. The principle is based on existing image-to-image translation approaches, in which each pixel in an image is assigned a class label. We show that established network architectures can be trained for the task of multiclass face analysis in thermal infrared. The created class annotations consist of pixel-accurate locations of the different face classes. The trained network can then segment an acquired, previously unseen infrared face image into the defined classes. Furthermore, face classification during live image acquisition is demonstrated, so that the relative temperature of the learned areas can be displayed in real time. This allows a pixel-accurate facial temperature analysis, e.g. for infection screening such as for Covid-19. At the same time, our approach offers the advantage of concentrating on the relevant areas of the face: areas irrelevant for the relative temperature calculation, or accessories such as glasses, masks and jewelry, are not considered. A custom database was created to train the network. The results were quantitatively evaluated with the intersection over union (IoU) metric. The methodology shown can be transferred to similar problems in more quantitative thermography tasks, such as materials characterization or quality control in production.
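
The evaluation metric mentioned above, intersection over union (IoU), can be computed per class from a predicted label map and the pixel-accurate annotation. The following minimal NumPy sketch shows that computation; the class set and map sizes are made up for illustration and are not taken from the paper.

```python
# Minimal sketch of the per-class intersection-over-union (IoU) metric used
# for evaluation, computed between a predicted label map and the ground truth.
import numpy as np

def per_class_iou(pred, target, num_classes):
    """Return IoU per class; NaN where a class is absent from both maps."""
    ious = []
    for c in range(num_classes):
        pred_c = (pred == c)
        target_c = (target == c)
        intersection = np.logical_and(pred_c, target_c).sum()
        union = np.logical_or(pred_c, target_c).sum()
        ious.append(intersection / union if union > 0 else float("nan"))
    return ious

# Example with random 64x64 label maps and 5 face classes
# (e.g. skin, eyes, mouth, glasses, background -- illustrative only)
pred = np.random.randint(0, 5, (64, 64))
target = np.random.randint(0, 5, (64, 64))
print(per_class_iou(pred, target, num_classes=5))
```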

