Semantic segmentation from remote sensor data and the exploitation of latent learning for classification of auxiliary tasks

Author(s):  
Bodhiswatta Chatterjee ◽  
Charalambos Poullis

2018 ◽  
Vol 7 (2) ◽  
pp. 939 ◽  
Author(s):  
Shivakumar B R ◽  
Rajashekararadhya S V

In the past two decades, a significant amount of research has been conducted in the area of information extraction from heterogeneous remotely sensed (RS) datasets. However, it is arduous to exactly predict the behaviour of the classification technique employed due to issues such as the type of the dataset, the resolution of the imagery, the presence of mixed pixels, and the spectral overlap of classes. In this paper, land cover classification of a heterogeneous dataset using classical and Fuzzy based Maximum Likelihood Classifiers (MLC) is presented and compared. Three decision parameters and their significance in pixel assignment are illustrated. The presented Fuzzy based MLC uses a weighted inverse distance measure for the defuzzification process. Ten pixels were randomly selected from the study area to illustrate pixel assignment under both classifiers. The study aims at enhancing the classification accuracy of heterogeneous multispectral remote sensor data characterized by spectrally overlapping classes and mixed pixels. It additionally aims at obtaining classification results at a 95% confidence level with a ±4% error margin. The classification success rate was analysed using accuracy assessment. The Fuzzy based MLC produced significantly higher classification accuracy than the classical MLC. The conducted research achieves the expected classification accuracy and proves to be a valuable technique for the classification of heterogeneous RS multispectral imagery.
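The decision rule contrasted in this abstract can be sketched as follows. This is a minimal illustration, not the paper's implementation: the two class signatures, the identity-scaled covariances, and the use of plain Euclidean distance for the inverse-distance fuzzy memberships are all assumptions chosen for clarity.

```python
import numpy as np

# Hypothetical class statistics estimated from training pixels
# (illustrative values, not the paper's study-area signatures).
means = {"water": np.array([0.1, 0.2]), "urban": np.array([0.6, 0.5])}
covs = {c: np.eye(2) * 0.01 for c in means}

def mlc_discriminant(x, mean, cov):
    """Gaussian maximum-likelihood discriminant (log-likelihood up to a constant)."""
    d = x - mean
    return -0.5 * (np.log(np.linalg.det(cov)) + d @ np.linalg.inv(cov) @ d)

def classify(x):
    # Classical MLC: assign the pixel to the class with the highest likelihood.
    scores = {c: mlc_discriminant(x, means[c], covs[c]) for c in means}
    crisp = max(scores, key=scores.get)
    # Fuzzy variant: memberships from inverse distances to class means,
    # normalized so they sum to 1 (a stand-in for the weighted inverse
    # distance defuzzification described in the abstract).
    weights = {c: 1.0 / (np.linalg.norm(x - means[c]) + 1e-9) for c in means}
    total = sum(weights.values())
    memberships = {c: w / total for c, w in weights.items()}
    return crisp, memberships

pixel = np.array([0.15, 0.22])  # a mixed pixel near the "water" signature
crisp, fuzzy = classify(pixel)
```

For a mixed pixel, the fuzzy memberships expose how close the competing classes are, whereas the classical MLC only returns the single winning label.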


Energies ◽  
2021 ◽  
Vol 14 (2) ◽  
pp. 353
Author(s):  
Yu Hou ◽  
Rebekka Volk ◽  
Lucio Soibelman

Multi-sensor imagery data have been used by researchers for the image semantic segmentation of buildings and outdoor scenes. Because multi-sensor data are scarce, researchers have implemented many simulation approaches to create synthetic datasets, including synthesized thermal images, since such thermal information can potentially improve segmentation accuracy. However, current approaches are mostly based on the laws of physics and are limited by the geometric models’ level of detail (LOD), which describes the overall planning or modeling state. Another issue with current physics-based approaches is that the thermal images cannot be aligned to the RGB images: the configuration of the virtual camera used for rendering thermal images is difficult to synchronize with the configuration of the real camera used for capturing RGB images, and this alignment is important for segmentation. In this study, we propose an image translation approach that directly converts RGB images to simulated thermal images for expanding segmentation datasets. We aim to investigate the benefits of using an image translation approach for generating synthetic aerial thermal images and to compare it with physics-based approaches. Our datasets for generating thermal images come from a city center and a university campus in Karlsruhe, Germany. We found that the model trained on the city-center data generated better thermal images for the campus dataset than the campus-trained model did for the city-center dataset. We also found that a model trained on one building style generated thermal images well for datasets with the same building style. Therefore, for an image translation approach, we suggest using training datasets with richer and more diverse architectural information, more complex envelope structures, and building styles similar to those of the test datasets.
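The core idea of paired image translation is to learn a mapping from RGB pixels to thermal intensities from aligned image pairs. The sketch below reduces this to its simplest possible form: a per-pixel linear regression fitted by least squares on synthetic paired data. The "true" linear sensor relation and the noise level are invented for illustration; practical work of this kind typically uses a conditional adversarial network rather than a linear model.

```python
import numpy as np

# Toy paired dataset: RGB pixel values and corresponding thermal
# intensities (synthetic; real training would use aligned RGB/thermal
# image pairs from the same scene).
rng = np.random.default_rng(0)
rgb = rng.random((500, 3))
true_w = np.array([0.5, 0.3, 0.2])  # hidden "sensor" relation, assumed
thermal = rgb @ true_w + 0.01 * rng.standard_normal(500)

# Fit the simplest possible translation model: linear regression with an
# intercept, solved by ordinary least squares.
A = np.c_[rgb, np.ones(len(rgb))]
w, *_ = np.linalg.lstsq(A, thermal, rcond=None)

def translate(rgb_pixels):
    """Map RGB pixels to simulated thermal intensities with the fitted model."""
    return np.c_[rgb_pixels, np.ones(len(rgb_pixels))] @ w
```

The cross-domain generalization question studied in the abstract corresponds to fitting such a model on one scene (city center) and evaluating `translate` on pixels from another (campus).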


2021 ◽  
Vol 13 (15) ◽  
pp. 3021
Author(s):  
Bufan Zhao ◽  
Xianghong Hua ◽  
Kegen Yu ◽  
Xiaoxing He ◽  
Weixing Xue ◽  
...  

Urban object segmentation and classification are critical data processing steps in scene understanding, intelligent vehicles and 3D high-precision maps. Semantic segmentation of 3D point clouds is the foundational step in object recognition. To identify intersecting objects and improve classification accuracy, this paper proposes a segment-based classification method for 3D point clouds. The method first divides the points into multi-scale supervoxels and groups them through the proposed inverse node graph (IN-Graph) construction, which requires no prior information about the nodes; instead, it divides supervoxels by judging the connection state of the edges between them. The method minimizes the global energy by graph cutting, obtaining structural segments as completely as possible while retaining their boundaries. Then, a random forest classifier is used for supervised classification. To deal with the mislabeling of scattered fragments, a higher-order CRF with small-label-cluster optimization is proposed to refine the classification results. Experiments were carried out on a mobile laser scanning (MLS) point dataset and a terrestrial laser scanning (TLS) point dataset, and the results show overall accuracies of 97.57% and 96.39%, respectively. Object boundaries were retained well, and the method achieved good results in the classification of cars and motorcycles. Further experimental analyses verified the advantages of the proposed method and demonstrated its practicability and versatility.
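The supervised step of this pipeline, a random forest over per-segment features, can be sketched as below. The feature choice (mean height and planarity per segment) and the two synthetic classes are assumptions for illustration; the paper's actual features, classes, and CRF refinement stage are not reproduced here.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Hypothetical per-segment features [mean height, planarity] for two
# urban classes (synthetic values, not from the MLS/TLS datasets).
rng = np.random.default_rng(42)
ground = np.c_[rng.normal(0.1, 0.05, 100), rng.normal(0.9, 0.05, 100)]
cars = np.c_[rng.normal(1.2, 0.2, 100), rng.normal(0.4, 0.1, 100)]
X = np.vstack([ground, cars])
y = np.array([0] * 100 + [1] * 100)  # 0 = ground segment, 1 = car segment

# Train a random forest on the labeled segments, then label new ones.
clf = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)
pred = clf.predict([[0.12, 0.88], [1.1, 0.45]])
```

In the full method these per-segment labels would then be refined by the higher-order CRF to clean up mislabeled scattered fragments.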


2021 ◽  
Vol 185 ◽  
pp. 282-291
Author(s):  
Nizam U. Ahamed ◽  
Kellen T. Krajewski ◽  
Camille C. Johnson ◽  
Adam J. Sterczala ◽  
Julie P. Greeves ◽  
...  

2004 ◽  
Author(s):  
Alan E. Lipton ◽  
Jean-Luc Moncet ◽  
John Galantowicz ◽  
Haijun Hu ◽  
Richard Lynch ◽  
...  

1997 ◽  
Vol 30 (9) ◽  
pp. 347-351 ◽  
Author(s):  
Z. Boger ◽  
L. Ratton ◽  
T.A. Kunt ◽  
T.J. Mc Avoy ◽  
R.E. Cavicchi ◽  
...  
