IRAS 03158+4227 - a ULIRG in a Widely Separated Pair of Galaxies

2002 · Vol. 184 · pp. 215-216
Author(s): Helmut Meusinger, Bringfried Stecklum, Jens Brunzendorf

We present new deep optical images, optical spectroscopy, and high-resolution NIR images of IRAS 03158+4227, one of the most luminous ULIRGs from the IRAS 2 Jy sample. The data are best explained either by a multiple merger or by a ULIRG triggered in an early phase of galaxy interaction.

Author(s): J. Fagir, A. Schubert, M. Frioud, D. Henke

The fusion of synthetic aperture radar (SAR) and optical data is a dynamic research area, but image segmentation is rarely treated. While a few studies use low-resolution nadir-view optical images, we approached the segmentation of SAR and optical images acquired from the same airborne platform, leading to an oblique view with high resolution and thus increased complexity. To overcome the geometric differences, we generated a digital surface model (DSM) from adjacent optical images and used it to project both the DSM and the SAR data into the optical camera frame, followed by segmentation of each channel. The fused segmentation algorithm was found to outperform the single-channel version.
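The key geometric step in the abstract above is projecting the DSM (and, via the DSM, the SAR data) into the optical camera frame. A minimal pinhole-camera sketch of such a projection follows; the function name, calibration matrix, and camera pose are illustrative assumptions, not values from the paper.

```python
import numpy as np

def project_to_camera(points_world, K, R, t):
    """Project 3D world points (N, 3) to pixel coordinates (N, 2)
    with a pinhole model: x ~ K (R X + t)."""
    cam = R @ points_world.T + t.reshape(3, 1)  # world -> camera frame
    px = K @ cam                                # camera -> image plane
    return (px[:2] / px[2]).T                   # perspective divide

# Toy calibration: 1000 px focal length, principal point at (500, 500),
# camera looking straight down from 10 m above a flat DSM patch.
K = np.array([[1000.0,    0.0, 500.0],
              [   0.0, 1000.0, 500.0],
              [   0.0,    0.0,   1.0]])
R = np.eye(3)
t = np.array([0.0, 0.0, 10.0])

dsm_points = np.array([[0.0, 0.0, 0.0],   # DSM cell directly below the camera
                       [1.0, 0.0, 0.0]])  # DSM cell 1 m to the side
pix = project_to_camera(dsm_points, K, R, t)
# The nadir point lands on the principal point; the offset cell is
# displaced by f * X / Z = 1000 * 1 / 10 = 100 px.
```

For an oblique airborne view as in the paper, `R` would be the aircraft attitude rather than the identity, but the projection step is the same.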


2016 · Vol. 459 (4) · pp. 4183-4190
Author(s): V. G. Klochkova, E. L. Chentsov, A. S. Miroshnichenko, V. E. Panchuk, M. V. Yushkin

2019 · Vol. 11 (13) · pp. 1619
Author(s): Zhou Ya’nan, Luo Jiancheng, Feng Li, Zhou Xiaocheng

Spatial features retrieved from satellite data play an important role in improving crop classification. In this study, we proposed a deep-learning-based time-series analysis method that extracts and organizes spatial features to improve parcel-based crop classification using high-resolution optical images and multi-temporal synthetic aperture radar (SAR) data. Central to the method is the use of multiple deep convolutional networks (DCNs) to extract spatial features and a long short-term memory (LSTM) network to organize them. First, a precise farmland parcel map was delineated from the optical images. Second, hundreds of spatial features were retrieved by the DCNs from preprocessed SAR images and overlaid onto the parcel map to construct multivariate time-series of crop growth for the parcels. Third, LSTM-based network structures were constructed to organize these time-series features and produce the final parcel-based classification map. The method was applied to a dataset of high-resolution ZY-3 optical images and multi-temporal Sentinel-1A SAR data to classify crop types in Hunan Province, China. The classification results, showing an improvement of more than 5.0% in overall accuracy over methods without spatial features, demonstrate the effectiveness of the proposed method in extracting and organizing spatial features for parcel-based crop classification.
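The pipeline above feeds per-date DCN features for each parcel into an LSTM that summarises the crop-growth series. A minimal NumPy sketch of a single LSTM layer consuming such a feature time-series follows; the dimensions, random weights, and the `lstm_forward` name are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def lstm_forward(x_seq, Wx, Wh, b):
    """Run a single-layer LSTM over a feature sequence.
    x_seq: (T, d) per-date spatial feature vectors for one parcel.
    Wx: (4h, d), Wh: (4h, h), b: (4h,) stacked gate parameters [i, f, g, o]."""
    h_dim = Wh.shape[1]
    h = np.zeros(h_dim)
    c = np.zeros(h_dim)
    sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))
    for x in x_seq:
        z = Wx @ x + Wh @ h + b
        i, f, g, o = np.split(z, 4)
        i, f, o = sigmoid(i), sigmoid(f), sigmoid(o)
        c = f * c + i * np.tanh(g)   # cell state carries crop-growth history
        h = o * np.tanh(c)           # hidden state summarises the series so far
    return h

rng = np.random.default_rng(0)
T, d, hdim = 6, 8, 4                    # 6 SAR dates, 8 DCN features, 4 hidden units
features = rng.standard_normal((T, d))  # stand-in for per-date DCN outputs
Wx = rng.standard_normal((4 * hdim, d)) * 0.1
Wh = rng.standard_normal((4 * hdim, hdim)) * 0.1
b = np.zeros(4 * hdim)
summary = lstm_forward(features, Wx, Wh, b)  # would feed a classifier head
```

In the paper's setting, `features` would come from the DCNs applied to each SAR date, and `summary` would go to a softmax layer over crop types.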


Author(s): Balnarsaiah Battula, Laxminarayana Parayitam, T. S. Prasad, Penta Balakrishna, Chandrasekhar Patibandla

2020 · Vol. 101 (20)
Author(s): M. N. Popova, E. P. Chukalina, D. S. Erofeev, A. Jablunovskis, I. A. Gudim, ...

2018 · Vol. 10 (9) · pp. 1459
Author(s): Ying Sun, Xinchang Zhang, Xiaoyang Zhao, Qinchuan Xin

Identifying and extracting building boundaries from remote sensing data has been one of the hot topics in photogrammetry for decades. The active contour model (ACM) is a robust segmentation method widely used in building boundary extraction, but it often yields biased boundaries due to mixtures of trees and background. Although classification methods can mitigate this by separating buildings from other objects, they often introduce unavoidable salt-and-pepper artifacts. In this paper, we combine robust convolutional neural network (CNN) classification with the ACM to overcome the current limitations of building boundary extraction algorithms. We conduct two types of experiments: the first integrates the ACM into the CNN construction process, whereas the second starts building footprint detection with a CNN and then uses the ACM for post-processing. Assessments conducted at three levels demonstrate that the proposed methods efficiently extract building boundaries in five test scenes from two datasets. The mean accuracies in terms of the F1 score for the first type (and the second type) of experiment are 96.43 ± 3.34% (95.68 ± 3.22%), 88.60 ± 3.99% (89.06 ± 3.96%), and 91.62 ± 1.61% (91.47 ± 2.58%) at the scene, object, and pixel levels, respectively. The combined CNN and ACM solutions were shown to be effective at extracting building boundaries from high-resolution optical images and LiDAR data.
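As a rough illustration of the second experiment type (CNN detection followed by ACM post-processing), the sketch below refines a contour on a CNN building-probability map with a simple greedy active-contour iteration, a common discrete approximation of the ACM. All names, energy terms, and weights are illustrative assumptions, not the authors' formulation.

```python
import numpy as np

def refine_contour(prob_map, contour, alpha=0.5, beta=20.0, iters=20):
    """Greedy active-contour refinement on a CNN building-probability map.
    Each contour point moves to the 8-neighbour that best trades off
    contour smoothness against boundary (gradient) strength."""
    gy, gx = np.gradient(prob_map)
    edge = gx ** 2 + gy ** 2          # external energy: strong at building edges
    H, W = prob_map.shape
    pts = contour.astype(float)
    offsets = [(dy, dx) for dy in (-1, 0, 1) for dx in (-1, 0, 1)]
    for _ in range(iters):
        new_pts = pts.copy()
        for k, (y, x) in enumerate(pts):
            mid = 0.5 * (pts[k - 1] + pts[(k + 1) % len(pts)])  # neighbours' midpoint
            best, best_e = (y, x), np.inf
            for dy, dx in offsets:
                ny, nx = int(y) + dy, int(x) + dx
                if not (0 <= ny < H and 0 <= nx < W):
                    continue
                smooth = (ny - mid[0]) ** 2 + (nx - mid[1]) ** 2  # internal energy
                e = alpha * smooth - beta * edge[ny, nx]
                if e < best_e:
                    best_e, best = e, (ny, nx)
            new_pts[k] = best
        pts = new_pts
    return pts.astype(int)

# Synthetic scene: a 'building' square in a 40x40 CNN probability map.
prob = np.zeros((40, 40))
prob[10:30, 10:30] = 1.0
# Loose initial ring around the building (e.g. a coarse CNN footprint).
t = np.linspace(0, 2 * np.pi, 16, endpoint=False)
init = np.stack([20 + 15 * np.sin(t), 20 + 15 * np.cos(t)], axis=1)
refined = refine_contour(prob, init)  # contracts toward the square's edges
```

The smoothness weight `alpha` and edge weight `beta` play the roles of the internal and external energy coefficients in a classical snake; the paper's actual ACM variant may differ.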


2000 · Vol. 314 (1) · pp. 199-208
Author(s): N. Lehner, P. L. Dufton, D. L. Lambert, R. S. I. Ryans, F. P. Keenan

2017 · Vol. 7 (3)
Author(s): M. Salewski, S. V. Poltavtsev, I. A. Yugova, G. Karczewski, M. Wiater, ...

1988 · Vol. 334 · pp. L99
Author(s): Richard D. Schwartz, Donald G. Jennings, Peredur M. Williams, Martin Cohen
