Deep-learning Reconstruction of Three-dimensional Galaxy Distributions with Intensity Mapping Observations

2021 ◽  
Vol 923 (1) ◽  
pp. L7
Author(s):  
Kana Moriwaki ◽  
Naoki Yoshida

Abstract Line-intensity mapping is emerging as a novel method that measures the collective intensity fluctuations of atomic and molecular line emission from distant galaxies. Several observational programs at various wavelengths are ongoing or planned, but a critical problem remains: line confusion, in which emission lines originating from galaxies at different redshifts blend at the same observed wavelength. We devise a generative adversarial network that extracts designated emission-line signals from noisy three-dimensional data. Our novel network architecture accepts two input data cubes, in which the same underlying large-scale structure is traced by two emission lines, Hα and [O III], so that the network learns the relative contributions at each wavelength and is trained to decompose the respective signals. After being trained with a large number of realistic mock catalogs, the network is able to reconstruct the three-dimensional distribution of emission-line galaxies at z = 1.3−2.4. Bright galaxies are identified with a precision of 84%, and the cross-correlation coefficients between the true and reconstructed intensity maps are as high as 0.8. Our deep-learning method can be readily applied to data from planned spaceborne and ground-based experiments.
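The cross-correlation coefficient quoted above compares the Fourier modes of the true and reconstructed cubes. A minimal NumPy sketch of a global (not scale-dependent) coefficient, with illustrative names and synthetic data standing in for the paper's maps:

```python
import numpy as np

def cross_corr_coeff(map_a, map_b):
    """Global cross-correlation coefficient r = P_ab / sqrt(P_aa * P_bb),
    summed over all Fourier modes (the mean, i.e. the k = 0 mode, is removed)."""
    fa = np.fft.fftn(map_a - map_a.mean())
    fb = np.fft.fftn(map_b - map_b.mean())
    p_ab = np.sum((fa * np.conj(fb)).real)
    p_aa = np.sum(np.abs(fa) ** 2)
    p_bb = np.sum(np.abs(fb) ** 2)
    return p_ab / np.sqrt(p_aa * p_bb)

rng = np.random.default_rng(0)
truth = rng.normal(size=(16, 16, 16))                # "true" intensity cube
recon = truth + 0.5 * rng.normal(size=truth.shape)   # noisy reconstruction
r = cross_corr_coeff(truth, recon)                   # close to 1/sqrt(1.25)
```

A perfect reconstruction gives r = 1; uncorrelated noise pulls r below 1, which is why the paper's r ≈ 0.8 indicates a faithful recovery of the signal.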

2021 ◽  
Vol 11 (23) ◽  
pp. 11551
Author(s):  
Armando Levid Rodríguez-Santiago ◽  
José Aníbal Arias-Aguilar ◽  
Hiroshi Takemura ◽  
Alberto Elías Petrilli-Barceló

In this paper, an approach to the three-dimensional reconstruction of outdoor environments in challenging terrain conditions through a deep learning architecture is presented. The proposed architecture is configured as an autoencoder, but with some departures from the typical convolutional layers. The Encoder stage is a residual network with four residual blocks, trained to extract feature maps from aerial images of outdoor environments. The Decoder stage, in turn, is a Generative Adversarial Network (GAN), referred to as a GAN-Decoder. The proposed network takes a sequence of 2D aerial images as input: the Encoder stage extracts the feature vector describing each input image, while the GAN-Decoder generates a point cloud from the information obtained in the previous stage. By supplying a sequence of frames with a percentage of overlap between them, it is possible to determine the spatial location of each generated point. The experiments show that with this proposal it is possible to build a 3D representation of an area flown over by a drone, using the point cloud generated by a deep architecture whose input is a sequence of aerial 2D images. In comparison with other works, our proposed system is capable of performing three-dimensional reconstructions in challenging urban landscapes. Compared with the results obtained using commercial software, our proposal generates reconstructions in less processing time and with a lower overlap percentage between 2D images, and is invariant to the type of flight path.
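The abstract does not state the training objective for the GAN-Decoder; a loss commonly used when a network generates point clouds, shown here purely as an illustrative sketch (the function and variable names are our own), is the symmetric Chamfer distance between a generated and a reference cloud:

```python
import numpy as np

def chamfer_distance(p, q):
    """Symmetric Chamfer distance between point sets p (N, 3) and q (M, 3):
    mean nearest-neighbour squared distance, summed over both directions."""
    d2 = np.sum((p[:, None, :] - q[None, :, :]) ** 2, axis=-1)  # (N, M) pairwise
    return d2.min(axis=1).mean() + d2.min(axis=0).mean()

rng = np.random.default_rng(1)
cloud = rng.uniform(size=(100, 3))      # a toy generated point cloud
shifted = cloud + 1.0                   # the same cloud, displaced
d_same = chamfer_distance(cloud, cloud)
d_far = chamfer_distance(cloud, shifted)
```

The distance is zero for identical clouds and grows with geometric mismatch, which makes it a convenient reconstruction term alongside the adversarial loss.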


2020 ◽  
Vol 31 (6) ◽  
pp. 681-689
Author(s):  
Jalal Mirakhorli ◽  
Hamidreza Amindavar ◽  
Mojgan Mirakhorli

Abstract Functional magnetic resonance imaging, a neuroimaging technique used in studies of brain disorders and dysfunction, has been improved in recent years by mapping the topology of brain connections, an approach named connectopic mapping. Because healthy and unhealthy brain regions and functions differ only slightly, studying the complex topology of the functional and structural networks in the human brain is complicated, and the number of evaluation measures keeps growing. One application of deep learning on irregular graphs is the analysis of human cognitive functions related to gene expression and the associated distributed spatial patterns. Since a variety of brain solutions can be dynamically held in the neuronal networks of the brain with different activity patterns and functional connectivity, both node-centric and graph-centric tasks are involved in this application. In this study, we used an individual generative model and high-order graph analysis to recognize regions of interest with abnormal connections during the performance of certain tasks and in the resting state, and to decompose irregular observations. Accordingly, we propose a high-order framework of a Variational Graph Autoencoder with a Gaussian distribution, in which a Generative Adversarial Network optimizes the latent space while learning strong non-rigid graphs from large-scale data. Furthermore, we distinguish the possible modes of correlation in abnormal brain connections. Our goal was to find the degree of correlation between the affected regions and their simultaneous occurrence over time. This can be exploited to diagnose brain diseases, or to show the ability of the nervous system to modify brain topology and exhibit plasticity in response to input stimuli. In this study, we focused in particular on Alzheimer’s disease.
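The graph-convolutional encoder of a Variational Graph Autoencoder propagates node features with the symmetrically normalized adjacency matrix. A minimal sketch of that operator on a toy connectivity graph (the function name and the four-region chain graph are illustrative, not from the paper):

```python
import numpy as np

def normalized_adjacency(adj):
    """Symmetrically normalized adjacency with self-loops,
    A_hat = D^{-1/2} (A + I) D^{-1/2}: the propagation operator of a
    graph-convolutional encoder layer."""
    a = adj + np.eye(adj.shape[0])
    d_inv_sqrt = 1.0 / np.sqrt(a.sum(axis=1))
    return a * d_inv_sqrt[:, None] * d_inv_sqrt[None, :]

# toy functional-connectivity graph over four regions of interest (a chain)
adj = np.array([[0., 1., 0., 0.],
                [1., 0., 1., 0.],
                [0., 1., 0., 1.],
                [0., 0., 1., 0.]])
a_hat = normalized_adjacency(adj)
```

The normalization keeps the spectral radius at 1, so stacked propagation layers neither amplify nor collapse the node features.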


Sensors ◽  
2021 ◽  
Vol 21 (8) ◽  
pp. 2852
Author(s):  
Parvathaneni Naga Srinivasu ◽  
Jalluri Gnana SivaSai ◽  
Muhammad Fazal Ijaz ◽  
Akash Kumar Bhoi ◽  
Wonjoon Kim ◽  
...  

Deep learning models are efficient in learning the features that assist in understanding complex patterns precisely. This study proposes a computerized process for classifying skin disease through a deep-learning pipeline based on MobileNet V2 and Long Short-Term Memory (LSTM). The MobileNet V2 model proved efficient, achieving good accuracy while remaining suitable for lightweight computational devices, and the LSTM component maintains stateful information for precise predictions. A grey-level co-occurrence matrix is used to assess the progression of diseased growth. Performance is compared against other state-of-the-art models, such as fine-tuned neural networks (FTNN), a convolutional neural network (CNN), the Very Deep Convolutional Networks for Large-Scale Image Recognition developed by the Visual Geometry Group (VGG), and a CNN architecture expanded with a few changes. On the HAM10000 dataset, the proposed method outperforms these methods with more than 85% accuracy. It recognizes the affected region considerably faster, with almost half the computation of the conventional MobileNet model, keeping the computational effort minimal. Furthermore, a mobile application is designed for instant and proper action: it helps patients and dermatologists identify the type of disease from an image of the affected region at the initial stage of the skin disease. These findings suggest that the proposed system can help general practitioners efficiently and effectively diagnose skin conditions, thereby reducing further complications and morbidity.
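The grey-level co-occurrence matrix mentioned above counts how often pairs of grey levels co-occur at a fixed pixel offset; texture statistics such as Haralick contrast follow from it. A toy sketch (integer grey levels and a single horizontal offset are simplifications of this illustration, not claims about the paper's setup):

```python
import numpy as np

def glcm(img, levels, dx=1, dy=0):
    """Grey-level co-occurrence matrix: relative frequency of grey-level
    pairs (i, j) separated by the pixel offset (dx, dy)."""
    h, w = img.shape
    m = np.zeros((levels, levels))
    for y in range(h - dy):
        for x in range(w - dx):
            m[img[y, x], img[y + dy, x + dx]] += 1
    return m / m.sum()

patch = np.array([[0, 0, 1],
                  [0, 1, 2],
                  [2, 2, 2]])
p = glcm(patch, levels=3)
# Haralick contrast: sum over (i - j)^2 * p[i, j]
i, j = np.indices(p.shape)
contrast = np.sum((i - j) ** 2 * p)
```

Tracking such statistics across visits is one way a lesion's textural change, and hence disease progression, can be quantified.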


Galaxies ◽  
2018 ◽  
Vol 6 (4) ◽  
pp. 100 ◽  
Author(s):  
Karen Olsen ◽  
Andrea Pallottini ◽  
Aida Wofford ◽  
Marios Chatzikos ◽  
Mitchell Revalski ◽  
...  

Modeling emission lines from the millimeter to the UV and producing synthetic spectra is crucial for a good understanding of observations, yet it is an art filled with hazards. These are the proceedings of “Walking the Line”, a 3-day conference held in 2018 that brought together scientists working on different aspects of emission-line simulations in order to share knowledge and discuss methodology. Emission lines across the spectrum from the millimeter to the UV were discussed, with most of the focus on the interstellar medium but also some topics on the circumgalactic medium. The most important quality of a useful model is good synergy with observations and experiments. Challenges in simulating line emission are identified, some of which are already being worked on and others of which must be addressed in the future for models to agree with observations. Recent advances in several areas aiming to achieve that synergy are summarized here, from micro-physical to galactic and circumgalactic scales.


2020 ◽  
Vol 645 ◽  
pp. A12
Author(s):  
B. Balmaverde ◽  
A. Capetti ◽  
A. Marconi ◽  
G. Venturi ◽  
M. Chiaberge ◽  
...  

We present the final observations of a complete sample of 37 radio galaxies from the Third Cambridge Catalogue (3C) with redshift < 0.3 and declination < 20°, obtained with the VLT/MUSE optical integral field spectrograph. These data were obtained as part of the MUse RAdio Loud Emission line Snapshot (MURALES) survey, whose main goal is to explore the AGN feedback process in the most powerful radio sources. We present the data analysis and, for each source, the resulting emission-line images and the 2D gas velocity field. Thanks to their unprecedented depth, these observations reveal emission-line regions (ELRs) extending several tens of kiloparsecs in most objects. The gas velocity shows ordered rotation in 25 galaxies, but in several sources it is highly complex. We find that the 3C sources show a connection between radio morphology and emission-line properties. In the ten FR I sources the line-emission region is generally compact, only a few kpc in size; in only one case does it exceed the size of the host. Conversely, all but two of the FR II galaxies show large-scale structures of ionized gas. The median extent is 16 kpc, with the maximum reaching a size of ∼80 kpc. There are no apparent differences in extent or strength between the ELR properties of FR II sources of high and low gas excitation. We confirm that the previous optical identification of 3C 258 is incorrect: this radio source is likely associated with a quasi-stellar object at z ∼ 1.54.


2009 ◽  
Vol 5 (S267) ◽  
pp. 398-398
Author(s):  
Patrick B. Hall ◽  
Laura S. Chajet

Murray & Chiang (1997) developed a model wherein broad emission lines come from the optically thick base of a rotating, outwardly accelerating wind at the surface of an accretion disk. Photons preferentially escape radially in such a wind, explaining why broad emission lines are usually single-peaked. Less well understood are the observed shifts of emission-line peaks (from 1000 km s−1 redshifted to 2500 km s−1 blueshifted in C iv, with an average 800 km s−1 blueshift).


2020 ◽  
Vol 127 (Suppl_1) ◽  
Author(s):  
Bryant M Baldwin ◽  
Shane Joseph ◽  
Xiaodong Zhong ◽  
Ranya Kakish ◽  
Cherie Revere ◽  
...  

This study investigated MRI and semantic segmentation-based deep learning (SSDL) automation for left-ventricular chamber quantifications (LVCQs) and longitudinal strain (LLS) determination, eliminating user bias by providing an automated tool to detect cardiotoxicity (CT) in breast cancer patients treated with antineoplastic agents. Displacement Encoding with Stimulated Echoes (DENSE) myocardial images from 26 patients were analyzed with the tool’s Convolutional Neural Network with an underlying ResNet-50 architecture. Quantifications based on the SSDL tool’s output were LV end-diastolic diameter (LVEDD), ejection fraction (LVEF), and mass (LVM) (see figure for phase sequence). LLS was analyzed with the Radial Point Interpolation Method (RPIM) using DENSE phase-based displacements. LVCQs were validated by comparison with measurements obtained with an existing semi-automated vendor tool (VT), and strains by comparison between 2 independent users, employing Bland-Altman analysis (BAA) and intraclass correlation coefficients estimated with the Cronbach’s alpha (C-Alpha) index. The F1 score for classification accuracy was 0.92. LVCQs determined by SSDL and VT were 4.6 ± 0.5 vs 4.6 ± 0.7 cm (C-Alpha = 0.93, BAA = 0.5 ± 0.5 cm) for LVEDD, 58 ± 5 vs 58 ± 6% (0.90, 1 ± 5%) for LVEF, and 119 ± 17 vs 121 ± 14 g (0.93, 5 ± 8 g) for LV mass, while LLS was 14 ± 4 vs 14 ± 3% (0.86, 0.2 ± 6%). Hence, the equivalent LV dimensions, mass, and strains measured by the VT and DENSE imaging validate our automated analytic tool. Longitudinal strains in patients can then be analyzed without user bias to detect abnormalities indicating cardiotoxicity and the need for therapeutic intervention, even when LVEF is not affected.
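The Bland-Altman analysis used for validation reports the mean bias between two methods and the 95% limits of agreement, bias ± 1.96 SD of the paired differences. A sketch on hypothetical LVEF readings (the numbers below are invented for illustration and are not the study's data):

```python
import numpy as np

def bland_altman(x, y):
    """Bland-Altman statistics for two measurement methods: mean bias and
    95% limits of agreement (bias +/- 1.96 * SD of paired differences)."""
    diff = np.asarray(x, float) - np.asarray(y, float)
    bias = diff.mean()
    sd = diff.std(ddof=1)
    return bias, (bias - 1.96 * sd, bias + 1.96 * sd)

# hypothetical paired LVEF (%) readings: automated tool vs vendor tool
auto_tool = [58, 61, 55, 59, 62, 57]
vendor_tool = [57, 62, 54, 60, 61, 58]
bias, (lo, hi) = bland_altman(auto_tool, vendor_tool)
```

Agreement figures such as "1 ± 5%" in the abstract correspond to this bias ± limit-of-agreement form.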


2020 ◽  
Vol 496 (1) ◽  
pp. L54-L58 ◽  
Author(s):  
Kana Moriwaki ◽  
Nina Filippova ◽  
Masato Shirasaki ◽  
Naoki Yoshida

ABSTRACT Line intensity mapping (LIM) is an emerging observational method to study the large-scale structure of the Universe and its evolution. LIM does not resolve individual sources but probes the fluctuations of integrated line emissions. A serious limitation of LIM is that contributions of different emission lines from sources at different redshifts are all confused at an observed wavelength. We propose a deep learning application to solve this problem. We use conditional generative adversarial networks to extract designated information from LIM. We consider a simple case with two populations of emission-line galaxies: Hα-emitting galaxies at z = 1.3 are confused with [O III] emitters at z = 2.0 in a single observed waveband at 1.5 μm. Our networks, trained with 30 000 mock observation maps, are able to extract the total intensity and the spatial distribution of Hα-emitting galaxies at z = 1.3. The intensity peaks are successfully located with 74 per cent precision. The precision increases to 91 per cent when we combine five networks. The mean intensity and the power spectrum are reconstructed with an accuracy of ∼10 per cent. The extracted galaxy distributions over a wider range of redshift can be used for studies of cosmology and of galaxy formation and evolution.
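The power spectrum quoted above (reconstructed to ∼10 per cent) is conventionally estimated by shell-averaging |FFT|² of the map over bins of constant |k|. A 2D NumPy sketch of that standard estimator (the bin count, normalization, and white-noise test map are illustrative choices of ours):

```python
import numpy as np

def power_spectrum(field, nbins=8):
    """Azimuthally averaged power spectrum of a square 2D map:
    bin |FFT|^2 over shells of constant |k| (mean removed first)."""
    n = field.shape[0]
    fk = np.fft.fftn(field - field.mean())
    p2d = np.abs(fk) ** 2 / field.size
    k = np.fft.fftfreq(n) * n                      # integer wavenumbers
    kmag = np.sqrt(k[:, None] ** 2 + k[None, :] ** 2)
    edges = np.linspace(0.5, kmag.max(), nbins + 1)
    idx = np.digitize(kmag.ravel(), edges)
    pk = np.array([p2d.ravel()[idx == i].mean() for i in range(1, nbins + 1)])
    return edges, pk

rng = np.random.default_rng(2)
noise_map = rng.normal(size=(64, 64))   # white noise: flat spectrum expected
edges, pk = power_spectrum(noise_map)
```

Comparing pk of the reconstructed map with that of the truth, bin by bin, is how a "∼10 per cent" reconstruction accuracy would be quantified.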


Author(s):  
Tony Lindeberg

Abstract This paper presents a hybrid approach between scale-space theory and deep learning, in which a deep learning architecture is constructed by coupling parameterized scale-space operations in cascade. By sharing the learnt parameters between multiple scale channels, and by using the transformation properties of the scale-space primitives under scaling transformations, the resulting network becomes provably scale covariant. By additionally performing max pooling over the multiple scale channels, or some other permutation-invariant pooling over scales, the resulting image-classification architecture also becomes provably scale invariant. We investigate the performance of such networks on the MNIST Large Scale dataset, which contains images from the original MNIST dataset rescaled over a factor of 4 in the training data and over a factor of 16 in the testing data. It is demonstrated that the resulting approach allows for scale generalization, enabling good performance when classifying patterns at scales not spanned by the training data.
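The mechanism above, shared parameters across scale channels followed by max pooling over scales, can be illustrated with a 1D toy example in which unit-norm Gaussian templates stand in for the learned scale-space primitives (all names and parameters are illustrative, not the paper's networks):

```python
import numpy as np

def gaussian(x, sigma):
    """Unit-norm Gaussian template of width sigma."""
    g = np.exp(-x ** 2 / (2.0 * sigma ** 2))
    return g / np.linalg.norm(g)

def scale_channel_score(signal, sigmas):
    """Apply the same template across several scale channels (shared
    parameters) and max-pool the responses over scales."""
    x = np.arange(len(signal)) - len(signal) // 2
    return max(abs(np.dot(gaussian(x, s), signal)) for s in sigmas)

x = np.arange(256) - 128
sigmas = [2, 4, 8, 16, 32]                 # the scale channels
small_blob = gaussian(x, 4.0)              # a pattern...
large_blob = gaussian(x, 16.0)             # ...and a 4x rescaled version
s_small = scale_channel_score(small_blob, sigmas)
s_large = scale_channel_score(large_blob, sigmas)
```

Because the template bank covers both pattern sizes, the max-pooled score is identical for the small and the rescaled pattern, which is the invariance property the paper establishes for its architectures.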


2020 ◽  
Author(s):  
Xu Cheng ◽  
Chen Song ◽  
Yongxiang Gu ◽  
Beijing Chen ◽  
Lin Zhou ◽  
...  

Abstract Artificial intelligence has been widely studied for intelligent surveillance analysis and security problems in recent years. Although many multimedia security approaches based on deep learning network models have been proposed, their performance still presents challenges that deserve in-depth research. On the one hand, the high computational complexity of current deep learning methods makes them hard to apply in real-time scenarios. On the other hand, it is difficult to obtain the specific features of a video by fine-tuning the network online with only the object state of the first frame, which fails to capture rich appearance variations of the object. To solve these two issues, this paper proposes an effective object-tracking method with learned attention that achieves object localization and reduces training time within an adversarial learning framework. First, a prediction network is designed to track the object in video sequences; the object positions of the first ten frames are employed to fine-tune the prediction network, which can fully mine the specific features of an object. Second, the prediction network is integrated into a generative adversarial network framework, which randomly generates masks to capture object appearance variations by adaptively dropping out input features. Third, we present a spatial attention mechanism to improve tracking performance. The proposed network can identify the mask that maintains the most robust features of the object over a long temporal span. Extensive experiments on two large-scale benchmarks demonstrate that the proposed algorithm performs favorably against state-of-the-art methods.
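The abstract does not specify the form of its spatial attention mechanism; one common formulation, sketched here purely as an assumption (names and the toy feature map are ours), applies a softmax over spatial positions of the channel-averaged energy and reweights the feature map accordingly:

```python
import numpy as np

def spatial_attention(features):
    """Reweight a (C, H, W) feature map with a softmax over spatial
    positions of the channel-averaged energy."""
    energy = features.mean(axis=0)            # (H, W) saliency map
    w = np.exp(energy - energy.max())         # numerically stable softmax
    w /= w.sum()
    return features * w[None, :, :], w

feat = np.zeros((8, 5, 5))
feat[:, 2, 2] = 5.0                           # one strongly active location
attended, w = spatial_attention(feat)
```

The softmax concentrates weight on the most salient location, so downstream layers see the object region emphasized relative to the background.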

