2d data
Recently Published Documents


TOTAL DOCUMENTS: 235 (FIVE YEARS: 79)
H-INDEX: 15 (FIVE YEARS: 4)

2022 · Vol 12 (1)
Author(s): Liyao Song, Quan Wang, Ting Liu, Haiwei Li, Jiancun Fan, ...

Abstract. Spatial resolution is a key factor in quantitatively evaluating the quality of magnetic resonance imaging (MRI). Super-resolution (SR) approaches can improve spatial resolution by reconstructing high-resolution (HR) images from low-resolution (LR) ones to meet clinical and scientific requirements. To increase the quality of brain MRI, we study a robust residual-learning SR network (RRLSRN) that generates a sharp HR brain image from an LR input. Because the Charbonnier loss handles outliers well and the Gradient Difference Loss (GDL) sharpens images, we combine the two to improve the robustness of the model and enhance the texture information of the SR results. Two adult brain MRI datasets, Kirby 21 and NAMIC, were used to train the model and verify its effectiveness. To further verify the generalizability and robustness of the proposed model, we collected eight clinical 2D fetal brain MRI scans for evaluation. The experimental results show that the proposed deep residual-learning network achieves superior performance and higher efficiency than the compared methods.
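The loss combination is simple to reproduce; below is a minimal PyTorch sketch of a Charbonnier-plus-GDL objective. The weighting factor `lambda_gdl` and the smoothing constant `eps` are illustrative assumptions, not values published with RRLSRN.

```python
import torch
import torch.nn as nn

class CharbonnierGDLLoss(nn.Module):
    """Charbonnier loss (robust L1) plus a Gradient Difference Loss term.

    lambda_gdl and eps are illustrative hyperparameters, not values
    taken from the paper.
    """
    def __init__(self, eps=1e-3, lambda_gdl=0.1):
        super().__init__()
        self.eps = eps
        self.lambda_gdl = lambda_gdl

    def forward(self, sr, hr):
        # Charbonnier: smooth approximation of L1, robust to outliers.
        charbonnier = torch.sqrt((sr - hr) ** 2 + self.eps ** 2).mean()

        # GDL: penalize differences between the image gradients of the
        # super-resolved and ground-truth images to sharpen edges.
        def grads(x):
            gx = x[..., :, 1:] - x[..., :, :-1]   # horizontal gradient
            gy = x[..., 1:, :] - x[..., :-1, :]   # vertical gradient
            return gx, gy

        sr_gx, sr_gy = grads(sr)
        hr_gx, hr_gy = grads(hr)
        gdl = ((sr_gx.abs() - hr_gx.abs()).abs().mean()
               + (sr_gy.abs() - hr_gy.abs()).abs().mean())

        return charbonnier + self.lambda_gdl * gdl
```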


Eos · 2022 · Vol 103
Author(s): Morgan Rehnberg

Using 1D and 2D data sources as model constraints yields fine-scale insights into real-world aurorae.


Information · 2021 · Vol 13 (1) · pp. 7
Author(s): Milena Vuckovic, Johanna Schmidt, Thomas Ortner, Daniel Cornel

The application potential of Visual Analytics (VA), with its supporting interactive 2D and 3D visualization techniques, is unparalleled in the environmental domain. Such advanced systems may enable an in-depth interactive exploration of multifaceted geospatial and temporal changes in very large and complex datasets. This is facilitated by a unique synergy of modules for simulation, analysis, and visualization, offering instantaneous visual feedback on transformative changes in the underlying data. However, even though the resulting knowledge holds great potential for supporting decision-making in the environmental domain, such techniques still have to find their way into daily practice. To advance these developments, we demonstrate four case studies that portray different opportunities for data visualization and VA in the context of climate research and natural disaster management. Firstly, we focus on 2D data visualization and explorative analysis for climate change detection and urban microclimate development through a comprehensive time series analysis. Secondly, we focus on the combination of 2D and 3D representations and investigations for flood and storm water management through comprehensive flood and heavy rain simulations. These examples are by no means exhaustive, but serve to demonstrate how a VA framework may apply to practical research.
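For a flavour of the first case study, the sketch below shows the simplest kind of explorative 2D time-series analysis: plotting an annual temperature series against its rolling mean to expose a long-term trend. The data are a synthetic stand-in, not the study's climate records.

```python
import numpy as np
import matplotlib.pyplot as plt

# Synthetic stand-in for an urban temperature record; the real case
# study uses long-term climate observations.
rng = np.random.default_rng(0)
years = np.arange(1960, 2021)
temps = 0.02 * (years - 1960) + rng.normal(0, 0.4, years.size) + 14.0

# A 10-year rolling mean makes the warming trend visible over the noise.
window = 10
rolling = np.convolve(temps, np.ones(window) / window, mode="valid")

plt.plot(years, temps, alpha=0.4, label="annual mean")
plt.plot(years[window - 1:], rolling, label=f"{window}-yr rolling mean")
plt.xlabel("year"); plt.ylabel("temperature (°C)")
plt.legend(); plt.title("Trend detection in a 2D time series")
plt.show()
```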


Author(s): Valeria Coenda, Martín de los Rios, Hernán Muriel, Sofía A Cora, Héctor J Martínez, ...

Abstract. We connect galaxy properties with their orbital classification by analysing a sample of galaxies with stellar mass M⋆ ≥ 10^8.5 h^−1 M⊙ residing in and around massive, isolated galaxy clusters with mass M200 > 10^15 h^−1 M⊙ at redshift z = 0. The galaxy population is generated by applying the semi-analytic model of galaxy formation SAG to the cosmological simulation MultiDark Planck 2. We classify galaxies by their real orbits (3D) and by their projected phase-space position using the ROGER code (2D). We define five categories: cluster galaxies, galaxies that have recently fallen into a cluster, backsplash galaxies, infalling galaxies, and interloper galaxies. For each class, we analyse the ^0.1(g − r) colour, the specific star formation rate (sSFR), and the stellar age as a function of stellar mass. For the 3D classes, we find that cluster galaxies have the lowest sSFR and are the reddest and oldest, as expected from environmental effects. Backsplash galaxies have properties intermediate between those of cluster and recent infaller galaxies. For each 2D class, we find significant contamination from the other classes. To perform a more realistic analysis of the 2D data, we find it necessary to separate the galaxies into red and blue populations. For the red population, the 2D results are in good agreement with the 3D predictions. Nevertheless, for the blue population, the 2D analysis only provides reliable results for recent infallers, infalling galaxies, and interloper galaxies.
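For intuition, here is a toy sketch of the kind of projected phase-space (2D) classification that ROGER performs. The class boundaries are invented for illustration; the actual ROGER code uses machine-learning models trained on simulations.

```python
import numpy as np

# Toy projected phase-space classification in the spirit of ROGER:
# each galaxy is described by its projected clustercentric distance
# (in units of R200) and line-of-sight velocity (in units of the
# cluster velocity dispersion). The boundaries below are illustrative
# assumptions, not the trained ROGER model.
def classify_2d(r_proj, v_los):
    if r_proj < 1.0 and abs(v_los) > 1.5:
        return "recent infaller"
    if r_proj < 1.0:
        return "cluster"
    if r_proj < 2.0 and abs(v_los) < 1.0:
        return "backsplash"
    if abs(v_los) > 1.0:
        return "infalling"
    return "interloper"

rng = np.random.default_rng(1)
for r, v in zip(rng.uniform(0, 3, 5), rng.normal(0, 1.5, 5)):
    print(f"R/R200={r:.2f}, v/sigma={v:+.2f} -> {classify_2d(r, v)}")
```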


2021 · Vol 54 (6)
Author(s): Chris M. Fancher, Jeff R. Bunn, Jean Bilheux, Wenduo Zhou, Ross E. Whitfield, ...

The pyRS (Python residual stress) analysis software was designed to address the data reduction and analysis needs of the High Intensity Diffractometer for Residual Stress Analysis (HIDRA) user community. pyRS implements frameworks for the calibration and reduction of measured 2D data into intensity versus scattering vector magnitude and subsequent single-peak-fitting analysis to facilitate texture and residual strain/stress analysis. pyRS components are accessible as standalone user interfaces for peak-fitting and stress/strain analysis or as Python scripts. The scripting interface facilitates automated data reduction and peak-fitting analysis using an autoreduction protocol. Details of the implemented functionality are discussed.
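The two pyRS stages (reduction of a 2D detector image to a 1D pattern, then single-peak fitting) can be illustrated on synthetic data. The sketch below uses NumPy/SciPy with a simplified flat-detector geometry and a Gaussian peak shape; it does not call the pyRS API itself, which additionally handles calibration and instrument geometry.

```python
import numpy as np
from scipy.optimize import curve_fit

# Synthetic 2D detector image with a diffraction ring.
ny, nx = 256, 256
y, x = np.mgrid[0:ny, 0:nx]
r = np.hypot(x - nx / 2, y - ny / 2)               # radius as a |Q| proxy
image = np.exp(-0.5 * ((r - 60) / 4) ** 2) + 0.05 * np.random.rand(ny, nx)

# Stage 1: radial (azimuthal) binning -> 1D intensity pattern.
bins = np.arange(0, 128, 1.0)
idx = np.digitize(r.ravel(), bins)
pattern = np.array([image.ravel()[idx == i].mean()
                    for i in range(1, bins.size)])
q = 0.5 * (bins[:-1] + bins[1:])

# Stage 2: single-peak fit (Gaussian + constant background).
def peak(qv, amp, q0, sigma, bg):
    return amp * np.exp(-0.5 * ((qv - q0) / sigma) ** 2) + bg

popt, _ = curve_fit(peak, q, pattern, p0=[1.0, 60.0, 5.0, 0.05])
print("fitted peak centre:", popt[1])
```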


Solid Earth · 2021 · Vol 12 (11) · pp. 2573-2596
Author(s): Maurizio Ercoli, Daniele Cirillo, Cristina Pauselli, Harry M. Jol, Francesco Brozzetti

Abstract. With the aim of unveiling evidence of Late Quaternary faulting, a series of ground-penetrating radar (GPR) profiles was acquired across the southern portion of the Fosso della Valle–Campotenese normal fault (VCT), located in the Campotenese continental basin (Mt. Pollino region) in the southern Apennines active extensional belt (Italy). A set of 49 GPR profiles, traced nearly perpendicular to this normal fault, was acquired using 300 and 500 MHz antennas and carefully processed through a customized workflow. The data interpretation allowed us to reconstruct a pseudo-3D model depicting the boundary between the Mesozoic bedrock and the sedimentary fill of the basin, which are in close proximity to the fault. After reviewing and defining the GPR signature of faulting, we interpreted near-surface alluvial and colluvial sediments dislocated by a set of conjugate (W- and E-dipping) discontinuities that penetrate into the underlying Triassic dolostones. Close to the contact between the continental deposits and the bedrock, buried scarps that offset wedge-shaped deposits are interpreted as coseismic ruptures, subsequently sealed by later deposits. Our pseudo-3D GPR dataset represents a good trade-off between a dense 3D GPR volume and conventional 2D data, which normally require a higher degree of subjectivity during interpretation. We have thus reconstructed a reliable subsurface fault pattern, discriminating master faults from a series of secondary splays. This contribution better characterizes active Quaternary faults in an area that falls within the Pollino seismic gap and is considered prone to severe surface faulting. Our results encourage further research at the study site, and we recommend our workflow for similar regions characterized by high seismic hazard and a scarcity of near-surface geophysical data.
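The pseudo-3D reconstruction step can be pictured as gridding reflector picks from many parallel 2D profiles into a single surface. A minimal sketch with invented survey geometry (not the paper's acquisition layout) follows.

```python
import numpy as np
from scipy.interpolate import griddata

# Scattered (x, y, depth) picks of a reflector (e.g. the bedrock/fill
# boundary) from parallel 2D GPR profiles are interpolated onto a
# regular map grid. Coordinates and spacing are illustrative.
rng = np.random.default_rng(2)
profile_y = np.arange(0, 50, 2.0)          # 25 parallel profiles, 2 m apart
picks = []
for yline in profile_y:
    xs = np.linspace(0, 100, 40)           # picks along each profile
    depth = (2.0 + 0.02 * xs + 0.3 * np.sin(yline / 8)
             + rng.normal(0, 0.05, xs.size))
    picks += [(xv, yline, d) for xv, d in zip(xs, depth)]
picks = np.array(picks)

# Interpolate the scattered picks onto a regular (x, y) grid.
gx, gy = np.meshgrid(np.linspace(0, 100, 101), np.linspace(0, 48, 49))
surface = griddata(picks[:, :2], picks[:, 2], (gx, gy), method="cubic")
print("interpolated horizon grid:", surface.shape)
```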


Author(s): R. G. Kippers, M. Koeva, M. van Keulen, S. J. Oude Elberink

Abstract. In the past decade, a great deal of effort has been put into applying digital innovations to building life cycles. 3D models have proven effective for decision making, scenario simulation, and 3D data analysis during this life cycle. Creating such a digital representation of a building can be a labour-intensive task, depending on the desired scale and level of detail (LOD). This research aims to create a new automatic, deep-learning-based method for building model reconstruction. It combines exterior and interior data sources: 1) 3D BAG, 2) archived floor plan images. To reconstruct 3D building models from the two data sources, an innovative combination of methods is proposed. Deep learning techniques are used to obtain the information needed from the floor plan images (walls, openings, and labels). In addition, post-processing techniques are introduced to transform the data into the required format. A data fusion process is introduced to fuse the extracted 2D data with the 3D exterior. Our literature review found no prior research on the automatic integration of CityGML/JSON and floor plan images; this method is therefore a first approach to this data integration.
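The fusion step can be illustrated by extruding 2D wall segments extracted from a floor plan into 3D faces, using a storey height taken from the exterior model. The sketch below uses illustrative data structures, not the authors' CityGML/JSON pipeline.

```python
from dataclasses import dataclass

@dataclass
class Wall2D:
    # A wall segment extracted from a floor plan image (illustrative).
    x1: float
    y1: float
    x2: float
    y2: float

def extrude(wall: Wall2D, z0: float, z1: float):
    """Return the four 3D corners of a vertical wall face."""
    return [(wall.x1, wall.y1, z0), (wall.x2, wall.y2, z0),
            (wall.x2, wall.y2, z1), (wall.x1, wall.y1, z1)]

storey_height = 2.8            # illustrative, taken from the exterior model
walls = [Wall2D(0, 0, 5, 0), Wall2D(5, 0, 5, 4)]
faces = [extrude(w, 0.0, storey_height) for w in walls]
print(len(faces), "wall faces, e.g.", faces[0])
```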


Entropy · 2021 · Vol 23 (10) · pp. 1261
Author(s): Ricardo Espinosa, Raquel Bailón, Pablo Laguna

Image processing plays an important role in various industries, where a central challenge is extracting specific features from images. In particular, texture characterizes the occurrence of patterns in the spatial distribution of pixel intensities, and it has been applied in classification and segmentation tasks. Several feature extraction methods have therefore been proposed in recent decades, but few of them rely on entropy, a measure of uncertainty, and entropy algorithms have been little explored for two-dimensional data. There is nevertheless growing interest in developing algorithms that overcome current limits, since Shannon entropy does not consider spatial information and SampEn2D generates unreliable values for small image sizes. We introduce a new algorithm, EspEn (Espinosa Entropy), to measure the irregularity present in two-dimensional data; the calculation requires setting three parameters: m (side length of the square window), r (tolerance threshold), and ρ (percentage of similarity). Three experiments were performed: the first two used simulated images contaminated with different noise levels, and the last used grayscale images from the Normalized Brodatz Texture database (NBT). First, we compared the performance of EspEn against Shannon entropy and SampEn2D. Second, we evaluated the dependence of EspEn on variations in the parameters m, r, and ρ. Third, we evaluated the EspEn algorithm on NBT images. The results show that EspEn can discriminate images with different sizes and degrees of noise. EspEn thus provides an alternative algorithm for quantifying the irregularity in 2D data; the recommended parameters for better performance are m = 3, r = 20, and ρ = 0.7.
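A toy sketch of the window-matching idea, using the paper's parameters m, r, and ρ, is given below; it is a loose approximation under stated assumptions, and the exact EspEn formula should be taken from the publication itself.

```python
import numpy as np
from itertools import product

def espen_like(img, m=3, r=20, rho=0.7):
    """Toy irregularity measure in the spirit of EspEn.

    Assumption: two m-by-m windows count as similar when at least a
    fraction rho of their pixel pairs differ by less than r. This
    follows the paper's parameter definitions (m, r, rho) but is not
    the published EspEn formula.
    """
    h, w = img.shape
    coords = list(product(range(h - m + 1), range(w - m + 1)))
    windows = np.array([img[i:i + m, j:j + m].ravel() for i, j in coords],
                       dtype=float)
    n = len(windows)
    similar = 0
    for a in range(n - 1):
        # fraction of nearly-equal pixels between window a and the rest
        frac = (np.abs(windows[a + 1:] - windows[a]) < r).mean(axis=1)
        similar += int((frac >= rho).sum())
    total = n * (n - 1) // 2
    # a lower ratio of similar windows means higher irregularity
    return -np.log(max(similar, 1) / total)

rng = np.random.default_rng(3)
print("noise :", espen_like(rng.integers(0, 256, (32, 32))))
print("const :", espen_like(np.full((32, 32), 128)))
```

On this toy measure, a constant image scores 0 (every window matches every other) while uniform noise scores high, mirroring the regular-versus-irregular discrimination the paper reports.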


2021
Author(s): Dean Edun, Olivia Cracchiolo, Arnaldo Serrano

The coupled amide-I vibrational modes in peptide systems such as fibrillar aggregates can provide a wealth of structural information, though the associated spectra can be difficult to interpret. Using exciton scattering calculations, we characterized the polarization-selective 2DIR peak patterns of cross-α peptide fibrils, a challenging system given the similarity between the monomeric and fibrillar structures, and interpreted the results in light of recently collected 2D data on the cross-α peptide PSMα3. We find that stacking of α-helices into fibrils couples the bright modes across helical subunits, generating three new Bloch-like extended excitonic states that we designate A⊥, E∥, and E⊥. Coherent superpositions of these states in broad-band 2DIR simulations lead to characteristic signals that are sensitive to fibril length and match the experimental 2DIR spectra.
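The underlying exciton picture can be sketched by diagonalizing a one-exciton Hamiltonian for a chain of coupled bright modes, one per helical subunit; the delocalized eigenstates are the Bloch-like states the abstract refers to. The site energy and coupling below are illustrative, not values fitted to PSMα3.

```python
import numpy as np

# One-exciton Hamiltonian for a chain of coupled amide-I oscillators
# with nearest-neighbour coupling J along the fibril axis. Values are
# illustrative assumptions.
n_sites = 10                   # helical subunits in the fibril
e_site = 1650.0                # amide-I site energy (cm^-1), assumed
J = -10.0                      # nearest-neighbour coupling (cm^-1), assumed

H = (np.diag(np.full(n_sites, e_site))
     + np.diag(np.full(n_sites - 1, J), 1)
     + np.diag(np.full(n_sites - 1, J), -1))

# Diagonalization yields delocalized exciton states whose energy
# spacing depends on the chain length, i.e. on fibril size.
energies, states = np.linalg.eigh(H)
print("exciton band (cm^-1):", np.round(energies, 1))
```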


2021
Author(s): Qing Xie, Chengong Han, Victor Jin, Shili Lin

Single-cell Hi-C techniques enable one to study cell-to-cell variability in chromatin interactions. However, single-cell Hi-C (scHi-C) data suffer severely from sparsity, that is, an excess of zeros due to insufficient sequencing depth. Complicating things further is the fact that not all zeros are created equal: some arise because loci truly do not interact, owing to the underlying biological mechanism (structural zeros), whereas others are indeed due to insufficient sequencing depth (sampling zeros), especially for loci that interact infrequently. Differentiating between structural and sampling zeros is important, since correct inference would improve downstream analyses such as clustering and the discovery of subtypes. Nevertheless, distinguishing between these two types of zeros has received little attention in the single-cell Hi-C literature, where sparsity has been addressed mainly as a data quality improvement problem. To fill this gap, we propose HiCImpute, a Bayesian hierarchical model that goes beyond data quality improvement by also identifying observed zeros that are in fact structural zeros. HiCImpute takes the spatial dependencies of the scHi-C 2D data structure into account while also borrowing information from similar single cells and from bulk data, when available. Through an extensive set of analyses of synthetic and real data, we demonstrate the ability of HiCImpute to identify structural zeros with high sensitivity and to accurately impute dropout values in sampling zeros. Downstream analyses using data improved by HiCImpute yielded much more accurate clustering of cell types than using the observed data or data improved by several comparison methods. Most significantly, HiCImpute-improved data led to the identification of subtypes within each of the L4 and L5 excitatory neuronal cell types of the prefrontal cortex.
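The distinction between the two kinds of zeros can be illustrated with a toy zero-inflated Poisson simulation; all rates and proportions below are invented for illustration and are not HiCImpute's actual model.

```python
import numpy as np

# Structural zeros: the pair of loci truly never interacts.
# Sampling zeros: the pair interacts, but shallow sequencing missed it.
rng = np.random.default_rng(4)
n_pairs = 10_000
is_structural = rng.random(n_pairs) < 0.3      # true non-interacting pairs
rate = np.where(is_structural, 0.0, rng.gamma(2.0, 0.5, n_pairs))
counts = rng.poisson(rate)                     # observed scHi-C counts

zeros = counts == 0
sampling_zeros = zeros & ~is_structural
print(f"observed zeros: {zeros.sum()}, "
      f"of which sampling zeros: {sampling_zeros.sum()}")
# Counts alone cannot separate the two groups; HiCImpute brings in
# spatial neighbours, similar cells, and bulk data to do so.
```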

