multiple resolutions
Recently Published Documents


TOTAL DOCUMENTS: 114 (five years: 36)

H-INDEX: 16 (five years: 2)

2022 ◽  
Author(s):  
Mohamed Amgad ◽  
Roberto Salgado ◽  
Lee A.D. Cooper

Tumor-Infiltrating Lymphocytes (TILs) have strong prognostic and predictive value in breast cancer, but their visual assessment is subjective. We present MuTILs, a convolutional neural network architecture specifically optimized for the assessment of TILs in whole-slide image scans in accordance with clinical scoring recommendations. MuTILs is a concept bottleneck model, designed to be explainable and to encourage sensible predictions at multiple resolutions. Our computational scores match visual scores and have independent prognostic value in invasive breast cancers from the TCGA dataset.
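The concept-bottleneck idea above can be sketched in a few lines: rather than predicting a TILs score directly, the model first predicts interpretable intermediate concepts at different resolutions and derives the score from them. The region classes, array shapes, and scoring rule below are illustrative stand-ins, not the MuTILs architecture itself.

```python
import numpy as np

rng = np.random.default_rng(5)

# Low-resolution concept: per-patch probabilities of (stroma, tumor, other).
region_probs = rng.dirichlet([2.0, 2.0, 1.0], size=64)

# High-resolution concept: per-patch fraction of stromal area occupied by TILs.
til_fraction = rng.uniform(0.0, 0.6, size=64)

# Clinical-style stromal TILs score: TIL-occupied stromal area / stromal area.
# The score is computed from the concepts, so each factor can be inspected.
stroma = region_probs[:, 0]
score = float((stroma * til_fraction).sum() / stroma.sum())
```

Because the final score is a transparent function of the predicted concepts, a pathologist can audit which regions and TIL fractions drove it, which is the point of a concept bottleneck.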


2022 ◽  
Author(s):  
Andrew Jones ◽  
F. William Townes ◽  
Didong Li ◽  
Barbara E Engelhardt

Spatially resolved genomic technologies have allowed us to study the physical organization of cells and tissues, and promise an understanding of the local interactions between cells. However, it remains difficult to precisely align spatial observations across slices, samples, scales, individuals, and technologies. Here, we propose a probabilistic model that maps a set of spatially resolved genomics and histology slices onto a known or unknown common coordinate system, aligning the samples both spatially and in terms of their phenotypic readouts (e.g., gene or protein expression levels, cell density, open chromatin regions). Our method consists of a two-layer Gaussian process: the first layer maps the observed samples' spatial locations into a common coordinate system, and the second layer maps from the common coordinate system to the observed readouts. Our approach also allows slices to be mapped to a known template coordinate space if one exists. We show that our registration approach enables complex downstream spatially-aware analyses of spatial genomics data at multiple resolutions that are impossible or inaccurate with unaligned data, including analysis of variance, differential expression across the z-axis, and association tests across multiple data modalities.
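The two-layer structure can be illustrated with a toy example. This is a heavily simplified numpy sketch, not the paper's model: layer one (spatial alignment) is replaced by a least-squares affine fit to shared landmarks, and only layer two uses Gaussian-process regression.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "slice": 40 spots with 2D locations and one expression readout.
common = rng.uniform(0, 10, size=(40, 2))          # true common coordinates
theta = 0.3                                        # unknown rotation of the slice
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])
observed = common @ R.T + 0.5                      # observed (misaligned) coords
expr = np.sin(common[:, 0]) + 0.05 * rng.standard_normal(40)

# Layer 1 (simplified): recover the map into the common coordinate system by
# least squares on landmark spots; the paper learns this map as a GP instead.
X = np.hstack([observed, np.ones((40, 1))])
A, _, _, _ = np.linalg.lstsq(X, common, rcond=None)
aligned = X @ A

# Layer 2: GP regression (RBF kernel) from common coordinates to readouts.
def rbf(X1, X2, ell=1.0):
    d2 = ((X1[:, None, :] - X2[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / ell ** 2)

K = rbf(aligned, aligned) + 1e-4 * np.eye(40)      # jitter for stability
alpha = np.linalg.solve(K, expr)
grid = np.column_stack([np.linspace(0, 10, 50), np.full(50, 5.0)])
pred = rbf(grid, aligned) @ alpha                  # readouts on the common grid
```

Once all slices live in one coordinate system, the layer-two predictions can be compared across slices, which is what enables the downstream spatially-aware analyses.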


BMC Genomics ◽  
2022 ◽  
Vol 23 (1) ◽  
Author(s):  
Jason C. Hyun ◽  
Jonathan M. Monk ◽  
Bernhard O. Palsson

Abstract
Background: With the exponential growth of publicly available genome sequences, pangenome analyses have provided increasingly complete pictures of genetic diversity for many microbial species. However, relatively few studies have scaled beyond single pangenomes to compare global genetic diversity both within and across different species. We present here several methods for "comparative pangenomics" that can be used to contextualize multi-pangenome-scale genetic diversity with gene function for multiple species at multiple resolutions: pangenome shape, genes, sequence variants, and positions within variants.
Results: Applied to 12,676 genomes across 12 microbial pathogenic species, we observed several shared resolution-specific patterns of genetic diversity. First, pangenome openness is associated with species' phylogenetic placement. Second, relationships between gene function and frequency are conserved across species, with core genomes enriched for metabolic and ribosomal genes and accessory genomes for trafficking, secretion, and defense-associated genes. Third, genes in core genomes with the highest sequence diversity are functionally diverse. Finally, certain protein domains are consistently mutation-enriched across multiple species, especially among aminoacyl-tRNA synthetases, where the extent of a domain's mutation enrichment is strongly function-dependent.
Conclusions: These results illustrate the value of each resolution in uncovering distinct aspects of the relationship between genetic and functional diversity across multiple species. With the continued growth of the number of sequenced genomes, these methods will reveal additional universal patterns of genetic diversity at the pangenome scale.
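Two of the resolutions above (pangenome shape, gene frequency) are easy to sketch on a toy presence/absence matrix. The frequency thresholds and the Heaps'-law fit below are illustrative conventions, not the paper's exact methods.

```python
import numpy as np

# Toy presence/absence matrix: rows = genomes, columns = gene clusters.
rng = np.random.default_rng(1)
n_genomes, n_genes = 200, 50
freq_true = rng.uniform(0.05, 1.0, n_genes)
pam = (rng.uniform(size=(n_genomes, n_genes)) < freq_true).astype(int)

# Gene resolution: classify genes by observed frequency across genomes.
# Cutoffs (99 % core, 15 % rare) are common but study-dependent choices.
freq = pam.mean(axis=0)
core      = np.flatnonzero(freq >= 0.99)
accessory = np.flatnonzero((freq >= 0.15) & (freq < 0.99))
rare      = np.flatnonzero(freq < 0.15)

# Shape resolution: pangenome openness via a Heaps'-law fit (N ~ kappa * k^gamma)
# to the gene-accumulation curve; a larger gamma indicates a more open pangenome.
order = rng.permutation(n_genomes)
seen = np.cumsum(pam[order], axis=0) > 0
pan_size = seen.sum(axis=1)                   # pangenome size after k genomes
k = np.arange(1, n_genomes + 1)
gamma, log_kappa = np.polyfit(np.log(k), np.log(pan_size), 1)
```

Real pangenome pipelines first cluster genes into orthologous families to build the presence/absence matrix; here the matrix is simulated directly.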


2021 ◽  
Author(s):  
Huan Zhang ◽  
Zhao Zhang ◽  
Haijun Zhang ◽  
Yi Yang ◽  
Shuicheng Yan ◽  
...  

Deep learning-based image inpainting methods have greatly improved performance thanks to the powerful representation ability of deep networks. However, current deep inpainting methods still tend to produce unreasonable structures and blurry textures, implying that image inpainting remains a challenging topic due to the ill-posed nature of the task. To address these issues, we propose a novel deep multi-resolution learning-based progressive image inpainting method, termed MR-InpaintNet, which takes damaged images at different resolutions as input and fuses the multi-resolution features to repair the damaged images. The idea is motivated by the fact that images at different resolutions provide different levels of feature information: the low-resolution image provides strong semantic information, while the high-resolution image offers detailed texture information. The middle-resolution image reduces the gap between the low- and high-resolution images, further refining the inpainting result. To fuse and improve the multi-resolution features, a novel multi-resolution feature learning (MRFL) process is designed, which consists of a multi-resolution feature fusion (MRFF) module, an adaptive feature enhancement (AFE) module, and a memory-enhanced mechanism (MEM) module for information preservation. The refined multi-resolution features then contain both rich semantic information and detailed texture information from multiple resolutions. A decoder processes the refined multi-resolution features to obtain the recovered image. Extensive experiments on the Paris Street View, Places2, and CelebA-HQ datasets demonstrate that the proposed MR-InpaintNet effectively recovers textures and structures, and performs favorably against state-of-the-art methods.
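The core multi-resolution intuition (coarse semantics plus fine texture) can be sketched without any learned components. The fixed fusion weights below are a hypothetical stand-in for the learned MRFF/AFE modules.

```python
import numpy as np

def downsample(img, factor):
    """Average-pool by an integer factor (a stand-in for a resolution branch)."""
    h, w = img.shape
    return img[:h - h % factor, :w - w % factor] \
              .reshape(h // factor, factor, w // factor, factor).mean(axis=(1, 3))

def upsample(img, factor):
    """Nearest-neighbour upsampling back to the fusion resolution."""
    return np.repeat(np.repeat(img, factor, axis=0), factor, axis=1)

rng = np.random.default_rng(2)
damaged = rng.uniform(size=(32, 32))          # toy "damaged image"

# Branches at full, half, and quarter resolution, mirroring the paper's setup:
low  = upsample(downsample(damaged, 4), 4)    # coarse, semantic-level content
mid  = upsample(downsample(damaged, 2), 2)    # bridges low and high resolution
high = damaged                                # fine texture detail

# Hypothetical fusion: fixed convex weights replace the learned fusion modules.
weights = np.array([0.2, 0.3, 0.5])
fused = weights[0] * low + weights[1] * mid + weights[2] * high
```

In the actual network, each branch is a feature map rather than a raw image, and the fusion weights are learned per channel; the structure of the computation is the same.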


Robotics ◽  
2021 ◽  
Vol 10 (3) ◽  
pp. 89
Author(s):  
Salvador Pacheco-Gutierrez ◽  
Hanlin Niu ◽  
Ipek Caliskanelli ◽  
Robert Skilton

In robotic teleoperation, real-time knowledge of the state of the remote environment is paramount. Advances in highly accurate 3D cameras able to provide high-quality point clouds appear to be a feasible solution for generating live, up-to-date virtual environments. Unfortunately, the exceptional accuracy and high density of these data impose a communication burden, requiring large bandwidth and affecting setups where the local and remote systems are geographically distant. This paper presents a multiple level-of-detail (LoD) compression strategy for 3D data based on tree-like codification structures capable of compressing a single data frame at multiple resolutions using dynamically configured parameters. The level of compression (resolution) of objects is prioritised based on: (i) placement in the scene; and (ii) the type of object. For the former, classical point cloud fitting and segmentation techniques are implemented; for the latter, user-defined prioritisation is considered. The results are compared against a single-LoD (whole-scene) compression technique previously proposed by the authors, and show a considerable improvement in transmitted data size and update frame rate while maintaining low distortion after decompression.
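The priority-driven LoD idea can be sketched with simple voxel-grid downsampling, a flat stand-in for the tree-like (octree-style) structures the paper uses: high-priority objects keep a fine voxel size, low-priority background a coarse one.

```python
import numpy as np

def voxel_downsample(points, voxel):
    """Keep one representative point per occupied voxel (one LoD level)."""
    keys = np.floor(points / voxel).astype(int)
    _, idx = np.unique(keys, axis=0, return_index=True)
    return points[np.sort(idx)]

rng = np.random.default_rng(3)
# Hypothetical scene split into a prioritised object and background clutter.
robot_arm  = rng.uniform(0, 1, size=(5000, 3))    # high-priority object (1 m cube)
background = rng.uniform(0, 10, size=(5000, 3))   # low-priority scene (10 m cube)

# Priority-driven LoD: finer voxels (more surviving points) for the object.
arm_lod = voxel_downsample(robot_arm, voxel=0.05)
bg_lod  = voxel_downsample(background, voxel=1.0)
frame   = np.vstack([arm_lod, bg_lod])            # one compressed frame
ratio   = frame.shape[0] / (robot_arm.shape[0] + background.shape[0])
```

A tree-based codification additionally lets the sender stop at a chosen tree depth per region, so the same structure serves every LoD; the voxel sizes here play the role of those depths.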


2021 ◽  
Author(s):  
Adam R Pines ◽  
Bart Larsen ◽  
Zaixu Cui ◽  
Valerie J Sydnor ◽  
Maxwell A Bertolero ◽  
...  

The brain is organized into networks at multiple resolutions, or scales, yet studies of functional network development typically focus on a single scale. Here, we derived personalized functional networks across 29 scales in a large sample of youths (n=693, ages 8-23 years) to identify multi-scale patterns of network re-organization related to neurocognitive development. We found that developmental shifts in inter-network coupling systematically adhered to and strengthened a functional hierarchy of cortical organization. Furthermore, we observed that scale-dependent effects were present in lower-order, unimodal networks, but not higher-order, transmodal networks. Finally, we found that network maturation had clear behavioral relevance: the development of coupling in unimodal and transmodal networks dissociably mediated the emergence of executive function. These results delineate maturation of multi-scale brain networks, which varies according to a functional hierarchy and impacts cognitive development.
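Inter-network coupling at a given scale can be illustrated with toy time series: average regional signals within each network, then correlate the network-level series. The random partitions below are a stand-in for the study's personalized partitions across 29 scales.

```python
import numpy as np

rng = np.random.default_rng(6)
n_regions, n_time = 100, 200
ts = rng.standard_normal((n_regions, n_time))   # toy regional BOLD time series

def network_coupling(ts, labels):
    """Mean pairwise correlation between network-averaged time series at one
    scale; the number of unique labels is the number of networks (the scale)."""
    nets = np.array([ts[labels == k].mean(axis=0) for k in np.unique(labels)])
    corr = np.corrcoef(nets)
    iu = np.triu_indices_from(corr, k=1)
    return corr[iu].mean()

# The same regions partitioned at a coarse and a fine scale (7 vs 17 networks;
# labels are random here, whereas the study derives personalized partitions).
coupling_coarse = network_coupling(ts, rng.integers(0, 7, n_regions))
coupling_fine   = network_coupling(ts, rng.integers(0, 17, n_regions))
```

Tracking such coupling values across scales and across ages is, in spirit, how scale-dependent developmental effects can be quantified.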


2021 ◽  
Author(s):  
Joachim Meyer ◽  
McKenzie Skiles ◽  
Jeffrey Deems ◽  
Kat Bormann ◽  
David Shean

Abstract. Time-series mapping of water held as snow in the mountains at global scales remains an unsolved challenge. In a few locations, lidar-based airborne campaigns have provided valuable data sets that capture snow distribution in near real-time over multiple seasons. Here, an alternative method is presented to map snow depth and quantify snow volume using aerial images and Structure from Motion (SfM) photogrammetry over an alpine watershed (300 km²). The results were compared at multiple resolutions to lidar-derived snow depth measurements from the Airborne Snow Observatory (ASO), collected simultaneously. Where snow was mapped by both ASO and SfM, the depths compared well, with a mean difference between −0.02 m and 0.03 m, an NMAD of 0.22 m, and close snow volume agreement (±5 %). ASO mapped a larger snow area relative to SfM, with SfM missing ~14 % of total snow volume as a result. Analyzing the differences shows that challenges for SfM photogrammetry remain in vegetated areas, over shallow snow (< 1 m), and on slope angles over 50 degrees. Our results indicate that capturing large-scale snow depth and volume with airborne images and photogrammetry could be an additional viable resource for understanding and monitoring snow water resources in certain environments.
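The underlying computation is surface differencing: snow depth is the snow-on surface minus the snow-free surface, and volume is depth summed over cell area. The toy grids and noise levels below are invented for illustration.

```python
import numpy as np

# Toy DEMs on a 1 m grid: snow-free surface and a snow-on (SfM-style) surface.
rng = np.random.default_rng(4)
bare = rng.uniform(3000, 3010, size=(100, 100))            # snow-off elevation (m)
depth_true = np.clip(rng.normal(1.0, 0.5, size=(100, 100)), 0, None)
snow_on = bare + depth_true + rng.normal(0, 0.02, size=(100, 100))

# Snow depth = snow-on minus snow-off surface; negative depths clipped to zero.
depth = np.clip(snow_on - bare, 0, None)

cell_area = 1.0 * 1.0                                      # m^2 per 1 m grid cell
volume_m3 = depth.sum() * cell_area                        # snow volume over grid

# Comparing at a coarser resolution: block-average depth onto a 10 m grid.
coarse = depth.reshape(10, 10, 10, 10).mean(axis=(1, 3))
volume_coarse = coarse.sum() * (10.0 * 10.0)
```

Block-averaging preserves total volume exactly, which is why depth comparisons across resolutions (as in the ASO-vs-SfM analysis) can disagree locally while volumes still agree closely.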


2021 ◽  
Vol 12 ◽  
Author(s):  
Guy Farjon ◽  
Yotam Itzhaky ◽  
Faina Khoroshevsky ◽  
Aharon Bar-Hillel

Leaf counting in potted plants is an important building block for estimating their health status and growth rate, and has received increasing attention from the visual phenotyping community in recent years. Two novel deep learning approaches for visual leaf counting tasks are proposed, evaluated, and compared in this study. The first method performs counting via direct regression, using multiple image representation resolutions to attend to leaves at multiple scales; the leaf counts from the multiple resolutions are fused using a novel technique to obtain the final count. The second method is a detection-with-regression model that counts the leaves after locating leaf center points and aggregating them. The algorithms are evaluated on the Leaf Counting Challenge (LCC) dataset of the Computer Vision Problems in Plant Phenotyping (CVPPP) conference 2017, and on a new, larger dataset of banana leaves. Experimental results show that both methods outperform previous CVPPP LCC challenge winners on the challenge evaluation metrics, placing this study as the state of the art in leaf counting. The detection-with-regression method is found to be preferable for larger datasets when center-dot annotation is available, and it also enables leaf center localization with a 0.94 average precision. When such annotations are not available, the multiple-scale regression model is a good option.
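The multi-resolution fusion step of the first method can be sketched as combining per-resolution count predictions into one final count. The confidence weighting below (down-weighting predictions far from the per-image median) is purely illustrative; the paper's fusion technique is more involved.

```python
import numpy as np

# Hypothetical regression outputs for 4 plant images: each row is one image,
# each column the count predicted from one input resolution.
counts = np.array([[ 6.2,  5.8,  6.5],
                   [11.9, 12.4, 12.1],
                   [ 3.1,  2.7,  3.0],
                   [ 8.6,  9.2,  8.9]])

# Weight each resolution's prediction by its agreement with the median,
# then take the weighted average and round to an integer leaf count.
spread = np.abs(counts - np.median(counts, axis=1, keepdims=True))
weights = np.exp(-spread)
weights /= weights.sum(axis=1, keepdims=True)
fused = np.rint((weights * counts).sum(axis=1)).astype(int)
# fused -> [6, 12, 3, 9]
```

Fusing after prediction, rather than averaging input images, lets each resolution specialize (small leaves at high resolution, whole-plant context at low resolution) before the counts are reconciled.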

