Pyramidal search of maximum coherence direction for biomedical image interpolation

Author(s):  
Jingdan Zhang ◽  
Yongmei Wang ◽  
Baining Guo
2019 ◽  
Vol 2019 ◽  
pp. 1-14
Author(s):  
Berkay Kanberoglu ◽  
Dhritiman Das ◽  
Priya Nair ◽  
Pavan Turaga ◽  
David Frakes

Three-dimensional (3D) biomedical image sets are often acquired with in-plane pixel spacings that are far smaller than the out-of-plane spacings between images. The resulting anisotropy, which can be detrimental in many applications, can be reduced using image interpolation. Optical-flow and other registration-based interpolators have proven useful for such interpolation in the past. When the acquired images are composed of signals that describe the flow velocity of fluids, additional information is available to guide the interpolation process. In this paper, we present an optical-flow-based framework for image interpolation that also minimizes the resultant divergence in the interpolated data.
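The two ingredients of the framework above, warping adjacent slices along a displacement field and measuring divergence of a velocity field, can be sketched as follows. This is a minimal illustration with NumPy only: the flow field is assumed to be precomputed by some registration step, the warp is nearest-neighbor for brevity, and the function names (`divergence_2d`, `interpolate_slices`) are placeholders, not the authors' implementation.

```python
import numpy as np

def divergence_2d(vx, vy, dx=1.0, dy=1.0):
    """Divergence of a 2D velocity field: d(vx)/dx + d(vy)/dy.
    A divergence penalty like this could steer the interpolator
    toward physically plausible (near-incompressible) flow."""
    dvx_dx = np.gradient(vx, dx, axis=1)
    dvy_dy = np.gradient(vy, dy, axis=0)
    return dvx_dx + dvy_dy

def interpolate_slices(a, b, flow, t=0.5):
    """Registration-based interpolation between slices a and b:
    blend a warped forward and b warped backward along a given
    per-pixel displacement field (shape (H, W, 2), x then y)."""
    h, w = a.shape
    ys, xs = np.mgrid[0:h, 0:w]
    # sample a a fraction t of the way along the flow, b the rest
    xf = np.clip(np.round(xs - t * flow[..., 0]).astype(int), 0, w - 1)
    yf = np.clip(np.round(ys - t * flow[..., 1]).astype(int), 0, h - 1)
    xb = np.clip(np.round(xs + (1 - t) * flow[..., 0]).astype(int), 0, w - 1)
    yb = np.clip(np.round(ys + (1 - t) * flow[..., 1]).astype(int), 0, h - 1)
    return (1 - t) * a[yf, xf] + t * b[yb, xb]
```

With zero flow this degenerates to plain linear blending of the two slices; the registration step is what lets the intermediate slice follow anatomy rather than fade between positions.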


Author(s):  
Wilian Fiirst ◽  
José Montero ◽  
Roger Resmini ◽  
Anselmo Antunes Montenegro ◽  
Trueman McHenry ◽  
...  

2021 ◽  
Vol 68 ◽  
pp. 102691
Author(s):  
Jinghua Xu ◽  
Mingzhe Tao ◽  
Shuyou Zhang ◽  
Xue Jiang ◽  
Jianrong Tan

Author(s):  
Donya Khaledyan ◽  
Abdolah Amirany ◽  
Kian Jafari ◽  
Mohammad Hossein Moaiyeri ◽  
Abolfazl Zargari Khuzani ◽  
...  

2021 ◽  
Vol 10 (1) ◽  
Author(s):  
Xinyang Li ◽  
Guoxun Zhang ◽  
Hui Qiao ◽  
Feng Bao ◽  
Yue Deng ◽  
...  

Abstract
The development of deep learning and open access to a substantial collection of imaging data together provide a potential solution for computational image transformation, which is gradually changing the landscape of optical imaging and biomedical research. However, current implementations of deep learning usually operate in a supervised manner, and their reliance on laborious and error-prone data annotation procedures remains a barrier to more general applicability. Here, we propose an unsupervised image transformation to facilitate the utilization of deep learning for optical microscopy, even in some cases in which supervised models cannot be applied. Through the introduction of a saliency constraint, the unsupervised model, named Unsupervised content-preserving Transformation for Optical Microscopy (UTOM), can learn the mapping between two image domains without requiring paired training data while avoiding distortions of the image content. UTOM shows promising performance in a wide range of biomedical image transformation tasks, including in silico histological staining, fluorescence image restoration, and virtual fluorescence labeling. Quantitative evaluations reveal that UTOM achieves stable and high-fidelity image transformations across different imaging conditions and modalities. We anticipate that our framework will encourage a paradigm shift in training neural networks and enable more applications of artificial intelligence in biomedical imaging.
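The saliency constraint described in the abstract can be illustrated with a toy penalty: extract a foreground mask from the input and from its translation, and penalize disagreement between them, so the translator may restyle the image but not move or delete content. This is a minimal sketch of the idea, not UTOM's actual loss; the mean-threshold masking and the function names are assumptions for illustration.

```python
import numpy as np

def saliency_mask(img, thresh=None):
    """Crude binary foreground mask; the threshold defaults to the
    image mean, a stand-in for whatever adaptive saliency extraction
    a real unsupervised-translation model would use."""
    if thresh is None:
        thresh = img.mean()
    return (img > thresh).astype(np.float32)

def saliency_constraint_loss(src, translated):
    """Content-preservation penalty: fraction of pixels whose
    foreground/background assignment changed under translation.
    Zero when the translation keeps content in place."""
    m_src = saliency_mask(src)
    m_out = saliency_mask(translated)
    return float(np.mean(np.abs(m_src - m_out)))
```

In a CycleGAN-style training loop, a term like this would be added to the adversarial and cycle-consistency losses, discouraging the generator from hallucinating or erasing structures even though no paired ground truth exists.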

