map estimation
Recently Published Documents


TOTAL DOCUMENTS: 534 (five years: 99)
H-INDEX: 26 (five years: 3)

2021 · Vol 38 (5) · pp. 1485-1493
Author(s): Yasasvy Tadepalli, Meenakshi Kollati, Swaraja Kuraparthi, Padmavathi Kora

Monocular depth estimation is a hot research topic in autonomous car driving. In the proposed work, deep convolutional neural networks (DCNNs) comprising an encoder and a decoder, with transfer learning, are exploited for monocular depth map estimation from two-dimensional images. CNN features extracted in the initial stages are later upsampled using a sequence of bilinear upsampling and convolution layers to reconstruct the depth map. The encoder forms the feature extraction part, and the decoder forms the image reconstruction part. EfficientNet-B0, a recent architecture, is used with pretrained weights as the encoder; it achieves higher efficiency with fewer model parameters than state-of-the-art pretrained networks. EfficientNet-B0 is compared with two other pretrained networks, DenseNet-121 and ResNet-50. Each of the three models is used in the encoding stage for feature extraction, followed by bilinear upsampling in the decoder. Monocular depth estimation is an ill-posed problem and is thus treated as a regression problem, so the metrics used in the proposed work are the F1-score, Jaccard score, and mean absolute error (MAE) between the original and reconstructed images. The results show that EfficientNet-B0 outperforms the DenseNet-121 and ResNet-50 models in validation loss, F1-score, and Jaccard score.
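The decoder described above rebuilds the depth map with repeated bilinear upsampling of the encoder features. As a minimal illustration of what that operation computes (not the paper's implementation, which would use a framework's built-in layer), a 2× bilinear upsampling of a single-channel feature map can be sketched in NumPy:

```python
import numpy as np

def bilinear_upsample_2x(feat):
    """Upsample a (H, W) feature map by 2x with bilinear interpolation
    (align-corners convention), the core operation of the decoder stages."""
    h, w = feat.shape
    out_h, out_w = 2 * h, 2 * w
    # source coordinates for every output pixel
    ys = np.linspace(0, h - 1, out_h)
    xs = np.linspace(0, w - 1, out_w)
    y0 = np.floor(ys).astype(int); y1 = np.minimum(y0 + 1, h - 1)
    x0 = np.floor(xs).astype(int); x1 = np.minimum(x0 + 1, w - 1)
    wy = (ys - y0)[:, None]                      # vertical weights, (out_h, 1)
    wx = (xs - x0)[None, :]                      # horizontal weights, (1, out_w)
    top = feat[np.ix_(y0, x0)] * (1 - wx) + feat[np.ix_(y0, x1)] * wx
    bot = feat[np.ix_(y1, x0)] * (1 - wx) + feat[np.ix_(y1, x1)] * wx
    return top * (1 - wy) + bot * wy
```

In practice this is a single call to a framework layer (e.g., an `UpSampling2D(interpolation="bilinear")` layer in Keras); the sketch only makes the interpolation explicit.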


2021
Author(s): Nicholas J Sisco, Ping Wang, Ashley M Stokes, Richard D Dortch

Background: Magnetic resonance imaging (MRI) is used extensively to quantify myelin content; however, computational bottlenecks remain challenging for advanced imaging techniques in clinical settings. We present a fast, open-source toolkit for processing quantitative magnetization transfer data derived from selective inversion recovery (SIR) acquisitions that allows parameter map estimation, including the myelin-sensitive macromolecular pool size ratio (PSR). Significant progress has been made in reducing SIR acquisition times to improve clinical feasibility. However, parameter map estimation from the resulting data remains computationally expensive. To overcome this limitation, we developed a computationally efficient, open-source toolkit implemented in the Julia language. Methods: To test the accuracy of this toolkit, we simulated SIR images with varying PSR and spin-lattice relaxation time of the free water pool (R1f) over a physiologically meaningful scale, from 5 to 20% and 0.5 to 1.5 s-1, respectively. Rician noise was then added, and the parameter maps were estimated using our Julia toolkit. Probability density histograms and Lin's concordance correlation coefficient (LCCC) were used to assess the accuracy and precision of the fits to the known simulation data. To further mimic biological tissue, we generated five cross-linked bovine serum albumin (BSA) phantoms with concentrations ranging from 1.25 to 20%. The phantoms were imaged at 3T using SIR, and the data were fit to estimate PSR and R1f. Similarly, a healthy volunteer was imaged at 3T, and SIR parameter maps were estimated to demonstrate the reduced computational time in a real-world clinical example. Results: Estimated SIR parameter maps from our Julia toolkit agreed with the simulated values (LCCC > 0.98). The toolkit was further validated using the BSA phantoms and a whole-brain scan at 3T. In both cases, SIR parameter estimates were consistent with published values obtained using MATLAB. However, compared to earlier work using MATLAB, our Julia toolkit provided an approximately 20-fold reduction in computational time. Conclusions: We developed a fast, open-source toolkit for rapid and accurate SIR MRI using Julia. The reduction in computational cost should make SIR parameters accessible in clinical settings.
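Lin's concordance correlation coefficient used to validate the fits has a closed form: LCCC = 2·cov(x, y) / (σx² + σy² + (μx − μy)²), reaching 1 only when estimates match the true values on the identity line. A minimal NumPy sketch (illustrative only, not part of the authors' Julia toolkit):

```python
import numpy as np

def lin_ccc(x, y):
    """Lin's concordance correlation coefficient between estimated (x)
    and true (y) parameter values; 1.0 means perfect agreement."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()            # population variances
    cov = ((x - mx) * (y - my)).mean()   # population covariance
    return 2 * cov / (vx + vy + (mx - my) ** 2)
```

Unlike Pearson's r, the LCCC penalizes systematic bias: a constant offset between estimated and true maps lowers the score even when the correlation is perfect.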


2021 · Vol 8
Author(s): Qi Zhao, Ziqiang Zheng, Huimin Zeng, Zhibin Yu, Haiyong Zheng, ...

Underwater depth prediction plays an important role in underwater vision research. Because of the complex underwater environment, it is extremely difficult and expensive to obtain underwater datasets with reliable depth annotation; thus, underwater depth map estimation in a data-driven manner remains a challenging task. To tackle this problem, we propose an end-to-end system with two modules, for underwater image synthesis and underwater depth map estimation, respectively. The former translates hazy in-air RGB-D images into multi-style, realistic synthetic underwater images while retaining the objects and the structural information of the input images. We then construct a semi-real RGB-D underwater dataset from the synthesized underwater images and the original corresponding depth maps, and conduct supervised learning to perform depth estimation on these pseudo-paired underwater RGB-D images. Comprehensive experiments demonstrate that the proposed method generates multiple realistic underwater images with high fidelity, which can be used to enhance the performance of monocular underwater depth estimation, and that the trained depth estimation model can be applied to real underwater images. We will release our code and experimental settings at https://github.com/ZHAOQIII/UW_depth.
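Supervised depth estimation trained on such pseudo-paired data is conventionally evaluated with standard monocular-depth metrics. As a sketch of a common evaluation protocol (the paper's exact metric set is not specified here), the absolute relative error, RMSE, and δ < 1.25 accuracy can be computed as:

```python
import numpy as np

def depth_metrics(pred, gt):
    """Common monocular depth-estimation metrics over flattened,
    strictly positive predicted and ground-truth depth arrays."""
    pred, gt = np.asarray(pred, float), np.asarray(gt, float)
    thresh = np.maximum(pred / gt, gt / pred)     # per-pixel ratio error
    return {
        "abs_rel": float(np.mean(np.abs(pred - gt) / gt)),
        "rmse": float(np.sqrt(np.mean((pred - gt) ** 2))),
        "delta1": float(np.mean(thresh < 1.25)),  # fraction within 25%
    }
```

The δ < 1.25 accuracy is scale-sensitive in ratio terms: a prediction uniformly 10% too deep still scores 1.0 on delta1 but is penalized by abs_rel and RMSE.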


Author(s): Sayantan Bhattacharya, Ilias Bilionis, Pavlos Vlachos

Non-invasive flow velocity measurement techniques such as volumetric particle image velocimetry (PIV) (Elsinga et al., 2006; Adrian and Westerweel, 2011) and particle tracking velocimetry (PTV) (Maas, Gruen and Papantoniou, 1993) use multi-camera projections of tracer particle motion to resolve three-dimensional flow structures. A key step in the measurement chain is reconstructing the 3D intensity field (PIV) or the particle positions (PTV) from the projected images and known camera correspondences. Because of the limited number of camera views, the projected particle images are non-unique, making the inverse problem of volumetric reconstruction underdetermined. Moreover, higher particle concentrations (>0.05 particles per pixel) increase the number of erroneous "ghost" particle reconstructions and decrease reconstruction accuracy. Current reconstruction methods use either a voxel-based intensity representation (e.g., MART (Elsinga et al., 2006)) or a particle-based approach (e.g., IPR (Wieneke, 2013)) for 3D position estimation. The former is computationally intensive and has lower positional accuracy because the reconstructed particles are stretched along the line of sight; the latter compromises triangulation accuracy (Maas, Gruen and Papantoniou, 1993) due to overlapping particle images at higher concentrations. Thus, each method has its own challenges, and errors in 3D reconstruction significantly affect the accuracy of the velocity measurement. Although maximum a posteriori (MAP) estimation has previously been developed for computed tomography data (Levitan and Herman, 1987; Bouman and Sauer, 1996), it has not been explored for PIV/PTV 3D reconstruction. Here, we use a MAP estimation framework to model and solve the inverse problem, optimizing the cost function with a stochastic gradient ascent (SGA) algorithm. Such an optimization can converge to a better local maximum and can use small image patches for efficient iterations.
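The MAP-with-SGA idea can be illustrated on a toy linear inverse problem y = Ax + noise with a Gaussian likelihood and a Gaussian prior. This is a simplified analogue of the reconstruction described above, not the authors' actual model: the forward operator, noise model, and prior are all stand-ins, and mini-batches of projection rows play the role of image patches.

```python
import numpy as np

def map_sga(A, y, sigma2=1.0, lam=0.01, lr=0.005, iters=4000, batch=10, seed=0):
    """MAP estimate of x in y = A x + noise via stochastic gradient ascent
    on the log-posterior: log p(x|y) = -||Ax - y||^2 / (2*sigma2) - lam*||x||^2
    (up to a constant). Each step uses a random row subset ("patch")."""
    rng = np.random.default_rng(seed)
    m, n = A.shape
    x = np.zeros(n)
    for _ in range(iters):
        idx = rng.choice(m, size=min(batch, m), replace=False)
        Ab, yb = A[idx], y[idx]
        # batch gradient of the log-likelihood, rescaled to the full data,
        # plus the gradient of the Gaussian log-prior
        grad = (m / len(idx)) * Ab.T @ (yb - Ab @ x) / sigma2 - 2 * lam * x
        x += lr * grad                 # ascent step on the log-posterior
    return x
```

With a weak prior (small `lam`) and low noise, the MAP estimate approaches the least-squares solution; the stochastic row subsets keep each iteration cheap, mirroring the patch-wise efficiency argument in the abstract.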

