image planes
Recently Published Documents


TOTAL DOCUMENTS

97
(FIVE YEARS 11)

H-INDEX

16
(FIVE YEARS 1)

Author(s):  
Abhirup Banerjee ◽  
Julià Camps ◽  
Ernesto Zacur ◽  
Christopher M. Andrews ◽  
Yoram Rudy ◽  
...  

Cardiac magnetic resonance (CMR) imaging is a valuable modality in the diagnosis and characterization of cardiovascular diseases, since it can identify abnormalities in the structure and function of the myocardium non-invasively and without the need for ionizing radiation. However, in clinical practice it is commonly acquired as a collection of separate, independent 2D image planes, which limits its accuracy in 3D analysis. This paper presents a fully automated pipeline for generating patient-specific 3D biventricular heart models from cine magnetic resonance (MR) slices. Our pipeline automatically selects the relevant cine MR images, segments them using a deep learning-based method to extract the heart contours, and aligns the contours in 3D space, correcting possible misalignments due to breathing or subject motion, first using the intensity and contour information from the cine data and then with the help of a statistical shape model. Finally, the sparse 3D representation of the contours is used to generate a smooth 3D biventricular mesh. The computational pipeline is applied and evaluated on a CMR dataset of 20 healthy subjects. Our results show an average reduction of misalignment artefacts from 1.82 ± 1.60 mm to 0.72 ± 0.73 mm over the 20 subjects, measured as distance from the final reconstructed mesh. The high-resolution 3D biventricular meshes obtained with our computational pipeline are used for simulations of electrical activation patterns, showing agreement with non-invasive electrocardiographic imaging. The automatic methodologies presented here for patient-specific MR imaging-based 3D biventricular representations contribute to the efficient realization of precision medicine, enabling the enhanced interpretability of clinical data, the digital twin vision through patient-specific image-based modelling and simulation, and augmented reality applications.
This article is part of the theme issue ‘Advanced computation in cardiovascular physiology: new challenges and opportunities’.
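The in-plane misalignment correction described above can be illustrated with a minimal sketch, in which each short-axis contour is rigidly translated so that its centroid falls on the ventricular long axis. This is only a crude stand-in for the paper's intensity- and shape-model-based alignment; the function names and interfaces are hypothetical:

```python
import numpy as np

def correct_slice_shifts(slices, axis_point, axis_dir):
    """Translate each contour rigidly so its centroid lies on the long axis.

    slices: list of (N, 3) arrays of contour points, one per short-axis slice.
    axis_point, axis_dir: a point on, and the direction of, the long axis.
    """
    axis_dir = axis_dir / np.linalg.norm(axis_dir)
    corrected = []
    for pts in slices:
        c = pts.mean(axis=0)
        # Project the centroid onto the long axis; the resulting shift is
        # perpendicular to the axis, so the contour shape is preserved.
        target = axis_point + np.dot(c - axis_point, axis_dir) * axis_dir
        corrected.append(pts + (target - c))
    return corrected
```

Because the shift is perpendicular to the long axis by construction, only the breathing-induced in-plane offset is removed; the contour geometry within each slice is untouched.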


Sensors ◽  
2021 ◽  
Vol 21 (7) ◽  
pp. 2415
Author(s):  
Hashim Yasin ◽  
Björn Krüger

We propose an efficient and novel architecture for 3D articulated human pose retrieval and reconstruction from 2D landmarks extracted from a 2D synthetic image, an annotated 2D image, an in-the-wild real RGB image, or even a hand-drawn sketch. Given 2D joint positions in a single image, we devise a data-driven framework to infer the corresponding 3D human pose. To this end, we first normalize 3D human poses from a Motion Capture (MoCap) dataset by eliminating translation, orientation, and skeleton-size discrepancies, and then build a knowledge base by projecting a subset of joints of the normalized 3D poses onto 2D image planes using a variety of virtual cameras. With this approach, we not only transform the 3D pose space into a normalized 2D pose space but also resolve the 2D-3D cross-domain retrieval task efficiently. The proposed architecture searches for poses from the MoCap dataset that are close to a given 2D query pose in a feature space defined over specific joint sets. These retrieved poses are then used to construct a weak-perspective camera and a final 3D posture under the camera model that minimizes the reconstruction error. To estimate the unknown camera parameters, we introduce a nonlinear, two-fold method: we exploit the retrieved similar poses and the viewing directions at which the MoCap dataset was sampled to minimize the projection error. Finally, we evaluate our approach thoroughly on a large number of heterogeneous 2D examples generated synthetically, 2D images with ground truth, a variety of real in-the-wild internet images, and a proof of concept using 2D hand-drawn sketches of human poses. We conduct a pool of experiments to perform a quantitative study on the PARSE dataset. We also show that the proposed system yields competitive, convincing results in comparison to other state-of-the-art methods.
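The retrieval step can be sketched as a nearest-neighbour search in a normalized 2D pose space. The normalization below removes only translation and scale (the paper also normalizes orientation and skeleton size), and all names are illustrative assumptions rather than the authors' implementation:

```python
import numpy as np

def normalize_2d(pose):
    """Remove translation and scale from a (J, 2) array of joint positions."""
    p = pose - pose.mean(axis=0)
    n = np.linalg.norm(p)
    return p / n if n > 0 else p

def retrieve_nearest(query_2d, kb_2d, poses_3d, k=3):
    """Return the k 3D MoCap poses whose 2D projections lie closest to the query.

    kb_2d: knowledge base of (J, 2) projections of the poses in poses_3d,
           e.g. one projection per virtual camera.
    """
    q = normalize_2d(query_2d).ravel()
    feats = np.stack([normalize_2d(p).ravel() for p in kb_2d])
    dists = np.linalg.norm(feats - q, axis=1)
    idx = np.argsort(dists)[:k]
    return [poses_3d[i] for i in idx], dists[idx]
```

The retrieved candidates would then feed the weak-perspective camera fit described in the abstract.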


F1000Research ◽  
2021 ◽  
Vol 9 ◽  
pp. 1380
Author(s):  
Romain Guiet ◽  
Olivier Burri ◽  
Nicolas Chiaruttini ◽  
Olivier Hagens ◽  
Arne Seitz

The number of grey values that can be displayed on monitors and processed by the human eye is smaller than the dynamic range of image-based sensors. This makes the visualization of such data a challenge, especially with specimens where small dim structures are as important as large bright ones, or whenever variations in intensity, such as non-homogeneous staining efficiencies or limited light penetration depth, become an issue. While simple intensity display mappings are easily possible, they fail to provide a one-shot view that displays objects of widely varying intensities. To facilitate the visualization-based analysis of large volumetric datasets, we developed an easy-to-use ImageJ plugin enabling the compressed display of features spanning several orders of magnitude in intensity. The Display Enhancement for Visual Inspection of Large Stacks plugin (DEVILS) homogenizes the intensities by using a combination of local and global pixel operations, allowing high and low intensities to be visible simultaneously to the human eye. The plugin is based on a single, intuitively understandable parameter, features a preview mode, and uses parallelization to process multiple image planes. As output, the plugin can produce a BigDataViewer-compatible dataset for fast visualization. We demonstrate the utility of the plugin for large volumetric image data.
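The local-plus-global idea can be sketched as follows. This is not the plugin's actual implementation, only a rough Python analogue in which each pixel is divided by a Gaussian estimate of its local background (local operation) before a global rescale to 8 bits:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def devils_like(plane, sigma=10.0, eps=1.0):
    """Compress dynamic range: divide each pixel by a smoothed local
    background estimate, then rescale globally to 8-bit.
    A sketch of the local+global idea, not the DEVILS code itself."""
    local = plane / (gaussian_filter(plane.astype(float), sigma) + eps)
    lo, hi = local.min(), local.max()
    if hi > lo:
        local = (local - lo) / (hi - lo)
    return (local * 255).astype(np.uint8)
```

Under a naive global rescale, a dim structure four orders of magnitude fainter than the brightest object maps to grey value 0; after the local division it remains visible.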


2021 ◽  
Vol 18 (2) ◽  
pp. 172988142199654
Author(s):  
Joohyung Kim ◽  
Janghun Hyeon ◽  
Nakju Doh

As interest in image-based rendering increases, the need for multiview inpainting is emerging. Despite rapid progress in deep learning-based single-image inpainting, such methods impose no constraint on color consistency across multiple inpainted images. We target object removal in large-scale indoor spaces and propose a novel multiview inpainting pipeline that achieves color consistency and boundary consistency across multiple images. The first step of the pipeline is to create color prior information on the masks by coloring point clouds from multiple images and projecting the colored point clouds onto the image planes. Next, a generative inpainting network accepts a masked image, a color prior image, an imperfect guideline, and two different masks as inputs and yields the refined guideline and inpainted image as outputs. The color prior and guideline inputs ensure color and boundary consistency across multiple images. We validate our pipeline on real indoor datasets both quantitatively, using consistency distance and similarity distance (metrics we define for comparing multiview inpainting results), and qualitatively.
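The color-prior step, projecting a colored point cloud onto an image plane, can be sketched with a standard pinhole model. The intrinsics K, extrinsics (R, t), and nearest-pixel splatting below are generic assumptions, not the paper's exact formulation:

```python
import numpy as np

def project_color_prior(points, colors, K, R, t, hw):
    """Splat a colored point cloud onto one image plane with a pinhole camera.

    points: (N, 3) world coordinates; colors: (N, 3) RGB values;
    K: 3x3 intrinsics; R, t: world-to-camera rotation and translation;
    hw: (height, width) of the output prior image.
    """
    h, w = hw
    prior = np.zeros((h, w, 3), dtype=np.float32)
    cam = R @ points.T + t[:, None]          # world -> camera coordinates
    z = cam[2]
    valid = z > 0                            # keep points in front of the camera
    uv = K @ cam[:, valid]
    u = np.round(uv[0] / uv[2]).astype(int)  # perspective division
    v = np.round(uv[1] / uv[2]).astype(int)
    inb = (u >= 0) & (u < w) & (v >= 0) & (v < h)
    prior[v[inb], u[inb]] = colors[valid][inb]
    return prior
```

A full pipeline would also handle occlusion (z-buffering) when several points fall on the same pixel.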

Vision ◽  
2020 ◽  
Vol 4 (1) ◽  
pp. 10 ◽  
Author(s):  
George Mather

Research to date has not found strong evidence for a universal link between any single low-level image statistic, such as fractal dimension or Fourier spectral slope, and aesthetic ratings of images in general. This study assessed whether different image statistics are important for artistic images containing different subjects and used partial least squares regression (PLSR) to identify the statistics that correlated most reliably with ratings. Fourier spectral slope, fractal dimension and Shannon entropy were estimated separately for paintings containing landscapes, people, still life, portraits, nudes, animals, buildings and abstracts. Separate analyses were performed on the luminance and colour information in the images. PLSR fits showed shared variance of up to 75% between image statistics and aesthetic ratings. The most important statistics and image planes varied across genres. Variation in statistics may reflect characteristic properties of the different neural sub-systems that process different types of image.
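Two of the statistics used in the study, Fourier spectral slope and Shannon entropy, can be estimated from a greyscale image as sketched below, via a radially averaged power-spectrum fit and a histogram entropy. The binning choices are assumptions:

```python
import numpy as np

def spectral_slope(img):
    """Slope of log power vs. log spatial frequency, from a radially
    averaged 2D Fourier power spectrum."""
    f = np.fft.fftshift(np.fft.fft2(img - img.mean()))
    power = np.abs(f) ** 2
    h, w = img.shape
    yy, xx = np.mgrid[0:h, 0:w]
    r = np.hypot(yy - h // 2, xx - w // 2).astype(int)   # radial frequency bin
    mask = (r > 0) & (r < min(h, w) // 2)
    counts = np.bincount(r[mask])
    sums = np.bincount(r[mask], weights=power[mask])
    freqs = np.arange(len(counts))
    good = (counts > 0) & (freqs > 0) & (sums > 0)
    radial = sums[good] / counts[good]                   # mean power per radius
    slope, _ = np.polyfit(np.log(freqs[good]), np.log(radial), 1)
    return slope

def shannon_entropy(img, bins=256):
    """Entropy (bits) of the grey-level histogram."""
    hist, _ = np.histogram(img, bins=bins)
    p = hist[hist > 0] / hist.sum()
    return float(-(p * np.log2(p)).sum())
```

Natural images typically show slopes near -2 in power (flat spectra give slopes near 0), which is the kind of per-genre variation the regression analysis operates on.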


Proceedings ◽  
2019 ◽  
Vol 33 (1) ◽  
pp. 21
Author(s):  
Fabrizia Guglielmetti ◽  
Eric Villard ◽  
Ed Fomalont

A stable and unique solution to the ill-posed inverse problem in radio synthesis image analysis is sought by employing Bayesian probability theory combined with a probabilistic two-component mixture model. The solution of the ill-posed inverse problem is given by inferring the values of model parameters defined to describe completely the physical system that gives rise to the data. The analysed data are calibrated visibilities, Fourier transformed from the (u, v) plane to the image plane. Adaptive splines are explored to model the complex background corrupted by the strongly varying dirty beam in the image plane. The deconvolution of the dirty image from the dirty beam is tackled in probability space. Probability maps for source detection at several resolution values quantify the acquired knowledge of the celestial source distribution from a given state of information. The available information comprises data constraints, prior knowledge, and uncertain information. The novel algorithm aims to provide an alternative imaging task for the Atacama Large Millimeter/Submillimeter Array (ALMA), in support of the widely used Common Astronomy Software Applications (CASA) package, enhancing its capabilities in source detection.
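The transform the analysis starts from, from calibrated visibilities in the (u, v) plane to a dirty image, can be sketched with simple nearest-cell gridding followed by an inverse FFT. Real interferometric imaging uses convolutional gridding and weighting schemes, so this is only an illustrative toy:

```python
import numpy as np

def dirty_image(u, v, vis, npix, cell):
    """Grid calibrated visibilities onto a regular (u, v) grid (nearest cell)
    and inverse-FFT to the image plane, yielding the dirty image.

    u, v: baseline coordinates; vis: complex visibilities;
    npix: image size in pixels; cell: (u, v) grid cell size.
    """
    grid = np.zeros((npix, npix), dtype=complex)
    iu = np.round(u / cell).astype(int) % npix
    iv = np.round(v / cell).astype(int) % npix
    np.add.at(grid, (iv, iu), vis)           # accumulate repeated cells
    return np.fft.fftshift(np.fft.ifft2(grid)).real
```

Incomplete (u, v) coverage is what corrupts this image with the dirty beam that the Bayesian model above must contend with.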


2019 ◽  
Vol 2019 ◽  
pp. 1-10
Author(s):  
Timo Eppig ◽  
Kathrin Rubly ◽  
Antonia Rawer ◽  
Achim Langenbucher

The number of presbyopia-correcting intraocular lenses (IOLs) is increasing, and new technologies are constantly emerging with the aim of correcting the loss of accommodation after cataract surgery. Various optical designs have been proposed to implement multifocality or an extended depth of focus (EDOF). Depending on the optical principle of an implanted lens, visual performance is often degraded by the superposition of individual image planes and by halos of varying intensity. This experimental study presents a concept to visualize the light fields, and especially the halos, of mono- and multifocal IOLs using the well-known alcoholic beverage “ouzo” in order to obtain qualitative data on the imaging characteristics. We conclude that ouzo is a useful, cost-effective, and nonpolluting medium for beam visualization and an alternative to fluorescein or milk, which could find application for educational purposes.


2019 ◽  
Vol 9 (4) ◽  
pp. 659 ◽  
Author(s):  
Mateusz Surma ◽  
Izabela Ducin ◽  
Przemyslaw Zagrajek ◽  
Agnieszka Siemion

An advanced optical structure, a synthetic hologram (also called a computer-generated hologram), is designed for sub-terahertz radiation. The detailed design process is carried out using the ping-pong method, which is based on a modified iterative Gerchberg–Saxton algorithm. The novelty lies in designing and manufacturing a single hologram structure that creates two different images at two distances. The hologram area is small relative to the wavelength used (the largest hologram dimension is equivalent to around 57 wavelengths). It therefore encodes only a small amount of information, yet the reconstruction is successful; moreover, one of the reconstructed images is larger than the hologram area. Good agreement between numerical simulations and experimental evaluation was obtained.
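The single-plane Gerchberg–Saxton iteration underlying the ping-pong method can be sketched as below. The two-distance "ping-pong" extension, which alternates between two target image planes, is omitted, and a far-field (FFT) propagator is assumed:

```python
import numpy as np

def gerchberg_saxton(target_amp, n_iter=50, seed=0):
    """Classic Gerchberg-Saxton: iterate FFTs between the hologram and image
    planes, enforcing unit amplitude at the hologram (phase-only element)
    and the target amplitude in the image plane."""
    rng = np.random.default_rng(seed)
    phase = rng.uniform(0, 2 * np.pi, target_amp.shape)
    for _ in range(n_iter):
        field = np.exp(1j * phase)                     # hologram plane, |field| = 1
        img = np.fft.fft2(field)                       # propagate to image plane
        img = target_amp * np.exp(1j * np.angle(img))  # impose target amplitude
        phase = np.angle(np.fft.ifft2(img))            # back-propagate, keep phase
    return phase

def reconstruct(phase):
    """Amplitude observed in the image plane for a phase-only hologram."""
    return np.abs(np.fft.fft2(np.exp(1j * phase)))
```

The ping-pong variant would replace the single FFT propagator with Fresnel propagation to each of the two reconstruction distances in turn, imposing a different target image at each.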
