residual image
Recently Published Documents

Total documents: 80 (five years: 29)
H-index: 6 (five years: 3)

2021
Author(s): Ryohei Yamauchi, Natsuki Murayoshi, Shinobu Akiyama, Norifumi Mizuno, Tomoyuki Masuda, ...

Abstract

Introduction: External beam accelerated partial breast irradiation (APBI) is an alternative treatment for patients with early-stage breast cancer. The efficacy of image-guided radiotherapy (IGRT) using fiducial markers, such as gold markers or surgical clips, has been demonstrated. However, the effects of respiratory motion during a single fraction have not been reported. This study aimed to evaluate the residual image registration error of fiducial marker-based IGRT caused by respiratory motion and to propose a suitable treatment strategy.

Materials & Methods: We developed an acrylic phantom embedded with surgical clips to verify the registration error under moving conditions. The frequency of the phase difference in the respiratory cycle due to sequential acquisition was verified in a preliminary study. Fiducial marker-based IGRT was then performed in 10 scenarios. The residual registration error (RRE) was calculated from the differences between the clip coordinates at the true (static) position and at the final position after correction.

Results: The frequencies of phase differences of 0.0–0.99, 1.0–1.99, 2.0–2.99, 3.0–3.99, and 4.0–5.0 mm were 23%, 24%, 22%, 20%, and 11%, respectively. Assuming a clinical case, the mean RREs in all directions were within 1.0 mm, even when respiratory motion of 5 mm was present along two axes.

Conclusions: For APBI with fiducial marker-based IGRT, an image registration strategy that employs stepwise couch correction using at least three orthogonal images should be considered.
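As a rough illustration of the RRE computation described above, the sketch below compares clip coordinates at the static reference position with those after the final correction. The function name, axis convention, and example displacements are assumptions for illustration, not values taken from the paper.

```python
import numpy as np

def residual_registration_error(true_pos, final_pos):
    """Residual registration error (RRE) between clip coordinates at the
    static (true) position and at the position after the last couch
    correction. Both inputs are (N, 3) arrays of coordinates in mm."""
    true_pos = np.asarray(true_pos, dtype=float)
    final_pos = np.asarray(final_pos, dtype=float)
    per_axis = final_pos - true_pos                 # signed error per clip and axis
    mean_rre = per_axis.mean(axis=0)                # mean RRE in each direction
    vector_rre = np.linalg.norm(per_axis, axis=1)   # 3D error per clip
    return mean_rre, vector_rre

# Hypothetical example: three clips left ~0.5 mm off in two axes
true_clips = np.array([[0.0, 0.0, 0.0], [10.0, 5.0, 0.0], [5.0, 10.0, 2.0]])
final_clips = true_clips + np.array([0.4, 0.0, 0.5])
print(residual_registration_error(true_clips, final_clips))
```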


2021, Vol 922 (1), pp. 81
Author(s): Shutaro Ueda, Keiichi Umetsu, FanLam Ng, Yuto Ichinohe, Tetsu Kitayama, ...

Abstract

We present an ensemble X-ray analysis of systematic perturbations in the central hot gas properties for a sample of 28 nearby strong cool-core systems selected from the HIghest X-ray FLUx Galaxy Cluster Sample (HIFLUGCS). We analyze their cool-core features observed with the Chandra X-ray Observatory. Every system in our sample exhibits at least one pair of positive and negative excess perturbations in the X-ray residual image after subtraction of the global brightness profile. We extract and analyze X-ray spectra of the intracluster medium (ICM) in the detected perturbed regions. To investigate possible origins of the gas perturbations, we characterize the thermodynamic properties of the ICM in the perturbed regions and examine their correlations between the positive and negative excess regions. The best-fit relations for temperature and entropy show a clear offset from the one-to-one relation, $T_{\mathrm{neg}}/T_{\mathrm{pos}} = 1.20^{+0.04}_{-0.03}$ and $K_{\mathrm{neg}}/K_{\mathrm{pos}} = 1.43 \pm 0.07$, whereas the best-fit relation for pressure is remarkably consistent with the one-to-one relation $P_{\mathrm{neg}} = P_{\mathrm{pos}}$, indicating that the ICM in the perturbed regions is in pressure equilibrium. These observed features of the HIFLUGCS sample agree with the hypothesis that the gas perturbations in cool cores are generated by gas sloshing. We also analyze synthetic observations of perturbed cluster cores created from binary merger simulations and find that the observed temperature ratio agrees with the simulations, $T_{\mathrm{neg}}/T_{\mathrm{pos}} \sim 1.3$. We conclude that gas sloshing induced by infalling substructures plays a major role in producing the characteristic gas perturbations in cool cores. The ubiquitous presence of gas perturbations in cool cores may suggest a significant contribution of gas sloshing to suppressing runaway cooling of the ICM.
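The residual image mentioned above (surface brightness after subtracting the global brightness profile) can be illustrated with a minimal sketch like the following, which uses a simple azimuthally averaged radial profile as a stand-in for the global model; this is a generic illustration, not the authors' Chandra analysis pipeline.

```python
import numpy as np

def xray_residual_image(image, center, n_bins=100):
    """Fractional residual map: (image - model) / model, where the model
    is the azimuthally averaged surface brightness in annuli around
    `center` (a simple stand-in for the global brightness profile)."""
    ny, nx = image.shape
    y, x = np.indices((ny, nx))
    r = np.hypot(x - center[0], y - center[1])

    edges = np.linspace(0.0, r.max(), n_bins + 1)
    idx = np.clip(np.digitize(r.ravel(), edges) - 1, 0, n_bins - 1)

    counts = np.bincount(idx, minlength=n_bins)
    sums = np.bincount(idx, weights=image.ravel(), minlength=n_bins)
    profile = sums / np.maximum(counts, 1)          # azimuthal average per annulus

    model = profile[idx].reshape(ny, nx)            # global brightness model
    return (image - model) / np.maximum(model, 1e-12)
```

Positive and negative excess regions then correspond to connected areas where this map is significantly above or below zero.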


Mathematics, 2021, Vol 9 (20), pp. 2613
Author(s): Jin Seong Hong, Jiho Choi, Seung Gu Kim, Muhammad Owais, Kang Ryoung Park

When images are acquired for finger-vein recognition, nonuniform illumination often arises from varying finger thickness or from nonuniform intensity of the illumination elements. Recognition performance is then significantly reduced because the features being recognized are deformed. To address this issue, previous studies have used image preprocessing methods, such as grayscale normalization, or score-level fusion of multiple recognition models, which can improve performance for images with a low degree of illumination nonuniformity. However, performance cannot be improved substantially when parts of the image are saturated due to severe illumination nonuniformity. To overcome these drawbacks, this study proposes a generative adversarial network for the illumination normalization of finger-vein images (INF-GAN). In the INF-GAN, a one-channel image containing texture information is generated through a residual image generation block, and the finger-vein texture information deformed by severe illumination nonuniformity is restored, thus improving recognition performance. The proposed method using the INF-GAN outperformed state-of-the-art methods in experiments on two open databases, the Hong Kong Polytechnic University finger-image database version 1 and the Shandong University homologous multimodal traits finger-vein database.
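The residual image generation idea (a one-channel texture residual added back to the illumination-distorted input) can be sketched roughly as below. The channel counts and block structure are placeholders, not the published INF-GAN architecture.

```python
import torch
import torch.nn as nn

class ResidualTextureBlock(nn.Module):
    """Toy residual image generation block: predicts a one-channel texture
    residual and adds it to the illumination-distorted input."""
    def __init__(self, channels=32):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(1, channels, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, 1, kernel_size=3, padding=1),
        )

    def forward(self, x):
        residual = self.body(x)     # one-channel texture residual
        return x + residual         # illumination-normalized estimate

# Usage: a batch of single-channel finger-vein images scaled to [0, 1]
restored = ResidualTextureBlock()(torch.rand(4, 1, 64, 128))
```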


2021, Vol 7 (8), pp. 160
Author(s): Alessandro Ortis, Marco Grisanti, Francesco Rundo, Sebastiano Battiato

A stereopair consists of two pictures of the same subject taken from two different points of view. Since the two images contain a high amount of redundant information, new compression approaches and data formats are continuously proposed that aim to reduce the space needed to store a stereoscopic image while preserving its quality. A standard for multi-picture image encoding is the MPO (Multi-Picture Object) format. Classic stereoscopic image compression approaches compute a disparity map between the two views, which is stored with one of the views together with a residual image. An alternative approach, named adaptive stereoscopic image compression, encodes the two views independently with different quality factors; the redundancy between the two views is then exploited to enhance the low-quality image. In this paper, the problem of stereoscopic image compression is presented, with a focus on the adaptive stereoscopic compression approach, which yields a standardized format for the compressed data. The paper presents a benchmark evaluation on large and standardized datasets including 60 stereopairs that differ by resolution and acquisition technique. The method is evaluated by varying the amount of compression as well as the matching and optimization methods, resulting in 16 different settings. The adaptive approach is also compared with other MPO-compliant methods. The paper also presents a Human Visual System (HVS)-based assessment experiment, involving 116 people, to verify the perceived quality of the decoded images.
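A minimal sketch of the adaptive encoding step described above, assuming plain JPEG as the codec and Pillow for I/O; the quality factors and function names are illustrative, and the decoder-side enhancement of the low-quality view is omitted.

```python
from io import BytesIO
from PIL import Image

def adaptive_encode(left: Image.Image, right: Image.Image,
                    q_high: int = 90, q_low: int = 30):
    """Encode the two views of a stereopair independently with different
    JPEG quality factors. Returns two compressed byte streams; a decoder
    would later exploit the high-quality view to enhance the low-quality
    one via matching (that stage is not shown here)."""
    buf_l, buf_r = BytesIO(), BytesIO()
    left.convert("RGB").save(buf_l, format="JPEG", quality=q_high)
    right.convert("RGB").save(buf_r, format="JPEG", quality=q_low)
    return buf_l.getvalue(), buf_r.getvalue()

# Usage with two views of a stereopair:
# left, right = Image.open("left.png"), Image.open("right.png")
# data_left, data_right = adaptive_encode(left, right)
```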


2021, Vol 2021, pp. 1-10
Author(s): Chenglin Zuo, Jun Ma, Hao Xiong, Lin Ran

Digital images captured with CMOS/CCD image sensors are prone to noise due to inherent electronic fluctuations and low photon counts. To efficiently reduce this noise, a novel image denoising strategy is proposed that exploits both nonlocal self-similarity and local shape adaptation. The residual image in the method noise, derived from the initial nonlocal means (NLM) estimate, is further exploited with wavelet thresholding. By incorporating both the initial estimate and the residual image, spatially adaptive patch shapes are defined and new weights are calculated, which results in better denoising performance for NLM. Experimental results demonstrate that the proposed method significantly outperforms the original NLM and achieves competitive denoising performance compared with state-of-the-art denoising methods.
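A hedged sketch of the general pipeline described above (NLM estimate, then wavelet thresholding of the method-noise residual) follows; it is a generic illustration using scikit-image and PyWavelets, not the authors' shape-adaptive weighting scheme.

```python
import numpy as np
import pywt
from skimage.restoration import denoise_nl_means, estimate_sigma

def nlm_with_method_noise(noisy, wavelet="db4", level=3):
    """Generic pipeline: NLM estimate, then wavelet thresholding of the
    method-noise residual, whose surviving detail is added back."""
    sigma = estimate_sigma(noisy)
    initial = denoise_nl_means(noisy, h=0.8 * sigma, sigma=sigma, fast_mode=True)

    method_noise = noisy - initial                           # residual image
    coeffs = pywt.wavedec2(method_noise, wavelet, level=level)
    thr = sigma * np.sqrt(2.0 * np.log(method_noise.size))   # universal threshold
    coeffs = [coeffs[0]] + [
        tuple(pywt.threshold(c, thr, mode="soft") for c in detail)
        for detail in coeffs[1:]
    ]
    detail = pywt.waverec2(coeffs, wavelet)[: noisy.shape[0], : noisy.shape[1]]
    return initial + detail
```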


Author(s): Yong Du, Yangyang Xu, Taizhong Ye, Qiang Wen, Chufeng Xiao, ...

Color dimensionality reduction is believed to be a non-invertible process, as re-colorization results in perceptually noticeable and unrecoverable distortion. In this article, we propose to convert a color image into a grayscale image that can fully recover its original colors; more importantly, the encoded information is discriminative and sparse, which saves storage capacity. In particular, we design an invertible deep neural network for color encoding and decoding. This network learns to generate a residual image that encodes the color information, which is then combined with a base grayscale image for color recovery. In this way, the non-differentiable compression process (e.g., JPEG) of the base grayscale image can be integrated into the network in an end-to-end manner. To further reduce the size of the residual image, we present a specific layer to enhance Sparsity Enforcing Priors (SEP), leading to negligible storage space. The proposed method allows color embedding in a sparse residual image while keeping a high PSNR of 35 dB on average. Extensive experiments demonstrate that the proposed method outperforms the state of the art in terms of image quality and tolerance to compression.
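The base-plus-residual decomposition at the heart of this scheme can be illustrated with the toy sketch below; here the residual is simply the raw per-pixel difference from the grayscale base, whereas in the paper it is produced by the invertible network and sparsified by the SEP layer.

```python
import numpy as np

def encode(rgb):
    """Split an RGB image into a grayscale base and a color-carrying
    residual image (here simply the per-pixel difference from the base)."""
    base = rgb.mean(axis=2, keepdims=True)   # grayscale base image
    residual = rgb - base                    # residual encoding the colors
    return base, residual

def decode(base, residual):
    """Recover the color image by adding the residual back to the base."""
    return base + residual

rgb = np.random.rand(8, 8, 3)
base, residual = encode(rgb)
assert np.allclose(decode(base, residual), rgb)   # exact recovery in this toy case
```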


2021
Author(s): Wele Gedara Chaminda Bandara, Jeya Maria Jose Valanarasu, Vishal M. Patel

<div> \par Hyperspectral pansharpening aims to synthesize a low-resolution hyperspectral image (LR-HSI) with a registered panchromatic image (PAN) to generate an enhanced HSI with high spectral and spatial resolution. Recently proposed HS pansharpening methods have obtained remarkable results using deep convolutional networks (ConvNets), which typically consist of three steps: (1) up-sampling the LR-HSI, (2) predicting the residual image via a ConvNet, and (3) obtaining the final fused HSI by adding the outputs from first and second steps. Recent methods have leveraged Deep Image Prior (DIP) to up-sample the LR-HSI due to its excellent ability to preserve both spatial and spectral information, without learning from large data sets. However, we observed that the quality of up-sampled HSIs can be further improved by introducing an additional spatial-domain constraint to the conventional spectral-domain energy function. We define our spatial-domain constraint as the $L_1$ distance between the predicted PAN image and the actual PAN image. To estimate the PAN image of the up-sampled HSI, we also propose a learnable spectral response function (SRF). Moreover, we noticed that the residual image between the up-sampled HSI and the reference HSI mainly consists of edge information and very fine structures. In order to accurately estimate fine information, we propose a novel over-complete network, called HyperKite, which focuses on learning high-level features by constraining the receptive from increasing in the deep layers. We perform experiments on three HSI datasets to demonstrate the superiority of our DIP-HyperKite over the state-of-the-art pansharpening methods. The deployment codes, pre-trained models, and final fusion outputs of our DIP-HyperKite and the methods used for the comparisons will be publicly made available at \url{https://github.com/wgcban/DIP-HyperKite.git}</div><div><br></div>



