Super-resolution with adversarial loss on the feature maps of the generated high-resolution image

2021 ◽  
Author(s):  
I. Imanuel ◽  
S. Lee


Author(s):  
R. S. Hansen ◽  
D. W. Waldram ◽  
T. Q. Thai ◽  
R. B. Berke

Abstract
Background: High-resolution Digital Image Correlation (DIC) measurements have previously been produced by stitching neighboring images, which often requires short working distances. Separately, the image processing community has developed super-resolution (SR) imaging techniques, which improve resolution by combining multiple overlapping images.
Objective: This work investigates the novel pairing of super resolution with digital image correlation as an alternative method to produce high-resolution full-field strain measurements.
Methods: First, an image reconstruction test is performed, comparing the ability of three previously published SR algorithms to replicate a high-resolution image. Second, an applied translation is compared against DIC measurements using both low- and super-resolution images. Third, a ring sample is mechanically deformed and DIC strain measurements from low- and super-resolution images are compared.
Results: SR measurements show improvements compared to low-resolution images, although they do not perfectly replicate the high-resolution image. SR-DIC demonstrates reduced error and improved confidence in measuring rigid-body translation when compared to low-resolution alternatives, and it also shows improvement in spatial resolution for strain measurements of ring deformation.
Conclusions: Super-resolution imaging can be effectively paired with Digital Image Correlation, offering improved spatial resolution, reduced error, and increased measurement confidence.
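The SR algorithms compared above all combine several shifted low-resolution frames into one high-resolution estimate. A minimal, dependency-free sketch of the classic shift-and-add idea is given below; it is not one of the three published algorithms the paper tests, and the function name and nearest-grid rounding are illustrative assumptions:

```python
import numpy as np

def shift_and_add_sr(lr_frames, shifts, scale):
    """Classic shift-and-add multi-frame super-resolution (sketch).

    lr_frames : list of 2-D arrays, all the same shape
    shifts    : per-frame (dy, dx) sub-pixel shifts, in LR pixels
    scale     : integer upsampling factor
    """
    h, w = lr_frames[0].shape
    hr_sum = np.zeros((h * scale, w * scale))
    hr_cnt = np.zeros_like(hr_sum)
    for frame, (dy, dx) in zip(lr_frames, shifts):
        # Map each LR sample onto the nearest HR grid position.
        ys = np.round((np.arange(h) + dy) * scale).astype(int) % (h * scale)
        xs = np.round((np.arange(w) + dx) * scale).astype(int) % (w * scale)
        hr_sum[np.ix_(ys, xs)] += frame
        hr_cnt[np.ix_(ys, xs)] += 1
    # Average where samples landed; a real method interpolates the gaps.
    return np.divide(hr_sum, hr_cnt, out=np.zeros_like(hr_sum), where=hr_cnt > 0)
```

Two frames offset by half an LR pixel fill complementary positions of the doubled grid, which is exactly the extra information SR-DIC exploits.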


Author(s):  
Dong Seon Cheng ◽  
Marco Cristani ◽  
Vittorio Murino

Image super-resolution is one of the most appealing applications of image processing, capable of retrieving a high-resolution image by fusing several registered low-resolution images depicting an object of interest. However, employing super-resolution on video data is challenging: a video sequence generally contains a lot of scattered information about several objects of interest in cluttered scenes. Especially with hand-held cameras, the overall quality may be poor due to low resolution or unsteadiness. The objective of this chapter is to demonstrate why standard image super-resolution fails on video data, what problems arise, and how these problems can be overcome. In our first contribution, we propose a novel Bayesian framework for super-resolution of persistent objects of interest in video sequences. We call this process Distillation. In the traditional formulation of the image super-resolution problem, the observed target is (1) always the same, (2) acquired by a camera making small movements, and (3) found in a number of low-resolution images sufficient to recover high-frequency information. These assumptions are usually unsatisfied in real-world video acquisitions and often beyond the control of the video operator. With Distillation, we aim to extend and generalize the image super-resolution task, embedding it in a structured framework that accurately distills all the informative bits of an object of interest. In practice, the Distillation process: i) identifies, in a semi-supervised way, a set of objects of interest, clustering the related video frames and registering them with respect to global rigid transformations; ii) for each object, produces a high-resolution image by weighting each pixel according to the information retrieved about the object of interest. As a second contribution, we extend the Distillation process to deal with objects of interest whose appearance transformations are not (only) rigid.
This process, built on top of Distillation, is hierarchical, in the sense that clustering is applied recursively, beginning with the analysis of whole frames and selectively focusing on smaller sub-regions whose isolated motion can reasonably be assumed rigid. The ultimate product of the overall process is a strip of images that describes the dynamics of the video at high resolution, switching between alternative local descriptions in response to visual changes. Our approach is first tested on synthetic data, obtaining encouraging comparative results with respect to known super-resolution techniques and good robustness against noise. Second, real data coming from different videos are considered, aiming to recover the major details of the objects in motion.


2021 ◽  
Vol 2021 ◽  
pp. 1-16
Author(s):  
Mahmoud M. Khattab ◽  
Akram M. Zeki ◽  
Ali A. Alwan ◽  
Belgacem Bouallegue ◽  
Safaa S. Matter ◽  
...  

The primary goal of multiframe super-resolution image reconstruction is to produce an image with a higher resolution by integrating information extracted from a set of corresponding low-resolution images, and it is used in various fields. However, super-resolution image reconstruction approaches are typically affected by restoration artifacts, including blurring, noise, and the staircase effect. Accordingly, it is always difficult to balance smoothness against edge preservation. In this paper, we intend to enhance the efficiency of multiframe super-resolution image reconstruction in order to optimize both analysis and human interpretation by improving the pictorial information and enhancing automatic machine perception. We propose new approaches that first estimate the initial high-resolution image through preprocessing of the reference low-resolution image based on median, mean, Lucy-Richardson, and Wiener filters. This preprocessing stage overcomes the degradation present in the reference low-resolution image and provides a suitable kernel for producing the initial high-resolution image used in the reconstruction phase. Then, the L2 norm is employed for the data-fidelity term to minimize the residual between the predicted high-resolution image and the observed low-resolution images. Finally, a bilateral total variation prior is used to constrain the minimization function so that the generated HR image converges to a stable state. Experimental results on synthetic data indicate that the proposed approaches are more efficient, both visually and quantitatively, than other existing approaches.


2020 ◽  
Vol 10 (2) ◽  
pp. 718 ◽  
Author(s):  
K. Lakshminarayanan ◽  
R. Santhana Krishnan ◽  
E. Golden Julie ◽  
Y. Harold Robinson ◽  
Raghvendra Kumar ◽  
...  

This paper proposed and verified a new integrated approach based on an iterative super-resolution algorithm and expectation-maximization for face hallucination, the process of converting a low-resolution face image to a high-resolution one. Current sparse representations for super-resolving generic image patches are not suitable for global face images because of their lower accuracy and higher time consumption. To solve this, the new method trains a global face sparse representation to reconstruct images with misalignment variations after applying the local geometric co-occurrence matrix. In the testing phase, we proposed a hybrid method that combines the sparse global representation and local linear regression using the Expectation-Maximization (EM) algorithm. This work thereby recovers the high-resolution image corresponding to a given low-resolution image. Experimental validation suggested improved overall accuracy of the proposed method, with fast identification of high-resolution face images without misalignment.
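The sparse-representation building block can be illustrated with a generic L1 solver: ISTA solves min_a 0.5*||Da - y||^2 + lam*||a||_1 for a dictionary D. This is a textbook routine, not the paper's trained global face representation or its EM combination:

```python
import numpy as np

def ista_sparse_code(D, y, lam=0.1, n_iter=200):
    """Sparse coding by iterative shrinkage-thresholding (ISTA sketch)."""
    L = np.linalg.norm(D, 2) ** 2          # Lipschitz constant of the gradient
    a = np.zeros(D.shape[1])
    for _ in range(n_iter):
        g = D.T @ (D @ a - y)              # gradient of the quadratic term
        z = a - g / L
        # Soft-thresholding: the proximal operator of the L1 penalty.
        a = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)
    return a
```

With an identity dictionary the solver reduces to soft-thresholding the observation, a quick sanity check on the shrinkage step.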


2014 ◽  
Vol 568-570 ◽  
pp. 652-655 ◽  
Author(s):  
Zhao Li ◽  
Le Wang ◽  
Tao Yu ◽  
Bing Liang Hu

This paper presents a novel method for solving single-image super-resolution problems based upon low-rank representation (LRR). Given a set of low-resolution image patches, LRR seeks the lowest-rank representation among all the candidates that represent the patches as linear combinations of the patches in a low-resolution dictionary. By jointly training two dictionaries for the low-resolution and high-resolution images, we can enforce the similarity of LRRs between the low-resolution and high-resolution image pair with respect to their own dictionaries. Therefore, the LRR of a low-resolution image can be applied with the high-resolution dictionary to generate a high-resolution image. Unlike the well-known sparse representation, which computes the sparsest representation of each image patch individually, LRR aims at finding the lowest-rank representation of a collection of patches jointly, and thus better captures the global structure of the image. Experiments show that our method gives good results both visually and quantitatively.
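LRR solvers typically iterate singular value thresholding, the proximal operator of the nuclear norm that drives a representation toward low rank. A minimal sketch of that inner step (the full LRR program with joint dictionary training is not reproduced here):

```python
import numpy as np

def svt(M, tau):
    """Singular value thresholding (sketch): shrink the spectrum of M by tau.

    This is the proximal operator of tau * ||M||_* (nuclear norm), the
    workhorse inside low-rank representation and matrix-completion solvers.
    """
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    s = np.maximum(s - tau, 0.0)   # soft-threshold the singular values
    return (U * s) @ Vt
```

Singular values below the threshold vanish, so the output has strictly lower rank whenever the input has small singular values, which is how joint low-rank structure across a patch collection emerges.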


Author(s):  
S. E. EL-KHAMY ◽  
M. M. HADHOUD ◽  
M. I. DESSOUKY ◽  
B. M. SALAM ◽  
F. E. ABD EL-SAMIE

This paper presents a wavelet-based, computationally efficient implementation of the Linear Minimum Mean Square Error (LMMSE) algorithm for image super-resolution. The image super-resolution reconstruction problem is well known to be an ill-posed inverse problem of large dimensions. The LMMSE estimator in the image super-resolution reconstruction problem requires the inversion of a matrix of very large dimensions, which is practically impossible. Our suggested implementation is based on breaking the problem into four consecutive steps: a registration step, a multi-channel LMMSE restoration step, a wavelet-based image fusion step, and an LMMSE image interpolation step. The objective of the wavelet fusion step is to integrate the data obtained from each observation into a single image, which is then interpolated to give a high-resolution image. The paper explains the implementation of each step. The proposed implementation succeeds in obtaining a high-resolution image with a high PSNR from multiple degraded observations, and its computation time is small compared to traditional iterative image super-resolution algorithms.
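The wavelet fusion step can be illustrated with a one-level Haar transform: average the approximation bands of the two inputs and keep the stronger detail coefficient at each position. The paper does not specify its wavelet or fusion rule, so Haar and the max-absolute rule below are assumptions:

```python
import numpy as np

def haar2(x):
    """One-level 2-D Haar transform (even-sized input assumed)."""
    a = (x[0::2] + x[1::2]) / 2           # row averages
    d = (x[0::2] - x[1::2]) / 2           # row differences
    ll, hl = (a[:, 0::2] + a[:, 1::2]) / 2, (a[:, 0::2] - a[:, 1::2]) / 2
    lh, hh = (d[:, 0::2] + d[:, 1::2]) / 2, (d[:, 0::2] - d[:, 1::2]) / 2
    return ll, (hl, lh, hh)

def ihaar2(ll, bands):
    """Exact inverse of haar2."""
    hl, lh, hh = bands
    a = np.empty((ll.shape[0], ll.shape[1] * 2)); d = np.empty_like(a)
    a[:, 0::2], a[:, 1::2] = ll + hl, ll - hl
    d[:, 0::2], d[:, 1::2] = lh + hh, lh - hh
    x = np.empty((a.shape[0] * 2, a.shape[1]))
    x[0::2], x[1::2] = a + d, a - d
    return x

def wavelet_fuse(img1, img2):
    """Fuse two registered images: average approximations, keep the
    stronger (larger-magnitude) detail coefficient at each position."""
    ll1, b1 = haar2(img1); ll2, b2 = haar2(img2)
    ll = (ll1 + ll2) / 2
    bands = tuple(np.where(np.abs(p) >= np.abs(q), p, q) for p, q in zip(b1, b2))
    return ihaar2(ll, bands)
```

Fusing an image with itself returns it unchanged, confirming the transform pair is lossless before the fused result is passed to the interpolation step.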


2019 ◽  
Author(s):  
Yaohua Xie

Super-resolution microscopes (such as STED) illuminate samples with a tiny spot and achieve very high resolution, but structures smaller than the spot cannot be resolved in this way. We therefore propose a technique, termed "Deconvolution after Dense Scan (DDS)", to solve this problem. First, a preprocessing stage eliminates the optical uncertainty of the peripheral areas around the sample's ROI (Region of Interest). Then, the ROI is scanned densely together with its peripheral areas. Finally, the high-resolution image is recovered by deconvolution. The proposed technique requires little modification of the apparatus and is carried out mainly in software. Simulation experiments show that the technique can further improve the resolution of super-resolution microscopes.
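The final recovery step can be illustrated with Richardson-Lucy deconvolution, a standard iterative routine for microscopy. The abstract does not name its deconvolution algorithm, so this choice and the circular-convolution image model are assumptions:

```python
import numpy as np

def richardson_lucy(observed, psf, n_iter=50):
    """Richardson-Lucy deconvolution under a circular convolution model
    (sketch). `psf` is the point spread function with its peak at [0, 0]."""
    H = np.fft.fft2(psf, observed.shape)          # transfer function
    conv = lambda x, K: np.real(np.fft.ifft2(np.fft.fft2(x) * K))
    est = np.full_like(observed, observed.mean(), dtype=float)
    for _ in range(n_iter):
        # Multiplicative update: est <- est * (H^T applied to observed / (H est))
        ratio = observed / np.maximum(conv(est, H), 1e-12)
        est *= conv(ratio, np.conj(H))            # conj(H) is the adjoint blur
    return est
```

With a delta PSF the estimate converges to the observation in one iteration; with a broad PSF the multiplicative updates progressively sharpen the densely scanned data.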


Author(s):  
V. S. Sahithi ◽  
S. Agrawal

CHRIS/Proba is a multiviewing hyperspectral sensor that monitors the Earth at five zenith angles (+55°, +36°, nadir, −36° and −55°) with a spatial resolution of 17 m and a spectral range of 400–1050 nm in mode 3. These multiviewing images are suitable for constructing a super-resolved high-resolution image that can reveal the mixed pixels of the hyperspectral image. In the present work, an attempt is made to find the location of the various features contained within the 17 m mixed pixel of the CHRIS image using various super-resolution reconstruction techniques. Four techniques, namely interpolation, iterative back projection, projection onto convex sets (POCS), and robust super resolution, were tried on the −36°, nadir and +36° images to construct a super-resolved high-resolution 5.6 m image. The results of super-resolution reconstruction were compared with the scaled nadir image and a bicubic-convolved image to assess the preservation of spatial and spectral properties. A support vector machine classification of the best super-resolved high-resolution image was performed to analyse the location of the sub-pixel features. Validation of the obtained results was performed using spectral unmixing fraction images and the 5.6 m classified LISS IV image.
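Of the four reconstruction techniques compared, iterative back projection is the simplest to sketch: repeatedly downsample the current high-resolution estimate, compare it with the observed low-resolution image, and project the residual back. The single-image form and nearest-neighbour resampling below are simplifying assumptions; the study itself uses multiple viewing angles:

```python
import numpy as np

def iterative_back_projection(lr, scale=2, n_iter=20, step=1.0):
    """Iterative back-projection super-resolution (single-image sketch)."""
    # Dependency-free up/downsampling: pixel replication and block averaging.
    up = lambda z: np.kron(z, np.ones((scale, scale)))
    down = lambda z: z.reshape(z.shape[0] // scale, scale,
                               z.shape[1] // scale, scale).mean(axis=(1, 3))
    hr = up(lr)                        # initial estimate
    for _ in range(n_iter):
        err = lr - down(hr)            # residual in the LR domain
        hr = hr + step * up(err)       # back-project the residual
    return hr
```

At convergence the downsampled estimate reproduces the observed low-resolution image, the consistency condition that all four techniques enforce in different ways.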

