XCycles Backprojection Acoustic Super-Resolution

Sensors, 2021, Vol. 21 (10), pp. 3453
Author(s): Feras Almasri, Jurgen Vandendriessche, Laurent Segers, Bruno da Silva, An Braeken, ...

The computer vision community has paid much attention to the development of visible image super-resolution (SR) using deep neural networks (DNNs) and has achieved impressive results. The advancement of non-visible light sensors, such as acoustic imaging sensors, has also attracted much attention, as they allow people to visualize the intensity of sound waves beyond the visible spectrum. However, because of the limitations imposed on acquiring acoustic data, new methods for improving the resolution of acoustic images are necessary. At this time, there is no acoustic imaging dataset designed for the SR problem. This work proposes a novel backprojection model architecture for the acoustic image super-resolution problem, together with the Acoustic Map Imaging VUB-ULB Dataset (AMIVU). The dataset provides large simulated and real captured images at different resolutions. The proposed XCycles BackProjection model (XCBP), in contrast to the feedforward approach, fully exploits the iterative correction procedure: in each cycle it reconstructs the residual error correction for the encoded features in both low- and high-resolution space. The proposed approach was evaluated on the dataset and substantially outperformed both classical interpolation operators and recent feedforward state-of-the-art models. It also drastically reduced the sub-sampling error produced during data acquisition.
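The per-cycle residual correction can be illustrated with classical iterative back-projection, of which XCBP's learned, feature-space correction is a far more elaborate analogue. The following is a minimal Python sketch assuming simple cubic-spline up- and down-sampling operators; it illustrates the residual-correction idea, not the authors' network.

import numpy as np
from scipy.ndimage import zoom

def iterative_backprojection(lr, scale=2, n_cycles=5):
    # Initial high-resolution estimate from cubic-spline interpolation.
    hr = zoom(lr, scale, order=3)
    for _ in range(n_cycles):
        # Simulate acquisition: bring the current estimate back to LR space.
        lr_sim = zoom(hr, 1.0 / scale, order=3)
        # Residual error computed in low-resolution space...
        residual = lr - lr_sim
        # ...and back-projected to correct the high-resolution estimate.
        hr = hr + zoom(residual, scale, order=3)
    return hr

# Example: upscale a 32x32 acoustic map to 64x64.
hr_map = iterative_backprojection(np.random.rand(32, 32), scale=2)

Each cycle measures the error of the current estimate in low-resolution space and feeds the correction back into high-resolution space, which mirrors the correction that the XCBP cycles apply to encoded features.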

2019, Vol. 11 (21), pp. 2593
Author(s): Li, Zhang, Jiao, Liu, Yang, ...

In the convolutional sparse coding-based image super-resolution problem, the coefficients of low- and high-resolution images at the same position are assumed to be equivalent, which enforces an identical structure on low- and high-resolution images. In practice, however, the structure of high-resolution images is much more complicated than that of low-resolution images. In order to reduce the coupling between low- and high-resolution representations, a semi-coupled convolutional sparse learning method (SCCSL) is proposed for image super-resolution. First, the proposed method uses nonlinear convolution operations as the mapping function between low- and high-resolution features, so that conventional linear mapping can be seen as a special case of the proposed method. Second, the neighborhood within the filter size is used to compute each pixel, which improves the flexibility of the proposed model; in addition, the filter size is adjustable. To illustrate the effectiveness of the SCCSL method, we compare it with four state-of-the-art methods on 15 commonly used images. Experimental results show that this work provides a more flexible and efficient approach to the image super-resolution problem.
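A rough sketch of the semi-coupled mapping idea, assuming it is implemented as a learnable nonlinear convolution from LR feature maps to HR feature maps followed by an HR reconstruction filter bank; the module, layer choices, and parameter names below are illustrative and not taken from the paper.

import torch
import torch.nn as nn

class SemiCoupledMapping(nn.Module):
    def __init__(self, n_atoms=64, map_kernel=5, recon_kernel=9):
        super().__init__()
        # Nonlinear mapping between LR and HR feature maps; the kernel
        # (neighborhood) size is adjustable.
        self.mapping = nn.Sequential(
            nn.Conv2d(n_atoms, n_atoms, map_kernel, padding=map_kernel // 2),
            nn.ReLU(inplace=True),
        )
        # HR dictionary: one reconstruction filter per feature map (atom).
        self.hr_dict = nn.Conv2d(n_atoms, 1, recon_kernel, padding=recon_kernel // 2)

    def forward(self, lr_features):
        hr_features = self.mapping(lr_features)  # semi-coupled, nonlinear mapping
        return self.hr_dict(hr_features)         # reconstructed HR image

# Example: 64 LR feature maps of size 48x48 -> one 48x48 HR reconstruction.
out = SemiCoupledMapping()(torch.randn(1, 64, 48, 48))

In this sketch, removing the nonlinearity and shrinking the mapping kernel to 1x1 recovers the fully coupled, linear special case mentioned above.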


Sensors, 2019, Vol. 19 (14), pp. 3234
Author(s): Haopeng Zhang, Pengrui Wang, Cong Zhang, Zhiguo Jiang

In the case of space-based space surveillance (SBSS), images of target space objects captured by space-based imaging sensors usually suffer from low spatial resolution due to the extremely long distance between the target and the imaging sensor. Image super-resolution is an effective data processing operation for obtaining informative high-resolution images. In this paper, we comparatively study four recent popular models for single image super-resolution based on convolutional neural networks (CNNs), with space applications in mind. We specifically fine-tune super-resolution models designed for natural images using simulated images of space objects, and test the performance of the different CNN-based models under conditions that are mainly relevant to SBSS. Experimental results show the advantages and drawbacks of these models, which could be helpful for choosing a proper CNN-based super-resolution method to deal with image data of space objects.
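The fine-tuning step described above can be sketched as an ordinary supervised training loop; the Adam optimizer and L1 reconstruction loss below are assumptions for illustration, and `loader` is assumed to yield (low-resolution, high-resolution) tensor pairs rendered from simulated space-object images.

import torch
import torch.nn as nn

def fine_tune(sr_model, loader, epochs=10, lr=1e-4, device="cpu"):
    # `sr_model` is any CNN super-resolution network pretrained on natural images.
    sr_model = sr_model.to(device).train()
    optimizer = torch.optim.Adam(sr_model.parameters(), lr=lr)
    criterion = nn.L1Loss()  # assumed reconstruction loss
    for _ in range(epochs):
        for lr_img, hr_img in loader:
            lr_img, hr_img = lr_img.to(device), hr_img.to(device)
            optimizer.zero_grad()
            loss = criterion(sr_model(lr_img), hr_img)
            loss.backward()
            optimizer.step()
    return sr_model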


This project is an attempt to understand the suitability of single image super resolution models for video super resolution. Super resolution refers to the process of enhancing the quality of low resolution images and video. Single image super resolution algorithms are applied to a single image to enhance its resolution, whereas video super resolution algorithms are applied to the sequence of frames/images that constitutes a video. In this paper we determine whether single image super resolution models can be applied to videos as well. When images are simply resized in OpenCV, traditional methods such as interpolation are used, which approximate the values of new pixels from nearby pixel values; these leave much to be desired in terms of visual quality, as details (e.g. sharp edges) are often not preserved. We use deep learning techniques such as GANs (Generative Adversarial Networks) to train a model to output high resolution images from low resolution images. In this paper we analyse the suitability of the SRGAN and EDSR network architectures, which are widely used and popular for the single image super resolution problem. We quantify the performance of these models and provide a method to evaluate and compare them. We further draw a conclusion on the suitability and extent to which these models may be used for video super resolution. If found suitable, this could have a large impact on applications including, but not limited to, video compression and embedded models in end devices that enhance video output quality.
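A sketch of the frame-by-frame evaluation protocol implied above, assuming OpenCV for frame handling, bicubic interpolation as the baseline, and PSNR as one reasonable quality metric; `sr_model` is a placeholder for SRGAN or EDSR inference on a single frame.

import cv2
import numpy as np

def psnr(a, b, peak=255.0):
    mse = np.mean((a.astype(np.float64) - b.astype(np.float64)) ** 2)
    return float("inf") if mse == 0 else 10.0 * np.log10(peak ** 2 / mse)

def evaluate_video(path, scale=4, sr_model=None):
    # Downscale each frame, restore it (bicubic baseline or a learned model),
    # and score it against the original. Frame sizes are assumed divisible by
    # `scale` when a learned model is used, so shapes match.
    cap, scores = cv2.VideoCapture(path), []
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        h, w = frame.shape[:2]
        low = cv2.resize(frame, (w // scale, h // scale), interpolation=cv2.INTER_AREA)
        if sr_model is None:
            restored = cv2.resize(low, (w, h), interpolation=cv2.INTER_CUBIC)
        else:
            restored = sr_model(low)  # e.g. SRGAN or EDSR applied to one frame
        scores.append(psnr(frame, restored))
    cap.release()
    return float(np.mean(scores))

Averaging the per-frame scores over the video gives one number per model, providing one possible way to compare bicubic interpolation, SRGAN, and EDSR for video use.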


Author(s): Hyunduk KIM, Sang-Heon LEE, Myoung-Kyu SOHN, Dong-Ju KIM, Byungmin KIM

2017, Vol. 6 (4), pp. 15
Author(s): JANARDHAN CHIDADALA, RAMANAIAH K.V., BABULU K, ...
