output image — Recently Published Documents

Total documents: 51 (five years: 18) · H-index: 5 (five years: 4)

2021 · Vol 11 (1) · Author(s): Li Qiao, Mingfu Wang, Zheng Jin, Danbo Mao

Abstract: The non-uniformity of the output image directly affects the application of EMCCDs across disciplines. The proposed method significantly improves the uniformity of the EMCCD output image. A correction algorithm of "reverse split and forward recovery" is derived by analyzing the EMCCD imaging model, and a comprehensive non-uniformity correction function model is established. The 8-tap EMCCD chip CCD220 from the British company e2v is used for experimental verification. The results show that, after the comprehensive correction, the consistency of the light-response characteristic curve and the multiplication-gain curve of each EMCCD channel is markedly improved, and the photo response non-uniformity (PRNU) of the output image is substantially reduced from 24.5% to 4.1%, which proves the effectiveness of the proposed method.
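The PRNU figure quoted above is commonly computed as the spatial standard deviation of a uniformly illuminated (flat-field) frame divided by its mean, expressed as a percentage. A minimal sketch of that metric, assuming this common EMVA-style definition (the abstract does not give the paper's exact formula):

```python
import numpy as np

def prnu_percent(flat_field):
    # Spatial non-uniformity of a uniformly illuminated frame as a
    # percentage: standard deviation over mean. This is the common
    # flat-field definition, assumed here for illustration.
    img = np.asarray(flat_field, dtype=np.float64)
    return 100.0 * img.std() / img.mean()

# A perfectly uniform frame has zero PRNU; a frame alternating between
# 90 and 110 counts (mean 100, std 10) has PRNU of 10%.
print(prnu_percent(np.full((8, 8), 100.0)))
```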


2021 · Vol 11 (12) · pp. 3024-3027 · Author(s): J. Murugachandravel, S. Anand

The human brain can be viewed using MRI images, but these images are useful to physicians only if their quality is good. We propose a new method, Contourlet Based Two Stage Adaptive Histogram Equalization (CBTSA), that uses the Nonsubsampled Contourlet Transform (NSCT) for smoothing and applies adaptive histogram equalization (AHE) in two stages to enhance low-contrast MRI images. The given MRI image is fragmented into equal-sized sub-images, and NSCT is applied to each sub-image. AHE is then imposed on each resultant sub-image. All processed sub-images are merged, and AHE is applied again to the merged image. The clarity of the output image obtained by our method outperforms that produced by traditional methods. Quality was measured and compared using criteria such as entropy, Absolute Mean Brightness Error (AMBE), and Peak Signal to Noise Ratio (PSNR).
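The two-stage structure described above (equalize each tile, merge, equalize the merged image again) can be sketched in a few lines. NSCT smoothing is library-specific and AHE is windowed, so this sketch substitutes plain global histogram equalization for both; all names and the 2×2 tiling are illustrative, not the paper's implementation:

```python
import numpy as np

def hist_eq(img, levels=256):
    # Plain global histogram equalization (a stand-in for AHE).
    hist = np.bincount(img.ravel(), minlength=levels)
    cdf = hist.cumsum()
    cdf = (cdf - cdf.min()) / (cdf.max() - cdf.min())  # scale to [0, 1]
    return (cdf[img] * (levels - 1)).astype(np.uint8)

def two_stage_eq(img, blocks=2):
    # Stage 1: equalize each equal-sized sub-image independently.
    h, w = img.shape
    bh, bw = h // blocks, w // blocks
    out = img.copy()
    for i in range(blocks):
        for j in range(blocks):
            tile = out[i*bh:(i+1)*bh, j*bw:(j+1)*bw]
            out[i*bh:(i+1)*bh, j*bw:(j+1)*bw] = hist_eq(tile)
    # Stage 2: equalize the merged image again.
    return hist_eq(out)
```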


2021 · Vol 2021 · pp. 1-13 · Author(s): N. Rabiee, H. Azad, N. Parhizgar

A common assumption in SAR image formation and processing algorithms is that the chirp rates of the transmitted and received radar signals are exactly the same, and dechirp processing is done under this assumption. In real scenarios, however, the chirp rate of the received signal differs from that of the transmitted signal for several reasons. When the difference between the two chirp rates is significant, demodulation and compression of the received pulse are not carried out precisely, and the targets in the SAR processor's output images become defocused. In the present paper, a new technique is proposed to improve SAR image formation quality by exploiting chirp rate estimation methods. In the proposed technique, the chirp rate of the received signal is estimated, and dechirp is then carried out using a time-reversed complex-conjugate filter constructed from the estimated chirp rate; any existing chirp rate estimation algorithm can be used at this stage. The quality of the output image is assessed using the PSLR and the average number of point-target extension pixels along the azimuth direction as quantitative criteria. Simulation results indicate that the smaller the average number of point-target extension pixels along the azimuth and the higher the average PSLR, the better the output image quality. The output images obtained from the proposed method with chirp rate estimation therefore have better quality, with a higher average PSLR (14.1 and 13.6) and a lower average number of point-target extension pixels along the azimuth direction (2.1 and 4.9), than the common method (average PSLR 8.3; 7.1 extension pixels).
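The filter construction described above is a matched filter: the time-reversed complex conjugate of a chirp built from the estimated rate. A noiseless sketch, assuming illustrative values for the sample rate, pulse length, and estimated chirp rate (in practice `k_est` comes from a chirp-rate estimator):

```python
import numpy as np

fs = 1.0e6                     # sample rate, Hz (illustrative)
T = 1.0e-3                     # pulse duration, s (illustrative)
k_est = 2.0e8                  # estimated chirp rate, Hz/s (assumed to be
                               # the output of any chirp-rate estimator)
t = np.arange(int(fs * T)) / fs

rx = np.exp(1j * np.pi * k_est * t**2)   # noiseless received chirp
h = np.conj(rx[::-1])                    # time-reversed complex-conjugate filter
compressed = np.convolve(rx, h)          # pulse compression

# The main lobe appears at zero lag, i.e. at index len(rx) - 1 of the
# full convolution; a chirp-rate mismatch would smear this peak.
```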


2021 · pp. 147387162110481 · Author(s): Haijun Yu, Shengyang Li

Hyperspectral images (HSIs) have become increasingly prominent because they preserve the subtle spectral differences of the imaged objects, but their high dimensionality makes designing analysis approaches and tools challenging. An improved color visualization approach is proposed in this article to enable communication between users and HSIs in the field of remote sensing. Through real-time interactive control and color visualization, this approach helps users intuitively obtain the rich information hidden in the original HSIs. Using a dimensionality reduction (DR) method based on band selection, high-dimensional HSIs are reduced to low-dimensional images. Through drop-down boxes, users can freely specify the images that participate in the RGB channel combination of the output image. Users can then interactively and independently set the fusion coefficient of each image within an interface based on concentric circles; the output image is computed and visualized in real time, reflecting the changing settings. Because channel combination and fusion-coefficient setting are two independent processes, users can interact flexibly according to their needs. The approach is also applicable to interactive visualization of other types of multi-layer data.
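The core mapping (three user-selected bands onto R, G, B, each scaled by a fusion coefficient) can be sketched as follows. The band indices and coefficients are illustrative, standing in for the drop-down and concentric-circle choices the interface provides:

```python
import numpy as np

def bands_to_rgb(cube, band_idx, coeffs):
    # Map three user-selected bands of a hyperspectral cube (H, W, B)
    # onto the R, G, B channels, each scaled by its fusion coefficient,
    # then normalize to [0, 1] for display.
    rgb = np.stack(
        [cube[:, :, b].astype(np.float64) * c for b, c in zip(band_idx, coeffs)],
        axis=-1,
    )
    return rgb / max(rgb.max(), 1e-12)

rng = np.random.default_rng(0)
cube = rng.random((4, 4, 32))        # toy 32-band cube
img = bands_to_rgb(cube, band_idx=(5, 17, 29), coeffs=(1.0, 0.5, 0.8))
```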


2021 · Vol 10 (2) · pp. 750-758 · Author(s): Mustafa Amer Obaid, Wesam M. Jasim

In this work, the classification of Fashion-MNIST images with convolutional neural networks is discussed. The Fashion-MNIST dataset contains 28×28 grayscale images of 70,000 fashion products in 10 classes, with 7,000 images per category: 60,000 images in the training set and 10,000 in the evaluation set. The data is first pre-processed for resizing and noise reduction, then normalized to ensure all values are on the same scale, which usually improves performance. After normalization, the data is augmented so that each image yields three outputs: the first obtained by rotating the original image, the second as an acute-angle image, and the third as a tilt image. The new dataset comprises 180,000 images for the training phase and 30,000 images for the testing phase. Finally, the data is fed into a pre-trained convolutional network with five convolutional layers, which is trained on the augmented data. The proposed system achieves 94% accuracy, compared with 93% for VGG16 and 92% for AlexNet.
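The tripling step (one image in, three augmented images out, so 60,000 becomes 180,000) can be sketched as below. The abstract does not specify the exact "acute angle" and "tilt" transforms, so simple stand-ins are used here: a 90-degree rotation, a small horizontal shift, and a transpose.

```python
import numpy as np

def augment_3x(images):
    # Produce three variants per input image, tripling the batch.
    # The specific transforms are illustrative stand-ins, not the
    # paper's exact augmentations.
    out = []
    for img in images:
        out.append(np.rot90(img))            # rotation of the original
        out.append(np.roll(img, 2, axis=1))  # stand-in for "acute angle"
        out.append(img.T)                    # stand-in for "tilt"
    return np.stack(out)

batch = np.zeros((6, 28, 28))   # toy stand-in for the 60,000 training images
aug = augment_3x(batch)         # 6 -> 18, mirroring 60,000 -> 180,000
```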


2021 · Vol 3 (3) · Author(s): Javad Abbasi Aghamaleki, Alireza Ghorbani

Abstract: Image fusion is the process of combining complementary information from multiple images of the same scene into one output image. The resulting fused image describes the scene more precisely than any of the individual input images. In this paper, we propose a novel, simple, and fast fusion strategy for infrared (IR) and visible images based on locally important areas of the IR image. The method proceeds in three steps. First, salient regions of the infrared image are segmented and extracted. Next, image fusion is applied to the segmented areas. Finally, contour lines are used to improve the quality of the second step's fusion results. Using a publicly available database, the proposed method is evaluated and compared with other fusion methods, and the experimental results show its effectiveness relative to state-of-the-art methods.
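The first two steps (segment salient IR regions, then fuse only there) can be sketched with a simple intensity threshold. The threshold rule and the hard IR/visible switch are illustrative assumptions; the paper's actual segmentation and contour refinement are not detailed in the abstract:

```python
import numpy as np

def fuse_ir_visible(ir, vis, thresh=None):
    # Region-based fusion sketch: segment salient (hot) IR regions with
    # a simple intensity threshold, then keep IR pixels inside the mask
    # and visible pixels elsewhere.
    ir = ir.astype(np.float64)
    vis = vis.astype(np.float64)
    if thresh is None:
        thresh = ir.mean() + ir.std()   # illustrative salience threshold
    mask = ir > thresh
    return np.where(mask, ir, vis), mask
```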


2020 · Vol 10 (7) · pp. 1597-1602 · Author(s): Haozhong Hu

To segment breast tumors accurately, an improved Unit-Linking Pulse-Coupled Neural Network based mammography image segmentation method is proposed. First, the linking input and coupling parameter of the original model are improved according to the relationship between each neuron and its neighbors. The improved model is then used to segment the breast tumor image, producing multiple output images. Finally, a gradient algorithm computes the edges of the original image and of each output image, and the minimum mean square error (MMSE) between the two edge images is used to select the best output image. Experimental results indicate that the improved method can accurately segment breast tumor images in different environments. In addition, based on the segmentation results, an SVM is used to diagnose the tumor type, with classification accuracy much higher than that of the existing deep classification algorithm.
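The selection step above (compare edge maps, pick the minimum-MSE candidate) can be sketched as follows; the gradient-magnitude edge detector is a simple stand-in for whatever gradient algorithm the paper uses:

```python
import numpy as np

def edge_map(img):
    # Gradient-magnitude edges (a stand-in for the paper's gradient
    # algorithm).
    gy, gx = np.gradient(img.astype(np.float64))
    return np.hypot(gx, gy)

def best_output(original, candidates):
    # Select the segmentation whose edge map has the minimum mean
    # squared error against the original image's edge map.
    ref = edge_map(original)
    errors = [np.mean((edge_map(c) - ref) ** 2) for c in candidates]
    return int(np.argmin(errors))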


In an image-transformation problem, an input image is transformed into an output image. Most recent methods define a feed-forward neural network trained with a per-pixel loss between the output image and the ground-truth image. In this paper we show that high-quality images can be generated by defining a feature-loss function based on high-level perceptual features extracted from pre-trained convolutional networks. We combine both of the approaches mentioned above and propose a feature-loss function for training a feed-forward neural network capable of image transformation tasks. We compared our method with an optimization-based approach, similar to the one used in Generative Adversarial Networks (GANs), and our method produced visually appealing results while fully capturing the intricate details of the objects in the image.
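The idea of a feature loss is to compare images in a feature space rather than pixel by pixel. A minimal sketch, with a frozen random projection standing in for the pre-trained convolutional features (which in the paper come from a network such as VGG):

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.standard_normal((64, 28 * 28))   # frozen random projection, a
                                         # stand-in for pretrained features

def feature_loss(output, target):
    # Perceptual (feature) loss: MSE between feature activations of the
    # output and ground-truth images, not between raw pixels.
    f_out = W @ output.ravel()
    f_tgt = W @ target.ravel()
    return np.mean((f_out - f_tgt) ** 2)
```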


In semantic image-to-image translation, the goal is to learn a mapping between an input image and an output image. A model of the semantic image-to-image translation problem using the CycleGAN algorithm is proposed. Given a set of paired or unpaired images, a transformation is learned to translate the input image into the specified domain. The dataset considered is the Cityscapes dataset, in which semantic images are converted into photographic images. A Generative Adversarial Network algorithm, CycleGAN, with a cycle consistency loss is used to transform the semantic image into a photographic (real) image. The cycle consistency loss compares the real image with the output image of the second generator and yields the loss function. The paper shows that with more training time the results become more accurate and image quality improves. The model can be used whenever images from one domain need to be converted into another domain to obtain high-quality images.
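The cycle consistency term described above penalizes the pixel-wise difference between a real image and its reconstruction after a round trip through both generators; CycleGAN uses an L1 penalty. A minimal numpy sketch of just that term (the generators themselves are omitted):

```python
import numpy as np

def cycle_consistency_loss(real, reconstructed):
    # CycleGAN's L1 cycle-consistency term: after mapping an image to
    # the other domain and back (G_B(G_A(x))), the reconstruction
    # should match the original pixel-wise.
    return np.mean(np.abs(real - reconstructed))
```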

