Imaging Model: Recently Published Documents

Total documents: 316 (last five years: 103)
H-index: 23 (last five years: 3)

Micromachines, 2022, Vol. 13 (1), pp. 94
Author(s): Xiaozhen Ren, Yanwen Bai, Yingying Niu, Yuying Jiang

In order to solve the problems of long image acquisition times and massive data processing in terahertz time-domain spectroscopy imaging systems, a novel fast terahertz imaging model combining group sparsity and nonlocal self-similarity (GSNS) is proposed in this paper. In GSNS, the structural similarity and sparsity of image patches in both two-dimensional and three-dimensional space are exploited to obtain high-quality terahertz images with clear detail and preserved edges. Furthermore, to overcome the high computational cost of matrix inversion in traditional split Bregman iteration, an acceleration scheme based on the conjugate gradient method is proposed to solve the terahertz imaging model more efficiently. Experimental results demonstrate that the proposed approach leads to better terahertz image reconstruction performance at low sampling rates.
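The acceleration idea in this abstract is replacing the explicit matrix inversion inside split Bregman with conjugate-gradient (CG) iterations. As an illustrative sketch (not the authors' implementation), CG solves a symmetric positive-definite normal-equation subproblem of the form (Phi^T Phi + mu I) x = b, the kind that arises in each split Bregman step, without ever forming an inverse:

```python
import numpy as np

def conjugate_gradient(A, b, tol=1e-8, max_iter=200):
    """Solve A x = b for symmetric positive-definite A without inverting A."""
    x = np.zeros_like(b)
    r = b - A @ x          # residual
    p = r.copy()           # search direction
    rs = r @ r
    for _ in range(max_iter):
        Ap = A @ p
        alpha = rs / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs) * p
        rs = rs_new
    return x

# Toy stand-in for a split Bregman subproblem: (Phi^T Phi + mu * I) x = rhs.
# Phi, mu, and the sizes are made up for illustration.
rng = np.random.default_rng(0)
Phi = rng.standard_normal((40, 20))
mu = 0.1
M = Phi.T @ Phi + mu * np.eye(20)
rhs = rng.standard_normal(20)
x_cg = conjugate_gradient(M, rhs)
```

Each CG iteration needs only matrix-vector products, so the O(n^3) cost of a direct inverse is avoided, which is the efficiency gain the abstract claims.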


2021
Author(s): Yiyu Sun, Yanqiu Li, Guanghui Liao, Miao Yuan, Pengzhi Wei, ...

2021, Vol. 11 (1)
Author(s): Li Qiao, Mingfu Wang, Zheng Jin, Danbo Mao

Abstract: Image non-uniformity directly affects the application of EMCCDs in various disciplines, and the proposed method significantly improves the uniformity of the EMCCD output image. A "reverse split and forward recovery" correction algorithm is derived by analyzing the EMCCD imaging model, and a comprehensive non-uniformity correction function model is established. The CCD220, an 8-tap EMCCD chip from the British company e2v, is used for experimental verification. The results show that after comprehensive correction, the consistency of the light-response characteristic curves and multiplication-gain curves across the EMCCD channels is markedly improved, and the photo response non-uniformity (PRNU) of the output image is reduced from 24.5% to 4.1%, which proves the effectiveness of the proposed method.
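The paper's "reverse split and forward recovery" algorithm is more elaborate than what can be shown here, but a minimal sketch of the underlying idea, per-channel gain/offset (dark and flat-field) correction, together with the PRNU metric the abstract reports, might look as follows. The function names, channel count, and frame shapes are illustrative assumptions:

```python
import numpy as np

def two_point_correction(raw, dark, flat):
    """Remove fixed-pattern non-uniformity using a dark frame and a flat field.

    Pixel gain is estimated from (flat - dark) and normalised to its mean,
    a standard two-point correction (hypothetical stand-in for the paper's
    more elaborate per-channel model)."""
    gain = (flat - dark).mean() / np.clip(flat - dark, 1e-6, None)
    return (raw - dark) * gain

def prnu(image):
    """Photo response non-uniformity of a uniformly lit frame: std/mean, in %."""
    return 100.0 * image.std() / image.mean()

# Simulated 8-channel sensor (mirroring the 8-tap CCD220) with uneven gain.
rng = np.random.default_rng(1)
true_gain = 1.0 + 0.1 * rng.standard_normal((8, 64))
dark = 5.0 + 0.5 * rng.standard_normal((8, 64))
flat = dark + 100.0 * true_gain   # uniform scene at calibration flux
raw = dark + 80.0 * true_gain     # uniform scene at lower flux
corrected = two_point_correction(raw, dark, flat)
```

After correction the PRNU of the uniformly lit frame drops essentially to zero, which is the same figure of merit (24.5% to 4.1%) the abstract uses.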


2021, Vol. 11 (12), pp. 2907-2917
Author(s): P. V. Deepa, S. Joseph Jawhar, J. Merry Geisa

The field of nanotechnology has lately gained prominence owing to the improved accuracy of patient diagnosis using Computer-Aided Diagnosis (CAD). A nano-scale imaging model enables a high level of precision and accuracy in determining whether a brain tumour is malignant or benign, contributing to a better standard of living for people with brain tumours. In this study, we present a novel semantic nano-segmentation methodology for the nanoscale classification of brain tumours. The suggested Advanced Convolutional Neural Network (A-CNN) based semantic nano-segmentation, which employs ResNet-50, will aid radiologists in detecting brain tumours even when lesions are small. The input is a nano-image, and the tumour image is partitioned using semantic nano-segmentation, which achieves average Dice and SSIM values of 0.9704 and 0.2133, respectively. The suggested semantic nano-segmentation achieves 93.2% and 92.7% accuracy for benign and malignant tumour images, respectively, while the A-CNN methodology attains correct-segmentation accuracy of 99.57% and 95.7% for malignant and benign images, respectively. This nano-method is designed to detect tumour areas at nanometre (nm) scale and hence assess the illness accurately. The ROC curve implies that, in terms of true-positive values, the suggested technique outperforms earlier approaches. A comparative analysis of ResNet-50 with training/testing splits of 90%-10%, 80%-20%, and 70%-30% indicates the utility of the suggested work.
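The Dice value of 0.9704 reported above is the standard overlap score for segmentation masks, 2|A∩B| / (|A|+|B|). A short sketch of how such a score is computed on binary masks (the toy masks below are invented for illustration, not the paper's data):

```python
import numpy as np

def dice_coefficient(pred, target, eps=1e-7):
    """Dice overlap of two binary masks: 2|A ∩ B| / (|A| + |B|)."""
    pred = pred.astype(bool)
    target = target.astype(bool)
    inter = np.logical_and(pred, target).sum()
    return (2.0 * inter + eps) / (pred.sum() + target.sum() + eps)

# Toy example: predicted tumour region vs. ground truth.
gt = np.zeros((16, 16), dtype=bool)
gt[4:12, 4:12] = True             # ground-truth mask, 64 pixels
pred = np.zeros_like(gt)
pred[5:12, 4:12] = True           # prediction, 56 pixels, fully inside gt
score = dice_coefficient(pred, gt)  # 2*56 / (56 + 64)
```

A Dice of 1.0 means the predicted and true tumour regions coincide exactly; values near 0.97, as reported, indicate very high overlap.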


Author(s): Hennadii Khudov, Igor Ruban, Oleksandr Makoveichuk, Yevhen Stepanenko, ...

The paper proposes an improved imaging model in the presence of multiplicative, spatially extended cloaking interference. The model takes into account the effect of multiplicative masking interference; to simplify the calculation of image brightness in the distorted region, a diagram technique is used. Unlike known models, it accounts for the concentration of the distorting medium in a thin, low-lying layer, the primary reflection of solar radiation from the upper boundary of the distorting layer, and the subsequent multiple re-reflections of transmitted visible-wavelength radiation between the earth's surface and the upper boundary of the distorting layer. A technique for finding and applying the reflection and re-reflection coefficients to restore distorted images is proposed. The results of experimental studies are presented; for the experiment, imagery of the territory of Iraq during the 2003 "Freedom for Iraq" hostilities was selected.

Keywords: image, model, multiplicative, extended cloaking interference, spacecraft, reflection, coefficient
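The core of a multiplicative interference model is that the observed brightness is the true scene scaled pixel-wise by a masking factor, so restoration amounts to dividing out an estimate of that factor. A deliberately simple toy illustration (the paper's model additionally handles layer reflections and re-reflections, which are omitted here):

```python
import numpy as np

# Toy multiplicative masking: observed = scene * mask. If the mask (here a
# made-up attenuation stripe) can be estimated, the scene is recovered by
# pixel-wise division. All values below are synthetic.
rng = np.random.default_rng(2)
scene = rng.uniform(0.2, 1.0, (32, 32))          # "true" ground reflectance
mask = np.ones((32, 32))
mask[:, 10:20] = 0.4                             # spatially extended interference
observed = scene * mask
restored = observed / np.clip(mask, 1e-6, None)  # assumes the mask is known/estimated
```

In practice the hard part, and the paper's contribution, is estimating the effective mask from the reflection and re-reflection coefficients rather than assuming it is known.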


2021, Vol. 13 (21), pp. 4429
Author(s): Siyuan Zhao, Jiacheng Ni, Jia Liang, Shichao Xiong, Ying Luo

Synthetic aperture radar (SAR) imaging has developed rapidly in recent years. Although traditional sparse-optimization imaging algorithms achieve effective results, their shortcomings are slow imaging speed, a large number of parameters, and high computational complexity. To address these problems, an end-to-end SAR deep-learning imaging algorithm is proposed. Building on existing SAR sparse imaging algorithms, the SAR imaging model is first rewritten from the real-valued model into complex-signal form. Second, instead of arranging the two-dimensional echo data into a vector and constructing a large observation matrix, the algorithm derives a neural-network imaging model from the iterative soft-thresholding algorithm (ISTA) directly in the two-dimensional data domain, and then reconstructs the observation scene through the stacking and unfolding of a multi-layer network. Finally, experiments on simulated data and measured data of three targets verify that the proposed algorithm is superior to the traditional sparse algorithm in terms of imaging quality, imaging time, and number of parameters.
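The network described above is an "unfolding" of ISTA: each layer mimics one iteration of the classical algorithm. For reference, the textbook ISTA iteration that the paper starts from, gradient step followed by soft thresholding, can be sketched in a few lines (real-valued, one-dimensional, and with made-up sizes, unlike the paper's complex two-dimensional formulation):

```python
import numpy as np

def soft_threshold(x, lam):
    """Proximal operator of the l1 norm."""
    return np.sign(x) * np.maximum(np.abs(x) - lam, 0.0)

def ista(Phi, y, lam=0.05, n_iter=500):
    """ISTA for min_x 0.5*||Phi x - y||^2 + lam*||x||_1."""
    L = np.linalg.norm(Phi, 2) ** 2     # Lipschitz constant of the gradient
    x = np.zeros(Phi.shape[1])
    for _ in range(n_iter):
        grad = Phi.T @ (Phi @ x - y)    # gradient of the data-fit term
        x = soft_threshold(x - grad / L, lam / L)
    return x

# Synthetic compressed-sensing example: a 3-sparse "scene" observed through
# a random measurement matrix (illustrative stand-in for the SAR operator).
rng = np.random.default_rng(3)
Phi = rng.standard_normal((60, 120)) / np.sqrt(60)
x_true = np.zeros(120)
x_true[[5, 40, 90]] = [1.0, -2.0, 1.5]
y = Phi @ x_true
x_hat = ista(Phi, y)
```

In the learned version, the step size and threshold in each unfolded layer become trainable parameters, which is what buys the reported speed and quality gains over running plain ISTA to convergence.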


2021, Vol. 2074 (1), pp. 012051
Author(s): Ni Yin

Abstract: In order to improve environmental art design capability, a method based on computer three-dimensional animation technology is proposed, and a three-dimensional animation imaging model for environmental art design is constructed. RGB decomposition is used to extract the color components of the three-dimensional animation image, and a color-template space-projection algorithm performs block-fusion processing of the image. Simulation results show that the method yields better three-dimensional recognition and feature-reconstruction performance, thereby improving the three-dimensional visual presentation of environmental art design.
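The abstract's pipeline of RGB decomposition followed by block-wise processing and fusion can be sketched generically. The helper names, the per-block operation (mean removal), and the block size below are all hypothetical; the paper's color-template space-projection step is not reproduced:

```python
import numpy as np

def split_rgb(image):
    """Extract the R, G, B colour components of an H x W x 3 frame."""
    return image[..., 0], image[..., 1], image[..., 2]

def fuse_blocks(channel, block=8):
    """Process a channel in non-overlapping blocks (here: per-block mean
    removal, a stand-in for the paper's projection step) and stitch the
    blocks back into one image."""
    h, w = channel.shape
    out = channel.astype(float)
    for i in range(0, h, block):
        for j in range(0, w, block):
            patch = out[i:i+block, j:j+block]
            out[i:i+block, j:j+block] = patch - patch.mean()
    return out

rng = np.random.default_rng(4)
frame = rng.integers(0, 256, (32, 32, 3), dtype=np.uint8)  # synthetic frame
r, g, b = split_rgb(frame)
r_fused = fuse_blocks(r)
```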


Author(s): Sai Gokul Hariharan, Christian Kaethner, Norbert Strobel, Markus Kowarschik, Rebecca Fahrig, ...

Abstract Purpose: Since guidance based on X-ray imaging is an integral part of interventional procedures, continuous efforts are made to reduce the exposure of patients and clinical staff to ionizing radiation. Even though a reduction in the X-ray dose may lower the associated radiation risks, it is likely to impair the quality of the acquired images, potentially making it more difficult for physicians to carry out their procedures. Method: We present a robust learning-based denoising strategy involving model-based simulations of low-dose X-ray images during the training phase. The method also utilizes a data-driven normalization step, based on an X-ray imaging model, to stabilize the mixed signal-dependent noise associated with X-ray images. We thoroughly analyze the method's sensitivity to a mismatch between the dose levels used for training and for application, and study how differing noise models used when training for low- and very-low-dose X-ray images affect the denoising results. Results: A quantitative and qualitative analysis based on acquired phantom and clinical data shows that the proposed learning-based strategy is stable across dose levels and yields excellent denoising results if an accurate noise model is applied. We also found that severe artifacts can occur when the noise characteristics of the training images differ significantly from those of the images actually processed, a problem that is especially acute at very low dose levels. A thorough analysis of our experimental results further showed that viewing the results from the perspective of denoising via thresholding of sub-band coefficients is very helpful for understanding the proposed learning-based denoising strategy.
Conclusion: The proposed learning-based denoising strategy provides scope for significant X-ray dose reduction without the loss of important image information, provided the characteristics of the noise are accurately accounted for during the training phase.
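The "data-driven normalization step" above stabilizes mixed signal-dependent (Poisson-Gaussian) noise so its spread no longer depends on the signal level. One standard, publicly known tool for this, used here purely as an analogy to the paper's own normalization, is the generalized Anscombe transform:

```python
import numpy as np

def generalized_anscombe(x, gain=1.0, sigma=0.0):
    """Variance-stabilising transform for Poisson-Gaussian data.

    With gain=1 and sigma=0 this reduces to the classical Anscombe
    transform 2*sqrt(x + 3/8), which maps Poisson noise of any rate
    to noise with standard deviation close to 1."""
    return (2.0 / gain) * np.sqrt(
        np.clip(gain * x + 3.0 / 8.0 * gain**2 + sigma**2, 0.0, None)
    )

# Two simulated dose levels: Poisson noise has variance equal to its mean,
# so the raw noise spread differs strongly between them...
rng = np.random.default_rng(5)
noisy_low = rng.poisson(10.0, 100_000).astype(float)    # "very low dose"
noisy_high = rng.poisson(200.0, 100_000).astype(float)  # "low dose"
# ...but after the transform both have noise spread close to 1.
std_low = generalized_anscombe(noisy_low).std()
std_high = generalized_anscombe(noisy_high).std()
```

Once the noise is (approximately) signal-independent, a single denoiser can be trained and applied consistently, which is why a mismatch between the assumed and actual noise model, as the Results section notes, degrades the output.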

