A new P-wave reconstruction method for VSP data using conditional generative adversarial network

Author(s):  
Yanwen Wei ◽  
Haohuan Fu ◽  
Yunyue Elita Li ◽  
Jizhong Yang

Sensors ◽  
2021 ◽  
Vol 21 (13) ◽  
pp. 4365
Author(s):  
Kwangyong Jung ◽  
Jae-In Lee ◽  
Nammoon Kim ◽  
Sunjin Oh ◽  
Dong-Wook Seo

Radar target classification is an important task in missile defense systems. State-of-the-art studies using the micro-Doppler frequency have been conducted to classify space object targets. However, existing studies rely heavily on feature extraction methods, so the generalization performance of the classifier is limited and there is room for improvement. Recently, popular approaches for improving classification performance have been to build a convolutional neural network (CNN) architecture with the help of transfer learning and to use a generative adversarial network (GAN) to enlarge the training dataset. However, these methods still have drawbacks. First, they use only one feature to train the network, so they cannot guarantee that the classifier learns sufficiently robust target characteristics. Second, it is difficult to obtain large amounts of data that accurately mimic real-world target features by performing data augmentation via a GAN instead of simulation. To mitigate these problems, we propose a transfer-learning-based parallel network that takes both the spectrogram and the cadence velocity diagram (CVD) as inputs. In addition, we construct an electromagnetic (EM) simulation-based dataset: the radar-received signal is simulated over a variety of target dynamics using the shooting-and-bouncing-rays concept with relative aspect angles, rather than the scattering center reconstruction method. Evaluated on this generated dataset, the proposed method achieved about 0.01 to 0.39% higher accuracy than pre-trained networks with a single input feature.
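
To make the parallel-input idea concrete, here is a minimal PyTorch sketch of a two-branch transfer-learning classifier that fuses spectrogram and CVD features. The backbone choice, feature fusion by concatenation, head sizes, and class count are illustrative assumptions, not the authors' exact configuration.

```python
# Hypothetical sketch of a parallel transfer-learning classifier that fuses
# spectrogram and cadence velocity diagram (CVD) features. Backbone, fusion
# strategy, and the number of target classes are assumptions.
import torch
import torch.nn as nn
from torchvision import models

class ParallelRadarClassifier(nn.Module):
    def __init__(self, num_classes: int = 4):  # class count: assumed
        super().__init__()
        # Two ImageNet-pretrained branches, one per input feature image.
        self.spec_branch = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
        self.cvd_branch = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
        feat_dim = self.spec_branch.fc.in_features  # 512 for resnet18
        # Strip the original classification heads; keep the feature extractors.
        self.spec_branch.fc = nn.Identity()
        self.cvd_branch.fc = nn.Identity()
        # Fuse the two feature vectors by concatenation, then classify.
        self.head = nn.Sequential(
            nn.Linear(2 * feat_dim, 256),
            nn.ReLU(inplace=True),
            nn.Dropout(0.5),
            nn.Linear(256, num_classes),
        )

    def forward(self, spectrogram: torch.Tensor, cvd: torch.Tensor) -> torch.Tensor:
        # Both inputs are rendered as 3-channel images so the pretrained
        # backbones can be reused without modification.
        f = torch.cat([self.spec_branch(spectrogram), self.cvd_branch(cvd)], dim=1)
        return self.head(f)

model = ParallelRadarClassifier()
logits = model(torch.randn(2, 3, 224, 224), torch.randn(2, 3, 224, 224))
print(logits.shape)  # torch.Size([2, 4])
```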


Sensors ◽  
2019 ◽  
Vol 19 (18) ◽  
pp. 3941
Author(s):  
Li ◽  
Cai ◽  
Wang ◽  
Zhang ◽  
Tang ◽  
...  

Limited-angle computed tomography (CT) image reconstruction is a challenging problem in CT imaging. In some special applications, limited by the geometric space and mechanical structure of the imaging system, projections can only be collected over a scanning range of less than 90°. We call this severe limited-angle problem the ultra-limited-angle problem, which is difficult to alleviate effectively with traditional iterative reconstruction algorithms. With the development of deep learning, the generative adversarial network (GAN) has performed well in image inpainting tasks and can add effective image information to restore missing parts of an image. In this study, exploiting the ability of GANs to generate missing information, the sinogram-inpainting GAN (SI-GAN) is proposed to restore missing sinogram data and suppress the singularity of the truncated sinogram for ultra-limited-angle reconstruction. We design a U-Net generator and a patch-based discriminator in SI-GAN to make the network suitable for standard medical CT images. Furthermore, we propose a joint projection-domain and image-domain loss function, in which the weighted image-domain loss is obtained through the back-projection operation. By feeding paired limited-angle/180° sinograms into the network for training, we obtain a trained model that has learned the continuity of sinogram data. Finally, a classic CT reconstruction method is used to reconstruct the images from the estimated sinograms. Simulation studies and real-data experiments indicate that the proposed method effectively reduces the serious artifacts caused by ultra-limited-angle scanning.
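
The joint projection-domain/image-domain loss can be sketched roughly as below. The differentiable back-projection operator is passed in as a placeholder, and the weights `lambda_sino`/`lambda_img` and the adversarial formulation are assumptions for exposition, not the paper's exact values.

```python
# Illustrative sketch of an SI-GAN-style joint loss: adversarial term plus
# sinogram (projection-domain) and back-projected (image-domain) fidelity.
import torch
import torch.nn.functional as F

def joint_loss(fake_sino, real_sino, d_fake_logits, back_project,
               lambda_sino=10.0, lambda_img=1.0):
    """fake_sino/real_sino: inpainted vs. ground-truth 180-degree sinograms.
    d_fake_logits: patch discriminator output on the inpainted sinogram.
    back_project: a differentiable operator mapping sinogram -> image."""
    # Adversarial term: push the generator to fool the patch discriminator.
    adv = F.binary_cross_entropy_with_logits(
        d_fake_logits, torch.ones_like(d_fake_logits))
    # Projection-domain fidelity on the sinogram itself.
    sino_l1 = F.l1_loss(fake_sino, real_sino)
    # Image-domain fidelity after back-projection, as described above.
    img_l1 = F.l1_loss(back_project(fake_sino), back_project(real_sino))
    return adv + lambda_sino * sino_l1 + lambda_img * img_l1

# Dummy usage with an identity placeholder standing in for back-projection:
sino = torch.rand(1, 1, 180, 256)
loss = joint_loss(sino * 0.9, sino, torch.zeros(1, 1, 20, 30), lambda s: s)
print(loss.item())
```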


2021 ◽  
Vol 11 (19) ◽  
pp. 9065
Author(s):  
Myungjin Choi ◽  
Jee-Hyeok Park ◽  
Qimeng Zhang ◽  
Byeung-Sun Hong ◽  
Chang-Hun Kim

We propose a novel method for efficiently generating a highly refined normal map for screen-space fluid rendering. Because filtering the normal map is crucial to the quality of the final screen-space fluid rendering, we employ a conditional generative adversarial network (cGAN) as a filter that learns a deep normal map representation, thereby refining the low-quality normal map. In particular, we designed a novel loss function dedicated to refining the normal map information, and we use a specific set of auxiliary features to train the cGAN generator to learn features that are more robust with respect to edge details. Additionally, we constructed a dataset of six typical scenes to enable effective demonstrations of multiple types of fluid simulation. Experiments indicated that our generator was able to infer clearer and more detailed features for this dataset than a basic screen-space fluid rendering method. Moreover, in some cases, the results generated by our method were even smoother than those generated by the conventional surface reconstruction method. Our method improves the fluid rendering results via the high-quality normal map while preserving the advantages of screen-space fluid rendering methods and traditional surface reconstruction methods, including a computation time that is independent of the number of simulation particles and a spatial resolution that depends only on the image resolution.
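
The paper's exact loss is not reproduced in the abstract, but a plausible normal-map refinement loss in this spirit combines a pixelwise data term with an angular term that penalizes deviation in normal direction. The weighting and the angular formulation below are assumptions, not the authors' loss.

```python
# A hedged sketch of a normal-map refinement loss: L1 data term plus an
# angular penalty between predicted and reference unit normals.
import torch
import torch.nn.functional as F

def normal_refinement_loss(pred, target, w_l1=1.0, w_ang=0.5):
    """pred/target: (B, 3, H, W) normal maps with unit-length normals."""
    l1 = F.l1_loss(pred, target)
    # Angular error: 1 - cos(theta) between predicted and reference normals,
    # which emphasizes directional accuracy around edges.
    cos = F.cosine_similarity(pred, target, dim=1)  # (B, H, W)
    ang = (1.0 - cos).mean()
    return w_l1 * l1 + w_ang * ang

n = F.normalize(torch.randn(2, 3, 64, 64), dim=1)
print(normal_refinement_loss(n + 0.05 * torch.randn_like(n), n).item())
```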


2021 ◽  
Vol 2021 ◽  
pp. 1-12
Author(s):  
Yuqing Zhao ◽  
Guangyuan Fu ◽  
Hongqiao Wang ◽  
Shaolei Zhang ◽  
Min Yue

The convolutional neural network has achieved good results in the superresolution reconstruction of single-frame images. However, because infrared images lack detail and suffer from poor contrast and blurred edges, superresolution reconstruction of infrared images that preserves the edge structure with good visual quality remains challenging. To address the low resolution and unclear edges of infrared images, this work proposes a two-stage generative adversarial network model that reconstructs realistic superresolution images from 4× downsampled infrared images. The first stage of the network focuses on recovering the overall contour information of the image to obtain clear edges; the second stage focuses on recovering the detailed feature information of the image and has a stronger ability to express details. The infrared image superresolution reconstruction method proposed in this work yields highly realistic visual results and good objective quality evaluation scores.
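
A minimal sketch of the two-stage generator idea is given below: stage one upsamples 2× and targets global contours, stage two upsamples another 2× and targets fine detail, giving 4× overall. Layer counts and channel widths are illustrative assumptions; the adversarial training loop is omitted.

```python
# Two-stage 4x super-resolution generator sketch: contours first, details second.
import torch
import torch.nn as nn

def stage(in_ch=1, width=64):
    # A small conv stack followed by sub-pixel (PixelShuffle) 2x upsampling.
    return nn.Sequential(
        nn.Conv2d(in_ch, width, 3, padding=1), nn.ReLU(inplace=True),
        nn.Conv2d(width, width, 3, padding=1), nn.ReLU(inplace=True),
        nn.Conv2d(width, in_ch * 4, 3, padding=1),
        nn.PixelShuffle(2),  # rearranges channels into 2x spatial resolution
    )

class TwoStageGenerator(nn.Module):
    def __init__(self):
        super().__init__()
        self.stage1 = stage()  # recovers overall contour/edge structure
        self.stage2 = stage()  # recovers fine detail on top of stage one

    def forward(self, lr):
        mid = self.stage1(lr)          # 2x intermediate output
        return mid, self.stage2(mid)   # 4x final output

g = TwoStageGenerator()
mid, sr = g(torch.randn(1, 1, 32, 32))
print(mid.shape, sr.shape)  # (1, 1, 64, 64) (1, 1, 128, 128)
```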


Sensors ◽  
2021 ◽  
Vol 21 (9) ◽  
pp. 2978
Author(s):  
Hongtao Zhang ◽  
Yuki Shinomiya ◽  
Shinichi Yoshida

The diagnosis of brain pathologies usually involves imaging to analyze the condition of the brain. Magnetic resonance imaging (MRI) is widely used in the diagnosis of brain disorders. The image quality of MRI depends on the magnetostatic field strength and the scanning time: scanners with lower field strengths suffer from low resolution and high imaging cost, and scanning takes a long time. Traditional MRI super-resolution methods generally formulate an optimization problem in terms of prior information and solve it with an iterative approach at a large time cost. Many deep-learning-based methods have emerged to replace such traditional methods. MRI super-resolution based on deep learning can effectively improve resolution through a three-dimensional convolutional neural network; however, the training costs are relatively high. In this paper, we propose using two-dimensional super-resolution technology for the super-resolution reconstruction of MRI images. In the first reconstruction, we choose a scale factor of 2 and use half of the MRI slices of the volume as input. We utilize a receptive field block enhanced super-resolution generative adversarial network (RFB-ESRGAN), which is superior to other super-resolution technologies in terms of texture and frequency information, and then reassemble the super-resolved slices into the MRI volume. In the second reconstruction, the volume produced by the first reconstruction contains only half of the slices, so values are still missing. In our previous work, we adopted traditional interpolation, which left a visible gap in the quality of the reconstructed images. We therefore propose a noise-based super-resolution network (nESRGAN), in which the added noise provides additional possibilities for texture restoration. We use nESRGAN to further restore MRI resolution and high-frequency information. Finally, we achieve 3D reconstruction of brain MRI images through two super-resolution reconstructions. Our proposed method is superior to deep-learning-based 3D super-resolution technology in terms of perceptual effect and image quality evaluation metrics.
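
The noise-injection idea behind nESRGAN can be sketched as a small feature-map block: Gaussian noise added with a learned per-channel scale, in the style popularized by StyleGAN. This is a sketch of the mechanism, not the paper's exact architecture.

```python
# Illustrative noise-injection block for enriching texture restoration in an
# ESRGAN-style generator. Placement and scaling scheme are assumptions.
import torch
import torch.nn as nn

class NoiseInjection(nn.Module):
    def __init__(self, channels: int):
        super().__init__()
        # One learnable scale per channel, initialized to zero so training
        # starts from noise-free behavior and learns how much noise to admit.
        self.scale = nn.Parameter(torch.zeros(1, channels, 1, 1))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Fresh spatial noise per forward pass, broadcast across channels.
        noise = torch.randn(x.size(0), 1, x.size(2), x.size(3), device=x.device)
        return x + self.scale * noise

feat = torch.randn(2, 64, 48, 48)
print(NoiseInjection(64)(feat).shape)  # torch.Size([2, 64, 48, 48])
```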


Sensors ◽  
2021 ◽  
Vol 21 (23) ◽  
pp. 8021
Author(s):  
Raul Castaneda ◽  
Carlos Trujillo ◽  
Ana Doblas

The conventional reconstruction method for off-axis digital holographic microscopy (DHM) relies on computational processing that involves spatial filtering of the sample spectrum and tilt compensation between the interfering waves to accurately reconstruct the phase of a biological sample. Additional computational procedures, such as numerical focusing, may be needed to reconstruct distortion-free quantitative phase images, depending on the optical configuration of the DHM system. Regardless of the implementation, any DHM computational processing leads to long processing times, hampering the use of DHM for video-rate rendering of dynamic biological processes. In this study, we report on a conditional generative adversarial network (cGAN) for robust and fast quantitative phase imaging in DHM. The reconstructed phase images provided by the GAN model present stable background levels, enhancing the visualization of the specimens under different experimental conditions in which the conventional approach often fails. The proposed learning-based method was trained and validated using human red blood cells recorded on an off-axis Mach–Zehnder DHM system. After proper training, the proposed GAN yields a computationally efficient method, reconstructing DHM images seven times faster than conventional computational approaches.
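
A single training step for a hologram-to-phase cGAN of this kind might look like the pix2pix-style sketch below. The generator/discriminator definitions, optimizers, and the L1 weight are placeholders assumed for illustration, not the authors' configuration.

```python
# Hedged pix2pix-style cGAN training step: the discriminator sees the phase
# image conditioned on (concatenated with) the input hologram.
import torch
import torch.nn.functional as F

def cgan_step(G, D, opt_g, opt_d, holo, phase_gt, l1_weight=100.0):
    """holo: recorded hologram batch; phase_gt: reference phase images."""
    # --- Discriminator update: real vs. generated, conditioned on holo ---
    fake = G(holo)
    d_real = D(torch.cat([holo, phase_gt], dim=1))
    d_fake = D(torch.cat([holo, fake.detach()], dim=1))
    loss_d = (F.binary_cross_entropy_with_logits(d_real, torch.ones_like(d_real))
              + F.binary_cross_entropy_with_logits(d_fake, torch.zeros_like(d_fake)))
    opt_d.zero_grad(); loss_d.backward(); opt_d.step()
    # --- Generator update: fool D while staying close to the reference phase ---
    d_fake = D(torch.cat([holo, fake], dim=1))
    loss_g = (F.binary_cross_entropy_with_logits(d_fake, torch.ones_like(d_fake))
              + l1_weight * F.l1_loss(fake, phase_gt))
    opt_g.zero_grad(); loss_g.backward(); opt_g.step()
    return loss_d.item(), loss_g.item()
```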

