High-speed multimode fiber imaging system based on conditional generative adversarial network

2021 ◽  
Vol 19 (8) ◽  
pp. 081101
Author(s):  
Zhenming Yu ◽  
Zhenyu Ju ◽  
Xinlei Zhang ◽  
Ziyi Meng ◽  
Feifei Yin ◽  
...  
Sensors ◽  
2019 ◽  
Vol 19 (18) ◽  
pp. 3941 ◽  
Author(s):  
Li ◽  
Cai ◽  
Wang ◽  
Zhang ◽  
Tang ◽  
...  

Limited-angle computed tomography (CT) image reconstruction is a challenging problem in the field of CT imaging. In some special applications, limited by the geometric space and mechanical structure of the imaging system, projections can only be collected over a scanning range of less than 90°. We call this severe limited-angle problem the ultra-limited-angle problem; it is difficult to alleviate effectively with traditional iterative reconstruction algorithms. With the development of deep learning, the generative adversarial network (GAN) performs well in image inpainting tasks and can add effective image information to restore missing parts of an image. In this study, exploiting the ability of GANs to generate missing information, the sinogram-inpainting-GAN (SI-GAN) is proposed to restore missing sinogram data and suppress the singularity of the truncated sinogram for ultra-limited-angle reconstruction. We propose a U-Net generator and a patch-design discriminator in SI-GAN to make the network suitable for standard medical CT images. Furthermore, we propose a joint projection-domain and image-domain loss function, in which the weighted image-domain loss is obtained through the back-projection operation. Then, by training the network on paired limited-angle/180° sinograms, we obtain a model that has learned the continuity features of sinogram data. Finally, a classic CT reconstruction method is used to reconstruct images from the estimated sinograms. Simulation studies and real-data experiments indicate that the proposed method performs well in reducing the severe artifacts caused by ultra-limited-angle scanning.
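
The joint projection-domain and image-domain loss described above can be sketched in a few lines of NumPy. This is an illustrative stand-in only: `backproject` below is a naive unfiltered back-projection (the paper would use a proper reconstruction operator), and all function names and the weighting are hypothetical.

```python
import numpy as np

def backproject(sinogram, angles, size):
    """Naive unfiltered back-projection: smear each 1-D projection
    across the image grid (stand-in for a real FBP operator)."""
    img = np.zeros((size, size))
    ys, xs = np.mgrid[0:size, 0:size] - (size - 1) / 2.0
    for proj, theta in zip(sinogram, angles):
        # Detector coordinate of every pixel for this view angle.
        t = xs * np.cos(theta) + ys * np.sin(theta) + (len(proj) - 1) / 2.0
        idx = np.clip(np.round(t).astype(int), 0, len(proj) - 1)
        img += proj[idx]
    return img / len(angles)

def joint_loss(sino_pred, sino_true, angles, size, weight=0.1):
    """Projection-domain L1 plus a weighted image-domain L1, where
    the image-domain term is obtained via back-projection."""
    proj_l1 = np.abs(sino_pred - sino_true).mean()
    img_l1 = np.abs(backproject(sino_pred, angles, size)
                    - backproject(sino_true, angles, size)).mean()
    return proj_l1 + weight * img_l1
```

The back-projection couples the two domains, so errors that are small in sinogram space but produce visible image artifacts are still penalized.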


Author(s):  
Chuyu Wang ◽  
Lei Xie ◽  
Yuancan Lin ◽  
Wei Wang ◽  
Yingying Chen ◽  
...  

The unprecedented success of speech recognition methods has stimulated the wide usage of intelligent audio systems, which creates new attack opportunities for stealing user privacy by eavesdropping on loudspeakers. Existing eavesdropping methods employ a high-speed camera, relying on a line-of-sight (LOS) path to measure object vibrations, or utilize a WiFi MIMO antenna array, which requires quiet environments. In this paper, we explore the possibility of eavesdropping on loudspeakers based on COTS RFID tags, which are prevalently deployed in many corners of our daily lives. We propose Tag-Bug, which focuses on the human voice with its complex frequency bands and performs thru-the-wall eavesdropping on a loudspeaker by capturing sub-mm-level vibrations. Tag-Bug extracts sound characteristics through two means: (1) the vibration effect, where a tag vibrates directly due to sound; (2) the reflection effect, where a tag does not vibrate but senses the signals reflected from nearby vibrating objects. To amplify the influence of vibration signals, we design a new signal feature, referred to as the Modulated Signal Difference (MSD), to reconstruct sound from RF signals. To improve the quality of the reconstructed sound for human voice recognition, we apply a Conditional Generative Adversarial Network (CGAN) to recover the full frequency band from the partial frequency band of the reconstructed sound. Extensive experiments on the USRP platform show that Tag-Bug can successfully capture monotone sound when the loudness is greater than 60 dB. Tag-Bug recognizes spoken digits with 95.3%, 85.3%, and 87.5% precision in free-space, thru-the-brick-wall, and thru-the-insulating-glass eavesdropping, respectively, and recognizes spoken letters with 87% precision in free-space eavesdropping.
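
One plausible reading of the MSD idea above, sketched as a toy in NumPy: subtract a slowly varying reference from the received RF samples so the static reflection component cancels and the small vibration-induced modulation remains. The moving-average reference, window length, and function name are all assumptions for illustration; the paper defines the actual MSD feature.

```python
import numpy as np

def modulated_signal_difference(rf, win=50):
    """Illustrative stand-in for an MSD-style feature: subtract a
    moving-average reference from the RF samples so the static
    component cancels and the tiny vibration-induced modulation
    is what remains."""
    kernel = np.ones(win) / win
    ref = np.convolve(rf, kernel, mode="same")
    return rf - ref

# A static carrier plus a tiny vibration term: the difference signal
# should retain mostly the vibration component.
t = np.arange(2000) / 2000.0
rf = 1.0 + 0.01 * np.sin(2 * np.pi * 200 * t)
msd = modulated_signal_difference(rf)
```

The static reflection dominates the raw signal by two orders of magnitude here; after the subtraction, the residual is on the scale of the vibration term alone.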


2021 ◽  
Author(s):  
Danyang Zhang ◽  
Junhui Zhao ◽  
Lihua Yang ◽  
Yiwen Nie ◽  
Xiangcheng Lin

Sensors ◽  
2019 ◽  
Vol 19 (14) ◽  
pp. 3075 ◽  
Author(s):  
Baoqing Guo ◽  
Gan Geng ◽  
Liqiang Zhu ◽  
Hongmei Shi ◽  
Zujun Yu

Foreign object intrusion is a serious threat to safe high-speed railway operation, so accurate intrusion detection is particularly important. Because samples of intruding foreign objects are scarce during the operational period, artificially generated ones can greatly benefit the development of detection methods. In this paper, we propose a novel method to generate railway intruding object images based on an improved conditional deep convolutional generative adversarial network (C-DCGAN). It consists of a generator and multi-scale discriminators. The loss function is also improved so as to generate samples of high quality and authenticity. The generator is extracted to produce foreign object images from input semantic labels, and the generated objects are composited into the railway scene. To make the generated objects match the scale of real objects at different positions in a railway scene, a scale estimation algorithm based on the constant rail gauge is proposed. Experimental results on the railway intruding object dataset show that the proposed C-DCGAN model outperforms several state-of-the-art methods in both the quality (pixel-wise accuracy, mean intersection-over-union (mIoU), and mean average precision (mAP) of 80.46%, 0.65, and 0.69, respectively) and the diversity (Fréchet Inception Distance (FID) score of 26.87) of generated samples. The mIoU of real/generated pedestrian pairs reaches 0.85, indicating high scale accuracy for the generated intruding objects in the railway scene.
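
The gauge-based scale estimation can be illustrated with simple arithmetic: the physical distance between the rails is a known constant (standard gauge is 1435 mm), so the pixel distance between the rails at a given image row yields a millimetres-per-pixel factor for that row, which fixes the pixel size of any object pasted there. The function names and the specific rail positions below are illustrative, not from the paper.

```python
# Physical rail gauge (standard gauge), the constant the scale
# estimation relies on.
GAUGE_MM = 1435.0

def scale_at_row(rail_left_px, rail_right_px):
    """Millimetres per pixel at one image row, derived from the
    pixel positions of the two rails at that row."""
    return GAUGE_MM / (rail_right_px - rail_left_px)

def object_height_px(real_height_mm, rail_left_px, rail_right_px):
    """Pixel height for a generated object pasted at this row, so
    its apparent size matches the local perspective."""
    return real_height_mm / scale_at_row(rail_left_px, rail_right_px)

# A 1.7 m pedestrian at a row where the gauge spans 200 px should
# appear roughly 237 px tall.
h = object_height_px(1700.0, 400.0, 600.0)
```

Rows closer to the camera have a wider pixel gauge, so the same pedestrian is rendered proportionally larger there, which keeps the composited objects consistent across the scene.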


Sensors ◽  
2021 ◽  
Vol 21 (10) ◽  
pp. 3317
Author(s):  
Xiaotian Wu ◽  
Jiongcheng Li ◽  
Guanxing Zhou ◽  
Bo Lü ◽  
Qingqing Li ◽  
...  

The simple-lens computational imaging method provides an alternative way to achieve high-quality photography. It simplifies the optical front end to a single convex lens and delegates the correction of optical aberrations to a dedicated computational restoration algorithm. Traditional single-convex-lens image restoration is based on optimization theory, which has shortcomings in both efficiency and efficacy. In this paper, we propose a novel Recursive Residual Groups network under a Generative Adversarial Network framework (RRG-GAN) to generate a clear image from the aberration-degraded blurry image. The RRG-GAN network includes a dual attention module, a selective kernel network module, and a residual resizing module to make it more suitable for the non-uniform deblurring task. To validate the restoration algorithm, we collect sharp/aberration-degraded datasets via CODE V simulation. To test practical performance, we build a display-capture lab setup and construct a manually registered dataset. Experimental comparisons and practical tests verify the effectiveness of the proposed method.
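
The recursive-residual-group structure behind a restoration network like this can be sketched abstractly: each group applies a few layers and adds the result back onto its input, and a final global skip means the network only learns a correction to the blurry image. This toy NumPy version uses per-pixel channel mixing in place of real convolutions; the layer shapes and names are assumptions, not the actual RRG-GAN architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

def conv1x1(x, w):
    """Toy 1x1 'convolution': per-pixel mixing of the channel axis."""
    return np.tensordot(x, w, axes=([2], [0]))

def residual_group(x, weights):
    """One residual group: a short stack of channel-mixing layers
    with ReLU, added back onto the group input (local skip)."""
    y = x
    for w in weights:
        y = np.maximum(conv1x1(y, w), 0.0)
    return x + y

def rrg_restore(blurry, groups):
    """Stacked residual groups ending in a global skip connection:
    the output is the blurry input plus a learned correction."""
    feat = blurry
    for g in groups:
        feat = residual_group(feat, g)
    return blurry + feat
```

The skips keep gradients flowing through deep stacks and bias the network toward identity, which suits deblurring, where the output is close to the input.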


2019 ◽  
Vol 2019 ◽  
pp. 1-10
Author(s):  
Dong Zhao ◽  
Long Xu ◽  
Linjie Chen ◽  
Yihua Yan ◽  
Ling-Yu Duan

Overexposure may occur in solar-observation imaging when extremely violent solar bursts happen, meaning that the signal intensity exceeds the dynamic range of the telescope's imaging system, resulting in loss of signal. For example, during a solar flare, the Atmospheric Imaging Assembly (AIA) of the Solar Dynamics Observatory (SDO) often records overexposed images/videos, losing the fine structures of the flare. This paper attempts to retrieve/recover the information lost to overexposure by exploiting deep learning, whose powerful nonlinear representation has made it widely used in image reconstruction and restoration. First, a new model, namely the mask-Pix2Pix network, is proposed for overexposure recovery. It is built on the well-known Pix2Pix network, a conditional generative adversarial network (cGAN). In addition, a hybrid loss function, combining an adversarial loss, a masked L1 loss, and an edge mask (smoothness) loss, is used to address the challenges overexposure poses relative to conventional image restoration. Moreover, a new database of overexposure is established for training the proposed model. Extensive experimental results demonstrate that the proposed mask-Pix2Pix network can recover the information lost to overexposure well and outperforms state-of-the-art models originally designed for image reconstruction tasks.
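
One plausible form of the masked L1 term in the hybrid loss above: compute the L1 difference only over the saturated region, so the network is penalized where information actually has to be hallucinated rather than over the whole frame. This is a minimal sketch under that assumption; the adversarial and edge terms are omitted, and the normalization is illustrative.

```python
import numpy as np

def masked_l1(pred, target, mask):
    """L1 loss restricted to the overexposed region: mask is 1 where
    pixels were saturated and must be recovered, 0 elsewhere."""
    return np.abs((pred - target) * mask).sum() / max(mask.sum(), 1.0)
```

Restricting the pixel loss to the mask keeps well-exposed regions from dominating the gradient, which matters because the saturated area is usually a small fraction of the image.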


Author(s):  
Changfan Zhang ◽  
Hongrun Chen ◽  
Jing He ◽  
Haonan Yang

Focusing on the issue of missing measurement data caused by the complex and changeable working conditions of high-speed trains in operation, this paper proposes a framework for reconstructing missing measurement data based on a generative adversarial network. Suitable parameters are set for the framework, which takes discrete measurement data as input and preprocesses their dimensionality. A convolutional neural network then learns the correlations between the different characteristic values of each device in an unsupervised manner, and the reconstruction accuracy is constrained and improved by exploiting the context similarity of the authentic data. Experiments show that the model maintains high reconstruction accuracy under different extents of missing measurement data, and the reconstructed data conform well to the distribution of the measurements.
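
The context-similarity constraint described above resembles GAN-based inpainting, where a candidate reconstruction is scored by its fidelity to the entries that were actually observed, and missing entries are then filled from the best-matching generator output. The sketch below illustrates that selection step in NumPy; the function names, the L1 context loss, and the pick-best-sample strategy are assumptions for illustration, not the paper's exact procedure.

```python
import numpy as np

def context_loss(candidate, observed, mask):
    """Fidelity of a candidate reconstruction to the *observed*
    entries only (mask = 1 where a measurement exists)."""
    return np.abs((candidate - observed) * mask).sum() / max(mask.sum(), 1.0)

def fill_missing(candidates, observed, mask):
    """Pick, among generator samples, the one most consistent with
    the observed context, then splice the observed values back in."""
    best = min(candidates, key=lambda c: context_loss(c, observed, mask))
    return mask * observed + (1.0 - mask) * best
```

Splicing the observed values back in guarantees that reconstruction never alters real measurements; only the gaps are filled with generated values consistent with their context.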

