C-SURE: Shrinkage Estimator and Prototype Classifier for Complex-Valued Deep Learning

Author(s):  
Rudrasis Chakraborty ◽  
Yifei Xing ◽  
Minxuan Duan ◽  
Stella X. Yu
Author(s):  
Yibin Zhang ◽  
Jie Wang ◽  
Jinlong Sun ◽  
Bamidele Adebisi ◽  
Haris Gacanin ◽  
...  

Sensors ◽  
2019 ◽  
Vol 19 (18) ◽  
pp. 4050 ◽  
Author(s):  
Vahab Khoshdel ◽  
Ahmed Ashraf ◽  
Joe LoVetri

We present a deep learning method used in conjunction with dual-modal microwave-ultrasound imaging to produce tomographic reconstructions of the complex-valued permittivity of numerical breast phantoms. We also assess tumor segmentation performance using the reconstructed permittivity as a feature. The contrast source inversion (CSI) technique is used to create the complex-permittivity images of the breast, with ultrasound-derived tissue regions utilized as prior information. However, imaging artifacts make the detection of tumors difficult. To overcome this issue, we train a convolutional neural network (CNN) that takes the dual-modal CSI reconstruction as input and attempts to produce the true image of the complex tissue permittivity. The neural network consists of successive convolutional and downsampling layers, followed by successive deconvolutional and upsampling layers, based on the U-Net architecture. To train the neural network, the input-output pairs consist of CSI’s dual-modal reconstructions along with the true numerical phantom images from which the microwave scattered field was synthetically generated. The reconstructed permittivity images produced by the CNN show that the network is not only able to remove the artifacts typical of CSI reconstructions, but can also improve the detectability of tumors. The performance of the CNN is assessed using four-fold cross-validation on our dataset, which shows improvement over CSI in terms of both reconstruction error and tumor segmentation performance.
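To make the described architecture concrete, below is a minimal sketch of a U-Net-style artifact-removal network of the kind the abstract outlines, assuming the complex permittivity is carried as two real channels (real and imaginary parts). The class name, depth, channel counts, and image size are illustrative assumptions, not the authors' exact configuration.

```python
# Hypothetical sketch of a U-Net-style artifact-removal network (PyTorch).
# The 2-channel input/output encodes complex permittivity as (real, imag);
# depths and channel counts are illustrative, not the paper's configuration.
import torch
import torch.nn as nn

def conv_block(in_ch, out_ch):
    # Two 3x3 convolutions with ReLU, as in a standard U-Net stage.
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, 3, padding=1), nn.ReLU(inplace=True),
        nn.Conv2d(out_ch, out_ch, 3, padding=1), nn.ReLU(inplace=True),
    )

class ArtifactRemovalUNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.enc1 = conv_block(2, 32)     # input: dual-modal CSI image (re, im)
        self.enc2 = conv_block(32, 64)
        self.pool = nn.MaxPool2d(2)       # downsampling
        self.up = nn.ConvTranspose2d(64, 32, 2, stride=2)  # deconvolution/upsampling
        self.dec1 = conv_block(64, 32)    # 64 = 32 (skip) + 32 (upsampled)
        self.out = nn.Conv2d(32, 2, 1)    # output: refined (re, im) permittivity

    def forward(self, x):
        e1 = self.enc1(x)
        e2 = self.enc2(self.pool(e1))
        d1 = self.dec1(torch.cat([self.up(e2), e1], dim=1))  # skip connection
        return self.out(d1)

# Training pairs: CSI dual-modal reconstructions as input, true phantoms as target.
model = ArtifactRemovalUNet()
pred = model(torch.randn(1, 2, 128, 128))  # e.g. a 128x128 permittivity map
```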


2021 ◽  
Vol 9 ◽  
Author(s):  
Hassan Haji-Valizadeh ◽  
Rui Guo ◽  
Selcuk Kucukseymen ◽  
Yankama Tuyen ◽  
Jennifer Rodriguez ◽  
...  

Purpose: The purpose of this study was to compare the performance of deep learning networks trained with complex-valued and magnitude images in suppressing the aliasing artifact for highly accelerated real-time cine MRI.

Methods: Two 3D U-net models (Complex-Valued-Net and Magnitude-Net) were implemented to suppress aliasing artifacts in real-time cine images. ECG-segmented cine images (n = 503) generated from both complex k-space data and magnitude-only DICOM were used to synthesize radial real-time cine MRI. Complex-Valued-Net and Magnitude-Net were trained with fully sampled and synthesized radial real-time cine pairs generated from highly undersampled (12-fold) complex k-space and DICOM images, respectively. Real-time cine was prospectively acquired in 29 patients with a 12-fold accelerated free-breathing tiny golden-angle radial sequence and reconstructed with both Complex-Valued-Net and Magnitude-Net. Cardiac function, left-ventricular (LV) structure, and subjective image quality [1 (non-diagnostic) to 5 (excellent)] were calculated from Complex-Valued-Net– and Magnitude-Net–reconstructed real-time cine datasets and compared to those of ECG-segmented cine (reference).

Results: Free-breathing real-time cine reconstructed by both networks had high correlation (all R² > 0.7) and good agreement (all p > 0.05) with standard clinical ECG-segmented cine with respect to LV function and structural parameters. Real-time cine reconstructed by Complex-Valued-Net had superior image quality compared to images from Magnitude-Net in terms of myocardial edge sharpness (Complex-Valued-Net = 3.5 ± 0.5; Magnitude-Net = 2.6 ± 0.5), temporal fidelity (Complex-Valued-Net = 3.1 ± 0.4; Magnitude-Net = 2.1 ± 0.4), and artifact suppression (Complex-Valued-Net = 3.1 ± 0.5; Magnitude-Net = 2.0 ± 0.0), though all remained inferior to those of ECG-segmented cine (4.1 ± 1.4, 3.9 ± 1.0, and 4.0 ± 1.1, respectively).

Conclusion: Compared to Magnitude-Net, Complex-Valued-Net produced improved subjective image quality for reconstructed real-time cine images and showed no difference in quantitative measures of LV function and structure.
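For context, complex-valued networks of this kind are commonly built from complex convolutions realized with a pair of real-valued convolution kernels. The sketch below shows that standard construction and why it preserves the phase information that a magnitude-only network discards; the class name and all sizes are hypothetical, not the authors' implementation.

```python
# Hypothetical sketch of a complex-valued convolution of the kind such networks
# build on: (a + ib) * (w + iv) = (aw - bv) + i(av + bw). Not the authors' code.
import torch
import torch.nn as nn

class ComplexConv2d(nn.Module):
    def __init__(self, in_ch, out_ch, kernel_size, padding=0):
        super().__init__()
        self.conv_r = nn.Conv2d(in_ch, out_ch, kernel_size, padding=padding)
        self.conv_i = nn.Conv2d(in_ch, out_ch, kernel_size, padding=padding)

    def forward(self, x_re, x_im):
        # Complex multiplication distributed over the two real convolutions.
        out_re = self.conv_r(x_re) - self.conv_i(x_im)
        out_im = self.conv_r(x_im) + self.conv_i(x_re)
        return out_re, out_im

# A magnitude-only network would instead see torch.sqrt(x_re**2 + x_im**2),
# discarding the k-space phase that the complex-valued model can exploit.
layer = ComplexConv2d(1, 16, 3, padding=1)
re, im = layer(torch.randn(1, 1, 64, 64), torch.randn(1, 1, 64, 64))
```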


2020 ◽  
Author(s):  
Hao Gu ◽  
Guangwei Qing ◽  
Yu Wang ◽  
Sheng Hong ◽  
Guan Gui ◽  
...  

Drone-aided ubiquitous applications play increasingly important roles in our daily life. Accurate recognition of drones is required in aviation management because of their potential risks and even disasters. Radio frequency (RF) fingerprinting-based recognition using deep learning is considered an effective approach for extracting hidden abstract features from the RF data of drones, but existing deep learning-based methods either impose a high computational burden or achieve low accuracy. In this paper, we propose a deep complex-valued convolutional neural network (DC-CNN) based on RF fingerprinting for recognizing different drones. Compared with existing recognition methods, DC-CNN offers high recognition accuracy, fast running time, and small network complexity. Nine algorithm models and two datasets are used to demonstrate the superior performance of our system. Experimental results show that the proposed DC-CNN achieves recognition accuracies of 99.5% and 74.1% on the 4-class and 8-class RF drone datasets, respectively.
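As a rough illustration of how a complex-valued CNN can classify raw I/Q radio samples, here is a hedged sketch in the spirit of such a classifier; DroneClassifier, the layer sizes, the 4-class head, and the burst length are assumptions for illustration, not the proposed DC-CNN architecture.

```python
# Hypothetical sketch: a small complex-valued CNN over raw I/Q bursts.
# All names, depths, and shapes are illustrative assumptions.
import torch
import torch.nn as nn

class ComplexConv1d(nn.Module):
    # Complex 1-D convolution via two real convolutions (re/im weight pair).
    def __init__(self, in_ch, out_ch, k):
        super().__init__()
        self.wr = nn.Conv1d(in_ch, out_ch, k)
        self.wi = nn.Conv1d(in_ch, out_ch, k)

    def forward(self, re, im):
        return self.wr(re) - self.wi(im), self.wr(im) + self.wi(re)

class DroneClassifier(nn.Module):
    def __init__(self, n_classes=4):
        super().__init__()
        self.c1 = ComplexConv1d(1, 16, 7)
        self.c2 = ComplexConv1d(16, 32, 5)
        self.head = nn.Linear(32, n_classes)  # real-valued classification head

    def forward(self, re, im):
        re, im = self.c1(re, im)
        re, im = torch.relu(re), torch.relu(im)
        re, im = self.c2(re, im)
        mag = torch.sqrt(re**2 + im**2 + 1e-8)  # magnitude before the real head
        feat = mag.mean(dim=-1)                 # global average pooling over time
        return self.head(feat)

model = DroneClassifier()
i, q = torch.randn(8, 1, 1024), torch.randn(8, 1, 1024)  # 8 bursts of 1024 I/Q samples
logits = model(i, q)  # (8, 4) class scores
```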


2020 ◽  
Vol 10 (1) ◽  
Author(s):  
Thomas Küstner ◽  
Niccolo Fuin ◽  
Kerstin Hammernik ◽  
Aurelien Bustin ◽  
Haikun Qi ◽  
...  
