End-to-End SAR Deep Learning Imaging Method Based on Sparse Optimization

2021, Vol 13 (21), pp. 4429
Author(s): Siyuan Zhao, Jiacheng Ni, Jia Liang, Shichao Xiong, Ying Luo

Synthetic aperture radar (SAR) imaging has developed rapidly in recent years. Although traditional sparse optimization imaging algorithms achieve effective results, they suffer from slow imaging speed, a large number of parameters, and high computational complexity. To address these problems, an end-to-end SAR deep learning imaging algorithm is proposed. Building on existing SAR sparse imaging algorithms, the SAR imaging model is first rewritten from the real-valued model into a complex signal form. Second, instead of arranging the two-dimensional echo data into a vector and constructing an observation matrix, the algorithm derives a neural network imaging model from the iterative soft-thresholding algorithm (ISTA) directly in the two-dimensional data domain, and then reconstructs the observation scene by stacking and unrolling multiple network layers. Finally, experiments on simulated data and measured data of three targets verify that the algorithm is superior to the traditional sparse algorithm in terms of imaging quality, imaging time, and number of parameters.
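The operation such a network unrolls is the ISTA update applied directly to 2-D complex data through decoupled range and azimuth operators. Below is a minimal PyTorch sketch of one unrolled layer, assuming generic operators `A_r` and `A_a` and learned step-size/threshold parameters; the class names, depth, and initialization are illustrative and not the authors' implementation.

```python
import torch
import torch.nn as nn

class ISTALayer(nn.Module):
    """One unrolled ISTA iteration operating directly on 2-D complex SAR data."""
    def __init__(self):
        super().__init__()
        self.step = nn.Parameter(torch.tensor(1.0))    # learned gradient step size
        self.thresh = nn.Parameter(torch.tensor(0.1))  # learned soft-threshold level

    @staticmethod
    def complex_soft_threshold(x, tau):
        # Shrink the magnitude of each complex pixel while keeping its phase.
        mag = torch.abs(x)
        return x * torch.clamp(mag - tau, min=0.0) / (mag + 1e-12)

    def forward(self, img, echo, A_r, A_a):
        # Residual in the echo domain for the decoupled 2-D model Y = A_r X A_a^T
        residual = echo - A_r @ img @ A_a.transpose(-2, -1)
        # Gradient step back-projected through the adjoint operators
        img = img + self.step * (A_r.conj().transpose(-2, -1) @ residual @ A_a.conj())
        return self.complex_soft_threshold(img, self.thresh)


class UnrolledISTA(nn.Module):
    """Stack several ISTA layers, each with its own learned parameters."""
    def __init__(self, n_layers=8):
        super().__init__()
        self.layers = nn.ModuleList(ISTALayer() for _ in range(n_layers))

    def forward(self, echo, A_r, A_a):
        # Initialize the scene with the adjoint (back-projected) echo.
        img = A_r.conj().transpose(-2, -1) @ echo @ A_a.conj()
        for layer in self.layers:
            img = layer(img, echo, A_r, A_a)
        return img
```

Each layer performs one gradient step on the 2-D data-fidelity term followed by a complex soft threshold, so stacking layers reproduces a fixed number of ISTA iterations with learnable parameters.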

2021, Vol 13 (19), pp. 3817
Author(s): Yimeng Zou, Jiahao Tian, Guanghu Jin, Yongsheng Zhang

Distributed radar arrays bring several new advantages to aerospace target detection and imaging. A two-dimensional distributed array avoids the imperfect motion compensation of coherent processing along slow time and can achieve single-snapshot 3D imaging. However, the 3D imaging processing faces two difficulties. The first is that the distributed array may contain only a small number of elements, so the sampling does not satisfy the Nyquist sampling theorem. The second is that echoes of objects in the same beam are mixed together, which makes the sparse optimization dictionary so long that it imposes a huge computational burden on the imaging process. In this paper, we propose an innovative method for 3D imaging of aerospace targets in wide airspace with a sparse radar array. First, because multiple targets are not suitable for uniform processing in the imaging step, a 3D Hough transform based on range-profile plane differences is proposed to detect and separate the echoes of different targets. Second, in the subsequent imaging step, considering the non-uniform sparse spatial sampling of the distributed array, a migration-through-range-cell (MTRC)-tolerant imaging method is proposed to process the signal of the two-dimensional sparse array. The uniformized method, which combines compressed sensing (CS) imaging in the azimuth direction with matched filtering in the range direction, realizes 3D imaging effectively; before azimuth imaging, interpolation in the range direction is carried out. The main contributions of the proposed method are: (1) echo separation based on the 3D Hough transform avoids the huge computation of direct sparse optimization imaging of three-dimensional data and ensures the realizability of the algorithm; and (2) uniformized sparse-solving imaging removes the difficulty caused by MTRC. Simulation experiments verify the effectiveness and feasibility of the proposed method.
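To illustrate the azimuth CS step of such a uniformized scheme, the sketch below reconstructs one range-compressed slice with ISTA, assuming a simple far-field steering model for the sparse element positions; it omits the 3D Hough separation, range interpolation, and the MTRC-tolerant details of the paper, and the function and parameter names are hypothetical.

```python
import numpy as np

def azimuth_cs_reconstruct(range_compressed, element_pos, angles,
                           wavelength, n_iter=100, lam=0.05):
    """Hypothetical sketch: per-range-cell azimuth reconstruction for a sparse
    array via ISTA, after matched filtering (and interpolation) in range.

    range_compressed : (n_elements, n_range) complex range-compressed echoes
    element_pos      : (n_elements,) cross-range positions of the sparse array
    angles           : (n_angles,) candidate azimuth angles of the scene grid
    """
    # Steering dictionary relating sparse element positions to the azimuth
    # scene grid (far-field phase model assumed for illustration only).
    A = np.exp(1j * 4 * np.pi / wavelength *
               np.outer(element_pos, np.sin(angles)))
    step = 1.0 / np.linalg.norm(A, 2) ** 2          # ISTA step size
    x = np.zeros((len(angles), range_compressed.shape[1]), dtype=complex)
    for _ in range(n_iter):
        grad = A.conj().T @ (A @ x - range_compressed)
        z = x - step * grad
        mag = np.abs(z)
        x = z * np.maximum(mag - lam * step, 0) / (mag + 1e-12)  # complex soft threshold
    return x                                         # azimuth x range image slice
```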


Sensors, 2019, Vol 19 (20), pp. 4549
Author(s): Mingqian Liu, Bingchen Zhang, Zhongqiu Xu, Yirong Wu

Sparse signal processing theory has been applied to synthetic aperture radar (SAR) imaging. In compressive sensing (CS), the sparsity is usually treated as a known parameter, yet in practice it is unknown, and many CS procedures require it; estimating the sparsity is therefore crucial for sparse SAR imaging. The sparsity is determined by the size of the regularization parameter. Several methods have been presented for automatically estimating the regularization parameter and have been applied to sparse SAR imaging, but they are derived from an observation matrix, which entails huge computational and memory costs. In this paper, to enhance computational efficiency, an efficient adaptive parameter estimation method for sparse SAR imaging is proposed. The complex image-based sparse SAR imaging method only requires a threshold operation on the complex image, which reduces the computational cost significantly. Exploiting this property, the parameter is pre-estimated from a complex image. To estimate the sparsity accurately, adaptive parameter estimation is then carried out in the raw data domain, combining the pre-estimated parameter with azimuth-range decoupling operators. The proposed method reduces the computational complexity from quadratic order to linear-logarithmic order, so it can be used for large-scale scenes. Simulated and Gaofen-3 SAR data processing results demonstrate the validity of the proposed method.
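As a rough illustration of the complex image-based pre-estimation idea, the sketch below picks a regularization level from the magnitude distribution of the matched-filtering complex image and applies the corresponding threshold operation; the selection rule is a stand-in for illustration, not the paper's adaptive estimator.

```python
import numpy as np

def pre_estimate_threshold(mf_image, sparsity_ratio=0.01):
    """Illustrative pre-estimation: choose the regularization level as the
    magnitude that keeps roughly the desired fraction of complex-image pixels.
    (A stand-in for the paper's adaptive estimator, not its exact rule.)
    """
    mags = np.sort(np.abs(mf_image).ravel())[::-1]
    k = max(1, int(sparsity_ratio * mags.size))
    return mags[k - 1]

def complex_soft_threshold(image, tau):
    """Threshold operation on the complex image: shrink magnitude, keep phase."""
    mag = np.abs(image)
    return image * np.maximum(mag - tau, 0) / (mag + 1e-12)
```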


Sensors, 2019, Vol 19 (2), pp. 320
Author(s): Zhilin Xu, Bingchen Zhang, Hui Bi, Chenyang Wu, Zhonghao Wei

Sparse signal processing has already been introduced to synthetic aperture radar (SAR) and shows potential for improving imaging performance based on either raw data or a complex image. In this paper, the relationship between the raw data-based sparse SAR imaging method (RD-SIM) and the complex image-based sparse SAR imaging method (CI-SIM) is compared and analyzed in detail, which is important for selecting the appropriate algorithm in different cases. The two methods are found to be equivalent when the raw data is fully sampled; both effectively suppress noise and sidelobes and hence improve image quality compared with the matched filtering (MF) method. In addition, the target-to-background ratio (TBR) and azimuth ambiguity-to-signal ratio (AASR) of RD-SIM are superior to those of CI-SIM for down-sampled data-based imaging, nonuniform displaced phase center sampling, and sparse SAR imaging model-based azimuth ambiguity suppression.
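For reference, one common definition of the TBR indicator used in such comparisons is sketched below (an assumed formulation, in dB, with a user-supplied target mask; not necessarily the exact definition used in the paper).

```python
import numpy as np

def target_to_background_ratio(image, target_mask):
    """TBR as the peak target magnitude over the mean background magnitude, in dB.
    `image` is the complex SAR image; `target_mask` is a boolean array marking
    target pixels, with the remaining pixels treated as background.
    """
    mag = np.abs(image)
    peak_target = mag[target_mask].max()
    mean_background = mag[~target_mask].mean()
    return 20 * np.log10(peak_target / mean_background)
```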


2021, Vol 13 (2), pp. 274
Author(s): Guobiao Yao, Alper Yilmaz, Li Zhang, Fei Meng, Haibin Ai, ...

Available stereo matching algorithms produce a large number of false-positive matches, or only a few true positives, across oblique stereo images with large baselines. This undesired result arises from the complex perspective deformation and radiometric distortion across the images. To address this problem, we propose a novel affine-invariant feature matching algorithm with subpixel accuracy based on an end-to-end convolutional neural network (CNN). In our method, we adopt and modify a Hessian affine network, which we refer to as IHesAffNet, to obtain affine-invariant Hessian regions within a deep learning framework. To improve the correlation between corresponding features, we introduce an empirical weighted loss function (EWLF) based on negative samples selected with K nearest neighbors, and then generate highly discriminative deep learning-based descriptors with our multiple hard network structure (MTHardNets). Conjugate features are then produced by using the Euclidean distance ratio as the matching metric, and the accuracy of the matches is optimized through deep learning transform-based least squares matching (DLT-LSM). Finally, experiments on large-baseline oblique stereo images acquired from ground close range and by unmanned aerial vehicle (UAV) verify the effectiveness of the proposed approach, and comprehensive comparisons demonstrate that our matching algorithm outperforms state-of-the-art methods in terms of accuracy, distribution, and correct ratio. The main contributions of this article are: (i) the proposed MTHardNets generate high-quality descriptors; and (ii) the IHesAffNet produces substantial affine-invariant corresponding features with reliable transform parameters.
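The Euclidean distance ratio matching step corresponds to the classic nearest-neighbor ratio test; a brute-force sketch is given below, with the 0.8 threshold being a typical assumed value rather than the paper's setting.

```python
import numpy as np

def ratio_test_match(desc_left, desc_right, ratio=0.8):
    """Nearest-neighbor matching with the Euclidean distance ratio test.

    desc_left, desc_right : (n, d) and (m, d) float descriptor arrays
    Returns index pairs (i, j) whose best match is clearly better than the
    second best.
    """
    matches = []
    for i, d in enumerate(desc_left):
        dists = np.linalg.norm(desc_right - d, axis=1)
        j1, j2 = np.argsort(dists)[:2]      # best and second-best candidates
        if dists[j1] < ratio * dists[j2]:
            matches.append((i, j1))
    return matches
```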


2021, Vol 5 (1)
Author(s): Kwang-Hyun Uhm, Seung-Won Jung, Moon Hyung Choi, Hong-Kyu Shin, Jae-Ik Yoo, ...

In 2020, an estimated 73,750 kidney cancer cases were diagnosed and 14,830 people died of the disease in the United States. Preoperative multi-phase abdominal computed tomography (CT) is often used to detect lesions and classify histologic subtypes of renal tumors in order to avoid unnecessary biopsy or surgery. However, inter-observer variability exists due to subtle differences in the imaging features of tumor subtypes, which makes treatment decisions challenging. Although deep learning has recently been applied to the automated diagnosis of renal tumors, classification across a wide range of subtype classes has not been sufficiently studied. In this paper, we propose an end-to-end deep learning model for the differential diagnosis of five major histologic subtypes of renal tumors, including both benign and malignant tumors, on multi-phase CT. Our model is a unified framework that simultaneously identifies lesions and classifies subtypes without manual intervention. We trained and tested the model using CT data from 308 patients who underwent nephrectomy for renal tumors. The model achieved an area under the curve (AUC) of 0.889 and outperformed radiologists for most subtypes. We further validated the model on an independent dataset of 184 patients from The Cancer Imaging Archive (TCIA); the AUC on this dataset was 0.855, and the model performed comparably to radiologists. These results indicate that our model can achieve diagnostic performance similar to or better than radiologists in differentiating a wide range of renal tumors on multi-phase CT.
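For illustration only, the sketch below shows one simple way multi-phase CT can be fed to a CNN, by stacking the phases as input channels for five-class subtype prediction; it does not reproduce the paper's unified lesion identification and classification framework, and all names and layer choices are assumptions.

```python
import torch
import torch.nn as nn

class MultiPhaseSubtypeClassifier(nn.Module):
    """Illustrative sketch: stacks CT phases as input channels and predicts
    one of five renal tumor subtypes from a single 2-D slice."""
    def __init__(self, n_phases=4, n_subtypes=5):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(n_phases, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.AdaptiveAvgPool2d(1),           # global pooling to a 64-d vector
        )
        self.classifier = nn.Linear(64, n_subtypes)

    def forward(self, x):                      # x: (batch, n_phases, H, W)
        return self.classifier(self.features(x).flatten(1))
```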


Diagnostics, 2021, Vol 11 (2), pp. 215
Author(s): Gurpreet Singh, Subhi Al’Aref, Benjamin Lee, Jing Lee, Swee Tan, ...

Conventional scoring and identification methods for coronary artery calcium (CAC) and aortic calcium (AC) result in information loss from the original image and can be time-consuming. In this study, we sought to demonstrate an end-to-end deep learning model as an alternative to these conventional methods. Scans of 377 patients with no history of coronary artery disease (CAD) were obtained and annotated. A deep learning model was trained, tested and validated on a 60:20:20 split. Within the cohort, the mean age was 64.2 ± 9.8 years, and 33% were female. Left anterior descending, right coronary artery, left circumflex, triple vessel, and aortic calcifications were present in 74.87%, 55.82%, 57.41%, 46.03%, and 85.41% of patients, respectively. An overall Dice score of 0.952 (interquartile range 0.921, 0.981) was achieved. Stratified by subgroup, there was no difference between male (0.948, interquartile range 0.920, 0.981) and female (0.965, interquartile range 0.933, 0.980) patients (p = 0.350), or between age <65 (0.950, interquartile range 0.913, 0.981) and age ≥65 (0.957, interquartile range 0.930, 0.9778) (p = 0.742). There was good correlation and agreement for CAC prediction (rho = 0.876, p < 0.001), with a mean difference of 11.2% (p = 0.100). AC also correlated well (rho = 0.947, p < 0.001), with a mean difference of 9% (p = 0.070). Automated segmentation took approximately 4 s per patient. Taken together, the end-to-end deep learning model was able to robustly identify vessel-specific CAC and AC with high accuracy and predict Agatston scores that correlated well with manual annotation, facilitating application in areas of research and clinical importance.
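The Dice score reported above measures overlap between a predicted and a manually annotated calcification mask; the standard computation is sketched below.

```python
import numpy as np

def dice_score(pred_mask, true_mask, eps=1e-7):
    """Dice coefficient between predicted and reference boolean masks;
    1.0 means perfect overlap."""
    pred, true = pred_mask.astype(bool), true_mask.astype(bool)
    intersection = np.logical_and(pred, true).sum()
    return 2.0 * intersection / (pred.sum() + true.sum() + eps)
```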


2021, Vol 4 (1)
Author(s): Yi Sun, Jianfeng Wang, Jindou Shi, Stephen A. Boppart

Polarization-sensitive optical coherence tomography (PS-OCT) is a high-resolution, label-free optical biomedical imaging modality that is sensitive to the microstructural tissue architecture giving rise to form birefringence, such as collagen or muscle fibers. Enabling polarization sensitivity in an OCT system, however, requires additional hardware and complexity. We developed a deep learning method to synthesize PS-OCT images by training a generative adversarial network (GAN) on OCT intensity and PS-OCT images. The synthesis accuracy was first evaluated by the structural similarity index (SSIM) between the synthetic and real PS-OCT images. Furthermore, the effectiveness of the computational PS-OCT images was validated by separately training two image classifiers on the real and synthetic PS-OCT images for cancer/normal classification. The similar classification results of the two trained classifiers demonstrate that the predicted PS-OCT images can potentially be used interchangeably in cancer diagnosis applications. In addition, we applied the trained GAN models to OCT images collected from a separate OCT imaging system, and the synthetic PS-OCT images correlate well with the real PS-OCT images collected from the same sample sites using a PS-OCT imaging system. This computational PS-OCT imaging method has the potential to reduce the cost, complexity, and need for hardware-based PS-OCT imaging systems.
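The SSIM comparison between real and synthesized PS-OCT images can be computed with a standard implementation, as sketched below; 2-D grayscale arrays of identical shape are assumed, and the study's exact preprocessing is not specified here.

```python
import numpy as np
from skimage.metrics import structural_similarity

def evaluate_synthesis(real_ps_oct, synthetic_ps_oct):
    """SSIM between a real and a GAN-synthesized PS-OCT image (2-D float arrays)."""
    data_range = float(real_ps_oct.max() - real_ps_oct.min())
    return structural_similarity(real_ps_oct, synthetic_ps_oct,
                                 data_range=data_range)
```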

