A Deep Learning Approach for the Photoacoustic Tomography Recovery From Undersampled Measurements

2021, Vol 15
Author(s): Husnain Shahid, Adnan Khalid, Xin Liu, Muhammad Irfan, Dean Ta

Photoacoustic tomography (PAT) is a promising imaging modality for biomedical studies. However, fast PAT imaging and denoising remain demanding tasks in medical research. To address this problem, methods based on compressed sensing (CS) have recently been proposed, offering low computational cost and high resolution for PAT. Nevertheless, the imaging results of these sparsity-based methods rely strictly on sparsity and incoherence conditions, and it is difficult to ensure that experimentally acquired photoacoustic data meet the prerequisites of CS. In this work, a deep learning–based PAT (Deep-PAT) method is introduced to overcome these limitations. Using a neural network, Deep-PAT is not only able to reconstruct PAT images from fewer measurements without requiring the prerequisite conditions of CS, but can also eliminate undersampling artifacts effectively. The experimental results demonstrate that Deep-PAT recovers high-quality photoacoustic images using just 5% of the original measurement data. Moreover, statistical analysis shows that, compared with the sparsity-based method, image quality is improved by approximately 30%, with the proposed Deep-PAT achieving an average SSIM of 0.974 and PSNR of 29.88 dB (standard deviations ±0.007 and ±0.089, respectively). In addition, a comparison of multiple neural networks provides insight into choosing the best one for further study and practical implementation.
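As a brief illustration of how reconstruction-quality figures like the SSIM and PSNR reported above are typically computed (this is not the authors' evaluation code), the sketch below assumes a ground-truth image and a reconstruction, both scaled to [0, 1]:

```python
# Illustrative only: computing SSIM and PSNR for a reconstructed image,
# assuming `ground_truth` and `recon` are 2D photoacoustic images in [0, 1].
import numpy as np
from skimage.metrics import structural_similarity, peak_signal_noise_ratio

rng = np.random.default_rng(0)
ground_truth = rng.random((128, 128))                            # placeholder image
recon = np.clip(ground_truth + 0.05 * rng.standard_normal((128, 128)), 0.0, 1.0)  # placeholder reconstruction

ssim = structural_similarity(ground_truth, recon, data_range=1.0)
psnr = peak_signal_noise_ratio(ground_truth, recon, data_range=1.0)
print(f"SSIM = {ssim:.3f}, PSNR = {psnr:.2f} dB")
```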

2021
Author(s): Soumick Chatterjee, Faraz Ahmed Nizamani, Andreas Nürnberger, Oliver Speck

A brain tumour is a mass or cluster of abnormal cells in the brain that can become life-threatening because of its ability to invade neighbouring tissue and form metastases. An accurate diagnosis is essential for successful treatment planning, and magnetic resonance imaging is the principal imaging modality for diagnosing brain tumours and assessing their extent. Deep learning methods in computer vision have shown significant improvement in recent years, largely because sizeable amounts of data are available to train models on and because improved model architectures yield better approximations in a supervised setting. Classifying tumours with such deep learning methods has made significant progress with the availability of open datasets with reliable annotations. Typically, those methods are either 3D models that use volumetric MRIs or 2D models that consider each slice separately. However, by treating one spatial dimension separately, or by considering the slices as a sequence of images over time, spatiotemporal models can be employed as "spatiospatial" models for this task. These models can learn specific spatial and temporal relationships while reducing computational cost. This paper uses two spatiotemporal models, ResNet (2+1)D and ResNet Mixed Convolution, to classify different types of brain tumours. Both models performed better than the pure 3D convolutional model, ResNet18. Furthermore, pre-training the models on a different, even unrelated, dataset before training them for tumour classification improved performance. Finally, the pre-trained ResNet Mixed Convolution was the best model in these experiments, achieving a macro F1-score of 0.9345 and a test accuracy of 96.98%, while also being the model with the lowest computational cost.
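A minimal sketch (not the authors' code) of how the two spatiotemporal architectures named above can be loaded from torchvision and adapted to a tumour-classification head; the number of classes and input size are hypothetical:

```python
# ResNet (2+1)D and ResNet Mixed Convolution from torchvision's video models,
# with the final layer replaced for an N-class classification task.
import torch
import torch.nn as nn
from torchvision.models.video import r2plus1d_18, mc3_18  # ResNet (2+1)D, ResNet Mixed Convolution

NUM_CLASSES = 3  # hypothetical: e.g. three tumour types

def build(model_fn, num_classes=NUM_CLASSES):
    model = model_fn(pretrained=True)  # pre-trained on an unrelated video dataset
                                       # (newer torchvision versions use the `weights=` argument)
    model.fc = nn.Linear(model.fc.in_features, num_classes)  # replace the classification head
    return model

resnet_2plus1d = build(r2plus1d_18)
resnet_mixed_conv = build(mc3_18)

# Video models expect input of shape (batch, channels, depth/time, height, width);
# the MRI slice axis plays the role of the temporal dimension here.
dummy_volume = torch.randn(2, 3, 16, 112, 112)
print(resnet_mixed_conv(dummy_volume).shape)  # torch.Size([2, 3])
```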


2021, Vol 2021, pp. 1-16
Author(s): Madallah Alruwaili, Abdulaziz Shehab, Sameh Abd El-Ghany

The COVID-19 pandemic has a significant negative effect on people’s health, as well as on the world’s economy. Polymerase chain reaction (PCR) is one of the main tests used to detect COVID-19 infection; however, it is expensive, time-consuming, and lacks sufficient accuracy. In recent years, convolutional neural networks have attracted many researchers’ attention in the machine learning field due to their high diagnostic accuracy, especially in medical image recognition. Many architectures, such as Inception, ResNet, DenseNet, and VGG16, have been proposed and achieve excellent performance at low computational cost. Moreover, to accelerate the training of these traditional architectures, residual connections have been combined with the Inception architecture, leading to hybrid architectures such as Inception-ResNetV2. This paper proposes an enhanced Inception-ResNetV2 deep learning model that can diagnose chest X-ray (CXR) scans with high accuracy. In addition, the Grad-CAM algorithm is used to visualize the infected regions of the lungs in CXR images. Compared with state-of-the-art methods, the proposed model proves superior in terms of accuracy, recall, precision, and F1-measure.
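For illustration, the sketch below combines a stock Keras Inception-ResNetV2 backbone with a standard Grad-CAM computation. The number of classes and the global-average-pooling head are assumptions; the paper's "enhanced" model is not reproduced here.

```python
# Inception-ResNetV2 classifier with a Grad-CAM heatmap over its last conv features.
import tensorflow as tf

NUM_CLASSES = 3  # hypothetical: e.g. normal / pneumonia / COVID-19

base = tf.keras.applications.InceptionResNetV2(
    weights="imagenet", include_top=False, input_shape=(299, 299, 3))
x = tf.keras.layers.GlobalAveragePooling2D()(base.output)
out = tf.keras.layers.Dense(NUM_CLASSES, activation="softmax")(x)
model = tf.keras.Model(base.input, out)

def grad_cam(model, image, class_idx):
    """Grad-CAM heatmap with respect to the last convolutional feature map."""
    grad_model = tf.keras.Model(model.input, [base.output, model.output])
    with tf.GradientTape() as tape:
        conv_maps, preds = grad_model(image[None, ...])
        score = preds[:, class_idx]
    grads = tape.gradient(score, conv_maps)       # d(score)/d(feature map)
    weights = tf.reduce_mean(grads, axis=(1, 2))  # global-average-pool the gradients
    cam = tf.nn.relu(tf.reduce_sum(conv_maps * weights[:, None, None, :], axis=-1))
    return (cam / (tf.reduce_max(cam) + 1e-8))[0].numpy()  # normalised to [0, 1]

# Hypothetical usage after training: heatmap = grad_cam(model, preprocessed_cxr, predicted_class)
```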


2020, Vol 13 (4), pp. 627-640
Author(s): Avinash Chandra Pandey, Dharmveer Singh Rajpoot

Background: Sentiment analysis is the contextual mining of text to determine users' viewpoints on topics commonly discussed on social networking websites. Twitter is one such site, where people express their opinions on any topic in the form of tweets. These tweets can be examined using various sentiment classification methods to determine users' opinions. Traditional sentiment analysis methods use manually extracted features for opinion classification; this manual feature extraction is a complicated task because it requires predefined sentiment lexicons. Deep learning methods, on the other hand, automatically extract relevant features from data and hence provide better performance and richer representation capability than traditional methods. Objective: The main aim of this paper is to improve sentiment classification accuracy and to reduce computational cost. Method: To achieve this objective, a hybrid deep learning model based on a convolutional neural network and a bi-directional long short-term memory network is introduced. Results: The proposed sentiment classification method achieves the highest accuracy on most of the datasets, and statistical analysis validates its efficacy. Conclusion: Sentiment classification accuracy can be improved by building effective hybrid models, and performance can be further enhanced by tuning the hyperparameters of deep learning models.
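A minimal sketch of a hybrid CNN + BiLSTM sentiment classifier of the kind described above, assuming integer-encoded tweets; the vocabulary size, sequence length, and layer widths are hypothetical, not taken from the paper:

```python
# Hybrid CNN + BiLSTM text classifier: convolution captures local n-gram features,
# the bidirectional LSTM captures longer-range context in both directions.
import tensorflow as tf
from tensorflow.keras import layers, models

VOCAB_SIZE = 20_000  # hypothetical
MAX_LEN = 50         # hypothetical maximum tweet length (tokens)

model = models.Sequential([
    layers.Input(shape=(MAX_LEN,)),
    layers.Embedding(VOCAB_SIZE, 128),
    layers.Conv1D(64, kernel_size=5, padding="same", activation="relu"),
    layers.MaxPooling1D(pool_size=2),
    layers.Bidirectional(layers.LSTM(64)),
    layers.Dropout(0.5),
    layers.Dense(1, activation="sigmoid"),  # binary sentiment (positive / negative)
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.summary()
```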


Symmetry, 2021, Vol 13 (4), pp. 645
Author(s): Muhammad Farooq, Sehrish Sarfraz, Christophe Chesneau, Mahmood Ul Hassan, Muhammad Ali Raza, ...

Expectiles have gained considerable attention in recent years due to their wide application in many areas. In this study, the k-nearest neighbours approach, together with the asymmetric least squares loss function, called ex-kNN, is proposed for computing expectiles. Firstly, the effect of various distance measures on ex-kNN in terms of test error and computational time is evaluated. It is found that the Canberra, Lorentzian, and Soergel distance measures lead to minimum test error, whereas Euclidean, Canberra, and the average of (L1, L∞) lead to low computational cost. Secondly, the performance of ex-kNN is compared with the existing packages er-boost and ex-svm for computing expectiles on nine real-life examples. Depending on the nature of the data, ex-kNN showed two to ten times better performance than er-boost and comparable performance to ex-svm with regard to test error. Computationally, ex-kNN is found to be two to five times faster than ex-svm and much faster than er-boost, particularly in the case of high-dimensional data.
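A hedged sketch of the ex-kNN idea described above: predict the tau-expectile at a query point from the responses of its k nearest neighbours, using asymmetric least squares weighting. Function and parameter names are illustrative, not from the package itself.

```python
# k-nearest-neighbour expectile prediction with asymmetric least squares.
import numpy as np

def expectile(y, tau=0.8, n_iter=50):
    """tau-expectile of a 1D sample via iteratively reweighted least squares."""
    m = y.mean()
    for _ in range(n_iter):
        w = np.where(y > m, tau, 1.0 - tau)  # asymmetric squared-error weights
        m_new = np.sum(w * y) / np.sum(w)
        if abs(m_new - m) < 1e-10:
            break
        m = m_new
    return m

def ex_knn_predict(X_train, y_train, x_query, k=10, tau=0.8):
    """Euclidean-distance variant; Canberra, Lorentzian, etc. could be swapped in."""
    d = np.linalg.norm(X_train - x_query, axis=1)
    idx = np.argsort(d)[:k]                  # indices of the k nearest neighbours
    return expectile(y_train[idx], tau=tau)

rng = np.random.default_rng(1)
X = rng.normal(size=(200, 3))
y = X @ np.array([1.0, -2.0, 0.5]) + rng.normal(scale=0.3, size=200)
print(ex_knn_predict(X, y, x_query=np.zeros(3), k=15, tau=0.8))
```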


Forests, 2021, Vol 12 (3), pp. 294
Author(s): Nicholas F. McCarthy, Ali Tohidi, Yawar Aziz, Matt Dennie, Mario Miguel Valero, ...

Scarcity of wildland fire progression data, as well as considerable uncertainty in forecasts, demands improved methods to monitor fire spread in real time. However, there is at present no scalable solution for acquiring consistent, spatially and temporally explicit information about active forest fires. To overcome this limitation, we propose a statistical downscaling scheme based on deep learning that leverages multi-source Remote Sensing (RS) data. Our system relies on a U-Net Convolutional Neural Network (CNN) to downscale Geostationary (GEO) satellite multispectral imagery and continuously monitor active fire progression with a spatial resolution similar to Low Earth Orbit (LEO) sensors. To achieve this, the model is trained on LEO RS products, land-use information, vegetation properties, and terrain data. The practical implementation has been optimized to use cloud compute clusters, software containers, and multi-step parallel pipelines in order to facilitate real-time operational deployment. The model was validated on five of the most destructive wildfires that occurred in California in 2017 and 2018. The results demonstrate the effectiveness of the proposed methodology in monitoring fire progression with high spatiotemporal resolution, which can be instrumental for decision support during the first hours of wildfires that may quickly become large and dangerous. Additionally, the proposed methodology can be leveraged to collect detailed quantitative data about real-scale wildfire behaviour, thus supporting the development and validation of fire spread models.
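A compact U-Net-style encoder-decoder sketch (illustrative only, not the authors' network) of the kind referenced above for mapping coarse multispectral and auxiliary layers to a per-pixel active-fire probability; channel counts and depth are hypothetical.

```python
# Tiny U-Net: two-level encoder/decoder with skip connections.
import torch
import torch.nn as nn

def conv_block(c_in, c_out):
    return nn.Sequential(
        nn.Conv2d(c_in, c_out, 3, padding=1), nn.ReLU(inplace=True),
        nn.Conv2d(c_out, c_out, 3, padding=1), nn.ReLU(inplace=True),
    )

class TinyUNet(nn.Module):
    def __init__(self, in_ch=8, out_ch=1):  # e.g. GEO bands + terrain/vegetation layers in, fire mask out
        super().__init__()
        self.enc1, self.enc2 = conv_block(in_ch, 32), conv_block(32, 64)
        self.pool = nn.MaxPool2d(2)
        self.bottleneck = conv_block(64, 128)
        self.up2 = nn.ConvTranspose2d(128, 64, 2, stride=2)
        self.dec2 = conv_block(128, 64)
        self.up1 = nn.ConvTranspose2d(64, 32, 2, stride=2)
        self.dec1 = conv_block(64, 32)
        self.head = nn.Conv2d(32, out_ch, 1)

    def forward(self, x):
        e1 = self.enc1(x)
        e2 = self.enc2(self.pool(e1))
        b = self.bottleneck(self.pool(e2))
        d2 = self.dec2(torch.cat([self.up2(b), e2], dim=1))   # skip connection
        d1 = self.dec1(torch.cat([self.up1(d2), e1], dim=1))  # skip connection
        return torch.sigmoid(self.head(d1))                   # per-pixel active-fire probability

print(TinyUNet()(torch.randn(1, 8, 128, 128)).shape)  # torch.Size([1, 1, 128, 128])
```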


2021, Vol 4 (1)
Author(s): Gaoyang Li, Haoran Wang, Mingzi Zhang, Simon Tupin, Aike Qiao, ...

The clinical treatment planning of coronary heart disease requires hemodynamic parameters to provide proper guidance. Computational fluid dynamics (CFD) is increasingly used to simulate cardiovascular hemodynamics. However, for patient-specific models, the complex operation and high computational cost of CFD hinder its clinical application. To deal with these problems, we develop cardiovascular hemodynamic point datasets and a dual-sampling-channel deep learning network, which can analyze and reproduce the relationship between cardiovascular geometry and internal hemodynamics. Statistical analysis shows that the hemodynamic predictions of the deep learning model agree with the conventional CFD method, while the computation time is reduced 600-fold. With support for over 2 million nodes, prediction accuracy of around 90%, the ability to predict cardiovascular hemodynamics within 1 second, and applicability to complex arterial systems, our deep learning method can meet the needs of most situations.
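For illustration only: a generic PointNet-style per-node regressor that maps a vessel-geometry point cloud to per-node hemodynamic quantities (e.g., pressure and velocity components). This is not the authors' dual-sampling-channel network; it only sketches the input/output structure such a surrogate model works with.

```python
# Generic point-cloud-to-hemodynamics regression sketch.
import torch
import torch.nn as nn

class PointHemodynamicsNet(nn.Module):
    def __init__(self, in_dim=3, out_dim=4):  # (x, y, z) in; (pressure, vx, vy, vz) out
        super().__init__()
        self.point_mlp = nn.Sequential(nn.Linear(in_dim, 64), nn.ReLU(),
                                       nn.Linear(64, 128), nn.ReLU())
        self.head = nn.Sequential(nn.Linear(128 + 128, 128), nn.ReLU(),
                                  nn.Linear(128, out_dim))

    def forward(self, points):                # points: (batch, n_nodes, 3)
        feat = self.point_mlp(points)         # per-node features
        global_feat = feat.max(dim=1, keepdim=True).values.expand_as(feat)  # shared geometry context
        return self.head(torch.cat([feat, global_feat], dim=-1))            # per-node predictions

pred = PointHemodynamicsNet()(torch.randn(2, 2048, 3))
print(pred.shape)  # torch.Size([2, 2048, 4])
```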


2021, Vol 13 (12), pp. 2326
Author(s): Xiaoyong Li, Xueru Bai, Feng Zhou

A deep-learning architecture, dubbed the 2D-ADMM-Net (2D-ADN), is proposed in this article. It provides effective high-resolution 2D inverse synthetic aperture radar (ISAR) imaging under low-SNR and incomplete-data scenarios by combining model-based sparse reconstruction and data-driven deep learning. Firstly, the mapping from ISAR images to their corresponding echoes in the wavenumber domain is derived. Then, a 2D alternating direction method of multipliers (ADMM) is unrolled and generalized to a deep network, where all adjustable parameters in the reconstruction layers, nonlinear transform layers, and multiplier update layers are learned end-to-end through back-propagation. Since the optimal parameters of each layer are learned separately, 2D-ADN exhibits more representational flexibility and better reconstruction performance than model-driven methods. At the same time, owing to its simple structure and small number of adjustable parameters, it handles ISAR imaging with limited training samples better than data-driven methods. Additionally, benefiting from the good performance of 2D-ADN, a random phase error estimation method is proposed, through which well-focused images can be acquired. Experiments demonstrate that, although trained on only a few simulated images, 2D-ADN adapts well to measured data, and favorable imaging results with a clean background can be obtained in a short time.
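The sketch below shows the general idea of unrolling ADMM into a network with learnable per-stage parameters, in the spirit of the 2D-ADN described above. It assumes a masked 2D Fourier sensing operator and reduces the nonlinear transform layer to a complex soft-threshold; the actual 2D-ADN learns richer transform and multiplier-update layers, so this is a simplified illustration only.

```python
# Unrolled ADMM with learnable penalty and threshold per stage (simplified).
import torch
import torch.nn as nn

def complex_soft_threshold(v, thr):
    mag = v.abs()
    return v * torch.clamp(mag - thr, min=0.0) / (mag + 1e-12)

class ADMMStage(nn.Module):
    def __init__(self):
        super().__init__()
        self.rho = nn.Parameter(torch.tensor(1.0))   # learnable penalty parameter
        self.thr = nn.Parameter(torch.tensor(0.05))  # learnable sparsity threshold

    def forward(self, x, z, u, y, mask):
        # x-update: closed form because the masked FFT operator is diagonal in the Fourier domain
        rhs = mask * y + self.rho * torch.fft.fft2(z - u)
        x = torch.fft.ifft2(rhs / (mask + self.rho))
        z = complex_soft_threshold(x + u, self.thr)  # z-update (nonlinear transform layer)
        u = u + x - z                                # multiplier update layer
        return x, z, u

class ADMMNet2D(nn.Module):
    def __init__(self, n_stages=8):
        super().__init__()
        self.stages = nn.ModuleList([ADMMStage() for _ in range(n_stages)])

    def forward(self, y, mask):
        x = torch.fft.ifft2(mask * y)                # zero-filled initialisation
        z, u = x.clone(), torch.zeros_like(x)
        for stage in self.stages:
            x, z, u = stage(x, z, u, y, mask)
        return x.abs()                               # magnitude image

# Hypothetical usage with 25% randomly sampled wavenumber-domain data:
mask = (torch.rand(64, 64) < 0.25).to(torch.complex64)
y = mask * torch.fft.fft2(torch.randn(64, 64, dtype=torch.complex64))
print(ADMMNet2D()(y, mask).shape)  # torch.Size([64, 64])
```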


2021, Vol 4 (1)
Author(s): Yi Sun, Jianfeng Wang, Jindou Shi, Stephen A. Boppart

Polarization-sensitive optical coherence tomography (PS-OCT) is a high-resolution, label-free optical biomedical imaging modality that is sensitive to the microstructural architecture in tissue that gives rise to form birefringence, such as collagen or muscle fibers. Enabling polarization sensitivity in an OCT system, however, requires additional hardware and complexity. We developed a deep-learning method to synthesize PS-OCT images by training a generative adversarial network (GAN) on OCT intensity and PS-OCT images. The synthesis accuracy was first evaluated by the structural similarity index (SSIM) between the synthetic and real PS-OCT images. Furthermore, the effectiveness of the computational PS-OCT images was validated by separately training two image classifiers, on the real and synthetic PS-OCT images respectively, for cancer/normal classification. The similar classification results of the two trained classifiers demonstrate that the predicted PS-OCT images can potentially be used interchangeably in cancer diagnosis applications. In addition, we applied the trained GAN models to OCT images collected from a separate OCT imaging system, and the synthetic PS-OCT images correlate well with the real PS-OCT images collected from the same sample sites using the PS-OCT imaging system. This computational PS-OCT imaging method has the potential to reduce the cost, complexity, and need for hardware-based PS-OCT imaging systems.
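An illustrative image-to-image GAN training step (pix2pix-style) of the kind described above, mapping OCT intensity images to synthetic PS-OCT images. The toy generator/discriminator definitions and the loss weighting are assumptions, not the authors' exact architecture.

```python
# Conditional GAN training step: adversarial loss + L1 consistency with the real PS-OCT image.
import torch
import torch.nn as nn

G = nn.Sequential(  # toy generator: OCT intensity -> synthetic PS-OCT (1 channel each)
    nn.Conv2d(1, 64, 3, padding=1), nn.ReLU(),
    nn.Conv2d(64, 64, 3, padding=1), nn.ReLU(),
    nn.Conv2d(64, 1, 3, padding=1), nn.Tanh())
D = nn.Sequential(  # toy discriminator on (input, output) pairs
    nn.Conv2d(2, 64, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
    nn.Conv2d(64, 1, 4, stride=2, padding=1))

bce, l1 = nn.BCEWithLogitsLoss(), nn.L1Loss()
opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)

def train_step(oct_intensity, ps_oct, lambda_l1=100.0):
    fake = G(oct_intensity)
    # Discriminator: real pairs -> 1, synthetic pairs -> 0
    d_real = D(torch.cat([oct_intensity, ps_oct], dim=1))
    d_fake = D(torch.cat([oct_intensity, fake.detach()], dim=1))
    loss_d = bce(d_real, torch.ones_like(d_real)) + bce(d_fake, torch.zeros_like(d_fake))
    opt_d.zero_grad(); loss_d.backward(); opt_d.step()
    # Generator: fool the discriminator while staying close to the real PS-OCT image
    d_fake = D(torch.cat([oct_intensity, fake], dim=1))
    loss_g = bce(d_fake, torch.ones_like(d_fake)) + lambda_l1 * l1(fake, ps_oct)
    opt_g.zero_grad(); loss_g.backward(); opt_g.step()
    return loss_d.item(), loss_g.item()

print(train_step(torch.randn(4, 1, 64, 64), torch.rand(4, 1, 64, 64) * 2 - 1))
```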


2021, Vol 7 (6), pp. 99
Author(s): Daniela di Serafino, Germana Landi, Marco Viola

We are interested in the restoration of noisy and blurry images whose texture mainly follows a single direction (i.e., directional images). Problems of this type arise, for example, in microscopy or computed tomography of carbon or glass fibres. To deal with such problems, the Directional Total Generalized Variation (DTGV) regularizer was developed by Kongskov et al. in 2017 and 2019 for the cases of impulse and Gaussian noise. In this article, we focus on images corrupted by Poisson noise, extending DTGV regularization to image restoration models whose data-fitting term is the generalized Kullback–Leibler divergence. We also propose a technique for identifying the main texture direction, which improves upon the techniques used in the aforementioned works on DTGV. We solve the problem with an ADMM algorithm that has proven convergence and subproblems that can be solved exactly at low computational cost. Numerical results on both phantom and real images demonstrate the effectiveness of our approach.
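One standard way (not necessarily the authors' technique) to estimate a single dominant texture direction in an image is via the smoothed structure tensor: the texture runs along the eigenvector associated with its smallest eigenvalue. The sketch below is illustrative only; the smoothing scale and the whole-image aggregation are assumptions.

```python
# Structure-tensor estimate of the dominant texture direction of a directional image.
import numpy as np
from scipy.ndimage import gaussian_filter, sobel

def main_texture_direction(img, sigma=3.0):
    gx, gy = sobel(img, axis=1), sobel(img, axis=0)      # image gradients
    jxx = gaussian_filter(gx * gx, sigma)                # smoothed structure-tensor entries
    jxy = gaussian_filter(gx * gy, sigma)
    jyy = gaussian_filter(gy * gy, sigma)
    Jxx, Jxy, Jyy = jxx.mean(), jxy.mean(), jyy.mean()   # aggregate over the whole image
    # Dominant gradient orientation, rotated by 90 degrees to give the along-texture direction
    theta = 0.5 * np.arctan2(2.0 * Jxy, Jxx - Jyy) + np.pi / 2.0
    return theta  # angle in radians

x, y = np.meshgrid(np.arange(128), np.arange(128))
stripes = np.sin(0.3 * (x * np.cos(0.4) + y * np.sin(0.4)))  # stripes with gradient direction 0.4 rad
print(np.degrees(main_texture_direction(stripes)) % 180.0)   # texture direction, roughly 112.9 degrees
```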

