Novel U-net based deep neural networks for transmission tomography

2021 ◽  
pp. 1-19
Author(s):  
Csaba Olasz ◽  
László G. Varga ◽  
Antal Nagy

BACKGROUND: The fusion of computed tomography and deep learning is an effective way of achieving improved image quality and artifact reduction in reconstructed images. OBJECTIVE: In this paper, we present two novel neural network architectures for tomographic reconstruction with reduced effects of beam hardening and electrical noise. METHODS: In the proposed architectures, the image reconstruction step is located inside the neural networks, which allows the networks to be trained while taking the mathematical model of the projections into account. This strong connection enables us to enhance the projection data and the reconstructed image together. We tested the two proposed models against three other methods on two datasets. The datasets contain physically correct simulated data, and they show strong signs of beam hardening and electrical noise. We also performed a numerical evaluation of the neural networks on the reconstructed images according to three error measurements and provided a scoring system for the methods derived from the three measures. RESULTS: The results showed the superiority of the novel architecture called TomoNet2. Compared to the FBP method, TomoNet2 improved the average Structural Similarity Index from 0.9372 to 0.9977 and from 0.9519 to 0.9886 on the two datasets. It also yielded the best Peak Signal-to-Noise Ratio in 79.2 and 53.0 percent of the cases on the two datasets, compared to the other improvement techniques. CONCLUSIONS: Our experimental results showed that using the reconstruction step in skip connections of deep neural networks improves the quality of the reconstructions. We are confident that our proposed method can be effectively applied to other datasets for tomographic purposes.
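
A minimal sketch of the architectural idea, assuming a PyTorch setting: a projection-domain subnetwork, a differentiable reconstruction operator placed inside the network, and an image-domain subnetwork with a skip connection through the reconstruction step. The `DifferentiableFBPStandIn` module and the layer sizes are illustrative placeholders, not the authors' implementation.

```python
# Illustrative sketch only: the frozen linear layer below stands in for a
# differentiable filtered back-projection (FBP) operator.
import torch
import torch.nn as nn

class DifferentiableFBPStandIn(nn.Module):
    """Placeholder for a differentiable reconstruction step."""
    def __init__(self, n_angles, n_dets, img_size):
        super().__init__()
        self.proj = nn.Linear(n_angles * n_dets, img_size * img_size, bias=False)
        self.proj.weight.requires_grad_(False)   # fixed operator, gradients pass through
        self.img_size = img_size

    def forward(self, sino):                      # sino: (B, 1, n_angles, n_dets)
        img = self.proj(sino.flatten(1))
        return img.view(sino.shape[0], 1, self.img_size, self.img_size)

class TomoNetSketch(nn.Module):
    def __init__(self, n_angles=60, n_dets=64, img_size=64):
        super().__init__()
        self.sino_net = nn.Sequential(            # enhance projection data
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 1, 3, padding=1))
        self.recon = DifferentiableFBPStandIn(n_angles, n_dets, img_size)
        self.img_net = nn.Sequential(             # enhance reconstructed image
            nn.Conv2d(2, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 1, 3, padding=1))

    def forward(self, sino):
        recon_enhanced = self.recon(self.sino_net(sino))
        recon_raw = self.recon(sino)              # skip connection through the
        x = torch.cat([recon_enhanced, recon_raw], dim=1)  # reconstruction step
        return self.img_net(x)

model = TomoNetSketch()
print(model(torch.randn(2, 1, 60, 64)).shape)     # torch.Size([2, 1, 64, 64])
```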

Author(s):  
Yun-Peng Liu ◽  
Ning Xu ◽  
Yu Zhang ◽  
Xin Geng

The performance of deep neural networks (DNNs) crucially relies on the quality of labeling. In some situations, labels are easily corrupted and become noisy. Designing algorithms that deal with noisy labels is therefore of great importance for learning robust DNNs. However, it is difficult to distinguish between clean labels and noisy labels, which becomes the bottleneck of many methods. To address this problem, this paper proposes a novel method named Label Distribution based Confidence Estimation (LDCE). LDCE estimates the confidence of the observed labels based on label distribution. The boundary between clean labels and noisy labels then becomes clear according to the confidence scores. To verify the effectiveness of the method, LDCE is combined with an existing learning algorithm to train robust DNNs. Experiments on both synthetic and real-world datasets substantiate the superiority of the proposed algorithm over state-of-the-art methods.
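
A minimal sketch of the confidence-estimation idea, assuming the predicted label distribution comes from a softmax output; the thresholding rule, the 0.5 cut-off, and the function names are illustrative, not the LDCE algorithm as published.

```python
# Illustrative only: estimate the confidence of each observed label from a
# predicted label distribution, then split samples into "likely clean" and
# "likely noisy" by a confidence threshold.
import numpy as np

def label_confidence(pred_dist, observed_labels):
    """Probability mass the label distribution assigns to each observed label."""
    return pred_dist[np.arange(len(observed_labels)), observed_labels]

rng = np.random.default_rng(0)
logits = rng.normal(size=(6, 3))
pred_dist = np.exp(logits) / np.exp(logits).sum(axis=1, keepdims=True)  # softmax
observed = np.array([0, 1, 2, 0, 1, 2])          # possibly noisy labels

conf = label_confidence(pred_dist, observed)
threshold = 0.5                                  # illustrative cut-off
clean_mask = conf >= threshold                   # treated as clean labels
noisy_mask = ~clean_mask                         # treated as noisy labels
print(conf.round(3), clean_mask)
```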


Author(s):  
V. N. Gridin ◽  
I. A. Evdokimov ◽  
B. R. Salem ◽  
V. I. Solodovnikov

The key stages, implementation features, and operating principles of neural networks, including deep neural networks, are analyzed. The problems of choosing the number of hidden units, selecting the internal topology, and setting parameters are considered. It is shown that, during training and validation, it is possible to control the capacity of a neural network and evaluate the qualitative characteristics of the constructed model. The automation of the construction process and the optimization of hyperparameters of neural network structures are considered, depending on the user's tasks and the available source data. A number of approaches based on probabilistic programming, evolutionary algorithms, and recurrent neural networks are presented.
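
A minimal sketch of automated structure selection, using scikit-learn's MLPClassifier with a randomized search over hidden-layer sizes; the search space, scoring, and synthetic data are illustrative choices, not the approach described by the authors.

```python
# Illustrative only: random search over the number and size of hidden layers,
# with cross-validated model selection.
from sklearn.datasets import make_classification
from sklearn.model_selection import RandomizedSearchCV
from sklearn.neural_network import MLPClassifier

X, y = make_classification(n_samples=500, n_features=20, random_state=0)

param_space = {
    "hidden_layer_sizes": [(16,), (64,), (32, 16), (64, 32), (128, 64, 32)],
    "alpha": [1e-4, 1e-3, 1e-2],                 # L2 regularization strength
}
search = RandomizedSearchCV(
    MLPClassifier(max_iter=500, random_state=0),
    param_distributions=param_space,
    n_iter=8, cv=3, random_state=0)
search.fit(X, y)
print(search.best_params_, round(search.best_score_, 3))
```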


2016 ◽  
Vol 2016 ◽  
pp. 1-17 ◽  
Author(s):  
Diego José Luis Botia Valderrama ◽  
Natalia Gaviria Gómez

The measurement and evaluation of QoE (Quality of Experience) have become one of the main focuses in telecommunications, as providers aim to deliver services with the quality their users expect. However, factors such as network parameters and coding can affect video quality, limiting the correlation between objective and subjective metrics. This increases the complexity of evaluating the real video quality perceived by users. In this paper, a model based on artificial neural networks, namely BPNNs (Backpropagation Neural Networks) and RNNs (Random Neural Networks), is applied to evaluate the subjective quality metric MOS (Mean Opinion Score) together with the PSNR (Peak Signal-to-Noise Ratio), SSIM (Structural Similarity Index Metric), VQM (Video Quality Metric), and QIBF (Quality Index Based Frame). The proposed model allows establishing the QoS (Quality of Service) based on the DiffServ strategy. The metrics were analyzed through Pearson's and Spearman's correlation coefficients, the RMSE (Root Mean Square Error), and the outliers rate. Correlation values greater than 90% were obtained for all the evaluated metrics.
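
A minimal sketch of the reported evaluation step, assuming arrays of subjective and predicted MOS values; the sample numbers are synthetic placeholders.

```python
# Illustrative only: Pearson and Spearman correlation plus RMSE between
# predicted quality scores and subjective MOS values.
import numpy as np
from scipy.stats import pearsonr, spearmanr

mos_subjective = np.array([4.2, 3.8, 2.5, 4.6, 3.1, 1.9])   # placeholder data
mos_predicted  = np.array([4.0, 3.9, 2.8, 4.5, 3.0, 2.2])

pearson, _ = pearsonr(mos_subjective, mos_predicted)
spearman, _ = spearmanr(mos_subjective, mos_predicted)
rmse = np.sqrt(np.mean((mos_subjective - mos_predicted) ** 2))
print(f"Pearson={pearson:.3f}  Spearman={spearman:.3f}  RMSE={rmse:.3f}")
```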


2021 ◽  
Vol 37 (2) ◽  
pp. 123-143
Author(s):  
Tuan Minh Luu ◽  
Huong Thanh Le ◽  
Tan Minh Hoang

Deep neural networks have been applied successfully to extractive text summarization tasks when large training datasets are available. However, when the training dataset is not large enough, these models reveal certain limitations that affect the quality of the system's summary. In this paper, we propose an extractive summarization system based on a Convolutional Neural Network and a Fully Connected network for sentence selection. The pretrained multilingual BERT model is used to generate embedding vectors from the input text. These vectors are combined with TF-IDF values to produce the input of the text summarization system. Redundant sentences are eliminated from the output summary by the Maximal Marginal Relevance method. Our system is evaluated on both English and Vietnamese using the CNN and Baomoi datasets, respectively. Experimental results show that our system achieves better results than existing works using the same datasets, confirming that our approach can be effectively applied to summarize both English and Vietnamese texts.
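
A minimal sketch of the Maximal Marginal Relevance step used to remove redundant sentences, assuming each candidate sentence already has a feature vector and a relevance score from the selection network; the lambda value and helper names are illustrative.

```python
# Illustrative only: greedy Maximal Marginal Relevance selection over
# sentence vectors, trading relevance against redundancy.
import numpy as np

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

def mmr_select(vectors, relevance, k, lam=0.7):
    """Pick k sentence indices balancing relevance and redundancy."""
    selected, candidates = [], list(range(len(vectors)))
    while candidates and len(selected) < k:
        def mmr_score(i):
            redundancy = max((cosine(vectors[i], vectors[j]) for j in selected),
                             default=0.0)
            return lam * relevance[i] - (1 - lam) * redundancy
        best = max(candidates, key=mmr_score)
        selected.append(best)
        candidates.remove(best)
    return selected

rng = np.random.default_rng(0)
sentence_vecs = rng.normal(size=(10, 16))        # stand-in for BERT + TF-IDF features
relevance_scores = rng.uniform(size=10)          # stand-in for selection scores
print(mmr_select(sentence_vecs, relevance_scores, k=3))
```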


2021 ◽  
Author(s):  
Viktória Burkus ◽  
Attila Kárpáti ◽  
László Szécsi

Surface reconstruction for particle-based fluid simulation is a computational challenge on par with the simulation itself. In real-time applications, splatting-style rendering approaches based on forward rendering of particle impostors are prevalent, but they suffer from noticeable artifacts. In this paper, we present a technique that combines forward rendering of simulated features with deep-learning image manipulation to make the rendering quality of splatting-style approaches perceptually similar to ray-tracing solutions, circumventing the cost, complexity, and limitations of exact fluid surface rendering by replacing it with the flat cost of a neural network pass. Our solution is based on the idea of training generative deep neural networks with image pairs consisting of cheap particle impostor renders and high-quality ray-traced ground-truth images.
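
A minimal sketch of training on image pairs (cheap impostor render mapped to a ray-traced target), assuming a small convolutional generator and an L1 reconstruction loss; random tensors stand in for the actual renders, and any adversarial loss used in a full generative setup is omitted.

```python
# Illustrative only: fit an image-to-image network on (impostor, ray-traced)
# pairs; data here are random placeholders.
import torch
import torch.nn as nn

generator = nn.Sequential(
    nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
    nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
    nn.Conv2d(32, 3, 3, padding=1))
opt = torch.optim.Adam(generator.parameters(), lr=1e-3)

impostor = torch.rand(4, 3, 64, 64)              # cheap splatting-style render
ray_traced = torch.rand(4, 3, 64, 64)            # ground-truth target

for step in range(10):                           # tiny demo training loop
    pred = generator(impostor)
    loss = nn.functional.l1_loss(pred, ray_traced)
    opt.zero_grad()
    loss.backward()
    opt.step()
print(float(loss))
```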


10.29007/p655 ◽  
2018 ◽  
Author(s):  
Sai Prabhakar Pandi Selvaraj ◽  
Manuela Veloso ◽  
Stephanie Rosenthal

Significant advances in the performance of deep neural networks, such as Convolutional Neural Networks (CNNs) for image classification, have created a drive to understand how they work. Different techniques have been proposed to determine which features (e.g., image pixels) are most important for a CNN's classification. However, the important features output by these techniques have typically been judged subjectively by a human, assessing whether they capture the features relevant to the classification rather than whether they were actually important to the classifier itself. We address the need for an objective measure of the quality of different feature importance measures. In particular, we propose measuring the ratio of a CNN's accuracy on the whole image compared to an image containing only the important features. We also consider scaling this ratio by the relative size of the important region in order to measure conciseness. We demonstrate that our measures correlate well with prior subjective comparisons of important features but, importantly, do not require their human studies. We also demonstrate that the features which multiple techniques agree are important have a higher impact on accuracy than features found by only one technique.
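
A minimal sketch of the two measures described, assuming classifier accuracies have already been computed on whole images and on images containing only the important features; how the conciseness scaling is applied (division by the region fraction here) is an assumption, and the function names are illustrative.

```python
# Illustrative only: accuracy ratio between important-features-only images and
# whole images, with an optional conciseness scaling by region size.
def importance_quality(acc_whole, acc_masked):
    """Ratio of accuracy on important-features-only images to whole images."""
    return acc_masked / acc_whole

def concise_importance_quality(acc_whole, acc_masked, region_fraction):
    """Assumed form: reward small important regions by dividing by their size."""
    return importance_quality(acc_whole, acc_masked) / region_fraction

acc_whole, acc_masked = 0.92, 0.85               # placeholder accuracies
region_fraction = 0.20                           # important region covers 20%
print(round(importance_quality(acc_whole, acc_masked), 3))
print(round(concise_importance_quality(acc_whole, acc_masked, region_fraction), 3))
```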


The authors apply deep neural networks, a type of machine learning method, to model the prepayment behavior of agency mortgage-backed security (MBS) 30-year, fixed-rate pools. The neural network model (NNM) produces highly accurate fits to the historical prepayment patterns as well as accurate sensitivities to economic and pool-level risk factors. These results are comparable with the results and intuitions obtained from a traditional agency pool-level prepayment model that is in production and was built through many iterations of trial and error over months and years. This example shows that an NNM can process large datasets efficiently, capture very complex prepayment patterns, and model a large group of risk factors that are highly nonlinear and interactive. The authors also examine various potential shortcomings of the approach, including nontransparency/"black-box" issues, model overfitting, and regime-shift issues.
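
A minimal sketch of fitting a feed-forward network to pool-level risk factors, using scikit-learn's MLPRegressor on synthetic placeholder data; the factor names, target construction, and network size are illustrative, not the authors' production model.

```python
# Illustrative only: a small neural network regressing a prepayment-speed
# proxy on pool-level risk factors; all data below are synthetic.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
n = 2000
factors = np.column_stack([
    rng.uniform(-2, 2, n),                       # e.g. rate incentive (assumed)
    rng.uniform(0, 360, n),                      # e.g. loan age in months (assumed)
    rng.uniform(0, 1, n),                        # e.g. burnout proxy (assumed)
])
prepay_speed = (0.3 * np.tanh(factors[:, 0]) + 0.001 * factors[:, 1]
                + rng.normal(scale=0.05, size=n))  # synthetic nonlinear target

model = MLPRegressor(hidden_layer_sizes=(64, 32), max_iter=500, random_state=0)
model.fit(factors, prepay_speed)
print(round(model.score(factors, prepay_speed), 3))   # in-sample R^2
```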


Symmetry ◽  
2022 ◽  
Vol 14 (1) ◽  
pp. 151
Author(s):  
Xintao Duan ◽  
Lei Li ◽  
Yao Su ◽  
Wenxin Wang ◽  
En Zhang ◽  
...  

Data hiding is the technique of embedding data into video or audio media. With the development of deep neural networks (DNNs), the quality of images generated by novel DNN-based data hiding methods keeps improving. However, there is still room to improve the similarity between the original images and the images generated by DNN models trained with existing hiding frameworks, and it is hard for the receiver to verify whether the container image comes from the real sender. We propose a framework, named difference image grafting deep hiding (DIGDH), which introduces a key_img to exploit the over-fitting characteristic of DNNs and combines it symmetrically with difference image grafting. The key_img makes it easy to identify whether the container image comes from the real sender. The experimental results show that, without changing the network structures, models trained with the proposed framework can generate images with higher similarity to the original cover and secret images. According to the analysis results of the steganalysis tool StegExpose, the container images generated by the hiding model trained with the proposed framework are also closer to a random distribution.


2021 ◽  
Author(s):  
Murat Seckin Ayhan ◽  
Louis Benedikt Kuemmerle ◽  
Laura Kuehlewein ◽  
Werner Inhoffen ◽  
Gulnar Aliyeva ◽  
...  

Deep neural networks (DNNs) have achieved physician-level accuracy on many imaging-based medical diagnostic tasks, for example the classification of retinal images in ophthalmology. However, their decision mechanisms are often considered impenetrable, leading to a lack of trust by clinicians and patients. To alleviate this issue, a range of explanation methods have been proposed to expose the inner workings of DNNs leading to their decisions. For imaging-based tasks, this is often achieved via saliency maps. The quality of these maps is typically evaluated via perturbation analysis without experts involved. To facilitate the adoption and success of such automated systems, however, it is crucial to validate saliency maps against clinicians. In this study, we used two different network architectures and developed ensembles of DNNs to detect diabetic retinopathy and neovascular age-related macular degeneration from retinal fundus images and optical coherence tomography scans, respectively. We used a variety of explanation methods and obtained a comprehensive set of saliency maps for explaining the ensemble-based diagnostic decisions. We then systematically validated the saliency maps against clinicians through two main analyses: a direct comparison of saliency maps with expert annotations of disease-specific pathologies, and perturbation analyses that also use expert annotations as saliency maps. We found that the choice of DNN architecture and explanation method significantly influences the quality of saliency maps. Guided Backprop showed consistently good performance across disease scenarios and DNN architectures, suggesting that it provides a suitable starting point for explaining the decisions of DNNs on retinal images.
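
A minimal sketch of the kind of direct comparison described, assuming a continuous saliency map and a binary expert-annotation mask; the threshold and the overlap measure (intersection-over-union here) are illustrative choices, not the study's exact protocol.

```python
# Illustrative only: threshold a saliency map and compare it with an expert
# annotation mask via intersection-over-union.
import numpy as np

def saliency_vs_annotation_iou(saliency, annotation_mask, threshold=0.5):
    """IoU between the thresholded saliency map and the expert mask."""
    sal_mask = saliency >= threshold
    inter = np.logical_and(sal_mask, annotation_mask).sum()
    union = np.logical_or(sal_mask, annotation_mask).sum()
    return inter / union if union else 0.0

rng = np.random.default_rng(0)
saliency = rng.uniform(size=(64, 64))            # e.g. a Guided Backprop map
annotation = np.zeros((64, 64), dtype=bool)
annotation[20:40, 20:40] = True                  # placeholder expert region
print(round(saliency_vs_annotation_iou(saliency, annotation), 3))
```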

