Brightness Invariant Deep Spectral Super-Resolution

Sensors ◽  
2020 ◽  
Vol 20 (20) ◽  
pp. 5789
Author(s):  
Tarek Stiebel ◽  
Dorit Merhof

Spectral reconstruction from RGB, or spectral super-resolution (SSR), offers a cheap alternative to otherwise costly and more complex spectral imaging devices. In recent years, deep learning based methods have consistently achieved the best reconstruction quality in terms of spectral error metrics. However, there are important properties that are not maintained by deep neural networks. This work is primarily dedicated to scale invariance, also known as brightness invariance or exposure invariance. When RGB signals differ only in their absolute scale, they should lead to identical spectral reconstructions apart from the scaling factor. Scale invariance is an essential property that signal processing must guarantee for a wide range of practical applications. At the moment, scale invariance can only be achieved by relying on a diverse database during network training that covers all possibly occurring signal intensities. In contrast, we propose and evaluate a fundamental approach for deep learning based SSR that holds the property of scale invariance by design and is independent of the training data. The approach is independent of concrete network architectures and instead focuses on reevaluating what neural networks should actually predict. The key insight is that signal magnitudes are irrelevant for acquiring spectral reconstructions from camera signals and are only useful for potential signal denoising.
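The prediction target proposed in the abstract can be illustrated with a short sketch: the network sees only the direction of the camera signal, and the magnitude is reattached afterwards. The 31-band linear map below is a hypothetical stand-in for a trained network, not the authors' architecture:

```python
import numpy as np

# Toy stand-in for a trained SSR network: maps a 3-vector (RGB) to a
# 31-band spectrum. Any function works here; scale invariance comes
# from the wrapper, not from the network itself.
rng = np.random.default_rng(0)
W = rng.standard_normal((31, 3))

def network(rgb):
    return W @ rgb  # hypothetical learned mapping

def reconstruct_scale_invariant(rgb):
    """Predict the spectrum from the *direction* of the RGB signal
    only, then restore the original magnitude afterwards."""
    magnitude = np.linalg.norm(rgb)
    if magnitude == 0:
        return np.zeros(31)
    normalized = rgb / magnitude      # magnitude-free input
    spectrum = network(normalized)    # the network never sees scale
    return magnitude * spectrum       # reattach the scale

rgb = np.array([0.2, 0.5, 0.1])
a = reconstruct_scale_invariant(rgb)
b = reconstruct_scale_invariant(3.0 * rgb)
assert np.allclose(b, 3.0 * a)       # scale invariance by design
```

Because the network input is magnitude-free, the invariance holds for any network in the wrapper, trained or not.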

2020 ◽  
Vol 6 (1) ◽  
Author(s):  
Malte Seemann ◽  
Lennart Bargsten ◽  
Alexander Schlaefer

Deep learning methods produce promising results when applied to a wide range of medical imaging tasks, including segmentation of artery lumen in computed tomography angiography (CTA) data. However, to perform well, neural networks have to be trained on large amounts of high-quality annotated data. In the realm of medical imaging, annotations are not only quite scarce but also often not entirely reliable. To tackle both challenges, we developed a two-step approach for generating realistic synthetic CTA data for the purpose of data augmentation. In the first step, moderately realistic images are generated in a purely numerical fashion. In the second step, these images are improved by applying neural domain adaptation. We evaluated the impact of synthetic data on lumen segmentation via convolutional neural networks (CNNs) by comparing the resulting performances. Improvements of up to 5% in Dice coefficient and 20% in Hausdorff distance represent a proof of concept that the proposed augmentation procedure can be used to enhance deep learning-based segmentation of artery lumen in CTA images.
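The two reported metrics can be computed for binary segmentation masks as below; this is a generic sketch (pure NumPy, brute-force Hausdorff), not the authors' evaluation code:

```python
import numpy as np

def dice_coefficient(a, b):
    """Overlap between two binary masks: 2|A∩B| / (|A| + |B|)."""
    a, b = a.astype(bool), b.astype(bool)
    denom = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0

def hausdorff_distance(a, b):
    """Symmetric Hausdorff distance between the foreground pixel sets
    of two binary masks (brute force; fine for small masks)."""
    pa = np.argwhere(a)
    pb = np.argwhere(b)
    d = np.linalg.norm(pa[:, None, :] - pb[None, :, :], axis=-1)
    return max(d.min(axis=1).max(), d.min(axis=0).max())

# Two toy "lumen" masks differing by a one-pixel shift.
pred = np.zeros((8, 8), dtype=bool); pred[2:5, 2:5] = True
true = np.zeros((8, 8), dtype=bool); true[3:6, 2:5] = True
print(round(dice_coefficient(pred, true), 3))   # 0.667
print(hausdorff_distance(pred, true))           # 1.0
```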


2021 ◽  
Vol 12 (3) ◽  
pp. 46-47
Author(s):  
Nikita Saxena

Space-borne satellite radiometers measure sea surface temperature (SST), which is pivotal to studies of air-sea interactions and ocean features. Under clear-sky conditions, high-resolution measurements are obtainable, but under cloudy conditions, data analysis is constrained to the available low-resolution measurements. We assess the efficiency of deep learning (DL) architectures, particularly convolutional neural networks (CNNs), at downscaling oceanographic data from low spatial resolution (SR) to high SR. Focusing on SST fields of the Bay of Bengal, this study shows that a Very Deep Super-Resolution (VDSR) CNN can successfully reconstruct SST observations from 15 km SR to 5 km SR, and from 5 km SR to 1 km SR. This outcome calls attention to the significance of DL models explicitly trained to reconstruct high-SR SST fields from low-SR data. Inference with DL models can substitute for the existing computationally expensive downscaling technique, dynamical downscaling. The complete code is available in a GitHub repository.
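The VDSR strategy, interpolating the coarse field to the target grid and learning only a residual correction, can be sketched as follows. The nearest-neighbour upsampling and the smoothing "network" are stand-ins for the bicubic interpolation and trained CNN:

```python
import numpy as np

def upscale(sst_lr, factor):
    """Nearest-neighbour upsampling (stands in for the bicubic
    interpolation a VDSR pipeline normally starts from)."""
    return np.kron(sst_lr, np.ones((factor, factor)))

def vdsr_like(sst_lr, factor, residual_net):
    """VDSR-style super-resolution: interpolate the low-resolution SST
    field to the target grid, then add a learned residual correction."""
    coarse = upscale(sst_lr, factor)
    return coarse + residual_net(coarse)

def dummy_residual(field):
    """Stand-in for the trained residual CNN: a light smoothing term."""
    pad = np.pad(field, 1, mode="edge")
    smooth = (pad[:-2, 1:-1] + pad[2:, 1:-1] +
              pad[1:-1, :-2] + pad[1:-1, 2:]) / 4.0
    return 0.1 * (smooth - field)

sst_15km = np.array([[28.0, 28.4], [27.6, 28.1]])  # toy 15 km field (degC)
sst_5km = vdsr_like(sst_15km, 3, dummy_residual)   # 15 km -> 5 km grid
```

The learned part only has to model the small residual between the interpolated and true fields, which is what makes very deep residual networks trainable for this task.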


2021 ◽  
Vol 10 (1) ◽  
Author(s):  
Xinyang Li ◽  
Guoxun Zhang ◽  
Hui Qiao ◽  
Feng Bao ◽  
Yue Deng ◽  
...  

The development of deep learning and open access to a substantial collection of imaging data together provide a potential solution for computational image transformation, which is gradually changing the landscape of optical imaging and biomedical research. However, current implementations of deep learning usually operate in a supervised manner, and their reliance on laborious and error-prone data annotation procedures remains a barrier to more general applicability. Here, we propose an unsupervised image transformation to facilitate the utilization of deep learning for optical microscopy, even in some cases in which supervised models cannot be applied. Through the introduction of a saliency constraint, the unsupervised model, named Unsupervised content-preserving Transformation for Optical Microscopy (UTOM), can learn the mapping between two image domains without requiring paired training data while avoiding distortions of the image content. UTOM shows promising performance in a wide range of biomedical image transformation tasks, including in silico histological staining, fluorescence image restoration, and virtual fluorescence labeling. Quantitative evaluations reveal that UTOM achieves stable and high-fidelity image transformations across different imaging conditions and modalities. We anticipate that our framework will encourage a paradigm shift in training neural networks and enable more applications of artificial intelligence in biomedical imaging.


Author(s):  
Carlos Lassance ◽  
Vincent Gripon ◽  
Antonio Ortega

For the past few years, deep learning (DL) robustness (i.e., the ability to maintain the same decision when inputs are subject to perturbations) has become a question of paramount importance, in particular in settings where misclassification can have dramatic consequences. To address this question, authors have proposed different approaches, such as adding regularizers or training on noisy examples. In this paper we introduce a regularizer based on the Laplacian of similarity graphs obtained from the representation of training data at each layer of the DL architecture. This regularizer penalizes large changes (across consecutive layers of the architecture) in the distances between examples of different classes, and as such enforces smooth variation of the class boundaries. We provide theoretical justification for this regularizer and demonstrate its effectiveness in improving robustness on classical supervised-learning vision datasets for various types of perturbations. We also show that it can be combined with existing methods to increase overall robustness.
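A minimal sketch of a layer-wise Laplacian smoothness penalty of this flavor is shown below. The dense RBF similarity graph, one-hot label signal, and absolute-difference aggregation are assumptions for illustration, not the paper's exact formulation:

```python
import numpy as np

def label_smoothness(features, labels_onehot, sigma=1.0):
    """Laplacian quadratic form tr(Y^T L Y) of the label signal on a
    similarity graph built from one layer's representations; small
    values mean same-class examples sit close together in that layer."""
    sq = np.sum((features[:, None, :] - features[None, :, :]) ** 2, axis=-1)
    W = np.exp(-sq / (2.0 * sigma ** 2))   # dense RBF similarity graph
    np.fill_diagonal(W, 0.0)
    L = np.diag(W.sum(axis=1)) - W         # combinatorial graph Laplacian
    return np.trace(labels_onehot.T @ L @ labels_onehot)

def laplacian_regularizer(layer_features, labels_onehot):
    """Penalize changes in label smoothness across consecutive layers,
    encouraging class boundaries to vary smoothly through the network."""
    s = [label_smoothness(f, labels_onehot) for f in layer_features]
    return sum(abs(s[i + 1] - s[i]) for i in range(len(s) - 1))

# Toy check: representations of 6 examples at 3 consecutive layers.
rng = np.random.default_rng(0)
feats = [rng.standard_normal((6, 4)) for _ in range(3)]
onehot = np.eye(2)[np.array([0, 0, 0, 1, 1, 1])]
penalty = laplacian_regularizer(feats, onehot)
```

In training, such a penalty would be added to the task loss, with the layer representations taken from the network's forward pass on each batch.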


2021 ◽  
Vol 11 (3) ◽  
pp. 1013
Author(s):  
Zvezdan Lončarević ◽  
Rok Pahič ◽  
Aleš Ude ◽  
Andrej Gams

Autonomous robot learning in unstructured environments often faces the problem that the dimensionality of the search space is too large for practical applications. Dimensionality reduction techniques have been developed to address this problem and describe motor skills in low-dimensional latent spaces. Most of these techniques require a sufficiently large database of example task executions to compute the latent space. However, generating many example task executions on a real robot is tedious and prone to errors and equipment failures. The main result of this paper is a new approach for efficient database gathering: performing a small number of task executions with a real robot and applying statistical generalization, e.g., Gaussian process regression, to generate more data. Our experiments show that the data generated this way can be used for dimensionality reduction with autoencoder neural networks. The resulting latent spaces can be exploited to implement robot learning more efficiently. The proposed approach has been evaluated on the problem of robotic throwing at a target. Simulation and real-world results with the humanoid robot TALOS are provided. They confirm the effectiveness of generalization-based database acquisition and the efficiency of learning in a low-dimensional latent space.
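The generalization step can be sketched with a pure-NumPy Gaussian process posterior mean: a handful of recorded executions is densified into many synthetic ones. The throwing data, kernel, and length scale below are hypothetical placeholders, not the paper's setup:

```python
import numpy as np

def rbf(a, b, ell=0.5):
    """Squared-exponential kernel between two 1-D input sets."""
    return np.exp(-(a[:, None] - b[None, :]) ** 2 / (2.0 * ell ** 2))

def gp_posterior_mean(x_train, y_train, x_query, ell=0.5, noise=1e-6):
    """GP regression posterior mean: generalizes a few recorded task
    executions to arbitrary query points."""
    K = rbf(x_train, x_train, ell) + noise * np.eye(len(x_train))
    return rbf(x_query, x_train, ell) @ np.linalg.solve(K, y_train)

# Hypothetical data: 5 real throws (target distance -> action parameter).
x_real = np.array([0.5, 1.0, 1.5, 2.0, 2.5])
y_real = np.sin(x_real) + 0.3 * x_real   # stand-in for measured actions

# Densify the database: 50 synthetic executions for autoencoder training.
x_synth = np.linspace(0.5, 2.5, 50)
y_synth = gp_posterior_mean(x_real, y_real, x_synth)
```

Real task executions are vector-valued (whole trajectories or movement-primitive parameters); the same posterior-mean formula applies column by column.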


2021 ◽  
Vol 13 (7) ◽  
pp. 1236
Author(s):  
Yuanjun Shu ◽  
Wei Li ◽  
Menglong Yang ◽  
Peng Cheng ◽  
Songchen Han

Convolutional neural networks (CNNs) have been widely used for change detection in synthetic aperture radar (SAR) images and have been proven to achieve better precision than traditional methods. A two-stage patch-based deep learning method with a label updating strategy is proposed in this paper. The initial label and mask are generated at the pre-classification stage. Then a two-stage updating strategy is applied to gradually recover changed areas. At the first stage, the diversity of the training data is gradually restored. The output of the designed CNN is further processed to generate a new label and a new mask for the following learning iteration. As the diversity of the data is ensured after the first stage, pixels within uncertain areas can be easily classified at the second stage. Experimental results on several representative datasets show the effectiveness of the proposed method compared with several existing competitive methods.


2020 ◽  
Vol 36 (2) ◽  
pp. 265-310 ◽  
Author(s):  
Morteza Asghari ◽  
Amir Dashti ◽  
Mashallah Rezakazemi ◽  
Ebrahim Jokar ◽  
Hadi Halakoei

Artificial neural networks (ANNs), a powerful technique for solving complicated problems in membrane separation processes, have been employed in a wide range of chemical engineering applications. ANNs can be used to model different processes more easily than other modeling methods. Besides that, the computing time in the design of a membrane separation plant is shorter compared to many mass transfer models. The membrane separation field requires an alternative model that can work alone or in parallel with theoretical or numerical types, and that can be quicker and often more reliable. ANNs are helpful in cases where scientists do not thoroughly know the physical and chemical rules that govern a system: ANN modeling requires no deep knowledge of the processes or of the mathematical equations that govern them. Neural networks are commonly used to estimate membrane performance characteristics such as permeate flux and rejection over the entire range of process variables, such as pressure, solute concentration, temperature, superficial flow velocity, etc. This review investigates important aspects of ANNs, such as methods of development and training, and modeling strategies for different types of applications [microfiltration (MF), ultrafiltration (UF), nanofiltration (NF), reverse osmosis (RO), electrodialysis (ED), etc.]. It also deals with particular types of ANNs that have been confirmed to be effective in practical applications and points out the advantages and disadvantages of using them. Combining an ANN, which gives accurate predictions, with a mechanistic model, which is less accurate but embodies physical and chemical laws, can provide a thorough understanding of a process.
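The modeling pattern the review describes, fitting an ANN to map process variables to a performance characteristic, can be sketched with a one-hidden-layer network trained by gradient descent. The data, network size, and flux relation below are entirely synthetic illustrations, not drawn from any membrane study:

```python
import numpy as np

# Synthetic (pressure, concentration) -> permeate-flux data, all scaled
# to [0, 1]; a hypothetical stand-in for experimental measurements.
rng = np.random.default_rng(0)
X = rng.uniform(0.0, 1.0, (200, 2))
y = 0.8 * X[:, 0] - 0.5 * X[:, 1] + 0.3   # toy flux relation

# One-hidden-layer ANN with tanh activation.
W1 = rng.standard_normal((2, 8)) * 0.5
b1 = np.zeros(8)
W2 = rng.standard_normal(8) * 0.5
b2 = 0.0

def forward(X):
    h = np.tanh(X @ W1 + b1)
    return h @ W2 + b2, h

losses = []
lr = 0.1
for _ in range(500):
    pred, h = forward(X)
    err = pred - y
    losses.append(np.mean(err ** 2))
    # Backpropagation of the mean-squared-error loss.
    gW2 = h.T @ err / len(X)
    gb2 = err.mean()
    gh = np.outer(err, W2) * (1.0 - h ** 2)
    gW1 = X.T @ gh / len(X)
    gb1 = gh.mean(axis=0)
    W2 -= lr * gW2; b2 -= lr * gb2
    W1 -= lr * gW1; b1 -= lr * gb1
```

Once trained, the same `forward` pass estimates flux for unseen operating conditions, which is the "quicker alternative to mass transfer models" role the review highlights.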


Author(s):  
Herman Njoroge Chege

Point 1: Deep learning algorithms are revolutionizing how hypothesis generation, pattern recognition, and prediction occur in the sciences. In the life sciences, particularly biology and its subfields, the use of deep learning is slowly but steadily increasing. However, prototyping or development of tools for practical applications remains in the domain of experienced coders. Furthermore, many tools can be quite costly and difficult to put together without expertise in artificial intelligence (AI) computing. Point 2: We built a biological species classifier that leverages existing open-source tools and libraries. We designed the corresponding tutorial for users with basic skills in Python and a small but well-curated image dataset. We included annotated code in the form of a Jupyter Notebook that can be adapted to any image dataset, ranging from satellite images to animals to bacteria. The prototype is publicly available and can be adapted for citizen science as well as other applications not envisioned in this paper. Point 3: We illustrate our approach with a case study of 219 images of three sea star species. We show that with minimal parameter tuning of the AI pipeline we can create a classifier with high accuracy. We include additional approaches to understand the misclassified images and to curate the dataset to increase accuracy. Point 4: The power of AI approaches is becoming increasingly accessible. We can now readily build and prototype species classifiers that can have a great impact on research that requires species identification and other types of image analysis. Such tools have implications for citizen science, biodiversity monitoring, and a wide range of ecological applications.


2021 ◽  
Author(s):  
Timothy Tiggeloven ◽  
Anaïs Couasnon ◽  
Chiem van Straaten ◽  
Sanne Muis ◽  
Philip Ward

In order to better understand current coastal flood risk, it is critical to be able to predict the characteristics of non-tidal residuals (hereafter referred to as surges), such as their temporal variation and the influence of coastal complexities on the magnitude of storm surge levels. In this study, we use an ensemble of Deep Learning (DL) models to predict hourly surge levels using four different types of neural networks and evaluate their performance. Among DL models, artificial neural networks (ANN) have been popular for surge level prediction, but other DL model types have not been investigated yet. In this contribution, we use three further DL approaches (CNN, LSTM, and a combined CNN-LSTM model) to capture temporal, spatial, and spatio-temporal dependencies between atmospheric conditions and surges for 736 tide gauge locations. Using the high temporal and spatial resolution atmospheric reanalysis dataset ERA5 from ECMWF as predictor, we train, validate, and test the models on observed hourly surge levels derived from the GESLA-2 dataset. We benchmark our DL results against a simple probabilistic reference model based on climatology. This study shows promising results for predicting the temporal evolution of surges with DL approaches and gives insight into the skill that can be gained using DL models with different architectures for surge prediction. We therefore foresee a wide range of advantages in using DL models for coastal applications: probabilistic coastal flood hazard assessment, rapid prediction of storm surge estimates, and future predictions of surge levels.
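A probabilistic climatology benchmark of the kind mentioned can be sketched as below. The abstract does not specify the reference model's construction, so this hour-of-day grouping (empirical mean and spread per hour) is a minimal stand-in:

```python
import numpy as np

def climatology_baseline(obs_times, obs_surges, query_times):
    """Probabilistic climatology reference: the forecast for any time
    is the empirical mean and spread of all observed surges at that
    hour of day. Times are integer hours since the record start."""
    obs_times = np.asarray(obs_times)
    obs_surges = np.asarray(obs_surges)
    hour = obs_times % 24
    means = np.array([obs_surges[hour == h].mean() for h in range(24)])
    stds = np.array([obs_surges[hour == h].std() for h in range(24)])
    q = np.asarray(query_times) % 24
    return means[q], stds[q]

# Hypothetical record: 30 days of hourly surges with a diurnal signal.
t_obs = np.arange(24 * 30)
surge = 0.2 * np.sin(2.0 * np.pi * (t_obs % 24) / 24.0)
mean_fc, spread_fc = climatology_baseline(t_obs, surge, np.arange(24))
```

A DL model only demonstrates skill to the extent that it beats such a baseline under a proper probabilistic score.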


Geophysics ◽  
2021 ◽  
pp. 1-45
Author(s):  
Runhai Feng ◽  
Dario Grana ◽  
Niels Balling

Segmentation of faults based on seismic images is an important step in reservoir characterization. With the recent developments of deep-learning methods and the availability of massive computing power, automatic interpretation of seismic faults has become possible. The likelihood of occurrence for a fault can be quantified using a sigmoid function. Our goal is to quantify the fault model uncertainty that is generally not captured by deep-learning tools. We propose to use the dropout approach, a regularization technique to prevent overfitting and co-adaptation in hidden units, to approximate the Bayesian inference and estimate the principled uncertainty over functions. Particularly, the variance of the learned model has been decomposed into aleatoric and epistemic parts. The proposed method is applied to a real dataset from the Netherlands F3 block with two different dropout ratios in convolutional neural networks. The aleatoric uncertainty is irreducible since it relates to the stochastic dependency within the input observations. As the number of Monte-Carlo realizations increases, the epistemic uncertainty asymptotically converges and the model standard deviation decreases, because the variability of model parameters is better simulated or explained with a larger sample size. This analysis can quantify the confidence to use fault predictions with less uncertainty. Additionally, the analysis suggests where more training data are needed to reduce the uncertainty in low confidence regions.
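The Monte-Carlo dropout procedure can be sketched as below: dropout stays active at test time, and the spread of the stochastic sigmoid outputs is split into epistemic and aleatoric parts. A single linear layer stands in for the CNN, and this particular variance split (spread of MC probabilities vs. mean Bernoulli variance) is one common decomposition, not necessarily the paper's exact formula:

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def mc_dropout_uncertainty(x, weights, p_drop=0.5, n_mc=500):
    """Monte-Carlo dropout: collect the sigmoid fault probabilities of
    many stochastic forward passes, then split the predictive
    uncertainty into epistemic and aleatoric components."""
    probs = np.empty(n_mc)
    for t in range(n_mc):
        mask = rng.random(weights.shape) > p_drop   # drop units at random
        w = weights * mask / (1.0 - p_drop)         # inverted dropout scaling
        probs[t] = sigmoid(w @ x)
    epistemic = probs.var()                     # spread across MC samples
    aleatoric = (probs * (1.0 - probs)).mean()  # mean Bernoulli variance
    return probs.mean(), aleatoric, epistemic

x = np.array([0.5, -1.2, 0.3, 0.8])   # toy seismic attribute vector
w = np.array([1.0, 0.4, -0.7, 0.9])   # stand-in for trained weights
p, alea, epi = mc_dropout_uncertainty(x, w)
```

As in the abstract, increasing `n_mc` stabilizes the epistemic estimate, while the aleatoric term is tied to the data and does not shrink.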

