Proceedings of the Northern Lights Deep Learning Workshop
Latest Publications

Total documents: 17 (five years: 17)
H-index: 0 (five years: 0)
Published by UiT The Arctic University of Norway
ISSN: 2703-6928

Author(s): Frederik Boe Hüttel, Line Katrine Harder Clemmensen

Consistent and accurate estimation of stellar parameters is of great importance for information retrieval in astrophysical research. The parameters span a wide range, from effective temperature to rotational velocity. We propose to estimate the stellar parameters directly from spectral signals coming from the HARPS-N spectrograph pipeline, before any spectrum-processing steps are applied to extract the 1D spectrum. We propose an attention-based model that estimates both the mean and the uncertainty of the stellar parameters by predicting the parameters of a Gaussian distribution. The estimated distributions provide a basis for generating data-driven Gaussian confidence intervals for the estimated stellar parameters. We show that residual networks and attention-based models can estimate the stellar parameters with high accuracy at low signal-to-noise ratios (SNR) compared to previous methods. Using an observation of the Sun from the HARPS-N spectrograph, we show that the models can estimate stellar parameters from real observational data.
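The mean-and-uncertainty idea can be sketched as follows: a network head predicts the parameters (mean, log-variance) of a Gaussian, is trained with the Gaussian negative log-likelihood, and the predicted distribution then yields confidence intervals. The numbers and helper names below are illustrative assumptions, not values from the paper.

```python
import numpy as np

def gaussian_nll(y, mu, log_var):
    # Negative log-likelihood of y under N(mu, exp(log_var)); training on
    # this loss lets the model output both a mean and an uncertainty.
    var = np.exp(log_var)
    return 0.5 * (log_var + (y - mu) ** 2 / var + np.log(2.0 * np.pi))

def confidence_interval(mu, log_var, z=1.96):
    # 95% Gaussian confidence interval from the predicted distribution.
    sigma = np.sqrt(np.exp(log_var))
    return mu - z * sigma, mu + z * sigma

# Hypothetical prediction: effective temperature 5750 K with sigma = 40 K.
mu, log_var = 5750.0, np.log(40.0 ** 2)
lo, hi = confidence_interval(mu, log_var)
```

In this formulation the interval width adapts per spectrum: noisier inputs can be mapped to larger predicted variances.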


Author(s): Daniel J. Trosten, Robert Jenssen, Michael C. Kampffmeyer

Preservation of local similarity structure is a key challenge in deep clustering. Many recent deep clustering methods therefore use autoencoders to help guide the model's neural network towards an embedding that is more reflective of the input-space geometry. However, recent work has shown that autoencoder-based deep clustering models can suffer from objective function mismatch (OFM). To improve the preservation of local similarity structure while keeping OFM low, we develop a new auxiliary objective function for deep clustering. Our Unsupervised Companion Objective (UCO) encourages a consistent clustering structure at intermediate layers in the network, helping it learn an embedding that better reflects the similarity structure of the input space. Since a clustering-based auxiliary objective shares the goal of the main clustering objective, it is less prone to introducing objective function mismatch with the main objective. Our experiments show that attaching the UCO to a deep clustering model improves the model's performance and yields a lower OFM compared to an analogous autoencoder-based model.
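A companion objective of this kind can be sketched as below: each intermediate embedding gets soft cluster assignments against its own set of centroids, and a cross-entropy term pulls them towards the final clustering. The Student-t assignment is a common choice borrowed from deep embedded clustering; the exact parametrization used by the UCO may differ.

```python
import numpy as np

def soft_assign(z, centroids, alpha=1.0):
    # Student-t soft cluster assignments (a common choice; assumed here).
    # z: (N, D) embeddings, centroids: (K, D) -> (N, K) row-stochastic.
    d2 = ((z[:, None, :] - centroids[None, :, :]) ** 2).sum(-1)
    q = (1.0 + d2 / alpha) ** (-(alpha + 1.0) / 2.0)
    return q / q.sum(axis=1, keepdims=True)

def companion_loss(layer_embeddings, layer_centroids, target_q):
    # Cross-entropy between each intermediate layer's assignments and the
    # main clustering, encouraging a consistent structure at every depth.
    total = 0.0
    for z, c in zip(layer_embeddings, layer_centroids):
        q = soft_assign(z, c)
        total += -(target_q * np.log(q + 1e-12)).sum(axis=1).mean()
    return total / len(layer_embeddings)
```

Because the auxiliary term optimizes a clustering criterion rather than reconstruction, its gradients point in a direction compatible with the main clustering objective.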


Author(s): Vemund Sigmundson Schøyen, Narada Dilp Warakagoda, Øivind Midtgaard

This paper presents fast, accurate, and automatic methods for detecting seafloor pipelines in multibeam echo sounder data with deep learning. The proposed methods take inspiration from the highly successful ResNet and YOLO deep learning models and tailor them to the idiosyncrasies of the seafloor pipeline detection task. We use the area-between-lines and Hausdorff line distance functions to accurately evaluate how well the methods localize (pipe)lines. The same functions also show promise as loss functions compared to the standard mean squared error, which does not account for the geometrical interpretation of the regression variables. The model outperforms the highest-likelihood baseline by more than 35% on a region-wise F1-score classification evaluation, while being more than eight times more accurate than the baseline at locating pipelines. It is efficient, operating at over eighteen 32-ping image segments per second, far beyond real-time requirements.
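For lines sampled as point sequences, the two evaluation functions can be sketched as follows; the paper's exact discretization and sampling scheme may differ.

```python
import numpy as np

def hausdorff(a, b):
    # Symmetric Hausdorff distance between two sampled lines,
    # given as point arrays of shape (N, 2) and (M, 2).
    d = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=-1)
    return max(d.min(axis=1).max(), d.min(axis=0).max())

def area_between_lines(y_a, y_b, dx=1.0):
    # Trapezoidal area enclosed between two lines sampled at the
    # same x positions, spaced dx apart.
    d = np.abs(np.asarray(y_a) - np.asarray(y_b))
    return dx * (d[:-1] + d[1:]).sum() / 2.0
```

Unlike a per-coordinate mean squared error, both measures treat the prediction as a geometric curve, which is what makes them attractive as losses for line regression.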


Author(s): Jakeoung Koo, Elise Otterlei Brenne, Anders Bjorholm Dahl, Vedrana Andersen Dahl

Tomographic reconstruction is concerned with computing the cross-sections of an object from a finite number of projections. Many conventional methods represent the cross-sections as images on a regular grid. In this paper, we study a recent coordinate-based neural network for tomographic reconstruction, where the network takes a spatial coordinate as input and outputs the attenuation coefficient at that coordinate. This coordinate-based network allows a continuous representation of the object. Based on this network, we propose a spatial regularization term to obtain a high-quality reconstruction. Experimental results on synthetic data show that the regularization term improves reconstruction quality significantly compared to the baseline. We also provide an ablation study over different architecture configurations and hyper-parameters.
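The coordinate-based representation can be sketched with a tiny MLP mapping (x, y) to an attenuation value, queryable at any continuous position. The finite-difference smoothness penalty below is only one plausible form of spatial regularization; the paper's actual term may differ.

```python
import numpy as np

# Toy fixed weights of a coordinate MLP (in practice these are trained
# so that simulated projections match the measured ones).
rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(2, 32)), np.zeros(32)
W2, b2 = rng.normal(size=(32, 1)), np.zeros(1)

def attenuation(coords):
    # coords: (N, 2) spatial coordinates -> (N,) attenuation coefficients;
    # no pixel grid is involved, so any coordinate can be queried.
    h = np.tanh(coords @ W1 + b1)
    return (h @ W2 + b2).ravel()

def spatial_regularizer(coords, eps=1e-3):
    # Finite-difference gradient penalty at sampled coordinates,
    # a TV-style smoothness term (an assumption, not the paper's exact form).
    gx = (attenuation(coords + [eps, 0.0]) - attenuation(coords)) / eps
    gy = (attenuation(coords + [0.0, eps]) - attenuation(coords)) / eps
    return np.mean(np.abs(gx) + np.abs(gy))
```

During reconstruction, a term like this would be added to the data-fidelity loss, trading projection fit against smoothness of the recovered cross-section.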


2020, Vol. 1, pp. 6
Author(s): Miguel Angel Tejedor Hernandez, Jonas Nordhaug Myhre

Reinforcement learning (RL) is a promising direction for adaptive and personalized type 1 diabetes (T1D) treatment. However, the reward function, one of the most critical components in RL, is in most cases hand-designed and often overlooked. In this paper we show that different reward functions can dramatically influence the final result when using RL to treat in-silico T1D patients.
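To illustrate why the choice matters, here are two hand-designed rewards over blood glucose (mg/dL) that rank the same states differently; both designs are hypothetical examples, not the paper's.

```python
def reward_target_range(bg, low=70.0, high=180.0):
    # Binary design: +1 anywhere inside the target glycemic range,
    # -1 outside. All in-range states look equally good to the agent.
    return 1.0 if low <= bg <= high else -1.0

def reward_distance(bg, target=112.5):
    # Smooth design: penalize squared distance from a target value.
    # In-range states are now ranked, changing the learned policy.
    return -((bg - target) / 100.0) ** 2
```

An agent trained on the first reward has no incentive to move glucose from 70 towards 112.5, while one trained on the second does; the two therefore converge to different treatment policies.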


2020, Vol. 1, pp. 6
Author(s): Alexandra Vedeler, Narada Warakagoda

The task of obstacle avoidance for maritime vessels, such as Unmanned Surface Vehicles (USVs), has traditionally been solved using specialized modules that are designed and optimized separately. However, this approach requires deep insight into the environment, the vessel, and their complex dynamics. We propose an alternative method using Imitation Learning (IL) via Deep Reinforcement Learning (RL) and Deep Inverse Reinforcement Learning (IRL), and present a system that learns an end-to-end steering model capable of mapping radar-like images directly to steering actions in an obstacle avoidance scenario. The USV used in this work is equipped with a radar sensor, and we studied the problem of generating a single action parameter, the heading. We apply an IL algorithm known as generative adversarial imitation learning (GAIL) to develop an end-to-end steering model for a scenario where avoiding an obstacle is the goal. The performance of the system was studied for different design choices and compared to that of a system based on pure RL. The IL system produces results indicating that it grasps the concept of the task and is in many ways on par with the RL system. We deem this promising for future use in tasks that are not as easily described by a reward function.
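The core of GAIL is a discriminator that tries to tell expert state-action pairs from the policy's, while the policy is rewarded for fooling it. A minimal sketch of that surrogate reward, with a logistic discriminator and made-up weights, looks like this:

```python
import numpy as np

def discriminator(sa, w, b):
    # Logistic discriminator D(s, a): probability that the concatenated
    # state-action vector sa came from the expert demonstrations.
    return 1.0 / (1.0 + np.exp(-(sa @ w + b)))

def gail_reward(sa, w, b, eps=1e-8):
    # Surrogate reward used to train the policy: -log(1 - D(s, a)).
    # The policy maximizes it by producing expert-like behavior; no
    # hand-crafted reward function is needed.
    return -np.log(1.0 - discriminator(sa, w, b) + eps)
```

In the steering setting described above, `sa` would combine radar-image features with the heading action; that pairing is an assumption for illustration.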


2020, Vol. 1, pp. 6
Author(s): Henning Petzka, Martin Trimmel, Cristian Sminchisescu

Symmetries in neural networks allow different weight configurations to lead to the same network function. For odd activation functions, the set of transformations mapping between such configurations has been studied extensively, but less is known for neural networks with ReLU activation functions. We give a complete characterization for fully-connected networks with two layers. Apart from two well-known transformations, only degenerate situations allow additional transformations that leave the network function unchanged. Reduction steps can remove only part of the degenerate cases. Finally, we present a non-degenerate situation for deep neural networks that leads to new transformations leaving the network function intact.
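The two well-known ReLU transformations are positive rescaling of a hidden unit (since ReLU(cz) = c·ReLU(z) for c > 0) and permutation of hidden units. Both can be verified numerically on a random two-layer network:

```python
import numpy as np

def relu_net(x, W1, b1, W2, b2):
    # Two-layer fully-connected network with ReLU activation.
    h = np.maximum(0.0, x @ W1 + b1)
    return h @ W2 + b2

rng = np.random.default_rng(1)
W1, b1 = rng.normal(size=(3, 4)), rng.normal(size=4)
W2, b2 = rng.normal(size=(4, 2)), rng.normal(size=2)
x = rng.normal(size=(5, 3))
y = relu_net(x, W1, b1, W2, b2)

# 1) Positive rescaling: scale incoming weights by c, outgoing by 1/c.
c = 2.5
assert np.allclose(y, relu_net(x, W1 * c, b1 * c, W2 / c, b2))

# 2) Permutation of hidden units.
p = rng.permutation(4)
assert np.allclose(y, relu_net(x, W1[:, p], b1[p], W2[p], b2))
```

The paper's result is that, away from degenerate weight configurations, compositions of these two are the only function-preserving transformations in the two-layer case.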


2020, Vol. 1, pp. 6
Author(s): Polina Kurtser, Ola Ringdahl, Nati Rotstein, Henrik Andreasson

In this paper we present the use of PointNet, a deep neural network that consumes raw, un-ordered point clouds, for the detection of grape vine clusters in outdoor conditions. We investigate the added value of feeding the detection network with both RGB and depth data, contrary to the common practice in agricultural robotics of relying on RGB alone. A total of 5057 point clouds (1033 manually annotated and 4024 annotated using geometric reasoning) were collected in a field experiment conducted in outdoor conditions on 9 grape vines and 5 plants. The detection results show an overall accuracy of 91% (average class accuracy of 74%, precision 53%, recall 48%) for RGBXYZ data and a significant drop in recall for RGB-only or XYZ-only data. These results suggest that the use of depth cameras is crucial for vision in agricultural robotics for crops where the color contrast between the crop and the background is complex. The results also suggest that geometric reasoning can be used to increase training set size, a major bottleneck in the development of agricultural vision systems.
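PointNet's core trick, which makes raw un-ordered clouds usable, is a shared per-point MLP followed by a symmetric max-pooling, so the descriptor is invariant to point order. A minimal sketch with 6-dimensional RGBXYZ points (dimensions and weights are illustrative):

```python
import numpy as np

def pointnet_features(points, W, b):
    # points: (N, 6) un-ordered RGBXYZ points. A shared per-point MLP
    # followed by max pooling yields one permutation-invariant
    # descriptor for the whole cloud (PointNet's key idea).
    h = np.maximum(0.0, points @ W + b)
    return h.max(axis=0)

rng = np.random.default_rng(0)
pts = rng.normal(size=(128, 6))           # toy RGBXYZ cloud
W, b = rng.normal(size=(6, 64)), np.zeros(64)

f1 = pointnet_features(pts, W, b)
f2 = pointnet_features(pts[rng.permutation(128)], W, b)
assert np.allclose(f1, f2)  # point order does not affect the descriptor
```

Dropping the RGB or XYZ columns of the input simply narrows the first weight matrix, which is how the RGB-only and XYZ-only ablations above can be run.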


2020, Vol. 1, pp. 6
Author(s): Thomas Haugland Johansen, Steffen Aagaard Sørensen

Foraminifera are single-celled marine organisms which may have a planktic or benthic lifestyle. During their life cycle they construct shells consisting of one or more chambers, and these shells remain as fossils in marine sediments. Classifying and counting these fossils has become an important tool in e.g. oceanography and climatology. Currently, the process of identifying and counting microfossils is performed manually using a microscope and is very time consuming. Developing methods to automate this process is therefore considered important across a range of research fields. We propose the first steps towards a deep learning model that can detect and classify microscopic foraminifera. The proposed model is based on a VGG16 model that has been pretrained on the ImageNet dataset and adapted to the foraminifera task using transfer learning. Additionally, a novel image dataset consisting of microscopic foraminifera and sediments from the Barents Sea region is introduced.
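The transfer-learning recipe amounts to freezing the pretrained backbone and fitting only a new classification head on its extracted features. A framework-free sketch with a softmax head trained by gradient descent (the feature dimension and hyper-parameters are assumptions, not the paper's):

```python
import numpy as np

def train_head(features, labels, n_classes, lr=0.1, steps=200):
    # Transfer-learning sketch: `features` stand in for activations of a
    # frozen, ImageNet-pretrained backbone; only this new softmax head
    # is trained on the target (e.g. foraminifera) classes.
    rng = np.random.default_rng(0)
    W = rng.normal(scale=0.01, size=(features.shape[1], n_classes))
    onehot = np.eye(n_classes)[labels]
    for _ in range(steps):
        logits = features @ W
        p = np.exp(logits - logits.max(axis=1, keepdims=True))
        p /= p.sum(axis=1, keepdims=True)
        # Gradient of the softmax cross-entropy w.r.t. W.
        W -= lr * features.T @ (p - onehot) / len(features)
    return W
```

Because only the small head is optimized, a modest labeled dataset like the Barents Sea one can suffice, which is the usual motivation for transfer learning.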


2020, Vol. 1, pp. 6
Author(s): Alexandra Albu, Alina Enescu, Luigi Malagò

The ability to automatically detect anomalies in brain MRI scans is of great importance in computer-aided diagnosis. Unsupervised anomaly detection methods work primarily by learning the distribution of healthy images and identifying abnormal tissues as outliers. We propose a slice-wise detection method that first trains a pair of autoencoders on two different datasets, one with healthy individuals and the other with images of normal and tumoral tissues. Next, it classifies slices based on the distance in the latent space between the encoding of the image and the encoding of the reconstructed image, obtained through the autoencoder trained on healthy images only. We validate our approach with a series of preliminary experiments on the HCP and BRATS-15 datasets.
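The scoring step can be sketched with toy stand-ins: the healthy autoencoder reconstructs only a "healthy" subspace, so a slice with content outside that subspace ends up far from its reconstruction in latent space. Everything below (the subspace, the identity encoder) is a deliberately simplified assumption, not the trained models of the paper.

```python
import numpy as np

# Toy healthy autoencoder: it can only represent the first two of four
# dimensions, mimicking a model trained exclusively on healthy scans.
def encode_healthy(x):
    return x[:2]

def decode_healthy(z):
    return np.concatenate([z, np.zeros(2)])

def encode(x):
    # Stand-in for the second autoencoder's encoder (identity for the toy).
    return x

def anomaly_score(x):
    # Latent-space distance between the slice and its reconstruction
    # through the healthy-only autoencoder; large = likely anomalous.
    recon = decode_healthy(encode_healthy(x))
    return np.linalg.norm(encode(x) - encode(recon))

healthy_slice = np.array([1.0, -0.5, 0.0, 0.0])   # lies in the subspace
tumoral_slice = np.array([1.0, -0.5, 2.0, 1.0])   # has off-subspace content
```

Thresholding this score per slice yields the slice-wise classification described above.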

