A neuromorphic model of olfactory processing and sparse coding in the Drosophila larva brain

Author(s):  
Anna-Maria Jürgensen ◽  
Afshin Khalili ◽  
Elisabetta Chicca ◽  
Giacomo Indiveri ◽  
Martin Paul Nawrot

Abstract Animal nervous systems are highly efficient at processing sensory input. The neuromorphic computing paradigm aims at the hardware implementation of neural network computations to support novel solutions for building brain-inspired computing systems. Here, we take inspiration from sensory processing in the nervous system of the fruit fly larva. With its strongly limited computational resources of <200 neurons and <1,000 synapses, the larval olfactory pathway employs fundamental computations to transform broadly tuned receptor input at the periphery into an energy-efficient sparse code in the central brain. We show how this approach allows us to achieve sparse coding and increased separability of stimulus patterns in a spiking neural network, validated with both software simulation and hardware emulation on mixed-signal, real-time neuromorphic hardware. We verify that feedback inhibition is the central motif supporting sparseness in the spatial domain, across the neuron population, while the combination of spike-frequency adaptation and feedback inhibition determines sparseness in the temporal domain. Our experiments demonstrate that such small, biologically realistic neural networks, efficiently implemented on neuromorphic hardware, can achieve parallel processing and efficient encoding of sensory input at full temporal resolution.
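
To make the two mechanisms concrete, the following is a minimal NumPy sketch, not the authors' model or their neuromorphic hardware implementation: a population of leaky integrate-and-fire units with spike-frequency adaptation and a single global feedback-inhibition unit, illustrating how both mechanisms sparsen the population response. All parameters, the population size, and the input statistics are illustrative assumptions.

```python
# Minimal sketch (not the authors' model): leaky integrate-and-fire neurons
# with spike-frequency adaptation and one global feedback-inhibition unit.
# All parameters are illustrative.
import numpy as np

rng = np.random.default_rng(0)
N, T, dt = 72, 1000, 1e-3          # neurons, time steps, step size (s)
tau_m, tau_a, tau_i = 20e-3, 200e-3, 50e-3
v_th, b = 1.0, 0.15                # threshold, adaptation increment per spike
w_inh = 0.8                        # strength of global feedback inhibition

drive = rng.uniform(0.8, 1.6, N)   # broadly tuned, odor-like input drive
v = np.zeros(N)                    # membrane potentials
a = np.zeros(N)                    # adaptation currents
inh = 0.0                          # feedback-inhibition unit (APL-like)
spikes = np.zeros((T, N), dtype=bool)

for t in range(T):
    dv = (-v + drive - a - w_inh * inh) * dt / tau_m
    v = np.clip(v + dv, 0.0, None)
    fired = v >= v_th
    spikes[t] = fired
    v[fired] = 0.0                              # reset after a spike
    a += -a * dt / tau_a + b * fired            # spike-frequency adaptation
    inh += (-inh + fired.sum()) * dt / tau_i    # inhibition tracks population activity

rates = spikes.mean(axis=0) / dt
print(f"fraction of near-silent neurons: {(rates < 1.0).mean():.2f}")
```

Removing the adaptation term or the inhibition term in this sketch shows their respective contributions to temporal and spatial sparseness.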


Sensors ◽  
2020 ◽  
Vol 20 (10) ◽  
pp. 2756 ◽  
Author(s):  
Anup Vanarse ◽  
Josafath Israel Espinosa-Ramos ◽  
Adam Osseiran ◽  
Alexander Rassau ◽  
Nikola Kasabov

Existing methods in neuromorphic olfaction mainly focus on implementing the data transformation based on the neurobiological architecture of the olfactory pathway. While this transformation is pivotal for the sparse spike-based representation of odor data, classification techniques based on the bio-computations of the higher brain areas, which process the spiking data to identify odors, remain largely unexplored. This paper argues that brain-inspired spiking neural networks constitute a promising approach for the next generation of machine intelligence for odor data processing. Inspired by principles of brain information processing, we propose here the first spiking neural network method and associated deep machine learning system for the classification of odor data. The paper demonstrates that the proposed approach has several advantages over current state-of-the-art methods. Based on results obtained using a benchmark dataset, the model achieved high classification accuracy for a large number of odors and has the capacity for incremental learning on new data. The paper explores different spike encoding algorithms and finds that the most suitable for the task is the step-wise encoding function. Further directions in the brain-inspired study of odor machine classification include investigating more biologically plausible algorithms for mapping, learning, and interpretation of odor data, along with the realization of these algorithms on highly parallel, low-power neuromorphic hardware devices for real-world applications.
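
The abstract singles out a step-wise encoding function as the best-performing spike encoder. As a hedged illustration of that family of encoders, the sketch below implements the commonly used step-forward scheme, which emits a positive or negative spike whenever the analog sensor signal moves more than a fixed step away from a running baseline; whether this matches the paper's exact variant, and the step size and sensor trace used here, are assumptions.

```python
# Hedged sketch of a step-wise spike encoding for an analog sensor trace
# (step-forward scheme). Threshold and data are illustrative.
import numpy as np

def step_forward_encode(signal, step):
    """Return an array of +1/-1/0 spikes, one entry per sample."""
    spikes = np.zeros(len(signal), dtype=int)
    baseline = signal[0]
    for t, x in enumerate(signal):
        if x > baseline + step:      # signal rose past the step: positive spike
            spikes[t] = 1
            baseline += step
        elif x < baseline - step:    # signal fell past the step: negative spike
            spikes[t] = -1
            baseline -= step
    return spikes

# Example: a noisy gas-sensor-like response curve
t = np.linspace(0, 10, 500)
response = np.tanh(t - 3) * np.exp(-0.1 * t) \
    + 0.02 * np.random.default_rng(1).normal(size=t.size)
spike_train = step_forward_encode(response, step=0.05)
print("positive spikes:", (spike_train == 1).sum(),
      "negative spikes:", (spike_train == -1).sum())
```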


2020 ◽  
Author(s):  
Dianbo Liu

BACKGROUND Applications of machine learning (ML) in health care can have a great impact on people's lives. At the same time, medical data are usually large, requiring significant computational resources. While this may not be a problem for the wide adoption of ML tools in developed nations, the availability of computational resources can be very limited in developing nations and on mobile devices. This can prevent many people from benefiting from advances in ML applications for healthcare. OBJECTIVE In this paper we explored three methods to increase the computational efficiency of either a recurrent neural network (RNN) or a feedforward deep neural network (DNN) without compromising accuracy. We used in-patient mortality prediction on an intensive care dataset as our case study. METHODS We reduced the size of the RNN and DNN by pruning "unused" neurons. Additionally, we modified the RNN structure by adding a hidden layer to the RNN cell while reducing the total number of recurrent layers, thereby reducing the total number of parameters in the network. Finally, we applied quantization to the DNN, forcing the weights to be 8 bits instead of 32 bits. RESULTS We found that all methods increased implementation efficiency, including training speed, memory size and inference speed, without reducing the accuracy of mortality prediction. CONCLUSIONS These improvements allow the implementation of sophisticated NN algorithms on devices with lower computational resources.
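
As a hedged illustration of two of the techniques described above, magnitude-based weight pruning and 8-bit weight quantization, the following NumPy sketch operates on a single dense layer; it is not the paper's code, and the layer shape, sparsity level and quantization scheme are illustrative assumptions.

```python
# Minimal sketch (not the paper's code) of magnitude pruning and symmetric
# 8-bit weight quantization for one dense layer.
import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(scale=0.1, size=(256, 128)).astype(np.float32)  # one dense layer

# 1) Pruning: zero out the 50% of weights with the smallest magnitude,
#    approximating the removal of "unused" connections/neurons.
threshold = np.quantile(np.abs(W), 0.5)
W_pruned = np.where(np.abs(W) >= threshold, W, 0.0).astype(np.float32)

# 2) Quantization: map float32 weights to int8 with a single scale factor.
scale = np.abs(W_pruned).max() / 127.0
W_int8 = np.clip(np.round(W_pruned / scale), -127, 127).astype(np.int8)
W_dequant = W_int8.astype(np.float32) * scale   # values used at inference time

print("sparsity:", float((W_pruned == 0).mean()))
print("max quantization error:", float(np.abs(W_pruned - W_dequant).max()))
print("memory (float32 -> int8 bytes):", W.nbytes, "->", W_int8.nbytes)
```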


Plant Methods ◽  
2021 ◽  
Vol 17 (1) ◽  
Author(s):  
Mina Salehi ◽  
Siamak Farhadi ◽  
Ahmad Moieni ◽  
Naser Safaie ◽  
Mohsen Hesami

Abstract Background Paclitaxel is a well-known chemotherapeutic agent widely applied in the therapy of various types of cancer. In vitro culture of Corylus avellana has been proposed as a promising and low-cost strategy for paclitaxel production. Fungal elicitors have been reported as an effective strategy for improving paclitaxel biosynthesis in cell suspension culture (CSC) of C. avellana. The objective of this research was to forecast and optimize growth and paclitaxel biosynthesis, for the first time, using a general regression neural network-fruit fly optimization algorithm (GRNN-FOA) via a data mining approach, based on four input variables: cell extract (CE) and culture filtrate (CF) concentration levels, elicitor adding day, and CSC harvesting time, with C. avellana cell culture as a case study. Results GRNN-FOA models (0.88–0.97) showed superior prediction performance compared to regression models (0.57–0.86). A comparative analysis of the multilayer perceptron-genetic algorithm (MLP-GA) and GRNN-FOA showed very slight differences between the two models for dry weight (DW), intracellular paclitaxel and extracellular paclitaxel on the testing subset, the unseen data. However, MLP-GA was slightly more accurate than GRNN-FOA for total paclitaxel and the extracellular paclitaxel portion on the testing subset. Only slight differences were observed between the maximum growth and paclitaxel biosynthesis optimized by FOA and by GA. The optimization analysis using FOA on the developed GRNN-FOA models showed that the optimal CE [4.29% (v/v)] and CF [5.38% (v/v)] concentration levels, elicitor adding day (17) and harvesting time (88 h and 19 min) lead to the highest paclitaxel biosynthesis (372.89 µg l−1). Conclusions The close agreement between the predicted and observed values of DW, intracellular, extracellular and total paclitaxel yield, as well as the extracellular paclitaxel portion, supports the excellent performance of the developed GRNN-FOA models. Overall, GRNN-FOA as a new mathematical tool may pave the way for forecasting and optimizing secondary metabolite production in plant in vitro culture.
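
For readers unfamiliar with GRNN, it is equivalent to Gaussian-kernel (Nadaraya-Watson) regression with a single smoothing parameter, which the fruit fly optimization algorithm tunes. The sketch below shows that predictor on hypothetical data with the paper's four input variables; the data, the synthetic yield function, and the plain grid search standing in for FOA are all illustrative assumptions, not the authors' model.

```python
# Hedged sketch of the GRNN predictor at the core of GRNN-FOA. Data, sigma
# search and the stand-in for FOA are illustrative, not from the paper.
import numpy as np

def grnn_predict(X_train, y_train, X_query, sigma):
    """GRNN prediction = Gaussian-kernel weighted average of training targets."""
    d2 = ((X_query[:, None, :] - X_train[None, :, :]) ** 2).sum(axis=-1)
    w = np.exp(-d2 / (2.0 * sigma ** 2))
    return (w @ y_train) / (w.sum(axis=1) + 1e-12)

rng = np.random.default_rng(0)
# Hypothetical data: [CE %, CF %, elicitor adding day, harvest time (h)] -> yield
X = rng.uniform([0, 0, 10, 48], [8, 8, 25, 120], size=(40, 4))
y = 200 + 20 * X[:, 0] + 15 * X[:, 1] - 2 * np.abs(X[:, 2] - 17) + rng.normal(0, 5, 40)

Xn = (X - X.mean(axis=0)) / X.std(axis=0)          # normalize inputs
Xtr, ytr, Xte, yte = Xn[:30], y[:30], Xn[30:], y[30:]

# Plain grid search over sigma as a stand-in for the fruit fly optimization
best = min((np.mean((grnn_predict(Xtr, ytr, Xte, s) - yte) ** 2), s)
           for s in (0.3, 0.5, 1.0, 2.0))
print("best sigma:", best[1], "held-out MSE:", round(best[0], 2))
```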


Diagnostics ◽  
2020 ◽  
Vol 10 (2) ◽  
pp. 110 ◽  
Author(s):  
Pius Kwao Gadosey ◽  
Yujian Li ◽  
Enock Adjei Agyekum ◽  
Ting Zhang ◽  
Zhaoying Liu ◽  
...  

During image segmentation tasks in computer vision, achieving high accuracy while requiring fewer computations and faster inference is a big challenge. This is especially important in medical imaging tasks, where one metric is usually compromised for the other. To address this problem, this paper presents an extremely fast, small and computationally effective deep neural network called Stripped-Down UNet (SD-UNet), designed for the segmentation of biomedical data on devices with limited computational resources. By making use of depthwise separable convolutions in the entire network, we design a lightweight deep convolutional neural network architecture inspired by the widely adopted U-Net model. To recover the expected performance degradation in the process, we introduce a weight standardization algorithm with the group normalization method. We demonstrate that SD-UNet has three major advantages: (i) smaller model size (23x smaller than U-Net); (ii) 8x fewer parameters; and (iii) faster inference time, with a computational complexity lower than 8M floating point operations (FLOPs). Experiments on the benchmark dataset of the International Symposium on Biomedical Imaging (ISBI) challenge for segmentation of neuronal structures in electron microscopic (EM) stacks and the Medical Segmentation Decathlon (MSD) challenge brain tumor segmentation (BraTS) dataset show that the proposed model achieves results comparable to, and sometimes better than, the current state of the art.
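
As a hedged sketch of the building blocks named above, the following PyTorch module combines a depthwise separable convolution (a per-channel 3x3 convolution followed by a 1x1 pointwise convolution) with weight standardization and group normalization; it is not the SD-UNet source code, and the channel counts, group size and activation are illustrative assumptions.

```python
# Hedged sketch (not the SD-UNet source) of a depthwise separable convolution
# block with weight standardization and group normalization.
import torch
import torch.nn as nn
import torch.nn.functional as F

class WSConv2d(nn.Conv2d):
    """Conv2d with weight standardization: normalize each filter's weights."""
    def forward(self, x):
        w = self.weight
        w = (w - w.mean(dim=(1, 2, 3), keepdim=True)) / (w.std(dim=(1, 2, 3), keepdim=True) + 1e-5)
        return F.conv2d(x, w, self.bias, self.stride, self.padding, self.dilation, self.groups)

class DepthwiseSeparableBlock(nn.Module):
    def __init__(self, in_ch, out_ch, groups=8):
        super().__init__()
        # Depthwise: one 3x3 filter per input channel (groups=in_ch)
        self.depthwise = WSConv2d(in_ch, in_ch, 3, padding=1, groups=in_ch)
        # Pointwise: 1x1 convolution mixes information across channels
        self.pointwise = WSConv2d(in_ch, out_ch, 1)
        self.gn = nn.GroupNorm(groups, out_ch)

    def forward(self, x):
        return F.relu(self.gn(self.pointwise(self.depthwise(x))))

x = torch.randn(1, 16, 64, 64)
print(DepthwiseSeparableBlock(16, 32)(x).shape)   # torch.Size([1, 32, 64, 64])
```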


2021 ◽  
Vol 336 ◽  
pp. 08013
Author(s):  
Zhaosheng Xu

Based on the author's research, this paper studies a software credibility algorithm based on deep convolutional sparse coding. It first summarizes convolutional sparse coding and the trust classification system, and then constructs the algorithm from two aspects: factor processing based on a deep convolutional neural network, and trust classification based on sparse representation.
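
The abstract gives no implementation details, so the following is only a generic sketch of classification by sparse representation: a test sample is sparsely coded over a dictionary of class-labelled training samples (here via ISTA) and assigned to the class with the smallest reconstruction residual. The dictionary, data and regularization weight are illustrative assumptions.

```python
# Generic sketch of sparse-representation classification (not the paper's
# algorithm): sparse coding by ISTA, then class assignment by residual.
import numpy as np

def ista(D, y, lam=0.1, n_iter=200):
    """Solve min_a 0.5*||y - D a||^2 + lam*||a||_1 by iterative soft-thresholding."""
    L = np.linalg.norm(D, 2) ** 2          # Lipschitz constant of the gradient
    a = np.zeros(D.shape[1])
    for _ in range(n_iter):
        a = a + D.T @ (y - D @ a) / L
        a = np.sign(a) * np.maximum(np.abs(a) - lam / L, 0.0)
    return a

rng = np.random.default_rng(0)
d, per_class, classes = 32, 10, 2
D = rng.normal(size=(d, per_class * classes))          # columns = training samples
D /= np.linalg.norm(D, axis=0)                          # unit-norm atoms
labels = np.repeat(np.arange(classes), per_class)

y = D[:, 3] + 0.05 * rng.normal(size=d)                 # noisy sample from class 0
a = ista(D, y)
residuals = [np.linalg.norm(y - D[:, labels == c] @ a[labels == c]) for c in range(classes)]
print("predicted class:", int(np.argmin(residuals)))    # expect 0
```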


2021 ◽  
Author(s):  
Marco Luca Sbodio ◽  
Natasha Mulligan ◽  
Stefanie Speichert ◽  
Vanessa Lopez ◽  
Joao Bettencourt-Silva

There is a growing trend in building deep learning patient representations from health records to obtain a comprehensive view of a patient's data for machine learning tasks. This paper proposes a reproducible approach to generate patient pathways from health records and to transform them into a machine-processable, image-like structure useful for deep learning tasks. Based on this approach, we generated over a million pathways from FAIR synthetic health records and used them to train a convolutional neural network. Our initial experiments show that the accuracy of the CNN on a prediction task is comparable to or better than that of other autoencoders trained on the same data, while requiring significantly fewer computational resources for training. We also assess the impact of the size of the training dataset on autoencoder performance. The source code for generating pathways from health records is provided as open source.
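
As a hedged sketch of the general idea, not the authors' open-source code, the snippet below rasterizes a patient pathway, i.e. a time-ordered list of coded events, into a 2D event-type by time-bin matrix that a small CNN can consume; the event vocabulary, bin width and network are illustrative assumptions.

```python
# Hedged sketch: encode a patient pathway as an image-like matrix for a CNN.
import numpy as np
import torch
import torch.nn as nn

VOCAB = ["encounter", "diagnosis", "medication", "procedure", "observation"]

def pathway_to_image(events, n_bins=64, horizon_days=365):
    """events: list of (day_offset, event_type) -> (len(VOCAB), n_bins) array."""
    img = np.zeros((len(VOCAB), n_bins), dtype=np.float32)
    for day, etype in events:
        b = min(int(day / horizon_days * n_bins), n_bins - 1)
        img[VOCAB.index(etype), b] += 1.0
    return img

pathway = [(3, "encounter"), (3, "diagnosis"), (10, "medication"), (40, "observation")]
x = torch.from_numpy(pathway_to_image(pathway)).unsqueeze(0).unsqueeze(0)  # (1, 1, 5, 64)

cnn = nn.Sequential(
    nn.Conv2d(1, 8, kernel_size=3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(8, 2),  # e.g. a binary prediction task
)
print(cnn(x).shape)   # torch.Size([1, 2])
```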


2012 ◽  
pp. 881-898
Author(s):  
J.R. Bilbao-Castro ◽  
I. García ◽  
J.J. Fernández

Three-dimensional electron microscopy allows scientists to study biological specimens and to understand how they behave and interact with each other depending on their structural conformation. Electron microscopy projections of the specimens are taken from different angles and are processed to obtain a virtual three-dimensional reconstruction for further studies. Nevertheless, the whole reconstruction process, which is composed of many different subtasks from the microscope to the reconstructed volume, is neither straightforward nor cheap in terms of computational costs. Different computing paradigms have been applied in order to overcome such high costs. While classic parallel computing using mainframes and clusters of workstations is usually enough for average requirements, some tasks fit better into a different computing paradigm, such as grid computing. Such tasks can be split into a myriad of subtasks, which can then be run independently using as many computational resources as are available. This chapter explores two such tasks present in a typical three-dimensional electron microscopy reconstruction process. In addition, important aspects such as fault tolerance are covered in depth, given that the distributed nature of a grid infrastructure makes it inherently unstable and difficult to predict.
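
As an illustrative stand-in for the pattern described above, not actual grid middleware, the sketch below splits a hypothetical reconstruction job into independent subtasks dispatched to a pool of workers, with simple retry-based fault tolerance for subtasks that fail on unreliable nodes; the task function and failure model are assumptions.

```python
# Local stand-in (not grid middleware) for independent subtasks with retries.
import random
from concurrent.futures import ThreadPoolExecutor, as_completed

def reconstruct_slab(slab_id):
    """Hypothetical subtask: reconstruct one slab of the 3D volume."""
    if random.random() < 0.2:                  # simulate an unreliable grid node
        raise RuntimeError(f"node lost while processing slab {slab_id}")
    return slab_id, f"slab-{slab_id}-reconstructed"

def run_with_retries(slab_ids, max_retries=3):
    results, attempts = {}, {s: 0 for s in slab_ids}
    pending = set(slab_ids)
    with ThreadPoolExecutor(max_workers=4) as pool:
        while pending:
            futures = {pool.submit(reconstruct_slab, s): s for s in pending}
            for fut in as_completed(futures):
                s = futures[fut]
                try:
                    sid, out = fut.result()
                    results[sid] = out
                    pending.discard(sid)
                except RuntimeError:
                    attempts[s] += 1
                    if attempts[s] >= max_retries:
                        pending.discard(s)     # give up on this subtask
    return results

done = run_with_retries(range(16))
print(f"{len(done)}/16 slabs reconstructed")
```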

