A Sparsity-Driven Backpropagation-Less Learning Framework Using Populations of Spiking Growth Transform Neurons

2021, Vol 15
Author(s): Ahana Gangopadhyay, Shantanu Chakrabartty

Growth-transform (GT) neurons and their population models allow for independent control over the spiking statistics and the transient population dynamics while optimizing a physically plausible distributed energy functional involving continuous-valued neural variables. In this paper we describe a backpropagation-less learning approach to train a network of spiking GT neurons by enforcing sparsity constraints on the overall network spiking activity. The key features of the model and the proposed learning framework are: (a) spike responses are generated as a result of constraint violation and hence can be viewed as Lagrangian parameters; (b) the optimal parameters for a given task can be learned using neurally relevant local learning rules and in an online manner; (c) the network optimizes itself to encode the solution with as few spikes as possible (sparsity); (d) the network optimizes itself to operate at a solution with the maximum dynamic range and away from saturation; and (e) the framework is flexible enough to incorporate additional structural and connectivity constraints on the network. As a result, the proposed formulation is attractive for designing neuromorphic tinyML systems that are constrained in energy, resources, and network structure. In this paper, we show how the approach could be used for unsupervised and supervised learning such that minimizing a training error is equivalent to minimizing the overall spiking activity across the network. We then build on this framework to implement three different multi-layer spiking network architectures with progressively increasing flexibility in training and, consequently, sparsity. We demonstrate the applicability of the proposed algorithm for resource-efficient learning using a publicly available machine olfaction dataset with unique challenges like sensor drift and a wide range of stimulus concentrations. In all of these case studies we show that a GT network trained using the proposed learning approach is able to minimize the network-level spiking activity while producing classification accuracy that is comparable to standard approaches on the same dataset.
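The sparsity objective described above can be illustrated with a simple surrogate. The sketch below (assumed quadratic energy with a hypothetical coupling matrix Q, input drive b, and sparsity weight lam; not the paper's Growth Transform dynamics) minimizes a bounded network energy in which an L1 penalty on the continuous-valued responses stands in for the network-level spiking-activity penalty.

```python
# Minimal sparsity-penalized network-energy sketch (hypothetical surrogate,
# not the paper's Growth Transform update): minimize
#   E(v) = 0.5 * v^T Q v - b^T v + lam * ||v||_1,  with v in [0, 1]^N,
# where v stands in for continuous-valued neural responses and the L1 term
# plays the role of a network-level spiking-activity penalty.
import numpy as np

rng = np.random.default_rng(0)
N = 16
Q = rng.normal(size=(N, N))
Q = Q @ Q.T / N + np.eye(N)          # symmetric positive-definite coupling
b = rng.normal(size=N)               # external input / stimulus drive
lam, lr = 0.5, 0.05                  # sparsity weight, step size

v = np.full(N, 0.5)
for _ in range(500):
    grad = Q @ v - b + lam * np.sign(v)
    v = np.clip(v - lr * grad, 0.0, 1.0)   # keep responses in a bounded range

print("active units:", int((v > 1e-3).sum()), "of", N)
```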

2021, Vol 11 (1)
Author(s): Ibtissame Khaoua, Guillaume Graciani, Andrey Kim, François Amblard

Abstract: For a wide range of purposes, one faces the challenge of detecting light from extremely faint and spatially extended sources. In such cases, detector noises dominate over the photon noise of the source, and quantum detectors in photon-counting mode are generally the best option. Here, we combine a statistical model with an in-depth analysis of detector noises and calibration experiments, and we show that visible light can be detected with an electron-multiplying charge-coupled device (EM-CCD) with a signal-to-noise ratio (SNR) of 3 for fluxes less than $30\,\mathrm{photon\,s^{-1}\,cm^{-2}}$. For green photons, this corresponds to 12 aW cm$^{-2}$ ≈ $9 \times 10^{-11}$ lux, i.e. 15 orders of magnitude less than typical daylight. The strong nonlinearity of the SNR with the sampling time leads to a dynamic range of detection of 4 orders of magnitude. To detect possibly varying light fluxes, we operate in conditions of maximal detectivity $\mathcal{D}$ rather than maximal SNR. Given the quantum efficiency $QE(\lambda)$ of the detector, we find $\mathcal{D} = 0.015\,\mathrm{photon^{-1}\,s^{1/2}\,cm}$, and a non-negligible sensitivity to blackbody radiation for T > 50 °C. This work should help design highly sensitive luminescence detection methods and develop experiments to explore dynamic phenomena involving ultra-weak luminescence in biology, chemistry, and materials science.
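As a rough illustration of why the SNR is strongly nonlinear in the sampling time, the following back-of-the-envelope calculation (all numbers assumed: collection area, quantum efficiency, and spurious-count rate) applies simple shot-noise statistics to a photon-counting measurement; the paper's full model also accounts for EM-CCD-specific noise sources not reproduced here.

```python
# Photon-counting SNR vs. sampling time with illustrative numbers only.
import numpy as np

flux = 30.0        # photons s^-1 cm^-2 at the detector (figure quoted above)
area = 0.1         # cm^2, assumed collection area
qe = 0.9           # assumed quantum efficiency at this wavelength
dark = 5.0         # assumed spurious counts s^-1 (dark + clock-induced charge)

for t in (1.0, 10.0, 100.0, 1000.0):      # sampling time in seconds
    signal = flux * area * qe * t
    noise = np.sqrt(signal + dark * t)    # shot noise of signal + background
    print(f"t = {t:6.0f} s  ->  SNR = {signal / noise:5.1f}")
```

With these assumptions the SNR grows roughly as the square root of the sampling time, which is what produces the wide dynamic range of detection quoted in the abstract.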


2021, Vol 30 (1), pp. 774-792
Author(s): Mazin Abed Mohammed, Dheyaa Ahmed Ibrahim, Akbal Omran Salman

Abstract: Spam emails are unsolicited and potentially harmful commercial messages sent to corporate bodies or individuals. Although such emails are often used to advertise services and products, they sometimes contain links to malware or phishing websites through which private information can be stolen. This study shows how an adaptive intelligent learning approach, based on a visual anti-spam model for multiple natural languages, can be used to detect such abnormal messages effectively, and applies it to spam filtering. With adaptive intelligent learning, high performance is achieved alongside a low false-detection rate. The approach works in three main phases to decide intelligently whether an email is legitimate, based on knowledge gathered during training. The proposed approach comprises two models for identifying phishing emails. The first model identifies the language of an email: a trainable Naive Bayes classifier is trained on three languages (Arabic, English, and Chinese), and its predicted language label is passed to the next model. The second model is a per-language Naive Bayes classifier trained on two classes (phishing and normal emails for each language); it makes the final phishing/normal decision for the proposed approach. The proposed strategy is implemented in the Java environment using the JADE agent platform. The performance of the AIA learning model was tested on a dataset of 2,000 emails, and the results demonstrate the efficiency of the model in accurately detecting and filtering a wide range of spam emails. The results of our study suggest that the Naive Bayes classifier performed best when tested on the largest dataset (overall accuracy of 98.4%, false positive rate of 0.08%, and false negative rate of 2.90%). This indicates that the Naive Bayes classifier should remain viable when applied to a real-world database, which is typically more common but not the largest.
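A minimal sketch of the two-stage idea follows, using scikit-learn as an illustrative stand-in for the paper's Java/JADE implementation; the toy corpora, feature choices (character n-grams for language identification, word counts for phishing detection), and test message are assumptions, not the authors' setup.

```python
# Two-stage sketch: stage 1 identifies the language of an email,
# stage 2 applies a per-language Naive Bayes phishing/normal model.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

# Toy corpora; the real system was trained on Arabic, English and Chinese emails.
lang_texts  = ["meeting agenda attached", "verify your bank account now",
               "محضر الاجتماع مرفق", "تحقق من حسابك البنكي الآن"]
lang_labels = ["en", "en", "ar", "ar"]

en_emails = ["win a free prize now", "meeting agenda attached",
             "verify your bank account now", "quarterly report enclosed"]
en_labels = ["phishing", "normal", "phishing", "normal"]

# Stage 1: character n-gram Naive Bayes language identifier.
lang_clf = make_pipeline(CountVectorizer(analyzer="char", ngram_range=(1, 3)),
                         MultinomialNB())
lang_clf.fit(lang_texts, lang_labels)

# Stage 2: one phishing/normal Naive Bayes classifier per language (English shown).
spam_clf = {"en": make_pipeline(CountVectorizer(), MultinomialNB())}
spam_clf["en"].fit(en_emails, en_labels)

msg = ["please verify your account details"]
lang = lang_clf.predict(msg)[0]
print(lang, "->", spam_clf[lang].predict(msg)[0])
```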


2011, Vol 2011, pp. 1-12
Author(s): Karim El-Laithy, Martin Bogdan

An integration of Hebbian-based and reinforcement learning (RL) rules is presented for dynamic synapses. The proposed framework permits the Hebbian rule to update the hidden synaptic model parameters regulating the synaptic response, rather than the synaptic weights. This is performed using both the value and the sign of the temporal difference in the reward signal after each trial. Applying this framework, a spiking network with spike-timing-dependent synapses is tested on learning the exclusive-OR (XOR) computation on a temporally coded basis. Reward values are calculated from the distance between the network's output spike train and a reference target train. Results show that the network is able to capture the required dynamics and that the proposed framework can indeed realize an integrated version of Hebbian and RL learning. The proposed framework is tractable and computationally inexpensive. The framework is applicable to a wide class of synaptic models and is not restricted to the neural representation used here. This generality, along with the reported results, supports adopting the introduced approach to benefit from biologically plausible synaptic models in a wide range of signal-processing applications.
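A minimal sketch of a reward-modulated update in the spirit of this framework is shown below (hypothetical names and dynamics throughout; not the authors' exact Hebbian rule): a hidden synaptic parameter, rather than a weight, is nudged after each trial using the temporal difference in a reward computed from the distance between the output and target spike trains.

```python
# Sketch of a reward-modulated update on a hidden synaptic parameter (here a
# single "facilitation" gain): after each trial, the temporal difference (TD)
# in the reward decides whether the last perturbation is reinforced or reversed.
import numpy as np

rng = np.random.default_rng(1)
target = rng.integers(0, 2, size=50).astype(float)       # reference target spike train

def simulate(facilitation):
    # Stand-in for the synapse/neuron simulation: higher facilitation pushes
    # the noisy output spike train closer to the target pattern.
    return np.clip(facilitation * target + 0.3 * rng.random(50), 0.0, 1.0)

def reward(output):
    return -np.abs(output - target).mean()                # closer to target -> larger reward

facilitation, lr, prev_r = 0.2, 0.5, None
for trial in range(200):
    delta = 0.05 * rng.standard_normal()                  # small trial-to-trial perturbation
    r = reward(simulate(facilitation + delta))
    if prev_r is not None:
        td = r - prev_r                                   # value and sign of the reward TD
        facilitation += lr * td * np.sign(delta)          # reinforce helpful perturbations
    prev_r = r

print("learned facilitation gain:", round(facilitation, 2))
```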


1987, Vol 121, pp. 287-293
Author(s): C.J. Schalinski, P. Biermann, A. Eckart, K.J. Johnston, T.Ph. Krichbaum, ...

A complete sample of 13 flat-spectrum radio sources is investigated over a wide range of frequencies and spatial resolutions. Synchrotron self-Compton (SSC) calculations lead to the prediction of bulk relativistic motion in all sources. So far, 6 out of the 7 sources observed with sufficient dynamic range by means of VLBI show evidence for apparent superluminal motion.
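For reference, the apparent transverse speed implied by bulk relativistic motion follows the standard superluminal-motion relation β_app = β sin θ / (1 − β cos θ); the short calculation below (illustrative values of β and viewing angle θ, not fitted to these sources) shows how a relativistic jet viewed close to the line of sight yields apparent speeds of several c.

```python
# Apparent transverse speed of a relativistically moving jet component.
import numpy as np

beta = 0.98                      # intrinsic speed in units of c (assumed)
for theta_deg in (5, 10, 20, 45):
    theta = np.radians(theta_deg)
    beta_app = beta * np.sin(theta) / (1 - beta * np.cos(theta))
    print(f"theta = {theta_deg:2d} deg  ->  apparent speed = {beta_app:4.1f} c")
```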


Synlett, 2020
Author(s): Akira Yada, Kazuhiko Sato, Tarojiro Matsumura, Yasunobu Ando, Kenji Nagata, ...

Abstract: The prediction of the initial reaction rate in the tungsten-catalyzed epoxidation of alkenes by using a machine learning approach is demonstrated. The ensemble learning framework used in this study consists of random sampling with replacement from the training dataset, the construction of several predictive models (weak learners), and the combination of their outputs. This approach enables us to obtain a reasonable prediction model that avoids the problem of overfitting, even when analyzing a small dataset.
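A compact sketch of this bagging scheme with scikit-learn is shown below; the synthetic descriptors and target values are placeholders, not the authors' reaction data, and the default decision-tree weak learners stand in for whatever base models were actually used.

```python
# Bagging sketch: bootstrap resampling of a small training set, several weak
# learners, and an averaged prediction (illustrative synthetic data only).
import numpy as np
from sklearn.ensemble import BaggingRegressor

rng = np.random.default_rng(0)
X = rng.random((40, 5))                                        # hypothetical substrate/catalyst descriptors
y = 2.0 * X[:, 0] - X[:, 3] + 0.1 * rng.standard_normal(40)   # synthetic "initial rate"

model = BaggingRegressor(n_estimators=50, random_state=0)      # bootstrap=True by default
model.fit(X, y)
print("predicted initial rate for first sample:", round(model.predict(X[:1])[0], 3))
```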


2021, Vol 10 (1)
Author(s): Jie Liao, Lan Yang

Abstract: Temperature is one of the most fundamental physical properties used to characterize various physical, chemical, and biological processes. Even a slight change in temperature can affect the status or dynamics of a system. Thus, there is a great need for high-precision, large-dynamic-range temperature measurements. Conventional temperature sensors encounter difficulties in high-precision thermal sensing on the submicron scale. Recently, optical whispering-gallery mode (WGM) sensors have shown promise for many sensing applications, such as thermal sensing, magnetic detection, and biosensing. However, despite their superior sensitivity, the conventional sensing method for WGM resonators relies on tracking the changes in a single mode, which limits the dynamic range because the laser source has to be continuously fine-tuned to follow the selected mode during the measurement. Moreover, the actual temperature cannot be derived directly from the spectrum; only a relative temperature change can. Here, we demonstrate an optical WGM barcode technique involving simultaneous monitoring of the patterns of multiple modes that can provide a direct temperature readout from the spectrum. The measurement relies on the pattern formed by multiple modes in the WGM spectrum instead of the changes of a particular mode, and therefore provides more information than a single-mode spectrum, such as a precise measurement of the actual temperature. Leveraging the high sensitivity of WGMs and eliminating the need to track particular modes, this work lays the foundation for developing a high-performance temperature sensor with not only superior sensitivity but also a broad dynamic range.
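Conceptually, the barcode readout can be sketched as matching an observed multi-mode pattern against a calibrated, temperature-indexed library; the resonance wavelengths, mode-dependent thermal shifts, and calibration grid below are invented numbers for illustration only.

```python
# Barcode-style temperature readout sketch: every temperature maps to a pattern
# of several WGM resonance wavelengths; matching an observed pattern against a
# calibrated library gives an absolute temperature, not just a relative shift.
import numpy as np

base_modes = np.array([1550.00, 1550.84, 1551.73, 1552.61])   # nm, assumed modes at 20 deg C
dldT = np.array([0.010, 0.011, 0.009, 0.012])                  # nm/K, assumed mode-dependent shifts

temps = np.arange(20.0, 80.0, 0.1)                             # calibration grid in deg C
library = base_modes + np.outer(temps - 20.0, dldT)            # one barcode per temperature

true_T = 43.7
noise = 0.001 * np.random.default_rng(2).standard_normal(4)    # small wavelength readout noise (nm)
observed = base_modes + (true_T - 20.0) * dldT + noise

best = np.argmin(np.linalg.norm(library - observed, axis=1))   # nearest barcode in the library
print("readout temperature:", round(temps[best], 1), "deg C")
```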


2010, Vol 1 (SRMS-7)
Author(s): David Pennicard, Heinz Graafsma, Michael Lohmann

The new synchrotron light source PETRA-III produced its first beam last year. The extremely high brilliance of PETRA-III and the large energy range of many of its beamlines make it useful for a wide range of experiments, particularly in materials science. The detectors at PETRA-III will need to meet several requirements, such as operation across a wide dynamic range, high-speed readout, and good quantum efficiency even at high photon energies. PETRA-III beamlines with lower photon energies will typically be equipped with photon-counting silicon detectors for two-dimensional detection and silicon drift detectors for spectroscopy, while higher-energy beamlines will use scintillators coupled to cameras or photomultiplier tubes. Longer-term developments include ‘high-Z’ semiconductors for detecting high-energy X-rays, photon-counting readout chips with smaller pixels and higher frame rates, and pixellated avalanche photodiodes for time-resolved experiments.


2021, Vol 12 (6), pp. 1-23
Author(s): Shuo Tao, Jingang Jiang, Defu Lian, Kai Zheng, Enhong Chen

Mobility prediction plays an important role in a wide range of location-based applications and services. However, there are three problems in the existing literature: (1) explicit high-order interactions of spatio-temporal features are not systematically modeled; (2) most existing algorithms place attention mechanisms on top of a recurrent network, so they cannot be fully parallelized and are inferior to self-attention at capturing long-range dependence; (3) most existing work does not make good use of long-term historical information and does not effectively model users' long-term periodicity. To this end, we propose MoveNet and RLMoveNet. MoveNet is a self-attention-based sequential model that predicts each user's next destination based on her most recent visits and historical trajectory. MoveNet first introduces a cross-based learning framework for modeling feature interactions. With self-attention over both the most recent visits and the historical trajectory, MoveNet can use an attention mechanism to capture the user's long-term regularity more efficiently. Building on MoveNet, to model long-term periodicity more effectively, we add a reinforcement learning layer, yielding RLMoveNet. RLMoveNet treats human mobility prediction as a reinforcement learning problem, using the reinforcement learning layer as a regularization component that drives the model to pay attention to periodic behavior, which helps make the algorithm more effective. We evaluate both models on three real-world mobility datasets. MoveNet outperforms the state-of-the-art mobility predictor by around 10% in terms of accuracy, and simultaneously achieves faster convergence and over 4x training speedup. Moreover, RLMoveNet achieves higher prediction accuracy than MoveNet, which shows that modeling periodicity explicitly from the perspective of reinforcement learning is more effective.
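A minimal sketch of the self-attention building block over a user's recent visits is shown below (plain scaled dot-product attention on random toy embeddings; not MoveNet's full architecture, cross-based feature interactions, or RL layer), illustrating why all positions can be processed in parallel without recurrence.

```python
# Scaled dot-product self-attention over a short visit sequence.
import numpy as np

def self_attention(X, Wq, Wk, Wv):
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    scores = Q @ K.T / np.sqrt(K.shape[1])
    weights = np.exp(scores - scores.max(axis=1, keepdims=True))
    weights /= weights.sum(axis=1, keepdims=True)     # softmax over visited locations
    return weights @ V                                # every position attends to all others at once

rng = np.random.default_rng(0)
d = 8
visits = rng.standard_normal((5, d))                  # embeddings of the 5 most recent visits (toy)
Wq, Wk, Wv = (rng.standard_normal((d, d)) * 0.1 for _ in range(3))

context = self_attention(visits, Wq, Wk, Wv)
next_location_repr = context[-1]                      # representation used to score candidate destinations
print(next_location_repr.shape)
```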


2007, Vol 16 (1), pp. 119-122
Author(s): Patrick Ledda

In the natural world, the human eye is confronted with a wide range of colors and luminances. A surface lit by moonlight might have a luminance level of around $10^{-3}$ cd/m$^2$, while surfaces lit during a sunny day can reach values larger than $10^{5}$ cd/m$^2$. A good-quality CRT (cathode ray tube) or LCD (liquid crystal display) monitor is only able to achieve a maximum luminance of around 200 to 300 cd/m$^2$ and a contrast ratio of no more than two orders of magnitude. In this context the contrast ratio, or dynamic range, is defined as the ratio of the highest to the lowest luminance. We call high dynamic range (HDR) images those images (or scenes) in which the contrast ratio is larger than what a display can reproduce. In practice, any scene that contains some sort of light source and shadows is HDR. The main problem with HDR images is that they cannot be displayed: although methods to create them exist (for example, by taking multiple photographs at different exposure times or by using 3D computer graphics software), it is not possible to see both bright and dark areas simultaneously. (See Figure 1.) There is data suggesting that our eyes can see detail at any given adaptation level within a contrast of 10,000:1 between the brightest and darkest regions of a scene; an ideal display should therefore be able to reproduce this range. In this review, we present two high dynamic range displays developed by Brightside Technologies (formerly Sunnybrook Technologies) which are capable, for the first time, of linearly displaying high-contrast images. These displays are of great use both for researchers in the vision/graphics/VR/medical fields and for professionals in the VFX/gaming/architectural industry.
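The quoted luminance figures translate directly into dynamic-range arithmetic; the short snippet below compares the scene range implied by the moonlit and sunlit values with the roughly two-orders-of-magnitude contrast ratio of a conventional monitor.

```python
# Contrast-ratio arithmetic from the figures quoted above.
import math

scene_max, scene_min = 1e5, 1e-3        # cd/m^2: sunlit vs. moonlit surfaces
display_contrast = 100.0                # ~two orders of magnitude for a CRT/LCD monitor

scene_range = scene_max / scene_min     # 1e8 : 1
print("scene dynamic range:   %.0e : 1 (%.1f stops)" % (scene_range, math.log2(scene_range)))
print("display dynamic range: %.0e : 1 (%.1f stops)" % (display_contrast, math.log2(display_contrast)))
```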


2015, Vol 61 (225), pp. 89-100
Author(s): Cameron Lewis, Sivaprasad Gogineni, Fernando Rodriguez-Morales, Ben Panzer, Theresa Stumpf, ...

Abstract: We have built and operated an ultra-wideband UHF pulsed-chirp radar for measuring firn stratigraphy from airborne platforms over the ice sheets of Greenland and West Antarctica. Our analysis found a wide range of capabilities, including imaging of post firn–ice-transition horizons and sounding of shallow glaciers and ice shelves. Imaging of horizons to depths exceeding 600 m was possible in the colder interior regions of the ice sheet, where scattering from the ice surface and inclusions was minimal. The radar's high sensitivity and large dynamic range point to loss-tangent variations as the dominant mechanism for these englacial reflective horizons. The radar is capable of mapping interfaces with reflection coefficients as low as −80 dB near the firn–ice transition and as low as −64 dB at depths of 600 m. We found that firn horizon reflectivity strongly mirrored density variance, a result of the near-unity interfacial transmission coefficients. Zones with differing compaction mechanisms were also apparent in the data. We were able to sound many ice shelves and areas of shallow ice. We estimated ice attenuation rates for a few locations, and our attenuation estimates for the Ross Ice Shelf, West Antarctica, appear to agree well with earlier reported results.
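As a rough illustration of how weak such firn reflections are, the snippet below converts small density contrasts into normal-incidence power reflection coefficients using an assumed empirical permittivity–density relation (a Kovacs-type ε = (1 + 0.845 ρ)² with ρ in g/cm³); this is an independent back-of-the-envelope check, not the paper's analysis.

```python
# Reflection coefficient of a density-contrast interface in firn/ice.
import numpy as np

def permittivity(rho):
    # Assumed empirical relation between dry-firn density (g/cm^3) and real permittivity.
    return (1.0 + 0.845 * rho) ** 2

def reflection_db(rho1, rho2):
    n1, n2 = np.sqrt(permittivity(rho1)), np.sqrt(permittivity(rho2))
    r = (n1 - n2) / (n1 + n2)                  # normal-incidence Fresnel coefficient
    return 10.0 * np.log10(r ** 2)             # power reflection coefficient in dB

# Small density steps produce very weak echoes (illustrative values).
print("0.900 -> 0.903 g/cm^3:", round(reflection_db(0.900, 0.903), 1), "dB")
print("0.500 -> 0.520 g/cm^3:", round(reflection_db(0.500, 0.520), 1), "dB")
```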

