Managing spatio-temporal complexity in Hopfield neural network simulations for large-scale static optimization

2004 ◽  
Vol 64 (2) ◽  
pp. 279-293 ◽  
Author(s):  
Gursel Serpen


2019 ◽  
Vol 11 (18) ◽  
pp. 2077 ◽  
Author(s):  
Fung ◽  
Wong ◽  
Chan

Spatio-temporal data fusion refers to the technique of combining the high temporal resolution of coarse satellite images with the high spatial resolution of fine satellite images. However, data availability remains a major limitation in algorithm development. Existing spatio-temporal data fusion algorithms require at least one known pair of fine and coarse resolution images acquired at around the same time. However, data from two different satellite platforms do not necessarily overlap in their overpass times, which restricts the application of spatio-temporal data fusion. In this paper, a new algorithm named the Hopfield Neural Network SPatio-tempOral daTa fusion model (HNN-SPOT) is developed by utilizing the optimization concept of the Hopfield neural network (HNN) for spatio-temporal image fusion. The algorithm derives a synthesized fine resolution image from a coarse spatial resolution satellite image (similar to downscaling), using one fine resolution image taken on an arbitrary date and one coarse image taken on the prediction date. HNN-SPOT specifically addresses the case in which the fine resolution and coarse resolution images are acquired at different satellite overpass times over the same geographic extent. Both simulated and real datasets over Hong Kong and Australia were used to evaluate HNN-SPOT. Results showed that HNN-SPOT is comparable with an existing fusion algorithm, the spatial and temporal adaptive reflectance fusion model (STARFM). HNN-SPOT assumes a consistent spatial structure for the target area between the date of data acquisition and the prediction date, and is therefore most applicable to geographic areas with little or no land cover change. It is shown that HNN-SPOT can produce accurate fusion results, with correlation coefficients above 0.9, over consistent land covers. For areas that have undergone land cover change, HNN-SPOT can still predict the outlines and tones of features, provided they are large enough to be recorded in the coarse resolution image on the prediction date. HNN-SPOT provides a relatively new approach to spatio-temporal data fusion, and further improvements can be made by modifying or adding goals and constraints in its HNN architecture. Owing to its lower data prerequisites, HNN-SPOT is expected to broaden the applicability of fine-scale remote sensing applications such as environmental modeling and monitoring.
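The abstract does not reproduce the paper's exact HNN formulation, but the underlying optimization concept, minimizing an energy with one goal tying block averages of the synthesized fine image to the coarse image at the prediction date and another preserving the within-block spatial structure of the fine image from the arbitrary date, can be sketched as plain gradient descent. The function name, weighting terms, and update rule below are illustrative assumptions, not HNN-SPOT's actual design.

```python
import numpy as np

def hnn_spot_like_fusion(fine_t0, coarse_t1, scale,
                         n_iter=500, lr=0.1, w_coarse=1.0, w_struct=0.5):
    """Illustrative gradient-descent analogue of HNN-style fusion.
    fine_t0:   fine image at an arbitrary date t0, shape (H, W)
    coarse_t1: coarse image at the prediction date t1, shape (H/scale, W/scale)
    Returns a synthesized fine image for t1 (hypothetical sketch)."""
    fine_t0 = np.asarray(fine_t0, dtype=float)
    coarse_t1 = np.asarray(coarse_t1, dtype=float)
    up = lambda img: np.kron(img, np.ones((scale, scale)))  # nearest upsample
    pred = up(coarse_t1)                                    # initial guess
    H, W = pred.shape
    blocks = lambda img: img.reshape(H // scale, scale,
                                     W // scale, scale).mean(axis=(1, 3))
    fine_dev = fine_t0 - up(blocks(fine_t0))  # structure of the fine image
    for _ in range(n_iter):
        # Goal 1: block means of the prediction should match coarse_t1
        g_coarse = up(blocks(pred) - coarse_t1)
        # Goal 2: within-block deviations should follow the fine image
        g_struct = (pred - up(blocks(pred))) - fine_dev
        pred -= lr * (w_coarse * g_coarse + w_struct * g_struct)
    return pred
```

In a true HNN realization, the same two penalties would be encoded in the network's energy function so that the neuron dynamics settle into a low-energy, well-fused state.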


Sensors ◽  
2021 ◽  
Vol 21 (19) ◽  
pp. 6622 ◽  
Author(s):  
Barış Bayram ◽  
Gökhan İnce

Acoustic scene analysis (ASA) relies on the dynamic sensing and understanding of stationary and non-stationary sounds from various events, background noises, and human actions with objects. However, the spatio-temporal characteristics of the sound signals may change over time, and novel events may occur, which eventually degrades the performance of the analysis. In this study, a self-learning-based ASA framework for acoustic event recognition (AER) is presented that detects and incrementally learns novel acoustic events while tackling catastrophic forgetting. The proposed ASA framework comprises six elements: (1) raw acoustic signal pre-processing, (2) low-level and deep audio feature extraction, (3) acoustic novelty detection (AND), (4) acoustic signal augmentation, (5) incremental class-learning (ICL) of the audio features of novel events, and (6) AER. Self-learning on the different types of audio features extracted from the acoustic signals of various events proceeds without human supervision. For the extraction of deep audio representations, in addition to visual geometry group (VGG) and residual neural network (ResNet) models, time-delay neural network (TDNN) and TDNN-based long short-term memory (TDNN–LSTM) networks are pre-trained on a large-scale audio dataset, Google AudioSet. The performance of ICL with AND is validated using Mel-spectrograms, and deep features extracted from the Mel-spectrograms by the TDNNs, VGG, and ResNet, on benchmark audio datasets (ESC-10, ESC-50, and UrbanSound8K (US8K)) and on an audio dataset collected by the authors in a real domestic environment.
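As a concrete illustration of how stages (3) and (5) can interact, the minimal sketch below flags an embedding as novel when it is far from all known class centroids and then learns it incrementally with a running-mean update that never revisits old data. The class, threshold, and distance measure are assumptions for illustration; the paper's AND and ICL components are more elaborate.

```python
import numpy as np

class IncrementalAER:
    """Hypothetical minimal sketch of novelty detection (AND) plus
    incremental class-learning (ICL) over audio embeddings. The deep
    features (e.g., from TDNN/VGG/ResNet) are assumed precomputed."""

    def __init__(self, threshold=1.5):
        self.threshold = threshold  # novelty distance threshold (assumed)
        self.centroids = {}         # class label -> mean embedding
        self.counts = {}

    def recognize(self, emb):
        """Return (label, is_novel) for one embedding vector."""
        if not self.centroids:
            return None, True
        label, dist = min(
            ((lbl, np.linalg.norm(emb - c)) for lbl, c in self.centroids.items()),
            key=lambda t: t[1])
        return (label, False) if dist < self.threshold else (label, True)

    def learn(self, emb, label):
        """Running-mean centroid update: old classes are never rewritten
        from scratch, which sidesteps catastrophic forgetting of them."""
        if label not in self.centroids:
            self.centroids[label] = np.asarray(emb, dtype=float).copy()
            self.counts[label] = 1
        else:
            self.counts[label] += 1
            self.centroids[label] += (emb - self.centroids[label]) / self.counts[label]
```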


2021 ◽  
Vol 15 ◽  
Author(s):  
Corentin Delacour ◽  
Aida Todri-Sanial

The Oscillatory Neural Network (ONN) is an emerging neuromorphic architecture in which oscillators represent neurons and information is encoded in the oscillators' phase relations. In an ONN, oscillators are coupled through electrical elements that define the network's weights, enabling massively parallel computation. Because the weights determine the network's functionality, mapping weights to coupling elements plays a crucial role in ONN performance. In this work, we investigate relaxation oscillators based on VO2 material and propose a methodology for mapping Hebbian coefficients to ONN coupling resistances, enabling large-scale ONN design. We develop an analytical framework that maps weight coefficients to coupling resistor values and use it to analyze ONN architecture performance. We report on an ONN with 60 fully connected oscillators that performs pattern recognition as a Hopfield neural network.
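The pipeline described, Hebbian learning of Hopfield weights followed by mapping the weights onto coupling resistances, can be sketched as follows. The inverse-proportional resistance mapping below is an assumed placeholder for the paper's analytical framework: coupling strength scales with conductance, so larger |w| maps to smaller resistance, and the sign of w selects in-phase versus anti-phase coupling.

```python
import numpy as np

def hebbian_weights(patterns):
    """Hebbian rule for a Hopfield-style network: W = (1/N) * sum_k x_k x_k^T
    with zero diagonal. Patterns are vectors of +/-1 entries."""
    X = np.asarray(patterns, dtype=float)
    n = X.shape[1]
    W = X.T @ X / n
    np.fill_diagonal(W, 0.0)
    return W

def weights_to_resistances(W, r_min=10e3):
    """Illustrative mapping (assumed, not the paper's analytical one):
    coupling conductance is proportional to |w|, so R = r_min / |w|;
    the sign of w selects in-phase (+) or anti-phase (-) coupling."""
    with np.errstate(divide="ignore"):
        R = np.where(W != 0.0, r_min / np.abs(W), np.inf)
    return R, np.sign(W)

# Example: 60-oscillator network storing two random +/-1 patterns
rng = np.random.default_rng(0)
patterns = rng.choice([-1, 1], size=(2, 60))
R, sign = weights_to_resistances(hebbian_weights(patterns))
```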


2011 ◽  
Vol 1346 ◽  
Author(s):  
Hayri E. Akin ◽  
Dundar Karabay ◽  
Allen P. Mills ◽  
Cengiz S. Ozkan ◽  
Mihrimah Ozkan

ABSTRACT DNA computing is a rapidly developing interdisciplinary area that could benefit from more experimental results demonstrating problem solving with current biological tools. In this study, we have integrated microelectronics and molecular biology techniques to show the feasibility of a Hopfield neural network built from DNA molecules. Adleman's seminal 1994 paper showed that DNA strands, through specific molecular reactions, can be used to solve the Hamiltonian path problem. This accomplishment opened the way to massively parallel processing power, remarkable energy efficiency, and compact data storage with DNA. However, in various studies, small departures from the ideal selectivity of DNA hybridization have led to significant undesired pairings of strands, which makes it difficult to implement large Boolean functions using DNA. The error-prone reactions in the Boolean architecture of the first DNA computers would therefore benefit from fault-tolerance or error-correction methods, and such methods would be essential for large-scale applications. In this study, we demonstrate the operation of a six-dimensional Hopfield associative memory storing various memories, an archetypal fault-tolerant neural network, implemented using DNA molecular reactions. The response of the network suggests that the protocols could be scaled to networks of significantly larger dimensions. In addition, the results are read out on a silicon CMOS platform, exploiting semiconductor processing knowledge for fast and accurate hybridization measurement.
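A software analogue of the six-dimensional associative memory helps make the claim concrete: the sketch below simulates a six-neuron Hopfield network with Hebbian weights and asynchronous sign updates, recovering a stored pattern from a corrupted probe. The stored patterns are invented for illustration and are not those used in the DNA experiments.

```python
import numpy as np

def recall(W, state, n_steps=20):
    """Asynchronous Hopfield update: s_i <- sign(sum_j W_ij * s_j)."""
    s = state.copy()
    for _ in range(n_steps):
        for i in np.random.permutation(len(s)):
            h = W[i] @ s
            if h != 0:
                s[i] = 1 if h > 0 else -1
    return s

# Six-dimensional network storing two example +/-1 memories (assumed data)
memories = np.array([[ 1, -1,  1, -1,  1, -1],
                     [ 1,  1, -1, -1,  1,  1]])
W = memories.T @ memories / 6.0
np.fill_diagonal(W, 0.0)

probe = np.array([1, -1, 1, -1, -1, -1])  # memory 0 with one flipped entry
print(recall(W, probe))                   # converges back to memory 0
```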


2010 ◽  
Vol 6 (6) ◽  
pp. 2593-2623 ◽  
Author(s):  
D. M. Roche ◽  
H. Renssen ◽  
D. Paillard

Abstract. Understanding the sequence of events occurring during the last major glacial to interglacial transition (21 ka BP to 9 ka BP) is a challenging task that has the potential to unveil the mechanisms behind large-scale climate changes. Though many studies have focused on a detailed understanding of the sequence of rapid climatic changes that accompanied or interrupted the deglaciation, few have analysed it in a more theoretical framework with simple forcings. In the following, we address when and where the first significant temperature anomalies appear under the slowly varying forcings of the last deglaciation. We use coupled transient simulations of the last deglaciation, including ocean, atmosphere, and vegetation components, to analyse the spatial timing of the deglaciation. To keep the analysis in a simple framework, we do not include the rapid freshwater forcings that led to rapid climate shifts during that period. We aim to disentangle the direct and subsequent responses of the climate system to slow forcings and, moreover, the locations where those changes are most clearly expressed. From a data-model comparison perspective, this could help in understanding the physically plausible phasing between known forcings and recorded climatic changes. Our analysis of climate variability could also help to distinguish deglacial warming signals from internal climate variability. We are thus able to better pinpoint the onset of local deglaciation, defined as the first significant local warming, and further show that there is large regional variability associated with it, even with the set of slow forcings used here.
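The criterion "first significant local warming" lends itself to a simple operational sketch: at each grid cell, take the onset of local deglaciation as the first time the temperature anomaly exceeds a multiple of the baseline (full-glacial) internal variability. The function below is a hypothetical illustration of such a detector, not the paper's exact statistical procedure.

```python
import numpy as np

def deglaciation_onset(temp, time, baseline_idx, n_sigma=2.0):
    """Hypothetical detector of the first significant local warming.
    temp:         array (n_time, n_lat, n_lon) of simulated temperatures
    time:         array (n_time,) of model dates (e.g., ka BP)
    baseline_idx: slice over the early, full-glacial part of the run
    Returns per-cell onset dates; NaN where no significant warming occurs."""
    base = temp[baseline_idx]
    mean = base.mean(axis=0)                 # glacial baseline climatology
    sigma = base.std(axis=0)                 # internal variability estimate
    exceed = temp > mean + n_sigma * sigma   # (n_time, n_lat, n_lon) booleans
    first = exceed.argmax(axis=0)            # index of first exceedance
    never = ~exceed.any(axis=0)              # cells that never warm significantly
    onset = time[first].astype(float)
    onset[never] = np.nan
    return onset
```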


2021 ◽  
Author(s):  
Raymond Pavloski

Demonstrating that an understanding of how neural networks produce a specific quality of experience has been achieved would provide a foundation for new research programs and neurotechnologies. The phenomena that comprise cortical prosthetic vision have two desirable properties for the pursuit of this goal: 1) Models of the subjective qualities of cortical prosthetic vision can be constructed; and 2) These models can be related in a natural way to models of the objective aspects of cortical prosthetic vision. Sense element engagement theory portrays the qualities of cortical prosthetic vision together with coordinated objective neural phenomena as constituting sensible spatiotemporal patterns that are produced by neural interactions. Small-scale neural network simulations are used to illustrate how these patterns are thought to arise. It is proposed that simulations and an electronic neural network (ENN) should be employed in devising tests of the theory. Large-scale simulations can provide estimates of parameter values that are required to construct an ENN. The ENN will be used to develop a prosthetic device that is predicted by the theory to produce visual forms in a novel fashion. According to the theory, confirmation of this prediction would also provide evidence that this ENN is a sentient device.

