Computational Efficiency of a Modular Reservoir Network for Image Recognition

2021 · Vol 15
Author(s): Yifan Dai, Hideaki Yamamoto, Masao Sakuraba, Shigeo Sato

The liquid state machine (LSM) is a type of recurrent spiking network with a strong relationship to neurophysiology, and it has achieved great success in time series processing. However, the computational cost of simulations and complex dynamics with time dependency limit the size and functionality of LSMs. This paper presents a large-scale bioinspired LSM with modular topology. We integrate findings on the visual cortex showing that specifically designed input synapses can reproduce the activation of the real cortex and perform the Hough transform, a feature extraction algorithm used in digital image processing, at no additional cost. We experimentally verify that such a combination can significantly improve the network functionality. The network performance is evaluated on the MNIST dataset, where the image data are encoded into spike trains by Poisson coding. We show that the proposed structure not only significantly reduces the computational complexity but also achieves higher performance than previously reported networks of similar size. We also show that the proposed structure is more robust against system damage than small-world and random structures. We believe that the proposed computationally efficient method can greatly contribute to future applications of reservoir computing.
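
To make the encoding step concrete, below is a minimal sketch of Poisson (Bernoulli-sampled) rate coding of pixel intensities into spike trains; the function name, maximum rate, and duration are illustrative assumptions, not the authors' parameters.

```python
import numpy as np

def poisson_encode(image, duration=100, max_rate=0.2, rng=None):
    """Encode pixel intensities as Bernoulli-sampled spike trains.

    Each pixel fires per time step with probability proportional to its
    normalized intensity, approximating a Poisson process (hypothetical
    parameters for illustration).
    """
    rng = np.random.default_rng() if rng is None else rng
    rates = image.astype(float).ravel() / image.max() * max_rate
    # spikes[t, i] == 1 if pixel i emits a spike at time step t
    return (rng.random((duration, rates.size)) < rates).astype(np.uint8)

# Example: encode a random 28x28 "MNIST-like" image into 100 time steps
spikes = poisson_encode(np.random.randint(0, 256, (28, 28)), duration=100)
print(spikes.shape, spikes.mean())  # (100, 784), mean spike probability
```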

Author(s): Mahdi Esmaily Moghadam, Yuri Bazilevs, Tain-Yen Hsia, Alison Marsden

A closed-loop lumped parameter network (LPN) coupled to a 3D domain is a powerful tool for modeling the global dynamics of the circulatory system. Coupling a 0D LPN to a 3D CFD domain is a numerically challenging problem, often associated with instabilities, extra computational cost, and loss of modularity. A computationally efficient finite element framework has recently been proposed that achieves numerical stability without sacrificing modularity [1]. This type of coupling introduces new challenges for the linear solver (LS), producing a strong coupling between flow and pressure that leads to an ill-conditioned tangent matrix. In this paper we exploit this strong coupling to obtain a novel and efficient LS algorithm. We illustrate the efficiency of this method on several large-scale cardiovascular blood flow simulation problems.
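
The abstract does not detail the 0D side of the coupling, but a two-element Windkessel is the canonical minimal LPN; the sketch below is our own toy, not the paper's monolithic solver, and shows how a 0D pressure state would be advanced and fed back to a 3D solver each time step.

```python
import numpy as np

def windkessel_step(p, q_in, R=1.0, C=1.5, dt=1e-3):
    """One explicit Euler step of a two-element Windkessel LPN.

    C * dp/dt = q_in - p / R; the updated pressure p would be handed back
    to the 3D CFD domain as its outlet boundary condition each time step.
    (Illustrative units and parameter values.)
    """
    return p + dt * (q_in - p / R) / C

# Toy coupling loop: a real CFD solver would supply q_in and receive p back
p = 80.0
for step in range(1000):
    q_in = 5.0 * (1 + np.sin(2 * np.pi * step * 1e-3))  # surrogate flow wave
    p = windkessel_step(p, q_in)
print(f"final LPN pressure: {p:.2f}")
```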


Author(s): Shiyan Jayanath, Ajit Achuthan

Macroscale finite element (FE) models, with their ability to simulate additive manufacturing (AM) processes of metal parts and accurately predict residual stress distributions, are potentially powerful design tools. However, these simulations carry an enormous computational cost, even for a small part only a few orders of magnitude larger than the melt pool. Existing adaptive meshing techniques, which reduce computational cost substantially through selective coarsening, are not well suited to AM process simulations because the model geometry is continuously modified as material is added to the system. To address this limitation, a new FE framework is developed, based on introducing updated discretized geometries at regular intervals during the simulation; this allows greater flexibility in controlling the degree of mesh coarsening than a technique based on element merging recently reported in the literature. The new framework is evaluated by simulating direct metal deposition (DMD) of a thin-walled rectangular part and a thin-walled cylindrical part, and comparing the computational speed and predictions with those of simulations using the conventional framework. The comparison shows excellent agreement in the predicted stress and plastic strain fields, with substantial savings in simulation time. The method is then validated by comparing the predicted residual elastic strain of the thin-walled rectangular part with neutron diffraction measurements. Finally, the new framework's capability to substantially reduce the simulation time for large-scale AM parts is demonstrated by simulating a one-half-foot thin-walled cylindrical part.
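
As a toy illustration of introducing updated discretized geometries at regular intervals, the 1D sketch below periodically rebuilds a coarser node set and transfers the nodal field onto it by interpolation; the names and the deposition loop are our own simplifications, not the paper's FE implementation.

```python
import numpy as np

def remesh_and_interpolate(x_old, u_old, coarsen=2):
    """Rebuild a coarser node set and transfer the nodal field onto it.

    Stand-in for introducing an updated, coarser discretization at a
    regular simulation interval; np.interp does the field transfer.
    """
    x_new = x_old[::coarsen]
    if x_new[-1] != x_old[-1]:
        x_new = np.append(x_new, x_old[-1])  # keep the growing free edge
    return x_new, np.interp(x_new, x_old, u_old)

# Toy deposition loop: add material, then coarsen the solidified region
x = np.linspace(0.0, 1.0, 5)
u = np.zeros_like(x)
for layer in range(1, 6):
    x = np.append(x, 1.0 + 0.2 * layer)      # newly deposited nodes
    u = np.append(u, layer * 0.1)            # surrogate residual strain
    if layer % 2 == 0:                       # periodic re-discretization
        x, u = remesh_and_interpolate(x, u)
print(len(x), u.round(2))
```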


Author(s): Jing Li, Xiaorun Li, Liaoying Zhao

Minimizing the reconstruction error over large hyperspectral image data is one of the most important problems in unsupervised hyperspectral unmixing. A variety of algorithms based on nonnegative matrix factorization (NMF) have been proposed in the literature to solve this minimization problem. One popular optimization method for NMF is projected gradient descent (PGD). However, because the algorithm must compute the full gradient over the entire dataset at every iteration, PGD suffers from high computational cost on large-scale real hyperspectral images. In this paper, we alleviate this problem by introducing a mini-batch gradient descent algorithm, a technique widely used in large-scale machine learning. In our method, the endmembers are updated one pixel set at a time, while the abundances are updated one band set at a time, lowering the per-iteration computational cost. The performance of the proposed algorithm is quantified in experiments on synthetic and real data.
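
A minimal sketch of the mini-batch idea, assuming the standard NMF model X ≈ WH with nonnegativity enforced by projection; the pixel-set and band-set sampling below is our reading of the scheme, and the step size and batch size are illustrative.

```python
import numpy as np

def minibatch_pgd_nmf(X, r, lr=1e-3, batch=256, iters=500, rng=None):
    """Mini-batch projected gradient descent for X ≈ W @ H with W, H >= 0.

    Columns of X are pixels: W (endmember signatures) is updated from a
    random pixel subset, H (abundances) from a random band subset, as a
    stand-in for the paper's pixel-set / band-set scheme.
    """
    rng = np.random.default_rng() if rng is None else rng
    n_bands, n_pix = X.shape
    W = rng.random((n_bands, r))
    H = rng.random((r, n_pix))
    for _ in range(iters):
        cols = rng.choice(n_pix, size=min(batch, n_pix), replace=False)
        grad_W = (W @ H[:, cols] - X[:, cols]) @ H[:, cols].T
        W = np.maximum(W - lr * grad_W, 0.0)          # projection onto >= 0
        rows = rng.choice(n_bands, size=min(batch, n_bands), replace=False)
        grad_H = W[rows].T @ (W[rows] @ H - X[rows])
        H = np.maximum(H - lr * grad_H, 0.0)
    return W, H

X = np.abs(np.random.default_rng(0).normal(size=(100, 1000)))
W, H = minibatch_pgd_nmf(X, r=5)
print(np.linalg.norm(X - W @ H) / np.linalg.norm(X))  # relative error
```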


2019
Author(s): Robert L. Peach, Dominik Saman, Sophia N. Yaliraki, David R. Klug, Liming Ying, ...

Abstract Proteins exhibit complex dynamics across a vast range of time and length scales, from the atomistic to the conformational. Adenylate kinase (ADK) showcases the biological relevance of such inherently coupled dynamics across scales: single mutations can affect large-scale protein motions and enzymatic activity. Here we present a combined computational and experimental study of multiscale structure and dynamics in proteins, using ADK as our system of choice. We show how a computationally efficient method for unsupervised graph partitioning can be applied to atomistic graphs derived from protein structures to reveal intrinsic, biochemically relevant substructures at all scales, without re-parameterisation or a priori coarse-graining. We subsequently perform full alanine and arginine in silico mutagenesis scans of the protein, and score all mutations by the disruption they induce in the large-scale organisation. We use our calculations to guide Förster resonance energy transfer (FRET) experiments on ADK, and show that mutating residue D152 to alanine or residue V164 to arginine induces a large dynamical shift of the protein structure towards a closed state, in accordance with our predictions. Our computations also predict a graded effect of different mutations at the D152 site as a result of increased coherence between the core and binding domains, an effect confirmed quantitatively through a high correlation (R² = 0.93) with the FRET ratio between closed and open populations measured on six mutants.
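
The abstract does not specify the partitioning method, so the sketch below substitutes a plain spectral bisection (sign of the Fiedler vector) as a stand-in for unsupervised partitioning of an atomistic or residue-level graph; the weak two-clique example is synthetic.

```python
import numpy as np
from scipy.sparse.csgraph import laplacian

def fiedler_bisect(A):
    """Bisect a weighted graph by the sign of the Fiedler vector.

    A crude spectral stand-in for the multiscale partitioning the paper
    applies to protein structure graphs; A is a symmetric adjacency
    matrix (e.g. residue-residue interaction strengths).
    """
    L = laplacian(A.astype(float))
    vals, vecs = np.linalg.eigh(L)
    fiedler = vecs[:, 1]          # eigenvector of 2nd-smallest eigenvalue
    return fiedler >= 0           # boolean community labels

# Two weakly coupled 5-node cliques should split cleanly
A = np.zeros((10, 10))
A[:5, :5] = A[5:, 5:] = 1.0
np.fill_diagonal(A, 0.0)
A[4, 5] = A[5, 4] = 0.1           # weak inter-domain contact
print(fiedler_bisect(A))
```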


2015 · Vol 143 (2) · pp. 563-580
Author(s): Joanna Slawinska, Olivier Pauluis, Andrew J. Majda, Wojciech W. Grabowski

Abstract This paper discusses the sparse space–time superparameterization (SSTSP) algorithm and evaluates its ability to represent interactions between moist convection and the large-scale circulation in the context of a Walker cell flow over a planetary-scale two-dimensional domain. The SSTSP represents convective motions in each column of the large-scale model by embedding a cloud-resolving model, and relies on sparse sampling in both space and time to reduce the computational cost of explicitly simulating convective processes. Simulations are performed varying the spatial compression and/or temporal acceleration, and results are compared to the cloud-resolving simulation reported previously. The algorithm is able to reproduce a broad range of circulation features for all temporal accelerations and spatial compressions, but significant biases are identified: precipitation tends to be too intense and too localized over warm waters when compared to the cloud-resolving simulations. It is argued that this is because coherent propagation of organized convective systems from one large-scale model column to another is difficult when superparameterization is used, as noted in previous studies. The Walker cell in all simulations exhibits low-frequency variability on a time scale of about 20 days, characterized by four distinctive stages: suppressed, intensification, active, and weakening. The SSTSP algorithm captures the spatial structure and temporal evolution of this variability. This reinforces confidence that SSTSP preserves fundamental interactions between convection and the large-scale flow, and offers a computationally efficient alternative to traditional convective parameterizations.
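
As a toy illustration of temporal acceleration, the sketch below integrates an embedded "CRM" over only 1/accel of the large-scale time step and rescales the accumulated tendency; the relaxation-to-reference CRM and all parameters are invented for illustration and are not the SSTSP equations.

```python
import numpy as np

def accelerated_tendency(crm_step, state, dt_large, accel=4, dt_crm=1.0):
    """Estimate a convective tendency from a shortened embedded-model run.

    Temporal acceleration in the spirit of SSTSP: integrate the embedded
    model over dt_large / accel only, then rescale the accumulated change
    by accel. crm_step(state, dt) is a hypothetical one-step CRM update.
    """
    n_steps = int(dt_large / (accel * dt_crm))
    start = state.copy()
    for _ in range(n_steps):
        state = crm_step(state, dt_crm)
    return accel * (state - start) / dt_large, state

# Toy "CRM": relaxation of a temperature profile toward a reference
ref = np.linspace(300.0, 340.0, 20)

def crm(state, dt):
    return state + dt * (ref - state) / 50.0

tendency, crm_state = accelerated_tendency(crm, np.full(20, 310.0),
                                           dt_large=120.0)
print(tendency[:3])
```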


Author(s): Yueqing Wang, Xinwang Liu, Yong Dou, Rongchun Li

Multiple kernel clustering (MKC) algorithms have been extensively studied and applied to various applications. Although they have demonstrated great success both theoretically and in applications, existing MKC algorithms cannot be applied to large-scale clustering tasks due to (i) the heavy computational cost of calculating the base kernels and (ii) insufficient memory to load the kernel matrices. In this paper, we propose an approximate algorithm that overcomes these issues and makes MKC applicable to large-scale applications. Specifically, our algorithm trains a deep neural network to regress the indicating matrix generated by an MKC algorithm on a small subset of the data, obtains the approximate indicating matrix of the whole data set using the trained network, and finally performs k-means on the output of the network. By mapping features directly to the indicating matrix, our algorithm avoids computing the full kernel matrices, which dramatically decreases the memory requirement. Extensive experiments show that our algorithm consumes less time than most comparable algorithms while achieving performance on par with MKC algorithms.
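
A minimal end-to-end sketch of the pipeline described above, with scikit-learn stand-ins: the subset indicating matrix is faked with a random projection here (in the paper it would come from running MKC on the subset), and the network architecture is an arbitrary choice.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
X = rng.normal(size=(5000, 20))                  # full data set
sub = rng.choice(5000, size=500, replace=False)  # small subset

# Stand-in for the MKC indicating matrix on the subset: faked here with
# a random projection; in the paper it comes from running the expensive
# multiple kernel clustering on the subset only.
k = 5
H_sub = np.tanh(X[sub] @ rng.normal(size=(20, k)))

# Train a network to regress features -> indicating matrix
net = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=500, random_state=0)
net.fit(X[sub], H_sub)

# Approximate the indicating matrix for all samples, then run k-means on it
H_full = net.predict(X)
labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(H_full)
print(labels[:10])
```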


2018
Author(s): Tsubasa Ito, Keisuke Ota, Kanako Ueno, Yasuhiro Oisi, Chie Matsubara, ...

Abstract The rapid progress of calcium imaging has reached a point where the activity of tens of thousands of cells can be recorded simultaneously. However, the huge amount of data in such recordings makes manual cell detection impractical. Because cell detection is the first step of multicellular data analysis, there is a pressing need for automatic cell detection methods for large-scale image data. Automatic cell detection algorithms have been pioneered by a handful of research groups. Such algorithms, however, assume a conventional field of view (FOV) (i.e., 512 × 512 pixels) and require significantly more computational power to process a wider FOV within a practical period of time. To overcome this issue, we propose a method called low computational-cost cell detection (LCCD), which completes its processing even on the latest ultra-large FOV data within a practical period of time. We compared it with two previously proposed methods, constrained non-negative matrix factorization (CNMF) and Suite2P. We found that LCCD makes it possible to detect cells from a huge amount of high-density imaging data within a shorter period of time and with an accuracy comparable to or better than that of CNMF and Suite2P.
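
LCCD's internals are not given in this abstract, so the sketch below shows only a generic low-cost detection pass in the same spirit (smooth a summary image, threshold, label connected components); thresholds and sizes are arbitrary, and this is not the published algorithm.

```python
import numpy as np
from scipy import ndimage

def detect_cells(summary_image, sigma=2.0, thresh_sd=4.0, min_px=20):
    """Cheap blob detection on a summary image: smooth, threshold, label.

    A generic low-cost stand-in, not the published LCCD algorithm; it runs
    on a mean or max projection of the movie, not the full time series.
    """
    smooth = ndimage.gaussian_filter(summary_image, sigma)
    mask = smooth > smooth.mean() + thresh_sd * smooth.std()
    labels, n = ndimage.label(mask)
    sizes = ndimage.sum(mask, labels, index=np.arange(1, n + 1))
    keep = np.where(sizes >= min_px)[0] + 1        # drop tiny noise blobs
    return np.isin(labels, keep), len(keep)

img = np.random.default_rng(1).normal(size=(512, 512))
img[100:110, 200:210] += 5.0                       # one synthetic "cell"
mask, n_cells = detect_cells(img)
print(n_cells)
```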


Author(s): B. Aparna, S. Madhavi, G. Mounika, P. Avinash, S. Chakravarthi

We propose a new design for large-scale multimedia content protection systems. Our design leverages cloud infrastructures to provide cost efficiency, rapid deployment, scalability, and elasticity to accommodate varying workloads. The proposed system can be used to protect different multimedia content types, including videos, images, audio clips, songs, and music clips. The system can be deployed on private and/or public clouds. Our system has two novel components: (i) a method to create signatures of videos, and (ii) a distributed matching engine for multimedia objects. The signature method creates robust and representative signatures that capture the depth signals in videos; these signatures are efficient to compute and compare, and require little storage. The distributed matching engine achieves high scalability and is designed to support different multimedia objects. We implemented the proposed system and deployed it on two clouds: the Amazon cloud and our private cloud. Our experiments with more than 11,000 videos and 1 million images show the high accuracy and scalability of the proposed system. In addition, we compared our system to the protection system used by YouTube; our results show that the YouTube protection system fails to detect most copies of videos, while our system detects more than 98% of them.
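
The paper's depth-based signatures are not specified here, so as a rough illustration of signature-and-match pipelines in general, the sketch below hashes a frame into a binary block-mean signature and compares signatures by Hamming distance; it does not attempt the depth extraction.

```python
import numpy as np

def frame_signature(frame, grid=8):
    """Coarse block-mean signature of one grayscale frame (2D array).

    A toy perceptual signature for illustration only; the paper's method
    builds signatures from depth signals, which this sketch does not do.
    """
    h, w = frame.shape
    blocks = frame[: h - h % grid, : w - w % grid].reshape(
        grid, h // grid, grid, w // grid).mean(axis=(1, 3))
    return (blocks > blocks.mean()).flatten()   # 64 bits for grid=8

def signature_distance(sig_a, sig_b):
    """Hamming distance between binary signatures; small means likely copy."""
    return np.count_nonzero(sig_a != sig_b)

rng = np.random.default_rng(2)
original = rng.random((240, 320))
copy = np.clip(original + rng.normal(0, 0.05, original.shape), 0, 1)
print(signature_distance(frame_signature(original), frame_signature(copy)))
```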


Author(s): Jiawei Huang, Shiqi Wang, Shuping Li, Shaojun Zou, Jinbin Hu, ...

Abstract Modern data center networks typically adopt multi-rooted tree topologies, such as leaf-spine and fat-tree, to provide high bisection bandwidth, and load balancing is critical to achieving low latency and high throughput. Although per-packet schemes such as Random Packet Spraying (RPS) can achieve high network utilization and near-optimal tail latency in symmetric topologies, they are prone to cause significant packet reordering and degrade network performance. Coding-based schemes have been proposed to alleviate packet reordering and loss, but they ignore the traffic characteristics of data center networks and thus fail to achieve good network performance. In this paper, we propose a Heterogeneous Traffic-aware Partition Coding scheme, named HTPC, to eliminate the impact of packet reordering and improve the performance of both short and long flows. HTPC smoothly adjusts the number of redundant packets based on multi-path congestion information and traffic characteristics, so that the tail probability of short flows and the timeout probability of long flows are both reduced. Through a series of large-scale NS2 simulations, we demonstrate that HTPC reduces average flow completion time by up to 60% compared with state-of-the-art mechanisms.
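
As a hypothetical illustration of congestion- and traffic-aware redundancy, the sketch below picks the number of redundant coded packets from an estimated path loss rate and the flow type; the formula and constants are our own, not HTPC's.

```python
import math

def redundant_packets(n_data, loss_rate, is_short_flow, max_redundancy=8):
    """Choose a redundancy level from path congestion and flow type.

    Hypothetical illustration: protect against the expected number of
    losses, with extra headroom for latency-sensitive short flows.
    """
    expected_loss = math.ceil(n_data * loss_rate)  # losses to cover
    margin = 2 if is_short_flow else 1             # short flows get headroom
    return min(expected_loss * margin, max_redundancy)

print(redundant_packets(n_data=16, loss_rate=0.05, is_short_flow=True))   # 2
print(redundant_packets(n_data=64, loss_rate=0.05, is_short_flow=False))  # 4
```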


2021 · Vol 13 (9) · pp. 5108
Author(s): Navin Ranjan, Sovit Bhandari, Pervez Khan, Youn-Sik Hong, Hoon Kim

The transportation system, especially the road network, is the backbone of any modern economy. However, with rapid urbanization, congestion levels have surged drastically, directly affecting the quality of urban life, the environment, and the economy. In this paper, we propose (i) an inexpensive and efficient Traffic Congestion Pattern Analysis algorithm based on image processing, which identifies the group of roads in a network that suffers from recurring congestion, and (ii) a deep neural network architecture, built from convolutional autoencoders, which learns both spatial and temporal relationships from sequences of image data to predict the city-wide grid congestion index. Our experiments show that both algorithms are efficient: the pattern analysis relies only on basic arithmetic operations, while the prediction algorithm outperforms two other deep neural networks (Convolutional Recurrent Autoencoder and ConvLSTM) in large-scale traffic network prediction. A case study was conducted on a dataset from the city of Seoul.
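
To illustrate how a congestion index can be computed with basic arithmetic alone, here is a minimal sketch on synthetic speed grids; the index definition, free-flow speed, and recurrence thresholds are assumptions for illustration.

```python
import numpy as np

def congestion_index(speed_grid, free_flow_speed=60.0):
    """City-wide grid congestion index from average speeds, in [0, 1].

    Basic arithmetic only, in the spirit of the paper's low-cost pattern
    analysis: 0 means free flow, 1 means fully congested.
    """
    return np.clip(1.0 - speed_grid / free_flow_speed, 0.0, 1.0)

# A week of hourly 32x32 speed grids (synthetic), then a mask of cells
# that are congested often enough to count as recurring congestion
speeds = np.random.default_rng(3).uniform(10.0, 70.0, size=(7 * 24, 32, 32))
index = congestion_index(speeds)
recurring = (index > 0.5).mean(axis=0) > 0.3   # congested >30% of the time
print(recurring.sum(), "grid cells with recurring congestion")
```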

