PymoNNto: A Flexible Modular Toolbox for Designing Brain-Inspired Neural Networks

2021 ◽  
Vol 15 ◽  
Author(s):  
Marius Vieth ◽  
Tristan M. Stöber ◽  
Jochen Triesch

The Python Modular Neural Network Toolbox (PymoNNto) provides a versatile and adaptable Python-based framework for developing and investigating brain-inspired neural networks. In contrast to other commonly used simulators such as Brian2 and NEST, PymoNNto imposes only minimal restrictions on implementation and execution. The basic structure of PymoNNto consists of one network class with several neuron- and synapse-groups. The behaviour of each group can be flexibly defined by exchangeable modules. The implementation of these modules is up to the user and limited only by Python itself. Behaviours can be implemented in Python, NumPy, TensorFlow, and other libraries to perform computations on CPUs and GPUs. PymoNNto comes with convenient high-level behaviour modules, allowing differential-equation-based implementations similar to Brian2, and an adaptable, modular Graphical User Interface for real-time observation and modification of the simulated network and its parameters.
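No code accompanies the abstract; the following is a minimal sketch of the modular pattern it describes (a network object holding neuron groups whose behaviour is defined by exchangeable modules), written in plain Python/NumPy. The class and method names here are illustrative assumptions, not the actual PymoNNto API.

```python
import numpy as np

class Behaviour:
    """Exchangeable module: initialises state and updates a group each step."""
    def initialize(self, group): pass
    def step(self, group): pass

class LeakyIntegrate(Behaviour):
    def __init__(self, leak=0.9):
        self.leak = leak
    def initialize(self, group):
        group.voltage = np.zeros(group.size)
    def step(self, group):
        group.voltage = self.leak * group.voltage + group.input_current
        group.spikes = group.voltage > 1.0
        group.voltage[group.spikes] = 0.0          # reset neurons that spiked

class NeuronGroup:
    def __init__(self, size, behaviours):
        self.size = size
        self.behaviours = behaviours               # list of exchangeable modules
        self.input_current = np.zeros(size)
        for b in behaviours:
            b.initialize(self)

class Network:
    def __init__(self, groups):
        self.groups = groups
    def simulate(self, steps):
        for _ in range(steps):
            for g in self.groups:
                g.input_current = np.random.rand(g.size) * 0.2  # toy external drive
                for b in g.behaviours:
                    b.step(g)

net = Network([NeuronGroup(100, [LeakyIntegrate(leak=0.9)])])
net.simulate(1000)
```

Swapping `LeakyIntegrate` for another `Behaviour` subclass changes the group's dynamics without touching the network or group code, which is the modularity the toolbox emphasises.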

Author(s):  
Mostafijur Rahaman ◽  
Sankar Prasad Mondal ◽  
Shariful Alam

In this chapter, different inventory control problems are formulated in a fuzzy environment and solved with artificial neural networks. Because of the non-linearity associated with the governing differential equations in a fuzzy environment, the solution procedure can become very complicated; artificial neural networks play an important role in avoiding this difficulty.
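The abstract gives no explicit equations, so as a generic illustration of the approach the sketch below trains a small neural network to satisfy a crisp (single alpha-cut) inventory-type differential equation dI/dt = -theta*I - D, with the initial stock enforced by construction through a trial solution. The equation form and coefficient values are assumptions for illustration only.

```python
import torch

# Trial solution y(t) = y0 + t * N(t; theta) satisfies y(0) = y0 by construction.
# The network is trained so that dy/dt matches the assumed inventory ODE
# dI/dt = -deterioration * I - demand  (deterioration plus constant demand).
torch.manual_seed(0)
net = torch.nn.Sequential(torch.nn.Linear(1, 32), torch.nn.Tanh(), torch.nn.Linear(32, 1))
opt = torch.optim.Adam(net.parameters(), lr=1e-2)

y0, deterioration, demand = 100.0, 0.05, 8.0       # illustrative crisp values
t = torch.linspace(0.0, 5.0, 64).unsqueeze(1).requires_grad_(True)

for epoch in range(2000):
    y = y0 + t * net(t)
    dy_dt = torch.autograd.grad(y.sum(), t, create_graph=True)[0]
    residual = dy_dt + deterioration * y + demand   # ODE residual at collocation points
    loss = (residual ** 2).mean()
    opt.zero_grad(); loss.backward(); opt.step()

print(float(loss))  # small residual -> network approximates the inventory trajectory
```

A fuzzy treatment would repeat this fit for several alpha-cuts of the fuzzy parameters, which is outside the scope of this sketch.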


2019 ◽  
Vol 2019 ◽  
pp. 1-9 ◽  
Author(s):  
Alexandru Lavric ◽  
Popa Valentin

Keratoconus (KTC) is a noninflammatory disorder characterized by progressive thinning, deformation, and scarring of the cornea. The pathological mechanisms of this condition have been investigated for a long time. In recent years, this disease has come to the attention of many research centers because the number of people diagnosed with keratoconus is on the rise. In this context, solutions that facilitate both diagnosis and treatment are urgently needed. The main contribution of this paper is the implementation of an algorithm that is able to determine whether an eye is affected by keratoconus. The KeratoDetect algorithm analyzes the corneal topography of the eye using a convolutional neural network (CNN) that is able to extract and learn the features of a keratoconic eye. The results show that the KeratoDetect algorithm ensures a high level of performance, obtaining an accuracy of 99.33% on the test data set. KeratoDetect can assist ophthalmologists in rapid screening of their patients, thus reducing diagnostic errors and facilitating treatment.
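The abstract does not publish the KeratoDetect architecture; the sketch below is a generic small CNN classifier over a single-channel corneal topography map, with the input size, layer widths, and two-class output all assumed for illustration.

```python
import torch
from torch import nn

# Hypothetical input: a 1-channel 64x64 corneal topography (elevation/curvature) map.
class TopographyCNN(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),   # 64 -> 32
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),  # 32 -> 16
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(1),
        )
        self.classifier = nn.Linear(64, 2)  # keratoconus vs. normal

    def forward(self, x):
        return self.classifier(self.features(x).flatten(1))

model = TopographyCNN()
logits = model(torch.randn(8, 1, 64, 64))   # batch of 8 dummy topography maps
print(logits.shape)                         # torch.Size([8, 2])
```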


2020 ◽  
Vol 2020 ◽  
pp. 1-11
Author(s):  
Zhenyu Yang ◽  
Mingge Zhang ◽  
Guojing Liu ◽  
Mingyu Li

Session-based recommendation methods typically model sessions as sequences under the assumption that user behaviors are independent and identically distributed, and then mine deep semantic information with Deep Neural Networks. However, user behaviors may reflect non-independent intentions arising at irregular points in time. For example, users may buy painkillers, books, or clothes for different reasons at different times. This has not been taken seriously in previous studies. Therefore, we propose a session recommendation method based on Neural Differential Equations that attempts to predict user behavior forward or backward from any point in time. We use Ordinary Differential Equations to train the Graph Neural Network, allowing prediction forward or backward at any point in time to model the user's non-independent sessions. We tested the method on four real datasets and found that our model achieved the expected results and was superior to existing session-based recommendation methods.
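As a rough illustration of the continuous-time component described above (the graph-based session model itself is omitted), the sketch below learns a vector field over session states and integrates it with a fixed-step Euler solver, so the state can be queried forward or backward from any time point. All dimensions and names are assumptions.

```python
import torch
from torch import nn

# A learned vector field f(h, t) drives the hidden state between user events;
# integrating it forward (or backward, by reversing the time span) yields the
# state at any continuous time point.
class Dynamics(nn.Module):
    def __init__(self, dim):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(dim + 1, 64), nn.Tanh(), nn.Linear(64, dim))
    def forward(self, h, t):
        return self.net(torch.cat([h, t.expand(h.shape[0], 1)], dim=1))

def odeint_euler(f, h0, t0, t1, steps=20):
    """Fixed-step Euler integration; t1 < t0 integrates backward in time."""
    h, t = h0, torch.tensor([[t0]])
    dt = (t1 - t0) / steps
    for _ in range(steps):
        h = h + dt * f(h, t)
        t = t + dt
    return h

f = Dynamics(dim=16)
h_session = torch.randn(4, 16)                      # states of 4 sessions at t = 0.0
h_future = odeint_euler(f, h_session, 0.0, 1.5)     # predict forward in time
h_past = odeint_euler(f, h_session, 0.0, -0.5)      # predict backward in time
print(h_future.shape, h_past.shape)
```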


2020 ◽  
Vol 45 (03) ◽  
Author(s):  
HO DAC QUAN ◽  
HUYNH TRUNG HIEU

Partial differential equations have been widely applied in many areas of life, such as physics, chemistry, economics, and image processing. In this paper we present a method for solving partial differential equations (PDEs) with Dirichlet boundary conditions using single-hidden-layer feedforward neural networks (SLFN), called the neural network method (NNM). The parameters of the neural network are determined by the backpropagation (BP) training algorithm. The PDE solutions obtained with the NNM are more accurate than those obtained with the finite difference method.
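As an illustration of the NNM idea (a trial solution that satisfies the Dirichlet boundary condition by construction, with the network trained to minimise the PDE residual), the sketch below solves a Poisson equation on the unit square with zero boundary values. The specific PDE, source term, and training settings are assumptions, not taken from the paper.

```python
import torch
from torch import nn

# Trial solution u(x, y) = x(1-x) y(1-y) * N(x, y; theta) vanishes on the boundary
# of the unit square, so the Dirichlet condition u = 0 holds by construction.
# The single-hidden-layer network is trained so that the residual of the Poisson
# equation  u_xx + u_yy = f(x, y)  is minimised at interior collocation points.
torch.manual_seed(0)
slfn = nn.Sequential(nn.Linear(2, 30), nn.Sigmoid(), nn.Linear(30, 1))
opt = torch.optim.Adam(slfn.parameters(), lr=1e-2)

def f(x, y):                                        # assumed source term
    return -2.0 * (x * (1 - x) + y * (1 - y))       # exact solution: x(1-x)y(1-y)

xy = torch.rand(256, 2, requires_grad=True)         # interior collocation points
for epoch in range(3000):
    x, y = xy[:, :1], xy[:, 1:]
    u = x * (1 - x) * y * (1 - y) * slfn(xy)
    grads = torch.autograd.grad(u.sum(), xy, create_graph=True)[0]
    u_x, u_y = grads[:, :1], grads[:, 1:]
    u_xx = torch.autograd.grad(u_x.sum(), xy, create_graph=True)[0][:, :1]
    u_yy = torch.autograd.grad(u_y.sum(), xy, create_graph=True)[0][:, 1:]
    loss = ((u_xx + u_yy - f(x, y)) ** 2).mean()
    opt.zero_grad(); loss.backward(); opt.step()
print(float(loss))  # residual loss shrinks as the trial solution approaches the PDE solution
```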


Electronics ◽  
2021 ◽  
Vol 10 (21) ◽  
pp. 2687
Author(s):  
Eun-Hun Lee ◽  
Hyeoncheol Kim

The significant advantage of deep neural networks is that the upper layer can capture the high-level features of data based on the information acquired from the lower layer by stacking layers deeply. Since it is challenging to interpret what knowledge the neural network has learned, various studies for explaining neural networks have emerged to overcome this problem. However, these studies generate the local explanation of a single instance rather than providing a generalized global interpretation of the neural network model itself. To overcome such drawbacks of the previous approaches, we propose a global interpretation method for deep neural networks based on features of the model. We first analyzed the relationship between the input and hidden layers to represent the high-level features of the model, then interpreted the decision-making process of neural networks through high-level features. In addition, we applied network pruning techniques to make explanations concise and analyzed the effect of layer complexity on interpretability. We present experiments on the proposed approach using three different datasets and show that our approach can generate global explanations of deep neural network models with high accuracy and fidelity.
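As a toy illustration of the general idea (mapping hidden-layer features back onto inputs and pruning weak connections to keep the explanation concise), the sketch below linearises a tiny fully connected network; it is not the authors' method, and all sizes and thresholds are assumptions.

```python
import numpy as np

# Toy global explanation: for a small fully connected network, the product of
# weight matrices maps hidden "high-level features" back onto input features,
# and magnitude pruning removes weak connections to keep the explanation concise.
rng = np.random.default_rng(0)
W1 = rng.normal(size=(10, 4))     # input (10) -> hidden (4)
W2 = rng.normal(size=(4, 3))      # hidden (4) -> output classes (3)

def prune(W, keep=0.5):
    """Zero out all but the largest-magnitude fraction `keep` of the weights."""
    thresh = np.quantile(np.abs(W), 1 - keep)
    return np.where(np.abs(W) >= thresh, W, 0.0)

W1p, W2p = prune(W1), prune(W2)
global_attr = W1p @ W2p           # input-feature x class relevance (linearised view)
for c in range(global_attr.shape[1]):
    top = np.argsort(-np.abs(global_attr[:, c]))[:3]
    print(f"class {c}: most relevant input features {top.tolist()}")
```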


Author(s):  
Jingyun Xu ◽  
Yi Cai

Some text classification methods do not work well on short texts due to data sparsity; moreover, they do not fully exploit context-relevant knowledge. To tackle these problems, we propose a neural network that incorporates context-relevant knowledge into a convolutional neural network for short text classification. Our model consists of two modules. The first module utilizes two layers to extract concept and context features, respectively, and then employs an attention layer to extract the context-relevant concepts. The second module utilizes a convolutional neural network to extract high-level features from the word and context-relevant concept features. The experimental results on three datasets show that our proposed model outperforms the state-of-the-art models.
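The sketch below illustrates the two modules at a high level: an attention layer that weights concept embeddings by their relevance to the text's context, and a 1-D convolution over the word sequence augmented with the attended concept vector. Layer sizes, the mean-pooled context vector, and other details are assumptions rather than the authors' exact architecture.

```python
import torch
from torch import nn
import torch.nn.functional as F

class ConceptAttentionCNN(nn.Module):
    def __init__(self, dim=64, n_classes=3):
        super().__init__()
        self.attn = nn.Linear(2 * dim, 1)
        self.conv = nn.Conv1d(dim, 128, kernel_size=3, padding=1)
        self.out = nn.Linear(128, n_classes)

    def forward(self, words, concepts):
        # words: (B, L, d) word embeddings; concepts: (B, K, d) concept embeddings
        context = words.mean(dim=1, keepdim=True)                            # (B, 1, d)
        scores = self.attn(torch.cat([concepts,
                                      context.expand_as(concepts)], -1))     # (B, K, 1)
        attended = (F.softmax(scores, dim=1) * concepts).sum(1, keepdim=True)  # (B, 1, d)
        seq = torch.cat([words, attended], dim=1).transpose(1, 2)            # (B, d, L+1)
        features = F.relu(self.conv(seq)).max(dim=2).values                  # (B, 128)
        return self.out(features)

model = ConceptAttentionCNN()
logits = model(torch.randn(2, 12, 64), torch.randn(2, 5, 64))  # dummy words and concepts
print(logits.shape)   # torch.Size([2, 3])
```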


2017 ◽  
Author(s):  
Michael F. Bonner ◽  
Russell A. Epstein

ABSTRACT
Biologically inspired deep convolutional neural networks (CNNs), trained for computer vision tasks, have been found to predict cortical responses with remarkable accuracy. However, the complex internal operations of these models remain poorly understood, and the factors that account for their success are unknown. Here we developed a set of techniques for using CNNs to gain insights into the computational mechanisms underlying cortical responses. We focused on responses in the occipital place area (OPA), a scene-selective region of dorsal occipitoparietal cortex. In a previous study, we showed that fMRI activation patterns in the OPA contain information about the navigational affordances of scenes: that is, information about where one can and cannot move within the immediate environment. We hypothesized that this affordance information could be extracted using a set of purely feedforward computations. To test this idea, we examined a deep CNN with a feedforward architecture that had been previously trained for scene classification. We found that the CNN was highly predictive of OPA representations, and, importantly, that it accounted for the portion of OPA variance that reflected the navigational affordances of scenes. The CNN could thus serve as an image-computable candidate model of affordance-related responses in the OPA. We then ran a series of in silico experiments on this model to gain insights into its internal computations. These analyses showed that the computation of affordance-related features relied heavily on visual information at high spatial frequencies and cardinal orientations, both of which have previously been identified as low-level stimulus preferences of scene-selective visual cortex. These computations also exhibited a strong preference for information in the lower visual field, which is consistent with known retinotopic biases in the OPA. Visualizations of feature selectivity within the CNN suggested that affordance-based responses encoded features that define the layout of the spatial environment, such as boundary-defining junctions and large extended surfaces. Together, these results map the sensory functions of the OPA onto a fully quantitative model that provides insights into its visual computations. More broadly, they advance integrative techniques for understanding visual cortex across multiple levels of analysis: from the identification of cortical sensory functions to the modeling of their underlying algorithmic implementations.
AUTHOR SUMMARY
How does visual cortex compute behaviorally relevant properties of the local environment from sensory inputs? For decades, computational models have been able to explain only the earliest stages of biological vision, but recent advances in the engineering of deep neural networks have yielded a breakthrough in the modeling of high-level visual cortex. However, these models are not explicitly designed for testing neurobiological theories, and, like the brain itself, their complex internal operations remain poorly understood. Here we examined a deep neural network for insights into the cortical representation of the navigational affordances of visual scenes. In doing so, we developed a set of high-throughput techniques and statistical tools that are broadly useful for relating the internal operations of neural networks with the information processes of the brain.
Our findings demonstrate that a deep neural network with purely feedforward computations can account for the processing of navigational layout in high-level visual cortex. We next performed a series of experiments and visualization analyses on this neural network, which characterized a set of stimulus input features that may be critical for computing navigationally related cortical representations and identified a set of high-level, complex scene features that may serve as a basis set for the cortical coding of navigational layout. These findings suggest a computational mechanism through which high-level visual cortex might encode the spatial structure of the local navigational environment, and they demonstrate an experimental approach for leveraging the power of deep neural networks to understand the visual computations of the brain.
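As a generic illustration of the encoding-model approach mentioned above (predicting voxel responses from CNN features), the sketch below fits a ridge regression from feature activations to simulated responses and reports cross-validated prediction accuracy. The data are random stand-ins and the procedure is not the authors' exact analysis.

```python
import numpy as np

# Toy encoding model: ridge regression from CNN-layer features to voxel responses,
# evaluated by held-out prediction correlation. Features and responses are random
# stand-ins for real CNN activations and fMRI data.
rng = np.random.default_rng(0)
n_scenes, n_features, n_voxels = 200, 512, 50
X = rng.normal(size=(n_scenes, n_features))              # CNN activations per scene
W_true = rng.normal(size=(n_features, n_voxels)) * 0.05
Y = X @ W_true + rng.normal(size=(n_scenes, n_voxels))   # simulated voxel responses

train, test = slice(0, 150), slice(150, 200)
lam = 10.0
W = np.linalg.solve(X[train].T @ X[train] + lam * np.eye(n_features),
                    X[train].T @ Y[train])               # closed-form ridge solution
pred = X[test] @ W
r = [np.corrcoef(pred[:, v], Y[test][:, v])[0, 1] for v in range(n_voxels)]
print(f"mean prediction correlation across voxels: {np.mean(r):.2f}")
```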


2022 ◽  
Vol 16 (2) ◽  
pp. 1-18
Author(s):  
Hanlu Wu ◽  
Tengfei Ma ◽  
Lingfei Wu ◽  
Fangli Xu ◽  
Shouling Ji

Crowdsourcing has attracted much attention for the convenience of collecting labels from non-expert workers instead of experts. However, due to the high level of noise from the non-experts, a label aggregation model that infers the true label from noisy crowdsourced labels is required. In this article, we propose a novel framework based on graph neural networks for aggregating crowd labels. We construct a heterogeneous graph between workers and tasks and derive a new graph neural network to learn the representations of nodes and the true labels. In addition, we exploit the unknown latent interactions between nodes of the same type (workers or tasks) by adding a homogeneous attention layer to the graph neural network. Experimental results on 13 real-world datasets show superior performance over state-of-the-art models.
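The sketch below shows one heterogeneous message-passing step on a worker-task graph of the kind described above; the homogeneous attention layer and label-aware edge features are omitted, and all dimensions and layer choices are assumptions.

```python
import torch
from torch import nn

# One message-passing step on a bipartite worker-task graph: each task aggregates
# embeddings of the workers that labelled it, and each worker aggregates the tasks
# it labelled (label values on the edges are omitted for brevity).
n_workers, n_tasks, dim, n_classes = 5, 8, 16, 3
A = torch.zeros(n_workers, n_tasks)
A[torch.randint(n_workers, (12,)), torch.randint(n_tasks, (12,))] = 1.0  # who labelled what

worker_emb = nn.Parameter(torch.randn(n_workers, dim))
task_emb = nn.Parameter(torch.randn(n_tasks, dim))
msg_w2t = nn.Linear(dim, dim)
msg_t2w = nn.Linear(dim, dim)
readout = nn.Linear(dim, n_classes)

deg_t = A.sum(0, keepdim=True).clamp(min=1)        # per-task normalisation
deg_w = A.sum(1, keepdim=True).clamp(min=1)        # per-worker normalisation
task_h = torch.relu(msg_w2t(A.t() @ worker_emb) / deg_t.t())
worker_h = torch.relu(msg_t2w(A @ task_emb) / deg_w)
true_label_logits = readout(task_h)                # inferred label distribution per task
print(true_label_logits.shape)                     # torch.Size([8, 3])
```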

