A hybrid neural network-based technique to improve the flow forecasting of physical and data-driven models: Methodology and case studies in Andean watersheds

2020 ◽  
Vol 27 ◽  
pp. 100652 ◽  
Author(s):  
Juan F. Farfán ◽  
Karina Palacios ◽  
Jacinto Ulloa ◽  
Alex Avilés
Materials ◽  
2021 ◽  
Vol 14 (11) ◽  
pp. 2875
Author(s):  
Xiaoxin Lu ◽  
Julien Yvonnet ◽  
Leonidas Papadopoulos ◽  
Ioannis Kalogeris ◽  
Vissarion Papadopoulos

A stochastic data-driven multilevel finite-element (FE2) method is introduced for random nonlinear multiscale calculations. A hybrid neural-network–interpolation (NN–I) scheme is proposed to construct a surrogate model of the macroscopic nonlinear constitutive law from representative-volume-element calculations, whose results are used as input data. An FE2 method that replaces the nonlinear multiscale calculations with the NN–I surrogate is then developed. The NN–I scheme improves the accuracy of the neural-network surrogate model when insufficient data are available. The resulting computational time is several orders of magnitude lower than that of direct FE2, which makes this machine-learning method practical for performing Monte Carlo simulations in nonlinear heterogeneous structures, propagating uncertainties in this context, and identifying probabilistic models of quantities of interest at the macroscale. Applications to nonlinear electric conduction in graphene–polymer composites are presented.
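As a toy illustration of the surrogate-plus-Monte-Carlo idea, the sketch below stands in for the RVE computations and the NN–I surrogate with a hypothetical one-dimensional nonlinear law and plain interpolation; the law, grid, and input distribution are illustrative assumptions, not the authors' method.

```python
import numpy as np

rng = np.random.default_rng(0)

# "Offline" data: pretend these are RVE results precomputed on a strain grid
# (a hypothetical saturating stress-strain law stands in for real RVE output).
strain_grid = np.linspace(0.0, 0.05, 50)
stress_grid = 200e3 * np.tanh(40.0 * strain_grid)

# Surrogate: interpolation over the precomputed responses, playing the role
# of the NN-I model that replaces the nonlinear multiscale solve.
def surrogate(strain):
    return np.interp(strain, strain_grid, stress_grid)

# Monte Carlo over an uncertain macroscopic strain input.
samples = rng.normal(loc=0.02, scale=0.003, size=10_000)
stresses = surrogate(np.clip(samples, 0.0, 0.05))

print(f"mean stress: {stresses.mean():.1f}, std: {stresses.std():.1f}")
```

Because each surrogate evaluation is cheap, the 10,000-sample loop that would require 10,000 nonlinear multiscale solves in direct FE2 runs in milliseconds here.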


Author(s):  
Navaamsini Boopalan ◽  
Agileswari K. Ramasamy ◽  
Farrukh Hafiz Nagi

Array sensors are widely used in fields such as radar, wireless communications, autonomous vehicles, medical imaging, astronomical observation, and fault diagnosis. Array signal processing relies on a beam pattern produced by the amplitude and phase of the signal at each element of the array. The beam pattern can become severely distorted if an array element fails, badly degrading the Signal-to-Noise Ratio (SNR). This paper proposes a Hybrid Neural Network layer weight Goal Attain Optimization (HNNGAO) method to generate, with the remaining elements in the array, a recovery beam pattern that closely resembles the original one. The proposed HNNGAO method is compared with the classic goal-attainment beam-pattern synthesis method and with the failed beam pattern, both generated in the MATLAB environment. The results prove that the proposed HNNGAO method yields a better SNR with the remaining working elements of a linear array than the classic goal-attainment method alone. Keywords: Backpropagation; Feed-forward neural network; Goal attain; Neural networks; Radiation pattern; Sensor arrays; Sensor failure; Signal-to-Noise Ratio (SNR)
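The distortion a failed element causes in a linear-array beam pattern can be sketched numerically; the element count, half-wavelength spacing, uniform weights, and failed-element index below are illustrative assumptions, not the paper's setup.

```python
import numpy as np

def array_factor(weights, theta, d=0.5):
    # Array factor of a uniform linear array; d is element spacing
    # in wavelengths, theta the scan angle off broadside.
    n = np.arange(len(weights))
    phase = 2j * np.pi * d * np.outer(np.sin(theta), n)
    return np.exp(phase) @ weights

theta = np.linspace(-np.pi / 2, np.pi / 2, 721)
w = np.ones(8, dtype=complex)   # healthy 8-element array
w_failed = w.copy()
w_failed[3] = 0.0               # element 3 has failed (zero amplitude)

af_ok = np.abs(array_factor(w, theta))
af_bad = np.abs(array_factor(w_failed, theta))

print(af_ok.max(), af_bad.max())  # mainlobe peak drops from 8.0 to 7.0
```

Besides the lower peak, the failed pattern exhibits raised sidelobes; a recovery method such as HNNGAO would re-optimise the remaining seven weights to approximate the original pattern.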


2021 ◽  
Vol 11 (4) ◽  
pp. 1829
Author(s):  
Davide Grande ◽  
Catherine A. Harris ◽  
Giles Thomas ◽  
Enrico Anderlini

Recurrent Neural Networks (RNNs) are increasingly being used for model identification, forecasting and control. When identifying physical models without prior mathematical knowledge of the system, Nonlinear AutoRegressive models with eXogenous inputs (NARX) or Nonlinear AutoRegressive Moving-Average models with eXogenous inputs (NARMAX) are typically used. In the context of data-driven control, machine-learning algorithms have been shown to match the performance of advanced control techniques, but they lack the guarantees of traditional stability theory. This paper illustrates a method to prove a posteriori the stability of a generic neural network, showing its application to a state-of-the-art RNN architecture. The presented method relies on identifying the poles associated with the network designed starting from the input/output data. Providing a framework that guarantees the stability of any neural network architecture, combined with generalisability and applicability to different fields, can significantly broaden the use of such networks in dynamic-systems modelling and control.
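The a posteriori idea of checking the poles of a model identified from input/output data can be illustrated on a linear toy system (the paper targets RNNs; the second-order ARX model and its coefficients below are assumptions chosen only for illustration): fit the model by least squares, then verify that every pole magnitude is below one.

```python
import numpy as np

rng = np.random.default_rng(1)

# Simulate I/O data from a stable second-order system with poles 0.9 and 0.5:
# y[k] = a1*y[k-1] + a2*y[k-2] + b0*u[k-1]
a1, a2, b0 = 1.4, -0.45, 1.0
u = rng.standard_normal(500)
y = np.zeros(500)
for k in range(2, 500):
    y[k] = a1 * y[k - 1] + a2 * y[k - 2] + b0 * u[k - 1]

# Least-squares ARX fit from the input/output data alone.
X = np.column_stack([y[1:-1], y[:-2], u[1:-1]])
theta = np.linalg.lstsq(X, y[2:], rcond=None)[0]

# A posteriori stability check: poles are the roots of
# z^2 - a1_hat*z - a2_hat = 0, and must lie inside the unit circle.
poles = np.roots([1.0, -theta[0], -theta[1]])
print("poles:", poles, "stable:", bool(np.all(np.abs(poles) < 1.0)))
```

With noiseless data the fit recovers the true coefficients, so the check returns the original poles 0.9 and 0.5; for an RNN the same test would be applied to the poles of its identified linearised dynamics.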


2021 ◽  
Vol 17 (2) ◽  
pp. 1-27
Author(s):  
Morteza Hosseini ◽  
Tinoosh Mohsenin

This article presents a low-power, programmable, domain-specific manycore accelerator, Binarized neural Network Manycore Accelerator (BiNMAC), which adopts and efficiently executes binary precision weight/activation neural network models. Such networks have compact models in which weights are constrained to only 1 bit, so several can be packed into one memory entry, minimizing the memory footprint. Packing weights also facilitates single-instruction, multiple-data execution with simple circuitry, maximizing performance and efficiency. The proposed BiNMAC has light-weight cores that support domain-specific instructions, and a router-based memory access architecture that helps with efficient implementation of layers in binary precision weight/activation neural networks of proper size. With only 3.73% area and 1.98% average power overhead, respectively, novel instructions such as Combined Population-Count-XNOR, Patch-Select, and Bit-based Accumulation are added to the instruction set architecture of the BiNMAC, each of which replaces execution cycles of frequently used functions with 1 clock cycle that otherwise would have taken 54, 4, and 3 clock cycles, respectively. Additionally, customized logic is added to every core to transpose 16×16-bit blocks of memory on a bit-level basis, which expedites reshaping intermediate data to be well-aligned for bitwise operations. A 64-cluster architecture of the BiNMAC is fully placed and routed in 65-nm TSMC CMOS technology, where a single cluster occupies an area of 0.53 mm² with an average power of 232 mW at 1-GHz clock frequency and 1.1 V. The 64-cluster architecture occupies 36.5 mm² and, if fully exploited, consumes a total power of 16.4 W and can perform 1,360 Giga Operations Per Second (GOPS) while providing full programmability.
To demonstrate its scalability, four binarized case studies, including ResNet-20 and LeNet-5 for high-performance image classification, as well as a ConvNet and a multilayer perceptron for low-power physiological applications, were implemented on BiNMAC. The implementation results indicate that the population-count instruction alone can improve performance by approximately 5×. When the other new instructions are added to a RISC machine with an existing population-count instruction, performance increases by 58% on average. To compare the performance of the BiNMAC with other commercial-off-the-shelf platforms, the case studies with their double-precision floating-point models are also implemented on the NVIDIA Jetson TX2 SoC (CPU+GPU). The results indicate that, within a margin of ~2.1%–9.5% accuracy loss, BiNMAC on average outperforms the TX2 GPU by approximately 1.9× (or 7.5× with fabrication technology scaled) in energy consumption for image classification applications. On low-power settings and within a margin of ~3.7%–5.5% accuracy loss compared to the ARM Cortex-A57 CPU implementation, BiNMAC is roughly 9.7×–17.2× (or 38.8×–68.8× with fabrication technology scaled) more energy efficient for physiological applications while meeting the application deadline.
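The Combined Population-Count-XNOR primitive exploits the standard binarized-network identity dot(a, w) = 2·popcount(XNOR(a, w)) − n for ±1 vectors packed as bits. A minimal software sketch of that identity (an illustration of the technique, not the BiNMAC instruction itself):

```python
# Binarized dot product: elements in {-1, +1} are packed as bits
# (bit 1 -> +1, bit 0 -> -1), so one memory word holds many weights.
# XNOR marks matching elements; each match contributes +1, each
# mismatch -1, giving dot = 2 * popcount(XNOR) - n.

def binary_dot(a_bits: int, w_bits: int, n: int) -> int:
    matches = ~(a_bits ^ w_bits) & ((1 << n) - 1)  # XNOR, masked to n bits
    return 2 * bin(matches).count("1") - n         # popcount, then rescale

# a = [+1, -1, +1, +1] -> 0b1011;  w = [+1, +1, -1, +1] -> 0b1101
print(binary_dot(0b1011, 0b1101, 4))  # → 0, matching (+1-1-1+1)
```

Fusing the XNOR and the popcount into one instruction is what lets the accelerator collapse this inner-loop kernel into a single clock cycle.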

