Design of Nonlinear Autoregressive Exogenous Model Based Intelligence Computing for Efficient State Estimation of Underwater Passive Target

Entropy ◽  
2021 ◽  
Vol 23 (5) ◽  
pp. 550
Author(s):  
Wasiq Ali ◽  
Wasim Ullah Khan ◽  
Muhammad Asif Zahoor Raja ◽  
Yigang He ◽  
Yaan Li

In this study, an intelligent computing paradigm built on a nonlinear autoregressive exogenous (NARX) feedback neural network model with the strength of deep learning is presented for accurate state estimation of an underwater passive target. In underwater scenarios, real-time motion parameters of passive objects are usually extracted with nonlinear filtering techniques. In filtering algorithms, nonlinear passive measurements are associated with the linear kinetics of the target, governed by state-space methodology. To improve tracking accuracy, estimate features effectively, and minimize the position error of dynamic passive objects, the strength of NARX-based supervised learning is exploited. Dynamic artificial neural networks, which contain tapped delay lines, are suitable for predicting the future state of the underwater passive object. Neural-network-based intelligent computing is effectively applied to estimate the real-time actual state of a passive moving object that follows a semi-curved path. Performance of the NARX-based neural network is evaluated for six different standard deviations of white Gaussian measurement noise under a bearings-only tracking scenario. The root mean square error between the estimated and real positions of the passive target in rectangular coordinates is computed to evaluate the worth of the proposed NARX feedback neural network scheme. Monte Carlo simulations are conducted, and the results certify the capability of the intelligent computing scheme over conventional nonlinear filtering algorithms such as the spherical radial cubature Kalman filter and the unscented Kalman filter for the given state estimation model.
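As a rough illustration of the tapped-delay NARX idea (not the authors' implementation), the sketch below feeds lagged bearing measurements and lagged positions into a small feedforward network and reports the position RMSE; the delay depth, layer sizes, trajectory, and noise level are all assumptions.

```python
# Minimal NARX-style sketch for bearings-only position estimation.
# Assumptions (not from the paper): 2 delay taps, one hidden layer of
# 16 units, and synthetic training data from a straight-line target.
import numpy as np
import tensorflow as tf

rng = np.random.default_rng(0)

# Synthetic target: constant-velocity motion observed via noisy bearings.
T = 500
t = np.arange(T, dtype=np.float32)
x_true = 100.0 + 0.5 * t          # target x-position (m)
y_true = 200.0 + 0.3 * t          # target y-position (m)
bearing = np.arctan2(y_true, x_true) + rng.normal(0.0, 0.01, T)  # rad

# Tapped-delay-line regressors: past bearings (exogenous input) and past
# positions (autoregressive feedback, taken from ground truth here for
# open-loop training).
delays = 2
X, Y = [], []
for k in range(delays, T):
    X.append(np.concatenate([bearing[k - delays:k],
                             x_true[k - delays:k],
                             y_true[k - delays:k]]))
    Y.append([x_true[k], y_true[k]])
X = np.asarray(X, dtype=np.float32)
Y = np.asarray(Y, dtype=np.float32)

# Small feedforward net acting as the NARX mapping f(past I/O) -> state.
model = tf.keras.Sequential([
    tf.keras.Input(shape=(X.shape[1],)),
    tf.keras.layers.Dense(16, activation="tanh"),
    tf.keras.layers.Dense(2),
])
model.compile(optimizer="adam", loss="mse")
model.fit(X, Y, epochs=50, verbose=0)

# Root mean square position error on the training trajectory.
pred = model.predict(X, verbose=0)
rmse = np.sqrt(np.mean(np.sum((pred - Y) ** 2, axis=1)))
print(f"RMSE (m): {rmse:.2f}")
```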

Entropy ◽  
2021 ◽  
Vol 23 (9) ◽  
pp. 1124
Author(s):  
Wasiq Ali ◽  
Yaan Li ◽  
Muhammad Asif Zahoor Raja ◽  
Wasim Ullah Khan ◽  
Yigang He

In this study, an application of deep learning-based neural computing is proposed for efficient real-time state estimation of a Markov chain underwater maneuvering object. The designed intelligent strategy exploits the strength of a nonlinear autoregressive with exogenous input (NARX) network model, which is capable of estimating the dynamics of systems that follow a discrete-time Markov chain. Nonlinear Bayesian filtering techniques are often applied to underwater maneuvering state estimation following state-space methodology. The robustness and precision of the NARX neural network are investigated for accurate state prediction of a highly maneuvering passive underwater target governed by a Markov chain. A continuous coordinated-turn trajectory of the underwater maneuvering object is modeled to analyze the performance of the neural computing paradigm. State estimation modeling is developed in the context of bearings-only tracking, in which the efficiency of the NARX neural network is investigated for both ideal and complex ocean environments. The real-time position and velocity of the maneuvering object are computed for five different cases by varying the standard deviation of the white Gaussian measurement noise. Extensive Monte Carlo simulation results validate the competence of NARX neural computing over conventional generalized pseudo-Bayesian filtering algorithms such as the interacting multiple model extended Kalman filter and the interacting multiple model unscented Kalman filter.
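For context, a coordinated-turn trajectory with bearings-only measurements of the kind used as a testbed here can be simulated as sketched below; the turn rate, sampling time, initial state, and noise standard deviation are illustrative assumptions, not values from the paper.

```python
# Hedged sketch of a coordinated-turn target trajectory observed through
# bearings-only measurements from an observer at the origin.
import numpy as np

rng = np.random.default_rng(1)
dt, omega, steps = 1.0, np.deg2rad(3.0), 200   # sample time (s), turn rate
sigma_theta = 0.02                              # bearing noise std (rad)

# State vector [x, vx, y, vy]; standard constant-turn transition matrix.
s, c = np.sin(omega * dt), np.cos(omega * dt)
F = np.array([[1, s / omega,       0, -(1 - c) / omega],
              [0, c,               0, -s],
              [0, (1 - c) / omega, 1,  s / omega],
              [0, s,               0,  c]])

state = np.array([1000.0, -5.0, 500.0, 8.0])    # assumed initial state
states, bearings = [], []
for _ in range(steps):
    state = F @ state
    states.append(state.copy())
    # Noisy bearing of the target as seen from the origin.
    bearings.append(np.arctan2(state[2], state[0])
                    + rng.normal(0.0, sigma_theta))

states = np.asarray(states)
bearings = np.asarray(bearings)
print(states.shape, bearings.shape)   # (200, 4) (200,)
```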


Author(s):  
Muhammad Hanif Ahmad Nizar ◽  
Chow Khuen Chan ◽  
Azira Khalil ◽  
Ahmad Khairuddin Mohamed Yusof ◽  
Khin Wee Lai

Background: Valvular heart disease is a serious disease leading to mortality and increasing medical care costs. The aortic valve is the valve most commonly affected by this disease. Doctors rely on echocardiography for diagnosing and evaluating valvular heart disease. However, echocardiographic images are of poorer quality than Computerized Tomography and Magnetic Resonance Imaging scans. This study proposes the development of Convolutional Neural Networks (CNN) that can function optimally during a live echocardiographic examination for detection of the aortic valve. An automated detection system in an echocardiogram will improve the accuracy of medical diagnosis and can provide further medical analysis from the resulting detections. Methods: Two detection architectures, the Single Shot Multibox Detector (SSD) and the Faster Region-based Convolutional Neural Network (Faster R-CNN), with various feature extractors were trained on echocardiography images from 33 patients. Thereafter, the models were tested on 10 echocardiography videos. Results: Faster R-CNN Inception v2 showed the highest accuracy (98.6%), followed closely by SSD MobileNet v2. In terms of speed, SSD MobileNet v2 suffered a loss of 46.81% in frames per second (fps) during real-time detection but still performed better than the other neural network models. Additionally, SSD MobileNet v2 used the least Graphics Processing Unit (GPU) resources, while Central Processing Unit (CPU) usage was relatively similar across all models. Conclusion: Our findings provide a foundation for implementing a convolutional detection system in echocardiography for medical purposes.
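A hedged sketch of how detection throughput (fps) can be measured on video frames is shown below; it uses torchvision's Faster R-CNN with a ResNet-50 FPN backbone as a stand-in for the Inception v2 and MobileNet v2 variants evaluated in the study, and the frame size and count are assumptions.

```python
# Hedged sketch: measuring detection throughput (fps) on video frames.
# torchvision's Faster R-CNN (ResNet-50 FPN) is only a stand-in for the
# feature extractors reported in the study; frame size is assumed.
import time
import torch
import torchvision

# weights=None builds the architecture with random weights, which is
# enough for a throughput measurement without downloading checkpoints.
model = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights=None)
model.eval()

# Dummy echocardiography-like frames (replace with real video frames).
frames = [torch.rand(3, 480, 640) for _ in range(16)]

start = time.perf_counter()
with torch.no_grad():
    for frame in frames:
        outputs = model([frame])      # per-frame boxes, labels, scores
elapsed = time.perf_counter() - start
print(f"throughput: {len(frames) / elapsed:.1f} fps")
```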


Processes ◽  
2021 ◽  
Vol 9 (5) ◽  
pp. 737
Author(s):  
Chaitanya Sampat ◽  
Rohit Ramachandran

The digitization of manufacturing processes has led to an increase in the availability of process data, which has enabled the use of data-driven models to predict the outcomes of these manufacturing processes. Data-driven models are nearly instantaneous to simulate and can provide real-time predictions, but they lack any governing physics within their framework. When process data deviate from the original conditions, the predictions from these models may not respect physical boundaries. In such cases, first-principles-based models have proven effective for predicting process outcomes, but they are computationally inefficient and cannot be solved in real time. Thus, there remains a need to develop efficient data-driven models with a physical understanding of the process. In this work, we demonstrate the addition of physics-based boundary constraints to a neural network to improve its predictability of granule density and granule size distribution (GSD) for a high-shear granulation process. The physics-constrained neural network (PCNN) was better at predicting granule growth regimes than other neural networks with no physical constraints. When input data that violated the physics-based boundaries were provided, the PCNN identified these points more accurately than the non-physics-constrained neural networks, with an error of <1%. A sensitivity analysis of the PCNN with respect to the input variables was also performed to understand their individual effects on the final outputs.
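One common way to impose such physics-based constraints, shown as a hedged sketch below, is to add a penalty to the training loss whenever predictions leave physically admissible bounds; the bound values, penalty weight, and network shape are assumptions, not the authors' exact formulation.

```python
# Hedged sketch of a physics-constrained loss: mean-squared error
# augmented with a penalty whenever the predicted (normalized) granule
# density falls outside admissible bounds. All constants are assumptions.
import tensorflow as tf

RHO_MIN, RHO_MAX = 0.0, 1.0     # normalized density bounds (assumed)
LAMBDA = 10.0                    # weight of the physics penalty (assumed)

def physics_constrained_loss(y_true, y_pred):
    mse = tf.reduce_mean(tf.square(y_true - y_pred))
    # Penalize any prediction below RHO_MIN or above RHO_MAX.
    below = tf.nn.relu(RHO_MIN - y_pred)
    above = tf.nn.relu(y_pred - RHO_MAX)
    penalty = tf.reduce_mean(below + above)
    return mse + LAMBDA * penalty

model = tf.keras.Sequential([
    tf.keras.Input(shape=(6,)),               # assumed 6 process inputs
    tf.keras.layers.Dense(32, activation="relu"),
    tf.keras.layers.Dense(1),                 # normalized granule density
])
model.compile(optimizer="adam", loss=physics_constrained_loss)
# model.fit(X_train, y_train, ...) would then train under the constraint.
```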


2021 ◽  
Vol 5 (1) ◽  
Author(s):  
Georges Aad ◽  
Anne-Sophie Berthold ◽  
Thomas Calvet ◽  
Nemer Chiedde ◽  
Etienne Marie Fortin ◽  
...  

The ATLAS experiment at the Large Hadron Collider (LHC) is operated at CERN and measures proton–proton collisions at multi-TeV energies with a repetition frequency of 40 MHz. Within the phase-II upgrade of the LHC, the readout electronics of the liquid-argon (LAr) calorimeters of ATLAS are being prepared for high-luminosity operation, expecting a pileup of up to 200 simultaneous proton–proton interactions. Moreover, the calorimeter signals of up to 25 subsequent collisions overlap, which increases the difficulty of energy reconstruction by the calorimeter detector. Real-time processing of digitized pulses sampled at 40 MHz is performed using field-programmable gate arrays (FPGAs). To cope with the signal pileup, new machine learning approaches are explored: convolutional and recurrent neural networks outperform the optimal signal filter currently used, both in assignment of the reconstructed energy to the correct proton bunch crossing and in energy resolution. The improvements concern in particular energies derived from overlapping pulses. Since the implementation of the neural networks targets an FPGA, the number of parameters and the mathematical operations need to be well controlled. The trained neural network structures are converted into FPGA firmware using automated implementations in hardware description language and high-level synthesis tools. Very good agreement between the neural network implementations in FPGA and software-based calculations is observed. The prototype implementations on an Intel Stratix-10 FPGA reach maximum operation frequencies of 344–640 MHz. Applying time-division multiplexing allows the processing of 390–576 calorimeter channels by one FPGA for the most resource-efficient networks. Moreover, the latency achieved is about 200 ns. These performance parameters show that a neural-network-based energy reconstruction can be considered for the processing of the ATLAS LAr calorimeter signals during the high-luminosity phase of the LHC.
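To illustrate the parameter-budget aspect (the firmware conversion itself is not shown), the sketch below builds a deliberately small 1D convolutional network mapping a window of digitized pulse samples to a reconstructed energy; the window length and filter counts are assumptions chosen only to keep the parameter count FPGA-friendly.

```python
# Hedged sketch: a small 1D convolutional network over a sliding window
# of digitized calorimeter samples, producing one energy value per
# inference. Window length and filter counts are assumptions.
import tensorflow as tf

WINDOW = 24   # number of 25 ns samples per inference window (assumed)

model = tf.keras.Sequential([
    tf.keras.Input(shape=(WINDOW, 1)),
    tf.keras.layers.Conv1D(4, kernel_size=5, activation="relu"),
    tf.keras.layers.Conv1D(4, kernel_size=5, activation="relu"),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(1),     # reconstructed energy for one crossing
])
model.summary()   # the parameter count must stay small for FPGA deployment
```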


Author(s):  
Fatma Gumus ◽  
Derya Yiltas-Kaplan

Software-Defined Networking (SDN) is a programmable network architecture that provides innovative solutions to the problems of traditional networks. Congestion control is still uncharted territory for this technology. In this work, a congestion prediction scheme has been developed using neural networks. The minimum redundancy maximum relevance (mRMR) feature selection algorithm was applied to data collected from an OMNET++ simulation. The novelty of this study also covers the application of mRMR to an SDN congestion prediction problem. After evaluating the relevance scores, the two highest-ranking features were used. In the learning stage, nonlinear autoregressive exogenous neural network (NARX), nonlinear autoregressive neural network, and nonlinear feedforward neural network algorithms were executed. To the best of the authors' knowledge, these algorithms had not previously been used in SDNs. The experiments showed that NARX was the best prediction algorithm. This machine learning approach can be easily integrated into different topologies and application areas.
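As a hedged sketch of the mRMR step (not the authors' code or data), the example below performs a greedy relevance-minus-redundancy selection using mutual information on placeholder features; the two top-ranked features it returns would then feed a NARX-style predictor such as the one sketched earlier.

```python
# Hedged sketch of greedy mRMR selection on simulated link metrics.
# The features and target here are placeholders, not the OMNET++ data.
import numpy as np
from sklearn.feature_selection import mutual_info_regression

rng = np.random.default_rng(2)
n, d = 1000, 6
X = rng.normal(size=(n, d))                                     # candidate features
y = 0.8 * X[:, 0] + 0.5 * X[:, 3] + 0.1 * rng.normal(size=n)    # congestion level

# Greedy mRMR: maximize relevance to y while minimizing redundancy
# with the already selected features.
relevance = mutual_info_regression(X, y, random_state=0)
selected = [int(np.argmax(relevance))]
while len(selected) < 2:
    scores = []
    for j in range(d):
        if j in selected:
            scores.append(-np.inf)
            continue
        redundancy = np.mean([
            mutual_info_regression(X[:, [j]], X[:, s], random_state=0)[0]
            for s in selected
        ])
        scores.append(relevance[j] - redundancy)
    selected.append(int(np.argmax(scores)))
print("selected feature indices:", selected)
```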

