Neural Computing
Recently Published Documents


TOTAL DOCUMENTS: 417 (FIVE YEARS: 81)
H-INDEX: 34 (FIVE YEARS: 5)

Author(s): Ameni Hedhli, Haithem Mezni, Lamjed Ben Said

Mathematics, 2021, Vol 9 (19), pp. 2522
Author(s): Harwant Singh Arri, Ramandeep Singh Khosa, Sudan Jha, Deepak Prashar, Gyanendra Prasad Joshi, ...

Scheduling resources and jobs on a fog computing network in a way that increases device efficacy and throughput, reduces response time, and keeps the system balanced is a non-deterministic challenge. Using machine learning as a component of neural computing, we developed an improved Task Group Aggregation (TGA) overflow-handling system for fog computing environments. Using TGA in conjunction with an Artificial Neural Network (ANN), we assess the model's QoS characteristics to detect an overloaded server and then migrate its workload to virtual machines (VMs). Overloaded and underloaded VMs are balanced according to parameters such as CPU, memory, and bandwidth, so that fog computing overflow is controlled with the help of the ANN and machine learning. Additionally, the Artificial Bee Colony (ABC) algorithm, a neural computing technique, is employed as an optimization method to separate services and users according to their individual qualities. The proposed optimized ANN-based TGA algorithm improves both response time and success rate: alongside a minimal reaction time, the average success rate improves by about 3.6189 percent and resource scheduling efficiency by 3.9832 percent over existing work. In terms of VM efficiency for resource scheduling, the average success rate, the average task-completion success rate, and the VM response time are all improved. The proposed TGA-based overflow handling in a fog computing domain thus achieves better response times than current approaches and demonstrates how artificial intelligence can make fog computing systems more efficient.
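A minimal sketch of the overload-detection and balancing step described above, assuming a pre-trained scoring function: the single sigmoid layer, its weights, the thresholds, and the migration rule below are illustrative stand-ins, not the authors' published ANN, ABC optimizer, or parameters.

```python
import numpy as np

# Per-VM utilization: columns are CPU, memory, bandwidth (fractions of capacity).
vms = np.array([
    [0.92, 0.88, 0.75],   # likely overloaded
    [0.35, 0.40, 0.20],   # underloaded
    [0.60, 0.55, 0.50],   # roughly balanced
])

# Stand-in for the trained ANN: one linear layer plus a sigmoid, scoring
# how overloaded each VM looks from its QoS features (weights are assumed).
w = np.array([0.5, 0.3, 0.2])
score = 1.0 / (1.0 + np.exp(-10.0 * (vms @ w - 0.7)))

overloaded = np.where(score > 0.5)[0]    # assumed decision thresholds
underloaded = np.where(score < 0.2)[0]

# Move work from each overloaded VM to the least-loaded VM available.
for src in overloaded:
    if len(underloaded):
        dst = underloaded[np.argmin(score[underloaded])]
        print(f"migrate tasks: VM{src} (score {score[src]:.2f}) "
              f"-> VM{dst} (score {score[dst]:.2f})")
```

In the full system, the sigmoid scorer would be replaced by the trained ANN and the destination choice by the ABC-optimized assignment; the control flow (score, threshold, migrate) is the same.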


Author(s): Q N Wang, C Zhao, W Liu, H van Zalinge, Y N Liu, ...

Entropy, 2021, Vol 23 (9), pp. 1124
Author(s): Wasiq Ali, Yaan Li, Muhammad Asif Zahoor Raja, Wasim Ullah Khan, Yigang He

In this study, an application of deep-learning-based neural computing is proposed for efficient real-time state estimation of a Markov-chain underwater maneuvering object. The designed intelligent strategy exploits the strength of the nonlinear autoregressive with exogenous input (NARX) network model, which can estimate the dynamics of systems that follow a discrete-time Markov chain. Nonlinear Bayesian filtering techniques are often applied to underwater maneuvering state estimation by following a state-space methodology. The robustness and precision of the NARX neural network are investigated for accurate state prediction of a passive, highly maneuvering Markov-chain underwater target. A continuous coordinated-turn trajectory of the underwater maneuvering object is modeled to analyze the performance of the neural computing paradigm. State estimation is developed in the context of bearings-only tracking, in which the efficiency of the NARX neural network is investigated for both ideal and complex ocean environments. The real-time position and velocity of the maneuvering object are computed for five cases by varying the standard deviation of the white Gaussian measurement noise. Extensive Monte Carlo simulation results validate the competence of NARX neural computing over conventional generalized pseudo-Bayesian filtering algorithms such as the interacting multiple model extended Kalman filter and the interacting multiple model unscented Kalman filter.
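A minimal NARX-style sketch of the idea, assuming a simulated coordinated-turn trajectory: the next position is regressed from lagged positions (the autoregressive terms) and lagged noisy bearings (the exogenous input). The trajectory, lag depth, and network size here are illustrative assumptions, not the paper's configuration.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
t = np.arange(0, 60, 0.1)
x, y = 100 * np.cos(0.05 * t), 100 * np.sin(0.05 * t)     # coordinated turn
bearing = np.arctan2(y, x) + rng.normal(0, 0.01, t.size)  # noisy bearings-only data

# NARX regressors: lags 1..4 of x, y (autoregressive) and bearing (exogenous).
lags = 4
X = np.column_stack(
    [np.roll(s, k)[lags:] for s in (x, y, bearing) for k in range(1, lags + 1)]
)
Y = np.column_stack([x[lags:], y[lags:]])    # next-step position targets

# Small feed-forward net as the NARX map; size is an assumption.
model = MLPRegressor(hidden_layer_sizes=(32,), max_iter=2000, random_state=0)
model.fit(X[:400], Y[:400])                  # train on the first part of the track
rmse = np.sqrt(np.mean((model.predict(X[400:]) - Y[400:]) ** 2))
print(f"held-out position RMSE: {rmse:.3f}")
```

Repeating this fit over many noise realizations (varying the bearing-noise standard deviation, as in the paper) is what the Monte Carlo comparison against the IMM-EKF and IMM-UKF amounts to.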


2021
Author(s): Yurui Qu, Ming Zhou, Erfan Khoram, Nanfang Yu, Zongfu Yu

Abstract: There is strong interest in using physical waves for artificial neural computing because of their unique advantages in speed and intrinsic parallelism. Resonance, a ubiquitous feature across many wave systems, is a natural candidate for analog computing on temporal signals. We demonstrate that resonance can be used to construct stable and scalable recurrent neural networks. By including resonators with different lifetimes, the computing system develops both short-term and long-term memory simultaneously.
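A minimal sketch of that memory mechanism, assuming a discrete-time bank of damped resonators as a stand-in for the physical wave system: each complex mode decays with its own lifetime, so short-lived modes forget quickly (short-term memory) while long-lived modes retain an input far longer (long-term memory). The frequencies and lifetimes below are illustrative assumptions.

```python
import numpy as np

dt, T = 0.1, 3000
freqs = np.array([1.0, 2.0, 4.0])    # resonator frequencies (rad/s), assumed
taus = np.array([0.5, 5.0, 50.0])    # lifetimes: short-, mid-, long-term memory

# Excite every mode with the same unit impulse at t = 0 and watch the decay.
# Each complex amplitude evolves as a(t+dt) = a(t) * exp((i*w - 1/tau) * dt).
a = np.ones(3, dtype=complex)
env = np.zeros((T, 3))
for n in range(T):
    env[n] = np.abs(a)
    a *= np.exp((1j * freqs - 1.0 / taus) * dt)

# Steps until each mode's amplitude falls below 1% of the impulse.
for k, tau in enumerate(taus):
    kept = int(np.searchsorted(-env[:, k], -0.01))
    print(f"tau={tau:5.1f}s: impulse retained for ~{kept} steps ({kept * dt:.1f}s)")
```

Feeding a time series into such a bank and training a linear readout on the mode amplitudes yields a recurrent network whose memory span is set by the mix of lifetimes, which is the property the abstract highlights.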


Author(s): Ajmal Saeed Mian, Lei Wang, Ruiping Wang, Hamid Laga, Naveed Akhtar

2021, Vol 14 (4), pp. 308-317
Author(s): Naoki Wakamiya
