computational capability
Recently Published Documents


TOTAL DOCUMENTS: 116 (five years: 45)

H-INDEX: 9 (five years: 3)

2022 · Vol 119 (2) · pp. e2023340118
Author(s):  
Srinath Nizampatnam ◽  
Lijun Zhang ◽  
Rishabh Chandak ◽  
James Li ◽  
Baranidharan Raman

Invariant stimulus recognition is a challenging pattern-recognition problem that must be dealt with by all sensory systems. Since neural responses evoked by a stimulus are perturbed in a multitude of ways, how can this computational capability be achieved? We examine this issue in the locust olfactory system. We find that locusts trained in an appetitive-conditioning assay robustly recognize the trained odorant independent of variations in stimulus durations, dynamics, or history, or changes in background and ambient conditions. However, individual- and population-level neural responses vary unpredictably with many of these variations. Our results indicate that linear statistical decoding schemes, which assign positive weights to ON neurons and negative weights to OFF neurons, resolve this apparent confound between neural variability and behavioral stability. Furthermore, simplification of the decoder using only ternary weights ({+1, 0, −1}) (i.e., an “ON-minus-OFF” approach) does not compromise performance, thereby striking a fine balance between simplicity and robustness.
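The ternary "ON-minus-OFF" decoding scheme described above can be sketched with synthetic data. The firing rates, neuron counts, and threshold below are illustrative assumptions, not values from the study:

```python
import numpy as np

# Hypothetical firing-rate matrix: rows = trials, columns = neurons.
# ON neurons fire more during the trained odorant; OFF neurons are suppressed.
rng = np.random.default_rng(0)
n_on, n_off, n_silent = 5, 4, 3
trained = np.hstack([
    rng.normal(8.0, 1.0, (20, n_on)),    # ON neurons: elevated rates
    rng.normal(1.0, 0.5, (20, n_off)),   # OFF neurons: suppressed rates
    rng.normal(3.0, 1.0, (20, n_silent)),
])
untrained = rng.normal(3.0, 1.0, (20, n_on + n_off + n_silent))

# Ternary weights: +1 for ON neurons, -1 for OFF neurons, 0 for the rest.
w = np.array([+1] * n_on + [-1] * n_off + [0] * n_silent)

def on_minus_off(responses, w, threshold):
    """Classify each response vector as 'trained odorant' when the
    ON-minus-OFF score exceeds a fixed threshold."""
    return responses @ w > threshold

threshold = 20.0
hits = on_minus_off(trained, w, threshold).mean()
false_alarms = on_minus_off(untrained, w, threshold).mean()
print(hits, false_alarms)
```

Because the score is a simple signed sum, trial-to-trial variability in individual neurons averages out, which is the robustness-through-simplicity trade-off the abstract highlights.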


2021 · Vol 12 (1) · pp. 384
Author(s):  
Seolwon Koo ◽  
Yujin Lim

In the Industrial Internet of Things (IIoT), various tasks are created dynamically because of small-quantity batch production. Hence, it is difficult to execute tasks only with devices that have limited battery lives and computation capabilities. To solve this problem, we adopted the mobile edge computing (MEC) paradigm. However, if there are numerous tasks to be processed on the MEC server (MECS), it may not be possible to handle all tasks in the server within a delay constraint, owing to its limited computational capability and high network overhead. Therefore, among cooperative computing techniques, we focus on task offloading to nearby devices using device-to-device (D2D) communication. Consequently, we propose a method that determines the optimal offloading strategy in an MEC environment with D2D communication. We aim to minimize the energy consumption of the devices and the task execution delay under certain delay constraints. To solve this problem, we adopt a Q-learning algorithm, a form of reinforcement learning (RL). However, if one learning agent determines whether to offload the tasks of all devices, the computational complexity of that agent increases tremendously. Thus, we cluster the nearby devices that comprise the job shop, and each cluster's head determines the optimal offloading strategy for the tasks that occur within its cluster. Simulation results show that the proposed algorithm outperforms the compared methods in terms of device energy consumption, task completion rate, task blocking rate, and throughput.
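A minimal tabular Q-learning sketch of such an offloading decision follows. The state space (device queue occupancy), the cost figures, and the transition rule are hypothetical stand-ins; the paper's actual state, clustering, and reward design are richer:

```python
import random

# Toy offloading problem: states = device queue occupancy (0..4),
# actions = where to execute the next task.
ACTIONS = ["local", "mec", "d2d"]
ALPHA, GAMMA, EPS = 0.1, 0.9, 0.1  # learning rate, discount, exploration

Q = {(s, a): 0.0 for s in range(5) for a in ACTIONS}

def cost(state, action):
    # Assumed energy+delay cost: local execution worsens with queue length,
    # the MEC server is congested, D2D neighbours sit in between.
    return {"local": 1.0 + state, "mec": 2.0, "d2d": 1.5}[action]

def step(state, action):
    reward = -cost(state, action)  # minimising cost = maximising reward
    nxt = min(4, state + 1) if action == "local" else max(0, state - 1)
    return reward, nxt

random.seed(1)
state = 0
for _ in range(5000):
    if random.random() < EPS:
        a = random.choice(ACTIONS)
    else:
        a = max(ACTIONS, key=lambda x: Q[(state, x)])
    r, nxt = step(state, a)
    best_next = max(Q[(nxt, x)] for x in ACTIONS)
    # Standard Q-learning update: Q <- Q + alpha * (TD error)
    Q[(state, a)] += ALPHA * (r + GAMMA * best_next - Q[(state, a)])
    state = nxt

# Greedy policy after training: a loaded device should prefer offloading.
policy = {s: max(ACTIONS, key=lambda a: Q[(s, a)]) for s in range(5)}
print(policy)
```

With these assumed costs, the learned policy offloads via D2D once the local queue is non-empty, mirroring the paper's motivation for D2D cooperation under delay constraints.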


Webology · 2021 · Vol 18 (2) · pp. 1047-1054
Author(s):  
V. Padmajothi ◽  
J.L. Mazher Iqbal

Scheduling is a critical process in cyber-physical systems: it ensures that computation completes within the physical system's deadline. In a cyber-physical system, the processors are distributed and heterogeneous, which makes low-latency task scheduling challenging. This article presents a decision-tree-based, low-complexity mechanism for task scheduling in a heterogeneous processor environment. The proposed mechanism models the tasks and the processors, and takes into account each processor's current load level, individual computational capability, and memory availability, as well as the communication delay incurred when moving a task from one point to another in the distributed system. Numerical results show that the proposed mechanism schedules tasks quickly and achieves a higher deadline-meeting ratio.


2021
Author(s):  
Yan Li ◽  
Ning Zhong ◽  
David Taniar ◽  
Haolan Zhang

Abstract Solving the motor imagery classification problem has long been a challenge in the brain informatics area. Accuracy and efficiency have been the major obstacles for motor imagery analysis in past decades, since computational capability and available algorithms could not satisfy the demands of complex brain-signal analysis. In recent years, the rapid development of machine learning (ML) methods has empowered people to tackle the motor imagery classification problem with more efficient methods. Among various ML methods, graph neural networks (GNNs) have shown their efficiency and accuracy in dealing with inter-related complex networks. The use of GNNs provides new possibilities for feature extraction from brain structural connectivity. In this paper, we propose a new model called MCGNet+, which improves the performance of our previous model, MutualGraphNet. In this latest model, the mutual information of the input columns forms the initial adjacency matrix; a cosine-similarity calculation between columns then generates a new adjacency matrix in each iteration. The dynamic adjacency matrix, combined with the spatial-temporal graph convolution network (ST-GCN), performs better than a fixed-matrix model. The experimental results indicate that MCGNet+ is robust enough to learn interpretable features and outperforms the current state-of-the-art methods.
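The adjacency-matrix construction described above can be sketched as follows. The histogram-based mutual-information estimator, the toy four-channel signal, and the use of raw columns as "features" are illustrative assumptions, not the model's actual pipeline:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 4))  # hypothetical signal: samples x channels
X[:, 1] = X[:, 0] + 0.1 * rng.normal(size=200)  # channels 0, 1 correlated

def mutual_information(x, y, bins=8):
    """Histogram estimate of mutual information between two columns (nats)."""
    pxy, _, _ = np.histogram2d(x, y, bins=bins)
    pxy /= pxy.sum()
    px, py = pxy.sum(axis=1), pxy.sum(axis=0)
    nz = pxy > 0
    return float((pxy[nz] * np.log(pxy[nz] / np.outer(px, py)[nz])).sum())

def cosine_adjacency(H):
    """Cosine similarity between feature columns -> updated adjacency."""
    norms = np.linalg.norm(H, axis=0, keepdims=True)
    return (H.T @ H) / (norms.T @ norms)

# Initial adjacency from pairwise mutual information of the input columns...
n = X.shape[1]
A_init = np.array([[mutual_information(X[:, i], X[:, j]) for j in range(n)]
                   for i in range(n)])

# ...then, per iteration, the adjacency is recomputed from the current
# features (here the raw columns stand in for layer activations).
A_next = cosine_adjacency(X)
print(A_init[0, 1] > A_init[0, 2], A_next[0, 1] > A_next[0, 2])
```

Both constructions assign the correlated channel pair (0, 1) a stronger edge than the independent pair (0, 2), which is the signal the dynamic adjacency is meant to track across iterations.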


Author(s):  
Upadesh Subedi ◽  
Anil Kunwar ◽  
Yuri Amorim Coutinho ◽  
Khem Gyanwali

Abstract Multi-principal element alloys (MPEAs) occur at or near the centre of the multicomponent phase space, and they have the unique potential to be tailored with a blend of several desirable properties for the development of materials of the future. The lack of universal phase diagrams for MPEAs has been a major challenge in the accelerated design of products with these materials. This study aims to solve this issue by employing data-driven approaches to phase prediction. An MPEA is first represented by numerical fingerprints (composition, atomic size difference, electronegativity, enthalpy of mixing, entropy of mixing, the dimensionless Ω parameter, valence electron concentration, and phase types), and an artificial neural network (ANN) is developed upon datasets of these numerical descriptors. A pyMPEALab GUI interface is developed on top of this ANN model, with the computational capability to derive the remaining input features from the composition features. With the GUI interface, a user can predict the phase(s) of an MPEA by entering solely the composition. It is further explored how phase prediction in composition-varied Al_xCrCoFeMnNi and CoCrNiNb_x can help in understanding the mechanical behavior of these MPEAs.
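Two of the numerical fingerprints named above, the entropy of mixing and the valence electron concentration, follow standard formulas and can be computed directly from a composition. The example composition (equiatomic CoCrFeMnNi) and the per-element VEC table below are illustrative inputs, not the paper's dataset:

```python
import math

R = 8.314  # gas constant, J/(mol K)

# Hypothetical input: equiatomic CoCrFeMnNi (the Cantor alloy).
composition = {"Co": 0.2, "Cr": 0.2, "Fe": 0.2, "Mn": 0.2, "Ni": 0.2}
VEC_TABLE = {"Co": 9, "Cr": 6, "Fe": 8, "Mn": 7, "Ni": 10}

def entropy_of_mixing(comp):
    """Ideal configurational entropy: -R * sum(c_i * ln c_i)."""
    return -R * sum(c * math.log(c) for c in comp.values() if c > 0)

def valence_electron_concentration(comp):
    """Composition-weighted average valence electron concentration."""
    return sum(c * VEC_TABLE[el] for el, c in comp.items())

s_mix = entropy_of_mixing(composition)            # R * ln(5) for 5 equal parts
vec = valence_electron_concentration(composition)
print(round(s_mix, 2), round(vec, 2))
```

For an equiatomic five-element alloy the entropy of mixing reduces to R·ln 5 ≈ 13.38 J/(mol K), the textbook "high-entropy" threshold region; descriptors like these are exactly what the ANN consumes as numerical fingerprints.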


2021
Author(s):  
Masayuki Ushio ◽  
Kazufumi Watanabe ◽  
Yasuhiro Fukuda ◽  
Yuji Tokudome ◽  
Kohei Nakajima

Ecological dynamics are driven by ecological networks consisting of complex interactions. The information-processing capability of artificial networks has been exploited as a computational resource, yet whether an ecological network possesses a computational capability, and how we might exploit it, remain unclear. Here, we show that ecological dynamics can be exploited as a computational resource. We call this approach "Ecological Reservoir Computing" (ERC) and developed two types of ERC. In silico ERC reconstructs ecological dynamics from empirical time series and uses simulated system responses as reservoir states, which predicts the near future of chaotic dynamics and emulates nonlinear dynamics. The real-time ERC uses the population dynamics of a unicellular organism, Tetrahymena thermophila: medium temperature is the input signal and changes in population abundance are the reservoir states. Intriguingly, the real-time ERC satisfies the necessary conditions for reservoir computing and is able to make near-future predictions of model and empirical time series.
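The in silico flavour of reservoir computing can be sketched with a random echo state network: an input drives a fixed recurrent network and only a linear readout is trained. The logistic-map input, reservoir size, and ridge readout below are generic stand-ins for the ecological reservoir in the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

# Target: one-step-ahead prediction of the chaotic logistic map,
# standing in for an ecological time series.
T = 1000
u = np.empty(T)
u[0] = 0.4
for t in range(T - 1):
    u[t + 1] = 3.9 * u[t] * (1 - u[t])

# Fixed random reservoir; spectral radius scaled below 1 so the
# network has the fading memory reservoir computing requires.
N = 100
Win = rng.uniform(-0.5, 0.5, N)
W = rng.normal(size=(N, N))
W *= 0.9 / max(abs(np.linalg.eigvals(W)))

states = np.zeros((T, N))
x = np.zeros(N)
for t in range(T):
    x = np.tanh(Win * u[t] + W @ x)
    states[t] = x

# Train only the linear readout (ridge regression), discarding an
# initial washout period while the reservoir forgets its zero state.
washout, lam = 100, 1e-6
S, y = states[washout:-1], u[washout + 1:]
Wout = np.linalg.solve(S.T @ S + lam * np.eye(N), S.T @ y)

pred = states[:-1] @ Wout
rmse = np.sqrt(np.mean((pred[washout:] - u[washout + 1:]) ** 2))
print(rmse)
```

The real-time ERC replaces this simulated reservoir with live population dynamics: temperature plays the role of `u`, and abundance changes play the role of `states`, while the trained readout stays a simple linear map.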


PhotoniX · 2021 · Vol 2 (1)
Author(s):  
Chong Li ◽  
Xiang Zhang ◽  
Jingwei Li ◽  
Tao Fang ◽  
Xiaowen Dong

Abstract In recent years, the explosive development of artificial intelligence, implemented by artificial neural networks (ANNs), has created inconceivable demands for computing hardware. However, conventional computing hardware based on electronic transistors and the von Neumann architecture cannot satisfy such demand, owing to the unsustainability of Moore's Law and the failure of Dennard's scaling rules. Fortunately, analog optical computing offers an alternative way to unlock unprecedented computational capability and accelerate various computation-intensive tasks. In this article, the challenges of modern computing technologies and potential solutions are briefly explained in Chapter 1. In Chapter 2, the latest research progress in analog optical computing is grouped into three directions: vector/matrix manipulation, reservoir computing, and photonic Ising machines. Each direction is explicitly summarized and discussed. The last chapter explains the prospects and the new challenges of analog optical computing.


Author(s):  
Golak B Mahanta ◽  
B B V L Deepak ◽  
Bibhuti B Biswal

Robotic grasping has become one of the most important domains of robotics research over the past few decades due to its wide range of applications in industrial automation. The model of grasping objects with a robot hand depends on a good number of factors, such as the type, size, and morphology of the object, the type of hand, the degrees of freedom, etc. Thus, the model sometimes becomes mathematically intractable. With the progress in computational capability, soft computing methods have found a way to address the challenges faced by traditional methods when dealing with the robotic grasping problem. This article summarizes the outcome of a systematic study of the application of soft computing methods to robotic grasping and manipulation. The key processes of robotic grasping where soft computing methods are applied are identified, and the research contributions to each process are analyzed. This review presents a state-of-the-art survey and attempts to find the research gaps in the area of soft computing applications to the robotic grasping problem.


2021 · Vol 10 (1)
Author(s):  
Aonan Zhang ◽  
Hao Zhan ◽  
Junjie Liao ◽  
Kaimin Zheng ◽  
Tao Jiang ◽  
...  

Abstract Quantum computing seeks to realize hardware-optimized algorithms for application-related computational tasks. NP (nondeterministic polynomial time) is a complexity class containing many important but intractable problems, such as the satisfiability of potentially conflicting constraints (SAT). According to the well-founded exponential time hypothesis, verifying an SAT instance of size n generally requires the complete solution in an O(n)-bit proof. In contrast, quantum verification algorithms, which encode the solution into quantum bits rather than classical bit strings, can perform the verification task with quadratically less information about the solution, in Õ(√n) qubits. Here we realize a quantum verification machine for SAT with single photons and linear optics. Using tunable optical setups, we efficiently verify satisfiable and unsatisfiable SAT instances and achieve a clear completeness-soundness gap even in the presence of experimental imperfections. The protocol requires only unentangled photons, linear operations on multiple modes, and at most two-photon joint measurements. These features make the protocol suitable for photonic realization and scalable to large problem sizes with advances in high-dimensional quantum information manipulation and large-scale linear-optical systems. Our results open an essentially new route toward quantum advantages and extend the computational capability of optical quantum computing.

