computation process
Recently Published Documents


TOTAL DOCUMENTS: 128 (FIVE YEARS: 33)

H-INDEX: 11 (FIVE YEARS: 2)

2021 ◽  
Vol 11 (4) ◽  
pp. 521-532
Author(s):  
A.A. Zuenko

Within Constraint Programming technology, so-called table constraints, such as typical tables, compressed tables, smart tables, and segmented tables, are widely used. They can represent any other type of constraint, and algorithms for table constraint propagation (logical inference on constraints) eliminate many "redundant" values from the domains of variables while having low computational complexity. In previous studies, the author proposed dividing smart tables into structures of C- and D-types. The generally accepted methodology for solving constraint satisfaction problems is the combined application of constraint propagation methods and backtracking depth-first search. In this study, it is proposed to integrate breadth-first search methods with the author's method of table constraint propagation. D-type smart tables are represented as a join of several orthogonalized C-type smart tables. Each search step selects a pair of C-type smart tables to be joined and then propagates the constraints. To determine the order of joining the orthogonalized smart tables at each step of the search, a specialized heuristic is used that reduces the search space, taking further calculations into account. During constraint propagation, the computation process is accelerated by applying the developed reduction rules for the case of C-type smart tables. The developed hybrid method finds all solutions to constraint satisfaction problems modeled with one or several D-type smart tables, without decomposing the table constraints into elementary tuples.
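To illustrate the kind of filtering that table constraint propagation performs, the following sketch prunes variable domains against an ordinary (uncompressed) table constraint until generalized arc consistency is reached. The variable names and tuples are hypothetical; the C-/D-type smart-table reductions of the paper are beyond this minimal example.

```python
def propagate_table(domains, table):
    """Prune domain values that appear in no supporting tuple of the
    table constraint (generalized arc consistency).

    domains: dict var -> set of allowed values
    table:   list of allowed tuples, one position per variable
             (variables taken in sorted name order)
    """
    vars_ = sorted(domains)
    # Keep only tuples still consistent with the current domains.
    valid = [t for t in table
             if all(v in domains[x] for x, v in zip(vars_, t))]
    # A value survives only if some valid tuple supports it.
    pruned = {}
    for i, x in enumerate(vars_):
        supported = {t[i] for t in valid}
        pruned[x] = domains[x] & supported
    return pruned

domains = {"x": {1, 2, 3}, "y": {1, 2, 3}}
table = [(1, 2), (2, 3)]            # allowed (x, y) combinations
print(propagate_table(domains, table))
# {'x': {1, 2}, 'y': {2, 3}}
```

Propagation removed x=3 and y=1 without any search, which is the effect the abstract describes at scale.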


2021 ◽  
Vol 924 (1) ◽  
pp. 012049
Author(s):  
W A P Dania ◽  
A Hidayat ◽  
B A Nugraha ◽  
E Lestari

Abstract Supply chain collaboration is one of the most crucial factors driving business success in organic fertiliser companies, especially in maintaining a continual flow from upstream to downstream. Understanding the level of the collaboration factors is therefore vital for sustaining the partnership and reducing conflicts among stakeholders. This study aims to measure the depth of collaboration between Company X and its suppliers. The assessment uses the Fuzzy Analytical Hierarchy Process (FAHP) to weight the collaboration behaviour factors and the Supply Chain Collaboration Index (SCCI) to measure the depth of collaboration. The collaboration behaviour factors examined in this study include joint effort, collaboration values, sharing activities, adaptation, trust, power, stability, commitment, continuous improvement, and coordination. Based on the SCCI computation, the collaboration index between Company X and its suppliers is 76.72 on a scale of 1-100, implying that the collaboration is at a moderate level. Consequently, the company needs to recognise the low-scoring factors and develop a strategy for improvement. The aspects that deserve further attention are sharing activities, power, and stability. By enhancing the performance of these factors, the supply chain collaboration index can also be increased.
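As a sketch of how an SCCI-style index can be computed once FAHP weights are available, the following combines factor weights and assessed scores into a single index on a 0-100 scale. The weights and scores below are illustrative assumptions covering only three of the ten behaviour factors, not the study's actual values.

```python
def collaboration_index(weights, scores):
    """Weighted collaboration index on a 0-100 scale.

    weights: factor -> FAHP-derived weight (weights sum to 1)
    scores:  factor -> assessed score on a 0-100 scale
    """
    assert abs(sum(weights.values()) - 1.0) < 1e-9, "weights must sum to 1"
    return sum(weights[f] * scores[f] for f in weights)

# Hypothetical weights/scores for three of the ten behaviour factors
weights = {"trust": 0.5, "commitment": 0.3, "power": 0.2}
scores  = {"trust": 85, "commitment": 75, "power": 60}
print(round(collaboration_index(weights, scores), 2))  # 77.0
```

A low-weighted but low-scoring factor (here "power") pulls the index down less than a high-weighted one, which is why the FAHP weighting step matters before prioritising improvements.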


2021 ◽  
Vol 36 (1) ◽  
pp. 486-499
Author(s):  
Nur Syahirah Hashim ◽  
Khairul Nizam Tahar ◽  
Wiwin Windupranata ◽  
Saiful Aman Hj Sulaiman

Bathymetry measurements often contain gaps, or 'holes', in the data. As a result, hydrographic surveyors often work with sparse data, and even when the data are dense and evenly spaced, gaps remain over time. This paper presents coastal depth extraction from satellite images, addressing the problems encountered during the bathymetry derivation process and the problems related to the spacing, distribution, and quantity of the single-beam echo sounder (SBES) data. Spatial interpolation is therefore a suitable approach to solving these problems. This study produces Satellite-Derived Bathymetry (SDB) from Landsat 8 images at Pantai Tok Jembal, Terengganu, Malaysia. The proposed method first interpolates the SBES points in the calibration data using spatial predictors, i.e. Inverse Distance Weighting, Thin-Plate Spline, Spline with Tension, Universal Kriging, Natural Neighbor, and Topo to Raster. Second, the raster output created by the interpolation is converted into a point shapefile. Third, an intersect function eliminates points that fall outside the study domain. Finally, the newly generated SBES calibration points are applied in the SDB computation to generate the SDB. A comparative analysis was then conducted between the six SDB results, each generated with a different set of newly generated calibration data. The results indicate that the SDB produced with the Universal Kriging calibration data (RMSE: 0.718 m) was the best. In summary, this study attained its research objectives by using the newly generated calibration data to generate SDB. Spatial interpolation recreates the SBES data, turning irregularly spaced, sparse data into uniformly spaced, dense data, which facilitates pixel-to-point value extraction and helps refine the bathymetry derivation process. Furthermore, the proposed method is suitable when the available data are limited or otherwise not applicable.
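Of the spatial predictors listed, Inverse Distance Weighting is the simplest to sketch. The following minimal, dependency-free example estimates a depth at an unsampled location from hypothetical SBES soundings; production work would use a GIS interpolation tool over a full raster, as in the study.

```python
import math

def idw(points, q, power=2):
    """Inverse Distance Weighted depth estimate at query point q.

    points: list of (x, y, depth) soundings
    q:      (x, y) location to estimate
    """
    num = den = 0.0
    for x, y, z in points:
        d = math.hypot(x - q[0], y - q[1])
        if d == 0:                  # query coincides with a sounding
            return z
        w = 1.0 / d ** power        # nearer soundings weigh more
        num += w * z
        den += w
    return num / den

soundings = [(0, 0, -2.0), (10, 0, -4.0), (0, 10, -6.0)]
print(round(idw(soundings, (5, 5)), 3))
# -4.0 (all three soundings are equidistant, so the estimate is their mean)
```

Kriging and the spline methods differ in how these weights are chosen (from a fitted variogram or a smoothness criterion rather than raw distance), which is why the six calibration rasters yielded different RMSEs.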


2021 ◽  
Vol 3 (134) ◽  
pp. 56-66
Author(s):  
Dmytro Moroz

The paper develops an approach to defining a methodology for evaluating the efficiency of modular multiprocessor computing systems, with the main attention focused on how the network interface affects this efficiency. The formation of the multiprocessor system's network interface architecture and its basic modes of operation are analyzed. To evaluate the processes occurring in the system during the transmission of information flows, the network system bandwidth and the switch throughput were compared, which allowed determining the preconditions for the optimal selection of components for the multiprocessor computing system's network interface. The research also yielded analytical relations for determining the optimal number of system nodes under different operating modes, as well as the processor coherency coefficient, the network interface characteristics, and the size of the computing area. The derived analytical relationships show that the optimal number of blades in a multiprocessor computing system, i.e. the number providing its highest speed, decreases as the computing power of the included processors increases. It is shown that network data interchange among the nodes of a multiprocessor computing system is more likely to impede the overall computation process the less time is spent directly on solving the specific problem.


Author(s):  
R. Sathya et al.

Cyber-physical systems combine physical processes with computation. Embedded computers and systems monitor and control the physical processes through feedback loops, in which the physical side affects computations and vice versa. Cyber-physical systems face a vast number of failures and cyber-attacks, which limits the growth and accuracy of intrusion detection systems and of the actions that may be taken to reduce damage to the system. As cyber-physical systems are yet to be deployed universally, the application of intrusion detection mechanisms to them remains an open question. This paper therefore discusses how to suitably apply an intrusion detection mechanism to cyber-physical systems. By analysing the distinctive properties of cyber-physical systems, the exact requirements are outlined first. Then, the design framework of the intrusion detection mechanism in cyber-physical systems is presented in terms of the system layers and the specific detection techniques. Finally, several important research issues are identified to guide future studies.


2021 ◽  
Vol 3 (1) ◽  
pp. 49-58
Author(s):  
Subarna Shakya ◽  
Joby P P

Wireless Body Sensor Networks (BSNs) are devices that can be worn by human beings. They contain sensors with varying transmission, computation, storage, and sensing qualities. When data are obtained from multiple devices, they must be merged so that errors are not transmitted, resulting in high-quality fused data. In this work, we design a data fusion approach for data obtained from BSNs using fog computing. Everyday activities are captured as data by an array of sensors and merged to form high-quality data. The resulting data are then given as input to an ensemble classifier to predict heart-related diseases at an early stage. In the fog computing environment, the data collector is established and the computation is performed by a decentralised system; the final output is produced by combining the results of the nodes using the fog computing database. A novel kernel random data collector is used for classification to improve quality. Experimental analysis indicates an accuracy of 96% with a tree depth of about 10, an estimator count of 45, and 7 feature parameters.
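The decentralised compute-then-combine idea can be sketched as follows: each fog node classifies its own sensor data locally, and the central layer fuses the decisions by majority vote. The per-node rule and thresholds below are hypothetical stand-ins for the paper's kernel random data collector and ensemble classifier.

```python
from collections import Counter

def node_predict(sample, threshold):
    """A fog node's simple local rule over its sensor features
    (a stand-in for a trained per-node classifier)."""
    return 1 if sum(sample) / len(sample) > threshold else 0

def fused_predict(sample, thresholds):
    """Fuse the nodes' decisions by majority vote, as the central
    fog layer combining decentralised results."""
    votes = [node_predict(sample, t) for t in thresholds]
    return Counter(votes).most_common(1)[0][0]

sample = [0.8, 0.6, 0.9]                       # normalised sensor readings
print(fused_predict(sample, [0.5, 0.7, 0.9]))  # 1 (two of three nodes vote 1)
```

An ensemble of 45 estimators with depth 10, as reported in the abstract, follows the same pattern at scale: many weak decisions combined into one robust prediction.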


2021 ◽  
Vol 13 (2) ◽  
pp. 67-80
Author(s):  
Yudha Suherman ◽  
Tajuddin Nur

This paper discusses the effect of combining a magnetic shaping technique with an axial channel in the rotor core to reduce the cogging torque of a permanent magnet synchronous generator. The computation is performed using the response surface optimization method. Two types of axial channel are employed, circular and hexagonal, with an axial channel area in the rotor core of 0.000279683 m². The magnetic shaping was determined with an angle of 1° and a surface angle of 53°. The effect of combining the magnetic shaping cogging torque reduction technique with the axial channel was analyzed numerically using the finite element method (FEMM). The analysis shows that the combination reduces the cogging torque by 98% compared with the initial design (initial structure). Another advantage of combining the two cogging torque reduction techniques is that there is no significant increase in the magnetic flux density of the machine core. Thus, the combination of the magnetic shaping technique and the axial channel in the rotor core can significantly reduce the cogging torque.


Energies ◽  
2021 ◽  
Vol 14 (4) ◽  
pp. 790
Author(s):  
Shabana Urooj ◽  
Fadwa Alrowais ◽  
Ramya Kuppusamy ◽  
Yuvaraja Teekaraman ◽  
Hariprasath Manoharan

In the present scenario, the depletion of conventional sources is causing an energy crisis, which in turn increases the load demand for electricity. Renewable energy sources play a vital role in mitigating the energy crisis and reducing CO2 emissions. Solar energy is the major source of power generation, as it is also the root cause of wind, tides, etc. However, due to climatic conditions, the availability of PV sources varies over time, so it is essential to track the maximum source of energy by implementing MPPT algorithms. Conventional MPPT algorithms, however, are limited under partial shading conditions. The issue of tracking power under partial shading can be resolved by implementing an intelligent optimization tracking algorithm built around a computation process. Although many nature-inspired algorithms have been proposed for real-world problems, Mirjalili developed the dragonfly algorithm to provide better optimization solutions for real-time applications. The proposed work implements the dragonfly optimization algorithm to track the maximum power from solar sources and involves machine learning, image processing, and data computation.
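The advantage of an optimization-style MPPT under partial shading can be sketched with a population-based search over a toy P-V curve that has a local and a global power peak. This greatly simplified example borrows only the attraction-to-best ("food source") behaviour from the dragonfly algorithm and omits its separation, alignment, cohesion, and enemy-avoidance terms; the curve and all parameters are illustrative assumptions.

```python
import random

def pv_power(v):
    """Toy P-V curve with two peaks, mimicking partial shading:
    a local peak near v = 0.3 and the global peak near v = 0.7."""
    return (0.6 * max(0.0, 1 - ((v - 0.3) / 0.15) ** 2)
            + 1.0 * max(0.0, 1 - ((v - 0.7) / 0.15) ** 2))

def swarm_mppt(n=20, iters=60, seed=1):
    """Population-based search for the global maximum power point."""
    random.seed(seed)
    pop = [i / (n - 1) for i in range(n)]    # spread over the voltage range
    gbest = max(pop, key=pv_power)           # best operating point seen so far
    for _ in range(iters):
        best = max(pop, key=pv_power)        # current "food source"
        if pv_power(best) > pv_power(gbest):
            gbest = best
        # Attract each agent toward the best, with small random exploration.
        pop = [min(1.0, max(0.0, p + 0.5 * (best - p)
                            + random.uniform(-0.05, 0.05)))
               for p in pop]
    return gbest

v = swarm_mppt()
print(round(v, 2))   # close to 0.7, the global peak
```

A conventional hill-climbing MPPT started near 0.3 would settle on the 0.6-power local peak; the population explores the whole range, which is the property the abstract attributes to the optimization-based approach.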


This chapter gives an overview of programming methods (algebraic, parallel, adaptive, and others) related to the approach to program design proposed in the book. Algorithm algebras intended for the formalized description of algorithms in the form of high-level schemes are considered: Dijkstra's algebra, associated with the technology of structured programming; Kaluzhnin's algebra, for the graphical description of non-structured schemes of algorithms; Glushkov's algebra, for the description of structured schemes, including facilities for computation process prediction and the design of parallel algorithms; and the algebra of algorithmics, which is based on the aforementioned algebras. The signature of each algebra consists of predicate and operator constructs conforming to a specific method of algorithm design: structured, non-structured, and others. Basic notions related to software auto-tuning are considered, and a classification of auto-tuners is given.


Author(s):  
Ashok Kumar Yadav

Unprecedented advances in wireless technology and in the storage and computing power of portable devices, together with gigabit-speed internet connectivity, enable machine-to-machine communication. IoT connects many nodes simultaneously to store, access, and share information, improving quality of life by eliminating human involvement. Despite its unlimited benefits, IoT faces many issues that threaten to eclipse it in practice because of its centralized model. Scalability, reliability, privacy, and security challenges arise from the huge number of IoT nodes, the centralized architecture, and complex networks. A centralized architecture can lead to problems such as a single point of failure, one-way traffic, huge infrastructure cost, and a single source of trust, alongside privacy and security concerns. Therefore, to overcome the issues of centralized IoT infrastructure, the authors turn to decentralized infrastructure, which may be the better choice in terms of performance, reliability, security, privacy, and trust. Blockchain is an influential recent technology for decentralizing computation, process management, and trust, and combining blockchain with IoT may have the potential to solve IoT's scalability, reliability, privacy, and security issues. This chapter gives an overview of some important consensus algorithms, IoT challenges, the integration of blockchain with IoT and its challenges, and future research issues for the combination of blockchain and IoT.

