central processing
Recently Published Documents


TOTAL DOCUMENTS

1147
(FIVE YEARS 402)

H-INDEX

48
(FIVE YEARS 9)

Author(s):  
Dana Khwailleh ◽  
Firas Al-balas

The rapid growth of the internet of things (IoT) across multiple areas brings research challenges closely linked to the nature of IoT technology. In particular, the data collected from IoT sensors must be secured in an efficient and dynamic way that takes the nature and importance of the data into account. In this paper, a dynamic algorithm is developed that distinguishes the importance of the collected data and applies a suitable security approach to each type. This is done with a hybrid system that combines block cipher and stream cipher schemes. After the data are classified using machine learning classifiers, the less important data are encrypted with a stream cipher (SC), the Rivest Cipher 4 (RC4) algorithm, and the more important data with a block cipher (BC), the Advanced Encryption Standard (AES) algorithm. A simulation-based performance evaluation shows that the proposed hybrid system encrypts the data with less central processing unit (CPU) time while improving the security of the data.
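The dispatch idea described above can be sketched as follows. This is an illustrative reconstruction, not the authors' code: RC4 is implemented inline for demonstration only (it is insecure for production use), and the AES path is left as a pluggable callable since the paper relies on a standard AES implementation for the important data.

```python
# Illustrative sketch (not the paper's implementation): route data to a
# stream or block cipher based on an importance label, as the hybrid
# scheme describes. RC4 here is for demonstration only -- not secure.

def rc4(key: bytes, data: bytes) -> bytes:
    """Encrypt/decrypt `data` with RC4 (symmetric: applying twice restores it)."""
    # Key-scheduling algorithm (KSA)
    S = list(range(256))
    j = 0
    for i in range(256):
        j = (j + S[i] + key[i % len(key)]) % 256
        S[i], S[j] = S[j], S[i]
    # Pseudo-random generation algorithm (PRGA)
    i = j = 0
    out = bytearray()
    for byte in data:
        i = (i + 1) % 256
        j = (j + S[i]) % 256
        S[i], S[j] = S[j], S[i]
        out.append(byte ^ S[(S[i] + S[j]) % 256])
    return bytes(out)

def encrypt_by_importance(data: bytes, important: bool, key: bytes,
                          block_encrypt=None) -> bytes:
    """Dispatch: important data -> block cipher (AES in the paper),
    less important data -> RC4 stream cipher."""
    if important:
        if block_encrypt is None:
            raise ValueError("an AES-style block_encrypt callable is required")
        return block_encrypt(key, data)
    return rc4(key, data)

ciphertext = encrypt_by_importance(b"sensor reading", important=False, key=b"Key")
```

In the paper's pipeline, the `important` flag would come from the machine learning classifier's output rather than being set by hand.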


Author(s):  
Hala Khankhour ◽  
Otman Abdoun ◽  
Jâafar Abouchabaka

<span>This article presents a new approach that integrates parallelism into the genetic algorithm (GA) to solve the routing problem in a large ad hoc network, where the goal is to find the shortest routing path. First, we fix the source and destination and use variable-length chromosomes (routes) whose genes are nodes. Our work answers the following question: which is the better way to find the shortest path, the sequential or the parallel method? All modern systems support concurrent processes and threads. Processes are instances of programs that generally run independently; for example, when a program is started, the operating system spawns a new process that runs in parallel with other programs. Within each process, threads can execute code simultaneously, making the most of the available central processing unit (CPU) cores. The obtained results show that our algorithm yields solutions of much better quality. We then propose an example network of 40 nodes to study the difference between the sequential and parallel methods, and increase the number of sensors to 100 nodes to solve the shortest path problem in a large ad hoc network.</span>
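The parallel step described above, scoring many candidate routes at once, can be sketched with a thread pool. This is a minimal illustration, not the authors' implementation; the graph, link costs, and candidate routes below are invented for the example.

```python
# Illustrative sketch: evaluate the fitness of variable-length chromosomes
# (routes) concurrently with a thread pool, then keep the cheapest route.
from concurrent.futures import ThreadPoolExecutor

# Weighted ad hoc network as an adjacency map: GRAPH[u][v] = link cost.
GRAPH = {
    "S": {"A": 2, "B": 5},
    "A": {"C": 2, "B": 1},
    "B": {"D": 2},
    "C": {"D": 3},
    "D": {},
}

def route_cost(route):
    """Fitness of one chromosome: total path cost, or infinity if a link is missing."""
    cost = 0
    for u, v in zip(route, route[1:]):
        if v not in GRAPH.get(u, {}):
            return float("inf")
        cost += GRAPH[u][v]
    return cost

def best_route(population, workers=4):
    """Score every candidate route concurrently and return the cheapest one."""
    with ThreadPoolExecutor(max_workers=workers) as pool:
        costs = list(pool.map(route_cost, population))
    return min(zip(costs, population))[1]

population = [["S", "A", "C", "D"], ["S", "B", "D"], ["S", "A", "B", "D"]]
winner = best_route(population)  # ["S", "A", "B", "D"], total cost 5
```

For CPU-bound fitness functions in CPython, a process pool (`concurrent.futures.ProcessPoolExecutor`) would exploit multiple cores better than threads, which matches the processes-versus-threads distinction the abstract draws.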


2022 ◽  
Vol 24 (1) ◽  
pp. 61-71
Author(s):  
Walaa Mahmoud Shehata ◽  
◽  
Fatma Khalifa Gad ◽  
Mohamed Galal Helal ◽  

Global warming is nowadays one of the most important issues; it is driven by the increasing concentration of carbon dioxide and other greenhouse gases in the atmosphere, much of which results from gas combustion. Oil and gas plants therefore need to be reviewed constantly over time to maintain high performance and operability, especially when feed composition and rate change, in order to meet standard product specifications. The aim of this study is to examine the effect of flare gas recovery using gas compressors on the economic and environmental performance of an existing oilfield plant. The commercial simulation program Aspen HYSYS Version 11 was used. The studied plant is the Kalabsha Central Processing Facility (KCPF) in the Western Desert of Egypt. This plant handles 30 million standard cubic feet per day (MMSCFD) from the free water knock-out drum and 1.6 MMSCFD of gases from heaters. Of the gas, 20 MMSCFD is charged to the gas pipeline and 10 MMSCFD is sent to the flare together with the 1.6 MMSCFD. It is proposed to install gas compressors to capture the gases from the free water knock-out drum and the heaters before they are sent to the flare. This technology can serve as a guide for upgrading existing and new oil and gas plants to reduce gas flaring. Besides protecting the environment, recovering the gas instead of burning it adds economic profit and extends the life of the flare equipment.
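The gas balance implied by the figures in the abstract can be checked with a few lines of arithmetic; all values are taken directly from the abstract, in MMSCFD.

```python
# Quick check of the gas balance reported in the abstract (values in MMSCFD).
fwko_feed = 30.0      # gas from the free water knock-out drum
heater_gas = 1.6      # gas from the heaters
to_pipeline = 20.0    # gas charged to the pipeline

# Gas currently flared: the FWKO gas not sent to the pipeline, plus heater gas.
flared = (fwko_feed - to_pipeline) + heater_gas

# The proposed compressors would recover this entire flare stream.
recovered = flared
print(f"recovered gas: {recovered:.1f} MMSCFD")  # prints "recovered gas: 11.6 MMSCFD"
```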


Author(s):  
Prerana Shenoy S. P. ◽  
Sai Vishnu Soudri ◽  
Ramakanth Kumar P. ◽  
Sahana Bailuguttu

Observability is the ability to monitor the state of a system, which involves tracking standard metrics such as central processing unit (CPU) utilization, memory usage, and network bandwidth. The better we understand the state of the system, the better we can improve its performance by recognizing unwanted behavior and improving its stability and reliability. To achieve this, it is essential to build an automated monitoring system that is easy to use and efficient. To that end, we have built a Kubernetes operator that automates the deployment and monitoring of applications and reports unwanted behavior in real time. It also enables visualization of the metrics generated by each application and allows these visualization dashboards to be standardized for each type of application. Thus, it improves the system's productivity and saves considerable time and resources in deploying monitored applications, upgrading Kubernetes resources for each deployed application, and migrating applications.
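The core alerting idea, flagging "unwanted behavior" when a monitored metric stays above a threshold, can be sketched as below. This is only a toy illustration of the concept, not the authors' operator; the metric name, threshold, and window length are assumptions.

```python
# Toy sketch of threshold-based alerting (not the authors' Kubernetes
# operator): fire an alert when a metric exceeds its threshold for several
# consecutive samples, which filters out momentary spikes.
from collections import deque

class ThresholdMonitor:
    def __init__(self, threshold, consecutive=3):
        self.threshold = threshold
        self.window = deque(maxlen=consecutive)

    def observe(self, value):
        """Record one sample; return True if an alert should fire now."""
        self.window.append(value)
        return (len(self.window) == self.window.maxlen
                and all(v > self.threshold for v in self.window))

# Hypothetical CPU-utilization samples (percent); alert after 3 high readings.
cpu = ThresholdMonitor(threshold=80.0, consecutive=3)
alerts = [cpu.observe(v) for v in [40, 85, 90, 95, 50]]
# alerts == [False, False, False, True, False]
```

A real operator would feed this logic from a metrics pipeline and turn the `True` result into a notification, rather than evaluating a hard-coded list.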


Tomography ◽  
2022 ◽  
Vol 8 (1) ◽  
pp. 45-58
Author(s):  
Bing Li ◽  
Chuang Liu ◽  
Shaoyong Wu ◽  
Guangqing Li

Due to the complex shape of the vertebrae and a background containing a great deal of interfering information, it is difficult to accurately segment the vertebrae from a computed tomography (CT) volume by manual segmentation. This paper proposes a convolutional neural network for vertebrae segmentation, named Verte-Box. First, in order to enhance feature representation and suppress interfering information, this paper places a robust attention mechanism in the central processing unit of the network, comprising a channel attention module and a dual attention module. The channel attention module explores and emphasizes the interdependence between channel graphs of low-level features. The dual attention module enhances features along the location and channel dimensions. Second, we add a multi-scale convolution block to the network, which makes full use of different combinations of receptive field sizes and significantly improves the network's perception of the shape and size of the vertebrae. In addition, we connect the rough segmentation prediction maps generated by each feature in the feature box to generate the final fine prediction result, so the deeply supervised network can effectively capture vertebrae information. We evaluated our method on the publicly available dataset of the CSI 2014 Vertebral Segmentation Challenge and achieved a mean Dice similarity coefficient of 92.18 ± 0.45%, an intersection over union of 87.29 ± 0.58%, and a 95% Hausdorff distance of 7.7107 ± 0.5958, outperforming other algorithms.
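The gist of channel attention, summarizing each channel by global average pooling and using a gate to rescale it, can be illustrated in a few lines. This is a toy sketch of the general idea only, not the Verte-Box module: a real module learns the gating transform, whereas here the gate is applied directly to the pooled value.

```python
# Toy illustration of channel attention (not the Verte-Box code): each
# channel is summarized by global average pooling, passed through a sigmoid
# gate, and the resulting weight rescales that channel's feature map.
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def channel_attention(feature_maps):
    """feature_maps: list of channels, each a 2D list of activations."""
    out = []
    for channel in feature_maps:
        flat = [v for row in channel for v in row]
        gap = sum(flat) / len(flat)          # global average pooling
        weight = sigmoid(gap)                # gate (learned transform omitted)
        out.append([[weight * v for v in row] for row in channel])
    return out

features = [[[1.0, 2.0], [3.0, 4.0]],       # high-activation channel
            [[-1.0, -2.0], [-3.0, -4.0]]]   # low-activation channel
weighted = channel_attention(features)
```

The effect is that channels with stronger average activation are emphasized and weak ones are suppressed, which is the "emphasize interdependence between channels" behavior the abstract describes.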


Informatics ◽  
2021 ◽  
Vol 18 (4) ◽  
pp. 17-25
Author(s):  
A. N. Markov ◽  
R. O. Ihnatovich ◽  
A. I. Paramonov

Objectives. The authors aimed to demonstrate the need to implement a video conferencing service in the learning process, to select a video conferencing service, and to conduct a computer experiment with the selected BigBlueButton video conferencing service.
Methods. The problems of choosing a video conferencing service from the list of available video conferencing software are considered. At the software selection stage, the features of its operation and the requirements for hardware and for integration into internal information systems are indicated. Load testing of the video conferencing service was carried out using volume and stability testing.
Results. The load graphs for the hardware components of the virtual server over a long-term period are presented. The article describes the results of analyzing the graphs in order to identify the key features of the video conferencing service during test and trial operation.
Conclusion. Taking into account licensing costs as well as integration into the e-learning system, a video conferencing service was chosen. A computer experiment was carried out with the selected BigBlueButton video conferencing service. The hardware operation features of the virtual server hosting the BigBlueButton system were determined. The load graphs for the central processing unit, random access memory, and local computer network are presented. Problems of service operation at the load-increase stage are formulated.


Author(s):  
Ghaidaa Mohammad Esber ◽  
Mothanna Alkubeily ◽  
Samer Sulaiman

Wireless sensor network (WSN) simulation programs represent an actual system without the need to deploy a real testbed, which is highly constrained by the available budget; moreover, the physical layer operations of most of these programs are hidden and work implicitly. This motivated us to build the kernel of a virtual simulation platform able to simulate protocol operations and algorithms at the level of the node's processing unit. The proposed system aims to observe the execution of operations at the low level of the wireless sensor's physical infrastructure, with the ability to make modifications at this level. This gives developers of wireless sensor nodes the ability to test their ideas without using a physical environment. We built the functional operations related to the platform kernel in several stages. As a first step, we defined the essential operations inside a virtual microprocessor that uses a partial set of MIPS instructions and built the kernel of a minimized virtual WSN simulator based on the proposed microprocessor; that is, any number of nodes can be added inside the GUI of the WSN simulator kernel, and these nodes use the proposed virtual microprocessor. We then improved the platform by adding the instruction set of a real microprocessor used in wireless sensor network nodes. Finally, to ease and simplify the interaction between the platform kernel's GUI and the user, we built a simplified compiler that allows the user to work with the microprocessor GUI inside each node and to express protocol and algorithm operations through a set of commands and functions without dealing with a low-level language (assembly) directly.
The simulation results show the high flexibility and performance of this platform in observing the sequence of operations inside wireless sensor nodes at the assembly level, in addition to focusing on parameters related to the microprocessor inside each node.
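The core of such a virtual microprocessor, an interpreter loop over a partial MIPS-like instruction set, can be sketched as follows. The tuple encoding and the three instructions below are our own simplification for illustration, not the paper's design.

```python
# Sketch of a tiny virtual microprocessor executing a partial MIPS-like
# instruction set (our simplification, not the paper's implementation).

def run(program, num_regs=8, max_steps=10_000):
    """Execute a list of (op, ...) tuples; register 0 is hardwired to zero."""
    regs = [0] * num_regs
    pc = 0
    steps = 0
    while 0 <= pc < len(program) and steps < max_steps:
        op, *args = program[pc]
        pc += 1
        if op == "addi":                 # addi rd, rs, imm
            rd, rs, imm = args
            regs[rd] = regs[rs] + imm
        elif op == "add":                # add rd, rs, rt
            rd, rs, rt = args
            regs[rd] = regs[rs] + regs[rt]
        elif op == "bne":                # bne rs, rt, offset (relative to next pc)
            rs, rt, offset = args
            if regs[rs] != regs[rt]:
                pc += offset
        else:
            raise ValueError(f"unknown opcode: {op}")
        regs[0] = 0                      # $zero stays zero, as in MIPS
        steps += 1
    return regs

# Example program: sum the integers 1..5 into register 2.
program = [
    ("addi", 1, 0, 5),    # r1 = 5 (loop counter)
    ("add",  2, 2, 1),    # r2 += r1
    ("addi", 1, 1, -1),   # r1 -= 1
    ("bne",  1, 0, -3),   # loop back while r1 != 0
]
regs = run(program)       # regs[2] == 15
```

A per-node GUI like the one the paper describes would wrap this loop, exposing the register file and program counter after each step rather than running the program to completion.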


2021 ◽  
Vol 4 (2) ◽  
pp. 55-68
Author(s):  
Seyed Ghorashi

The Internet of Things (IoT) and Wireless Sensor Network (WSN) devices are prone to security vulnerabilities, especially when they are resource-constrained. Lightweight cryptography is a promising encryption concept for IoT and WSN devices that can mitigate these vulnerabilities. For example, Klein is a lightweight block cipher that has achieved popularity for its trade-off between performance and security. In this paper, we propose a novel method to enhance the efficiency of the Klein block cipher and examine the effects on central processing unit (CPU) usage, memory usage, and processing time. Furthermore, we evaluate another approach concerning the performance of the Klein encryption iterations. These approaches were implemented in Python and run on a Raspberry Pi 3. We evaluated and analyzed the results of the two modified encryption algorithms and confirmed that both enhancement techniques lead to significantly improved performance compared to the original algorithm.
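A measurement harness for the kind of metrics the paper reports (CPU time and processing time) can be sketched as below. The Klein cipher itself is not implemented here; a placeholder XOR-and-rotate "round" stands in for the real round function, and the round count is only an assumption for the example.

```python
# Sketch of the measurement harness only -- not the Klein cipher. A toy
# round function stands in for the real one; the point is separating CPU
# time (process_time) from wall-clock processing time (perf_counter).
import time

def toy_rounds(block: int, key: int, rounds: int) -> int:
    """Placeholder workload: repeated XOR and 64-bit rotate, not real Klein."""
    state = block
    for r in range(rounds):
        state ^= key + r
        state = ((state << 1) | (state >> 63)) & (2**64 - 1)  # rotate left by 1
    return state

def benchmark(rounds: int, blocks: int = 1000):
    """Return (cpu_seconds, wall_seconds) for encrypting `blocks` blocks."""
    cpu0, wall0 = time.process_time(), time.perf_counter()
    for b in range(blocks):
        toy_rounds(b, key=0x0F0F, rounds=rounds)
    return time.process_time() - cpu0, time.perf_counter() - wall0

cpu_s, wall_s = benchmark(rounds=12)   # 12 rounds assumed for the example
```

Varying the `rounds` argument is one way to study the iteration-count effect the abstract mentions, with the caveat that a toy round function only shows the shape of the measurement, not Klein's actual cost.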


Author(s):  
Liam Dunn ◽  
Patrick Clearwater ◽  
Andrew Melatos ◽  
Karl Wette

Abstract The F-statistic is a detection statistic used widely in searches for continuous gravitational waves with terrestrial, long-baseline interferometers. A new implementation of the F-statistic is presented which accelerates the existing "resampling" algorithm using graphics processing units (GPUs). The new implementation runs between 10 and 100 times faster than the existing implementation on central processing units without sacrificing numerical accuracy. The utility of the GPU implementation is demonstrated on a pilot narrowband search for four newly discovered millisecond pulsars in the globular cluster Omega Centauri using data from the second Laser Interferometer Gravitational-Wave Observatory observing run. The computational cost is 17.2 GPU-hours using the new implementation, compared to 1092 core-hours with the existing implementation.


Author(s):  
Severin Weiler ◽  
Christian Matt ◽  
Thomas Hess

Abstract Conversational agents (CAs) are often unable to provide meaningful responses to user requests, thereby triggering user resistance and impairing the successful diffusion of CAs. The literature mostly focuses on improving CA responses but fails to address user resistance in the event of further response failures. Drawing on inoculation theory and the elaboration likelihood model, we examine how inoculation messages, as communication that seeks to prepare users for a possible response failure, can be used as an alleviation mechanism. We conducted a randomized experiment with 558 users, investigating how the performance level (high or low) and the linguistic form of the performance information (qualitative or quantitative) affected users' decision to discontinue CA usage after a response failure. We found that inoculation messages indicating a low performance level alleviate the negative effects of CA response failures on discontinuance. However, quantitative performance level information exhibits this moderating effect on users' central processing, while qualitative performance level information affects users' peripheral processing. Extending studies that primarily discuss ex-post strategies, our results provide meaningful insights for practitioners.

