average response time
Recently Published Documents

TOTAL DOCUMENTS: 86 (five years: 35)
H-INDEX: 6 (five years: 1)

Author(s): Damiano Perri, Marco Simonetti, Osvaldo Gervasi

This study analyses some of the leading technologies for building and configuring IT infrastructures that provide services to users. For modern applications, it is essential to guarantee service continuity even under very high computational load or network problems. Our configuration's main objectives are high availability (HA) and horizontal scalability, that is, the ability to increase the deliverable computational resources when needed and to release them when they are no longer necessary. Various architectural possibilities are analysed, and the central schemes used to tackle problems of this type are also described in terms of disaster recovery. The benefits offered by virtualisation technologies are highlighted and combined with modern techniques for managing Docker containers, which are used to build the back-end of a sample infrastructure for a use case we have developed. In addition, an in-depth analysis is reported on the central autoscaling policies that can help manage high request loads from users to the services provided by the infrastructure. The results show an average response time of 21.7 milliseconds with a standard deviation of 76.3 milliseconds, indicating excellent responsiveness. Some peaks are associated with high-stress events for the infrastructure, but even in these cases the response time does not exceed 2 seconds. The results of the use case, studied over nine months, are presented and discussed. During the study period, we improved the back-end configuration and defined the main metrics needed to deploy the web application efficiently.
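As an illustration of the kind of metrics and autoscaling policy this abstract describes, the sketch below combines a latency summary (average and standard deviation, the figures quoted above) with a simple proportional scale-out rule. The function names, the 100 ms target, and the replica bounds are illustrative assumptions, not the paper's configuration:

```python
import math
from statistics import mean, stdev

def summarize_latencies(samples_ms):
    """Summarise response-time samples in milliseconds (average and standard
    deviation are the metrics reported in the abstract)."""
    return {"avg": mean(samples_ms), "std": stdev(samples_ms), "max": max(samples_ms)}

def desired_replicas(current, avg_ms, target_ms=100, min_r=2, max_r=10):
    """Toy horizontal-autoscaling rule: grow the replica count in proportion
    to how far observed average latency sits above the target, clamped to
    [min_r, max_r]. This mirrors the proportional idea behind common
    autoscalers, not the paper's exact policy."""
    return max(min_r, min(max_r, math.ceil(current * avg_ms / target_ms)))
```

For example, `desired_replicas(2, 250)` asks for 5 replicas, and once average latency falls back under the target the rule shrinks the deployment to the floor of 2.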


2022, Vol 6 (1), pp. 89-99
Author(s): Annisa Heparyanti Safitri, Agung Teguh Wibowo Almais, A'la Syauqi, Roro Inda Melani

The very large volume of data produced by the Disaster Response Planning and Control (Perencanaan dan Pengendalian Penanganan Bencana, P3B) surveyor team creates broad and varied problems that can consume system resources and lead to long processing times. This study therefore proposes a solution that applies query optimization to the TOPSIS method implemented in a decision support system for determining post-disaster damage levels. Three trials were run with different data volumes: the first with 114 records, the second with 228, and the third with 334. In each trial, response time was measured three times, yielding the average response time of each step of the TOPSIS method. The ranking step with query optimization was found to be faster by 0.00076 than the non-optimized query. It can therefore be concluded that, at every step of the TOPSIS method in the decision support system for post-disaster sector damage, the response time obtained with query optimization is lower than that of the non-optimized query.
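For readers unfamiliar with the ranking step being optimized, here is a minimal pure-Python sketch of TOPSIS in its generic textbook form (not the authors' SQL implementation). Rows of `matrix` are alternatives, columns are criteria, and `benefit` flags whether a criterion is to be maximized:

```python
from math import sqrt

def topsis_rank(matrix, weights, benefit):
    """Generic TOPSIS: vector-normalize, weight, measure distance to the
    ideal and anti-ideal solutions, rank by relative closeness."""
    cols = list(zip(*matrix))
    norms = [sqrt(sum(x * x for x in col)) for col in cols]
    # Weighted normalized decision matrix
    v = [[w * x / n for x, w, n in zip(row, weights, norms)] for row in matrix]
    vcols = list(zip(*v))
    ideal = [max(c) if b else min(c) for c, b in zip(vcols, benefit)]
    anti = [min(c) if b else max(c) for c, b in zip(vcols, benefit)]
    d_pos = [sqrt(sum((x - i) ** 2 for x, i in zip(row, ideal))) for row in v]
    d_neg = [sqrt(sum((x - a) ** 2 for x, a in zip(row, anti))) for row in v]
    closeness = [dn / (dp + dn) for dp, dn in zip(d_pos, d_neg)]
    return sorted(range(len(matrix)), key=lambda i: -closeness[i])  # best first
```

With two criteria such as repair cost (to minimize) and severity score (to maximize), `topsis_rank([[250, 16], [200, 16], [300, 32]], [0.5, 0.5], [False, True])` ranks alternative 2 first.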


2021, Vol 18 (4(Suppl.)), pp. 1356
Author(s): M.A. Fazlina, Rohaya Latip, Azizol Abdullah, Hamidah Ibrahim, Mohamed A. Alrshah

Cloud computing is a mass platform serving high volumes of data from many devices and numerous technologies. Cloud tenants demand fast, uninterrupted access to their data, so cloud providers struggle to ensure that every individual piece of data is secure and always accessible. Hence, an appropriate replication strategy capable of selecting essential data is required in cloud replication environments. This paper proposes a Crucial File Selection Strategy (CFSS) to address poor response time in a cloud replication environment. The CloudSim simulator is used to conduct the necessary experiments, and the results are presented as evidence of the improvement in replication performance. The analytical graphs are discussed thoroughly; the proposed CFSS algorithm outperformed an existing algorithm with a 10.47% improvement in average response time for multiple jobs per round.


PLoS ONE, 2021, Vol 16 (12), pp. e0261594
Author(s): Tianqi Wang, Ning Li, Houran Li

Existing Human Resource Management Systems (HRMS) typically use a Client-Server (C/S) architecture, which is complex to maintain and poorly compatible, and cannot use a professional database and development system, making development difficult and data security low. To address these problems, the overall requirements, feasibility and key technologies of an enterprise HRMS are analysed. An HRMS is then designed and developed around the users' key functional requirements and related technologies, yielding a system that is well structured and easy to maintain. The system is built on a Browser-Server (B/S) structure, with the popular Java 2 Platform Enterprise Edition (J2EE) multi-tier structure as the overall architecture and Microsoft's mature SQL Server 2008 as the database platform. Combined with the Model View Controller (MVC) design pattern, the system can be used without geographical restrictions and without client-side maintenance. Presentation logic and business logic are separated, which simplifies development and maintenance. The system comprises six modules: personnel management, organizational management, recruitment management, training management, salary management and system management. It integrates enterprise information and provides easy access to and querying of the information database. Its interface is simple, easy to understand and easy to operate, with low investment, low cost, high safety, good performance and easy maintenance, helping to improve enterprises' work efficiency and modern management level. Finally, the operational performance of the system is tested. The results show that the throughput of the main functional modules is greater than 100 times/s, and the success rate of event processing is greater than 99%. The average response time is less than 0.4 s on the business side and less than 0.5 s on the terminal side, both meeting the standards. CPU occupancy can generally be kept below 30%, as can memory usage. In summary, the system provides the required functions while ensuring good performance, and is suitable for enterprise personnel, organizational, recruitment, training and salary management. Its design and development aim to provide technical support for enterprise human resource management services, improve overall efficiency, accelerate strategic development and enhance enterprises' market competitiveness.


Author(s): Anastasia V. Daraseliya, Eduard S. Sopin

The offloading of computing tasks to a fog computing system is a promising approach to reducing the response time of resource-greedy real-time mobile applications. Besides decreasing the response time, offloading mechanisms may also reduce the energy consumption of mobile devices. In this paper, we focus on analysing the energy consumption of mobile devices that use a fog computing infrastructure to increase overall system performance and improve battery life. We consider a three-layer computing architecture consisting of the mobile device itself, a fog node, and a remote cloud. Tasks are processed locally or offloaded according to a threshold-based offloading criterion. We formulate an optimization problem that minimizes the energy consumption under constraints on the average response time and on the probability that the response time stays below a certain threshold. We also provide a numerical solution to the optimization problem and discuss the numerical results.
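The threshold-based criterion and the constrained minimization can be sketched as follows. This is a toy brute-force search, not the paper's analytical solution; `energy`, `avg_rt` and `tail_prob` are hypothetical callables standing in for the metrics the paper derives as functions of the threshold:

```python
def offload_decision(queue_len, threshold):
    """Threshold criterion: process locally while the local queue is short,
    offload to the fog node once it reaches the threshold."""
    return "fog" if queue_len >= threshold else "local"

def best_threshold(thresholds, energy, avg_rt, tail_prob, rt_limit, prob_min):
    """Brute-force form of the constrained minimization: among thresholds that
    satisfy both response-time constraints (mean response time below rt_limit,
    probability of staying under the deadline at least prob_min), pick the one
    with minimum energy. Returns None if no threshold is feasible."""
    feasible = [k for k in thresholds
                if avg_rt(k) <= rt_limit and tail_prob(k) >= prob_min]
    return min(feasible, key=energy) if feasible else None
```

In practice the three metric functions would come from the paper's queueing model; here any monotone toy functions suffice to exercise the search.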


2021, pp. 1-13
Author(s): Raj Kumar Kalimuthu, Brindha Thomas

In today’s world, cloud computing plays a significant role in the development of an effective computing paradigm that adds further benefits to modern Internet of Things (IoT) frameworks. However, cloud resources are dynamic, and the demands for resource allocation differ from task to task. These diverse factors may cause load and power imbalance, which also affects resource utilization and task scheduling in the cloud-based IoT environment. Recently, bio-inspired algorithms have proved effective at solving task scheduling problems in cloud-based IoT systems. This work therefore focuses on efficient task scheduling and resource allocation through a novel Hybrid Bio-Inspired algorithm that hybridizes Improvised Particle Swarm Optimization and Ant Colony Optimization. The key objective of hybridizing these two approaches is to determine the nearest multiple sources to attain discrete and continuous solutions. Here, tasks are allocated to virtual machines by the particle swarm, while continuous resource management is carried out by the ant colony. The performance of the proposed approach has been evaluated using the CloudSim simulator. The simulation results show that the proposed hybridized algorithm schedules tasks in the cloud-based IoT environment efficiently, with a lower average response time of 2.18 sec and an average waiting time of 3.6 sec compared with existing state-of-the-art algorithms.


2021
Author(s): Pamela Reinagel

When subjects control the duration of sampling a sensory stimulus before making a decision, they generally take more time to make more difficult sensory discriminations. This has been found to be true of many rats performing visual tasks. But two rats performing visual motion discrimination were found to have inverted chronometric response functions: their average response time paradoxically increased with stimulus strength. We hypothesize that corrective decision reversals may underlie this unexpected observation.


2021, Vol 11 (21), pp. 9981
Author(s): Ozoda Makhkamova, Doohyun Kim

Chatbot technologies have made our lives easier. Creating a chatbot with high intelligence requires a significant amount of knowledge processing, which can slow down the reaction time; hence, a mechanism enabling quick responses is needed. This paper proposes a cache mechanism to improve the response time of a chatbot service: whereas a CPU cache exploits the locality of references within binary code execution, our cache mechanism for chatbots uses the frequency and relevance information that potentially exists within the set of Q&A pairs. The proposed idea is to let the broker in a multi-layered structure analyze and store the keyword-wise relevance of the set of Q&A pairs from the chatbots. In addition, the cache mechanism accumulates the frequency of input questions by monitoring the conversation history. When a cache miss occurs, the broker selects a chatbot according to frequency and relevance, and then delivers the query to the selected chatbot to obtain an answer. This mechanism showed a significant increase in the cache hit ratio as well as an improvement in the average response time.
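A drastically simplified sketch of such a broker-side cache is shown below: it tracks question frequency and serves repeated questions without contacting any chatbot. Keyword-wise relevance scoring and frequency-based chatbot selection are omitted, and all names are illustrative, not the paper's API:

```python
from collections import Counter

class ChatCache:
    """Toy broker-side answer cache: stores Q&A pairs, counts how often each
    question is asked, and only calls a backend chatbot on a miss."""

    def __init__(self):
        self.answers = {}            # question -> cached answer
        self.freq = Counter()        # question -> times asked
        self.hits = self.misses = 0

    def ask(self, question, backend):
        self.freq[question] += 1
        if question in self.answers:   # cache hit: no chatbot call needed
            self.hits += 1
            return self.answers[question]
        self.misses += 1               # cache miss: query a chatbot backend
        answer = backend(question)
        self.answers[question] = answer
        return answer

    def hit_ratio(self):
        total = self.hits + self.misses
        return self.hits / total if total else 0.0
```

Asking the same question twice yields one miss (the backend is called) and one hit (served from cache), so the hit ratio after two identical questions is 0.5.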


2021, Vol 11 (4), pp. 100-112
Author(s): Poonam Nandal, Deepa Bura, Meeta Singh, Sudeep Kumar

In today's world, the IT industry is growing day by day; therefore, the need for storage and computing is increasing multifold. Cloud computing has taken the IT sector to much greater heights by virtualizing systems, thereby reducing hardware costs to a great extent. Cloud computing is based on a pay-per-use policy. Due to the exponential growth of cloud computing, users demand supplementary services and improved results, which makes load balancing a major challenge. Load balancing distributes the workload across multiple nodes to optimize system performance, and various load balancing algorithms exist to provide better resource utilization. This paper gives a brief analysis of load balancing algorithms and compares them on the basis of metrics such as average response time, processing cost, and data servicing time.


Electronics, 2021, Vol 10 (14), pp. 1719
Author(s): Abdullah Lakhan, Mazhar Ali Dootio, Tor Morten Groenli, Ali Hassan Sodhro, Muhammad Saddam Khokhar

These days, with emerging developments in wireless communication technologies, such as 6G and 5G, and in Internet of Things (IoT) sensors, the use of E-Transport applications has been increasing progressively. These applications, such as E-Bus, E-Taxi, self-driving cars, E-Train and E-Ambulance, are latency-sensitive workloads executed in a distributed cloud network. Nonetheless, cloudlet-based cloud networks introduce many delays, such as communication delay, round-trip delay and migration delay, while workloads run in the network, and the distributed execution of workloads at different computing nodes during assignment is a challenging task. This paper proposes a novel Multi-layer Latency (communication, round-trip and migration delay) Aware Workload Assignment Strategy (MLAWAS) to allocate the workloads of E-Transport applications to optimal computing nodes. MLAWAS consists of different components, such as Q-Learning-aware assignment and an iterative method, which distribute workloads in a dynamic environment where runtime changes in overloading and overheating remain controlled. Workload migration and VM migration are also part of MLAWAS. The goal is to minimize the average response time of the applications. Simulation results demonstrate that MLAWAS achieves the minimum average response time compared with two other existing strategies.
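The Q-Learning-aware assignment idea can be illustrated, in heavily simplified form, as a single-state bandit whose actions are computing nodes and whose reward is the negative observed latency, so the learned policy comes to favour the lowest-latency node. This is a self-contained toy under those assumptions, not the MLAWAS algorithm itself:

```python
import random

def q_learning_assign(nodes, latency, episodes=500, alpha=0.5, eps=0.1, seed=0):
    """Toy Q-learning for workload placement. Each episode, pick a node
    epsilon-greedily, observe its latency, and update its Q-value toward the
    negative latency. Returns the node the learned policy prefers.
    `latency` is a hypothetical callable: node -> observed delay."""
    rng = random.Random(seed)
    q = {n: 0.0 for n in nodes}
    for _ in range(episodes):
        # Epsilon-greedy action selection over computing nodes
        n = rng.choice(nodes) if rng.random() < eps else max(q, key=q.get)
        reward = -latency(n)              # lower latency -> higher reward
        q[n] += alpha * (reward - q[n])   # one-step Q-value update
    return max(q, key=q.get)
```

With a latency function that reports, say, 5 ms for a nearby cloudlet, 20 ms for the device and 50 ms for the remote cloud, the learned policy settles on the cloudlet.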

