maximum delay
Recently Published Documents


TOTAL DOCUMENTS: 134 (FIVE YEARS: 50)

H-INDEX: 13 (FIVE YEARS: 4)

2021 ◽  
Vol 2021 ◽  
pp. 1-15
Author(s):  
Solomon Melaku Belay ◽  
Seifu Tilahun ◽  
Mitiku Yehualaw ◽  
Jose Matos ◽  
Helder Sousa ◽  
...  

Today, several developing countries struggle to improve the cost and time performance of major infrastructure works for various reasons. Cost overrun and delay are among the major challenges faced by the construction and infrastructure sector. Hence, the aim of this study is to explore the extent of cost overruns and schedule delays in building and road infrastructure projects across the Ethiopian construction industry. Primary data were collected through a structured questionnaire survey to evaluate the potential risks leading to these challenges, and various data analysis tools were employed to investigate the critical causes of cost overrun and delay in infrastructure projects. The findings reveal that the minimum cost overrun for building construction projects is 2%, whereas the maximum and average cost overruns are 248% and 35%, respectively. For road infrastructure projects, the minimum, maximum, and average cost overruns are 1%, 61%, and 18%, respectively. Similarly, the minimum, maximum, and average delays recorded in building construction projects are 9%, 802%, and 143%, respectively, whereas in road infrastructure projects the minimum delay is 3%, the maximum delay is 312%, and the average schedule delay is 110%. In addition, the top risk factors leading to cost overrun in infrastructure projects are inflation, inaccurate cost estimates, and variations, whereas the major risks causing schedule delays are variations, economic conditions, and escalation of material prices. Finally, practical implications and key recommendations are provided to curb cost overrun and delay in infrastructure projects.
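The overrun and delay percentages reported above follow from simple arithmetic on estimated versus actual values; a minimal Python sketch, using hypothetical contract figures rather than the study's data:

```python
def overrun_pct(estimated, actual):
    """Overrun (or delay) as a percentage of the original estimate."""
    return (actual - estimated) / estimated * 100.0

def summarize(overruns):
    """Minimum, maximum, and mean overrun across a list of projects."""
    return min(overruns), max(overruns), sum(overruns) / len(overruns)

# Hypothetical (estimated, actual) contract values, not the study's data.
projects = [(10.0, 10.2), (8.0, 11.0), (15.0, 16.5)]
pcts = [overrun_pct(e, a) for e, a in projects]
lo, hi, avg = summarize(pcts)
```

The same formula applies to schedule delay by substituting planned and actual durations.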


Author(s):  
Amalia Nurul Fauziah ◽  
M. Atik Martsiningsih ◽  
Budi Setiawan

Samples used for serum electrolyte measurement should be analyzed within 1–2 hours of being received in the laboratory to avoid an increase in measurement error. Serum should be stored at 4°C in the interim to prevent deterioration, and the analyst should consider the maximum permissible delay in order to maintain serum quality. This study compared 2-hour and 3-hour delays in sodium (Na), potassium (K), and chloride (Cl) tests. The method used was an observational analysis with a cross-sectional study design. The samples comprised 35 patient serum residues, collected in November 2020 with a continuous sampling technique. Electrolyte levels were measured with an AVL 9180 Electrolyte Analyzer using the ion-selective electrode (ISE) method. Differences in electrolyte (Na, K, Cl) levels were analyzed with the Kruskal–Wallis test at a 95% confidence level. The significance values for sodium, potassium, and chloride were 0.719, 0.976, and 0.772, respectively, showing no significant difference in electrolyte content between serum analyzed immediately and serum stored at 4°C for 2 hours or 3 hours. In conclusion, it is acceptable to postpone the serum test for up to 3 hours, with appropriate precautions.
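A comparison of this kind can be run with SciPy's Kruskal–Wallis implementation; the sketch below uses synthetic sodium readings for illustration only, not the study's measurements:

```python
from scipy.stats import kruskal

# Synthetic sodium readings (mmol/L), for illustration only -- not the
# study's data: analyzed immediately, after 2 h, and after 3 h at 4 degC.
na_0h = [138, 140, 139, 141, 137, 142, 140]
na_2h = [139, 140, 138, 141, 138, 141, 139]
na_3h = [138, 139, 139, 140, 137, 142, 140]

stat, p = kruskal(na_0h, na_2h, na_3h)
# At the 95% confidence level, p > 0.05 means no significant difference
# between the storage delays.
significant = p < 0.05
```

The same call is repeated per analyte (Na, K, Cl) to obtain one significance value for each.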


2021 ◽  
Vol 13 (2) ◽  
pp. 62-79
Author(s):  
Юлия Васильевна Чиркова ◽  
Julia Chirkova

The machine load balancing game with linear externalities is considered. A set of jobs is to be assigned to a set of machines whose latencies depend on their own loads and also on the loads of other machines. Jobs choose machines to minimize their own latencies. The social cost of a schedule is the maximum delay among all machines, i.e., the makespan. For the case of two machines in this model, the existence of a Nash equilibrium is proven and an expression for the Price of Anarchy is obtained.
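The two-machine game can be explored numerically; the following toy model (our own notation for the linear externalities, not necessarily the paper's) enumerates assignments of unit jobs, finds the Nash equilibria by checking unilateral deviations, and computes the Price of Anarchy:

```python
from itertools import product

# Machine i's delay is a[i] * x_i + b[i] * x_j, where x_i is its own load
# and x_j the load on the other machine (the linear externality).
def delays(loads, a, b):
    x1, x2 = loads
    return (a[0] * x1 + b[0] * x2, a[1] * x2 + b[1] * x1)

def makespan(loads, a, b):
    """Social cost of a schedule: the maximum delay among the machines."""
    return max(delays(loads, a, b))

def is_nash(assignment, a, b):
    """Unit jobs; no job may lower its own delay by switching machines."""
    loads = [assignment.count(0), assignment.count(1)]
    d = delays(loads, a, b)
    for m in assignment:
        other = 1 - m
        new_loads = loads[:]
        new_loads[m] -= 1
        new_loads[other] += 1
        if delays(new_loads, a, b)[other] < d[m]:
            return False
    return True

a, b, n = (1.0, 1.0), (0.5, 0.5), 4          # symmetric toy instance
assignments = list(product((0, 1), repeat=n))
equilibria = [s for s in assignments if is_nash(s, a, b)]
worst_ne = max(makespan((s.count(0), s.count(1)), a, b) for s in equilibria)
opt = min(makespan((s.count(0), s.count(1)), a, b) for s in assignments)
price_of_anarchy = worst_ne / opt
```

In this symmetric instance only the balanced schedules are equilibria, so the Price of Anarchy is 1; asymmetric coefficients produce the nontrivial values the paper characterizes in closed form.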


2021 ◽  
Author(s):  
Vikas Kumar ◽  
Mithun Mukherjee

The availability of computational resources in edge computing near the data source has kindled growing interest in delay-sensitive Internet of Things (IoT) applications. However, the benefit of the edge server is limited by the uploading and downloading links between end-users and edge servers when end-users seek computational resources from edge servers. The scenario becomes more severe when end-users' devices are in a shaded region, resulting in low uplink/downlink quality. In this paper, we consider a reconfigurable intelligent surface (RIS)-assisted edge computing system, where the benefits of the RIS are exploited to improve the uploading transmission rate. We aim to minimize the worst-case delay in the network when end-users either compute task data on their local CPU or offload it to the edge server. To this end, we optimize the uploading bandwidth allocation for every end-user's task data to minimize the maximum delay in the network. The optimization problem is formulated as a quadratically constrained quadratic program, which we then solve by semidefinite relaxation. Finally, simulation results demonstrate that the proposed strategy is scalable under various network settings.
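As a much simpler illustration than the paper's QCQP/SDR formulation (an assumption of ours, not the authors' method): if the delay were dominated by upload time d_i / (e_i · b_i), the min-max bandwidth split would have a closed form that equalizes every user's delay:

```python
# Sketch of a closed-form min-max allocation under a simplifying
# assumption: end-user i uploads d_i bits at spectral efficiency e_i
# bits/s/Hz over bandwidth b_i Hz, so the upload delay is d_i / (e_i * b_i).
# Minimizing the maximum delay subject to sum(b_i) = B equalizes all delays.
def minmax_bandwidth(data_bits, spec_eff, total_bw_hz):
    weights = [d / e for d, e in zip(data_bits, spec_eff)]
    t_star = sum(weights) / total_bw_hz      # common (minimal) delay, seconds
    alloc = [w / t_star for w in weights]    # b_i proportional to d_i / e_i
    return alloc, t_star

data = [2e6, 8e6, 4e6]   # task sizes in bits (hypothetical)
eff = [4.0, 2.0, 4.0]    # bits/s/Hz on the RIS-enhanced uplinks (hypothetical)
alloc, delay = minmax_bandwidth(data, eff, total_bw_hz=10e6)
```

The paper's actual problem is harder because local computing, edge computing, and the offloading decision enter the worst-case delay jointly, which is what motivates the QCQP formulation and semidefinite relaxation.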



2021 ◽  
Vol 52 (3) ◽  
Author(s):  
Volodymyr Bulgakov ◽  
Simone Pascuzzi ◽  
Semjons Ivanovs ◽  
Volodymyr Kuvachov ◽  
Yulia Postol ◽  
...  

Controlled traffic farming minimizes traffic-induced soil compaction by permanently separating the crop zone from the traffic lanes used by wide-span tractors. The authors developed an agricultural wide-span vehicle equipped with skid equipment for turning and a prototype automatic driving system based on a laser beam. The aim of this work was to study the kinematic conditions that control the steering of this machine. Furthermore, the accuracy and the maximum delay time of signal transmission by the automatic driving system of the set-up were also assessed. In comparison with crawler tractors, turning the agricultural wide-span vehicle needs a smaller difference between the moments applied to its right- and left-side wheels. For the predetermined accuracy of the beam position relative to the plant rows, ±ds = ±0.025 m, the accuracy of the direction of the laser beam should be no more than ±0.07° at a distance S = 200 m and ±0.0014° for a run length of 1000 m. Furthermore, at a speed V = 2.5 m s–1, a trajectory deviation φ ≤ 5° requires a maximum control-signal delay time of Δtmax = 0.11 s.
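The angular tolerance relates lateral accuracy to run length by simple geometry; a small sketch of that relation (our own formulation, checked here only against the 1000 m figure above):

```python
import math

# Angular tolerance of the laser beam for a lateral accuracy of +/- ds
# metres at run length S metres: the beam may deviate by atan(ds / S).
def beam_angle_deg(ds, S):
    return math.degrees(math.atan2(ds, S))

# For the 1000 m run with ds = 0.025 m this gives roughly +/- 0.0014 deg,
# matching the tolerance quoted in the abstract.
angle_1000 = beam_angle_deg(0.025, 1000.0)
```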


2021 ◽  
Vol 13 (5) ◽  
pp. 37-56
Author(s):  
Dhirendra Kumar Sharma ◽  
Nitika Goenka

In a mobile ad hoc network (MANET), link-connectivity updates are necessary to refresh the neighbor tables used in data transfer. The existing hello process exchanges link-connectivity information periodically, which is not adequate for a dynamic topology: slow updates of neighbor-table entries cause link failures, which degrade performance through packet drops, increased maximum delay, higher energy consumption, and reduced throughput. In the dynamic hello technique, newly discovered and lost neighbor nodes are used to compute the link change rate (LCR) and the hello interval/refresh rate (r). However, exchanging link-connectivity information at too fast a rate consumes unnecessary bandwidth and energy. In MANETs, resource wastage can be controlled by avoiding re-route discovery, frequent error notification, and local repair across the entire network. We enhance the existing hello process, which yields significant improvement in performance.
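The adaptation idea can be sketched as follows; the formulas and variable names are our own illustration of an LCR-driven hello interval, not the paper's exact scheme:

```python
# Illustrative adaptation of the hello interval from the link change rate
# (LCR): count neighbors gained and lost over an observation window, then
# refresh faster when the topology changes quickly.
def link_change_rate(new_neighbors, lost_neighbors, window_s):
    return (new_neighbors + lost_neighbors) / window_s

def hello_interval(lcr, base_s=2.0, min_s=0.25, max_s=5.0):
    """Shorter interval (faster refresh) when the topology changes quickly;
    clamped so a static network does not waste bandwidth and energy."""
    if lcr <= 0:
        return max_s
    return min(max_s, max(min_s, base_s / (1.0 + lcr)))

# Stable topology -> relaxed refresh; dynamic topology -> fast refresh.
slow = hello_interval(link_change_rate(0, 0, 10.0))   # no churn
fast = hello_interval(link_change_rate(6, 4, 10.0))   # 1 change per second
```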


2021 ◽  
Vol 2021 ◽  
pp. 1-19
Author(s):  
Saad Ijaz Majid ◽  
Syed Waqar Shah ◽  
Safdar Nawaz Khan Marwat ◽  
Abdul Hafeez ◽  
Haider Ali ◽  
...  

Future high-speed cellular networks require efficient and high-speed handover mechanisms. However, traditional cellular handovers are based upon measurements of the target cell's radio strength, which require frequent measurement gaps. During these measurement windows, data transmission ceases each time target bearings are measured, causing serious performance degradation. Therefore, prediction-based handover techniques are preferred in order to eliminate frequent measurement windows. Thus, this work proposes an ultrafast and efficient XGBoost-based predictive handover technique for next-generation mobile communications. ML algorithms generally prefer a 70–30% split of training and test data, respectively. However, obtaining 70% of training samples in mobile communications is challenging because the channel remains correlated only within the coherence time. Collecting training samples beyond the coherence time limits performance and adds delay; thus, the proposed work trains the model within the coherence time, whereas a fixed 70–30% split would force the model to exceed it. Although the proposed model is thereby starved of the usual number of training samples, there is no loss in prediction accuracy. The test results show a maximum delay improvement of up to 596 ms, with performance efficiency enhanced by 68.70% depending upon the scenario. The proposed model reduces delay and improves efficiency through an appropriate training period; the intelligent technique activates faster with improved accuracy and eliminates delay by predicting mmWave signal strength instead of actually measuring it.
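The sample-budget argument can be made concrete with a small sketch (our own illustration, with hypothetical numbers): within a coherence time T_c, sampling every T_s yields at most T_c // T_s correlated samples, so a fixed 70–30% split may demand more training samples than the channel allows:

```python
import math

# With channel coherence time T_c and sampling period T_s (both in ms),
# at most T_c // T_s mutually correlated samples exist for training.
def training_budget(coherence_time_ms, sample_period_ms):
    return coherence_time_ms // sample_period_ms

def split_fits_coherence(total_samples, train_frac,
                         coherence_time_ms, sample_period_ms):
    """Does a fixed train fraction stay within the coherence-time budget?"""
    needed = math.ceil(total_samples * train_frac)
    return needed <= training_budget(coherence_time_ms, sample_period_ms)

# Hypothetical numbers: 100 samples taken every 5 ms, 250 ms coherence time.
fits = split_fits_coherence(100, 0.70, coherence_time_ms=250, sample_period_ms=5)
```

Here the 70% split needs 70 samples but the channel yields only 50 correlated ones, which is why the proposed work caps training at the coherence time instead.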


2021 ◽  
Vol 11 (1) ◽  
Author(s):  
Xiangyan Liu ◽  
Jianhong Zheng ◽  
Meng Zhang ◽  
Yang Li ◽  
Rui Wang ◽  
...  

Device-to-device (D2D) communications and mobile edge computing (MEC) used to resolve traffic-overload problems are a trend in cellular networks. By jointly considering computation capability and the maximum delay, resource-constrained terminals offload parts of their computation-intensive tasks either to a nearby device via a D2D connection or to an edge server deployed at a base station via a cellular connection. In this paper, a novel cellular D2D–MEC system is proposed that enables task offloading and resource allocation while improving the execution efficiency of each device at low latency. We consider a partial offloading strategy and divide each task into local and remote computing, both of which can be executed in parallel through different computational modes. Instead of allocating system resources from a macroscopic view, we study both the task offloading strategy and the computing efficiency of each device from a microscopic perspective. By taking both the task offloading policy and the computation resource allocation into consideration, the optimization problem is formulated as maximizing computing efficiency. As the formulated problem is a mixed-integer non-linear problem, we propose a two-phase heuristic algorithm that jointly considers helper selection and computation resource allocation. In the first phase, we obtain a suboptimal helper-selection policy; in the second phase, the MEC computation resource allocation strategy is derived. A proposed low-complexity dichotomy algorithm (LCDA) is used to match subtask–helper pairs. Simulation results demonstrate the superiority of the proposed D2D-enhanced MEC system over some traditional D2D–MEC algorithms.
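The core of partial offloading with parallel execution can be sketched in a few lines (a simplification of ours, not the paper's full model): if a fraction alpha runs locally while 1 − alpha is offloaded, the task finishes when the slower branch does, and the optimal split equalizes the two branches:

```python
# Partial offloading with parallel local/remote execution: completion time
# is max(alpha * t_local, (1 - alpha) * t_remote), where t_local is the
# time to run the whole task on-device and t_remote the time to upload it
# and run it at the helper/edge.
def completion(alpha, t_local, t_remote):
    return max(alpha * t_local, (1 - alpha) * t_remote)

def best_split(t_local, t_remote):
    """Equalizing both branches minimizes the parallel completion time."""
    alpha = t_remote / (t_local + t_remote)
    return alpha, alpha * t_local

# Hypothetical timings in seconds.
alpha, t = best_split(t_local=0.8, t_remote=0.4)
```

In the paper this split interacts with helper selection and MEC resource allocation, which is what makes the joint problem mixed-integer non-linear and motivates the two-phase heuristic.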


Water ◽  
2021 ◽  
Vol 13 (15) ◽  
pp. 2011
Author(s):  
Pablo Páliz Larrea ◽  
Xavier Zapata Ríos ◽  
Lenin Campozano Parra

Despite the importance of dams for distributing water for various uses, adequate day-to-day forecasting is still in great need of intensive study worldwide. Machine learning models have been widely applied in water-resource studies and have shown satisfactory results, including time-series forecasting of water levels and dam flows. In this study, neural network (NN) models and adaptive neuro-fuzzy inference system (ANFIS) models were generated to forecast the water level of the Salve Faccha reservoir, which supplies water to Quito, the capital of Ecuador. For the NN, a non-linear input–output network with a maximum delay of 13 days was used, varying the number of nodes and hidden layers. For ANFIS, with delays of up to four days, the subtractive clustering algorithm was used with hyperparameter variation from 0.5 to 0.8. The results indicate that precipitation was not an influential input for predicting the reservoir water level. The best neural network and ANFIS models showed high performance, with r > 0.95, a Nash index > 0.95, and RMSE < 0.1. The best neural network model was t + 4, and the best ANFIS model was t + 6.
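Forecasting with a maximum delay of 13 days amounts to feeding the model a sliding window of lagged values; a generic sketch of that preprocessing (the study's exact pipeline is not specified here), shown for a t + 4 horizon:

```python
import numpy as np

# Build the lagged input matrix for a nonlinear input-output model with a
# maximum delay of max_delay days, forecasting y(t + horizon).
def lagged_matrix(series, max_delay, horizon):
    """Rows: [y(t - max_delay + 1) .. y(t)]; targets: y(t + horizon)."""
    y = np.asarray(series, dtype=float)
    rows, targets = [], []
    for t in range(max_delay - 1, len(y) - horizon):
        rows.append(y[t - max_delay + 1 : t + 1])
        targets.append(y[t + horizon])
    return np.array(rows), np.array(targets)

# Hypothetical daily water levels; forecast t + 4 from the last 13 days.
levels = np.linspace(100.0, 120.0, 40)
X, y = lagged_matrix(levels, max_delay=13, horizon=4)
```

Any regressor (an NN, ANFIS, or otherwise) can then be fit on (X, y); comparing horizons t + 1 through t + 6 reproduces the kind of model selection reported above.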

