Merging and Prioritizing Optimization in Block I/O Scheduling of Disk Storage

Author(s):  
Hui Li ◽  
Jianwei Liao ◽  
Xiaoyan Liu

I/O merging at the block I/O layer of disk storage is widely adopted to reduce I/O response time, but it may incur non-trivial merging-judgment overhead when a large number of concurrent I/O requests access the disk, and it can negatively affect the response of small requests. This paper proposes a divide-and-conquer scheduling scheme at the block layer of the I/O stack, to satisfy a large number of concurrent I/O requests with lower I/O response time and to ensure fairness across requests by decreasing the average I/O latency. First, we propose a horizontal visibility graph-based approach that clusters related block requests according to their offsets (i.e., logical block numbers). Next, the scheme merges consecutive block I/O requests within each cluster, since requests in the same cluster are most likely issued by the same application. Then, we introduce a merging-judgment step during merging optimization to effectively bound the average I/O response time. After that, the merged requests in the queue are reordered by priority, to further cut down the average I/O response time. Finally, the prioritized requests are delivered to the disk storage to be serviced. A series of experiments shows that, compared with the benchmark, the proposed scheme not only cuts the I/O response time by more than 18.2%, but also decreases the average I/O response time by up to 71.7%.
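The two building blocks named in the abstract can be sketched briefly. The snippet below is a minimal illustration, not the paper's implementation: `horizontal_visibility_edges` is the standard O(n²) horizontal visibility graph construction over a sequence (here, request offsets), and `merge_consecutive` shows the usual block-layer merge rule that coalesces requests whose logical block ranges are back-to-back. How the paper partitions the HVG into clusters is not specified in the abstract, so no particular clustering criterion is assumed here.

```python
def horizontal_visibility_edges(series):
    """O(n^2) horizontal visibility graph: nodes i < j are connected iff
    every value strictly between them is lower than both endpoints."""
    n = len(series)
    edges = []
    for i in range(n):
        for j in range(i + 1, n):
            # empty intermediate range => adjacent points are always linked
            if all(series[k] < min(series[i], series[j])
                   for k in range(i + 1, j)):
                edges.append((i, j))
    return edges


def merge_consecutive(requests):
    """Coalesce (lba, nblocks) requests whose block ranges are contiguous,
    mimicking the block layer's back-merge of consecutive I/O requests."""
    merged = []
    for lba, nblocks in sorted(requests):
        if merged and merged[-1][0] + merged[-1][1] == lba:
            merged[-1] = (merged[-1][0], merged[-1][1] + nblocks)
        else:
            merged.append((lba, nblocks))
    return merged
```

For example, `merge_consecutive([(0, 8), (8, 8), (32, 4)])` collapses the first two requests into one 16-block request and leaves the non-contiguous third request alone.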

Sensors ◽  
2019 ◽  
Vol 19 (10) ◽  
pp. 2234 ◽  
Author(s):  
Jieyu Zhang ◽  
Yuanying Qiu ◽  
Xuechao Duan ◽  
Kangli Xu ◽  
Changqi Yang

Horizontal docking assembly is a fundamental process in aerospace assembly, where intelligent measurement and adjustable support systems are urgently needed to achieve higher automation and precision. A laser scanning approach is therefore employed to obtain a point cloud from a laser scanning sensor, and a section-profile-fitting method is put forward to solve for the pose parameters from the acquired data. Firstly, the data are segmented into planar profiles by a series of parallel planes, and ellipse fitting is employed to estimate the center of each section profile. Secondly, the pose of the part is obtained through a spatial straight-line fit over these profile centers. However, interference features may be present on the surface of the parts in the practical assembly process, which degrade the measurement. To handle these interferences, a robust method combining M-estimation and RANSAC is proposed to enhance measurement robustness; the proportion of inliers in a whole profile point set serves as the judgment criterion for validating each planar profile. Finally, a prototype is fabricated, and a series of experiments is conducted to verify the proposed method.
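The slice-fit-line pipeline described above can be sketched in a few lines of NumPy. This is a simplified illustration under stated assumptions, not the paper's method: the slicing uses fixed-width bins along one coordinate in place of arbitrary parallel planes, a least-squares circle fit (the Kåsa algebraic fit) stands in for the general ellipse fit, and the robust M-estimation/RANSAC validation step is omitted.

```python
import numpy as np

def slice_by_plane(points, axis=2, width=1.0):
    """Group points into fixed-width bins along one coordinate,
    approximating segmentation by a family of parallel cutting planes."""
    bins = {}
    for p in points:
        bins.setdefault(int(p[axis] // width), []).append(p)
    return [np.array(v) for _, v in sorted(bins.items())]

def fit_circle_center(xy):
    """Algebraic least-squares (Kasa) circle fit:
    x^2 + y^2 = 2a*x + 2b*y + c, solved linearly; returns center (a, b)."""
    A = np.c_[2 * xy[:, 0], 2 * xy[:, 1], np.ones(len(xy))]
    b = (xy ** 2).sum(axis=1)
    sol, *_ = np.linalg.lstsq(A, b, rcond=None)
    return sol[:2]

def fit_axis(centers):
    """Spatial straight-line fit through the profile centers via PCA:
    returns (point on the axis, unit direction vector)."""
    centers = np.asarray(centers)
    mean = centers.mean(axis=0)
    _, _, vt = np.linalg.svd(centers - mean)
    return mean, vt[0]
```

Running this on a synthetic cylindrical scan (circular cross-sections of constant center) recovers the section centers and an axis direction aligned with the cylinder, which is exactly the pose information the docking adjustment needs.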


IEEE Access ◽  
2019 ◽  
Vol 7 ◽  
pp. 9926-9934 ◽  
Author(s):  
Gulraiz Iqbal Choudhary ◽  
Wajid Aziz ◽  
Ishtiaq Rasool Khan ◽  
Susanto Rahardja ◽  
Pasi Franti

2020 ◽  
Author(s):  
Ganesh Ghimire ◽  
Navid Jadidoleslam ◽  
Witold Krajewski ◽  
Anastasios Tsonis

Streamflow is a dynamical process that integrates water movement in space and time within basin boundaries. The authors characterize the dynamics associated with streamflow time series data from about seventy-one U.S. Geological Survey (USGS) stream-gauge stations in the state of Iowa. They employ a novel approach called the visibility graph (VG), which maps time series into complex networks to investigate the time-evolutionary behavior of a dynamical system. The authors focus on a simple variant of the VG algorithm called the horizontal visibility graph (HVG). The tracking of dynamics, and hence the predictability of streamflow processes, is carried out by extracting two key quantities from the HVG-derived network: the characteristic exponent λ of the degree distribution and the global clustering coefficient GC. The authors use these two measures to identify whether the streamflow process has its origin in random or chaotic processes. They show that the characterization of streamflow dynamics is sensitive to data attributes: through a systematic and comprehensive analysis, they illustrate that it depends on the normalization and the time scale of the streamflow time series. At the daily scale, streamflow at all stations used in the analysis reveals randomness with strong spatial-scale (basin-size) dependence, which has implications for the predictability of streamflow and floods. The authors demonstrate that the dynamics transition through potentially chaotic to randomly correlated processes as the averaging time scale increases. Finally, the temporal trends of λ and GC are statistically significant at about 40% of the stations analyzed. Attributing this trend to factors such as changing climate or land use requires further research.
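The two HVG diagnostics named above are straightforward to compute once the graph is built. The sketch below is illustrative, not the authors' code: `lambda_exponent` uses a simple maximum-likelihood estimator for a shifted discrete-exponential degree law P(k) ∝ exp(−λk), k ≥ k_min (one of several ways to estimate λ; the abstract does not say which the authors used), and `global_clustering` is the standard transitivity ratio 3 × triangles / connected triples. For an uncorrelated random series, HVG theory predicts λ → ln(3/2) ≈ 0.405, with smaller values signalling chaotic dynamics.

```python
import math

def hvg_adjacency(series):
    """Build the horizontal visibility graph as adjacency sets (O(n^2))."""
    n = len(series)
    adj = [set() for _ in range(n)]
    for i in range(n):
        for j in range(i + 1, n):
            if all(series[k] < min(series[i], series[j])
                   for k in range(i + 1, j)):
                adj[i].add(j)
                adj[j].add(i)
    return adj

def lambda_exponent(adj):
    """MLE of lambda for a shifted discrete-exponential degree distribution
    P(k) proportional to exp(-lambda*k), k >= k_min."""
    degs = [len(a) for a in adj]
    kmin = min(degs)
    mean_excess = sum(d - kmin for d in degs) / len(degs)
    return math.log(1.0 + 1.0 / mean_excess) if mean_excess > 0 else float("inf")

def global_clustering(adj):
    """Global clustering coefficient GC = 3 * triangles / connected triples."""
    triangles = sum(1 for i in range(len(adj)) for j in adj[i] if j > i
                    for k in adj[j] if k > j and k in adj[i])
    triples = sum(d * (d - 1) // 2 for d in (len(a) for a in adj))
    return 3.0 * triangles / triples if triples else 0.0
```

Tracking λ and GC over sliding windows of a gauge record is then one way to follow the time evolution of the streamflow dynamics, as the abstract describes.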


2020 ◽  
Vol 2 (10) ◽  
pp. 29-44
Author(s):  
Andrii Halchenko ◽  
Sergiy Choporov

This paper investigates how to increase the productivity of deniable encryption algorithms. The investigation is relevant because effective schemes are needed for protecting information and its users, yet deniable encryption algorithms are complex and computationally heavy, which has kept them from widespread use in data processing and information security systems. The main goal of this work is to explore methods and tools for reducing execution time. The divide and conquer method is discussed, investigated, and implemented in the data processing system of the deniable encryption algorithms. No modifications are made to the base algorithm, which keeps the approach universal and applicable to other deniable encryption algorithms. The authors completed a series of experiments to verify the hypothesis. The first stage of the investigation examines the base deniable encryption algorithm; its vulnerabilities are identified and analyzed. A second algorithm applies the divide and conquer method and is implemented in the modified data processing system. The efficiency of both algorithms is evaluated through experiments on real files containing public and secret information, run on prepared equipment that simulates a user's workplace with real hardware and software. According to the results, the productivity gain of the deniable encryption algorithms is achieved by the divide and conquer method, and the method is further verified with encryption keys of different sizes; the base deniable encryption algorithms are not modified. The results are compared with other authors' investigations, and the authors' hypothesis is ultimately confirmed, although some restrictions on reaching these results are noted by the authors.
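The structural idea, splitting the input so that an expensive cipher runs on independent chunks without touching the cipher itself, can be sketched generically. Everything below is hypothetical: the abstract does not describe the actual deniable encryption primitive, so a repeating-key XOR stands in for it purely as a placeholder, and the function names are illustrative.

```python
from concurrent.futures import ThreadPoolExecutor

def toy_cipher(chunk: bytes, key: bytes) -> bytes:
    """Placeholder for the (unspecified) deniable encryption primitive:
    a repeating-key XOR stands in for the real, far costlier cipher.
    XOR is involutive, so the same call also decrypts a chunk."""
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(chunk))

def encrypt_divide_and_conquer(data: bytes, key: bytes,
                               chunk_size: int = 4096) -> list:
    """Divide-and-conquer wrapper: split the input into chunks and encrypt
    each independently (here, concurrently). The base cipher is called
    unmodified, mirroring the paper's requirement that the base algorithm
    stay untouched."""
    chunks = [data[i:i + chunk_size] for i in range(0, len(data), chunk_size)]
    with ThreadPoolExecutor() as pool:
        return list(pool.map(lambda c: toy_cipher(c, key), chunks))
```

Because each chunk is ciphered independently, decryption is just the same per-chunk operation followed by concatenation, and the chunk-level work can be distributed across cores, which is where the execution-time reduction would come from.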


2019 ◽  
Vol 29 (7) ◽  
pp. 073119 ◽  
Author(s):  
Zhong-Ke Gao ◽  
Wei Guo ◽  
Qing Cai ◽  
Chao Ma ◽  
Yuan-Bo Zhang ◽  
...  
