compression rate: Recently Published Documents

TOTAL DOCUMENTS: 322 (FIVE YEARS: 102)
H-INDEX: 19 (FIVE YEARS: 3)

2022 ◽  
Vol 23 (1) ◽  
Author(s):  
Miaoshan Lu ◽  
Shaowei An ◽  
Ruimin Wang ◽  
Jinyin Wang ◽  
Changbin Yu

Abstract Background: As the precision of mass spectrometry (MS) increases, MS file sizes grow rapidly. Beyond the widely used open format mzML, near-lossless and lossless compression algorithms and formats have emerged for scenarios with different precision requirements; the required precision is often tied to the instrument and the subsequent processing algorithms. Unlike storage-oriented formats, which focus mainly on lossless compression rate, computation-oriented formats weigh decoding speed as heavily as compression rate. Results: Here we introduce "Aird", an open-source, computation-oriented format with controllable precision, flexible indexing strategies, and a high compression rate. Aird provides a novel compressor called Zlib-Diff-PforDelta (ZDPD) for m/z data. Compared with Zlib alone, Aird reduces m/z data size by about 55% on average. Thanks to the high-speed encoding and decoding enabled by the single-instruction-multiple-data (SIMD) technology used in ZDPD, Aird takes only 33% of the decoding time of Zlib. We downloaded seven datasets from ProteomeXchange and MetaboLights, acquired on different SCIEX, Thermo, and Agilent instruments, converted the raw data into mzML, mgf, and mz5 formats with MSConvert, and compared them with the Aird format. Aird uses JavaScript Object Notation (JSON) for metadata storage. Aird-SDK is written in Java, and AirdPro, a GUI client for converting vendor files, is written in C#. They are freely available at https://github.com/CSi-Studio/Aird-SDK and https://github.com/CSi-Studio/AirdPro. Conclusions: As MS acquisition modes evolve, the characteristics of MS data keep changing. New data features enable more effective compression methods and new index modes for high search performance, and MS data storage will become increasingly specialized and customized. ZDPD exploits several digital features of MS data, and researchers can also apply it in other formats such as mzML.
Aird is designed to be a computation-oriented data format with high scalability, a high compression rate, and fast decoding speed.
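The core ZDPD idea of delta-preprocessing a sorted m/z array before handing it to a general-purpose compressor can be illustrated with a minimal Python sketch. Plain zlib stands in for the full Zlib-Diff-PforDelta pipeline (no PforDelta bit-packing or SIMD here), and the fixed-point m/z values are invented:

```python
import random
import struct
import zlib

def delta_encode(values):
    # Sorted m/z arrays have small adjacent differences, which
    # compress far better than the raw absolute values.
    out, prev = [], 0
    for v in values:
        out.append(v - prev)
        prev = v
    return out

def delta_decode(deltas):
    out, acc = [], 0
    for d in deltas:
        acc += d
        out.append(acc)
    return out

# Hypothetical m/z axis: sorted fixed-point integers (e.g. m/z * 1e5)
random.seed(0)
mz = sorted(random.sample(range(10_000_000, 99_999_999), 10_000))

raw = struct.pack(f"<{len(mz)}q", *mz)
diffed = struct.pack(f"<{len(mz)}q", *delta_encode(mz))

plain_size = len(zlib.compress(raw))
delta_size = len(zlib.compress(diffed))
assert delta_size < plain_size        # delta preprocessing shrinks the output
assert delta_decode(delta_encode(mz)) == mz   # round trip is lossless
```

The real format additionally bit-packs the deltas (the "PforDelta" part), which is where the reported 55% size reduction and SIMD decoding speedup come from.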


2021 ◽  
Vol 10 (12) ◽  
pp. 817
Author(s):  
Zhihong Ouyang ◽  
Lei Xue ◽  
Feng Ding ◽  
Da Li

Linear approximate segmentation and compression of moving-target spatio-temporal trajectories can reduce data-storage pressure and improve the efficiency of mining target motion patterns. High-quality segmentation and compression require accurately selecting and storing as few points as possible while still reflecting the characteristics of the original trajectory; existing methods leave room for improvement in segmentation accuracy, compression rate, and simplicity of parameter setting. We propose a trajectory segmentation and compression algorithm based on particle swarm optimization. First, trajectory segmentation is cast as a global intelligent-optimization problem over segment feature points, which makes the selection of segmentation points more accurate; then, a particle update strategy combining neighborhood adjustment and random jumps is established to improve the efficiency of segmentation and compression. Experiments on a real data set and a simulated maneuvering-target trajectory set show that, compared with existing typical methods, the proposed method has advantages in both segmentation accuracy and compression rate.
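The optimization problem being solved can be made concrete with a sketch: choose a fixed number of feature-point indices (always keeping the endpoints) so that the piecewise-linear approximation deviates as little as possible from the original trajectory. The search below is a deliberately simplified stochastic stand-in for the paper's particle-update strategy (a "neighborhood adjustment" that perturbs the current best index set, plus occasional random jumps), not the authors' algorithm; the toy trajectory is invented:

```python
import math
import random

def point_line_dist(p, a, b):
    # Perpendicular distance from p to the line through a and b.
    (px, py), (ax, ay), (bx, by) = p, a, b
    dx, dy = bx - ax, by - ay
    denom = math.hypot(dx, dy) or 1e-12
    return abs(dy * (px - ax) - dx * (py - ay)) / denom

def seg_error(traj, idxs):
    # Max deviation of the trajectory from its piecewise-linear approximation.
    err = 0.0
    for s, e in zip(idxs, idxs[1:]):
        for k in range(s + 1, e):
            err = max(err, point_line_dist(traj[k], traj[s], traj[e]))
    return err

def segment(traj, n_points=6, iters=300, swarm=20, seed=1):
    rng = random.Random(seed)
    n = len(traj)
    def rand_sol():
        mid = sorted(rng.sample(range(1, n - 1), n_points - 2))
        return [0] + mid + [n - 1]
    # Initial "swarm": best of several random index sets.
    best = min((rand_sol() for _ in range(swarm)),
               key=lambda s: seg_error(traj, s))
    for _ in range(iters):
        if rng.random() < 0.2:
            cand = rand_sol()                      # random jump
        else:                                      # neighborhood adjustment
            cand = sorted({0, n - 1,
                           *rng.sample(best[1:-1], n_points - 3),
                           rng.randrange(1, n - 1)})
        if len(cand) == n_points and seg_error(traj, cand) < seg_error(traj, best):
            best = cand
    return best

# Toy trajectory: a sine-shaped path sampled at 60 time steps
traj = [(t, math.sin(t / 5.0)) for t in range(60)]
idxs = segment(traj)
print(idxs)   # selected feature-point indices, endpoints included
```

A real PSO would track velocities and personal/global bests per particle; the point here is only the shape of the objective and of the two update moves.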


2021 ◽  
Vol 22 (1) ◽  
Author(s):  
Minhyeok Cho ◽  
Albert No

Abstract Background: Advances in sequencing technology have drastically reduced sequencing costs, and as a result the amount of sequencing data is growing explosively. Since FASTQ files (the standard sequencing data format) are huge, there is a need for efficient compression of FASTQ files, especially of the quality scores. Several quality-score compression algorithms have recently been proposed, mainly focused on lossy compression to further boost the compression rate. However, for clinical applications and archiving purposes, lossy compression cannot replace lossless compression. One of the main challenges for lossless compression is time complexity: compressing a 1 GB file can take thousands of seconds. Features such as random access are also desirable. There is therefore a need for a fast lossless compressor with a reasonable compression rate and random-access functionality. Results: This paper proposes FCLQC (Fast and Concurrent Lossless Quality scores Compressor), which supports random access and achieves lower running time through concurrent programming. Experimental results reveal that FCLQC is significantly faster than the baseline compressors at both compression and decompression, at the expense of compression ratio. Compared to LCQS (the baseline quality-score compression algorithm), FCLQC achieves at least a 31x compression-speed improvement in all settings, with a compression-ratio degradation of at most 13.58% (8.26% on average). Compared to general-purpose compressors such as 7-zip, FCLQC compresses 3x faster while achieving better compression ratios, by at least 2.08% (4.69% on average). Moreover, its random-access decompression speed also outperforms the others. The concurrency of FCLQC is implemented in Rust, and the performance gain increases near-linearly with the number of threads.
Conclusion: Its superior compression and decompression speed makes FCLQC a practical lossless quality-score compressor candidate for speed-sensitive applications of DNA sequencing data. FCLQC is available at https://github.com/Minhyeok01/FCLQC and is free for non-commercial use.
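The two properties the abstract combines, concurrency and random access, both fall out of one design decision: compress fixed-size blocks independently, in parallel, and keep an index of where each compressed block starts. The sketch below illustrates that pattern in Python with zlib and a thread pool; the block size and the demo quality string are invented, and FCLQC's actual Rust implementation and codec differ:

```python
import zlib
from concurrent.futures import ThreadPoolExecutor

BLOCK = 4096  # hypothetical block size; FCLQC's real parameters differ

def compress_blocks(data: bytes, workers: int = 4):
    # Compress fixed-size blocks independently so each one can be
    # decompressed on its own -- this is what enables random access.
    blocks = [data[i:i + BLOCK] for i in range(0, len(data), BLOCK)]
    with ThreadPoolExecutor(max_workers=workers) as pool:
        comp = list(pool.map(zlib.compress, blocks))
    # Index: (start offset, length) of each block in the output stream.
    index, off = [], 0
    for c in comp:
        index.append((off, len(c)))
        off += len(c)
    return b"".join(comp), index

def read_block(stream: bytes, index, i):
    # Random access: decompress only block i, not the whole stream.
    off, n = index[i]
    return zlib.decompress(stream[off:off + n])

# Hypothetical Phred-style quality scores for a demo
quals = b"IIIIFFFF@@@@####" * 2048
stream, index = compress_blocks(quals)
assert read_block(stream, index, 3) == quals[3 * BLOCK:4 * BLOCK]
```

Independent blocks cost some compression ratio versus one solid stream (context resets at every block boundary), which mirrors the speed-for-ratio trade-off the abstract reports.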


Circulation ◽  
2021 ◽  
Vol 144 (Suppl_2) ◽  
Author(s):  
Jocasta Ball ◽  
Ziad Nehme ◽  
Melanie Villani ◽  
Karen L Smith

Introduction: Many regions around the world have reported declining survival rates from out-of-hospital cardiac arrest (OHCA) during the COVID-19 pandemic. This has been attributed to COVID-19 infection and overwhelmed healthcare services in some regions and imposed social restrictions in others. However, the effect of the pandemic period on CPR quality, which has the potential to impact outcomes, has not yet been described. Methods: A retrospective observational study was performed using data collected in an established OHCA registry in Victoria, Australia. During a pre-pandemic period (11 February 2019-31 January 2020) and the COVID-19 pandemic period (1 February 2020-31 January 2021), 1,111 and 1,349 cases with attempted resuscitation had complete CPR quality data, respectively. The proportion of cases where CPR targets (chest compression fraction [CCF]≥90%, compression depth 5-10cm, compression rate 100-120 per minute, pre-shock pauses <6 seconds, post-shock pauses <5 seconds) were met was compared between the pre-pandemic and pandemic periods. Logistic regression was performed to identify the independent effect of the COVID-19 pandemic on achieving CPR targets. Results: The proportion of arrests where CCF≥90% significantly decreased during the pandemic (57% vs 74% in the pre-pandemic period, p<0.001) as did the proportion with pre-shock pauses <6 seconds (54% vs 62%, p=0.019) and post-shock pauses <5 seconds (68% vs 82%, p<0.001). However, the proportion within target compression rate significantly increased during the pandemic (64% vs 56%, p<0.001). Following multivariable adjustment, the COVID-19 pandemic period was independently associated with a decrease in the odds of achieving a CCF≥90% (adjusted odds ratio [AOR] 0.47 [95% CI 0.40, 0.56]), a decrease in the odds of achieving pre-shock pauses<6 seconds (AOR 0.71 [95% CI 0.52, 0.96]), and a decrease in the odds of achieving post-shock pauses<5 seconds (AOR 0.49 [95% CI 0.34, 0.71]). 
Conclusion: CPR quality was impacted during the COVID-19 pandemic period, which may have contributed to the previously identified decrease in OHCA survival. These findings reinforce the importance of maintaining effective resuscitation practices despite changes to the clinical context.
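The study's outcome variables are whether each resuscitation episode meets a set of numeric CPR targets. The thresholds below are taken directly from the abstract (CCF ≥ 90%, depth 5-10 cm, rate 100-120/min, pre-shock pause < 6 s, post-shock pause < 5 s); the example episode values are invented:

```python
def meets_cpr_targets(ccf, depth_cm, rate_per_min,
                      pre_shock_pause_s, post_shock_pause_s):
    # Targets as defined in the study; returns one boolean per target,
    # matching how the study counts the proportion of cases on target.
    return {
        "ccf": ccf >= 0.90,
        "depth": 5 <= depth_cm <= 10,
        "rate": 100 <= rate_per_min <= 120,
        "pre_shock": pre_shock_pause_s < 6,
        "post_shock": post_shock_pause_s < 5,
    }

# Hypothetical episode: on target except for the post-shock pause
case = meets_cpr_targets(ccf=0.93, depth_cm=5.6, rate_per_min=112,
                         pre_shock_pause_s=4.8, post_shock_pause_s=5.5)
print(case)
```

The study then compares the proportion of episodes meeting each target across the two periods and fits a logistic regression per target to adjust for confounders.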


Metals ◽  
2021 ◽  
Vol 11 (11) ◽  
pp. 1748
Author(s):  
Yaqi Wu ◽  
Peter K. Liaw ◽  
Yong Zhang

A refractory high-entropy alloy (HEA) bulk material was prepared by powder sintering, using equiatomic mixtures of TiZrNbMoV and NbTiAlTaV metal powders as raw materials. Phases were analyzed using X-ray diffraction (XRD), the microstructure of the specimens was observed with a scanning electron microscope, and the compressive strength was measured with an electronic universal testing machine. The results showed that a bulk cubic alloy structure containing a small amount of complex metal compounds was obtained by sintering at 1300 °C and 30 MPa for 4 h. According to the pore distribution, the resulting microstructure can be divided into dense and porous zones. At a compression rate of 10⁻⁴ s⁻¹, the yield strengths of the TiZrNbMoV and NbTiAlTaV alloys are 1201 and 700 MPa, respectively.


2021 ◽  
Vol 14 (2) ◽  
pp. 99
Author(s):  
Dewa Ayu Indah Cahya Dewi ◽  
I Made Oka Widyantara

Image compression can save bandwidth on telecommunication networks, accelerate image transmission, and reduce the memory needed for image storage, so techniques that reduce image size are needed. Image compression is an image-processing technique applied to digital images with the aim of reducing the redundancy of the data contained in the image so that it can be stored or transmitted efficiently. This research analyzed image compression results and measured the error level of the compressed images, covering JPEG compression techniques on various types of images. Compression results were measured with the MSE and PSNR methods, while the percentage of compression was determined from the compression-ratio calculation. The average ratio for JPEG compression was 0.08605, a compression rate of 91.39%. The average compression ratio for the DWT method was 0.133090833, a compression rate of 86.69%. The average compression ratio of the SVD method was 0.101938833, a compression rate of 89.80%.
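The three metrics used in this study are standard and easy to state in code. A minimal sketch (plain Python lists stand in for image arrays; the 2x2 pixel values are invented) showing MSE, PSNR, and the compression-ratio convention the abstract uses, where a ratio of 0.086 means the compressed file is 8.6% of the original, i.e. a 91.4% reduction:

```python
import math

def mse(orig, recon):
    # Mean squared error between two equally sized grayscale images.
    n = len(orig) * len(orig[0])
    return sum((o - r) ** 2
               for row_o, row_r in zip(orig, recon)
               for o, r in zip(row_o, row_r)) / n

def psnr(orig, recon, peak=255):
    # Peak signal-to-noise ratio in dB; infinite for identical images.
    e = mse(orig, recon)
    return float("inf") if e == 0 else 10 * math.log10(peak ** 2 / e)

def compression_ratio(compressed_bytes, original_bytes):
    # Convention used in the abstract: compressed size / original size.
    return compressed_bytes / original_bytes

orig  = [[10, 20], [30, 40]]
recon = [[11, 19], [30, 42]]
print(round(mse(orig, recon), 2), round(psnr(orig, recon), 2))  # → 1.5 46.37
assert abs(compression_ratio(86, 1000) - 0.086) < 1e-12
```

Lower MSE and higher PSNR indicate reconstructions closer to the original, which is how the study ranks JPEG, DWT, and SVD.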


2021 ◽  
Vol 51 (3) ◽  
pp. 225-243
Author(s):  
Abhishek YADAV ◽  
Suresh KANNAUJIYA ◽  
Prashant Kumar CHAMPATI RAY ◽  
Rajeev Kumar YADAV ◽  
Param Kirti GAUTAM

GPS measurements have proved extremely useful for quantifying strain accumulation rates and assessing seismic hazard in a region. Continuous GPS measurements provide estimates of secular motion used to understand earthquakes and other geodynamic processes. GNSS stations extending from southern India to the Higher Himalayan region have been used to quantify the strain build-up rate in Central India and the Himalaya and to assess the seismic hazard potential of this realm. The velocity solution was determined after applying Markov noise estimated from the GPS time series. The recorded GPS data were processed together with data from the closest International GNSS Service stations to estimate precise daily positions. The baseline method was used to estimate the linear strain rate between pairs of stations, whereas the principal strain axes, maximum shear strain, rotation rate, and crustal shortening rate were calculated from the site velocities using an independent least-squares-inversion-based triangulation method. The strain rates estimated by the triangulation approach exhibit a mean extension rate of 26.08 nano-strain/yr towards N131°, a compression rate of -25.38 nano-strain/yr towards N41°, a maximum shear strain rate of 51.47 nano-strain/yr, a dilation of -37.57 nano-strain/yr, and an anti-clockwise rotation rate of 0.7°/Ma. The strain rates computed from both the baseline and triangulation methods indicate extensive compression that gradually increases from the Indo-Gangetic Plain in the south to the Higher Himalaya in the north. The slip deficit rate between the India and Eurasia plates in the Kumaun-Garhwal Himalaya has been computed as 18±1.5 mm/yr based on elastic dislocation theory.
Thus, this study estimates the present-day surface deformation rate and interseismic strain accumulation rate in the Himalayan and Central Indian regions for seismic hazard analysis using continuous GPS measurements.
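The basic step of a triangulation method of this kind is fitting a uniform velocity-gradient tensor to the velocities of three stations: six observations (two velocity components per station) determine a translation plus the 2x2 gradient, whose symmetric part is the strain-rate tensor and whose antisymmetric part is the rotation rate. A sketch under invented station coordinates and a synthetic uniaxial-compression velocity field (this is not the authors' processing chain):

```python
import numpy as np

def strain_from_triangle(xy, v):
    # Model: v_i = v0 + L @ x_i, with unknowns v0 (2) and L (2x2) = 6,
    # matched by 6 equations from three stations (least squares in general).
    A, b = [], []
    for (x, y), (vx, vy) in zip(xy, v):
        A.append([1, 0, x, y, 0, 0]); b.append(vx)
        A.append([0, 1, 0, 0, x, y]); b.append(vy)
    sol, *_ = np.linalg.lstsq(np.array(A, float), np.array(b, float),
                              rcond=None)
    L = np.array([[sol[2], sol[3]], [sol[4], sol[5]]])
    strain = 0.5 * (L + L.T)               # symmetric part: strain-rate tensor
    rotation = 0.5 * (L[1, 0] - L[0, 1])   # antisymmetric part: rotation rate
    return strain, rotation

# Synthetic test field: pure E-W compression of 25 nano-strain/yr
e = 25e-9
xy = [(0.0, 0.0), (100e3, 0.0), (0.0, 100e3)]   # station coords in metres
v = [(-e * x, 0.0) for x, y in xy]              # velocities in m/yr
strain, rot = strain_from_triangle(xy, v)
print(strain[0, 0], rot)   # ≈ -2.5e-08 and 0.0: the imposed field is recovered
```

Principal strain axes, maximum shear, and dilatation then follow from the eigendecomposition and trace of the recovered strain tensor.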


Author(s):  
Shimaalsadat Mostafavi ◽  
Franz Bamer ◽  
Bernd Markert

Abstract: The formation of a reliable joint between a large number of aluminum strands for battery applications is crucial in the automotive industry, especially for autonomous-vehicle technology. In this study, the mechanical deformation and diffusion patterns of the mating interface in ultrasonic welding of aluminum were therefore investigated using molecular dynamics simulations, complemented by microscopic observations of ultrasonically welded joints between aluminum strands illustrating the influence of two process parameters. To study the nanomechanics of joint formation, two aluminum crystallites of different orientations were built. The impact of the sliding velocity and the compression rate of the upper crystal block on the diffusion pattern at the interface of the two crystallites was quantified via the diffusion coefficient. Tensile deformations of several joint configurations were simulated to investigate the load-bearing capacity of the solid-state bond, taking into account the compression rate, the sliding velocity, and the crystallite orientation. The atomic-scale simulations revealed that the crystallite orientations significantly govern the interface diffusion and the tensile strength of the joint. Interface atom diffusion increased with increasing sliding velocity; moreover, a higher sliding velocity enhances frictional heat generation between the crystallites and significantly increases the interface temperature.
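The diffusion coefficient used to quantify interface mixing is conventionally estimated from atom trajectories via the Einstein relation, D = MSD(t) / (2 d t), where MSD is the mean squared displacement and d the dimensionality. A sketch under invented parameters, with a random walk standing in for actual MD output of interface Al atoms:

```python
import random

def diffusion_coefficient(positions, dt, dim=3):
    # Einstein relation: D = MSD(t_final) / (2 * dim * t_final),
    # where MSD is averaged over all atoms.
    n_steps = len(positions) - 1
    origin = positions[0]
    msd_final = sum(
        sum((a - b) ** 2 for a, b in zip(p, o))
        for p, o in zip(positions[-1], origin)
    ) / len(origin)
    return msd_final / (2 * dim * n_steps * dt)

# Hypothetical random-walk "atoms" standing in for MD interface atoms
random.seed(42)
n_atoms, steps, dt, step_len = 200, 500, 1.0, 0.1
traj = [[(0.0, 0.0, 0.0)] * n_atoms]
for _ in range(steps):
    traj.append([tuple(c + random.gauss(0, step_len) for c in atom)
                 for atom in traj[-1]])

D = diffusion_coefficient(traj, dt)
print(D)   # ≈ step_len**2 / 2 = 0.005 for this walk, up to sampling noise
```

In a real analysis one fits the slope of MSD over a range of lag times rather than using a single endpoint, but the endpoint estimate keeps the sketch short.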


Author(s):  
J. Arturo Abraldes ◽  
Ricardo J. Fernandes ◽  
Ricardo Morán-Navarro

Survival outcomes increase significantly when cardiopulmonary resuscitation (CPR) is performed correctly, but rescuer fatigue can compromise its delivery. We investigated the effect of two exercise modes on CPR effectiveness and physiological outputs. After a 4 min baseline condition, 30 lifeguards randomly performed a 100 m run and a combined water rescue before 4 min of CPR (using an adult manikin and a 30:2 compression-ventilation ratio). Physiological variables were measured continuously during baseline and CPR with a portable gas analyzer (K4b2, Cosmed, Rome, Italy), and CPR effectiveness was analyzed using two HD video cameras. Higher oxygen uptake (23.0 ± 9.9 and 20.6 ± 9.1 vs. 13.5 ± 6.2 mL·kg⁻¹·min⁻¹) and heart rate (137 ± 19 and 133 ± 15 vs. 114 ± 15 bpm), and lower compression efficacy (63.3 ± 29.5 and 62.2 ± 28.3 vs. 69.2 ± 28.0%), were found for CPRrun and CPRswim compared to CPRbase. In addition, ventilation efficacy was higher in the rescues preceded by intense exercise than in CPRbase (49.5 ± 42.3 and 51.9 ± 41.0 vs. 33.5 ± 38.3%), but no differences were detected between CPRrun and CPRswim. In conclusion, the CPRrun and CPRswim protocols induced relevant physiological stress in each minute and over the whole CPR period compared with CPRbase. The CPRrun protocol reduces the compression rate but achieves a higher effectiveness percentage than the CPRswim protocol, which yields a considerably higher compression rate but lower efficacy.


2021 ◽  
Vol 2021 ◽  
pp. 1-13
Author(s):  
Lixin Li ◽  
Liwen Cao

Embedded microsystems are widely used in IoT devices because of their specific functions and hardware decoding technology, and they have great advantages in data processing. This article adds a semantic-analysis model for literary vocabulary to an embedded microsystem to reduce power consumption and improve the accuracy and speed of the system. The main purpose of this paper is to improve the accuracy and speed of semantic analysis of literary vocabulary on an embedded microsystem by combining the design ideas of Robotic Process Automation (RPA) with a CNN logic algorithm. The paper proposes the RPA Adam model, in which each node's vector contains not only the features of that node but also the features of its neighboring nodes. The model is applied to a graph convolutional network for homogeneous-network analysis, analyzes the types of devices that an embedded chip can carry, and displays them graphically. The results show that the error rate of the RPA Adam model differs across compression rates because of the different correlations between knowledge entities in different data sets: at high frequency, a low bit error rate of 10.79% can be maintained at a compression rate of 4.85%, but when the high-frequency compression rate is only 60.32% the error rate rises to 11.26%, whereas at low frequency the error rate is 9.65% at a compression rate of 23.51%.

