compression layer
Recently Published Documents

TOTAL DOCUMENTS: 21 (five years: 5)
H-INDEX: 5 (five years: 0)

2021 ◽  
Vol 13 (12) ◽  
pp. 2418
Author(s):  
Qingbo Yu ◽  
Xuexin Yan ◽  
Qing Wang ◽  
Tianliang Yang ◽  
Wenxi Lu ◽  
...  

Land reclamation has been increasingly employed in many coastal cities to resolve issues associated with land scarcity and natural hazards. In particular, land subsidence is a non-negligible environmental geological problem in reclamation areas, and it is essentially caused by soil consolidation. However, spatial-scale evaluations of the average degree of consolidation (ADC) of soil layers, and of the effects of soil consolidation on land subsidence, have rarely been reported. This study carries out an integrated analysis of soil consolidation and subsidence mechanisms in the Chongming East Shoal (CES) reclamation area, Shanghai, at the spatial, macro, and micro scales, so that appropriate guidance can be provided for resisting potential environmental hazards. The interferometric synthetic aperture radar (InSAR) technique was used to retrieve settlement curves for the selected onshore (Ra) and offshore (Rb) areas. The hyperbolic (HP) model and the three-point modified exponential (TME) model were then applied in combination to predict the ultimate settlement and to determine a range for the ADC rather than a single value. With two boreholes, Ba and Bb, drilled within Ra and Rb, conventional tests, mercury intrusion porosimetry (MIP), and scanning electron microscopy (SEM) were conducted on the collected undisturbed soil to clarify the geological features of the exposed soil layers and the micro-scale pore and structure characteristics of the representative compression layer. The preliminary results showed that the ADC in Rb (93.1–94.1%) was considerably higher than that in Ra (60.8–78.7%); the clay layer was identified as the representative compression layer; and, at the micro-scale, poor permeability conditions contributed to the low consolidation efficiency and slight subsidence in Rb, even though more compression space was available. During urbanization, the offshore area may therefore suffer potential subsidence when subjected to an increasing ground load, which requires special attention.
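The HP model named above admits a simple closed form: in its common linearized version, settlement is fitted as s(t) = t/(a + b·t), so t/s(t) is linear in t and the ultimate settlement is 1/b; the degree of consolidation at any epoch then follows as the ratio of current to ultimate settlement. The sketch below illustrates that fit on made-up time/settlement values; it is a minimal illustration of the method the abstract names, not the authors' code or data.

```python
# A minimal sketch of ultimate-settlement prediction with the hyperbolic (HP)
# model, assuming the common linearized form t/s(t) = a + b*t, so that the
# ultimate settlement is 1/b and the degree of consolidation is s(t)/s_ult.
# The time/settlement values below are hypothetical placeholders.
import numpy as np

t = np.array([30.0, 60.0, 120.0, 240.0, 480.0])  # days since reclamation (hypothetical)
s = np.array([12.0, 21.0, 33.0, 45.0, 55.0])     # observed settlement, mm (hypothetical)

# Linear fit of t/s against t gives slope b and intercept a.
b, a = np.polyfit(t, t / s, 1)

s_ult = 1.0 / b      # ultimate settlement as t -> infinity
adc = s[-1] / s_ult  # degree of consolidation at the last observation
print(f"ultimate settlement ~ {s_ult:.1f} mm, degree of consolidation ~ {adc:.1%}")
```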


2021 ◽  
Vol 67 (1) ◽  
Author(s):  
Rongfeng Huang ◽  
Noboru Fujimoto ◽  
Hiroki Sakagami ◽  
Shanghuan Feng

Abstract. The sapwood and heartwood of plantation sugi (Cryptomeria japonica) and plantation hinoki (Chamaecyparis obtusa) wood were flat-sawn into timbers and then kiln-dried to a moisture content below 12%. These timbers were further processed into specific sizes, wetted on the surfaces, preheated at 150 °C, and radially compressed into sandwich-compressed timbers. The density distribution, the position and thickness of the compressed layer(s), and the surface hardness were investigated. Both sugi and hinoki timbers proved suitable for sandwich compression. By controlling the preheating time, sugi heartwood, sugi sapwood, and hinoki timbers could all be sandwich-compressed, yielding surface-compressed, interior-compressed, and center-compressed timbers. When sugi timbers were sandwich-compressed, density increased dramatically only in the earlywood, and the increased density of the compressed sugi earlywood was independent of the compressed layer(s) position, compressing distance, and annual ring width; for hinoki timbers, by contrast, density increased in both earlywood and latewood. The surface hardness of uncompressed sugi sapwood was almost twice that of uncompressed sugi heartwood. Surface compression sharply increased the surface hardness of both sugi heartwood and sugi sapwood; interior compression and center compression also increased the surface hardness of the compressed timbers, but to smaller extents. The change in surface hardness due to surface compression was consistent with the change in the surface average density of the timbers. The position of the compressed layer(s) exerted statistically significant effects on surface hardness, while the surface hardness of the compressed wood was almost unrelated to the original density of the wood or to the average density of the sandwich-compressed wood. However, a larger compressing distance led to greater surface hardness in surface-compressed wood.


2021 ◽  
Vol 293 ◽  
pp. 03006
Author(s):  
Binbin Xu

This paper studies the deformation of and stresses in a concrete plate under uniform load, taking into account the thickness of the compression layer of the layered ground. First, the load characteristics and the ground stratigraphy are discussed in detail to clarify the boundary conditions. The deformation of each ground layer is then calculated with the GEO-Cal program, and the final settlement and differential deformation are predicted using consolidation theory. Ground stresses and settlement are also calculated by FEM to verify the previous calculation.
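The layer-wise calculation this kind of analysis relies on divides the compression layer into sublayers and sums the one-dimensional compression Δσ/Es · h over them, conventionally cutting off where the additional stress falls below about 20% of the overburden (self-weight) stress. The sketch below illustrates that generic procedure with hypothetical soil parameters; GEO-Cal itself is not publicly documented here, so this is not its actual implementation.

```python
# A minimal sketch of the layer-wise summation method for settlement under a
# uniform load. Each layer contributes (additional stress / constrained
# modulus) * thickness; layers are counted only down to the bottom of the
# compression layer, taken here at the conventional point where the
# additional stress drops below 20% of the overburden stress. All soil
# values are hypothetical.
layers = [
    # (thickness m, constrained modulus Es kPa, avg additional stress kPa, avg overburden kPa)
    (2.0,  4000.0, 95.0,  18.0),
    (3.0,  5500.0, 80.0,  55.0),
    (4.0,  7000.0, 55.0, 110.0),
    (5.0,  9000.0, 40.0, 180.0),
    (6.0, 12000.0, 12.0, 260.0),  # below the 20% criterion -> excluded
]

settlement = 0.0
for h, es, dsigma, sigma_sw in layers:
    if dsigma < 0.2 * sigma_sw:      # bottom of the compression layer reached
        break
    settlement += dsigma / es * h    # one-dimensional compression of this layer

print(f"predicted final settlement ~ {settlement * 1000:.0f} mm")
```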


2020 ◽  
Vol 27 (5) ◽  
pp. 052101
Author(s):  
Shiquan Cao ◽  
Maogen Su ◽  
Jinzhu Liu ◽  
Qi Min ◽  
Duixiong Sun ◽  
...  

Author(s):  
Colin Chaigneau ◽  
Thomas Fuhr ◽  
Henri Gilbert ◽  
Jian Guo ◽  
Jérémy Jean ◽  
...  

This paper presents a cryptanalysis of full Kravatte, an instantiation of the Farfalle construction of a pseudorandom function (PRF) with variable input and output length. This new construction, proposed by Bertoni et al., introduces an efficiently parallelizable and extremely versatile building block for the design of symmetric mechanisms, e.g. message authentication codes or stream ciphers. It relies on a set of permutations and on so-called rolling functions: it can be split into a compression layer followed by a two-step expansion layer. The key is expanded and used to mask the inputs and outputs of the construction. Kravatte instantiates Farfalle using linear rolling functions and permutations obtained by iterating the Keccak round function. We develop in this paper several attacks against this PRF, based on three different attack strategies that bypass part of the construction and target a reduced number of permutation rounds. A higher-order differential distinguisher exploits the possibility of building an affine space of values in the cipher state after the compression layer. An algebraic meet-in-the-middle attack can be mounted on the second step of the expansion layer. Finally, due to the linearity of the rolling function and the low algebraic degree of the Keccak round function, a linear recurrence distinguisher can be found on intermediate states of the second step of the expansion layer. All the attacks rely on the ability to invert a small number of the final rounds of the construction. In particular, the last two rounds of the construction, together with the final masking by the key, can be algebraically inverted, which allows the key to be recovered. The complexities of the devised attacks, applied to the Kravatte specification published on the IACR ePrint archive in July 2017 or to the strengthened version of Kravatte recently presented at ECC 2017, are far below the claimed security.
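The higher-order differential distinguisher rests on a standard fact: a Boolean function of algebraic degree d sums to zero over every affine subspace of dimension at least d + 1. The toy sketch below demonstrates this property on a hypothetical degree-2 function standing in for a reduced-round state bit; it illustrates the principle only and is not the Kravatte attack itself.

```python
# A toy illustration of the higher-order differential property: a Boolean
# function of algebraic degree d sums (XORs) to zero over any affine
# subspace of dimension >= d+1. The degree-2 function below is a
# hypothetical stand-in for a low-degree permutation output bit.
from itertools import product

def f(x):
    # Degree-2 Boolean function on 8-bit states (hypothetical stand-in).
    b = [(x >> i) & 1 for i in range(8)]
    return b[0] & b[1] ^ b[2] & b[3] ^ b[4] ^ b[7]

base = 0b10110001                            # offset of the affine subspace
dirs = [0b00000001, 0b00000110, 0b01010000]  # 3 linearly independent directions

# XOR f over the 2^3 points of the affine subspace base + span(dirs).
acc = 0
for coeffs in product([0, 1], repeat=3):
    x = base
    for c, d in zip(coeffs, dirs):
        if c:
            x ^= d
    acc ^= f(x)

print(acc)  # 0: dimension 3 exceeds degree 2, so the derivative vanishes
```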


Author(s):  
Guido Bertoni ◽  
Joan Daemen ◽  
Seth Hoffert ◽  
Michaël Peeters ◽  
Gilles Van Assche ◽  
...  

In this paper, we introduce Farfalle, a new permutation-based construction for building a pseudorandom function (PRF). The PRF takes as input a key and a sequence of arbitrary-length data strings, and returns an arbitrary-length output. It has a compression layer and an expansion layer, each involving the parallel application of a permutation. The construction also makes use of LFSR-like rolling functions for generating input and output masks and for updating the inner state during expansion. On top of the inherent parallelism, Farfalle instances can be very efficient because the construction imposes fewer requirements on the underlying primitive than, e.g., the duplex construction or typical block cipher modes. Farfalle has an incremental property: the compression of common input prefixes can be factored out. Thanks to its input-output characteristics, Farfalle is highly versatile. We specify simple modes on top of it for authentication, encryption, and authenticated encryption, as well as a wide block cipher mode. As a showcase, we present Kravatte, a very efficient instance of Farfalle based on Keccak-p[1600, nr] permutations, and formulate concrete security claims against classical and quantum adversaries. The permutations in the compression and expansion layers of Kravatte have only 6 rounds apiece, and the rolling functions are lightweight. We provide a rationale for our choices and report on software performance.
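To make the two-layer shape concrete, the toy sketch below mimics the Farfalle data flow on 16-bit words: input blocks are masked with rolled key masks, compressed in parallel through a permutation into an XOR accumulator, and the expansion layer then rolls an inner state and permutes it into masked output blocks. The permutation and rolling function here are invented placeholders (real Kravatte uses Keccak-p and LFSR-based rolling), and details such as padding and the exact mask schedule are simplified away.

```python
# A toy sketch of the Farfalle shape on 16-bit words: keyed input masks, a
# parallel XOR-accumulating compression layer, and a rolling-function-driven
# expansion layer. Placeholder permutation and rolling function; not Kravatte.
MASK16 = 0xFFFF

def perm(x):
    # Placeholder nonlinear permutation on 16 bits (not Keccak-p).
    x = (x * 0x6D2B + 0x3C6E) & MASK16
    return ((x << 7) | (x >> 9)) & MASK16

def roll(x):
    # Placeholder LFSR-like rolling function on 16 bits.
    fb = ((x >> 15) ^ (x >> 13) ^ (x >> 4)) & 1
    return ((x << 1) | fb) & MASK16

def farfalle_toy(key, blocks, n_out):
    k = perm(key)                    # derive the input mask from the key
    acc = 0
    for i, m in enumerate(blocks):   # compression layer: parallel XOR accumulate
        mask = k
        for _ in range(i):           # i-th block gets the i-times-rolled mask
            mask = roll(mask)
        acc ^= perm(m ^ mask)
    y = perm(acc)                    # bridge between compression and expansion
    out, kp = [], roll(k)            # output mask
    for _ in range(n_out):           # expansion layer
        out.append(perm(y) ^ kp)
        y = roll(y)                  # roll the inner state between output blocks
    return out

print(farfalle_toy(0xBEEF, [0x0102, 0x0304, 0x0506], 4))
```

Because the accumulator is a plain XOR of independently permuted blocks, the contribution of a common input prefix can be cached and reused, which is the incremental property mentioned above.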


2017 ◽  
Vol 20 (11) ◽  
pp. 1757-1767 ◽  
Author(s):  
Saman Rashidyan ◽  
Mohammad-Reza Sheidaii

Progressive collapse is a chain of local failures leading to the collapse of either the entire structure or a part of it. Double-layer space trusses are susceptible to progressive collapse due to the sudden buckling of compression members. Strengthening the compression layer members while weakening the tension layer members is an effective method for retrofitting double-layer space trusses against progressive collapse. In this study, the method is applied to offset double-layer space truss models with different support conditions, member geometrical imperfections, heights, and shapes, and its effectiveness in increasing the structure's ductility and load-bearing capacity is demonstrated. The results show that the method converts the sudden collapse of the structures into a beneficial gradual (progressive) collapse. More specifically, for double-layer space trusses comprising members with similar geometrical imperfections, strengthening the compression layer chords while weakening the tension layer chords by 30%–40% significantly improves ductility and load-bearing capacity. In addition, the results show that the method can decrease the weight of the structures and consequently yield more economical structures.


2017 ◽  
Vol 10 (1) ◽  
pp. 413-423 ◽  
Author(s):  
Jeremy D. Silver ◽  
Charles S. Zender

Abstract. The netCDF-4 format is widely used for large gridded scientific data sets and includes several compression methods: lossy linear scaling and the non-lossy deflate and shuffle algorithms. Many multidimensional geoscientific data sets exhibit considerable variation over one or several spatial dimensions (e.g., vertically) with less variation in the remaining dimensions (e.g., horizontally). On such data sets, linear scaling with a single pair of scale and offset parameters often entails considerable loss of precision. We introduce an alternative compression method called "layer-packing" that simultaneously exploits lossy linear scaling and lossless compression. Layer-packing stores arrays (instead of a scalar pair) of scale and offset parameters. An implementation of this method is compared with lossless compression, storing data at fixed relative precision (bit-grooming), and scalar linear packing in terms of compression ratio, accuracy, and speed. When viewed as a trade-off between compression and error, layer-packing yields results similar to bit-grooming (storing between 3 and 4 significant figures). Bit-grooming and layer-packing offer significantly better control of precision than scalar linear packing. The relative performance, in terms of compression and errors, of bit-groomed and layer-packed data was strongly predicted by the entropy of the exponent array, while lossless compression was well predicted by the entropy of the original data array. Layer-packed data files must be "unpacked" to be readily usable. These compression and precision characteristics make layer-packing a competitive archive format for many scientific data sets.
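As a concrete sketch of how layer-packing differs from scalar packing, the following packs each vertical layer to 16-bit integers with its own scale and offset. The packing convention used here (scale = range / (2^16 − 1)) is the usual netCDF linear-packing one, but the exact conventions of the authors' implementation may differ, and the test field is synthetic.

```python
# A minimal sketch of layer-packing: one scale/offset pair per vertical
# layer instead of a single pair for the whole array. Conventions and test
# data are illustrative, not the authors' implementation.
import numpy as np

def pack_layers(data, axis=0):
    """Pack each layer along `axis` to uint16 with its own scale/offset."""
    data = np.moveaxis(data, axis, 0)
    lo = data.min(axis=tuple(range(1, data.ndim)), keepdims=True)
    hi = data.max(axis=tuple(range(1, data.ndim)), keepdims=True)
    scale = (hi - lo) / (2**16 - 1)
    scale = np.where(scale == 0, 1.0, scale)  # guard constant layers
    packed = np.round((data - lo) / scale).astype(np.uint16)
    return packed, scale, lo

def unpack_layers(packed, scale, lo):
    return packed * scale + lo

# Synthetic field with strong vertical variation: pressure-like decay over
# 20 model levels plus small horizontal noise (hypothetical test data).
rng = np.random.default_rng(0)
field = np.exp(-np.arange(20.0))[:, None, None] * (
    1 + 0.01 * rng.standard_normal((20, 64, 64)))

packed, scale, lo = pack_layers(field)
err = np.abs(unpack_layers(packed, scale, lo) - field).max()
print(f"max absolute error after layer-packing: {err:.3e}")
```

With a single scalar pair, the scale would be set by the largest layer's range, wiping out most of the precision in the small-valued layers; per-layer parameters keep the quantization step proportional to each layer's own range.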


2016 ◽  
Author(s):  
Jeremy D. Silver ◽  
Charles S. Zender

Abstract. The netCDF-4 format is widely used for large gridded scientific datasets and includes several compression methods: lossy linear scaling and the non-lossy deflate and shuffle algorithms. Many multidimensional datasets exhibit considerable variation over one or several spatial dimensions (e.g., vertically) with less variation in the remaining dimensions (e.g., horizontally). On such datasets, linear scaling with a single pair of scale and offset parameters often entails considerable loss of precision. We propose a method (termed "layer packing") that simultaneously exploits lossy linear scaling and lossless compression. Layer packing stores arrays (instead of a scalar pair) of scale and offset parameters. An implementation of this method is compared with existing compression techniques in terms of compression ratio, accuracy, and speed. Layer packing produces typical errors of 0.01–0.02% of the standard deviation within the packed layer, and yields files roughly 33% smaller than the lossless deflate algorithm. This was similar to storing between 3 and 4 significant figures per datum. In the six test datasets considered, layer packing demonstrated a better compression/error trade-off than storing 3–4 significant digits in half of the cases and a worse one in the remaining cases, highlighting the need to compare lossy compression methods in individual applications. Layer packing preserves substantially more precision than scalar linear packing, whereas scalar linear packing achieves greater compression ratios. Layer-packed data files must be "unpacked" to be readily usable. These characteristics make layer-packing a competitive archive format for many geophysical datasets.

