Observing desert dust devils with a pressure logger

2012 ◽  
Vol 1 (2) ◽  
pp. 209-220 ◽  
Author(s):  
R. D. Lorenz

Abstract. A commercial pressure logger has been adapted for long-term field use. Its flash memory affords the large data volume needed to acquire months of pressure measurements at the rapid cadence (>1 Hz) required to detect dust devils, small dust-laden convective vortices observed in arid regions. The power consumption of the unit is studied, and battery and solar/battery options are evaluated for long-term observations. A two-month-long field test is described, and several example dust devil encounters are examined. In addition, a periodic (~20 min) convective signature is observed, and some lessons in operations and in correcting data for temperature drift are reported. The unit shows promise for obtaining good statistics on dust devil pressure drops, to permit comparison with Mars lander measurements, and for array measurements.
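For illustration, a minimal detection sketch along these lines: sample pressure faster than 1 Hz, estimate the slowly varying background with a running mean (which also absorbs residual temperature drift), and flag brief excursions below it. The window length, 0.2 hPa threshold, and synthetic test signal are assumptions for the sketch, not the logger's actual processing chain.

```python
import numpy as np

def detect_pressure_drops(p_hpa, sample_hz=2.0, window_s=120.0, drop_hpa=0.2):
    """Flag short-lived pressure drops against a slowly varying background.

    p_hpa     : 1-D array of pressure samples in hPa (mb)
    sample_hz : sampling cadence; dust devil detection needs >1 Hz
    window_s  : length of the running-mean background window in seconds
    drop_hpa  : minimum excursion below background to count as an event
    """
    n = max(3, int(window_s * sample_hz))
    kernel = np.ones(n) / n
    background = np.convolve(p_hpa, kernel, mode="same")  # slow drift (diurnal/temperature trends)
    residual = p_hpa - background                         # fast fluctuations
    hits = residual < -drop_hpa                           # candidate vortex encounters
    # collapse consecutive flagged samples into distinct events
    starts = np.flatnonzero(hits & ~np.r_[False, hits[:-1]])
    return starts / sample_hz                              # event start times in seconds

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    t = np.arange(0, 3600, 0.5)                            # one hour at 2 Hz
    p = 1000 + 0.5 * np.sin(2 * np.pi * t / 3600) + 0.02 * rng.standard_normal(t.size)
    p -= 0.4 / (1 + ((t - 1800) / 5.0) ** 2)               # synthetic ~0.4 hPa vortex encounter
    print(detect_pressure_drops(p))
```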


2014 ◽  
Vol 71 (12) ◽  
pp. 4461-4472 ◽  
Author(s):  
Ralph D. Lorenz

Abstract A phenomenological model is developed wherein vortices are introduced at random into a virtual arena with specified distributions of diameter, core pressure drop, longevity, and translation speed, and the pressure history at a fixed station is generated using an analytic model of vortex structure. Only a subset of the vortices present are detected as temporary pressure drops, and the observed peak pressure-drop distribution has a shallower slope than the vortex-core pressure drops. Field studies indicate a detection rate of about two vortex events per day under favorable conditions for a threshold of 0.2 mb (1 mb = 1 hPa): this encounter rate and the observed falloff of events with increasing pressure drop can be reproduced in the model with approximately 300 vortices per square kilometer per day—rather more than the highest visual dust devil counts of approximately 100 devils per square kilometer per day. This difference can be reconciled if dust lifting typically only occurs in the field above a threshold core pressure drop of about 0.3 mb, consistent with observed laboratory pressure thresholds. The vortex population modeled to reproduce field results is concordant with recent high-resolution large-eddy simulations, which produce some thousands of 0.04–0.1-mb vortices per square kilometer per day, suggesting that these accurately reproduce the character of the strongly heated desert boundary layer. The amplitude and duration statistics of observed pressure drops suggest large dust devils may preferentially be associated with low winds.
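A toy Monte Carlo version of such a phenomenological model is sketched below: vortices are drawn at random with core pressure drops, diameters, and miss distances, the pressure drop seen at a fixed station is computed from a Lorentzian radial profile (a common analytic choice for vortex structure), and only excursions above a 0.2 mb threshold count as detections. The distribution parameters are illustrative assumptions, not the paper's calibrated population.

```python
import numpy as np

rng = np.random.default_rng(1)

N_PER_KM2_DAY = 300        # illustrative areal vortex rate (per km^2 per day)
ARENA_KM = 2.0             # side length of square arena with the station at its centre
THRESHOLD_MB = 0.2         # station detection threshold (hPa)
DAYS = 30

n_vortices = rng.poisson(N_PER_KM2_DAY * ARENA_KM**2 * DAYS)

# Illustrative population statistics: heavy-tailed core pressure drops, modest diameters
core_drop_mb = 0.04 * (1 - rng.random(n_vortices)) ** -0.7
diameter_m = rng.lognormal(mean=np.log(10.0), sigma=0.8, size=n_vortices)

# Closest-approach (miss) distance of each vortex track to the station
x = (rng.random(n_vortices) - 0.5) * ARENA_KM * 1000.0
miss_m = np.abs(x)

# Lorentzian radial pressure profile: dp(r) = dp0 / (1 + (2 r / D)^2)
peak_drop_at_station = core_drop_mb / (1.0 + (2.0 * miss_m / diameter_m) ** 2)

detections = peak_drop_at_station > THRESHOLD_MB
print(f"{n_vortices} vortices simulated, "
      f"{detections.sum()} detected above {THRESHOLD_MB} mb "
      f"({detections.sum() / DAYS:.2f} per day)")
```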


2012 ◽  
Vol 1 (2) ◽  
pp. 151-154 ◽  
Author(s):  
A. Spiga

Abstract. Lorenz (2012) proposes to use pressure loggers for long-term field measurements in terrestrial deserts. The dataset obtained through this method features both pressure drops (reminiscent of dust devils) and periodic convective signatures. Here we use large-eddy simulations to provide an explanation for those periodic convective signatures and to argue that pressure measurements in deserts have broader applications than monitoring dust devils.


2021 ◽  
Vol 17 (2) ◽  
pp. 1-45
Author(s):  
Cheng Pan ◽  
Xiaolin Wang ◽  
Yingwei Luo ◽  
Zhenlin Wang

Due to large data volume and low latency requirements of modern web services, the use of an in-memory key-value (KV) cache often becomes an inevitable choice (e.g., Redis and Memcached). The in-memory cache holds hot data, reduces request latency, and alleviates the load on background databases. Inheriting from the traditional hardware cache design, many existing KV cache systems still use recency-based cache replacement algorithms, e.g., least recently used or its approximations. However, the diversity of miss penalty distinguishes a KV cache from a hardware cache. Inadequate consideration of penalty can substantially compromise space utilization and request service time. KV accesses also demonstrate locality, which needs to be coordinated with miss penalty to guide cache management. In this article, we first discuss how to enhance the existing cache model, the Average Eviction Time model, so that it can adapt to modeling a KV cache. After that, we apply the model to Redis and propose pRedis, Penalty- and Locality-aware Memory Allocation in Redis, which synthesizes data locality and miss penalty, in a quantitative manner, to guide memory allocation and replacement in Redis. At the same time, we also explore the diurnal behavior of a KV store and exploit long-term reuse. We replace the original passive eviction mechanism with an automatic dump/load mechanism, to smooth the transition between access peaks and valleys. Our evaluation shows that pRedis effectively reduces the average and tail access latency with minimal time and space overhead. For both real-world and synthetic workloads, our approach delivers an average of 14.0%∼52.3% latency reduction over a state-of-the-art penalty-aware cache management scheme, Hyperbolic Caching (HC), and shows more quantitative predictability of performance. Moreover, we can obtain even lower average latency (1.1%∼5.5%) when dynamically switching policies between pRedis and HC.
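The general principle of weighing locality against miss penalty can be illustrated with a toy eviction policy: each cached key carries a crude reuse estimate and a measured miss penalty, and the victim is the key whose eviction is expected to cost least. This is only a sketch of the idea; pRedis derives its locality signal from the Average Eviction Time model and manages memory allocation inside Redis itself, which this illustration does not reproduce.

```python
import time

class PenaltyAwareCache:
    """Toy KV cache: evict the entry with the lowest (reuse estimate x miss penalty)."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.store = {}          # key -> (value, last_access, penalty_ms)

    def get(self, key, fetch_fn):
        if key in self.store:
            value, _, penalty = self.store[key]
            self.store[key] = (value, time.monotonic(), penalty)
            return value
        t0 = time.monotonic()
        value = fetch_fn(key)                        # backend miss
        penalty = (time.monotonic() - t0) * 1000.0   # measured miss penalty in ms
        if len(self.store) >= self.capacity:
            self._evict()
        self.store[key] = (value, time.monotonic(), penalty)
        return value

    def _evict(self):
        now = time.monotonic()
        def expected_cost(item):
            _, (_, last, penalty) = item
            reuse_score = 1.0 / (1.0 + (now - last))  # crude recency-based locality proxy
            return reuse_score * penalty              # expected cost of re-fetching if evicted
        victim = min(self.store.items(), key=expected_cost)[0]
        del self.store[victim]
```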


2018 ◽  
Vol 4 (12) ◽  
pp. 142 ◽  
Author(s):  
Hongda Shen ◽  
Zhuocheng Jiang ◽  
W. Pan

Hyperspectral imaging (HSI) technology has been used for various remote sensing applications due to its excellent capability of monitoring regions of interest over a period of time. However, the large data volume of four-dimensional multitemporal hyperspectral imagery demands massive data compression techniques. While conventional 3D hyperspectral data compression methods exploit only spatial and spectral correlations, we propose a simple yet effective predictive lossless compression algorithm that can achieve significant gains in compression efficiency by also taking into account the temporal correlations inherent in the multitemporal data. We present an information-theoretic analysis to estimate the potential compression performance gain with varying configurations of context vectors. Extensive simulation results demonstrate the effectiveness of the proposed algorithm. We also provide in-depth discussions on how to construct the context vectors in the prediction model for both multitemporal HSI and conventional 3D HSI data.
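A minimal sketch of context-based prediction for multitemporal HSI: each sample is predicted from a context vector of causal spatial, spectral, and temporal neighbours, and only the residuals would then be entropy coded. The neighbour selection and single global least-squares predictor are illustrative assumptions, not the paper's configuration.

```python
import numpy as np

def temporal_context_residuals(cube_prev, cube_curr):
    """Predict the current-time hyperspectral cube from a context vector and return residuals.

    cube_prev, cube_curr : arrays of shape (bands, rows, cols) for two acquisition times.
    Context per pixel: left neighbour, upper neighbour, previous band, and the
    co-located pixel at the previous time step (the temporal correlation).
    """
    B, R, C = cube_curr.shape
    # Causal neighbours (edge-replicated at the borders)
    left  = np.pad(cube_curr, ((0, 0), (0, 0), (1, 0)), mode="edge")[:, :, :-1]
    up    = np.pad(cube_curr, ((0, 0), (1, 0), (0, 0)), mode="edge")[:, :-1, :]
    prevb = np.pad(cube_curr, ((1, 0), (0, 0), (0, 0)), mode="edge")[:-1, :, :]
    X = np.stack([left, up, prevb, cube_prev], axis=-1).reshape(-1, 4)
    y = cube_curr.reshape(-1)
    # One least-squares predictor over the whole cube (a real coder would adapt locally)
    w, *_ = np.linalg.lstsq(X, y, rcond=None)
    residuals = (y - X @ w).reshape(B, R, C)
    return residuals

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    scene = rng.normal(size=(8, 32, 32)).cumsum(axis=0)          # smooth synthetic spectra
    t0, t1 = scene, scene + 0.05 * rng.normal(size=scene.shape)  # strongly correlated time steps
    res = temporal_context_residuals(t0, t1)
    print("residual std vs raw std:", res.std(), t1.std())
```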


2021 ◽  
Author(s):  
Rens Hofman ◽  
Joern Kummerow ◽  
Simone Cesca ◽  
Joachim Wassermann ◽  
Thomas Plenefisch ◽  
...  

The AlpArray seismological experiment is an international and interdisciplinary project to advance our understanding of geophysical processes in the greater Alpine region. The heart of the project is a large seismological array that covers the mountain range and its surrounding areas. To understand how the Alps and their neighbouring mountain belts evolved through time, we can only study their current structure and processes. The Eastern Alps are of prime interest since they currently show the highest crustal deformation rates. A key question is how these surface processes are linked to deeper structures. The Swath-D network is an array of temporary seismological stations, complementary to the AlpArray network, located in the Eastern Alps. This creates a unique opportunity to investigate seismicity at high resolution on a local scale.

In this study, a combination of waveform-based detection methods was used to find small earthquakes in the large data volume of the Swath-D network. Methods were developed to locate the seismic events using semi-automatic picks and to estimate event magnitudes. We present an overview of the methods and workflow, as well as a preliminary overview of the seismicity in the Eastern Alps.
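As a generic illustration of the waveform-based detection stage, the classic STA/LTA energy-ratio trigger below flags transient increases in a continuous trace; the window lengths and threshold are assumed values, and the study itself combines several detection methods with semi-automatic picking rather than this single trigger (in practice an established implementation such as ObsPy's trigger routines would typically be used).

```python
import numpy as np

def sta_lta_trigger(trace, fs, sta_s=1.0, lta_s=30.0, threshold=4.0):
    """Return sample indices where the STA/LTA ratio of signal energy exceeds a threshold.

    trace : 1-D seismic trace, fs : sampling rate in Hz.
    Window lengths and threshold are illustrative defaults, not the study's values.
    """
    energy = trace.astype(float) ** 2
    nsta, nlta = int(sta_s * fs), int(lta_s * fs)
    csum = np.concatenate(([0.0], np.cumsum(energy)))
    sta = (csum[nsta:] - csum[:-nsta]) / nsta        # short-term average energy
    lta = (csum[nlta:] - csum[:-nlta]) / nlta        # long-term average energy
    n = min(sta.size, lta.size)
    ratio = sta[-n:] / np.maximum(lta[:n], 1e-12)    # align windows ending at the same sample
    offset = trace.size - n
    return offset + np.flatnonzero(ratio > threshold)
```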


2020 ◽  
Vol 16 (11) ◽  
pp. e1008415
Author(s):  
Teresa Maria Rosaria Noviello ◽  
Francesco Ceccarelli ◽  
Michele Ceccarelli ◽  
Luigi Cerulo

Small non-coding RNAs (ncRNAs) are short non-coding sequences involved in gene regulation in many biological processes and diseases. The lack of a complete understanding of their biological functionality, especially in a genome-wide scenario, has demanded new computational approaches to annotate their roles. It is widely held that secondary structure is a key determinant of RNA function, and machine-learning-based approaches have been shown to predict RNA function from secondary structure information. Here we show that RNA function can be predicted with good accuracy from a lightweight representation of sequence information, without the need to compute secondary structure features, which is computationally expensive. This finding appears to go against the dogma of secondary structure being a key determinant of function in RNA. Compared to recent secondary-structure-based methods, the proposed solution is more robust to sequence boundary noise and drastically reduces the computational cost, allowing for large-data-volume annotations. Scripts and datasets to reproduce the results of the experiments proposed in this study are available at: https://github.com/bioinformatics-sannio/ncrna-deep.
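The sequence-only idea can be sketched as follows: encode each RNA as a simple k-mer count vector (one possible lightweight representation; the paper's actual encodings and deep architecture differ) and train an off-the-shelf classifier, with no secondary structure computed anywhere in the pipeline.

```python
import itertools
import numpy as np
from sklearn.linear_model import LogisticRegression

def kmer_features(seqs, k=3):
    """Encode RNA sequences as k-mer count vectors (no secondary structure needed)."""
    alphabet = "ACGU"
    kmers = ["".join(p) for p in itertools.product(alphabet, repeat=k)]
    index = {km: i for i, km in enumerate(kmers)}
    X = np.zeros((len(seqs), len(kmers)))
    for row, s in enumerate(seqs):
        for i in range(len(s) - k + 1):
            j = index.get(s[i:i + k])
            if j is not None:        # skip k-mers containing ambiguous bases
                X[row, j] += 1
    return X

if __name__ == "__main__":
    # Tiny made-up example: two classes differing only in base composition
    rng = np.random.default_rng(0)
    make = lambda bias, n: ["".join(rng.choice(list("ACGU"), p=bias, size=80)) for _ in range(n)]
    seqs = make([0.4, 0.1, 0.1, 0.4], 50) + make([0.1, 0.4, 0.4, 0.1], 50)
    y = np.array([0] * 50 + [1] * 50)
    clf = LogisticRegression(max_iter=1000).fit(kmer_features(seqs), y)
    print("training accuracy:", clf.score(kmer_features(seqs), y))
```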


2020 ◽  
Vol 4 (4) ◽  
pp. 191
Author(s):  
Mohammad Aljanabi ◽  
Hind Ra'ad Ebraheem ◽  
Zahraa Faiz Hussain ◽  
Mohd Farhan Md Fudzee ◽  
Shahreen Kasim ◽  
...  

Much attention has been paid to big data technologies in the past few years, mainly due to their capability to impact business analytics and data mining practices, as well as their potential to enable highly effective decision-making tools. With the current increase in the number of modern applications (including social media and other web-based and healthcare applications) that generate data in many forms and at high volume, processing such huge data volumes is becoming a challenge for conventional data processing tools. This has resulted in the emergence of big data analytics, which also comes with many challenges. This paper introduces the use of principal component analysis (PCA) for data size reduction, followed by parallelization of the support vector machine (SVM) classifier. The proposed scheme was executed on the Spark platform, and the experimental findings reveal its capability to reduce classification time without much influence on classification accuracy.
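A single-node sketch of the pipeline described above, PCA for dimensionality reduction followed by an SVM classifier, using scikit-learn on synthetic data; the paper's scheme additionally parallelizes the SVM on the Spark platform, which this illustration does not attempt.

```python
from sklearn.datasets import make_classification
from sklearn.decomposition import PCA
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import LinearSVC

# Synthetic high-dimensional data standing in for a large tabular dataset
X, y = make_classification(n_samples=5000, n_features=200, n_informative=30, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)

# PCA shrinks the feature space before the (more expensive) SVM training step
model = make_pipeline(StandardScaler(), PCA(n_components=30), LinearSVC(dual=False))
model.fit(X_train, y_train)
print("test accuracy:", model.score(X_test, y_test))
```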


Author(s):  
Yasser Hachaichi ◽  
Jamel Feki ◽  
Hanene Ben-Abdallah

Due to international economic competition, enterprises are constantly looking for efficient methods to build data marts/warehouses to analyze the large data volumes used in their decision-making processes. On the other hand, even though the relational data model is the most commonly used model, any data mart/warehouse construction method must now deal with other data types, in particular XML documents, which represent the dominant type of data exchanged between partners and retrieved from the Web. This chapter presents a data mart design method that starts from both a relational database source and XML documents compliant with a given DTD. Besides considering these two types of data structures, the originality of our method lies in being decision-maker centered, in automatically extracting loadable data mart schemas, and in its genericity.

