The thermal conductivity of seasonal snow

1997 ◽  
Vol 43 (143) ◽  
pp. 26-41 ◽  
Author(s):  
Matthew Sturm ◽  
Jon Holmgren ◽  
Max König ◽  
Kim Morris

Abstract. Twenty-seven studies on the thermal conductivity of snow (Keff) have been published since 1886. Combined, they comprise 354 values of Keff and have been used to derive over 13 regression equations predicting Keff vs. density. Due to large (and largely undocumented) differences in measurement methods and accuracy, sample temperature and snow type, it is not possible to know what part of the variability in this data set is the result of snow microstructure. We present a new data set containing 488 measurements for which the temperature, type and measurement accuracy are known. A quadratic equation, where ρ is in g cm−3 and Keff is in W m−1 K−1, can be fit to the new data (R2 = 0.79). A logarithmic expression can also be used. The first regression is better when estimating values beyond the limits of the data; the second when estimating values for low-density snow. Within the data set, snow types resulting from kinetic growth show density-independent behavior. Rounded-grain and wind-blown snow show strong density dependence. The new data set has a higher mean value of density but a lower mean value of thermal conductivity than the old set. This shift is attributed to differences in snow types and sample temperatures in the sets. Using both data sets, we show that there are well-defined limits to the geometric configurations that natural seasonal snow can take.
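The two regressions described above, quadratic in ρ, and logarithmic (i.e. linear in ρ for log Keff), can be fit by ordinary least squares. A minimal sketch with numpy, using hypothetical density–conductivity pairs; the paper's 488 measurements and fitted coefficients are not reproduced here:

```python
import numpy as np

# Illustrative only: hypothetical (density, conductivity) pairs standing in
# for the 488-measurement data set described in the abstract.
rho = np.array([0.10, 0.20, 0.30, 0.40, 0.50])    # density, g cm^-3
keff = np.array([0.05, 0.10, 0.20, 0.37, 0.60])   # conductivity, W m^-1 K^-1

# Quadratic regression: keff = a*rho^2 + b*rho + c
quad = np.polyfit(rho, keff, deg=2)

# Logarithmic regression: log10(keff) = m*rho + c (linear in rho)
m, c = np.polyfit(rho, np.log10(keff), deg=1)

def keff_quad(r):
    """Evaluate the quadratic fit at density r."""
    return np.polyval(quad, r)

def keff_log(r):
    """Evaluate the logarithmic fit at density r."""
    return 10.0 ** (m * r + c)
```

The logarithmic form stays positive at all densities, which is why it behaves better for low-density snow, while the quadratic extrapolates more sensibly beyond the data range.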


2012 ◽  
Vol 6 (6) ◽  
pp. 4673-4693 ◽  
Author(s):  
H. Löwe ◽  
F. Riche ◽  
M. Schneebeli

Abstract. Finding relevant microstructural parameters beyond the density is a longstanding problem which hinders the formulation of accurate parametrizations of physical properties of snow. Towards a remedy we address the effective thermal conductivity tensor of snow via known anisotropic, second-order bounds. The bound provides an explicit expression for the thermal conductivity and predicts the relevance of a microstructural anisotropy parameter Q which is given by an integral over the two-point correlation function and unambiguously defined for arbitrary snow structures. For validation we compiled a comprehensive data set of 167 snow samples. The set comprises individual samples of various snow types and entire time series of metamorphism experiments under isothermal and temperature gradient conditions. All samples were digitally reconstructed by micro-computed tomography to perform microstructure-based simulations of heat transport. The incorporation of anisotropy via Q considerably reduces the root mean square error over the usual density-based parametrization. The systematic quantification of anisotropy via the two-point correlation function suggests a generalizable route to incorporate microstructure into snowpack models. We indicate the inter-relation of the conductivity to other properties and outline a potential impact of Q on dielectric constant, permeability and adsorption rate of diffusing species in the pore space.


2013 ◽  
Vol 7 (5) ◽  
pp. 1473-1480 ◽  
Author(s):  
H. Löwe ◽  
F. Riche ◽  
M. Schneebeli

Abstract. Finding relevant microstructural parameters beyond density is a longstanding problem which hinders the formulation of accurate parameterizations of physical properties of snow. Towards a remedy, we address the effective thermal conductivity tensor of snow via anisotropic, second-order bounds. The bound provides an explicit expression for the thermal conductivity and predicts the relevance of a microstructural anisotropy parameter Q, which is given by an integral over the two-point correlation function and unambiguously defined for arbitrary snow structures. For validation we compiled a comprehensive data set of 167 snow samples. The set comprises individual samples of various snow types and entire time series of metamorphism experiments under isothermal and temperature gradient conditions. All samples were digitally reconstructed by micro-computed tomography to perform microstructure-based simulations of heat transport. The incorporation of anisotropy via Q considerably reduces the root mean square error over the usual density-based parameterization. The systematic quantification of anisotropy via the two-point correlation function suggests a generalizable route to incorporate microstructure into snowpack models. We indicate the inter-relation of the conductivity to other properties and outline a potential impact of Q on dielectric constant, permeability and adsorption rate of diffusing species in the pore space.
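The anisotropy parameter Q is built from the two-point correlation function of the binary ice/pore microstructure. A minimal sketch of a direction-resolved two-point correlation on a toy 3-D volume (the paper's exact integral definition of Q and its micro-computed-tomography inputs are not reproduced here):

```python
import numpy as np

def two_point_correlation(phase, axis):
    """Directional two-point correlation S2(r) of a binary 3-D image,
    computed via FFT autocorrelation (periodic boundaries); lags are
    taken purely along `axis`."""
    f = np.fft.fftn(phase.astype(float))
    corr = np.fft.ifftn(f * np.conj(f)).real / phase.size
    idx = [0, 0, 0]
    idx[axis] = slice(None)
    return corr[tuple(idx)]

# Toy anisotropic "microstructure": a random 2-D slab repeated along z,
# i.e. structure perfectly correlated in the z direction.
rng = np.random.default_rng(0)
slab = rng.random((1, 32, 32)) < 0.3
vol = np.repeat(slab, 32, axis=0)

s2_z = two_point_correlation(vol, axis=0)  # no decay along z
s2_x = two_point_correlation(vol, axis=2)  # decays with lag along x
```

At zero lag, S2 equals the ice volume fraction; the different decay rates of S2 along z and x are what an anisotropy parameter of this kind quantifies.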


2020 ◽  
Author(s):  
◽  
Dylan G Rees

The contact centre industry employs 4% of the entire United Kingdom and United States' working population and generates gigabytes of operational data that require analysis, to provide insight and to improve efficiency. This thesis is the result of a collaboration with QPC Limited, who provide data collection and analysis products for call centres. They provided a large data set featuring almost 5 million calls to be analysed. This thesis utilises novel visualisation techniques to create tools for the exploration of the large, complex call centre data set and to facilitate unique observations into the data. A survey of information visualisation books is presented, providing a thorough background of the field. Following this, a feature-rich application that visualises large call centre data sets using scatterplots that support millions of points is presented. The application utilises both CPU and GPU acceleration for processing and filtering, and is exhibited with millions of call events. This is expanded upon with the use of glyphs to depict agent behaviour in a call centre. A technique is developed to cluster overlapping glyphs into a single parent glyph dependent on zoom level and a customisable distance metric. This hierarchical glyph represents the mean value of all child agent glyphs, removing overlap and reducing visual clutter. A novel technique for visualising individually tailored glyphs using a Graphics Processing Unit is also presented, and demonstrated rendering over 100,000 glyphs at interactive frame rates. An open-source code example is provided for reproducibility. Finally, a novel interaction and layout method is introduced for improving the scalability of chord diagrams to visualise call transfers. An exploration of sketch-based methods for showing multiple links and direction is made, and a sketch-based brushing technique for filtering is proposed.
Feedback from domain experts in the call centre industry is reported for all applications developed.
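The idea of merging overlapping glyphs into a parent glyph that carries the mean of its children can be sketched with a simple greedy pass; this is a hypothetical stand-in for the thesis's zoom-dependent clustering, with `radius` playing the role of the zoom-dependent distance threshold:

```python
import math

def cluster_glyphs(glyphs, radius):
    """Greedily merge glyphs: a glyph within `radius` of an existing
    cluster centroid joins it; the parent glyph sits at the mean
    position and carries the mean value of all child glyphs."""
    clusters = []
    for x, y, value in glyphs:
        for c in clusters:
            cx, cy = c["x"] / c["n"], c["y"] / c["n"]
            if math.hypot(x - cx, y - cy) <= radius:
                c["x"] += x; c["y"] += y; c["v"] += value; c["n"] += 1
                break
        else:
            clusters.append({"x": x, "y": y, "v": value, "n": 1})
    return [(c["x"] / c["n"], c["y"] / c["n"], c["v"] / c["n"])
            for c in clusters]

# two nearby agent glyphs merge into one parent; the distant one survives
parents = cluster_glyphs([(0, 0, 1.0), (1, 0, 3.0), (10, 10, 5.0)],
                         radius=2.0)
```

Increasing `radius` as the user zooms out merges more glyphs, which is how the hierarchy reduces overlap and visual clutter.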


Author(s):  
Valiya Hamza ◽  
Fabio Vieira ◽  
Jorge Luiz dos Santos Gomes ◽  
Suze Guimaraes ◽  
Carlos Alexandrino ◽  
...  

An updated heat-flow database for Brazil is presented providing details of measurements carried out at 406 sites. It has been organized as per the scheme proposed by the International Heat Flow Commission. The data sets refer to results obtained using methods referred to as interval temperature logs (ITL), underground mines (UMM), bottom-hole temperatures (BHT), stable bottom temperatures (SBT) and water wells (AQT). The compilation provides information on depths of temperature logs, gradient determinations, measurements of thermal conductivity and radiogenic heat production. Also included is information on the methods employed and error estimates of the main parameters. A new heat flow map of Brazil has been derived based on the updated data set. A multipronged system has been employed in citing references, where the indexing scheme adopted follows chronological order. It provides information not only on the primary work concerning heat flow determination but also later improvements in measurements of main parameters (temperature gradients, thermal conductivity and radiogenic heat production) as well as techniques employed in data analysis.
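A heat-flow determination of the kind compiled above combines a temperature gradient from a log with a measured thermal conductivity, q = k·dT/dz. A minimal sketch with hypothetical interval-temperature-log (ITL) values:

```python
import numpy as np

# Hypothetical interval temperature log: depth in m, temperature in deg C
depth = np.array([50.0, 100.0, 150.0, 200.0, 250.0])
temp = np.array([24.0, 25.5, 27.0, 28.5, 30.0])

# Geothermal gradient from a least-squares line fit (K per m)
gradient, intercept = np.polyfit(depth, temp, deg=1)

# Heat flow q = k * dT/dz, with an assumed conductivity k in W m^-1 K^-1
k = 2.5
q_mW = k * gradient * 1000.0  # heat flow in mW m^-2
```

Here the synthetic log has a gradient of 30 K/km, giving q = 75 mW m⁻²; a real determination would also propagate the error estimates the database records.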


Geophysics ◽  
2013 ◽  
Vol 78 (5) ◽  
pp. M29-M41 ◽  
Author(s):  
Mahdi H. Almutlaq ◽  
Gary F. Margrave

We evaluated the concept of surface-consistent matching filters for processing time-lapse seismic data, in which matching filters are convolutional filters that minimize the sum-squared error between two signals. Because in the Fourier domain a matching filter is the spectral ratio of the two signals, we extended the well-known surface-consistent hypothesis such that the data term is a trace-by-trace spectral ratio of two data sets instead of only one (i.e., surface-consistent deconvolution). To avoid unstable division of spectra, we computed the spectral ratios in the time domain by first designing trace-sequential, least-squares matching filters, then Fourier transforming them. A subsequent least-squares solution then factored the trace-sequential matching filters into four operators: two surface-consistent (source and receiver) and two subsurface-consistent (offset and midpoint). We evaluated a time-lapse synthetic data set with nonrepeatable acquisition parameters, complex near-surface geology, and a variable subsurface reservoir layer. We computed the four-operator surface-consistent matching filters from two surveys, baseline and monitor, then applied these matching filters to the monitor survey to match it to the baseline survey over a temporal window where changes were not expected. This algorithm significantly reduced the effect of most of the nonrepeatable parameters, such as differences in source strength, receiver coupling, wavelet bandwidth and phase, and static shifts. We computed the normalized root-mean-square difference on raw stacked data (baseline and monitor) and obtained a mean value of 70%. After applying the four-operator surface-consistent matching filters, this value was reduced to about 13.6%, computed from the final stacks.
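The building block here, a least-squares matching filter that convolves one trace into another, can be designed by solving a small linear system. A minimal sketch (toy traces, not the paper's surface-consistent factorization):

```python
import numpy as np

def matching_filter(a, b, n):
    """Least-squares filter f of length n such that conv(a, f) ~ b,
    i.e. f minimizes the sum-squared error ||A f - b||^2 where A is
    the convolution matrix of trace `a`."""
    A = np.zeros((len(a) + n - 1, n))
    for j in range(n):
        A[j:j + len(a), j] = a          # column j = `a` delayed by j
    bb = np.zeros(len(a) + n - 1)       # pad b to full convolution length
    bb[:len(b)] = b
    f, *_ = np.linalg.lstsq(A, bb, rcond=None)
    return f

# toy example: the monitor trace is the baseline delayed by one sample
# and scaled by 0.5, so the exact matching filter is [0, 0.5, 0]
base = np.array([0.0, 1.0, 0.5, -0.3, 0.0])
monitor = 0.5 * np.roll(base, 1)
f = matching_filter(base, monitor, n=3)
matched = np.convolve(base, f)[:len(base)]
```

In the time domain this avoids the unstable spectral division the abstract mentions; Fourier transforming `f` would recover the (regularized) spectral ratio of the two traces.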


Author(s):  
Frank Klawonn ◽  
Frank Rehm

For many applications in knowledge discovery in databases, finding outliers, i.e. rare events, is of importance. Outliers are observations which deviate significantly from the rest of the data, so that it seems they are generated by another process (Hawkins, 1980). Such outlier objects often contain information about an untypical behavior of the system. However, outliers bias the results of many data mining methods, such as the mean value, the standard deviation or the positions of the prototypes in k-means clustering (Estivill-Castro, 2004; Keller, 2000). Therefore, before further analysis or processing of data is carried out with more sophisticated data mining techniques, identifying outliers is a crucial step. Usually, data objects are considered as outliers when they occur in a region of extremely low data density. Many clustering techniques like possibilistic clustering (PCM) (Krishnapuram & Keller, 1993; Krishnapuram & Keller, 1996) or noise clustering (NC) (Dave, 1991; Dave & Krishnapuram, 1997) that deal with noisy data and can identify outliers need good initializations or suffer from a lack of adaptability to different cluster sizes (Rehm, Klawonn & Kruse, 2007). Distance-based approaches (Knorr, 1998; Knorr, Ng & Tucakov, 2000) have a global view on the data set. These algorithms can hardly treat data sets containing regions with different data density (Breunig, Kriegel, Ng & Sander, 2000). In this work we present an approach that combines a fuzzy clustering algorithm (Höppner, Klawonn, Kruse & Runkler, 1999) (or any other prototype-based clustering algorithm) with statistical distribution-based outlier detection.
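The combination described, prototype-based clustering followed by a statistical test on within-cluster distances, can be sketched simply; this is a hypothetical illustration, not the chapter's specific fuzzy-clustering formulation:

```python
import numpy as np

def flag_outliers(points, prototypes, z=2.0):
    """Assign each point to its nearest prototype, then flag points whose
    distance to that prototype exceeds the cluster's mean distance by more
    than `z` standard deviations (a simple distribution-based test)."""
    d = np.linalg.norm(points[:, None, :] - prototypes[None, :, :], axis=2)
    nearest = d.argmin(axis=1)
    dist = d[np.arange(len(points)), nearest]
    flags = np.zeros(len(points), dtype=bool)
    for c in range(len(prototypes)):
        m = nearest == c
        mu, sd = dist[m].mean(), dist[m].std()
        flags[m] = dist[m] > mu + z * sd
    return flags

# two tight clusters plus one point far from both prototypes
points = np.array([[0.1, 0.0], [0.0, 0.1], [-0.1, 0.0], [0.0, -0.1],
                   [0.2, 0.1], [-0.2, -0.1], [0.1, -0.2], [-0.1, 0.2],
                   [4.0, 4.0],                      # the outlier
                   [10.1, 10.0], [9.9, 10.0]])
prototypes = np.array([[0.0, 0.0], [10.0, 10.0]])
flags = flag_outliers(points, prototypes)
```

Because the threshold is estimated per cluster, the test adapts to regions of different data density, which is exactly where purely global distance-based approaches struggle.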


2018 ◽  
Vol 154 (2) ◽  
pp. 149-155
Author(s):  
Michael Archer

1. Yearly records of worker Vespula germanica (Fabricius) taken in suction traps at Silwood Park (28 years) and at Rothamsted Research (39 years) are examined. 2. Using the autocorrelation function (ACF), a significant negative 1-year lag followed by a lesser non-significant positive 2-year lag was found in all, or parts of, each data set, indicating an underlying population dynamic of a 2-year cycle with a damped waveform. 3. The minimum number of years before the 2-year cycle with damped waveform was shown varied between 17 and 26, or was not found in some data sets. 4. Ecological factors delaying or preventing the occurrence of the 2-year cycle are considered.
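The diagnostic used above, a significant negative lag-1 autocorrelation followed by a positive lag-2 autocorrelation, can be computed directly from a yearly count series. A minimal sketch with a hypothetical damped 2-year cycle (not the Silwood Park or Rothamsted data):

```python
import numpy as np

def acf(x, max_lag):
    """Sample autocorrelation function for lags 1..max_lag."""
    x = np.asarray(x, dtype=float)
    x = x - x.mean()
    denom = (x * x).sum()
    return np.array([(x[:-k] * x[k:]).sum() / denom
                     for k in range(1, max_lag + 1)])

# hypothetical yearly wasp counts: alternating high/low years with
# shrinking amplitude (a damped 2-year cycle)
counts = [90, 30, 80, 35, 70, 40, 65, 45, 60, 48]
r = acf(counts, max_lag=2)  # r[0] = lag-1 ACF, r[1] = lag-2 ACF
```

For a 2-year cycle with a damped waveform, the lag-1 value is negative (a high year follows a low one) and the lag-2 value is positive but smaller in magnitude.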


2018 ◽  
Vol 21 (2) ◽  
pp. 117-124 ◽  
Author(s):  
Bakhtyar Sepehri ◽  
Nematollah Omidikia ◽  
Mohsen Kompany-Zareh ◽  
Raouf Ghavami

Aims & Scope: In this research, 8 variable selection approaches were used to investigate the effect of variable selection on the predictive power and stability of CoMFA models. Materials & Methods: Three data sets, including 36 EPAC antagonists, 79 CD38 inhibitors and 57 ATAD2 bromodomain inhibitors, were modelled by CoMFA. First, for all three data sets, CoMFA models with all CoMFA descriptors were created; then, by applying each variable selection method, a new CoMFA model was developed, so for each data set 9 CoMFA models were built. The obtained results show that noisy and uninformative variables affect CoMFA results. Based on the created models, applying 5 variable selection approaches, including FFD, SRD-FFD, IVE-PLS, SRD-UVE-PLS and SPA-jackknife, increases the predictive power and stability of CoMFA models significantly. Result & Conclusion: Among them, SPA-jackknife removes most of the variables while FFD retains most of them. FFD and IVE-PLS are time-consuming processes, while SRD-FFD and SRD-UVE-PLS runs need only a few seconds. Also, applying FFD, SRD-FFD, IVE-PLS and SRD-UVE-PLS preserves CoMFA contour map information for both fields.


Author(s):  
Kyungkoo Jun

Background & Objective: This paper proposes a Fourier transform inspired method to classify human activities from time series sensor data. Methods: Our method begins by decomposing the 1D input signal into 2D patterns, which is motivated by the Fourier conversion. The decomposition is helped by Long Short-Term Memory (LSTM), which captures the temporal dependency from the signal and then produces encoded sequences. The sequences, once arranged into a 2D array, can represent the fingerprints of the signals. The benefit of such transformation is that we can exploit the recent advances of deep learning models for image classification, such as the Convolutional Neural Network (CNN). Results: The proposed model, as a result, is the combination of LSTM and CNN. We evaluate the model over two data sets. For the first data set, which is more standardized than the other, our model outperforms, or at least equals, previous works. In the case of the second data set, we devise schemes to generate training and testing data by changing the parameters of the window size, the sliding size, and the labeling scheme. Conclusion: The evaluation results show that the accuracy is over 95% for some cases. We also analyze the effect of the parameters on the performance.
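The core transformation, turning a 1D sensor signal into a 2D "fingerprint" array that an image classifier can consume, can be sketched without the LSTM encoder. Here a plain sliding-window reshape stands in for the paper's LSTM-produced encoded sequences:

```python
import numpy as np

def to_fingerprint(signal, window, stride):
    """Decompose a 1-D sensor signal into a 2-D pattern by stacking
    sliding windows. (The paper uses an LSTM encoder for this step;
    a raw sliding window is a minimal stand-in.)"""
    rows = [signal[i:i + window]
            for i in range(0, len(signal) - window + 1, stride)]
    return np.stack(rows)

# hypothetical sensor trace: a few cycles of a sine wave
sig = np.sin(np.linspace(0, 8 * np.pi, 128))
fp = to_fingerprint(sig, window=32, stride=16)  # 2-D array, CNN-ready
```

The resulting 2D array can be fed to a CNN exactly as an image would be, which is the benefit the abstract describes; the window and stride correspond to the window-size and sliding-size parameters analyzed in the paper.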

