The mechanism of the initiation and propagation of detonation in solid explosives

Although half a century has elapsed since the publication of the classical treatise of Berthelot upon explosives, the detailed mechanism of the initiation and propagation of detonation in liquid and solid explosives is still obscure. Detonation is a phenomenon exhibiting a number of specific characteristics which differentiate it quite definitely from the explosive combustions of such substances as gunpowder and cordite. It is well known that the latter are governed by laws relating the rate of reaction to the surface area, the temperature and pressure of the surrounding gases, etc., and that heat is the chief medium of initiation and propagation, whereas in the case of detonation, the reaction wave-front travels directly through the explosive medium in the same sense as does a sound wave, and the velocity of propagation is a very definite characteristic of the phenomenon. This stability of the detonation velocity is well demonstrated for solid explosives by the photographs in a recent paper by E. Jones; the speeds are usually much greater than any exhibited by explosive combustions, and range from 1500 to 10,000 metres per second. Finally, the initiation and propagation of detonation appear to be associated much more intimately with mechanical shock than with flame. The weight of evidence strongly indicates that the difference between detonation and explosive combustion is fundamental and not merely of degree, and the term “high explosive” is reserved for substances capable of the former property. The theoretical treatment of detonation as a shock wave traversing the medium and maintained by the accompanying chemical reactions has been developed by several investigators. These writers have built up a quantitative theory from thermodynamical reasoning and have been able to calculate velocities of propagation, which in some cases are correct, but in practice it has been found that the thermodynamical conditions, while necessary, are not sufficient.
Thus, a great number of compositions possessing all the thermodynamical qualifications of a high explosive cannot be made to detonate; others permit detonation to be initiated successfully but without propagation, and the reaction degenerates into a mere deflagration, or even dies out completely. It is indeed very difficult to judge whether a particular composition is a true detonating explosive without the opportunity to test the sample in reasonable quantity. The violent decompositions of small samples or single crystals furnish no a priori evidence of detonation, and innumerable examples may be quoted of such material in bulk being unable to propagate the local and violent initial activity.

2007 ◽  
Vol 32 (3) ◽  
pp. 55-65
Author(s):  
Henny Coolen

Two ideal types of data can be distinguished in housing research: structured and less-structured data. Questionnaires and official statistics are examples of structured data, while less-structured data arise for instance from open interviews and documents. Structured data are sometimes labelled quantitative, while less-structured data are called qualitative. In this paper structured and less-structured data are considered from the perspective of measurement and analysis. Structured data arise when the researcher has an a priori category system or measurement scale available for collecting the data. When such an a priori system or scale is not available the data are called less-structured. It will be argued that these less-structured observations can only be used for any further analysis when they contain some minimum level of structure called a category system, which is equivalent to a nominal measurement scale. Once this becomes evident, one realizes that through the necessary process of categorization less-structured data can be analyzed in much the same way as structured data, and that the difference between the two types of data is one of degree and not of kind. In the second part of the paper these ideas are illustrated with examples from my own research on the meaning of preferences for dwelling features in which the concept of a meaning structure plays a central part. Until now these meaning structures have been determined by means of semi-structured interviews which, even with small samples, result in large amounts of less-structured data.
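The step from less-structured text to a nominal category system can be sketched as follows. The answers, keywords, and category names below are hypothetical illustrations, not material from the study:

```python
from collections import Counter

# Hypothetical open-interview fragments about why a dwelling feature
# (a garden) is preferred; the category system below is illustrative.
answers = [
    "somewhere for the children to play",
    "I like growing vegetables",
    "a quiet place to sit outside",
    "the kids need space to run around",
    "peace and quiet after work",
]

# A category system built by the researcher from the answers themselves,
# equivalent to a nominal measurement scale.
category_keywords = {
    "children": ("children", "kids"),
    "gardening": ("growing", "vegetables"),
    "relaxation": ("quiet", "sit", "peace"),
}

def categorize(text):
    """Assign a free-text answer to the first matching nominal category."""
    for category, keywords in category_keywords.items():
        if any(k in text.lower() for k in keywords):
            return category
    return "other"

codes = [categorize(a) for a in answers]
# Once coded, the data can be analysed like any structured nominal variable.
frequencies = Counter(codes)
print(frequencies)
```

After categorization the frequencies can be tabulated and cross-classified exactly as one would with questionnaire data, which is the sense in which the difference between the two data types is one of degree.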


Crisis ◽  
2013 ◽  
Vol 34 (6) ◽  
pp. 434-437 ◽  
Author(s):  
Donald W. MacKenzie

Background: Suicide clusters at Cornell University and the Massachusetts Institute of Technology (MIT) prompted popular and expert speculation of suicide contagion. However, some clustering is to be expected in any random process. Aim: This work tested whether suicide clusters at these two universities differed significantly from those expected under a homogeneous Poisson process, in which suicides occur randomly and independently of one another. Method: Suicide dates were collected for MIT and Cornell for 1990–2012. The Anderson-Darling statistic was used to test the goodness-of-fit of the intervals between suicides to the distribution expected under the Poisson process. Results: Suicides at MIT were consistent with the homogeneous Poisson process, while those at Cornell showed clustering inconsistent with such a process (p = .05). Conclusions: The Anderson-Darling test provides a statistically powerful means to identify suicide clustering in small samples. Practitioners can use this method to test for clustering in relevant communities. The difference in clustering behavior between the two institutions suggests that more institutions should be studied to determine the prevalence of suicide clustering in universities and its causes.
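A sketch of the method using scipy: under a homogeneous Poisson process the waiting times between events are exponentially distributed, so the Anderson-Darling test can be applied to the inter-event intervals. The event dates below are simulated, since the study's data are not reproduced here:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Hypothetical event dates (days since the start of an observation
# window of roughly 23 years); real suicide dates are not reproduced here.
event_days = np.sort(rng.uniform(0, 8400, size=12))

intervals = np.diff(event_days)  # waiting times between events

# Under a homogeneous Poisson process the intervals are exponential,
# so test their goodness-of-fit with the Anderson-Darling statistic.
result = stats.anderson(intervals, dist='expon')
print("A-D statistic:", result.statistic)
print("critical values:", result.critical_values)
print("significance levels (%):", result.significance_level)

# Clustering is suggested when the statistic exceeds the critical
# value at the chosen significance level (index 2 is the 5% level).
clustered = result.statistic > result.critical_values[2]
print("clustering at 5% level:", clustered)
```

Rejecting the exponential fit of the intervals is evidence against the homogeneous Poisson model, i.e. evidence of clustering.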


2020 ◽  
pp. 65-72
Author(s):  
V. V. Savchenko ◽  
A. V. Savchenko

This paper addresses the distortions present in a speech signal transmitted over a communication channel to a biometric system during voice-based remote identification. We propose to correct the frequency spectrum of the received signal in advance, based on the pre-distortion principle. Taking a priori uncertainty into account, a new information indicator of speech signal distortion and a method for measuring it from small samples of observations are proposed. An example of a fast practical implementation of the method based on a parametric spectral analysis algorithm is considered. Experimental results of our approach are provided for three different versions of the communication channel. It is shown that the proposed method makes it possible to bring the initially distorted speech signal into compliance with the registered voice template according to an acceptable information discrimination criterion. We also demonstrate that our approach may be used in existing biometric systems and speaker identification technologies.
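A minimal numerical sketch of the spectrum-correction idea, using a nonparametric FFT spectrum and a synthetic FIR channel as stand-ins for the paper's parametric spectral analysis and real communication channel:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical "registered template" signal and a channel-distorted copy;
# the short FIR filter below stands in for an unknown communication channel.
template = rng.standard_normal(8000)
channel = np.array([0.6, 0.3, 0.1])
received = np.convolve(template, channel, mode="same")

# Estimate the channel response per frequency bin from the template and
# the received signal, then invert it (the correction step), with a small
# regularisation term to avoid amplifying noise at weak frequencies.
T = np.fft.rfft(template)
R = np.fft.rfft(received)
eps = 1e-3
H = R * np.conj(T) / (np.abs(T) ** 2 + eps)   # estimated channel response
C = R * np.conj(H) / (np.abs(H) ** 2 + eps)   # corrected spectrum
corrected = np.fft.irfft(C, n=len(received))

# Spectral mismatch against the template, before and after correction.
err_before = np.mean((np.abs(R) - np.abs(T)) ** 2)
err_after = np.mean((np.abs(C) - np.abs(T)) ** 2)
print(err_before, err_after)
```

The corrected spectrum is brought into much closer agreement with the template's, which is the "compliance" the abstract refers to, here measured by a simple quadratic spectral distance rather than the paper's information criterion.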


2003 ◽  
Vol 10 (3) ◽  
pp. 401-410
Author(s):  
M. S. Agranovich ◽  
B. A. Amosov

Abstract We consider a general elliptic formally self-adjoint problem in a bounded domain with homogeneous boundary conditions under the assumption that the boundary and coefficients are infinitely smooth. The operator in 𝐿2(Ω) corresponding to this problem has an orthonormal basis {𝑢𝑙} of eigenfunctions, which are infinitely smooth in the closure of Ω. However, the system {𝑢𝑙} is not a basis in Sobolev spaces 𝐻𝑡 (Ω) of high order. We note and discuss the following possibility: for an arbitrarily large 𝑡, for each function 𝑢 ∈ 𝐻𝑡 (Ω) one can explicitly construct a function 𝑢0 ∈ 𝐻𝑡 (Ω) such that the Fourier series of the difference 𝑢 – 𝑢0 in the functions 𝑢𝑙 converges to this difference in 𝐻𝑡 (Ω). Moreover, the function 𝑢(𝑥) is viewed as a solution of the corresponding nonhomogeneous elliptic problem and is not assumed to be known a priori; only the right-hand sides of the elliptic equation and the boundary conditions for 𝑢 are assumed to be given. These data are also sufficient for the computation of the Fourier coefficients of 𝑢 – 𝑢0. The function 𝑢0 is obtained by applying some linear operator to these right-hand sides.


The present paper describes an investigation of diffusion in the solid state. Previous experimental work has been confined to the case in which the free energy of a mixture is a minimum for the single-phase state, and diffusion decreases local differences of concentration. This may be called ‘diffusion downhill’. However, it is possible for the free energy to be a minimum for the two-phase state; diffusion may then increase differences of concentration; and so may be called ‘diffusion uphill’. Becker (1937) has proposed a simple theoretical treatment of these two types of diffusion in a binary alloy. The present paper describes an experimental test of this theory, using the unusual properties of the alloy Cu₄FeNi₃. This alloy is single-phase above 800°C and two-phase at lower temperatures, both the phases being face-centred cubic; the essential difference between the two phases is their content of copper. On dissociating from one phase into two the alloy develops a series of intermediate structures showing striking X-ray patterns which are very sensitive to changes of structure. It was found possible to utilize these results for a quantitative study of diffusion ‘uphill’ and ‘downhill’ in the alloy. The experimental results, which can be expressed very simply, are in fair agreement with conclusions drawn from Becker’s theory. It was found that Fick’s equation, ∂c/∂t = D ∂²c/∂x², can, within the limits of error, be applied in all cases, with the modification that c denotes the difference of the measured copper concentration from its equilibrium value. The theory postulates that D is the product of two factors, of which one is D₀, the coefficient of diffusion that would be measured if the alloy were an ideal solid solution. The theory is able to calculate D/D₀, if only in first approximation, and the experiments confirm this calculation. It was found that in most cases the speed of diffusion—‘uphill’ or ‘downhill’—has the order of magnitude of D₀.
* Now with British Electrical Research Association.
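The modified Fick equation can be illustrated with an explicit finite-difference integration; the grid spacing, diffusivity, and initial deviation below are illustrative values, not fitted to the Cu₄FeNi₃ data:

```python
import numpy as np

# Explicit finite-difference integration of Fick's equation
# dc/dt = D d2c/dx2, with c the deviation of the copper concentration
# from its equilibrium value (illustrative parameters only).
nx, dx = 100, 1.0e-6          # 100 grid points, 1 micron spacing
D = 1.0e-13                   # m^2/s, downhill diffusion (D > 0)
dt = 0.4 * dx**2 / D          # satisfies the stability bound dt <= dx^2/(2D)

x = np.arange(nx) * dx
c = np.cos(2 * np.pi * x / (nx * dx))   # sinusoidal deviation from equilibrium

for _ in range(200):
    lap = (np.roll(c, 1) - 2 * c + np.roll(c, -1)) / dx**2  # periodic d2c/dx2
    c = c + dt * D * lap

# Downhill diffusion (D > 0) damps the deviation; reversing the sign of D
# would make the same mode grow, i.e. 'uphill' diffusion inside the
# two-phase region (where an explicit scheme is unstable over long times).
amplitude = np.max(np.abs(c))
print(amplitude)
```

With D > 0 the concentration deviation decays toward equilibrium; the uphill regime corresponds to an effective D < 0, under which the same sinusoidal mode is amplified instead of damped.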


Geophysics ◽  
2007 ◽  
Vol 72 (1) ◽  
pp. F25-F34 ◽  
Author(s):  
Benoit Tournerie ◽  
Michel Chouteau ◽  
Denis Marcotte

We present and test a new method to correct for the static shift affecting magnetotelluric (MT) apparent resistivity sounding curves. We use geostatistical analysis of apparent resistivity and phase data for selected periods. For each period, we first estimate and model the experimental variograms and cross variogram between phase and apparent resistivity. We then use the geostatistical model to estimate, by cokriging, the corrected apparent resistivities using the measured phases and apparent resistivities. The static shift factor is obtained as the difference between the logarithm of the corrected and measured apparent resistivities. We retain as final static shift estimates the ones for the period displaying the best correlation with the estimates at all periods. We present a 3D synthetic case study showing that the static shift is retrieved quite precisely when the static shift factors are uniformly distributed around zero. If the static shift distribution has a nonzero mean, we obtained best results when an apparent resistivity data subset can be identified a priori as unaffected by static shift and cokriging is done using only this subset. The method has been successfully tested on the synthetic COPROD-2S2 2D MT data set and on a 3D-survey data set from Las Cañadas Caldera (Tenerife, Canary Islands) severely affected by static shift.
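The first step of the procedure, estimating an experimental variogram of the log apparent resistivity for one period, can be sketched as follows. The site locations and resistivities are simulated stand-ins, not survey data:

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical MT site locations and log10 apparent resistivities for one
# period; a smooth regional trend plus noise stands in for real data.
n = 60
xy = rng.uniform(0, 10_000, size=(n, 2))             # site coordinates (m)
trend = 0.0002 * xy[:, 0]                             # regional trend
log_rho = 2.0 + trend + 0.1 * rng.standard_normal(n)  # log10(ohm-m)

def experimental_variogram(xy, z, bins):
    """Classical semivariogram estimator: gamma(h) is the mean of
    (z_i - z_j)^2 / 2 over pairs whose separation falls in each lag bin."""
    d = np.linalg.norm(xy[:, None, :] - xy[None, :, :], axis=-1)
    sq = 0.5 * (z[:, None] - z[None, :]) ** 2
    iu = np.triu_indices(len(z), k=1)                 # count each pair once
    d, sq = d[iu], sq[iu]
    return np.array([sq[(d >= lo) & (d < hi)].mean() for lo, hi in bins])

bins = [(0, 2000), (2000, 4000), (4000, 6000), (6000, 8000)]
gamma = experimental_variogram(xy, log_rho, bins)
print(gamma)
```

Fitting a model to these experimental values (and to the cross variogram with phase) is what supplies the covariances needed by the cokriging step; a full cokriging system is beyond this sketch.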


Open Theology ◽  
2019 ◽  
Vol 5 (1) ◽  
pp. 430-450
Author(s):  
Kristóf Oltvai

Abstract Karl Barth’s and Jean-Luc Marion’s theories of revelation, though prominent and popular, are often criticized by both theologians and philosophers for effacing the human subject’s epistemic integrity. I argue here that, in fact, both Barth and Marion appeal to revelation in an attempt to respond to a tendency within philosophy to coerce thought. Philosophy, when it claims to be able to access a universal, absolute truth within history, degenerates into ideology. By making conceptually possible some ‘evental’ phenomena that always evade a priori epistemic conditions, Barth’s and Marion’s theories of revelation relativize all philosophical knowledge, rendering any ideological claim to absolute truth impossible. The difference between their two theories, then, lies in how they understand the relationship between philosophy and theology. For Barth, philosophy’s attempt to make itself absolute is a product of sinful human vanity; its corrective is thus an authentic revealed theology, which Barth articulates in Christian, dogmatic terms. Marion, on the other hand, equipped with Heidegger’s critique of ontotheology, highlights one specific kind of philosophizing—metaphysics—as generative of ideology. To counter metaphysics, Marion draws heavily on Barth’s account of revelation but secularizes it, reinterpreting the ‘event’ as the saturated phenomenon. Revelation’s unpredictability is thus preserved within Marion’s philosophy, but is no longer restricted to the appearing of God. Both understandings of revelation achieve the same epistemological result, however. Reality can never be rendered transparent to thought; within history, all truth is provisional. A concept of revelation drawn originally from Christian theology thus, counterintuitively, is what secures philosophy’s right to challenge and critique the pre-given, a hermeneutic freedom I suggest is the meaning of sola scriptura.


Author(s):  
M. Bukenov ◽  
Ye. Mukhametov

This paper considers the numerical implementation of two-dimensional thermoviscoelastic waves. The elastic collision of an aluminum cylinder with a two-layer plate of aluminum and iron is considered. In [1], the difference schemes and the algorithm for their implementation are given. The most complete reviews of the main methods for calculating transients in deformable solids can be found in [2, 3, 4], which also indicate the need for and importance of generalized studies comparing the different methods and identifying the areas of their most rational application. In the analysis and physical interpretation of the numerical results it is also useful to draw on a priori information about the qualitative behavior of the solution and on whatever is known about the physics of the phenomena under study. The evolution of the contact interaction during the cylinder–plate collision and the corresponding stress profile are presented.


1986 ◽  
Vol 60 (3) ◽  
pp. 743-750 ◽  
Author(s):  
K. J. Sullivan ◽  
J. P. Mortola

Static (Cstat) and dynamic (Cdyn) lung compliance and lung stress relaxation were examined in isolated lungs of newborn kittens and adult cats. Cstat was determined by increasing volume in increments and recording the corresponding change in pressure; Cdyn was calculated as the ratio of the changes in volume to transpulmonary pressure between points of zero flow at ventilation frequencies between 10 and 110 cycles/min. Lung volume history, end-inflation volume, and end-deflation pressure were maintained constant. At the lowest frequency of ventilation, Cdyn was less than Cstat, the difference being greater in newborns. Between 20 and 100 cycles/min, Cdyn of the newborn lung remained constant, whereas Cdyn of the adult lung decreased after 60 cycles/min. At all frequencies, the rate of stress relaxation, measured as the decay in transpulmonary pressure during maintained inflation, was greater in newborns than in adults. The frequency response of Cdyn in kittens, together with the relatively greater rate of stress relaxation, suggests that viscoelasticity contributes more to the dynamic stiffening of the lung in newborns than in adults. A theoretical treatment of the data based on a linear model of viscoelasticity supports this conclusion.
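The two compliance definitions can be written out numerically; the pressures and volumes below are invented for illustration, not the kitten and cat measurements from the study:

```python
import numpy as np

# Hypothetical static inflation data: volume increments and the plateau
# transpulmonary pressure recorded after each increment.
vol_steps = np.array([0.0, 5.0, 10.0, 15.0, 20.0])   # ml
p_static = np.array([0.0, 2.4, 5.0, 7.8, 11.0])      # cmH2O

# Static compliance: slope of the volume-pressure relation over the
# inflation (here a least-squares slope over all increments).
cstat = np.polyfit(p_static, vol_steps, 1)[0]        # ml/cmH2O

# Dynamic compliance during ventilation: tidal volume divided by the
# transpulmonary pressure change between the two points of zero flow
# (end-expiration and end-inspiration).
tidal_volume = 8.0        # ml
p_zero_flow = (3.0, 8.2)  # cmH2O at end-expiration, end-inspiration
cdyn = tidal_volume / (p_zero_flow[1] - p_zero_flow[0])

# Stress relaxation raises the pressure swing between zero-flow points,
# so Cdyn falls below Cstat, the more so the faster the lung relaxes.
print(cstat, cdyn)
```

With these illustrative numbers Cdyn comes out below Cstat, the direction of the difference the study reports at the lowest ventilation frequency.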


1832 ◽  
Vol 122 ◽  
pp. 595-599 ◽  

Mr. Stratford has favoured me with a comparison of the predicted times of high water deduced from Mr. Bulpit’s Tables, White’s Ephemeris, and the British Almanac, with the observations at the London Docks. These observations are, unfortunately, so imperfect, that the differences must not be entirely attributed to the errors of the Tables, which, however, seem susceptible of much improvement. I subjoin this comparison; and in order to convey an idea of the confidence which may be placed in the observations, I also subjoin a comparison, by Mr. Deacon, of the observations at the London and St. Katherine’s Docks, which are made according to the same plan, and of which the merit is the same. The differences in the determinations at these two places, which are only about a quarter of a mile distant from each other, may serve to indicate the reliance which can be placed in either. In my paper on the Tides at Brest, I remarked that the retard or the constant λ — λ, is considerably greater as deduced from observation here than at Brest. That this must be the case is also evident from the following very simple à priori considerations.—The highest high water takes place when the moon passes the meridian at a time equal to the retard. The tide is propagated from Brest to London, round Scotland, in about twenty-two hours, that is, supposing the tide which takes place in our river to be principally due to that branch of the tide which descends along the eastern coast of Great Britain, which I believe to be the case. The highest tide therefore is propagated from Brest to London in about twenty-two hours, and the difference in the retard or in the constant λ — λ, will be nearly the moon’s motion in twenty-two hours, or about 11°; I made the difference in the retard from observation 10°. The tide takes about fifteen hours to reach Brest from the Cape of Good Hope; no doubt the retard there is considerably less.
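The "about 11°" figure is consistent with the moon's mean motion relative to the sun, which is the rate governing the time of transit; a modern check, assuming the synodic month of 29.53 days:

```latex
\frac{360^\circ}{29.53\ \text{days}} \approx 12.19^\circ/\text{day},
\qquad
12.19^\circ/\text{day} \times \frac{22\ \text{h}}{24\ \text{h}} \approx 11.2^\circ .
```

The sidereal rate (about 13.18°/day) would instead give close to 12°, so the 11° quoted in the text matches the sun-relative reading of "the moon's motion".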

