Un-jamming due to energetic instability: statics to dynamics

2021
Vol 23 (4)
Author(s):
Stefan Luding
Yimin Jiang
Mario Liu

Jamming/un-jamming, the transition between solid- and fluid-like behavior in granular matter, is a ubiquitous phenomenon in need of a sound understanding. As argued here, in addition to the usual un-jamming by vanishing pressure due to a decrease of density, there is also yield (plastic rearrangements and un-jamming) if, e.g., at given pressure, the shear stress becomes too large. Similar to the van der Waals transition between vapor and water, or the critical current in superconductors, we believe that one mechanism causing yield is the loss of the energy's convexity (causing irreversible re-arrangements of the micro-structure, either locally or globally). We focus on this mechanism in the context of granular solid hydrodynamics (GSH), generalized for very soft materials, i.e., large elastic deformations, employing it in an over-simplified (bottom-up) fashion by setting as many parameters as possible constant. We also complement GSH with various insights and observations from particle simulations, calibrating some of the theoretical parameters; both continuum and particle points of view are reviewed in the context of the research developments of the last few years. Any other energy-based elastic-plastic theory that is properly calibrated (top-down) by experimental or numerical data would describe granular solids. But only if it covered the granular gas, fluid, and solid states simultaneously (as GSH does) could it follow the system's transitions and evolution through all states into un-jammed, possibly dynamic/collisional states, and back to elastically stable ones. We show how the un-jamming dynamics starts off, unfolds, develops, and ends. We follow the system through various deformation modes: transitions, yielding, un-jamming and jamming, both analytically and numerically, and bring together the material-point continuum model with particle simulations, quantitatively.
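The convexity-loss mechanism invoked above can be stated schematically. The energy below is a generic stand-in; the calibrated GSH energy and its coefficients are not reproduced here:

```latex
% Elastic energy w(\Delta, u_s) of volumetric strain \Delta > 0 and
% deviatoric strain u_s; elastic stability requires a positive
% definite Hessian:
\[
  \frac{\partial^{2} w}{\partial \Delta^{2}} > 0,
  \qquad
  \frac{\partial^{2} w}{\partial \Delta^{2}}
  \frac{\partial^{2} w}{\partial u_s^{2}}
  - \left( \frac{\partial^{2} w}{\partial \Delta \, \partial u_s} \right)^{2} > 0 .
\]
% For GSH-type energies of the form w = B \sqrt{\Delta} ( a \Delta^{2} + u_s^{2} ),
% with a a material constant, the determinant condition bounds the strain
% ratio u_s / \Delta, and hence shear stress over pressure: a Coulomb-like
% yield condition.
```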

2021
Vol 249
pp. 10001
Author(s):  
Stefan Luding

The question of how soft granular matter, or dense amorphous systems, re-arrange their microstructure under isotropic compression and de-compression, at different strain rates, is answered by particle simulations of frictionless model systems in a periodic three-dimensional cuboid. Starting compression below jamming, the systems experience the well-known jamming transition, with characteristic evolutions of the state variables: elastic energy, elastic stress, coordination number, and elastic moduli. For large strain rates, kinetic energy comes into play and the evolution is more dynamic. In contrast, at extremely slow deformation, the system relaxes to hyper-elastic states, with well-defined elastic moduli, in static equilibrium between irreversible (plastic) re-arrangement events, discrete in time. Small, finite strains explore those reversible (elastic) states, before larger strains push the system into new states by irreversible, sudden re-arrangements of the micro-structure.
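The coordination number tracked in such simulations can be read off a sphere packing by counting overlapping pairs. A minimal sketch (invented toy positions; periodic images, used in the actual simulations, are omitted for brevity):

```python
import itertools
import math

def coordination_number(positions, radii):
    """Mean coordination number Z = 2 * (number of contacts) / N.

    A contact is counted when two frictionless spheres overlap,
    i.e. their centre distance is below the sum of their radii.
    Periodic images are omitted here for brevity.
    """
    contacts = 0
    for (pi, ri), (pj, rj) in itertools.combinations(zip(positions, radii), 2):
        if math.dist(pi, pj) < ri + rj:
            contacts += 1
    return 2.0 * contacts / len(positions)

# Tiny illustrative packing: three unit-radius spheres in a row,
# with neighbouring pairs just overlapping.
pos = [(0.0, 0.0, 0.0), (1.9, 0.0, 0.0), (3.8, 0.0, 0.0)]
rad = [1.0, 1.0, 1.0]
print(coordination_number(pos, rad))  # 2 contacts, N = 3, so Z = 4/3
```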


HortScience
1997
Vol 32 (4)
pp. 605C-605
Author(s):  
Susan Wilson Hamilton

Phenomenological interviewing is a research approach used extensively and successfully in the social sciences, and it has implications for those working with people-plant interactions. Although many research methods are available to horticulturists for obtaining information about a target audience, most of those used (e.g., surveys and questionnaires) are quantitative, providing numerical data on statistically generalizable patterns. Phenomenological interviewing allows investigators, through open-ended interview questions, to obtain more in-depth data than traditional quantitative techniques. Transcribed interview tapes become the data from which analysis and interpretation follow. "Coding" the data (searching for words, phrases, patterns of behavior, subjects' ways of thinking, and events that are repeated or stand out) classifies and categorizes the data, aiding its interpretation and write-up. The write-up must develop the investigator's interpretation of what was found by carefully integrating themes that support a thesis and create or augment theoretical explanations. This research method allows investigators to understand and capture the points of view of participants without predetermining those points of view through prior selection of questionnaire or survey categories.
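The coding step described above amounts to tallying which themes repeat across transcript segments. A minimal sketch with invented interview labels and theme codes:

```python
from collections import Counter

# Hypothetical coded segments from two transcribed interviews: each
# segment has been assigned one thematic code by the researcher.
coded_segments = [
    ("interview_1", "sense of accomplishment"),
    ("interview_1", "stress relief"),
    ("interview_1", "sense of accomplishment"),
    ("interview_2", "stress relief"),
    ("interview_2", "connection to nature"),
]

# Tallying the codes shows which themes repeat and stand out,
# a starting point for categorising the data.
theme_counts = Counter(code for _, code in coded_segments)
for theme, n in theme_counts.most_common():
    print(theme, n)
```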


Materials
2019
Vol 12 (22)
pp. 3767
Author(s):  
Siyu Ge
Wenying Zhang
Jian Sang
Shuai Yuan
Glenn V. Lo
...  

Material Point Method (MPM) mesoscale simulation was used to study the constitutive relation of a polymer-bonded explosive (PBX) consisting of 1,3,5-triamino-2,4,6-trinitrobenzene (TATB) and a fluorine polymer binder, F2314. The stress-strain variations of the PBX were calculated for different temperatures and porosities, and the results were found to be consistent with experimental observations. The stress-strain relations at different temperatures were used to develop the constitutive equation of the PBX by numerical data fitting. Stress-strain data for different porosities were used to establish the constitutive equation by fitting the simulation data to an improved Hashin-Shtrikman model. The equation can be used to predict the shear modulus and bulk modulus of the PBX at different sample densities. The constitutive equations developed for TATB/F2314 PBX by MPM mesoscale simulation are important for numerical simulations of the PBX at the macroscale. The method presented in this study provides an alternative approach for studying the constitutive relations of PBXs.
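The improved Hashin-Shtrikman model fitted in the paper is not reproduced here; as a baseline, the classical Hashin-Shtrikman upper bounds for a porous solid (pore moduli taken as zero) can be sketched as follows, with purely illustrative numbers:

```python
def hs_upper_porous(K, G, phi):
    """Classical Hashin-Shtrikman upper bounds on the bulk and shear
    moduli of a solid (moduli K, G) containing volume fraction phi of
    empty pores.  This is the textbook bound, not the improved model
    fitted in the paper."""
    K_up = K + phi / (-1.0 / K + 3.0 * (1.0 - phi) / (3.0 * K + 4.0 * G))
    G_up = G + phi / (-1.0 / G
                      + 6.0 * (1.0 - phi) * (K + 2.0 * G)
                      / (5.0 * G * (3.0 * K + 4.0 * G)))
    return K_up, G_up

# Illustrative moduli (in GPa) for a sample with 10% porosity; both
# bounds fall below the pore-free moduli.
K_up, G_up = hs_upper_porous(K=25.0, G=12.0, phi=0.10)
print(K_up, G_up)
```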


The most important index to the dynamical constitution of the Sun lies in its dark spots, which have been known, ever since their discovery by Galileo, to recur periodically. Numerical data for the numbers of spots simultaneously present, and later and more precise data for the extent of spotted area, are in existence for about two centuries; and it is natural that efforts should be made, from all points of view, to extract the knowledge which they contain. The discussion of the fundamental question, whether there is permanent unbroken periodicity, due either to planetary influence or (as seems much more probable) to a period of dynamical oscillation belonging to the Sun itself, was taken up statistically by Dr. Simon Newcomb in a paper “On the Period of the Solar Spots.”* His criterion was to examine whether the more precise phases were equidistant throughout the record, or on the other hand their deviations increased continually (as √n) according to the law of errors. The phases chosen for scrutiny were four, those of maxima and minima and the two more definite intermediate times of mean spottedness. An analysis by the method of least squares led him to an unbroken period, of 11·13 ± 0·02 years, and thus of great definiteness. His conclusion is that “underlying the periodic variations of spot activity there is a uniform cycle unchanging from time to time and determining the general mean of the activity.” But to get this very remarkable degree of precision he had to reject the records belonging to the two decades around the year 1780, which showed violent irregularity in the phases. “I was at first disposed to think that these perturbations of the period might be real, but on more mature consideration I think they are to be regarded as errors arising from imperfection of the record. The derivation of any exact epoch requires a fairly continuous series of derivations made on a uniform plan.
If we compare and combine the results of observations made in an irregular or sporadic way, it may well be that the actual changes are masked by the apparent changes due only to these imperfections.” And, again, “it would seem from what precedes that a revision of the conclusions to be drawn from the observations of sunspots during the interval of 1775-1790 is very desirable.”
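Newcomb's least-squares criterion amounts to fitting the phase epochs as a linear function of cycle number and inspecting the residuals. A minimal sketch on synthetic epochs with a built-in 11.13-year period (these are not the historical observations):

```python
def fit_period(cycle_numbers, epochs):
    """Least-squares fit of epochs t_n ~ t0 + P * n, in the spirit of
    Newcomb's test of an unbroken sunspot period.  Returns (t0, P)."""
    n = len(cycle_numbers)
    mean_x = sum(cycle_numbers) / n
    mean_t = sum(epochs) / n
    sxx = sum((x - mean_x) ** 2 for x in cycle_numbers)
    sxt = sum((x - mean_x) * (t - mean_t)
              for x, t in zip(cycle_numbers, epochs))
    period = sxt / sxx
    t0 = mean_t - period * mean_x
    return t0, period

# Synthetic maxima: a strict 11.13-year cycle from 1750.0 with small
# invented phase perturbations (NOT the historical epochs).
cycles = list(range(10))
epochs = [1750.0 + 11.13 * n + d
          for n, d in zip(cycles, [0.2, -0.3, 0.1, 0.0, -0.2,
                                   0.3, -0.1, 0.2, -0.2, 0.0])]
t0, period = fit_period(cycles, epochs)
print(round(period, 2))  # recovers a period close to 11.13 years
```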


Author(s):  
T. Yanaka ◽  
K. Shirota

It is significant to note the field aberrations (chromatic field aberration, coma, astigmatism, and blurring due to curvature of field, defined by Glaser's aberration theory relative to the Blenden Freien System) of the objective lens in connection with the following points of view: field aberrations increase as the resolution of the axial point improves by increasing the lens excitation (k2) and decreasing the half-width value (d) of the axial lens field distribution; and when one or all of the imaging lenses have axial imperfections, such as beam deflection in image space by the asymmetrical magnetic leakage flux, the apparent axial point has field aberrations which prevent the theoretical resolution limit from being obtained.


Author(s):  
L.R. Wallenberg ◽  
J.-O. Bovin ◽  
G. Schmid

Metallic clusters are interesting from various points of view, e.g., as a means of spreading expensive catalysts on a support, or for following heterogeneous and homogeneous catalytic events. It is also possible to study nucleation and growth mechanisms for crystals with the cluster as a known starting point. Gold clusters containing 55 atoms were manufactured by reducing (C6H5)3PAuCl with B2H6 in benzene. The chemical composition was found to be Au9.2[P(C6H5)3]2Cl. Molecular-weight determination by means of an ultracentrifuge gave the formula Au55[P(C6H5)3]12Cl6. A model was proposed from Mössbauer spectra by Schmid et al., with cubic close-packing of the 55 gold atoms in a cuboctahedron, as shown in Fig 1. The cluster is almost completely isolated from the surroundings by the twelve triphenylphosphane groups situated at each corner and the chlorine atoms at the centres of the 3x3 square surfaces. This gives four groups of gold atoms, depending on their different surroundings.
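The 55-atom count is a closed-shell "magic number": a central atom surrounded by complete shells of 10k² + 2 atoms, as in the cuboctahedral model above. A short sketch of the shell counting:

```python
def cuboctahedral_cluster_sizes(n_shells):
    """Cumulative atom counts for closed-shell (cuboctahedral)
    clusters: a central atom plus shells of 10*k**2 + 2 atoms."""
    sizes, total = [1], 1
    for k in range(1, n_shells + 1):
        total += 10 * k * k + 2
        sizes.append(total)
    return sizes

# Two closed shells around the central atom give the 55-atom cluster.
print(cuboctahedral_cluster_sizes(3))  # [1, 13, 55, 147]
```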


Author(s):  
Kenneth R. Lawless

One of the most important applications of the electron microscope in recent years has been to the observation of defects in crystals. Replica techniques have been widely utilized for many years for the observation of surface defects, but more recently the most striking use of the electron microscope has been for the direct observation of internal defects in crystals, utilizing the transmission of electrons through thin samples. Defects in crystals may be classified basically as point defects, line defects, and planar defects, all of which play an important role in determining the physical or chemical properties of a material. Point defects are of two types: either vacancies, where individual atoms are missing from lattice sites, or interstitials, where an atom is situated in between normal lattice sites. The so-called point defects most commonly observed are actually aggregates of either vacancies or interstitials. Details of crystal defects of this type are considered in the special session on “Irradiation Effects in Materials” and will not be considered in detail in this session.


Author(s):  
W.M. Stobbs

I do not have access to the abstracts of the first meeting of EMSA, but at this, the 50th Anniversary meeting of the Electron Microscopy Society of America, I have an excuse to consider the historical origins of the approaches we take to the use of electron microscopy for the characterisation of materials. I have myself been actively involved in the use of TEM for the characterisation of heterogeneities for little more than half of that period. My own view is that it was between the 3rd International Meeting at London and the 1956 Stockholm meeting, the first of the European series, that the foundations of the approaches we now take to the characterisation of a material using the TEM were laid down. (This was 10 years before I took dynamical theory to be etched in stone.) It was at the 1956 meeting that Menter showed lattice resolution images of sodium faujasite and Hirsch, Horne and Whelan showed images of dislocations in the XIVth session on “metallography and other industrial applications”. I have always, incidentally, been delighted by the way the latter authors misinterpreted astonishingly clear thickness fringes in a beaten foil of Al as being contrast due to “large strains”, an error which they corrected with admirable rapidity as the theory developed. At the London meeting the research described covered a broad range of approaches, including many that are only now being rediscovered as worth further effort; however, such is the power of “the image” to persuade that the above two papers set trends which influence, perhaps too strongly, the approaches we take now. Menter was clear that the way the planes in his image tended to be curved was associated with the imaging conditions rather than with lattice strains, and yet it now seems to be common practice to assume that the dots in an “atomic resolution image” can faithfully represent the variations in atomic spacing at a localised defect.
Even when the more reasonable approach is taken of matching the image details with a computed simulation for an assumed model, the non-uniqueness of the interpreted fit seems to be rather rarely appreciated. Hirsch et al., on the other hand, made a point of using their images to get numerical data on characteristics of the specimen they examined, such as its dislocation density, which would not be expected to be influenced by uncertainties in the contrast. Nonetheless the trends were set, with microscope manufacturers producing higher and higher resolution microscopes, while the blind faith of the users in the image produced as being a near-directly interpretable representation of reality seems to have increased rather than been generally questioned. But if we want to test structural models we need numbers, and it is the analogue-to-digital conversion of the information in the image that is required.
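Extracting a dislocation density from a micrograph is exactly the kind of numbers-from-images step advocated above. One standard recipe (the line-intercept method; not necessarily the procedure Hirsch et al. used), with invented illustrative numbers:

```python
def dislocation_density(n_intersections, line_length_m, thickness_m):
    """Line-intercept estimate of dislocation density from a TEM
    image: rho = 2 * N / (L * t), where N is the number of
    intersections of dislocation images with test lines of total
    projected length L, in a foil of thickness t."""
    return 2.0 * n_intersections / (line_length_m * thickness_m)

# Illustrative numbers: 40 intersections over 10 micrometres of test
# line, in a foil 100 nm thick; density comes out in m^-2.
rho = dislocation_density(40, 10e-6, 100e-9)
print(rho)
```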


Author(s):  
B. Lencova ◽  
G. Wisselink

Recent progress in computer technology enables the calculation of lens fields and focal properties on commonly available computers such as IBM ATs. If we add to this the use of graphics, we greatly increase the applicability of design programs for electron lenses. Most programs for field computation are based on the finite element method (FEM). They are written in Fortran 77, so that they are easily transferred from PCs to larger machines. The design process has recently been made significantly more user-friendly by adding input programs written in Turbo Pascal, which allows a flexible implementation of computer graphics. The input programs have not only menu-driven input and modification of numerical data, but also graphics editing of the data. The input programs create files which are subsequently read by the Fortran programs. From the main menu of our magnetic lens design program, further options are chosen by using function keys or numbers. Some options (lens initialization and setting, fine mesh, current densities, etc.) open other menus where computation parameters can be set or numerical data can be entered with the help of a simple line editor. The "draw lens" option enables graphical editing of the mesh - see fig. 1. The geometry of the electron lens is specified in terms of coordinates and indices of a coarse quadrilateral mesh. In this mesh, the fine mesh with smoothly changing step size is calculated by an automeshing procedure. The options shown in fig. 1 allow modification of the number of coarse mesh lines, change of coordinates of mesh points or lines, and specification of lens parts. Interactive and graphical modification of the fine mesh can be called from the fine mesh menu. Finally, the lens computation can be called. Our FEM program allows up to 8000 mesh points on an AT computer.
Another menu allows the display of computed results stored in output files and graphical display of axial flux density, flux density in magnetic parts, and the flux lines in magnetic lenses - see fig. 2. A series of several lens excitations with user-specified or default magnetization curves can be calculated and displayed in one session.
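A fine mesh with smoothly changing step size, as produced between coarse mesh lines, can be sketched with a simple geometric grading; the program's actual automeshing procedure is not reproduced here:

```python
def graded_mesh(a, b, n_steps, ratio):
    """Mesh points on [a, b] with n_steps steps whose sizes grow by a
    constant factor `ratio`: one simple way to obtain a fine mesh
    with smoothly changing step size between two coarse mesh lines."""
    if ratio == 1.0:
        h = (b - a) / n_steps
    else:
        # First step chosen so the geometric series of steps sums to b - a.
        h = (b - a) * (ratio - 1.0) / (ratio ** n_steps - 1.0)
    points, x = [a], a
    for _ in range(n_steps):
        x += h
        points.append(x)
        h *= ratio
    return points

# Five steps on [0, 1], each 1.5x larger than the previous one.
pts = graded_mesh(0.0, 1.0, 5, 1.5)
print(pts)
```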

