simple assumption
Recently Published Documents


TOTAL DOCUMENTS

95
(FIVE YEARS 32)

H-INDEX

14
(FIVE YEARS 2)

2021 ◽  
Vol 2131 (5) ◽  
pp. 052079
Author(s):  
A Galkin ◽  
V Pankov

Abstract An important quantity determining the choice of technical solutions in the design of both surface and underground structures in the permafrost area is the thawing depth of the rocks. To obtain simple analytical relations for the thawing depth over time, a simple assumption is used: that the initial temperature of the rocks is equal to the melting temperature of ice. The aim of the present work was to assess the impact of this assumption on the forecast accuracy. For a quantitative assessment, a simple typical formula recommended by construction norms was used. The functional dependence of the density of the rocks and their heat capacity on the fraction of ice content was considered in the formulas. A rock consisting of a combination of quartz sand and ice was used as an example. Multiple variant calculations were carried out according to the formulas, and their results are presented in the form of charts. It was shown that the relative error in the determination of thawing depth depends solely on the Stefan criterion and is independent of the thawing duration, the thermal conductivity coefficient of the thawing rocks and the air temperature during thawing. A relation was obtained which allows one to assess quickly at which initial values (temperature and ice content of the frozen rocks) it is possible to use the formulas obtained from the simplified calculation models with the assumption that the temperature of the rocks is equal to the melting temperature of ice.
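As a hedged illustration of the kind of square-root-of-time estimate the abstract discusses, the sketch below compares the simplified formula (initial rock temperature equal to the melting point) with a version that also absorbs the heat needed to warm the frozen rock. All symbols, values and the exact formula are illustrative assumptions, not taken from the paper; the point is that the relative error collapses to a function of the Stefan number alone.

```python
import math

def thaw_depth(k, T_s, t, rho, w, L, c=0.0, dT0=0.0):
    """Square-root-of-time thawing depth (illustrative Stefan-type estimate).

    k   : thermal conductivity of the thawed rock [W/(m*K)]
    T_s : surface (air) temperature above the melting point [K]
    t   : thawing duration [s]
    rho : density of the rock [kg/m^3]
    w   : ice content (mass fraction)
    L   : latent heat of melting of ice [J/kg]
    c   : heat capacity of the frozen rock [J/(kg*K)]
    dT0 : initial subcooling T_melt - T_0 [K]; dT0 = 0 reproduces the
          simplifying assumption T_0 = T_melt
    """
    q = rho * (w * L + c * dT0)  # heat absorbed per unit volume of rock
    return math.sqrt(2.0 * k * T_s * t / q)

# Illustrative values (quartz sand + ice), not taken from the paper:
k, T_s, t = 1.5, 10.0, 30 * 86400
rho, w, L, c, dT0 = 1600.0, 0.3, 334000.0, 1800.0, 5.0

h_simple = thaw_depth(k, T_s, t, rho, w, L)        # assumes T_0 = T_melt
h_full = thaw_depth(k, T_s, t, rho, w, L, c, dT0)  # accounts for subcooling
Ste = c * dT0 / (w * L)                            # Stefan criterion
rel_err = (h_simple - h_full) / h_simple
# The relative error equals 1 - 1/sqrt(1 + Ste), independent of k, T_s, t:
assert abs(rel_err - (1.0 - 1.0 / math.sqrt(1.0 + Ste))) < 1e-12
```

Changing `k`, `T_s` or `t` leaves `rel_err` unchanged, mirroring the abstract's claim that the error depends only on the Stefan criterion.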


Universe ◽  
2021 ◽  
Vol 7 (11) ◽  
pp. 424
Author(s):  
Bei-Lok Hu

The Weyl curvature constitutes the radiative sector of the Riemann curvature tensor and gives a measure of the anisotropy and inhomogeneities of spacetime. Penrose’s 1979 Weyl curvature hypothesis (WCH) assumes that the universe began in a very low gravitational entropy state, corresponding to zero Weyl curvature, namely, the Friedmann–Lemaître–Robertson–Walker (FLRW) universe. This is a simple assumption with far-reaching implications. In classical general relativity, Belinsky, Khalatnikov and Lifshitz (BKL) showed in the 70s that the most general cosmological solutions of the Einstein equation are of the inhomogeneous Kasner type, with intermittent alternation of the one direction of contraction (in the cosmological expansion phase), according to the mixmaster dynamics of Misner (M). How could WCH and BKL-M co-exist? An answer was provided in the 80s with the consideration of quantum field processes such as vacuum particle creation, which was copious at the Planck time (10⁻⁴³ s), and whose backreaction effects were shown to be so powerful as to rapidly damp away the irregularities in the geometry. It was proposed that the vacuum viscosity due to particle creation can act as an efficient transducer of gravitational entropy (large for BKL-M) to matter entropy, keeping the universe at that very early time in a state commensurate with the WCH. In this essay I expand the scope of that inquiry to a broader range, asking how the WCH would fare with various cosmological theories, from classical to semiclassical to quantum, focusing on their predictions near the cosmological singularities (past and future) or the avoidance thereof, allowing the Universe to encounter different scenarios, such as undergoing a phase transition or a bounce. WCH is of special importance to cyclic cosmologies, because any slight irregularity toward the end of one cycle will generate greater anisotropy and inhomogeneities in the next cycle.
We point out that regardless of what other processes may be present near the beginning and the end states of the universe, the backreaction effects of quantum field processes probably serve as the best guarantor of WCH because these vacuum processes are ubiquitous, powerful and efficient in dissipating the irregularities to effectively nudge the Universe to a near-zero Weyl curvature condition.


2021 ◽  
Vol 10 (2) ◽  
pp. e020
Author(s):  
Iris Kantor ◽  
Thomás A. S. Haddad

To what extent was the circulation of scientific knowledge shaped by European imperial geopolitics in the late eighteenth century? Recruited to fulfill tasks increasingly considered essential to the very workings of imperial administrations, scientific practitioners of the time paradoxically seem to have used precisely this encroachment in state apparatuses to secure some degree of autonomy for their nascent field. Thus, every material form of circulation of scientific information must ultimately be understood as an act with political consequences. Here we present these ideas through the analysis of two concrete scientific artifacts which exemplify the circulation of scientific information inside and across empires: two atlases, one terrestrial and one celestial (the latter being a version of Flamsteed’s famous atlas of 1729, by way of intermediate French editions), produced in Portugal at the turn of the nineteenth century. Discarding the simple assumption that such cartographic artifacts might have had a merely “utilitarian” use for the Portuguese imperial administration, we insist on their political and communicative nature, grounded in their modes of participation in trans-imperial pathways of circulation of knowledge, people, practices, and models of scientific authority (entangling Britain, France, and the Americas on multiple time scales). We also highlight how the atlases contributed to the affirmation of a new patriotic science in Portugal, and explore the markedly didactic vocation of both objects, which also raises the question of the recruitment and reproduction of a new kind of imperial elite.


2021 ◽  
Vol 3 ◽  
Author(s):  
Yuto Miyake ◽  
Tadashi Suga ◽  
Masafumi Terada ◽  
Takahiro Tanaka ◽  
Hiromasa Ueno ◽  
...  

The plantar flexor torque plays an important role in achieving superior sprint performance in sprinters. Because of the close relationship between joint torque and muscle size, a simple assumption can be made that greater plantar flexor muscles (i.e., triceps surae muscles) are related to better sprint performance. However, previous studies have reported the absence of these relationships. Furthermore, to examine these relationships, only a few studies have calculated the muscle volume (MV) of the plantar flexors. In this study, we hypothesized that the plantar flexor MVs may not be important morphological factors for sprint performance. To test our hypothesis, we examined the relationships between plantar flexor MVs and sprint performance in sprinters. Fifty-two male sprinters and 26 body size-matched male non-sprinters participated in this study. On the basis of the personal best 100 m sprint times [range, 10.21–11.90 (mean ± SD, 11.13 ± 0.42) s] in sprinters, a K-means cluster analysis was applied to divide them into four sprint performance level groups (n = 8, 8, 19, and 17), four being the optimal number of clusters as determined by the silhouette coefficient. The MVs of the gastrocnemius lateralis (GL), gastrocnemius medialis (GM), and soleus (SOL) in participants were measured using magnetic resonance imaging. In addition to absolute MVs, the relative MVs normalized to body mass were used for the analyses. The absolute and relative MVs of the total and individual plantar flexors were significantly greater in sprinters than in non-sprinters (all p < 0.01, d = 0.64–1.39). In contrast, none of the plantar flexor MV variables differed significantly among the four groups of sprinters (all p > 0.05, η2 = 0.02–0.07). Furthermore, none of the plantar flexor MV variables correlated significantly with the personal best 100 m sprint time in sprinters (r = −0.253–0.002, all p > 0.05).
These findings suggest that although the plantar flexor muscles are specifically developed in sprinters compared to untrained non-sprinters, the greater plantar flexor MVs in the sprinters may not be important morphological factors for their sprint performance.
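The grouping step described above, K-means with the number of clusters chosen by the silhouette coefficient, can be sketched in a self-contained way. The minimal 1-D implementation and the example times below are illustrative stand-ins, not the study's data or code:

```python
import random
from statistics import mean

def kmeans_1d(xs, k, iters=100, seed=0):
    """Plain 1-D k-means; returns one cluster label per value."""
    rng = random.Random(seed)
    centers = rng.sample(xs, k)
    labels = [0] * len(xs)
    for _ in range(iters):
        labels = [min(range(k), key=lambda j: abs(x - centers[j])) for x in xs]
        for j in range(k):
            members = [x for x, lab in zip(xs, labels) if lab == j]
            if members:
                centers[j] = mean(members)
    return labels

def mean_silhouette(xs, labels):
    """Average silhouette coefficient (1-D, absolute-difference distance)."""
    scores = []
    for i, x in enumerate(xs):
        own = [abs(x - y) for j, y in enumerate(xs)
               if labels[j] == labels[i] and j != i]
        other_labels = [lab for lab in set(labels) if lab != labels[i]]
        if not own or not other_labels:   # singleton or single cluster
            scores.append(0.0)
            continue
        a = mean(own)                     # mean intra-cluster distance
        b = min(mean([abs(x - y) for j, y in enumerate(xs) if labels[j] == lab])
                for lab in other_labels)  # nearest neighbouring cluster
        scores.append((b - a) / max(a, b))
    return mean(scores)

# Illustrative 100 m times (s); the study's real data are not reproduced here.
times = [10.2, 10.3, 10.4, 10.8, 10.9, 11.0, 11.3, 11.4, 11.5, 11.8, 11.9]
best_k = max(range(2, 6),
             key=lambda k: mean_silhouette(times, kmeans_1d(times, k)))
```

The silhouette score rewards clusterings whose points are closer to their own cluster than to the nearest other cluster, so scanning `k` and keeping the maximum gives a data-driven choice of group count, as in the study.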


2021 ◽  
Author(s):  
RUI MIRANDA GUEDES

How to predict the residual strength of polymer matrix composites (PMCs) after fatigue cycles at multiple stress levels, based on the fatigue or Wöhler (S–N) curves, remains an unsatisfactorily resolved problem. Miner’s Rule is a widespread example of a simple way to account for damage accumulation under different fatigue cycles. Under certain combinations of stress levels, Miner’s Rule accurately predicts the lifetime of PMCs, but it fails in other cases. The reason is the simple assumption of linear cumulative damage, which does not account for sequence effects in the loading history. Several researchers have proposed modifications to Miner’s Rule; however, due to its simplicity, Miner’s Rule is still used by structural designers. Recent research work proposed compatibility conditions for fatigue damage functions in the S–N plane, leading to a simple model that fulfils those conditions, contrary to the previous models, Miner’s Rule and the Broutman and Sahu linear model. These models predict fatigue life under variable amplitude loading based on constant amplitude fatigue data. Necessarily, the analytical form of the S–N curve influences the model lifetime predictions. Experimental data from the literature illustrate the models’ predictions under different loading conditions. Although this work focused on composite materials, we foresee its extension to other materials.
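Miner's linear damage sum is simple enough to state in a few lines. In this sketch the power-law S–N curve and its constants are invented for illustration; the point is that the damage sum ignores load-sequence effects, which is exactly the limitation discussed above:

```python
def sn_cycles(stress, A=1e20, m=8.0):
    """Illustrative power-law S-N curve N(S) = A * S**-m (made-up constants)."""
    return A * stress ** -m

def miner_damage(blocks):
    """Miner's linear cumulative damage D = sum(n_i / N_i); failure at D >= 1.

    blocks: iterable of (stress_level, applied_cycles) pairs. Sequence
    effects are ignored -- exactly the simplification criticized above."""
    return sum(n / sn_cycles(s) for s, n in blocks)

# Two-block loading: under Miner's Rule the block order does not change D,
# whereas real composites often show high-low vs. low-high sequence effects.
history = [(30.0, 2.0e5), (20.0, 1.0e6)]
D = miner_damage(history)
assert D == miner_damage(list(reversed(history)))
```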


Author(s):  
Giacomo Frulla

Aircraft preliminary design requires many complex evaluations and assumptions related to design variables that are not completely known at the very initial stage. Didactic activity can become unclear, since students ask for precise values as a starting point. A simple tool for wing weight estimation is presented to overcome these common difficulties and to explain the following points: a) the intrinsically iterative nature of the preliminary design stage; b) a useful and realistic calculation of the wing weight based on a very simple assumption, without cumbersome calculations and formulas. The purpose of the paper is to provide a didactic tool to facilitate the understanding of some steps in estimating wing weight at the preliminary design level. The problem of identifying the main variables for the initial estimation is dealt with, and specific aspects that are usually hidden by the complexity of the disciplines involved and by the usual calculation methods applied in structural design are pointed out. The procedure highlights the main steps in estimating the wing weight of a general aviation straight-wing aircraft at the preliminary design stage. The effect of the main variables on the wing weight variation is also presented, confirming well-known results from the literature and design manuals.
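The "intrinsically iterative nature" of the preliminary stage can be illustrated with a toy fixed-point loop: the wing weight depends on the take-off weight, which in turn includes the wing weight. All weight fractions below are invented placeholders, not values from the paper:

```python
def estimate_mtow(payload, fuel_fraction=0.20, empty_less_wing_fraction=0.45,
                  wing_weight_per_mtow=0.10, tol=1e-6, max_iter=200):
    """Toy fixed-point iteration for take-off weight (MTOW).

    The wing weight is taken as a fraction of MTOW, but MTOW itself
    contains the wing weight -- hence the iteration. All fractions are
    invented placeholders, not values from the paper."""
    guess = 2.0 * payload
    for _ in range(max_iter):
        wing_weight = wing_weight_per_mtow * guess   # depends on MTOW...
        new = (payload + fuel_fraction * guess
               + empty_less_wing_fraction * guess
               + wing_weight)                        # ...which contains it
        if abs(new - guess) < tol:
            return new
        guess = new
    return guess

mtow = estimate_mtow(400.0)  # payload in kg
```

With these linear fractions the loop converges to the closed-form value `payload / (1 - sum of fractions)`; with a realistic nonlinear wing-weight model, the same loop is the standard sizing iteration.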


Author(s):  
Giacomo Frulla

Aircraft preliminary design requires many complex evaluations and assumptions related to design variables that are not completely known at the very initial stage. Didactic activity can become unclear, since students ask for precise values as a starting point. A simple tool for wing weight estimation is presented to overcome these common difficulties and to clarify the following points: a) the intrinsically iterative nature of the preliminary design stage; b) a useful and realistic calculation of the wing weight based on a very simple assumption, without cumbersome calculations and formulas. The procedure is applied to the calculation of the wing weight for a typical general aviation aircraft in the preliminary design stage. The effect of the main variables on the wing weight variation is also presented, confirming well-known results from the literature and design manuals.


2021 ◽  
Author(s):  
Martin Gottlich ◽  
Macia Buades Rotger ◽  
Juliana Wiechert ◽  
Frederike Beyer ◽  
Ulrike M. Kramer

Many studies point toward volume reductions in the amygdala as a potential neurostructural marker for trait aggression. However, most of these findings stem from clinical samples, rendering unclear whether the findings generalize to non-clinical populations. Furthermore, the notion of neural networks suggests that interregional correlations in grey matter volume (i.e., structural covariance) can explain individual differences in aggressive behavior beyond local univariate associations. Here, we tested whether structural covariance between amygdala subregions and the rest of the brain is associated with self-reported aggression in a large sample of healthy young students (n=263; 51% women). Salivary testosterone concentrations were measured for a subset of n=76 participants (45% women), allowing us to investigate the influence of endogenous testosterone on structural covariance. Aggressive individuals showed enhanced covariance between superficial amygdala (SFA) and dorsal anterior insula (dAI), but lower covariance between laterobasal amygdala (LBA) and dorsolateral prefrontal cortex (dlPFC). These structural patterns overlap with functional networks involved in the genesis and regulation of aggressive behavior, respectively. With increasing endogenous testosterone, we observed stronger structural covariance between centromedial amygdala (CMA) and medial prefrontal cortex in men and between CMA and orbitofrontal cortex in women. These results support structural covariance of amygdala subregions as a robust correlate of trait aggression in healthy individuals. Moreover, regions that showed structural covariance with the amygdala modulated by either testosterone or aggression did not overlap, suggesting a more complex role of testosterone in human social behavior rather than the simple assumption that testosterone only increases aggressiveness.
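For readers unfamiliar with the term, "structural covariance" here means the across-participant correlation of grey-matter volumes between two regions. A minimal sketch with made-up toy volumes (the actual analysis used voxel-based morphometry, and none of the numbers below come from the study):

```python
from statistics import mean, stdev

def pearson(x, y):
    """Pearson correlation between two equal-length samples."""
    mx, my = mean(x), mean(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y)) / (len(x) - 1)
    return cov / (stdev(x) * stdev(y))

# Toy grey-matter volumes (one value per participant and region); real
# values would come from VBM segmentation and are not reproduced here.
sfa = [4.1, 4.3, 3.9, 4.6, 4.0, 4.4]   # superficial amygdala, ml
dai = [6.2, 6.5, 6.0, 6.9, 6.1, 6.6]   # dorsal anterior insula, ml
structural_covariance = pearson(sfa, dai)
```

Group differences in this interregional correlation (e.g., aggressive vs. non-aggressive participants) are what the study tests.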


Author(s):  
Michele Greco ◽  
Francesco Arbia ◽  
Raffaele Giampietro

According to the Water Framework Directive, the Ecological Flow (Eflow) is assumed to be the minimum water discharge required to achieve and maintain the environmental objectives of “good quality status” in a natural water body. It is widely recognized that the hydrological regime of natural flow plays a primary and crucial role in shaping the physical conditions of habitats, which in turn determine the biotic composition and sustainability of aquatic ecosystems. Furthermore, the simple assumption that supplying a minimum instream flow during dry periods suffices is no longer enough to protect the river environment. Recent hydro-ecological understanding states that all flow components should be considered as operational targets for water management, from base flows (including low flows) to high and flood regimes, in terms of magnitude, frequency, duration, timing and rate of change. Several codes have been developed and applied to different case studies in order to define common tools for Eflow assessment. The study proposes the application of the Indicators of Hydrologic Alteration methodology (IHA, by TNC) coupled with the evaluation of the Index of Hydrological Regime Alteration (IARI, by ISPRA) as an operational tool to define the ecological flow at each monitored cross section, in support of sustainable water resources management and planning. The case study of the Agri River in Basilicata (Southern Italy) is presented. The analyses were carried out on monthly discharge data derived by applying the HEC Hydrologic Modeling System at the basin scale, using daily rain measurements from the regional rainfall gauge stations, calibrated against the observed inlet water discharge registered at the Lago del Pertusillo reservoir station.
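As a rough sketch of the kind of indicator involved, the function below measures the relative deviation of altered monthly median flows from the natural regime. This is a simplified stand-in for the IHA/IARI indicators, with hypothetical data rather than the Agri River series, and not the official IARI formulation:

```python
from statistics import median

def monthly_alteration(natural, altered):
    """Relative deviation of altered monthly median flows from the natural regime.

    natural, altered: dicts month -> list of discharges (m^3/s), one value
    per year. A simplified stand-in for the IHA/IARI indicators, not the
    official formulation."""
    return {m: abs(median(altered[m]) - median(natural[m])) / median(natural[m])
            for m in natural}

# Hypothetical two-season record, not the Agri River data:
natural = {"Jan": [12.0, 14.0, 13.0], "Aug": [2.0, 1.8, 2.2]}
altered = {"Jan": [10.0, 11.0, 9.5], "Aug": [2.0, 1.9, 2.1]}
deviations = monthly_alteration(natural, altered)
```

The full IHA suite extends this idea from monthly medians to 33 parameters covering magnitude, frequency, duration, timing and rate of change.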


2021 ◽  
Author(s):  
Roland Szatmári ◽  
Ferenc Kun

Layers of dense pastes or colloids attached to a substrate often undergo sequential cracking due to shrinkage stresses caused by desiccation. From the spectacular crack patterns of dried-out lake beds, through the polygonal ground patterns of permafrost regions, to the formation of columnar joints in cooling volcanic lava, shrinkage-induced cracking is responsible for a large variety of complex crack structures in nature. Under laboratory conditions this phenomenon is usually investigated by desiccating thin layers of dense colloidal suspensions in a container, which typically leads to polygonal crack patterns with a high degree of isotropy.

It is of great interest how to control the structure of shrinkage-induced two-dimensional crack patterns, also because of its high importance for technological applications. Recently, it has been demonstrated experimentally for dense calcium carbonate and magnesium carbonate hydroxide pastes that, by applying mechanical excitation in the form of vibration or flow of the paste, the emerging desiccation crack pattern remembers the direction of excitation, i.e. the main cracks become aligned and their orientation can be tuned by the direction of mechanical excitation.

In order to understand the mechanism of this memory effect, we investigate the fragmentation process of a brittle cylindrical sample, where the driving force of cracking comes from a continuous shrinkage which sooner or later destroys the cohesive forces between the structure’s building blocks. Our study is based on a two-dimensional discrete element model in which the material is discretized via a special form of Voronoi tessellation, the so-called randomized vector lattice, which allows us to fine-tune the initial disorder of the system. We assume that the initial mechanical vibration imprints plastic deformation into the paste, which is captured in the model by assuming that the local cohesive strength of the layer has a directional dependence: the layer is stronger along the direction of vibration. We demonstrate that, based on this simple assumption, the model reproduces well the qualitative features of the anisotropic crack patterns observed in experiments. Gradually increasing the degree of anisotropy, the system exhibits a crossover from an isotropic cellular structure to an anisotropic one.
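The directional strength rule described above can be written down compactly. The cos(2θ) form (period π, since strength along a direction and its opposite must match) and the numbers below are an illustrative guess at such a rule, not the exact one used in the model:

```python
import math

def cohesive_strength(theta, sigma0=1.0, eps=0.3, theta0=0.0):
    """Direction-dependent cohesive strength of the layer.

    theta0 is the direction of the initial vibration; eps sets the degree
    of anisotropy (eps = 0 recovers the isotropic model). The cos(2*...)
    form (period pi) is an illustrative choice, not the paper's rule."""
    return sigma0 * (1.0 + eps * math.cos(2.0 * (theta - theta0)))

# Bonds are strongest parallel to the excitation and weakest
# perpendicular to it, so cracks preferentially cut across theta0:
parallel = cohesive_strength(0.0)
perpendicular = cohesive_strength(math.pi / 2)
assert parallel > perpendicular
```

Sweeping `eps` from 0 upward is the numerical analogue of the crossover the abstract describes, from an isotropic cellular crack pattern to an aligned, anisotropic one.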

