lower envelope
Recently Published Documents


TOTAL DOCUMENTS

42
(FIVE YEARS 11)

H-INDEX

8
(FIVE YEARS 2)

2021 ◽  
Author(s):  
Hao Tian ◽  
Chao Gao ◽  
Xin Zhang ◽  
Hongbing Xiao ◽  
Chongchong Yu

Abstract Background: Frost stress is an abiotic stressor that impacts plant growth, health, and regional distribution. The freeze-thaw characteristics of plants during the overwintering period help clarify relevant issues in plant physiology, including plant cold resistance and cold acclimation. Therefore, we aimed to develop a non-invasive instrument and method for accurate in situ detection of changes in stem freeze-thaw characteristics during the overwintering period. Results: A sensor based on the standing wave ratio (SWR) method was designed to measure stem volume water content (StVWC), from which stem volume ice content (StVIC) and stem freeze-thaw rate of ice (StFTRI) were derived during the overwintering period. The resolution of the StVWC sensor is less than 0.05%, its mean absolute error and root mean square error are less than 1%, and its dynamic response time is 0.296 s. The peak of the daily change rate of the lower envelope of the StVWC sequence occurs when the plant enters and exits the overwintering period, so it can be used to determine the moment of freeze-thaw occurrence; this moment coincides with rapid transitions between high and low ambient temperatures. In the field, the StVIC and StFTRI of Juniperus virginiana L., Lagerstroemia indica L., and Populus alba L. gradually increased at the beginning of, fluctuated steadily during, and gradually decreased toward the end of the overwintering period. The StVIC and StFTRI also showed significant variability due to differences among tree species and latitudes. Conclusions: The StVWC sensor has good resolution, accuracy, stability, and sensitivity. The envelope changes of the StVWC sequence and the correspondence between the freeze-thaw moment and ambient temperature indicate that determining the freeze-thaw moment from the peak of the daily change rate of the lower envelope is reliable. The results show that the sensor can monitor changes in the freeze-thaw characteristics of plants and effectively characterize freeze-thaw differences and cold resistance across tree species. Furthermore, it is a cost-effective tool for monitoring freeze-thaw conditions during the overwintering period.
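
A minimal sketch (assumed, not the authors' code) of how this detection rule could be applied to a StVWC series: take the daily minima as the lower envelope, differentiate day to day, and report the day at which the change rate peaks. The sampling layout and all names are hypothetical.

```python
import numpy as np

def freeze_thaw_moment(stvwc, samples_per_day):
    """Return the day index at which the daily change rate of the
    lower envelope of a StVWC series peaks (candidate freeze/thaw day)."""
    n_days = len(stvwc) // samples_per_day
    daily = np.asarray(stvwc)[: n_days * samples_per_day].reshape(n_days, samples_per_day)
    lower_env = daily.min(axis=1)              # daily lower envelope of StVWC
    change_rate = np.abs(np.diff(lower_env))   # daily change rate of the envelope
    return int(np.argmax(change_rate)) + 1     # day on which the jump lands

# Usage: three stable days followed by a sharp freeze-induced drop on day 3.
series = np.concatenate([np.full(48, 0.42), np.full(24, 0.41), np.full(24, 0.18)])
print(freeze_thaw_moment(series, samples_per_day=24))  # -> 3
```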


2021 ◽  
Vol 2021 ◽  
pp. 1-12
Author(s):  
Yang Liu ◽  
Kaiwen Zhang ◽  
Denghang Tian ◽  
Liming Qu ◽  
Yang Liu

The reverse thrust in the deep site causes the upward propagation of stress and displacement in the overlying soil. The displacement field reaches its maximum around the fault zone and gradually decreases as the spatial location becomes shallower. The deformation of the overlying soil is mainly governed by the vertical dislocation of the fracture zone. The monitoring curve showed no abrupt change, indicating that the top surface of the soil did not rupture and that the fault influenced only the displacement transfer at the top surface. When a creeping dislocation occurs in the bottom fracture zone, the maximum principal stress at the upper boundary of the deep site is dominated by compressive stress. The maximum principal stress of the soil reaches a maximum on both sides of the fracture zone, and the soil on the right side of the fracture zone shows a pronounced compression effect. The maximum principal stress monitoring curve varies greatly, indicating the development of plastic failure in the soil, consistent with the results for the plastic failure zone presented later in the paper. When the bottom fracture zone starts to move, the plastic zone first appears at the junction between the front end of the bottom fracture zone and the overlying soil. As the dislocation of the fracture zone increases, the plastic zone continues to extend into the inner soil. The left and right sides of the fracture zone show tensile failure and compression failure, respectively. The development of the upper envelope curve of the plastic zone in the overlying soil satisfies a Boltzmann equation with first-order exponential growth, while the development of the lower envelope curve satisfies a Gauss equation with second-order exponential growth. The development curve equations of the plastic zone are verified using residual plots of the fitting results and the associated correlation parameters.
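
As an illustration of the envelope-fitting step, the sketch below fits a Boltzmann (sigmoidal) curve to hypothetical plastic-zone extent data with scipy.optimize.curve_fit and inspects the fit residuals, as the paper does; the functional forms are the standard ones and the data are invented, so this is not the authors' code.

```python
import numpy as np
from scipy.optimize import curve_fit

def boltzmann(x, a1, a2, x0, dx):
    """Boltzmann (sigmoidal) curve used for the upper envelope."""
    return a2 + (a1 - a2) / (1.0 + np.exp((x - x0) / dx))

def gauss(x, y0, a, x0, w):
    """Gaussian curve used for the lower envelope."""
    return y0 + a * np.exp(-((x - x0) ** 2) / (2.0 * w ** 2))

# Hypothetical data: fault dislocation amount vs. extent of the plastic-zone envelope.
rng = np.random.default_rng(0)
disloc = np.linspace(0.0, 2.0, 25)
upper = boltzmann(disloc, 0.1, 5.0, 1.0, 0.2) + rng.normal(0.0, 0.05, 25)

popt, _ = curve_fit(boltzmann, disloc, upper, p0=[0.1, 5.0, 1.0, 0.2])
residuals = upper - boltzmann(disloc, *popt)   # residual-plot data used for verification
print(popt.round(2), residuals.std().round(3))
```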


2021 ◽  
Vol 14 (2) ◽  
pp. 1125-1145 ◽  
Author(s):  
William J. Pringle ◽  
Damrongsak Wirasaet ◽  
Keith J. Roberts ◽  
Joannes J. Westerink

Abstract. This paper details and tests numerical improvements to the ADvanced CIRCulation (ADCIRC) model, a widely used finite-element method shallow-water equation solver, to more accurately and efficiently model global storm tides with seamless local mesh refinement in storm landfall locations. The sensitivity to global unstructured mesh design was investigated using automatically generated triangular meshes with a global minimum element size (MinEle) that ranged from 1.5 to 6 km. We demonstrate that refining resolution based on topographic seabed gradients and employing a MinEle less than 3 km are important for the global accuracy of the simulated astronomical tide. Our recommended global mesh design (MinEle = 1.5 km) based on these results was locally refined down to two separate MinEle values (500 and 150 m) at the coastal landfall locations of two intense storms (Hurricane Katrina and Super Typhoon Haiyan) to demonstrate the model's capability for coastal storm tide simulations and to test the sensitivity to local mesh refinement. Simulated maximum storm tide elevations closely follow the lower envelope of observed high-water marks (HWMs) measured near the coast. In general, peak storm tide elevations along the open coast are decreased, and the timing of the peak occurs later with local coastal mesh refinement. However, this mesh refinement only has a significant positive impact on HWM errors in straits and inlets narrower than the MinEle and in bays and lakes separated from the ocean by these passages. Lastly, we demonstrate that the new numerical treatment is 1 to 2 orders of magnitude faster than previous ADCIRC versions used in earlier studies because gravity-wave-based stability constraints are removed, allowing for larger computational time steps.
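
A hedged sketch of the kind of topographic-gradient mesh-size function described here (mesh-generation tools such as OceanMesh2D provide such functions; this is an illustrative reimplementation under assumed parameter names, not the authors' code): the target element size shrinks where the seabed slope is steep and is floored at MinEle.

```python
import numpy as np

def mesh_size_from_slope(depth, dx, min_ele=1500.0, max_ele=6000.0, alpha=30.0):
    """Illustrative topographic-gradient mesh-size function.

    depth: 2-D bathymetry grid in metres (positive down), spaced dx metres apart.
    alpha: desired number of elements across a seabed slope feature (assumed knob).
    Returns a target element-size field clipped to [min_ele, max_ele] metres.
    """
    dhdy, dhdx = np.gradient(depth, dx)
    slope = np.hypot(dhdx, dhdy)
    size = (2 * np.pi / alpha) * np.abs(depth) / np.maximum(slope, 1e-12)
    return np.clip(size, min_ele, max_ele)

# Usage: a shelf break (steep gradient) attracts the finest resolution.
x = np.linspace(0.0, 2e5, 200)
depth = 50 + 1950 / (1 + np.exp(-(x - 1e5) / 5e3))   # 50 m shelf down to 2000 m abyss
size = mesh_size_from_slope(np.tile(depth, (4, 1)), dx=x[1] - x[0])
print(size.min(), size.max())  # finest near the shelf break, capped elsewhere
```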


2020 ◽  
Vol 63 (12) ◽  
pp. 4300-4313
Author(s):  
Emily M. H. Lundberg ◽  
Song Hui Chon ◽  
James M. Kates ◽  
Melinda C. Anderson ◽  
Kathryn H. Arehart

Purpose The overall goal of the current study was to determine whether noise type plays a role in perceptual quality ratings. We compared quality ratings using various noise types and signal-to-noise ratio (SNR) ranges using hearing aid simulations to consider the effects of hearing aid processing features. Method Ten older adults with bilateral mild to moderately severe sensorineural hearing loss rated the sound quality of sentences processed through a hearing aid simulation and presented in the presence of five different noise types (six-talker babble, three-talker conversation, street traffic, kitchen, and fast-food restaurant) at four SNRs (3, 8, 12, and 20 dB). Results Everyday noise types differentially affected sound quality ratings even when presented at the same SNR: Kitchen and three-talker noises were rated significantly higher than restaurant, traffic, and multitalker babble, which were not different from each other. The effects of noise type were most pronounced at poorer SNRs. Conclusions The findings of this study showed that noise types differentially affected sound quality ratings. The differences we observed were consistent with the acoustic characteristics of the noise types. Noise types having lower envelope fluctuations yielded lower quality ratings than noise types characterized by sporadic high-intensity events at the same SNR.
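
To make the envelope-fluctuation claim concrete, the sketch below computes a simple normalized fluctuation metric from the Hilbert amplitude envelope of a noise signal; this metric and the synthetic signals are assumptions for illustration, not the study's analysis code.

```python
import numpy as np
from scipy.signal import hilbert

def envelope_fluctuation(signal):
    """Normalized envelope fluctuation: std of the amplitude envelope / its mean.

    Higher values indicate sporadic high-intensity events (e.g., kitchen noise);
    steadier noises (e.g., multitalker babble) score lower. Illustrative metric only.
    """
    env = np.abs(hilbert(signal))
    return env.std() / env.mean()

fs = 16000
steady = np.random.randn(fs)                                # babble-like steady noise
sporadic = steady * (1 + 4 * (np.random.rand(fs) < 0.01))   # rare loud clatters
print(envelope_fluctuation(steady), envelope_fluctuation(sporadic))
```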


2020 ◽  
Vol 64 (3) ◽  
pp. 838-904
Author(s):  
Haim Kaplan ◽  
Wolfgang Mulzer ◽  
Liam Roditty ◽  
Paul Seiferth ◽  
Micha Sharir

Abstract We describe a new data structure for dynamic nearest neighbor queries in the plane with respect to a general family of distance functions. These include $$L_p$$-norms and additively weighted Euclidean distances. Our data structure supports general (convex, pairwise disjoint) sites that have constant description complexity (e.g., points, line segments, disks, etc.). Our structure uses $$O(n \log ^3 n)$$ storage, and requires polylogarithmic update and query time, improving an earlier data structure of Agarwal, Efrat, and Sharir [SICOMP 1999], which required $$O(n^{\varepsilon })$$ time for an update and $$O(\log n)$$ time for a query. Our data structure has numerous applications. In all of them, it gives faster algorithms, typically reducing an $$O(n^{\varepsilon })$$ factor in the previous bounds to polylogarithmic. In addition, we give here two new applications: an efficient construction of a spanner in a disk intersection graph, and a data structure for efficient connectivity queries in a dynamic disk graph. To obtain this data structure, we combine and extend various techniques from the literature. Along the way, we obtain several side results that are of independent interest. Our data structure depends on the existence and an efficient construction of “vertical” shallow cuttings in arrangements of bivariate algebraic functions. We prove that an appropriate level in an arrangement of a random sample of a suitable size provides such a cutting. To compute it efficiently, we develop a randomized incremental construction algorithm for computing the lowest k levels in an arrangement of bivariate algebraic functions (we mostly consider here collections of functions whose lower envelope has linear complexity, as is the case in the dynamic nearest-neighbor context, under both types of norm). To analyze this algorithm, we also improve a longstanding bound on the combinatorial complexity of the vertical decomposition of these levels. Finally, to obtain our structure, we combine our vertical shallow cutting construction with Chan’s algorithm for efficiently maintaining the lower envelope of a dynamic set of planes in $${{\mathbb {R}}}^3$$. Along the way, we also revisit Chan’s technique and present a variant that uses a single binary counter, with a simpler analysis and improved amortized deletion time (by a logarithmic factor; the insertion and query costs remain asymptotically the same).
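
As a concrete illustration of the central object, the sketch below computes the lower envelope of a set of lines in the plane with the classical O(n log n) stack construction, a static 1-D analogue of the dynamic envelopes of planes in $${{\mathbb {R}}}^3$$ that the data structure maintains; it is illustrative only and unrelated to the paper's implementation.

```python
def lower_envelope(lines):
    """Lower envelope of lines y = m*x + b, returned as (m, b, x_from)
    pieces ordered by x: each line is the pointwise minimum from x_from
    to the next piece's x_from. Classical O(n log n) stack construction."""
    lines = sorted(set(lines), key=lambda mb: (-mb[0], mb[1]))  # slope descending
    hull, xs = [], []                  # envelope lines and their takeover points
    for m, b in lines:
        if hull and hull[-1][0] == m:  # same slope: the lower intercept came first
            continue
        while hull:
            m0, b0 = hull[-1]
            x_cross = (b - b0) / (m0 - m)  # new line is lower right of x_cross
            if xs and x_cross <= xs[-1]:   # stack top is never minimal: pop it
                hull.pop(); xs.pop()
            else:
                break
        xs.append(x_cross if hull else float("-inf"))
        hull.append((m, b))
    return [(m, b, x) for (m, b), x in zip(hull, xs)]

print(lower_envelope([(1, 0), (-1, 0), (0, -2)]))
# [(1, 0, -inf), (0, -2, -2.0), (-1, 0, 2.0)]
```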


2020 ◽  
Author(s):  
William J. Pringle ◽  
Damrongsak Wirasaet ◽  
Keith J. Roberts ◽  
Joannes J. Westerink

Abstract. This paper details and tests numerical improvements to ADCIRC, a widely used finite-element method shallow-water equation solver, to more accurately and efficiently model global storm tides with seamless local mesh refinement in storm landfall locations. The sensitivity to global unstructured mesh design was investigated using automatically generated triangular meshes with a global minimum element size (MinEle) that ranged from 1.5 km to 6 km. We demonstrate that refining resolution based on topographic seabed gradients and employing a MinEle less than 3 km are important for the global accuracy of the simulated astronomical tide. Our recommended global mesh design (MinEle = 1.5 km) based on these results was locally refined down to two separate MinEle values (500 m and 150 m) at the coastal landfall locations of two intense storms (Hurricane Katrina and Super Typhoon Haiyan) to demonstrate the model's capability for coastal storm tide simulations and to test the sensitivity to local mesh refinement. Simulated maximum storm tide elevations closely follow the lower envelope of observed high water marks (HWMs) measured near the coast. In general, peak storm tide elevations along the open coast are decreased, and the timing of the peak occurs later with local coastal mesh refinement. However, this mesh refinement only has a significant positive impact on HWM errors in straits and inlets narrower than the MinEle, and in bays and lakes separated from the ocean by these passages. Lastly, we demonstrate that the new numerical treatment is one to two orders of magnitude faster than previous ADCIRC versions used in earlier studies because gravity-wave-based stability constraints are removed, allowing for larger computational time steps.
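
A back-of-envelope sketch (an assumption for illustration, not ADCIRC code) of the gravity-wave stability constraint the improved treatment removes, dt <= C * dx / sqrt(g * H), showing why a 1.5 km MinEle mesh forces small explicit time steps:

```python
import math

def gravity_wave_dt(min_ele_m, depth_m, courant=1.0):
    """Explicit gravity-wave CFL time-step limit: dt <= C * dx / sqrt(g * H)."""
    g = 9.81  # gravitational acceleration (m/s^2)
    return courant * min_ele_m / math.sqrt(g * depth_m)

# A 1.5 km element over a 4000 m deep ocean limits an explicit solver to ~7.6 s
# steps; removing the constraint permits steps one to two orders of magnitude larger.
print(round(gravity_wave_dt(1500.0, 4000.0), 1))  # -> 7.6
```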


2020 ◽  
Vol 495 (2) ◽  
pp. 2342-2353
Author(s):  
Tony Dalton ◽  
Simon L Morris

ABSTRACT It is known that the GRB equivalent hydrogen column density (NHX) changes with redshift and that, typically, NHX is greater than the GRB host neutral hydrogen column density. We have compiled a large sample of data for GRB NHX and metallicity [X/H]. The main aims of this paper are to generate improved NHX values for our sample, using actual metallicities (dust-corrected where available) for detections and, for the remaining GRBs, a more realistic average intrinsic metallicity obtained by a standard adjustment from solar. Then, by approximating the GRB host intrinsic hydrogen column density as the measured neutral column (NHI,IC) adjusted for the ionization fraction, we isolate a more accurate estimate of the intergalactic medium (IGM) contribution. The GRB sample mean metallicity is [X/H] = −1.17 ± 0.09 (rms), or 0.07 ± 0.05 Z/Zsol, from a sample of 36 GRBs with redshifts 1.76 ≤ z ≤ 5.91, substantially lower than the solar metallicity assumed as standard for many fitted NHX. A lower GRB host mean metallicity results in an increased estimated NHX, with the correction scaling with redshift as Δlog(NHX cm−2) = (0.59 ± 0.04) log(1 + z) + 0.18 ± 0.02. Of the 128 GRBs with data for both NHX and NHI,IC in our sample, only six have NHI,IC > NHX when revised for realistic metallicity, compared to 32 when solar metallicity is assumed. The lower envelope of the revised NHX − NHI,IC, plotted against redshift, can be fit by log((NHX − NHI,IC) cm−2) = 20.3 + 2.4 log(1 + z). This is taken to be an estimate of the maximum IGM hydrogen column density as a function of redshift. Using this approach, we estimate an upper limit to the hydrogen density at redshift zero (n0), consistent with n0 = 0.17 × 10−7 cm−3.
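
The two redshift scalings quoted in the abstract are easy to evaluate directly; the sketch below implements them verbatim (the function names are mine, for illustration):

```python
import numpy as np

def igm_column_upper(z):
    """Maximum IGM hydrogen column density vs. redshift from the paper's
    lower-envelope fit: log10((N_HX - N_HI,IC) / cm^-2) = 20.3 + 2.4*log10(1+z)."""
    return 10 ** (20.3 + 2.4 * np.log10(1.0 + z))

def nhx_metallicity_correction(z):
    """Increase in log10(N_HX) when assumed solar metallicity is replaced by the
    sample's realistic mean: 0.59*log10(1+z) + 0.18 (central values from the abstract)."""
    return 0.59 * np.log10(1.0 + z) + 0.18

for z in (2.0, 4.0, 6.0):
    print(z, f"{igm_column_upper(z):.2e}", round(nhx_metallicity_correction(z), 2))
```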


2020 ◽  
Author(s):  
David Johnny Peres ◽  
Antonino Cancelliere

Landslide thresholds determined empirically through the combined analysis of rainfall and landslide data are at the core of early warning systems. Given a set of rainfall and landslide data, several methods exist to determine the threshold: methods based on triggering events only, methods based on non-triggering events only, and methods based on both types of rainfall events. The first are the most common in the literature. Early work determined the threshold by drawing the lower envelope curve of the triggering events "by eye"; more recent work has used more sophisticated statistical approaches to reduce this subjectivity. Among these, the so-called frequentist method has become prominent. These methods have been criticized because they do not account for uncertainty, i.e. the fact that there is no clear separation between the rainfall characteristics of triggering and non-triggering events. Hence, methods based on the optimization of receiver operating characteristic (ROC) indices (counts of true and false positives/negatives) have been proposed. One of the first methods proposed in this sense used Bayesian a-posteriori probability, which is equivalent to using the ROC precision index. Others have used the True Skill Statistic, as sketched below. On the other hand, the use of non-triggering events only has been discussed by just a few researchers, and the potential of this approach has scarcely been explored.

The choice of method is usually dictated by external factors, such as the availability and reliability of data, but it should also take into account the theoretical statistical properties of each method.

In this context, the present work compares, through Monte Carlo simulations, the statistical properties of each of the above-mentioned methods. In particular, we attempt to answer the following questions: What is the minimum number of landslides needed for a reliable determination of thresholds? How robust is each method for drawing the threshold, i.e. how sensitive is it to artifacts in the data, such as triggering events exchanged with non-triggering events due to incomplete landslide archives? What are the performances of the methods in terms of the whole ROC confusion matrix?

The analysis is performed for various levels of uncertainty in the data, i.e. noise in the separation between triggering and non-triggering events. Results show that methods based on non-triggering events only may be convenient when few landslide data are available. Also, in the case of high uncertainty in the data, the performance of methods based on triggering events may be poor compared with those based on non-triggering events. Finally, methods based on both triggering and non-triggering events are the most robust.
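
As an illustration of the ROC-based branch of these methods, the sketch below grid-searches the intercept of a power-law rainfall threshold to maximize the True Skill Statistic on labelled triggering/non-triggering events; the fixed slope, parameter ranges, and synthetic data are hypothetical simplifications, not the authors' procedure.

```python
import numpy as np

def best_threshold_tss(D, I, triggered, beta=-0.5, alphas=np.logspace(-1, 2, 300)):
    """Grid-search the intercept alpha of a rainfall threshold I = alpha * D**beta,
    maximizing the True Skill Statistic TSS = TPR - FPR.

    D, I: event duration (h) and mean intensity (mm/h); triggered: boolean labels.
    beta is held fixed here for simplicity (a hypothetical choice); full methods
    also optimize the slope. Illustrative only.
    """
    pos, neg = triggered, ~triggered
    best_tss, best_alpha = -np.inf, None
    for a in alphas:
        exceeds = I >= a * D ** beta                       # events above the candidate
        tpr = (exceeds & pos).sum() / max(pos.sum(), 1)    # true positive rate
        fpr = (exceeds & neg).sum() / max(neg.sum(), 1)    # false positive rate
        if tpr - fpr > best_tss:
            best_tss, best_alpha = tpr - fpr, a
    return best_alpha, best_tss

# Hypothetical synthetic events: triggering events tend to plot higher in (D, I),
# with noisy labels mimicking the uncertain separation discussed above.
rng = np.random.default_rng(1)
D = rng.uniform(1.0, 100.0, 500)                   # duration (h)
I = rng.lognormal(0.0, 0.6, 500) * D ** -0.5       # mean intensity (mm/h)
triggered = I * rng.lognormal(0.0, 0.3, 500) > 1.5 * D ** -0.5
print(best_threshold_tss(D, I, triggered))
```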


2020 ◽  
Vol 633 ◽  
pp. A41
Author(s):  
Ren Song ◽  
Xiangcun Meng ◽  
Philipp Podsiadlowski ◽  
Yingzhen Cui

Context. Although Type Ia supernovae (SNe Ia) are important in many astrophysical fields, the nature of their progenitors is still unclear. A new version of the single-degenerate model has been developed recently, the common-envelope wind (CEW) model, in which the binary is enshrouded in a common envelope (CE) during the main accretion phase. This model is still in development and has a number of open issues; for example, what is the exact appearance of such a system during the CE phase? Aims. In this paper we investigate this question for a system with a massive CE. Methods. We use a thermally pulsing asymptotic giant branch (TPAGB) star with a CO core of 0.976 M⊙ and an envelope of 0.6 M⊙ to represent the binary system. The effects of the companion’s gravity and the rotation of the CE are mimicked by modifying the gravitational constant. The energy input from the friction between the binary and the CE is taken into account by an extra heating source. Results. For a thick envelope, the modified TPAGB star looks similar to a canonical TPAGB star but with a smaller radius, a higher effective temperature, and a higher surface luminosity. This is primarily caused by the effect of the companion’s gravity, which is the dominant factor in changing the envelope structure. The mixing length at the position of the companion can be larger than the local radius, implying a breakdown of mixing-length theory and suggesting the need for more turbulence in this region. The modified TPAGB star is more stable than the canonical TPAGB star, and the CE density around the companion is significantly higher than that assumed in the original CEW model. Conclusions. Future work will require the modelling of systems with lower envelope masses and the inclusion of hydrodynamical effects during the CE phase.


Water ◽  
2019 ◽  
Vol 11 (10) ◽  
pp. 2130 ◽  
Author(s):  
Zhu ◽  
Zhang ◽  
Wu ◽  
Qi ◽  
Fu ◽  
...  

This paper assesses the uncertainties in projected future runoff resulting from climate change and downscaling methods in the Biliu River basin (Liaoning Province, Northeast China). The widely used hydrological model SWAT, 11 Global Climate Models (GCMs), two statistical downscaling methods, four dynamical downscaling datasets, and two Representative Concentration Pathways (RCP4.5 and RCP8.5) are combined to construct 22 scenarios for projecting runoff. Hydrological variables in the historical and future periods are compared to investigate their variations, and the uncertainties associated with climate change and downscaling methods are analyzed. The results show that future temperatures will increase under all scenarios, and more under RCP8.5 than under RCP4.5, while future precipitation will increase under 16 scenarios. Future runoff tends to decrease under 13 of the 22 scenarios, with mean runoff changes ranging from −38.38% to 33.98%. Future monthly runoff increases in May, June, September, and October and decreases in all other months. Different downscaling methods have little impact on the lower envelope of runoff; they mainly affect the upper envelope. The impact of climate change can be regarded as the main source of runoff uncertainty during the flood period (May to September), while the impact of downscaling methods is the main source during the non-flood season (October to April). This study separates the uncertainty contributions of the different factors, and the results provide important information for water resource management.
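
A minimal sketch of how the upper and lower envelopes of such a scenario ensemble could be computed and compared by factor; the array shapes, synthetic values, and the grouping into statistical versus dynamical downscaling are hypothetical stand-ins for the study's 22 scenarios.

```python
import numpy as np

# Hypothetical monthly-runoff projections: 22 scenarios x 12 months.
# Rows would index combinations of GCM, downscaling method, and RCP.
rng = np.random.default_rng(0)
runoff = rng.gamma(shape=3.0, scale=20.0, size=(22, 12))

upper_env = runoff.max(axis=0)   # upper envelope across all scenarios
lower_env = runoff.min(axis=0)   # lower envelope across all scenarios
spread = upper_env - lower_env   # total projection uncertainty per month

# Grouping rows by a factor (here, downscaling method) and comparing the
# group-wise envelopes shows which factor widens the band in which season.
groups = {"statistical": runoff[:11], "dynamical": runoff[11:]}
for name, block in groups.items():
    print(name, block.min(axis=0).round(1))  # per-group lower envelopes
```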

