Determination of field capacity in the Chibunga and Guano rivers micro-basins

F1000Research ◽  
2021 ◽  
Vol 10 ◽  
pp. 172
Author(s):  
Benito Mendoza ◽  
Manuel Fiallos ◽  
Sandra Iturralde ◽  
Patricio Santillán ◽  
Nelly Guananga ◽  
...  

Background: The micro-basins of the Chibunga and Guano rivers are located within the sub-basin of the Chambo River, which begins at the thaw of Chimborazo, crosses the cities of Guano and Riobamba, and ends in the Chambo River. These rivers are considered fluvial hydrological forces and geological boundaries of the aquifer located in this sub-basin. For this reason, our investigation addressed the field capacity in the micro-basins of the Chibunga and Guano rivers, to determine the maximum retention potential, i.e., the saturation of water in the soil. Methods: We investigated the conversion of precipitation to runoff through correlations between the characteristics of the soil and its vegetation. We applied the Curve Number (CN) method introduced by the United States Soil Conservation Service (USSCS), an empirical model that relates vegetation cover to the geological and topographic conditions of the soil. Combined with a geographic information system, the model represents the variation of runoff in each micro-basin across the different land-use categories over the 2010 to 2014 time frame. Results: We found that the maximum retention potential is directly affected by the CN values, which represent runoff potential. The highest value, 100, belongs to wetlands, urban areas, snow, and water: these are impervious areas in which rain is converted directly into runoff. The Guano river micro-basin has clay soil with a CN of 78; the soil texture of the eucalyptus forest is clay loam, and its CN value of 46 is the lowest in the data set. Knowledge of field capacity allows proper evaluation of soil storage capacity and water conservation. Conclusions: The results of this work will be useful in quantifying the water balance, in order to determine water supply and demand.
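The two relations at the heart of the CN method are standard: the maximum potential retention S follows from the curve number, and direct runoff Q follows from S and the storm depth P. A minimal Python sketch, assuming the SI form of the method and the conventional initial abstraction of 0.2S; the 50 mm storm depth is illustrative, while CN = 78 (Guano clay) and CN = 100 (impervious cover) come from the abstract:

```python
def max_retention(cn: float) -> float:
    """Maximum potential retention S (mm) from the curve number (SI form)."""
    return 25400.0 / cn - 254.0

def runoff_depth(p: float, cn: float, ia_ratio: float = 0.2) -> float:
    """SCS-CN direct runoff Q (mm) for a storm of depth p (mm).

    ia_ratio is the initial abstraction coefficient; 0.2 is the
    classical USSCS value.
    """
    s = max_retention(cn)
    ia = ia_ratio * s
    if p <= ia:
        return 0.0  # all rainfall is abstracted; no runoff
    return (p - ia) ** 2 / (p - ia + s)

# CN = 100 (wetlands, urban areas, snow, water): S = 0, so Q = P.
print(runoff_depth(50.0, 100))  # 50.0 mm, rain converted directly to runoff
# CN = 78 (Guano micro-basin clay soil), illustrative 50 mm storm:
print(max_retention(78))        # ~71.6 mm maximum retention
print(runoff_depth(50.0, 78))   # ~11.9 mm of direct runoff
```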

Agronomy ◽  
2020 ◽  
Vol 10 (2) ◽  
pp. 177 ◽  
Author(s):  
Mirko Castellini ◽  
Anna Maria Stellacci ◽  
Marcello Mastrangelo ◽  
Francesco Caputo ◽  
Luisa Maria Manici

Saving water resources in agriculture is a topic of current research in Mediterranean environments, and rational soil management can serve this purpose. The Beerkan Estimation of Soil Transfer parameters (BEST) procedure was applied in five olive orchards of the Salento peninsula (southern Italy) to estimate the soil physical and hydraulic properties under alternative soil management (i.e., no-tillage (NT) and minimum tillage (MT)) and to quantify the impact of soil management on soil water conservation. The results highlighted the soundness of the BEST predictions, which were consistent in terms of soil functions and capacity-based soil indicators when (i) the entire data set was grouped by homogeneous classes of soil texture, bulk density, and capillarity, (ii) the predictions were compared with corresponding water retention measurements independently obtained in the lab, and (iii) selected correlations from the literature were checked. BEST was applied to establish a comparison between the Neviano (NE) and Sternatia (ST) sites. The two neighboring NT soils compared at NE showed substantial discrepancies in soil texture (i.e., sandy loam (NE-SL) versus clay (NE-C)). This marked difference in texture could explain the poorer relative field capacity at the NE-SL site (relative field capacity, RFC < 0.6) compared with NE-C, where RFC was optimal. Current soil management produced a similar effect (RFC < 0.6) at Sternatia (ST-MT vs. ST-NT), but the worsening of soil properties due to tillage must be considered largely transient, as progressive improvement is expected with the restoration of soil structure. The results of this work suggest that strategic MT can be a viable solution for managing the soil of Salento olive orchards.
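RFC is a capacity-based indicator commonly defined (following Reynolds and co-workers) as the ratio of the field-capacity water content to the saturated water content, with roughly 0.6 to 0.7 regarded as the optimal balance between soil aeration and water storage. A minimal sketch under that definition; the water contents below are illustrative placeholders, not measurements from the study:

```python
def relative_field_capacity(theta_fc: float, theta_s: float) -> float:
    """RFC: volumetric water content at field capacity divided by the
    saturated volumetric water content (both in m^3 m^-3)."""
    return theta_fc / theta_s

# Illustrative values only (not the study's measurements):
sites = {"NE-SL (sandy loam)": (0.24, 0.43), "NE-C (clay)": (0.33, 0.50)}
for name, (theta_fc, theta_s) in sites.items():
    rfc = relative_field_capacity(theta_fc, theta_s)
    status = "water-limited (RFC < 0.6)" if rfc < 0.6 else "near-optimal"
    print(f"{name}: RFC = {rfc:.2f} -> {status}")
```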


Water ◽  
2021 ◽  
Vol 13 (5) ◽  
pp. 704
Author(s):  
Hussein Al-Ghobari ◽  
Ahmed Z. Dewidar

Increasing water scarcity, together with rapid global climate change, requires more effective water conservation alternatives. One promising alternative is rainwater harvesting (RWH). Nevertheless, evaluating RWH potential and selecting appropriate sites for RWH structures is significantly difficult for water managers. This study addresses this difficulty by identifying RWH potential areas and sites for RWH structures using geospatial and multi-criteria decision analysis (MCDA) techniques. Conventional data and remote sensing data were employed to set up the needed thematic layers using ArcGIS software. The soil conservation service curve number (SCS-CN) method was used to determine surface runoff, from which a yearly runoff potential map was produced in the ArcGIS environment. Thematic layers such as drainage density, slope, land use/cover, and runoff were assigned appropriate weights to produce maps of RWH potential areas and of zones appropriate for RWH structures in the study location. The analysis revealed that the yearly surface runoff depth varies spatially from 83 to 295 mm. Moreover, the results showed that the study area can be categorized into three RWH potential classes: (a) low suitability, (b) medium suitability, and (c) high suitability. Nearly 40% of the watershed zone falls within the medium and high suitability RWH potential areas. It is deduced that the integrated MCDA and geospatial techniques provide a valuable and powerful resource for planning RWH within the study zones.
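The weighted overlay at the core of this kind of MCDA reduces to a per-cell weighted sum of reclassified rasters followed by thresholding into suitability classes. A minimal numpy sketch of that step; the 2 x 2 grids, the weights, and the class breaks are all illustrative assumptions, not values from the study:

```python
import numpy as np

# Hypothetical reclassified thematic layers (suitability scores 1-5);
# in the study these came from ArcGIS rasters of runoff, slope,
# drainage density, and land use/cover.
runoff   = np.array([[5, 4], [2, 1]])
slope    = np.array([[4, 4], [3, 2]])
drainage = np.array([[3, 5], [2, 2]])
landuse  = np.array([[4, 3], [3, 1]])

# Illustrative MCDA weights (must sum to 1; the study derives its own):
w = {"runoff": 0.35, "slope": 0.25, "drainage": 0.20, "landuse": 0.20}

suitability = (w["runoff"] * runoff + w["slope"] * slope
               + w["drainage"] * drainage + w["landuse"] * landuse)

# Threshold into the three classes used in the abstract:
# 0 = low, 1 = medium, 2 = high (class breaks are assumptions).
classes = np.digitize(suitability, bins=[2.5, 3.5])
print(suitability)
print(classes)
```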


Animals ◽  
2020 ◽  
Vol 11 (1) ◽  
pp. 50
Author(s):  
Jennifer Salau ◽  
Jan Henning Haas ◽  
Wolfgang Junge ◽  
Georg Thaller

Machine learning methods have become increasingly important in animal science, and the success of an automated application using machine learning often depends on the right choice of method for the respective problem and data set. The recognition of objects in 3D data is still a widely studied topic and is especially challenging when it comes to partitioning objects into predefined segments. In this study, two machine learning approaches were utilized for the recognition of body parts of dairy cows from 3D point clouds, i.e., sets of data points in space. The low-cost, off-the-shelf depth sensor Microsoft Kinect V1 has been used in various studies related to dairy cows. The 3D data were gathered from a multi-Kinect recording unit designed to record freely walking Holstein Friesian cows from both sides and from three different camera positions. To determine the body parts (head, rump, back, legs, and udder), five properties of the pixels in the depth maps (row index, column index, depth value, variance, and mean curvature) were used as features in the training data set. For each camera position, a k-nearest-neighbour classifier and a neural network were trained and then compared. Both methods showed small Hamming losses (between 0.007 and 0.027 for k-nearest-neighbour (kNN) classification and between 0.045 and 0.079 for neural networks) and could be considered successful regarding the classification of pixels into body parts. However, the kNN classifier was superior, reaching overall accuracies of 0.888 to 0.976, varying with camera position. Precision and recall values for individual body parts ranged from 0.84 to 1 and from 0.83 to 1, respectively. Once trained, however, kNN classification incurs higher runtime costs in computational time and memory than the neural networks. The cost-versus-accuracy ratio of each methodology needs to be taken into account when deciding which method should be implemented in the application.
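The comparison maps naturally onto standard tooling. A minimal scikit-learn sketch with the five per-pixel features named in the abstract; the synthetic data and model hyperparameters are placeholders, so the scores will not match the study's:

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.neural_network import MLPClassifier
from sklearn.metrics import hamming_loss, accuracy_score

# Synthetic stand-in: one row per depth-map pixel with the five features
# from the abstract (row index, column index, depth value, variance,
# mean curvature); labels 0-4 for head, rump, back, legs, udder.
rng = np.random.default_rng(0)
X = rng.normal(size=(5000, 5))
y = rng.integers(0, 5, size=5000)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

knn = KNeighborsClassifier(n_neighbors=5).fit(X_tr, y_tr)
mlp = MLPClassifier(hidden_layer_sizes=(32,), max_iter=500,
                    random_state=0).fit(X_tr, y_tr)

for name, model in [("kNN", knn), ("neural network", mlp)]:
    pred = model.predict(X_te)
    print(name, "Hamming loss:", hamming_loss(y_te, pred),
          "accuracy:", accuracy_score(y_te, pred))
```

For single-label multiclass predictions like this sketch, the Hamming loss is simply one minus the accuracy; the measure generalizes to multi-label outputs, where the two diverge.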


2006 ◽  
Vol 06 (04) ◽  
pp. 373-384
Author(s):  
ERIC BERTHONNAUD ◽  
JOANNÈS DIMNET

Joint centers are obtained by processing data from a set of markers placed on the skin of moving limb segments. Finite helical axis (FHA) parameters are calculated between time-step increments. Artifacts associated with non-rigid movements of the markers leave the FHA parameters ill-determined. When human articulations are likened to spherical joints, mean centers of rotation may be calculated over the whole movement; they are obtained with a numerical technique that defines the point with minimal amplitude of displacement during joint movement. A new technique is presented here, and the mean centers of rotation of the hip, knee, and ankle are calculated. Their locations depend on the application of two constraints. First, the joint center must be located next to the estimated geometric joint center. Second, the geometric joint center may migrate within a cube of possible locations. This cube of error is located with respect to the marker coordinate systems of the two limb segments adjacent to the joint; its position depends on the joint and on patient height, and was obtained from a stereoradiographic study on specimens. The mean position of the joint center and the corresponding dispersion are obtained through a minimization procedure. The location of the mean joint center is compared with the FHA positions calculated between different sequential steps: time-sequential steps, and rotation-sequential steps in which a minimal rotation amplitude is imposed between two joint positions. Sticks are drawn connecting adjacent mean centers, and animating the stick diagrams allows clinical users to estimate the displacements of the long bones (femur and tibia) from the whole data set.
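The "point with minimal amplitude" criterion has a convenient linear least-squares form: if the moving segment's pose at frame t is a rotation R_t and translation d_t, a candidate center c traces the world trajectory R_t c + d_t, and minimizing that trajectory's spread about its mean is linear in c. A minimal numpy sketch of this generic criterion (our reconstruction, not the paper's constrained procedure, which additionally restricts c to the stereoradiographically derived cube of error):

```python
import numpy as np

def mean_center_of_rotation(Rs: np.ndarray, ds: np.ndarray) -> np.ndarray:
    """Point c, in the moving segment's frame, whose world trajectory
    R_t @ c + d_t has minimal amplitude over the movement.

    Rs: (T, 3, 3) rotations, ds: (T, 3) translations of the moving
    segment relative to the fixed one. Solved by least squares on the
    deviations from the mean pose."""
    A = (Rs - Rs.mean(axis=0)).reshape(-1, 3)   # stacked (R_t - R_mean)
    b = -(ds - ds.mean(axis=0)).reshape(-1)     # stacked -(d_t - d_mean)
    c, *_ = np.linalg.lstsq(A, b, rcond=None)
    return c

# Quick check: poses rotating about a known fixed center recover it.
# (For a planar rotation only the in-plane components are determined;
# the minimum-norm solution sets the along-axis component to zero.)
angles = np.linspace(0.0, 1.0, 50)
true_c = np.array([0.10, 0.40, 0.0])
Rs = np.array([[[np.cos(a), -np.sin(a), 0.0],
                [np.sin(a),  np.cos(a), 0.0],
                [0.0,        0.0,       1.0]] for a in angles])
ds = true_c - Rs @ true_c                       # pure rotation about true_c
print(mean_center_of_rotation(Rs, ds))          # ~ [0.10, 0.40, 0.0]
```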


1989 ◽  
Vol 79 (2) ◽  
pp. 493-499
Author(s):  
Stuart A. Sipkin

Abstract The teleseismic long-period waveforms recorded by the Global Digital Seismograph Network from the two largest Superstition Hills earthquakes are inverted using an algorithm based on optimal filter theory. These solutions differ slightly from those published in the Preliminary Determination of Epicenters Monthly Listing because a somewhat different, improved data set was used in the inversions and a time-dependent moment-tensor algorithm was used to investigate the complexity of the main shock. The foreshock (origin time 01:54:14.5, mb 5.7, Ms 6.2) had a scalar moment of 2.3 × 1025 dyne-cm, a depth of 8 km, and a mechanism of strike 217°, dip 79°, rake 4°. The main shock (origin time 13:15:56.4, mb 6.0, Ms 6.6) was a complex event, consisting of at least two subevents, with a combined scalar moment of 1.0 × 1026 dyne-cm, a depth of 10 km, and a mechanism of strike 303°, dip 89°, rake −180°.
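As a consistency check (our addition, not part of the study), the standard Hanks-Kanamori relation converts these scalar moments into moment magnitudes that sit close to the reported surface-wave magnitudes:

```python
import math

def moment_magnitude(m0_dyne_cm: float) -> float:
    """Hanks-Kanamori (1979) moment magnitude from a scalar seismic
    moment given in dyne-cm: Mw = (2/3) * log10(M0) - 10.7."""
    return (2.0 / 3.0) * math.log10(m0_dyne_cm) - 10.7

print(moment_magnitude(2.3e25))  # foreshock:  Mw ~ 6.2 (reported Ms 6.2)
print(moment_magnitude(1.0e26))  # main shock: Mw ~ 6.6 (reported Ms 6.6)
```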


1996 ◽  
Vol 86 (2) ◽  
pp. 470-476 ◽  
Author(s):  
Cheng-Horng Lin ◽  
S. W. Roecker

Abstract Seismograms of earthquakes and explosions recorded at local, regional, and teleseismic distances by a small-aperture, dense seismic array located on Pinyon Flat, in southern California, reveal large (±15°) backazimuth anomalies. We investigate the causes and implications of these anomalies by first comparing the effectiveness of three techniques for estimating backazimuth with an array: the broadband frequency-wavenumber (BBFK) technique, the polarization technique, and the beamforming technique. While each technique provided nearly the same direction as the most likely estimate, the beamforming estimate was associated with the smallest uncertainties. Backazimuth anomalies were then calculated for the entire data set by comparing the beamforming results with backazimuths derived from earthquake locations reported by the Anza and Caltech seismic networks and the Preliminary Determination of Epicenters (PDE) Bulletin. These backazimuth anomalies have a simple sine-like dependence on azimuth, with the largest anomalies observed from the southeast and northwest directions. Such a trend may be explained as the effect of one or more interfaces dipping to the northeast beneath the array. A best-fit model with a single interface has a dip of 20°, a strike of 315°, and a velocity contrast of 0.82 km/sec. Applying corrections computed from this simple model to ray directions significantly improves locations at all distances and directions, suggesting that this is an upper-crustal feature. We confirm that knowledge of local structure can be very important for earthquake location by an array, but we also show that corrections computed from simple models may be not only adequate but superior to those determined by ray tracing through smoothed, laterally varying models.
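Of the three techniques, plain delay-and-sum beamforming is the simplest to sketch: grid-search candidate backazimuths, time-shift each trace by the plane-wave delay its station would see, and keep the direction that maximizes stacked beam power. A minimal numpy sketch under stated assumptions (known fixed horizontal slowness, integer-sample shifts with wrap-around; a real BBFK implementation works in the frequency-wavenumber domain instead):

```python
import numpy as np

def beamform_backazimuth(traces, coords, dt, slowness, n_az=360):
    """Estimate backazimuth (degrees) by delay-and-sum beamforming.

    traces: (N, T) waveforms; coords: (N, 2) station east/north offsets
    in km; dt: sample interval in s; slowness: horizontal slowness in
    s/km, assumed known (e.g., from the phase type)."""
    best_az, best_power = 0.0, -np.inf
    for az in np.linspace(0.0, 360.0, n_az, endpoint=False):
        rad = np.deg2rad(az)
        u = np.array([np.sin(rad), np.cos(rad)])   # unit vector toward source
        # Plane wave from backazimuth az: stations offset toward the
        # source record the wavefront earlier (negative delay).
        delays = -(coords @ u) * slowness
        shifts = np.round(delays / dt).astype(int)
        # Undo each station's delay, then stack.
        beam = np.mean([np.roll(tr, -s) for tr, s in zip(traces, shifts)],
                       axis=0)
        power = float(np.sum(beam ** 2))
        if power > best_power:
            best_az, best_power = az, power
    return best_az
```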


2017 ◽  
Vol 48 (4) ◽  
pp. 537-553 ◽  
Author(s):  
A. Lowell ◽  
B. Suarez-Jimenez ◽  
L. Helpman ◽  
X. Zhu ◽  
A. Durosky ◽  
...  

Background: The 11 September 2001 (9/11) attacks were unprecedented in magnitude and mental health impact. While a large body of research has emerged since the attacks, published reviews are few and are limited by an emphasis on cross-sectional research, short time frames, and the exclusion of treatment studies. Additionally, to date there has been no systematic review of the available longitudinal information as a unique data set. Consequently, knowledge regarding the long-term trajectories of 9/11-related post-traumatic stress disorder (PTSD) among highly exposed populations, and whether available treatment approaches effectively address PTSD in the context of mass, man-made disaster, remains limited. Methods: The present review aimed to address these gaps through a systematic review of peer-reviewed reports from October 2001 to May 2016. Eligible reports were longitudinal studies of PTSD among highly exposed populations. We identified 20 reports of 9/11-related PTSD, including 13 longitudinal prevalence studies and seven treatment studies. Results: The findings suggest a substantial burden of 9/11-related PTSD among those highly exposed to the attack, associated with a range of sociodemographic and background factors and with characteristics of peri-event exposure. While most longitudinal studies show declining PTSD prevalence rates, studies of rescue and recovery workers have documented an increase over time. Treatment studies were few and generally limited by methodological shortcomings, but they support exposure-based therapies. Conclusion: Future directions for research, treatment, and healthcare policy are discussed.


Author(s):  
Todd D. Jack ◽  
Carl N. Ford ◽  
Shari-Beth Nadell ◽  
Vicki Crisp

A causal analysis of aviation accidents by engine type is presented. The analysis employs a top-down methodology that performs a detailed analysis of the causes and factors cited in accident reports to develop a "fingerprint" profile for each engine type. This is followed by an in-depth analysis of each fingerprint that produces a sequential breakdown. Analysis results are presented for National Transportation Safety Board (NTSB) accidents, both fatal and non-fatal, that occurred during the period 1990-1998. Each data set comprises all accidents involving aircraft with one of the following engine types: turbofan, turbojet, turboprop, and turboshaft (including turbine helicopters). During this time frame there were 1461 accidents involving turbine-powered aircraft; 306 of these involved propulsion malfunctions and/or failures. Analyses are performed to investigate the sequential relationships between propulsion system malfunctions or failures and other causes and factors for each engine type. Other malfunctions or events prominent within each data set are also analyzed, and significant trends are identified. The results of this study can be used to identify areas for future research into intervention, prevention, and mitigation strategies.
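In data terms, a "fingerprint" of this kind is a relative-frequency profile of the causes and factors cited within each engine type. A small sketch of that idea; the records and cause labels are hypothetical, not NTSB data:

```python
from collections import Counter, defaultdict

# Hypothetical accident records: (engine type, causes/factors cited).
records = [
    ("turbofan",   ["propulsion failure", "maintenance"]),
    ("turbofan",   ["weather"]),
    ("turboprop",  ["propulsion failure", "pilot error"]),
    ("turboshaft", ["pilot error"]),
]

# Fingerprint: relative frequency of each cited cause per engine type.
counts = defaultdict(Counter)
for engine, causes in records:
    counts[engine].update(causes)

for engine, counter in counts.items():
    total = sum(counter.values())
    profile = {cause: round(n / total, 2) for cause, n in counter.items()}
    print(engine, profile)
```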


2017 ◽  
Vol 43 (2) ◽  
pp. 95-100 ◽  
Author(s):  
Rubia Rodrigues ◽  
Clarice Rosa Olivo ◽  
Juliana Dias Lourenço ◽  
Alyne Riane ◽  
Daniela Aparecida de Brito Cervilha ◽  
...  

Objective: To describe a murine model of emphysema induced by a combination of exposure to cigarette smoke (CS) and instillation of porcine pancreatic elastase (PPE). Methods: A total of 38 C57BL/6 mice were randomly divided into four groups: control (one intranasal instillation of 0.9% saline solution); PPE (two intranasal instillations of PPE); CS (CS exposure for 60 days); and CS + PPE (two intranasal instillations of PPE plus CS exposure for 60 days). At the end of the experimental protocol, all animals were anesthetized and tracheostomized for the calculation of respiratory mechanics parameters. Subsequently, all animals were euthanized and their lungs were removed for measurement of the mean linear intercept (Lm) and determination of the numbers of cells immunoreactive to macrophage-2 antigen (MAC-2), matrix metalloproteinase-12 (MMP-12), and glycosylated 91-kDa glycoprotein (gp91phox) in the distal lung parenchyma and the peribronchial region. Results: Although there were no differences among the four groups regarding the respiratory mechanics parameters assessed, there was an increase in the Lm in the CS + PPE group. The numbers of MAC-2-positive cells in the peribronchial region and distal lung parenchyma were higher in the CS + PPE group than in the other groups, as were the numbers of cells positive for MMP-12 and gp91phox, although only in the distal lung parenchyma. Conclusions: Our model of emphysema induced by a combination of PPE instillation and CS exposure results in a significant degree of parenchymal destruction in a shorter time frame than that employed in other models of CS-induced emphysema, reinforcing the importance of protease-antiprotease imbalance and oxidant-antioxidant imbalance in the pathogenesis of emphysema.
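The Lm used as the airspace-enlargement readout is a simple stereological ratio: the total length of the test lines overlaid on the parenchyma divided by the number of times those lines intercept an alveolar wall. A one-function sketch with purely illustrative numbers:

```python
def mean_linear_intercept(total_line_length_um: float, n_intercepts: int) -> float:
    """Lm (um) = total length of test lines / number of alveolar-wall
    intercepts. Larger Lm indicates airspace enlargement (emphysema)."""
    return total_line_length_um / n_intercepts

# Illustrative only: 20 fields, 10 test lines of 50 um each per field.
print(mean_linear_intercept(20 * 10 * 50.0, 220))  # ~45.5 um
```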

