A continuum model for concrete informed by mesoscale studies

2017
Vol 27 (10)
pp. 1451-1481
Author(s):
Oleg Vorobiev
Eric Herbold
Souheil Ezzedine
Tarabay Antoun

The paper describes a novel two-stage computational approach to refining continuum models for penetration calculations. In the first stage, a trial continuum model is used to simulate penetration into a concrete target, with model parameters chosen to match experimental data on penetration depth. Deformation histories are recorded at a few locations in the target around the penetrator. In the second stage, these histories are applied to the boundaries of a representative volume comparable to the element size in the large-scale penetration simulation. A discrete-continuum approach is used to model the deformation and failure of the material within the representative volume. The same deformation histories are applied to a single element that uses the model to be improved. A continuum model may include multiple parameters or functions that cannot easily be determined from experimental data; we propose using the mesoscale response to constrain such parameters and functions. Tuning the continuum model against the typical deformation histories experienced by the target material during penetration minimizes the parameter space and yields better models for penetration problems, grounded in the physics of penetration rather than intuition and ad hoc assumptions.
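As a minimal illustration of the first stage, the sketch below calibrates the single strength parameter of a hypothetical one-parameter penetration model by bisection, so that the computed depth matches a measured one. The depth formula, parameter names, and all numbers are illustrative assumptions, not the paper's model.

```python
import math

# Hypothetical one-parameter continuum model: penetration depth (m) as a
# monotonically decreasing function of target strength Y (Pa). The form is
# purely illustrative: projectile kinetic-energy density over strength,
# plus a fixed nose-embedment length L.
def penetration_depth(Y, rho_p=7800.0, v0=800.0, L=0.5):
    return L + rho_p * v0**2 / (2.0 * Y)

def calibrate_strength(target_depth, lo=1e7, hi=1e10, tol=1e-6):
    """Stage 1: bisect on Y (in log space) until the trial model
    reproduces the measured penetration depth."""
    for _ in range(200):
        mid = math.sqrt(lo * hi)
        if penetration_depth(mid) > target_depth:
            lo = mid   # predicted depth too large -> strength too low
        else:
            hi = mid
        if hi / lo < 1.0 + tol:
            break
    return math.sqrt(lo * hi)

Y_fit = calibrate_strength(target_depth=2.0)
```

In the paper's workflow this calibrated trial model is only the starting point; the mesoscale stage then constrains the parameters that depth data alone cannot pin down.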

Author(s):  
Shijie Qian
Kuiying Chen
Rong Liu
Ming Liang

An advanced erosion model is proposed that correlates its two model parameters, the energies required to remove a unit mass of target material during cutting wear and deformation wear respectively, with particle velocity, particle size and density, as well as target material properties. The model can predict the erosion rates of a material under solid-particle impact over a specific range of particle velocities at impingement angles between [Formula: see text] and [Formula: see text], provided that experimental erosion-rate data for the material at a particle velocity within this range and at impingement angles between [Formula: see text] and [Formula: see text] are available. The model is applied to three distinct types of material: aluminum, perspex and graphite, to investigate how the model parameters depend on particle velocity for ductile and brittle materials. The predicted model parameters are validated against experimental data for an aluminum plate under Al2O3 particle impact. The significance and limitations of the model are discussed, and possible improvements to the model are suggested.
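The cutting-plus-deformation structure of such a model can be sketched in a heavily simplified Bitter-type form. Here phi and eps stand in for the two energy parameters; the exact functional form is an assumption for illustration, not the proposed model.

```python
import math

# Simplified Bitter-type two-mechanism erosion sketch: total mass loss per
# unit mass of impacting particles is the sum of cutting wear (dominant at
# shallow impingement angles) and deformation wear (dominant near normal
# impact). phi and eps are the two model parameters: the energies needed
# to remove a unit mass of target material by cutting and by deformation.
def erosion_rate(V, alpha_deg, phi, eps, K=0.0):
    a = math.radians(alpha_deg)
    cutting = V**2 * math.sin(a) * math.cos(a)**2 / (2.0 * phi)
    deformation = max(V * math.sin(a) - K, 0.0)**2 / (2.0 * eps)
    return cutting + deformation
```

At normal impact (90 degrees) the cutting term vanishes and only deformation wear remains, which is the qualitative behaviour the two-parameter split is meant to capture.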


1992
Vol 23 (2)
pp. 89-104
Author(s):
Ole H. Jacobsen
Feike J. Leij
Martinus Th. van Genuchten

Breakthrough curves of Cl and 3H2O were obtained during steady unsaturated flow in five lysimeters containing an undisturbed coarse sand (Orthic Haplohumod). The experimental data were analyzed in terms of the classical two-parameter convection-dispersion equation and a four-parameter, two-region physical nonequilibrium solute transport model. Model parameters were obtained by both curve fitting and time moment analysis. The four-parameter model provided a much better fit to the data for three soil columns but performed only slightly better for the two remaining columns. The retardation factor for Cl was about 10% less than that for 3H2O, indicating some anion exclusion. For the four-parameter model, the average immobile water fraction was 0.14 and the Peclet numbers of the mobile region varied between 50 and 200. Time moment analysis proved to be a useful tool for quantifying the breakthrough curve (BTC), although the moments were found to be sensitive to experimental scatter in the measured data at later times. Fitted parameters also described the experimental data better than moment-generated parameter values.
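Time moment analysis of a BTC can be carried out numerically. The sketch below computes the zeroth moment (mass recovery), the mean arrival time, and the second central moment by trapezoidal integration on a synthetic symmetric pulse; the data and grid are illustrative, not the lysimeter measurements.

```python
def btc_moments(t, c):
    """Trapezoidal time moments of a breakthrough curve c(t):
    zeroth moment (mass), normalized first moment (mean arrival time),
    and second central moment (spread)."""
    def trapz(y):
        return sum(0.5 * (y[i] + y[i + 1]) * (t[i + 1] - t[i])
                   for i in range(len(t) - 1))
    m0 = trapz(c)
    mu1 = trapz([ti * ci for ti, ci in zip(t, c)]) / m0
    mu2c = trapz([(ti - mu1) ** 2 * ci for ti, ci in zip(t, c)]) / m0
    return m0, mu1, mu2c

# Synthetic triangular pulse centred at t = 1 (stand-in for measured data)
t = [0.1 * i for i in range(21)]
c = [max(0.0, 1.0 - abs(ti - 1.0)) for ti in t]
m0, mu1, mu2c = btc_moments(t, c)
```

Because the higher moments weight late-time data heavily, scatter in the BTC tail inflates the second moment far more than the zeroth, which is the sensitivity the abstract reports.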


Forests
2021
Vol 12 (1)
pp. 59
Author(s):
Olivier Fradette
Charles Marty
Pascal Tremblay
Daniel Lord
Jean-François Boucher

Allometric equations use easily measurable biometric variables to determine the aboveground and belowground biomasses of trees. Equations produced for estimating biomass within Canadian forests at a large scale have not yet been validated for eastern Canadian boreal open woodlands (OWs), where trees experience particular environmental conditions. In this study, we harvested 167 trees from seven boreal OWs in Quebec, Canada, for biomass and allometric measurements. These data show that the Canadian national equations accurately predict whole aboveground biomass for both black spruce and jack pine trees but underestimate branch biomass, possibly owing to the particular tree morphology in OWs relative to closed-canopy stands. We therefore developed ad hoc allometric equations based on three power models including diameter at breast height (DBH) alone or in combination with tree height (H) as allometric variables. Our results show that although the inclusion of H in the model yields better fits for most tree compartments in both species, the difference is minor and does not markedly affect biomass C stocks at the stand level. Using these newly developed equations, we found that carbon stocks in afforested OWs varied markedly among sites owing to differences in tree growth and species. Nine years after afforestation, jack pine plantations had accumulated about five times more carbon than black spruce plantations (0.80 vs. 0.14 t C·ha−1), highlighting the much larger potential of jack pine for OW afforestation projects in this environment.
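The simplest of the three power models, B = a·DBH^b, can be fitted by ordinary least squares in log-log space. The sketch below uses synthetic (DBH, biomass) pairs generated from a known power law, so the fit recovers the coefficients exactly; all numbers are illustrative, not the paper's data.

```python
import math

# Fit the power model B = a * DBH^b by OLS on log-transformed data:
# log(B) = log(a) + b * log(DBH) is a straight line.
def fit_power(dbh, biomass):
    x = [math.log(d) for d in dbh]
    y = [math.log(b) for b in biomass]
    n = len(x)
    xbar, ybar = sum(x) / n, sum(y) / n
    slope = (sum((xi - xbar) * (yi - ybar) for xi, yi in zip(x, y))
             / sum((xi - xbar) ** 2 for xi in x))
    a = math.exp(ybar - slope * xbar)
    return a, slope

dbh = [4.0, 6.0, 8.0, 12.0]              # cm, synthetic
biomass = [2.0 * d ** 2.4 for d in dbh]  # kg, exact power law for the check
a, b = fit_power(dbh, biomass)
```

The two-variable model with height, B = a·DBH^b·H^c, extends this to a two-regressor linear fit in log space in the same way.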


Electronics
2021
Vol 10 (2)
pp. 219
Author(s):
Phuoc Duc Nguyen
Lok-won Kim

People nowadays are entering an era of rapid evolution driven by the generation of massive amounts of data. Much of this information is produced by billions of sensing devices equipped with in situ signal processing and communication capabilities, which form wireless sensor networks (WSNs). With more than 50 billion small devices connected to the Internet, Internet of Things (IoT) devices focus on sensing accuracy, communication efficiency, and low power consumption, because IoT deployments mainly target correct information acquisition, remote node access, and longer-term operation with fewer battery changes. Thus, there has recently been rich original research activity in these domains. The sensors used by processing devices can be heterogeneous or homogeneous. Since the devices are primarily expected to operate independently in an autonomous manner, the abilities of connection, communication, and ambient energy scavenging play significant roles, especially in large-scale deployments. This paper classifies wireless sensor nodes into two major categories based on the type of sensor array (heterogeneous/homogeneous). It also emphasizes the utilization of ad hoc networking and energy harvesting mechanisms as fundamental cornerstones for building self-governing, sustainable, and perpetually operated sensor systems. We review systems representative of each category and depict trends in system development.


Author(s):  
Cody Minks
Anke Richter

Abstract
Objective: Responding to large-scale public health emergencies relies heavily on planning and collaboration between law enforcement and public health officials. This study examines the current level of information sharing and integration between these domains by measuring the inclusion of public health in the law enforcement functions of fusion centers.
Methods: Survey of all fusion centers, with a 29.9% response rate.
Results: Only one of the 23 responding fusion centers had true public health inclusion, a decrease from research conducted in 2007. Information sharing is primarily limited to information flowing out of the fusion center, with little public health information coming in. Most of the collaboration is done on a personal, informal, ad-hoc basis. There remains a large misunderstanding of roles, capabilities, and regulations by all parties (fusion centers and public health). The majority of the parties appear to be willing to work together, but there is no forward momentum to make these desires a reality. Funding and staffing issues seem to be the limiting factor for integration.
Conclusion: These problems need to be urgently addressed to increase public health preparedness and enable a decisive and beneficial response to public health emergencies involving a homeland security response.


Author(s):  
Afshin Anssari-Benam
Andrea Bucchi
Giuseppe Saccomandi

Abstract
The application of a newly proposed generalised neo-Hookean strain energy function to the inflation of incompressible rubber-like spherical and cylindrical shells is demonstrated in this paper. The pressure ($P$) – inflation ($\lambda$ or $v$) relationships are derived and presented for four shells: thin- and thick-walled spherical balloons, and thin- and thick-walled cylindrical tubes. Characteristics of the inflation curves predicted by the model for the four considered shells are analysed, and the critical values of the model parameters for exhibiting the limit-point instability are established. The application of the model to extant experimental datasets procured from studies spanning the 19th to the 21st century is demonstrated, showing favourable agreement between the model and the experimental data. The capability of the model to capture the two characteristic instability phenomena in the inflation of rubber-like materials, namely the limit-point and inflation-jump instabilities, is made evident from both the theoretical analysis and the curve-fitting approaches presented in this study. A comparison with the predictions of the Gent model for the considered data is also presented, and it is shown that our model provides improved fits. Given the simplicity of the model, its ability to fit a wide range of experimental data, and its capture of both limit-point and inflation-jump instabilities, we propose the application of our model to the inflation of rubber-like materials.
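For reference, the classical (non-generalised) neo-Hookean material already exhibits the limit-point instability for a thin-walled spherical balloon: $P(\lambda) = 2\mu(H/R)(\lambda^{-1} - \lambda^{-7})$, with $dP/d\lambda = 0$ at $\lambda = 7^{1/6} \approx 1.38$. The sketch below locates that limit point numerically; it illustrates the phenomenon, not the paper's generalised model.

```python
# Thin-walled spherical balloon, classical incompressible neo-Hookean:
# inflation pressure as a function of circumferential stretch lambda.
# mu is the shear modulus; H_over_R is the thickness-to-radius ratio.
def pressure(lam, mu=1.0, H_over_R=0.01):
    return 2.0 * mu * H_over_R * (lam ** -1 - lam ** -7)

# Dense scan for the limit point (pressure maximum) on lambda in (1, 3)
lams = [1.0 + 0.0001 * i for i in range(1, 20001)]
lam_crit = max(lams, key=pressure)
```

Past this critical stretch the pressure falls with further inflation, which is why the pressure-controlled balloon snaps through; the generalised model modifies the curve and can also reproduce the inflation-jump instability.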


Author(s):  
Clemens M. Lechner
Nivedita Bhaktha
Katharina Groskurth
Matthias Bluemke

Abstract
Measures of cognitive or socio-emotional skills from large-scale assessment surveys (LSAS) are often based on advanced statistical models and scoring techniques unfamiliar to applied researchers. Consequently, applied researchers working with data from LSAS may be uncertain about the assumptions and computational details of these statistical models and scoring techniques, and about how best to incorporate the resulting skill measures into secondary analyses. The present paper is intended as a primer for applied researchers. After a brief introduction to the key properties of skill assessments, we give an overview of the three principal methods with which secondary analysts can incorporate skill measures from LSAS in their analyses: (1) as test scores (i.e., point estimates of individual ability), (2) through structural equation modeling (SEM), and (3) in the form of plausible values (PVs). We discuss the advantages and disadvantages of each method based on three criteria: fallibility (i.e., control for measurement error and unbiasedness), usability (i.e., ease of use in secondary analyses), and immutability (i.e., consistency of test scores, PVs, or measurement model parameters across different analyses and analysts). We show that although none of the methods is optimal under all criteria, methods that result in a single point estimate of each respondent’s ability (i.e., all types of “test scores”) are rarely optimal for research purposes. Instead, approaches that avoid or correct for measurement error—especially PV methodology—stand out as the method of choice. We conclude with practical recommendations for secondary analysts and data-producing organizations.
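The PV workflow amounts to running the secondary analysis once per plausible value and pooling the results with Rubin's rules. A minimal sketch, with made-up per-PV estimates and sampling variances standing in for real analysis output:

```python
import math

# Pool per-PV analysis results with Rubin's rules: the pooled estimate is
# the mean of the per-PV estimates, and the total variance adds the
# between-PV variance (inflated by 1 + 1/m) to the mean within-variance.
def pool_rubin(estimates, variances):
    m = len(estimates)
    qbar = sum(estimates) / m                            # pooled estimate
    ubar = sum(variances) / m                            # within variance
    b = sum((q - qbar) ** 2 for q in estimates) / (m - 1)  # between variance
    total_var = ubar + (1.0 + 1.0 / m) * b
    return qbar, math.sqrt(total_var)

# Illustrative: one regression coefficient estimated on each of 5 PVs
est, se = pool_rubin([0.42, 0.45, 0.40, 0.44, 0.43], [0.004] * 5)
```

The pooled standard error exceeds the naive within-PV one; that inflation is exactly the accounting for measurement uncertainty that single test scores forgo.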


Viruses
2021
Vol 13 (5)
pp. 902
Author(s):
Daniel Cruceriu
Oana Baldasici
Loredana Balacescu
Stefana Gligor-Popa
Mirela Flonta
...  

The primary approach to controlling the spread of pandemic SARS-CoV-2 is to quickly diagnose and isolate infected people. Our paper aimed to investigate the efficiency and reliability of a hierarchical pooling approach for large-scale PCR testing for SARS-CoV-2 diagnosis. To identify the best conditions for pooled SARS-CoV-2 diagnosis by RT-qPCR, we investigated four manual methods for both RNA extraction and PCR assessment, targeting one or more of the RdRp, N, S, and ORF1a genes, using two PCR devices and an automated workflow for SARS-CoV-2 detection. We determined the most efficient and accurate diagnostic assay, taking multiple parameters into account. The optimal pool size calculation included the prevalence of SARS-CoV-2, an assay sensitivity of 95%, an assay specificity of 100%, and pool sizes ranging from 5 to 15 samples. Our investigation revealed that the most efficient and accurate procedure for detecting SARS-CoV-2 has a detection limit of 2.5 copies per PCR reaction. This pooling approach proved to be efficient and accurate in detecting SARS-CoV-2 for all samples with individual quantification cycle (Cq) values lower than 35, accounting for more than 94% of all positive specimens. Our data could serve as a comprehensive practical guide for SARS-CoV-2 diagnostic centers planning to adopt such a pooling strategy.


Energies
2021
Vol 14 (15)
pp. 4638
Author(s):
Simon Pratschner
Pavel Skopec
Jan Hrdlicka
Franz Winter

A revolution of the global energy industry is indispensable for solving the climate crisis. However, renewable energy sources typically show significant seasonal and daily fluctuations. This paper provides a system concept model of a decentralized power-to-green-methanol plant consisting of a biomass heating plant with a thermal input of 20 MWth (oxyfuel or air mode), a CO2 processing unit (DeOxo reactor or MEA absorption), an alkaline electrolyzer, a methanol synthesis unit, an air separation unit, and a wind park. Applying oxyfuel combustion has the potential to directly utilize the O2 generated by the electrolyzer, which was analyzed by varying critical model parameters. A major objective was to determine whether applying oxyfuel combustion improves the plant’s power-to-liquid (PtL) efficiency. For cases utilizing more than 70% of the CO2 generated by the combustion, the oxyfuel O2 demand is fully covered by the electrolyzer, making oxyfuel a viable option for large-scale applications. Conventional air combustion is recommended for small wind parks and scenarios using surplus electricity. Maximum PtL efficiencies of ηPtL,Oxy = 51.91% and ηPtL,Air = 54.21% can be realized. Additionally, a case study for one year of operation was conducted, yielding an annual output of about 17,000 t/a methanol and 100 GWhth/a thermal energy for an input of 50,500 t/a woodchips and a wind park size of 36 MWp.
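The PtL efficiency figures can be read as a simple energy ratio: methanol chemical energy out over total energy in. The sketch below shows the bookkeeping with illustrative round numbers, not the paper's detailed balance.

```python
# Power-to-liquid efficiency: chemical energy in the methanol product
# divided by the sum of all energy inputs (biomass heat + wind electricity).
# All figures passed in below are illustrative placeholders.
def ptl_efficiency(e_methanol_out, e_electricity_in, e_biomass_in):
    return e_methanol_out / (e_electricity_in + e_biomass_in)

# Illustrative annual balance in GWh (round numbers, not the paper's data)
eta = ptl_efficiency(e_methanol_out=52.0,
                     e_electricity_in=70.0,
                     e_biomass_in=30.0)
```

Whether the oxyfuel or the air configuration wins then depends on how much of the electrolyzer O2 and combustion CO2 the rest of the plant can actually absorb, which is what the parameter variation in the paper quantifies.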

