Model Approach
Recently Published Documents


TOTAL DOCUMENTS

6893
(FIVE YEARS 3291)

H-INDEX

92
(FIVE YEARS 31)

Author(s):  
Mubashra Yasin ◽  
Ashfaq Ahmad ◽  
Tasneem Khaliq ◽  
Muhammad Habib-ur-Rahman ◽  
Salma Niaz ◽  
...  

Abstract: Future climate scenarios predict considerable threats to sustainable maize production in arid and semi-arid regions. These adverse impacts can be minimized by adopting modern agricultural tools to assess and develop successful adaptation practices. A multi-model approach (climate and crop) was used to assess the impacts and uncertainties of climate change on the maize crop. An extensive field study was conducted to explore temporal thermal variations in maize hybrids grown in farmers' fields on ten sowing dates during two consecutive growing years. Data on phenology, morphology, biomass development, and yield were recorded following standard procedures and protocols. The CSM-CERES-Maize, APSIM-Maize, and CSM-IXIM-Maize models were calibrated and evaluated. Five General Circulation Models (GCMs) among 29 were selected, based on their classification into different groups and their uncertainty, to project future climatic changes. Under the RCP 8.5 scenario, the five GCMs predicted a rise in temperature of 1.57–3.29 °C during the maize growing season for the mid-century period (2040–2069) compared with the baseline (1980–2015). The CERES-Maize and APSIM-Maize models showed lower root mean square error values (2.78 and 5.41), higher d-index values (0.85 and 0.87), and reliable R² values (0.89 and 0.89), respectively, for days to anthesis and maturity, while the CSM-IXIM-Maize model performed well for growth parameters (leaf area index, total dry matter) and yield, with reasonably good statistical indices. The CSM-IXIM-Maize model performed well for all hybrids during both years, whereas the climate models NorESM1-M and IPSL-CM5A-MR showed less uncertain results for climate change impacts. The maize models, driven by the GCMs, predicted a reduction in yield (8–55%) relative to the baseline. The maize crop may face a severe yield decline, which could be mitigated by adjusting sowing dates, modifying fertilizer management (fertigation), and adopting heat- and drought-tolerant hybrids.
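The agreement statistics reported above (RMSE, d-index, R²) are standard crop-model evaluation measures; a minimal Python sketch, using made-up observed/simulated values rather than the study's data:

```python
# Model-evaluation statistics for comparing simulated vs. observed
# phenology (illustrative values; not data from the study).
import math

def rmse(obs, sim):
    """Root mean square error."""
    return math.sqrt(sum((s - o) ** 2 for o, s in zip(obs, sim)) / len(obs))

def d_index(obs, sim):
    """Willmott's index of agreement (0..1, 1 = perfect fit)."""
    obar = sum(obs) / len(obs)
    num = sum((s - o) ** 2 for o, s in zip(obs, sim))
    den = sum((abs(s - obar) + abs(o - obar)) ** 2 for o, s in zip(obs, sim))
    return 1 - num / den

def r_squared(obs, sim):
    """Square of Pearson's correlation coefficient."""
    n = len(obs)
    mo, ms = sum(obs) / n, sum(sim) / n
    cov = sum((o - mo) * (s - ms) for o, s in zip(obs, sim))
    vo = sum((o - mo) ** 2 for o in obs)
    vs = sum((s - ms) ** 2 for s in sim)
    return cov ** 2 / (vo * vs)

observed = [62, 65, 68, 70, 74]    # e.g. days to anthesis (hypothetical)
simulated = [60, 66, 67, 72, 73]

print(round(rmse(observed, simulated), 2))
print(round(d_index(observed, simulated), 2))
print(round(r_squared(observed, simulated), 2))
```

A lower RMSE with a d-index and R² near 1 is the pattern the abstract cites as evidence of good calibration.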


2021 ◽  
Vol 12 ◽  
Author(s):  
Karoline Horgmo Jæger ◽  
Andrew G. Edwards ◽  
Wayne R. Giles ◽  
Aslak Tveito

Computational modeling has contributed significantly to the present understanding of cardiac electrophysiology, including cardiac conduction, excitation-contraction coupling, and the effects and side-effects of drugs. However, the accuracy of in silico analysis of electrochemical wave dynamics in cardiac tissue is limited by the homogenization procedure (spatial averaging) intrinsic to standard continuum models of conduction. Averaged models cannot resolve the intricate dynamics in the vicinity of individual cardiomyocytes simply because the myocytes are not present in these models. Here we demonstrate how recently developed mathematical models that represent every myocyte can significantly increase the accuracy, and thus the utility, of modeling electrophysiological function and dysfunction in collections of coupled cardiomyocytes. The present gold standard of numerical simulation for cardiac electrophysiology is the bidomain model, in which the extracellular (E) space, the cell membrane (M), and the intracellular (I) space are all assumed to be present everywhere in the tissue. Consequently, it is impossible to study biophysical processes taking place close to individual myocytes. The bidomain model represents the tissue by averaging over several hundred myocytes, and this inherently limits the accuracy of the model. In our alternative approach, E, M, and I are all explicitly represented in the model, which is therefore referred to as the EMI model. The EMI model approach allows for detailed analysis of the biophysical processes going on in functionally important spaces very close to individual myocytes, although at the cost of significantly increased CPU requirements.


2021 ◽  
Vol 10 (21) ◽  
pp. 4935
Author(s):  
Alberto Enrico Maraolo ◽  
Anna Crispo ◽  
Michela Piezzo ◽  
Piergiacomo Di Gennaro ◽  
Maria Grazia Vitale ◽  
...  

Background: Among the several therapeutic options assessed for the treatment of coronavirus disease 2019 (COVID-19), tocilizumab (TCZ), an antagonist of the interleukin-6 receptor, has emerged as a promising therapeutic choice, especially for the severe form of the disease. A proper synthesis of the available randomized clinical trials (RCTs) is needed to inform clinical practice. Methods: A systematic review with a meta-analysis of RCTs investigating the efficacy of TCZ in COVID-19 patients was conducted. PubMed, EMBASE, and the Cochrane COVID-19 Study Register were searched up until 30 April 2021. Results: The database search yielded 2885 records; 11 studies were considered eligible for full-text review, and nine met the inclusion criteria. Overall, 3358 patients composed the TCZ arm, and 3131 the comparator group. The main outcome was all-cause mortality at 28–30 days. Subgroup analyses according to trials' and patients' features were performed. A trial sequential analysis (TSA) was also carried out to minimize type I and type II errors. According to the fixed-effect model approach, TCZ was associated with a survival benefit (odds ratio (OR): 0.84; 95% confidence interval (CI): 0.75–0.94; I²: 24% (low heterogeneity)). The result was consistent in the subgroup of severe disease (OR: 0.83; 95% CI: 0.74–0.93; I²: 53% (moderate heterogeneity)). However, the TSA illustrated that the required information size was not met unless the study that was the major source of heterogeneity was omitted. Conclusions: TCZ may represent an important weapon against severe COVID-19. Further studies are needed to consolidate this finding.
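The fixed-effect pooling reported above can be sketched with the standard inverse-variance method on log odds ratios, together with Cochran's Q and the I² heterogeneity statistic. The 2×2 trial counts below are hypothetical placeholders, not the data of the nine included RCTs:

```python
# Fixed-effect (inverse-variance) pooling of log odds ratios, with
# Cochran's Q and I^2 heterogeneity. Counts are illustrative only.
import math

# (events_treat, n_treat, events_ctrl, n_ctrl) per hypothetical trial
trials = [(30, 200, 40, 200), (55, 400, 70, 410), (12, 150, 15, 145)]

log_ors, weights = [], []
for a, n1, c, n2 in trials:
    b, d = n1 - a, n2 - c
    log_or = math.log((a * d) / (b * c))
    var = 1 / a + 1 / b + 1 / c + 1 / d    # Woolf variance of log OR
    log_ors.append(log_or)
    weights.append(1 / var)                # inverse-variance weight

w_sum = sum(weights)
pooled = sum(w * y for w, y in zip(weights, log_ors)) / w_sum
se = math.sqrt(1 / w_sum)
ci = (math.exp(pooled - 1.96 * se), math.exp(pooled + 1.96 * se))

# Heterogeneity: Cochran's Q and I^2 (% of variation beyond chance)
q = sum(w * (y - pooled) ** 2 for w, y in zip(weights, log_ors))
df = len(trials) - 1
i2 = max(0.0, (q - df) / q) * 100 if q > 0 else 0.0

print(f"pooled OR = {math.exp(pooled):.2f}, "
      f"95% CI = ({ci[0]:.2f}, {ci[1]:.2f}), I2 = {i2:.0f}%")
```

An OR below 1 with a CI excluding 1, as in the abstract, indicates lower mortality odds in the treatment arm; I² quantifies how much of the between-trial spread exceeds sampling error.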


2021 ◽  
Vol 13 (21) ◽  
pp. 4224
Author(s):  
Eleni Dragozi ◽  
Theodore M. Giannaros ◽  
Vasiliki Kotroni ◽  
Konstantinos Lagouvardos ◽  
Ioannis Koletsis

The frequent occurrence of large and high-intensity wildfires in the Mediterranean region poses a major threat to people and the environment. In this context, the estimation of dead fine fuel moisture content (DFMC) has become an integral part of wildfire management, since it provides valuable information on the flammability status of the vegetation. This study investigates the effectiveness of a physically based fuel moisture model in estimating DFMC during severe fire events in Greece. Our analysis considers two approaches, the satellite-based (MODIS DFMC model) and the weather station-based (AWS DFMC model) approach, using a fuel moisture model based on the relationship between the fuel moisture of fine fuels and the water vapor pressure deficit (D). In the analysis we used weather station data and MODIS satellite data from fourteen wildfires in Greece. Due to the lack of field measurements, the models' performance was assessed only in the case of the satellite data, by using weather observations obtained from the network of automated weather stations operated by the National Observatory of Athens (NOA). Results show that, in general, the satellite-based model achieved satisfactory accuracy in estimating the spatial distribution of DFMC during the examined fire events. More specifically, the validation of the satellite-derived DFMC against the weather station-based DFMC indicated that, in all cases examined, the MODIS DFMC model tended to underestimate DFMC, with MBE ranging from −0.3% to −7.3%. Moreover, in all of the cases examined apart from one (the Sartis fire case, MAE: 8.2%), the MAE of the MODIS DFMC model was less than 2.2%. The remaining numerical results align with the existing literature, except for the MAE case of 8.2%. The good performance of the satellite-based DFMC model indicates that the estimation of DFMC is feasible at various spatial scales in Greece. Presently, the main drawback of this approach is the occurrence of data gaps in the MODIS satellite imagery. The examination and comparison of the two approaches, regarding their operational use, indicates that the weather station-based approach meets the requirements for operational DFMC mapping to a higher degree than the satellite-based approach.
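The vapor-pressure-deficit relationship the model family rests on can be sketched as follows: D is computed from air temperature and relative humidity via the Magnus approximation, and DFMC follows an exponential decay in D. The coefficients below are illustrative assumptions in the spirit of published semi-mechanistic calibrations, not the values used by the authors:

```python
# Sketch: VPD-based estimate of dead fine fuel moisture content (DFMC).
# Form DFMC = a + b*exp(-c*D); coefficients are assumed, not the study's.
import math

def vpd_kpa(temp_c, rh_pct):
    """Vapor pressure deficit D (kPa) from air temperature (deg C) and
    relative humidity (%), using the Magnus saturation-pressure formula."""
    es = 0.6108 * math.exp(17.27 * temp_c / (temp_c + 237.3))
    return es * (1 - rh_pct / 100.0)

def dfmc_pct(d_kpa, a=5.43, b=52.91, c=0.64):   # assumed coefficients
    """DFMC (%) as an exponential function of vapor pressure deficit."""
    return a + b * math.exp(-c * d_kpa)

# Hot, dry fire-weather afternoon vs. a humid morning (toy conditions)
d_dry = vpd_kpa(35.0, 20.0)
d_humid = vpd_kpa(18.0, 85.0)
print(round(dfmc_pct(d_dry), 1), round(dfmc_pct(d_humid), 1))
```

The dry case yields a low DFMC (high flammability), the humid case a high one, which is the gradient the operational mapping described above aims to resolve in space.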


2021 ◽  
Vol 2021 ◽  
pp. 1-16
Author(s):  
Lei Yan ◽  
Yuting Zhu ◽  
Haiyan Wang

Since the commodity and financial attributes of crude oil have both long-term and short-term impacts on crude oil prices, we propose a dimension-reducing machine learning approach to forecast international crude oil prices. First, we use principal component analysis (PCA), multidimensional scaling (MDS), and locally linear embedding (LLE) methods to reduce the dimensions of the data. Then, based on recurrent neural network (RNN) and long short-term memory (LSTM) models, we build eight models for predicting the futures and spot prices of international crude oil. From the analysis and comparison of the prediction results, we find that reducing the dimension of the data improves the accuracy of the models and the applicability of the RNN and LSTM models. In addition, the LLE-RNN/LSTM models most successfully capture the nonlinear characteristics of crude oil prices. When the moving window size is twenty, that is, when crude oil price data lag by almost a month, each model minimizes its error, and the LLE-RNN/LSTM models show the best robustness.
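The moving-window setup described above can be sketched as follows: each training sample uses the previous `window` daily prices as features and the next price as the target. The prices here are a toy series, not actual crude oil data:

```python
# Build (features, target) pairs from a 1-D price series using a
# sliding window, as in the paper's moving-window experiments.

def make_windows(series, window=20):
    """Return lagged-feature matrix X and next-step targets y."""
    xs, ys = [], []
    for i in range(len(series) - window):
        xs.append(series[i:i + window])   # the previous `window` prices
        ys.append(series[i + window])     # next-day price to predict
    return xs, ys

prices = [60 + 0.1 * t for t in range(30)]   # toy upward-trending series
X, y = make_windows(prices, window=20)
print(len(X), len(X[0]), y[0])
```

Each 20-dimensional window would then be reduced (e.g. by PCA, MDS, or LLE) before being fed to the RNN/LSTM, which is the dimension-reduction step the abstract credits for the accuracy gain.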


2021 ◽  
Vol ahead-of-print (ahead-of-print) ◽  
Author(s):  
Inho Hwang ◽  
Sanghyun Kim ◽  
Carl Rebman

Purpose: Organizations invest in information security (IS) technology to be more competitive; however, implementing IS measures creates environmental conditions, such as overload, uncertainty, and complexity, which can cause employees technostress, eventually resulting in poor security performance. This study seeks to contribute to the intersection of research on regulatory focus (promotion and prevention) as a type of individual personality trait, technostress, and IS.
Design/methodology/approach: A survey questionnaire was developed, collecting 346 responses from various organizations, which were analyzed using the structural equation model approach with AMOS 22.0 to test the proposed hypotheses.
Findings: The results indicate support for both the direct and moderating effects of security technostress inhibitors. Moreover, a negative relationship exists between promotion-focused employees and facilitators of security technostress, which negatively affects strains (organizational commitment and compliance intention).
Practical implications: Organizations should develop various programs and establish a highly IS-aware environment to strengthen employees' behavior regarding IS. Furthermore, organizations should consider employees' focus types when working to minimize security technostress, as lowering technostress leads to positive outcomes.
Originality/value: IS management at the organizational level is directly related to employees' compliance with security rather than being a technical issue. Using the transaction theory perspective, this study seeks to enhance current research on employees' behavior, particularly focusing on the effect of individuals' personality types on IS. Moreover, this study theorizes the role of security technostress inhibitors in understanding employees' IS behaviors.


2021 ◽  
Vol ahead-of-print (ahead-of-print) ◽  
Author(s):  
Arūnas Gudinavičius ◽  
Vincas Grigas

Purpose: The current study aims to identify and explore the causes and consequences of unauthorized use of books from readers', publishers', and authors' points of view. The case of Lithuania was also assessed, especially the country's historical background (banned alphabet, book smuggling, theft as a social norm in Soviet times).
Design/methodology/approach: To better understand why readers, authors, and publishers use or do not use technology for unauthorized access to books, a technology acceptance model approach was adopted. A total of 30 respondents (publishers, authors, and readers) were interviewed in semi-structured face-to-face interviews, and a thematic analysis of the collected qualitative data was conducted. Interviews were coded in English with coding software for further analysis.
Findings: Findings indicate that the main cause of the unauthorized use of books is a lack of legal e-book titles and acquisition options. This mainly points at publishers; however, instead of treating unauthorized sources as opportunities for author promotion or marketing, publishers concentrate on causes of unauthorized use that they are not in control of, including access to unauthorized sources, habits, and economic causes. Some publishers believe that the lack of legal e-book titles is a consequence of the unauthorized use of books rather than its cause.
Originality/value: This research contributes to the body of knowledge by investigating the unauthorized use of books from readers', publishers', and authors' points of view, which provides a better understanding of the causes and consequences of such behavior, as well as of the differences between these roles. The authors suggest that these causes lead to the intention to use, and the actual use of, the technology that is easier to use and offers more perceived advantages: technology for unauthorized downloading and reading of books versus legal e-book acquisition options.
Peer review: The peer review history for this article is available at: https://publons.com/publon/10.1108/OIR-03-2021-0133.


Author(s):  
Torben Prill ◽  
Cornelius Fischer ◽  
Pavel Gavrilenko ◽  
Oleg Iliev

Abstract: Current reactive transport models (RTMs) use transport control as the sole arbiter of differences in reactivity. For the simulation of crystal dissolution, a constant reaction rate is assumed for the entire crystal surface as a function of chemical parameters. However, multiple dissolution experiments have confirmed the existence of an intrinsic variability of reaction rates, spanning two to three orders of magnitude. Modeling this variance in the dissolution process is vital for predicting the dissolution of minerals in multiple systems, and novel approaches to solving this problem are currently under discussion. Critical applications include reactions in reservoir rocks, corrosion of materials, and contaminated soils. The goal of this study is to provide an algorithm for the multi-rate dissolution of single crystals, to discuss its software implementation, and to present case studies illustrating the difference between the single-rate and multi-rate dissolution models. This improved model approach is applied to a set of test cases in order to illustrate the difference between the new model and the standard approach. First, a Kossel crystal is used to illustrate the existence of critical rate modes of crystal faces, edges, and corners. A second system exemplifies the effect of multiple rate modes in a reservoir rock system during calcite cement dissolution in a sandstone. The results suggest that reported variations in average dissolution rates can be explained by the multi-rate model, depending on the geometric configurations of the crystal surfaces.
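The face/edge/corner rate modes of a Kossel crystal can be sketched by counting occupied nearest neighbors on a voxel grid: corner sites have 3 occupied neighbors, edge sites 4, and face sites 5, and each class receives its own dissolution rate. The grid size and rate values below are arbitrary placeholders, not the study's calibrated modes:

```python
# Classify surface sites of a voxelized (Kossel) crystal by occupied
# neighbor count and assign a per-class dissolution rate (illustrative).
import itertools

N = 4  # 4x4x4 cubic crystal of unit cells
crystal = set(itertools.product(range(N), repeat=3))

def neighbors(site):
    """Six face-adjacent neighbor positions of a cubic lattice site."""
    x, y, z = site
    return [(x + dx, y + dy, z + dz)
            for dx, dy, dz in [(1, 0, 0), (-1, 0, 0), (0, 1, 0),
                               (0, -1, 0), (0, 0, 1), (0, 0, -1)]]

# Assumed rate modes per neighbor count (arbitrary units):
RATES = {3: 1.0,    # corner sites: fewest bonds, fastest dissolution
         4: 0.1,    # edge sites
         5: 0.01}   # face sites: most bonds, slowest dissolution

site_rates = {}
for site in crystal:
    n_occ = sum(nb in crystal for nb in neighbors(site))
    if n_occ < 6:                 # fewer than 6 neighbors => surface site
        site_rates[site] = RATES[n_occ]

counts = {k: sum(1 for r in site_rates.values() if r == v)
          for k, v in RATES.items()}
print(counts)  # surface-site population per rate mode
```

In a stepping simulation, the fastest (corner) sites would be removed first, retreating the crystal along its edges; this geometry-dependent mix of rate modes is what lets the multi-rate model reproduce the observed spread in average dissolution rates.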


2021 ◽  
Vol 461 ◽  
pp. 109763
Author(s):  
Mª Àngels Colomer ◽  
Antoni Margalida ◽  
Isabel Sanuy ◽  
Gustavo A. Llorente ◽  
Delfí Sanuy ◽  
...  
