Hierarchy of calibrated global models reveals improved distributions and fluxes of biogeochemical tracers in models with explicit representation of iron

2019 ◽  
Vol 14 (11) ◽  
pp. 114009 ◽  
Author(s):  
Wanxuan Yao ◽  
Karin F Kvale ◽  
Eric Achterberg ◽  
Wolfgang Koeve ◽  
Andreas Oschlies
1998 ◽  
Vol 15 (1) ◽  
pp. 1-30
Author(s):  
Sohail Inayatullah

This article is both a critique of ways of approaching the future and a presentation of scenarios of the Islamic world a generation ahead. The critique covers various global models, including The Club of Rome's classic Limits to Growth (LtG), Mankind at the Turning Point (MTP), and World 2000, and other approaches to the understanding of the future. Drawing from poststructural theory, we ask: What is missing, who does the analysis privilege, and what epistemological frames or ways of knowing are accentuated, are made primary, by the models used? What can the Islamic world learn from these models? We attempt to go a step further than merely asking the Marxist class question of who benefits financially. For us, the issue is deeper. We are concerned with what knowledge frames and (more appropriately, from an Islamic perspective) what civilizational frames are privileged, are considered more important. An appendix presents recommendations focused on making the Islamic ummah more future oriented. However, global models are only one way of approaching or understanding the future. There are other ways of approaching the study of the future from which can be derived specific assertions about issues, trends, and scenarios as to the likely and possible shape of the future. We also inquire into the utility of these models for better understanding the future of the Islamic ummah. We conclude with visions of the future of the ummah ...


2021 ◽  
pp. 875529302110279
Author(s):  
Sanaz Rezaeian ◽  
Linda Al Atik ◽  
Nicolas M Kuehn ◽  
Norman Abrahamson ◽  
Yousef Bozorgnia ◽  
...  

This article develops global models of damping scaling factors (DSFs) for subduction zone earthquakes that are functions of the damping ratio, spectral period, earthquake magnitude, and distance. The Next Generation Attenuation for subduction earthquakes (NGA-Sub) project has developed the largest uniformly processed database of recorded ground motions to date from seven subduction regions: Alaska, Cascadia, Central America and Mexico, South America, Japan, Taiwan, and New Zealand. NGA-Sub used this database to develop new ground motion models (GMMs) at a reference 5% damping ratio. We worked with the NGA-Sub project team to develop an extended database that includes pseudo-spectral accelerations (PSA) for 11 damping ratios between 0.5% and 30%. We use this database to develop parametric models of DSF for both interface and intraslab subduction earthquakes that can be used to adjust any subduction GMM from a reference 5% damping ratio to other damping ratios. The DSF is strongly influenced by the response spectral shape and the duration of motion; therefore, in addition to the damping ratio, the median DSF model uses spectral period, magnitude, and distance as surrogate predictor variables to capture the effects of the spectral shape and the duration of motion. We also develop parametric models for the standard deviation of DSF. The models presented in this article are for the RotD50 horizontal component of PSA and are compared with the models for shallow crustal earthquakes in active tectonic regions. Some noticeable differences arise from the considerably longer duration of interface records for very large magnitude events and the enriched high-frequency content of intraslab records, compared with shallow crustal earthquakes. Regional differences are discussed by comparing the proposed global models with the data from each subduction region along with recommendations on the applicability of the models.
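The adjustment the abstract describes can be sketched in code. The snippet below is a schematic illustration only: the functional form, coefficients, and surrogate dependence on magnitude and distance are hypothetical placeholders, not the published NGA-Sub DSF model. It shows the basic mechanics of scaling a 5%-damped spectral ordinate to a target damping ratio via a DSF that equals 1 at the reference damping.

```python
import math

def dsf_sketch(beta, period, magnitude, distance):
    """Illustrative damping scaling factor (DSF).

    Equals 1.0 at the 5% reference damping ratio, rises above 1 for
    beta < 5%, and falls below 1 for beta > 5%. The coefficient b0 and
    the magnitude/distance dependence are placeholders standing in for
    the surrogate predictors (spectral shape, duration) in the article.
    """
    b0 = 0.35  # placeholder coefficient, not a published value
    scale = 1.0 + 0.01 * (magnitude - 7.0) - 0.001 * math.log(max(distance, 1.0))
    return math.exp(-b0 * scale * math.log(beta / 5.0))

def adjust_psa(psa_5pct, beta, period, magnitude, distance):
    # PSA at the target damping = PSA at 5% damping times the DSF.
    return psa_5pct * dsf_sketch(beta, period, magnitude, distance)
```

For example, adjusting a 5%-damped ordinate to 0.5% damping multiplies it by a factor greater than 1, while adjusting to 30% damping multiplies it by a factor less than 1, consistent with the qualitative behavior of damping scaling.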


2021 ◽  
Vol 15 (6) ◽  
pp. 1-20
Author(s):  
Dongsheng Li ◽  
Haodong Liu ◽  
Chao Chen ◽  
Yingying Zhao ◽  
Stephen M. Chu ◽  
...  

In collaborative filtering (CF) algorithms, the optimal models are usually learned by globally minimizing the empirical risk averaged over all the observed data. However, the global models are often obtained via a performance tradeoff among users/items, i.e., not all users/items are perfectly fitted by the global models, due to the hard non-convex optimization problems in CF algorithms. Ensemble learning can address this issue by learning multiple diverse models, but it usually suffers from efficiency issues on large datasets or with complex algorithms. In this article, we keep the intermediate models obtained during global model learning as snapshot models, and then adaptively combine the snapshot models for individual user-item pairs using a memory network-based method. Empirical studies on three real-world datasets show that the proposed method can extensively and significantly improve the accuracy (up to 15.9% relative) when applied to a variety of existing collaborative filtering methods.
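The core idea, combining per-pair predictions from saved training snapshots, can be sketched as follows. This is a simplified stand-in: each snapshot is a pair of factor matrices, and the softmax-attention combiner below replaces the article's memory network; the key construction and query vector are illustrative assumptions.

```python
import numpy as np

def predict(snapshot, u, i):
    # Each snapshot is a (user_factors, item_factors) pair saved at some
    # intermediate point of global model training.
    user_f, item_f = snapshot
    return float(user_f[u] @ item_f[i])

def combine_snapshots(snapshots, u, i, query):
    """Adaptively weight each snapshot's prediction for one user-item pair.

    A softmax over key-query scores yields a convex combination of the
    snapshot predictions; the keys here are a crude placeholder (sum of
    the pair's factor vectors), not the article's memory-network design.
    """
    preds = np.array([predict(s, u, i) for s in snapshots])
    keys = np.array([s[0][u] + s[1][i] for s in snapshots])
    scores = keys @ query
    weights = np.exp(scores - scores.max())  # numerically stable softmax
    weights /= weights.sum()
    return float(weights @ preds)
```

Because the weights are a softmax, the combined prediction is always a convex combination of the individual snapshot predictions, so it lies between their minimum and maximum for any query.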


Author(s):  
Pier Domenico Lamberti ◽  
Luigi Provenzano

Abstract: We consider the problem of describing the traces of functions in $$H^2(\Omega )$$ on the boundary of a Lipschitz domain $$\Omega $$ of $$\mathbb R^N$$, $$N\ge 2$$. We provide a definition of those spaces, in particular of $$H^{\frac{3}{2}}(\partial \Omega )$$, by means of Fourier series associated with the eigenfunctions of new multi-parameter biharmonic Steklov problems which we introduce for this specific purpose. These definitions coincide with the classical ones when the domain is smooth. Our spaces allow us to represent in series the solutions to the biharmonic Dirichlet problem. Moreover, a few spectral properties of the multi-parameter biharmonic Steklov problems are considered, as well as explicit examples. Our approach is similar to that developed by G. Auchmuty for the space $$H^1(\Omega )$$, based on the classical second-order Steklov problem.
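For orientation, the boundary-value problem in question can be written out. The following is a schematic sketch using the abstract's notation; the series expansion shown is illustrative, and the classical space for the normal-derivative datum is included as the standard smooth-domain assignment, not as the article's Lipschitz-domain definition.

```latex
% Biharmonic Dirichlet problem whose solutions the trace spaces represent
\[
\begin{cases}
\Delta^2 u = 0 & \text{in } \Omega,\\[2pt]
u = f, \quad \partial_\nu u = g & \text{on } \partial\Omega,
\end{cases}
\qquad
f \in H^{3/2}(\partial\Omega), \quad g \in H^{1/2}(\partial\Omega)\ \text{(classically)},
\]
% Schematic series representation of the solution
\[
u = \sum_{k} c_k\, u_k,
\]
% where the $u_k$ are eigenfunctions of the multi-parameter biharmonic
% Steklov problems and the $c_k$ are Fourier coefficients of the data $(f,g)$.
```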


2021 ◽  
Vol 13 (14) ◽  
pp. 7963
Author(s):  
Michiel van Harskamp ◽  
Marie-Christine P. J. Knippels ◽  
Wouter R. van Joolingen

Environmental Citizenship (EC) is a promising aim for science education. EC enables people not only to responsibly make decisions on sustainability issues—such as use of renewable energy sources—but also to take action individually and collectively. However, studies show that education for EC is challenging. Because our understanding of EC practice remains limited, an in-depth, qualitative view would help us better understand how to support science teachers during EC education. This study aims to describe current EC education practices. What do secondary science teachers think sustainability and citizenship entail? What are their experiences (both positive and negative) with education for EC? A total of 41 Dutch science teachers were interviewed in an individual, face-to-face setting. Analysis of the coded transcripts shows that most teachers see the added value of EC but struggle to fully implement it in their teaching. They think the curriculum is unsuitable to reach EC, and they see activities such as guiding discussions and opinion forming as challenging. Furthermore, science teachers’ interpretation of citizenship education remains narrow, thus making it unlikely that their lessons are successful in fostering EC. Improving EC education therefore may be supported by explicit representation in the curriculum and teacher professional development directed at its implementation.


2021 ◽  
Vol 11 (1) ◽  
Author(s):  
Andrew K. C. Wong ◽  
Pei-Yuan Zhou ◽  
Zahid A. Butt

Abstract: Machine Learning has made impressive advances in many applications akin to human cognition for discernment. However, success has been limited in the areas of relational datasets, particularly for data with low volume, imbalanced groups, and mislabeled cases, and the outputs typically lack transparency and interpretability. The difficulties arise from the subtle overlapping and entanglement of functional and statistical relations at the source level. Hence, we have developed the Pattern Discovery and Disentanglement System (PDD), which discovers explicit patterns from data of various sizes and with imbalanced groups, and screens out anomalies. We present herein four case studies on biomedical datasets to substantiate the efficacy of PDD. It improves prediction accuracy and facilitates transparent interpretation of the discovered knowledge in an explicit representation framework, the PDD Knowledge Base, which links the sources, the patterns, and individual patients. Hence, PDD promises broad and ground-breaking applications in genomic and biomedical machine learning.

