flexible models
Recently Published Documents

TOTAL DOCUMENTS: 157 (FIVE YEARS: 37)
H-INDEX: 21 (FIVE YEARS: 2)

2022, Vol 2022, pp. 1-23
Author(s): Ibrahim M. Hezam, Sarah A. H. Taher, Abdelaziz Foul, Adel Fahad Alrasheedi

We develop neutrosophic goal programming models for sustainable resource planning in a healthcare organization. The neutrosophic approach helps examine the imprecise aspiration levels of resources. For deneutrosophication, the neutrosophic value is transformed into three intervals based on the truth-, falsity-, and indeterminacy-membership functions, from which a crisp value is derived. Multi-choice goal programming is also used to obtain a crisp value. The proposed models seek to draw a strategic plan and long-term vision for a healthcare organization. Accordingly, the proposed flexible models are designed to evaluate hospital service performance and to establish an optimal plan that meets growing patient needs. As a result, the economic and social goals of sustainability can be achieved: total cost is optimized, patients' waiting time is reduced, high-quality services are offered, and appropriate medical drugs are provided. The simplicity and feasibility of the proposed models are validated using real data collected from the Al-Amal Center for Oncology in Aden, Yemen. The results indicate the robustness of the proposed models, which would be valuable for planners guiding healthcare staff in providing the resources needed for optimal annual planning.
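As a rough illustration of the deneutrosophication step described above, the sketch below collapses a single-valued neutrosophic judgement about an aspiration level into one crisp value using a generic score function from the neutrosophic literature; the score function, membership values, and resource bounds are illustrative assumptions, not the paper's exact interval-based procedure.

```python
# Hypothetical sketch of deneutrosophication: collapsing a single-valued
# neutrosophic judgement (truth T, indeterminacy I, falsity F) about an
# aspiration level into one crisp value. The score function used here,
# (2 + T - I - F) / 3, is a common choice in the neutrosophic literature
# and is NOT necessarily the interval-based procedure used in the paper.

def deneutrosophy(truth: float, indeterminacy: float, falsity: float) -> float:
    """Return a crisp score in [0, 1] for a neutrosophic triple."""
    for m in (truth, indeterminacy, falsity):
        if not 0.0 <= m <= 1.0:
            raise ValueError("memberships must lie in [0, 1]")
    return (2.0 + truth - indeterminacy - falsity) / 3.0

# Example: an imprecise aspiration level for annual bed capacity, judged
# 0.8 true, 0.2 indeterminate, 0.1 false, scaled onto assumed bounds.
score = deneutrosophy(0.8, 0.2, 0.1)
low, high = 400, 600          # hypothetical resource bounds
crisp_beds = low + score * (high - low)
print(f"score={score:.3f}, crisp aspiration ~ {crisp_beds:.0f} beds")
```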


2021, Vol 163 (1), pp. 19
Author(s): Rachael M. Roettenbacher, Samuel H. C. Cabot, Debra A. Fischer, John D. Monnier, Gregory W. Henry, ...

The distortions of absorption line profiles caused by photospheric brightness variations on the surfaces of cool, main-sequence stars can mimic or overwhelm radial velocity (RV) shifts due to the presence of exoplanets. The latest generation of precision RV spectrographs aims to detect velocity amplitudes ≲10 cm s⁻¹, but doing so requires mitigation of stellar signals. Statistical techniques are being developed to differentiate between Keplerian and activity-related velocity perturbations. Two important challenges, however, are the interpretability of the stellar activity component as RV models become more sophisticated, and ensuring that the lowest-amplitude Keplerian signatures are not inadvertently absorbed into flexible models of stellar activity. For the K2V exoplanet host ϵ Eridani, we separately used ground-based photometry to constrain Gaussian processes for modeling RVs and TESS photometry with a light-curve inversion algorithm to reconstruct the stellar surface. From the reconstructions of TESS photometry, we produced an activity model that reduced the rms scatter in RVs obtained with EXPRES from 4.72 to 1.98 m s⁻¹. We present a pilot study using the CHARA Array and MIRC-X beam combiner to directly image the starspots seen in the TESS photometry. With the limited phase coverage of the current data, our spot detections are marginal, but a future dedicated observing campaign should allow imaging and should definitively determine the stellar inclination and orientation with respect to the debris disk. This work shows that stellar surface maps obtained with high-cadence, time-series photometric and interferometric data can provide the constraints needed to accurately reduce RV scatter.
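As a hedged illustration of the kind of activity modelling mentioned above (not the authors' actual pipeline, which constrains the Gaussian process with ground-based photometry and inverts TESS light curves), the sketch below fits a quasi-periodic Gaussian process to a synthetic RV series and compares the rms scatter before and after subtracting the activity model; the kernel, rotation period, and noise levels are placeholder assumptions.

```python
# Illustrative sketch (not the authors' pipeline): fit a quasi-periodic
# Gaussian process to a synthetic RV time series as a stellar-activity model
# and compare the rms scatter before and after subtracting it.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, ExpSineSquared, WhiteKernel

rng = np.random.default_rng(0)
t = np.sort(rng.uniform(0, 80, 60))                      # observation times, days
p_rot = 11.7                                             # assumed rotation period, days
activity = 3.0 * np.sin(2 * np.pi * t / p_rot + 0.4)     # fake spot-induced signal, m/s
rv = activity + rng.normal(0, 0.5, t.size)               # "observed" RVs, m/s

# Quasi-periodic kernel: a periodic term (rotation) damped by an RBF term
# (spot evolution), plus white noise for the instrument.
kernel = (ExpSineSquared(length_scale=1.0, periodicity=p_rot)
          * RBF(length_scale=30.0)
          + WhiteKernel(noise_level=0.25))
gp = GaussianProcessRegressor(kernel=kernel, normalize_y=True)
gp.fit(t[:, None], rv)

activity_model = gp.predict(t[:, None])
residuals = rv - activity_model
print(f"rms before: {rv.std():.2f} m/s, after: {residuals.std():.2f} m/s")
```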


2021
Author(s): Mark D. Verhagen

'All models are wrong, but some are useful' is an often-used mantra, particularly when a model's ability to capture the full complexities of social life is questioned. However, an appropriate functional form is key to valid statistical inference, and under-estimating model complexity can lead to biased results. Unfortunately, it is unclear a priori what the appropriate complexity of a functional form should be. I propose using methods from machine learning to generate an estimate of the fit potential in a dataset. By comparing this fit potential with the fit of the functional form originally hypothesized by the researcher, a lack of complexity in the latter can be identified. The flexible models can then be unpacked to understand what type of complexity is missing. I illustrate the approach using simulations and real-world case studies, and show that the framework is easy to implement and leads to improved model specification.
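The core comparison can be sketched in a few lines: fit the hypothesized functional form and a flexible machine-learning model to the same data and compare their out-of-sample fit. The example below, using simulated data with a hidden interaction, is an assumed minimal version of that idea rather than the paper's exact procedure.

```python
# Minimal sketch of the "fit potential" idea: compare the out-of-sample fit
# of a hypothesized functional form (a main-effects linear model) with that
# of a flexible machine-learning model. A sizeable gap suggests the
# hypothesized form under-fits the data. Model choices are illustrative.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
n = 2000
x1, x2 = rng.normal(size=n), rng.normal(size=n)
y = 1.0 * x1 + 0.5 * x2 + 1.5 * x1 * x2 + rng.normal(0, 1, n)  # hidden interaction
X = np.column_stack([x1, x2])

hypothesized = LinearRegression()                              # main effects only
flexible = RandomForestRegressor(n_estimators=200, random_state=0)

r2_hyp = cross_val_score(hypothesized, X, y, cv=5, scoring="r2").mean()
r2_flex = cross_val_score(flexible, X, y, cv=5, scoring="r2").mean()
print(f"hypothesized R^2: {r2_hyp:.2f}, flexible R^2: {r2_flex:.2f}")
# The gap signals missing complexity; unpacking the flexible model (e.g. via
# partial-dependence plots) points toward the omitted x1*x2 term.
```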


2021, Vol 16 (4), pp. 731-739
Author(s): Jan M. Hugo

Globally, the adverse effects of climate change necessitate the implementation of resilient systems that respond to escalating weather fluctuations and increased urban vulnerability. This requires a shift from traditional efficiency-focused solutions towards robust, responsive and flexible models. While novel technologies are being developed to address these needs, existing vernacular examples also present innovative solutions. The purpose of this study is to analyse vernacular solutions, in this case Korean Hanoak housing typologies, in terms of their integration of flexible and adaptable spatial and technological systems to inform modern applications. As its research method, the study first employed an unstructured observational method to document the spatial and technological elements of these vernacular precedents, followed by an intersubjective literature review of the precedents to understand their historic context. As its main conclusion, the study identified seven design principles to inform the development of flexible and adaptable modern architectural solutions. These include holistic, integrative design; articulated and reciprocally layered systems; nested levels of flexible and inflexible systems; appropriate scale identification; and appropriate technology use. As its contribution, this article analyses existing vernacular precedents and highlights principles that can be applied in various contexts to develop locally responsive and flexible architecture.


Author(s): Desmond J. Higham, Henry-Louis de Kergorlay

Epidemic spreading is well understood when a disease propagates around a contact graph. In a stochastic susceptible–infected–susceptible setting, spectral conditions characterize whether the disease vanishes. However, modelling human interactions using a graph is a simplification which only considers pairwise relationships. This does not fully represent the more realistic case where people meet in groups. Hyperedges can be used to record higher order interactions, yielding more faithful and flexible models and allowing for the rate of infection of a node to depend on group size and also to vary as a nonlinear function of the number of infectious neighbours. We discuss different types of contagion models in this hypergraph setting and derive spectral conditions that characterize whether the disease vanishes. We study both the exact individual-level stochastic model and a deterministic mean field ODE approximation. Numerical simulations are provided to illustrate the analysis. We also interpret our results and show how the hypergraph model allows us to distinguish between contributions to infectiousness that (i) are inherent in the nature of the pathogen and (ii) arise from behavioural choices (such as social distancing, increased hygiene and use of masks). This raises the possibility of more accurately quantifying the effect of interventions that are designed to contain the spread of a virus.
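A minimal sketch of a discrete-time SIS contagion on a hypergraph is given below, with a node's infection pressure depending nonlinearly on the number of infectious members in each hyperedge; the hyperedges, rates, and nonlinearity are illustrative assumptions, and the paper's exact individual-level stochastic model and mean-field ODE approximation are richer than this.

```python
# Illustrative discrete-time simulation of SIS contagion on a hypergraph,
# where a node's infection probability depends nonlinearly on the number of
# infectious members in each group (hyperedge) it belongs to.
import numpy as np

rng = np.random.default_rng(2)
n_nodes = 8
hyperedges = [[0, 1, 2], [2, 3, 4, 5], [5, 6, 7], [1, 4, 7]]   # example groups
beta, gamma, dt, steps = 0.8, 0.3, 0.1, 200

def f(num_infected, group_size):
    # Nonlinear (saturating) dependence on infectious group members,
    # normalised by group size; a linear f recovers graph-like dynamics.
    return np.sqrt(num_infected) / group_size

infected = np.zeros(n_nodes, dtype=bool)
infected[0] = True
for _ in range(steps):
    pressure = np.zeros(n_nodes)
    for e in hyperedges:
        k = infected[e].sum()
        if k:
            pressure[e] += beta * f(k, len(e))
    p_inf = 1.0 - np.exp(-pressure * dt)          # susceptible -> infected
    p_rec = 1.0 - np.exp(-gamma * dt)             # infected -> susceptible
    new_inf = (~infected) & (rng.random(n_nodes) < p_inf)
    new_rec = infected & (rng.random(n_nodes) < p_rec)
    infected = (infected | new_inf) & ~new_rec
print("infected nodes at end:", np.flatnonzero(infected))
```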


2021, pp. 136700692110319
Author(s): Lena V. Kremin, Krista Byers-Heinlein

Aims and Objectives: Bilingualism is a complex construct that can be difficult to define and model. This paper proposes that the field of bilingualism can draw from other areas of psychology by integrating advanced psychometric models that incorporate both categorical and continuous properties. Such models can reconcile the widespread use of bilingual and monolingual groups in the literature with recent proposals that bilingualism should be viewed as a continuous variable. Approach: We highlight two models of potential interest: the factor mixture model and the grade-of-membership model. These models simultaneously allow for the formation of different categories of speakers and for continuous variation within those categories. We discuss how these models could be implemented in bilingualism research, including how to develop them. When using either model, researchers can conduct their analyses on the categorical information, the continuous information, or a combination of the two, depending on which is most appropriate for their research question. Conclusions: The field of bilingualism research could benefit from incorporating more complex models into definitions of bilingualism. To help the various subfields of bilingualism research converge on appropriate models, we encourage researchers to pre-register their model selection and planned analyses, as well as to share their data and analysis scripts. Originality: The paper uniquely proposes the incorporation of advanced statistical psychometric methods for defining and modeling bilingualism. Significance: Conceptualizing bilingualism within the context of these more flexible models will allow a wide variety of research questions to be addressed, ultimately helping to advance theory and leading to a fuller and deeper understanding of bilingualism.
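As a conceptual, heavily simplified stand-in for such models (true factor mixture and grade-of-membership models are richer and are usually fit in specialized software), the sketch below fits a two-component Gaussian mixture to synthetic language-experience indicators, yielding both categorical class assignments and continuous, graded membership probabilities; all variable names and values are assumptions for illustration.

```python
# Simplified illustration: a Gaussian mixture over continuous language-
# experience indicators gives categorical information (most likely class)
# and continuous information (graded membership probabilities per speaker).
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(3)
# Synthetic indicators: % exposure to a second language and L2 proficiency (0-100).
group_a = rng.normal([5, 10], [4, 8], size=(150, 2))      # "monolingual-like"
group_b = rng.normal([45, 70], [12, 15], size=(150, 2))   # "bilingual-like"
X = np.clip(np.vstack([group_a, group_b]), 0, 100)

gm = GaussianMixture(n_components=2, random_state=0).fit(X)
classes = gm.predict(X)                  # categorical: most likely group
probs = gm.predict_proba(X)              # continuous: graded membership
print("class sizes:", np.bincount(classes))
print("example graded memberships:", np.round(probs[:3], 2))
```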


2021
Author(s): Wayne M Getz, Richard Salter, Ludovica Luisa Vissat, James S Koopman, Carl P Simon

We developed an elaborated susceptible-infected-recovered (SIR) individual-based model (IBM) with pathogen strain drift, waning and cross immunity, implemented as a novel Java Runtime-Alterable-Model Platform (J-RAMP). This platform allows parameter values, process formulations, and scriptable runtime drivers to be easily added at the start of a simulation, and it provides facilities for integration with R and other data analysis platforms. We selected a set of parameter values and process descriptions relevant to the current COVID-19 pandemic. These include pathogen-specific shedding, environmental persistence, host transmission and mortality, within-host pathogen mutation and replication, adaptive social distancing, and time-dependent vaccination rates and strain-valency specifications. Our simulations illustrate that if waning immunity outpaces vaccination rates, then vaccination rollouts may fail to contain the most transmissible strains. Our study highlights the need for adaptive vaccination rollouts, which depend on reliable real-time monitoring and surveillance of strain proliferation and reinfection data to ensure that vaccines target emerging strains and constrain escape mutations. Together with such data, our platform has the potential to inform the design of vaccination programs that extirpate rather than exacerbate local outbreaks. Finally, our RAMP concept promotes the development of highly flexible models that can easily be shared among researchers and policymakers addressing not only healthcare crises but other types of environmental crises as well.
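The qualitative claim about waning immunity and vaccination can be illustrated with a deliberately simple compartmental sketch (not the authors' individual-based J-RAMP model): an SIRS-type system with vaccination rate nu and waning rate omega, in which faster waning or a more transmissible strain raises prevalence for a fixed rollout rate. All parameter values below are illustrative assumptions.

```python
# Simple SIRS + vaccination sketch (Euler integration) of the qualitative
# point above; not the authors' individual-based model.
beta, gamma = 0.4, 0.1        # transmission and recovery rates (per day)
omega, nu = 0.01, 0.002       # waning-immunity and vaccination rates (per day)
S, I, R = 0.99, 0.01, 0.0     # susceptible, infectious, recovered/vaccinated fractions
dt, days = 0.1, 720
for _ in range(int(days / dt)):
    new_inf = beta * S * I
    dS = -new_inf - nu * S + omega * R
    dI = new_inf - gamma * I
    dR = gamma * I + nu * S - omega * R
    S, I, R = S + dS * dt, I + dI * dt, R + dR * dt
print(f"state after {days} days: S={S:.2f}, I={I:.3f}, R={R:.2f}")
# Increasing omega (faster waning) or beta (a more transmissible strain)
# while holding nu fixed raises the long-run prevalence I, echoing the
# scenario in which rollouts fail to contain such strains.
```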


2021
Author(s): Pauline Guenser, Rachel C. M. Warnock, Philip Conrad James Donoghue, Emilia Jarochowska

The role of time (i.e. taxon ages) in phylogeny has been a source of intense debate within palaeontology for decades and has not yet been fully resolved. The fossilised birth-death (FBD) range process is a model that explicitly accounts for information about species through time, and it presents a fresh opportunity to examine the role of stratigraphic data in phylogenetic inference of fossil taxa. Here, we apply this model in a Bayesian framework to an exemplar dataset of well-dated conodonts from the Late Devonian and compare the results to those obtained using traditional unconstrained tree inference. We show that the combined analysis of morphology and stratigraphic data under the FBD range process reduces overall phylogenetic uncertainty compared to unconstrained tree inference. We find that previous phylogenetic hypotheses based on parsimony and stratophenetics are closer to trees generated under the FBD range process. However, the results also highlight that, irrespective of the inclusion of age data, a large amount of topological uncertainty will remain. Bayesian inference provides the most intuitive way to represent the uncertainty inherent in fossil datasets, and new flexible models increase opportunities to refine hypotheses in palaeobiology.


Chemosphere, 2021, Vol 272, pp. 129611
Author(s): Komal Shukla, Nikhil Dadheech, Prashant Kumar, Mukesh Khare
