alternative assumption
Recently Published Documents

TOTAL DOCUMENTS: 15 (five years: 2)
H-INDEX: 4 (five years: 0)

2019 ◽  
Author(s):  
Niklas Korsbo ◽  
Henrik Jönsson

Abstract
Thoughtful use of simplifying assumptions is crucial to make systems biology models tractable while still representative of the underlying biology. A useful simplification can elucidate the core dynamics of a system. A poorly chosen assumption can, however, either render a model too complicated for drawing conclusions or prevent an otherwise accurate model from describing experimentally observed dynamics.

Here, we perform a computational investigation of linear pathway models that contain fewer pathway steps than the system they are designed to emulate. We demonstrate when such models will fail to reproduce data, and how detrimental truncation of a linear pathway leads to detectable signatures in model dynamics and its optimised parameters.

An alternative assumption is suggested for simplifying linear pathways. Rather than assuming a truncated number of pathway steps, we propose assuming that the rate of information propagation along the pathway is homogeneous, while letting the length of the pathway be a free parameter. This results in a three-parameter representation of arbitrary linear pathways which consistently outperforms its truncated rival and a delay differential equation alternative in recapitulating observed dynamics.

Our results provide a foundation for well-informed decision making during model simplification.

Author summary
Mathematical modelling can be a highly effective way of condensing our understanding of biological processes and highlighting the most important aspects of them. Effective models are based on simplifying assumptions that reduce complexity while still retaining the core dynamics of the original problem. Finding such assumptions is, however, not trivial.

In this paper, we explore ways in which one can simplify long chains of simple reactions wherein each step is linearly dependent on its predecessor. After generating synthetic data from models that describe such chains in explicit detail, we compare how well different simplifications retain the original dynamics. We show that the most common such simplification, which is to ignore parts of the chain, often renders models unable to account for time delays. However, we also show that when such a simplification has had a detrimental effect, it leaves a detectable signature in its optimal parameter values. We also propose an alternative assumption which leads to a highly effective model with only three parameters. By comparing the effects of these simplifying assumptions in thousands of different cases and under different conditions, we are able to show clearly when and why one is preferred over the other.
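The "homogeneous rate, free length" simplification rests on a standard property of linear chains: with equal rates k, an n-step chain responds to a unit impulse with an Erlang-shaped delay whose peak sits at (n-1)/k and whose mean is n/k. The small simulation below illustrates this (a sketch with our own function names and a simple Euler integrator, not the authors' code):

```python
import numpy as np

def simulate_chain(n_steps, rate, t_end=20.0, dt=0.001):
    """Explicit linear pathway: a unit impulse enters step 1 at t = 0 and
    propagates through n_steps first-order reactions with equal rates.
    Returns the time grid and the last step's concentration."""
    n_t = int(t_end / dt)
    t = np.arange(n_t) * dt
    x = np.zeros(n_steps)
    x[0] = 1.0  # unit impulse placed in the first step
    out = np.empty(n_t)
    for i in range(n_t):
        out[i] = x[-1]
        dx = np.empty_like(x)
        dx[0] = -rate * x[0]                 # first step only decays
        dx[1:] = rate * (x[:-1] - x[1:])     # each step fed by its predecessor
        x = x + dt * dx                      # forward Euler update
    return t, out

# With equal rates, the output is an Erlang-shaped pulse:
# peak at (n-1)/rate = 7, mean delay n/rate = 8 for n = 8, rate = 1.
t, y = simulate_chain(n_steps=8, rate=1.0)
peak_time = t[np.argmax(y)]
```

Fitting the rate and the chain length (together with an amplitude) to data is what yields a three-parameter representation of the kind discussed above; letting the length be a free parameter, rather than truncating the chain, is the alternative assumption the paper proposes.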


2016 ◽  
Vol 8 (1) ◽  
pp. 38
Author(s):  
Maather Mohammed Al-Rawi

In this study, I aim to investigate the ambiguity surrounding the category of non-modifying Arabic adjectives that occur independently without a modified noun, and to answer the following questions: (1) are independent adjectives in Arabic nouns or adjectives?; (2) do they undergo a deadjectivizing process?; and (3) if they do, at which layer in adjectival phases does nominalization take place? I investigate the bi-categorial nature of independent adjectives in Arabic, showing that they are internally adjectival but externally nominal. This analysis postulates that these adjectives have undergone category change by moving A to the nominalizer D, which bears the abstract affix NOM. Semantically, the adjective becomes referential (or +[indiv(iduated)]), naming entities with certain attributes rather than denoting the attribute. However, DP is not the only layer at which category change takes place. Category change is observed to occur earlier than the DP layer, as indicated by subregularities in the adjective form. The plural morpheme indicates three layers of nominality: the lower nP, NumP, and DP. Adjectives that undergo a-to-n change are nominalized with a singular nominal form. Adjectives that are nominalized in NumP are pluralized with the nominal broken plural, yet retain a singular adjectival form. Finally, adjectives that are nominalized in the highest functional DP projection are marked with an adjectival sound plural morpheme. This analysis provides a neat account of the diversity in adjective number forms and is favored over the alternative assumption that adjectives in pro-drop languages drop the head noun.


2014 ◽  
Vol 19 ◽  
pp. 271 ◽  
Author(s):  
Graham Ferris ◽  
Nick Johnson

There has been an implicit assumption that legal education should be about exposition and evaluation, and should reward facility in exposition and theoretical awareness. This theoretically based assumption generates a theory-induced blindness. Specifically, it obscures the dynamic relationship between law and legal practice, despite it being a familiar aspect of the world. The lawyer as rule entrepreneur is lost sight of. One alternative assumption about legal education would be that law is a game-like activity, and that legal education should be directed towards promoting those qualities that would enhance performance in this game. In this approach to legal education it would be practical nous that would be sought and rewarded, and qualities such as facility in exposition and theoretical awareness would receive recognition merely as qualities that can be ancillary to, and elements of, practical nous. Doctrinal legal education naturally pulls towards the first theory, and clinical legal education naturally pulls towards the second. We argue for a clearer awareness of the role of rule entrepreneurship in clinical programmes and in legal education generally.


Author(s):  
RUBING HUANG ◽  
XIAODONG XIE ◽  
DAVE TOWEY ◽  
TSONG YUEH CHEN ◽  
YANSHENG LU ◽  
...  

Combinatorial interaction testing is a well-recognized testing method that has been widely applied in practice, often with the assumption that all test cases in a combinatorial test suite have the same fault detection capability. However, when testing resources are limited, an alternative assumption may be that some test cases are more likely to reveal failures, making the order in which test cases are executed critical. To improve testing cost-effectiveness, prioritization of combinatorial test cases is employed. The most popular approach is based on interaction coverage: it prioritizes combinatorial test cases by repeatedly choosing an unexecuted test case that covers the largest number of uncovered parameter-value combinations of a given strength (level of interaction among parameters). However, this approach suffers from some drawbacks. Based on previous observations that the majority of faults in practical systems can usually be triggered by parameter interactions of small strengths, we propose a new strategy that prioritizes combinatorial test cases by incrementally adjusting the strength values. Experimental results show that our method outperforms both random prioritization and prioritization by test case generation order, and performs better than the interaction-coverage-based prioritization technique in most cases.
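The interaction-coverage baseline described above can be sketched as a greedy loop (an illustrative reconstruction with our own names, not the paper's implementation):

```python
from itertools import combinations

def prioritize_by_interaction_coverage(suite, strength):
    """Greedy interaction-coverage prioritization: repeatedly pick the
    unexecuted test case that covers the largest number of not-yet-covered
    strength-way parameter-value combinations."""
    def combos(test):
        # All strength-sized sets of (parameter index, value) pairs in this test.
        return set(combinations(enumerate(test), strength))

    remaining = list(suite)
    covered = set()
    ordered = []
    while remaining:
        # Ties are broken by suite order (max keeps the first maximum).
        best = max(remaining, key=lambda tc: len(combos(tc) - covered))
        remaining.remove(best)
        covered |= combos(best)
        ordered.append(best)
    return ordered

# Three tests over three binary parameters, prioritized at strength 2.
# (0,0,1) shares two of its pairs with (0,0,0), so (1,1,1) is picked second.
suite = [(0, 0, 0), (0, 0, 1), (1, 1, 1)]
order = prioritize_by_interaction_coverage(suite, strength=2)
```

The strategy proposed in the paper would instead work through strengths incrementally, e.g. exhausting the small-strength combinations (which trigger most faults in practice) before moving to higher strengths.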


2013 ◽  
Vol 23 (1) ◽  
pp. 135-166 ◽  
Author(s):  
Naomi Aradi

Abstract
The concept of tawḥīd (unity of God) is a central issue in Kalām theological treatises. The discussion devoted to this concept follows a typical structure, a fact that has been recognized by scholars in the past. Shlomo Pines pointed to the similarity between the Kalām model of discussion and the structure of John Damascene's (d. 750) De Fide Orthodoxa, and suggested that it may indicate the profound impact of Christian theology on Muʿtazilite Kalām. Ulrich Rudolph added two important pieces of evidence to the discussion: he analyzed Abū Manṣūr al-Māturīdī al-Samarqandī's (d. 944) Kitāb al-Tawḥīd and the Jacobite Moses bar-Kepha's (d. 903) introduction to his Hexaemeron, and argued that the structural and conceptual affinity between them actually indicates the opposite direction of influence from the one suggested by Pines. According to Rudolph, the similarity of the two treatises shows they were written in imitation of an older Muʿtazilite book, which constituted a prototype for the discussion of the concept of tawḥīd. This paper challenges Rudolph's thesis in two ways: first by questioning his arguments concerning the sources of the Kitāb al-Tawḥīd, and second by suggesting an alternative assumption, which strengthens Pines's theory regarding the development of the Kalām model of discussion. A comparative structural analysis of the works of Māturīdī and bar-Kepha, together with some fragments of earlier Jewish and Christian commentaries on Genesis, raises the possibility that this model of discussion existed first in Christian exegetical treatises on Genesis and had already developed before it appeared in Kalām works.


2011 ◽  
Vol 26 (S2) ◽  
pp. 1727-1727
Author(s):  
M. Hossain

Viewed from a naturalistic and scientific perspective, death appears to represent the permanent cessation of human existence, contributing to the widespread experience of death anxiety. The present paper attempts to deconstruct this view on epistemological grounds by analyzing (1) the prevailing universal concept of death in naturalistic discourse, (2) the issue of our adjustment to this presumed reality, and (3) the relationship between existence and death in the context of their social evolution. Integrating this conceptual analysis with empirical observations, the paper then explores the contrasting postulate, namely that death may not be the end of our existence, and the moral implications of this alternative assumption. This position, termed the "death adjustment hypotheses," would seem to offer an alternative grounding for theory and research in thanatology.


Author(s):  
Darlington C. Richards ◽  
Gladson I. Nwanna

The contextual framework, or policy orthodoxy, underpinning the implementation of privatization was the prevalent thinking that economic systems function best in a "free market", with little or no government intervention. In the same vein was the belief that a more productive allocation and rationalization of the factors of production would dictate a wholesale transfer from the public to the private sector of the ownership and control of productive assets, their allocation and pricing, including the residual profits flowing from them. The most effective vehicle for implementing such free-market privatization was adjudged to be unfettered deregulation. To the extent that it enabled the untangling of bureaucratic impediments to the inflow and retention of capital into these countries by way of foreign direct investment (FDI) and portfolio investment (PI), including the repatriation of resultant profits, it was a welcome outcome. Unfettered deregulation, however, as clearly manifested in recent years, particularly in well-known developed economies, appears to have produced an outcome substantially inconsistent with the traditional suppositions, begging the obvious question in the minds of academicians and policymakers alike: where to from here? The answer to this apparent conflict and/or contradiction is more urgent in developing and emerging economies, where privatization, and in a broader sense the ideas and practices based on free-market principles and prescriptions, have been promoted and sold as sacrosanct, if not necessary for economic growth and survival. Given the current state of the global financial market, which at best can be said to be in a state of flux, and the myriad of supposedly economic development initiatives invoking the likes of privatization and deregulation, we are tempted to ask: Are there any fixes? Could there be better, more accommodating alternative assumption(s), doctrine(s) or paradigm(s)?


1990 ◽  
Vol 21 (3) ◽  
pp. 153-164 ◽  
Author(s):  
M. H. Diskin

The speed and direction of movement of rainfall patterns, assumed constant during any given storm, can be derived from data sets comprised of the coordinates (x, y) of a number of rainfall recording stations and the times of arrival (T) of some prominent feature of the recorded hyetographs. The result depends on the nature of the feature adopted. A measure of the significance of the result can also be derived by computing the reduction in the value of the RMS deviation of arrival times due to the assumption of a moving storm in contrast to the alternative assumption of random fluctuations about an equal arrival time. The paper also outlines a procedure for estimating the accuracy of the results based on repeated computations of the speed and direction using subsets of the data obtained by removing records of one rainfall measuring station. The procedure is demonstrated with data recorded in the city of Lund, Sweden, by a network of 12 stations for two different storms.
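Under the constant-velocity assumption, the arrival times lie on a plane T = T0 + px·x + py·y, whose gradient (px, py) is a slowness vector pointing along the direction of motion, so a least-squares plane fit recovers both speed and direction. The sketch below illustrates the idea (our own function names and estimator details, not necessarily the paper's exact procedure):

```python
import numpy as np

def storm_motion(x, y, T):
    """Fit the plane T = T0 + px*x + py*y to station coordinates and arrival
    times by least squares. The gradient (px, py) is a slowness vector:
    speed = 1/|p|, and the motion points along p."""
    A = np.column_stack([np.ones_like(x), x, y])
    (t0, px, py), *_ = np.linalg.lstsq(A, T, rcond=None)
    speed = 1.0 / np.hypot(px, py)
    # Angle of motion in degrees, counterclockwise from the +x axis
    # (not a compass bearing).
    direction = np.degrees(np.arctan2(py, px))
    return speed, direction

# Synthetic check: a pattern moving at 5 units/time along +x, observed by
# five stations; arrival times then depend only on x.
x = np.array([0.0, 1.0, 2.0, 0.0, 1.0])
y = np.array([0.0, 0.0, 0.0, 2.0, 2.0])
T = x / 5.0
speed, direction = storm_motion(x, y, T)
```

The accuracy procedure described above amounts to repeating this fit on subsets of the data, each with one station's record removed, and examining the spread of the resulting speeds and directions; the significance measure compares the fit's RMS residual against that of the equal-arrival-time alternative.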

