Enforced symmetry: the necessity of symmetric waxing and waning

PeerJ ◽  
2019 ◽  
Vol 7 ◽  
pp. e8011
Author(s):  
Niklas Hohmann ◽  
Emilia Jarochowska

A fundamental question in ecology is how the success of a taxon changes through time and what drives this change. This question is commonly approached using trajectories averaged over a group of taxa. Using results from probability theory, we show analytically and using examples that averaged trajectories will be more symmetric as the number of averaged trajectories increases, even if none of the original trajectories they were derived from is symmetric. This effect is not only based on averaging, but also on the introduction of noise and the incorporation of a priori known origination and extinction times. This implies that averaged trajectories are not suitable for deriving information about the processes driving the success of taxa. In particular, symmetric waxing and waning, which is commonly observed and interpreted to be linked to a number of different paleobiological processes, does not allow drawing any conclusions about the nature of the underlying process.
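The averaging effect is straightforward to reproduce numerically. Below is a minimal sketch (not the authors' code): each simulated trajectory is a triangular rise and fall whose peak time is drawn uniformly, so almost every individual trajectory is asymmetric, yet the average over many of them approaches symmetry because the peak-time distribution is symmetric about the midpoint.

```python
import numpy as np

rng = np.random.default_rng(42)
t = np.linspace(0.0, 1.0, 201)

def triangular_trajectory(peak: float) -> np.ndarray:
    """A hat-shaped trajectory rising to a random peak time and falling back;
    asymmetric whenever peak != 0.5."""
    return np.minimum(t / peak, (1.0 - t) / (1.0 - peak))

def asymmetry(f: np.ndarray) -> float:
    """Zero for a time-symmetric trajectory; larger means more asymmetric."""
    return np.abs(f - f[::-1]).mean()

for k in (1, 10, 100, 1000):
    avg = np.mean([triangular_trajectory(rng.uniform(0.1, 0.9))
                   for _ in range(k)], axis=0)
    print(f"{k:5d} trajectories averaged -> asymmetry {asymmetry(avg):.4f}")
```

The asymmetry score shrinks as the number of averaged trajectories grows even though no single trajectory is symmetric, which is exactly why an averaged curve carries no information about the shape of the individual trajectories.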

2021 ◽  
pp. 86
Author(s):  
Vladimir I. Chervonyuk

For jurisprudence (both doctrine and practice), the fundamental question of the nature of law is at the same time the question of the normative (specifically legal) composition of law: the structure of its normative “substance”, or the set of legal regulators. On the dominant view, the legal regulators recognized a priori are norms, that is, legal rules positivized in legislation. This paradigm, which remains unshaken, underlies the established understanding of the forms (sources) of law, the practice of law-enforcement decision-making where legislation is lacking, the handling of emerging defects in legal regulation, and the assessment of the necessity (validity) of the applicable norms. The understanding, rooted in Russian jurisprudence, of the principles of law as “legal ideas” or at most “more general norms”, while the latter are not presumed to serve as a legal basis for individual legal decisions in specific cases, is hopelessly outdated and does not correspond to the needs of developing practice. A change of paradigm in the understanding of the structure and composition of the regulators of law requires resolving a question of fundamental, sui generis importance, one that the pillars of the philosophy and theory of law (H.L.A. Hart, R. Dworkin, M. van Hoecke, R. Alexy, and others) have attempted to answer: does law consist only of norms, or are norms merely one part of that whole; and are the norms of law always to be treated by law-enforcement authorities, above all the courts, as the only legal basis for decision, given that the norm to be applied may be absent, or may suffer from legal uncertainty or inconsistency, that is, be invalid.


2003 ◽  
Vol 15 (8) ◽  
pp. 1865-1896 ◽  
Author(s):  
Carsten Prodöhl ◽  
Rolf P. Würtz ◽  
Christoph von der Malsburg

The Gestalt principle of collinearity (and curvilinearity) is widely regarded as being mediated by the long-range connection structure in primary visual cortex. We review the neurophysiological and psychophysical literature to argue that these connections are developed from visual experience after birth, relying on coherent object motion. We then present a neural network model that learns these connections in an unsupervised Hebbian fashion with input from real camera sequences. The model uses spatiotemporal retinal filtering, which is very sensitive to changes in the visual input. We show that it is crucial for successful learning to use the correlation of the transient responses instead of the sustained ones. As a consequence, learning works best with video sequences of moving objects. The model addresses a special case of the fundamental question of what represents the necessary a priori knowledge the brain is equipped with at birth so that the self-organized process of structuring by experience can be successful.
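The claimed mechanism, Hebbian strengthening between units co-activated by transient responses to coherent motion, can be caricatured in a few lines. The sketch below is a toy, not the authors' model (which uses spatiotemporal retinal filters and real camera sequences): a vertical bar sweeps across a model retina, the ON-transient response is the rectified frame difference, and an outer-product Hebbian update accumulates lateral weights, which end up linking collinear (vertically aligned) units and not orthogonal ones.

```python
import numpy as np

N = 16                          # N x N model retina
W = np.zeros((N * N, N * N))    # lateral weights between all unit pairs

prev = np.zeros((N, N))
for x in range(N):
    frame = np.zeros((N, N))
    frame[:, x] = 1.0           # vertical bar sweeping rightwards
    transient = np.clip(frame - prev, 0.0, None).ravel()  # rectified ON-transient
    W += np.outer(transient, transient)                   # Hebbian co-activity
    prev = frame
np.fill_diagonal(W, 0.0)

i = np.ravel_multi_index((8, 5), (N, N))
collinear = np.ravel_multi_index((9, 5), (N, N))    # same column, next row
orthogonal = np.ravel_multi_index((8, 6), (N, N))   # same row, next column
print(W[i, collinear], W[i, orthogonal])            # -> 1.0 0.0
```

Because the transient response singles out the moving edge, only units along the edge's orientation fire together, so the learned connections encode collinearity; a sustained response would correlate everything that is bright, not just what moves together.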


It is proven that every zero-mean bounded irregular sequence (BIS) has three invariants, i.e., characteristics that stay the same when the environmental statistics change. The existence of such invariants answers in the positive the question of how far we can ensure the certainty of the obtained knowledge and the range over which the behavior of stable complex systems is predictable. The certainty of our knowledge is put to the test by the lack of a global rule for response, which makes it impossible to adjust the corresponding recording equipment a priori for a long run. It is therefore to be expected that a recorded time series does not match the corresponding signal in a uniform way, since the record is subject to local distortion that is generally non-linear and acts non-homogeneously on the recording. In turn, this poses the fundamental question of whether it is ever possible to establish and/or predict the properties and the future behavior of complex systems.
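The recording problem described here can be made concrete with a small simulation, a sketch under assumed distortions rather than anything from the paper (the invariants themselves are derived analytically there): a zero-mean bounded irregular sequence is passed through a non-linear gain that drifts along the record, so the mismatch between signal and record is non-uniform.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 10_000
signal = rng.uniform(-1.0, 1.0, n)   # zero-mean bounded irregular sequence
signal -= signal.mean()

# Assumed local distortion: a slowly drifting gain followed by saturation,
# i.e. non-linear and acting non-homogeneously along the recording.
gain = 1.0 + 0.5 * np.sin(2.0 * np.pi * np.arange(n) / n)
record = np.tanh(gain * signal)

err = record - signal
half = n // 2
print("distortion std, first half:", err[:half].std().round(4))
print("distortion std, second half:", err[half:].std().round(4))
```

The two halves of the record are distorted by visibly different amounts, which is the sense in which no a priori calibration of the equipment can make the record match the signal uniformly over a long run.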


1979 ◽  
Vol 8 (2-3) ◽  
pp. 151-187 ◽  
Author(s):  
Paul Kay ◽  
Chad K. McDaniel

In the introduction we stated two frequently encountered a priori arguments against the variable rule methodology and attempted to refute them. The first rejects the variable rule methodology – and by implication any study of comparable data – on the grounds that variable rules govern token frequencies while generative grammar does not countenance token frequencies. We agree that variable rules are not generative rules of a new sort but an entirely different kind of logical object and that generative grammar indeed does not countenance token frequencies. But we are convinced by empirical work conducted within the variable rule paradigm that token frequencies often display clear patterns and, moreover, that some knowledge of these patterns forms part of the linguistic abilities of speakers. We conclude that, whatever the drawbacks of the variable rule formalism, studies employing variable rules have shown regularities in linguistic behavior that point to a serious lack in the generative paradigm, narrowly defined.

A second line of argument against variable rules which we rejected was based on assumptions about human psychology and about probability theory. The assumption that human beings cannot assess probabilities and behave in accord with them in a natural and unconscious manner appears to be supported by no empirical evidence and does not seem to us plausible a priori. Moreover, experimental evidence to the contrary exists. The argument from probability theory was that a speaker would have to have an internal counting device to keep track of the relative frequencies of linguistic variants that he had heard from his own or other lips in order to behave in accordance with variable rules. But this would require a kind of probability theory that would differ in remarkable and unspecified ways from ordinary probability theory, since the paradigmatic empirical examples of the familiar theory, such as coins, dice, decks of cards, and so on, are not possessed of memories.
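The closing point, that behaving in accordance with probabilities requires no counting device or memory, is easy to make concrete. The sketch below (with invented variant names and probabilities, purely for illustration) draws each token independently from fixed weights, exactly as a coin or die would; no record of past outputs exists, yet the long-run frequencies match the rule.

```python
import random
from collections import Counter

# Hypothetical probabilities for one sociolinguistic variable.
variant_probs = {"t-deletion": 0.37, "t-retention": 0.63}
variants, weights = list(variant_probs), list(variant_probs.values())

def speak() -> str:
    """Emit one token. Like a coin toss, the draw keeps no memory of
    previous outcomes, yet long-run frequencies track the weights."""
    return random.choices(variants, weights=weights)[0]

tokens = Counter(speak() for _ in range(100_000))
print({v: round(c / 100_000, 3) for v, c in tokens.items()})
```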


Slavic Review ◽  
1965 ◽  
Vol 24 (3) ◽  
pp. 450-465 ◽  
Author(s):  
Maxim W. Mikulak

In the course of the nineteenth century it became clear that the unfettered speculation obtaining in philosophy frequently could not be useful in science. However, in the Soviet Union it is asserted that natural science can draw its correct "theoretical conclusions" only by relying upon the philosophic and the methodological teachings of dialectical materialism. Certain Soviet Marxists have, on allegedly philosophic grounds, rejected Western genetics, the resonance theory of the chemical bond, the principle of uncertainty of quantum mechanics, relativist cosmology, the relativization of space, time, and matter, probability theory, and symbolic logic. The intriguing question then remains whether Soviet dialectical materialists determine the validity of scientific theories and accomplishments on the basis of a priori judgments derived from philosophic analysis or whether the Soviet attacks on Western scientific thought are, rather, political and ideological in nature.


Phainomenon ◽  
2008 ◽  
Vol 16-17 (1) ◽  
pp. 39-54
Author(s):  
Francesc Pereña

Abstract At the beginning of the Prolegomena zur Geschichte des Zeitbegriffs (Gesamtausgabe, volume 20), Heidegger puts forward at length his views on phenomenology, especially that of Husserl, which is the one we are going to consider. By means of what he calls the “fundamental discoveries of phenomenology”, that is, intentionality, categorial intuition, and the meaning of the a priori in Husserl’s Logical Investigations, Heidegger reaches a definition of phenomenology: “the analytic description of intentionality in its a priori”. Next, Heidegger proceeds to what he characterizes as an “immanent critique” of phenomenology, which consists in showing that in Ideas Husserl omits the fundamental question of “the being of consciousness” and of “the sense of being”, in a way that ends up being un-phenomenological. We go into Heidegger’s text in order to consider the legitimacy of this critique and, particularly, its alleged immanence.


2020 ◽  
Vol 10 (23) ◽  
pp. 8477 ◽  
Author(s):  
Jehyuk Jang ◽  
Heung-No Lee

Our aim in this paper is to investigate the profitability of double-spending (DS) attacks that manipulate an a priori mined transaction in a blockchain. It is well understood that a successful DS attack is established when the proportion of computing power an attacker possesses is higher than that of the honest network. What is not yet well understood is how threatening a DS attack can be when it uses less than 50% of the computing power. Namely, DS attacks at any proportion can be a threat as long as the chance to make a good profit exists. Profit is obtained when the revenue from making a successful DS attack is greater than the cost of carrying one out. We have developed a novel probability theory for calculating a finite-time attack probability. This can be used to size up the attack resources needed to obtain a profit. The results enable us to derive a necessary and sufficient condition on the value of a transaction targeted by a DS attack. Our result is quite surprising: we theoretically show how a DS attack at any proportion of computing power can be made profitable. Given one's transaction value, the results can also be used to assess the risk of a DS attack. An example of a profitable DS attack against BitcoinCash is provided.
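For orientation, the classical baseline (not the authors' finite-time analysis) is Nakamoto's 2008 estimate of the probability that an attacker with hash-rate share q ever completes a double spend against z confirmations; the sketch below implements that well-known formula.

```python
from math import exp, factorial

def double_spend_probability(q: float, z: int) -> float:
    """Nakamoto's estimate of a successful double spend: Poisson-weight the
    attacker's hidden progress while the honest chain mines z blocks, then
    add the gambler's-ruin chance of closing the remaining deficit."""
    p = 1.0 - q
    if q >= p:
        return 1.0                      # a majority attacker always succeeds
    lam = z * q / p                     # expected attacker blocks in that time
    return 1.0 - sum(
        (lam ** k * exp(-lam) / factorial(k)) * (1.0 - (q / p) ** (z - k))
        for k in range(z + 1)
    )

for q in (0.10, 0.30, 0.45):
    print(q, [round(double_spend_probability(q, z), 4) for z in (1, 3, 6)])
```

Because this probability never reaches zero for q > 0, an attack can always be made profitable in expectation by targeting a sufficiently valuable transaction, which is the intuition behind the paper's condition on transaction value; the finite-time analysis makes the cost side of that calculation precise.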


Author(s):  
D. E. Luzzi ◽  
L. D. Marks ◽  
M. I. Buckett

As the HREM becomes increasingly used for the study of dynamic localized phenomena, the development of techniques to recover the desired information from a real image is important. Often the important features scatter only weakly in comparison to the matrix material, in addition to being masked by statistical and amorphous noise. The desired information will usually involve accurate knowledge of the position and intensity of the contrast. In order to decipher the desired information from a complex image, cross-correlation (xcf) techniques can be utilized. Unlike other image-processing methods that rely on data massaging (e.g., high/low-pass filtering or Fourier filtering), the cross-correlation method is a rigorous data reduction technique with no a priori assumptions.

We have examined basic cross-correlation procedures using images of discrete Gaussian peaks and have developed an iterative procedure to greatly enhance the capabilities of these techniques when the contrast from the peaks overlaps.
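A minimal version of the basic procedure, assuming numpy and synthetic data (not the authors' iterative scheme): build a discrete Gaussian template, cross-correlate it with a noisy image via the FFT, and read the feature position off the correlation maximum.

```python
import numpy as np

def gaussian_peak(shape, center, sigma, amplitude=1.0):
    """Discrete Gaussian peak standing in for a weakly scattering feature."""
    y, x = np.indices(shape)
    r2 = (x - center[1]) ** 2 + (y - center[0]) ** 2
    return amplitude * np.exp(-r2 / (2.0 * sigma ** 2))

def cross_correlate(image, template):
    """Circular cross-correlation via the FFT; both inputs are zero-meaned
    so the xcf responds to feature shape rather than background level."""
    img = image - image.mean()
    tpl = template - template.mean()
    return np.fft.ifft2(np.fft.fft2(img) * np.conj(np.fft.fft2(tpl))).real

rng = np.random.default_rng(0)
image = gaussian_peak((128, 128), center=(40, 90), sigma=3.0, amplitude=0.5)
image += rng.normal(scale=0.2, size=image.shape)      # statistical noise
template = gaussian_peak((128, 128), center=(0, 0), sigma=3.0)

xcf = cross_correlate(image, template)
print(np.unravel_index(xcf.argmax(), xcf.shape))      # -> approximately (40, 90)
```

Locating the correlation maximum to sub-pixel precision, and separating peaks whose contrast overlaps, is where iterative refinements of this basic step come in.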


Author(s):  
H.S. von Harrach ◽  
D.E. Jesson ◽  
S.J. Pennycook

Phase contrast TEM has been the leading technique for high-resolution imaging of materials for many years, whilst STEM has been the principal method for high-resolution microanalysis. However, it was demonstrated many years ago that low-angle dark-field STEM imaging is a priori capable of almost 50% higher point resolution than coherent bright-field imaging (i.e. phase contrast TEM or STEM). This advantage was not exploited until Pennycook developed the high-angle annular dark-field (ADF) technique, which can provide an incoherent image showing both high image resolution and atomic number contrast.

This paper describes the design and first results of a 300 kV field-emission STEM (VG Microscopes HB603U) which has improved ADF STEM image resolution towards the 1 Å target. The instrument uses a cold field-emission gun, generating a 300 kV beam of up to 1 μA from an 11-stage accelerator. The beam is focussed on to the specimen by two condensers and a condenser-objective lens with a spherical aberration coefficient of 1.0 mm.
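The quoted resolution advantage follows from the standard point-resolution criteria, roughly d ≈ 0.61λ/α for coherent bright-field and d ≈ 0.43λ/α for incoherent dark-field imaging, a ratio of about 1.4. A back-of-envelope check (the 10 mrad objective semi-angle is an assumed, illustrative value):

```python
import math

h, m0, e, c = 6.626e-34, 9.109e-31, 1.602e-19, 2.998e8  # SI constants

def electron_wavelength(kv: float) -> float:
    """Relativistically corrected electron wavelength (m) at kv kilovolts."""
    V = kv * 1e3
    return h / math.sqrt(2 * m0 * e * V * (1 + e * V / (2 * m0 * c ** 2)))

lam = electron_wavelength(300)     # ~1.97 pm at 300 kV
alpha = 10e-3                      # assumed objective semi-angle (rad)
d_bf = 0.61 * lam / alpha          # coherent bright-field criterion
d_adf = 0.43 * lam / alpha         # incoherent dark-field criterion
print(f"lambda = {lam * 1e12:.2f} pm")
print(f"coherent BF: {d_bf * 1e10:.2f} A, incoherent ADF: {d_adf * 1e10:.2f} A "
      f"({d_bf / d_adf - 1:.0%} finer)")
```

At 300 kV this gives roughly 1.2 Å for coherent bright-field against about 0.85 Å for incoherent dark-field, consistent with the 1 Å target mentioned above.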


2019 ◽  
Vol 4 (5) ◽  
pp. 878-892
Author(s):  
Joseph A. Napoli ◽  
Linda D. Vallino

Purpose: The 2 most commonly used operations to treat velopharyngeal inadequacy (VPI) are the superiorly based pharyngeal flap and sphincter pharyngoplasty, both of which may result in hyponasal speech and airway obstruction. The purpose of this article is to (a) describe the bilateral buccal flap revision palatoplasty (BBFRP) as an alternative technique to manage VPI while minimizing these risks and (b) conduct a systematic review of the evidence of BBFRP on speech and other clinical outcomes. A report comparing the speech of a child with hypernasality before and after BBFRP is presented.

Method: A review of databases was conducted for studies of buccal flaps to treat VPI. Using the principles of a systematic review, the articles were read, and data were abstracted for study characteristics that were developed a priori. With respect to the case report, speech and instrumental data from a child with repaired cleft lip and palate and hypernasal speech were collected and analyzed before and after surgery.

Results: Eight articles were included in the analysis. The results were positive, and the evidence is in favor of BBFRP in improving velopharyngeal function while minimizing the risk of hyponasal speech and obstructive sleep apnea. Before surgery, the child's speech was characterized by moderate hypernasality, and after surgery, it was judged to be within normal limits.

Conclusion: Based on clinical experience and results from the systematic review, there is sufficient evidence that the buccal flap is effective in improving resonance and minimizing obstructive sleep apnea. We recommend BBFRP as another approach in selected patients to manage VPI.

Supplemental Material: https://doi.org/10.23641/asha.9919352

