forthcoming paper
Recently Published Documents

TOTAL DOCUMENTS: 151 (five years: 20)
H-INDEX: 17 (five years: 3)

2021, pp. 1-21
Author(s): Paulo Augusto Lima da Silva, José Antônio Marin Fernandes

Abstract: Grammedessa Correia & Fernandes, 2016 is a genus erected to accommodate some species of Edessa Fabricius, 1803, a very common group of stink bugs restricted to the Neotropical region. Grammedessa was proposed excluding a few species that were morphologically similar but did not fully meet the diagnostic requirements of the genus; it was also proposed without considering a phylogenetic context. In this work, the monophyly of Grammedessa was tested with a cladistic analysis including all originally excluded species, under both Maximum Parsimony and Bayesian methods. As a result, six new species are now included in Grammedessa and will be described in a forthcoming paper; Edessa botocudo Kirkaldy, 1909 was considered an unnecessary replacement name for Edessa hamata Walker, 1868, which is transferred to Grammedessa as G. hamata (Walker, 1868) comb.n. Calcatedessa gen.n., a new genus sister to Grammedessa, is here proposed to include four new species: C. anthomorpha sp.n., C. clarimarginata sp.n., C. germana sp.n. and C. temnomarginata sp.n. The Calcatedessa–Grammedessa clade and both genera were recovered as monophyletic under both Maximum Parsimony and Bayesian methods. An identification key to the species of Calcatedessa gen.n. is provided. The new genus is distributed in Guyana, Suriname, French Guiana, and Brazil.


2021, Vol 36 (31)
Author(s): Koblandy Yerzhanov, Shynaray Myrzakul, Duman Kenzhalin, Martiros Khurshudyan

Phase-space analysis is used to probe the accelerated expansion of the Universe when [Formula: see text] dark energy interacts with cold dark matter. The non-gravitational interactions [Formula: see text] and [Formula: see text] considered in this work are among the first sign-changing interaction models to appear in the literature. A specific [Formula: see text] dark energy model with [Formula: see text] is assumed, and all late-time scaling attractors are found. This is a two-parameter model, with [Formula: see text] and [Formula: see text] to be determined, while [Formula: see text] is the deceleration parameter. In general, the motivation for considering such fluid models is directly related to attempts to unify dark energy and dark matter through the properties of the deceleration parameter. A previous study using a similar dark energy model showed that the BOSS result for the expansion rate at [Formula: see text] can be explained without interaction with cold dark matter; that result provides a reasonable basis for organizing future studies in this direction, and the present study is one of the first such attempts. A full comparison of the models with observational data and the classification of future singularities are left to a forthcoming paper, as are several possible extensions of the model.
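The phase-space method the abstract describes (find the fixed points of the autonomous system, then classify each one by the eigenvalues of the Jacobian) can be sketched concretely. Since the paper's model and interaction terms are hidden behind "[Formula: see text]" placeholders, the system below is the classic exponential-quintessence system with an assumed slope of 1, used purely to illustrate the technique, not the authors' equations.

```python
import sympy as sp

# Illustrative phase-space analysis for a dark-energy model.  Variables x, y
# are the usual dimensionless density variables, evolved in e-folding time
# N = ln a.  The equations below are the standard exponential-quintessence
# system (NOT the paper's model) with a hypothetical slope lam = 1.
x, y = sp.symbols('x y', real=True)
lam = 1

fx = -3*x + sp.sqrt(6)/2*lam*y**2 + sp.Rational(3, 2)*x*(1 + x**2 - y**2)
fy = -sp.sqrt(6)/2*lam*x*y + sp.Rational(3, 2)*y*(1 + x**2 - y**2)

# Step 1: all fixed points of the autonomous system.
fixed_points = sp.solve([fx, fy], [x, y], dict=True)

# Step 2: classify each fixed point by the Jacobian eigenvalues.
# All eigenvalues with negative real part -> a late-time attractor.
J = sp.Matrix([fx, fy]).jacobian([x, y])
for fp in fixed_points:
    eigs = list(J.subs(fp).eigenvals())
    stable = all(sp.re(sp.N(e)) < 0 for e in eigs)
    print(fp, "attractor" if stable else "not an attractor")
```

For this toy slope the scalar-field dominated point is the late-time attractor; in an interacting model the interaction parameters shift which critical points exist and which are stable, which is exactly what the scaling-attractor analysis in the abstract determines.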


2021
Author(s): Simone Puel, Eldar Khattatov, Umberto Villa, Dunyu Liu, Omar Ghattas, ...

We introduce a new finite-element (FE) based computational framework to solve forward and inverse elastic deformation problems for earthquake faulting via the adjoint method. Built on two advanced computational libraries, FEniCS and hIPPYlib, for the forward and inverse problems respectively, the framework is flexible, transparent, and easily extensible. We represent a fault discontinuity through a mixed FE elasticity formulation, which approximates the stress with higher-order accuracy and exposes the prescribed slip explicitly in the variational form, without using conventional split-node and decomposition discrete approaches. This also allows the first-order optimality condition, i.e., the vanishing of the gradient, to be expressed in continuous form, which leads to consistent discretizations of all field variables, including the slip. We show comparisons with the standard pure-displacement formulation and with a model containing an in-plane mode II crack whose slip is prescribed via the split-node technique. We demonstrate the potential of this new computational framework by performing a linear coseismic slip inversion through adjoint-based optimization methods, without requiring the computation of elastic Green's functions. Specifically, we consider a penalized least-squares formulation which, in a Bayesian setting under the assumption of Gaussian noise and prior, reflects the negative log of the posterior distribution. The inversion results are analogous to those of a standard linear inverse-theory approach based on Okada's solutions. Preliminary uncertainties are estimated via eigenvalue analysis of the Hessian of the penalized least-squares objective function. Our implementation is fully open-source, and Jupyter notebooks to reproduce our results are provided. The extension to a fully Bayesian framework for detailed uncertainty quantification and non-linear inversions, including heterogeneous-media earthquake problems, will be analyzed in a forthcoming paper.
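The linear penalized least-squares (MAP) step described above can be sketched in a few lines. This is a generic dense-matrix illustration under synthetic data, not the paper's FEniCS/hIPPYlib implementation: the forward operator `G`, noise level `sigma`, and prior `C_prior_inv` are all hypothetical stand-ins.

```python
import numpy as np

# Minimal sketch of a penalized least-squares slip inversion:
#   m_hat = argmin ||G m - d||^2 / sigma^2 + m^T C_prior_inv m,
# which in a Bayesian setting (Gaussian noise and prior) is the MAP estimate.
# G, sigma and the prior below are synthetic, not the paper's operators.
rng = np.random.default_rng(0)

n_obs, n_slip = 40, 10
G = rng.standard_normal((n_obs, n_slip))        # synthetic forward operator
m_true = np.sin(np.linspace(0, np.pi, n_slip))  # synthetic slip distribution
sigma = 0.05
d = G @ m_true + sigma * rng.standard_normal(n_obs)

C_prior_inv = 1e-2 * np.eye(n_slip)             # hypothetical Gaussian prior

# Normal equations of the penalized objective.  H is the Hessian whose
# eigenvalue analysis gives the preliminary uncertainty estimates.
H = G.T @ G / sigma**2 + C_prior_inv
m_map = np.linalg.solve(H, G.T @ d / sigma**2)

posterior_cov = np.linalg.inv(H)                # Gaussian posterior covariance
```

The adjoint method in the paper computes the gradient of this same objective without ever forming `G` (i.e., without elastic Green's functions); the dense version here just makes the underlying optimization problem explicit.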


2021, Vol 2021 (1)
Author(s): Hanaa M. Zayed

Abstract: An approach to the generalized Bessel–Maitland function is proposed in the present paper. This function, denoted by $\mathcal{J}_{\nu,\lambda}^{\mu}$, where $\mu > 0$ and $\lambda, \nu \in \mathbb{C}$, has attracted increasing interest from both theoretical mathematicians and applied scientists. The main objective is to establish the integral representation of $\mathcal{J}_{\nu,\lambda}^{\mu}$ by applying Gauss's multiplication theorem and the representation of the beta function, as well as a Mellin–Barnes representation using the residue theorem. Moreover, the $m$th derivative of $\mathcal{J}_{\nu,\lambda}^{\mu}$ is considered, and it turns out to be expressible as a Fox–Wright function. In addition, recurrence formulae and other identities involving the derivatives are derived. Finally, we obtain the monotonicity of the ratio of two modified Bessel–Maitland functions $\mathcal{I}_{\nu,\lambda}^{\mu}$, defined by $\mathcal{I}_{\nu,\lambda}^{\mu}(z) = i^{-2\lambda-\nu}\mathcal{J}_{\nu,\lambda}^{\mu}(iz)$, of different orders, the ratio between modified Bessel–Maitland and hyperbolic functions, and some monotonicity results for $\mathcal{I}_{\nu,\lambda}^{\mu}(z)$; the main idea of the proofs comes from the monotonicity of the quotient of two Maclaurin series. As an application, some inequalities (such as Turán-type inequalities and their reverses) are proved. Further investigations of this function are underway and will be reported in a forthcoming paper.
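For orientation, the classical Bessel–Maitland (Wright) function and one common normalization of its two-parameter generalization found in the literature are shown below; the paper's exact normalization may differ, so treat the second display as an assumption rather than Zayed's definition.

```latex
% Classical Bessel--Maitland (Wright) function:
J_{\nu}^{\mu}(z) \;=\; \sum_{n=0}^{\infty} \frac{(-z)^{n}}{n!\,\Gamma(\mu n + \nu + 1)},
\qquad \mu > 0 .

% One normalization of the generalized Bessel--Maitland function seen in the
% literature (assumption; the paper's definition may differ):
\mathcal{J}_{\nu,\lambda}^{\mu}(z) \;=\;
\sum_{n=0}^{\infty}
\frac{(-1)^{n}\,(z/2)^{\nu+2\lambda+2n}}
     {\Gamma(\lambda+n+1)\,\Gamma(\mu n + \nu + \lambda + 1)} ,

% and the modified function used in the monotonicity results (as in the
% abstract):
\mathcal{I}_{\nu,\lambda}^{\mu}(z) \;=\; i^{-2\lambda-\nu}\,
\mathcal{J}_{\nu,\lambda}^{\mu}(iz).
```

With this normalization the prefactor $i^{-2\lambda-\nu}$ exactly cancels the phase of $(iz/2)^{\nu+2\lambda+2n}$ up to $(-1)^n$, so $\mathcal{I}_{\nu,\lambda}^{\mu}$ has nonnegative Maclaurin coefficients, which is what makes quotient-of-Maclaurin-series monotonicity arguments applicable.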


2021, Vol 0 (0)
Author(s): Quoc-Hung Nguyen, Yannick Sire, Juan-Luis Vázquez

Abstract: This paper is devoted to a simple proof of the generalized Leibniz rule in bounded domains. The operators under consideration are the so-called spectral Laplacian and the restricted Laplacian. Equations involving such operators have lately been considered by Constantin and Ignatova in the framework of the SQG equation in bounded domains [P. Constantin and M. Ignatova, Critical SQG in bounded domains, Ann. PDE 2 (2016), no. 2, Article ID 8], and by two of the authors in the framework of the porous medium equation with nonlocal pressure in bounded domains [Q.-H. Nguyen and J. L. Vázquez, Porous medium equation with nonlocal pressure in a bounded domain, Comm. Partial Differential Equations 43 (2018), no. 10, 1502–1539]. We will use the estimates of this work in a forthcoming paper on porous medium equations with pressure given by Riesz-type potentials.


Author(s): Judith Felson Duchan, Susan Felsenfeld

BACKGROUND: Cluttering has been described in the literature on speech disorders for over 300 years. Despite this, it remains a poorly understood condition whose history has not been analyzed as a whole to identify common themes and underlying frameworks. OBJECTIVE: The purpose of this review is to identify thematic questions and frameworks contained within the literature on cluttering since the earliest known reference in 1717. METHODS: Information from influential historical and contemporary documents was analyzed. Particular attention was paid to the types of questions, both implicit and explicit, posed in these materials. This information was ultimately organized into five thematic strands, presented here in the form of key questions. RESULTS: Five questions were derived from our historical analysis: (1) What should the problem be called? (2) What kind of problem is it? (3) What are its defining features? (4) What are its causes? and (5) How should it be treated? The first four questions are discussed in this review; the fifth will be addressed in a forthcoming paper. CONCLUSIONS: Consensus has been achieved on what to call the disorder (cluttering) and in what domain it should be placed (fluency). Less agreement exists regarding its defining features, causes, and treatment. We propose that alternative conceptual frameworks may be useful in breaking new ground in our understanding and management of this complex condition.


2021, Vol 82 (2)
Author(s): Paola D'Aquino, Jamshid Derakhshan, Angus Macintyre

Abstract: We give axioms for a class of ordered structures, called truncated ordered abelian groups (TOAGs), carrying an addition. TOAGs come naturally from ordered abelian groups with a $0$ and a $+$, but the addition of a TOAG is not necessarily even a cancellative semigroup. The main examples are initial segments $[0,\tau]$ of an ordered abelian group, with a truncation of the addition. We prove that any model of these axioms (i.e. a truncated ordered abelian group) is an initial segment of an ordered abelian group. We define Presburger TOAGs, give a criterion for a TOAG to be a Presburger TOAG and for two Presburger TOAGs to be elementarily equivalent, and prove analogues of classical results on Presburger arithmetic. Their main interest for us comes from the model theory of certain local rings which are quotients of valuation rings valued in a truncation $[0,a]$ of the ordered group $\mathbb{Z}$, or of more general ordered abelian groups, via a study of these truncations without reference to the ambient ordered abelian group. The results are used essentially in a forthcoming paper (D'Aquino and Macintyre, The model theory of residue rings of models of Peano Arithmetic: The prime power case, 2021, arXiv:2102.00295) in the solution of a problem of Zilber about the logical complexity of quotient rings, by principal ideals, of nonstandard models of Peano arithmetic.
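A minimal concrete reading of "a truncation of the addition" on an initial segment (our illustration of the construction; the paper's axioms are more general) is:

```latex
% On the initial segment [0, \tau] of an ordered abelian group G, define the
% truncated addition (illustrative example, not the paper's axiom system):
a \oplus b \;=\;
\begin{cases}
a + b, & \text{if } a + b \le \tau,\\[2pt]
\tau,  & \text{otherwise.}
\end{cases}
```

This example also shows why the addition need not be cancellative: whenever $a + b > \tau$ and $a + c > \tau$ with $b \neq c$, both sums truncate to $a \oplus b = a \oplus c = \tau$.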


Author(s): Diaa E Fawzy, Manfred Cuntz

Abstract: We present theoretical models of chromospheric heating for 55 Cancri, an orange dwarf of relatively low activity. Self-consistent, nonlinear, time-dependent ab-initio numerical computations are pursued, encompassing the generation, propagation, and dissipation of waves. We consider longitudinal waves operating in arrays of flux tubes as well as acoustic waves pertaining to nonmagnetic stellar regions. Additionally, flux enhancements for the longitudinal waves, as supplied by transverse tube waves, are taken into account. The Ca II K fluxes are computed (multi-ray treatment) assuming partial redistribution as well as time-dependent ionization. The self-consistent treatment of time-dependent ionization (especially for hydrogen) greatly impacts the atmospheric temperatures and electron densities (especially behind the shocks); it also affects the emergent Ca II fluxes. In particular, we focus on the influence of magnetic heating on the stellar atmospheric structure and the emergent Ca II emission, as well as on the impact of nonlinearities. Our study shows that a higher photospheric magnetic filling factor entails larger Ca II emission, whereas an increased initial wave energy flux (e.g., associated with mode coupling) makes little difference. Comparisons of our theoretical results with observations will be conveyed in a forthcoming Paper II.


Universe, 2021, Vol 7 (1), pp. 15
Author(s): Xiang Liu, Xin Wang, Ning Chang, Jun Liu, Lang Cui, ...

Two dozen radio-loud active galactic nuclei (AGNs) have been observed with the Urumqi 25 m radio telescope in order to search for intra-day variability (IDV). The target sources are blazars (namely flat-spectrum radio quasars and BL Lac objects), mostly selected from the observing list of the RadioAstron AGN monitoring campaigns. The observations were carried out at 4.8 GHz in two sessions, 8–12 February and 7–9 March 2014. We report the data reduction and the first results of the observations. The results show that the majority of the blazars exhibit IDV at the 99.9% confidence level, and some of them show quite strong IDV. We find strong IDV of the blazar 1357+769 for the first time. IDV at centimeter wavelengths is believed to be caused predominantly by scintillation of the blazar emission in the local interstellar medium, within a few hundred parsecs of the Sun. No significant correlation between IDV strength and either redshift or Galactic latitude is found in our sample. The IDV timescales, along with source structure and brightness temperature analyses, will be presented in a forthcoming paper.
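The variability assessment described above is conventionally done with the modulation index and a chi-squared test of the constant-flux hypothesis. The sketch below follows that standard single-dish IDV recipe on synthetic light curves; the Urumqi pipeline's exact implementation may differ, and the data here are invented for illustration.

```python
import numpy as np
from scipy import stats

# Standard single-dish IDV statistics (the usual recipe in the IDV
# literature; not the specific Urumqi pipeline): modulation index
# m = sigma_S / <S>, plus a chi-squared test of the constant-flux
# hypothesis at the 99.9% confidence level.
def idv_stats(flux, flux_err):
    flux = np.asarray(flux, dtype=float)
    flux_err = np.asarray(flux_err, dtype=float)
    m = flux.std(ddof=1) / flux.mean()            # modulation index
    chi2 = np.sum(((flux - flux.mean()) / flux_err) ** 2)
    dof = flux.size - 1
    p_const = stats.chi2.sf(chi2, dof)            # P(chi2 | constant source)
    variable = p_const < 1e-3                     # 99.9% confidence
    return m, chi2, variable

# Synthetic example: a steady calibrator-like source vs. a source with ~5%
# scintillation, both with 1% measurement errors.
rng = np.random.default_rng(1)
t = 60
err = np.full(t, 0.01)
quiet = 1.0 + err * rng.standard_normal(t)
scint = 1.0 + 0.05 * np.sin(np.linspace(0, 12, t)) + err * rng.standard_normal(t)

print(idv_stats(quiet, err))   # expected: not flagged as variable
print(idv_stats(scint, err))   # expected: flagged as variable
```

In practice secondary calibrators are used to verify that residual systematic variations stay well below the claimed IDV amplitudes before a detection is accepted.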


Author(s): Anatoly Zhigljavsky, Roger Whitaker, Ivan Fesenko, Kobi Kremnizer, Jack Noonan, ...

Abstract: Coronavirus COVID-19 spreads through the population mostly via social contact. To gauge the potential for widespread contagion, to cope with the associated uncertainty, and to inform its mitigation, accurate and robust modelling is centrally important for policy making.

We provide a flexible modelling approach that increases the accuracy with which insights can be made, and use it to analyse different scenarios relevant to the COVID-19 situation in the UK. We present a stochastic model that captures the inherently probabilistic nature of contagion between population members. The computational nature of our model means that spatial constraints (e.g., communities and regions), the susceptibility of different age groups, and other factors such as medical pre-histories can be incorporated with ease. The model is robust to small changes in the parameters and is flexible enough to deal with different scenarios.

This approach goes beyond the convention of representing the spread of an epidemic through a fixed cycle of susceptibility, infection and recovery (SIR). It is important to emphasise that standard SIR-type models, unlike our model, are neither flexible enough nor stochastic, and should therefore be used with extreme caution. Our model allows both heterogeneity and inherent uncertainty to be incorporated. Because of the scarcity of verified data, we draw insights by calibrating our model using parameters from other relevant sources, including agreement on average (mean field) with parameters in SIR-based models.

We use the model to assess parameter sensitivity for a number of key variables that characterise the COVID-19 epidemic, and test several control parameters with respect to their influence on the severity of the outbreak. Our analysis shows that, owing to the inclusion of spatial heterogeneity in the population and the asynchronous timing of the epidemic across different areas, the severity of the epidemic might be lower than expected from other models. We find that one of the most crucial control parameters that may significantly reduce the severity of the epidemic is the degree of separation of vulnerable people and people aged 70 years and over, but note that isolation of other groups also affects the severity of the epidemic. It is important to remember that models are there to advise, not to replace, reality, and that any action should be coordinated and approved by public health experts with experience in dealing with epidemics.

The computational approach makes further extensive scenario-based analysis possible. This, together with a comprehensive study of the sensitivity of the model to the parameters defining COVID-19 and its development, will be the subject of a forthcoming paper, in which we shall also extend the model to consider different probabilistic scenarios for infected people with mild and severe cases.
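To make the contrast with the conventional fixed-cycle approach concrete, here is a minimal discrete-time stochastic SIR simulation, i.e. the kind of baseline the abstract argues is too rigid. The parameters (beta, gamma, population size) are hypothetical and not calibrated to COVID-19; the authors' model adds spatial structure, age groups and other heterogeneity on top of stochasticity.

```python
import numpy as np

# Minimal discrete-time stochastic SIR baseline (the conventional model the
# abstract contrasts its approach with).  beta, gamma and N are hypothetical.
rng = np.random.default_rng(42)

N = 10_000                # population size
beta, gamma = 0.3, 0.1    # daily infection / recovery rates (hypothetical)
S, I, R = N - 10, 10, 0   # start with 10 infected

history = []
for day in range(365):
    # Each susceptible is infected today with probability 1 - exp(-beta*I/N);
    # each infected recovers with probability 1 - exp(-gamma).  Binomial
    # draws make the trajectory stochastic rather than deterministic.
    new_inf = rng.binomial(S, 1.0 - np.exp(-beta * I / N))
    new_rec = rng.binomial(I, 1.0 - np.exp(-gamma))
    S, I, R = S - new_inf, I + new_inf - new_rec, R + new_rec
    history.append((S, I, R))

peak_I = max(i for _, i, _ in history)
print("peak infections:", peak_I, "final recovered:", R)
```

Even this stochastic baseline still forces the whole population through one synchronous S-to-I-to-R cycle with homogeneous mixing; relaxing exactly that assumption (spatial constraints, age-dependent susceptibility, asynchronous regional timing) is what the abstract's model adds.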

