exponential rates
Recently Published Documents

TOTAL DOCUMENTS: 64 (FIVE YEARS: 19)
H-INDEX: 10 (FIVE YEARS: 1)

Author(s):  
Andrew Larkin

Abstract We study rates of mixing for small random perturbations of one-dimensional Lorenz maps. Using a random tower construction, we prove that, for Hölder observables, the random system admits exponential rates of quenched correlation decay.
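A minimal numerical illustration of the phenomenon, not the paper's construction: the doubling map x ↦ 2x (mod 1) with a small additive random perturbation serves as a toy stand-in for a one-dimensional Lorenz map, and the empirical correlations of a Lipschitz (hence Hölder) observable shrink rapidly with the lag. The map, noise level, and observable below are assumptions chosen purely for illustration.

```python
# Illustrative sketch only: empirical correlation decay for a small random
# perturbation of the doubling map, a toy analogue of a 1-D Lorenz map.
import numpy as np

rng = np.random.default_rng(0)
eps = 1e-3                      # amplitude of the random perturbation (assumed)
n_steps = 200_000
x = rng.random()

orbit = np.empty(n_steps)
for k in range(n_steps):
    orbit[k] = x
    x = (2.0 * x + eps * rng.uniform(-1.0, 1.0)) % 1.0   # randomly perturbed map

f = orbit - orbit.mean()        # identity observable, centered (Lipschitz, hence Hölder)

for n in (1, 2, 3, 4, 5, 6, 8):
    corr = np.mean(f[:-n] * f[n:])          # empirical correlation at lag n
    print(f"lag {n:2d}: correlation ~ {corr:+.2e}")
# The printed correlations roughly halve with each unit increase of the lag,
# i.e. decay geometrically, until they reach the Monte Carlo noise floor.
```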


2021 ◽  
Author(s):  
Megha Mathur ◽  
Sumeet Patiyal ◽  
Anjali Dhall ◽  
Shipra Jain ◽  
Ritu Tomer ◽  
...  

In the past few decades, public repositories of nucleotide sequences have grown at exponential rates. This poses a major challenge to researchers who want to predict the structure and function of nucleotide sequences. In order to annotate the function of nucleotide sequences, it is important to compute features/attributes that machine learning techniques can use for function prediction. In the last two decades, several software packages and platforms have been developed to extract a wide range of features from nucleotide sequences. In order to complement the existing methods, here we present a platform named Nfeature, developed for computing a wide range of features of DNA and RNA sequences. It comprises three major modules, namely Composition, Correlation, and Binary profiles. The Composition module computes different types of composition, including mono-/di-/tri-nucleotide composition, reverse complement composition, and pseudo composition. The Correlation module computes various types of correlation, including auto-correlation, cross-correlation, and pseudo-correlation. Similarly, the Binary profile module computes binary profiles based on nucleotides, di-nucleotides, and di-/tri-nucleotide properties. Nfeature also computes the entropy of sequences, repeats in sequences, and the distribution of nucleotides in sequences. In addition to computing features over whole sequences, it can compute features from parts of a sequence, such as split composition, N-terminal, and C-terminal regions. In a nutshell, Nfeature amalgamates existing features as well as a number of novel features, such as a nucleotide repeat index, distance distribution, entropy, binary profiles, and properties. The tool computes a total of 29217 and 14385 features for DNA and RNA sequences, respectively. In order to provide a highly efficient and user-friendly tool, we have developed both a standalone package and a web-based platform (https://webs.iiitd.edu.in/raghava/nfeature).
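To make two of the feature families concrete, here is a minimal sketch (not the Nfeature code itself) that computes mono-/di-nucleotide composition and a one-hot binary profile for a toy DNA sequence; the function names and the example sequence are hypothetical.

```python
# Minimal sketch of nucleotide composition and binary-profile features.
from itertools import product

BASES = "ACGT"

def nucleotide_composition(seq: str, k: int = 1) -> dict:
    """Fraction of each k-mer (k=1: mono-, k=2: di-nucleotide composition)."""
    kmers = ["".join(p) for p in product(BASES, repeat=k)]
    total = max(len(seq) - k + 1, 1)
    counts = {m: 0 for m in kmers}
    for i in range(len(seq) - k + 1):
        kmer = seq[i:i + k]
        if kmer in counts:
            counts[kmer] += 1
    return {m: c / total for m, c in counts.items()}

def binary_profile(seq: str) -> list:
    """One-hot encoding (A, C, G, T) of each position along the sequence."""
    return [[1 if base == b else 0 for b in BASES] for base in seq]

seq = "ATGCGCATTA"                          # placeholder sequence
print(nucleotide_composition(seq, k=1))     # mono-nucleotide composition
print(nucleotide_composition(seq, k=2))     # di-nucleotide composition
print(binary_profile(seq)[:3])              # first three one-hot rows
```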


2021 ◽  
Vol 11 (11) ◽  
pp. 1119
Author(s):  
Elizabeth B. Torres

In the last decade, Autism has broadened and often shifted its diagnostic criteria, allowing the inclusion of several neuropsychiatric and neurological disorders of known etiology. This has resulted in a highly heterogeneous spectrum with apparently exponential rates of growth in prevalence. I ask whether it is possible to leverage existing genetic information about the disorders making up Autism today and use it to stratify this spectrum. To that end, I combine genes linked to Autism in the SFARI database with genomic information from the DisGeNET portal on 25 diseases, inclusive of non-neurological ones. I use the GTEx data on gene expression across 54 human tissues and ask whether there are overlapping genes between those associated with these diseases and those from SFARI-Autism. I find a compact set of genes shared across all brain disorders which are highly expressed in tissues fundamental for somatic-sensory-motor function, self-regulation, memory, and cognition. I then offer a new stratification that provides a distance-based, orderly clustering into possible Autism subtypes, amenable to the design of personalized targeted therapies within the framework of Precision Medicine. I conclude that viewing Autism through this physiological (Precision) lens, rather than viewing it exclusively from a psychological behavioral construct, may make it a more manageable condition and dispel the Autism epidemic myth.
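A hedged sketch of the kind of overlap-and-cluster analysis described above, using placeholder gene lists rather than the actual SFARI, DisGeNET, or GTEx data: disorders are compared by the Jaccard distance between their gene sets and then hierarchically clustered into candidate subtypes.

```python
# Sketch only: intersect hypothetical gene lists and cluster disorders by
# gene-set overlap. All gene symbols and disorder names are placeholders.
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import squareform

gene_sets = {
    "Autism_SFARI": {"SHANK3", "CHD8", "SCN2A", "MECP2", "PTEN"},
    "DisorderA":    {"SHANK3", "MECP2", "GRIN2B"},
    "DisorderB":    {"SCN2A", "PTEN", "TSC1"},
    "DisorderC":    {"TTN", "MYH7"},              # non-neurological control
}

names = list(gene_sets)
n = len(names)
dist = np.zeros((n, n))
for i in range(n):
    for j in range(i + 1, n):
        a, b = gene_sets[names[i]], gene_sets[names[j]]
        jaccard = len(a & b) / len(a | b)         # gene-set overlap
        dist[i, j] = dist[j, i] = 1.0 - jaccard   # distance = 1 - similarity

Z = linkage(squareform(dist), method="average")   # hierarchical clustering
print(dict(zip(names, fcluster(Z, t=2, criterion="maxclust"))))
```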


2021 ◽  
Vol 16 ◽  
pp. 88-103
Author(s):  
Maureen Goggin

We are living in an era where reality, truth, and facts are being turned upside down and inside out. Fake news and falsehoods are being spewed out at increasing, exponential rates. Prompted to do something about the spread of fake news through post-truth discourse, I designed an undergraduate course that I titled Bullshit, Fake News, and Alternative Facts. In this piece, I critically reflect on and share the theoretical frames behind the course, its design, my experience teaching it, and the results of a survey about the class, and I call on all of you to work at least some material on post-truth into your classes, or into a full course as I have.


2021 ◽  
Author(s):  
Elizabeth B Torres

In the last decade, Autism has broadened and often shifted its diagnostic criteria, allowing the inclusion of several neuropsychiatric and neurological disorders of known etiology. This has resulted in a highly heterogeneous spectrum with apparently exponential rates of growth in prevalence. We ask whether it is possible to leverage existing genetic information about the disorders making up Autism today and use it to stratify this spectrum. To that end, we combine genes linked to Autism in the SFARI database with genomic information from the DisGeNET portal on 25 diseases, inclusive of non-neurological ones. We use the GTEx data on gene expression across 54 human tissues and ask whether there are overlapping genes between those associated with these diseases and those from Autism-SFARI. We find a compact set of genes shared across all brain disorders which are highly expressed in tissues fundamental for somatic-sensory-motor function, self-regulation, memory, and cognition. We then offer a new stratification that provides a distance-based, orderly clustering into possible Autism subtypes, amenable to the design of personalized targeted therapies within the framework of Precision Medicine. We conclude that viewing Autism through this physiological (Precision) lens, rather than through a psychological behavioral construct, may make it a more manageable condition and dispel the Autism epidemic myth.


2021 ◽  
Author(s):  
Jake Harmon ◽  
Jeremiah Corrado ◽  
Branislav Notaros

We present an application of refinement-by-superposition (RBS) hp-refinement in computational electromagnetics (CEM), which permits exponential rates of convergence. In contrast to the dominant approaches to hp-refinement for continuous Galerkin methods, which rely on constrained nodes, the multi-level strategy presented here drastically reduces implementation complexity. Through the RBS methodology, continuity is enforced by construction, enabling arbitrary levels of refinement with ease and without the practical (though not theoretical) limitations of constrained-node refinement. We outline the construction of the RBS hp-method for refinement with H(curl)- and H(div)-conforming finite cells. Numerical simulations of the 2-D finite element method (FEM) solution of the Maxwell eigenvalue problem demonstrate the effectiveness of RBS hp-refinement. As an additional goal of this work, we aim to promote the use of mixed-order (low- and high-order) elements in practical CEM applications.
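The key selling point is exponential convergence under p-enrichment for smooth solutions. The sketch below is not the RBS method itself; assuming a smooth 1-D target function, it only illustrates how polynomial (p-type) enrichment, here via Chebyshev interpolation, drives the error down exponentially in the degree, in contrast to the algebraic rates of fixed-order h-refinement.

```python
# Illustration only (not RBS hp-refinement): exponential error decay in the
# polynomial degree when approximating a smooth 1-D function.
import numpy as np

f = lambda x: np.exp(np.sin(3.0 * x))            # a smooth (analytic) target
x_test = np.linspace(-1.0, 1.0, 2001)

for p in (2, 4, 8, 16, 32):
    # Interpolate at the p+1 Chebyshev-Lobatto points, then measure max error.
    nodes = np.cos(np.pi * np.arange(p + 1) / p)
    coef = np.polynomial.chebyshev.chebfit(nodes, f(nodes), p)
    err = np.max(np.abs(np.polynomial.chebyshev.chebval(x_test, coef) - f(x_test)))
    print(f"degree {p:2d}: max error ~ {err:.2e}")
# The error drops by orders of magnitude each time the degree doubles, i.e.
# exponentially fast, unlike the algebraic rates of low-order h-refinement.
```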


Author(s):  
Yassir Samadi ◽  
Mostapha Zbakh ◽  
Amine Haouari

The size of the data used by enterprises has been growing at exponential rates over the last few years, and handling such huge volumes of data from various sources is a challenge for businesses. In addition, Big Data has become one of the major areas of research for Cloud service providers, due to the large amount of data produced every day and the inefficiency of traditional algorithms and technologies in handling it. In order to resolve the aforementioned problems and to meet the increasing demand for high-speed, data-intensive computing, several solutions have been developed by researchers and developers. Among these solutions are Cloud Computing tools such as Hadoop MapReduce and Apache Spark, which work on the principles of parallel computing. This chapter focuses on how big data processing challenges can be handled by using Cloud Computing frameworks, and on the importance of Cloud Computing for businesses.
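As a concrete illustration of the parallel-computing principle behind these frameworks, here is a minimal MapReduce-style word count expressed with Apache Spark's RDD API. It assumes a local pyspark installation, and the input path is a placeholder.

```python
# Minimal sketch: word count with Apache Spark (MapReduce-style map/reduce).
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("wordcount-sketch").getOrCreate()
sc = spark.sparkContext

lines = sc.textFile("hdfs:///data/input.txt")      # placeholder input path
counts = (
    lines.flatMap(lambda line: line.split())       # map: one record per word
         .map(lambda word: (word, 1))              # map: (word, 1) pairs
         .reduceByKey(lambda a, b: a + b)          # reduce: sum counts per word
)
for word, count in counts.take(10):                # pull a small sample to the driver
    print(word, count)

spark.stop()
```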


2020 ◽  
Vol 0 (0) ◽  
Author(s):  
Lars Diening ◽  
Christian Kreuzer

Abstract It is an open question whether the threshold condition θ < θ⋆ on the Dörfler marking parameter is necessary to obtain optimal algebraic rates for adaptive finite element methods. We present a (non-PDE) example fitting into the common abstract convergence framework (the axioms of adaptivity) which allows for convergence with exponential rates. However, for Dörfler marking with θ > θ⋆, the algebraic convergence rate can be made arbitrarily small.
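For context, a minimal sketch of the Dörfler (bulk-chasing) marking step that the parameter θ controls: mark a smallest set of elements whose squared error indicators account for at least a fraction θ of the total. This is a generic illustration, not the authors' example; the indicator values are placeholders.

```python
# Generic Dörfler marking: greedily select a minimal set of elements whose
# squared indicators sum to at least theta times the total.
import numpy as np

def doerfler_mark(indicators: np.ndarray, theta: float) -> np.ndarray:
    """Return indices of marked elements (greedy minimal bulk set)."""
    eta_sq = indicators ** 2
    order = np.argsort(eta_sq)[::-1]             # largest indicators first
    cumulative = np.cumsum(eta_sq[order])
    n_marked = np.searchsorted(cumulative, theta * eta_sq.sum()) + 1
    return order[:n_marked]

eta = np.array([0.9, 0.05, 0.3, 0.02, 0.6])      # toy error indicators
print(doerfler_mark(eta, theta=0.5))             # marks the dominant element(s)
```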


2020 ◽  
Vol 20 (1) ◽  
Author(s):  
Thomas Perneger ◽  
Antoine Kevorkian ◽  
Thierry Grenet ◽  
Hubert Gallée ◽  
Angèle Gayet-Ageron

Abstract

Background: Classic epidemic curves, i.e. counts of daily events or of cumulative events over time, emphasise temporal changes in the growth or size of epidemic outbreaks. Like any graph, these curves have limitations: they are impractical for comparisons of large and small outbreaks or of asynchronous outbreaks, and they do not display the relative growth rate of the epidemic. Our aim was to propose two additional graphical displays for the monitoring of epidemic outbreaks that overcome these limitations.

Methods: The first graph shows the growth of the epidemic as a function of its size; specifically, the logarithm of new cases on a given day, N(t), is plotted against the logarithm of cumulative cases, C(t). The logarithm transformations facilitate comparisons of outbreaks of different sizes, and the lack of a time scale removes the need to establish a starting time for each outbreak. Notably, on this graph, exponential growth corresponds to a straight line with a slope equal to one. The second graph represents the logarithm of the relative growth rate of the epidemic over time; specifically, log10(N(t)/C(t-1)) is plotted against the time (t) since the 25th event. We applied these methods to daily death counts attributed to COVID-19 in selected countries, reported up to June 5, 2020.

Results: In most countries, the log(N) versus log(C) plots showed an initially near-linear increase in COVID-19 deaths, followed by a sharp downturn. They enabled comparisons of small and large outbreaks (e.g., Switzerland vs the UK), and identified outbreaks that were still growing at near-exponential rates (e.g., Brazil or India). The plots of log10(N(t)/C(t-1)) over time showed a near-linear decrease (on a log scale) of the relative growth rate of most COVID-19 epidemics, and identified countries in which this decrease failed to set in during the early weeks (e.g., the USA) or abated late in the outbreak (e.g., Portugal or Russia).

Conclusions: The plot of log(N) versus log(C) displays simultaneously the growth and the size of an epidemic, and allows easy identification of exponential growth. The plot of the logarithm of the relative growth rate over time highlights an essential parameter of epidemic outbreaks.
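A minimal sketch of the two proposed displays, applied to a synthetic daily death series; the numbers below are made up purely for illustration.

```python
# Sketch of the two epidemic-monitoring displays described above.
import numpy as np
import matplotlib.pyplot as plt

daily = np.array([1, 2, 3, 6, 10, 18, 30, 45, 60, 70, 65, 55, 40, 30, 20, 12, 8, 5])
cumulative = np.cumsum(daily)

fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(9, 4))

# Display 1: new events N(t) against cumulative events C(t), both on log scales.
# Exponential growth appears as a straight line of slope one.
ax1.loglog(cumulative, daily, marker="o")
ax1.set_xlabel("cumulative events C(t)")
ax1.set_ylabel("new events N(t)")

# Display 2: log10 of the relative growth rate N(t)/C(t-1) over time.
rel_growth = daily[1:] / cumulative[:-1]
ax2.plot(np.arange(1, len(daily)), np.log10(rel_growth), marker="o")
ax2.set_xlabel("day t")
ax2.set_ylabel("log10( N(t) / C(t-1) )")

plt.tight_layout()
plt.show()
```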

