real sequence
Recently Published Documents


TOTAL DOCUMENTS: 91 (five years: 36)

H-INDEX: 8 (five years: 1)

2022 ◽  
Vol 962 (1) ◽  
pp. 012054
Author(s):  
A V Kurguzova ◽  
M V Morozov

Abstract The history of the discovery of the world's largest Ni-Cu-PGM deposits, Norilsk-Talnakh, is revised. The 1866 prospecting and geographic expedition of Innokenty Lopatin and Friedrich Schmidt studied the lower Yenissei territories and collected information and mineralogical samples (chalcopyrite from 'copper slates'), thereby proving the presence of a copper ore deposit in the Norilsk mountains. The deposit had been developed by at least two adits since 1865 and was managed by the brothers Pyotr and Cyprian Sotnikov from the settlement of Dudino (now Dudinka). This information was documented in the diaries of I. Lopatin and was reported by F. Schmidt in the transactions of the Saint Petersburg Academy of Sciences. After the 're-discovery' of the deposit in the 20th century, later authors ignored, omitted, or incorrectly cited the information published by Friedrich Schmidt in 1869 and 1872, as well as its republication by Vladimir Obrutchev in 1917. The real sequence of events leading to the discovery of the Norilsk deposits has to be rewritten. In memoriam of Sergey Gorbunov (1952–2021), archaeologist, traveler, Sakhalin history specialist


Sensors ◽  
2021 ◽  
Vol 21 (24) ◽  
pp. 8352
Author(s):  
Junrong Zhang ◽  
Huiming Tang ◽  
Dwayne D. Tannant ◽  
Chengyuan Lin ◽  
Ding Xia ◽  
...  

With the widespread application of machine learning methods, continuous improvement of forecast accuracy has become an important task, which is especially crucial for landslide displacement prediction. This study aimed to propose a novel prediction model to improve accuracy in landslide prediction, based on a combination of multiple new algorithms. The proposed method includes three parts: data preparation, multi-swarm intelligence (MSI) optimization, and displacement prediction. In data preparation, complete ensemble empirical mode decomposition (CEEMD) is adopted to separate the trend and periodic displacements from the observed cumulative landslide displacement. The frequency and residual components of the reconstructed inducing factors related to landslide movements are also extracted by CEEMD and a t-test, and then selected with edit distance on real sequence (EDR) as input variables for the support vector regression (SVR) model. In the MSI optimization part, MSI optimization algorithms are used to optimize the SVR model, yielding six prediction models for the displacement prediction part. Finally, the trend and periodic displacements are predicted by the six optimized SVR models, respectively. The trend and periodic displacements with the highest prediction accuracy are summed and regarded as the final prediction result. A case study of the Shiliushubao landslide shows that the prediction results match the observed data well, with an improvement in average relative error, which indicates that the proposed model can predict landslide displacements with high precision, even when the displacements are characterized by stepped curves under the influence of multiple time-varying factors.
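The edit distance on real sequence (EDR) used in this pipeline for feature selection has a standard dynamic-programming form; the following is a minimal sketch (the tolerance `eps` and the sample sequences are illustrative, not values from the study):

```python
def edr(s, t, eps):
    """Edit distance on real sequence (EDR) between two 1-D sequences.

    A pair of elements 'matches' (substitution cost 0) when they differ
    by at most eps; otherwise insertions, deletions, and substitutions
    each cost 1, as in classical edit distance.
    """
    m, n = len(s), len(t)
    # d[i][j] = EDR between the first i elements of s and the first j of t
    d = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m + 1):
        d[i][0] = i
    for j in range(n + 1):
        d[0][j] = j
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            sub = 0 if abs(s[i - 1] - t[j - 1]) <= eps else 1
            d[i][j] = min(d[i - 1][j - 1] + sub,  # match / substitute
                          d[i - 1][j] + 1,        # delete from s
                          d[i][j - 1] + 1)        # insert from t
    return d[m][n]

print(edr([1.0, 2.0, 3.0], [1.0, 2.1, 4.0], eps=0.2))  # 1
```

A small EDR value between a candidate inducing factor and the displacement series signals a similar shape despite noise, which is why it can serve as a selection criterion for SVR inputs.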


2021 ◽  
pp. 14-30
Author(s):  
Stephen M. Hart

Any biographical essay on the famous Colombian writer Gabriel García Márquez (1927–2014) must take into account biographies that have already been written—including, of course, Dasso Saldívar’s thoughtful García Márquez: El viaje a la semilla; La biografía (1997), Gerald Martin’s excellent Gabriel García Márquez: A Life (2008), and Stephen M. Hart’s Gabriel García Márquez (2010)—counterbalanced by García Márquez’s own autobiography, Vivir para contarla (2002). This article (1) sets out the intrinsically significant events of Gabo’s life and the impact they had on his development as a writer (journalist, film critic, cultural/political commentator, writer of short fiction and long fiction); (2) focuses on the osmosis between his life and his literary work, including an analysis of the first and only volume of his memoirs and how they overlap with his literary works and, indeed, are at times overwhelmed by them, as is evident in particular in El amor en los tiempos del cólera (1985), inspired by his parents’ love affair, in which the version of events provided by the novel supersedes the “real” sequence of events; and (3) uses the notion of doubleness—evident in his life via the opposition between his “real” family and his “false” family of illegitimate offspring, produced by his grandfather’s wanton ways, as well as the figure of the “double” in his fiction and particularly Cien años de soledad (1967)—as a structuring device of the article’s emplotment.


2021 ◽  
Vol 2021 ◽  
pp. 1-9
Author(s):  
Sergiusz Kęska

Chaundy and Jolliffe proved that if $a_n$ is a nonnegative, nonincreasing real sequence, then the series $\sum a_n \sin nx$ converges uniformly if and only if $n a_n \to 0$. The purpose of this paper is to show that if $n a_n$ is nonincreasing and $n a_n \to 0$, then the series $f(x) = \sum a_n \sin nx$ can be differentiated term-by-term on $[c, d]$ for $c, d > 0$. However, $f'(0)$ may not exist.
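For reference, the Chaundy–Jolliffe criterion summarized in the abstract can be written formally (a transcription of the abstract's claim, not text from the paper itself):

```latex
% Chaundy--Jolliffe: uniform convergence of a sine series with
% nonnegative, nonincreasing coefficients
\[
  a_n \ge a_{n+1} \ge 0
  \quad\Longrightarrow\quad
  \Bigl( \sum_{n=1}^{\infty} a_n \sin nx \ \text{converges uniformly}
  \iff \lim_{n\to\infty} n a_n = 0 \Bigr).
\]
```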


2021 ◽  
Vol 87 (5) ◽  
pp. 56-60
Author(s):  
I. V. Gadolina ◽  
R. V. Voronkov

Estimation of the scatter of durability at the second stage of fatigue, namely the stage of crack propagation, is a problem of scientific and obvious practical importance: machines operate according to their technical condition, which requires monitoring the actual crack length during their service life. The limits of the spread of the durability values at the stage of crack propagation in aluminum samples are studied using published data and a previously developed model. In view of the great importance of this problem, a special simulation model was used to generate the extrema of a random sequence based on target Markov matrices. On the one hand, this simulation method reproduces the characteristic traits of real loading sequences in service (the TWIST standard in this example). On the other hand, it contains reasonable randomness; these two features together provide an opportunity to study the variability of the crack growth rate. For the simulation experiment, literature data on aluminum and steel samples were used along with popular fatigue crack growth models (the Paris, Forman, and Willenborg models). In addition, Miner’s summation rule was quantitatively tested to estimate the crack growth resistance coefficient under various loads. Agreement with the published experimental data is shown. Preliminary data on the effect of the type of loading (random or block) on durability are given on the basis of scientific literature data. The proposed simulation method can be useful for testing various models. It is also intended to support the development of an experimental design for laboratory testing in the future.
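As a minimal illustration of the crack-growth models named above, the Paris law can be integrated numerically to estimate the number of cycles needed to grow a crack between two lengths (all material constants below are hypothetical placeholders, not data from the paper):

```python
import math

def paris_cycles(a0, af, C, m, delta_sigma, Y=1.0, dN=1000):
    """Integrate the Paris law da/dN = C * (dK)^m with forward Euler.

    a0, af      : initial / final crack length (m)
    C, m        : Paris-law material constants (hypothetical here)
    delta_sigma : stress range (MPa)
    Y           : geometry factor
    dN          : cycle increment per integration step
    Returns the estimated number of cycles to grow the crack from a0 to af.
    """
    a, cycles = a0, 0
    while a < af:
        dK = Y * delta_sigma * math.sqrt(math.pi * a)  # stress intensity range
        a += C * dK ** m * dN                          # crack extension over dN cycles
        cycles += dN
    return cycles

# Example with placeholder aluminum-like constants
n_cycles = paris_cycles(a0=1e-3, af=1e-2, C=1e-11, m=3.0, delta_sigma=100.0)
print(n_cycles)
```

Because the growth rate rises with crack length, most of the life is spent while the crack is short, which is why scatter at the propagation stage matters for condition-based operation.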


2021 ◽  
Vol 22 (1) ◽  
Author(s):  
Samuel M. Gerner ◽  
Alexandra B. Graf ◽  
Thomas Rattei

Abstract Background Simulated metagenomic reads are widely used to benchmark software and workflows for metagenome interpretation. The results of metagenomic benchmarks depend on the assumptions about their underlying ecosystems, so conclusions from benchmark studies are limited to the ecosystems they mimic. Ideally, simulations are therefore based on genomes that realistically resemble particular metagenomic communities. Results We developed Tamock to facilitate the realistic simulation of metagenomic reads according to a metagenomic community, based on real sequence data. Benchmark samples can be created from all genomes and taxonomic domains present in NCBI RefSeq. Tamock automatically determines taxonomic profiles from shotgun sequence data, selects reference genomes accordingly, and uses them to simulate metagenomic reads. We present an example use case for Tamock by assessing assembly and binning method performance for selected microbiomes. Conclusions Tamock facilitates automated simulation of habitat-specific benchmark metagenomic data based on real sequence data and is implemented as a user-friendly command-line application, providing extensive additional information along with the simulated benchmark data. The resulting benchmarks enable an assessment of computational methods, workflows, and parameters specifically for the metagenomic habitat or ecosystem of a metagenomic study. Availability Source code, documentation and install instructions are freely available at GitHub (https://github.com/gerners/tamock).


Genetics ◽  
2021 ◽  
Author(s):  
Alan M Kwong ◽  
Thomas W Blackwell ◽  
Jonathon LeFaive ◽  
Mariza de Andrade ◽  
John Barnard ◽  
...  

Abstract Traditional Hardy–Weinberg equilibrium (HWE) tests (the χ2 test and the exact test) have long been used as a metric for evaluating genotype quality, as technical artifacts leading to incorrect genotype calls often can be identified as deviations from HWE. However, in data sets composed of individuals from diverse ancestries, HWE can be violated even without genotyping error, complicating the use of HWE testing to assess genotype data quality. In this manuscript, we present the Robust Unified Test for HWE (RUTH) to test for HWE while accounting for population structure and genotype uncertainty, and to evaluate the impact of population heterogeneity and genotype uncertainty on the standard HWE tests and alternative methods using simulated and real sequence data sets. Our results demonstrate that ignoring population structure or genotype uncertainty in HWE tests can inflate false-positive rates by many orders of magnitude. Our evaluations demonstrate different tradeoffs between false positives and statistical power across the methods, with RUTH consistently among the best across all evaluations. RUTH is implemented as a practical and scalable software tool to rapidly perform HWE tests across millions of markers and hundreds of thousands of individuals while supporting standard VCF/BCF formats. RUTH is publicly available at https://www.github.com/statgen/ruth.
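The traditional χ2 HWE test mentioned above is easy to sketch: given biallelic genotype counts, compare the observed counts to those expected under equilibrium (a generic one-degree-of-freedom illustration, not RUTH's structure-aware method):

```python
def hwe_chi2(n_aa, n_ab, n_bb):
    """Chi-square statistic (1 d.f.) for Hardy-Weinberg equilibrium.

    n_aa, n_ab, n_bb : counts of the three genotypes at a biallelic site.
    """
    n = n_aa + n_ab + n_bb
    p = (2 * n_aa + n_ab) / (2 * n)  # frequency of allele 'a'
    # Expected genotype counts under HWE: p^2, 2p(1-p), (1-p)^2
    expected = [n * p * p, 2 * n * p * (1 - p), n * (1 - p) ** 2]
    observed = [n_aa, n_ab, n_bb]
    return sum((o - e) ** 2 / e for o, e in zip(observed, expected))

# A sample in perfect equilibrium (p = 0.5) gives a statistic of 0
print(hwe_chi2(25, 50, 25))  # 0.0
```

In a structured population, pooling subpopulations with different allele frequencies deflates heterozygote counts (the Wahlund effect), so this naive statistic flags perfectly good genotypes; that is the failure mode RUTH is designed to avoid.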


2021 ◽  
Vol 270 (1319) ◽  
Author(s):  
Abed Bounemoura ◽  
Jacques Féjoz

Some scales of spaces of ultra-differentiable functions are introduced, having good stability properties with respect to infinitely many derivatives and compositions. They are well-suited for solving non-linear functional equations by means of hard implicit function theorems. They comprise Gevrey functions and thus, as a limiting case, analytic functions. Using majorizing series, we manage to characterize them in terms of a real sequence $M$ bounding the growth of derivatives. In this functional setting, we prove two fundamental results of Hamiltonian perturbation theory: the invariant torus theorem, where the invariant torus remains ultra-differentiable under the assumption that its frequency satisfies some arithmetic condition which we call $\mathrm{BR}_M$, and which generalizes the Bruno-Rüssmann condition; and Nekhoroshev’s theorem, where the stability time depends on the ultra-differentiable class of the perturbation, through the same sequence $M$. Our proof uses periodic averaging, while a substitute for the analyticity width allows us to bypass analytic smoothing. We also prove converse statements on the destruction of invariant tori and on the existence of diffusing orbits with ultra-differentiable perturbations, by respectively mimicking a construction of Bessi (in the analytic category) and Marco-Sauzin (in the Gevrey non-analytic category). When the perturbation space satisfies some additional condition (we then call it matching), we manage to narrow the gap between stability hypotheses (e.g. the $\mathrm{BR}_M$ condition) and instability hypotheses, thus circumscribing the stability threshold. The formulas relating the growth $M$ of derivatives of the perturbation on the one hand, and the arithmetics of robust frequencies or the stability time on the other hand, shed light on the competition between stability properties of nearly integrable systems and the distance to integrability. Due to our method of proof, using width of regularity as a regularizing parameter, these formulas are closer to optimal as the regularity tends to analyticity.
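For context, the Gevrey classes mentioned in the abstract are the standard scales defined by factorial-power bounds on derivatives (a textbook definition, not notation from the paper itself); the sequence $M$ above generalizes the role played here by $(k!)^{\alpha}$:

```latex
% Gevrey class G^alpha on a domain U: for some constants C, R > 0,
\[
  f \in G^{\alpha}(U) \iff
  \sup_{x \in U} \bigl| f^{(k)}(x) \bigr| \le C \, R^{k} \, (k!)^{\alpha}
  \quad \text{for all } k \ge 0,
\]
% with alpha = 1 recovering the analytic functions as the limiting case.
```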


2021 ◽  
Author(s):  
Krzysztof Waśniewski

Abstract This article attempts to formalize the Black Swan theory as a phenomenon of collective behavioral change. A mathematical model of a collectively intelligent social structure, which absorbs random external disturbances, has been built, with a component borrowed from quantum physics, namely transitory, impossible states represented by negative probabilities. The model served as the basis for building an artificial neural network to simulate the behavior of a collectively intelligent social structure optimizing a real sequence of observations in selected variables of Penn Tables 9.1. The simulation led to defining three different paths of collective learning: cyclical adjustment of structural proportions, long-term optimization of size, and long-term destabilization in markets. Capital markets seem to be the most likely to develop adverse long-term volatility in response to Black Swan events, as compared to other socio-economic variables. JEL: E01, E17, J01, J11

