iterative correction
Recently Published Documents


TOTAL DOCUMENTS: 91 (FIVE YEARS: 20)

H-INDEX: 11 (FIVE YEARS: 2)

Aerospace ◽  
2021 ◽  
Vol 8 (12) ◽  
pp. 366
Author(s):  
Alicia Herrero ◽  
Santiago Moll ◽  
José-A. Moraño ◽  
David Vázquez ◽  
Erika Vega

Interception of extrasolar objects is one of the major current astrophysical objectives, since it allows gathering information on the formation and composition of other planetary systems. This paper develops a tool to design optimal orbits for the interception of these bodies, considering the effects of different perturbation sources. The optimal trajectory is obtained by solving a Lambert's problem that gives the required initial impulse. The trajectory is then propagated by numerical integration of a perturbed orbital model, which accounts for the joint action of the gravitational potentials of the Solar System planets and the solar radiation pressure. These effects cause a deviation in the orbit that prevents the interception from taking place, so an iterative correction scheme for the estimated initial impulse is presented, capable of modifying the orbit and achieving a successful interception in a more realistic environment.
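The iterative correction idea described in the abstract can be caricatured as a fixed-point loop: solve the ideal (two-body Lambert) problem for an aim point, propagate the resulting impulse under perturbations, and shift the aim point by the miss vector at arrival. The sketch below is a minimal illustration with assumed linear toy models; `solve_unperturbed` and `propagate_perturbed` are hypothetical stand-ins, not the paper's code.

```python
import numpy as np

def correct_aim(solve_unperturbed, propagate_perturbed, target, tol=1e-9, max_iter=50):
    """Iteratively correct the aim point until the perturbed trajectory hits `target`.

    solve_unperturbed(aim)  -> impulse reaching `aim` in the ideal two-body model
    propagate_perturbed(v0) -> actual arrival point under perturbations
    """
    target = np.asarray(target, dtype=float)
    aim = target.copy()
    v0, arrival = None, None
    for _ in range(max_iter):
        v0 = solve_unperturbed(aim)
        arrival = propagate_perturbed(v0)
        miss = target - arrival
        if np.linalg.norm(miss) < tol:
            break
        aim = aim + miss  # shift the aim point by the miss vector
    return v0, arrival

# Toy stand-ins: ideal model arrival = v0 * T; perturbations add a constant bias.
T = 10.0
bias = np.array([0.3, -0.2])
v0, arrival = correct_aim(lambda aim: aim / T,
                          lambda v: v * T + bias,
                          [100.0, 50.0])
```

With the constant-bias toy perturbation the loop converges in two iterations; in the orbital setting each iteration requires a Lambert solve and a full numerical propagation.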


2021 ◽  
Author(s):  
José L Ruiz ◽  
Susanne Reimering ◽  
Mandy Sanders ◽  
Juan David Escobar-Prieto ◽  
Nicolas M. B. Brancucci ◽  
...  

Recent advances in long-read technologies not only enable large consortia to aim to sequence all eukaryotes on Earth, but also allow many laboratories to sequence their species of interest. Although long-read technologies hold the promise of 'perfect genomes', the number of contigs often significantly exceeds the number of chromosomes, with many insertion and deletion errors around homopolymer tracts. To overcome these issues, we implemented ILRA, a pipeline to correct long-read assemblies that orders, names, merges, and circularizes contigs; filters erroneous small contigs and contamination; and corrects homopolymer errors with Illumina reads. We successfully tested our approach by assembling the genomes of four novel Plasmodium falciparum samples and by applying it to existing assemblies of Trypanosoma brucei and Leptosphaeria spp. We found that correcting homopolymer tracts reduced the number of genes incorrectly annotated as pseudogenes, but that iterative correction seems to be needed to reduce high numbers of homopolymer errors. In summary, we describe and compare the performance of a new tool that improves the quality of long-read assemblies. It can be used to correct genomes of up to 300 Mb.
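The homopolymer-correction idea can be caricatured with run-length encoding: each homopolymer run in the draft is reset to the majority length supported by accurate short reads. The toy sketch below assumes the reads are already aligned and share the draft's run structure (same bases in the same order); it is an illustration of the principle, not ILRA's actual algorithm.

```python
from collections import Counter

def rle(seq):
    """Run-length encode a sequence: 'AAAT' -> [('A', 3), ('T', 1)]."""
    runs = []
    for base in seq:
        if runs and runs[-1][0] == base:
            runs[-1][1] += 1
        else:
            runs.append([base, 1])
    return [(b, n) for b, n in runs]

def correct_homopolymers(draft, reads):
    """Set each homopolymer run in `draft` to the majority length seen in `reads`."""
    draft_runs = rle(draft)
    read_runs = [rle(r) for r in reads]
    corrected = []
    for i, (base, length) in enumerate(draft_runs):
        # vote over the lengths of the matching run in each read (draft included)
        votes = Counter([length] + [rr[i][1] for rr in read_runs
                                    if i < len(rr) and rr[i][0] == base])
        corrected.append(base * votes.most_common(1)[0][0])
    return "".join(corrected)

# Draft with an over-long A-run; three error-free short reads vote it down.
corrected = correct_homopolymers("AAATGGC", ["AATGGC", "AATGGC", "AATGGGC"])
```

Real pipelines work from read pileups over genome coordinates rather than pre-matched run lists, and must handle disagreements in run structure that this sketch ignores.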


Author(s):  
L.S. Kuravsky ◽  
I.I. Greshnikov

The purpose of this work is to present a first attempt to provide quantitative analysis and objective justification for designers' decisions on the arrangement of pilot indicators on an aircraft dashboard, using video oculography measurements. To date, such decisions have been made only on the basis of designers' accumulated practical experience and subjective expert assessments. A new method for optimizing the mutual arrangement of the dashboard indicators is under consideration. It is based on iterative correction of the matrix of gaze transition probabilities between selected zones of attention, so as to minimize the difference between the stationary distribution of the relative frequencies with which the gaze dwells in these zones and the corresponding desirable target distribution given for qualified pilots. In the subsequent multidimensional scaling problem, the resulting gaze transition probability matrix is treated as a similarity matrix whose elements quantitatively characterize the proximity between the zones of attention. The main findings of this work are the use of oculography data to justify dashboard design decisions, the optimization method itself and its mathematical components, and an analysis of the optimization from the viewpoint of quantum representations, which revealed design mistakes. The results obtained can be applied to prototyping variants of aircraft dashboards by rearranging the display areas associated with the corresponding zones of attention.
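The stationary distribution at the heart of the method is the left eigenvector of the gaze transition probability matrix and can be obtained by power iteration. Below is a minimal sketch with a hypothetical 3-zone transition matrix; the numbers are illustrative assumptions, not data from the paper.

```python
import numpy as np

def stationary_distribution(P, tol=1e-12, max_iter=10_000):
    """Stationary distribution pi of a row-stochastic matrix P, i.e. pi @ P = pi."""
    n = P.shape[0]
    pi = np.full(n, 1.0 / n)   # start from the uniform distribution
    for _ in range(max_iter):
        nxt = pi @ P
        if np.abs(nxt - pi).max() < tol:
            return nxt
        pi = nxt
    return pi

# Hypothetical gaze transition probabilities between three zones of attention
P = np.array([[0.6, 0.3, 0.1],
              [0.2, 0.5, 0.3],
              [0.3, 0.3, 0.4]])
pi = stationary_distribution(P)
```

The optimization described in the abstract would repeatedly perturb the entries of `P` and recompute `pi` until it approaches the desired target distribution; that outer loop is not reproduced here.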


2021 ◽  
Vol 13 (11) ◽  
pp. 2106
Author(s):  
Haiyang Li ◽  
Guigen Nie ◽  
Shuguang Wu ◽  
Yuefan He

Integer ambiguity resolution is required to obtain precise coordinates with the global navigation satellite system (GNSS). Poorly observed data leave integer ambiguities unfixed and reduce coordinate accuracy. Previous studies have mostly used denoising filters and partial ambiguity resolution algorithms to address this problem. This study proposes a sequential ambiguity resolution method comprising a float solution substitution process and a double-difference (DD) iterative correction equation process. The float solution substitution process updates the initial float solution, while the DD iterative correction equation process eliminates the residual biases. The satellite-selection experiment shows that the float solution substitution process yields a more accurate float solution. The iteration-correction experiment shows that the DD iterative correction equation process is feasible, improving the ambiguity success rate from 28.4% to 96.2%. The superiority experiment shows a significant improvement in the ambiguity success rate, from 36.1% to 83.6%, and a better baseline difference, from about 0.1 m to 0.04 m. These results demonstrate that the proposed sequential ambiguity resolution method can significantly improve the results for poorly observed GNSS data.
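A common baseline for fixing ambiguities sequentially is integer bootstrapping: round the most reliable float ambiguity, then condition the remaining float estimates on that fixed value before the next rounding. The sketch below illustrates only this generic baseline under the assumption that the ambiguities are already ordered by reliability; the paper's float solution substitution and DD iterative correction processes are not reproduced.

```python
import numpy as np

def bootstrap_fix(a, Q):
    """Integer bootstrapping: sequential conditional rounding of float ambiguities.

    a : float ambiguity estimates (assumed already sorted by reliability)
    Q : their covariance matrix
    """
    a = np.array(a, dtype=float)
    Q = np.array(Q, dtype=float)
    n = len(a)
    z = np.zeros(n, dtype=int)
    for i in range(n):
        z[i] = round(a[i])
        if i < n - 1:
            # condition the remaining float estimates on the value just fixed
            a[i + 1:] -= Q[i + 1:, i] / Q[i, i] * (a[i] - z[i])
            Q[i + 1:, i + 1:] -= np.outer(Q[i + 1:, i], Q[i, i + 1:]) / Q[i, i]
    return z

z = bootstrap_fix([1.1, 2.9, -0.2], 0.01 * np.eye(3))
```

In practice the ambiguities are decorrelated first (as in the LAMBDA method), since bootstrapping on strongly correlated ambiguities has a low success rate.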


Author(s):  
Xia Li ◽  
Lingfang Sun ◽  
Jing Li ◽  
Heng Piao

In heat exchanger fouling ultrasonic testing, the time-domain signal waveform often exhibits heavy aliasing due to small fouling thickness, high-order echo interference, and similar effects. This paper studies methods of acquiring the time of flight from such heavily aliased ultrasonic testing signals and presents a method that combines Wiener deconvolution with high-order cumulant spectrum estimation. To handle the reference-signal distortion that may arise in real applications, an iterative correction process is introduced in the form of an incremental Wiener algorithm. Simulation and experimental results show that the proposed method offers better noise immunity, time-of-flight accuracy, and practicability.
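Wiener deconvolution itself is a one-line frequency-domain filter. The sketch below recovers the reflection arrival times from a signal built as a reference pulse convolved with two overlapping echoes; the Gaussian pulse shape, SNR value, and arrival times are assumptions for illustration, and the paper's incremental (iterative) variant is not reproduced.

```python
import numpy as np

def wiener_deconvolve(y, h, snr=1e6):
    """Wiener deconvolution of measurement y by reference pulse h.

    snr is the assumed signal-to-noise power ratio; 1/snr regularizes the
    inverse filter where the pulse spectrum is weak.
    """
    n = len(y)
    H = np.fft.fft(h, n)
    G = np.conj(H) / (np.abs(H) ** 2 + 1.0 / snr)  # Wiener inverse filter
    return np.real(np.fft.ifft(np.fft.fft(y, n) * G))

# Synthetic measurement: reference pulse convolved with echoes at samples 30 and 80
t = np.arange(16)
h = np.exp(-0.5 * ((t - 8) / 2.0) ** 2)   # assumed Gaussian reference pulse
x = np.zeros(128)
x[30], x[80] = 1.0, 0.6                   # true reflection sequence
y = np.convolve(x, h)[:128]               # overlapping, aliased echoes
x_hat = wiener_deconvolve(y, h)
```

The time of flight then follows from the sample spacing between the recovered peaks; in the noiseless limit the filter sharpens the two smeared echoes back into narrow pulses at the true arrival samples.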


2021 ◽  
Vol 70 ◽  
pp. 45-67
Author(s):  
Krisztian Benyo ◽  
Ayoub Charhabil ◽  
Mohamed-Ali Debyaoui ◽  
Yohan Penel

We study the Serre-Green-Naghdi system under a non-hydrostatic formulation, modelling incompressible free-surface flows in shallow water regimes. Unlike the well-known (nonlinear) Saint-Venant equations, this system takes into account the effects of the non-hydrostatic pressure term as well as dispersive phenomena. Two numerical schemes are designed, based on a finite-volume/finite-difference splitting scheme and iterative correction algorithms. The methods are compared by means of simulations of the propagation of solitary wave solutions. The model is also assessed against experimental data from the Favre secondary wave experiments [12].
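In splitting schemes of this kind, the hyperbolic (Saint-Venant) part is advanced first, and the non-hydrostatic pressure then enters through an elliptic problem solved iteratively to correct the velocity field. As a minimal stand-in for that correction stage, the sketch below solves a 1D model problem -p'' = f by Jacobi iteration; the grid, right-hand side, and boundary conditions are illustrative assumptions, not the paper's discretization.

```python
import numpy as np

def jacobi_correction(f, dx, tol=1e-10, max_iter=20_000):
    """Solve -p'' = f with p = 0 at both boundaries by Jacobi iteration."""
    p = np.zeros_like(f)
    for _ in range(max_iter):
        nxt = np.zeros_like(p)
        # second-order central difference: (p[i-1] - 2 p[i] + p[i+1]) / dx^2 = -f[i]
        nxt[1:-1] = 0.5 * (p[:-2] + p[2:] + dx * dx * f[1:-1])
        if np.abs(nxt - p).max() < tol:
            return nxt
        p = nxt
    return p

# Model problem with known solution p(x) = sin(pi x) on [0, 1]
x = np.linspace(0.0, 1.0, 41)
f = np.pi ** 2 * np.sin(np.pi * x)
p = jacobi_correction(f, x[1] - x[0])
```

Jacobi is chosen here only for transparency; production codes use faster elliptic solvers, but the role of the iteration within the splitting step is the same.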


2020 ◽  
Author(s):  
Huiwei Zhou ◽  
Zhe Liu ◽  
Chengkun Lang ◽  
Yingyu Lin ◽  
Junjie Hou

Abstract
Background: Biomedical named entity recognition is one of the most essential tasks in biomedical information extraction. Previous studies suffer from inadequate annotated datasets, especially the limited knowledge contained in them.
Methods: To remedy the above issue, we propose a novel Chemical and Disease Named Entity Recognition (CDNER) framework with label re-correction and knowledge distillation strategies, which can not only create large, high-quality datasets but also yield a high-performance entity recognition model. Our framework is inspired by two points: (1) named entity recognition should be considered from the perspectives of both coverage and accuracy; (2) trustable annotations should be yielded by iterative correction. First, for coverage, we annotate chemical and disease entities in a large unlabeled dataset with PubTator to generate a weakly labeled dataset. For accuracy, we then filter it using multiple knowledge bases to generate another dataset. Next, the two datasets are revised by a label re-correction strategy to construct two high-quality datasets, which are used to train two CDNER models, respectively. Finally, we compress the knowledge in the two models into a single model with knowledge distillation.
Results: Experiments on the BioCreative V chemical-disease relation corpus show that knowledge from large datasets significantly improves CDNER performance, leading to new state-of-the-art results.
Conclusions: We propose a framework with label re-correction and knowledge distillation strategies. Comparison results show that the two perspectives of knowledge in the two re-corrected datasets are complementary and both effective for biomedical named entity recognition.
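The distillation step (compressing two teachers into one student) commonly minimizes the divergence between the student's temperature-softened distribution and the averaged softened distributions of the teachers. The sketch below is a generic numpy illustration of that loss; the temperature, the logits, and the equal-weight averaging of the two teachers are assumptions, not details taken from the paper.

```python
import numpy as np

def softmax(z, T=1.0):
    """Temperature-softened softmax over the last axis."""
    z = np.asarray(z, dtype=float) / T
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def distillation_loss(student_logits, teacher_a_logits, teacher_b_logits, T=2.0):
    """Cross-entropy from the averaged softened teacher ensemble to the student.

    Equals the KL divergence up to a constant (the entropy of the target), so
    minimizing it drives the student toward the ensemble distribution.
    """
    target = 0.5 * (softmax(teacher_a_logits, T) + softmax(teacher_b_logits, T))
    log_p = np.log(softmax(student_logits, T))
    return float(-(target * log_p).sum(axis=-1).mean())

teacher = np.array([[2.0, 0.0, 0.0]])
loss_match = distillation_loss(teacher, teacher, teacher)          # student agrees
loss_off = distillation_loss(np.array([[0.0, 2.0, 0.0]]), teacher, teacher)
```

In sequence labeling this loss would be averaged over token positions, typically combined with the hard-label cross-entropy on the gold annotations.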

