lower error rate
Recently Published Documents


TOTAL DOCUMENTS: 18 (FIVE YEARS: 9)

H-INDEX: 2 (FIVE YEARS: 1)

2021 ◽  
Vol 12 ◽  
Author(s):  
Kongyang Zhu ◽  
Panxin Du ◽  
Jianxue Xiong ◽  
Xiaoying Ren ◽  
Chang Sun ◽  
...  

The MGISEQ-2000 sequencer is widely used in various omics studies, but the performance of this platform for paleogenomics has not been evaluated. Here we compare the performance of the MGISEQ-2000 with the Illumina X-Ten on ancient human DNA, using four samples dating from 1750 BCE to 60 CE. We found only slight differences between the two platforms in most parameters (duplication rate, sequencing bias, θ, δS, and λ). The MGISEQ-2000 performed well on endogenous rate and library complexity, although the X-Ten had a higher average base quality and a lower error rate. Our results suggest that the MGISEQ-2000 and X-Ten have comparable performance, and the MGISEQ-2000 can be an alternative platform for paleogenomics sequencing.
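For reference, a minimal sketch of how two of the library metrics compared above, endogenous rate and duplication rate, are typically computed from read counts (Python; the definitions are the standard ones and the counts are placeholders, not values from the study):

```python
# Sketch: two common ancient-DNA library metrics computed from read counts.
# Standard textbook definitions; the counts below are made-up placeholders.

def endogenous_rate(mapped_reads: int, total_reads: int) -> float:
    """Fraction of sequenced reads that map to the reference genome."""
    return mapped_reads / total_reads

def duplication_rate(duplicate_reads: int, mapped_reads: int) -> float:
    """Fraction of mapped reads flagged as PCR/optical duplicates."""
    return duplicate_reads / mapped_reads

if __name__ == "__main__":
    total, mapped, dups = 10_000_000, 1_200_000, 180_000  # placeholder counts
    print(f"endogenous rate: {endogenous_rate(mapped, total):.2%}")
    print(f"duplication rate: {duplication_rate(dups, mapped):.2%}")
```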


Author(s):  
Ming Yin

Estimating the depth of a scene from a monocular image is an essential step for image semantic understanding. In practice, existing methods for this highly ill-posed problem still lack robustness and efficiency. This paper proposes a novel end-to-end depth estimation model with skip connections from a pre-trained Xception model for dense feature extraction, and three new modules designed to improve the upsampling process. In addition, ELU activation and convolutions with smaller kernel sizes are added to improve the pixel-wise regression process. The experimental results show that our model has fewer network parameters and a lower error rate than the most advanced networks, and requires only half the training time. The evaluation is based on the NYU v2 dataset, and our proposed model achieves clearer boundary details with state-of-the-art results and robustness.
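As an illustration of the kind of upsampling block described above (skip connections, small 3×3 kernels, ELU activations), here is a minimal PyTorch sketch; it is not the authors' implementation, and the channel sizes are placeholders:

```python
# Sketch of a decoder/upsampling block: bilinear upsampling, concatenation
# with an encoder skip connection, 3x3 convolutions and ELU activations.
import torch
import torch.nn as nn
import torch.nn.functional as F

class UpBlock(nn.Module):
    def __init__(self, in_channels: int, skip_channels: int, out_channels: int):
        super().__init__()
        self.conv1 = nn.Conv2d(in_channels + skip_channels, out_channels,
                               kernel_size=3, padding=1)
        self.conv2 = nn.Conv2d(out_channels, out_channels, kernel_size=3, padding=1)

    def forward(self, x: torch.Tensor, skip: torch.Tensor) -> torch.Tensor:
        # Upsample decoder features to the spatial size of the skip connection.
        x = F.interpolate(x, size=skip.shape[2:], mode="bilinear", align_corners=False)
        x = torch.cat([x, skip], dim=1)   # fuse encoder and decoder features
        x = F.elu(self.conv1(x))          # small 3x3 kernels + ELU activations
        x = F.elu(self.conv2(x))
        return x

if __name__ == "__main__":
    block = UpBlock(in_channels=256, skip_channels=128, out_channels=128)
    decoder_feat = torch.randn(1, 256, 16, 16)
    skip_feat = torch.randn(1, 128, 32, 32)
    print(block(decoder_feat, skip_feat).shape)  # torch.Size([1, 128, 32, 32])
```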


2021 ◽  
Author(s):  
Oscar Gonzalez-Recio ◽  
Monica Gutierrez-Rivas ◽  
Ramon Peiro-Pastor ◽  
Pilar Aguilera-Sepulveda ◽  
Cristina Cano-Gomez ◽  
...  

Nanopore sequencing has emerged as a rapid and cost-efficient tool for diagnostic and epidemiological surveillance of SARS-CoV-2 during the COVID-19 pandemic. This study compared results from sequencing the SARS-CoV-2 genome using R9 vs R10 flow cells and the Rapid Barcoding Kit (RBK) vs the Ligation Sequencing Kit (LSK). The R9 chemistry provided a lower error rate (3.5%) than the R10 chemistry (7%). The SARS-CoV-2 genome includes few homopolymeric regions; the longest homopolymers were composed of 7 (TTTTTTT) and 6 (AAAAAA) nucleotides. The R10 chemistry resulted in a lower rate of deletions in thymine and adenine homopolymeric regions than R9, at the expense of a larger rate (~10%) of mismatches in these regions. The LSK produced a higher yield and longer reads than the RBK, gave a larger percentage of aligned reads (99% vs 93%), and yielded a complete consensus genome. The results from this study suggest that the LSK used on an R9 flow cell could maximize the yield and accuracy of the consensus sequence when used for epidemiological surveillance of SARS-CoV-2.
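A small sketch of how homopolymer runs such as the 7-T and 6-A stretches mentioned above can be located in a sequence (Python; the fragment used here is a placeholder, not the SARS-CoV-2 reference):

```python
# Sketch: locate homopolymer runs in a sequence with itertools.groupby.
from itertools import groupby

def homopolymer_runs(seq: str, min_len: int = 6):
    """Yield (base, start, length) for every homopolymer of at least min_len bases."""
    pos = 0
    for base, group in groupby(seq):
        length = sum(1 for _ in group)
        if length >= min_len:
            yield base, pos, length
        pos += length

if __name__ == "__main__":
    fragment = "ACGTTTTTTTGCAAAAAAGT"  # placeholder fragment with a 7-T and a 6-A run
    for base, start, length in homopolymer_runs(fragment):
        print(f"{base} x{length} at position {start}")
```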


Author(s):  
Tian Ye ◽  
Fumikazu Furumi ◽  
Daniel Catarino da Silva ◽  
Antonia Hamilton

In a busy space, people encounter many other people with different viewpoints, but classic studies of perspective-taking examine only one agent at a time. This paper explores the issue of selectivity in visual perspective-taking (VPT) when different people are available to interact with. We consider the hypothesis that humanization impacts VPT in four studies using virtual reality methods. Experiments 1 and 2 use the director task to show that for more humanized agents (an in-group member or a virtual human agent), participants were more likely to use VPT to achieve a lower error rate. Experiments 3 and 4 used a two-agent social mental rotation task to show that participants are faster and more accurate at recognizing items oriented towards a more humanized agent (an in-group member or a naturally moving agent). All results support the claim that humanization alters the propensity to engage in VPT in rich social contexts.


2020 ◽  
Vol 9 (1) ◽  
pp. 31-40
Author(s):  
Arwin Datumaya Wahyudi Sumari ◽  
Dimas Rossiawan Hendra Putra ◽  
Muhammad Bisri Musthofa ◽  
Ngat Mari

This study aims to compare the performance of pandemic dynamics prediction methods on the island of Java, based on data from March to May 2020 covering the provinces of DKI Jakarta, West Java, Central Java, DI Yogyakarta, and East Java. The prediction uses the Knowledge Growing System (KGS) and time series models, namely the Single Moving Average (SMA) and Exponential Moving Average (EMA). Based on the Mean Absolute Percentage Error (MAPE) results, the EMA method produces a lower error rate than the SMA method, with an average of 47.94%. The KGS prediction with a Degree of Certainty (DoC) produced a trend analysis indicating that the pandemic dynamics in DKI Jakarta province will decrease gradually if the current policy remains in place, whereas in the other provinces the KGS predicted that the pandemic trends will continue to increase.
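For context, a minimal sketch of one-step-ahead SMA and EMA forecasts and their MAPE, the error measure used above; the case series, window, and smoothing factor are illustrative placeholders, not the study's data or settings:

```python
# Sketch: one-step-ahead SMA and EMA forecasts and their MAPE.

def sma_forecast(series, window=3):
    """Forecast each point as the mean of the previous `window` observations."""
    return [sum(series[i - window:i]) / window for i in range(window, len(series))]

def ema_forecast(series, alpha=0.5):
    """Exponential moving average: each forecast blends the newest value with the previous forecast."""
    forecasts = [series[0]]
    for value in series[1:]:
        forecasts.append(alpha * value + (1 - alpha) * forecasts[-1])
    return forecasts[:-1]  # forecast for step t uses data up to t-1

def mape(actual, forecast):
    """Mean Absolute Percentage Error, in percent."""
    return 100 * sum(abs((a - f) / a) for a, f in zip(actual, forecast)) / len(actual)

if __name__ == "__main__":
    cases = [120, 135, 150, 160, 185, 200, 230, 260]  # placeholder daily cases
    print("SMA MAPE:", round(mape(cases[3:], sma_forecast(cases, 3)), 2))
    print("EMA MAPE:", round(mape(cases[1:], ema_forecast(cases)), 2))
```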


2020 ◽  
Author(s):  
Tian Ye ◽  
Fumikazu Furumi ◽  
Daniel Catarino da Silva ◽  
Antonia Hamilton

In a busy space, people encounter many other people with different viewpoints, but classic studies of visual perspective-taking (VPT) examine only one agent at a time. This paper explores the issue of selectivity in VPT when different people are available to interact with. We consider the hypothesis that humanisation impacts VPT in four studies using virtual reality methods. Experiments 1 and 2 use the Director Task to show that for more humanised agents (an in-group member or a virtual human agent), participants were more likely to use VPT to achieve a lower error rate. Experiments 3 and 4 used a two-agent social mental rotation task to show that participants are faster and more accurate at recognising items oriented towards a more humanised agent (an in-group member or a naturally-moving agent). All results support the claim that humanisation alters the propensity to engage in VPT in rich social contexts.


2019 ◽  
Vol 9 (21) ◽  
pp. 4492 ◽  
Author(s):  
González-Patiño ◽  
Villuendas-Rey ◽  
Argüelles-Cruz ◽  
Karray

Breast cancer is an ongoing problem that causes the death of many women. In this work, we test meta-heuristics applied to the segmentation of mammographic images. Traditionally, these algorithms are applied directly to optimization problems; in this study, however, they are oriented toward the segmentation of mammograms, using the Dunn index as the optimization function and grey levels to represent each individual. Updating the grey levels during the process maximizes the Dunn index; the higher the index, the better the segmentation. The results showed a lower error rate using these meta-heuristics for segmentation than the widely adopted classical approach known as the Otsu method.
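As an illustration of the objective, a simplified grey-level Dunn index for a thresholded image is sketched below (Python/NumPy); the image, thresholds, and the one-dimensional distance used are placeholders, not the paper's exact formulation:

```python
# Sketch: a simplified 1-D (grey-level) Dunn index for a thresholded image.
# Dunn index = smallest between-cluster separation / largest within-cluster diameter;
# higher values indicate better-separated, more compact clusters.
import numpy as np

def dunn_index(values: np.ndarray, labels: np.ndarray) -> float:
    clusters = [values[labels == k] for k in np.unique(labels)]
    # Within-cluster diameter: spread of grey levels inside each cluster.
    max_diameter = max(c.max() - c.min() for c in clusters)
    # Between-cluster separation: gap between cluster mean grey levels (a simplification).
    centers = [c.mean() for c in clusters]
    min_separation = min(abs(a - b) for i, a in enumerate(centers)
                         for b in centers[i + 1:])
    return min_separation / max_diameter if max_diameter > 0 else float("inf")

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    image = rng.integers(0, 256, size=(64, 64))   # placeholder "mammogram"
    thresholds = [85, 170]                         # candidate segmentation
    labels = np.digitize(image, thresholds)        # three grey-level classes
    print("Dunn index:", dunn_index(image.ravel(), labels.ravel()))
```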


Text documents are stored on the system in an unstructured form, so the information inside them cannot be extracted directly. Extracting it requires text processing, which begins with initial processing (text preprocessing) to convert the documents into a more structured form by selecting the words to be used as indexes. The smaller the index, the more text documents the system can recognize and the more easily the information can be extracted. The size of the index is determined by the number of word groups formed. To avoid forming too many word groups, each word is first reduced to its base word before grouping. The process of changing an affixed word into a base word using certain rules is called stemming. This research aims to produce a new Indonesian stemming algorithm, the UG18 Stemmer, which can reduce or eliminate stemming errors such as over-stemming and under-stemming found in existing stemming algorithms, including the Enhanced Confix Stripping (ECS) Stemmer and the New Enhanced Confix Stripping (NECS) Stemmer. The method used is a morphophonemic process approach, which treats affixes as bound morphemes that undergo phoneme change, phoneme addition, and phoneme removal. These three processes were mapped, and finite state automata were constructed to obtain the new affix groups, sequences, and deletion methods that form the basis of the UG18 Stemmer algorithm. The algorithm does not use the list of decapitation rules employed by pre-existing algorithms; those rules are replaced with morphophonemic-based elimination rules. Evaluation and testing show that the UG18 Stemmer has a lower error rate than the NECS Stemmer: in a randomized test of 2,500 words using a Relevance Judgment validated by Indonesian language experts, errors fell from 1.48% over-stemming and 16.69% under-stemming with the NECS Stemmer to 0.12% over-stemming and 0% under-stemming with the UG18 Stemmer. In addition, the UG18 Stemmer improves processing speed in an information-retrieval-based document similarity measurement application by 45.47% compared to the ECS Stemmer.
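To make the two error types concrete, here is a minimal sketch of counting over-stemming and under-stemming errors against a gold-standard word list; the toy suffix stripper, word pairs, and length-based error classification are hypothetical and do not reproduce the UG18 Stemmer's rules:

```python
# Sketch: counting over-stemming and under-stemming errors against a gold standard.

def evaluate_stemmer(stem, pairs):
    """pairs: list of (word, gold_stem). Returns (over, under) error rates."""
    over = under = 0
    for word, gold in pairs:
        predicted = stem(word)
        if predicted == gold:
            continue
        if len(predicted) < len(gold):   # stripped too much -> over-stemming
            over += 1
        else:                            # stripped too little -> under-stemming
            under += 1
    n = len(pairs)
    return over / n, under / n

if __name__ == "__main__":
    # Toy suffix stripper standing in for a real Indonesian stemmer.
    def toy_stem(word):
        for suffix in ("kan", "an", "i"):
            if word.endswith(suffix):
                return word[: -len(suffix)]
        return word

    gold_pairs = [("makanan", "makan"), ("membaca", "baca"), ("tulisan", "tulis")]
    over, under = evaluate_stemmer(toy_stem, gold_pairs)
    print(f"over-stemming: {over:.2%}, under-stemming: {under:.2%}")
```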

