Error rates: Recently Published Documents

TOTAL DOCUMENTS: 3314 (five years: 1130)
H-INDEX: 81 (five years: 10)

2022 · Vol 15 (1) · pp. 1-27
Author(s): Franz-Josef Streit, Paul Krüger, Andreas Becher, Stefan Wildermann, Jürgen Teich

FPGA-based Physical Unclonable Functions (PUFs) have emerged as a viable alternative to permanent key storage by turning the effects of manufacturing inaccuracies in a chip into a unique, FPGA-intrinsic secret. However, many fixed PUF designs may suffer from unsatisfactory statistical properties in terms of uniqueness, uniformity, and robustness. Moreover, a PUF signature may alter over time due to aging or changing operating conditions, rendering a PUF insecure in the worst case. As a remedy, we propose CHOICE, a novel class of FPGA-based PUF designs with tunable uniqueness and reliability characteristics. Using the addressable shift registers available on an FPGA, we show that a wide configuration space for adjusting a device-specific PUF response is obtained without any sacrifice of randomness. In particular, we demonstrate the concept of address-tunable propagation delays, whereby we are able to increase or decrease the probability of obtaining “1”s in the PUF response. Experimental evaluations on a group of six 28 nm Xilinx Artix-7 FPGAs show that CHOICE PUFs provide a large range of configurations that allow fine-tuning to an average uniqueness between 49% and 51%, while simultaneously achieving bit error rates below 1.5%, thus outperforming state-of-the-art PUF designs. Moreover, with only a single FPGA slice per PUF bit, CHOICE is one of the smallest PUF designs currently available for FPGAs. It is well known that signal propagation delays are affected by temperature, as the operating temperature impacts the internal currents of the transistors that ultimately make up the circuit. We therefore comprehensively investigate how temperature variations affect the PUF response and demonstrate how the tunability of CHOICE enables us to determine configurations that show a high robustness to such variations.
As a case study, we present a cryptographic key generation scheme based on CHOICE PUF responses as a device-intrinsic secret and investigate the design objectives of resource costs, performance, and temperature robustness to show the practicability of our approach.
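The uniqueness and bit-error-rate figures quoted above are standard PUF quality metrics. As an illustrative sketch (the bit strings below are made up and not data from the paper), uniqueness is the mean pairwise inter-chip Hamming distance, and the bit error rate is the mean intra-chip distance between a reference response and repeated readouts:

```python
from itertools import combinations

def hamming_fraction(a: str, b: str) -> float:
    """Fraction of differing bits between two equal-length bit strings."""
    return sum(x != y for x, y in zip(a, b)) / len(a)

def uniqueness(responses: list[str]) -> float:
    """Mean pairwise inter-chip Hamming distance (ideal: 50%)."""
    pairs = list(combinations(responses, 2))
    return sum(hamming_fraction(a, b) for a, b in pairs) / len(pairs)

def bit_error_rate(reference: str, rereads: list[str]) -> float:
    """Mean intra-chip distance between a reference response and re-reads."""
    return sum(hamming_fraction(reference, r) for r in rereads) / len(rereads)

# Hypothetical 8-bit responses from three devices
chips = ["10110010", "01101100", "11010001"]
print(f"uniqueness = {uniqueness(chips):.2%}")

# Repeated readouts of the first device, one of them with a flipped bit
print(f"BER = {bit_error_rate('10110010', ['10110010', '10110011']):.2%}")
```

Real evaluations average these quantities over many challenges and devices; the sketch only shows the two formulas behind the 49-51% uniqueness and <1.5% BER figures.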


Author(s): Chanintorn Jittawiriyanukoon, Vilasinee Srisarkun

The IEEE 802.11ay wireless communication standard allows devices to connect in the millimeter-wave (mm-Wave) 60 GHz band with up to 100 Gbps of bandwidth. The development of such high-bandwidth communication networks is a necessity, as the QoS, throughput, and error-rate demands of bandwidth-intensive applications such as merged reality (MR), artificial intelligence (AI) related apps, and wireless communication exceed the capabilities of the legacy 802.11 standard established in 2012. The IEEE 802.11ay task group has therefore amended the physical (PHY) and medium access control (MAC) layer designs to guarantee technical targets, especially for link delay on multipath fading channels (MPFC). However, due to the congestion caused by super-bandwidth-intensive applications such as IoT and big data, we propose to diversify the propagation delay to a practical extension. This article focuses on a real-world situation and on how the IEEE 802.11ay design is affected by the performance of mm-Wave propagation. Specifically, we randomize the unstable MPFC link capacity by taking the divergence of congested network parameters into account. The efficiency of the congested MPFC-based wireless network is simulated and confirmed against the advancements described in the standard.
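The idea of a randomized, fading-degraded link capacity can be illustrated with a toy Monte-Carlo sketch. The parameters below (Rayleigh fading, a 2.16 GHz channel, 10 dB mean SNR) are generic illustrative choices, not the article's simulation setup:

```python
import math
import random

def rayleigh_gain(sigma: float) -> float:
    """Draw a Rayleigh-distributed multipath amplitude gain
    (magnitude of a complex Gaussian with i.i.d. components)."""
    x = random.gauss(0.0, sigma)
    y = random.gauss(0.0, sigma)
    return math.hypot(x, y)

def shannon_capacity(bandwidth_hz: float, snr_linear: float) -> float:
    """Shannon capacity C = B * log2(1 + SNR) in bits/s."""
    return bandwidth_hz * math.log2(1.0 + snr_linear)

def mean_faded_capacity(bandwidth_hz, mean_snr, trials=10_000, seed=1):
    """Average capacity when the SNR is scaled by a squared Rayleigh fade."""
    random.seed(seed)
    total = 0.0
    for _ in range(trials):
        g = rayleigh_gain(sigma=math.sqrt(0.5))  # chosen so E[g^2] = 1
        total += shannon_capacity(bandwidth_hz, mean_snr * g * g)
    return total / trials

# Hypothetical 60 GHz channel: 2.16 GHz bandwidth, 10 dB (= 10x) mean SNR
c = mean_faded_capacity(2.16e9, 10.0)
print(f"mean faded capacity ≈ {c / 1e9:.2f} Gbit/s")
```

By Jensen's inequality, the fading-averaged capacity always falls below the fixed-SNR Shannon capacity, which is the qualitative effect of MPFC variability on achievable throughput.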


2022 · Vol 8
Author(s): Maik Sahm, Clara Danzer, Alexis Leonhard Grimm, Christian Herrmann, Rene Mantke

Background and Aims: Published studies repeatedly demonstrate an advantage of three-dimensional (3D) laparoscopic surgery over two-dimensional (2D) systems, but with quite heterogeneous results. This raises the question of whether clinics must replace 2D technologies to ensure effective training of future surgeons.
Methods: We recruited 45 students with no experience in laparoscopic surgery and comparable characteristics in terms of vision and frequency of video game usage. The students were randomly allocated to 3D (n = 23) or 2D (n = 22) groups and performed 10 runs of a laparoscopic “peg transfer” task in the Luebeck Toolbox. A repeated-measures ANOVA for operation times and a generalized linear mixed model for error rates were calculated. The main effects of laparoscopic condition and run, as well as the interaction between the two, were examined.
Results: No statistically significant differences in operation times or error rates were observed between the 2D and 3D groups (p = 0.10 and p = 0.72, respectively). The learning curve showed a significant reduction in operation time and error rates (both p < 0.001). No significant interactions between group and run were detected (operation time: p = 0.342; error rates: p = 0.83). With respect to both endpoints, the learning curves reached their plateau at the 7th run.
Conclusion: The results of our study with laparoscopic novices revealed no significant difference between 2D and 3D technology with respect to performance time and error rate in a simple standardized test. Surgeons may thus still be trained in both techniques.
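A between-group null result like the one above can be mimicked on toy data with a simple permutation test on the difference of group means. The operation times below are invented for illustration; the study itself used a repeated-measures ANOVA and a generalized linear mixed model, which this sketch does not reproduce:

```python
import random
from statistics import mean

def permutation_p_value(group_a, group_b, n_perm=10_000, seed=0):
    """Two-sided permutation test on the difference of group means."""
    rng = random.Random(seed)
    observed = abs(mean(group_a) - mean(group_b))
    pooled = list(group_a) + list(group_b)
    n_a = len(group_a)
    hits = 0
    for _ in range(n_perm):
        rng.shuffle(pooled)
        diff = abs(mean(pooled[:n_a]) - mean(pooled[n_a:]))
        if diff >= observed:  # permuted difference at least as extreme
            hits += 1
    return hits / n_perm

# Hypothetical mean operation times (seconds) per participant
times_3d = [52, 48, 55, 50, 47, 53]
times_2d = [54, 49, 56, 51, 50, 52]
print(f"p = {permutation_p_value(times_3d, times_2d):.3f}")
```

A large p-value here, as in the study, means the observed group difference is well within what random relabeling of participants produces.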


2022 · Vol 15
Author(s): Eri Nakagawa, Takahiko Koike, Motofumi Sumiya, Koji Shimada, Kai Makita, ...

Japanese learners of English have more difficulty producing Double Object (DO; give B A) than Prepositional Object (PO; give A to B) structures, and the neural underpinning of this difficulty is unknown. In speaking, syntactic and phonological processing follow semantic encoding, the conversion of a non-verbal mental representation into a structure suitable for expression. To test whether the DO difficulty lies in a linguistic or a prelinguistic process, we conducted functional magnetic resonance imaging. Thirty participants described cartoons using DO or PO structures, or simply named them. Greater reaction times and error rates indicated DO difficulty. DO compared with PO showed parieto-frontal activation including the left inferior frontal gyrus, reflecting linguistic processing. Psychological priming of PO produced immediately after DO, and vice versa, compared to after control, indicated a process shared between PO and DO. Cross-structural neural repetition suppression was observed in occipito-parietal regions, overlapping the linguistic system in the pre-SMA. Thus, DO and PO share a prelinguistic process, whereas linguistic processing imposes overload in DO.


Photonics · 2022 · Vol 9 (1) · pp. 43
Author(s): Mónica Far Brusatori, Nicolas Volet

To increase the spectral efficiency of coherent communication systems, lasers with ever-narrower linewidths are required, as they enable higher-order modulation formats with lower bit-error rates. In particular, semiconductor lasers are a key component due to their compactness, low power consumption, and potential for mass production. In field-testing scenarios their output is coupled to a fiber, making them susceptible to external optical feedback (EOF). This has a detrimental effect on their stability, so it is traditionally countered by employing, for example, optical isolators and angled output waveguides. In this work, EOF is explored in a novel way with the aim of reducing and stabilizing the laser linewidth. EOF has traditionally been studied in the case where it is applied to only one side of the laser cavity. In contrast, this work generalizes to the case of feedback on both sides. It is implemented using photonic components available via generic foundry platforms, thus creating a path towards devices with a high technology-readiness level. Numerical results show an improvement in performance of the double-feedback case with respect to the single-feedback case. In particular, by appropriately selecting the phase of the feedback from both sides, a broad stability regime is discovered. This work paves the way towards low-cost, integrated, and stable narrow-linewidth lasers.


2022
Author(s): Kumeren Nadaraj Govender, David W Eyre

Culture-independent metagenomic detection of microbial species has the potential to provide rapid and precise real-time diagnostic results. However, it is potentially limited by sequencing and classification errors. We use simulated and real-world data to benchmark rates of species misclassification using 100 reference genomes for each of ten common bloodstream pathogens and six frequent blood culture contaminants (n=1600). Simulating both with and without sequencing error for both the Illumina and Oxford Nanopore platforms, we evaluated commonly used classification tools, including Kraken2, Bracken, and Centrifuge, utilising mini (8GB) and standard (30-50GB) databases. Bracken with the standard database performed best: the median percentage of reads across both sequencing platforms identified correctly to the species level was 98.46% (IQR 93.0:99.3) [range 57.1:100]. For Kraken2 with a mini database, a commonly used combination, median species-level identification was 79.3% (IQR 39.1:88.8) [range 11.2:100]. Classification performance varied by species, with E. coli being more challenging to classify correctly (59.4% to 96.4% of reads assigned the correct species, varying by tool used). By filtering out shorter Nanopore reads (<3500 bp), we found performance similar or superior to Illumina sequencing, despite higher sequencing error rates. Misclassification was more common when the misclassified species had a higher average nucleotide identity to the true species. Our findings highlight that taxonomic misclassification of sequencing data occurs and varies by sequencing and analysis workflow. This “bioinformatic contamination” should be accounted for in metagenomic pipelines to ensure accurate results that can support clinical decision making.
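The per-genome accuracy summaries above (median, IQR, range) can be computed from a table of correct-read fractions with the standard library alone; the sample values below are invented, not the study's data:

```python
from statistics import median, quantiles

def summarize_accuracy(percent_correct: list[float]) -> dict:
    """Median, interquartile range, and range of per-genome
    species-level classification accuracy (in percent)."""
    q1, _, q3 = quantiles(percent_correct, n=4, method="inclusive")
    return {
        "median": median(percent_correct),
        "iqr": (q1, q3),
        "range": (min(percent_correct), max(percent_correct)),
    }

# Hypothetical % of reads classified to the correct species, one per genome
per_genome = [98.2, 99.1, 93.5, 97.8, 99.6, 88.4, 96.0]
print(summarize_accuracy(per_genome))
```

Reporting the IQR and range alongside the median, as the abstract does, exposes per-species outliers (such as the E. coli case) that a single mean would hide.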


Author(s): Justine Mertz, Chiara Annucci, Valentina Aristodemo, Beatrice Giustolisi, Doriane Gras, ...

The study of articulatory complexity has proven to yield useful insights into the phonological mechanisms of spoken languages. In sign languages, this type of knowledge is scarcely documented. The current study compares a data-driven measure and a theory-driven measure of complexity for signs in French Sign Language (LSF). The former measure is based on error rates of handshape, location, orientation, movement, and sign fluidity in a repetition task administered to non-signers; the latter measure is derived by applying a feature-geometry model of sign description to the same set of signs. A significant correlation is found between the two measures for overall complexity. When looking at the impact of individual phonemic classes on complexity, a significant correlation is found for handshape and location but not for movement. We discuss how these results indicate that a fine-grained theoretical model of sign phonology/phonetics reflects the degree of complexity resulting from the perceptual and articulatory properties of signs.
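Correlating a data-driven score (non-signer error rates) with a theory-driven score (feature-geometry complexity) amounts to a plain correlation coefficient over per-sign values. A minimal sketch with invented scores:

```python
import math

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical per-sign scores: non-signer error rate vs. model-derived complexity
error_rate = [0.10, 0.25, 0.40, 0.15, 0.35]
theory_score = [2, 4, 6, 3, 5]
print(f"r = {pearson(error_rate, theory_score):.3f}")
```

A high r on real data is what licenses the paper's conclusion that the theoretical model tracks perceptual-articulatory difficulty; with error-rate data, a rank-based (Spearman) variant is often the more robust choice.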


2022 · pp. 146906672110667
Author(s): Miroslav Hruska, Dusan Holub

Detection of peptides lies at the core of bottom-up proteomics analyses. We examined a Bayesian approach to peptide detection, integrating match-based models (fragments, retention time, isotopic distribution, and precursor mass) and peptide prior probability models under a unified probabilistic framework. To assess the relevance of these models and their various combinations, we employed a complete- and a tail-complete search of a low-precursor-mass synthetic peptide library based on oncogenic KRAS peptides. The fragment match was by far the most informative match-based model, while the retention time match was the only remaining such model with an appreciable impact, increasing correct detections by around 8%. A peptide prior probability model built from a reference proteome greatly improved the detection over a uniform prior, essentially transforming de novo sequencing into a reference-guided search. The knowledge of a correct sequence tag in advance of peptide-spectrum matching had only a moderate impact on peptide detection unless the tag was long and of high certainty. The approach also derived more precise error rates on the analyzed combinatorial peptide library than those estimated using PeptideProphet and Percolator, showing its potential applicability for the detection of homologous peptides. Although the approach requires further computational developments for routine data analysis, it illustrates the value of peptide prior probabilities and presents a Bayesian approach for their incorporation into peptide detection.
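The core of such a unified framework is a posterior update that combines a peptide prior with independent match-based evidence. A minimal sketch in odds form, with hypothetical likelihood-ratio values that are not taken from the paper:

```python
def posterior_probability(prior: float, likelihood_ratios: list[float]) -> float:
    """Bayesian update: combine a prior with independent likelihood ratios.

    Each ratio is P(evidence | peptide correct) / P(evidence | incorrect),
    assuming the evidence sources are conditionally independent.
    """
    odds = prior / (1.0 - prior)
    for lr in likelihood_ratios:
        odds *= lr
    return odds / (1.0 + odds)

# Hypothetical peptide candidate: reference-proteome prior plus
# fragment-match (strong) and retention-time-match (weak) evidence
prior = 0.01           # from a peptide prior probability model
lrs = [500.0, 3.0]     # fragment match dominates, as in the abstract's ranking
print(f"posterior = {posterior_probability(prior, lrs):.3f}")
```

The sketch mirrors the abstract's findings qualitatively: a strong fragment-match ratio moves the posterior far more than the modest retention-time ratio, and a realistic prior matters enormously when the prior probability of any given sequence is tiny.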


Author(s): Xuan Song, Hai Yun Gao, Karl Herrup, Ronald P. Hart

Gene expression studies using xenograft transplants or co-culture systems, usually with mixed human and mouse cells, have proven valuable for uncovering cellular dynamics during development or in disease models. However, mRNA sequence similarity among species presents a challenge for accurate transcript quantification. To identify optimal strategies for analyzing mixed-species RNA sequencing data, we evaluate both alignment-dependent and alignment-independent methods. Alignment of reads to a pooled reference index is effective, particularly if optimal alignments are used to classify sequencing reads by species, which are then re-aligned with individual genomes, generating [Formula: see text] accuracy across a range of species ratios. Alignment-independent methods, such as convolutional neural networks, which extract the conserved sequence patterns of the two species, classify RNA sequencing reads with over 85% accuracy. Importantly, both methods perform well with different ratios of human and mouse reads. While non-alignment strategies successfully partitioned reads by species, the more traditional approach of mixed-genome alignment followed by optimized separation of reads proved more successful, with lower error rates.
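The "classify by best match against a pooled reference, then re-align per genome" strategy reduces to an arg-max over per-genome similarity scores. The sketch below uses exact k-mer matching as a stand-in scoring function and tiny made-up sequences; it is not the aligner or the genomes used in the study:

```python
def kmer_score(read: str, genome: str, k: int = 4) -> int:
    """Toy alignment proxy: number of read k-mers present in the genome."""
    kmers = {genome[i:i + k] for i in range(len(genome) - k + 1)}
    return sum(read[i:i + k] in kmers for i in range(len(read) - k + 1))

def classify_read(read: str, genomes: dict[str, str]) -> str:
    """Assign a read to the species whose genome shares the most k-mers."""
    return max(genomes, key=lambda species: kmer_score(read, genomes[species]))

genomes = {  # tiny made-up sequences standing in for the two references
    "human": "ACGTACGTTTGACCAGGT",
    "mouse": "TTTTCCCCGGGGAAAACG",
}
read = "ACGTTTGACC"  # overlaps the 'human' toy sequence
print(classify_read(read, genomes))
```

The species-similarity problem the abstract describes shows up here too: the shorter the read and the more conserved the region, the closer the two scores become, which is exactly where misclassification concentrates.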


2022 · Vol 23 (1)
Author(s): Nadia M. Davidson, Ying Chen, Teresa Sadras, Georgina L. Ryland, Piers Blombery, ...

In cancer, fusions are important diagnostic markers and targets for therapy. Long-read transcriptome sequencing allows the discovery of fusions with their full-length isoform structure. However, due to higher sequencing error rates, fusion finding algorithms designed for short reads do not work. Here we present JAFFAL, a method to identify fusions from long-read transcriptome sequencing. We validate JAFFAL using simulations, cell lines, and patient data from Nanopore and PacBio. We apply JAFFAL to single-cell data and find fusions spanning three genes, demonstrating transcripts detected from complex rearrangements. JAFFAL is available at https://github.com/Oshlack/JAFFA/wiki.
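At its simplest, fusion finding from transcript reads means detecting reads whose two segments align to different genes and tallying the supporting evidence per gene pair. The sketch below uses hypothetical pre-computed alignment tuples and is far simpler than JAFFAL's actual pipeline:

```python
from collections import Counter

def fusion_candidates(split_alignments, min_support=2):
    """Count reads whose 5' and 3' segments align to different genes.

    split_alignments: list of (gene_5prime, gene_3prime) tuples, one per read.
    Returns gene pairs supported by at least `min_support` reads.
    """
    counts = Counter(pair for pair in split_alignments if pair[0] != pair[1])
    return {pair: n for pair, n in counts.items() if n >= min_support}

# Hypothetical split-read alignments (BCR-ABL1 is a classic fusion; the
# GENE* names are made up)
reads = [("BCR", "ABL1"), ("BCR", "ABL1"), ("BCR", "ABL1"),
         ("GENE1", "GENE1"),            # ordinary, non-chimeric read
         ("GENE2", "GENE3")]            # singleton, likely alignment noise
print(fusion_candidates(reads))
```

A minimum-support threshold is one simple defense against the high per-read error rates of long-read data; real tools add breakpoint consistency checks and reference filtering on top.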

