# Significant Performance: Recently Published Documents

869 (five years: 437)

## H-INDEX

30 (five years: 10)

2022 ◽
Vol 40 (3) ◽
pp. 1-24
Author(s):
Jiaul H. Paik ◽
Yash Agrawal ◽
Sahil Rishi ◽
Vaishal Shah

Existing probabilistic retrieval models do not restrict the domain of the random variables that they deal with. In this article, we show that the upper bound of the normalized term frequency (tf) from the relevant documents is much smaller than the upper bound of the normalized tf from the whole collection. As a result, the existing models suffer from two major problems: (i) the domain mismatch causes data modeling error, and (ii) since the outliers have very large magnitude and the retrieval models follow the tf hypothesis, the combination of these two factors tends to overestimate the relevance score. To address these problems, we propose novel weighted probabilistic models based on truncated distributions. We evaluate our models on a set of large document collections and demonstrate significant performance improvement over six existing probabilistic models.
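The core observation can be sketched numerically. On synthetic data, the maximum normalized tf over a (hypothetical) relevant subset is typically far below the collection-wide maximum, and truncating scores at that bound caps the outliers that would otherwise dominate a tf-monotone relevance score. The normalization formula, data, and parameter values below are illustrative choices, not the paper's actual models:

```python
import numpy as np

rng = np.random.default_rng(0)

def normalized_tf(tf, doc_len, avg_len, k=1.2, b=0.75):
    """BM25-style length-normalized term frequency (illustrative choice)."""
    return tf / (tf + k * (1.0 - b + b * doc_len / avg_len))

# Synthetic collection: raw term frequencies and document lengths.
tf = rng.poisson(3.0, size=10_000).astype(float)
doc_len = rng.integers(50, 500, size=10_000).astype(float)
ntf = normalized_tf(tf, doc_len, doc_len.mean())

relevant = ntf[:100]      # pretend the first 100 documents are relevant
upper = relevant.max()    # upper bound observed in the relevant subset

# Truncation: cap collection-wide outliers at the relevant-set bound so
# they cannot dominate a tf-monotone relevance score.
truncated = np.minimum(ntf, upper)
```

A weighted model built on a truncated distribution would then assign probability mass only inside `[0, upper]` rather than over the whole collection-wide range.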

2022 ◽
Vol 18 (1) ◽
pp. 1-49
Author(s):
Lingjun Zhu ◽
Arjun Chaudhuri ◽
Sanmitra Banerjee ◽
Gauthaman Murali ◽
Pruek Vanna-Iampikul ◽
...

Monolithic 3D (M3D) is an emerging heterogeneous integration technology that overcomes the limitations of conventional through-silicon vias (TSVs) and provides significant performance uplift and power reduction. However, the ultra-dense 3D interconnects pose significant physical-design challenges in how best to utilize them. Moreover, the unique low-temperature fabrication process of M3D requires dedicated design-for-test mechanisms to verify the reliability of the chip. In this article, we provide an in-depth analysis of these design and test challenges in M3D, together with a comprehensive survey of the state-of-the-art solutions presented in the literature. The article covers all key steps of M3D physical design, including partitioning, placement, clock routing, and thermal analysis and optimization. In addition, we provide an in-depth analysis of various fault mechanisms, including M3D manufacturing defects, delay faults, and monolithic inter-tier via (MIV) faults. Our design-for-test solutions include test pattern generation for pre-/post-bond testing, built-in self-test, and test access architectures targeting M3D.

2022 ◽
Vol 6 (POPL) ◽
pp. 1-30
Author(s):
Matthew Kolosick ◽
Shravan Narayan ◽
Evan Johnson ◽
Michael LeMay ◽
...

Software sandboxing or software-based fault isolation (SFI) is a lightweight approach to building secure systems out of untrusted components. Mozilla, for example, uses SFI to harden the Firefox browser by sandboxing third-party libraries, and companies like Fastly and Cloudflare use SFI to safely co-locate untrusted tenants on their edge clouds. While there have been significant efforts to optimize and verify SFI enforcement, context switching in SFI systems remains largely unexplored: almost all SFI systems use heavyweight transitions that are not only error-prone but also incur significant performance overhead from saving, clearing, and restoring registers on every context switch. We identify a set of zero-cost conditions that characterize when sandboxed code has sufficient structure to guarantee security via lightweight zero-cost transitions (simple function calls). We modify the Lucet Wasm compiler and its runtime to use zero-cost transitions, eliminating the undue performance tax on systems that rely on Lucet for sandboxing (e.g., we speed up image and font rendering in Firefox by up to 29.7% and 10%, respectively). To remove the Lucet compiler and its correct implementation of the Wasm specification from the trusted computing base, we (1) develop a static binary verifier, VeriZero, which (in seconds) checks that binaries produced by Lucet satisfy our zero-cost conditions, and (2) prove the soundness of VeriZero by developing a logical relation that captures when a compiled Wasm function is semantically well-behaved with respect to our zero-cost conditions. Finally, we show that our model is useful beyond Wasm by describing a new, purpose-built SFI system, SegmentZero32, that uses x86 segmentation and LLVM with mostly off-the-shelf passes to enforce our zero-cost conditions; our prototype performs on par with the state-of-the-art Native Client SFI system.

Author(s):
Timon Hummel ◽
Claude Coatantiec ◽
Xavier Gnata ◽
Tobias Lamour ◽
Rémi Rivière ◽
...

Abstract: The measurement accuracy of recent and future space-based imaging spectrometers with high spectral and spatial resolution suffers from the inhomogeneity of the radiances of the observed Earth scene. The Instrument Spectral Response Function (ISRF) is distorted by the inhomogeneous illumination caused by scene heterogeneity, which gives rise to a pseudo-random error on the measured spectra. To assess the spectral stability of the spectrograph, stringent requirements are typically defined on the ISRF, such as shape knowledge and the stability of the centroid position of the spectral sample. This high level of spectral accuracy is particularly crucial for missions quantifying small variations in the total column of well-mixed trace gases like CO₂. In the framework of the CO₂ Monitoring Mission (CO2M) industrial feasibility study (Phase A/B1), we investigated a new slit design called the 2D-Slit Homogenizer (2DSH), which aims to reduce the Earth-scene contrast entering the instrument. The 2DSH is based on optical fibre waveguides assembled in a bundle, which scramble the light in the across-track (ACT) and along-track (ALT) directions. A single fibre core dimension in ALT defines the spectral extent of the slit, and the dimension in ACT represents the spatial sample of the instrument; the full swath is given by the total size of the adjoined fibres in the ACT direction. In this work, we provide experimental measurement data on the stability of a representative rectangular-core fibre as well as a preliminary pre-development of a 2DSH fibre bundle. In our study, the slit concept demonstrated significant performance gains in the stability of the ISRF for several extreme high-contrast Earth scenes, achieving a shape stability of <0.5% and a centroid stability of <0.25 pm (NIR). Given this unprecedented ISRF stabilization, we conclude that the 2DSH concept efficiently desensitizes the instrument to radiometric and spectral errors arising from the heterogeneity of the Earth-scene radiance.
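The centroid-stability figure can be made concrete with a small sketch: the centroid of a sampled ISRF is its first moment, and a scene-induced distortion shifts it. The Gaussian shape, the linear-tilt distortion model, and all numbers below are our own illustrative assumptions, not the mission's actual ISRF:

```python
import numpy as np

def centroid(wavelength, isrf):
    """First moment of a sampled ISRF: the centroid wavelength."""
    return np.sum(wavelength * isrf) / np.sum(isrf)

# Illustrative ISRF: a Gaussian sampled on a fine wavelength grid (nm).
wl = np.linspace(754.9, 755.1, 2001)
sigma = 0.02
isrf = np.exp(-0.5 * ((wl - 755.0) / sigma) ** 2)

# Model a heterogeneous-scene distortion as a mild linear tilt of the ISRF.
tilt = 1.0 + 0.05 * (wl - 755.0) / sigma
shift_nm = centroid(wl, isrf * tilt) - centroid(wl, isrf)
shift_pm = shift_nm * 1e3   # 1 nm = 1000 pm
```

For a Gaussian, a tilt of relative slope a per sigma shifts the centroid by roughly a·sigma (here 0.05 × 0.02 nm = 1 pm), which is the scale against which a <0.25 pm stability requirement is judged.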

Author(s):
Song Li ◽
Mustafa Ozkan Yerebakan ◽
Yue Luo ◽
Ben Amaba ◽
William Swope ◽
...

Abstract: Voice recognition has become an integral part of our lives, commonly used in call centers and in virtual assistants. However, voice recognition is increasingly applied to more industrial uses. Each of these use cases has unique characteristics that may impact the effectiveness of voice recognition and, in turn, industrial productivity, performance, or even safety. One of the most prominent is the unique background noise that dominates each industry, driven by different machinery and different work layouts. Another important characteristic is the type of communication present in these settings: daily communication often involves longer sentences uttered under relatively silent conditions, whereas communication in industrial settings is often short and conducted in loud conditions. In this study, we demonstrate the importance of taking these two elements into account by comparing the performance of two voice recognition algorithms under several background-noise conditions: a regular Convolutional Neural Network (CNN) based voice recognition algorithm and an Automatic Speech Recognition (ASR) based model with a denoising module. Our results indicate a significant performance drop between the typical background noise (white noise) and the rest of the background noises. Moreover, our custom ASR model with the denoising module outperformed the CNN-based model, with an overall performance increase of 14-35% across all background noises. Both results show that specialized voice recognition algorithms need to be developed for these environments before they can be reliably deployed as control mechanisms.
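A common way to construct such noise conditions is to mix clean speech with a noise recording at a fixed signal-to-noise ratio (SNR). The sketch below shows the scaling involved, with a synthetic tone standing in for a speech clip; it is our illustration of the general technique, not the paper's actual experimental setup:

```python
import numpy as np

rng = np.random.default_rng(1)

def mix_at_snr(speech, noise, snr_db):
    """Scale `noise` so the speech-to-noise power ratio equals `snr_db`."""
    p_speech = np.mean(speech ** 2)
    p_noise = np.mean(noise ** 2)
    scale = np.sqrt(p_speech / (p_noise * 10.0 ** (snr_db / 10.0)))
    return speech + scale * noise

sr = 16_000
t = np.linspace(0.0, 1.0, sr, endpoint=False)
speech = 0.5 * np.sin(2 * np.pi * 440.0 * t)   # stand-in for a speech clip
white = rng.standard_normal(sr)                 # white background noise
noisy = mix_at_snr(speech, white, snr_db=5.0)
```

Swapping `white` for recordings of machinery or factory-floor noise at the same SNR yields matched test conditions across noise types.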

2022 ◽
Vol 15 (1) ◽
Author(s):
Qin Ni ◽
Zhuo Fan ◽
Lei Zhang ◽
Bo Zhang ◽
Xiaochen Zheng ◽
...

Abstract: Human activity recognition (HAR) has received increasing attention and can play an important role in many fields, such as healthcare and the intelligent home. In this paper we discuss an application of activity recognition in the healthcare field. Essential tremor (ET) is a common neurological disorder that causes involuntary tremor in affected people, and it is easily misdiagnosed as other diseases. We combine essential tremor and activity recognition to recognize ET patients' activities and evaluate the degree of ET, providing an auxiliary analysis for disease diagnosis using a stacked denoising autoencoder (SDAE) model. Because the behavior dataset collected from ET patients is small, it is difficult for the model to learn enough useful features, so resampling techniques are proposed to alleviate the small-sample-size and imbalanced-samples problems. In our experiment, 20 patients with ET and 5 healthy people were chosen and their acceleration data were collected for activity recognition. The experimental results show significant results on ET patients' activity recognition: the SDAE model achieved an overall accuracy of 93.33%. Moreover, the model was also used to evaluate the degree of ET and achieved an accuracy of 95.74%. Across this set of experiments, the model attains significant performance on both ET patients' activity recognition and degree-of-tremor assessment.
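As a sketch of the resampling idea, here is plain random oversampling, which replicates minority-class samples until all classes are equally represented; the paper's exact resampling techniques may differ, and the toy data below are our own:

```python
import numpy as np

rng = np.random.default_rng(2)

def random_oversample(X, y):
    """Replicate minority-class samples until every class matches the majority."""
    classes, counts = np.unique(y, return_counts=True)
    target = counts.max()
    X_parts, y_parts = [], []
    for cls, n in zip(classes, counts):
        idx = np.flatnonzero(y == cls)
        # Draw (target - n) extra indices with replacement from this class.
        extra = rng.choice(idx, size=target - n, replace=True)
        keep = np.concatenate([idx, extra])
        X_parts.append(X[keep])
        y_parts.append(y[keep])
    return np.concatenate(X_parts), np.concatenate(y_parts)

# Imbalanced toy "acceleration feature" data: 90 vs 10 samples.
X = rng.standard_normal((100, 3))
y = np.array([0] * 90 + [1] * 10)
X_bal, y_bal = random_oversample(X, y)
```

After resampling, both classes contribute equally to each training batch, which helps a small model avoid collapsing onto the majority class.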

2022 ◽
Vol 14 (1) ◽
pp. 1-22
Author(s):
Ivan Jajić ◽
Mario Spremić ◽
Ivan Miloloža

In this paper, the adoption of Augmented Reality, one of the emerging and intriguing digital technologies, is investigated. The research uses the extended Unified Theory of Acceptance and Use of Technology (UTAUT) framework to analyze the adoption factors, with data on Augmented Reality adoption collected from student respondents; the student population was chosen because it has the highest probability of accepting new technologies. The results show a positive and significant effect of performance expectancy and enjoyment, while effort expectancy has a negative and significant impact on the behavioural-intention dependent variable. These results can inform the potential development of Augmented Reality apps in the retail industry and carry academic implications for the connections between variables in the UTAUT framework.

2021 ◽
Author(s):
Ajanthaa Lakkshmanan ◽
C. Anbu Ananth ◽
S. Tiroumalmouroughane S. Tiroumalmouroughane

Purpose: The advancements of deep learning (DL) models demonstrate significant performance on accurate pancreatic tumor segmentation and classification.

Design/methodology/approach: The presented model involves different stages of operations, namely preprocessing, image segmentation, feature extraction and image classification. First, a bilateral filtering (BF) technique is applied for image preprocessing to eradicate the noise present in the CT pancreatic image. Next, the noninteractive GrabCut (NIGC) algorithm is applied for the image segmentation process. Subsequently, the residual network 152 (ResNet152) model is utilized as a feature extractor to generate a valuable set of feature vectors. Finally, the red deer optimization algorithm (RDA) tuned backpropagation neural network (BPNN), called the RDA-BPNN model, is employed as a classification model to determine the existence of pancreatic tumor.

Findings: The experimental results were validated in terms of different performance measures, and a detailed comparative analysis confirmed the superiority of the RDA-BPNN model, with a sensitivity of 98.54%, specificity of 98.46%, accuracy of 98.51% and F-score of 98.23%.

Originality/value: The study also identifies several novel automated deep learning based approaches used by researchers to assess the performance of the RDA-BPNN model on a benchmark dataset and analyzes the results in terms of several measures.
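The bilateral-filtering preprocessing step can be sketched as follows. This is a naive reference implementation with illustrative parameters, not the authors' code; in practice a library routine such as OpenCV's `cv2.bilateralFilter` would be used on the CT slices:

```python
import numpy as np

def bilateral_filter(img, radius=2, sigma_s=2.0, sigma_r=25.0):
    """Naive bilateral filter: edge-preserving denoising via a product of
    a spatial Gaussian and an intensity-range Gaussian (illustrative params)."""
    h, w = img.shape
    pad = np.pad(img, radius, mode="edge").astype(float)
    out = np.empty((h, w), dtype=float)
    ys, xs = np.mgrid[-radius:radius + 1, -radius:radius + 1]
    spatial = np.exp(-(ys ** 2 + xs ** 2) / (2.0 * sigma_s ** 2))
    for i in range(h):
        for j in range(w):
            patch = pad[i:i + 2 * radius + 1, j:j + 2 * radius + 1]
            rangew = np.exp(-(patch - img[i, j]) ** 2 / (2.0 * sigma_r ** 2))
            weights = spatial * rangew
            out[i, j] = (weights * patch).sum() / weights.sum()
    return out

# Toy demo: smooth a noisy flat region (stand-in for a CT patch).
rng = np.random.default_rng(3)
noisy = 100.0 + 10.0 * rng.standard_normal((16, 16))
smoothed = bilateral_filter(noisy)
```

The range term suppresses averaging across strong intensity edges, which is why bilateral filtering denoises without blurring organ boundaries the way a plain Gaussian blur would.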

2021 ◽
Vol 10 (1) ◽
pp. 112
Author(s):
Artem A. Lenskiy

Reconstruction-based approaches to anomaly detection tend to fall short when applied to complex datasets whose target classes possess high inter-class variance. Similar to the idea of self-taught learning used in transfer learning, many domains are rich with similar unlabeled datasets that can be leveraged as a proxy for out-of-distribution samples. In this paper we introduce the latent-insensitive autoencoder (LIS-AE), in which unlabeled data from a similar domain are utilized as negative examples to shape the latent layer (bottleneck) of a regular autoencoder such that it is only capable of reconstructing one task. We provide theoretical justification for the proposed training process and loss functions, along with an extensive ablation study highlighting important aspects of the model. We test the model in multiple anomaly detection settings, presenting quantitative and qualitative analyses that showcase its significant performance improvement on anomaly detection tasks.

2021 ◽
Vol 10 (4) ◽
pp. 146-159
Author(s):
Qusay Idrees Sarhan

Java is one of the most in-demand programming languages nowadays, used for developing a wide range of software applications including desktop, mobile, embedded, and web applications. Writing efficient Java code for these various types of applications (some of which are critical and time-sensitive) is crucial, and is a recommended best practice that every Java developer should follow. To date, the literature lacks in-depth experimental studies evaluating the impact of efficient Java programming strategies on the runtime performance of desktop applications. Thus, this paper performs a variety of carefully chosen and implemented experimental tests that evaluate the most important aspects of efficient Java programming for desktop applications in terms of runtime. The results of this study show that significant performance improvements can be achieved by applying different programming strategies.
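The paper's experiments target Java, but the kind of strategy comparison involved can be illustrated in a language-neutral way. The sketch below times two string-building strategies with Python's `timeit`; the functions, sizes, and repetition counts are our own illustrative choices, not the paper's benchmarks:

```python
import timeit

def build_with_concat(n):
    """Incremental concatenation: may repeatedly copy the growing string."""
    s = ""
    for _ in range(n):
        s += "x"
    return s

def build_with_join(n):
    """Single join over a generator: the commonly recommended strategy."""
    return "".join("x" for _ in range(n))

# Time each strategy over repeated runs; comparing such timings per
# strategy is the essence of this kind of runtime study.
t_concat = timeit.timeit(lambda: build_with_concat(10_000), number=20)
t_join = timeit.timeit(lambda: build_with_join(10_000), number=20)
```

In Java the analogous comparison would pit `String` concatenation in a loop against `StringBuilder.append`, measured with a harness such as JMH to control for JIT warm-up.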