Enhancing Accuracy in a Touch Operation Biometric System: A Case on the Android Pattern Lock Scheme

2020 ◽  
Vol 2020 ◽  
pp. 1-12
Author(s):  
Allan Ng’ang’a ◽  
Paula M. W. Musuva

The main objective of this research study is to enhance the functionality of an Android pattern lock application by determining whether the time elements of a touch operation, in particular time on dot (TOD) and time between dots (TBD), can be accurately used as a biometric identifier. Three hypotheses were tested through this study. H1: there is a correlation between the number of touch stroke features used and the accuracy of the touch operation biometric system. H2: there is a correlation between pattern complexity and the accuracy of the touch operation biometric system. H3: there is a correlation between user training and the accuracy of the touch operation biometric system. Convenience sampling and a within-subjects design with repeated measures were used to test an overall sample of 12 subjects drawn from a university population, who provided a total of 2,096 feature-extracted data points. Analysis was done using the Dynamic Time Warping (DTW) algorithm. The study showed that extracting a single touch stroke biometric feature, coupled with user training, yielded high average accuracy levels of up to 82%. This builds a case for introducing biometrics into smart devices with average processing capabilities, as they would be able to handle a biometric system without compromising overall system performance. For future work, it is recommended that other classification algorithms be applied to the existing data set and their results compared with those obtained with DTW.
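The DTW comparison at the core of such a system can be sketched in a few lines. This is the generic textbook formulation, not the authors' implementation, and the sample timing values are hypothetical:

```python
def dtw_distance(a, b):
    """Classic dynamic time warping distance between two sequences
    of timing features (e.g. per-dot TOD/TBD values in ms)."""
    n, m = len(a), len(b)
    INF = float("inf")
    # cost[i][j] = minimal cumulative cost of aligning a[:i] with b[:j]
    cost = [[INF] * (m + 1) for _ in range(n + 1)]
    cost[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = abs(a[i - 1] - b[j - 1])
            cost[i][j] = d + min(cost[i - 1][j],      # insertion
                                 cost[i][j - 1],      # deletion
                                 cost[i - 1][j - 1])  # match
    return cost[n][m]

# hypothetical per-dot timings of an enrolled template vs a login attempt
d = dtw_distance([120, 80, 95], [118, 85, 90])  # small distance for similar rhythms
```

In use, a probe pattern's TOD/TBD sequence would be matched against enrolled templates and the user accepted when the warping distance falls below a tuned threshold.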

2021 ◽  
Vol ahead-of-print (ahead-of-print) ◽  
Author(s):  
Rajit Nair ◽  
Santosh Vishwakarma ◽  
Mukesh Soni ◽  
Tejas Patel ◽  
Shubham Joshi

Purpose The novel coronavirus (COVID-19), which first appeared in December 2019 in the city of Wuhan, China, rapidly spread around the world and became a pandemic. It has had a devastating impact on daily life, public health and the global economy. Positive cases must be identified as soon as possible to avoid further spread of the disease and to care swiftly for affected patients. The need for supportive diagnostic instruments has increased, as no specific automated toolkits are available. The latest results from radiology imaging techniques indicate that such images provide valuable information on the COVID-19 virus. Advanced artificial intelligence (AI) technologies combined with radiological imagery can help diagnose this condition accurately and help compensate for the lack of specialist doctors in isolated areas. In this research, a new paradigm for automatic detection of COVID-19 from raw chest X-ray images is presented. The proposed model, DarkCovidNet, is designed to provide accurate diagnostics for binary classification (COVID vs no findings) and multi-class classification (COVID vs no findings vs pneumonia). The implemented model achieved an average precision of 98.46% and 91.352% for binary and multi-class classification, respectively, and an average accuracy of 98.97% and 87.868%. The DarkNet model, originally used as the classifier of the YOLO ("you only look once") real-time object detection method, serves as the basis of this research. A total of 17 convolutional layers were implemented, with different filters on each layer. The platform can be used by radiologists to verify their initial screening and can also be used for screening patients through the cloud. Design/methodology/approach This study uses the CNN-based Darknet-19 model, which acts as the backbone of a real-time object detection system; its architecture is designed to detect objects in real time.
This study developed the DarkCovidNet model based on the Darknet architecture with fewer layers and filters. Typically, the DarkNet architecture consists of 19 convolution layers and five max-pooling layers. Findings The work discussed in this paper is used to diagnose various radiology images and to develop a model that can accurately predict or classify the disease. The data set used in this work consists of COVID-19 and non-COVID-19 images taken from various sources. The deep learning model DarkCovidNet was applied to this data set and showed significant performance in both binary and multi-class classification. In binary classification the model achieved an average accuracy of 98.97% for the detection of COVID-19, whereas in multi-class classification it achieved an average accuracy of 87.868% when classifying COVID-19, no findings and pneumonia. Research limitations/implications One significant limitation of this work is that a limited number of chest X-ray images was used, while the number of COVID-19 patients is increasing rapidly. In the future, the model will be trained on a larger data set generated from local hospitals, and its performance on that data will be evaluated. Originality/value Deep learning has made significant changes in the field of AI by generating good results, especially in pattern recognition. A typical CNN structure has a convolution layer that extracts features from the input with the filters it applies, a pooling layer that reduces the size for computational performance and a fully connected layer, which is a neural network. A CNN model is created by combining one or more such layers, and its internal parameters are adjusted to accomplish a particular task, such as classification or object recognition.
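The convolution-and-pooling building blocks described above can be illustrated with a minimal NumPy sketch. This is a generic illustration of CNN layers, not the DarkCovidNet code; the edge kernel and image are made up:

```python
import numpy as np

def conv2d(x, k):
    """Valid 2-D convolution (cross-correlation, as in CNN layers)."""
    h, w = x.shape
    kh, kw = k.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(x[i:i + kh, j:j + kw] * k)
    return out

def maxpool(x, s=2):
    """Non-overlapping s-by-s max pooling for downsampling."""
    h, w = x.shape
    return x[:h - h % s, :w - w % s].reshape(h // s, s, w // s, s).max(axis=(1, 3))

img = np.zeros((6, 6))
img[:, 3:] = 1.0                   # right half bright: a vertical edge
edge = np.array([[-1.0, 1.0]])     # responds to a left-to-right increase
feat = maxpool(conv2d(img, edge))  # feature map after one conv + pool stage
```

A full model stacks many such conv/pool stages (19 convolution and five max-pooling layers in Darknet-19) and ends with a layer producing the class scores.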


2003 ◽  
Vol 12 (2) ◽  
pp. 95-103 ◽  
Author(s):  
William R. Holcomb ◽  
Chris Blank

Context: Ultrasound significantly raises tissue temperature, but the time of temperature elevation is short. Objective: To assess the effectiveness of superficial preheating on temperature elevation and decline when using ultrasound. Design: Within-subjects design to test the independent variable, treatment condition; repeated-measures ANOVAs to analyze the dependent variables, temperature elevation and decline. Setting: Athletic training laboratory. Intervention: Temperature at a depth of 3.75 cm was measured during ultrasound after superficial heating and with ultrasound alone. Subjects: 10 healthy men. Main Outcome Measure: Temperature was recorded every 30 s during 15 min of ultrasound and for 15 min afterward. Results: Temperature elevation with ultrasound was significantly greater with preheating (4.0 ± 0.21 °C) than with ultrasound alone (3.0 ± 0.22 °C). Temperature decline was not significantly different between preheating and ultrasound alone. Conclusions: Superficial preheating significantly increases temperature elevation but has no effect on temperature decline during a 15-min cooling period.


2021 ◽  
Vol 2021 ◽  
pp. 1-9
Author(s):  
Aolin Che ◽  
Yalin Liu ◽  
Hong Xiao ◽  
Hao Wang ◽  
Ke Zhang ◽  
...  

In the past decades, due to their low design cost and easy maintenance, text-based CAPTCHAs have been extensively used in constructing security mechanisms for user authentication. With the recent advances in machine/deep learning for recognizing CAPTCHA images, a growing number of attack methods have been presented to break text-based CAPTCHAs. These machine learning/deep learning-based attacks often rely on training models on massive volumes of training data, and poorly constructed CAPTCHA data leads to low attack accuracy. To investigate this issue, we propose a simple, generic, and effective preprocessing approach to filter and enhance the original CAPTCHA data set so as to improve the accuracy of previous attack methods. In particular, the proposed preprocessing approach consists of a data selector and a data augmentor. The data selector automatically filters out a training data set with training significance, while the data augmentor uses four different image noises to generate different CAPTCHA images. The well-constructed CAPTCHA data set can better train deep learning models and further improve the accuracy rate. Extensive experiments demonstrate that the accuracy rates of five commonly used attack methods, after combining our preprocessing approach, are 2.62% to 8.31% higher than those without it. Moreover, we also discuss potential research directions for future work.
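The data augmentor's noise step can be sketched as follows. The specific noise types and parameters here are illustrative assumptions, since the abstract does not name the four noises used:

```python
import numpy as np

rng = np.random.default_rng(0)

def gaussian_noise(img, sigma=10.0):
    """Additive Gaussian noise, clipped back to the valid 8-bit pixel range."""
    noisy = img.astype(float) + rng.normal(0.0, sigma, img.shape)
    return np.clip(noisy, 0, 255).astype(np.uint8)

def salt_and_pepper(img, p=0.05):
    """Flip roughly a fraction p of pixels to pure black or pure white."""
    out = img.copy()
    mask = rng.random(img.shape)
    out[mask < p / 2] = 0        # pepper
    out[mask > 1 - p / 2] = 255  # salt
    return out
```

Each noisy copy of a labelled CAPTCHA image keeps its original label, enlarging the training set without extra annotation effort.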


Product reviews are valuable for prospective customers in supporting their purchase decisions. To this end, numerous mining techniques have been proposed, wherein judging a review sentence's orientation (e.g. positive or negative) is considered one of the key challenges. Recently, deep learning has emerged as a powerful technique for sentiment classification problems: a neural network intrinsically learns useful representations automatically, without human effort. However, the success of deep learning largely depends on the availability of large-scale training data. We propose a novel deep learning framework for product review sentiment classification which employs widely available ratings as weak supervision signals. The framework consists of two steps: (1) learning a high-level representation (an embedding space) which captures the general sentiment distribution of sentences through rating data; (2) adding a classification layer on top of the embedding layer and using labelled sentences for supervised fine-tuning. We explore two types of low-level network architecture for modelling review sentences, namely convolutional feature extractors and long short-term memory. To evaluate the proposed framework, we collect a data set containing 1.1M weakly labelled review sentences and 11,754 labelled review sentences from Amazon. Experimental results demonstrate the efficacy of the proposed framework and its superiority over baselines. As future work, we aim to detect fake reviews posted by bots or by malicious paid reviewers: companies may hire people to boost a product's ranking by assigning fake ratings, and such actors post a continuous stream of ratings or reviews for the product. By analysing rating patterns, such fake ratings can be detected and removed so that only genuine reviews are shown to users.
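Step (1)'s use of ratings as weak supervision can be illustrated with a small sketch. The star-rating cutoffs are assumptions for illustration, not thresholds given in the paper:

```python
def weak_label(stars):
    """Map a review's star rating to a weak sentiment label:
    4-5 stars -> positive (1), 1-2 stars -> negative (0),
    3 stars is ambiguous and is discarded (None)."""
    if stars >= 4:
        return 1
    if stars <= 2:
        return 0
    return None

def build_weak_corpus(reviews):
    """Keep only sentences whose rating yields an unambiguous weak label."""
    return [(text, weak_label(stars)) for text, stars in reviews
            if weak_label(stars) is not None]
```

The resulting (sentence, weak label) pairs would pre-train the embedding layer, after which the much smaller hand-labelled set fine-tunes the classifier on top.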


2021 ◽  
Vol 2 ◽  
Author(s):  
Zekun Cao ◽  
Jeronimo Grandi ◽  
Regis Kopper

Dynamic field of view (FOV) restrictors have been successfully used to reduce visually induced motion sickness (VIMS) during continuous viewpoint motion control (virtual travel) in virtual reality (VR). This benefit, however, comes at the cost of losing peripheral awareness during provocative motion. Likewise, the use of visual references that are stable in relation to the physical environment, called rest frames (RFs), has also been shown to reduce discomfort during virtual travel tasks in VR. We propose a new RF-based design called Granulated Rest Frames (GRFs) with a soft-edged circular cutout in the center that leverages the rest frames’ benefits without completely blocking the user’s peripheral view. The GRF design is application-agnostic and does not rely on context-specific RFs, such as commonly used cockpits. We report on a within-subjects experiment with 20 participants. The results suggest that, by strategically applying GRFs during a visual search session in VR, we can achieve better item searching efficiency as compared to restricted FOV. The effect of GRFs on reducing VIMS remains to be determined by future work.


1976 ◽  
Vol 43 (2) ◽  
pp. 532-534 ◽  
Author(s):  
James C. Norton

10 subjects were studied to determine AEP effects of square, circle, and blank stimuli with variable stimulus intensity. For the group as a whole, object and intensity effects were significant on a number of amplitude and latency measures, but the object effect appears largely to reflect the presence or absence of a figure, rather than its nature. Increased intensity differentially affected latency, shortening the first negative deflection while lengthening the second positive one. Amplitude generally increased with higher intensity. Analysis of within-subjects effects showed considerable variability as to which parameters were significantly related to the independent variables in individual subjects. A repeated-measures, within-subjects research strategy is seen as appropriate on the basis of these data.


1997 ◽  
Vol 20 (3) ◽  
pp. 529-547 ◽  
Author(s):  
Bettina Hosenfeld ◽  
Han L.J. van der Maas ◽  
Dymphna C. van den Boom

This paper reports on modelling six frequency distributions representing the analogical reasoning performance of four different samples of elementary schoolchildren. A two-component model outperformed a one-component model in all investigated data sets, discriminating accurate performers with high success probabilities and inaccurate performers with low success probabilities, whereas for two data sets a three-component model provided the best fit. In a treatment-control group data set, the treatment group comprised a larger proportion of accurate performers than the control group, whereas the success probabilities of the two latent classes were nearly identical in both groups. In a repeated-measures data set, both the success probabilities of the two latent classes and the proportion of accurate performers increased from the first to the second test session. The results provided a first indication of a transition in the development of analogical reasoning in elementary schoolchildren.
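A two-component latent-class model of this kind can be sketched as a binomial mixture fitted with EM. This is a generic formulation, and the data below are made up for illustration, not the study's:

```python
def em_binomial_mixture(successes, trials, iters=200):
    """EM for a two-component binomial mixture: each child answers
    `trials` items; latent class c has success probability p[c]."""
    w = [0.5, 0.5]   # mixing proportions (share of each performer class)
    p = [0.3, 0.8]   # initial success probabilities (inaccurate vs accurate)
    for _ in range(iters):
        # E-step: responsibility of each class for each child's score
        resp = []
        for s in successes:
            lik = [w[c] * p[c] ** s * (1 - p[c]) ** (trials - s) for c in (0, 1)]
            tot = sum(lik)
            resp.append([l / tot for l in lik])
        # M-step: re-estimate mixing weights and success probabilities
        for c in (0, 1):
            rc = sum(r[c] for r in resp)
            w[c] = rc / len(successes)
            p[c] = sum(r[c] * s for r, s in zip(resp, successes)) / (rc * trials)
    return w, p
```

On well-separated scores the fit recovers a low-success and a high-success class plus their proportions, mirroring the inaccurate/accurate performer split reported above.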


1989 ◽  
Vol 33 (18) ◽  
pp. 1223-1227 ◽  
Author(s):  
James R. Lewis

This paper discusses methods with which one can simultaneously counterbalance immediate sequential effects and pairing of conditions and stimuli in a within-subjects design using pairs of Latin squares. Within-subjects (repeated measures) experiments are common in human factors research. The designer of such an experiment must develop a scheme to ensure that the conditions and stimuli are not confounded, or randomly order stimuli and conditions. While randomization ensures balance in the long run, it is possible that a specific random sequence may not be acceptable. An alternative to randomization is to use Latin squares. The usual Latin square design ensures that each condition appears an equal number of times in each column of the square. Latin squares have been described which have the effect of counterbalancing immediate sequential effects. The objective of this work was to extend these earlier efforts by developing procedures for designing pairs of Latin squares which ensure complete counterbalancing of immediate sequential effects for both conditions and stimuli, and also ensure that conditions and stimuli are paired in the squares an equal number of times.
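The single-square building block behind such designs, a digram-balanced Latin square in which (for an even number of conditions) every condition immediately precedes every other exactly once, can be sketched as follows; the pairing of two squares for conditions and stimuli described in the paper is not reproduced here:

```python
def balanced_latin_square(n):
    """Digram-balanced (Williams-style) Latin square: row i is the
    zig-zag order 0, 1, n-1, 2, n-2, ... shifted by i modulo n.
    For even n, every condition immediately precedes every other
    condition exactly once across the n rows."""
    first = [0]
    for k in range(1, n):
        first.append((k + 1) // 2 if k % 2 else n - k // 2)
    return [[(c + i) % n for c in first] for i in range(n)]
```

Each row gives one subject's presentation order, so columns balance serial position while the zig-zag construction balances immediate sequential effects.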


2021 ◽  
Vol 0 (0) ◽  
Author(s):  
Juliann Saquib ◽  
Haneen A. AlMohaimeed ◽  
Sally A. AlOlayan ◽  
Nora A. AlRebdi ◽  
Jana I. AlBulaihi ◽  
...  

Abstract Objectives Scientific evidence suggests that virtual reality (VR) could potentially help patients tolerate painful medical procedures and conditions. The aim of this study was to evaluate the efficacy of virtual reality on pain tolerance and threshold. Methods A within-subjects experimental study was conducted on 53 female students at Qassim University in Saudi Arabia. Each participant completed three rounds of assessment, one baseline (no VR) and two VR immersions (passive and interactive), in randomized order. During each round, participants submerged their non-dominant hand into an ice bath; pain threshold and tolerance were measured as outcomes and analyzed using repeated-measures ANOVA. Results Participants had both a higher pain threshold and tolerance during the interactive and passive VR rounds than in the non-VR baseline assessment (p<0.05). Participants had greater pain tolerance during the interactive VR condition than the passive VR condition (p<0.001). Conclusions VR experiences increase pain threshold and tolerance with minimal side effects, and larger effects were demonstrated using interactive games. Interactive VR gaming should be considered and tested as a treatment for pain.


Author(s):  
Владимир Борисович Барахнин ◽  
Светлана Валентиновна Мальцева ◽  
Константин Владимирович Данилов ◽  
Василий Вячеславович Корнилов

Modern socio-technical systems in various fields include a large number of smart devices that can independently regulate their own energy consumption and interact with other consumers in decision-making and management processes. Energy is one such field, where self-organization and collective consumption systems are the most promising in terms of ensuring the efficiency of energy use. Existing and prospective approaches to static and dynamic time-based electricity tariffs are considered. The paper presents a mathematical description of two models of energy consumption: a static model based on the allocation of two zones with a fixed duration and tariff for each, and a dynamic two-tariff model with feedback, in which tariffs change based on analysis of current electricity consumption.
A pilot study of both models was conducted using energy consumption data and taking into account the rational behavior of smart devices as consumers that can choose the best periods for electricity consumption. The experiments investigated how an increase in the share of smart devices among electricity consumers, as well as the options for establishing zones and tariffs, affects the possibility of achieving uniform consumption during the day. The experiments showed that with a small proportion of smart devices, acceptable results that reduce the variation of the consumption function can be achieved with the feedback-free model. An increase in the number of actors in the system inevitably requires a feedback mechanism that allows the resource supplier to prevent excessive concentration of smart devices in the period of the cheaper tariff. However, when the share of smart devices exceeds a certain critical value, a pronounced inversion of the times of the cheap and expensive tariffs occurs in two successive iterations. In this case, in order to ensure a reasonably even distribution of electricity consumption, it is advisable for the supplier to return to a single tariff rate. Thus, an excessive increase in the number of actors in the system can neutralize the effect of their use.
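The rational-device behaviour described above can be sketched with a toy simulation. The load figures and zone boundaries are hypothetical, not the paper's data:

```python
# Hypothetical numbers: a flat 3 kW background load per hour and a
# two-zone tariff with a cheap night zone from 23:00 to 07:00.
base = [3.0] * 24
cheap_hours = list(range(23, 24)) + list(range(0, 7))

def schedule(n_smart, load_per_device=0.5):
    """Each rational smart device adds its load to the currently
    least-loaded hour of the cheap zone (no supplier feedback)."""
    hourly = base[:]
    for _ in range(n_smart):
        h = min(cheap_hours, key=lambda t: hourly[t])
        hourly[h] += load_per_device
    return hourly
```

With few devices the cheap-zone load stays modest, but with many devices consumption concentrates in the night zone until it exceeds the daytime load, which is the tariff inversion that the feedback mechanism is meant to prevent.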

