Categorizing Touch-Input Locations from Touchscreen Device Interfaces via On-Board Mechano-Acoustic Transducers

2021 ◽  
Vol 11 (11) ◽  
pp. 4834
Author(s):  
Kai Ren Teo ◽  
Balamurali B T ◽  
Jianying Zhou ◽  
Jer-Ming Chen

Many mobile electronic devices, including smartphones and tablets, require the user to interact physically with the device by tapping the touchscreen. Conveniently, these compact devices are also equipped with high-precision transducers such as accelerometers and microphones, mechanically integrated on-board to support a range of user functionalities. However, unintended access to these transducer signals (bypassing normal on-board data access controls) may allow sensitive user interaction information to be detected and thereby exploited. In this study, we show that acoustic features extracted from the on-board microphone signals, supported by accelerometer and gyroscope signals, may be used together with machine learning techniques to determine the user’s touch input location on a touchscreen: our ensemble model, a random forest, predicts the touch input location with up to 86% accuracy in a realistic scenario. We present the approach and techniques used, report the performance of the model developed, and discuss limitations and possible mitigation methods to thwart exploitation of such unintended signal channels.
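As a minimal sketch of the classification step described above (synthetic stand-in data, not the authors' dataset or feature set), a random-forest classifier over per-tap feature vectors might look like:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Hypothetical setup: each tap yields one feature vector combining acoustic
# (microphone) and motion (accelerometer/gyroscope) descriptors; labels are
# coarse touchscreen regions. All values here are synthetic stand-ins.
n_taps, n_features, n_regions = 600, 12, 4
centers = rng.normal(0.0, 1.0, size=(n_regions, n_features))
labels = rng.integers(0, n_regions, size=n_taps)
features = centers[labels] + rng.normal(0.0, 0.4, size=(n_taps, n_features))

X_train, X_test, y_train, y_test = train_test_split(
    features, labels, test_size=0.25, random_state=0)

clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(X_train, y_train)
accuracy = clf.score(X_test, y_test)  # held-out prediction accuracy
```

The real attack surface depends on which signals the OS exposes without permission prompts; the sketch only illustrates the ensemble-classification framing.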

2012 ◽  
pp. 969-985
Author(s):  
Floriana Esposito ◽  
Teresa M.A. Basile ◽  
Nicola Di Mauro ◽  
Stefano Ferilli

One of the most important features of a mobile device is its flexibility and capability to adapt the functionality it provides to its users. However, the main problems of the systems in the literature are their inability to identify user needs and, more importantly, their insufficient mapping of those needs to available resources/services. In this paper, we present a two-phase construction of the user model: first, an initial static user model is built when the user connects to the system for the first time. Then, the model is revised/adjusted by considering the information collected in the logs of the user’s interaction with the device/context, in order to make the model more adequate to the user’s evolving interests/preferences/behaviour. The initial model is built by exploiting the stereotype concept, while its adjustment is performed by exploiting machine learning techniques, particularly sequence mining and pattern discovery strategies.
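The two-phase idea can be sketched as follows (the stereotype table, log format, and pair-counting "mining" step are illustrative placeholders, not the authors' implementation):

```python
from collections import Counter

# Phase 1: a static initial model assigned from stereotypes (toy table).
STEREOTYPES = {
    "commuter": {"news", "podcasts", "maps"},
    "student": {"courses", "music", "messaging"},
}

def initial_model(declared_interests):
    """Assign the stereotype with the largest overlap with declared interests."""
    scores = {name: len(profile & declared_interests)
              for name, profile in STEREOTYPES.items()}
    return max(scores, key=scores.get)

# Phase 2: revise the model from interaction logs by mining frequent
# consecutive action pairs (a toy stand-in for sequence mining).
def frequent_pairs(log, min_support=2):
    pairs = Counter(zip(log, log[1:]))
    return {pair for pair, count in pairs.items() if count >= min_support}

model = initial_model({"news", "maps"})
log = ["open_maps", "check_news", "open_maps", "check_news", "play_music"]
patterns = frequent_pairs(log)  # recurring behaviour used to adjust the model
```

Discovered patterns would then refine the stereotype-derived model toward the user's observed behaviour.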


Author(s):  
Floriana Esposito ◽  
Teresa M.A. Basile ◽  
Nicola Di Mauro ◽  
Stefano Ferilli

One of the most important features of a mobile device concerns its flexibility and capability to adapt the functionality it provides to the users. However, the main problems of the systems present in literature are their incapability to identify user needs and, more importantly, the insufficient mappings of those needs to available resources/services. In this paper, we present a two-phase construction of the user model: firstly, an initial static user model is built for the user connecting to the system the first time. Then, the model is revised/adjusted by considering the information collected in the logs of the user interaction with the device/context in order to make the model more adequate to the evolving user’s interests/ preferences/behaviour. The initial model is built by exploiting the stereotype concept, its adjustment is performed exploiting machine learning techniques and particularly, sequence mining and pattern discovery strategies.


2016 ◽  
Vol 2016 ◽  
pp. 1-8
Author(s):  
Vasilisa Verkhodanova ◽  
Vladimir Shapranov

The development and popularity of voice user interfaces has made spontaneous speech processing an important research field. One of the main focus areas in this field is automatic speech recognition (ASR), which enables computers to recognize spoken language and transcribe it into text. However, ASR systems often work less efficiently for spontaneous than for read speech, since spontaneous speech differs from other types of speech in many ways, and the presence of speech disfluencies is one of its most prominent characteristics. These phenomena are an important feature of human-human communication, and at the same time they are a challenging obstacle for speech processing tasks. In this paper we address the detection of voiced hesitations (filled pauses and sound lengthenings) in Russian spontaneous speech by utilizing different machine learning techniques, from grid search and gradient descent in rule-based approaches to data-driven ones such as ELM and SVM, based on automatically extracted acoustic features. Experimental results on a mixed, quality-diverse corpus of spontaneous Russian speech indicate the efficiency of these techniques for the task in question, with SVM outperforming the other methods.
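A minimal sketch of the SVM variant of this setup, assuming synthetic per-segment acoustic features rather than the corpus and feature extraction used in the paper:

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(1)

# Synthetic stand-in: hesitation segments tend toward long duration, low pitch
# variance, and low spectral flux; fluent segments vary more. Real features
# would be extracted automatically from the audio.
n = 400
hesit = np.column_stack([rng.normal(0.6, 0.1, n),    # duration (s)
                         rng.normal(0.05, 0.02, n),  # pitch variance
                         rng.normal(0.1, 0.05, n)])  # spectral flux
fluent = np.column_stack([rng.normal(0.2, 0.1, n),
                          rng.normal(0.3, 0.1, n),
                          rng.normal(0.5, 0.2, n)])
X = np.vstack([hesit, fluent])
y = np.array([1] * n + [0] * n)  # 1 = voiced hesitation, 0 = fluent speech

# Scale features, then fit an RBF-kernel SVM classifier.
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
clf.fit(X, y)
train_acc = clf.score(X, y)
```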


The basic aim of the electric power sector is to produce power when and where required, then to transmit and distribute it to the various load centres or consumers, while retaining the quality (sustaining frequency and voltage at their stated values) and fidelity of supply at an economical tariff. The primary purpose of this work is to give a brief overview of various power quality improvement techniques and to examine the prospects of various harmonic mitigation methods, with a focus on the sinusoidal current control strategy (SCCS). The SCCS is a time-domain control strategy based on the instantaneous p-q theory. The control strategy is elaborated here in detail and has been implemented using MATLAB 2016a. The results are presented and described in detail, demonstrating the efficacy of this control strategy. Since the SCCS is a simple and effective control strategy, it has tremendous potential for application in Distributed Generation oriented systems. Further research can extend toward applying machine learning techniques to improve the control performance of active power filters.
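The instantaneous p-q computation at the heart of such a strategy can be sketched numerically as follows (a balanced synthetic system, not the paper's MATLAB model; note that some texts use the opposite sign convention for q):

```python
import numpy as np

# Balanced three-phase voltages and lagging currents (synthetic example).
Vm, Im, phi = 325.0, 10.0, np.pi / 6           # peak volts, peak amps, 30 deg lag
t = np.linspace(0, 0.04, 2000)                 # two 50 Hz cycles
w = 2 * np.pi * 50
va, vb, vc = (Vm * np.cos(w * t + s) for s in (0, -2*np.pi/3, 2*np.pi/3))
ia, ib, ic = (Im * np.cos(w * t + s - phi) for s in (0, -2*np.pi/3, 2*np.pi/3))

# Power-invariant Clarke transform to the stationary alpha-beta frame.
def clarke(a, b, c):
    alpha = np.sqrt(2/3) * (a - b/2 - c/2)
    beta = np.sqrt(2/3) * (np.sqrt(3)/2) * (b - c)
    return alpha, beta

v_alpha, v_beta = clarke(va, vb, vc)
i_alpha, i_beta = clarke(ia, ib, ic)

# Instantaneous real and imaginary (reactive) power per p-q theory.
# For this balanced sinusoidal case both are constant:
#   p = 1.5*Vm*Im*cos(phi),  q = 1.5*Vm*Im*sin(phi)
p = v_alpha * i_alpha + v_beta * i_beta
q = v_beta * i_alpha - v_alpha * i_beta
```

In an active power filter, the oscillating part of p (and all of q) would be extracted and used to generate the compensating current references.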


2006 ◽  
Vol 15 (04) ◽  
pp. 673-691 ◽  
Author(s):  
RAFAEL RAMIREZ ◽  
AMAURY HAZAN

In this paper we present a machine learning approach to modeling the knowledge a musician applies when performing a score in order to produce an expressive performance of a piece. We describe a tool for both generating and explaining expressive music performances of monophonic Jazz melodies. The tool consists of three components: (a) a melodic transcription component which extracts a set of acoustic features from monophonic recordings, (b) a machine learning component which induces both an expressive transformation model and a set of expressive performance rules from the extracted acoustic features, and (c) a melody synthesis component which generates expressive monophonic output (MIDI or audio) from inexpressive melody descriptions using the induced expressive transformation model. We compare several machine learning techniques we have explored for inducing the expressive transformation model.
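The rule-induction component might be sketched as follows, using an off-the-shelf decision tree and invented note descriptors (the actual feature set, targets, and learners are those of the paper, not this toy):

```python
import numpy as np
from sklearn.tree import DecisionTreeRegressor

rng = np.random.default_rng(2)

# Invented score-level descriptors per note: MIDI pitch, nominal duration
# (beats), and metrical position; the target is the performed/nominal
# duration ratio (one kind of expressive transformation).
n = 200
pitch = rng.integers(55, 84, n)
duration = rng.choice([0.5, 1.0, 2.0], n)
beat_pos = rng.integers(0, 4, n)
# Toy "expressive rule": notes on the downbeat are lengthened slightly.
stretch = 1.0 + 0.15 * (beat_pos == 0) + rng.normal(0, 0.01, n)

X = np.column_stack([pitch, duration, beat_pos])
model = DecisionTreeRegressor(max_depth=3).fit(X, stretch)

# A shallow tree can be read off as human-interpretable performance rules,
# e.g. "if metrical position is the downbeat, lengthen the note".
downbeat_pred = model.predict([[60, 1.0, 0]])[0]
offbeat_pred = model.predict([[60, 1.0, 2]])[0]
```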


2006 ◽  
Author(s):  
Christopher Schreiner ◽  
Kari Torkkola ◽  
Mike Gardner ◽  
Keshu Zhang

2020 ◽  
Vol 12 (2) ◽  
pp. 84-99
Author(s):  
Li-Pang Chen

In this paper, we investigate the analysis and prediction of time-dependent data. We focus our attention on four different stocks selected from the Yahoo Finance historical database. To build models and predict future stock prices, we consider three machine learning techniques: Long Short-Term Memory (LSTM) networks, Convolutional Neural Networks (CNN), and Support Vector Regression (SVR). By treating the close price, open price, daily low, daily high, adjusted close price, and volume of trades as predictors in the machine learning methods, we show that prediction accuracy is improved.
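The supervised framing of such a forecast can be sketched with the SVR variant (a synthetic random-walk series and lagged close prices stand in for the Yahoo Finance data and the full predictor set):

```python
import numpy as np
from sklearn.svm import SVR

rng = np.random.default_rng(3)

# Synthetic daily close prices: a random walk with drift.
returns = rng.normal(0.0005, 0.01, 500)
close = 100 * np.cumprod(1 + returns)

# Supervised framing: predict tomorrow's close from the last 5 closes.
lag = 5
X = np.array([close[i:i + lag] for i in range(len(close) - lag)])
y = close[lag:]

# Chronological split: never train on future data.
split = 400
model = SVR(kernel="rbf", C=100.0, epsilon=0.1)
model.fit(X[:split], y[:split])
preds = model.predict(X[split:])
mae = np.mean(np.abs(preds - y[split:]))  # mean absolute error on the tail
```

In practice the feature matrix would also carry the open, high, low, adjusted close, and volume columns named in the abstract, typically standardized before fitting.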


Diabetes ◽  
2020 ◽  
Vol 69 (Supplement 1) ◽  
pp. 389-P
Author(s):  
SATORU KODAMA ◽  
MAYUKO H. YAMADA ◽  
YUTA YAGUCHI ◽  
MASARU KITAZAWA ◽  
MASANORI KANEKO ◽  
...  
