VIRTUAL DIGITAL ASSISTANT WITH VOICE INTERFACE SUPPORT

2021 ◽  
Vol 2 (133) ◽  
pp. 42-51
Author(s):  
Vyacheslav Spirintsev ◽  
Dmitry Popov ◽  
Olga Spirintseva

A virtual digital assistant with voice interface support has been proposed, which can work with arbitrary systems and provides an effective solution to narrowly focused user tasks involving interaction with Ukrainian services. The developed web service was implemented using the PHP programming language, the Wit.ai service for audio signal processing, the FANN library for neural network construction, and the Telegram service for the user interface.
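The pipeline described above (speech recognized by Wit.ai, replies delivered through a Telegram bot) can be sketched in outline. The original service is written in PHP, so this Python sketch is only illustrative; the token placeholder, intent names, and reply texts are hypothetical, while the `GET /message` endpoint and `Authorization: Bearer` header are Wit.ai's documented HTTP API.

```python
import json
from urllib import parse, request

WIT_TOKEN = "YOUR_WIT_AI_SERVER_TOKEN"  # hypothetical placeholder

def wit_extract_intent(utterance: str) -> str:
    """Send a text utterance to Wit.ai's /message endpoint and return the
    top intent name (Wit.ai also accepts raw audio via /speech)."""
    url = "https://api.wit.ai/message?" + parse.urlencode({"q": utterance})
    req = request.Request(url, headers={"Authorization": f"Bearer {WIT_TOKEN}"})
    with request.urlopen(req) as resp:
        data = json.load(resp)
    intents = data.get("intents", [])
    return intents[0]["name"] if intents else "unknown"

# Hypothetical narrowly focused intents and their handlers.
HANDLERS = {
    "get_weather": lambda: "Here is today's weather forecast.",
    "check_balance": lambda: "Your account balance is being retrieved.",
}

def dispatch(intent: str) -> str:
    """Map a recognized intent to the reply a Telegram bot would send."""
    handler = HANDLERS.get(intent)
    return handler() if handler else "Sorry, I did not understand that."
```

In the full service, `dispatch` would be called from the Telegram webhook handler after `wit_extract_intent` resolves the user's voice message.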

2008 ◽  
Vol 2008 (1) ◽  
Author(s):  
Jonathan Taquet ◽  
Bernard Besserer ◽  
Abdelali Hassaine ◽  
Etienne Decenciere

2021 ◽  
Vol 2 ◽  
Author(s):  
Anderson Antonio Carvalho Alves ◽  
Lucas Tassoni Andrietta ◽  
Rafael Zinni Lopes ◽  
Fernando Oliveira Bussiman ◽  
Fabyano Fonseca e Silva ◽  
...  

This study focused on assessing the usefulness of audio signal processing in the gaited horse industry. A total of 196 short-time audio files (4 s) were collected from video recordings of Brazilian gaited horses. These files were converted into waveform signals (196 samples by 80,000 columns) and divided into training (N = 164) and validation (N = 32) datasets. Twelve single-valued audio features were initially extracted to summarize the training data according to the gait patterns (Marcha Batida, MB, and Marcha Picada, MP). After preliminary analyses, high-dimensional arrays of the Mel Frequency Cepstral Coefficients (MFCC), Onset Strength (OS), and Tempogram (TEMP) were extracted and used as input information for the classification algorithms. A principal component analysis (PCA) was performed on the 12 single-valued features and on each audio-feature dataset (AFD: MFCC, OS, and TEMP) for prior data visualization. Machine learning (random forest, RF; support vector machine, SVM) and deep learning (multilayer perceptron neural networks, MLP; convolutional neural networks, CNN) algorithms were used to classify the gait types. A five-fold cross-validation scheme with 10 repetitions was employed to assess the models' predictive performance. The classification performance across models and AFDs was also validated with independent observations. The models and AFDs were compared based on classification accuracy (ACC), specificity (SPEC), sensitivity (SEN), and area under the curve (AUC). In the logistic regression analysis, five of the 12 extracted audio features differed significantly (p < 0.05) between the gait types. ACC averages ranged from 0.806 to 0.932 for MFCC, from 0.758 to 0.948 for OS, and from 0.936 to 0.968 for TEMP. Overall, the TEMP dataset provided the best classification accuracies for all models, and the most suitable method for audio-based horse gait pattern classification was CNN. Both cross- and independent-validation schemes confirmed that high values of ACC, SPEC, SEN, and AUC can be expected for yet-to-be-observed labels, except for the MFCC-based models, in which clear overfitting was observed. Using audio-generated data to describe gait phenotypes in Brazilian horses is a promising approach, as the two gait patterns were correctly distinguished. The highest classification performance was achieved by combining CNN with the rhythm-descriptive AFD.
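The workflow above (waveform samples, single-valued audio features, five-fold cross-validated classification) can be sketched on synthetic data. This is not the paper's pipeline: the waveforms, labels, and features below are fabricated, a dominant-frequency feature loosely stands in for the rhythm descriptors, and a simple nearest-centroid classifier replaces the RF/SVM/MLP/CNN models.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for the 4 s waveform samples: two "gaits" that
# differ only in their dominant rhythm frequency (labels are arbitrary).
def make_waveform(freq: float, n: int = 800) -> np.ndarray:
    t = np.linspace(0.0, 4.0, n)
    return np.sin(2 * np.pi * freq * t) + 0.3 * rng.standard_normal(n)

X = np.array([make_waveform(f) for f in [2.0] * 50 + [3.5] * 50])
y = np.array([0] * 50 + [1] * 50)  # 0 and 1 are synthetic gait labels

def features(w: np.ndarray) -> np.ndarray:
    """Two single-valued summaries per waveform: the dominant FFT bin
    (a crude rhythm descriptor) and the mean signal energy."""
    spec = np.abs(np.fft.rfft(w))
    spec[0] = 0.0                      # ignore the DC component
    return np.array([np.argmax(spec), np.mean(w ** 2)])

F = np.array([features(w) for w in X])

def nearest_centroid_cv(F: np.ndarray, y: np.ndarray, k: int = 5) -> float:
    """Five-fold cross-validation with a nearest-centroid classifier;
    returns the mean accuracy over the k held-out folds."""
    idx = rng.permutation(len(y))
    folds = np.array_split(idx, k)
    accs = []
    for i in range(k):
        test = folds[i]
        train = np.concatenate([folds[j] for j in range(k) if j != i])
        c0 = F[train][y[train] == 0].mean(axis=0)
        c1 = F[train][y[train] == 1].mean(axis=0)
        pred = (np.linalg.norm(F[test] - c1, axis=1)
                < np.linalg.norm(F[test] - c0, axis=1)).astype(int)
        accs.append(np.mean(pred == y[test]))
    return float(np.mean(accs))

acc = nearest_centroid_cv(F, y)
```

On this cleanly separable synthetic data the cross-validated accuracy is near perfect; the study's MFCC/OS/TEMP arrays and deep models address the much harder real-recording case.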

