SpeakingFaces: A Large-Scale Multimodal Dataset of Voice Commands with Visual and Thermal Video Streams

Sensors
2021
Vol 21 (10)
pp. 3465
Author(s):  
Madina Abdrakhmanova ◽  
Askat Kuzdeuov ◽  
Sheikh Jarju ◽  
Yerbolat Khassanov ◽  
Michael Lewis ◽  
...  

We present SpeakingFaces as a publicly available large-scale multimodal dataset developed to support machine learning research in contexts that utilize a combination of thermal, visual, and audio data streams; examples include human–computer interaction, biometric authentication, recognition systems, domain transfer, and speech recognition. SpeakingFaces comprises aligned high-resolution thermal and visual spectra image streams of fully-framed faces, synchronized with audio recordings of each subject speaking approximately 100 imperative phrases. Data were collected from 142 subjects, yielding over 13,000 instances of synchronized data (∼3.8 TB). For technical validation, we demonstrate two baseline examples. The first baseline demonstrates gender classification utilizing different combinations of the three data streams in both clean and noisy environments. The second example consists of thermal-to-visual facial image translation, as an instance of domain transfer.
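A minimal sketch of the kind of gender-classification baseline described above, comparing combinations of the three streams. The random arrays below are placeholders for per-subject embeddings extracted from the visual, thermal, and audio recordings; the feature dimensions, encoders, and classifier are our assumptions, not the paper's exact pipeline:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)
n = 142  # number of subjects in SpeakingFaces

# Stand-in features: in practice these would be embeddings extracted from
# the visual frames, thermal frames, and audio recordings of each subject
# (e.g. a CNN face encoder and an MFCC/x-vector audio encoder).
feats = {
    "visual":  rng.normal(size=(n, 128)),
    "thermal": rng.normal(size=(n, 128)),
    "audio":   rng.normal(size=(n, 64)),
}
gender = rng.integers(0, 2, size=n)  # placeholder labels

# Early fusion: concatenate a combination of streams and train one classifier,
# mirroring the baseline's comparison of stream combinations.
for combo in [("audio",), ("visual",), ("thermal",),
              ("visual", "thermal", "audio")]:
    X = np.hstack([feats[m] for m in combo])
    X_tr, X_te, y_tr, y_te = train_test_split(X, gender, random_state=0)
    clf = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
    print(combo, accuracy_score(y_te, clf.predict(X_te)))
```

A late-fusion variant would instead train one classifier per stream and combine their scores, which makes it easier to drop a noisy modality at test time.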

2019
Author(s):  
Meg Cychosz ◽  
Rachel R Romeo ◽  
Melanie Soderstrom ◽  
Camila Scaff ◽  
Hillary Ganek ◽  
...  

Recent advances in large-scale data storage and processing offer unprecedented opportunities for behavioral scientists to collect and analyze naturalistic data, including from under-represented groups. Audio data, especially real-world audio recordings, are of particular interest to behavioral scientists because they provide high-fidelity access to subtle aspects of daily life and social interactions. However, these methodological advances pose novel risks to research participants and communities. In this article, we outline the benefits and challenges associated with collecting, analyzing, and sharing multi-hour audio recording data. Guided by the principles of autonomy, privacy, beneficence, and justice, we propose a set of ethical guidelines for the use of long-form audio recordings in behavioral research. This article is also accompanied by an Open Science Framework Ethics Repository that includes informed consent resources such as frequent participant concerns and sample consent forms.


2021
Author(s):  
Edwin Lughofer ◽  
Mahardhika Pratama

Evolving fuzzy systems (EFS) have attracted wide attention in the community for learning from data streams in an incremental, single-pass and transparent manner. The main focus so far has been on developing approaches for single EFS models, used primarily for prediction purposes. Forgetting mechanisms have been used to increase their flexibility, especially to adapt quickly to changing situations such as drifting data distributions. These require forgetting factors that steer the degree to which older learned concepts are down-weighted over time, and setting them adequately, whether in advance or adaptively, is neither easy nor a fully resolved task. In this paper, we propose a new concept for learning fuzzy systems from data streams, which we call online sequential ensembling of fuzzy systems (OS-FS). It is able to model the recent dependencies in streams on a chunk-wise basis: for each new incoming chunk, a new fuzzy model is trained from scratch and added to the ensemble (of fuzzy systems trained before). This induces (i) maximal flexibility in terms of being able to apply variable chunk sizes according to the actual system delay in receiving target values and (ii) fast reaction possibilities in the case of arising drifts. The latter are realized with specific prediction techniques on new data chunks based on the sequential ensemble members trained so far over time. We propose four different prediction variants, including various weighting concepts, in order to put higher weights on the members with higher inference certainty when amalgamating the predictions of single members into a final prediction. In this sense, older members, which retain knowledge about past states, may be dynamically reactivated in the case of cyclic drifts, which induce dynamic changes in the process behavior that re-occur from time to time. Furthermore, we integrate a concept for properly resolving possible contradictions among members with similar inference certainties. The reaction to drifts is thus handled autonomously, on demand and on the fly, during the prediction stage (and not during the model adaptation/evolution stage, as conventionally done in single EFS models), which yields enormous flexibility. Finally, in order to cope with large-scale and (theoretically) infinite data streams within a reasonable amount of prediction time, we demonstrate two concepts for pruning past ensemble members, one based on atypically high error trends of single members and one based on the non-diversity of ensemble members. The results on two data streams showed significantly improved performance compared to single EFS models in terms of better convergence of the accumulated chunk-wise ahead-prediction error trends, especially in the case of regular and cyclic drifts. Moreover, the more advanced prediction schemes significantly outperformed standard averaging over all members' outputs. Furthermore, resolving contradictory outputs among members helped to improve the performance of the sequential ensemble further. Results on a wider range of data streams from different application scenarios showed (i) improved error trend lines over single EFS models, as well as over the related AI methods OS-ELM and MLP neural networks retrained on data chunks, and (ii) slightly worse trend lines than online bagged EFS (a specific EFS ensemble), but with around 100 times faster processing, well below a millisecond for single-sample updates.
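A compact sketch of the chunk-wise sequential-ensembling idea on a synthetic stream with cyclic drift. Ridge regressors stand in for the evolving fuzzy systems, and the inverse of each member's error on the latest chunk stands in for its inference certainty; both substitutions are ours, not the paper's exact formulation:

```python
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(1)

def stream(n_chunks=20, chunk_size=100):
    """Synthetic drifting stream: the target concept flips every 5 chunks."""
    for c in range(n_chunks):
        X = rng.normal(size=(chunk_size, 5))
        w = np.ones(5) if (c // 5) % 2 == 0 else -np.ones(5)  # cyclic drift
        y = X @ w + 0.1 * rng.normal(size=chunk_size)
        yield X, y

members, errors = [], []

for X, y in stream():
    if members:
        # Weight each member by a certainty proxy (inverse recent error);
        # older members regain weight when a past concept re-occurs.
        w = 1.0 / (np.array(errors) + 1e-6)
        w /= w.sum()
        y_hat = w @ np.stack([m.predict(X) for m in members])
        print(f"chunk-ahead MSE: {np.mean((y_hat - y) ** 2):.3f}")
    # Sequential ensembling: train a fresh model on each new chunk and append it.
    members.append(Ridge().fit(X, y))
    # Refresh every member's certainty proxy on the latest chunk.
    errors = [float(np.mean((m.predict(X) - y) ** 2)) for m in members]
```

On long streams, members with atypically high error trends, or members adding no diversity, would be pruned to keep prediction time bounded, as the abstract describes.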


2020
Vol 204
pp. 106186
Author(s):  
Fang Liu ◽  
Yanwei Yu ◽  
Peng Song ◽  
Yangyang Fan ◽  
Xiangrong Tong

2020
Author(s):  
Filipe Barata ◽  
Peter Tinschert ◽  
Frank Rassouli ◽  
Claudia Steurer-Stey ◽  
Elgar Fleisch ◽  
...  

BACKGROUND
Asthma is one of the most prevalent chronic respiratory diseases. Despite increased investment in treatment, little progress has been made in the early recognition and treatment of asthma exacerbations over the last decade. Nocturnal cough monitoring may provide an opportunity to identify patients at risk for imminent exacerbations. Recently developed approaches enable smartphone-based cough monitoring. These approaches, however, have not undergone longitudinal overnight testing, nor have they been specifically evaluated in the context of asthma. Also, the problem of distinguishing partner coughs from patient coughs in contact-free audio recordings, when two or more people are sleeping in the same room, remains unsolved.

OBJECTIVE
The objective of this study was to evaluate the automatic recognition and segmentation of nocturnal asthmatic coughs and cough epochs in smartphone-based audio recordings that were collected in the field. We also aimed to distinguish partner coughs from patient coughs in contact-free audio recordings by classifying coughs based on sex.

METHODS
We used a convolutional neural network model that we had developed in previous work for automated cough recognition. We further used techniques such as ensemble learning, minibatch balancing, and thresholding to address the imbalance in the data set. We evaluated the classifier in a classification task and a segmentation task. The cough-recognition classifier served as the basis for the cough-segmentation classifier applied to continuous audio recordings. We compared automated cough and cough-epoch counts to human-annotated cough and cough-epoch counts. We employed Gaussian mixture models to build a classifier for cough and cough-epoch signals based on sex.

RESULTS
We recorded audio data from 94 adults with asthma (overall: mean age 43 years, SD 16 years; female: 54/94, 57%; male: 40/94, 43%). Audio data were recorded by each participant in their everyday environment using a smartphone placed next to their bed; recordings were made over a period of 28 nights. Out of 704,697 sounds, we identified 30,304 sounds as coughs. A total of 26,166 coughs occurred without a 2-second pause between coughs, yielding 8238 cough epochs. The ensemble classifier performed well, with a Matthews correlation coefficient of 92% in a pure classification task, and achieved cough counts comparable to those of human annotators in the segmentation of coughing. The count difference between automated and human-annotated coughs was a mean –0.1 (95% CI –12.11, 11.91) coughs. The count difference between automated and human-annotated cough epochs was a mean 0.24 (95% CI –3.67, 4.15) cough epochs. The Gaussian mixture model cough epoch-based sex classification performed best, yielding an accuracy of 83%.

CONCLUSIONS
Our study demonstrated longitudinal nocturnal cough and cough-epoch recognition from nightly smartphone-based audio recordings of adults with asthma. The model distinguishes partner coughs from patient coughs in contact-free recordings by identifying cough and cough-epoch signals that correspond to the sex of the patient. This research represents a step towards enabling passive and scalable cough monitoring for adults with asthma.
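A minimal sketch of a GMM-based sex classifier for cough audio in the spirit of the study's approach, assuming MFCC frame features and hypothetical labelled file lists; the study's exact front end and model configuration may differ:

```python
import numpy as np
import librosa
from sklearn.mixture import GaussianMixture

def mfcc_frames(path, sr=16000, n_mfcc=13):
    """MFCC feature frames for one cough recording (feature choice is an assumption)."""
    y, _ = librosa.load(path, sr=sr)
    return librosa.feature.mfcc(y=y, sr=sr, n_mfcc=n_mfcc).T

# Hypothetical file lists of cough recordings labelled by sex.
female_files = ["female_cough_01.wav", "female_cough_02.wav"]
male_files = ["male_cough_01.wav", "male_cough_02.wav"]

# One GMM per class, fit on the pooled MFCC frames of that class.
gmm_f = GaussianMixture(n_components=8, covariance_type="diag", random_state=0)
gmm_m = GaussianMixture(n_components=8, covariance_type="diag", random_state=0)
gmm_f.fit(np.vstack([mfcc_frames(f) for f in female_files]))
gmm_m.fit(np.vstack([mfcc_frames(f) for f in male_files]))

def classify(path):
    """Assign the sex whose GMM gives the higher average log-likelihood."""
    frames = mfcc_frames(path)
    return "female" if gmm_f.score(frames) > gmm_m.score(frames) else "male"

print(classify("unlabelled_cough.wav"))  # hypothetical recording
```

Coughs whose predicted sex does not match the patient's would then be attributed to the partner, which is how contact-free recordings can be filtered without a second microphone.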


2016
Vol 194
pp. 107-116
Author(s):  
Jingsong Shan ◽  
Jianxin Luo ◽  
Guiqiang Ni ◽  
Zhaofeng Wu ◽  
Weiwei Duan

Author(s):  
Steve T.K. Jan ◽  
Joseph Messou ◽  
Yen-Chen Lin ◽  
Jia-Bin Huang ◽  
Gang Wang

While deep learning models have achieved unprecedented success in various domains, there is also growing concern about adversarial attacks against related applications. Recent results show that by adding small perturbations to an image (imperceptible to humans), the resulting adversarial examples can force a classifier to make targeted mistakes. So far, most existing works focus on crafting adversarial examples in the digital domain, while limited effort has been devoted to understanding physical-domain attacks. In this work, we explore the feasibility of generating robust adversarial examples that remain effective in the physical domain. Our core idea is to use an image-to-image translation network to simulate the digital-to-physical transformation process for generating robust adversarial examples. To validate our method, we conduct a large-scale physical-domain experiment, which involves manually taking more than 3000 physical-domain photos. The results show that our method outperforms existing ones by a large margin and demonstrates a high level of robustness and transferability.
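A hedged sketch of the core idea: optimize a perturbation against the classifier as seen *through* an image-to-image network that simulates the digital-to-physical transformation. Both networks below are untrained stand-ins, and the loop is a generic targeted PGD rather than the paper's exact optimization:

```python
import torch
import torch.nn as nn
from torchvision import models

# Stand-ins: `d2p` would be a trained image-to-image translation network that
# simulates printing/photographing an image; `clf` a trained target classifier.
d2p = nn.Sequential(nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
                    nn.Conv2d(16, 3, 3, padding=1))      # hypothetical simulator
clf = models.resnet18(weights=None).eval()                # hypothetical target model

x = torch.rand(1, 3, 224, 224)       # clean image in [0, 1]
target = torch.tensor([42])          # desired (wrong) class
delta = torch.zeros_like(x, requires_grad=True)
eps, step = 8 / 255, 1 / 255

for _ in range(40):                  # PGD-style targeted attack
    # Attack the *simulated physical* image so the perturbation should
    # survive the real digital-to-physical transformation.
    logits = clf(d2p(x + delta))
    loss = nn.functional.cross_entropy(logits, target)
    loss.backward()
    with torch.no_grad():
        delta -= step * delta.grad.sign()              # targeted: descend the loss
        delta.clamp_(-eps, eps)                        # bounded perturbation
        delta.copy_((x + delta).clamp(0, 1) - x)       # keep x + delta a valid image
    delta.grad.zero_()
```

In practice the simulator would be trained on paired digital/physical photos, which is what makes the resulting perturbation robust when it is actually printed and re-captured.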


2017
Vol 6 (2)
pp. 266-284
Author(s):  
Carolyn Birdsall ◽  
Danielle Drozdzewski

This paper details the contribution of mobile devices to capturing commemoration in action. It investigates the incorporation of audio and sound recording devices, observation, and note-taking into a mobile (auto)ethnographic research methodology, to research a large-scale commemorative event in Amsterdam, the Netherlands. On May 4, 2016, the sounds of a Silent March—through the streets of Amsterdam to Dam Square—were recorded and complemented by video grabs of the march’s participants and onlookers. We discuss how the mixed method enabled a multilevel analysis across visual, textual, and aural layers of the commemorative atmosphere. Our visual data aided in our evaluation of the construction of collective spectacle, while the audio data necessitated that we venture into new analytic territory. Using Sonic Visualiser, we uncovered alternative methods of “reading” landscape by identifying different sound signatures in the acoustic environment. Together, this aural and visual representation of the May 4 events enabled the identification of spatial markers and the temporal unfolding of the Silent March and the national 2 minutes’ silence in Amsterdam’s Dam Square.
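For readers who want a programmatic analogue of the spectrogram-based "sound signature" reading the authors performed in Sonic Visualiser (a GUI tool), a short librosa sketch with a hypothetical file name:

```python
import numpy as np
import librosa
import librosa.display
import matplotlib.pyplot as plt

# Hypothetical field recording from the march; any WAV/MP3 path works.
y, sr = librosa.load("silent_march_damsquare.wav", sr=None)

# Log-magnitude spectrogram: distinct sound signatures (footsteps, bells,
# crowd murmur, the two minutes' silence) appear as distinct time-frequency patterns.
S = librosa.amplitude_to_db(np.abs(librosa.stft(y)), ref=np.max)
librosa.display.specshow(S, sr=sr, x_axis="time", y_axis="log")
plt.colorbar(format="%+2.0f dB")
plt.title("Sound signatures over the course of the recording")
plt.show()
```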

