Evaluation of Audio Feature Groups for the Prediction of Arousal and Valence in Music

Author(s): Igor Vatolkin, Anil Nagathil
2020
Author(s): Donna Erickson, Shigeto Kawahara, Albert Rilliard, Ryoko Hayashi, Toshiyuki Sadanobu, ...
2015
Author(s): Suhaib A., Khairunizam Wan, Azri A. Aziz, D. Hazry, Zuradzman M. Razlan, ...

Author(s): Yunzhi Wang, Xiangdong Wang, Yueliang Qian, Haiyong Luo, Fujiang Ge, ...

The smart grid is an important application field of the Internet of Things. This paper presents a method for analysing user electricity consumption patterns in smart grid applications based on the audio feature EEUPC. A novel similarity function based on EEUPC is adapted to support clustering analysis of residential load patterns. The EEUPC similarity exploits the peaks and valleys of load curves instead of directly comparing values, and thereby obtains better performance for clustering analysis. Moreover, the proposed approach performs load pattern clustering, extracts a typical pattern for each cluster, and offers suggestions for improving power consumption under each typical pattern. Experimental results demonstrate that the EEUPC similarity is more consistent with human judgment than the Euclidean distance, and that higher clustering performance can be achieved on residential electric load data.
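The abstract does not give the exact definition of the EEUPC similarity, so the following is only a minimal sketch of the underlying idea: comparing the positions of peaks and valleys on two load curves rather than their raw values. The function names, the matching rule, and the tolerance parameter are illustrative assumptions, not the authors' method.

```python
import numpy as np
from scipy.signal import find_peaks

def peak_valley_features(load_curve, prominence=0.1):
    """Find peaks and valleys on a (normalized) daily load curve."""
    curve = np.asarray(load_curve, dtype=float)
    # Normalize so that households with different absolute consumption are comparable.
    curve = (curve - curve.min()) / (np.ptp(curve) + 1e-9)
    peaks, _ = find_peaks(curve, prominence=prominence)
    valleys, _ = find_peaks(-curve, prominence=prominence)
    return peaks, valleys

def peak_valley_similarity(curve_a, curve_b, tolerance=2):
    """Illustrative similarity: fraction of peaks/valleys in one curve that have
    a counterpart in the other within `tolerance` time steps, averaged both ways."""
    pa, va = peak_valley_features(curve_a)
    pb, vb = peak_valley_features(curve_b)

    def match_ratio(idx_a, idx_b):
        if len(idx_a) == 0 and len(idx_b) == 0:
            return 1.0
        if len(idx_a) == 0 or len(idx_b) == 0:
            return 0.0
        hits_a = sum(np.min(np.abs(idx_b - i)) <= tolerance for i in idx_a)
        hits_b = sum(np.min(np.abs(idx_a - j)) <= tolerance for j in idx_b)
        return 0.5 * (hits_a / len(idx_a) + hits_b / len(idx_b))

    # Average the agreement of peak positions and valley positions.
    return 0.5 * (match_ratio(pa, pb) + match_ratio(va, vb))

# Example with synthetic 24-hour load curves:
hours = np.arange(24)
household_1 = np.exp(-((hours - 19) ** 2) / 8)   # evening peak
household_2 = np.exp(-((hours - 20) ** 2) / 8)   # evening peak, shifted by one hour
household_3 = np.exp(-((hours - 7) ** 2) / 8)    # morning peak

print(peak_valley_similarity(household_1, household_2))  # close to 1
print(peak_valley_similarity(household_1, household_3))  # clearly lower
```

Curves whose peaks occur at similar times of day score close to 1, whereas curves with peaks at different times score lower regardless of their absolute consumption levels, which is the behaviour the abstract contrasts with a plain Euclidean distance.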


2020
Vol. 287 (1929), pp. 20201148
Author(s): Roza G. Kamiloğlu, Katie E. Slocombe, Daniel B. M. Haun, Disa A. Sauter

Vocalizations linked to emotional states are partly conserved among phylogenetically related species. This continuity may allow humans to accurately infer affective information from vocalizations produced by chimpanzees. In two pre-registered experiments, we examine human listeners' ability to infer behavioural contexts (e.g. discovering food) and core affect dimensions (arousal and valence) from 155 vocalizations produced by 66 chimpanzees in 10 different positive and negative contexts at high, medium or low arousal levels. In experiment 1, listeners (n = 310) categorized the vocalizations in a forced-choice task with 10 response options and rated arousal and valence. In experiment 2, participants (n = 3120) matched vocalizations to production contexts using yes/no response options. The results show that listeners accurately matched vocalizations to most contexts, in addition to inferring arousal and valence. Judgments were more accurate for negative than for positive vocalizations. An acoustic analysis demonstrated that listeners made use of brightness and duration cues, relying on noisiness when making context judgements and on pitch when inferring core affect dimensions. Overall, the results suggest that human listeners can infer affective information from chimpanzee vocalizations beyond core affect, indicating phylogenetic continuity in the mapping of vocalizations to behavioural contexts.
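The abstract names the acoustic cues but not how they were measured. As a rough illustration only, common proxies for these cues (spectral centroid for brightness, spectral flatness for noisiness, and a YIN fundamental-frequency estimate for pitch) can be computed with librosa; these feature choices and the file name are assumptions, not the authors' analysis pipeline.

```python
import numpy as np
import librosa

def vocalization_cues(path, sr=22050):
    """Illustrative proxies for the cues named in the abstract:
    duration, brightness, noisiness, and pitch."""
    y, sr = librosa.load(path, sr=sr)
    duration = librosa.get_duration(y=y, sr=sr)

    # Brightness: mean spectral centroid (higher = brighter sound).
    brightness = float(np.mean(librosa.feature.spectral_centroid(y=y, sr=sr)))

    # Noisiness: mean spectral flatness (closer to 1 = more noise-like).
    noisiness = float(np.mean(librosa.feature.spectral_flatness(y=y)))

    # Pitch: median fundamental frequency estimated with the YIN algorithm.
    f0 = librosa.yin(y, fmin=60, fmax=2000, sr=sr)
    pitch = float(np.nanmedian(f0))

    return {"duration_s": duration,
            "brightness_hz": brightness,
            "noisiness": noisiness,
            "pitch_hz": pitch}

# Usage (hypothetical file name):
# print(vocalization_cues("chimp_call.wav"))
```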

