Encouraging Attention and Exploration in a Hybrid Recommender System for Libraries of Unfamiliar Music

2019, Vol 2, pp. 205920431989317
Author(s): John R. Taylor, Roger T. Dean

There are few studies of user interaction with music libraries comprising solely unfamiliar music, despite such music being represented in national music information centre collections. We aim to develop a system that encourages exploration of such a library. This study investigates how 69 users' pre-existing genre and feature preferences influenced their continuous real-time affect responses during listening, and how the acoustic features of the music influenced their liking and familiarity ratings, for unfamiliar art music (the collection of the Australian Music Centre) during a sequential hybrid recommender-guided interaction. We successfully mitigated the unfavorable starting conditions (no prior item ratings or participant item choices) by using each participant's pre-listening music preferences, translated into acoustic features and linked to item view counts from the Australian Music Centre database, to choose their seed item. We found that liking/familiarity ratings for the first item were on average higher than those for the subsequent 15 items and comparable with the maximal values at the end of listeners' sequential responses, showing acoustic features to be useful predictors of responses. We required users to give a continuous indication of the affect they perceived the music to express as they listened to 30-second excerpts, and our system successfully provided either a "similar" or "dissimilar" next item, chosen from the affective responses to the preceding item, according to (and confirming) the utility of the items' acoustic features. We also developed predictive statistical time series models of liking and familiarity, using music preferences and preceding ratings. Our analyses suggest our users were at the low starting end of the commonly observed inverted-U relationship between exposure and both liking and perceived familiarity, which were closely related. Overall, our hybrid recommender worked well under extreme conditions, with 53 unique items from 100 chosen as "seed" items, suggesting future enhancements of our approach can productively encourage exploration of libraries of unfamiliar music.
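The cold-start strategy described above (mapping a user's stated preferences to acoustic features and weighting by database view counts) can be sketched in a few lines. This is an illustration only, not the authors' implementation: the catalogue, feature values, and the `alpha` popularity weight are all invented for the example.

```python
import math

# Hypothetical catalogue entries: (item_id, acoustic feature vector, view count).
# The two feature dimensions might stand for, say, brightness and event density.
catalogue = [
    ("A", (0.2, 0.8), 120),
    ("B", (0.6, 0.4), 15),
    ("C", (0.9, 0.1), 300),
]

def choose_seed(user_pref, catalogue, alpha=0.1):
    """Score each item by closeness of its acoustic features to the user's
    preference vector, plus a small log-scaled popularity bonus from views."""
    def score(entry):
        _, feats, views = entry
        return -math.dist(feats, user_pref) + alpha * math.log1p(views)
    return max(catalogue, key=score)[0]

print(choose_seed((0.25, 0.75), catalogue))  # nearest item "A" wins despite fewer views
```

The popularity term only breaks ties between acoustically similar items; the distance term dominates, which matches the paper's emphasis on acoustic features as the primary cold-start signal.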

2021, Vol 11 (11), pp. 4834
Author(s): Kai Ren Teo, Balamurali B T, Jianying Zhou, Jer-Ming Chen

Many mobile electronic devices, including smartphones and tablets, require the user to interact physically with the device by tapping the touchscreen. Conveniently, these compact devices are also equipped with high-precision transducers such as accelerometers and microphones, mechanically integrated and designed on-board to support a range of user functionalities. However, unintended access to these transducer signals (bypassing normal on-board data access controls) may allow sensitive user interaction information to be detected and thereby exploited. In this study, we show that acoustic features extracted from the on-board microphone signals, supported by accelerometer and gyroscope signals, may be used together with machine learning techniques to determine the user's touch input location on a touchscreen: our ensemble model, a random forest, predicts touch input location with up to 86% accuracy in a realistic scenario. Accordingly, we present the approach and techniques used and the performance of the model developed, and discuss limitations and possible mitigation methods to thwart exploitation of such unintended signal channels.
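The classification setup named above (a random forest over per-tap feature vectors) can be sketched with scikit-learn. The synthetic features below are stand-ins for the paper's acoustic/accelerometer/gyroscope features, and the three screen regions are invented; only the model family matches the abstract.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
# Pretend each tap yields a 4-dim feature vector (e.g. mic energy, spectral
# centroid, accelerometer peak, gyroscope peak), one cluster per screen region.
centres = np.array([[0.0, 0, 0, 0], [5, 5, 0, 0], [0, 0, 5, 5]])
X = np.vstack([c + rng.normal(scale=0.5, size=(50, 4)) for c in centres])
y = np.repeat(["top", "middle", "bottom"], 50)

clf = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)
print(clf.predict([[5, 5, 0, 0]])[0])  # classifies a tap near the "middle" cluster
```

With well-separated clusters like these the task is trivial; the paper's 86% figure reflects the much noisier real-world signals.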


2020, Vol 17 (9), pp. 4145-4149
Author(s): A. N. Myna, K. Deepthi, Samvrudhi V. Shankar

Music plays an integral role in our lives as one of the most popular forms of recreation. With the advent of new technologies such as the Internet and portable media players, a vast amount of music is available online and easily distributed to people. An enormous amount of music is released every year by many artists, with songs varying in features, genre, and so on. This creates a need for reliable and easy access to songs based on user preferences. A recommender system generates playlists based either on the physical, perceptual, and acoustical properties of a song (content-based filtering) or on commonalities between users, such as ratings or usage history (collaborative filtering). The system developed here is a hybrid music recommender that combines a user-centric suggestion system with feature extraction, which in turn enhances the accuracy of its recommendations.


Author(s): Monishkanna Barathan, Ershad Sharifahmadian

Due to the growing amount of available information, finding places and planning the activities of a tour can be strenuous. Tourists look for information about places they have not visited before, which complicates selecting the places that best fit their preferences. Recommendation systems have been fundamental in tourism: they suggest suitable places and effectively prune the large volume of information about different locations, directing tourists toward places matched with their needs and preferences. Several techniques have been studied for point-of-interest (POI) recommendation, including content-based methods, which build on user preferences; collaborative filtering, which exploits the behavior of other users; knowledge-based methods; and several other techniques. These methods suffer from limitations and shortcomings related to the recommendation environment, such as scalability, sparsity, and the first-rater and gray-sheep problems. This paper identifies the drawbacks that prevent widespread use of these methodologies in recommendation. To improve the performance of recommendation systems, these methods are combined to form hybrid recommenders. This paper proposes a novel hybrid recommender system that suggests tourism destinations to a user with minimal user interaction. Furthermore, we use sentiment analysis of users' comments to enhance the efficiency of the proposed system.
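One simple way to fold comment sentiment into a recommender, in the spirit of the proposal above, is to scale each POI's base score by the average polarity of its comments. The tiny word-list sentiment function below is a stand-in for a real sentiment analyser, and every word, score, and comment is invented for illustration:

```python
# Toy lexicon; a real system would use a trained sentiment model.
POSITIVE = {"great", "beautiful", "amazing", "relaxing"}
NEGATIVE = {"crowded", "dirty", "boring", "expensive"}

def sentiment(comment):
    """Crude polarity in [-1, 1]: (positive - negative words) / word count."""
    words = comment.lower().split()
    score = sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)
    return score / max(len(words), 1)

def adjusted_score(base_score, comments):
    """Boost well-reviewed places and demote poorly reviewed ones."""
    avg = sum(sentiment(c) for c in comments) / len(comments)
    return base_score * (1 + avg)

print(adjusted_score(0.8, ["great beautiful beach", "very crowded"]))
```

Because the adjustment is multiplicative, sentiment re-ranks candidates the hybrid recommender already likes rather than introducing places with no other supporting signal.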


2019, Vol 36 (4), pp. 335-352
Author(s): Jan-Peter Herbst

Research on rock harmony accords with common practice in guitar playing in that power chords (fifth interval) with an indeterminate chord quality, as well as major chords, are preferred to more complex chords when played with a distorted tone. This study explored the interrelated effects of distortion and harmonic structure on the acoustic features and perceived pleasantness of electric guitar chords. Extracting psychoacoustic parameters from guitar tones with Music Information Retrieval technology revealed that the level of distortion and the complexity of interval relations affect sensory pleasantness. A listening test demonstrated that power and major chords are perceived as significantly more pleasant than minor and altered dominant chords when played with an overdriven or distorted guitar tone. This result accords with musical practice within rock genres: relatively clean rock styles such as blues or classic rock use major chords frequently, whereas subgenres with more distorted guitars, such as heavy metal, largely prefer power chords. Considering individual differences, electric guitar players rated overdriven and distorted chords as significantly more pleasant. Results were ambiguous in terms of gender but indicated that women perceive distorted guitar tones as less pleasant than men do. Rock music listeners were more tolerant of sensorially unpleasant sounds.
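The interaction between interval complexity and distortion can be caricatured numerically: distortion adds upper harmonics, and roughness grows when partials from different notes land close enough in frequency to beat. The count below is a deliberately crude stand-in for proper psychoacoustic roughness models (not the study's MIR pipeline), and the frequencies, harmonic counts, and 30 Hz band are illustrative choices:

```python
def partials(f0, n_harmonics):
    """Harmonic series of one note; more harmonics ~ more distortion."""
    return [f0 * k for k in range(1, n_harmonics + 1)]

def roughness_proxy(fundamentals, n_harmonics=8, band_hz=30.0):
    """Count adjacent partial pairs close enough to beat audibly."""
    ps = sorted(p for f in fundamentals for p in partials(f, n_harmonics))
    return sum(1 for a, b in zip(ps, ps[1:]) if 0 < b - a < band_hz)

power = roughness_proxy([110.0, 165.0])           # A2 + E3: pure 3:2 fifth
minor = roughness_proxy([110.0, 130.81, 164.81])  # A minor triad
print(power, minor)  # the minor triad produces more beating partial pairs
```

The fifth's 3:2 ratio makes its partials either coincide exactly or stay far apart, which is why power chords survive heavy distortion better than triads with a minor third.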


2018, Vol 36 (2), pp. 217-242
Author(s): Elke B. Lange, Klaus Frieler

Music information retrieval (MIR) is a fast-growing research area. One of its aims is to extract musical characteristics from audio. In this study, we assumed the role of researchers without further technical MIR experience and set out to test, in an exploratory way, its opportunities and challenges in the specific context of musical emotion perception. Twenty sound engineers rated 60 musical excerpts from a broad range of styles with respect to 22 spectral, musical, and cross-modal features (perceptual features) and perceived emotional expression. In addition, we extracted 86 features (acoustic features) from the excerpts with the MIRtoolbox (Lartillot & Toiviainen, 2007). First, we evaluated the perceptual and extracted acoustic features. Both posed statistical challenges (e.g., perceptual features were often bimodally distributed, and acoustic features were highly correlated). Second, we tested the suitability of the acoustic features for modeling perceived emotional content. Four nearly disjoint feature sets provided similar results, implying a certain arbitrariness of feature selection. We compared the predictive power of perceptual and acoustic features using linear mixed-effects models, but the results were inconclusive. We discuss critical points and make suggestions for further evaluating MIR tools for modeling music perception and processing.
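One common response to the highly correlated acoustic features noted above is to greedily drop any feature that correlates too strongly with a feature already kept. This generic sketch is not from the study (which used the MATLAB MIRtoolbox); the feature names, values, and 0.9 threshold are invented:

```python
def pearson(x, y):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    vx = sum((a - mx) ** 2 for a in x) ** 0.5
    vy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (vx * vy)

def prune_correlated(features, threshold=0.9):
    """Keep each feature only if it is not near-duplicated by one kept earlier."""
    kept = {}
    for name, values in features.items():
        if all(abs(pearson(values, v)) < threshold for v in kept.values()):
            kept[name] = values
    return list(kept)

feats = {
    "rms":      [0.1, 0.4, 0.5, 0.9],
    "loudness": [0.2, 0.5, 0.6, 1.0],   # a constant offset from rms: r = 1.0
    "centroid": [0.9, 0.1, 0.8, 0.2],
}
print(prune_correlated(feats))  # "loudness" dropped as redundant with "rms"
```

Greedy pruning is order-dependent, which echoes the study's observation that several nearly disjoint feature sets can model the data about equally well.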


2019
Author(s): Aluizio Oliveira Neto

This piece explores some possibilities of using Music Information Retrieval and signal processing techniques to extract acoustic features from recorded material and to use these data to inform the decision-making process intrinsic to music composition. By identifying or creating sound descriptors that correlate with the composer's subjective sensations of listening, it was possible to compare and manipulate samples on the basis of this information, bridging the gap between imagined acoustic targets and the actions required to achieve them. "Iterative Meditations" was created through an iterative process of listening, analyzing, acting, and refining the analysis techniques used; its end products are the musical piece itself as well as a collection of tools for writing music.


Author(s): Kanawat Sorussa, Anant Choksuriwong, Montri Karnjanadecha

Music selection is difficult without efficient organization based on metadata or tags, and one effective tagging scheme is based on the emotion expressed by the music. However, manual annotation is labor-intensive and unstable because the perception of music emotion varies from person to person. This paper presents an emotion classification system for digital music with a resolution of eight emotional classes. Russell's emotion model was adopted as common ground for emotional annotation. The music information retrieval (MIR) toolbox was employed to extract acoustic features from audio files. The classification system used a supervised machine learning technique to recognize acoustic features and create predictive models. Four predictive models were proposed and compared. The models were constructed by cross-matching two types of neural networks, i.e., Levenberg-Marquardt (LM) and resilient backpropagation (Rprop), with two types of structure: a traditional multiclass model and a cascaded structure of binary-class models. The performance of each model was evaluated on the MediaEval Database for Emotional Analysis (DEAM) benchmark. The best result was achieved by the model trained with the cascaded Rprop neural network (accuracy of 89.5%). In addition, correlation coefficient analysis showed that timbre features were the most impactful for prediction. Our work offers an opportunity for a competitive advantage in music classification, because only a few music providers currently tag music with emotional terms.
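The cascaded binary-class structure over Russell's valence-arousal plane can be illustrated schematically: each stage makes one binary decision, and the decisions jointly select one of eight classes. Real systems use trained networks at each stage; simple thresholding and invented class names stand in here:

```python
def classify(valence, arousal):
    """Three cascaded binary decisions over Russell's plane -> 8 classes.
    Stage 1: valence sign; stage 2: arousal sign; stage 3: intensity."""
    half = "positive" if valence >= 0 else "negative"
    level = "high" if arousal >= 0 else "low"
    strong = abs(valence) + abs(arousal) >= 1.0   # illustrative threshold
    labels = {
        ("positive", "high"): ("excited", "happy"),
        ("positive", "low"):  ("relaxed", "calm"),
        ("negative", "high"): ("angry", "tense"),
        ("negative", "low"):  ("depressed", "sad"),
    }
    strong_label, mild_label = labels[(half, level)]
    return strong_label if strong else mild_label

print(classify(0.8, 0.7))   # strongly positive valence, high arousal
print(classify(-0.2, -0.3)) # mildly negative valence, low arousal
```

A practical advantage of the cascade, consistent with the paper's comparison against a flat multiclass model, is that each binary stage faces an easier decision boundary than one eight-way classifier.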

