Predicting Key Recognition Difficulty in Music Using Statistical Learning Techniques

Author(s):  
Ching-Hua Chuan ◽  
Aleksey Charapko

In this paper, the authors use statistical models to predict the difficulty of recognizing musical keys from polyphonic audio signals. Key recognition difficulty provides important background information when comparing the performance of audio key-finding algorithms, which are often evaluated on different private data sets. Given an audio recording, represented by extracted acoustic features, the authors applied multiple linear regression and a proportional odds model to predict the difficulty level of the recording, annotated by three musicians as an integer on a 5-point Likert scale. The authors evaluated the predictions using root mean square error, Pearson correlation coefficient, exact accuracy, and adjacent accuracy. They also discussed issues such as the differences between the musicians' annotations and the consistency of those annotations. To identify potential causes of the perceived difficulty for the individual musicians, the authors applied decision tree-based filtering with bagging. Using weighted naïve Bayes, they examined the effectiveness of each identified feature via a classification task.
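
A minimal Python sketch of the evaluation setup is given below: it fits a multiple linear regression to placeholder acoustic features (the proportional odds model is omitted for brevity) and computes the four reported metrics. The data, feature dimensions, and rounding rule are synthetic assumptions for illustration, not the authors' material.

    import numpy as np
    from sklearn.linear_model import LinearRegression
    from scipy.stats import pearsonr

    # Hypothetical stand-ins: rows are recordings, columns are extracted
    # acoustic features; y holds 1-5 difficulty ratings from one annotator.
    rng = np.random.default_rng(0)
    X = rng.normal(size=(200, 12))
    y = np.clip(np.round(X[:, 0] + rng.normal(scale=0.8, size=200) + 3), 1, 5)

    model = LinearRegression().fit(X, y)
    pred = model.predict(X)

    rmse = np.sqrt(np.mean((pred - y) ** 2))
    r, _ = pearsonr(pred, y)
    rounded = np.clip(np.round(pred), 1, 5)
    exact_acc = np.mean(rounded == y)                 # predicted level matches exactly
    adjacent_acc = np.mean(np.abs(rounded - y) <= 1)  # off by at most one level

    print(rmse, r, exact_acc, adjacent_acc)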

2021 ◽  
Vol 71 (5) ◽  
pp. 647-655
Author(s):  
Girish Mishra ◽  
S. K. Pal ◽  
S. V. S. S. N. V. G. Krishna Murthy ◽  
Kanishk Vats ◽  
Rakshak Raina

Modern lightweight block ciphers provide powerful encryption methods for securing IoT communication data. Tiny digital devices exchange private data that individual users might not be willing to have disclosed, while adversaries do their utmost to capture this private data; their first step is to identify the encryption scheme. This work is an effort to construct a distinguisher that identifies the cipher used to encrypt traffic data. We establish a deep learning-based method to identify which of a set of three lightweight block ciphers, viz. LBlock, PRESENT and SPECK, was used for encryption. We use images from the MNIST and Fashion-MNIST data sets to build the cryptographic distinguisher. Our results show that the overall classification accuracy depends firstly on the type of key used in encryption and secondly on how frequently the pixel values change in the original input image.
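
The distinguisher idea can be illustrated with a small convolutional classifier over encrypted images. The sketch below is an assumption-laden toy: random bytes stand in for actual LBlock/PRESENT/SPECK ciphertexts of 28x28 images, and the architecture and training loop are illustrative, not the authors' network.

    import torch
    import torch.nn as nn

    # Hypothetical data: each example is a 28x28 array of ciphertext bytes
    # (an encrypted MNIST image); labels 0/1/2 mark LBlock/PRESENT/SPECK.
    x = torch.randint(0, 256, (64, 1, 28, 28)).float() / 255.0
    y = torch.randint(0, 3, (64,))

    distinguisher = nn.Sequential(
        nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
        nn.MaxPool2d(2),
        nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
        nn.MaxPool2d(2),
        nn.Flatten(),
        nn.Linear(32 * 7 * 7, 3),   # three classes: LBlock, PRESENT, SPECK
    )

    optimizer = torch.optim.Adam(distinguisher.parameters(), lr=1e-3)
    loss_fn = nn.CrossEntropyLoss()
    for _ in range(5):              # a few illustrative training steps
        optimizer.zero_grad()
        loss = loss_fn(distinguisher(x), y)
        loss.backward()
        optimizer.step()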


2021 ◽  
pp. 002203452110138
Author(s):  
C.M. Mörch ◽  
S. Atsu ◽  
W. Cai ◽  
X. Li ◽  
S.A. Madathil ◽  
...  

Dentistry increasingly integrates artificial intelligence (AI) to help improve the current state of clinical dental practice. However, this revolutionary technological field raises various complex ethical challenges. The objective of this systematic scoping review is to document the current uses of AI in dentistry and the ethical concerns or challenges they imply. Three health care databases (MEDLINE [PubMed], SciVerse Scopus, and Cochrane Library) and 2 computer science databases (ArXiv, IEEE Xplore) were searched. After identifying 1,553 records, the documents were filtered, and a full-text screening was performed. In total, 178 studies were retained and analyzed by 8 researchers specialized in dentistry, AI, and ethics. The team used Covidence for data extraction and Dedoose for the identification of ethics-related information. PRISMA guidelines were followed. Among the included studies, 130 (73.0%) were published after 2016, and 93 (52.2%) were published in journals specialized in computer sciences. The technologies used were neural learning techniques for 75 (42.1%), traditional learning techniques for 76 (42.7%), or a combination of several technologies for 20 (11.2%). Overall, 7 countries contributed to 109 (61.2%) studies. A total of 53 different applications of AI in dentistry were identified, involving most dental specialties. The use of initial data sets for internal validation was reported in 152 (85.4%) studies. Forty-five ethical issues (related to the use of AI in dentistry) were reported in 22 (12.4%) studies around 6 principles: prudence (10 times), equity (8), privacy (8), responsibility (6), democratic participation (4), and solidarity (4). The ratio of studies mentioning AI-related ethical issues has remained similar in recent years, indicating that interest in this topic within dentistry is not increasing. This study confirms the growing presence of AI in dentistry and highlights a current lack of information on the ethical challenges surrounding its use. In addition, the scarcity of studies sharing their code could hinder future replication. The authors formulate recommendations to contribute to a more responsible use of AI technologies in dentistry.


2021 ◽  
Vol 13 (2) ◽  
pp. 164
Author(s):  
Chuyao Luo ◽  
Xutao Li ◽  
Yongliang Wen ◽  
Yunming Ye ◽  
Xiaofeng Zhang

The task of precipitation nowcasting is significant in operational weather forecasting, and radar echo map extrapolation plays a vital role in this task. Recently, deep learning techniques such as Convolutional Recurrent Neural Network (ConvRNN) models have been designed to solve the task. These models, albeit performing much better than conventional optical flow based approaches, suffer from a common problem of underestimating the high echo value parts. This drawback is critical for precipitation nowcasting, as these parts often correspond to heavy rain that may cause natural disasters. In this paper, we propose a novel interaction dual attention long short-term memory (IDA-LSTM) model to address the drawback. In the method, an interaction framework is developed for the ConvRNN unit to fully exploit short-term context information by constructing a series of coupled convolutions on the input and hidden states. Moreover, a dual attention mechanism on channels and positions is developed to recall the forgotten information in the long term. Comprehensive experiments have been conducted on the CIKM AnalytiCup 2017 data sets, and the results show the effectiveness of IDA-LSTM in addressing the underestimation drawback. The extrapolation performance of IDA-LSTM is superior to that of state-of-the-art methods.
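
As a rough illustration of attention over channels and positions, the sketch below applies a channel gate and a spatial gate to a hidden-state feature map. It is a generic PyTorch toy under assumed tensor shapes, not the authors' IDA-LSTM implementation.

    import torch
    import torch.nn as nn

    class DualAttention(nn.Module):
        """Illustrative channel + position attention over a feature map."""
        def __init__(self, channels):
            super().__init__()
            # Channel attention: pool away the spatial dims, weight each channel.
            self.channel_gate = nn.Sequential(
                nn.AdaptiveAvgPool2d(1),
                nn.Conv2d(channels, channels, kernel_size=1),
                nn.Sigmoid(),
            )
            # Position attention: one weight per spatial location.
            self.position_gate = nn.Sequential(
                nn.Conv2d(channels, 1, kernel_size=1),
                nn.Sigmoid(),
            )

        def forward(self, h):                 # h: (batch, channels, H, W)
            h = h * self.channel_gate(h)      # re-weight channels
            h = h * self.position_gate(h)     # re-weight spatial positions
            return h

    attn = DualAttention(channels=64)
    hidden = torch.randn(2, 64, 32, 32)       # assumed hidden-state shape
    out = attn(hidden)                         # same shape as the input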


2019 ◽  
Author(s):  
Ulrike Niemeier ◽  
Claudia Timmreck ◽  
Kirstin Krüger

Abstract. In 1963 a series of eruptions of Mt. Agung, Indonesia, resulted in the third largest eruption of the 20th century and claimed about 1900 lives. Two eruptions of this series injected SO2 into the stratosphere, a requirement for a long-lasting stratospheric sulfate layer. The first eruption, on March 17th, injected 4.7 Tg SO2 into the stratosphere; the second, on May 16th, injected 2.3 Tg SO2. In recent volcanic emission data sets these eruption phases are merged into one large eruption phase for Mt. Agung in March 1963 with an injection rate of 7 Tg SO2. The injected sulfur forms a sulfate layer in the stratosphere. The evolution of the sulfur is non-linear and depends on the injection rate and aerosol background conditions. We performed ensembles of two model experiments, one with a single eruption and a second with two eruptions. The two smaller eruptions result in a lower burden, smaller particles, and a 0.1 to 0.3 Wm−2 (10–20 %) lower radiative forcing in the monthly mean global average compared to the single-eruption experiment. The differences are the consequence of slightly stronger meridional transport due to the different seasons of the eruptions, the lower injection height of the second eruption, and the resulting different aerosol evolution. The differences between the two experiments are significant but smaller than the variance of the individual ensemble means. Overall, the evolution of the volcanic clouds differs between the two-eruption and single-eruption cases. We conclude that there is no justification for using only one eruption, and that both climatic eruptions should be taken into account in future emission data sets.


2016 ◽  
Vol 2016 ◽  
pp. 1-18 ◽  
Author(s):  
Mustafa Yuksel ◽  
Suat Gonul ◽  
Gokce Banu Laleci Erturkmen ◽  
Ali Anil Sinaci ◽  
Paolo Invernizzi ◽  
...  

Depending mostly on voluntarily sent spontaneous reports, pharmacovigilance studies are hampered by the low quantity and quality of patient data. Our objective is to improve postmarket safety studies by enabling safety analysts to seamlessly access a wide range of EHR sources, collect deidentified medical data sets of selected patient populations, and trace reported incidents back to the original EHRs. We have developed an ontological framework in which EHR sources and target clinical research systems can continue using their own local data models, interfaces, and terminology systems, while structural and semantic interoperability are handled through rule-based reasoning on formal representations of the different models and terminology systems maintained in the SALUS Semantic Resource Set. The SALUS Common Information Model at the core of this set acts as the common mediator. We demonstrate the capabilities of our framework through one of the SALUS safety analysis tools, namely the Case Series Characterization Tool, which has been deployed on top of the regional EHR data warehouse of the Lombardy Region, containing about 1 billion records from 16 million patients, and has been validated by several pharmacovigilance researchers with real-life cases. The results confirm significant improvements in signal detection and evaluation compared to traditional methods, which lack this background information.
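
The mediation idea, in which formal representations of a local EHR model are rewritten into a common model by rules, can be sketched with rdflib. All namespaces, class names, and properties below (LOCAL, CIM, DiagnosisEntry, Condition) are hypothetical placeholders, not the actual SALUS Semantic Resource Set or Common Information Model.

    from rdflib import Graph, Literal, Namespace, RDF

    LOCAL = Namespace("http://example.org/local-ehr#")   # stand-in source EHR model
    CIM = Namespace("http://example.org/common-model#")  # stand-in common model

    source = Graph()
    source.add((LOCAL.entry42, RDF.type, LOCAL.DiagnosisEntry))
    source.add((LOCAL.entry42, LOCAL.icd10Code, Literal("I21.9")))

    # A toy rule: every local DiagnosisEntry with a coded diagnosis becomes a
    # Condition in the common model carrying the same code; a real mapping
    # would also translate between terminology systems.
    target = Graph()
    for entry in source.subjects(RDF.type, LOCAL.DiagnosisEntry):
        code = source.value(entry, LOCAL.icd10Code)
        target.add((entry, RDF.type, CIM.Condition))
        target.add((entry, CIM.diagnosisCode, code))

    print(target.serialize(format="turtle"))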


2016 ◽  
Vol 2016 (3) ◽  
pp. 96-116 ◽  
Author(s):  
Chad Spensky ◽  
Jeffrey Stewart ◽  
Arkady Yerukhimovich ◽  
Richard Shay ◽  
Ari Trachtenberg ◽  
...  

Abstract. Modern mobile devices place a wide variety of sensors and services within the personal space of their users. As a result, these devices are capable of transparently monitoring many sensitive aspects of these users’ lives (e.g., location, health, or correspondences). Users typically trade access to this data for convenient applications and features, in many cases without a full appreciation of the nature and extent of the information that they are exposing to a variety of third parties. Nevertheless, studies show that users remain concerned about their privacy and vendors have similarly been increasing their utilization of privacy-preserving technologies in these devices. Still, despite significant efforts, these technologies continue to fail in fundamental ways, leaving users’ private data exposed. In this work, we survey the numerous components of mobile devices, giving particular attention to those that collect, process, or protect users’ private data. Whereas the individual components have been generally well studied and understood, examining the entire mobile device ecosystem provides significant insights into its overwhelming complexity. The numerous components of this complex ecosystem are frequently built and controlled by different parties with varying interests and incentives. Moreover, most of these parties are unknown to the typical user. The technologies that are employed to protect the users’ privacy typically only do so within a small slice of this ecosystem, abstracting away the greater complexity of the system. Our analysis suggests that this abstracted complexity is the major cause of many privacy-related vulnerabilities, and that a fundamentally new, holistic approach to privacy is needed going forward. We thus highlight various existing technology gaps and propose several promising research directions for addressing and reducing this complexity.


2021 ◽  
pp. M56-2021-22
Author(s):  
Mirko Scheinert ◽  
Olga Engels ◽  
Ernst J. O. Schrama ◽  
Wouter van der Wal ◽  
Martin Horwath

Abstract. Geodynamic processes in Antarctica such as glacial isostatic adjustment (GIA) and post-seismic deformation are measured by geodetic observations such as GNSS and satellite gravimetry. GNSS measurements, comprising both continuous and episodic campaigns, have been carried out since the mid-1990s. The estimated velocities typically reach an accuracy of 1 mm/a for horizontal and 2 mm/a for vertical velocities. However, the elastic deformation due to present-day ice-load change needs to be considered accordingly. Space gravimetry derives mass changes from small variations in the inter-satellite distance of a pair of satellites, starting with the GRACE satellite mission in 2002 and continuing with the GRACE-FO mission launched in 2018. The spatial resolution of the measurements is low (about 300 km), but the measurement error is homogeneous across Antarctica. The estimated trends contain signals from ice-mass change as well as local and global GIA. To combine the strengths of the individual data sets, statistical combinations of GNSS, GRACE and satellite altimetry data have been developed. These combinations rely on realistic error estimates and assumptions about snow density. Nevertheless, they capture signal that is missing from geodynamic forward models, such as the large uplift in the Amundsen Sea sector due to the low-viscosity response to century-scale ice-mass changes.


Author(s):  
Gediminas Adomavicius ◽  
Yaqiong Wang

Numerical predictive modeling is widely used in different application domains. Although many modeling techniques have been proposed, and a number of different aggregate accuracy metrics exist for evaluating the overall performance of predictive models, other important aspects, such as the reliability (or confidence and uncertainty) of individual predictions, have been underexplored. We propose to use estimated absolute prediction error as the indicator of individual prediction reliability, which has the benefits of being intuitive and providing highly interpretable information to decision makers, as well as allowing for more precise evaluation of reliability estimation quality. As importantly, the proposed reliability indicator allows the reframing of reliability estimation itself as a canonical numeric prediction problem, which makes the proposed approach general-purpose (i.e., it can work in conjunction with any outcome prediction model), alleviates the need for distributional assumptions, and enables the use of advanced, state-of-the-art machine learning techniques to learn individual prediction reliability patterns directly from data. Extensive experimental results on multiple real-world data sets show that the proposed machine learning-based approach can significantly improve individual prediction reliability estimation as compared with a number of baselines from prior work, especially in more complex predictive scenarios.
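
The core idea of reframing reliability estimation as a second prediction problem can be sketched in a few lines: fit an outcome model, then fit another regressor whose target is the outcome model's absolute prediction error. The data set, model choices, and split below are illustrative assumptions rather than the authors' experimental setup.

    import numpy as np
    from sklearn.datasets import make_regression
    from sklearn.ensemble import GradientBoostingRegressor
    from sklearn.model_selection import train_test_split

    # Synthetic data as a stand-in for a real prediction task.
    X, y = make_regression(n_samples=2000, n_features=10, noise=10.0, random_state=0)
    X_base, X_rel, y_base, y_rel = train_test_split(X, y, test_size=0.5, random_state=0)

    # Step 1: fit the outcome model (any regressor could be plugged in here).
    outcome_model = GradientBoostingRegressor(random_state=0).fit(X_base, y_base)

    # Step 2: treat reliability estimation as another regression problem whose
    # target is the absolute prediction error of the outcome model.
    abs_error = np.abs(y_rel - outcome_model.predict(X_rel))
    reliability_model = GradientBoostingRegressor(random_state=0).fit(X_rel, abs_error)

    # At prediction time, each new case gets both a prediction and an
    # estimated absolute error as its individual reliability indicator.
    x_new = X[:5]
    predictions = outcome_model.predict(x_new)
    estimated_errors = reliability_model.predict(x_new)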

