Improvised music follows human language quantitative properties to optimize music processing

2021 ◽  
pp. 030573562110611
Author(s):  
Sarig Sela

Music is a cognitively demanding task. New tones override the previous tones in quick succession, leaving only a short window in which to process them. Language places similar constraints on the brain. The cognitive constraints associated with language processing have been argued to motivate the Chunk-and-Pass processing hypothesis and may shape the statistical regularities of word and phoneme presentation that have been identified in language and are thought to allow optimal communication. If this hypothesis is true, then similar statistical properties should be identifiable in music as in language. Rather than relying on artificially generated musical stimuli, I searched for real-life musical corpora and developed a novel approach to melodic fragmentation for a corpus of improvisation transcriptions representing a popular 16th-century performance practice tradition. These improvisations were created by following a very detailed technique, which was disseminated through music tutorials and treatises across Europe during the 16th century. These tutorials present a very precise methodology for improvisation, using a pre-defined vocabulary of melodic fragments (similar to modern jazz licks). I have found that these corpora follow two fundamental characteristics of quantitative linguistics: (1) Zipf's rank-frequency law and (2) Zipf's abbreviation law. According to the working hypothesis, adherence to these laws ensures the optimal coding of the examined music corpora, which facilitates improved cognitive processing for both the listener and the improviser. Although these statistical characteristics are not consciously implemented by the improviser, they might play a critical role in music processing for both the listener and the improviser.
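The two laws named above can be checked on any corpus of frequency-ranked units. Below is a minimal, hypothetical sketch in Python (not the author's analysis pipeline): made-up melodic fragment tokens stand in for the 16th-century corpus, a log-log fit estimates the Zipf rank-frequency exponent, and a frequency-length correlation probes the law of abbreviation.

```python
import numpy as np
from collections import Counter

# Toy illustration of the two laws named in the abstract. The "melodic
# fragments" below are invented token strings whose hyphen-separated length
# stands in for fragment duration; frequencies are made up as well.
fragments = (["do-re-mi"] * 30 + ["sol-fa"] * 18 + ["la-sol-fa-mi"] * 9 +
             ["re-mi-fa-sol-la"] * 4 + ["mi-re-do-ti-la-sol"] * 2)

counts = Counter(fragments)
ranked = counts.most_common()                      # [(fragment, frequency), ...]

# Zipf's rank-frequency law: fit log(frequency) ~ -alpha * log(rank).
ranks = np.arange(1, len(ranked) + 1)
freqs = np.array([f for _, f in ranked])
slope, _ = np.polyfit(np.log(ranks), np.log(freqs), 1)
print(f"estimated Zipf exponent: {-slope:.2f}")

# Zipf's law of abbreviation: frequency and length should correlate negatively.
lengths = np.array([len(frag.split("-")) for frag, _ in ranked])
r = np.corrcoef(freqs, lengths)[0, 1]
print(f"frequency-length correlation: {r:.2f}")
```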


Author(s):  
Jennifer M. Roche ◽  
Arkady Zgonnikov ◽  
Laura M. Morett

Purpose: The purpose of the current study was to evaluate the social and cognitive underpinnings of miscommunication during an interactive listening task. Method: An eye- and computer mouse-tracking visual-world paradigm was used to investigate how a listener's cognitive effort (local and global) and decision-making processes were affected by a speaker's use of ambiguity that led to a miscommunication. Results: Experiments 1 and 2 found that an environmental cue that made a miscommunication more or less salient impacted listener language processing effort (eye-tracking). Experiment 2 also indicated that listeners may develop different processing heuristics depending on the speaker's use of ambiguity that led to a miscommunication, exerting a significant impact on cognition and decision making. We also found that perspective-taking effort and decision-making complexity metrics (computer mouse tracking) predict language processing effort, indicating that instances of miscommunication produced cognitive consequences of indecision, thinking, and cognitive pull. Conclusion: Together, these results indicate that listeners behave both reciprocally and adaptively when miscommunications occur, but the way they respond is largely dependent upon the type of ambiguity and how often it is produced by the speaker.


2020 ◽  
Author(s):  
Kun Sun

Expectations or predictions about upcoming content play an important role during language comprehension and processing. One important aspect of recent studies of language comprehension and processing concerns the estimation of upcoming words in a sentence or discourse. Many studies have used eye-tracking data to explore computational and cognitive models of contextual word prediction and word processing. Eye-tracking data have previously been widely explored with a view to investigating the factors that influence word prediction. However, these studies are problematic on several levels, including the stimuli, corpora, and statistical tools they applied. Although various computational models have been proposed for simulating contextual word predictions, past studies usually preferred to use a single computational model. The disadvantage of this is that it often cannot give an adequate account of cognitive processing in language comprehension. To avoid these problems, this study draws upon a massive, natural, and coherent discourse as stimuli for collecting reading-time data. This study trains two state-of-the-art computational models (surprisal, and semantic (dis)similarity from word vectors obtained by linear discriminative learning (LDL)), measuring knowledge of both the syntagmatic and paradigmatic structure of language. We develop a 'dynamic approach' to computing semantic (dis)similarity. This is the first time that these two computational models have been merged. The models are evaluated using advanced statistical methods. Meanwhile, in order to test the efficiency of our approach, a recently developed cosine method of computing semantic (dis)similarity from word vectors is used for comparison with our 'dynamic' approach. The two computational and fixed-effect statistical models can be used to cross-verify the findings, thus ensuring that the results are reliable. All results support the view that surprisal and semantic similarity are opposed in predicting the reading time of words, although both make good predictions. Additionally, our 'dynamic' approach performs better than the popular cosine method. The findings of this study are therefore significant for acquiring a better understanding of how humans process words in a real-world context and how they make predictions in language cognition and processing.
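As a rough illustration of the two kinds of measure being compared (not the paper's LDL-based pipeline), the sketch below computes bigram surprisal with add-alpha smoothing and a cosine-based (dis)similarity between a word and its averaged context vector; the corpus and the word vectors are toy stand-ins.

```python
import numpy as np
from collections import Counter

# Toy corpus; in the paper the stimuli are a long, coherent natural discourse.
corpus = "the cat sat on the mat and the dog sat on the rug".split()

unigrams = Counter(corpus)
bigrams = Counter(zip(corpus[:-1], corpus[1:]))

def surprisal(prev, word, vocab_size, alpha=1.0):
    """-log2 P(word | prev) under a bigram model with add-alpha smoothing."""
    p = (bigrams[(prev, word)] + alpha) / (unigrams[prev] + alpha * vocab_size)
    return -np.log2(p)

# Hypothetical word vectors; in practice these would come from a trained model.
rng = np.random.default_rng(0)
vectors = {w: rng.normal(size=16) for w in unigrams}

def context_dissimilarity(context_words, word):
    """1 - cosine similarity between a word and its averaged context vector."""
    ctx = np.mean([vectors[w] for w in context_words], axis=0)
    v = vectors[word]
    cos = ctx @ v / (np.linalg.norm(ctx) * np.linalg.norm(v))
    return 1.0 - cos

V = len(unigrams)
for i in range(1, len(corpus)):
    w_prev, w = corpus[i - 1], corpus[i]
    s = surprisal(w_prev, w, V)
    d = context_dissimilarity(corpus[max(0, i - 3):i], w)
    print(f"{w:>4s}  surprisal={s:5.2f}  context dissimilarity={d:5.2f}")
```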


2021 ◽  
Vol 55 (1) ◽  
pp. 1-2
Author(s):  
Bhaskar Mitra

Neural networks with deep architectures have demonstrated significant performance improvements in computer vision, speech recognition, and natural language processing. The challenges in information retrieval (IR), however, are different from these other application areas. A common form of IR involves ranking of documents---or short passages---in response to keyword-based queries. Effective IR systems must deal with the query-document vocabulary mismatch problem by modeling relationships between different query and document terms and how they indicate relevance. Models should also consider lexical matches when the query contains rare terms---such as a person's name or a product model number---not seen during training, and avoid retrieving semantically related but irrelevant results. In many real-life IR tasks, retrieval involves extremely large collections---such as the document index of a commercial Web search engine---containing billions of documents. Efficient IR methods should take advantage of specialized IR data structures, such as the inverted index, to retrieve efficiently from large collections. Given an information need, the IR system also mediates how much exposure an information artifact receives by deciding whether it should be displayed, and where it should be positioned, among other results. Exposure-aware IR systems may optimize for additional objectives besides relevance, such as parity of exposure for retrieved items and content publishers. In this thesis, we present novel neural architectures and methods motivated by the specific needs and challenges of IR tasks. We ground our contributions with a detailed survey of the growing body of neural IR literature [Mitra and Craswell, 2018]. Our key contribution towards improving the effectiveness of deep ranking models is the development of the Duet principle [Mitra et al., 2017], which emphasizes the importance of incorporating evidence based on both patterns of exact term matches and similarities between learned latent representations of query and document. To retrieve efficiently from large collections, we develop a framework for incorporating query term independence [Mitra et al., 2019] into any arbitrary deep model, enabling large-scale precomputation and the use of an inverted index for fast retrieval. In the context of stochastic ranking, we further develop optimization strategies for exposure-based objectives [Diaz et al., 2020]. Finally, this dissertation also summarizes our contributions towards benchmarking neural IR models in the presence of large training datasets [Craswell et al., 2019] and explores the application of neural methods to other IR tasks, such as query auto-completion.
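The query term independence idea can be made concrete with a small sketch: if the document score decomposes additively over query terms, per-term scores can be precomputed and served from an inverted-index-like structure. The term scores and document IDs below are hypothetical; in practice s(t, d) would come from a neural ranking model such as those discussed in the thesis.

```python
from collections import defaultdict

# Precomputed per-term scores, term -> {doc_id: s(t, d)}. Made-up numbers; the
# point is only that each posting list can be built offline, one term at a time.
precomputed = {
    "neural":  {"d1": 1.2, "d3": 0.4},
    "ranking": {"d1": 0.9, "d2": 1.1},
    "index":   {"d2": 0.3, "d3": 1.5},
}

def retrieve(query_terms, k=2):
    """Score documents as the sum of per-term scores (independence assumption)."""
    scores = defaultdict(float)
    for t in query_terms:
        for doc, s in precomputed.get(t, {}).items():   # posting-list lookup
            scores[doc] += s
    return sorted(scores.items(), key=lambda x: -x[1])[:k]

# d1 ranks first (1.2 + 0.9), followed by d2 (1.1).
print(retrieve(["neural", "ranking"]))
```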


2020 ◽  
Vol 11 (1) ◽  
pp. 24
Author(s):  
Jin Tao ◽  
Kelly Brayton ◽  
Shira Broschat

Advances in genome sequencing technology and computing power have brought about explosive growth of sequenced genomes in public repositories, with a concomitant increase in annotation errors. Many protein sequences are annotated using computational analysis rather than experimental verification, leading to inaccuracies in annotation. Confirmation of existing protein annotations is urgently needed before misannotation becomes even more prevalent due to error propagation. In this work we present a novel approach for automatically confirming that manually curated protein annotations are supported by experimental evidence. Our ensemble learning method uses a combination of recurrent convolutional neural network, logistic regression, and support vector machine models. Natural language processing in the form of word embeddings is applied to journal publication titles retrieved from the UniProtKB database. Importantly, we use recall as our most significant metric to ensure the maximum number of verifications possible; results are reported to a human curator for confirmation. Our ensemble model achieves 91.25% recall, 71.26% accuracy, 65.19% precision, and an F1 score of 76.05%, and outperforms the Bidirectional Encoder Representations from Transformers for Biomedical Text Mining (BioBERT) model with fine-tuning using the same data.
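A minimal sketch of the ensemble idea (not the authors' trained models or data) is shown below: predicted probabilities from three hypothetical models are averaged and thresholded with a bias toward recall, the metric the authors prioritize so that as many candidate annotations as possible reach the human curator.

```python
import numpy as np
from sklearn.metrics import recall_score

# Made-up labels and model outputs; in the paper the three models are an RCNN,
# logistic regression, and an SVM trained on embedded publication titles.
y_true = np.array([1, 0, 1, 1, 0, 1])          # 1 = annotation has experimental evidence

p_rcnn = np.array([0.80, 0.30, 0.55, 0.70, 0.20, 0.40])
p_lr   = np.array([0.75, 0.40, 0.60, 0.65, 0.35, 0.45])
p_svm  = np.array([0.70, 0.25, 0.50, 0.80, 0.30, 0.55])

p_ensemble = (p_rcnn + p_lr + p_svm) / 3.0      # soft voting: average the probabilities

threshold = 0.4                                  # a low threshold trades precision for recall
y_pred = (p_ensemble >= threshold).astype(int)

print("recall:", recall_score(y_true, y_pred))
```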


2020 ◽  
Vol 2020 (1) ◽  
Author(s):  
Shu-Bo Chen ◽  
Saima Rashid ◽  
Muhammad Aslam Noor ◽  
Zakia Hammouch ◽  
Yu-Ming Chu

Inequality theory provides a significant mechanism for managing symmetrical aspects of real-life circumstances. The distinguishing features of integral inequalities and fractional calculus offer strong potential for handling continuous problems with high proficiency. This manuscript contributes to a compelling combination of fractional calculus, special functions, and convex functions. The authors develop a novel approach for investigating a new class of convex functions, known as n-polynomial $\mathcal{P}$-convex functions. Considering two identities via generalized fractional integrals, we provide several generalizations of the Hermite–Hadamard and Ostrowski type inequalities by employing Hölder and power-mean inequalities. With this new strategy, using the concept of n-polynomial $\mathcal{P}$-convexity, we can recover several other classes, namely n-polynomial harmonically convex, n-polynomial convex, classical harmonically convex, and classical convex functions, as particular cases. In order to investigate the efficiency and supremacy of the suggested scheme regarding fractional calculus, special functions, and n-polynomial $\mathcal{P}$-convexity, we present two applications for the modified Bessel function and the $\mathfrak{q}$-digamma function. Finally, these outcomes can evaluate the possible symmetric roles of the criteria that express the real phenomena of the problem.
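For context, the classical Hermite–Hadamard inequality for a convex function f on [a, b], and the n-polynomial convexity template from the recent literature that the paper's n-polynomial $\mathcal{P}$-convexity builds on, can be written as follows; the exact $\mathcal{P}$-convex definition used in the paper is not reproduced here.

```latex
% Classical Hermite--Hadamard inequality for a convex f on [a, b]:
\[
  f\!\left(\frac{a+b}{2}\right)
  \;\le\; \frac{1}{b-a}\int_a^b f(x)\,dx
  \;\le\; \frac{f(a)+f(b)}{2}.
\]
% n-polynomial convexity (as defined in the recent literature); the paper's
% n-polynomial P-convexity generalizes this template:
\[
  f\bigl(tx+(1-t)y\bigr)
  \;\le\; \frac{1}{n}\sum_{s=1}^{n}\bigl[1-(1-t)^{s}\bigr]f(x)
  \;+\; \frac{1}{n}\sum_{s=1}^{n}\bigl[1-t^{s}\bigr]f(y),
  \qquad t\in[0,1].
\]
```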


2021 ◽  
Vol 21 (1) ◽  
Author(s):  
Bo Sun ◽  
Fei Zhang ◽  
Jing Li ◽  
Yicheng Yang ◽  
Xiaolin Diao ◽  
...  

Background: With the development and application of medical information systems, semantic interoperability is essential for accurate and advanced health-related computing and for sharing electronic health record (EHR) information. The openEHR approach can improve semantic interoperability. One key advantage of openEHR is that it allows for the use of existing archetypes. The crucial problem is how to improve precision and resolve ambiguity in archetype retrieval. Method: Based on query expansion technology and the Word2Vec model in Natural Language Processing (NLP), we propose finding synonyms as substitutes for original search terms in archetype retrieval. Test sets at different medical professional levels are used to verify feasibility. Results: Applying the approach to each original search term (n = 120) in the test sets, a total of 69,348 substitutes were constructed. Precision at 5 (P@5) was improved by 0.767 on average. For the best result, P@5 was up to 0.975. Conclusions: We introduce a novel approach that uses NLP technology and a corpus to find synonyms as substitutes for original search terms. Compared to simply mapping the elements contained in openEHR to an external dictionary, this approach can greatly improve precision and resolve ambiguity in retrieval tasks. This is helpful for promoting the application of openEHR and advancing EHR information sharing.
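A minimal sketch of the query-expansion step, assuming the gensim library and a tiny made-up clinical corpus (not the authors' data or trained model), is shown below: nearest neighbours of an original search term are returned as candidate substitutes for retrieval.

```python
from gensim.models import Word2Vec

# Toy corpus of tokenized "clinical" phrases; in the paper the model is trained
# on a much larger domain corpus.
corpus = [
    ["blood", "pressure", "measurement", "record"],
    ["heart", "rate", "measurement", "record"],
    ["blood", "pressure", "monitoring", "observation"],
    ["body", "temperature", "observation", "record"],
]

model = Word2Vec(corpus, vector_size=32, window=3, min_count=1, epochs=200, seed=1)

def expand(term, topn=3):
    """Return candidate substitutes for an original search term."""
    return [w for w, _ in model.wv.most_similar(term, topn=topn)]

print("substitutes for 'measurement':", expand("measurement"))
```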


2021 ◽  
Vol 16 (1) ◽  
pp. 1-23
Author(s):  
Bo Liu ◽  
Haowen Zhong ◽  
Yanshan Xiao

Multi-view classification aims at designing a multi-view learning strategy to train a classifier from multi-view data, which are easily collected in practice. Most existing works address multi-view classification by assuming that the multi-view data are collected with precise information. However, in real-life applications we often collect uncertain multi-view data because the collection process is corrupted by noise. For this case, this article proposes a novel approach, called uncertain multi-view learning with support vector machine (UMV-SVM), to cope with the problem of multi-view learning with uncertain data. The method first enforces agreement among all the views to seek complementary information in the multi-view data and takes the uncertainty of the multi-view data into consideration by modeling the reachability area of the noise. It then proposes an iterative framework for solving the UMV-SVM model so that we can obtain the multi-view classifier for prediction. Extensive experiments on real-life datasets have shown that the proposed UMV-SVM achieves better performance for uncertain multi-view classification than state-of-the-art multi-view classification methods.
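As a simplified illustration only (the authors' UMV-SVM, including the noise reachability modelling and the iterative solver, is not reproduced), the sketch below trains one SVM per view on synthetic two-view data and averages the decision values as a crude stand-in for the agreement constraint between views.

```python
import numpy as np
from sklearn.svm import SVC

# Synthetic two-view data: both views carry the label signal, with different
# noise levels standing in for uncertain measurements.
rng = np.random.default_rng(0)
n = 200
y = rng.integers(0, 2, size=n) * 2 - 1                            # labels in {-1, +1}
view1 = y[:, None] * 1.0 + rng.normal(scale=1.0, size=(n, 5))     # noisy view 1
view2 = y[:, None] * 0.8 + rng.normal(scale=1.5, size=(n, 3))     # noisier view 2

clf1 = SVC(kernel="rbf").fit(view1[:150], y[:150])
clf2 = SVC(kernel="rbf").fit(view2[:150], y[:150])

# Combine the two views by averaging their decision values on held-out data.
scores = 0.5 * clf1.decision_function(view1[150:]) + 0.5 * clf2.decision_function(view2[150:])
pred = np.where(scores >= 0, 1, -1)
print("test accuracy:", (pred == y[150:]).mean())
```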


Author(s):  
Primož Cigoj ◽  
Borka Jerman Blažič

This paper presents a novel approach to education in the area of digital forensics based on a multi-platform cloud-computing infrastructure and an innovative computer-based tool. The tool is installed and available through the cloud-based infrastructure of the Dynamic Forensic Education Alliance. Cloud computing provides an efficient mechanism for a wide range of services that offer real-life environments for teaching and training cybersecurity and digital forensics. The cloud-based infrastructure, the virtualized environment, and the developed educational tool enable the construction of a dynamic e-learning environment that brings the training very close to reality and to real-life situations. The paper presents the dynamic forensic education tool named EduFors and describes the different levels of college and university education at which the tool is introduced and used in the training of future investigators of cybercrime events.


2014 ◽  
Vol 6 (2) ◽  
pp. 23-36
Author(s):  
Fatma Molu

Complex financial conversion projects with large budgets face many different challenges. For companies that want to survive under tough competition, legacy (old) systems must continue to provide the required service throughout the project life cycle and, in some circumstances, even partly after project completion. In this case, the term coexistence comes into prominence. During this period, the testing phase takes on a more critical role as the complexity and risk of the integrated systems increase. Determining the testing approach to use is essential to make sure that both the transformed and legacy systems provide service synchronously. In this paper, testing practices applied in long conversion processes are discussed. Primarily, the basic features of critical financial systems are addressed, and then the main adoption methods in the literature are summarized. A variety of testing methodologies are then presented depending on those adoption methods. These examples are based on real-life experience of a transformation project. The most extensive example of real-time online financial systems is core banking systems. This paper covers the testing life cycle of a large-scale core banking system transformation project at a bank in Turkey.
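One common coexistence-phase practice consistent with the approaches discussed here is a parallel-run reconciliation test, in which the same inputs are replayed against both systems and the outputs compared. The sketch below is purely illustrative; the two balance functions are hypothetical stand-ins for calls to the legacy and transformed systems.

```python
# Illustrative parallel-run reconciliation: compare the legacy and transformed
# systems' answers for the same accounts and report any mismatches.

def legacy_balance(account_id: str) -> float:
    return {"A-1": 100.00, "A-2": 250.50}[account_id]        # stand-in for legacy system call

def new_balance(account_id: str) -> float:
    return {"A-1": 100.00, "A-2": 250.49}[account_id]        # stand-in for new system call (deliberate mismatch)

def reconcile(account_ids, tolerance=0.0):
    mismatches = []
    for acc in account_ids:
        old, new = legacy_balance(acc), new_balance(acc)
        if abs(old - new) > tolerance:
            mismatches.append((acc, old, new))
    return mismatches

print(reconcile(["A-1", "A-2"]))   # reports the A-2 discrepancy
```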

