Challenges of Designing a Markup Language for Music

Author(s):  
Jacques Steyn

XML-based languages for music face constraints that do not apply to typical XML applications such as standard text documents or data sets. Music contains numerous simultaneous events across several dimensions, including time. The Document Model for a piece of music would thus look very different from that of serialised text documents. Most existing XML-based music markup languages mark up music typography, following the print traditions of music scores. A general music markup language should include much more than mere print. Some of the challenges of designing an XML-based markup language for music are considered. An SVG-based Music Symbol Design Grid is proposed to meet the challenge of music typography. An XML-based Music Symbol Language is used to design symbols on this grid. Resulting symbols are positioned in 3D Music Space, which is introduced to address the challenge of topography.
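The idea of designing a symbol on an SVG grid can be sketched in Python's standard library. The element names, grid units, and the note-head shape below are illustrative assumptions, not the Music Symbol Language defined in the article:

```python
import xml.etree.ElementTree as ET

def symbol_on_grid(cx, cy, radius=4, grid_step=10):
    """Place a hypothetical note-head symbol on an SVG design grid.

    Grid spacing and the chosen symbol are assumptions for
    illustration only, not the article's actual grid definition.
    """
    svg = ET.Element("svg", xmlns="http://www.w3.org/2000/svg",
                     width="100", height="100")
    # Draw grid lines every `grid_step` units.
    for i in range(0, 101, grid_step):
        ET.SubElement(svg, "line", x1=str(i), y1="0",
                      x2=str(i), y2="100", stroke="#ccc")
        ET.SubElement(svg, "line", x1="0", y1=str(i),
                      x2="100", y2=str(i), stroke="#ccc")
    # The symbol itself: a filled circle standing in for a note head.
    ET.SubElement(svg, "circle", cx=str(cx), cy=str(cy),
                  r=str(radius), fill="black")
    return ET.tostring(svg, encoding="unicode")

doc = symbol_on_grid(50, 30)
```

The resulting SVG fragment could then be positioned by a separate topography layer, which is the role the article assigns to 3D Music Space.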

Author(s):  
Jacques Steyn

Design goals determine the particular structure of a markup language, while the philosophy of what markup languages are about determines the framework within which that structure is developed. Most existing markup languages for music reflect low-level design strategies, compared to design that adheres to the high-level philosophy of markup languages. An approach to an XML-based music markup language from the perspective of SGML would differ from an approach from a markup language such as HTML. An ideal structure for a general markup language for music is proposed that follows a purist approach and results in a different kind of XML-based music markup language than most present music markup languages offer.


2021 ◽  
pp. 1-13
Author(s):  
Qingtian Zeng ◽  
Xishi Zhao ◽  
Xiaohui Hu ◽  
Hua Duan ◽  
Zhongying Zhao ◽  
...  

Word embeddings have been successfully applied in many natural language processing tasks due to their effectiveness. However, state-of-the-art algorithms for learning word representations from large amounts of text documents ignore emotional information, which is a significant research problem that must be addressed. To solve this problem, we propose an emotional word embedding (EWE) model for sentiment analysis in this paper. The method first applies pre-trained word vectors to represent document features using two different linear weighting methods. The resulting document vectors are then input to a classification model and used to train a neural-network-based text sentiment classifier. In this way, the emotional polarity of the text is propagated into the word vectors. Experimental results on three kinds of real-world data sets demonstrate that the proposed EWE model achieves superior performance on text sentiment prediction, text similarity calculation, and word emotional expression tasks compared to other state-of-the-art models.
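The step of linearly weighting pre-trained word vectors into a document vector can be sketched in plain Python. The abstract does not spell out the exact EWE weighting schemes, so the averaging below (with optional per-token weights such as TF-IDF) is one simple linear scheme assumed for illustration:

```python
def document_vector(tokens, word_vectors, weights=None):
    """Combine pre-trained word vectors into one document vector.

    `word_vectors` maps token -> list[float]; `weights` optionally
    maps token -> float (e.g. TF-IDF scores). Plain weighted
    averaging is an assumption here, not the published EWE formula.
    """
    dim = len(next(iter(word_vectors.values())))
    total = [0.0] * dim
    norm = 0.0
    for tok in tokens:
        if tok not in word_vectors:
            continue  # out-of-vocabulary tokens are skipped
        w = weights.get(tok, 0.0) if weights else 1.0
        for i, v in enumerate(word_vectors[tok]):
            total[i] += w * v
        norm += w
    return [t / norm for t in total] if norm else total

vecs = {"good": [1.0, 0.0], "movie": [0.0, 1.0]}
dv = document_vector(["good", "movie", "plot"], vecs)
# dv == [0.5, 0.5]
```

The document vector would then be fed to the neural classifier, whose gradients propagate emotional polarity back into the embeddings.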


Author(s):  
Mehdi Dastani

Rule markup languages will be the vehicle for using rules on the Web and in other distributed systems. They allow publishing, deploying, executing and communicating rules in a network. They may also play the role of a lingua franca for exchanging rules between different systems and tools. In a narrow sense, a rule markup language is a concrete (XML-based) rule syntax for the Web. In a broader sense, it should have an abstract syntax as a common basis for defining various concrete languages addressing different consumers. The main purposes of a rule markup language are to permit the publication, interchange and reuse of rules. This chapter introduces important requirements and design issues for general Web rule languages to fulfill these tasks. Characteristics of several important general standardization or standards-proposing efforts for (XML-based) rule markup languages including W3C RIF, RuleML, R2ML, SWRL as well as (human-readable) Semantic Web rule languages such as TRIPLE, N3, Jena, and Prova are discussed with respect to these identified issues.
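A concrete XML rule syntax of the kind described can be illustrated with Python's standard library. The `Implies`/`head`/`body` shape below is loosely modelled on RuleML; exact element names differ between RuleML versions, so treat this as a sketch rather than a conformant document:

```python
import xml.etree.ElementTree as ET

# A simplified rule: premium(customer) -> discount(customer).
# Element names approximate RuleML's style but are not verified
# against any specific RuleML schema version.
RULE = """
<Implies>
  <head><Atom><Rel>discount</Rel><Var>customer</Var></Atom></head>
  <body><Atom><Rel>premium</Rel><Var>customer</Var></Atom></body>
</Implies>
"""

def parse_rule(xml_text):
    """Extract the head relation and body relations from the rule."""
    root = ET.fromstring(xml_text)
    head = root.find("./head/Atom/Rel").text
    body = [rel.text for rel in root.findall("./body/Atom/Rel")]
    return head, body

head, body = parse_rule(RULE)
# head == "discount", body == ["premium"]
```

Because the rule is plain XML, any consumer (a rule engine, a translator to another rule language) can process it with a generic parser, which is the interchange benefit the chapter emphasises.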


Symmetry ◽  
2019 ◽  
Vol 11 (4) ◽  
pp. 545 ◽  
Author(s):  
Mohammed Attik ◽  
Malik Missen ◽  
Mickaël Coustaty ◽  
Gyu Choi ◽  
Fahd Alotaibi ◽  
...  

It is the age of the social web, where people express themselves by giving their opinions on various issues, from their personal lives to the world’s political issues. This process generates a lot of opinion data on the web that can be processed for valuable information, and semantic annotation of opinions therefore becomes an important task. Unfortunately, existing opinion annotation schemes have failed to meet annotation challenges and do not even adhere to the basic definition of opinion. Opinion holders, topical features and temporal expressions are major components of an opinion that remain ignored in existing annotation schemes. In this work, we propose OpinionML, a new markup language that aims to compensate for the issues that existing opinion markup languages fail to resolve. We present a detailed discussion of existing annotation schemes and their associated problems. We argue that OpinionML is more robust, flexible and easier to use for annotating opinion data. Its modular approach, implemented over a logical model, provides a flexible and easier model of annotation. OpinionML can be considered a step towards “information symmetry”: an effort towards consistent sentiment annotations across the research community. We perform experiments to prove the robustness of the proposed OpinionML, and the results demonstrate its capability of retrieving significant components of opinion segments. We also propose an OpinionML ontology in an effort to make OpinionML more interoperable. The proposed ontology is more complete than existing opinion ontologies such as Marl and Onyx, and a comprehensive comparison with these sentiment ontologies proves its worth.
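An annotation carrying the three components the abstract highlights (holder, topical feature, temporal expression) can be sketched as XML built with Python's standard library. The element and attribute names below are hypothetical, not the published OpinionML schema:

```python
import xml.etree.ElementTree as ET

def annotate_opinion(text, holder, topic, polarity, when):
    """Build an opinion annotation with holder, topic, and temporal
    components. All element names here are illustrative assumptions,
    not the actual OpinionML vocabulary."""
    op = ET.Element("opinion", polarity=polarity)
    ET.SubElement(op, "holder").text = holder
    ET.SubElement(op, "topic").text = topic
    ET.SubElement(op, "temporal").text = when
    ET.SubElement(op, "text").text = text
    return ET.tostring(op, encoding="unicode")

xml = annotate_opinion("The battery life is great", "reviewer_17",
                       "battery life", "positive", "2019-03-02")
```

Marking these components explicitly is what makes the annotations queryable, e.g. "all positive opinions about battery life expressed in March 2019".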


Author(s):  
Adrian Paschke ◽  
Harold Boley

Rule markup languages will be the vehicle for using rules on the Web and in other distributed systems. They allow publishing, deploying, executing and communicating rules in a network. They may also play the role of a lingua franca for exchanging rules between different systems and tools. In a narrow sense, a rule markup language is a concrete (XML-based) rule syntax for the Web. In a broader sense, it should have an abstract syntax as a common basis for defining various concrete languages addressing different consumers. The main purposes of a rule markup language are to permit the publication, interchange and reuse of rules. This chapter introduces important requirements and design issues for general Web rule languages to fulfill these tasks. Characteristics of several important general standardization or standards-proposing efforts for (XML-based) rule markup languages including W3C RIF, RuleML, R2ML, SWRL as well as (human-readable) Semantic Web rule languages such as TRIPLE, N3, Jena, and Prova are discussed with respect to these identified issues.


2016 ◽  
Vol 35 (1) ◽  
pp. 51 ◽  
Author(s):  
Juliet L. Hardesty

Metadata, particularly within the academic library setting, is often expressed in eXtensible Markup Language (XML) and managed with XML tools, technologies, and workflows. Managing a library’s metadata currently takes on a greater level of complexity as libraries are increasingly adopting the Resource Description Framework (RDF). Semantic Web initiatives are surfacing in the library context with experiments in publishing metadata as Linked Data sets and also with development efforts such as BIBFRAME and the Fedora 4 Digital Repository incorporating RDF. Use cases show that transitions into RDF are occurring in both XML standards and in libraries with metadata encoded in XML. It is vital to understand that transitioning from XML to RDF requires a shift in perspective from replicating structures in XML to defining meaningful relationships in RDF. Establishing coordination and communication among these efforts will help as more libraries move to use RDF, produce Linked Data, and approach the Semantic Web.
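The shift from replicating structure in XML to stating relationships in RDF can be sketched in Python: each child element of a record becomes a (subject, predicate, object) triple. The record and the predicate mapping below are hypothetical; a real project would map to an established vocabulary such as Dublin Core terms:

```python
import xml.etree.ElementTree as ET

RECORD = """
<record id="item42">
  <title>Survey Photographs</title>
  <creator>J. Smith</creator>
</record>
"""

# Hypothetical mapping from XML element names to RDF predicates.
PREDICATES = {"title": "dc:title", "creator": "dc:creator"}

def xml_to_triples(xml_text):
    """Restate an XML record as relationship triples: the structure
    (nesting) disappears; only subject-predicate-object facts remain."""
    root = ET.fromstring(xml_text)
    subject = root.get("id")
    return [(subject, PREDICATES[child.tag], child.text)
            for child in root if child.tag in PREDICATES]

triples = xml_to_triples(RECORD)
# [('item42', 'dc:title', 'Survey Photographs'),
#  ('item42', 'dc:creator', 'J. Smith')]
```

The triples can then link to other Linked Data resources by URI, which the nested XML form cannot do on its own.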


2020 ◽  
Vol 10 (11) ◽  
pp. 4009
Author(s):  
Asmaa M. Aubaid ◽  
Alok Mishra

With the growth of online information and the sudden expansion in the number of electronic documents provided on websites and in electronic libraries, categorizing text documents has become difficult. The purpose of this study is to classify documents using a rule-based approach, which this paper combines with the document-to-vector (doc2vec) embedding technique. An experiment was performed on two data sets, Reuters-21578 and 20 Newsgroups, to classify their top ten categories using a document-to-vector rule-based method (D2vecRule). This method provided good classification results according to the F-measure and implementation-time metrics. In conclusion, D2vecRule compared favourably with other algorithms such as JRip, OneR, and ZeroR applied to the same Reuters-21578 data set.
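The rule-based half of such a classifier can be sketched with simple keyword rules. The rules and category names below are toy assumptions (echoing Reuters-21578 topic names); the paper's D2vecRule additionally uses doc2vec features, which this sketch does not reproduce:

```python
# Hypothetical keyword rules: each rule is a keyword set and the
# category it votes for. Real rule sets would be learned or curated.
RULES = [
    ({"oil", "barrel", "crude"}, "crude"),
    ({"wheat", "grain", "harvest"}, "grain"),
]

def classify(tokens, default="other"):
    """Assign the category whose rule matches the most tokens;
    fall back to `default` when no rule fires."""
    toks = set(tokens)
    best_label, best_hits = default, 0
    for keywords, label in RULES:
        hits = len(toks & keywords)
        if hits > best_hits:
            best_label, best_hits = label, hits
    return best_label

label = classify("crude oil prices rose".split())
# label == "crude"
```

Rule-based decisions like this are cheap and interpretable, which is why the paper reports favourable implementation-time metrics alongside F-measure.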


2013 ◽  
Vol 837 ◽  
pp. 577-581 ◽  
Author(s):  
Krzysztof Foit

In certain situations the robot's effector must be moved strictly along a given path. Examples of such cases include painting, cutting, milling, welding, and glue application. A common feature of these operations is that, in most cases, the movement of the tool is realized on a plane. The advantage of a tool operated by a robot is that the work area can be placed anywhere in the manipulator's workspace and can be set at almost any angle relative to the ground. By specifying a local coordinate system, the operator can define the path of the tool. Referring to the author's earlier studies, this paper continues the discussion of the possibility of using markup languages in the field of robotics. It then describes a proposal for applying the SVG markup language to describe the objects forming the path of a tool. Just like XML, SVG code can be processed in many ways, giving the possibility of translation to a particular robot's programming language. The described method also has some disadvantages arising from the purposes of the SVG standard, such as the 2D nature of a path.
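The translation step can be sketched by converting a simple SVG path into planar waypoints. This handles only absolute `M`/`L` (move-to/line-to) commands from the SVG path grammar; curves, relative commands, and the mapping into the manipulator's local coordinate frame are deliberately out of scope for the sketch:

```python
import re

def svg_path_to_waypoints(d):
    """Parse an SVG path `d` attribute (absolute M/L commands only)
    into a list of (x, y) waypoints a robot controller could follow."""
    tokens = re.findall(r"[ML]|-?\d+(?:\.\d+)?", d)
    waypoints, i = [], 0
    while i < len(tokens):
        if tokens[i] in ("M", "L"):
            x, y = float(tokens[i + 1]), float(tokens[i + 2])
            waypoints.append((x, y))
            i += 3
        else:
            i += 1
    return waypoints

pts = svg_path_to_waypoints("M 0 0 L 10 0 L 10 5")
# pts == [(0.0, 0.0), (10.0, 0.0), (10.0, 5.0)]
```

A final stage would emit these waypoints in the syntax of the target robot's programming language, which is exactly the kind of translation the paper envisions.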


2015 ◽  
Vol 30 (2) ◽  
pp. 157-170 ◽  
Author(s):  
Rizwana Irfan ◽  
Christine K. King ◽  
Daniel Grages ◽  
Sam Ewen ◽  
Samee U. Khan ◽  
...  

Abstract. In this survey, we review different text mining techniques for discovering various textual patterns in social networking sites. Social network applications create opportunities to establish interaction among people, leading to mutual learning and the sharing of valuable knowledge through chat, comments, and discussion boards. Data in social networking websites are inherently unstructured and fuzzy in nature. In everyday conversations, people do not care about spelling or the accurate grammatical construction of a sentence, which may lead to different types of ambiguities: lexical, syntactic, and semantic. Analyzing and extracting information patterns from such data sets is therefore more complex. Several surveys have been conducted to analyze different methods of information extraction. Most surveys emphasize the application of text mining techniques to unstructured data sets residing in text documents, but do not specifically target data sets from social networking websites. This survey attempts to provide a thorough understanding of different text mining techniques as well as their application in social networking websites. It investigates recent advancements in the field of text analysis and covers two basic approaches of text mining, classification and clustering, that are widely used for the exploration of the unstructured text available on the Web.
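One of the simplest instances of the classification approach the survey covers is nearest-neighbour matching over bag-of-words vectors with cosine similarity. The categories and token sets below are made-up examples; real social-media text would first need the normalization steps the survey discusses:

```python
from collections import Counter
from math import sqrt

def cosine(a, b):
    """Cosine similarity between two bag-of-words Counters."""
    dot = sum(a[t] * b[t] for t in a)
    na = sqrt(sum(v * v for v in a.values()))
    nb = sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def nearest(doc, labelled):
    """1-nearest-neighbour text classification: assign the label
    whose reference bag-of-words is most similar to the document."""
    bag = Counter(doc.lower().split())
    return max(labelled, key=lambda lab: cosine(bag, labelled[lab]))

# Hypothetical reference profiles per category.
labelled = {
    "sports": Counter("match goal team win".split()),
    "tech": Counter("software code bug release".split()),
}
label = nearest("the team scored a goal", labelled)
# label == "sports"
```

Clustering, the survey's other basic approach, uses the same vector representation but groups documents without reference labels.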


2019 ◽  
Vol 6 (4) ◽  
pp. 430
Author(s):  
Admaja Dwi Herlambang ◽  
Satrio Hadi Wijoyo

<p><strong>Abstract</strong></p><p>The availability of learning resources for productive subjects is one of the essential components of learning activities in Vocational High Schools, especially in the Information and Communication Technology competence field. Online media are learning resources in the form of electronic media that students and teachers can use through the internet. One form of online media is the web page in .html (Hypertext Markup Language) format, of which a very large number exist as text documents. These learning resources therefore need to be grouped according to the essential criteria or characteristics of each productive subject in Vocational High Schools. The grouping process uses the Naive Bayes algorithm, because it can be applied to text documents and uses Bayes' theorem under the assumption that all attributes are mutually independent. The purpose of this study was to describe the classification results and evaluate the classification quality of text-based learning resources using the Naive Bayes algorithm. The research stages were data set collection, pre-processing with text mining, TF-IDF weighting, Naive Bayes classification, and accuracy evaluation. The text classification produced nine productive subject groups, and testing yielded a highest accuracy of 81.48% and a lowest accuracy of 79.63%.</p>
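The Naive Bayes classification stage can be sketched as a multinomial model with Laplace smoothing. The training documents and category names below are toy assumptions, and the sketch omits the TF-IDF weighting stage of the study's pipeline:

```python
from collections import Counter, defaultdict
from math import log

def train_nb(docs):
    """Train multinomial Naive Bayes: count classes and per-class
    word frequencies from (tokens, label) pairs."""
    class_counts = Counter()
    word_counts = defaultdict(Counter)
    vocab = set()
    for tokens, label in docs:
        class_counts[label] += 1
        word_counts[label].update(tokens)
        vocab.update(tokens)
    return class_counts, word_counts, vocab

def predict(model, tokens):
    """Pick the class maximizing log prior + smoothed log likelihoods,
    treating attributes (words) as mutually independent."""
    class_counts, word_counts, vocab = model
    total_docs = sum(class_counts.values())
    best, best_lp = None, float("-inf")
    for label in class_counts:
        lp = log(class_counts[label] / total_docs)
        denom = sum(word_counts[label].values()) + len(vocab)
        for tok in tokens:
            lp += log((word_counts[label][tok] + 1) / denom)  # Laplace
        if lp > best_lp:
            best, best_lp = label, lp
    return best

# Hypothetical two-category training set of page tokens.
docs = [("network router switch".split(), "networking"),
        ("html css web page".split(), "web")]
model = train_nb(docs)
label = predict(model, ["router", "network"])
# label == "networking"
```

Accuracy evaluation would then compare such predictions against held-out labelled pages, as in the study's 81.48%/79.63% results.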

