extensible markup
Recently Published Documents


TOTAL DOCUMENTS: 495 (five years: 63)
H-INDEX: 15 (five years: 3)

Author(s):  
Mohd Kamir Yusof ◽  
Wan Mohd Amir Fazamin Wan Hamzah ◽  
Nur Shuhada Md Rusli

The coronavirus COVID-19 is affecting 196 countries and territories around the world, and the number of deaths keeps increasing each day. According to the World Health Organization (WHO), the number of COVID-19 infections is rising day by day and has now reached 570,000. The WHO prefers to conduct COVID-19 screening tests via online systems. A suitable approach, especially for string matching based on symptoms, is required to produce fast and accurate results during the retrieval process. Four recent approaches to string matching have been implemented: character-based, hashing, suffix-automaton and hybrid algorithms. Meanwhile, extensible markup language (XML), JavaScript object notation (JSON), asynchronous JavaScript and XML (AJAX) and jQuery technologies are widely used for data transmission, data storage and data retrieval. This paper proposes a combination of the hybrid algorithm with JSON and jQuery in order to produce fast and accurate results during the COVID-19 screening process. Experiments compared performance in terms of execution time and memory usage on five different collections of datasets. Based on the experiments, the results show that the hybrid combination performs better than JSON and jQuery alone. Online COVID-19 screening can hopefully reduce the number of infections and deaths caused by the virus.
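As a rough illustration of symptom-based matching over JSON (a minimal sketch, not the authors' hybrid implementation; the records and the naive substring match below are hypothetical):

```python
import json

# Hypothetical JSON symptom records (not the authors' dataset).
records = json.loads("""
[
  {"id": 1, "symptoms": "fever dry cough fatigue"},
  {"id": 2, "symptoms": "sore throat headache"},
  {"id": 3, "symptoms": "fever loss of smell"}
]
""")

def match(query: str, text: str) -> bool:
    """Naive substring match; the paper's hybrid algorithm combines
    character-based and hashing approaches instead."""
    return query in text

hits = [r["id"] for r in records if match("fever", r["symptoms"])]
print(hits)  # -> [1, 3]
```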


2022 ◽  
Vol 355 ◽  
pp. 02018
Author(s):  
Menglei Zheng ◽  
Ling Tian

With the rapid increase of multi-source, heterogeneous, dynamic data on mechanical products, digital twin technology is considered an important means of realizing the deep integration of product data and intelligent manufacturing. As a digital archive of the physical entity over its entire life cycle, the mechanical product digital twin model is cross-phase and multi-domain. Therefore, safe and stable collaborative modeling has become a basic technical problem that urgently needs to be solved. In this paper, we propose a blockchain-based collaborative modeling method for the digital twin ontology model of mechanical products. First, an authorization network is constructed among stakeholders. Modeling processes of the digital twin are then mapped to ontology operations and formatted in extensible markup language. Finally, consensus is reached using practical Byzantine fault tolerance. A material modification process for a helicopter damper bearing is used as a verification example. The proposed method enables all participants to accurately obtain the latest state of the digital twin model, and offers the advantages of tamper-proofing, traceability, and decentralization.
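The mapping of a modeling operation to XML might look roughly like the following sketch (element and attribute names are illustrative assumptions, not taken from the paper):

```python
import xml.etree.ElementTree as ET

# Hypothetical ontology operation describing a material change.
op = ET.Element("OntologyOperation", attrib={"type": "ModifyDataProperty"})
ET.SubElement(op, "Individual").text = "DamperBearing_01"
ET.SubElement(op, "Property").text = "hasMaterial"
ET.SubElement(op, "NewValue").text = "Titanium alloy"

# In the described workflow, a payload like this would be submitted to the
# blockchain network and agreed upon via PBFT consensus.
payload = ET.tostring(op, encoding="unicode")
print(payload)
```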


PLoS ONE ◽  
2021 ◽  
Vol 16 (12) ◽  
pp. e0262067
Author(s):  
Adi A. AlQudah ◽  
Mostafa Al-Emran ◽  
Khaled Shaalan

Integration between information systems is critical, especially in the healthcare domain, since interoperability requirements are related to patients’ data confidentiality, safety, and satisfaction. The goal of this study is to propose a solution based on the integration between a queue management solution (QMS) and electronic medical records (EMR), using Health Level Seven (HL7) protocols and Extensible Markup Language (XML). The proposed solution facilitates patient self-check-in within a healthcare organization in the UAE. It aims to minimize waiting times in the outpatient department through early identification of patients who hold Emirates national ID cards, whether Emiratis or expatriates. The integration components, solution design, and custom-designed XML and HL7 messages are described in this paper. In addition, the study includes a simulation experiment across control and intervention weeks with 517 valid appointments. The goal of the experiment was to evaluate the patient’s total journey and each related clinical stage by comparing the “routine-based identification” process with the “patient self-check-in” process for booked appointments. As a key finding, the proposed solution is efficient and could reduce the patient’s journey time by more than 14 minutes and the time to identify patients by 10 minutes. There was also a significant drop in the waiting time to triage and the time to complete the triage process. In conclusion, the proposed solution is considered innovative and can add positive value to the patient’s whole journey.
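As a rough illustration of the message exchange (not the paper's actual HL7/XML design, whose fields are not given in the abstract), the Python sketch below builds a hypothetical patient-identification XML payload; all element names and values are placeholders:

```python
import xml.etree.ElementTree as ET

# Hypothetical self-check-in message; the real solution uses custom-designed
# HL7 and XML messages whose structure is not specified here.
msg = ET.Element("PatientIdentification")
ET.SubElement(msg, "EmiratesID").text = "784-XXXX-XXXXXXX-X"   # placeholder
ET.SubElement(msg, "AppointmentRef").text = "APPT-0001"        # placeholder
ET.SubElement(msg, "CheckInTime").text = "2021-06-01T08:30:00"

print(ET.tostring(msg, encoding="unicode"))
```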


2021 ◽  
Vol 5 (4) ◽  
pp. 409
Author(s):  
Lee Ruo Yee ◽  
Hazalila Kamaludin ◽  
Noor Zuraidin Mohd Safar ◽  
Norfaradilla Wahid ◽  
Noryusliza Abdullah ◽  
...  

Intelligence Eye is an Android-based mobile application developed to help blind and visually impaired users detect light and objects. Intelligence Eye uses region-based convolutional neural networks (R-CNN) to recognize objects in the object recognition module, and vibration feedback is provided according to the light value in the light detection module. Voice guidance is provided in the application to guide users and announce the results of object recognition. TensorFlow Lite is used to train the neural network model for object recognition, in conjunction with extensible markup language (XML) and Java in Android Studio. For future work, the Intelligence Eye application could be enhanced by increasing the object detection capacity of the object recognition module, adding menu settings for vibration intensity in the light detection module, and supporting multiple languages for the voice guidance.
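A minimal sketch of the light-to-vibration mapping idea (thresholds and durations are hypothetical; the abstract does not specify the app's actual mapping):

```python
# Hypothetical mapping from an ambient-light reading to a vibration duration:
# stronger feedback in darker scenes, none when the scene is bright.
def vibration_ms(lux: float) -> int:
    if lux < 10:      # very dark
        return 800
    if lux < 100:     # dim indoor light
        return 400
    return 0          # bright enough, no vibration

print(vibration_ms(5), vibration_ms(50), vibration_ms(500))  # 800 400 0
```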


2021 ◽  
Vol 10 (6) ◽  
pp. 3256-3264
Author(s):  
Su-Cheng Haw ◽  
Emyliana Song

eXtensible markup language (XML) has emerged internationally as the format for data representation over the web. Yet most organizations still use relational databases as their database solutions, so it is crucial to provide seamless integration via effective transformation between these database infrastructures. In this paper, we propose XML-REG to bridge these two technologies based on node-based and path-based approaches. The node-based approach annotates each node position uniquely, while the path-based approach provides summarized path information to join the nodes. In addition, a new range labelling scheme is proposed that annotates nodes uniquely while ensuring the structural relationships between nodes are maintained. If a new node is added to the document, re-labelling is not required, as a new label is assigned to the node by the proposed labelling scheme. Experimental evaluations indicated that XML-REG outperformed XMap, XRecursive, XAncestor and Mini-XML in terms of storing time, query retrieval time and scalability. This research produces a core framework for XML-to-relational-database (RDB) mapping, which could be adopted in various industries.
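A generic range-labelling pass over an XML tree can be sketched as follows (this illustrates range labelling in general, not XML-REG's exact scheme):

```python
import xml.etree.ElementTree as ET

def range_label(root):
    """Assign (start, end, level) labels by a depth-first traversal."""
    labels, counter = {}, [0]
    def visit(node, level):
        counter[0] += 1
        start = counter[0]
        for child in node:
            visit(child, level + 1)
        counter[0] += 1
        labels[node] = (start, counter[0], level)
    visit(root, 0)
    return labels

doc = ET.fromstring("<a><b><c/></b><d/></a>")
for node, (s, e, lvl) in range_label(doc).items():
    print(node.tag, s, e, lvl)

# Structural test: x is an ancestor of y iff x.start < y.start and y.end < x.end
```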


Author(s):  
Velin Kralev ◽  
Radoslava Kraleva ◽  
Petia Koprinkova-Hristova

Data modeling and data processing are important activities in any scientific research. This research focuses on the modeling and processing of data generated by a saccadometer. The approach used is based on the relational data model, but the processing and storage of the data is done with client datasets. The experiments were performed with 26 randomly selected files from a total of 264 experimental sessions. The data from each experimental session was stored in three different formats: text, binary, and extensible markup language (XML). The results showed that the text and binary formats were the most compact. Several data-processing actions were analyzed. Based on the results obtained, the two fastest actions were loading data from a binary file and storing data into a binary file. In contrast, the two slowest actions were storing the data in XML format and loading the data from a text file. One of the most time-consuming operations turned out to be the conversion of data from text format to binary format; moreover, the time required for this action is not proportional to the number of records processed.
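A minimal sketch of the kind of format comparison described (synthetic records and illustrative field names, not the saccadometer data):

```python
import pickle, time, xml.etree.ElementTree as ET

# Synthetic "session" records; field names are illustrative only.
records = [{"t": i, "x": i * 0.1, "y": i * 0.2} for i in range(10000)]

def timed(fn):
    t0 = time.perf_counter(); fn(); return time.perf_counter() - t0

def save_text():
    with open("s.txt", "w") as f:
        for r in records:
            f.write(f"{r['t']} {r['x']} {r['y']}\n")

def save_binary():
    with open("s.bin", "wb") as f:
        pickle.dump(records, f)

def save_xml():
    root = ET.Element("session")
    for r in records:
        ET.SubElement(root, "sample", {k: str(v) for k, v in r.items()})
    ET.ElementTree(root).write("s.xml")

for name, fn in [("text", save_text), ("binary", save_binary), ("xml", save_xml)]:
    print(name, f"{timed(fn):.4f} s")
```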


F1000Research ◽  
2021 ◽  
Vol 10 ◽  
pp. 907
Author(s):  
Su-Cheng Haw ◽  
Aisyah Amin ◽  
Chee-Onn Wong ◽  
Samini Subramaniam

Background: XML is the standard for the exchange of data over the World Wide Web, so it is important that an eXtensible Markup Language (XML) database supports not only efficient query processing but also frequent data update operations over dynamically changing Web content. Most existing XML annotation is based on a labeling scheme that identifies the hierarchical position of each XML node. This computation is costly because any update causes the whole XML tree to be re-labelled, an impact that is most visible on large datasets. Therefore, a robust labeling scheme that avoids re-labeling is crucial. Method: Here, we present ORD-GAP (named after Order Gap), a robust and persistent XML labeling scheme that supports dynamic updates. ORD-GAP assigns unique identifiers with gaps in between XML nodes, from which the level, parent-child (P-C), ancestor-descendant (A-D) and sibling relationships can easily be identified. ORD-GAP adopts the ORDPath labeling scheme for future insertions. Results: We demonstrate that ORD-GAP is robust under dynamic updates and have implemented it in three use cases: (i) left-most, (ii) in-between and (iii) right-most insertion. Experimental evaluations on the DBLP dataset showed that ORD-GAP outperformed existing approaches such as ORDPath and ME Labeling in terms of database storage size, data loading time and query retrieval time. On average, ORD-GAP had the best storing and query retrieval times. Conclusion: The main contributions of this paper are: (i) a robust labeling scheme, ORD-GAP, that leaves a gap between node labels to support future insertions, and (ii) an efficient mapping scheme built upon ORD-GAP to transform XML into a relational database (RDB) effectively.
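The gap idea can be sketched as follows (the gap size and midpoint insertion are illustrative assumptions; ORD-GAP's actual label encoding and its ORDPath fallback are not reproduced here):

```python
GAP = 100  # initial spacing between sibling labels (illustrative value)

def initial_labels(n):
    """Label n siblings with gaps so later inserts need no relabelling."""
    return [(i + 1) * GAP for i in range(n)]

def insert_between(left, right):
    """Label for a node inserted between two existing siblings.
    A real scheme handles exhausted gaps (e.g. via ORDPath-style careting);
    this sketch assumes the midpoint is still free."""
    return (left + right) // 2

labels = initial_labels(3)                  # [100, 200, 300]
new = insert_between(labels[0], labels[1])  # 150, no existing label changes
print(labels, new)
```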


Linha D Água ◽  
2021 ◽  
Vol 34 (2) ◽  
pp. 47-64
Author(s):  
Mika Hamalainen ◽  
Jack Rueter ◽  
Khalid Alnajjar

We present our infrastructure for the documentation of Uralic languages, which consists of tools for writing dictionaries in such a way that the entries are structured in XML (Extensible Markup Language) format. From the XML dictionaries we can generate code for morphological analyzers, which are useful for all kinds of NLP activities. In this article we show the advantages of digital, machine-readable documentation. We also describe the system in the context of endangered Uralic languages.
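A minimal sketch of reading an XML dictionary entry and emitting an analyser-friendly line (the entry schema and output format here are illustrative assumptions, not the project's actual formats):

```python
import xml.etree.ElementTree as ET

# Hypothetical dictionary entry: lemma with part of speech plus an English gloss.
entry_xml = """
<e>
  <lg><l pos="N">kota</l></lg>
  <mg><tg xml:lang="eng"><t>tent, hut</t></tg></mg>
</e>
"""

entry = ET.fromstring(entry_xml)
lemma = entry.find("./lg/l")
gloss = entry.find("./mg/tg/t")

# Emit a lexc-style stem line that a morphological analyser could consume.
print(f"{lemma.text}+{lemma.get('pos')} ; ! {gloss.text}")
```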


Author(s):  
Sweta Sharma

In the next wave of technological resurgence, humans may turn to self-reflection, which can lead to effortless conversation and to finding out whether an event will come to fruition. Training a system to make accurate predictions with the help of machine learning and statistical models can lead to an intelligent, personalized conversational system. The chatbot industry is ever-growing, and after the COVID-19 pandemic and the rigorous lockdowns around the world, people have realized the importance of human interaction in their lives. We are developing this model to create a closer relationship between the system and humans, and many open-source platforms are available for this purpose. Artificial Intelligence Markup Language (AIML), which is derived from Extensible Markup Language (XML), is used to build the conversational agent. The success of this project will help the model observe and understand human emotions, which will ultimately help it form a more personalized relationship and delineate the future course of events.
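A minimal sketch of AIML-style pattern matching (the category/pattern/template elements are standard AIML; the toy matcher below is not a full AIML interpreter):

```python
import xml.etree.ElementTree as ET

# A single AIML-style category: a user pattern and the bot's response template.
aiml = """
<aiml>
  <category>
    <pattern>HOW ARE YOU</pattern>
    <template>I am doing well, thank you.</template>
  </category>
</aiml>
"""

categories = {
    c.findtext("pattern"): c.findtext("template")
    for c in ET.fromstring(aiml).iter("category")
}

user_input = "how are you"
print(categories.get(user_input.upper(), "I do not understand yet."))
```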


2021 ◽  
Vol 20 (2) ◽  
pp. 12-15
Author(s):  
Alhadi A. Klaib

Extensible Markup Language (XML) has become a significant technology for transferring data across the Internet. XML labelling schemes are an essential technique for handling XML data effectively: labelling XML data is performed by assigning labels to all nodes in the XML document. The CLS labelling scheme is a hybrid labelling scheme developed to address some limitations of indexing XML data. Many XML datasets and benchmarks, some drawn from real life and others artificially generated, are available for testing XML labelling schemes. This paper discusses these datasets and benchmarks and their specifications in order to determine the most appropriate one for testing the CLS labelling scheme. The research found that the XMark benchmark is the most appropriate choice for testing the performance of the CLS labelling scheme.

