Online Knowledge-Based Model for Big Data Topic Extraction

2016 ◽  
Vol 2016 ◽  
pp. 1-10 ◽  
Author(s):  
Muhammad Taimoor Khan ◽  
Mehr Durrani ◽  
Shehzad Khalid ◽  
Furqan Aziz

Lifelong machine learning (LML) models learn from experience, maintaining a knowledge base without user intervention. Unlike traditional single-domain models, they can easily scale up to explore big data. Existing LML models have high data dependency, consume more resources, and do not support streaming data. This paper proposes an online LML model (OAMC) that supports streaming data with reduced data dependency. By re-engineering the knowledge base and introducing new knowledge features, the learning pattern of the model is improved for data arriving in pieces. OAMC improves accuracy, measured as topic coherence, by 7% on streaming data while reducing the processing cost by half.
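
A minimal sketch of the idea behind an online, knowledge-based topic model for streaming text: documents arrive in batches, co-occurrence knowledge accumulated from earlier batches is kept in a knowledge base, and that knowledge is available to bias processing of later batches without revisiting earlier data. The class and method names below are hypothetical illustrations, not the published OAMC implementation.

    from collections import Counter, defaultdict
    from itertools import combinations

    class OnlineKnowledgeBase:
        """Hypothetical sketch: accumulate word co-occurrence knowledge across batches."""

        def __init__(self):
            self.cooccurrence = defaultdict(Counter)  # word -> Counter of co-occurring words

        def update(self, documents):
            """Fold a new batch of tokenized documents into the knowledge base."""
            for doc in documents:
                for w1, w2 in combinations(set(doc), 2):
                    self.cooccurrence[w1][w2] += 1
                    self.cooccurrence[w2][w1] += 1

        def related_words(self, word, top_n=5):
            """Prior knowledge that can bias topic assignment in later batches."""
            return [w for w, _ in self.cooccurrence[word].most_common(top_n)]

    # Usage: process a stream batch by batch, never revisiting earlier data.
    kb = OnlineKnowledgeBase()
    for batch in [[["price", "battery", "screen"]], [["battery", "life", "charge"]]]:
        kb.update(batch)
    print(kb.related_words("battery"))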

Big Data ◽  
2016 ◽  
pp. 711-733 ◽  
Author(s):  
Jafreezal Jaafar ◽  
Kamaluddeen Usman Danyaro ◽  
M. S. Liew

This chapter discusses the veracity of data. The veracity issue is the challenge of imprecision in big data due to the influx of data from diverse sources. To overcome this problem, the chapter proposes a fuzzy knowledge-based framework that enhances the accessibility of Web data and resolves inconsistency in the data model. D2RQ, Protégé, and fuzzy Web Ontology Language (OWL) applications were used for configuration and performance evaluation. The chapter also provides a completeness fuzzy knowledge-based algorithm, which was used to determine the robustness and adaptability of the knowledge base. The results show that D2RQ is more scalable in the performance comparison. Finally, conclusions and future lines of research are provided.
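
To illustrate the fuzzy side of such a framework, the sketch below grades an imprecise data value with a triangular membership function instead of a crisp true/false test; the function shape and thresholds are illustrative assumptions, not the chapter's actual implementation.

    def triangular_membership(x, low, peak, high):
        """Degree (0..1) to which value x belongs to a fuzzy set shaped (low, peak, high)."""
        if x <= low or x >= high:
            return 0.0
        if x == peak:
            return 1.0
        if x < peak:
            return (x - low) / (peak - low)
        return (high - x) / (high - peak)

    # Example: how strongly a reported measurement of 7.2 belongs to the fuzzy set "about 8".
    print(triangular_membership(7.2, low=5.0, peak=8.0, high=11.0))  # ~0.73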


2021 ◽  
Author(s):  
Marciane Mueller ◽  
Rejane Frozza ◽  
Liane Mählmann Kipper ◽  
Ana Carolina Kessler

BACKGROUND This article presents the modeling and development of a knowledge-based system supported by a virtual conversational agent called Dóris. Using natural language processing resources, Dóris collects the clinical data of patients under care in urgent and emergency hospital settings. OBJECTIVE The main objective is to validate the use of virtual conversational agents to properly and accurately collect the data necessary to run the evaluation flowcharts used to classify the degree of urgency of patients and determine the priority of medical care. METHODS The agent's knowledge base was modeled using the rules provided in the evaluation flowcharts of the Manchester Triage System. The agent also supports simple, objective, and complete communication through dialogues that assess signs and symptoms according to the criteria established by a standardized, validated, and internationally recognized system. RESULTS In addition to verifying the applicability of artificial intelligence techniques in a complex health-care domain, the work presents a tool that helps not only to improve organizational processes but also to improve human relationships, bringing professionals and patients closer together. The system's knowledge base was modeled on the IBM Watson platform. CONCLUSIONS The results obtained from simulations carried out by the human specialist allowed us to verify that a knowledge-based system supported by a virtual conversational agent is feasible for the domain of risk classification and determination of the priority of medical care for patients in urgent and emergency hospital care.
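
As an illustration of how flowchart rules can drive such an agent, the sketch below maps answers collected in dialogue to a triage priority; the discriminators and priority labels are a simplified, hypothetical subset and do not reproduce the actual Manchester Triage System flowcharts.

    def classify_priority(answers):
        """Return a triage priority from dialogue answers (hypothetical, simplified rules)."""
        # Rules are evaluated from most to least urgent, mirroring a flowchart walk.
        if answers.get("airway_compromised") or answers.get("shock"):
            return "RED (immediate)"
        if answers.get("severe_pain") or answers.get("altered_consciousness"):
            return "ORANGE (very urgent)"
        if answers.get("moderate_pain") or answers.get("persistent_vomiting"):
            return "YELLOW (urgent)"
        if answers.get("recent_mild_symptoms"):
            return "GREEN (standard)"
        return "BLUE (non-urgent)"

    # Example: answers gathered by the conversational agent during the triage dialogue.
    print(classify_priority({"severe_pain": True}))  # ORANGE (very urgent)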


Author(s):  
Sarah Bouraga ◽  
Ivan Jureta ◽  
Stéphane Faulkner ◽  
Caroline Herssens

Knowledge-Based Recommendation (or Recommender) Systems (KBRS) provide the user with advice about a decision to make or an action to take. KBRS rely on knowledge provided by human experts, encoded in the system and applied to input data, in order to generate recommendations. This survey overviews the main ideas characterizing a KBRS. Using a classification framework, it covers KBRS components, the user problems for which recommendations are given, the knowledge content of the system, and the degree of automation in producing recommendations.
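
A minimal sketch of the knowledge-based recommendation loop described above: expert knowledge is encoded as condition/recommendation rules and applied to the user's input facts. The rules and domain are invented for illustration.

    # Expert knowledge encoded as (condition, recommendation) pairs.
    RULES = [
        (lambda u: u["budget"] < 500 and u["use"] == "travel", "Recommend a lightweight ultrabook"),
        (lambda u: u["use"] == "gaming", "Recommend a laptop with a dedicated GPU"),
        (lambda u: u["budget"] >= 1500, "Recommend a high-end workstation"),
    ]

    def recommend(user_profile):
        """Apply every expert rule to the user's input data and collect matching advice."""
        return [advice for condition, advice in RULES if condition(user_profile)]

    print(recommend({"budget": 450, "use": "travel"}))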


2021 ◽  
Vol 11 (22) ◽  
pp. 11061
Author(s):  
Juan Francisco Mendoza-Moreno ◽  
Luz Santamaria-Granados ◽  
Anabel Fraga Vázquez ◽  
Gustavo Ramirez-Gonzalez

Tourist traceability is the analysis of the set of actions, procedures, and technical measures that allow us to identify and record the space–time causality of a tourist's journey, from the beginning to the end of the tourist product chain. In addition, the traceability of tourists has implications for infrastructure, transport, products, marketing, the commercial viability of the industry, and the management of the destination's social, environmental, and cultural impact. To this end, a tourist traceability system requires a knowledge base for processing elements such as functions, objects, events, and the logical connectors among them. A knowledge base provides information on the preparation, planning, and implementation or operation stages. In this regard, unifying tourism terminology in a traceability system is a challenge because it requires a central repository that promotes standards for tourists and suppliers and forms a formal body of knowledge representation. Some studies address the construction of ontologies in tourism, but none focus on tourist traceability systems. Therefore, we propose OntoTouTra, an ontology that uses formal specifications to represent knowledge of tourist traceability systems. This paper outlines the development of the OntoTouTra ontology and how we gathered and processed data from ubiquitous computing using big data analysis techniques.
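
The sketch below shows how traceability facts (a tourist visiting a place at a given time) might be expressed as RDF triples with rdflib; the namespace and property names are assumptions for illustration and are not taken from the published OntoTouTra ontology.

    from rdflib import Graph, Namespace, Literal, RDF

    # Hypothetical namespace; the real OntoTouTra IRIs may differ.
    OTT = Namespace("http://example.org/ontotoutra#")

    g = Graph()
    g.bind("ott", OTT)

    # A single traceability event: tourist1 visited place1 at a given time.
    g.add((OTT.tourist1, RDF.type, OTT.Tourist))
    g.add((OTT.place1, RDF.type, OTT.Place))
    g.add((OTT.visit1, RDF.type, OTT.VisitEvent))
    g.add((OTT.visit1, OTT.hasTourist, OTT.tourist1))
    g.add((OTT.visit1, OTT.hasPlace, OTT.place1))
    g.add((OTT.visit1, OTT.atTime, Literal("2021-06-01T10:30:00")))

    print(g.serialize(format="turtle"))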


2018 ◽  
Vol 7 (3.33) ◽  
pp. 168
Author(s):  
Yonglak SHON ◽  
Jaeyoung PARK ◽  
Jangmook KANG ◽  
Sangwon LEE

LOD data sets consist of RDF triples based on ontologies, which specify existing facts, and are linked to previously published knowledge according to linked data principles. These structured LOD clouds form a large global data network, which provides users with a more accurate foundation for retrieving the desired information. However, when the same real-world object is identified differently across several LOD data sets, it is difficult to recognize that the identifiers refer to the same entity. This is because objects with different URIs in the LOD data sets are assumed to be different, and their similarities must be closely examined in order to judge them as identical. The aim of this study is for the proposed model, RILE, to evaluate similarity by comparing the object values of existing specified predicates. Experiments with the model show an improvement in the confidence level of the links obtained by extracting the link value.
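
A rough sketch of the idea of judging two differently named resources as the same entity by comparing the object values of the predicates they share; the similarity measure below (Jaccard overlap averaged over shared predicates) is an illustrative assumption, not the RILE formula.

    def object_similarity(res_a, res_b):
        """Compare two resources (predicate -> set of object values) over their shared predicates."""
        shared = set(res_a) & set(res_b)
        if not shared:
            return 0.0
        scores = []
        for predicate in shared:
            a, b = res_a[predicate], res_b[predicate]
            scores.append(len(a & b) / len(a | b))  # Jaccard overlap of object values
        return sum(scores) / len(scores)

    # Two URIs from different LOD data sets describing what may be the same person.
    resource1 = {"name": {"Ada Lovelace"}, "birthYear": {"1815"}, "field": {"mathematics"}}
    resource2 = {"name": {"Ada Lovelace"}, "birthYear": {"1815"}, "nationality": {"British"}}
    print(object_similarity(resource1, resource2))  # 1.0 over the two shared predicates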


2021 ◽  
Vol ahead-of-print (ahead-of-print) ◽  
Author(s):  
Xuhui Li ◽  
Liuyan Liu ◽  
Xiaoguang Wang ◽  
Yiwen Li ◽  
Qingfeng Wu ◽  
...  

Purpose The purpose of this paper is to propose a graph-based representation approach for evolutionary knowledge in the big data setting, aiming to gradually build conceptual models from data. Design/methodology/approach A semantic data model named meaning graph (MGraph) is introduced to represent knowledge concepts and organize knowledge instances in a graph-based knowledge base. MGraph uses directed acyclic graph–like types as concept schemas to specify the structural features of knowledge with intention variety. It also proposes several specialization mechanisms to enable knowledge evolution. Based on MGraph, a paradigm is introduced to model evolutionary concept schemas, and a scenario on video semantics modeling is presented in detail. Findings MGraph fits the evolutionary character of knowledge derived from big data and lays the foundation for building a knowledge base in the big data setting. Originality/value The representation approach based on MGraph can effectively and coherently address the major issues of evolutionary knowledge from big data. The new approach is promising for building a big knowledge base.
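
A minimal sketch of the flavour of such concept schemas: a schema is a small directed graph of named nodes, and a specialization mechanism derives a richer schema from an existing one by adding nodes and edges, so the schema can evolve as new knowledge arrives. The data structures are illustrative assumptions, not the paper's formal definition of MGraph.

    from dataclasses import dataclass, field

    @dataclass
    class ConceptSchema:
        """A concept schema as a directed graph: named nodes and directed edges between them."""
        name: str
        nodes: set = field(default_factory=set)
        edges: set = field(default_factory=set)  # (source, target) pairs

        def specialize(self, name, extra_nodes=(), extra_edges=()):
            """Derive a more specific schema by extending nodes and edges (knowledge evolution)."""
            return ConceptSchema(
                name=name,
                nodes=self.nodes | set(extra_nodes),
                edges=self.edges | set(extra_edges),
            )

    # A generic "VideoScene" schema evolves into a more specific "InterviewScene" schema.
    video_scene = ConceptSchema("VideoScene", {"Scene", "Person"}, {("Scene", "Person")})
    interview = video_scene.specialize(
        "InterviewScene",
        extra_nodes={"Question"},
        extra_edges={("Person", "Question")},
    )
    print(interview.nodes, interview.edges)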


1990 ◽  
Vol 80 (6B) ◽  
pp. 1833-1851 ◽  
Author(s):  
Thomas C. Bache ◽  
Steven R. Bratt ◽  
James Wang ◽  
Robert M. Fung ◽  
Cris Kobryn ◽  
...  

Abstract The Intelligent Monitoring System (IMS) is a computer system for processing data from seismic arrays and simpler stations to detect, locate, and identify seismic events. The first operational version processes data from two high-frequency arrays (NORESS and ARCESS) in Norway. The IMS computers and functions are distributed between the NORSAR Data Analysis Center (NDAC) near Oslo and the Center for Seismic Studies (Center) in Arlington, Virginia. The IMS modules at NDAC automatically retrieve data from a disk buffer, detect signals, compute signal attributes (amplitude, slowness, azimuth, polarization, etc.), and store them in a commercial relational database management system (DBMS). IMS makes scheduled (e.g., hourly) transfers of the data to a separate DBMS at the Center. Arrival of new data automatically initiates a “knowledge-based system (KBS)” that interprets these data to locate and identify (earthquake, mine blast, etc.) seismic events. This KBS uses general and area-specific seismological knowledge represented in rules and procedures. For each event, unprocessed data segments (e.g., 7 min for regional events) are retrieved from NDAC for subsequent display and analyst review. The interactive analysis modules include integrated waveform and map display/manipulation tools for efficient analyst validation or correction of the solutions produced by the automated system. Another KBS compares the analyst and automatic solutions to mark overruled elements of the knowledge base. Performance analysis statistics guide subsequent changes to the knowledge base so it improves with experience. The IMS is implemented on networked Sun workstations, with a 56 kbps satellite link bridging the NDAC and Center computer networks. The software architecture is modular and distributed, with processes communicating by messages and sharing data via the DBMS. The IMS processing requirements are easily met with major processes (i.e., signal processing, KBS, and DBMS) on separate Sun 4/2xx workstations. This architecture facilitates expansion in functionality and number of stations. The first version was operated continuously for 8 weeks in late-1989. The Center functions were then transferred to NDAC for subsequent operation. Later versions will be distributed among NDAC, Scripps/IGPP (San Diego), and the Center to process data from many stations and arrays. The IMS design is ambitious in its integration of many new computer technologies, but the operational performance of the first version demonstrates its validity. Thus, IMS provides a new generation of automated seismic event monitoring capability.
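
To give a flavour of the kind of rule-based interpretation such a knowledge-based system performs on stored signal attributes, the sketch below classifies a detection from a few attributes; the attribute names and thresholds are invented illustrations, not rules from the operational IMS.

    def classify_event(attributes):
        """Toy rule-based event identification from signal attributes (illustrative thresholds only)."""
        # Area-specific knowledge could swap in different rules for each region.
        if attributes["depth_km"] > 20:
            return "earthquake"      # deep events are unlikely to be industrial explosions
        if attributes["daytime"] and attributes["ripple_fired"]:
            return "mine blast"      # ripple-fired pattern during working hours
        return "unidentified"

    print(classify_event({"depth_km": 2.0, "daytime": True, "ripple_fired": True}))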

