A Computational Modeling for Knowledge Binding of the Unstructured Web Data

This manuscript focuses on extracting insightful data embedded in web-based information, which is crucial for a range of academic and commercial applications. The study introduces a robust computational model for deriving knowledge from collaborative, unstructured web-based information. The design is simplified by means of a fuzzy matching algorithm and a set of procedures that reduce the computational effort to a significant extent. A numerical and theoretical analysis demonstrates the effectiveness of the formulated model and shows that it outperforms the baseline model by almost 50% in terms of computational performance.
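As a hedged illustration of the kind of fuzzy matching the abstract alludes to, the sketch below scores candidate strings with a character-bigram Dice similarity; the threshold and helper names are assumptions for illustration, not details taken from the paper.

def bigrams(text):
    # Split a lowercase string into overlapping character bigrams.
    text = text.lower()
    return {text[i:i + 2] for i in range(len(text) - 1)}

def fuzzy_similarity(a, b):
    # Dice coefficient over character bigrams: 1.0 = identical, 0.0 = disjoint.
    ba, bb = bigrams(a), bigrams(b)
    if not ba or not bb:
        return 0.0
    return 2 * len(ba & bb) / (len(ba) + len(bb))

def fuzzy_match(candidate, known_terms, threshold=0.6):
    # Return the known term most similar to the candidate, if it clears the threshold.
    best = max(known_terms, key=lambda k: fuzzy_similarity(candidate, k))
    return best if fuzzy_similarity(candidate, best) >= threshold else None

print(fuzzy_match("Wrld Wide Web", ["World Wide Web", "Semantic Web"]))  # -> "World Wide Web"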

2021 ◽  
pp. 0734242X2198941
Author(s):  
Athanasios Angelis-Dimakis ◽  
George Arampatzis ◽  
Tryfonas Pieri ◽  
Konstantina Solomou ◽  
Panagiotis Dedousis ◽  
...  

The SWAN platform is an integrated suite of online resources and tools for assessing industrial symbiotic opportunities based on solid industrial waste reuse. It has been developed as a digital solid waste reuse platform and is already applied in four countries (Greece, Bulgaria, Albania and Cyprus). The SWAN platform integrates a database with the spatial and technical characteristics of industrial solid waste producers and potential consumers, populated with data from these countries. It also incorporates an inventory of commercially implemented best practices on solid industrial waste reuse. The role of the SWAN platform is to facilitate the development of novel business cases. Towards this end, decision support services, based on a suitable matching algorithm, are provided to the registered users, helping them to identify and assess potential novel business models, based on solid waste reuse, either for an individual industrial unit (source/potential receiver of solid waste) or a specific region.
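As a hedged sketch only: a matching step of the kind such decision support might rely on could pair producers and consumers that share a waste-type code and lie within a distance cutoff. The field names, distance function and cutoff below are assumptions, not details of the SWAN platform itself.

import math

def haversine_km(lat1, lon1, lat2, lon2):
    # Great-circle distance between two points, in kilometres.
    r = 6371.0
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlmb = math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dlmb / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def match_streams(producers, consumers, max_km=100.0):
    # Pair producers and consumers with the same waste-type code within the cutoff.
    matches = []
    for p in producers:
        for c in consumers:
            if p["waste_code"] == c["waste_code"]:
                d = haversine_km(p["lat"], p["lon"], c["lat"], c["lon"])
                if d <= max_km:
                    matches.append((p["name"], c["name"], round(d, 1)))
    return matches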


Author(s):  
Hilário Oliveira ◽  
Rinaldo Lima ◽  
João Gomes ◽  
Fred Freitas ◽  
Rafael Dueire Lins ◽  
...  

The Semantic Web, proposed by Berners-Lee, aims to make explicit the meaning of the data available on the Internet, making it possible for Web data to be processed by both people and intelligent agents. The Semantic Web requires Web data to be semantically classified and annotated with some structured representation of knowledge, such as ontologies. This chapter proposes an unsupervised, domain-independent method for extracting instances of ontological classes from unstructured data sources available on the World Wide Web. Starting with an initial set of linguistic patterns, a confidence-weighted score measure is presented that integrates distinct measures and heuristics to rank candidate instances extracted from the Web. The results of several experiments are discussed, with very encouraging outcomes that demonstrate the feasibility of the proposed method for automatic ontology population.
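A minimal sketch of a confidence-weighted ranking of candidate instances, assuming each extraction comes with a handful of normalized evidence measures; the measure names and equal default weights are illustrative, not those used in the chapter.

def confidence_score(evidence, weights=None):
    # Combine normalized evidence measures (each in [0, 1]), e.g.
    # {"pattern_hits": 0.8, "pmi": 0.5}, into one weighted score.
    weights = weights or {name: 1.0 for name in evidence}
    total = sum(weights[name] for name in evidence)
    return sum(weights[name] * value for name, value in evidence.items()) / total

def rank_candidates(candidates):
    # candidates: list of (instance_string, evidence_dict) pairs.
    scored = [(name, confidence_score(ev)) for name, ev in candidates]
    return sorted(scored, key=lambda pair: pair[1], reverse=True)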


Fractals ◽  
2020 ◽  
Author(s):  
Amjad Ali ◽  
Kamal Shah ◽  
Hussam Alrabaiah ◽  
Zahir Shah ◽  
Ghaus Ur Rahman ◽  
...  

Author(s):  
D. Xuan Le ◽  
J. Wenny Rahayu ◽  
David Taniar

This paper proposes a data warehouse integration technique that combines data and documents from different underlying document and database design approaches. Well-defined and structured data such as relational, object-oriented and object-relational data, semi-structured data such as XML, and unstructured data such as HTML documents are integrated into a Web data warehouse system. User-specified requirements and data sources are combined to assist in defining the hierarchical structures, which serve specific requirements and represent particular data semantics using object-oriented features including inheritance, aggregation, association and collection. A conceptual integrated data warehouse model is then specified based on a combination of user requirements and data source structures, which in turn motivates a logical integrated data warehouse model. A case study is then developed into a prototype in a Web-based environment to enable the evaluation. The evaluation of the proposed integrated Web data warehouse methodology includes verification of the correctness of the integrated data and the overall benefits of utilizing the proposed integration technique.
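A small sketch of how the object-oriented features named above (inheritance, aggregation, collection) might appear in a conceptual warehouse model; the class and field names are hypothetical, not the paper's schema.

from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class Dimension:
    # Common base class for all dimensions (inheritance).
    name: str

@dataclass
class ProductDimension(Dimension):
    # Specialised dimension inheriting the shared attributes.
    category: str = "uncategorised"

@dataclass
class FactRecord:
    # A fact aggregates its dimensions (aggregation) and holds a
    # collection of named measures (collection).
    dimensions: List[Dimension] = field(default_factory=list)
    measures: Dict[str, float] = field(default_factory=dict)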


2013 ◽  
Vol 416-417 ◽  
pp. 1336-1340
Author(s):  
Dong Tao Ma

To meet the application requirements of a smart multimedia system, this paper designs and implements a system application framework and describes the function of each software module in the overall system. After describing the overall framework, the paper analyses the detailed implementation of several key software modules: support for SD and HD video signals, the video capture module, the object-intrusion-alarm motion detection module, the audio and video synchronization mechanism, and the web-based remote control module. The article also describes the main features of DaVinci technology and DaVinci platform development, and elaborates on the hardware structure of the DSP6467 DaVinci platform used in this work.
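As a hedged illustration of one module listed above, the following sketch shows a timestamp-based audio/video synchronization policy; the tolerance value and the drop/wait rule are assumptions for illustration, not details from the paper.

def in_sync(audio_pts_ms, video_pts_ms, tolerance_ms=40):
    # Presentation timestamps count as synchronized if they differ by less
    # than roughly one video frame interval (an assumed tolerance).
    return abs(audio_pts_ms - video_pts_ms) <= tolerance_ms

def sync_action(audio_pts_ms, video_pts_ms, tolerance_ms=40):
    # Simple policy: render when in sync, drop late video frames,
    # wait when video runs ahead of audio.
    if in_sync(audio_pts_ms, video_pts_ms, tolerance_ms):
        return "render"
    return "drop" if video_pts_ms < audio_pts_ms - tolerance_ms else "wait"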


2012 ◽  
Vol 45 (2) ◽  
pp. 332-334 ◽  
Author(s):  
R. Nagarajan ◽  
S. Siva Balan ◽  
R. Sabarinathan ◽  
M. Kirti Vaishnavi ◽  
K. Sekar

Fragment Finder 2.0 is a web-based interactive computing server which can be used to retrieve structurally similar protein fragments from 25 and 90% nonredundant data sets. The computing server identifies structurally similar fragments using the protein backbone Cα angles. In addition, the identified fragments can be superimposed using either of the two structural superposition programs, STAMP and PROFIT, provided in the server. The freely available Java plug-in Jmol has been interfaced with the server for the visualization of the query and superposed fragments. The server is the updated version of a previously developed search engine and employs an in-house-developed fast pattern matching algorithm. This server can be accessed freely over the World Wide Web through the URL http://cluster.physics.iisc.ernet.in/ff/.
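A hedged sketch of what pattern matching over backbone Cα angle sequences can look like, assuming a fixed angular tolerance per position; the tolerance and the sliding-window search are illustrative, not the server's in-house algorithm.

def angles_match(query, target, tolerance=10.0):
    # Two equal-length angle windows match if every pairwise difference
    # (in degrees, wrapped to [-180, 180]) is within the tolerance.
    for q, t in zip(query, target):
        diff = (q - t + 180.0) % 360.0 - 180.0
        if abs(diff) > tolerance:
            return False
    return True

def find_fragments(query_angles, database, tolerance=10.0):
    # Slide the query window over each database entry's Cα angle sequence.
    n = len(query_angles)
    hits = []
    for entry_id, angles in database.items():
        for start in range(len(angles) - n + 1):
            if angles_match(query_angles, angles[start:start + n], tolerance):
                hits.append((entry_id, start))
    return hits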


Petir ◽  
2018 ◽  
Vol 11 (1) ◽  
pp. 67-71
Author(s):  
Redaksi Tim Jurnal

One of the roles of information and communication technology in improving the quality of learning and teaching in educational organizations is to provide a web-based learning facility, namely e-learning. In general, there are two types of software, generic and bespoke (customized); in this study we found that Universitas Mercu Buana customizes generic applications. A feature model is a way to define the functionality of an application based on the features required by its users; these features can be grouped into mandatory and supplementary (optional) ones. The feature-based requirements specification proposed in this study is intended as a reference so that the management of e-learning application requirements is mapped clearly and well, and can thus support the development of future applications.
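A small sketch of a feature model that separates mandatory from optional features, as described above; the feature names are hypothetical examples rather than the study's actual requirement list.

MANDATORY = {"course_content", "user_login", "assignment_upload"}
OPTIONAL = {"discussion_forum", "video_conferencing", "gamification"}

def validate_configuration(selected):
    # A configuration is valid only if it contains every mandatory feature
    # and nothing outside the known feature set.
    selected = set(selected)
    missing = MANDATORY - selected
    unknown = selected - (MANDATORY | OPTIONAL)
    return (not missing and not unknown), missing, unknown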


Water ◽  
2020 ◽  
Vol 12 (5) ◽  
pp. 1243 ◽  
Author(s):  
Andre Schardong ◽  
Slobodan P. Simonovic ◽  
Abhishek Gaur ◽  
Dan Sandink

Rainfall Intensity–Duration–Frequency (IDF) curves are among the most essential datasets used in water resources management across the globe. Traditionally, they are derived from observations of historical rainfall under the assumption of stationarity. Changing climatic conditions make the use of historical data for developing future IDFs unreliable and may, in some cases, lead to underestimated infrastructure designs. The IDF_CC tool is designed to assist water professionals and engineers in producing IDF estimates under changing climatic conditions. The latest version of the tool (Version 4) provides updated IDF curve estimates for gauged locations (rainfall monitoring stations) and ungauged sites using a new gridded dataset of IDF curves for the land mass of Canada. The tool has been developed using web-based technologies and takes the form of a decision support system (DSS). The main modifications and improvements between Version 1 and the latest version of the IDF_CC tool include: (i) introduction of the Generalized Extreme Value (GEV) distribution; (ii) an updated equidistant quantile matching (QM) algorithm; (iii) a gridded IDF curve dataset for ungauged locations; and (iv) updated climate models.
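A hedged sketch of the GEV-fitting step that such a workflow involves, using scipy.stats.genextreme as a stand-in; the annual maxima below are invented sample values, not data from the tool.

from scipy.stats import genextreme

# Hypothetical annual maximum 1 h rainfall intensities (mm/h) for one station.
annual_maxima = [22.1, 30.5, 18.7, 41.2, 27.9, 33.4, 25.0, 38.6, 29.3, 35.8]

# Fit a GEV distribution to the annual maxima (maximum likelihood).
shape, loc, scale = genextreme.fit(annual_maxima)

# Intensity with a 2% annual exceedance probability (50-year return period).
i_50yr = genextreme.ppf(1 - 1 / 50, shape, loc=loc, scale=scale)
print(round(float(i_50yr), 1))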

