Ontology-Assisted Enterprise Information Systems Integration in Manufacturing Supply Chain

Author(s):  
Kamalendu Pal

Manufacturing communities around the globe are eagerly following recent developments in semantic web technology (SWT). This technology combines a set of new mechanisms with well-grounded knowledge representation techniques to address the needs of formal information modelling and reasoning for web-based services. This chapter provides a high-level summary of SWT to help readers better understand the impact this technology will have on wider enterprise information architectures. In many cases, SWT reuses familiar concepts with a new twist: for example, “ontologies” in place of “data dictionaries” and “semantic models” in place of “data models.” The chapter demonstrates the usefulness of a proposed architecture by applying it to the integration of data from multiple heterogeneous sources, which entails semantic mappings between source schemas and a Resource Description Framework (RDF) ontology, described declaratively as SPARQL queries. Finally, the semantics of query rewriting is discussed and a query rewriting algorithm is presented.
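To illustrate the kind of declarative mapping the chapter describes, here is a minimal sketch using rdflib; the namespaces, property names and sample data are hypothetical, not taken from the chapter.

```python
# A minimal sketch (using rdflib) of a source-schema-to-ontology mapping
# expressed declaratively as a SPARQL CONSTRUCT query. All namespaces,
# property names, and sample data below are hypothetical.
from rdflib import Graph

source_data = """
@prefix src: <http://example.org/source/supplier#> .
src:s42 src:supplier_name "Acme Metals" ;
        src:supplier_city "Sheffield" .
"""

mapping = """
PREFIX src: <http://example.org/source/supplier#>
PREFIX sc:  <http://example.org/ontology/supplychain#>
CONSTRUCT {
  ?s a sc:Supplier ;
     sc:name     ?name ;
     sc:location ?city .
}
WHERE {
  ?s src:supplier_name ?name ;
     src:supplier_city ?city .
}
"""

g = Graph()
g.parse(data=source_data, format="turtle")
ontology_view = g.query(mapping).graph  # CONSTRUCT result as a new graph
for triple in ontology_view:
    print(triple)
```

Because the mapping is itself a query, integrating an additional source amounts to writing another CONSTRUCT query against that source's schema rather than changing application code.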

Algorithms
2021
Vol 14 (5)
pp. 149
Author(s):
Petros Zervoudakis
Haridimos Kondylakis
Nicolas Spyratos
Dimitris Plexousakis

HIFUN is a high-level query language for expressing analytic queries over big datasets, offering a clear separation between the conceptual layer, where analytic queries are defined independently of the nature and location of data, and the physical layer, where queries are evaluated. In this paper, we present a methodology based on the HIFUN language, and the corresponding algorithms for the incremental evaluation of continuous queries. In essence, our approach is able to process the most recent data batch by exploiting already computed information, without requiring the evaluation of the query over the complete dataset. We present the generic algorithm, which we translated both to SQL and to MapReduce using Spark, implementing various query rewriting methods. We demonstrate the effectiveness of our approach in terms of query answering efficiency. Finally, we show that by exploiting the formal query rewriting methods of HIFUN, we can further reduce the computational cost, adding another layer of query optimization to our implementation.
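The incremental idea can be sketched in a few lines of Python; this illustrates the general technique (self-maintainable aggregates folded over batches), not the authors' implementation.

```python
# A minimal sketch of incremental evaluation of a continuous aggregate
# query: keep per-group partial results and fold in each new batch,
# instead of re-evaluating the query over the complete dataset.
from collections import defaultdict

state = defaultdict(int)  # group key -> running SUM

def process_batch(batch, state):
    """batch: iterable of (group_key, value) pairs from the latest data."""
    for key, value in batch:
        state[key] += value  # SUM is self-maintainable; AVG would keep (sum, count)
    return state

process_batch([("store_A", 10), ("store_B", 5)], state)
process_batch([("store_A", 3)], state)   # only the new batch is touched
print(dict(state))                        # {'store_A': 13, 'store_B': 5}
```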


Algorithms
2021
Vol 14 (2)
pp. 34
Author(s):
Maria-Evangelia Papadaki
Nicolas Spyratos
Yannis Tzitzikas

The continuous accumulation of multi-dimensional data and the development of the Semantic Web and of Linked Data published in the Resource Description Framework (RDF) bring new requirements for data analytics tools. Such tools should take into account the special features of RDF graphs, exploit the semantics of RDF and support flexible aggregate queries. In this paper, we present an approach for applying analytics to RDF data based on a high-level functional query language, called HIFUN. In that language, each analytical query is considered to be a well-formed expression of a functional algebra, and its definition is independent of the nature and structure of the data. We investigate how HIFUN can be used to ease the formulation of analytic queries over RDF data. We detail the applicability of HIFUN over RDF and the data transformations that may be required, introduce the rules for translating HIFUN queries to SPARQL, and describe a first implementation of the proposed model.
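As an illustration of such a translation, the sketch below maps a HIFUN-style analytic triple (grouping function, measuring function, aggregate operation) to a SPARQL aggregate query; the property IRIs are invented for the example, and the rule is far simpler than the paper's full rule set.

```python
# A minimal sketch of translating a HIFUN-style analytic triple
# (grouping, measure, operation) into a SPARQL aggregate query.
# The ex: property IRIs are hypothetical.
def hifun_to_sparql(grouping_prop: str, measure_prop: str, op: str = "SUM") -> str:
    return f"""
PREFIX ex: <http://example.org/>
SELECT ?g ({op}(?m) AS ?result)
WHERE {{
  ?x ex:{grouping_prop} ?g ;
     ex:{measure_prop}  ?m .
}}
GROUP BY ?g
"""

# "Total quantity delivered, per branch":
print(hifun_to_sparql("branch", "quantity", "SUM"))
```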


2019
Vol 214
pp. 01049
Author(s):
Alexey Anisenkov
Daniil Zhadan
Ivan Logashenko

A comprehensive and efficient environment and data monitoring system is a vital part of any HEP experiment. In this paper we describe the web-based software framework which is currently used by the CMD-3 Collaboration at the VEPP-2000 Collider, and partially by the Muon g-2 experiment at Fermilab, to monitor the status of data acquisition and the quality of data taken by the experiments. The system is designed to meet typical requirements and cover various use cases of DAQ applications, ranging from central configuration, slow control data monitoring, data quality monitoring and user-oriented visualization to control of the hardware and DAQ processes. As intermediate middleware between the front-end electronics and the DAQ applications, the system focuses on providing a high-level, coherent view for shifters and experts to support robust operations. In particular, it is used to integrate various experiment-dependent monitoring modules and tools into a unified web-oriented portal with an appropriate access control policy. The paper describes the design and overall architecture of the system, recent developments and the most important aspects of the framework implementation.
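The plugin-plus-access-control pattern described above can be sketched as follows; this is purely illustrative and is not the CMD-3 framework's actual API.

```python
# A minimal sketch of the portal idea: experiment-specific monitoring
# modules registered under one roof, behind a shared access-control check.
# Module names, roles, and readings are hypothetical.
MODULES = {}

def register(name, roles):
    def wrap(fn):
        MODULES[name] = {"handler": fn, "roles": set(roles)}
        return fn
    return wrap

@register("slow_control", roles={"shifter", "expert"})
def slow_control_status():
    return {"magnet_current_A": 1250.0, "status": "OK"}  # placeholder reading

def render(name, user_role):
    module = MODULES[name]
    if user_role not in module["roles"]:
        raise PermissionError(f"role '{user_role}' may not view '{name}'")
    return module["handler"]()

print(render("slow_control", "shifter"))
```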


2018
Author(s):
Maxat Kulmanov
Senay Kafkas
Andreas Karwath
Alexander Malic
Georgios V Gkoutos
...

Recent developments in machine learning have led to a rise in the number of methods for extracting features from structured data. The features are represented as vectors and may encode semantic aspects of the data. They can be used in machine learning models for different tasks or to compute similarities between the entities of the data. SPARQL is a query language for structured data, originally developed for querying Resource Description Framework (RDF) data. It has been in use for over a decade as a standardized NoSQL query language. Many different tools have been developed to enable data sharing with SPARQL; for example, SPARQL endpoints make data interoperable and available to the world, and SPARQL queries can be executed across multiple endpoints. We have developed Vec2SPARQL, a general framework for integrating structured data and their vector space representations. Vec2SPARQL allows jointly querying vector functions, such as computing similarities (cosine, correlations) or classifications with machine learning models, within a single SPARQL query. We demonstrate applications of our approach for biomedical and clinical use cases. Our source code is freely available at https://github.com/bio-ontology-research-group/vec2sparql and we make a Vec2SPARQL endpoint available at http://sparql.bio2vec.net/.
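The sort of query Vec2SPARQL makes possible can be sketched as below; note that the function IRI and property names are illustrative assumptions, not taken from the Vec2SPARQL documentation.

```python
# A minimal sketch of the kind of query Vec2SPARQL enables: ranking
# entities by vector similarity inside a single SPARQL query. The vec:
# function IRI and the ex: property names are hypothetical.
query = """
PREFIX vec: <http://example.org/vec2sparql#>
PREFIX ex:  <http://example.org/>
SELECT ?gene ?sim
WHERE {
  ex:gene_BRCA1 ex:hasFeatureVector ?v1 .
  ?gene         ex:hasFeatureVector ?v2 .
  BIND (vec:cosineSimilarity(?v1, ?v2) AS ?sim)
}
ORDER BY DESC(?sim)
LIMIT 10
"""
# Such a query would be posted to a Vec2SPARQL endpoint, e.g. with
# SPARQLWrapper pointed at http://sparql.bio2vec.net/.
print(query)
```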


Author(s):
Maarten Trekels
Matt Woodburn
Deborah L Paul
Sharon Grant
Kate Webbink
...

Data standards allow us to aggregate, compare, compute and communicate data from a wide variety of origins. However, for historical reasons, data are most likely to be stored in many different formats and conform to different models. Every data set might contain a huge amount of information, but it becomes tremendously difficult to compare data sets without a common way to represent the data. That is where standards development comes in. Developing a standard is a formidable process, often involving many stakeholders. Typically the initial blueprint of a standard is created by a limited number of people who have a clear view of their use cases. However, as development continues, additional stakeholders participate in the process. As a result, conflicting opinions and interests will influence the development of the standard. Compromises need to be made and the standard might look very different from the initial concept. In order to address the needs of the community, a high level of engagement in the development process is encouraged. However, this does not necessarily increase the usability of the standard. To mitigate this, there is a need to test the standard during the early stages of development. To facilitate this, we explored the use of Wikibase to create an initial implementation of the standard. Wikibase is the underlying technology that drives Wikidata. The software is open source and can be customized for creating collaborative knowledge bases. In addition to containing an RDF (Resource Description Framework) triple store under the hood, it provides users with an easy-to-use graphical user interface (see Fig. 1). This facilitates the use of an implementation of a standard by non-technical users. The Wikibase remains fully flexible in the way data are represented and no data model is enforced, which allows users to map their data onto the standard without any restrictions. Retrieving information from RDF data can be done through the SPARQL query language (W3C 2020). The software package also has a built-in SPARQL endpoint, allowing users to extract the relevant information: Does the standard cover all use cases envisioned? Are parts of the standard underdeveloped? Are the controlled vocabularies sufficient to describe the data? This strategy was applied during the development of the TDWG Collection Description standard. After completing a rough version of the standard, the different terms that were defined in the first version were transferred to a Wikibase instance running on WBStack (Addshore 2020). Initially, collection data were entered manually, which revealed several issues. The Wikibase allowed us to easily define controlled vocabularies and expand them as needed. The feedback reported from users then flowed back into the further development of the standard. Currently we envisage creating automated scripts that will import data en masse from collections. Using the SPARQL query interface, it will then be straightforward to ensure that data can be extracted from the Wikibase to support the envisaged use cases, as the sketch below illustrates.
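A coverage check of that kind might look as follows; the item and property identifiers are hypothetical Wikibase IDs, not those of the actual Collection Description instance.

```python
# A minimal sketch of testing a draft standard hosted in Wikibase: find
# collections that lack a value for one of the standard's terms. The
# P/Q identifiers below are hypothetical, as every Wikibase mints its own.
coverage_check = """
SELECT ?collection ?collectionLabel
WHERE {
  ?collection wdt:P1 wd:Q1 .                               # instance of: Collection
  FILTER NOT EXISTS { ?collection wdt:P7 ?preservation . } # term under test
  SERVICE wikibase:label { bd:serviceParam wikibase:language "en" . }
}
"""
# Run against the instance's SPARQL endpoint: an empty result means the
# term is fully populated; a long list flags gaps in data or vocabulary.
print(coverage_check)
```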


2020
Author(s):
James McDonagh
William Swope
Richard L. Anderson
Michael Johnston
David J. Bray

Digitization offers significant opportunities for the formulated product industry to transform the way it works and develop new methods of business. R&D is one area of operation where taking advantage of these technologies is challenging, due to its high level of domain specialisation and creativity, but the benefits could be significant. Recent developments in base-level technologies such as artificial intelligence (AI)/machine learning (ML), robotics and high performance computing (HPC), to name a few, present disruptive and transformative technologies which could offer new insights, discovery methods and enhanced chemical control when combined in a digital ecosystem of connectivity, distributive services and decentralisation. At the fundamental level, research in these technologies has shown that new physical and chemical insights can be gained, which in turn can augment experimental R&D approaches through physics-based chemical simulation, data-driven models and hybrid approaches. In all of these cases, high quality data are required to build and validate models, in addition to the skills and expertise to exploit such methods. In this article we give an overview of some of the digital technology demonstrators we have developed for formulated product R&D, and we discuss the challenges in building and deploying these demonstrators.


2020
Vol 16 (3)
pp. 182-195
Author(s):
Sarah Baker
Natalie Logie
Kim Paulson
Adele Duimering
Albert Murtha

Radiotherapy is an important component of the treatment for primary and metastatic brain tumors. Due to the close proximity of critical structures and normal brain parenchyma, Central Nervous System (CNS) radiotherapy is associated with adverse effects such as neurocognitive deficits, which must be weighed against the benefit of improved tumor control. Advanced radiotherapy technology may help to mitigate toxicity risks, although there is a paucity of high-level evidence to support its use. Recent advances have been made in the treatment for gliomas, meningiomas, benign tumors, and metastases, although outcomes remain poor for many high grade tumors. This review highlights recent developments in CNS radiotherapy, discusses common treatment toxicities, critically reviews advanced radiotherapy technologies, and highlights promising treatment strategies to improve clinical outcomes in the future.


Author(s):
Prasad Dandawate
Khursheed Ahmed
Subhash Padhye
Aamir Ahmad
Bernhard Biersack

Background: Chalcones are structurally simple compounds that are easily accessible by synthetic methods. Heterocyclic chalcones have gained interest among scientists due to their diverse biological activities. The anti-tumor activities of heterocyclic chalcones are especially remarkable, and the growing number of publications dealing with this topic warrants an up-to-date compilation. Methods: A search for anti-tumor active heterocyclic chalcones was carried out using PubMed and SciFinder as common web-based literature searching tools. Pertinent and current literature is covered from 2015/2016 to 2019. Chemical structures, biological activities and modes of action of anti-tumor active heterocyclic chalcones are summarized. Results: Simply prepared chalcones have emerged over recent years with promising anti-tumor activities. Among them is a considerable number of tubulin polymerization inhibitors, but there are also new chalcones targeting particular enzymes such as histone deacetylases, or with DNA-binding properties. Conclusion: This review provides a summary of recent heterocyclic chalcone derivatives with distinct anti-tumor activities.


2021
Vol 11 (9)
pp. 3730
Author(s):
Aniqa Dilawari
Muhammad Usman Ghani Khan
Yasser D. Al-Otaibi
Zahoor-ur Rehman
Atta-ur Rahman
...

After the September 11 attacks, security and surveillance measures changed across the globe. Now, surveillance cameras are installed almost everywhere to monitor video footage. Though quite handy, these cameras produce video data of massive size and volume. The major challenge faced by security agencies is the effort of analyzing the surveillance video data collected and generated daily. Problems related to these videos are twofold: (1) understanding the contents of video streams, and (2) conversion of the video contents to condensed formats, such as textual interpretations and summaries, to save storage space. In this paper, we propose a video description framework for a surveillance dataset. The framework is based on multitask learning of high-level features (HLFs) using a convolutional neural network (CNN) and natural language generation (NLG) through bidirectional recurrent networks. For each specific task, a parallel pipeline is derived from the base visual geometry group (VGG)-16 model; tasks include scene recognition, action recognition, object recognition and human-face-specific feature recognition. Experimental results on the TRECViD, UET Video Surveillance (UETVS) and AGRIINTRUSION datasets show that the model outperforms state-of-the-art methods, achieving METEOR (Metric for Evaluation of Translation with Explicit ORdering) scores of 33.9%, 34.3%, and 31.2%, respectively. Our results show that our framework has distinct advantages over traditional rule-based models for the recognition and generation of natural language descriptions.
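The parallel-pipeline idea can be sketched in Keras as below; the class counts and single-Dense heads are illustrative assumptions, not the paper's exact configuration, and the bidirectional recurrent NLG stage that would consume the heads' outputs is omitted.

```python
# A minimal sketch of deriving parallel task heads from a shared VGG-16
# base for multitask recognition of high-level features. Class counts
# and head layout are hypothetical.
import tensorflow as tf
from tensorflow.keras import layers, Model
from tensorflow.keras.applications import VGG16

base = VGG16(weights=None, include_top=False,
             input_shape=(224, 224, 3), pooling="avg")

# One parallel classification head per task.
tasks = {"scene": 10, "action": 20, "object": 50, "face": 8}
outputs = [layers.Dense(n, activation="softmax", name=name)(base.output)
           for name, n in tasks.items()]

model = Model(inputs=base.input, outputs=outputs)
model.compile(optimizer="adam",
              loss={name: "categorical_crossentropy" for name in tasks})
model.summary()
```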


2005
Vol 68 (1)
pp. 36-43
Author(s):
Gayle Vogt
Catherine Atwong
Jean Fuller

Student Assessment of Learning Gains (SALGains) is a Web-based instrument for measuring student perception of their learning in a variety of courses. The authors adapted this instrument to measure students’ achieved proficiency in analyzing cases in an advanced business communication class. The instrument showed that students did achieve a high level of proficiency and that they did so equally in both traditional and online classes.

