Introduction to the Integrated Domain Modeling Toolset

2014 ◽  
Vol 16 (1) ◽  
pp. 13-18
Author(s):  
Armands Slihte ◽  
Juan Manuel Cueva Lovelle

This paper describes the Integrated Domain Modeling approach and introduces the supporting toolset as a solution to the complex domain-modeling task. This approach integrates artificial intelligence (AI) and system analysis by exploiting ontology, natural language processing (NLP), use cases and model-driven architecture (MDA) for knowledge engineering and domain modeling. The IDM toolset provides the opportunity to automatically generate the initial AS-IS model from the formally defined domain knowledge. In this paper, we describe in detail the scope, architecture and implementation of the toolset.

2021 ◽  
Vol 3 ◽  
Author(s):  
Marieke van Erp ◽  
Christian Reynolds ◽  
Diana Maynard ◽  
Alain Starke ◽  
Rebeca Ibáñez Martín ◽  
...  

In this paper, we discuss the use of natural language processing and artificial intelligence to analyze the nutritional and sustainability aspects of recipes and food. We present the state of the art and some use cases, followed by a discussion of challenges. Our perspective is that, while these challenges are typically technical in nature, addressing them nevertheless requires an interdisciplinary approach combining natural language processing and artificial intelligence with expert domain knowledge to create practical tools and comprehensive analyses for the food domain.


2015 ◽  
Vol 1 (1) ◽  
pp. 206-214 ◽  
Author(s):  
Zobia Rehman ◽  
Stefania Kifor

It often happens in teaching that, due to the complexity of a subject or the unavailability of an expert instructor, a course suffers in ways that affect not only its outcomes but also students' engagement and learning development. Even when the contents are covered in such a situation, their inadequate explanation leaves many questions in students' minds. Artificial Intelligence helps represent knowledge graphically and symbolically in forms that support logical inference. Visual and symbolic representations of knowledge are easy for both teachers and students to understand. To facilitate students' understanding, teachers often structure domain knowledge in a visual form in which all the important contents of a subject can be seen together with their relations to one another. Such a structure is called an ontology, an important aspect of knowledge engineering. Teaching via ontologies has been in practice for the last two decades. Natural Language Processing (NLP) combines computation and linguistics and is often hard to teach: its contents are apparently not tied together in an obvious way, which makes it difficult for a teacher to decide where to start. In this article we discuss the design of an ontology to support rational learning and efficient teaching of NLP at the introductory level.


2021 ◽  
pp. 1-35
Author(s):  
John A. Bateman

GUM is a linguistically-motivated ontology originally developed to support natural language processing systems by offering a level of representation intermediate between linguistic forms and domain knowledge. Whereas modeling decisions for individual domains may need to be responsive to domain-specific criteria, a linguistically-motivated ontology offers a characterization that generalizes across domains because its design criteria are derived independently both of domain and of application. With respect to this mediating role, the use of GUM resembles (and partially predates) the adoption of upper ontologies as tools for mediating across domains and for supporting domain modeling. This paper briefly introduces the ontology, setting out its origins, design principles and applications. The example cases for this special issue are then described, illustrating particularly some of the principal differences and similarities of GUM to non-linguistically motivated upper ontologies.


2021 ◽  
pp. 1-13
Author(s):  
Lamiae Benhayoun ◽  
Daniel Lang

BACKGROUND: The renewed advent of Artificial Intelligence (AI) is inducing profound changes in the classic categories of technology professions and is creating the need for new, specific skills. OBJECTIVE: Identify the skill gaps between academic training on AI in French engineering and business schools and the requirements of the labour market. METHOD: Extraction of AI training contents from the schools' websites and scraping of a job-advertisement website, followed by analysis based on a text mining approach with Python code for Natural Language Processing. RESULTS: Categorization of occupations related to AI, and characterization of three classes of skills for the AI market: technical, soft and interdisciplinary. Skill gaps concern some professional certifications, the mastery of specific tools, research abilities, and awareness of the ethical and regulatory dimensions of AI. CONCLUSIONS: A deep analysis using Natural Language Processing algorithms whose results provide a better understanding of the AI capability components at the individual and organizational levels, and a study that can help shape educational programs to respond to AI market requirements.
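The skill-classification step described in the METHOD could be sketched as keyword-based text mining over job-advertisement text. This is a minimal illustrative sketch, not the authors' pipeline; the keyword lists are assumptions standing in for the study's actual lexicons.

```python
# Hypothetical sketch: counting skill mentions per class in job-ad text.
# The keyword sets below are illustrative assumptions, not the study's data.
import re
from collections import Counter

SKILL_CLASSES = {
    "technical": {"python", "tensorflow", "sql", "deep learning", "nlp"},
    "soft": {"communication", "teamwork", "curiosity"},
    "interdisciplinary": {"ethics", "gdpr", "domain knowledge"},
}

def classify_skills(ad_text: str) -> Counter:
    """Count skill-keyword occurrences per class in a job advertisement."""
    text = ad_text.lower()
    counts = Counter()
    for cls, keywords in SKILL_CLASSES.items():
        for kw in keywords:
            counts[cls] += len(re.findall(re.escape(kw), text))
    return counts

ad = ("We seek an ML engineer: Python, SQL, deep learning; "
      "strong communication and awareness of GDPR.")
print(classify_skills(ad))
```

In a real study the lexicons would be induced from the corpus itself (e.g. via term frequency statistics) rather than hand-written, but the three-way categorization step works the same way.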


2021 ◽  
pp. 1063293X2098297
Author(s):  
Ivar Örn Arnarsson ◽  
Otto Frost ◽  
Emil Gustavsson ◽  
Mats Jirstrand ◽  
Johan Malmqvist

Product development companies collect data in the form of Engineering Change Requests for logged design issues, tests, and product iterations. These documents are rich in unstructured data (e.g. free text). Previous research affirms that product developers find current IT systems lacking in capabilities to accurately retrieve relevant documents containing unstructured data. In this research, we demonstrate a method using Natural Language Processing and document clustering algorithms to find structurally or contextually related documents in databases of Engineering Change Request documents. The aim is to radically decrease the time needed to search for related engineering documents, organize search results, and create labeled clusters from these documents by utilizing Natural Language Processing algorithms. A domain knowledge expert at the case company evaluated the results and confirmed that the algorithms we applied found relevant document clusters for the queries tested.
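The retrieval of contextually related documents described above can be sketched with TF-IDF vectors and cosine similarity, a standard precursor to document clustering. This is an assumed, stdlib-only sketch, not the case company's implementation; the sample documents are invented.

```python
# Sketch (assumption, not the paper's code): TF-IDF vectors over tokenized
# change-request texts, with cosine similarity to find related documents.
import math
import re
from collections import Counter

def tokenize(text):
    return re.findall(r"[a-z]+", text.lower())

def tfidf_vectors(docs):
    """Return one sparse {term: weight} vector per document."""
    tokenized = [tokenize(d) for d in docs]
    df = Counter(t for toks in tokenized for t in set(toks))
    n = len(docs)
    vecs = []
    for toks in tokenized:
        tf = Counter(toks)
        vecs.append({t: tf[t] * math.log(n / df[t]) for t in tf})
    return vecs

def cosine(a, b):
    dot = sum(a[t] * b.get(t, 0.0) for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

docs = [
    "bracket crack observed during vibration test",
    "crack in mounting bracket after vibration",
    "software update for infotainment display",
]
vecs = tfidf_vectors(docs)
# How related is each other document to doc 0?
scores = [cosine(vecs[0], v) for v in vecs[1:]]
print(scores)
```

On top of such vectors, a clustering algorithm (e.g. k-means) groups the documents, and the highest-weighted terms per cluster can serve as the cluster labels the abstract mentions.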


Author(s):  
Seonho Kim ◽  
Jungjoon Kim ◽  
Hong-Woo Chun

Interest in research involving health and medical information analysis based on artificial intelligence, especially deep learning techniques, has recently been increasing. Most research in this field has focused on discovering new knowledge for predicting and diagnosing disease by revealing relations between diseases and various information features of the data. These features are extracted by analyzing various clinical pathology data, such as EHRs (electronic health records) and the academic literature, using techniques from data analysis, natural language processing, etc. However, more research and attention are still needed on applying the latest advanced artificial-intelligence-based data analysis techniques to bio-signal data, i.e. continuous physiological records such as EEG (electroencephalography) and ECG (electrocardiogram). Unlike other types of data, bio-signal data take the form of real-valued time series, and applying deep learning to them raises many issues in preprocessing, learning, and analysis that still need to be resolved: feature selection, black-box learning components, difficulty in recognizing and identifying effective features, high computational complexity, etc. In this paper, to address these issues, we propose an encoding-based Wave2vec time series classifier model that combines signal processing with deep-learning-based natural language processing techniques. To demonstrate its advantages, we present the results of three experiments conducted with EEG data from the University of California, Irvine, a real-world benchmark bio-signal dataset. Through encoding, the bio-signals (real-valued time series in the form of waves) are converted into a sequence of symbols, or into a sequence of wavelet patterns that are then converted into symbols, and the proposed model vectorizes the symbols by learning the sequences with deep-learning-based natural language processing.
Models for each class can then be constructed by learning from the vectorized wavelet patterns and training data, and the resulting models can be used for the prediction and diagnosis of diseases by classifying new data. The proposed method enhances data readability and makes feature selection and the learning process more intuitive by converting real-valued time series into sequences of symbols; it also facilitates the intuitive recognition and identification of influential patterns. Furthermore, by simplifying the data through the encoding process, it drastically reduces computational complexity without degrading analysis performance, which facilitates the real-time analysis of large volumes of data, essential for the development of real-time diagnostic systems.
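The encoding step the abstract describes, turning a real-valued signal into a text-like sequence of symbols, could look roughly like the following SAX-style discretization. This is a sketch of the general idea only; the breakpoints, alphabet, and function name are assumptions, not the published Wave2vec code.

```python
# Hypothetical sketch of symbolic encoding for a bio-signal: z-normalize the
# series and map each value to a symbol via fixed breakpoints, producing a
# sequence that NLP-style embedding models can then learn from.
import statistics

def encode(series, symbols="abcd"):
    """Convert a real-valued series into a string over the given alphabet."""
    mu = statistics.fmean(series)
    sigma = statistics.pstdev(series) or 1.0
    # Breakpoints approximating quartiles of a standard normal distribution,
    # splitting the normalized range into len(symbols) bins.
    breakpoints = [-0.67, 0.0, 0.67]
    out = []
    for x in series:
        z = (x - mu) / sigma
        idx = sum(z > b for b in breakpoints)  # count of breakpoints exceeded
        out.append(symbols[idx])
    return "".join(out)

signal = [0.1, 0.15, 0.9, 1.4, 1.2, 0.2, -0.5, -0.9]
print(encode(signal))
```

Once the signal is a string of symbols, each symbol (or symbol n-gram) plays the role of a "word", so word-embedding techniques such as word2vec can vectorize the patterns for downstream classification, which is the combination the paper's title refers to.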

