Correction: Razgon, M.; Mousavi, A. Relaxed Rule-Based Learning for Automated Predictive Maintenance: Proof of Concept. Algorithms 2020, 13, 219

Algorithms ◽  
2021 ◽  
Vol 14 (3) ◽  
pp. 86
Author(s):  
Margarita Razgon ◽  
Alireza Mousavi

The authors wish to make the following corrections to their paper [...]

Algorithms ◽  
2020 ◽  
Vol 13 (9) ◽  
pp. 219
Author(s):  
Margarita Razgon ◽  
Alireza Mousavi

In this paper we propose a novel approach to rule learning called Relaxed Separate-and-Conquer (RSC): a modification of the standard Separate-and-Conquer (SeCo) methodology that does not require elimination of covered rows. This method can be seen as a generalization of the SeCo and weighted covering methods that does not suffer from fragmentation. We present an empirical investigation of the proposed RSC approach in the area of Predictive Maintenance (PdM) of complex manufacturing machines, to predict forthcoming failures of these machines. In particular, we run experiments on a real industrial case study of a Continuous Compression Moulding (CCM) machine that manufactures plastic bottle closures (caps) in the beverage industry. We compare the RSC approach with a Decision Tree (DT) based algorithm and with SeCo algorithms, and demonstrate that RSC significantly outperforms both the DT based and the SeCo rule learners. We conclude that the proposed RSC approach is promising for PdM guided by rule learning.
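The abstract's core idea, keeping covered rows instead of deleting them, can be contrasted with the classic SeCo loop in a minimal sketch. The rule representation, the `find_best_rule` search, and the stopping condition below are placeholders for illustration, not the authors' actual procedure.

```python
# Sketch: classic Separate-and-Conquer (SeCo) vs. a relaxed variant in the
# spirit of RSC. SeCo deletes the rows each learned rule covers, which
# fragments the data; the relaxed loop searches the full data every time.

def seco(rows, find_best_rule, max_rules=10):
    """Classic SeCo: learn a rule, then remove the rows it covers."""
    rules, remaining = [], list(rows)
    while remaining and len(rules) < max_rules:
        rule = find_best_rule(remaining)
        if rule is None:
            break
        rules.append(rule)
        # The defining SeCo step: covered rows are eliminated.
        remaining = [row for row in remaining if not rule.covers(row)]
    return rules

def relaxed_seco(rows, find_best_rule, max_rules=10):
    """RSC-style loop: covered rows stay in the data, avoiding fragmentation."""
    rules, seen = [], set()
    while len(rules) < max_rules:
        rule = find_best_rule(rows)       # full data on every iteration
        if rule is None or rule in seen:  # stop once nothing new is found
            break
        seen.add(rule)
        rules.append(rule)
    return rules
```

In the relaxed loop some other mechanism (here, simply refusing duplicate rules) must ensure progress, since coverage no longer shrinks the data; the paper's actual mechanism may differ.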


10.29007/wjwz ◽  
2018 ◽  
Author(s):  
Nada Sharaf ◽  
Slim Abdennadher ◽  
Thom Fruehwirth ◽  
Daniel Gall

Computational psychology provides computational models exploring different aspects of cognition. A cognitive architecture includes the basic aspects of any cognitive agent and consists of different correlated modules. In general, cognitive architectures provide the layouts needed for building intelligent agents. The paper presents a rule-based approach to visually animate the simulations of models built with cognitive architectures. As a proof of concept, simulations in Adaptive Control of Thought-Rational (ACT-R) were animated. ACT-R is a well-known cognitive architecture. It has been deployed to create models in different fields including, among others, learning, problem solving and languages.


2015 ◽  
Vol 32 (6) ◽  
pp. 908-917 ◽  
Author(s):  
Goksel Misirli ◽  
Matteo Cavaliere ◽  
William Waites ◽  
Matthew Pocock ◽  
Curtis Madsen ◽  
...  

Abstract Motivation: Biological systems are complex and challenging to model, and therefore model reuse is highly desirable. To promote model reuse, models should include both information about the specifics of simulations and the underlying biology in the form of metadata. The availability of computationally tractable metadata is especially important for the effective automated interpretation and processing of models. Metadata are typically represented as machine-readable annotations which enhance programmatic access to information about models. Rule-based languages have emerged as a modelling framework to represent the complexity of biological systems. Annotation approaches have been widely used for reaction-based formalisms such as SBML. However, rule-based languages still lack a rich annotation framework to add semantic information, such as machine-readable descriptions, to the components of a model. Results: We present an annotation framework and guidelines for annotating rule-based models encoded in the commonly used Kappa and BioNetGen languages, adapting widely adopted annotation approaches to rule-based models. We initially propose a syntax to store machine-readable annotations and describe a mapping between rule-based modelling entities, such as agents and rules, and their annotations. We then describe an ontology to both annotate these models and capture the information contained therein, and demonstrate annotating these models using examples. Finally, we present a proof of concept tool for extracting annotations from a model so that they can be queried and analyzed in a uniform way. The uniform representation of the annotations can be used to facilitate the creation, analysis, reuse and visualization of rule-based models. Although the examples are given using specific implementations, the proposed techniques can be applied to rule-based models in general.
Availability and implementation: The annotation ontology for rule-based models can be found at http://purl.org/rbm/rbmo. The krdf tool and associated executable examples are available at http://purl.org/rbm/rbmo/krdf. Contact: [email protected] or [email protected]
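The extraction step described above, pulling structured annotations out of a model file into a uniform triple representation, can be illustrated with a short sketch. The `#^` comment convention and the CURIE forms below are invented for illustration and are not the exact rbmo/krdf syntax.

```python
import re

# Hypothetical sketch of the krdf idea: annotations for a rule-based model
# are embedded as structured comments and extracted as
# (subject, predicate, object) triples for uniform querying.

ANNOTATION = re.compile(r"^#\^\s+(\S+)\s+(\S+)\s+(.+?)\s*$")

def extract_triples(model_text):
    """Collect triples from annotation comments, ignoring model rules."""
    triples = []
    for line in model_text.splitlines():
        m = ANNOTATION.match(line.strip())
        if m:
            triples.append(m.groups())
    return triples

model = """\
#^ :A rdf:type rbmo:Agent
#^ :A bqbiol:is uniprot:P01234
'binding' A(x), B(y) -> A(x!1), B(y!1) @ 0.01
"""
print(extract_triples(model))
```

Once annotations are in triple form they can be loaded into any RDF store and queried with SPARQL, which is the uniformity the abstract emphasizes.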


The need for a conversion method arises from the limitation of manual conversion at design time whenever an interested party must perform assessments using an existing model checker tool. Manual conversion of the relevant requirements into the respective specification language is time-consuming, especially when the person has limited knowledge and needs to perform the task repeatedly with different sets of Service Level Agreement (SLA) configurations. This paper addresses the need to automatically capture non-functional requirements specified in the SLA, namely Service Level Objectives (SLOs), and convert them into a specific probabilistic temporal logic specification. We tackle this problem by proposing a conversion method that combines a rule-based and a template-based approach. The conversion method automatically extracts the required information from the SLA based on certain rules and uses the extracted information to replace the elements in a prepared template. We focus on the WS-Agreement language for SLAs and on probabilistic alternating-time temporal logic with rewards (rPATL) for the property specifications used in the PRISM-games model checker tool. We then implement an initial proof of concept of the conversion method to illustrate the applicability of translating between the targeted specifications.
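The template-filling step the paper describes, substituting extracted SLO values into a prepared rPATL property, can be sketched in a few lines. The SLO field name, the extraction result, and the particular property shape below are assumptions for illustration, not the paper's actual rules or templates.

```python
from string import Template

# Sketch of the template-based half of the conversion: a numeric Service
# Level Objective extracted from an SLA replaces the placeholder in a
# prepared rPATL property for PRISM-games. The coalition name, the label
# "slo_met", and the field "response_time_ms" are illustrative.

RPATL_TEMPLATE = Template('<<provider>> Pmax=? [ F<=$deadline "slo_met" ]')

def slo_to_rpatl(slo):
    """Fill the rPATL property template from an extracted SLO dictionary."""
    return RPATL_TEMPLATE.substitute(deadline=slo["response_time_ms"])

print(slo_to_rpatl({"response_time_ms": 250}))
# → <<provider>> Pmax=? [ F<=250 "slo_met" ]
```

The rule-based half of the method would populate the dictionary by matching elements of the WS-Agreement document; only the substitution step is shown here.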


2019 ◽  
Vol 10 (1) ◽  
pp. 224 ◽  
Author(s):  
Alberto Jimenez-Cortadi ◽  
Itziar Irigoien ◽  
Fernando Boto ◽  
Basilio Sierra ◽  
German Rodriguez

This paper presents the process required to implement data-driven Predictive Maintenance (PdM), covering not only machine decision making but also data acquisition and processing. A short review of the different approaches and techniques in maintenance is given. The main contribution of this paper is a solution to the predictive maintenance problem in a real machining process. Several steps are needed to reach the solution, and each is carefully explained. The obtained results show that the Preventive Maintenance (PM) carried out in a real machining process could be changed into a PdM approach. A decision-making application was developed to provide a visual analysis of the Remaining Useful Life (RUL) of the machining tool. This work is a proof of concept of the presented methodology in one process, but it is replicable for most serial-production machining processes.
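The decision-making step of such a PdM loop, turning a predicted RUL into an operator-facing action, can be sketched minimally. The threshold values and action names below are hypothetical, not taken from the paper.

```python
# Minimal sketch of the decision step in a data-driven PdM loop: a model
# estimates the Remaining Useful Life (RUL) of the machining tool, and
# thresholds map the estimate to an action. Thresholds are illustrative.

def maintenance_action(rul_hours, warn_at=20.0, stop_at=5.0):
    """Map a predicted RUL (in hours) to a maintenance action."""
    if rul_hours <= stop_at:
        return "replace tool now"
    if rul_hours <= warn_at:
        return "schedule maintenance"
    return "continue production"
```

In a Preventive Maintenance regime the replacement interval is fixed in advance; the PdM version above triggers only when the predicted RUL actually falls, which is the cost saving the paper targets.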


2021 ◽  
Vol 39 (15_suppl) ◽  
pp. 1557-1557
Author(s):  
Risa Liang Wong ◽  
Medha Sagar ◽  
Jacob Hoffman ◽  
Claire Huang ◽  
Angelica Lerma ◽  
...  

1557 Background: Patients with prostate cancer are diagnosed through a prostate needle biopsy (PNB). Information contained in PNB pathology reports is critical for informing clinical risk stratification and treatment; however, patient comprehension of PNB pathology reports is low, and formats vary widely by institution. Natural language processing (NLP) models trained to automatically extract key information from unstructured PNB pathology reports could be used to generate personalized educational materials for patients in a scalable fashion and expedite the process of collecting registry data or screening patients for clinical trials. As proof of concept, we trained and tested four NLP models for accuracy of information extraction. Methods: Using 403 positive PNB pathology reports from over 80 institutions, we converted portable document formats (PDFs) into text using the Tesseract optical character recognition (OCR) engine, removed protected health information using the Philter open-source tool, cleaned the text with rule-based methods, and annotated clinically relevant attributes as well as structural attributes relevant to information extraction using the Brat Rapid Annotation Tool. Text pre-processing for classification and extraction was done using Scispacy and rule-based methods. Using a 75:25 train:test split (N = 302, 101), we tested conditional random field (CRF), support vector machine (SVM), bidirectional long short-term memory network (Bi-LSTM), and Bi-LSTM-CRF models, reserving 46 training reports as a validation subset for the latter two models. Model-extracted variables were compared with values manually obtained from the unprocessed PDF reports for clinical accuracy. Results: Clinical accuracy of model-extracted variables is reported in the Table. CRF was the highest performing model, with accuracies of 97% for Gleason grade, 82% for percentage of positive cores (<50% vs. ≥50%), 90% for perineural or lymphovascular invasion, and 100% for presence of non-acinar carcinoma histology. On manual review of inaccurate results, model performance was limited by PDF image quality, errors in OCR processing of tables or columns, and practice variability in reporting number of biopsy cores. Conclusions: Our results demonstrate successful proof of concept for the use of NLP models in accurately extracting information from PNB pathology reports, though further optimization is needed before use in clinical practice. [Table: see text]
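The rule-based text handling mentioned in the Methods can be illustrated with a small extraction sketch. The regex and the report phrasing below are assumptions; the study's reported results come from trained CRF/SVM/Bi-LSTM models, not from a pattern like this alone.

```python
import re

# Illustrative rule-based extraction in the spirit of the pipeline's
# text-cleaning step: pull the Gleason score out of free-text pathology
# wording. The pattern is a simplification of real report variability.

GLEASON = re.compile(r"Gleason\s+(?:score\s+)?(\d)\s*\+\s*(\d)", re.IGNORECASE)

def extract_gleason(report_text):
    """Return (primary, secondary, total) Gleason patterns, or None."""
    m = GLEASON.search(report_text)
    if not m:
        return None
    primary, secondary = int(m.group(1)), int(m.group(2))
    return primary, secondary, primary + secondary
```

As the abstract notes, OCR noise and institutional formatting variation are exactly what break such hand-written rules, which motivates the trained sequence models.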


GigaScience ◽  
2020 ◽  
Vol 9 (2) ◽  
Author(s):  
Jerven Bolleman ◽  
Edouard de Castro ◽  
Delphine Baratin ◽  
Sebastien Gehant ◽  
Beatrice A Cuche ◽  
...  

Abstract Background Genome and proteome annotation pipelines are generally custom built and not easily reusable by other groups. This leads to duplication of effort, increased costs, and suboptimal annotation quality. One way to address these issues is to encourage the adoption of annotation standards and technological solutions that enable the sharing of biological knowledge and tools for genome and proteome annotation. Results Here we demonstrate one approach to generate portable genome and proteome annotation pipelines that users can run without recourse to custom software. This proof of concept uses our own rule-based annotation pipeline HAMAP, which provides functional annotation for protein sequences to the same depth and quality as UniProtKB/Swiss-Prot, and the World Wide Web Consortium (W3C) standards Resource Description Framework (RDF) and SPARQL (a recursive acronym for the SPARQL Protocol and RDF Query Language). We translate complex HAMAP rules into the W3C standard SPARQL 1.1 syntax, and then apply them to protein sequences in RDF format using freely available SPARQL engines. This approach supports the generation of annotation that is identical to that generated by our own in-house pipeline, using standard, off-the-shelf solutions, and is applicable to any genome or proteome annotation pipeline. Conclusions HAMAP SPARQL rules are freely available for download from the HAMAP FTP site, ftp://ftp.expasy.org/databases/hamap/sparql/, under the CC-BY-ND 4.0 license. The annotations generated by the rules are under the CC-BY 4.0 license. A tutorial and supplementary code to use HAMAP as SPARQL are available on GitHub at https://github.com/sib-swiss/HAMAP-SPARQL, and general documentation about HAMAP can be found on the HAMAP website at https://hamap.expasy.org.
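The rule-application idea, pairing a condition on a protein record with the annotation to emit when it matches (as the SPARQL translation pairs a WHERE pattern with a CONSTRUCT template), can be sketched conceptually. The field names, signature identifier, and accession below are invented for illustration; real HAMAP rules match curated signatures such as HAMAP family profiles.

```python
# Conceptual sketch of rule-based annotation in the HAMAP style: each rule
# is (condition, annotation). Applying the rules to protein records yields
# uniform (accession, key, value) triples, analogous to running the
# published SPARQL rules over protein sequences in RDF.

def apply_rules(proteins, rules):
    """Yield (accession, key, value) triples for every matching rule."""
    for protein in proteins:
        for condition, annotation in rules:
            if condition(protein):
                for key, value in annotation.items():
                    yield protein["accession"], key, value

# Illustrative rule: a hypothetical signature hit assigns a keyword.
rules = [
    (lambda p: "MF_EXAMPLE" in p["signatures"],
     {"keyword": "Transferase"}),
]
proteins = [{"accession": "P12345", "signatures": ["MF_EXAMPLE"]}]
print(list(apply_rules(proteins, rules)))
```

Expressing the rules as data rather than code is what makes the published approach portable: any SPARQL engine can evaluate them against RDF-formatted sequences without custom software.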

