Knowledge-Based Scoring Functions in Drug Design: 2. Can the Knowledge Base Be Enriched?

2010 ◽  
Vol 51 (2) ◽  
pp. 386-397 ◽  
Author(s):  
Qiancheng Shen ◽  
Bing Xiong ◽  
Mingyue Zheng ◽  
Xiaomin Luo ◽  
Cheng Luo ◽  
...  

2010 ◽  
Vol 50 (8) ◽  
pp. 1378-1386 ◽  
Author(s):  
Mengzhu Xue ◽  
Mingyue Zheng ◽  
Bing Xiong ◽  
Yanlian Li ◽  
Hualiang Jiang ◽  
...  


2021 ◽  
Author(s):  
Marciane Mueller ◽  
Rejane Frozza ◽  
Liane Mählmann Kipper ◽  
Ana Carolina Kessler

BACKGROUND: This article presents the modeling and development of a knowledge-based system supported by a virtual conversational agent called Dóris. Using natural language processing resources, Dóris collects the clinical data of patients receiving care in urgent and emergency hospital settings.

OBJECTIVE: The main objective is to validate the use of virtual conversational agents to collect, properly and accurately, the data needed to apply the evaluation flowcharts used to classify patients' degree of urgency and determine their priority for medical care.

METHODS: The agent's knowledge base was modeled, on the IBM Watson platform, using the rules provided in the evaluation flowcharts of the Manchester Triage System. This allows simple, objective and complete communication through dialogues that assess signs and symptoms according to the criteria of a standardized, validated and internationally recognized system.

RESULTS: In addition to verifying the applicability of artificial intelligence techniques in a complex health-care domain, the work presents a tool that helps not only to improve organizational processes but also to improve human relationships, bringing professionals and patients closer together.

CONCLUSIONS: Simulations carried out by a human specialist showed that a knowledge-based system supported by a virtual conversational agent is feasible for classifying risk and determining the priority of medical care for patients in urgent and emergency hospital settings.
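The rule-driven triage described above can be illustrated with a minimal sketch. The rules below are hypothetical placeholders, not the actual Manchester Triage System flowcharts or the Dóris dialogue logic; they only show the first-match, priority-ordered structure such a knowledge base takes.

```python
# Minimal rule-based triage sketch (hypothetical rules, not the real
# Manchester Triage System flowcharts). Rules are checked in priority
# order; the first match determines the urgency category.
RULES = [
    ("airway compromise", "Red (immediate)"),
    ("severe pain",       "Orange (very urgent)"),
    ("moderate pain",     "Yellow (urgent)"),
    ("recent mild pain",  "Green (standard)"),
]

def classify(symptoms):
    """Return the urgency category for a set of reported symptoms."""
    for symptom, category in RULES:
        if symptom in symptoms:
            return category
    return "Blue (non-urgent)"

print(classify({"severe pain", "recent mild pain"}))  # Orange (very urgent)
```

In a real deployment the symptom set would come from the conversational agent's natural-language dialogue rather than being passed in directly.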



Glycobiology ◽  
2018 ◽  
Vol 29 (2) ◽  
pp. 124-136 ◽  
Author(s):  
Juan I Blanco Capurro ◽  
Matias Di Paola ◽  
Marcelo Daniel Gamarra ◽  
Marcelo A Martí ◽  
Carlos P Modenutti

Abstract Unraveling the structure of lectin–carbohydrate complexes is vital for understanding key biological recognition processes and for the development of glycomimetic drugs. Applying molecular docking to predict them is challenging due to their low affinity, hydrophilic nature and ligand conformational diversity. In the last decade several strategies, such as glycan-conformation-specific scoring functions or our solvent-site-biased method, have improved carbohydrate docking performance, but significant challenges remain, in particular those related to receptor conformational diversity. In the present work we analyzed conventional and solvent-site-biased AutoDock4 performance with respect to receptor conformational diversity, as derived from different crystal structures (apo and holo), molecular dynamics snapshots and homology-based models, for 14 different lectin–monosaccharide complexes. Our results show that both conventional and biased docking yield accurate lectin–monosaccharide complexes starting from either apo or homology-based structures, even when only moderate (45%) sequence-identity templates are available. An essential element for success is a properly constructed, middle-sized (10–100 structures) conformational ensemble, derived either from molecular dynamics or from multiple homology model building. Consistent with our previous work, the results show that solvent-site-biased methods improve overall performance, but that results remain highly system dependent. Finally, our results also show that docking can select the correct receptor structure within the ensemble, underscoring the relevance of jointly evaluating both ligand pose and receptor conformation.
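The joint selection of receptor conformation and ligand pose over an ensemble can be sketched as picking the best-scored (conformation, pose) pair. The structure names and scores below are illustrative placeholders, not AutoDock4 output.

```python
# Sketch of joint (receptor conformation, ligand pose) selection over a
# docking ensemble. A more negative score means a better-ranked solution;
# keys and values here are made-up illustrations, not real docking results.
ensemble_scores = {
    ("apo_crystal",   "pose_1"): -4.2,
    ("md_snapshot_7", "pose_3"): -6.8,
    ("homology_45pc", "pose_2"): -5.1,
}

# The best-scored pair simultaneously picks a receptor structure from the
# ensemble and a ligand pose, as in the joint evaluation described above.
best_conformation, best_pose = min(ensemble_scores, key=ensemble_scores.get)
print(best_conformation, best_pose)  # md_snapshot_7 pose_3
```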



Author(s):  
Sarah Bouraga ◽  
Ivan Jureta ◽  
Stéphane Faulkner ◽  
Caroline Herssens

Knowledge-Based Recommendation (or Recommender) Systems (KBRS) provide the user with advice about a decision to make or an action to take. KBRS rely on knowledge provided by human experts, encoded in the system and applied to input data, in order to generate recommendations. This survey overviews the main ideas characterizing a KBRS. Using a classification framework, it reviews KBRS components, the user problems for which recommendations are given, the knowledge content of the system, and the degree of automation in producing recommendations.
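The core KBRS loop described here, expert-authored knowledge applied to input data to produce recommendations, can be sketched minimally. The rules and domain below are hypothetical.

```python
# Minimal knowledge-based recommender sketch: expert-authored rules
# (condition, advice) are applied to the user's input data, and every
# matching rule contributes a recommendation. Domain and rules are made up.
def recommend(user, rules):
    """Return the advice of every rule whose condition holds for `user`."""
    return [advice for condition, advice in rules if condition(user)]

EXPERT_RULES = [
    (lambda u: u["budget"] < 500,    "entry-level laptop"),
    (lambda u: u["use"] == "gaming", "discrete GPU"),
    (lambda u: u["travels_often"],   "lightweight chassis"),
]

user = {"budget": 450, "use": "gaming", "travels_often": False}
print(recommend(user, EXPERT_RULES))  # ['entry-level laptop', 'discrete GPU']
```

Real KBRS vary along exactly the axes the survey classifies: how the knowledge is encoded, what user problem it addresses, and how automated the final recommendation step is.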



2012 ◽  
Author(s):  
Liang Liu

[ACCESS RESTRICTED TO THE UNIVERSITY OF MISSOURI AT AUTHOR'S REQUEST.] RNA (ribonucleic acid) molecules play a variety of crucial roles in cellular functions at the level of transcription, translation and gene regulation. RNA function is tied to structure. We aim to develop a novel free-energy-based model for RNA structures, especially for RNA loops and junctions. In the first project, we develop a new conformational entropy model for RNA structures consisting of multiple helices connected by cross-linked loops. The basic strategy of our approach is to decompose the whole structure into a number of three-body building blocks, where each building block consists of a loop and the two helices directly connected to its ends. Assembly of the building blocks gives the entropy of the whole structure. The method provides a solid first step toward the systematic development of entropy and free energy models for complex tertiary folds of RNA and other biopolymers. In the second project, based on a survey of all known RNA structures, we derive a set of virtual-bond-based scoring functions for the different types of dinucleotides. To circumvent the problem of reference-state selection, we apply an iterative method to extract the effective potential, based on the complete conformational ensemble. With such a set of knowledge-based energy parameters, for a given sequence, we can successfully identify the native structure (the best-scored structure) from a set of structural decoys.
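The iterative, reference-state-free extraction of an effective potential can be sketched generically. The update rule below is the standard iterative-inversion form, correcting the potential by kT·ln(p_model/p_native); the dissertation's exact scheme, discretization and constants are not reproduced here.

```python
import math

# Generic iterative potential-refinement sketch in the spirit of the
# reference-state-free approach described above (not the thesis's exact
# update rule). Each discretized virtual-bond state's energy is corrected
# by kT * ln(p_model / p_native): states over-represented in the model
# ensemble are penalized, under-represented ones are rewarded.
KT = 0.6  # roughly kT in kcal/mol near 300 K; illustrative value

def refine(potential, p_native, p_model):
    return [u + KT * math.log(pm / pn)
            for u, pn, pm in zip(potential, p_native, p_model)]

p_native = [0.5, 0.3, 0.2]   # observed state frequencies (illustrative)
p_model  = [0.4, 0.4, 0.2]   # frequencies from the current model ensemble
u1 = refine([0.0, 0.0, 0.0], p_native, p_model)
# u1[0] drops (state under-sampled by the model), u1[1] rises (over-sampled),
# u1[2] is unchanged (frequencies already agree)
```

Iterating this update until the model frequencies match the observed ones yields the effective potential without ever choosing an explicit reference state.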



1990 ◽  
Vol 80 (6B) ◽  
pp. 1833-1851 ◽  
Author(s):  
Thomas C. Bache ◽  
Steven R. Bratt ◽  
James Wang ◽  
Robert M. Fung ◽  
Cris Kobryn ◽  
...  

Abstract The Intelligent Monitoring System (IMS) is a computer system for processing data from seismic arrays and simpler stations to detect, locate, and identify seismic events. The first operational version processes data from two high-frequency arrays (NORESS and ARCESS) in Norway. The IMS computers and functions are distributed between the NORSAR Data Analysis Center (NDAC) near Oslo and the Center for Seismic Studies (Center) in Arlington, Virginia. The IMS modules at NDAC automatically retrieve data from a disk buffer, detect signals, compute signal attributes (amplitude, slowness, azimuth, polarization, etc.), and store them in a commercial relational database management system (DBMS). IMS makes scheduled (e.g., hourly) transfers of the data to a separate DBMS at the Center. Arrival of new data automatically initiates a “knowledge-based system (KBS)” that interprets these data to locate and identify (earthquake, mine blast, etc.) seismic events. This KBS uses general and area-specific seismological knowledge represented in rules and procedures. For each event, unprocessed data segments (e.g., 7 min for regional events) are retrieved from NDAC for subsequent display and analyst review. The interactive analysis modules include integrated waveform and map display/manipulation tools for efficient analyst validation or correction of the solutions produced by the automated system. Another KBS compares the analyst and automatic solutions to mark overruled elements of the knowledge base. Performance analysis statistics guide subsequent changes to the knowledge base so it improves with experience. The IMS is implemented on networked Sun workstations, with a 56 kbps satellite link bridging the NDAC and Center computer networks. The software architecture is modular and distributed, with processes communicating by messages and sharing data via the DBMS. 
The IMS processing requirements are easily met with major processes (i.e., signal processing, KBS, and DBMS) on separate Sun 4/2xx workstations. This architecture facilitates expansion in functionality and number of stations. The first version was operated continuously for 8 weeks in late 1989. The Center functions were then transferred to NDAC for subsequent operation. Later versions will be distributed among NDAC, Scripps/IGPP (San Diego), and the Center to process data from many stations and arrays. The IMS design is ambitious in its integration of many new computer technologies, but the operational performance of the first version demonstrates its validity. Thus, IMS provides a new generation of automated seismic event monitoring capability.
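The IMS data flow, signal-processing modules storing arrival attributes in a relational DBMS whose new rows trigger knowledge-based interpretation, can be sketched with an in-memory database. The schema, column names and values are illustrative, not the actual IMS tables.

```python
import sqlite3

# Sketch of the IMS pattern: modules share data through a relational DBMS,
# and arrival of new attribute rows drives the interpretation step.
# Schema and values are illustrative, not the real IMS database design.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE arrival (station TEXT, amplitude REAL,"
           " slowness REAL, azimuth REAL)")

def interpret(row):
    """Stand-in for the KBS that locates/identifies events from attributes."""
    station, amplitude, slowness, azimuth = row
    return f"event candidate from {station} at azimuth {azimuth}"

# A signal-processing module writes computed attributes...
db.execute("INSERT INTO arrival VALUES ('NORESS', 1.8, 0.06, 312.0)")
# ...and the interpretation module is run over the newly stored rows.
for row in db.execute("SELECT * FROM arrival"):
    print(interpret(row))
```

Decoupling the modules through the database, rather than direct calls, is what lets IMS distribute signal processing, KBS, and analyst tools across networked workstations.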



Author(s):  
Samir Rohatgi ◽  
James H. Oliver ◽  
Stuart S. Chen

Abstract This paper describes the development of OPGEN (Opportunity Generator), a computer-based system that helps identify areas where a knowledge-based system (KBS) might be beneficial, and evaluates whether a suitable system could be developed in each area. The core of the system is a knowledge base used to carry out the identification and evaluation functions. Ancillary functions serve to introduce and demonstrate KBS technology and enhance the overall effectiveness of the system. All aspects of the development, from knowledge acquisition through testing, are presented in this paper.
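An evaluation step like OPGEN's can be sketched as a weighted checklist over feasibility criteria. The criteria and weights below are hypothetical illustrations of commonly cited KBS suitability factors, not OPGEN's actual knowledge base.

```python
# Hypothetical KBS-suitability checklist in the spirit of OPGEN's
# evaluation function. Criteria and weights are illustrative, not from
# the paper's knowledge base.
CRITERIA = {
    "expert_available":      3,  # a cooperative domain expert exists
    "task_is_heuristic":     2,  # expertise is rules of thumb, not math
    "solutions_explainable": 2,  # users need justified recommendations
    "data_accessible":       1,  # required inputs can be captured
}

def suitability(answers):
    """answers: dict criterion -> bool; returns (score, max_score)."""
    score = sum(w for c, w in CRITERIA.items() if answers.get(c))
    return score, sum(CRITERIA.values())

print(suitability({"expert_available": True, "task_is_heuristic": True}))
# (5, 8)
```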



Author(s):  
Ming Dong ◽  
Jianzhong Cha ◽  
Mingcheng E

Abstract In this paper, we realize the representation, reasoning and implementation of knowledge-based discrete-event simulation models by means of an object-oriented (OO) frame language. First, a class library of simulation models is built using the OO frame language. Then, the behaviours of simulation models are generated by inference engines reasoning over the knowledge base. Finally, activity cycle diagrams are used to construct simulation network logic models by connecting the component classes of simulation models. Such knowledge-based simulation models can effectively address the modeling of complex, ill-structured systems.
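The frame-language class library described above can be sketched with a minimal frame type: each frame holds named slots and inherits missing slot values from a parent frame. The frame and slot names are illustrative, not the paper's actual library.

```python
# Minimal object-oriented "frame" sketch for knowledge-based simulation
# models: a frame holds slots (attributes) and falls back to its parent
# frame for any slot it does not define, mirroring a frame-language class
# library. Names and slots here are illustrative.
class Frame:
    def __init__(self, name, parent=None, **slots):
        self.name, self.parent, self.slots = name, parent, slots

    def get(self, slot):
        if slot in self.slots:
            return self.slots[slot]
        if self.parent is not None:
            return self.parent.get(slot)  # inherit from the parent frame
        raise KeyError(slot)

model   = Frame("SimulationModel", clock=0.0)
machine = Frame("Machine", parent=model, state="idle")
print(machine.get("state"), machine.get("clock"))  # idle 0.0
```

An inference engine would then read and update such slots to generate model behaviour, and activity cycle diagrams would connect instances of these component classes into a simulation network.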


