Automated verification, assembly, and extension of GBM stem cell network model with knowledge from literature and data

2021 ◽  
Author(s):  
Emilee Holtzapple ◽  
Brent Cochran ◽  
Natasa Miskov-Zivanov

Signaling network models are usually assembled from information in the literature and expert knowledge, or inferred from data. The goal of modeling is to gain mechanistic understanding of key signaling pathways and to provide predictions on how perturbations affect large-scale processes such as disease progression. For glioblastoma multiforme (GBM), this task is critical, given the lack of effective treatments and the pace of disease progression. Both manual and automated assembly of signaling networks from data or literature have drawbacks. Existing GBM networks, as well as networks assembled using state-of-the-art machine reading, fall short when judged by the quality and quantity of information, as well as certain attributes of the overall network structure. The contributions of this work are two-fold. First, we propose an automated methodology for verification of signaling networks. Next, we discuss automation of network assembly and extension that relies on the methods and resources used for network verification, thus implicitly including verification in these processes. In addition to these methods, we also present and verify a comprehensive GBM network assembled with a hybrid of manual and automated methods. Finally, we demonstrate that, while automated network assembly is fast, such networks still lack precision and realistic network topology.

Author(s):  
Min Tang ◽  
Jiaran Cai ◽  
Hankz Hankui Zhuo

Multiple-choice machine reading comprehension is an important and challenging task in which the machine is required to select the correct answer from a set of candidates given a passage and a question. Existing approaches either match extracted evidence with candidate answers shallowly or model the passage, question, and candidate answers with a single paradigm of matching. In this paper, we propose the Multi-Matching Network (MMN), which models the semantic relationship among passage, question, and candidate answers from multiple different paradigms of matching. In our MMN model, each paradigm is inspired by how humans think and is designed under a unified compose-match framework. To demonstrate the effectiveness of our model, we evaluate MMN on a large-scale multiple-choice machine reading comprehension dataset (i.e., RACE). Empirical results show that our proposed model achieves a significant improvement over strong baselines and obtains state-of-the-art results.
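
As a rough illustration of the compose-match idea, the sketch below implements one hypothetical matching paradigm in PyTorch: a candidate answer attends over the passage, match features are composed by a small MLP, and a scalar matching score is produced. The module name, dimensions, and operators are illustrative assumptions, not the paper's released architecture.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ComposeMatch(nn.Module):
    """One hypothetical compose-match paradigm: the candidate answer
    attends over the passage, match features are composed, then scored."""
    def __init__(self, hidden: int = 128):
        super().__init__()
        self.compose = nn.Sequential(nn.Linear(4 * hidden, hidden), nn.ReLU())
        self.score = nn.Linear(hidden, 1)

    def forward(self, passage, answer):
        # passage: (batch, p_len, hidden), answer: (batch, a_len, hidden)
        att = torch.bmm(answer, passage.transpose(1, 2))     # (batch, a_len, p_len)
        ctx = torch.bmm(F.softmax(att, dim=-1), passage)     # passage context per answer token
        # match features: concatenation, difference, elementwise product
        matched = torch.cat([answer, ctx, answer - ctx, answer * ctx], dim=-1)
        composed = self.compose(matched).max(dim=1).values   # pool over answer tokens
        return self.score(composed).squeeze(-1)              # (batch,) matching score
```

In the full model, scores from several such paradigms (e.g., passage-question and question-answer matching) would be aggregated, with a softmax over the candidate answers selecting the prediction.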


2021 ◽  
pp. 1-19
Author(s):  
Kiet Van Nguyen ◽  
Nhat Duy Nguyen ◽  
Phong Nguyen-Thuan Do ◽  
Anh Gia-Tuan Nguyen ◽  
Ngan Luu-Thuy Nguyen

Machine Reading Comprehension has attracted significant interest in research on natural language understanding, and large-scale datasets and neural network-based methods have been developed for this task. However, most developments of resources and methods in machine reading comprehension have been investigated using two resource-rich languages, English and Chinese. This article proposes a system called ViReader for open-domain machine reading comprehension in Vietnamese that uses Wikipedia as the textual knowledge source, where the answer to any particular question is a textual span derived directly from texts on Vietnamese Wikipedia. Our system combines a sentence retriever component, based on information retrieval techniques, to extract the relevant sentences, with a transfer learning-based answer extractor trained to predict answers from Wikipedia texts. Experiments on multiple datasets for machine reading comprehension in Vietnamese and other languages demonstrate that (1) our ViReader system is highly competitive with prevalent machine learning-based systems and (2) the combination of the sentence retriever and the answer extractor forms an end-to-end reading comprehension system. The sentence retriever component retrieves the sentences that are most likely to contain the answer to the given question. The transfer learning-based answer extractor then reads the document from which the sentences have been retrieved, predicts the answer, and returns it to the user. The ViReader system achieves new state-of-the-art performance, with 70.83% EM (exact match) and 89.54% F1, outperforming the BERT-based system by 11.55% and 9.54%, respectively. It also obtains state-of-the-art performance on UIT-ViNewsQA (another Vietnamese dataset consisting of online health-domain news) and BiPaR (a bilingual dataset of English and Chinese novel texts). Compared with the BERT-based system, our system achieves significant improvements (in terms of F1) of 7.65% for English and 6.13% for Chinese on the BiPaR dataset. Furthermore, we build a ViReader application programming interface that programmers can employ in Artificial Intelligence applications.
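
A minimal sketch of the retrieve-then-read pipeline described above, assuming a TF-IDF sentence retriever and the generic Hugging Face question-answering pipeline as a stand-in for ViReader's transfer learning-based extractor (which uses Vietnamese-specific weights); all names here are illustrative:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity
from transformers import pipeline

def retrieve_sentences(question: str, sentences: list[str], k: int = 3) -> list[str]:
    """Rank corpus sentences by TF-IDF cosine similarity to the question."""
    vec = TfidfVectorizer()
    mat = vec.fit_transform(sentences + [question])
    sims = cosine_similarity(mat[-1], mat[:-1]).ravel()
    top = sims.argsort()[::-1][:k]
    return [sentences[i] for i in top]

# Generic extractive QA model as a placeholder reader; ViReader itself
# relies on transfer learning with Vietnamese-specific weights.
reader = pipeline("question-answering")

def answer(question: str, sentences: list[str]) -> str:
    context = " ".join(retrieve_sentences(question, sentences))
    return reader(question=question, context=context)["answer"]
```

The retriever keeps the reader's input short by passing only the top-k sentences, which is what makes the two components usable together as an end-to-end system.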


Database ◽  
2020 ◽  
Vol 2020 ◽  
Author(s):  
Emilee Holtzapple ◽  
Cheryl A Telmer ◽  
Natasa Miskov-Zivanov

Abstract State-of-the-art machine reading methods extract, in hours, hundreds of thousands of events from the biomedical literature. However, many of the extracted biomolecular interactions are incorrect or not relevant for computational modeling of a system of interest. Therefore, rapid, automated methods are required to filter and select accurate and useful information. The FiLter for Understanding True Events (FLUTE) tool uses public protein interaction databases to filter interactions that machine readers have extracted from the literature in databases such as PubMed, and to score them for accuracy. Confidence in the interactions allows for rapid and accurate model assembly. As our results show, FLUTE can reliably determine the confidence in the biomolecular interactions extracted by fast machine readers and at the same time provide a speedup in interaction filtering by three orders of magnitude. Database URL: https://bitbucket.org/biodesignlab/flute.
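
A minimal sketch of the kind of database-backed filtering FLUTE performs, assuming a pre-loaded dictionary of known protein pairs with confidence scores; the pairs, scores, and threshold below are illustrative placeholders, not FLUTE's actual data or API:

```python
# Known interactions as unordered protein pairs with confidence scores,
# as one might load them from a public database such as STRING.
KNOWN_INTERACTIONS = {
    frozenset(("EGFR", "GRB2")): 0.99,
    frozenset(("TP53", "MDM2")): 0.98,
}

def filter_events(extracted_events, min_score=0.7):
    """Keep machine-read events whose participant pair appears in the
    reference database with sufficient confidence."""
    kept = []
    for source, target in extracted_events:
        score = KNOWN_INTERACTIONS.get(frozenset((source, target)), 0.0)
        if score >= min_score:
            kept.append((source, target, score))
    return kept

print(filter_events([("EGFR", "GRB2"), ("EGFR", "ACTB")]))
# -> [('EGFR', 'GRB2', 0.99)]; the unsupported pair is discarded
```

Set lookups like this are what make the filtering orders of magnitude faster than re-checking each extracted event against the literature.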


The success of the Program of housing stock renovation in Moscow depends on the efficiency of resource management. One of the main urban planning documents that determine the nature of the reorganization of residential areas included in the renovation Program is the territory planning project. The implementation of a planning project is a complex process with a defined start and end that comprises a set of interdependent parallel and sequential activities. From an organizational point of view, it is convenient to use network planning and management methods for project implementation. These methods are based on the construction of network models, including one of their variants, the Gantt chart. A special application has been developed to simulate the implementation of planning projects. The article describes the basic principles and elements of the modeling, presents the list of the main implementation parameters of the renovation Program obtained with the developed modeling software, and proposes options for using these results in a comprehensive analysis of the implementation of large-scale urban projects.
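
For illustration, the forward pass of the critical path method, the computation that underlies such network models and their Gantt-chart views, can be sketched in a few lines; the activities and durations below are hypothetical, not taken from the renovation Program:

```python
# Each activity maps to (duration in months, list of predecessor activities).
# Entries are listed in topological order, so a single forward pass suffices.
activities = {
    "survey":       (3, []),
    "design":       (6, ["survey"]),
    "demolition":   (4, ["survey"]),
    "construction": (18, ["design", "demolition"]),
}

earliest_start = {}
for name, (duration, preds) in activities.items():
    # An activity can start once all of its predecessors have finished.
    earliest_start[name] = max(
        (earliest_start[p] + activities[p][0] for p in preds), default=0
    )

project_end = max(earliest_start[a] + activities[a][0] for a in activities)
print(earliest_start, project_end)
# construction starts at month 9; the project completes at month 27
```

The earliest-start times computed this way are exactly the bar positions on a Gantt chart, and the longest chain of dependencies (the critical path) determines the overall duration of the project.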


1986 ◽  
Author(s):  
Simon S. Kim ◽  
Mary Lou Maher ◽  
Raymond E. Levitt ◽  
Martin F. Rooney ◽  
Thomas J. Siller

2018 ◽  
Vol 14 (12) ◽  
pp. 1915-1960 ◽  
Author(s):  
Rudolf Brázdil ◽  
Andrea Kiss ◽  
Jürg Luterbacher ◽  
David J. Nash ◽  
Ladislava Řezníčková

Abstract. The use of documentary evidence to investigate past climatic trends and events has become a recognised approach in recent decades. This contribution presents the state of the art in its application to droughts. The range of documentary evidence is very wide, including general annals, chronicles, memoirs and diaries kept by missionaries, travellers and those specifically interested in the weather; records kept by administrators tasked with keeping accounts and other financial and economic records; legal-administrative evidence; religious sources; letters; songs; newspapers and journals; pictographic evidence; chronograms; epigraphic evidence; early instrumental observations; society commentaries; and compilations and books. These are available from many parts of the world. This variety of documentary information is evaluated with respect to the reconstruction of hydroclimatic conditions (precipitation, drought frequency and drought indices). Documentary-based drought reconstructions are then addressed in terms of long-term spatio-temporal fluctuations, major drought events, relationships with external forcing and large-scale climate drivers, socio-economic impacts and human responses. Documentary-based drought series are also considered from the viewpoint of spatio-temporal variability for certain continents, and their employment together with hydroclimate reconstructions from other proxies (in particular tree rings) is discussed. Finally, conclusions are drawn, and challenges for the future use of documentary evidence in the study of droughts are presented.


2021 ◽  
Vol 7 (3) ◽  
pp. 50
Author(s):  
Anselmo Ferreira ◽  
Ehsan Nowroozi ◽  
Mauro Barni

The possibility of carrying out a meaningful forensic analysis of printed and scanned images plays a major role in many applications. First of all, printed documents are often associated with criminal activities, such as terrorist plans, child pornography, and even fake packages. Additionally, printing and scanning can be used to hide the traces of image manipulation or the synthetic nature of images, since the artifacts commonly found in manipulated and synthetic images are gone after the images are printed and scanned. A problem hindering research in this area is the lack of large-scale reference datasets for algorithm development and benchmarking. Motivated by this issue, we present a new dataset composed of a large number of synthetic and natural printed face images. To highlight the difficulties associated with analyzing the images in the dataset, we carried out an extensive set of experiments comparing several printer attribution methods. We also verified that state-of-the-art methods for distinguishing natural from synthetic face images fail when applied to printed and scanned images. We envision that the availability of the new dataset and the preliminary experiments we carried out will motivate and facilitate further research in this area.


2021 ◽  
Vol 40 (3) ◽  
pp. 1-13
Author(s):  
Lumin Yang ◽  
Jiajie Zhuang ◽  
Hongbo Fu ◽  
Xiangzhi Wei ◽  
Kun Zhou ◽  
...  

We introduce SketchGNN, a convolutional graph neural network for semantic segmentation and labeling of freehand vector sketches. We treat an input stroke-based sketch as a graph, with nodes representing the points sampled along input strokes and edges encoding the stroke structure information. To predict the per-node labels, our SketchGNN uses graph convolution and a static-dynamic branching network architecture to extract features at three levels, i.e., point-level, stroke-level, and sketch-level. SketchGNN significantly improves the accuracy of the state-of-the-art methods for semantic sketch segmentation (by 11.2% in the pixel-based metric and 18.2% in the component-based metric on the large-scale challenging SPG dataset) and has orders of magnitude fewer parameters than both image-based and sequence-based methods.
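
The sketch-as-graph representation can be made concrete with a small example: sampled stroke points become nodes, consecutive points on a stroke become edges, and a single mean-aggregation graph convolution mixes neighbor features. The layer below is a generic GCN-style step in NumPy, a hedged stand-in rather than the paper's exact static-dynamic operator.

```python
import numpy as np

strokes = [np.array([[0.0, 0.0], [1.0, 0.0], [2.0, 0.1]]),  # one polyline stroke
           np.array([[0.0, 1.0], [0.5, 1.5]])]              # a second stroke

points = np.vstack(strokes)                                  # node features: xy coords
edges, offset = [], 0
for s in strokes:                                            # connect consecutive samples
    edges += [(offset + i, offset + i + 1) for i in range(len(s) - 1)]
    offset += len(s)

n = len(points)
adj = np.eye(n)                                              # self-loops
for i, j in edges:
    adj[i, j] = adj[j, i] = 1.0
adj /= adj.sum(axis=1, keepdims=True)                        # mean aggregation

W = np.random.randn(2, 8) * 0.1                              # learnable weights (random here)
node_feats = np.maximum(adj @ points @ W, 0.0)               # one ReLU graph-conv layer
print(node_feats.shape)                                      # (5, 8) per-node features
```

Stacking such layers, and pooling node features per stroke and per sketch, yields the point-, stroke-, and sketch-level features that a per-node classifier then labels.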


Author(s):  
Anil S. Baslamisli ◽  
Partha Das ◽  
Hoang-An Le ◽  
Sezer Karaoglu ◽  
Theo Gevers

Abstract In general, intrinsic image decomposition algorithms interpret shading as one unified component that includes all photometric effects. As shading transitions are generally smoother than reflectance (albedo) changes, these methods may fail to distinguish strong photometric effects from reflectance variations. Therefore, in this paper, we propose to decompose the shading component into direct (illumination) and indirect (ambient light and shadows) shading subcomponents. The aim is to distinguish strong photometric effects from reflectance variations. An end-to-end deep convolutional neural network (ShadingNet) is proposed that operates in a fine-to-coarse manner with a specialized fusion and refinement unit exploiting the fine-grained shading model. It is designed to learn specific reflectance cues separated from specific photometric effects in order to analyze the disentanglement capability. A large-scale dataset of scene-level synthetic images of outdoor natural environments is provided with fine-grained intrinsic image ground truths. Large-scale experiments show that our approach using fine-grained shading decompositions outperforms state-of-the-art algorithms utilizing unified shading on the NED, MPI Sintel, GTA V, IIW, MIT Intrinsic Images, 3DRMS, and SRD datasets.
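
The fine-grained image formation model can be written down in a few lines: the image is albedo times shading, with shading split into direct and indirect terms. The NumPy sketch below uses random placeholder arrays standing in for a network's predicted components; shapes and names are illustrative.

```python
import numpy as np

h, w = 4, 4
albedo = np.random.rand(h, w, 3)           # reflectance
direct = np.random.rand(h, w, 1)           # direct illumination shading
indirect = np.random.rand(h, w, 1) * 0.3   # ambient light and shadows

shading = direct + indirect                # unified shading, as in prior work
image = albedo * shading                   # I = A * (S_direct + S_indirect)

# A network like ShadingNet is trained so that its predicted components
# reconstruct the input image under this factorized model.
assert np.allclose(image, albedo * (direct + indirect))
```

Predicting direct and indirect shading separately is what lets the network attribute a hard shadow edge to shading rather than mistaking it for a reflectance change.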


2020 ◽  
Vol 499 (2) ◽  
pp. 2934-2958
Author(s):  
A Richard-Laferrière ◽  
J Hlavacek-Larrondo ◽  
R S Nemmen ◽  
C L Rhea ◽  
G B Taylor ◽  
...  

ABSTRACT A variety of large-scale diffuse radio structures have been identified in many clusters with the advent of new state-of-the-art facilities in radio astronomy. Among these diffuse radio structures, radio mini-halos are found in the central regions of cool core clusters. Their origin is still unknown and they are challenging to discover; fewer than 30 have been published to date. Based on new VLA observations, we confirmed the mini-halo in the massive strong cool core cluster PKS 0745−191 (z = 0.1028) and discovered one in the massive cool core cluster MACS J1447.4+0827 (z = 0.3755). Furthermore, using a detailed analysis of all known mini-halos, we explore the relation between mini-halos and active galactic nucleus (AGN) feedback processes from the central galaxy. When spectrally decomposing the AGN radio emission into a component for past outbursts and one for ongoing accretion, we find evidence of strong, previously unknown correlations between mini-halo radio power and X-ray cavity power, and between mini-halo radio power and the radio power of the central galaxy related to its relativistic jets. Overall, our study indicates that mini-halos are directly connected to the central AGN in clusters, following previous suppositions. We hypothesize that AGN feedback may be one of the dominant mechanisms giving rise to mini-halos by injecting energy into the intra-cluster medium and reaccelerating an old population of particles, while sloshing motion may drive the overall shape of mini-halos inside cold fronts. AGN feedback may therefore not only play a vital role in offsetting cooling in cool core clusters, but may also play a fundamental role in re-energizing non-thermal particles in clusters.
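
A correlation like the one reported between mini-halo radio power and X-ray cavity power is typically quantified by fitting a power law in log-log space. The short sketch below shows that procedure on synthetic placeholder points; the values and slope are illustrative, not the paper's measurements.

```python
import numpy as np

# Synthetic placeholder sample: cavity powers and radio powers generated
# from an assumed power law with lognormal scatter, for demonstration only.
p_cavity = np.array([1e43, 5e43, 2e44, 8e44, 3e45])                 # erg/s
p_radio = 1e40 * (p_cavity / 1e44) ** 0.8 * np.random.lognormal(0, 0.2, 5)

# Linear fit in log-log space recovers the power-law index.
slope, intercept = np.polyfit(np.log10(p_cavity), np.log10(p_radio), 1)
print(f"P_radio ~ P_cavity^{slope:.2f}")                            # near 0.8
```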

