Automatic task graph generation techniques

Author(s):  
M. Cosnard ◽  
M. Loi

1995 ◽  
Vol 05 (04) ◽  
pp. 527-538

We present a model of parallel computation, the parameterized task graph, which is a compact, problem-size-independent representation of some frequently used directed acyclic task graphs. We propose techniques that automate the construction of such a representation from an annotated sequential program. We show that many important properties of the task graph, such as the computational load of the nodes and the communication volume of the edges, can be deduced automatically in a problem-size-independent way.
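The key idea above is that the graph is stored as parameterized task families and dependency *rules*, so its size does not grow with the problem size; a concrete DAG is only instantiated when a size is chosen. A minimal illustrative sketch (the task family, rule, and load function here are hypothetical examples, not the paper's construction):

```python
# Hypothetical sketch of a parameterized task graph: the representation
# below is constant-size, while the expanded DAG depends on n.

# Task family T(i), i = 1..n, with the rule "T(i) depends on T(i-1)",
# and a per-node computational load given symbolically as a function of i.
def load(i):
    return 2 * i  # illustrative load expression, parameterized by i

def expand(n):
    """Instantiate the parameterized graph for a concrete problem size n."""
    nodes = [("T", i) for i in range(1, n + 1)]
    edges = [(("T", i - 1), ("T", i)) for i in range(2, n + 1)]
    return nodes, edges

nodes, edges = expand(4)
print(len(nodes), len(edges))  # 4 3
print(load(3))                 # 6
```

Note that properties such as node loads can be answered from the rule alone (here, `load(i)`), without ever expanding the graph, which is what makes the representation problem-size independent.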






2021 ◽  
Vol 10 (7) ◽  
pp. 488
Author(s):  
Peng Li ◽  
Dezheng Zhang ◽  
Aziguli Wulamu ◽  
Xin Liu ◽  
Peng Chen

A deep understanding of our visual world involves more than the isolated perception of a series of objects; the relationships between them also carry rich semantic information. This is especially true for satellite remote sensing images, whose spans are so large that objects vary widely in size and form complex spatial compositions. Recognizing semantic relations therefore strengthens the understanding of remote sensing scenes. In this paper, we propose a novel multi-scale semantic fusion network (MSFN). In this framework, dilated convolution is introduced into a graph convolutional network (GCN) based on an attention mechanism to fuse and refine multi-scale semantic context, which is crucial to strengthening the cognitive ability of our model. Besides, based on the mapping between visual features and semantic embeddings, we design a sparse relationship extraction module to remove meaningless connections among entities and improve the efficiency of scene graph generation. Meanwhile, to further promote research on scene understanding in the remote sensing field, this paper also proposes a remote sensing scene graph dataset (RSSGD). We carry out extensive experiments, and the results show that our model significantly outperforms previous methods on scene graph generation. In addition, RSSGD effectively bridges the huge semantic gap between low-level perception and high-level cognition of remote sensing images.
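The sparse relationship extraction step can be pictured as follows: project entity visual features into the semantic embedding space, then keep only entity pairs whose embeddings are similar enough to plausibly be related, so the scene graph stage scores far fewer pairs. A minimal sketch with NumPy (the function name, learned mapping `W`, and threshold are illustrative assumptions, not the paper's exact module):

```python
import numpy as np

# Hypothetical sketch of sparse relationship extraction: prune entity
# pairs by similarity in the semantic embedding space before relation
# classification. All names and the threshold are illustrative.

def sparse_pairs(visual_feats, W, threshold=0.5):
    """visual_feats: (N, d_v) entity features; W: (d_v, d_s) learned mapping."""
    emb = visual_feats @ W                              # map to semantic space
    emb /= np.linalg.norm(emb, axis=1, keepdims=True)   # unit-normalize rows
    sim = emb @ emb.T                                   # cosine similarity
    np.fill_diagonal(sim, -1.0)                         # exclude self-relations
    i, j = np.nonzero(sim > threshold)                  # drop weak pairs
    return list(zip(i.tolist(), j.tolist()))

rng = np.random.default_rng(0)
feats = rng.normal(size=(5, 8))
W = rng.normal(size=(8, 4))
pairs = sparse_pairs(feats, W)
print(len(pairs))
```

Instead of scoring all N·(N-1) ordered pairs, the relation head would only run on the surviving pairs, which is where the claimed efficiency gain comes from.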

