Lazy Set-Sharing Analysis

Author(s):  
Xuan Li ◽  
Andy King ◽  
Lunjin Lu
2014 ◽  
Vol 529 ◽  
pp. 359-363
Author(s):  
Xi Lei Huang ◽  
Mao Xiang Yi ◽  
Lin Wang ◽  
Hua Guo Liang

A novel concurrent core test approach is proposed to reduce the test cost of SoCs. Before testing, a novel test set sharing strategy is used to obtain a minimum-size merged test set by merging the test sets of the cores under test (CUTs). Moreover, the strategy can be used in conjunction with general compression/decompression techniques to further reduce test data volume (TDV). During testing, a vector separating device composed of a set of simple combinational logic circuits (CLCs) separates each vector from the merged test set and routes it to the corresponding core. The approach adds no test vectors for any core and allows the cores to be tested synchronously, reducing test application time (TAT). Experimental results on the ISCAS'89 benchmarks demonstrate the efficiency of the proposed approach.
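The enabling observation behind such merging is that test cubes with don't-care bits ('X') can be combined whenever their specified bits agree. The sketch below is a rough illustration only, a greedy pairwise heuristic over invented four-bit cubes, not the paper's actual merging algorithm.

# Illustrative greedy merging of test cubes with don't-care bits ('X').
# Not the paper's algorithm; cubes and the heuristic are invented.

def compatible(a: str, b: str) -> bool:
    """Two cubes are compatible if they agree on every specified bit."""
    return all(x == y or x == 'X' or y == 'X' for x, y in zip(a, b))

def merge(a: str, b: str) -> str:
    """Merge two compatible cubes, resolving 'X' against specified bits."""
    return ''.join(y if x == 'X' else x for x, y in zip(a, b))

def merge_test_sets(cubes: list[str]) -> list[str]:
    """Greedily fold each cube into the first compatible merged cube."""
    merged: list[str] = []
    for cube in cubes:
        for i, m in enumerate(merged):
            if compatible(cube, m):
                merged[i] = merge(cube, m)
                break
        else:
            merged.append(cube)
    return merged

print(merge_test_sets(['1X0X', '110X', '0XX1', 'X101']))
# -> ['1101', '0XX1']: four cubes shrink to two merged vectors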


2021 ◽  
Vol 99 (3-4) ◽  
pp. 371-388
Author(s):  
Pulak Sahoo ◽  
Samar Halder

Author(s):  
Mario Méndez-Lojo ◽  
Ondřej Lhoták ◽  
Manuel V. Hermenegildo

2020 ◽  
Vol 34 (07) ◽  
pp. 10494-10501
Author(s):  
Tingjia Cao ◽  
Ke Han ◽  
Xiaomei Wang ◽  
Lin Ma ◽  
Yanwei Fu ◽  
...  

This paper studies the task of image captioning with novel objects, which exist only in test images. Intrinsically, this task reflects the generalization ability of models in understanding and captioning the semantic meanings of visual concepts and objects unseen in the training set, sharing similarities with one/zero-shot learning. The critical difficulty thus comes from the fact that no paired images and sentences of the novel objects are available to help train the captioning model. Inspired by recent work (Chen et al. 2019b) that boosts one-shot learning by learning to generate various image deformations, we propose learning meta-networks that deform features for novel object captioning. To this end, we introduce feature deformation meta-networks (FDM-net), which are trained on source data and learn to adapt to the novel object features detected by an auxiliary detection model. FDM-net includes two sub-nets, feature deformation and scene graph sentence reconstruction, which produce augmented image features and corresponding sentences, respectively. Thus, rather than directly deforming images, FDM-net can efficiently and dynamically enlarge the set of paired images and texts by learning to deform image features. Extensive experiments are conducted on the widely used novel object captioning dataset, and the results show the effectiveness of our FDM-net. An ablation study and qualitative visualizations give further insight into our model.
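To make the feature-deformation idea concrete, here is a minimal sketch of the kind of sub-net the abstract describes: an MLP that perturbs a detected object feature with random noise, so one real (image, sentence) pair yields many augmented pairs. The layer sizes, noise conditioning, and residual form are assumptions for illustration, not the authors' published FDM-net architecture.

# Hypothetical feature-deformation sub-net; all dimensions are assumed.
import torch
import torch.nn as nn

class FeatureDeformer(nn.Module):
    """Maps an image feature plus random noise to a deformed feature."""
    def __init__(self, feat_dim: int = 2048, noise_dim: int = 64):
        super().__init__()
        self.noise_dim = noise_dim
        self.net = nn.Sequential(
            nn.Linear(feat_dim + noise_dim, 1024),
            nn.ReLU(inplace=True),
            nn.Linear(1024, feat_dim),
        )

    def forward(self, feat: torch.Tensor) -> torch.Tensor:
        noise = torch.randn(feat.size(0), self.noise_dim, device=feat.device)
        residual = self.net(torch.cat([feat, noise], dim=1))
        return feat + residual  # deform around the original feature

feats = torch.randn(8, 2048)           # detector features for a batch
augmented = FeatureDeformer()(feats)   # one stochastic deformation each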


2002 ◽  
Vol 2 (2) ◽  
pp. 155-201 ◽  
Author(s):  
Patricia M. Hill ◽  
Roberto Bagnara ◽  
Enea Zaffanella

It is important that practical data-flow analyzers are backed by reliably proven theoretical results. Abstract interpretation provides a sound mathematical framework and the generic properties necessary for an abstract domain to be well-defined and sound with respect to the concrete semantics. In logic programming, the abstract domain Sharing is a standard choice for sharing analysis, for both practical work and further theoretical study. In spite of this, we found that there were no satisfactory proofs of the key properties of commutativity and idempotence that are essential for Sharing to be well-defined, and that published statements of the soundness of Sharing assume the occurs-check. This paper provides a generalization of the abstraction function for Sharing that can be applied to any language, with or without the occurs-check. Soundness, idempotence, and commutativity of abstract unification using this abstraction function are proven.
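For readers unfamiliar with the domain, the sketch below implements the classical set-sharing operations (relevant component, star-union, binary union) and the abstract unification amgu they compose into. This is the textbook formulation whose idempotence and commutativity the paper examines, not the paper's generalized abstraction function.

# Classical set-sharing abstract unification, as a minimal sketch.
# A sharing group is a frozenset of variable names.
from itertools import combinations

def rel(vs, sh):
    """Sharing groups relevant to the variables vs."""
    return {g for g in sh if g & vs}

def star(sh):
    """Star-union: closure of sh under pairwise union."""
    closure = set(sh)
    changed = True
    while changed:
        changed = False
        for a, b in combinations(list(closure), 2):
            u = a | b
            if u not in closure:
                closure.add(u)
                changed = True
    return closure

def bin_union(sh1, sh2):
    """Binary union: all pairwise unions across the two sets."""
    return {a | b for a in sh1 for b in sh2}

def amgu(x, t_vars, sh):
    """Abstract unification for the binding x = t."""
    vs = {x} | t_vars
    a, b = rel({x}, sh), rel(t_vars, sh)
    return (sh - rel(vs, sh)) | bin_union(star(a), star(b))

sh = {frozenset('x'), frozenset('y'), frozenset('z')}
print(amgu('x', {'y'}, sh))
# -> {frozenset({'x', 'y'}), frozenset({'z'})} (order may vary):
# binding x = y joins their sharing groups and leaves z independent.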


2021 ◽  
Author(s):  
Sigfried Gold ◽  
Harold Lehmann ◽  
Lisa Schilling ◽  
Wayne Lutters

Objective: Code sets play a central role in analytic work with clinical data warehouses, as components of phenotype, cohort, or analytic variable algorithms representing specific clinical phenomena. Code set quality has received critical attention, and repositories for sharing and reusing code sets have been seen as a way to improve quality and reduce redundant effort. Nonetheless, concerns regarding code set quality persist. In order to better understand ongoing challenges in code set quality and reuse, and address them with software and infrastructure recommendations, we determined it was necessary to learn how code sets are constructed and validated in real-world settings. Methods: Survey and field study using semi-structured interviews with a purposive sample of code set practitioners. Open coding and thematic analysis of interview transcripts, interview notes, and answers to open-ended survey questions. Results: Thirty-six respondents completed the survey, of whom 15 participated in follow-up interviews. We found great variability in the methods, degree of formality, tools, expertise, and data used in code set construction and validation. We found universal agreement that crafting high-quality code sets is difficult, but very different ideas about how this can be achieved and validated. A primary divide exists between those who rely on empirical techniques using patient-level data and those who rely only on expertise and semantic data. We formulated a method- and process-based model able to account for observed variability in formality, thoroughness, resources, and techniques. Conclusion: Our model provides a structure for organizing a set of recommendations to facilitate reuse based on metadata capture during the code set development process. It classifies validation methods by the data they depend on (semantic, empirical, and derived) as they are applied over a sequence of phases: (1) code collection; (2) code evaluation; (3) code set evaluation; (4) code set acceptance; and, optionally, (5) reporting of methods used and validation results. This schematization of real-world practices informs our analysis of and response to persistent challenges in code set development. Potential re-users of existing code sets can find little evidence to support trust in their quality and fitness for use, particularly when reusing a code set in a new study or database context. Rather than allowing code set sharing and reuse to remain separate activities, occurring before and after the main action of code set development, sharing and reuse must permeate every step of the process in order to produce reliable evidence of quality and fitness for use.
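One way to read the conclusion's recommendation is as a metadata schema captured during development. The sketch below encodes the five phases and the three evidence classes named in the abstract; the class and field names themselves are hypothetical, invented for illustration.

# Hypothetical metadata-capture schema for the paper's phase model.
from dataclasses import dataclass, field
from enum import Enum

class Phase(Enum):
    CODE_COLLECTION = 1
    CODE_EVALUATION = 2
    CODE_SET_EVALUATION = 3
    CODE_SET_ACCEPTANCE = 4
    REPORTING = 5            # optional, per the abstract

class Evidence(Enum):
    SEMANTIC = "semantic"    # vocabulary structure, expert judgment
    EMPIRICAL = "empirical"  # patient-level data
    DERIVED = "derived"      # artifacts of earlier steps

@dataclass
class ValidationStep:
    phase: Phase
    evidence: Evidence
    method: str              # e.g. "string search", "chart review"
    notes: str = ""

@dataclass
class CodeSetRecord:
    name: str
    codes: set[str] = field(default_factory=set)
    provenance: list[ValidationStep] = field(default_factory=list)

rec = CodeSetRecord("type-2-diabetes", {"E11.9"})
rec.provenance.append(ValidationStep(
    Phase.CODE_COLLECTION, Evidence.SEMANTIC, "vocabulary browse"))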


Author(s):  
Patricia M. Hill ◽  
Roberto Bagnara ◽  
Enea Zaffanella

2009 ◽  
Vol 45 (1) ◽  
Author(s):  
Mostafa A. Shirazi ◽  
John M. Faustini ◽  
Philip R. Kaufmann
Keyword(s):  
Data Set

Author(s):  
Eric Trias ◽  
Jorge Navas ◽  
Elena S. Ackley ◽  
Stephanie Forrest ◽  
M. Hermenegildo

2018 ◽  
Author(s):  
Motonori Yamaguchi ◽  
Helen Joanne Wall ◽  
Bernhard Hommel

A central issue in the study of joint task performance has been whether co-acting individuals perform their partner’s part of the task as if it were their own. The present study addressed this issue using joint task switching. A pair of actors shared two tasks that were presented in random order, with the relevant task and actor cued on each trial. Responses produced action effects that were either shared or separate between co-actors. When co-actors produced separate action effects, switch costs were obtained within the same actor (i.e., when the same actor performed consecutive trials) but not between co-actors (when different actors performed consecutive trials), implying that actors did not perform their co-actor’s part. When action effects were shared between co-actors, however, switch costs were also obtained between co-actors, implying that actors did perform their co-actor’s part. The results indicated that shared action effects induce task-set sharing between co-acting individuals.
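The reported contrast is straightforward to compute from a trial log: the switch cost is mean reaction time on task-switch trials minus task-repeat trials, calculated separately for actor-repeat and actor-change transitions. The sketch below uses invented reaction times, not the study's data.

# Toy computation of within- and between-actor switch costs.
def switch_costs(trials):
    """trials: list of (actor, task, rt_ms) in presentation order."""
    buckets = {}  # (same_actor, task_switched) -> list of RTs
    for prev, cur in zip(trials, trials[1:]):
        key = (prev[0] == cur[0], prev[1] != cur[1])
        buckets.setdefault(key, []).append(cur[2])
    mean = lambda xs: sum(xs) / len(xs)
    return {
        "within-actor": mean(buckets[(True, True)]) - mean(buckets[(True, False)]),
        "between-actor": mean(buckets[(False, True)]) - mean(buckets[(False, False)]),
    }

log = [("A", "color", 612), ("A", "shape", 688), ("B", "shape", 590),
       ("B", "color", 655), ("A", "shape", 630), ("A", "shape", 575),
       ("B", "color", 580), ("B", "shape", 660), ("A", "shape", 640)]
print(switch_costs(log))
# -> positive within-actor cost, near-zero between-actor cost,
#    the pattern reported for separate action effects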

