An effective fault ordering heuristic for SAT-based dynamic test compaction techniques

2014 ◽  
Vol 56 (4) ◽  
Author(s):  
Stephan Eggersglüß ◽  
Rolf Drechsler

Abstract: Each chip is subjected to a post-production test after fabrication. A set of test patterns is applied to filter out defective devices. The size of this test set is an important issue: large test sets increase test costs. Therefore, test compaction techniques are applied to obtain a compact test set. The effectiveness of these techniques is significantly influenced by fault ordering. This paper describes how information about hard-to-detect faults can be extracted from an untestability identification phase and used to develop a fault ordering technique that reduces the pattern counts of highly compacted test sets generated by a SAT-based dynamic test compaction approach.
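Schematically, the heuristic amounts to targeting hard-to-detect faults first, so that dynamic compaction covers the easier faults incidentally. A minimal sketch, assuming each fault carries a precomputed hardness score (the function and score names here are hypothetical; the paper derives such information from its untestability identification phase):

```python
# Hedged sketch: order faults so hard-to-detect ones are targeted first.
# The hardness scores are hypothetical inputs standing in for information
# extracted during untestability identification.

def order_faults(faults, hardness):
    """Sort faults by descending hardness score (unknown faults last)."""
    return sorted(faults, key=lambda f: hardness.get(f, 0), reverse=True)

faults = ["f1", "f2", "f3"]
hardness = {"f1": 0.2, "f2": 0.9, "f3": 0.5}
print(order_faults(faults, hardness))  # ['f2', 'f3', 'f1']
```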

2009 ◽  
Vol 5 (2) ◽  
pp. 57
Author(s):  
Gábor Kovács ◽  
Gábor Árpád Németh ◽  
Zoltán Pap ◽  
Mahadevan Subramaniam

This paper proposes a string-edit-distance-based test selection method to generate compact test sets for telecommunications software. Following the results of previous research, a trace in a test set is considered redundant if its edit distance from other traces is less than a given parameter. The algorithm first determines the minimum cardinality of the target test set in accordance with the provided parameter, then selects the test set with the highest sum of internal edit distances. The selection problem is reduced to an assignment problem in bipartite graphs.
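The redundancy criterion can be sketched with a plain Levenshtein distance. Note that this greedy filter is a simplification: the paper solves the selection as an assignment problem in bipartite graphs, maximizing the sum of internal distances.

```python
# Hedged sketch: a trace is dropped if its Levenshtein (edit) distance to
# an already-kept trace is below `min_dist`. Greedy simplification of the
# paper's bipartite-assignment formulation.

def levenshtein(a, b):
    """Classic dynamic-programming edit distance."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,          # deletion
                           cur[j - 1] + 1,       # insertion
                           prev[j - 1] + (ca != cb)))  # substitution
        prev = cur
    return prev[-1]

def select_traces(traces, min_dist):
    """Keep only traces at edit distance >= min_dist from all kept ones."""
    kept = []
    for t in traces:
        if all(levenshtein(t, k) >= min_dist for k in kept):
            kept.append(t)
    return kept

print(select_traces(["abc", "abd", "xyz"], 2))  # ['abc', 'xyz']
```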


2021 ◽  
Vol 11 (5) ◽  
pp. 2039
Author(s):  
Hyunseok Shin ◽  
Sejong Oh

In machine learning applications, classification schemes have been widely used for prediction tasks. Typically, to develop a prediction model, the given dataset is divided into training and test sets; the training set is used to build the model and the test set to evaluate it. Traditionally, random sampling is used to divide datasets. The problem, however, is that the model's evaluated performance differs depending on how the training and test sets are divided. Therefore, in this study, we proposed an improved sampling method for the accurate evaluation of a classification model. We first generated numerous candidate train/test splits using the R-value-based sampling method. We evaluated how similar the distribution of each candidate was to that of the whole dataset, and the candidate with the smallest distribution difference was selected as the final train/test split. Histograms and feature importance were used to evaluate the similarity of distributions. The proposed method produces more appropriate training and test sets than previous sampling methods, including random and non-random sampling.
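The selection step can be sketched as follows. Random candidate splits stand in for the paper's R-value-based candidate generation, and a single-feature histogram comparison stands in for its histogram and feature-importance criteria; all names here are illustrative.

```python
import random

# Hedged sketch: among candidate splits, pick the one whose test-set
# feature histogram best matches the whole dataset's histogram.

def hist(values, bins=5, lo=0.0, hi=1.0):
    """Normalized histogram of scalar values over [lo, hi]."""
    h = [0] * bins
    for v in values:
        h[min(int((v - lo) / (hi - lo) * bins), bins - 1)] += 1
    return [c / len(values) for c in h]

def best_split(data, test_frac=0.3, n_candidates=50, seed=0):
    """Return the (train, test) split minimizing histogram difference."""
    rng = random.Random(seed)
    ref = hist(data)
    best, best_diff = None, float("inf")
    for _ in range(n_candidates):
        shuffled = data[:]
        rng.shuffle(shuffled)
        k = int(len(shuffled) * test_frac)
        test, train = shuffled[:k], shuffled[k:]
        diff = sum(abs(a - b) for a, b in zip(hist(test), ref))
        if diff < best_diff:
            best, best_diff = (train, test), diff
    return best

train, test = best_split([i / 100 for i in range(100)])
print(len(train), len(test))  # 70 30
```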


2014 ◽  
Vol 529 ◽  
pp. 359-363
Author(s):  
Xi Lei Huang ◽  
Mao Xiang Yi ◽  
Lin Wang ◽  
Hua Guo Liang

A novel concurrent core test approach is proposed to reduce the test cost of SoCs. Before test, a novel test set sharing strategy is proposed to obtain a minimum-size merged test set by merging the test sets of the cores under test (CUTs). Moreover, it can be used in conjunction with general compression/decompression techniques to further reduce test data volume (TDV). During test, the proposed vector separating device, composed of a set of simple combinational logic circuits (CLCs), separates each vector from the merged test set and routes it to the corresponding core. This approach does not add any test vectors for the cores and allows them to be tested concurrently, reducing test application time (TAT). Experimental results on the ISCAS'89 benchmarks prove the efficiency of the proposed approach.
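A common way to merge test sets exploits don't-care (X) bits: two vectors are compatible if every bit position agrees or at least one side is unspecified. The following is a minimal sketch of such greedy merging, not the paper's exact strategy or its vector separating device.

```python
# Hedged sketch of test-vector merging with don't-care (X) bits.

def compatible(u, v):
    """Vectors merge if every position agrees or one side is 'X'."""
    return all(a == b or 'X' in (a, b) for a, b in zip(u, v))

def merge(u, v):
    """Combine two compatible vectors, resolving X bits where possible."""
    return ''.join(b if a == 'X' else a for a, b in zip(u, v))

def merge_sets(vectors):
    """Greedily fold each vector into the first compatible merged one."""
    merged = []
    for v in vectors:
        for i, m in enumerate(merged):
            if compatible(m, v):
                merged[i] = merge(m, v)
                break
        else:
            merged.append(v)
    return merged

print(merge_sets(["1X0", "110", "001"]))  # ['110', '001']
```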


2021 ◽  
Vol 39 (15_suppl) ◽  
pp. 8536-8536
Author(s):  
Gouji Toyokawa ◽  
Fahdi Kanavati ◽  
Seiya Momosaki ◽  
Kengo Tateishi ◽  
Hiroaki Takeoka ◽  
...  

8536 Background: Lung cancer is the leading cause of cancer-related death in many countries, and its prognosis remains unsatisfactory. Since treatment approaches differ substantially based on the subtype, such as adenocarcinoma (ADC), squamous cell carcinoma (SCC) and small cell lung cancer (SCLC), an accurate histopathological diagnosis is of great importance. However, if the specimen is solely composed of poorly differentiated cancer cells, distinguishing between histological subtypes can be difficult. The present study developed a deep learning model to classify lung cancer subtypes from whole slide images (WSIs) of transbronchial lung biopsy (TBLB) specimens, in particular with the aim of using this model to evaluate a challenging test set of indeterminate cases. Methods: Our deep learning model consisted of two separately trained components: a convolutional neural network tile classifier and a recurrent neural network tile aggregator for the WSI diagnosis. We used a training set consisting of 638 WSIs of TBLB specimens to train a deep learning model to classify lung cancer subtypes (ADC, SCC and SCLC) and non-neoplastic lesions. The training set consisted of 593 WSIs for which the diagnosis had been determined by pathologists based on the visual inspection of Hematoxylin-Eosin (HE) slides and of 45 WSIs of indeterminate cases (64 ADCs and 19 SCCs). We then evaluated the model using five independent test sets. For each test set, we computed the area under the receiver operating characteristic curve (ROC AUC). Results: We applied the model to an indeterminate test set of WSIs obtained from TBLB specimens that pathologists had not been able to conclusively diagnose by examining the HE-stained specimens alone. Overall, the model achieved ROC AUCs of 0.993 (confidence interval [CI] 0.971-1.0) and 0.996 (0.981-1.0) for ADC and SCC, respectively. 
We further evaluated the model using five independent test sets consisting of both TBLB and surgically resected lung specimens (a combined total of 2490 WSIs) and obtained highly promising results, with ROC AUCs ranging from 0.94 to 0.99. Conclusions: In this study, we demonstrated that a deep learning model could be trained to predict lung cancer subtypes in indeterminate TBLB specimens. These highly promising results show that a deep learning model capable of aiding pathologists in diagnosing indeterminate cases would, if deployed in clinical practice, be very beneficial: it would allow a diagnosis to be reached sooner and reduce the costs of further investigations.
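The ROC AUC figures reported in such evaluations can be computed directly from predicted scores and binary labels; a minimal rank-based sketch (equivalent to the Mann-Whitney U statistic):

```python
def roc_auc(labels, scores):
    """AUC as the probability that a random positive scores higher than a
    random negative (ties count half) -- the Mann-Whitney U formulation."""
    pos = [s for l, s in zip(labels, scores) if l == 1]
    neg = [s for l, s in zip(labels, scores) if l == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

print(roc_auc([1, 1, 0, 0], [0.9, 0.4, 0.6, 0.1]))  # 0.75
```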


Author(s):  
André Maletzke ◽  
Waqar Hassan ◽  
Denis dos Reis ◽  
Gustavo Batista

Quantification is a task similar to classification in the sense that it learns from a labeled training set. However, quantification is not interested in predicting the class of each observation, but rather in measuring the class distribution in the test set. The community has developed performance measures and experimental setups tailored to quantification tasks. Nonetheless, we argue that a critical variable, the size of the test sets, remains ignored. Such disregard has three main detrimental effects. First, it implicitly assumes that quantifiers will perform equally well for different test set sizes. Second, it increases the risk of cherry-picking by selecting a test set size for which a particular proposal performs best. Finally, it disregards the importance of designing methods that are suitable for different test set sizes. We discuss these issues with the support of one of the broadest experimental evaluations ever performed, with three main outcomes. (i) We empirically demonstrate the importance of the test set size when assessing quantifiers. (ii) We show that current quantifiers generally perform poorly on the smallest test sets. (iii) We propose a metalearning scheme to select the best quantifier based on the test set size, which can outperform the best single quantification method.
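The simplest quantifier, classify-and-count, illustrates why test set size matters: with few observations, the prevalence estimate is noisy. A minimal sketch (the simulated classifier accuracy and prevalence are illustrative assumptions, not figures from the paper):

```python
import random

# Hedged sketch: classify-and-count (CC) quantification evaluated across
# several test-set sizes. Smaller test sets yield noisier estimates of the
# positive-class prevalence.

def classify_and_count(predictions):
    """Estimate positive-class prevalence from hard classifier outputs."""
    return sum(predictions) / len(predictions)

rng = random.Random(42)
true_prevalence = 0.3
for size in (10, 100, 1000):
    # Simulate a classifier that is 90% accurate on each class.
    preds = [
        (1 if rng.random() < 0.9 else 0)
        if rng.random() < true_prevalence
        else (0 if rng.random() < 0.9 else 1)
        for _ in range(size)
    ]
    est = classify_and_count(preds)
    print(size, round(abs(est - true_prevalence), 3))
```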


Complexity ◽  
2021 ◽  
Vol 2021 ◽  
pp. 1-10
Author(s):  
Wei Yang ◽  
Junkai Zhou

With the advent of the era of big data, great changes have taken place in the insurance industry: it has gradually moved into the field of Internet insurance, and a large amount of insurance data has accumulated. How to use these data to innovate insurance services is crucial to the development of the industry. Therefore, this paper proposes a ciphertext retrieval technology based on attribute encryption (HP-CPABKS) to enable fast retrieval and updating of insurance data while preserving the privacy of insurance information, and puts forward an innovative insurance service based on cloud computing. The results show that 97.35% of users are successfully identified in test set A and 98.77% in test set B, with the recognition success rate above 97.00% across all four test sets; when the number of challenges is 720, the proportion of modified data blocks is less than 9%; the total number of complaints falls from 1300 to 249; 99.19% of users are satisfied with the innovative insurance service; and the number of insured increases significantly. In summary, an innovative insurance service based on cloud-computing insurance data can improve customer satisfaction, increase the number of policyholders, reduce complaints, and achieve a more successful insurance service innovation. This study provides a reference for the precision marketing of insurance services.


2021 ◽  
Author(s):  
Mudan Zhang ◽  
Xuntao Yin ◽  
Wuchao Li ◽  
Yan Zha ◽  
Xianchun Zeng ◽  
...  

Abstract. Background: The endocrine system plays an important role in infectious disease prognosis. Our goal is to assess the value of radiomics features extracted from adrenal gland and periadrenal fat CT images in predicting disease prognosis in patients with COVID-19. Methods: A total of 1,325 patients (765 moderate and 560 severe patients) from three centers were enrolled in this retrospective study. We proposed a 3D cascade V-Net to automatically segment adrenal glands in onset CT images. Periadrenal fat areas were obtained using inflation operations. Then, the radiomics features were automatically extracted. Five models were established to predict the disease prognosis in patients with COVID-19: a clinical model (CM), three radiomics models (adrenal gland model [AM], periadrenal fat model [PM], and a fusion of adrenal gland and periadrenal fat model [FM]), and a radiomics nomogram model (RN). Data from one center (1,183 patients) were used as the training and validation sets. The remaining two centers (36 and 106 patients) served as two independent test sets to evaluate the models' performance. Results: The auto-segmentation framework achieved an average Dice coefficient of 0.79 in the test set. CM, AM, PM, FM, and RN obtained AUCs of 0.716, 0.755, 0.796, 0.828, and 0.825, respectively, in the training set, and mean AUCs of 0.754, 0.709, 0.672, 0.706, and 0.778 across the two independent test sets. Decision curve analysis showed that when the threshold probability exceeded 0.3, 0.5, and 0.1 in the validation set, independent test set 1, and independent test set 2, respectively, RN yielded more net benefit than FM and CM. Conclusion: Radiomics features extracted from CT images of adrenal glands and periadrenal fat are related to disease prognosis in patients with COVID-19 and have great potential for predicting its severity.
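The Dice coefficient used to score the auto-segmentation compares a predicted mask against a reference mask; a minimal sketch on flattened binary masks (the mask encoding here is illustrative):

```python
def dice(a, b):
    """Dice coefficient between two binary masks given as flat sequences:
    2 * |A ∩ B| / (|A| + |B|)."""
    inter = sum(x and y for x, y in zip(a, b))
    return 2 * inter / (sum(a) + sum(b))

print(dice([1, 1, 0, 0], [1, 0, 1, 0]))  # 0.5
```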

