Towards improving edge quality using combinatorial optimization and a novel skeletonize algorithm

2021 ◽  
Vol 21 (1) ◽  
Author(s):  
Marvin Arnold ◽  
Stefanie Speidel ◽  
Georges Hattab

Abstract. Background: Object detection and image segmentation of regions of interest provide the foundation for numerous pipelines across disciplines. Robust and accurate computer vision methods are needed to properly solve image-based tasks. Multiple algorithms have been developed solely to detect edges in images. Constrained to the problem of creating a thin, one-pixel-wide edge from a predicted object boundary, we require an algorithm that removes pixels while preserving the topology. Skeletonize algorithms transform an object boundary into an edge, replacing positional uncertainty with exact positions. Methods: To extract edges from boundaries generated by different algorithms, we present a computational pipeline that relies on: a novel skeletonize algorithm, a non-exhaustive discrete parameter search to find the optimal parameter combination of a specific post-processing pipeline, and an extensive evaluation using three data sets from the medical and natural image domains (kidney boundaries, NYU-Depth V2, BSDS 500). The skeletonize algorithm was compared to classical topological skeletons, while the validity of our post-processing algorithm was evaluated by integrating the original post-processing methods from six different works. Results: Using state-of-the-art metrics, the precision- and recall-based Signed Distance Error (SDE) and the Intersection over Union of bounding boxes (IOU-box), our results indicate that the SDE of the extracted edges improves by up to a factor of 2.3. Conclusions: Our work provides guidance for parameter tuning and algorithm selection in the post-processing of predicted object boundaries.
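The novel skeletonize algorithm itself is not reproduced here. As a rough illustration of the task it solves (thinning a predicted object boundary to a one-pixel-wide edge while preserving topology), the following Python sketch applies the classical morphological skeletonization from scikit-image, i.e. one of the baselines the paper compares against, to a toy boundary mask:

# Illustrative only: thinning a binary boundary mask to a one-pixel-wide edge.
# Uses scikit-image's classical skeletonization, NOT the paper's novel algorithm.
import numpy as np
from skimage.morphology import skeletonize

# Toy "predicted boundary": a thick band around a square object.
boundary = np.zeros((64, 64), dtype=bool)
boundary[20:44, 20:44] = True
boundary[24:40, 24:40] = False          # hollow interior leaves a thick boundary band

edge = skeletonize(boundary)            # topology-preserving thinning to 1 px

print("boundary pixels:", boundary.sum(), "-> edge pixels:", edge.sum())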


2021 ◽  
Vol 11 (3) ◽  
pp. 999
Author(s):  
Najeeb Moharram Jebreel ◽  
Josep Domingo-Ferrer ◽  
David Sánchez ◽  
Alberto Blanco-Justicia

Many organizations devote significant resources to building high-fidelity deep learning (DL) models. Therefore, they have a great interest in making sure the models they have trained are not appropriated by others. Embedding watermarks (WMs) in DL models is a useful means to protect the intellectual property (IP) of their owners. In this paper, we propose KeyNet, a novel watermarking framework that satisfies the main requirements for effective and robust watermarking. In KeyNet, any sample in a WM carrier set can take more than one label based on where the owner signs it. The signature is the hashed value of the owner’s information and her model. We leverage multi-task learning (MTL) to learn the original classification task and the watermarking task together. Another model (called the private model) is added to the original one so that it acts as a private key. The two models are trained together to embed the WM while preserving the accuracy of the original task. To extract a WM from a marked model, we pass the marked model’s predictions on a signed sample to the private model, which then provides the position of the signature. We perform an extensive evaluation of KeyNet’s performance on the CIFAR10 and FMNIST5 data sets and demonstrate its effectiveness and robustness. Empirical results show that KeyNet preserves the utility of the original task and embeds a robust WM.
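As a minimal sketch of the signing idea described in the abstract, the snippet below hashes the owner's information together with a model identifier and uses the digest to decide which label a watermark-carrier sample takes at a given signing position. All names and constants here (signature, label_for_position, NUM_LABELS) are illustrative assumptions, not KeyNet's actual interface:

# Illustrative sketch of multi-label signing: the owner's signature (a hash of
# her identity and model) selects which label a watermark-carrier sample takes.
# Hypothetical names; not the KeyNet reference implementation.
import hashlib

NUM_LABELS = 10          # number of labels a carrier sample may take (assumption)

def signature(owner_info: str, model_id: str) -> bytes:
    """Hashed value of the owner's information and her model."""
    return hashlib.sha256((owner_info + "|" + model_id).encode()).digest()

def label_for_position(sig: bytes, position: int) -> int:
    """Label assigned to the carrier sample signed at a given position."""
    return sig[position % len(sig)] % NUM_LABELS

sig = signature("Alice <alice@example.org>", "resnet18-cifar10-v1")
print([label_for_position(sig, p) for p in range(5)])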


Author(s):  
Sang Lim Choi ◽  
Sung Bin Park ◽  
Seungwook Yang ◽  
Eun Sun Lee ◽  
Hyun Jeong Park ◽  
...  

Purpose: Kidney, ureter, and bladder radiography (KUB) has frequently been used in suspected urolithiasis, but its performance is known to be lower than that of computed tomography (CT). This study aimed to investigate the diagnostic performance of digitally post-processed KUB in the detection of ureteral stones. Materials and Methods: Thirty patients who underwent digital KUB and CT were included in this retrospective study. The original digital KUB images underwent post-processing that involved noise estimation, noise reduction, and whitening to improve the visibility of ureteral stones. Thus, 60 original or post-processed digital KUB images were obtained and ordered randomly for blinded review. After an interval, a second review was performed with stone laterality unblinded. Detection rates were evaluated at both the initial and second reviews, using CT as the reference standard. Objective (size) and subjective (visibility) parameters of the ureteral stones were analyzed. Fisher’s exact test was used to compare detection sensitivity between the original and post-processed KUB data sets. Visibility was assessed with a paired t-test. The correlation of stone size between the CT and digital KUB data sets was assessed with Pearson’s correlation test. Results: The detection rate was higher for most reviewers once stone laterality was provided and was non-significantly better for the post-processed KUB images (p > 0.05). There was no significant difference in stone size between the CT and digital KUB data sets. In all reviews, the visibility grade was higher for the post-processed KUB images, irrespective of whether stone laterality was provided. Conclusion: Digital post-processing of KUB yielded higher visibility of ureteral stones and could improve stone detection, especially when stone laterality was available. Thus, digitally post-processed KUB can be an excellent modality for detecting ureteral stones and measuring their exact size.
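The abstract only names the post-processing steps (noise estimation, noise reduction, whitening). The sketch below strings together generic stand-ins for such a chain using scikit-image and NumPy and is not the study's actual pipeline:

# Rough stand-in for a "noise estimation -> reduction -> whitening" chain on a
# radiograph; the study's actual processing is not published in the abstract.
import numpy as np
from skimage.restoration import estimate_sigma, denoise_nl_means

def postprocess(image: np.ndarray) -> np.ndarray:
    """image: 2-D float array scaled to [0, 1]."""
    sigma = estimate_sigma(image)                       # noise estimation
    denoised = denoise_nl_means(image, h=1.15 * sigma,  # noise reduction
                                sigma=sigma, fast_mode=True)
    whitened = (denoised - denoised.mean()) / (denoised.std() + 1e-8)
    return whitened                                     # zero-mean, unit-variance output

rng = np.random.default_rng(0)
kub = np.clip(rng.normal(0.5, 0.1, size=(128, 128)), 0, 1)   # toy "radiograph"
print(postprocess(kub).shape)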


2008 ◽  
Vol 32 ◽  
pp. 565-606 ◽  
Author(s):  
L. Xu ◽  
F. Hutter ◽  
H. H. Hoos ◽  
K. Leyton-Brown

It has been widely observed that there is no single "dominant" SAT solver; instead, different solvers perform best on different instances. Rather than following the traditional approach of choosing the best solver for a given class of instances, we advocate making this decision online on a per-instance basis. Building on previous work, we describe SATzilla, an automated approach for constructing per-instance algorithm portfolios for SAT that use so-called empirical hardness models to choose among their constituent solvers. This approach takes as input a distribution of problem instances and a set of component solvers, and constructs a portfolio optimizing a given objective function (such as mean runtime, percent of instances solved, or score in a competition). The excellent performance of SATzilla was independently verified in the 2007 SAT Competition, where our SATzilla07 solvers won three gold, one silver and one bronze medal. In this article, we go well beyond SATzilla07 by making the portfolio construction scalable and completely automated, and improving it by integrating local search solvers as candidate solvers, by predicting performance score instead of runtime, and by using hierarchical hardness models that take into account different types of SAT instances. We demonstrate the effectiveness of these new techniques in extensive experimental results on data sets including instances from the most recent SAT competition.
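A toy sketch of the per-instance selection idea follows: fit one empirical hardness (runtime) model per solver on instance features, then dispatch each new instance to the solver with the best predicted performance. The ridge-regression models and random synthetic features are placeholders, not SATzilla's feature set or model class:

# Toy per-instance algorithm portfolio: one runtime model per solver,
# dispatch each instance to the solver with the lowest predicted runtime.
# Features, runtimes, and models are placeholders, not SATzilla's actual ones.
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)
n_train, n_feat, solvers = 200, 10, ["solverA", "solverB", "solverC"]

X_train = rng.normal(size=(n_train, n_feat))                   # instance features
runtimes = {s: np.abs(X_train @ rng.normal(size=n_feat)) + 1.0 # synthetic runtimes
            for s in solvers}

models = {s: Ridge(alpha=1.0).fit(X_train, np.log(runtimes[s]))  # hardness models
          for s in solvers}

def select_solver(features: np.ndarray) -> str:
    """Pick the solver with the smallest predicted (log) runtime."""
    preds = {s: m.predict(features.reshape(1, -1))[0] for s, m in models.items()}
    return min(preds, key=preds.get)

print(select_solver(rng.normal(size=n_feat)))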


2010 ◽  
Vol 09 (04) ◽  
pp. 547-573 ◽  
Author(s):  
JOSÉ BORGES ◽  
MARK LEVENE

The problem of predicting the next request during a user's navigation session has been extensively studied. In this context, higher-order Markov models have been widely used to model navigation sessions and to predict the next navigation step, while prediction accuracy has been mainly evaluated with the hit-and-miss score. We claim that this score, although useful, is not sufficient for evaluating next-link prediction models when the aims are to find a sufficient order for the model, to choose the size of a recommendation set, and to assess the impact of unexpected events on prediction accuracy. Herein, we make use of a variable-length Markov model to compare the usefulness of three alternatives to the hit-and-miss score: the Mean Absolute Error, the Ignorance Score, and the Brier score. We present an extensive evaluation of the methods on real data sets and a comprehensive comparison of the scoring methods.
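For a probabilistic next-link prediction, two of the alternative scores can be computed directly from the predicted distribution and the observed next link. The sketch below uses the standard definitions of the Brier and Ignorance scores; variable names and the example distribution are chosen here for illustration:

# Scoring a probabilistic next-link prediction against the observed next link.
# Standard definitions of the Brier and Ignorance scores; names are illustrative.
import math

def brier_score(probs: dict, observed: str) -> float:
    """Sum of squared differences between predicted probabilities and the outcome."""
    return sum((p - (1.0 if link == observed else 0.0)) ** 2
               for link, p in probs.items())

def ignorance_score(probs: dict, observed: str, eps: float = 1e-12) -> float:
    """Negative log2 probability assigned to the link that actually occurred."""
    return -math.log2(probs.get(observed, 0.0) + eps)

predicted = {"/home": 0.6, "/search": 0.3, "/cart": 0.1}
print(brier_score(predicted, "/search"), ignorance_score(predicted, "/search"))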


1976 ◽  
Vol 128 (4) ◽  
pp. 397-403 ◽  
Author(s):  
William T. Carpenter ◽  
Michael H. Sacks ◽  
John S. Strauss ◽  
John J. Bartko ◽  
Judy Rayner

Summary: Are research interview techniques adequate in assessing signs and symptoms? This question is investigated by obtaining two sets of interview schedule ratings on 49 patients, one by a research psychiatrist applying the interview and one by the patient's psychiatrist using all available information. The latter was considered a clinical standard with which a cross-sectional research interview could be contrasted. These two data sets were subjected to several types of analysis commonly undertaken with research interview ratings. Results indicated that the research interview adequately represents symptom data, but is seriously lacking in the assessment of observed behaviour. The effect of this difference depends on the goals of the study and the nature of the data analysis. If overall group findings are desired, or if analysis relies primarily on symptom data, results with a research interview may be similar to results based on a far more extensive evaluation. On the other hand, if conclusions are to be drawn on individual patients, or if data analysis relies heavily on observed behaviour, then data derived from research interviews are questionable.


2020 ◽  
Vol 14 (4) ◽  
pp. 458-470
Author(s):  
Long Gong ◽  
Ziheng Liu ◽  
Liang Liu ◽  
Jun Xu ◽  
Mitsunori Ogihara ◽  
...  

Set reconciliation is a fundamental algorithmic problem that arises in many networking, system, and database applications. In this problem, two large sets A and B of objects (bitcoins, files, records, etc.) are stored at two different network-connected hosts, which we name Alice and Bob respectively. Alice and Bob communicate with each other to learn A Δ B, the difference between A and B, and as a result the reconciled set A ∪ B. Current set reconciliation schemes are based on either invertible Bloom filters (IBF) or error-correction codes (ECC). The former has a low computational complexity of O(d), where d is the cardinality of A Δ B, but has a high communication overhead that is several times larger than the theoretical minimum. The latter has a low communication overhead close to the theoretical minimum, but has a much higher computational complexity of O(d²). In this work, we propose Parity Bitmap Sketch (PBS), an ECC-based set reconciliation scheme that gets the better of both worlds: PBS has both a low computational complexity of O(d), just like IBF-based solutions, and a low communication overhead of roughly twice the theoretical minimum. A separate contribution of this work is a novel rigorous analytical framework that can be used for the precise calculation of various performance metrics and for the near-optimal parameter tuning of PBS.
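The full PBS protocol is considerably more involved; as a toy illustration of the underlying parity idea only (hash elements into buckets and compare per-bucket XOR parities to locate buckets containing elements of A Δ B), under bucket counts and hash choices assumed here:

# Toy illustration of locating set differences via per-bucket XOR parities.
# Greatly simplified; not the PBS protocol from the paper.
import hashlib

NUM_BUCKETS = 16

def bucket(x: str) -> int:
    return int(hashlib.sha256(x.encode()).hexdigest(), 16) % NUM_BUCKETS

def parities(items: set) -> list:
    """XOR of element fingerprints per bucket (0 if the bucket is balanced)."""
    par = [0] * NUM_BUCKETS
    for x in items:
        par[bucket(x)] ^= int(hashlib.md5(x.encode()).hexdigest(), 16)
    return par

A = {"tx1", "tx2", "tx3", "tx4"}
B = {"tx2", "tx3", "tx4", "tx5"}

# Alice sends her parities to Bob; buckets with differing parities hold elements of A Δ B.
diff_buckets = [i for i, (a, b) in enumerate(zip(parities(A), parities(B))) if a != b]
print("buckets holding differences:", diff_buckets)

A bucket in which an even number of differing elements happen to collide produces matching parities, which is one of the issues the error-correction machinery of a real scheme has to address.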


This paper presents the optimal tuning of the gain parameters of a multi-resolution PID (MRPID) controller for a thermal system. Temperature control in thermal systems is very important. The MRPID controller exploits the multi-resolution property of the wavelet transform to decompose the error signal into different frequency components, and the resulting wavelet coefficients are then weighted to generate the control signal. To generate the desired control, optimal tuning of these gains is required. In this paper, the MRPID controller is tuned using a genetic algorithm (GA) and particle swarm optimization (PSO), and the performance of the two techniques is compared. The wavelet-based MRPID controller is implemented in MATLAB/Simulink 2015a.
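A minimal sketch of the multi-resolution control law, assuming PyWavelets for the decomposition and illustrative gain values rather than the GA/PSO-tuned gains from the paper:

# Minimal multi-resolution PID-like control law: decompose the error signal
# with a wavelet transform and weight each resolution level by its own gain.
# Gains and wavelet choice are illustrative, not the paper's tuned values.
import numpy as np
import pywt

def mrpid_control(error_signal: np.ndarray, gains=(1.0, 0.5, 0.1)) -> float:
    """Control output from the latest sample of each reconstructed level."""
    coeffs = pywt.wavedec(error_signal, "haar", level=len(gains) - 1)
    u = 0.0
    for k, (g, c) in enumerate(zip(gains, coeffs)):
        # Reconstruct the contribution of this level alone and take its latest value.
        masked = [c if i == k else np.zeros_like(cc) for i, cc in enumerate(coeffs)]
        u += g * pywt.waverec(masked, "haar")[-1]
    return u

t = np.linspace(0, 1, 64)
error = np.exp(-3 * t) * np.sin(8 * np.pi * t)      # toy error trajectory
print(mrpid_control(error))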

