Reconstruction Algorithms for DNA-Storage Systems

2020 ◽  
Author(s):  
Omer Sabary ◽  
Alexander Yucovich ◽  
Guy Shapira ◽  
Eitan Yaakobi

Abstract In the trace reconstruction problem, a length-n string x yields a collection of noisy copies, called traces, y1, …, yt, where each yi is independently obtained from x by passing it through a deletion channel, which deletes every symbol with some fixed probability. The main goal under this paradigm is to determine the minimum number of i.i.d. traces required to reconstruct x with high probability. The trace reconstruction problem can be extended to the model where each trace is the result of x passing through a deletion-insertion-substitution channel, which also introduces insertions and substitutions. Motivated by the DNA storage channel, this work focuses on another variation of the trace reconstruction problem, referred to as the DNA reconstruction problem. A DNA reconstruction algorithm is a mapping which receives t traces y1, …, yt as input and produces x̂, an estimate of x. The goal in the DNA reconstruction problem is to minimize the edit distance between the original string and the algorithm's estimate. For the deletion channel, the problem is referred to as the deletion DNA reconstruction problem and the goal is to minimize the Levenshtein distance between x and x̂. In this work, we present several new algorithms for these reconstruction problems. Our algorithms look globally at the entire sequence of each trace and use dynamic programming algorithms of the kind used for the shortest common supersequence and longest common subsequence problems in order to decode the original sequence. Our algorithms do not impose any limitations on the input or on the number of traces; moreover, they perform well even for error probabilities as high as 0.27. The algorithms have been tested on simulated data as well as on data from previous DNA experiments and are shown to outperform all previous algorithms.
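
As a concrete illustration of the dynamic-programming building blocks mentioned in the abstract, the Python sketch below computes a pairwise shortest common supersequence (SCS) of two traces from the standard longest-common-subsequence table; under a pure deletion channel every trace is a subsequence of the original string, so the original is one of its common supersequences. This is only a minimal sketch of the idea, not the authors' reconstruction algorithm, and the example traces are hypothetical.

# Minimal sketch: pairwise shortest common supersequence of two deletion-channel
# traces, built from the standard LCS table (not the authors' algorithm).

def scs(a: str, b: str) -> str:
    """Return one shortest common supersequence of a and b via the LCS table."""
    m, n = len(a), len(b)
    # dp[i][j] = length of an LCS of a[:i] and b[:j]
    dp = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            if a[i - 1] == b[j - 1]:
                dp[i][j] = dp[i - 1][j - 1] + 1
            else:
                dp[i][j] = max(dp[i - 1][j], dp[i][j - 1])
    # Backtrack, emitting matched symbols once and unmatched symbols from both.
    out, i, j = [], m, n
    while i > 0 and j > 0:
        if a[i - 1] == b[j - 1]:
            out.append(a[i - 1]); i -= 1; j -= 1
        elif dp[i - 1][j] >= dp[i][j - 1]:
            out.append(a[i - 1]); i -= 1
        else:
            out.append(b[j - 1]); j -= 1
    out.extend(reversed(a[:i])); out.extend(reversed(b[:j]))
    return "".join(reversed(out))

# Hypothetical traces of an unknown DNA string after deletions.
print(scs("ACGACGT", "ACTACG"))  # a shortest common supersequence of both traces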

Author(s):  
D.S. Hirschberg

In the previous chapters, we discussed problems involving an exact match of string patterns. We now turn to problems involving similar but not necessarily exact pattern matches. There are a number of similarity or distance measures, and many of them are special cases or generalizations of the Levenshtein metric. The problem of evaluating the measure of string similarity has numerous applications, including one arising in the study of the evolution of long molecules such as proteins. In this chapter, we focus on the problem of evaluating a longest common subsequence, which is expressively equivalent to the simple form of the Levenshtein distance. The Levenshtein distance is a metric that measures the similarity of two strings. In its simple form, the Levenshtein distance D(x, y) between strings x and y is the minimum number of character insertions and/or deletions (indels) required to transform string x into string y. A commonly used generalization of the Levenshtein distance is the minimum cost of transforming x into y when the allowable operations are character insertion, deletion, and substitution, with costs δ(λ, σ), δ(σ, λ), and δ(σ1, σ2) that are functions of the involved character(s). There are direct correspondences between the Levenshtein distance of two strings, the length of the shortest edit sequence from one string to the other, and the length of the longest common subsequence (LCS) of those strings. If D is the simple Levenshtein distance between two strings having lengths m and n, SES is the length of the shortest edit sequence between the strings, and L is the length of an LCS of the strings, then SES = D and L = (m + n − D)/2. We will focus on the problem of determining the length of an LCS and also on the related problem of recovering an LCS. Another related problem, which will be discussed in Chapter 6, is that of approximate string matching, in which it is desired to locate all positions within string y which begin an approximation to string x containing at most D errors (insertions or deletions).
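
The correspondences stated above are easy to check computationally. The Python sketch below computes the LCS length with the standard O(mn) dynamic program (in O(n) space), computes the simple indel-only Levenshtein distance independently, and verifies that D = m + n − 2L; the example strings are arbitrary.

# Illustration of the identities in the text: for strings of lengths m and n,
# the simple (indel-only) Levenshtein distance D and the LCS length L satisfy
# SES = D and L = (m + n - D) / 2, i.e. D = m + n - 2L.

def lcs_length(x: str, y: str) -> int:
    """Length of a longest common subsequence, O(m*n) time and O(n) space."""
    prev = [0] * (len(y) + 1)
    for xc in x:
        curr = [0]
        for j, yc in enumerate(y, start=1):
            curr.append(prev[j - 1] + 1 if xc == yc else max(prev[j], curr[j - 1]))
        prev = curr
    return prev[-1]

def simple_levenshtein(x: str, y: str) -> int:
    """Minimum number of insertions/deletions (no substitutions) turning x into y."""
    m, n = len(x), len(y)
    d = list(range(n + 1))                      # row for the empty prefix of x
    for i in range(1, m + 1):
        prev_diag, d[0] = d[0], i
        for j in range(1, n + 1):
            cur = d[j]
            if x[i - 1] == y[j - 1]:
                d[j] = prev_diag
            else:
                d[j] = 1 + min(d[j], d[j - 1])  # delete x[i-1] or insert y[j-1]
            prev_diag = cur
    return d[n]

x, y = "democrat", "republican"
L, D = lcs_length(x, y), simple_levenshtein(x, y)
assert D == len(x) + len(y) - 2 * L             # the stated correspondence
print(L, D)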


2019 ◽  
Vol 9 (1) ◽  
Author(s):  
Johan Economou Lundeberg ◽  
Jenny Oddstig ◽  
Ulrika Bitzén ◽  
Elin Trägårdh

Abstract Background Lung cancer is one of the most common cancers in the world. Early detection and correct staging are fundamental for treatment and prognosis. Positron emission tomography with computed tomography (PET/CT) is recommended clinically. It is hoped that silicon photomultiplier (SiPM)-based PET technology and new reconstruction algorithms will increase the detection of small lesions and enable earlier detection of pathologies, including metastatic spread. The aim of this study was to compare the diagnostic performance of a SiPM-based PET/CT system, including a new block-sequential regularized expectation maximization (BSREM) reconstruction algorithm, with a conventional photomultiplier (PM)-based PET/CT system using a conventional ordered subset expectation maximization (OSEM) reconstruction algorithm. The focus was patients admitted for 18F-fluorodeoxyglucose (FDG) PET/CT for initial diagnosis and staging of suspected lung cancer. Patients were scanned on both a SiPM-based PET/CT (Discovery MI; GE Healthcare, Milwaukee, MI, USA) and a PM-based PET/CT (Discovery 690; GE Healthcare, Milwaukee, MI, USA). Standardized uptake values (SUV) and image interpretation were compared between the two systems. Image interpretations were further compared with histopathology when available. Results Seventeen patients referred for suspected lung cancer were included in our single-injection, dual-imaging study. No statistically significant differences in SUVmax of suspected malignant primary tumours were found between the two PET/CT systems. SUVmax in suspected malignant intrathoracic lymph nodes was 10% higher on the SiPM-based system (p = 0.026). Good consistency (14/17 cases) between the PET/CT systems was found when comparing simplified TNM staging. The available histology results did not reveal any obvious differences between the systems. Conclusion In a clinical setting, the new SiPM-based PET/CT system with a new BSREM reconstruction algorithm provided a higher SUVmax for suspected lymph node metastases compared to the PM-based system. However, no improvement in lung cancer detection was seen.


Micromachines ◽  
2021 ◽  
Vol 12 (2) ◽  
pp. 164
Author(s):  
Dongxu Wu ◽  
Fusheng Liang ◽  
Chengwei Kang ◽  
Fengzhou Fang

Optical interferometry plays an important role in topographical surface measurement and characterization in precision/ultra-precision manufacturing. An appropriate surface reconstruction algorithm is essential for obtaining accurate topography information from the digitized interferograms. However, the performance of a surface reconstruction algorithm in interferometric measurements is influenced by environmental disturbances and system noise. This paper presents a comparative analysis of three algorithms commonly used for coherence envelope detection in vertical scanning interferometry: the centroid method, the fast Fourier transform (FFT), and the Hilbert transform (HT). Numerical analysis and experimental studies were carried out to evaluate the performance of the different envelope detection algorithms in terms of measurement accuracy, speed, and noise resistance. Step height standards were measured using a developed interferometer, and the step profiles were reconstructed by the different algorithms. The results show that the centroid method has a higher measurement speed than the FFT and HT methods, but it provides acceptable measurement accuracy only at low noise levels. The FFT and HT methods outperform the centroid method in terms of noise immunity and measurement accuracy. Although the FFT and HT methods provide similar measurement accuracy, the HT method offers a higher measurement speed than the FFT method.
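
As a rough illustration of the HT approach compared in the paper, the Python sketch below applies scipy.signal.hilbert to a synthetic vertical-scanning correlogram and takes the magnitude of the analytic signal as the coherence envelope. The wavelength, coherence length, and scan step are illustrative assumptions, not values from the study.

# HT envelope detection on a synthetic correlogram: the peak of the recovered
# coherence envelope marks the surface height at a pixel. All parameter values
# below are illustrative assumptions.
import numpy as np
from scipy.signal import hilbert

step_nm = 20.0                                   # assumed scan step along z
z = np.arange(0.0, 4000.0, step_nm)              # scan positions (nm)
z0 = 1800.0                                      # "true" envelope peak (nm)
wavelength_nm = 600.0
coherence_len_nm = 900.0

# Synthetic correlogram: DC offset plus fringes under a Gaussian coherence envelope.
true_envelope = np.exp(-((z - z0) / coherence_len_nm) ** 2)
correlogram = 1.0 + true_envelope * np.cos(4.0 * np.pi * (z - z0) / wavelength_nm)

# HT method: subtract the DC term, then take the magnitude of the analytic signal.
ac_part = correlogram - correlogram.mean()
envelope = np.abs(hilbert(ac_part))
print(f"estimated peak at {z[np.argmax(envelope)]:.0f} nm (true {z0:.0f} nm)")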


2021 ◽  
Vol 11 (1) ◽  
Author(s):  
Robert Peter Reimer ◽  
Konstantin Klein ◽  
Miriam Rinneburger ◽  
David Zopfs ◽  
Simon Lennartz ◽  
...  

Abstract Computed tomography in suspected urolithiasis provides information about the presence, location and size of stones. Stone size in particular is a key parameter in treatment decisions; however, data on the impact of reformatting and measurement strategies are sparse. This study aimed to investigate the influence of different image reformations, slice thicknesses and window settings on stone size measurements. Reference sizes of 47 kidney stones representative of clinically encountered compositions were measured manually using a digital caliper (Man-M). Afterwards, the stones were placed in a 3D-printed, semi-anthropomorphic phantom and scanned using a low-dose protocol (CTDIvol 2 mGy). Images were reconstructed using hybrid-iterative and model-based iterative reconstruction algorithms (HIR, MBIR) with different slice thicknesses. Two independent readers measured the largest stone diameter on axial images (2 mm and 5 mm) and multiplanar reformations (based upon 0.67 mm reconstructions) using different window settings (soft tissue and bone). Statistics were conducted using ANOVA ± correction for multiple comparisons. Overall, stone size in CT was underestimated compared to Man-M (8.8 ± 2.9 vs. 7.7 ± 2.7 mm, p < 0.05), yet closely correlated (r = 0.70). Reconstruction algorithm and slice thickness did not significantly impact measurements (p > 0.05), while image reformation and window settings did (p < 0.05). CT measurements using multiplanar reformation with a bone window setting showed the closest agreement with Man-M (8.7 ± 3.1 vs. 8.8 ± 2.9 mm, p < 0.05, r = 0.83). Manual CT-based stone size measurements are most accurate on multiplanar image reformations with a bone window setting, while measurements on axial planes with different slice thicknesses underestimate the true stone size. This approach is therefore recommended when measurements inform treatment decisions.


2017 ◽  
Vol 2017 ◽  
pp. 1-10
Author(s):  
Hsuan-Ming Huang ◽  
Ing-Tsung Hsiao

Background and Objective. Over the past decade, image quality in low-dose computed tomography has been greatly improved by various compressive sensing (CS)-based reconstruction methods. However, these methods have some disadvantages, including high computational cost and slow convergence rate. Many different speed-up techniques for CS-based reconstruction algorithms have been developed. The purpose of this paper is to propose a fast reconstruction framework that combines a CS-based reconstruction algorithm with several speed-up techniques. Methods. First, total difference minimization (TDM) was implemented using soft-threshold filtering (STF). Second, we combined TDM-STF with the ordered subsets transmission (OSTR) algorithm to accelerate convergence. To further speed up the convergence of the proposed method, we applied the power factor and the fast iterative shrinkage thresholding algorithm to OSTR and TDM-STF, respectively. Results. Results obtained from simulation and phantom studies showed that many speed-up techniques can be combined to greatly improve the convergence speed of a CS-based reconstruction algorithm. More importantly, the increase in computation time (≤10%) was minor compared to the acceleration provided by the proposed method. Conclusions. In this paper, we have presented a CS-based reconstruction framework that combines several acceleration techniques. Both simulation and phantom studies provide evidence that the proposed method has the potential to satisfy the requirement of fast image reconstruction in practical CT.
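
For readers unfamiliar with the generic building blocks named above, the Python sketch below shows the soft-thresholding operator and a plain ISTA loop for an l1-regularized least-squares problem. It is not the authors' TDM-STF/OSTR pipeline; the system matrix, data, and regularization weight are illustrative assumptions.

# Generic ingredients only: soft-thresholding and a plain ISTA loop for
# min_x 0.5*||A x - b||^2 + lam*||x||_1 (not the authors' accelerated method).
import numpy as np

def soft_threshold(v: np.ndarray, t: float) -> np.ndarray:
    """Elementwise soft-thresholding: shrink each entry toward zero by t."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def ista(A: np.ndarray, b: np.ndarray, lam: float, n_iter: int = 200) -> np.ndarray:
    """Iterative shrinkage-thresholding for l1-regularized least squares."""
    L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        grad = A.T @ (A @ x - b)           # gradient of the data-fit term
        x = soft_threshold(x - grad / L, lam / L)
    return x

# Illustrative sparse-recovery example with a random system matrix.
rng = np.random.default_rng(0)
A = rng.standard_normal((40, 100))
x_true = np.zeros(100)
x_true[[5, 37, 80]] = [1.0, -2.0, 0.5]
b = A @ x_true
print(np.round(ista(A, b, lam=0.05), 2)[[5, 37, 80]])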

