An insider's look at LF type reconstruction: everything you (n)ever wanted to know

2012 ◽  
Vol 23 (1) ◽  
pp. 1-37 ◽  
Author(s):  
Brigitte Pientka

Abstract. Although type reconstruction for dependently typed languages is common in practical systems, it is still ill-understood. Detailed descriptions of the issues involved are hard to find, and formal descriptions together with correctness proofs are non-existent. In this paper, we discuss one-pass type reconstruction for objects in the logical framework LF, formally describe the type reconstruction process using the framework of contextual modal types, and prove the correctness of type reconstruction. Since type reconstruction finds most general types and may leave free variables, we additionally describe abstraction, which returns a closed object in which all free variables are bound on the outside. We have also implemented our algorithms as part of the Beluga language; the performance of our type reconstruction algorithm is comparable to type reconstruction in existing systems such as the logical framework Twelf.
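The abstraction step described above — closing a term over its remaining free variables — can be illustrated on a toy lambda-term representation. This is a minimal Python sketch of the idea only, not the paper's LF algorithm; the tuple-based AST and function names are assumptions for illustration.

```python
def free_vars(term, bound=frozenset()):
    """Collect the free variables of a tiny lambda-term AST.
    Terms are ("var", name), ("lam", name, body), or ("app", fn, arg)."""
    tag = term[0]
    if tag == "var":
        return set() if term[1] in bound else {term[1]}
    if tag == "lam":
        return free_vars(term[2], bound | {term[1]})
    return free_vars(term[1], bound) | free_vars(term[2], bound)

def abstract(term):
    """Bind every free variable at the outside, yielding a closed term —
    the analogue of the abstraction phase after type reconstruction."""
    closed = term
    for v in sorted(free_vars(term)):
        closed = ("lam", v, closed)
    return closed
```

For example, `abstract(("app", ("var", "x"), ("var", "y")))` wraps the application in binders for `x` and `y`, producing a term with no free variables.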

Author(s):  
GuoLong Zhang

The use of computer technology for three-dimensional (3D) reconstruction is an important direction of development in production. The purpose of this work is to find a new method that can be used in traditional handicraft design and to explore the application of 3D reconstruction technology within it. After a description and analysis of 3D reconstruction technology, a 3D reconstruction algorithm based on the Poisson equation is analyzed, and the key steps and problems of the method are clarified. Then, by introducing a screening constraint into the design, a 3D reconstruction algorithm based on the screened Poisson equation is proposed. Finally, the performance of the two algorithms is compared by reconstructing a 3D image of a rabbit. The results show the following: at a depth value of 11, the surface of the rabbit model obtained by the proposed optimized algorithm is smoother, and its details are more delicate. As the depth value increases, the numbers of vertices and faces produced by both algorithms increase, and the best reconstructions are obtained at depth values above 8; the proposed optimized algorithm produces more vertices and performs better during reconstruction. However, the larger the depth value, the more time and memory the reconstruction consumes, so an appropriate depth value must be selected. The screening parameter of the algorithm has a large impact on the fineness of the reconstructed model: the larger the parameter, the higher the fineness. In summary, the proposed 3D reconstruction algorithm based on the screened Poisson equation is more practical and performs better.
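The role of the screening term can be illustrated with a minimal one-dimensional screened Poisson solve. This is only a sketch of the underlying PDE behavior, not the paper's surface-reconstruction pipeline; the grid size and the value alpha = 4 are illustrative assumptions.

```python
import numpy as np

def solve_screened_poisson_1d(f, alpha, h=1.0):
    """Solve u'' - alpha*u = f on a uniform grid with u = 0 at both ends.
    alpha = 0 gives the plain Poisson equation; alpha > 0 adds the
    'screening' term that keeps the solution close to the data."""
    n = len(f)
    A = (np.diag(np.full(n, -2.0 / h**2 - alpha))
         + np.diag(np.full(n - 1, 1.0 / h**2), 1)
         + np.diag(np.full(n - 1, 1.0 / h**2), -1))
    return np.linalg.solve(A, np.asarray(f, float))

f = np.zeros(11)
f[5] = 1.0                      # a point source in the middle of the grid
u_plain = solve_screened_poisson_1d(f, alpha=0.0)
u_screened = solve_screened_poisson_1d(f, alpha=4.0)
# screening damps the long-range response, localizing detail near the data
```

In surface reconstruction the same effect keeps the implicit function faithful to the input samples, which is why the screening parameter controls the fineness of the model.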


2021 ◽  
Vol 11 (14) ◽  
pp. 6460
Author(s):  
Fabio Di Martino ◽  
Patrizio Barca ◽  
Eleonora Bortoli ◽  
Alessia Giuliano ◽  
Duccio Volterrani

Quantitative analyses in nuclear medicine are increasingly used, for both diagnostic and therapeutic purposes. The Partial Volume Effect (PVE) is the most important cause of quantification loss in nuclear medicine, especially for evaluations in regions of interest (ROIs) smaller than the Full Width at Half Maximum (FWHM) of the point spread function (PSF). The aim of this work is to present a new approach for the correction of PVE, using a post-reconstruction process based on a mathematical expression that only requires knowledge of the FWHM of the final PSF of the imaging system used. After the theoretical derivation is presented, the method is evaluated experimentally on a PET/CT hybrid system by acquiring the IEC NEMA phantom with six spherical "hot" ROIs (with diameters of 10, 13, 17, 22, 28, and 37 mm) and a homogeneous "colder" background. In order to evaluate the recovery of quantitative data, the effects of statistical noise (different acquisition times), of the tomographic reconstruction algorithm with and without time-of-flight (TOF), and of different signal-to-background activity concentration ratios (3:1 and 10:1) were studied. The corrective method recovers the quantification loss due to PVE for all sizes of spheres acquired, with a residual error of less than 17%, for lesion dimensions larger than two FWHM and for acquisition times of two minutes or more.
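The paper's closed-form expression is not reproduced here, but the general idea of FWHM-based PVE correction can be sketched with a simplified 1D recovery-coefficient model: a uniform hot region blurred by a Gaussian PSF loses a computable fraction of its peak value, and dividing the measurement by that fraction undoes the loss. The model, function names, and example sizes below are assumptions for illustration, not the paper's derivation.

```python
import math

def recovery_coefficient(diameter, fwhm):
    """Peak value remaining at the center of a uniform 1D 'hot' region of the
    given diameter after blurring with a Gaussian PSF of the given FWHM
    (simplified model; the paper derives its own expression)."""
    sigma = fwhm / (2.0 * math.sqrt(2.0 * math.log(2.0)))
    return math.erf((diameter / 2.0) / (sigma * math.sqrt(2.0)))

def pve_correct(measured_peak, diameter, fwhm):
    """Post-reconstruction correction: divide the measured peak by the
    recovery coefficient predicted from the known PSF FWHM."""
    return measured_peak / recovery_coefficient(diameter, fwhm)
```

As the diameter shrinks below roughly two FWHM the recovery coefficient drops well below 1, which is exactly the regime where the correction matters most.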


2021 ◽  
Vol 11 (1) ◽  
Author(s):  
Wosik Cho ◽  
Jeong-uk Shin ◽  
Kyung Taec Kim

Abstract. We present a reconstruction algorithm developed for the temporal characterization method called tunneling ionization with a perturbation for the time-domain observation of an electric field (TIPTOE). The reconstruction algorithm considers the high-order contribution of an additional laser pulse to ionization, enabling the use of an intense additional laser pulse. As a result, the signal-to-noise ratio of the TIPTOE measurement is improved by at least one order of magnitude compared to the first-order approximation. In addition, the high-order contribution provides extra information regarding the pulse envelope. The reconstruction algorithm was tested with ionization yields obtained by solving the time-dependent Schrödinger equation, and the optimal conditions for accurate reconstruction were analyzed. The algorithm was also tested with experimental data obtained using few-cycle laser pulses. The reconstructed pulses obtained under different dispersion conditions exhibited good consistency. These results confirm the validity and accuracy of the reconstruction process.
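The difference between the first-order approximation and a higher-order treatment can be sketched with a toy model: if the yield modulation contains a quadratic term in the signal field, the field can still be recovered per delay sample by inverting that quadratic. This is an assumed illustrative model with made-up calibration coefficients `c1` and `c2`, not the paper's TDSE-based procedure.

```python
import numpy as np

def modulation(field, c1, c2):
    """Assumed second-order model of the ionization-yield modulation:
    S = c1*E + c2*E**2 (the first-order TIPTOE approximation keeps only c1*E)."""
    e = np.asarray(field, float)
    return c1 * e + c2 * e**2

def invert_modulation(signal, c1, c2):
    """Recover the field per delay sample by solving c2*E**2 + c1*E - S = 0,
    taking the branch that reduces to S/c1 as c2 -> 0."""
    s = np.asarray(signal, float)
    return (-c1 + np.sqrt(c1**2 + 4.0 * c2 * s)) / (2.0 * c2)

delays = np.linspace(-5.0, 5.0, 201)
field = np.exp(-delays**2 / 4.0) * np.cos(3.0 * delays)   # a few-cycle pulse
signal = modulation(field, c1=1.0, c2=0.1)
recovered = invert_modulation(signal, c1=1.0, c2=0.1)
```

Keeping only the linear term would bias the recovered field wherever the modulation is strong, which is why accounting for the high-order contribution permits a more intense perturbing pulse.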


1999 ◽  
Vol 5 (S2) ◽  
pp. 940-941
Author(s):  
Shih Ang ◽  
Wang Ge ◽  
Cheng Ping-Chin

Due to its penetration ability and absorption contrast mechanism, cone-beam X-ray microtomography is a powerful tool for studying 3D microstructures in opaque specimens. In contrast to conventional parallel- and fan-beam geometries, the cone-beam tomography setup is highly desirable for its faster data acquisition, built-in magnification, better radiation utilization, and easier hardware implementation. However, the major drawback of cone-beam reconstruction is its computational complexity. In an effort to maximize reconstruction speed, we have developed a generalized Feldkamp cone-beam reconstruction algorithm to optimize the reconstruction process. We report here the use of curved voxels in a cylindrical coordinate system, together with mapping tables, to further improve reconstruction efficiency. The generalized Feldkamp cone-beam image reconstruction algorithm is reformulated in the discrete domain using mapping tables.
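The appeal of a cylindrical voxel grid is that it is rotationally symmetric, so quantities that depend only on a voxel's position relative to the source can be tabulated once and reused at every view by shifting an angular index. The sketch below illustrates this with a Feldkamp-style distance weight; the weight form and all names are illustrative assumptions, not the authors' reformulation.

```python
import numpy as np

def cone_beam_weight_table(radii, rel_angles, d_source):
    """Precompute Feldkamp-style backprojection weights D^2 / (D + s)^2,
    where s = r*sin(phi) is a voxel's signed offset along the central ray,
    for voxels on a polar (cylindrical) grid.  Because the grid is
    rotationally symmetric, one table serves every view angle: advancing
    the gantry by one angular step is just a shift of the phi index."""
    r, phi = np.meshgrid(radii, rel_angles, indexing="ij")
    s = r * np.sin(phi)
    return d_source**2 / (d_source + s) ** 2

radii = np.linspace(0.0, 50.0, 6)
rel_angles = np.linspace(0.0, 2.0 * np.pi, 8, endpoint=False)
table = cone_beam_weight_table(radii, rel_angles, d_source=500.0)
# the next view reuses the same table via np.roll(table, 1, axis=1)
```

On a Cartesian grid these weights would have to be recomputed (or stored) per view, which is the cost the mapping-table approach avoids.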


1994 ◽  
Vol VII (3) ◽  
pp. 12-23 ◽  
Author(s):  
Shail Aditya ◽  
Christine H. Flood ◽  
James E. Hicks

2011 ◽  
Vol 2011 ◽  
pp. 1-7
Author(s):  
Hengyong Yu ◽  
Changguo Ji ◽  
Ge Wang

To maximize the time-integrated X-ray flux from multiple X-ray sources and shorten the data acquisition process, a promising approach is to allow overlapped projections from multiple simultaneously active sources, without resorting to source-multiplexing technology. The most challenging task in this configuration is to perform image reconstruction effectively and efficiently from the overlapped projections. Inspired by the single-source simultaneous algebraic reconstruction technique (SART), we develop a multisource SART-type reconstruction algorithm, regularized by a sparsity-oriented constraint in the soft-threshold filtering framework, to reconstruct images from overlapped projections. Our numerical simulation results verify the correctness of the proposed algorithm and demonstrate the advantage of image reconstruction from overlapped projections.
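The building blocks named above — a SART-type update followed by soft-threshold filtering — can be sketched generically. In the sketch below, `A` is an arbitrary system matrix standing in for the stacked rows of all simultaneously active sources; the parameter values are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def soft_threshold(x, t):
    """Shrink each component toward zero by t (the soft-threshold filter)."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def sart_soft(A, b, n_iter=200, lam=1.0, thresh=1e-3):
    """SART-type iteration with a sparsity-promoting soft-threshold step.
    For overlapped projections, A would stack the projection rows of all
    simultaneously active sources (a sketch of the idea only)."""
    row_sums = A.sum(axis=1)
    col_sums = A.sum(axis=0)
    row_sums[row_sums == 0] = 1.0   # guard against empty rays
    col_sums[col_sums == 0] = 1.0   # guard against unseen pixels
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        x = x + lam * (A.T @ ((b - A @ x) / row_sums)) / col_sums
        x = soft_threshold(x, thresh)   # sparsity-oriented regularization
    return x
```

The soft-threshold step is what lets the iteration disentangle overlapped measurements: it pushes the estimate toward sparse solutions, compensating for the extra ambiguity introduced when several sources expose the detector at once.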


Author(s):  
Fabian Thorand ◽  
Jurriaan Hage

Abstract. The precision of a static analysis can be improved by increasing the context-sensitivity of the analysis. In a type-based formulation of static analysis for functional languages, this can be achieved by, e.g., introducing let-polyvariance or subtyping. In this paper we go one step further by defining a higher-ranked polyvariant type system, so that even properties of lambda-bound identifiers can be generalized over. We do this for dependency analysis, a generic analysis that can be instantiated to a range of different analyses, all of which can profit in this way. We prove that our analysis is sound with respect to a call-by-name semantics and that it satisfies a so-called noninterference property. We provide a type reconstruction algorithm that we have proven to be terminating, and sound and complete with respect to its declarative specification. Our principled description can serve as a blueprint for making other analyses higher-ranked.


Author(s):  
M. Mehranfar ◽  
H. Arefi ◽  
F. Alidoost

Abstract. This paper presents a projection-based method for 3D bridge modeling using dense point clouds generated from drone-based images. The proposed workflow consists of hierarchical steps including point cloud segmentation, modeling of individual elements, and merging of individual models to generate the final 3D model. First, a fuzzy clustering algorithm including the height values and geometrical-spectral features is employed to segment the input point cloud into the main bridge elements. In the next step, a 2D projection-based reconstruction technique is developed to generate a 2D model for each element. Next, the 3D models are reconstructed by extruding the 2D models orthogonally to the projection plane. Finally, the reconstruction process is completed by merging individual 3D models and forming an integrated 3D model of the bridge structure in a CAD format. The results demonstrate the effectiveness of the proposed method to generate 3D models automatically with a median error of about 0.025 m between the elements’ dimensions in the reference and reconstructed models for two different bridge datasets.
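The 2D-to-3D step in the workflow above — extruding each element's 2D model orthogonally to its projection plane — is simple enough to sketch directly. The function below is an illustrative assumption of that step (working in the plane's local frame), not the authors' code.

```python
import numpy as np

def extrude_section(polygon_2d, depth):
    """Extrude a 2D cross-section orthogonally to its projection plane:
    the bottom ring of vertices sits at 0 and the top ring at `depth`
    along the extrusion axis (sketch of the paper's 2D-to-3D step)."""
    poly = np.asarray(polygon_2d, float)
    bottom = np.column_stack([poly, np.zeros(len(poly))])
    top = np.column_stack([poly, np.full(len(poly), float(depth))])
    return np.vstack([bottom, top])

# hypothetical rectangular pier cross-section, extruded to 4 m
square = [(0.0, 0.0), (1.0, 0.0), (1.0, 1.0), (0.0, 1.0)]
pier = extrude_section(square, depth=4.0)
```

Merging such per-element solids, each extruded in its own local frame and transformed back to world coordinates, yields the integrated model the paper exports to CAD.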

