Iterative Image Reconstruction for Limited-Angle CT Using Optimized Initial Image

2016 ◽  
Vol 2016 ◽  
pp. 1-9 ◽  
Author(s):  
Jingyu Guo ◽  
Hongliang Qi ◽  
Yuan Xu ◽  
Zijia Chen ◽  
Shulong Li ◽  
...  

Limited-angle computed tomography (CT) arises in a number of clinical applications. Existing iterative reconstruction algorithms cannot recover high-quality images from limited-angle data, leaving severe artifacts near edges. The choice of initial image influences iterative reconstruction performance, but it has not yet been studied in depth. In this work, we propose generating an optimized initial image that exploits the symmetry of the imaged object, followed by total variation (TV) based iterative reconstruction. Reconstruction results on both simulated and real data indicate that the proposed method effectively removes the artifacts near edges.
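To make the reconstruction loop concrete, here is a minimal sketch of a TV-regularized iteration seeded with a symmetry-based initial image. The mirroring initializer, the step sizes, and the user-supplied forward/back projector pair `A`/`At` are illustrative assumptions, not the authors' implementation:

```python
import numpy as np

def tv_gradient(img, eps=1e-8):
    """Gradient of a smoothed isotropic total-variation term."""
    dx = np.diff(img, axis=1, append=img[:, -1:])
    dy = np.diff(img, axis=0, append=img[-1:, :])
    norm = np.sqrt(dx ** 2 + dy ** 2 + eps)
    px, py = dx / norm, dy / norm
    # Negative divergence of the normalized gradient field
    return -((px - np.roll(px, 1, axis=1)) + (py - np.roll(py, 1, axis=0)))

def symmetric_initial_image(coarse_recon):
    """Hypothetical initializer: average a coarse reconstruction with its
    mirror image so the better-resolved half informs the degraded half."""
    return 0.5 * (coarse_recon + np.fliplr(coarse_recon))

def reconstruct(sino, A, At, x0, lam=0.1, step=0.02, n_iter=100):
    """Alternate a data-fidelity step with a TV descent step.
    A and At are user-supplied forward and back projectors."""
    x = x0
    for _ in range(n_iter):
        x = x + step * At(sino - A(x))       # fidelity update
        x = x - lam * step * tv_gradient(x)  # TV regularization
        x = np.clip(x, 0.0, None)            # enforce nonnegativity
    return x
```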

PeerJ ◽  
2018 ◽  
Vol 6 ◽  
pp. e5852
Author(s):  
Yu-Yu Lin ◽  
Ping Chun Wu ◽  
Pei-Lung Chen ◽  
Yen-Jen Oyang ◽  
Chien-Yu Chen

Background The need for read-based phasing arises with advances in sequencing technologies. The minimum error correction (MEC) approach is the primary strategy for resolving haplotypes by reducing conflicts in a single-nucleotide-polymorphism fragment matrix. However, the solution with the optimal MEC score is frequently not the real haplotype pair, because MEC methods consider all positions together, and conflicts in noisy regions can mislead the selection of corrections. To tackle this problem, we present a hierarchical assembly-based method designed to progressively resolve local conflicts. Results This study presents HAHap, a new phasing algorithm based on hierarchical assembly. HAHap leverages high-confidence variant pairs to build haplotypes progressively. On both real and simulated data, HAHap achieved better phasing error rates than other MEC-based methods when constructing haplotypes from short whole-genome sequencing reads. A comparison of the number of error corrections (ECs) on real data shows that HAHap predicts haplotypes with fewer ECs. We also used simulated data to investigate the behavior of HAHap under different sequencing conditions, highlighting its applicability in certain situations.
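For context, the MEC objective that these competing methods optimize can be scored as follows. This is a minimal sketch; the matrix encoding (0/1 alleles, -1 for SNPs a read does not cover) is an illustrative convention, not HAHap's internal representation:

```python
import numpy as np

def mec_cost(fragments, h1):
    """Number of allele flips needed to make every fragment consistent
    with one of the two complementary haplotypes h1 and h2."""
    h1 = np.asarray(h1)
    h2 = 1 - h1
    cost = 0
    for frag in fragments:
        covered = frag >= 0
        # Charge each fragment the cheaper of its two possible assignments
        d1 = np.sum(frag[covered] != h1[covered])
        d2 = np.sum(frag[covered] != h2[covered])
        cost += min(d1, d2)
    return cost

# Three reads over four heterozygous SNPs (-1 = not covered)
reads = np.array([[ 0,  0, -1, -1],
                  [-1,  1,  1, -1],
                  [ 0, -1, -1,  1]])
print(mec_cost(reads, [0, 0, 1, 1]))  # -> 1 correction under this phasing
```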


2021 ◽  
Vol 14 (1) ◽  
pp. 86-100
Author(s):  
Aleksei A. Korneev ◽  
Anatoly N. Krichevets ◽  
Konstantin V. Sugonyaev ◽  
Dmitriy V. Ushakov ◽  
Alexander G. Vinogradov ◽  
...  

Background. Spearman’s law of diminishing returns (SLODR) states that intercorrelations between scores on tests of intellectual abilities are higher when the data set comprises subjects with lower intellectual abilities, and vice versa. After almost a hundred years of research, this trend has only been detected on average. Objective. To determine whether the widely divergent results obtained to date are due to variations in scaling and in the selection of subjects. Design. We used three methods for SLODR detection based on moderated factor analysis (MFCA) to test real data and three sets of simulated data. Of the latter, the first set simulated a real SLODR effect. The second simulated a varying density of tasks of different difficulty and had no real SLODR effect. The third simulated a skewed selection of respondents with different abilities and likewise had no real SLODR effect. We selected the simulation parameters so that the correlation matrix of the simulated data was similar to the matrix derived from the real data, and all distributions had similar skewness parameters (about -0.3). Results. The results of MFCA are contradictory: the method cannot clearly distinguish the dataset with a real SLODR effect from datasets with a similar correlation structure and skewness but no real SLODR effect. The results allow us to conclude that when effects such as SLODR are very subtle and identifiable only in large samples, the features of the psychometric scale become critically important, because small variations in scale metrics may either mask a real SLODR effect or produce a false identification of SLODR.
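As a rough illustration of the data-generating process behind the first simulated set (a real SLODR effect), one can let the general-factor loading shrink with ability and compare group-wise intercorrelations. The loading function, group split, and sample size below are arbitrary choices, not the paper's simulation design:

```python
import numpy as np

rng = np.random.default_rng(0)
n, n_tests = 10_000, 6

g = rng.normal(size=n)            # general ability factor
lam = np.where(g < 0, 0.8, 0.6)   # loading diminishes with ability (SLODR)
scores = (lam * g)[:, None] + np.sqrt(1 - lam ** 2)[:, None] \
    * rng.normal(size=(n, n_tests))

def mean_offdiag(r):
    """Mean of the off-diagonal entries of a correlation matrix."""
    return (r.sum() - np.trace(r)) / (r.size - r.shape[0])

for name, mask in [("low-ability", g < 0), ("high-ability", g >= 0)]:
    r = np.corrcoef(scores[mask], rowvar=False)
    print(f"{name:12s} mean intercorrelation: {mean_offdiag(r):.2f}")
```

Note that splitting by the latent factor also restricts its range within each group, which is exactly why, as the abstract observes, such subtle effects are hard to tell apart from scaling and selection artifacts.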


Sensors ◽  
2018 ◽  
Vol 18 (12) ◽  
pp. 4458 ◽  
Author(s):  
Shih-Chun Jin ◽  
Chia-Jui Hsieh ◽  
Jyh-Cheng Chen ◽  
Shih-Huan Tu ◽  
Ya-Chen Chen ◽  
...  

Limited-angle iterative reconstruction (LAIR) reduces the radiation dose required for computed tomography (CT) imaging by decreasing the range of the projection angle. We developed an image-quality-based stopping-criteria method with a flexible and innovative instrument design that, when combined with LAIR, provides the image quality of a conventional CT system. This study describes the construction of different scan acquisition protocols for micro-CT system applications. Fully-sampled Feldkamp-Davis-Kress (FDK) reconstructed images were used as references to assess the image quality produced by the tested protocols. The insufficient portions of a sinogram were inpainted by applying a context encoder (CE), a type of generative adversarial network, to the LAIR process. The context image was passed through an encoder to extract features, which were connected to the decoder through a channel-wise fully-connected layer. Our results demonstrate the excellent performance of this novel approach. Even with the radiation dose reduced by 1/4, the iterative LAIR improved the full width at half maximum and the contrast-to-noise and signal-to-noise ratios by 20% to 40% relative to a fully-sampled FDK-based reconstruction. Our data indicate that this CE-based sinogram completion method enhances the efficacy and efficiency of LAIR and makes limited-angle reconstruction practical.
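A minimal PyTorch sketch of the encoder / channel-wise fully-connected layer / decoder pipeline described above follows. The layer counts, channel widths, and 64x64 patch size are illustrative placeholders, and the discriminator and adversarial loss used to train a context encoder are omitted:

```python
import torch
import torch.nn as nn

class ChannelwiseFC(nn.Module):
    """Channel-wise fully-connected layer: each channel's spatial map is
    mixed by its own dense matrix, with no cross-channel connections."""
    def __init__(self, channels, spatial):
        super().__init__()
        self.weight = nn.Parameter(
            torch.randn(channels, spatial, spatial) / spatial ** 0.5)

    def forward(self, x):                      # x: (B, C, H, W)
        b, c, h, w = x.shape
        flat = x.reshape(b, c, h * w)
        out = torch.einsum('bcs,cst->bct', flat, self.weight)
        return out.reshape(b, c, h, w)

class SinogramCE(nn.Module):
    """Encoder -> channel-wise FC -> decoder for a 64x64 sinogram patch."""
    def __init__(self, ch=32, size=64):
        super().__init__()
        self.enc = nn.Sequential(
            nn.Conv2d(1, ch, 4, 2, 1), nn.LeakyReLU(0.2),
            nn.Conv2d(ch, ch * 2, 4, 2, 1), nn.LeakyReLU(0.2))
        self.cwfc = ChannelwiseFC(ch * 2, (size // 4) ** 2)
        self.dec = nn.Sequential(
            nn.ConvTranspose2d(ch * 2, ch, 4, 2, 1), nn.ReLU(),
            nn.ConvTranspose2d(ch, 1, 4, 2, 1))

    def forward(self, sino):   # masked sinogram in, inpainted estimate out
        return self.dec(self.cwfc(self.enc(sino)))

net = SinogramCE()
print(net(torch.zeros(1, 1, 64, 64)).shape)   # torch.Size([1, 1, 64, 64])
```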


2015 ◽  
Author(s):  
Po-Ju Yao ◽  
Ren-Hua Chung

Computer simulations are routinely conducted to evaluate new statistical methods, to compare the properties of different methods, and to mimic real data in genetic epidemiology studies. Conducting simulation studies can become a complicated task, with challenges such as selecting an appropriate simulation tool and specifying the parameters of the simulation model. Although abundant simulated data have been generated for human genetic research, there is currently no public database designed specifically as a repository for these simulated data. In the absence of such a database, similar simulations may have been repeated across similar studies, resulting in redundant work. We created an online platform, DBSIM, for sharing simulation data and discussing simulation techniques for human genetic studies. DBSIM has a database containing simulation scripts, simulated data, and documentation from published manuscripts, as well as a discussion forum, which provides a platform for discussing the simulated data and exchanging simulation ideas. DBSIM will be useful in three aspects. Moreover, summary statistics are provided, such as the simulation tools that are most commonly used and the datasets that are most frequently downloaded. These statistics help researchers choose an appropriate simulation tool or select a common dataset for method comparisons. DBSIM can be accessed at http://dbsim.nhri.org.tw.


Metabolites ◽  
2021 ◽  
Vol 11 (4) ◽  
pp. 214
Author(s):  
Aneta Sawikowska ◽  
Anna Piasecka ◽  
Piotr Kachlicki ◽  
Paweł Krajewski

Peak overlapping is a common problem in chromatography, particularly for complex biological mixtures such as metabolite extracts. Because different compounds with similar chromatographic properties co-elute, peak separation becomes challenging. In this paper, two computational methods for separating peaks, applied for the first time to large chromatographic datasets, are described, compared, and experimentally validated. The methods lead from raw observations to data that can serve as inputs for statistical analysis. In both methods, the data are first normalized by sample mass, the baseline is removed, retention times are aligned, and peaks are detected. Then, in the first method, clustering is used to separate overlapping peaks, whereas in the second method, functional principal component analysis (FPCA) is applied for the same purpose. Simulated data and experimental results are used to present and compare both methods. The real data were obtained in a study of metabolomic changes in barley (Hordeum vulgare) leaves under drought stress. The results suggest that both methods are suitable for separating overlapping peaks, but the additional advantage of FPCA is the possibility of assessing the variability of individual compounds present within the same peaks across different chromatograms.
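Since FPCA on discretized chromatograms reduces to ordinary PCA on the sampled curves, a toy version of the second method can be sketched as follows. The synthetic two-compound peak window stands in for real data, and the preprocessing steps (normalization, baseline removal, alignment) are assumed to be already done:

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(1)
t = np.linspace(0, 1, 200)

def peak(mu, sig):
    return np.exp(-0.5 * ((t - mu) / sig) ** 2)

# 30 chromatograms of one retention-time window with two co-eluting
# compounds of varying abundance plus noise (illustrative shapes).
chromatograms = np.array([
    a * peak(0.40, 0.05) + b * peak(0.55, 0.05)
    + rng.normal(0, 0.01, t.size)
    for a, b in rng.uniform(0.5, 2.0, size=(30, 2))])

# Discretized FPCA: the principal components approximate the functional
# components, and the per-chromatogram scores separate the two compounds.
fpca = PCA(n_components=2)
scores = fpca.fit_transform(chromatograms)
print("variance explained:", fpca.explained_variance_ratio_.round(3))
```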


2021 ◽  
Vol 10 (7) ◽  
pp. 435
Author(s):  
Yongbo Wang ◽  
Nanshan Zheng ◽  
Zhengfu Bian

Since pairwise registration is a necessary step for the seamless fusion of point clouds from neighboring stations, a closed-form solution to planar feature-based registration of LiDAR (Light Detection and Ranging) point clouds is proposed in this paper. Building on the Plücker-coordinate representation of linear features in three-dimensional space, a quad-tuple representation of planar features is introduced, which makes it possible to directly measure the difference between any two planar features. Dual quaternions are employed to represent the spatial transformation, operations between dual quaternions and the quad-tuple representation of planar features are defined, and an error norm is constructed from them. The proposed solution is then derived step by step from L2-norm minimization. Two experiments using both simulated and real data were designed to verify the correctness and feasibility of the proposed solution. With the simulated data, the calculated registration results were consistent with the pre-established parameters, verifying the correctness of the solution. With the real data, the calculated registration results were consistent with those of iterative methods. Two conclusions can be drawn: (1) the proposed solution requires no initial estimates of the unknown parameters, which assures its stability and robustness; (2) using dual quaternions to represent the spatial transformation greatly reduces the number of additional constraints in the estimation process.
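The plane-based least-squares idea can be illustrated with a simplified closed-form stand-in that estimates the rotation from the plane normals (via SVD) and the translation from the plane offsets. This is not the paper's dual-quaternion formulation; the quad-tuple convention n·x + d = 0 and the toy example are assumptions for illustration:

```python
import numpy as np

def register_planes(src, dst):
    """Closed-form registration from matched planes given as quad tuples
    (nx, ny, nz, d) with unit normal n and offset d (n.x + d = 0)."""
    n_src, n_dst = src[:, :3], dst[:, :3]
    # Rotation aligning source normals with destination normals (Kabsch)
    U, _, Vt = np.linalg.svd(n_dst.T @ n_src)
    S = np.diag([1, 1, np.sign(np.linalg.det(U @ Vt))])
    R = U @ S @ Vt
    # A transformed plane satisfies d' = d - n'.t, so n_dst @ t = d_src - d_dst
    t, *_ = np.linalg.lstsq(n_dst, src[:, 3] - dst[:, 3], rcond=None)
    return R, t

# Toy check: three orthogonal planes, rotated 90 deg about z and shifted
src = np.array([[1., 0, 0, -1], [0, 1, 0, -2], [0, 0, 1, -3]])
Rz = np.array([[0., -1, 0], [1, 0, 0], [0, 0, 1]])
t_true = np.array([0.5, -0.5, 1.0])
n_new = src[:, :3] @ Rz.T
dst = np.hstack([n_new, (src[:, 3] - n_new @ t_true)[:, None]])
R, t = register_planes(src, dst)
print(np.allclose(R, Rz), np.round(t, 3))   # True [ 0.5 -0.5  1. ]
```

At least three planes with linearly independent normals are needed for a unique solution.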


2021 ◽  
Vol 22 (1) ◽  
Author(s):  
Camilo Broc ◽  
Therese Truong ◽  
Benoit Liquet

Abstract Background The increasing number of genome-wide association studies (GWAS) has revealed several loci that are associated with multiple distinct phenotypes, suggesting the existence of pleiotropic effects. Highlighting these cross-phenotype genetic associations could help to identify and understand the common biological mechanisms underlying some diseases. Common approaches test the association between genetic variants and multiple traits at the SNP level. In this paper, we propose novel gene- and pathway-level approaches for the case where several independent GWAS on independent traits are available. The method is based on a generalization of sparse group Partial Least Squares (sgPLS) that takes groups of variables into account, combined with a Lasso penalization that links all the independent data sets. This method, called joint-sgPLS, convincingly detects signal at both the variable level and the group level. Results Our method has the advantage of producing a global, readable model while respecting the architecture of the data. It can outperform traditional methods and provides broader insight in terms of a priori information. We compared the performance of the proposed method with benchmark methods on simulated data and give an example of application to real data, with the aim of highlighting common susceptibility variants for breast and thyroid cancers. Conclusion Joint-sgPLS shows promising properties for detecting signal. As an extension of PLS, the method is suited to data with a large number of variables. The Lasso penalization accommodates the grouping structure of variables and of observation sets. Furthermore, although the method is applied here to a genetic study, its formulation is adapted to any data with a large number of variables and a known a priori group structure, in other application fields.
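The "Lasso penalization that links all the independent data sets" can be sketched via the proximal operator of a group penalty in which each row gathers one variable's weights across all studies, so a variable is selected or dropped jointly everywhere. This is a simplified building block, not the authors' full joint-sgPLS algorithm:

```python
import numpy as np

def group_soft_threshold(W, lam):
    """Proximal step for a row-wise group-Lasso penalty: row i holds
    variable i's weights across studies and is kept or zeroed as a unit."""
    norms = np.linalg.norm(W, axis=1, keepdims=True)
    return W * np.maximum(0.0, 1.0 - lam / np.maximum(norms, 1e-12))

W = np.array([[0.9, 1.1],      # strong in both studies -> kept (shrunk)
              [0.1, -0.05]])   # weak everywhere -> zeroed jointly
print(group_soft_threshold(W, lam=0.3))
```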


2021 ◽  
Vol 11 (2) ◽  
pp. 582
Author(s):  
Zean Bu ◽  
Changku Sun ◽  
Peng Wang ◽  
Hang Dong

Calibration between multiple sensors is a fundamental procedure for data fusion. To address the problems of large errors and tedious operation, we present a novel method for calibration between light detection and ranging (LiDAR) and a camera. We designed a calibration target: an arbitrary triangular pyramid with a chessboard pattern on each of its three faces. The target contains both 3D and 2D information, which can be used to obtain the intrinsic parameters of the camera and the extrinsic parameters of the system. In the proposed method, the world coordinate system is established through the triangular pyramid. We extract the equations of the triangular pyramid planes to find the relative transformation between the two sensors. A single capture from the camera and the LiDAR is sufficient for calibration, and errors are reduced by minimizing the distance between points and planes. The accuracy can be increased further with more captures. We carried out experiments on simulated data with varying degrees of noise and numbers of frames. Finally, the calibration results were verified on real data through incremental validation and analysis of the root-mean-square error (RMSE), demonstrating that our calibration method is robust and provides state-of-the-art performance.
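The step of minimizing the distance between points and planes can be sketched as a small nonlinear least-squares problem. The residual convention (n·x + d = 0), the toy three-face target, and the use of SciPy's generic solver with a zero initial guess are illustrative assumptions, not the paper's pipeline:

```python
import numpy as np
from scipy.optimize import least_squares
from scipy.spatial.transform import Rotation

def residuals(params, pts, normals, ds, ids):
    """Signed distance of each transformed LiDAR point to its associated
    target face, each face given as n.x + d = 0 in the world frame."""
    R = Rotation.from_rotvec(params[:3]).as_matrix()
    world = pts @ R.T + params[3:]
    return np.einsum('ij,ij->i', world, normals[ids]) + ds[ids]

# Toy target: three orthogonal faces through the origin, two points each,
# observed in a LiDAR frame offset by a known transform.
normals, ds = np.eye(3), np.zeros(3)
world_pts = np.array([[0, .1, .2], [0, .3, .1],    # on face x = 0
                      [.2, 0, .4], [.1, 0, .2],    # on face y = 0
                      [.3, .2, 0], [.4, .1, 0]])   # on face z = 0
ids = np.array([0, 0, 1, 1, 2, 2])
R_true = Rotation.from_rotvec([0.05, -0.02, 0.1])
t_true = np.array([0.3, -0.2, 0.5])
lidar_pts = (world_pts - t_true) @ R_true.as_matrix()

fit = least_squares(residuals, np.zeros(6),
                    args=(lidar_pts, normals, ds, ids))
print(np.round(fit.x[3:], 3))   # recovered translation ~ t_true
```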


Informatics ◽  
2021 ◽  
Vol 8 (2) ◽  
pp. 22
Author(s):  
Sung Jin Lee ◽  
Sang Eun Lee ◽  
Ji-On Kim ◽  
Gi Bum Kim

In this study, we address a limitation of “The Corona 19 Epidemiological Research Support System,” developed by the Korea Centers for Disease Control and Prevention: it analyzes only the Global Positioning System (GPS) information of confirmed COVID-19 cases. We therefore study a method by which health authorities can predict the transmission route of COVID-19 between visitors in a community from a spatiotemporal perspective. The method models a contact network around the first confirmed case, allowing the health authorities to test visitors after an outbreak of COVID-19 in the community. Given the GPS data of community visitors, it traces backward in time from the occurrence of the first confirmed case and creates contact clusters at each time step. This differs from other studies, which focus on identifying the movement paths of confirmed patients by forward tracing. The proposed method builds the contact network by assigning weights to each contact cluster based on the degree of proximity between contacts. Identifying the source of infection in the contact network lets us predict the transmission route between the first confirmed case and the source of infection and classify the contacts along that route. In our experiment, we used 64,073 simulated records for 100 people and extracted the transmission route and a top-10 list for centrality analysis. Contacts along the route can be quickly designated as priorities for COVID-19 testing. In addition, the authority can use centrality measures to identify highly influential subjects and include them in urgent follow-up epidemiological investigations. This model is expected to be used in epidemic investigations that require the quick selection of close contacts.
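A toy version of the weighted contact network and the centrality ranking might look like the following, using networkx; the visitors, edges, and proximity weights are fabricated placeholders for illustration only:

```python
import networkx as nx

# Nodes are visitors; edges are co-presence links weighted by proximity
# (higher = closer contact).
G = nx.Graph()
G.add_weighted_edges_from([
    ("case_1", "v2", 0.9), ("case_1", "v3", 0.4),
    ("v2", "v4", 0.7), ("v3", "v5", 0.8), ("v4", "v5", 0.3),
])

# Betweenness centrality ranks influential visitors; shortest paths use
# inverse weight so that closer contact counts as a shorter distance.
for u, v, w in G.edges(data="weight"):
    G[u][v]["distance"] = 1.0 / w
rank = nx.betweenness_centrality(G, weight="distance")
top10 = sorted(rank, key=rank.get, reverse=True)[:10]
print(top10)   # candidates to prioritize for testing
```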


2021 ◽  
Vol 13 (5) ◽  
pp. 2426
Author(s):  
David Bienvenido-Huertas ◽  
Jesús A. Pulido-Arcas ◽  
Carlos Rubio-Bellido ◽  
Alexis Pérez-Fargallo

In recent times, studies of the accuracy of algorithms for predicting different aspects of energy use in the building sector have flourished, with energy poverty being one of the issues that has received considerable critical attention. Previous studies in this field have characterized it using different indicators, but they have failed to develop instruments for predicting the risk of low-income households falling into energy poverty. This research explores how accurately six regression algorithms can forecast the risk of energy poverty by means of the fuel poverty potential risk index. Using data from the national survey of socioeconomic conditions of Chilean households and generating data for different typologies of social dwellings (e.g., form ratio or roof surface area), this study simulated 38,880 cases and compared the accuracy of the six algorithms. Multilayer perceptron, M5P, and support vector regression delivered the best accuracy, with correlation coefficients over 99.5%. In terms of computing time, M5P outperformed the rest. Although these results suggest that energy poverty can be accurately predicted using simulated data, the algorithms still need to be tested against real data. These results can be useful in devising policies to tackle energy poverty in advance.
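A sketch of such an algorithm comparison in scikit-learn is shown below. The synthetic features and target stand in for the survey-derived simulated cases, and a generic decision tree serves as a stand-in for Weka's M5P, which has no scikit-learn equivalent:

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVR
from sklearn.tree import DecisionTreeRegressor  # stand-in for Weka's M5P

# Placeholder data: dwelling/household features vs. a fuel poverty
# potential risk index (shapes and relationship are illustrative).
rng = np.random.default_rng(42)
X = rng.normal(size=(1000, 8))
y = X @ rng.normal(size=8) + rng.normal(0, 0.1, size=1000)

models = {
    "MLP": make_pipeline(StandardScaler(),
                         MLPRegressor(max_iter=2000, random_state=0)),
    "SVR": make_pipeline(StandardScaler(), SVR()),
    "Tree (M5P stand-in)": DecisionTreeRegressor(random_state=0),
}
for name, model in models.items():
    r2 = cross_val_score(model, X, y, cv=5, scoring="r2")
    print(f"{name:20s} mean R^2 = {r2.mean():.3f}")
```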

