Improved Decoding for LDPC Coded Modulation Systems Using Averaged Log-Likelihood Ratios

2011 ◽  
Vol 30 (8) ◽  
pp. 1845-1848
Author(s):  
Ping Huang ◽  
Ming Jiang ◽  
Chun-ming Zhao

Author(s):  
Shunichi Ishihara

This study is one of the first likelihood-ratio-based forensic text comparison studies in forensic authorship analysis. Likelihood-ratio-based evaluation of scientific evidence has been adopted across many forensic comparison disciplines, such as DNA, handwriting, fingerprint, footwear, and voice-recording comparison, and it is widely accepted that this is the way to ensure maximum accountability and transparency of the process. Owing to its convenience and low cost, the short message service (SMS) has long been a popular medium of communication. Unfortunately, SMS messages are sometimes used for reprehensible purposes, e.g. communication between drug dealers and buyers, or in illicit acts such as extortion, fraud, scams, hoaxes, and false reports of terrorist threats. In this study, the author performs a likelihood-ratio-based forensic text comparison of SMS messages focusing on lexical features. The likelihood ratios (LRs) are calculated using Aitken and Lucy's (2004) multivariate kernel density procedure and are then calibrated. The validity of the system is assessed on the basis of the magnitude of the LRs using the log-likelihood-ratio cost (Cllr), and the strength of the derived LRs is presented graphically in Tippett plots. The results of the current study are compared with those of previous studies.
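The Cllr metric mentioned above has a standard closed form: the average of log2(1 + 1/LR) over same-source comparisons plus the average of log2(1 + LR) over different-source comparisons, halved. A minimal sketch (the example LR values are illustrative, not data from the study):

```python
import math

def cllr(ss_lrs, ds_lrs):
    """Log-likelihood-ratio cost from same-source (ss) and
    different-source (ds) likelihood ratios."""
    ss = sum(math.log2(1.0 + 1.0 / lr) for lr in ss_lrs) / len(ss_lrs)
    ds = sum(math.log2(1.0 + lr) for lr in ds_lrs) / len(ds_lrs)
    return 0.5 * (ss + ds)

# A well-calibrated system gives large LRs for same-source pairs and
# small LRs for different-source pairs, so Cllr falls well below 1;
# a system that always outputs LR = 1 (no information) scores exactly 1.
print(cllr([50.0, 120.0, 8.0], [0.02, 0.1, 0.005]))
```

Lower Cllr indicates better validity; Cllr >= 1 means the system is no better than (or worse than) providing no evidence at all.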


2005 ◽  
Vol 12 (3) ◽  
pp. 155-160 ◽  
Author(s):  
J K Morris ◽  
N J Wald

Objective: The screening performance of tests involving multiple markers is usually presented visually as two Gaussian relative frequency distributions of risk, one curve relating to affected and the other to unaffected individuals. If the distribution of the underlying screening markers is approximately Gaussian, risk estimates based on the same markers will usually also be approximately Gaussian. However, this approximation sometimes fails; here we examine the circumstances in which it does. Setting: A theoretical statistical analysis. Methods: Hypothetical log-Gaussian relative frequency distributions of affected and unaffected individuals were generated for three antenatal screening markers for Down's syndrome. Log likelihood ratios were calculated for each marker value, and plots of the relative frequency distributions were compared with plots of Gaussian distributions based on the means and standard deviations of these log likelihood ratios. Results: When the standard deviations of the distributions of a perfectly Gaussian screening marker are similar in affected and unaffected individuals, the distributions of risk estimates are also approximately Gaussian. If the standard deviations differ materially, incorrectly assuming that the distributions of the risk estimates are Gaussian creates a graphical anomaly in which the distributions of risk in affected and unaffected individuals, plotted on a continuous risk scale, intersect in two places, which is theoretically impossible. Plotting the risk distributions empirically reveals that all individuals have an estimated risk above a specified value: for individuals with more extreme marker values, the risk estimates reverse and increase instead of continuing to decrease. Conclusion: It is useful to check whether a Gaussian approximation for the distribution of risk estimates based on a screening marker is valid. If the marker level at which risk reversal occurs lies within the set truncation limits, the limits may need to be reset, and a Gaussian model may be inappropriate for illustrating the risk distributions.
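The risk-reversal effect described in the Results follows directly from the algebra: for a Gaussian marker, the log likelihood ratio is linear in the marker value when the two standard deviations are equal, but quadratic when they differ, so the risk must eventually turn around in the tails. A minimal sketch (the means and standard deviations below are illustrative, not the markers from the study):

```python
import math

def log_lr(x, mu_a, sd_a, mu_u, sd_u):
    """Log likelihood ratio (affected vs. unaffected) for a Gaussian marker."""
    def log_pdf(v, mu, sd):
        return -0.5 * ((v - mu) / sd) ** 2 - math.log(sd * math.sqrt(2.0 * math.pi))
    return log_pdf(x, mu_a, sd_a) - log_pdf(x, mu_u, sd_u)

xs = (0.5, 0.0, -0.5, -1.5)

# Equal SDs: the log LR is linear in the marker, so risk falls
# monotonically as the marker moves away from the affected mean.
equal = [log_lr(x, 1.0, 0.5, 0.0, 0.5) for x in xs]
assert equal == sorted(equal, reverse=True)

# Unequal SDs: the log LR is quadratic in the marker, so at the most
# extreme value the risk reverses and starts increasing again.
unequal = [log_lr(x, 1.0, 0.8, 0.0, 0.4) for x in xs]
assert unequal[-1] > unequal[-2]
```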



2013 ◽  
Vol 433-435 ◽  
pp. 595-598
Author(s):  
Yu Nian Ru ◽  
Jian Ping Li

This paper proposes a novel stopping criterion based on the HDA (hard-decision-aided) stopping criterion. To devise the criterion, we consider the HDA criterion together with the mean of the absolute values of the log-likelihood ratios (LLRs) at the output of the component decoders over each frame. The new criterion saves more than 0.5 iterations in the low signal-to-noise ratio (SNR) regime, with negligible degradation of error performance.
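A combined check of this kind can be sketched as follows. The threshold value and the OR-combination of the two conditions are illustrative assumptions, since the abstract does not state the paper's exact rule:

```python
def should_stop(llrs, prev_hard, threshold=10.0):
    """Illustrative combined stopping check for an iterative decoder.

    Stop when the hard decisions match those of the previous iteration
    (the HDA criterion) or when the mean absolute LLR over the frame
    exceeds a confidence threshold.  The threshold and the combination
    rule are illustrative assumptions, not the paper's exact criterion.
    """
    hard = [1 if llr < 0 else 0 for llr in llrs]      # hard decisions
    mean_abs = sum(abs(l) for l in llrs) / len(llrs)  # frame reliability
    return hard == prev_hard or mean_abs > threshold, hard

# Reliable frame: mean |LLR| is about 12.1 > 10, so decoding stops early
# even though the hard decisions still differ from the last iteration.
stop, hard = should_stop([12.5, -14.0, 9.8], prev_hard=[0, 1, 1])
assert stop
```

The mean-|LLR| term lets the decoder exit before the hard decisions stabilize, which is where the fractional-iteration saving at low SNR comes from.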


2014 ◽  
Vol 2014 ◽  
pp. 1-9 ◽  
Author(s):  
N. M. Masoodhu Banu ◽  
S. Sasikumar

A novel belief propagation decoding algorithm based on doping bits is proposed for rate-adaptive LDPC codes built on a fixed bipartite-graph code. The proposed work modifies the decoding algorithm by converting the punctured nodes to regular source nodes and by following the encoding rule at the decoder. Doping bits transmitted in place of the punctured bits, together with the modified decoding algorithm, feed all punctured nodes with reliable log likelihood ratios. This enables the proposed decoding algorithm to recover all punctured nodes in the early iterations. The faster convergence reduces decoder complexity while providing a considerable improvement in performance.
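The key difference from conventional puncturing is in how the variable-node LLRs are initialized before belief propagation. A minimal sketch under stated assumptions (names and data layout are illustrative, not the paper's notation):

```python
def init_llrs(channel_llrs, punctured_idx, doping_llrs):
    """Initialize variable-node LLRs before belief propagation.

    A conventionally punctured bit has no channel observation and
    starts at LLR = 0 (total uncertainty); here the transmitted doping
    bits supply reliable channel LLRs for those positions instead.
    Names and layout are illustrative assumptions, not the paper's API.
    """
    llrs = list(channel_llrs)
    for pos, llr in zip(punctured_idx, doping_llrs):
        llrs[pos] = llr  # reliable doping-bit LLR instead of 0.0
    return llrs

# Positions 1 and 3 were punctured; the doping bits fill them with
# informative LLRs, so BP can resolve those nodes in early iterations.
print(init_llrs([2.0, 0.0, -1.5, 0.0], [1, 3], [3.2, -4.1]))
```

Starting the punctured nodes from reliable values rather than zero is what allows the fast convergence, and hence the complexity reduction, claimed in the abstract.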

