IMPROVED MIN-SUM DECODING ALGORITHM FOR IRREGULAR LDPC CODES

Author(s):  
А.В. Башкиров ◽  
И.В. Свиридова ◽  
Т.Д. Ижокина ◽  
Е.А. Зубкова ◽  
О.В. Свиридова ◽  
...  

An analytical approach to determining the optimal post-processing function for the minimum operation in the MIN-SUM algorithm, previously derived for regular low-density parity-check (LDPC) codes, is extended to irregular LDPC codes. The optimal post-processing expression for the irregular case varies from one check node to another, as well as from one iteration to the next. For practical use, this optimal function must be approximated. Unlike the regular case, where a single post-processing function could be used throughout the entire decoding process without loss of bit-error performance, for irregular codes it is critical to vary the post-processing from one iteration to the next in order to achieve good performance. Using this approach, we found that the bit-error performance of the belief propagation algorithm corresponds to an improvement of 1 dB over the MIN-SUM algorithm without post-processing. First, an overview of the approach and the analytical framework for optimal post-processing are presented. Next, the optimal post-processing function for irregular codes is presented and possible simplifications are discussed. Finally, simulation results and the benefits of the approximation are shown.
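
The abstract does not give the authors' post-processing expression itself. Purely as a hedged illustration of the kind of operation being discussed, the Python sketch below implements a generic normalized min-sum check-node update in which the scaling factor is allowed to change per iteration; the function name, the alpha_schedule, and the message values are hypothetical and are not the paper's optimal function.

    import numpy as np

    def check_node_update(msgs, alpha=0.8):
        """Min-sum check-node update with a multiplicative post-processing
        (normalization) factor alpha. msgs holds the incoming LLR messages
        from the variable nodes connected to one check node (degree >= 2)."""
        msgs = np.asarray(msgs, dtype=float)
        signs = np.sign(msgs)
        mags = np.abs(msgs)
        total_sign = np.prod(signs)
        out = np.empty_like(msgs)
        for i in range(len(msgs)):
            # extrinsic message: exclude the i-th incoming value
            others = np.delete(mags, i)
            out[i] = total_sign * signs[i] * alpha * others.min()
        return out

    # Hypothetical iteration-dependent correction schedule for a degree-4 check node.
    alpha_schedule = [0.7, 0.75, 0.8, 0.85]
    incoming = [1.2, -0.4, 2.3, -3.1]
    for t, alpha_t in enumerate(alpha_schedule):
        print(t, check_node_update(incoming, alpha_t))

Letting alpha depend on the iteration index mirrors the abstract's point that, for irregular codes, a single fixed correction throughout decoding is not sufficient.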

Author(s):  
Margaret Jane Radin

Boilerplate—the fine-print terms and conditions that we become subject to when we click “I agree” online, rent an apartment, or enter an employment contract, for example—pervades all aspects of our modern lives. On a daily basis, most of us accept boilerplate provisions without realizing that should a dispute arise about a purchased good or service, the nonnegotiable boilerplate terms can deprive us of our right to jury trial and relieve providers of responsibility for harm. Boilerplate is the first comprehensive treatment of the problems posed by the increasing use of these terms, demonstrating how their use has degraded traditional notions of consent, agreement, and contract, and sacrificed core rights whose loss threatens the democratic order. This book examines attempts to justify the use of boilerplate provisions by claiming either that recipients freely consent to them or that economic efficiency demands them, and it finds these justifications wanting. It argues that our courts, legislatures, and regulatory agencies have fallen short in their evaluation and oversight of the use of boilerplate clauses. To improve legal evaluation of boilerplate, the book offers a new analytical framework, one that takes into account the nature of the rights affected, the quality of the recipient's consent, and the extent of the use of these terms. It goes on to offer possibilities for new methods of boilerplate evaluation and control, and concludes by discussing positive steps that NGOs, legislators, regulators, courts, and scholars could take to bring about better practices.


SLEEP ◽  
2020 ◽  
Author(s):  
Luca Menghini ◽  
Nicola Cellini ◽  
Aimee Goldstone ◽  
Fiona C Baker ◽  
Massimiliano de Zambotti

Sleep-tracking devices, particularly within the consumer sleep technology (CST) space, are increasingly used in both research and clinical settings, providing new opportunities for large-scale data collection in highly ecological conditions. Due to the fast pace of the CST industry combined with the lack of a standardized framework to evaluate the performance of sleep trackers, their accuracy and reliability in measuring sleep remain largely unknown. Here, we provide a step-by-step analytical framework for evaluating the performance of sleep trackers (including standard actigraphy), as compared with gold-standard polysomnography (PSG) or other reference methods. The analytical guidelines are based on recent recommendations for evaluating and using CST from our group and others (de Zambotti and colleagues; Depner and colleagues), and include raw data organization as well as critical analytical procedures, including discrepancy analysis, Bland–Altman plots, and epoch-by-epoch analysis. Analytical steps are accompanied by open-source R functions (available at https://sri-human-sleep.github.io/sleep-trackers-performance/AnalyticalPipeline_v1.0.0.html). In addition, an empirical sample dataset is used to describe and discuss the main outcomes of the proposed pipeline. The guidelines and the accompanying functions are aimed at standardizing the testing of CST performance, not only to increase the replicability of validation studies, but also to provide ready-to-use tools to researchers and clinicians. All in all, this work can help to increase the efficiency, interpretation, and quality of validation studies, and to improve the informed adoption of CST in research and clinical settings.
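
The published pipeline itself is provided as the open-source R functions linked above. As a language-agnostic sketch of two of the named steps only, the Python fragment below computes a Bland–Altman bias with 95% limits of agreement for a summary sleep measure and epoch-by-epoch sensitivity/specificity for binary sleep/wake scoring; the variable names and the 1 = sleep / 0 = wake coding are assumptions, not the authors' implementation.

    import numpy as np

    def bland_altman(device_minutes, psg_minutes):
        """Discrepancy analysis for a summary measure (e.g., total sleep time in
        minutes): bias and 95% limits of agreement of device minus reference."""
        device = np.asarray(device_minutes, dtype=float)
        psg = np.asarray(psg_minutes, dtype=float)
        diffs = device - psg              # device-minus-PSG discrepancies
        means = (device + psg) / 2.0      # per-recording averages (x-axis of the plot)
        bias = diffs.mean()
        sd = diffs.std(ddof=1)
        loa = (bias - 1.96 * sd, bias + 1.96 * sd)
        return bias, loa, means, diffs

    def epoch_by_epoch_agreement(device_epochs, psg_epochs):
        """Epoch-by-epoch agreement for binary scoring (1 = sleep, 0 = wake):
        sensitivity for sleep, specificity for wake, and overall accuracy."""
        d = np.asarray(device_epochs)
        p = np.asarray(psg_epochs)
        sensitivity = np.mean(d[p == 1] == 1)
        specificity = np.mean(d[p == 0] == 0)
        accuracy = np.mean(d == p)
        return sensitivity, specificity, accuracy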


2021 ◽  
pp. 146247452198980
Author(s):  
Vicky Heap ◽  
Alex Black ◽  
Zoe Rodgers

Community Protection Notices (CPNs) are civil preventive orders used in England and Wales to prevent and/or require specific behaviour by an individual or organisation, where existing conduct has a ‘detrimental impact on the quality of life of those in the locality’. Breach of the notice results in a £100 fine under a Fixed Penalty Notice or a possible criminal conviction. To date, CPNs have tackled an array of perceived anti-social behaviours, ranging from rough sleeping to overgrown gardens. Using Ashworth and Zedner’s preventive justice as an analytical framework, our research qualitatively explores recipients’ experiences of this new tool for the first time. The findings highlight how the operationalisation of CPNs extends the coercive power of the state, with a range of negative consequences relating to the concepts of disproportionality, due process and accountability. We also offer three empirically-grounded recommendations for reforming CPN practices.


Author(s):  
Radhika Theagarajan ◽  
Shubham Nimbkar ◽  
Jeyan Arthur Moses ◽  
Chinnaswamy Anandharamakrishnan

2008 ◽  
Vol 2008 ◽  
pp. 1-4
Author(s):  
Luca Barletta ◽  
Arnaldo Spalvieri

This work focuses on high-rate () moderate-length () low-density parity-check codes. High-rate codes make it possible to maintain good quality in the preliminary decisions that are used in carrier recovery, while a moderate code length keeps the latency low. The interleaver of the LDPC matrix that we consider is inspired by that of the DVB-S2 standard. A novel approach for avoiding short cycles is analyzed. A modified BP decoding algorithm is applied in order to deal with longer cycles. Simulation results for the AWGN channel are presented, both for BPSK signalling and for coded modulation based on the partition .
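
The cycle-avoidance construction itself is not described in the abstract. As a minimal illustration of the quantity such a design targets, the sketch below counts length-4 cycles in a binary parity-check matrix (any pair of rows sharing two or more columns closes a 4-cycle), a count that a well-designed interleaver would drive to zero; the matrix and function name are hypothetical.

    import numpy as np

    def count_4_cycles(H):
        """Count length-4 cycles in the Tanner graph of a binary parity-check
        matrix H: each pair of rows sharing k >= 2 columns contributes
        k * (k - 1) / 2 four-cycles."""
        H = np.asarray(H, dtype=np.int64)
        overlap = H @ H.T                 # (i, j): number of columns shared by rows i and j
        upper = np.triu(overlap, k=1)     # consider each row pair once
        return int(np.sum(upper * (upper - 1) // 2))

    # Toy example: rows 0 and 1 share columns 0 and 2, giving one 4-cycle.
    H = np.array([[1, 0, 1, 0],
                  [1, 1, 1, 0],
                  [0, 1, 0, 1]])
    print(count_4_cycles(H))  # -> 1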


2011 ◽  
Vol E94-B (8) ◽  
pp. 2375-2377 ◽  
Author(s):  
Beomkyu SHIN ◽  
Hosung PARK ◽  
Jong-Seon NO ◽  
Habong CHUNG

2001 ◽  
Vol 1 (4) ◽  
pp. 282-290 ◽  
Author(s):  
F. C. Langbein ◽  
B. I. Mills ◽  
A. D. Marshall ◽  
R. R. Martin

Current reverse engineering systems can generate boundary representation (B-rep) models from 3D range data. Such models suffer from inaccuracies caused by noise in the input data and algorithms. The quality of reverse engineered geometric models can be improved by finding candidate shape regularities in such a model, and constraining the model to meet a suitable subset of them, in a post-processing step called beautification. This paper discusses algorithms to detect such approximate regularities in terms of similarities between feature objects describing properties of faces, edges and vertices, and small groups of these elements in a B-rep model with only planar, spherical, cylindrical, conical and toroidal faces. For each group of similar feature objects they also seek special feature objects which may represent the group, e.g. an integer value which approximates the radius of similar cylinders. Experiments show that the regularities found by the algorithms include the desired regularities as well as spurious regularities, which can be limited by an appropriate choice of tolerances.
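
The paper's algorithms operate on richer feature objects and full B-rep topology; the simplified sketch below, under assumed tolerances, only illustrates the two basic steps named above: grouping approximately equal scalar feature values (e.g., cylinder radii) and seeking a special representative for each group, such as a nearby integer. All names and the sample radii are hypothetical.

    def group_similar(values, tol=0.05):
        """Group approximately equal feature values: a value joins the current
        group if its difference from the last member is within a relative tolerance."""
        groups = []
        for v in sorted(values):
            if groups and abs(v - groups[-1][-1]) <= tol * max(abs(v), 1e-12):
                groups[-1].append(v)
            else:
                groups.append([v])
        return groups

    def special_value(group, tol=0.01):
        """Return a 'special' representative for a group of similar values,
        e.g. a nearby integer; otherwise fall back to the group mean."""
        mean = sum(group) / len(group)
        nearest_int = round(mean)
        if abs(mean - nearest_int) <= tol * max(abs(mean), 1e-12):
            return float(nearest_int)
        return mean

    radii = [9.98, 10.03, 10.01, 24.6, 24.7]   # hypothetical cylinder radii from a B-rep model
    for g in group_similar(radii):
        print(g, "->", special_value(g))   # first group snaps to 10.0, second keeps its mean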

