Where to go from here? New cross layer techniques for LTE Turbo-Code decoding at high code rates

2018 ◽  
Vol 16 ◽  
pp. 77-87 ◽  
Author(s):  
Stefan Weithoffer ◽  
Norbert Wehn

Abstract. The wide range of code rates and code block sizes supported by today's wireless communication standards, together with the requirement for a throughput in the order of Gbps, necessitates sophisticated and highly parallel channel decoder architectures. Code rates specified in the LTE standard, which uses Turbo-Codes, range up to r=0.94 to maximize the information throughput by transmitting only a minimum amount of parity information, which negatively impacts the error correcting performance. This especially holds for highly parallel hardware architectures. Therefore, the error correcting performance must be traded off against the degree of parallel processing. State-of-the-art Turbo-Code decoder hardware architectures are optimized on code block level to alleviate this trade-off. In this paper, we follow a cross-layer approach by combining system-level knowledge about the rate matching and the transport block structure in LTE with the bit-level technique of on-the-fly CRC calculation. Thereby, our proposed Turbo-Code decoder hardware architecture achieves coding gains of 0.4–1.8 dB compared to the state of the art across a wide range of code block sizes. For the fully LTE-compatible Turbo-Code decoder, we demonstrate negligible hardware overhead, resulting in high area and energy efficiency, and report post-place-and-route synthesis numbers.
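To make the bit-level component concrete, below is a minimal, illustrative Python sketch of on-the-fly CRC accumulation: the CRC register is advanced bit by bit as hard decisions become available, so a code-block check can conclude as soon as the last bit is decoded. This is not the paper's hardware architecture; the function names are ours, and the only LTE-specific detail assumed is the CRC-24A generator polynomial.

```python
def crc24a_update(crc, bit):
    """Advance the 24-bit CRC register by one decoded bit (MSB-first, bit-serial).

    POLY is the LTE CRC-24A generator polynomial; the function names here are
    ours and purely illustrative -- this is not the paper's hardware design.
    """
    POLY = 0x864CFB
    feedback = ((crc >> 23) & 1) ^ (bit & 1)
    crc = (crc << 1) & 0xFFFFFF
    if feedback:
        crc ^= POLY
    return crc


def crc_check_on_the_fly(decoded_bits):
    """Feed hard decisions into the CRC as they are produced; a zero register
    after the last bit (data followed by its appended CRC) signals a pass."""
    crc = 0
    for bit in decoded_bits:
        crc = crc24a_update(crc, bit)
    return crc == 0
```

Streaming the decoder's hard decisions through such an update function avoids a separate CRC pass over the code block once decoding finishes, which is what makes the check cheap to fold into the decoder datapath.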

2013 ◽  
Vol 347-350 ◽  
pp. 1720-1726
Author(s):  
Peng Zhu ◽  
Jun Zhu ◽  
Xiang Liu

Turbo codes have a wide range of applications in 3G mobile communications, deep-sea communications, satellite communications, and other power-constrained fields. In this paper, the Turbo code decoding principle and several major decoding methods are introduced. Simulations of Turbo code performance over an AWGN channel are carried out under different parameters, and the effects of interleaving length, the number of iterations, and the decoding algorithm on Turbo code performance are discussed. Simulation results show that, at the same signal-to-noise ratio, the performance of Turbo codes improves with more decoding iterations, longer information sequences, and better decoding algorithms.
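As a rough illustration of how such a study is typically organized, the sketch below sets up a Monte-Carlo bit-error-rate sweep over an AWGN channel with BPSK modulation. The turbo encoder and decoder are deliberately replaced by a hard-decision placeholder, since the point is only the simulation harness (a sweep over Eb/N0, with iteration count, interleaver length, and algorithm as further loop parameters in a full experiment). All names and parameter values are ours, not the authors'.

```python
import numpy as np

def awgn_ber_sweep(ebn0_db_list, n_bits=100_000, rate=1/3, seed=0):
    """Monte-Carlo BER over an AWGN channel for BPSK-modulated bits.

    The turbo encoder/decoder are omitted on purpose; a real experiment would
    encode `bits`, decode `received` with a turbo decoder (Log-MAP or
    Max-Log-MAP, chosen interleaver length and iteration count), and loop
    over those parameters as well.
    """
    rng = np.random.default_rng(seed)
    results = {}
    for ebn0_db in ebn0_db_list:
        bits = rng.integers(0, 2, n_bits)
        symbols = 1 - 2 * bits                      # BPSK mapping: 0 -> +1, 1 -> -1
        ebn0 = 10 ** (ebn0_db / 10)
        sigma = np.sqrt(1 / (2 * rate * ebn0))      # noise std for code rate `rate`
        received = symbols + sigma * rng.standard_normal(n_bits)
        decoded = (received < 0).astype(int)        # placeholder: hard decision only
        results[ebn0_db] = np.mean(decoded != bits)
    return results

print(awgn_ber_sweep([0, 2, 4]))
```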


2013 ◽  
Vol 10 (7) ◽  
pp. 10937-10995 ◽  
Author(s):  
A. M. Foley ◽  
D. Dalmonech ◽  
A. D. Friend ◽  
F. Aires ◽  
A. Archibald ◽  
...  

Abstract. Earth system models are increasing in complexity and incorporating more processes than their predecessors, making them important tools for studying the global carbon cycle. However, their coupled behaviour has only recently been examined in any detail, and has yielded a very wide range of outcomes, with coupled climate-carbon cycle models that represent land-use change simulating total land carbon stores by 2100 that vary by as much as 600 Pg C given the same emissions scenario. This large uncertainty is associated with differences in how key processes are simulated in different models, and illustrates the necessity of determining which models are most realistic using rigorous model evaluation methodologies. Here we assess the state-of-the-art with respect to evaluation of Earth system models, with a particular emphasis on the simulation of the carbon cycle and associated biospheric processes. We examine some of the new advances and remaining uncertainties relating to (i) modern and palaeo data and (ii) metrics for evaluation, and discuss a range of strategies, such as the inclusion of pre-calibration, combined process- and system-level evaluation, and the use of emergent constraints, that can contribute towards the development of more robust evaluation schemes. An increasingly data-rich environment offers more opportunities for model evaluation, but it is also a challenge, as more knowledge about data uncertainties is required in order to determine robust evaluation methodologies that move the field of ESM evaluation from a "beauty contest" toward the development of useful constraints on model behaviour.


2013 ◽  
Vol 10 (12) ◽  
pp. 8305-8328 ◽  
Author(s):  
A. M. Foley ◽  
D. Dalmonech ◽  
A. D. Friend ◽  
F. Aires ◽  
A. T. Archibald ◽  
...  

Abstract. Earth system models (ESMs) are increasing in complexity by incorporating more processes than their predecessors, making them potentially important tools for studying the evolution of climate and associated biogeochemical cycles. However, their coupled behaviour has only recently been examined in any detail, and has yielded a very wide range of outcomes. For example, coupled climate–carbon cycle models that represent land-use change simulate total land carbon stores at 2100 that vary by as much as 600 Pg C, given the same emissions scenario. This large uncertainty is associated with differences in how key processes are simulated in different models, and illustrates the necessity of determining which models are most realistic using rigorous methods of model evaluation. Here we assess the state-of-the-art in evaluation of ESMs, with a particular emphasis on the simulation of the carbon cycle and associated biospheric processes. We examine some of the new advances and remaining uncertainties relating to (i) modern and palaeodata and (ii) metrics for evaluation. We note that the practice of averaging results from many models is unreliable and no substitute for proper evaluation of individual models. We discuss a range of strategies, such as the inclusion of pre-calibration, combined process- and system-level evaluation, and the use of emergent constraints, that can contribute to the development of more robust evaluation schemes. An increasingly data-rich environment offers more opportunities for model evaluation, but also presents a challenge. Improved knowledge of data uncertainties is still necessary to move the field of ESM evaluation away from a "beauty contest" towards the development of useful constraints on model outcomes.
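As a toy illustration of the emergent-constraint strategy mentioned in the abstract above, the sketch below fits an across-model relationship between a present-day observable and a projected quantity, then applies an observed value to narrow the projection relative to the raw ensemble spread. All numbers are invented for illustration and do not come from the paper; the propagation of uncertainty is deliberately simplified (regression scatter is ignored).

```python
import numpy as np

# Hypothetical ensemble: each entry is one ESM's (observable_x, projected_y) pair,
# e.g. x = a present-day carbon-cycle sensitivity, y = projected land carbon
# change by 2100 in Pg C. Values are invented for illustration only.
ensemble_x = np.array([3.2, 4.1, 5.0, 5.8, 6.5, 7.3])
ensemble_y = np.array([-20., -35., -48., -60., -71., -85.])

# Step 1: fit the across-model relationship between observable and projection.
slope, intercept = np.polyfit(ensemble_x, ensemble_y, 1)

# Step 2: apply the real-world observation (with uncertainty) to that relationship
# to obtain a constrained projection narrower than the raw ensemble spread.
obs_x, obs_sigma = 5.1, 0.4                      # hypothetical observed value
constrained_y = slope * obs_x + intercept
constrained_spread = abs(slope) * obs_sigma      # simplification: ignores fit scatter

print(f"constrained projection: {constrained_y:.1f} +/- {constrained_spread:.1f} Pg C")
print(f"raw ensemble range: {ensemble_y.min():.0f} to {ensemble_y.max():.0f} Pg C")
```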


2020 ◽  
Vol 12 ◽  
Author(s):  
Francisco Basílio ◽  
Ricardo Jorge Dinis-Oliveira

Background: Pharmacobezoars are specific types of bezoars formed when medicines, such as tablets, suspensions, and/or drug delivery systems, aggregate; they may cause death by occluding the airways with tenacious material or by eluting drugs, resulting in toxic or lethal blood concentrations. Objective: This work aims to fully review the state-of-the-art regarding the pathophysiology, diagnosis, treatment, and other relevant clinical and forensic features of pharmacobezoars. Results: Patients of a wide range of ages and of both sexes present with signs and symptoms of intoxication or, more commonly, gastrointestinal obstruction. The exact mechanisms of pharmacobezoar formation are unknown but are likely multifactorial. Diagnosis and treatment depend on the gastrointestinal segment affected and should be personalized to the medication and the underlying factor. A good and complete history, physical examination, imaging tests, upper endoscopy and, for the lower tract, surgery through laparotomy are useful for diagnosis and treatment. Conclusion: Pharmacobezoars are rarely seen in clinical and forensic practice. They are related to controlled-release or immediate-release formulations, liquid or non-digestible substances, normal or altered gastrointestinal motility/anatomy, and overdoses or therapeutic doses, and should be suspected in the presence of risk factors or in patients taking drugs that may form pharmacobezoars.


This volume vividly demonstrates the importance and increasing breadth of quantitative methods in the earth sciences. With contributions from an international cast of leading practitioners, chapters cover a wide range of state-of-the-art methods and applications, including computer modeling and mapping techniques. Many chapters also contain reviews and extensive bibliographies which serve to make this an invaluable introduction to the entire field. In addition to its detailed presentations, the book includes chapters on the history of geomathematics and on R.G.V. Eigen, the "father" of mathematical geology. Written to commemorate the 25th anniversary of the International Association for Mathematical Geology, the book will be sought after by both practitioners and researchers in all branches of geology.


Polymers ◽  
2021 ◽  
Vol 13 (10) ◽  
pp. 1566
Author(s):  
Oliver J. Pemble ◽  
Maria Bardosova ◽  
Ian M. Povey ◽  
Martyn E. Pemble

Chitosan-based films have a diverse range of potential applications but are currently limited in terms of commercial use due to a lack of methods specifically designed to produce thin films in high volumes. To address this limitation directly, hydrogels prepared from chitosan, chitosan-tetraethoxy silane, also known as tetraethyl orthosilicate (TEOS), and chitosan-glutaraldehyde have been used to prepare continuous thin films using a slot-die technique, which is described in detail. As a preliminary analysis, the mechanical strength of the resulting films was assessed for comparison with films made by other methods. It was found that, as expected, the hybrid films made with TEOS and glutaraldehyde both show a higher yield strength than films made with chitosan alone. In all cases, the mechanical properties of the films were found to compare very favorably with similar measurements reported in the literature. To assess the possible influence of the direction in which the hydrogel passes through the slot-die on the mechanical properties of the films, testing was performed on plain chitosan samples cut parallel and perpendicular to the direction of travel. No evidence was found of any mechanical anisotropy induced by the slot-die process. The examples presented here serve to illustrate how the slot-die approach may be used to create high-volume, high-area chitosan-based films cheaply and rapidly. It is suggested that an approach of the type described here may facilitate the use of chitosan-based films for a wide range of important applications.


2021 ◽  
Vol 15 (5) ◽  
pp. 1-32
Author(s):  
Quang-huy Duong ◽  
Heri Ramampiaro ◽  
Kjetil Nørvåg ◽  
Thu-lan Dam

Dense subregion (subgraph & subtensor) detection is a well-studied area, with a wide range of applications, and numerous efficient approaches and algorithms have been proposed. Approximation approaches are commonly used for detecting dense subregions due to the complexity of the exact methods. Existing algorithms are generally efficient for dense subtensor and subgraph detection, and can perform well in many applications. However, most of the existing works utilize the state-or-the-art greedy 2-approximation algorithm to capably provide solutions with a loose theoretical density guarantee. The main drawback of most of these algorithms is that they can estimate only one subtensor, or subgraph, at a time, with a low guarantee on its density. While some methods can, on the other hand, estimate multiple subtensors, they can give a guarantee on the density with respect to the input tensor for the first estimated subsensor only. We address these drawbacks by providing both theoretical and practical solution for estimating multiple dense subtensors in tensor data and giving a higher lower bound of the density. In particular, we guarantee and prove a higher bound of the lower-bound density of the estimated subgraph and subtensors. We also propose a novel approach to show that there are multiple dense subtensors with a guarantee on its density that is greater than the lower bound used in the state-of-the-art algorithms. We evaluate our approach with extensive experiments on several real-world datasets, which demonstrates its efficiency and feasibility.


2021 ◽  
Vol 50 (1) ◽  
pp. 33-40
Author(s):  
Chenhao Ma ◽  
Yixiang Fang ◽  
Reynold Cheng ◽  
Laks V.S. Lakshmanan ◽  
Wenjie Zhang ◽  
...  

Given a directed graph G, the directed densest subgraph (DDS) problem refers to finding a subgraph of G whose density is the highest among all subgraphs of G. The DDS problem is fundamental to a wide range of applications, such as fraud detection, community mining, and graph compression. However, existing DDS solutions suffer from efficiency and scalability problems: on a three-thousand-edge graph, it takes three days for one of the best exact algorithms to complete. In this paper, we develop an efficient and scalable DDS solution. We introduce the notion of the [x, y]-core, which is a dense subgraph of G, and show that the densest subgraph can be accurately located through the [x, y]-core with theoretical guarantees. Based on the [x, y]-core, we develop both exact and approximation algorithms. We have performed an extensive evaluation of our approaches on eight large real datasets. The results show that our proposed solutions are up to six orders of magnitude faster than the state-of-the-art.
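For context, the density notion commonly used for directed densest subgraph is defined over an ordered pair of vertex sets (S, T) as |E(S, T)| / sqrt(|S|·|T|), where E(S, T) are the edges from S to T. The sketch below (our illustration, not the paper's [x, y]-core algorithm) computes this density and performs a brute-force search that is only feasible on tiny graphs, which is precisely why scalable exact and approximation algorithms are needed.

```python
from itertools import chain, combinations
from math import sqrt

def directed_density(edges, S, T):
    """Density of the ordered pair (S, T): |{(u, v) in E : u in S, v in T}| / sqrt(|S|*|T|)."""
    if not S or not T:
        return 0.0
    cross = sum(1 for u, v in edges if u in S and v in T)
    return cross / sqrt(len(S) * len(T))

def brute_force_dds(edges):
    """Exhaustive DDS search over all (S, T) pairs; exponential in the number of
    vertices, so it only serves to illustrate the objective being maximized."""
    nodes = sorted({v for e in edges for v in e})
    subsets = list(chain.from_iterable(
        combinations(nodes, k) for k in range(1, len(nodes) + 1)))
    return max(((directed_density(edges, set(S), set(T)), S, T)
                for S in subsets for T in subsets), key=lambda x: x[0])

# Tiny example: the pair S = {0, 1}, T = {2} attains the maximum density sqrt(2).
print(brute_force_dds([(0, 2), (1, 2), (2, 3)]))
```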


2021 ◽  
Author(s):  
Danila Piatov ◽  
Sven Helmer ◽  
Anton Dignös ◽  
Fabio Persia

Abstract. We develop a family of efficient plane-sweeping interval join algorithms for evaluating a wide range of interval predicates such as Allen's relationships and parameterized relationships. Our technique is based on a framework, components of which can be flexibly combined in different manners to support the required interval relation. In temporal databases, our algorithms can exploit a well-known and flexible access method, the Timeline Index, thus expanding the set of operations it supports even further. Additionally, employing a compact data structure, the gapless hash map, we utilize the CPU cache efficiently. In an experimental evaluation, we show that our approach is several times faster and scales better than state-of-the-art techniques, while being much better suited for real-time event processing.
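As a point of reference, the classic plane-sweep structure underlying such joins can be sketched as follows: sort interval endpoints, sweep through them, and join each starting interval with the currently active intervals of the other relation. This is our simplified illustration of a plain overlap join only; the paper's contributions (Timeline Index integration, the gapless hash map, and support for Allen's and parameterized relations) are not reproduced here.

```python
def sweep_overlap_join(R, S):
    """Report all pairs (i, j) where R[i] and S[j] overlap.

    R and S are lists of half-open intervals (start, end). This is a textbook
    endpoint plane sweep, not the paper's Timeline-Index / gapless-hash-map
    implementation, and it handles only overlap, not arbitrary predicates.
    """
    events = []  # (time, kind, relation, index); kind 0 = end, 1 = start
    for i, (s, e) in enumerate(R):
        events += [(s, 1, 'R', i), (e, 0, 'R', i)]
    for j, (s, e) in enumerate(S):
        events += [(s, 1, 'S', j), (e, 0, 'S', j)]
    events.sort()  # at equal timestamps, ends are processed before starts

    active = {'R': set(), 'S': set()}
    result = []
    for _, kind, rel, idx in events:
        if kind == 0:                        # interval ends: drop it from the sweep front
            active[rel].discard(idx)
        else:                                # interval starts: it overlaps every active one
            other = 'S' if rel == 'R' else 'R'
            for j in active[other]:
                result.append((idx, j) if rel == 'R' else (j, idx))
            active[rel].add(idx)
    return result

print(sweep_overlap_join([(1, 5), (6, 9)], [(4, 7)]))  # -> [(0, 0), (1, 0)]
```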


2020 ◽  
Vol 499 (4) ◽  
pp. 5732-5748 ◽  
Author(s):  
Rahul Kannan ◽  
Federico Marinacci ◽  
Mark Vogelsberger ◽  
Laura V Sales ◽  
Paul Torrey ◽  
...  

ABSTRACT We present a novel framework to self-consistently model the effects of radiation fields, dust physics, and molecular chemistry (H2) in the interstellar medium (ISM) of galaxies. The model combines a state-of-the-art radiation hydrodynamics module with an H and He non-equilibrium thermochemistry module that accounts for H2, coupled to an empirical dust formation and destruction model, all integrated into the new stellar feedback framework SMUGGLE. We test this model on high-resolution isolated Milky Way (MW) simulations. We show that the effect of radiation feedback on galactic star formation rates is quite modest in low gas surface density galaxies like the MW. The multiphase structure of the ISM, however, is highly dependent on the strength of the interstellar radiation field. We are also able to predict the distribution of H2, which allows us to match the molecular Kennicutt–Schmidt (KS) relation without calibrating for it. We show that the dust distribution is a complex function of density, temperature, and ionization state of the gas. Our model is also able to match the observed dust temperature distribution in the ISM. Our state-of-the-art model is well suited for performing next-generation cosmological galaxy formation simulations, which will be able to predict a wide range of resolved (∼10 pc) properties of galaxies.

