Noise Processes in Discrete Communication Systems

2021 ◽  
pp. 121-171
Author(s):  
Stevan Berber

This chapter focuses on noise processes in discrete communication systems. The problem with discretizing a white Gaussian noise process is that its strict definition implies theoretically infinite power. Thus, it would be impossible to generate discrete noise, because the sampling theorem requires that the sampled signal be physically realizable, that is, the sampled noise must have finite power. To overcome this problem, noise entropy is defined as an additional measure of noise properties, and a truncated Gaussian probability density function is used. Adding entropy and the truncated density to the definition of the noise autocorrelation and power spectral density functions allows mathematical modelling of the discrete noise source for both baseband and bandpass noise generators and regenerators. Noise theory and noise generators are essential for a theoretical explanation of the operation of digital and discrete communication systems and for their design, simulation, emulation, and testing.
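A minimal numerical sketch of the idea (not the book's generator; the function name and the 4σ truncation point are illustrative assumptions): truncating the Gaussian density bounds the amplitude, so the discrete noise has strictly finite power while remaining essentially white.

```python
import numpy as np

def truncated_gaussian_noise(n, sigma=1.0, clip=4.0, seed=0):
    """Zero-mean Gaussian samples truncated to [-clip*sigma, clip*sigma],
    so the discrete noise has bounded amplitude and strictly finite power."""
    rng = np.random.default_rng(seed)
    samples = []
    while len(samples) < n:
        x = rng.normal(0.0, sigma, size=n)
        x = x[np.abs(x) <= clip * sigma]   # rejection keeps the truncated PDF shape
        samples.extend(x.tolist())
    return np.array(samples[:n])

noise = truncated_gaussian_noise(100_000, sigma=1.0, clip=4.0)
power = np.mean(noise**2)                  # finite by construction, close to sigma^2
r1 = np.mean(noise[:-1] * noise[1:])       # lag-1 autocorrelation, near zero (white)
```

With a 4σ truncation the power loss relative to the untruncated Gaussian is negligible, while the amplitude bound makes the sampled noise physically realizable.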

2021 ◽  
Vol 28 (2) ◽  
pp. 163-182
Author(s):  
José L. Simancas-García ◽  
Kemel George-González

Shannon’s sampling theorem is one of the most important results of modern signal theory. It describes the reconstruction of any band-limited signal from a finite number of its samples. On the other hand, although less well known, there is the discrete sampling theorem, proved by Cooley while he was working on the development of an algorithm to speed up the calculations of the discrete Fourier transform. Cooley showed that a sampled signal can be resampled by selecting a smaller number of samples, which reduces computational cost. It is then possible to reconstruct the original sampled signal using a reverse process. In principle, the two theorems are not related. However, in this paper we will show that in the context of Non-Standard Analysis (NSA) and the hyperreal number system, the two theorems are equivalent. The difference between them becomes a matter of scale. With the scale changes that the hyperreal number system allows, the discrete variables and functions become continuous, and Shannon’s sampling theorem emerges from the discrete sampling theorem.
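The two reconstructions can be sketched numerically. The snippet below is a hedged illustration, not the paper's NSA construction: it applies Shannon's interpolation formula to a sampled cosine and then, in the spirit of Cooley's discrete sampling theorem, discards every other sample and reconstructs again at the coarser rate (all signal parameters are assumed for illustration).

```python
import numpy as np

def sinc_reconstruct(samples, T, t):
    """Shannon's formula: f(t) = sum_n f[n] * sinc((t - n*T) / T)."""
    n = np.arange(len(samples))
    return np.sum(samples * np.sinc((t[:, None] - n * T) / T), axis=1)

# A band-limited test signal: 1 Hz cosine, sampled at fs = 10 Hz (T = 0.1 s).
T = 0.1
f = lambda t: np.cos(2 * np.pi * 1.0 * t)
n = np.arange(200)
samples = f(n * T)

# Reconstruct on a finer grid, away from the record edges to limit truncation error.
t_fine = np.linspace(5.0, 15.0, 101)
rec = sinc_reconstruct(samples, T, t_fine)
err = np.max(np.abs(rec - f(t_fine)))

# Cooley-style resampling: keep every other sample (fs = 5 Hz, still above Nyquist)
# and reconstruct the same signal from the reduced sample set.
coarse = samples[::2]
rec2 = sinc_reconstruct(coarse, 2 * T, t_fine)
err2 = np.max(np.abs(rec2 - f(t_fine)))
```

Both reconstruction errors stay small because the coarser rate still satisfies the Nyquist condition; the residual error comes only from truncating the infinite interpolation sum.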


2021 ◽  
Vol 9 (17) ◽  
pp. 26-39
Author(s):  
Hugo Wladimir Iza Benítez ◽  
Diego Javier Reinoso Chisaguano

UFMC (Universal Filtered Multi-Carrier) is a novel multi-carrier transmission technique that aims to replace the OFDM (Orthogonal Frequency Division Multiplexing) modulation technique for fifth generation (5G) wireless communication systems. UFMC, being a generalization of OFDM and FBMC (Filter Bank Multicarrier), combines the advantages of these systems and at the same time avoids their main disadvantages. Using a Matlab simulation, this article presents an analysis of the robustness of UFMC against fading effects of multipath channels without using a CP (cyclic prefix). The behavior of the UFMC system is analyzed in terms of the PSD (Power Spectral Density), BER (Bit Error Rate) and MSE (Mean Square Error). The results show that UFMC reduces the out-of-band side lobes produced in the PSD of the processed signal. It is also shown that the pilot-assisted channel estimation method applied in OFDM systems can be applied in UFMC systems as well.
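As a rough illustration of the side-lobe effect (a simplification, not the article's Matlab setup: real UFMC filters each multicarrier symbol separately with a Dolph-Chebyshev filter and sums the overlapping outputs, whereas this sketch filters the whole stream once; all sizes are assumed values):

```python
import numpy as np
from scipy.signal import lfilter, welch
from scipy.signal.windows import chebwin

rng = np.random.default_rng(1)
N = 256                       # IFFT size
used = np.arange(20, 40)      # one sub-band of 20 subcarriers
fc = used.mean() / N          # sub-band centre, normalized frequency
M = 50                        # number of multicarrier symbols, no CP

# QPSK on the used subcarriers of each symbol, zeros elsewhere.
blocks = []
for _ in range(M):
    X = np.zeros(N, dtype=complex)
    X[used] = (rng.choice([-1.0, 1.0], used.size)
               + 1j * rng.choice([-1.0, 1.0], used.size)) / np.sqrt(2)
    blocks.append(np.fft.ifft(X))
tx_rect = np.concatenate(blocks)      # OFDM-like stream, rectangular windowing

# UFMC-like: Dolph-Chebyshev FIR shifted to the sub-band centre.
L = 43
w = chebwin(L, at=60)                 # 60 dB side-lobe attenuation
h = w * np.exp(2j * np.pi * fc * np.arange(L)) / w.sum()  # unit gain at fc
tx_filt = lfilter(h, 1.0, tx_rect)

def sidelobe_ratio(sig):
    """Peak out-of-band PSD over peak in-band PSD (two-sided Welch estimate)."""
    f, p = welch(sig, fs=1.0, nperseg=1024, return_onesided=False)
    inband = np.abs(f - fc) <= 12 / N
    oob = np.abs(f - fc) >= 40 / N
    return p[oob].max() / p[inband].max()

r_rect = sidelobe_ratio(tx_rect)      # rectangular windowing: high side lobes
r_filt = sidelobe_ratio(tx_filt)      # sub-band filtering pushes them far down
```

The filtered stream shows a much lower out-of-band PSD relative to the in-band level, which is the property the article measures for UFMC versus OFDM.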


2010 ◽  
Vol 2010 ◽  
pp. 1-22 ◽  
Author(s):  
Carlo Cattani

Shannon wavelets are used to define a method for the solution of integro-differential equations. This method is based on (1) the Galerkin method, (2) the Shannon wavelet representation, (3) the decorrelation of the generalized Shannon sampling theorem, and (4) the definition of connection coefficients. The Shannon sampling theorem is considered in a more general approach suitable for analysing functions ranging in multifrequency bands. This generalization coincides with the Shannon wavelet reconstruction of L2(ℝ) functions. Shannon wavelets are C∞-functions, and their derivatives of any order can be analytically defined by some kind of finite hypergeometric series (connection coefficients).
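A quick numerical check of the underlying construction (an illustrative sketch using the standard real Shannon wavelet ψ(t) = 2 sinc(2t) − sinc(t), whose spectrum occupies π < |ω| < 2π; this is not the paper's connection-coefficient machinery):

```python
import numpy as np

def shannon_wavelet(t):
    """Real Shannon (sinc) wavelet: band-pass on pi < |omega| < 2*pi."""
    return 2 * np.sinc(2 * t) - np.sinc(t)   # np.sinc(x) = sin(pi x)/(pi x)

dt = 0.01
t = np.arange(-200.0, 200.0, dt)
psi0 = shannon_wavelet(t)
psi1 = shannon_wavelet(t - 1.0)              # integer translate

# Riemann sums over a long window approximate the L2(R) inner products.
ip00 = np.sum(psi0 * psi0) * dt              # ~1: unit norm
ip01 = np.sum(psi0 * psi1) * dt              # ~0: orthogonal integer translates
```

The near-unit norm and near-zero cross product reflect the orthonormality of Shannon wavelet translates, the property that underlies the wavelet reconstruction of L2(ℝ) functions mentioned in the abstract; the small residuals come from truncating the slowly decaying sinc tails.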


Author(s):  
Robert J Marks II

The literature on the recovery of signals and images is vast (e.g., [23, 110, 112, 257, 391, 439, 791, 795, 933, 934, 937, 945, 956, 1104, 1324, 1494, 1495, 1551]). In this chapter, the specific problem of recovering lost signal intervals from the remaining known portion of the signal is considered. Signal recovery is also a topic of Chapter 11 on POCS. To this point, sampling has been discrete. Bandlimited signals, we will show, can also be recovered from continuous samples. Our definition of continuous sampling is best presented by illustration. A signal, f(t), is shown in Figure 10.1a, along with some possible continuous samples. Regaining f(t) from knowledge of ge(t) = f(t)Π(t/T) in Figure 10.1b is the extrapolation problem, which has applications in a number of fields. In optics, for example, extrapolation in the frequency domain is termed super resolution [2, 40, 367, 444, 500, 523, 641, 720, 864, 1016, 1099, 1117]. Reconstructing f(t) from its tails [i.e., gi(t) = f(t){1 − Π(t/T)}] is the interval interpolation problem. Prediction, shown in Figure 10.1d, is the problem of recovering a signal with knowledge of that signal only for negative time. Lastly, illustrated in Figure 10.1e, is periodic continuous sampling. Here, the signal is known in sections periodically spaced at intervals of T; the duty cycle is α. Reconstruction of f(t) from this data includes a number of important reconstruction problems as special cases. (a) By keeping αT constant, we can approach the extrapolation problem by letting T go to ∞. (b) Redefine the origin in Figure 10.1e to be centered in a zero interval. Under the same assumption as (a), we can similarly approach the interpolation problem. (c) Redefine the origin as in (b). Then the interpolation problem can be solved by discarding data to make it periodically sampled. (d) Keep T constant and let α → 0. The result is reconstruction of f(t) from discrete samples as discussed in Chapter 5.
Indeed, this model has been used to derive the sampling theorem [246]. Figures 10.1b-e all illustrate continuously sampled versions of f(t).
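The data models above can be written down directly. The sketch below builds the continuous-sampling masks for a stand-in signal (the signal, T, and α are assumed values for illustration, not figures from the text):

```python
import numpy as np

def rect(x):
    """Unit rectangle Pi(x): 1 for |x| < 1/2, else 0."""
    return (np.abs(x) < 0.5).astype(float)

T, alpha = 2.0, 0.25                  # window width and duty cycle (assumed values)
t = np.linspace(-10.0, 10.0, 4001)
f = np.cos(2 * np.pi * 0.3 * t)       # a stand-in band-limited signal

g_e = f * rect(t / T)                 # extrapolation: only the middle is known
g_i = f * (1 - rect(t / T))           # interval interpolation: only the tails are known
mask_p = (np.mod(t + alpha * T / 2, T) < alpha * T).astype(float)
g_p = f * mask_p                      # periodic continuous sampling, duty cycle alpha
duty = mask_p.mean()                  # fraction of time the signal is known, ~alpha
```

The extrapolation and interpolation data are complementary (g_e + g_i recovers f everywhere), and shrinking α in the periodic mask recovers discrete sampling as the limiting case (d).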


Author(s):  
Semiha Türkay ◽  
Aslı S. Leblebici

In this paper, the vertical carbody dynamics of a railway vehicle excited by random track inputs are investigated. Multi-objective ℋ∞ controllers for the actual carbody weight, a heavy carbody weight, and a mass confined to a polytopic range have been designed with the aim of reducing the wheel forces and the heave, pitch and roll body accelerations of the vehicle. The carbody mass is then modelled as a free-free Euler-Bernoulli beam, and the low-frequency flexural vibrations of the train body are examined. An omnibus ℋ∞ controller is synthesized to suppress both the rigid and the low-frequency flexible modes of the railway vehicle. The performance of the ℋ∞ controllers is verified by comparing the passive and active suspension responses to right and left rail track disturbances, represented by power spectral density functions validated against stochastic real track data collected from the Qinhuangdao-Shenyang passenger railway line in China. Simulation results show that all controllers exhibit very good performance, effectively reducing the carbody accelerations in the vicinity of the resonant frequencies while keeping the wheel-rail forces within the allowable limit.


2019 ◽  
Vol 2019 ◽  
pp. 1-17 ◽  
Author(s):  
Igor Bisio ◽  
Andrea Sciarrone

The telecommunication infrastructure in emergency scenarios is necessarily composed of heterogeneous radio/mobile portions. Mobile Nodes (MNs) equipped with multiple network interfaces can assure continuous communications when different Radio Access Networks (RANs) that employ different Radio Access Technologies (RATs) are available. In this context, the paper proposes the definition of a Decision Maker (DM), within the protocol stack of the MN, in charge of performing network selections and handover decisions. The DM is designed to optimize one or more performance metrics and is based on Multiattribute Decision Making (MADM) methods. Among the several MADM techniques considered from the literature, the work then focuses on the TOPSIS approach, which allows introducing some improvements aimed at reducing the computational burden needed to select the RAT to be employed. The enhanced method is called Dynamic-TOPSIS (D-TOPSIS). Finally, numerical results, obtained through a large simulation campaign and aimed at comparing the performance and running time of D-TOPSIS, TOPSIS, and algorithms found in the literature, are reported and discussed.
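Classic TOPSIS, on which D-TOPSIS builds, can be sketched compactly. The criteria, weights, and RAT values below are hypothetical, and this is the textbook method rather than the paper's enhanced variant:

```python
import numpy as np

def topsis(decision, weights, benefit):
    """Rank alternatives with classic TOPSIS.
    decision: (n_alternatives, n_criteria); benefit[j] True if larger is better."""
    # Vector-normalize each criterion column, then apply the weights.
    norm = decision / np.linalg.norm(decision, axis=0)
    v = norm * weights
    # Ideal and anti-ideal points depend on each criterion's direction.
    ideal = np.where(benefit, v.max(axis=0), v.min(axis=0))
    anti = np.where(benefit, v.min(axis=0), v.max(axis=0))
    d_pos = np.linalg.norm(v - ideal, axis=1)
    d_neg = np.linalg.norm(v - anti, axis=1)
    return d_neg / (d_pos + d_neg)     # relative closeness to the ideal solution

# Hypothetical RAT selection: criteria = [throughput (Mb/s), delay (ms), energy (mW)].
rats = np.array([
    [50.0, 30.0, 200.0],   # e.g. a cellular RAN
    [20.0, 10.0, 120.0],   # e.g. a Wi-Fi access point
    [ 5.0, 80.0,  60.0],   # e.g. a legacy RAN
])
weights = np.array([0.5, 0.3, 0.2])
benefit = np.array([True, False, False])  # maximize throughput; minimize delay, energy
scores = topsis(rats, weights, benefit)
best = int(np.argmax(scores))             # index of the RAT the DM would select
```

With these (assumed) weights the throughput-heavy first alternative ranks highest; changing the weight vector shifts the selection, which is exactly the knob a DM would tune per performance metric.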


2016 ◽  
Vol 461 (2) ◽  
pp. 1642-1655 ◽  
Author(s):  
D. Emmanoulopoulos ◽  
I. E. Papadakis ◽  
A. Epitropakis ◽  
T. Pecháček ◽  
M. Dovčiak ◽  
...  

1993 ◽  
Vol 03 (06) ◽  
pp. 1619-1627 ◽  
Author(s):  
CHAI WAH WU ◽  
LEON O. CHUA

In this paper, we provide a scheme for synthesizing synchronized circuits and systems. Synchronization of the drive and response system is proved trivially without the need for computing numerically the conditional Lyapunov exponents. We give a definition of the driving and response system having the same functional form, which is more general than the concept of homogeneous driving by Pecora & Carroll [1991]. Finally, we show how synchronization coupled with chaos can be used to implement secure communication systems. This is illustrated with examples of secure communication systems which are inherently error-free in contrast to the signal-masking schemes proposed in Cuomo & Oppenheim [1993a,b] and Kocarev et al. [1992].
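The driving idea can be sketched with the standard Pecora-Carroll decomposition of the Lorenz system (an illustration of drive-response synchronization under x-driving, not the authors' circuit-synthesis scheme; parameters and initial conditions are the usual textbook choices):

```python
import numpy as np

# Lorenz parameters in the chaotic regime.
sigma, r, b = 10.0, 28.0, 8.0 / 3.0
dt, steps = 5e-4, 40_000              # 20 time units of forward Euler

drive = np.array([1.0, 1.0, 1.0])     # transmitter: full Lorenz system (x, y, z)
resp = np.array([5.0, -5.0])          # receiver: (y, z) copies, started far away

for _ in range(steps):
    x, y, z = drive
    yr, zr = resp
    # Transmitter integrates the full system; only x is "transmitted".
    drive = drive + dt * np.array([sigma * (y - x),
                                   x * (r - z) - y,
                                   x * y - b * z])
    # Receiver subsystem is driven by the received x; its conditional Lyapunov
    # exponents are negative, so (yr, zr) converges onto (y, z).
    resp = resp + dt * np.array([x * (r - zr) - yr,
                                 x * yr - b * zr])

sync_err = np.max(np.abs(drive[1:] - resp))   # tiny after the transient
```

The error dynamics here contract monotonically (a Lyapunov argument gives d/dt of the squared error as −e_y² − b·e_z²), which is the kind of trivially provable synchronization, without numerical conditional Lyapunov exponents, that the abstract refers to.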


2002 ◽  
Vol 8 (2-3) ◽  
pp. 97-120 ◽  
Author(s):  
ROBERTO BASILI ◽  
FABIO MASSIMO ZANZOTTO

Robustness has traditionally been stressed as a generally desirable property of any computational model and system. The human NL interpretation device exhibits this property as the ability to deal with odd sentences. However, the difficulty of explaining robustness theoretically within linguistic modelling has suggested the adoption of an empirical notion. In this paper, we propose an empirical definition of robustness based on the notion of performance. Furthermore, a framework for controlling parser robustness in the design phase is presented. The control is achieved via the adoption of two principles: modularisation, typical of software engineering practice, and the availability of domain-adaptable components. The methodology has been adopted for the production of CHAOS, a pool of syntactic modules that has been used in real applications. This pool of modules enables a large validation of the notion of empirical robustness, on the one hand, and of the design methodology, on the other, over different corpora and two different languages (English and Italian).

