A Study on the Transfer Function Based Analog Fault Model for Linear and Time-Invariant Continuous-Time Analog Circuits

Author(s): Hao-Chiao Hong, Long-Yi Lin
Sensors, 2021, Vol. 21 (4), pp. 1065

Author(s): Moshe Bensimon, Shlomo Greenberg, Moshe Haiut

This work presents a new approach to sound preprocessing and classification based on a spiking neural network (SNN). The approach is biologically inspired: it emulates the characteristics of biological neurons using spiking neurons and a Spike-Timing-Dependent Plasticity (STDP)-based learning rule. We propose a biologically plausible sound classification framework that uses an SNN to detect the frequencies embedded within an acoustic signal, and we demonstrate an efficient hardware implementation of the network based on the low-power Spike Continuous Time Neuron (SCTN). The framework interfaces the acoustic sensor directly with the SCTN-based network through Pulse Density Modulation (PDM), avoiding costly digital-to-analog conversions. This paper also presents a new connectivity approach for Spiking Neuron (SN)-based neural networks: we suggest treating the SCTN as a basic building block in the design of programmable analog electronic circuits. Usually, a neuron serves as a repeated modular element in a neural network, and the connectivity between neurons in different layers is well defined, yielding a modular network structure composed of several layers with full or partial connectivity. The proposed approach instead controls the behavior of the spiking neurons and applies smart connectivity, enabling the design of simple analog circuits based on SNNs. Unlike existing NN-based solutions, in which the preprocessing phase is carried out with analog circuits and analog-to-digital conversion, we integrate the preprocessing phase into the network itself. This allows the basic SCTN to be treated as an analog module, enabling the design of simple SNN-based analog circuits with unique interconnections between the neurons.
The efficiency of the proposed approach is demonstrated by implementing SCTN-based resonators for sound feature extraction and classification. The proposed SCTN-based sound classification approach demonstrates a classification accuracy of 98.73% using the Real-World Computing Partnership (RWCP) database.
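The resonator-based feature extraction described above can be illustrated with a minimal sketch. This is not the authors' SCTN design: a first-order sigma-delta modulator stands in for the PDM microphone, and a bank of two-pole IIR resonators stands in for the SCTN-based resonators, with per-band output energies as features. All function names and parameter values here are illustrative assumptions.

```python
import math

def pdm_encode(samples):
    # First-order sigma-delta modulator: real-valued samples in [-1, 1]
    # become a +/-1 bit stream whose local average tracks the input.
    bits, v, b = [], 0.0, 0.0
    for x in samples:
        v += x - b                      # integrate the quantization error
        b = 1.0 if v >= 0.0 else -1.0   # one-bit quantizer
        bits.append(b)
    return bits

def band_energy(bits, f0, fs, r=0.99):
    # Two-pole IIR resonator with poles at radius r, angle 2*pi*f0/fs;
    # the mean output energy serves as a per-band feature.
    w = 2.0 * math.pi * f0 / fs
    a1, a2 = -2.0 * r * math.cos(w), r * r
    y1 = y2 = 0.0
    acc = 0.0
    for x in bits:
        y = x - a1 * y1 - a2 * y2
        y2, y1 = y1, y
        acc += y * y
    return acc / len(bits)

fs = 16000
tone = [0.5 * math.sin(2.0 * math.pi * 440.0 * n / fs) for n in range(fs // 4)]
bits = pdm_encode(tone)
feats = {f0: band_energy(bits, f0, fs) for f0 in (220, 440, 880)}
# The resonator tuned to the 440 Hz tone responds most strongly.
```

Because the sigma-delta modulator shapes its quantization noise toward high frequencies, the in-band tone survives the one-bit encoding and the matching resonator dominates the feature vector.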


2018, Vol. 2018, pp. 1-17
Author(s): M. De la Sen

This paper is concerned with the asymptotic hyperstability of a continuous-time linear system under a class of continuous-time nonlinear, and possibly time-varying, feedback controllers with two main characteristics: (a) the controller satisfies a discrete-type Popov inequality at the sampling instants, and (b) the control law within the intersample period is generated from its value at the sampling instants, modulated by two design weighting auxiliary functions. The closed-loop continuous-time system is proved to be asymptotically hyperstable, under explicit conditions on these weighting functions, provided that the discrete feed-forward transfer function is strictly positive real.
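The strict positive realness (SPR) condition on the discrete feed-forward transfer function can be checked numerically. The sketch below is not from the paper; it samples H(z) = B(z)/A(z) on the unit circle and verifies Re{H(e^{jw})} > 0. A complete SPR test would also require the poles of A(z) to lie inside the unit circle, which is omitted here.

```python
import cmath
import math

def is_strictly_positive_real(b, a, n=2048):
    # Sample H(z) = B(z)/A(z), with B and A given in powers of z^-1,
    # on the upper unit circle (w in [0, pi]) and check Re{H} > 0.
    # NOTE: a full SPR test also requires A(z) to have all poles
    # strictly inside the unit circle; that check is omitted here.
    for k in range(n + 1):
        z = cmath.exp(1j * math.pi * k / n)
        B = sum(bi * z ** (-i) for i, bi in enumerate(b))
        A = sum(ai * z ** (-i) for i, ai in enumerate(a))
        if (B / A).real <= 0.0:
            return False
    return True

# H(z) = (z - 0.5) / (z - 0.2): Re{H(e^{jw})} > 0 for all w -> SPR
spr = is_strictly_positive_real([1.0, -0.5], [1.0, -0.2])       # True
# H(z) = (z - 2) / (z - 0.2): Re{H(1)} < 0 -> not SPR
not_spr = is_strictly_positive_real([1.0, -2.0], [1.0, -0.2])   # False
```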


1973, Vol. 63 (3), pp. 937-958
Author(s): Anton Ziolkowski

Approximately half the noise observed by long-period seismometers at LASA is nonpropagating; that is, it is incoherent over distances greater than a few kilometers. However, because it is often strongly coherent with microbarograph data recorded at the same site, a large proportion of it can be predicted by convolving the microbarogram with some transfer function. The reduction in noise level using this technique can be as high as 5 dB on the vertical seismometer and higher still on the horizontals. If the source of this noise on the vertical seismogram were predominantly buoyancy, the transfer function would be time-invariant. It is not. Buoyancy on the LASA long-period instruments is quite negligible. The noise is caused by atmospheric deformation of the ground and, since so much of it can be predicted from the output of a single nearby microbarograph, it must be of very local origin. The loading process may be adequately described by the static deformation of a flat-earth model; however, for the expectation of the noise to be finite, it is shown that the wave-number spectrum of the pressure distribution must be band-limited. An expression for the expected noise power is derived which agrees very well with observations and predicts the correct attenuation with depth. It is apparent from the form of this expression why it is impossible to obtain a stable transfer function to predict the noise without an array of microbarographs and excessive data processing. The most effective way to suppress this kind of noise is to bury the seismometer: at 150 m the reduction in noise level would be about 10 dB.
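The prediction technique described above, estimating a transfer function from microbarograph to seismometer and subtracting the predicted noise, can be sketched as a least-squares FIR fit on synthetic data. The signals, filter length, and noise level below are illustrative assumptions, not values from the paper.

```python
import math
import random

def fir_predict(h, p, n):
    # Prediction (h * p)[n] from the len(h) most recent pressure samples.
    return sum(h[k] * p[n - k] for k in range(len(h)))

def fit_fir(p, s, L):
    # Least-squares FIR fit: choose h minimizing sum_n (s[n] - (h*p)[n])^2
    # by solving the normal equations R h = r with Gaussian elimination.
    N = len(s)
    R = [[sum(p[n - i] * p[n - j] for n in range(L, N)) for j in range(L)]
         for i in range(L)]
    r = [sum(p[n - i] * s[n] for n in range(L, N)) for i in range(L)]
    for c in range(L):                  # forward elimination (no pivoting;
        for row in range(c + 1, L):     # adequate for this well-conditioned toy)
            f = R[row][c] / R[c][c]
            for cc in range(c, L):
                R[row][cc] -= f * R[c][cc]
            r[row] -= f * r[c]
    h = [0.0] * L
    for i in reversed(range(L)):        # back substitution
        h[i] = (r[i] - sum(R[i][j] * h[j] for j in range(i + 1, L))) / R[i][i]
    return h

random.seed(1)
N, L = 4000, 3
p = [random.gauss(0.0, 1.0) for _ in range(N)]        # synthetic "microbarogram"
true_h = [0.8, -0.3, 0.1]                             # unknown ground response
s = [fir_predict(true_h, p, n) + random.gauss(0.0, 0.2) if n >= L else 0.0
     for n in range(N)]                               # "seismogram" = h*p + local noise
h_est = fit_fir(p, s, L)
resid = [s[n] - fir_predict(h_est, p, n) for n in range(L, N)]
reduction_db = 10.0 * math.log10(sum(s[n] ** 2 for n in range(L, N))
                                 / sum(x * x for x in resid))
```

Here the fitted filter recovers the assumed ground response, and the residual power after subtraction gives a noise reduction of several dB, mirroring the prediction-and-subtraction scheme described in the abstract.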


2021, pp. 562-598
Author(s): Stevan Berber

Due to the importance of the concept of independent-variable modification, the definition of a linear time-invariant system, and their implications for discrete-time signal processing, Chapter 11 presents basic deterministic continuous-time signals and systems. These signals, expressed in the form of functions and functionals such as the Dirac delta function, are used throughout the book for deterministic and stochastic signal analysis in both the continuous-time and discrete-time domains. The definition of the autocorrelation function and an explanation of the convolution procedure in linear time-invariant systems are presented in detail, due to their importance in the analysis and synthesis of communication systems. A linear modification of the independent continuous variable is presented for specific cases, such as time shift, time reversal, and time and amplitude scaling.
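The convolution and autocorrelation operations highlighted in the chapter can be illustrated for discrete-time signals with a short sketch; the signals chosen here are arbitrary:

```python
def convolve(x, h):
    # LTI system output: y[n] = sum_k x[k] * h[n - k]
    y = [0.0] * (len(x) + len(h) - 1)
    for k, xk in enumerate(x):
        for m, hm in enumerate(h):
            y[k + m] += xk * hm
    return y

def autocorrelation(x):
    # Deterministic autocorrelation: R[l] = sum_n x[n] * x[n + l]
    return [sum(x[n] * x[n + l] for n in range(len(x) - l))
            for l in range(len(x))]

x = [1.0, 2.0, 3.0]
h = [1.0, -1.0]                   # first-difference system
y = convolve(x, h)                # [1.0, 1.0, 1.0, -3.0]
R = autocorrelation(x)            # [14.0, 8.0, 3.0]
```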


Author(s): Ronald K. Pearson

It was emphasized in Chapter 1 that low-order, linear time-invariant models provide the foundation for much intuition about dynamic phenomena in the real world. This chapter provides a brief review of the characteristics and behavior of linear models, beginning with these simple cases and then progressing to more complex examples where this intuition no longer holds: infinite-dimensional and time-varying linear models. In continuous time, infinite-dimensional linear models arise naturally from linear partial differential equations, whereas in discrete time, infinite-dimensional linear models may be used to represent a variety of “slow decay” effects. Time-varying linear models are also extremely flexible: in the continuous-time case, many of the ordinary differential equations defining special functions (e.g., the equations defining Bessel functions) may be viewed as time-varying linear models; in the discrete case, the gamma function arises naturally as the solution of a time-varying difference equation. Sec. 2.1 gives a brief discussion of low-order, time-invariant linear dynamic models, using second-order examples to illustrate both the “typical” and “less typical” behavior that is possible for these models. One of the most powerful results of linear system theory is that any time-invariant linear dynamic system may be represented as either a moving average (i.e., convolution-type) model or an autoregressive one. Sec. 2.2 presents a short review of these ideas, which will serve to establish both notation and a certain amount of useful intuition for the discussion of NARMAX models presented in Chapter 4. Sec. 2.3 then briefly considers the problem of characterizing linear models, introducing four standard input sequences that are typical of those used in linear model characterization. These standard sequences are then used in subsequent chapters to illustrate differences between nonlinear model behavior and linear model behavior. Sec. 2.4 provides a brief introduction to infinite-dimensional linear systems, including both continuous-time and discrete-time examples. Sec. 2.5 provides a similar introduction to the subject of time-varying linear systems, emphasizing the flexibility of this class. Finally, Sec. 2.6 briefly considers the nature of linearity, presenting some results that may be used to define useful classes of nonlinear models.
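The equivalence noted in Sec. 2.2 — that a time-invariant linear system admits both a moving-average (convolution) and an autoregressive representation — can be illustrated with a first-order example; the coefficient and input below are arbitrary:

```python
def ar1_response(a, x):
    # Autoregressive form: y[n] = a * y[n - 1] + x[n]
    y, prev = [], 0.0
    for xn in x:
        prev = a * prev + xn
        y.append(prev)
    return y

def ma_response(h, x):
    # Moving-average (convolution) form: y[n] = sum_k h[k] * x[n - k]
    return [sum(h[k] * x[n - k] for k in range(min(len(h), n + 1)))
            for n in range(len(x))]

a = 0.5
h = [a ** k for k in range(60)]   # truncated impulse response of the AR(1) model
x = [1.0, 0.0, -2.0, 3.0, 0.5]
y_ar = ar1_response(a, x)
y_ma = ma_response(h, x)          # agrees with y_ar up to truncation error
```

The moving-average coefficients are simply the impulse response of the autoregressive model, so for |a| < 1 the truncated convolution reproduces the recursion to arbitrary accuracy.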

