Noninvasive Stability Measurement of Linear Voltage Regulator in the Closed-loop Condition

Author(s):  
Syukri Zamri
2003
Vol 15 (4)
pp. 831–864
Author(s):  
Bernd Porr ◽  
Florentin Wörgötter

In this article, we present an isotropic unsupervised algorithm for temporal sequence learning. No special reward signal is used, so all inputs are completely isotropic. All input signals are bandpass filtered before converging onto a linear output neuron. Each synaptic weight changes according to the correlation of its bandpass-filtered input with the derivative of the output. We investigate the algorithm in an open- and a closed-loop condition, the latter being defined by embedding the learning system in a behavioral feedback loop. In the open-loop condition, we find that the linear structure of the algorithm allows the shape of the weight change to be calculated analytically; it is strictly heterosynaptic and follows the shape of the weight-change curves found in spike-timing-dependent plasticity. Furthermore, we show that synaptic weights stabilize automatically, without additional normalizing measures, once no temporal differences remain between the inputs. In the second part of this study, the algorithm is placed in an environment that leads to a closed sensor-motor loop. To this end, a robot is programmed with a prewired retraction reflex in response to collisions. Through isotropic sequence order (ISO) learning, the robot achieves collision avoidance by learning the correlation between its early range-finder signals and the later-occurring collision signal. Synaptic weights stabilize at the end of learning, as theoretically predicted. Finally, we discuss the relation of ISO learning to other drive-reinforcement models and to the commonly used temporal difference learning algorithm. This study is followed up by a mathematical analysis of the closed-loop situation in the companion article in this issue, “ISO Learning Approximates a Solution to the Inverse-Controller Problem in an Unsupervised Behavioral Paradigm” (pp. 865–884).
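As a rough illustration of the rule described above (each weight changes with the correlation of its bandpass-filtered input and the derivative of the linear output), the following Python sketch implements a minimal discrete-time version of ISO-style learning. The bandpass filter, the learning rate, and the toy two-input example are illustrative assumptions, not the paper's resonator filters, parameters, or robot setup.

```python
import numpy as np

def bandpass(x, alpha_fast=0.3, alpha_slow=0.05):
    """Crude bandpass: difference of two leaky integrators (illustration only;
    the original work uses damped resonator filters)."""
    fast = slow = 0.0
    out = np.zeros_like(x)
    for t, xt in enumerate(x):
        fast += alpha_fast * (xt - fast)
        slow += alpha_slow * (xt - slow)
        out[t] = fast - slow
    return out

def iso_learning(inputs, mu=0.05, w_init=None):
    """inputs: array of shape (n_inputs, T). Returns the weight trace (n_inputs, T)."""
    n, T = inputs.shape
    u = np.array([bandpass(x) for x in inputs])   # bandpass-filtered inputs u_i
    w = np.ones(n) if w_init is None else np.asarray(w_init, dtype=float).copy()
    trace = np.zeros((n, T))
    v_prev = 0.0
    for t in range(T):
        v = float(w @ u[:, t])                    # linear output neuron
        dv = v - v_prev                           # derivative of the output
        w = w + mu * u[:, t] * dv                 # dw_i ~ u_i * dv/dt
        v_prev = v
        trace[:, t] = w
    return trace

# Toy example: an "early" signal precedes a "later" (reflex-like) signal by 20 steps.
# The paper keeps the reflex weight fixed; here all weights update, for brevity.
T = 500
x_early, x_late = np.zeros(T), np.zeros(T)
x_early[100] = 1.0            # e.g. range-finder signal
x_late[120] = 1.0             # e.g. collision signal driving the reflex
trace = iso_learning(np.vstack([x_late, x_early]), w_init=[1.0, 0.0])
print("final weights:", trace[:, -1])
```

Running the toy example shows the early input's weight drifting away from zero once the two filtered signals overlap in time, and settling when no temporal difference remains, which is the stabilization behavior described in the abstract.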


2021
Author(s):  
Mustafa Shakir ◽  
Sohaib Aslam ◽  
Muhammad Adnan ◽  
Kashif A. Janjua

2016
Vol 2016 (HiTEC)
pp. 000106–000111
Author(s):  
R.C. Murphree ◽  
S. Ahmed ◽  
M. Barlow ◽  
A. Rahman ◽  
H.A. Mantooth ◽  
...  

This paper presents the first linear regulator fabricated in a 1.2 μm CMOS silicon carbide (SiC) process. The regulator consists of a SiC error amplifier and a pass transistor with W/L = 70,000 μm / 1.2 μm. The feedback loop is internal, and the frequency compensation network combines internal and external components. Because of potential process variation in this emerging technology, the voltage reference at the negative input terminal of the error amplifier has been made external. With an input voltage of 20 V to 30 V, the regulator provides a 15 V output and a continuous load current of 100 mA at temperatures ranging from 25 °C to over 400 °C. At 400 °C, testing of the fabricated circuit shows a line regulation of less than 4 mV/V and, under the same test conditions, a load regulation of less than 420 mV/A.
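For readers unfamiliar with the figures of merit quoted above, the short Python sketch below shows how line regulation (mV/V) and load regulation (mV/A) are conventionally computed from bench measurements. The helper names and the numerical readings are hypothetical placeholders, not data from the paper.

```python
def line_regulation_mv_per_v(vout_at_vin_min, vout_at_vin_max, vin_min, vin_max):
    """Line regulation: change in Vout per volt of change in Vin, in mV/V."""
    return abs(vout_at_vin_max - vout_at_vin_min) / (vin_max - vin_min) * 1e3

def load_regulation_mv_per_a(vout_light, vout_full, i_light, i_full):
    """Load regulation: change in Vout per amp of change in load current, in mV/A."""
    return abs(vout_full - vout_light) / (i_full - i_light) * 1e3

# Hypothetical readings for a 15 V regulator swept over a 20-30 V input
# and from a 1 mA to 100 mA load.
print(line_regulation_mv_per_v(15.002, 14.972, 20.0, 30.0))    # -> 3.0 mV/V
print(load_regulation_mv_per_a(15.010, 14.972, 0.001, 0.100))  # -> ~384 mV/A
```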


2005
Vol 17 (2)
pp. 245–319
Author(s):  
Florentin Wörgötter ◽  
Bernd Porr

In this review, we compare methods for temporal sequence learning (TSL) across the disciplines of machine control, classical conditioning, neuronal models of TSL, and spike-timing-dependent plasticity (STDP). The review introduces the most influential models and focuses on two questions: To what degree are reward-based (e.g., TD learning) and correlation-based (Hebbian) learning related? And how do the different models correspond to the biological mechanisms of synaptic plasticity that may underlie them? We first compare the models in an open-loop condition, where behavioral feedback does not alter the learning; here we observe that reward-based and correlation-based learning are indeed very similar. Machine control is then used to introduce the problem of closed-loop control (e.g., actor-critic architectures). Here we discuss evaluative (reward) versus nonevaluative (correlation) feedback from the environment, showing that the two learning approaches are fundamentally different in the closed-loop condition. In addressing the second question, we compare neuronal versions of the different learning architectures to the anatomy of the brain structures involved (basal ganglia, thalamus, and cortex) and to the molecular biophysics of glutamatergic and dopaminergic synapses. Finally, we discuss the different algorithms used to model STDP and compare them to reward-based learning rules. Certain similarities are found in spite of the strongly different timescales. Here we focus on the biophysics of the different calcium-release mechanisms known to be involved in STDP.
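To make the first question concrete, the sketch below juxtaposes the textbook forms of the two rule families the review compares: a reward-based TD(0) value update and a correlation-based (Hebbian) weight update. Both functions are generic illustrations under assumed parameters, not the specific models analysed in the review.

```python
import numpy as np

def td0_update(V, s, s_next, r, alpha=0.1, gamma=0.9):
    """Reward-based: TD(0) value update V(s) += alpha * (r + gamma*V(s') - V(s)).
    Returns the TD error delta, which plays the role of the reward-prediction signal."""
    delta = r + gamma * V[s_next] - V[s]
    V[s] += alpha * delta
    return delta

def hebbian_update(w, pre, post, mu=0.01):
    """Correlation-based: plain Hebbian weight change dw = mu * pre * post."""
    return w + mu * pre * post

# Example: one TD step on a 5-state value table and one Hebbian step on a single weight.
V = np.zeros(5)
print(td0_update(V, s=0, s_next=1, r=1.0))     # TD error delta = 1.0
print(hebbian_update(0.0, pre=0.8, post=0.5))  # w -> 0.004
```

In the open-loop setting both rules reduce to correlating two signals (a prediction error with a state trace, or a presynaptic with a postsynaptic trace), which is why the review finds them so similar there; the difference only becomes fundamental once environmental feedback closes the loop.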

