A study of expected loss rates in the counting of particles from pulsed sources

The use in nuclear physics of accelerators giving a pulsed output leads to difficulties when electrical methods are used for detecting the particles produced. The counting losses due to the finite resolving time of the counting system are enhanced by the pulsed nature of the source, and may become considerable unless the counting speed is kept quite low. The present paper contains some calculations on the loss rates to be expected. It is desirable for the resolving time of the counting system, i.e. the dead time following each count, to be much shorter than the pulse length of the accelerator, but since this is itself usually only a few microseconds, this condition is not easy to achieve. If the dead time is greater than the duration of the accelerator pulse, the calculations are relatively easy but the losses may be high. Calculations for intermediate cases show that the losses can be estimated fairly accurately, and results of practical value can be obtained up to fairly high counting speeds when the dead time is as high as 40% of the pulse length. The accuracy with which the loss can be calculated will usually be limited by the uncertainty in our knowledge of the shape of the output pulse of the accelerator and of the exact length of the dead time of the counting arrangement. The attention of the reader is particularly directed to appendix III, where the results of the calculations of this paper will be found summarized. The experimental physicist who wishes to make use of the results without following the detailed analysis may pass directly from the end of § 1 to this appendix, which will also be of value to others for rapid reference.
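The mechanism the abstract describes can be illustrated with a small Monte Carlo sketch (an illustration, not the paper's analytical method): Poisson-distributed arrivals are confined to short pulses, a non-paralyzable dead time suppresses events that land too close together, and the lost fraction is tallied. The function names and parameter values are hypothetical.

```python
import math
import random

def sample_poisson(rng, lam):
    # Knuth's method: count uniform draws until their product drops below e^-lam.
    limit, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= rng.random()
        if p <= limit:
            return k
        k += 1

def pulsed_loss_fraction(mean_per_pulse, pulse_len, dead_time,
                         n_pulses=20000, seed=1):
    """Monte Carlo estimate of the fraction of counts lost when Poisson
    arrivals confined to pulses of length pulse_len pass through a counter
    with a non-paralyzable dead time. The dead time is assumed not to
    carry over between pulses (inter-pulse gap >> dead time)."""
    rng = random.Random(seed)
    emitted = recorded = 0
    for _ in range(n_pulses):
        n = sample_poisson(rng, mean_per_pulse)
        emitted += n
        next_free = 0.0
        for t in sorted(rng.uniform(0.0, pulse_len) for _ in range(n)):
            if t >= next_free:        # counter is live: register the event
                recorded += 1
                next_free = t + dead_time
    return 1.0 - recorded / emitted if emitted else 0.0
```

Because all events are crowded into the pulse, the loss at a given average counting speed is far higher than for a continuous source of the same mean rate, which is the effect the paper quantifies.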

Electronics, 2021, Vol. 10 (2), pp. 220. Author(s): Cheng Lin, Jilei Xing, Xingming Zhuang

Sensorless control of PMSMs is of great importance for safety and reliability in electric vehicles. Among existing methods, only the extended flux-based method performs well over the full speed range. However, the accuracy and reliability of the extended flux rotor position observer are greatly affected by the dead-time effect. In this paper, the extended flux-based observer is adopted to develop a sensorless control system. The influence of the dead-time effect on the observer is analyzed, and a dead-time correction method is specially designed to guarantee the reliability of the whole control system. A comparison of estimation precision among the extended flux-based method, the electromotive force (EMF)-based method and the high-frequency signal injection method is given by simulations. The performance of the proposed sensorless control system is verified by experiments. The experimental results show that the proposed extended flux-based sensorless control system with dead-time correction performs satisfactorily over the full speed range under both loaded and no-load conditions. The estimation error of rotor speed is within 4% under all working conditions, and the dead-time correction method effectively improves the reliability of the control system.
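The dead-time effect the abstract refers to is the small voltage error an inverter leg incurs each switching period while both switches are held off. A common average-value model (a generic textbook sketch, not the paper's specific correction scheme; all names and values here are illustrative) puts the error magnitude at (Td/Ts)·Vdc with sign opposing the phase current, and compensates it by feed-forward:

```python
def dead_time_voltage_error(i_phase, t_dead, t_switch, v_dc):
    """Average per-period voltage error on one inverter phase leg caused
    by inserting dead time t_dead each switching period t_switch.
    Classic model: the error opposes the phase current direction."""
    magnitude = (t_dead / t_switch) * v_dc
    if i_phase > 0:
        return -magnitude
    if i_phase < 0:
        return magnitude
    return 0.0

def compensate_reference(v_ref, i_phase, t_dead, t_switch, v_dc):
    # Feed-forward correction: subtract the predicted error from the
    # commanded voltage so the applied voltage matches the reference.
    return v_ref - dead_time_voltage_error(i_phase, t_dead, t_switch, v_dc)
```

For example, with Vdc = 300 V, Td = 2 µs and Ts = 100 µs the error magnitude is 6 V, which is large enough to distort the extended-flux estimate at low speed where the commanded voltage is small.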


2018, Vol. 7 (4), pp. 2800. Author(s): Avani Kirit Mehta, R. Swarnalatha

Dead time is common in real-time processes and occurs when the process variable does not respond to changes in the set point. The presence of dead time makes a system difficult to control and stabilize, especially in a feedback control loop. The Padé approximation provides a rational approximation of the dead time in continuous process systems, which can be used in further simulations of equivalent First Order plus Dead Time models. However, the standard Padé approximation, with equal numerator and denominator orders, exhibits a jolt at time t = 0, which gives an inaccurate approximation of the dead time. To avoid this phenomenon, Padé approximations of increasing order are applied. In this manuscript, equivalent First Order plus Dead Time models of two blending systems, of orders four and seven, are analysed in this way. As the order of the Padé approximation increases, so does the accuracy of the response: the oscillations occur on a much smaller scale rather than as one large dip into the negative region (as observed for the first few orders), and the approximation follows the desired response curve more closely in the positive region. All simulations are done in MATLAB.
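The paper's simulations are in MATLAB; as a language-neutral illustration, the diagonal Padé approximant of a pure delay exp(-T·s) can be built directly from the standard closed-form coefficients (a sketch using the textbook formula, independent of the paper's models):

```python
from math import factorial

def pade_delay(T, n):
    """Numerator and denominator coefficients (highest power of s first)
    of the diagonal n-th order Pade approximation of the delay exp(-T*s):
        exp(-T*s) ~ num(s) / den(s),
    with c_k = (2n-k)! * n! / ((2n)! * k! * (n-k)!)."""
    c = [factorial(2 * n - k) * factorial(n)
         / (factorial(2 * n) * factorial(k) * factorial(n - k))
         for k in range(n + 1)]
    # Numerator flips the sign of odd powers, producing right-half-plane
    # zeros -- the source of the initial "jolt" at t = 0.
    num = [((-1) ** k) * c[k] * T ** k for k in range(n, -1, -1)]
    den = [c[k] * T ** k for k in range(n, -1, -1)]
    return num, den
```

For n = 1 this reproduces the familiar (1 - Ts/2)/(1 + Ts/2); the right-half-plane zeros in the numerator explain the inverse response at t = 0 that higher orders shrink but never remove.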


Heliyon, 2020, Vol. 6 (2), pp. e03302. Author(s): Rinaldi Idroes, Muslem, Mahmudi, Saiful, Ghazi Mauer Idroes, et al.

2016, Vol. 9 (4), pp. 1799-1816. Author(s): Ilias Fountoulakis, Alberto Redondas, Alkiviadis F. Bais, Juan José Rodriguez-Franco, Konstantinos Fragkos, et al.

Abstract. Brewer spectrophotometers are widely used instruments which perform spectral measurements of the direct, the scattered and the global solar UV irradiance. By processing these measurements a variety of secondary products can be derived, such as the total columns of ozone (TOC), sulfur dioxide and nitrogen dioxide, and aerosol optical properties. Estimating and limiting the uncertainties of the final products is of critical importance. High-quality data have many applications and can provide accurate estimations of trends. The dead time is specific to each instrument, and improper correction of the raw data for its effect may lead to important errors in the final products. The dead time value may change with time and, with the currently used methodology, it cannot always be determined accurately. For specific cases, such as for low ozone slant columns and high intensities of the direct solar irradiance, the error in the retrieved TOC due to a 10 ns change in the dead time from its value in use is found to be up to 5%. The error in the calculation of UV irradiance can be as high as 12% near the maximum operational limit of light intensities. While the existing documentation indicates that dead time effects are important when the error in the used value is greater than 2 ns, we found that for single-monochromator Brewers a 2 ns error in the dead time may lead to errors above the limit of 1% in the calculation of TOC; thus the tolerance limit should be lowered. A new routine for the determination of the dead time from direct solar irradiance measurements has been created and tested, and a validation of the operational algorithm has been performed. Additionally, new methods for the estimation and the validation of the dead time have been developed and are described in detail. Therefore, the present study, in addition to highlighting the importance of the dead time for the processing of Brewer data sets, also provides useful information for their quality control and re-evaluation.
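The dead-time correction applied to photon-counting instruments of this kind typically assumes the paralyzable model, in which the observed rate F and the true rate N are related by F = N·exp(-N·τ), and N is recovered by fixed-point iteration. A minimal sketch (an illustration of that standard model, not the authors' new routine; the function name and values are assumptions):

```python
import math

def correct_dead_time(f_obs, tau, iterations=10):
    """Recover the true photon count rate N from the observed rate f_obs,
    assuming the paralyzable dead-time model f_obs = N * exp(-N * tau).
    Solved by the fixed-point iteration N <- f_obs * exp(N * tau), which
    converges quickly while N * tau is well below 1."""
    n = f_obs
    for _ in range(iterations):
        n = f_obs * math.exp(n * tau)
    return n
```

Because the correction is exponential in N·τ, a few nanoseconds of error in τ translates into a rate-dependent (and hence wavelength-dependent) error in the corrected counts, which is why the abstract's 2 ns tolerance matters most at high light intensities.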


2007, Vol. 97 (1), pp. 795-805. Author(s): Casimir J. H. Ludwig, John W. Mildinhall, Iain D. Gilchrist

During movement programming, there is a point in time at which the movement system is committed to executing an action with certain parameters even though new information may render this action obsolete. For saccades programmed to a visual target this period is termed the dead time. Using a double-step paradigm, we examined potential variability in the dead time with variations in overall saccade latency and spatiotemporal configuration of two sequential targets. In experiment 1, we varied overall saccade latency by manipulating the presence or absence of a central fixation point. Despite a large and robust gap effect, decreasing the saccade latency in this way did not alter the dead time. In experiment 2, we varied the separation between the two targets. The dead time increased with separation up to a point and then leveled off. A stochastic accumulator model of the oculomotor decision mechanism accounts comprehensively for our findings. The model predicts a gap effect through changes in baseline activity without producing variations in the dead time. Variations in dead time with separation between the two target locations are a natural consequence of the population coding assumption in the model.
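The logic of the model's gap-effect prediction can be caricatured with a noise-free linear accumulator (a deliberately stripped-down sketch: the paper's model is stochastic and uses population coding, and all parameter values below are illustrative, not fitted):

```python
def saccade_latency(drift, baseline, threshold=1.0, dead_time=0.04, dt=0.001):
    """Noise-free linear accumulator: activity starts at `baseline` and
    grows at `drift` units per second; a saccade is committed when it
    crosses `threshold`, after which `dead_time` seconds of ballistic
    programming remain during which new input is ignored."""
    t, a = 0.0, baseline
    while a < threshold:
        a += drift * dt
        t += dt
    return t + dead_time

# Gap condition: fixation offset raises baseline activity, shortening
# latency while the dead time itself is unchanged -- mirroring the
# finding that a robust gap effect leaves the dead time unaltered.
latency_gap = saccade_latency(drift=5.0, baseline=0.3)
latency_overlap = saccade_latency(drift=5.0, baseline=0.0)
```

Raising the baseline moves the starting point closer to threshold without touching the post-commitment period, so overall latency shrinks while the dead time stays fixed.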

