POSIT vs. Floating Point in Implementing IIR Notch Filter by Enhancing Radix-4 Modified Booth Multiplier

Electronics ◽  
2022 ◽  
Vol 11 (1) ◽  
pp. 163
Author(s):  
Anwar A. Esmaeel ◽  
Sa’ed Abed ◽  
Bassam J. Mohd ◽  
Abbas A. Fairouz

The increased demand for better accuracy and precision and for wider data sizes has strained the current floating point system and motivated the development of the POSIT system. The POSIT system supports flexible formats and tapered precision, and it provides equivalent accuracy with fewer bits. This paper examines the POSIT and floating point systems, comparing the performance of 32-bit POSIT and 32-bit floating point systems on an IIR notch filter implementation. Given that the bulk of the calculations in the filter are multiplication operations, an Enhanced Radix-4 Modified Booth Multiplier (ERMBM) is implemented to increase calculation speed and efficiency. ERMBM improves area, speed, power, and energy compared to the regular POSIT multiplier by 26.80%, 51.97%, 0.54%, and 52.22%, respectively, without affecting accuracy. Moreover, the Taylor series technique is adopted to implement the division operation, along with a cosine arithmetic unit, for POSIT numbers. Comparing POSIT with floating point, the accuracy of POSIT is 92.31%, better than floating point's 23.08%. POSIT also reduces area by 21.77%, at the cost of increased delay. However, when the ERMBM is used instead of the regular POSIT multiplier in implementing the filter, POSIT outperforms floating point in all performance metrics, improving area, speed, power, and energy by 35.68%, 20.66%, 31.49%, and 45.64%, respectively.
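
As a rough illustration of the recoding the multiplier builds on, here is a minimal Python sketch of radix-4 modified Booth multiplication; it models the arithmetic only, not the ERMBM's hardware enhancements or POSIT decoding.

```python
def booth_radix4_digits(y, n_bits=16):
    """Recode an n_bits-wide (even) two's-complement multiplier into
    radix-4 Booth digits in {-2, -1, 0, +1, +2}, one digit per bit pair."""
    bits = y & ((1 << n_bits) - 1)      # two's-complement bit pattern
    digits, prev = [], 0                # implicit 0 to the right of the LSB
    for i in range(0, n_bits, 2):
        b0 = (bits >> i) & 1
        b1 = (bits >> (i + 1)) & 1
        digits.append(b0 + prev - 2 * b1)
        prev = b1
    return digits

def booth_radix4_multiply(x, y, n_bits=16):
    """Signed multiply via radix-4 Booth recoding: each digit selects a
    partial product in {0, +-x, +-2x}, roughly halving the partial-product
    count versus radix-2. y must fit in n_bits two's complement."""
    return sum(d * x * (4 ** i)
               for i, d in enumerate(booth_radix4_digits(y, n_bits)))

assert booth_radix4_multiply(123, -45) == 123 * -45
```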

2000 ◽  
Vol 78 (2) ◽  
pp. 320-326 ◽  
Author(s):  
Frank AM Tuyttens

The algebraic relationships, underlying assumptions, and performance of the recently proposed closed-subpopulation method are compared with those of other commonly used methods for estimating the size of animal populations from mark-recapture records. In its basic format the closed-subpopulation method is similar to the Manly-Parr method and less restrictive than the Jolly-Seber method. Computer simulations indicate that the accuracy and precision of the population estimators generated by the basic closed-subpopulation method are almost comparable to those generated by the Jolly-Seber method, and generally better than those of the minimum-number-alive method. The performance of all these methods depends on the capture probability, the number of previous and subsequent trapping occasions, and whether the population is demographically closed or open. Violation of the assumption of equal catchability causes a negative bias that is more pronounced for the closed-subpopulation and Jolly-Seber estimators than for the minimum-number-alive. The closed-subpopulation method provides a simple and flexible framework for illustrating that the precision and accuracy of population-size estimates can be improved by incorporating evidence, other than mark-recapture data, of the presence of recognisable individuals in the population (from radiotelemetry, mortality records, or sightings, for example) and by exploiting specific characteristics of the population concerned.
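
As a minimal illustration of one of the estimators compared above, the following Python sketch simulates capture histories for a closed population with equal catchability and computes the minimum-number-alive count per occasion; the closed-subpopulation, Manly-Parr, and Jolly-Seber estimators are more involved and not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(1)

def simulate_histories(n_animals=200, n_occasions=8, p_capture=0.3):
    """Capture histories for a demographically closed population:
    rows = animals, columns = trapping occasions, entries True/False."""
    return rng.random((n_animals, n_occasions)) < p_capture

def minimum_number_alive(histories):
    """MNA count per occasion: an animal counts as alive at t if it was
    caught at t, or caught both before and after t."""
    n, T = histories.shape
    mna = np.zeros(T, dtype=int)
    for t in range(T):
        caught_now = histories[:, t]
        before = histories[:, :t].any(axis=1)
        after = histories[:, t + 1:].any(axis=1)
        mna[t] = np.sum(caught_now | (before & after))
    return mna

# Underestimates the true 200, most severely at the first and last occasions.
print(minimum_number_alive(simulate_histories()))
```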


1977 ◽  
Vol 57 (2) ◽  
pp. 365-374 ◽  
Author(s):  
I. R. SIBBALD ◽  
K. PRICE

Thirty samples of wheat and 28 samples of oats were assayed for true and apparent metabolizable energy (TME, AME). Within grains, the difference TME−AME increased with decreasing AME values; there is evidence that this trend is associated with reduced voluntary consumption of AME assay diets containing low energy grains. The TME and AME data were compared with ME values predicted from physical and chemical data describing the grains. Previously published prediction equations were tested and new equations were derived. Comparisons between predicted and observed data suggested that both the TME and AME values of wheat were predicted with insufficient accuracy and precision for practical use. Similar comparisons using the oat data showed high correlations between observed and predicted values, although the predictions were no more accurate than for wheat; however, when data describing four samples of naked oats were removed, the correlations were reduced substantially. Comparisons involving data for the hulled oats indicated that most equations were able to predict AME better than TME. Multiple regression analysis was used to identify those combinations of variables best able to predict the TME data. No combination of variables was best for both wheat and oats. The combinations of variables used in published equations performed quite well. With four variables, the percentage of TME variation explained was as high as 52% for wheat, 82% for oats, and 64% for hulled oats. Predictions based on air-dry data are associated with higher correlations than those based on dry matter data, but the air-dry predictions are the less useful in practice. The reason for this is discussed.
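
As a hypothetical sketch of the multiple regression step described above, the following Python code derives a prediction equation by ordinary least squares on synthetic grain-composition data; the predictor variables, ranges, and coefficients are illustrative assumptions, not the paper's.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 30                                   # e.g., 30 wheat samples
# Assumed predictors: protein, fat, fibre, starch (% air-dry basis).
X = rng.uniform([8, 1, 1, 55], [16, 4, 12, 75], size=(n, 4))
beta_true = np.array([0.02, 0.09, -0.06, 0.03])
tme = 10 + X @ beta_true + rng.normal(0, 0.3, n)   # synthetic TME, MJ/kg

A = np.column_stack([np.ones(n), X])     # add intercept column
coef, *_ = np.linalg.lstsq(A, tme, rcond=None)
pred = A @ coef
r2 = 1 - np.sum((tme - pred) ** 2) / np.sum((tme - tme.mean()) ** 2)
# The paper reports R^2 up to ~0.52 for wheat with four variables.
print(coef, r2)
```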


Energies ◽  
2020 ◽  
Vol 13 (19) ◽  
pp. 5097
Author(s):  
Gianfranco Chicco ◽  
Andrea Mazza

In the power and energy systems area, literature contributions that apply metaheuristic algorithms have been steadily increasing. In many cases, these applications merely test an existing metaheuristic algorithm on a specific problem and claim, on the basis of weak comparisons, that the proposed method is better than other methods. This 'rush to heuristics' does not happen in the evolutionary computation domain, where the rules for setting up rigorous comparisons are stricter; it is, however, typical of the application domains of metaheuristics. This paper considers applications to power and energy systems and aims to provide a comprehensive view of the main issues concerning the use of metaheuristics for global optimization problems. A set of underlying principles that characterize metaheuristic algorithms is presented. The customization of metaheuristic algorithms to fit the constraints of specific problems is discussed. Some weaknesses and pitfalls found in literature contributions are identified, and specific guidelines are provided on how to prepare sound contributions on the application of metaheuristic algorithms to specific problems.
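
As a sketch of the kind of rigorous comparison such guidelines typically call for, the following Python code compares two placeholder metaheuristics over many independent runs with a nonparametric test, rather than a single lucky run; the algorithms and result distributions are stand-ins, not methods from the paper.

```python
import numpy as np
from scipy.stats import mannwhitneyu

rng = np.random.default_rng(42)

def run_algo_a():
    return rng.normal(100.0, 5.0)   # placeholder: final objective of one run

def run_algo_b():
    return rng.normal(102.0, 5.0)   # placeholder competitor

n_runs = 30                          # independent runs with different seeds
a = np.array([run_algo_a() for _ in range(n_runs)])
b = np.array([run_algo_b() for _ in range(n_runs)])

stat, p = mannwhitneyu(a, b, alternative="two-sided")
print(f"median A={np.median(a):.2f}, median B={np.median(b):.2f}, p={p:.4f}")
# Claim superiority only if p is small AND the difference is practically relevant.
```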


Author(s):  
Noor Nateq Alfaisaly ◽  
Suhad Qasim Naeem ◽  
Azhar Hussein Neama

Worldwide Interoperability for Microwave Access (WiMAX) is the IEEE 802.16 wireless standard; it delivers high speed, providing a data rate of up to 100 Mbps and a coverage area of up to 50 km. Voice over Internet Protocol (VoIP) is flexible and offers low-cost telephony for clients over IP. However, many challenges must still be addressed to provide a stable, good-quality voice connection over the internet. The performance effects of parameters such as the multipath channel model and bandwidth were evaluated over a star-trajectory WiMAX network under a scenario consisting of four cells, each containing one mobile node and one base station. Network performance metrics such as throughput and MOS were used to evaluate the best-performing VoIP codecs. Performance was analyzed with OPNET 14.5. Disabling the multipath channel model gave better results than using the ITU Pedestrian A model: throughput was approximately 1600 packets/sec at 15 dB and approximately 1300 packets/sec at -1 dB. The data likewise show that the MOS with the multipath channel model disabled was better than with the ITU Pedestrian A model.
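
The MOS values above come from the OPNET simulations; as a worked illustration of how a MOS estimate can be derived analytically, the following sketch implements the standard ITU-T G.107 E-model mapping from the rating factor R to MOS. This is a textbook formula, not necessarily the computation OPNET performs internally.

```python
def r_to_mos(r: float) -> float:
    """Map an E-model rating factor R (ITU-T G.107) to an estimated MOS."""
    if r < 0:
        return 1.0
    if r > 100:
        return 4.5
    return 1 + 0.035 * r + 7e-6 * r * (r - 60) * (100 - r)

# A clean narrowband call (R ~ 93.2) scores about 4.4; delay and loss
# impairments that push R down to 60 drop the MOS to about 3.1.
print(r_to_mos(93.2), r_to_mos(60.0))
```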


2011 ◽  
Vol 7 (4) ◽  
pp. 47-64 ◽  
Author(s):  
Toly Chen

This paper presents a dynamically optimized fluctuation smoothing rule to improve the performance of job scheduling in a wafer fabrication factory. The rule modifies the four-factor bi-criteria nonlinear fluctuation smoothing (4f-biNFS) rule by dynamically adjusting its factors. Some properties of the dynamically optimized fluctuation smoothing rule are also discussed theoretically. In addition, production simulation was applied to generate test data for evaluating the effectiveness of the proposed methodology. According to the experimental results, the proposed methodology was better than some existing approaches at reducing both the average cycle time and the cycle time standard deviation. The results also showed that it was possible to improve one of these measures without sacrificing the other performance metrics.
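
The 4f-biNFS rule itself is not specified in this abstract; the following is a simplified, hypothetical Python sketch of a fluctuation-smoothing style dispatching rule, in which each waiting lot's slack combines its release time and an estimated remaining cycle time, and the lot with the smallest slack is dispatched first. The weights are placeholders standing in for the four dynamically adjusted factors, not the author's values.

```python
# Hypothetical fluctuation-smoothing style dispatching: smaller slack wins.
def slack(job, w_release=1.0, w_remaining=1.0):
    return w_release * job["release_time"] - w_remaining * job["est_remaining_ct"]

def next_job(queue):
    """Pick the waiting lot with the smallest slack."""
    return min(queue, key=slack)

queue = [
    {"id": "lot1", "release_time": 0.0, "est_remaining_ct": 40.0},
    {"id": "lot2", "release_time": 5.0, "est_remaining_ct": 60.0},
    {"id": "lot3", "release_time": 8.0, "est_remaining_ct": 20.0},
]
print(next_job(queue)["id"])  # lot2: smallest slack under these weights
```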


1965 ◽  
Vol 48 (4) ◽  
pp. 855-858
Author(s):  
Glen M Shue

Abstract Examination of data obtained from the 1963 collaborative study of a proposed chemical assay of vitamin D shows the following: the vitamin D content of the sample may be calculated by using a reading at 550 mμ as the blank, with an accuracy and precision equal to or better than that of the proposed calculation; an internal standard is not necessary. These findings suggest a simplified colorimetric procedure which requires less than 5 μg (200 units). Data demonstrate a marked improvement in the color reagent resulting from reduction of the concentration of antimony trichloride. Preliminary data indicate that the method can be applied to high-potency irradiated yeast.
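
As a hypothetical worked example of the calculation the abstract describes, the following sketch blank-corrects the absorbance with the 550 mμ reading and computes vitamin D content by direct proportion against an external standard; all absorbance values are illustrative, not from the collaborative study.

```python
# Blank-corrected colorimetric calculation (illustrative numbers only).
a_sample_peak = 0.412   # sample absorbance at the analytical wavelength
a_sample_550 = 0.058    # same cuvette read at 550 mu, used as the blank
a_std_peak = 0.390      # external standard at the analytical wavelength
a_std_550 = 0.051       # external standard at 550 mu
c_std_ug = 4.0          # ug vitamin D in the standard aliquot (160 IU)

c_sample_ug = (a_sample_peak - a_sample_550) / (a_std_peak - a_std_550) * c_std_ug
# ~4.18 ug, i.e., under the 5 ug (200 IU) the simplified procedure requires.
print(f"{c_sample_ug:.2f} ug vitamin D")
```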


2002 ◽  
Vol 48 (11) ◽  
pp. 1963-1969 ◽  
Author(s):  
John F Wilson ◽  
Ian D Watson ◽  
John Williams ◽  
Pat A Toseland ◽  
Alison H Thomson ◽  
...  

Abstract Background: The accuracy and precision of methods for the measurement of the anticonvulsants phenytoin, phenobarbital, primidone, carbamazepine, ethosuximide, and valproate in human serum were assessed in 297 laboratories that were participants in the United Kingdom National External Quality Assessment Scheme (UKNEQAS). Methods: We distributed lyophilized, serum-based materials containing low, medium, and high weighed-in concentrations of the drugs. The 297 participating laboratories received the materials on two occasions, 7 months apart. Expected concentrations were determined by gas chromatography or HPLC methods in five laboratories using serum-based NIST reference materials as calibrators. Results: In general, bias was consistent across concentrations for a method but often differed in magnitude for different drugs. Bias ranged from −1.9% to 8.6% for phenytoin, −2.7% to 3.1% for phenobarbital, −2.7% to 0.5% for primidone, −8.6% to 0.3% for carbamazepine, −5.6% to 2.0% for ethosuximide, and −7.2% to 0.1% for valproate. Intralaboratory sources of imprecision significantly exceeded interlaboratory sources for many drug/method combinations. The mean CVs for intra- and interlaboratory errors for the different drugs were 6.3–7.8% and 3.3–4.2%, respectively. Conclusions: For these long-established and relatively high-concentration analytes, the closed analytical platforms generally performed no better than open systems or chromatography, where use of calibrators prepared in house predominated. To improve the accuracy of measurements, work is required principally by the manufacturers of immunoassays to ensure minimal calibration error and to eliminate batch-to-batch variability of reagents. Individual laboratories should concentrate on minimizing dispensing errors.
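
As a sketch of the two summary statistics reported above, the following Python code computes percentage bias against a weighed-in target and intra- versus inter-laboratory CVs on made-up replicate data, arranged so that within-lab scatter dominates, echoing the intra > inter pattern found in the study.

```python
import numpy as np

target = 10.0                     # weighed-in concentration, mg/L (illustrative)
results = np.array([
    [9.2, 10.6, 9.9],             # lab 1 replicate measurements
    [10.4, 9.5, 10.1],            # lab 2
    [9.8, 10.7, 9.6],             # lab 3
])

lab_means = results.mean(axis=1)
bias_pct = 100 * (results.mean() - target) / target
intra_cv = 100 * np.mean(results.std(axis=1, ddof=1) / lab_means)  # within-lab
inter_cv = 100 * lab_means.std(ddof=1) / lab_means.mean()          # between-lab
print(f"bias {bias_pct:+.1f}%, intra-lab CV {intra_cv:.1f}%, "
      f"inter-lab CV {inter_cv:.1f}%")
```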


2018 ◽  
Vol 8 (12) ◽  
pp. 2654
Author(s):  
Joaquin Mass-Sanchez ◽  
Erica Ruiz-Ibarra ◽  
Ana Gonzalez-Sanchez ◽  
Adolfo Espinoza-Ruiz ◽  
Joaquin Cortez-Gonzalez

Localization is a fundamental problem in Wireless Sensor Networks, as it provides useful information regarding the detection of an event. Different localization algorithms are applied in single-hop or multi-hop networks; in both cases their performance depends on several factors in the evaluation scenario, such as node density, the number of reference nodes, and the log-normal shadowing propagation model, determined by the path-loss exponent (η) and the noise level (σdB), which impact the accuracy and precision performance metrics of localization techniques. In this paper, we present a statistical analysis based on the 2^k factorial methodology to determine the key factors affecting the performance metrics of localization techniques in a single-hop network, so that simulation effort can concentrate on those parameters and the amount of simulation time required is reduced. For this proposal, MATLAB simulations are carried out in different scenarios, i.e., extreme values are used for each of the factors of interest and the impact of their interactions on the performance metrics is observed. The simulation results show that the path-loss exponent (η) and noise level (σdB) factors have the greatest impact on the accuracy and precision metrics evaluated in this study. Based on this statistical analysis, we recommend estimating the propagation model as close to reality as possible and considering it in the design of new localization techniques, thus improving their accuracy and precision metrics.
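
As a sketch of the model behind the two dominant factors, the following Python code implements log-normal shadowing path loss with exponent η and noise σdB, then inverts the mean model to obtain the noisy range estimates an RSSI-based localization scheme would work from; the reference distance and field parameters are illustrative.

```python
import numpy as np

# Log-normal shadowing: PL(d) = PL(d0) + 10*eta*log10(d/d0) + X_sigma,
# with X_sigma ~ N(0, sigma_dB^2).
rng = np.random.default_rng(7)

def path_loss_db(d, eta=3.0, sigma_db=4.0, d0=1.0, pl_d0=40.0):
    """Path loss in dB at distance d (m) under log-normal shadowing."""
    return pl_d0 + 10 * eta * np.log10(d / d0) + rng.normal(0, sigma_db)

def estimate_distance(pl, eta=3.0, d0=1.0, pl_d0=40.0):
    """Invert the mean model to get a noisy range estimate."""
    return d0 * 10 ** ((pl - pl_d0) / (10 * eta))

true_d = 25.0
est = [estimate_distance(path_loss_db(true_d)) for _ in range(1000)]
# The multiplicative (log-normal) range error grows quickly with sigma_dB
# and with any mismatch in the assumed eta.
print(np.mean(est), np.std(est))
```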


2017 ◽  
Vol 2017 ◽  
pp. 1-16 ◽  
Author(s):  
Mariam Akbar ◽  
Nadeem Javaid ◽  
Wadood Abdul ◽  
Sanaa Ghouzali ◽  
Abid Khan ◽  
...  

Mobile Sink (MS) based routing strategies have been widely investigated to prolong the lifetime of Wireless Sensor Networks (WSNs). In this paper, we propose two schemes for data gathering in WSNs: (i) the MS moves on random paths in the network (RMS), and (ii) the trajectory of the MS is predefined (DMS). In both schemes, the network field is logically divided into small squares, and the center point of each partitioned area is a sojourn location of the MS. We present three linear programming based models: (i) to maximize network lifetime, (ii) to minimize path loss, and (iii) to minimize end-to-end delay. Moreover, a geometric model is proposed to avoid redundancy while collecting information from the network nodes. Simulation results show that our proposed schemes perform better than the selected existing schemes in terms of the selected performance metrics.
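
As a minimal sketch of the partitioning step described above, the following Python code divides a square field into small squares and returns their center points, i.e., the candidate sojourn locations of the MS (visited in random order under RMS, along a fixed trajectory under DMS); the field size and grid resolution are illustrative.

```python
import numpy as np

def sojourn_locations(field_side=100.0, squares_per_side=4):
    """Centers of a squares_per_side x squares_per_side grid over the field."""
    step = field_side / squares_per_side
    centers = [(step * (i + 0.5), step * (j + 0.5))
               for i in range(squares_per_side)
               for j in range(squares_per_side)]
    return np.array(centers)

points = sojourn_locations()
print(points[:3])   # [[12.5 12.5] [12.5 37.5] [12.5 62.5]]
```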

