simple arithmetic
Recently Published Documents


TOTAL DOCUMENTS

286
(FIVE YEARS 72)

H-INDEX

27
(FIVE YEARS 2)

Author(s):  
Vaishali Sharma

Abstract: This paper proposes the design of a Vedic multiplier based on the Urdhva Tiryagbhyam method of multiplication, the most efficient of the Vedic sutras for multiplication. Urdhva Tiryagbhyam is a vertical-and-crosswise technique for finding the product of two numbers. Multiplication is an essential function in arithmetic logic operations, and the computational performance of a DSP system is limited by its multiplication performance, since multiplication dominates the execution time of most DSP algorithms. Although multiplication is one of the basic arithmetic operations, it requires considerably more hardware resources and processing time than addition or subtraction. Our work compares Vedic multiplier architectures of different bit widths using the carry look-ahead adder technique. Keywords: Carry Look Ahead Adder, Urdhva Tiryagbhyam, DSP algorithms, Vedic Multiplier
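As an illustration of the vertical-and-crosswise idea, here is a minimal Python sketch (digit lists are least-significant-first; the function name is ours, and the serial carry pass shown for clarity stands in for the paper's carry look-ahead adder):

```python
# Minimal sketch of Urdhva Tiryagbhyam (vertical-and-crosswise) multiplication.
# Digit lists are least-significant-first; base=2 gives the binary multipliers
# discussed in the paper. Illustrative only, not the paper's hardware design.

def urdhva_multiply(a, b, base=2):
    n = len(a)
    assert len(b) == n, "operands are assumed equal width"
    # Vertical-and-crosswise step: column k collects every a[i]*b[j] with i+j=k.
    columns = [0] * (2 * n)
    for i in range(n):
        for j in range(n):
            columns[i + j] += a[i] * b[j]
    # Carry propagation (done serially here; the paper uses a CLA instead).
    result, carry = [], 0
    for c in columns:
        carry, digit = divmod(c + carry, base)
        result.append(digit)
    return result  # least-significant digit first

# 13 x 11 = 143 in binary: 1101 x 1011 -> 10001111
print(urdhva_multiply([1, 0, 1, 1], [1, 1, 0, 1]))  # [1, 1, 1, 1, 0, 0, 0, 1]
```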


2022 ◽  
Vol 14 (1) ◽  
pp. 55
Author(s):  
Shaimaa said soltan

In this document, we present a new way to visualize the distribution of prime numbers in the number system, allowing primes to be spotted within a subset of numbers using a simpler algorithm. We then examine a classification algorithm that checks whether a number is prime using only 7 simple arithmetic operations, with an algorithmic complexity of at most O(7) operations for any number.
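The abstract does not reveal which 7 operations the classifier uses, so the following is not the paper's method; it only illustrates how a fixed handful of arithmetic operations can pre-filter prime candidates, using the standard fact that every prime greater than 3 has the form 6k ± 1:

```python
def could_be_prime(n: int) -> bool:
    """Constant-operation necessary condition: every prime > 3 is 6k +/- 1.

    NOT the paper's classifier (the abstract does not give it); this only
    shows a constant-cost arithmetic pre-filter. Composites such as 25
    pass the filter, so a full primality test must follow.
    """
    if n in (2, 3):
        return True
    if n < 2 or n % 2 == 0 or n % 3 == 0:
        return False
    r = n % 6  # a single modulo operation decides the rest
    return r == 1 or r == 5

print([n for n in range(2, 30) if could_be_prime(n)])
```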


2022 ◽  
Vol 20 (1) ◽  
pp. 81
Author(s):  
DAVID KIKI BARINGIN MARULI TUA SAMOSIR

Research on green building from the perspective of accounting science is still rare. This research aims to explore the benchmarks and criteria for green building as applied to multi-storey buildings, and to contribute to increasing the efficiency of building operational costs.

The method used in this research is exploration of questionnaire data, using simple arithmetic techniques and graphic techniques to summarize the observational data. The number of respondents who had answered the questionnaire by the time the data was processed was 111.

The results of this study indicate that the green building benchmarks can be said to have been implemented, because the average percentage of respondents who answered Yes was 58.4%, above the standard used in this study of 57% (gold rank).

This research has theoretical implications, strengthening the reliability theory of accounting, in particular green accounting and the triple bottom line (planet, people and profit). Whereas green building implementation used to be assessed only through the building's civil engineering condition, its architecture, and its electrical engineering, the advantages, disadvantages and benefits of green building are now also being calculated.

From a microeconomic (organizational) point of view, this research contributes to educating the property business and its stakeholders that green building is not an expensive object but rather a solution for cost efficiency, and helps people distinguish the price of green buildings from that of ordinary buildings.


Author(s):  
Ms. Amita P. Thakare ◽  
Dr. Sunil Kumar

Machine learning algorithms are complicated to model on hardware, because they require many complex design structures that are not easily synthesizable. Over the years, researchers have therefore developed various state-of-the-art techniques, each with distinct advantages over the others. In this article, we compare the different strategies for hardware modelling of machine learning algorithms and their hardware-level performance. The article should be useful to any researcher or system designer who wants to first evaluate the advanced techniques for ML design and then extend and optimize them for better system performance. Our assessment is based on the three primary parameters of hardware design: area, power and delay. Any design approach that can find a balance among these three parameters may be termed optimal. This work also recommends improvements for some of the techniques, which can be taken up for further research. Machine learning is the concept of learning from examples and experience without being explicitly programmed: rather than writing code, you feed data to a generic algorithm, and it builds its logic from the data it is given. One kind of algorithm, for example, is a classification algorithm, which places data into different groups; a classification algorithm used to recognize handwritten alphabets can likewise classify emails as spam or not-spam. Machine learning has solved many problems, ranging from classic ones like the Travelling Salesman Problem (TSP) to complex issues like predicting variations in stock market prices. Machine learning algorithms such as genetic algorithms, particle swarm optimization, deep nets and Q-learning are currently developed on software platforms because of the ease of implementation, but the full potential of the core algorithms can only be realized if they are designed and integrated inside the silicon chip. Companies such as Apple, Google and Qualcomm (Snapdragon) continuously update their ICs to incorporate these algorithms, yet there is no standard architecture defined to implement them at chip level, so the inefficiencies of each alternative multiply when the devices are connected together. In this research work, we plan to develop a standard architecture for implementing machine learning algorithms on integrated circuits, so that the circuits work seamlessly with each other when connected and improve overall system performance. Finally, we plan to implement at least two algorithms on the proposed architecture and verify its optimization capability for practical systems.
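As a rough illustration of the area-power-delay balance used as the evaluation criterion above, the sketch below ranks candidate designs by their area × power × delay product, one common composite figure of merit (the metric choice, design names and all numbers are illustrative, not from the article):

```python
# Hypothetical design-space comparison using the area x power x delay (APD)
# product as a figure of merit; lower is better. Values are illustrative only.
designs = {
    "systolic_array": {"area_mm2": 1.8, "power_mw": 420.0, "delay_ns": 3.1},
    "bit_serial":     {"area_mm2": 0.4, "power_mw": 95.0,  "delay_ns": 14.2},
    "parallel_mac":   {"area_mm2": 2.9, "power_mw": 610.0, "delay_ns": 1.7},
}

def apd(metrics: dict) -> float:
    return metrics["area_mm2"] * metrics["power_mw"] * metrics["delay_ns"]

# Rank designs from most to least balanced under this metric.
for name, metrics in sorted(designs.items(), key=lambda kv: apd(kv[1])):
    print(f"{name:15s} APD = {apd(metrics):8.1f}")
```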


Sensors ◽  
2022 ◽  
Vol 22 (2) ◽  
pp. 421
Author(s):  
Pedro Juan Roig ◽  
Salvador Alcaraz ◽  
Katja Gilly ◽  
Cristina Bernad ◽  
Carlos Juiz

Multi-access edge computing implementations are ever increasing, both in the number of deployments and in the areas of application. In this context, easing the packet-forwarding operations between two end devices within a particular edge computing infrastructure may allow for more efficient performance. In this paper, an arithmetic framework based on a layered approach is proposed to optimize packet-forwarding actions, such as routing and switching, in generic edge computing environments. It takes advantage of the properties of integer division and modular arithmetic, reducing the search for the proper next hop toward a desired destination to simple arithmetic operations, as opposed to lookups in routing or switching tables. To this end, the different types of communication within a generic edge computing environment are first studied; afterwards, three diverse case scenarios are described in terms of the proposed arithmetic framework, and all of them are verified both by arithmetic means, through the application of theorems, and by algebraic means, through the search for behavioral equivalences.
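The abstract leaves the framework's details to the paper itself, but the flavor of table-free forwarding can be sketched for a hypothetical k-ary tree of switches, where integer division locates a host's leaf switch and a modulo picks the egress port (the topology, names and fan-out K are assumptions for illustration, not the paper's framework):

```python
# Minimal sketch of next-hop computation by integer division/modulo in a
# hypothetical k-ary tree of switches. Hosts are numbered 0..N-1 at the
# leaves; each leaf switch serves K consecutive hosts.

K = 4  # switch fan-out (assumed value)

def leaf_switch(host: int) -> int:
    """Leaf switch serving a host: plain integer division."""
    return host // K

def next_hop(current_switch: int, dst_host: int) -> str:
    """Decide the forwarding direction without consulting a lookup table."""
    if leaf_switch(dst_host) == current_switch:
        # Destination hangs off this switch: the modulo picks the port.
        return f"down to port {dst_host % K}"
    return "up to aggregation layer"  # otherwise forward toward the parent

print(next_hop(leaf_switch(5), 5))   # down to port 1
print(next_hop(leaf_switch(5), 13))  # up to aggregation layer
```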


2021 ◽  
Vol 7 (3) ◽  
pp. 248-258 ◽  
Author(s):  
Jamie I. D. Campbell ◽  
Yalin Chen ◽  
Maham Azhar

We conducted two conceptual replications of Experiment 1 in Mathieu, Gourjon, Couderc, Thevenot, and Prado (2016, https://doi.org/10.1016/j.cognition.2015.10.002). They tested a sample of 34 French adults on mixed-operation blocks of single-digit addition (4 + 3) and subtraction (4 – 3) with the three problem elements (O1, +/-, O2) presented sequentially. Addition was 34 ms faster if O2 appeared 300 ms after the operation sign and was displaced 5° to the right of central fixation, whereas subtraction was 19 ms faster when O2 was displaced to the left. Replication Experiment 1 (n = 74, recruited at the University of Saskatchewan) used the same non-zero addition and subtraction problems and trial event sequence as Mathieu et al., but participants completed blocks of pure addition and pure subtraction followed by the mixed-operation condition used by Mathieu et al. Addition RT showed a 32 ms advantage with O2 shifted rightward relative to leftward, but only in mixed-operation blocks. There was no effect of O2 position on subtraction RT. Experiment 2 (n = 74) was the same except that mixed-operation blocks occurred before the pure-operation blocks. There was an overall 13 ms advantage with O2 shifted rightward relative to leftward, but no interaction with operation or with mixture (i.e., pure vs. mixed operations). Nonetheless, the rightward RT advantage was statistically significant for both addition and subtraction only in mixed-operation blocks. Taken together with the robust effects of mixture in Experiment 1, the results suggest that O2 position effects in this paradigm might reflect task-specific demands associated with mixed operations.


2021 ◽  
Author(s):  
Anna Byszewska ◽  
Jacek Rudowicz ◽  
Katarzyna Lewczuk ◽  
Joanna Jabłońska ◽  
Marek Rękas

Abstract

Purpose: This study aimed to assess refractive astigmatism in phaco-canaloplasty (PC) vs phaco-non-penetrating deep sclerectomy (PDS) in a randomized, prospective study over 24 months.

Methods: Patients were randomized pre-operatively; 37 underwent PC and 38 PDS. The following data were collected: BCVA, IOP, number of antiglaucoma medications, and refraction by autokeratorefractometry. Astigmatism was assessed by simple arithmetic and by vector analysis, including double angle plots and cumulative refractive astigmatism graphs.

Results: Pre-operative mean BCVA in PC was 0.40±0.43 logMAR, comparable to 0.30±0.32 logMAR in PDS (P=0.314). At the six-month follow-up, mean BCVA showed no difference (P=0.708): 0.07±0.13 and 0.05±0.11, respectively. Two years after the intervention, however, mean BCVA was significantly better in PC (0.05±0.12) than in PDS (0.12±0.23; P=0.039). Mean astigmatism at baseline was 1.13±0.73 Dcyl in PC and 1.35±0.91 Dcyl in PDS (P=0.544); at six months, 1.09±0.61 and 1.24±0.86, respectively (P=0.595); at two years, 1.17±0.51 for PC and 1.24±0.82 for PDS (P=0.917). The direction of mean astigmatism was against the rule throughout the observation in both groups. Centroids pre-operatively were 0.79 D @ 172°±1.10 Dcyl in PC and 0.28 D @ 10°±1.63 D in PDS; at six months, 0.75 D @ 166°±1.01 and 0.26 D @ 11°±1.5, respectively; at 24 months, 0.64 D @ 164°±1.11 and 0.47 D @ 20°±1.43. Mean baseline IOP was 19.4±5.8 mmHg in PC and 19.7±5.4 mmHg in PDS (P=0.639). From the sixth month onward IOP was lower in PC; at 24 months it was 13.8±3.3 mmHg in PC and 15.1±2.9 mmHg in PDS (P=0.048). Pre-operatively, patients in both groups used a median (Me) of 3 antiglaucoma medications (P=0.197); at 24 months, PC patients used a mean of 0.5±0.9 (Me=0.0) and PDS patients 1.1±1.2 (Me=1.0) (P=0.058).

Conclusions: Both surgeries are safe and effective in mid-term observation. They do not generate vision-threatening astigmatism and do not change the pre-operative direction of mean astigmatism. Refractive astigmatism remained stable throughout the observation.
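For readers unfamiliar with the vector analysis mentioned above, centroids such as 0.79 D @ 172° are conventionally computed with the double-angle transform, so that the 0°/180° meridian wraps correctly. A minimal sketch of that standard calculation (not the authors' code; sample values are hypothetical):

```python
import math

def centroid(astigmatisms):
    """Mean astigmatism via the standard double-angle transform.

    Each entry is (magnitude_D, axis_deg). Axes are doubled so that, e.g.,
    1 D @ 179 deg and 1 D @ 1 deg (nearly the same physical meridian)
    average to ~1 D rather than cancelling. Returns (magnitude_D, axis_deg).
    """
    xs = [c * math.cos(math.radians(2 * a)) for c, a in astigmatisms]
    ys = [c * math.sin(math.radians(2 * a)) for c, a in astigmatisms]
    mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
    mag = math.hypot(mx, my)
    axis = math.degrees(math.atan2(my, mx)) / 2 % 180  # halve back to 0-180
    return mag, axis

# Hypothetical against-the-rule eyes clustered near the 0/180 meridian:
print(centroid([(1.0, 170.0), (0.8, 175.0), (1.2, 5.0)]))
```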


2021 ◽  
Author(s):  
Akshay Kumar Avvaru ◽  
Rakesh K Mishra ◽  
Divya Tej Sowpati

Numerical or vector representations of DNA sequences have been applied for identification of specific sequence characteristics and patterns which are not evident in their character (A, C, G, T) representations. These transformations often reveal a mathematical structure to the sequences which can be captured efficiently using established mathematical methods. One such transformation, the 2-bit format, represents each nucleotide using only two bits instead of eight for efficient storage of genomic data. Here we describe a mathematical property that exists in the 2-bit representation of tandemly repeated DNA sequences. Our tool, DiviSSR (pronounced divisor), leverages this property and subsequent arithmetic for ultrafast and accurate identification of tandem repeats. DiviSSR can process the entire human genome in ~30s, and short sequence reads at a rate of >1 million reads/s on a single CPU thread. Our work also highlights the implications of using simple mathematical properties of DNA sequences for faster algorithms in genomics.
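The abstract does not spell the property out, but for a perfect tandem repeat the 2-bit number obeys a simple divisibility identity: k copies of a motif of length m encode to the motif's value times a "repunit" in base 4^m. A hedged sketch of that arithmetic (illustrative only, not DiviSSR's actual implementation; the A=00, C=01, G=10, T=11 encoding is one common convention):

```python
# 2-bit encoding of DNA and the repeat identity it induces (illustrative).
CODE = {"A": 0b00, "C": 0b01, "G": 0b10, "T": 0b11}

def encode(seq: str) -> int:
    """Pack a DNA string into an integer, two bits per base."""
    value = 0
    for base in seq:
        value = (value << 2) | CODE[base]
    return value

def is_perfect_tandem_repeat(seq: str, motif_len: int) -> bool:
    """k copies of motif m encode to encode(m) * (4**(m_len*k) - 1) // (4**m_len - 1),
    i.e., the motif value times a repunit in base 4**m_len."""
    k, rem = divmod(len(seq), motif_len)
    if rem or k < 2:
        return False
    repunit = (4 ** (motif_len * k) - 1) // (4 ** motif_len - 1)
    return encode(seq) == encode(seq[:motif_len]) * repunit

print(is_perfect_tandem_repeat("ACGACGACG", 3))  # True
print(is_perfect_tandem_repeat("ACGACGACT", 3))  # False
```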


Author(s):  
Александр Анатольевич Васильев

In economic forecasting of short time series, the zero-order Brown model is often applied. One of the problems in using this model in the first steps of forecasting is estimating the initial value of the exponential average. As a rule, the simple arithmetic mean of the first levels of the series is used as this estimate, but it is not a robust statistical estimate. This study therefore proposes using the robust M-estimates of Tukey, Hampel, Huber and Andrews to estimate the initial value of the exponential average. The purpose of the study is to determine whether M-estimates are appropriate for determining the initial value of the exponential average in the Brown model when forecasting short time series of economic indicators. The experimental study established the following: a) the most significant factors affecting forecast accuracy with the Brown model are the type of time series, the value of the smoothing constant, the rejection of anomalous levels, and the type of weights; b) the type of estimate of the initial value of the exponential average and the number of iterations in computing the M-estimate are less significant factors, which justifies the use of one-step M-estimates; c) in the initial steps of forecasting, when the number of time-series levels is limited, the type of series cannot be reliably determined, and there are no grounds for rejecting anomalous levels, it is preferable to use the Brown model with Wade weights and to determine the initial value of the exponential average from one-step robust M-estimates (in other cases the simple arithmetic mean is appropriate).
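The abstract does not give the exact estimator settings; as a rough illustration of why a robust initial level matters, here is a minimal Python sketch of zero-order Brown smoothing with the initial exponential average taken either from the simple arithmetic mean or from a simplified one-step Huber M-estimate (the tuning constant, MAD scaling and starting median are common defaults, not necessarily the paper's choices):

```python
import statistics

def huber_one_step(xs, c=1.345):
    """Simplified one-step Huber M-estimate of location: start at the
    median, scale by the MAD, apply one reweighting step (a sketch of the
    one-step estimators the study refers to, not its exact formula)."""
    m = statistics.median(xs)
    mad = statistics.median(abs(x - m) for x in xs) or 1.0
    def psi(u):  # Huber influence function: identity clipped at +/- c
        return max(-c, min(c, u))
    return m + mad * sum(psi((x - m) / mad) for x in xs) / len(xs)

def brown_forecast(series, alpha=0.3, init_levels=3, robust=True):
    """Zero-order Brown model: exponential smoothing of the level."""
    head = series[:init_levels]
    s = huber_one_step(head) if robust else sum(head) / len(head)
    for x in series[init_levels:]:
        s = alpha * x + (1 - alpha) * s  # exponential average update
    return s  # forecast for the next period

data = [10.2, 9.8, 35.0, 10.5, 10.1, 9.9]  # note the anomalous level 35.0
print(brown_forecast(data, robust=True))   # initial level barely affected
print(brown_forecast(data, robust=False))  # arithmetic mean drags it upward
```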

