Efficient Memory-based Multiplier Technique for SWL DSP Systems

Author(s):  
B Ajay Kumar

DSP systems perform a large number of multiplications because they process many discrete signals. Combinational multipliers consume considerable power owing to their many intermediate blocks (typically full adders and AND gates), and they also occupy more area and incur more delay; usually there is a trade-off between area and delay. To make the multiplier more efficient, a memory-based multiplier is generally preferred. Memory-based multipliers use techniques such as anti-symmetric product coding (APC) and odd-multiple storage (OMS), all of which rely on LUT-based storage: the precomputed products are stored efficiently according to the chosen coding scheme. To optimize the memory required, we combine the APC and OMS techniques for better storage and retrieval of data, and in this project we show how the combined technique increases the performance of the multiplier. The combined technique reduces the size of the LUT to one-fourth that of a conventional LUT. It is demonstrated that the proposed LUT architecture for small input sizes can be used to perform high-precision multiplication with input-operand decomposition in an efficient manner.
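To illustrate the LUT-based idea, the sketch below is a behavioral model (ours, not the paper's hardware) of the OMS half of the scheme: only odd multiples of the fixed coefficient are stored, halving the table, and even multiples are recovered by shifting; the APC coding halves the table again in the combined design.

```python
def build_oms_lut(a, bits):
    # Odd-multiple storage: keep only a*1, a*3, ..., a*(2**bits - 1),
    # i.e. 2**(bits - 1) entries instead of 2**bits.
    return {x: a * x for x in range(1, 2 ** bits, 2)}

def lut_multiply(lut, x):
    # Recover a*x for any x: factor x = odd * 2**k, then shift the
    # stored odd multiple left by k. x == 0 is a special case
    # (handled by a reset in hardware).
    if x == 0:
        return 0
    k = 0
    while x % 2 == 0:
        x //= 2
        k += 1
    return lut[x] << k
```

With a 4-bit operand, for example, the table holds 8 words rather than 16; the combined APC-OMS coding would bring this down to 4.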

Electronics, 2021, Vol 10 (6), pp. 741
Author(s):
Yuseok Ban
Kyungjae Lee

Many researchers have suggested improving user retention on digital platforms using recommender systems. Recent studies show that there are many potential ways to help users find interesting items other than high-precision rating prediction. In this paper, we study how the different types of information presented to a user can influence their behavior. The types are divided into visual, evaluative, categorical, and narrational information. Based on our experimental results, we analyze how different types of supplementary information affect the performance of a recommender in terms of encouraging users to click more items or spend more time on the platform.


Author(s):
Jie Zhou
Bicheng Li
Yongwang Tang
Person name clustering disambiguation is the process of partitioning name mentions according to the real-world person entities they refer to. Existing methods cannot effectively identify the important features needed to disambiguate person names. This paper presents a method for Chinese person name disambiguation based on two-stage clustering, which uses a stage-by-stage processing model to identify and exploit different types of important features. First, we extract three kinds of core evidence, namely direct social relations, indirect social relations, and common description prefixes; recognize document pairs referring to the same person entity; and perform an initial clustering of person names with high precision. Then, taking the result of the initial clustering as new input, we use the statistical properties of multiple documents to recognize and weight important features, and build a double-vector representation of each cluster (a cluster feature vector and an important-feature vector). On this basis the final clustering of person names is generated, and the recall of clustering is improved effectively. Experiments conducted on the CLP2010 Chinese person name disambiguation dataset show that the method performs well in person name clustering disambiguation.
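A minimal sketch of the two-stage idea (our toy re-implementation, not the authors' system): stage one unions only documents that share a core-evidence feature, keeping precision high; stage two pools each cluster's features into a vector and merges clusters whose vectors are similar, recovering recall.

```python
from collections import Counter
from math import sqrt

def two_stage_cluster(docs, threshold=0.5):
    # docs: {doc_id: {"core": set of core evidences, "feats": Counter}}
    # Stage 1: high-precision initial clustering -- union documents
    # that share at least one core evidence (e.g. a social relation).
    parent = {d: d for d in docs}
    def find(x):
        while parent[x] != x:
            x = parent[x]
        return x
    ids = list(docs)
    for i, a in enumerate(ids):
        for b in ids[i + 1:]:
            if docs[a]["core"] & docs[b]["core"]:
                parent[find(a)] = find(b)
    groups = {}
    for d in ids:
        groups.setdefault(find(d), []).append(d)
    clusters = list(groups.values())

    # Stage 2: merge clusters whose pooled feature vectors are close
    # (cosine similarity), improving recall.
    def vec(cluster):
        v = Counter()
        for d in cluster:
            v.update(docs[d]["feats"])
        return v
    def cosine(u, v):
        num = sum(u[k] * v[k] for k in u)
        den = sqrt(sum(x * x for x in u.values())) * sqrt(sum(x * x for x in v.values()))
        return num / den if den else 0.0
    merged = True
    while merged:
        merged = False
        for i in range(len(clusters)):
            for j in range(i + 1, len(clusters)):
                if cosine(vec(clusters[i]), vec(clusters[j])) >= threshold:
                    clusters[i] += clusters.pop(j)
                    merged = True
                    break
            if merged:
                break
    return clusters
```

Documents with no shared core evidence but strongly overlapping context features end up in the same cluster only in stage two, which mirrors the precision-then-recall ordering of the method.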


2021, Vol 22 (1)
Author(s):
Jiyu Chen
Nicholas Geard
Justin Zobel
Karin Verspoor

Background: Literature-based gene ontology (GO) annotation is a process where expert curators use uniform expressions to describe gene functions reported in research papers, creating computable representations of information about biological systems. Manual assurance of consistency between GO annotations and the associated evidence texts identified by expert curators is reliable but time-consuming, and is infeasible in the context of rapidly growing biological literature. A key challenge is maintaining consistency of existing GO annotations as new studies are published and the GO vocabulary is updated. Results: In this work, we introduce a formalisation of biological database annotation inconsistencies, identifying four distinct types of inconsistency. We propose a novel and efficient method using state-of-the-art text mining models to automatically distinguish between consistent GO annotation and the different types of inconsistent GO annotation. We evaluate this method using a synthetic dataset generated by directed manipulation of instances in an existing corpus, BC4GO. We provide a detailed error analysis demonstrating that the method achieves high precision on more confident predictions. Conclusions: Two models built using our method for distinct annotation consistency identification tasks achieved high precision and were robust to updates in the GO vocabulary. Our approach demonstrates clear value for human-in-the-loop curation scenarios.


1977, Vol 81, pp. 53-56
Author(s):
K.S. Dueholm
A.K. Pedersen
F. Ulff-Møller

As part of a programme of experimental photogrammetric mapping, we present results of geological mapping carried out with advanced photogrammetric instruments, in order to test the applicability of such sophisticated methods to geology. Two experiments are described, performed to test the methods on regional and detailed mapping respectively. In both cases vertical aerial photographs at an approximate scale of 1:50000 were used, together with different types of photogrammetric instruments constructed for, and normally employed in, topographical mapping. For photogrammetric principles and instruments see, for instance, Schwidefsky & Ackermann (1976).


2021, Vol 13 (18), pp. 10401
Author(s):
Adeline Montlaur
Luis Delgado
César Trapote-Barreira

Recently, there has been much interest in measuring the environmental impact of short-to-medium-haul flights. CO2 emissions are usually measured to assess the environmental footprint, and CO2 calculators based on different types of approximations are available. We propose analytical models that calculate gate-to-gate CO2 emissions and travel time from the flight distance and the number of available seats. The accuracy of the numerical results was in line with other CO2 calculators, and when an analytical fitting was applied, the interpolation error was low. Compared with other calculators, the models have the advantage of being sensitive to the number of available seats, a parameter generally not considered explicitly. Their applicability is shown in two practical examples in which emissions and travel time per kilometre were calculated for several European routes in a simple and efficient manner. The model enables the identification of routes where rail would be a viable alternative from both the emissions and the total travel time perspectives.
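To illustrate the general shape of such a model (with invented coefficients, not the authors' fitted values), a gate-to-gate emission estimate sensitive to both distance and seat count could look like:

```python
# Hypothetical coefficients for illustration only -- not the paper's fit.
FIXED_KG = 1500.0    # taxi, take-off and climb overhead per flight (kg CO2)
PER_KM_KG = 9.0      # cruise burn per kilometre of the flight (kg CO2)

def co2_per_seat_km(distance_km, seats):
    """Gate-to-gate CO2 split over seats and distance (kg per seat-km)."""
    total_kg = FIXED_KG + PER_KM_KG * distance_km
    return total_kg / (seats * distance_km)
```

Because of the fixed overhead, per-seat-kilometre emissions fall as distance or seat count grows, which is why short routes with rail alternatives stand out in such comparisons.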


1989, Vol 110, pp. 72-76
Author(s):
Robyn M. Shobbrook

Astronomers and librarians have been experiencing difficulty keeping up with the amount of published literature: the astronomer tries to keep abreast of a particular field, and the librarian with the management, control and retrieval of scientific information. The 1980s have seen a revolution in methods of information storage and retrieval, in particular the advent of the online database. The speed of processing information for storage has been embraced by all, yet little thought has been given to how we shall achieve effective, high-precision recall of documents. Many librarians firmly believe that the best road to success in information retrieval from automated systems is vocabulary control. Contrary to belief, free-text or natural-language searching alone does not lead to high-precision recall; consistency and integrity of the online catalogue can only be achieved with the addition of a controlled vocabulary. With today's technology it is possible to maintain the best of both worlds: the controlled vocabulary is used to index the major concepts of a given document over and above the natural language used within the document.


2009, Vol 416, pp. 327-331
Author(s):
Zi Qiang Zhang
Shang Lin Wen
Shao Bo Chen
Zhi Dan Zheng

Grinding a cylindrical cam's groove means grinding the groove's flanks. For cylindrical cams with translating cylindrical followers, an auto-programming system for NC machining of the cam groove has been developed. It can machine cylindrical cams with different types of curved grooves, using cutters or wheels of different diameters. To validate the geometric precision of the auto-programming system, an emulation was carried out using VERICUT, an NC machining emulator. The emulation results show an excessive geometry error at the bottom of the groove when the pressure angle is large, the precision requirement is high, and the diameter difference between the wheel and the roller is large. An important source of this excess is the computational error introduced by the auto-programming system when calculating the coordinates of the wheel center.
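The coordinate step the abstract refers to can be sketched as follows (a toy 2-D cross-section version, not the authors' auto-programming system): the wheel center lies on the normal of the groove centerline at a distance of the wheel radius, and rounding in exactly this kind of computation is one source of the reported geometry error at the groove bottom.

```python
import math

def wheel_center(point, tangent, radius):
    # Offset a centerline point along the unit normal by the wheel radius.
    tx, ty = tangent
    n = math.hypot(tx, ty)
    nx, ny = -ty / n, tx / n          # unit normal to the tangent
    return (point[0] + radius * nx, point[1] + radius * ny)
```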


2017, Vol 28 (6), pp. 065007
Author(s):
Zhen Cao
Guohang Hu
Hongbo He
Yuanan Zhao
Yueliang Wang
...

2021, Vol 81 (01), pp. 111-118
Author(s):
Mohd Harun
Cini Varghese
Seema Jaggi
Eldho Varghes

Triallel crosses can be readily exploited as a breeding tool for developing commercial hybrids with traits of genetic and commercial importance, by acquiring information on specific combining ability effects along with general combining ability effects, provided the size of the experiment is reduced to an economical extent. In this paper, methods of constructing designs involving partial triallel crosses in smaller blocks, using different types of lattice designs, are introduced. The designs have a low degree of fractionation, which makes them useful when resources are scarce. The canonical efficiency factor of these designs, relative to an orthogonal design with the same number of lines and assuming constant error variance in both situations, is high, indicating that adopting these designs could bring about improvement, as the recommendations from the experiment will be associated with high precision.
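For reference, the canonical efficiency factor of a block design can be computed from the non-zero eigenvalues of its information (C-)matrix. The sketch below does this for a classical BIBD (a stand-in example, not one of the paper's partial-triallel designs), where the closed-form value v(k-1)/(k(v-1)) = 7/9 is known.

```python
import numpy as np

def canonical_efficiency(N, k):
    # N: treatments-by-blocks incidence matrix, equal block size k.
    r = N.sum(axis=1)                     # replications per treatment
    C = np.diag(r) - (N @ N.T) / k        # information (C-)matrix
    eig = np.linalg.eigvalsh(C)
    e = eig[eig > 1e-8] / r[0]            # canonical efficiencies (equireplicate case)
    return len(e) / np.sum(1.0 / e)       # harmonic mean

# Fano-plane BIBD: v = 7 treatments in 7 blocks of size 3, lambda = 1.
blocks = [(0, 1, 2), (0, 3, 4), (0, 5, 6), (1, 3, 5),
          (1, 4, 6), (2, 3, 6), (2, 4, 5)]
N = np.zeros((7, 7))
for j, blk in enumerate(blocks):
    for t in blk:
        N[t, j] = 1.0
print(canonical_efficiency(N, 3))         # 7/9, about 0.7778
```

A value close to 1 means the blocked design loses little information relative to the orthogonal benchmark, which is the sense in which the paper's designs are reported as highly efficient.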


Author(s):  
Ranjith P V

Service quality improves customer relationships as far as organizations are concerned. It covers areas such as reliability, empathy and assurance, which, if provided in an efficient manner, improve customer profitability and the organisation's return on investment. Good service generally helps retain customers, thereby reducing the cost of retention, and makes customers happy to deal with the organisation. This is required for all service organisations, and it is true for banks as well; service is vital for institutions dealing with money, so the study focuses on the importance of service to the customer. The service quality of different types of banks is studied using tests like ANOVA and K-means cluster analysis, to find the effect of different variables on customer satisfaction and to identify customer segments based on their responses.
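To make the testing step concrete, here is a from-scratch one-way ANOVA F statistic on invented satisfaction scores for three bank types (illustrative data only, not the study's):

```python
def anova_f(groups):
    # One-way ANOVA: between-group mean square over within-group mean square.
    k = len(groups)
    n = sum(len(g) for g in groups)
    grand = sum(sum(g) for g in groups) / n
    ss_between = sum(len(g) * (sum(g) / len(g) - grand) ** 2 for g in groups)
    ss_within = sum(sum((x - sum(g) / len(g)) ** 2 for x in g) for g in groups)
    return (ss_between / (k - 1)) / (ss_within / (n - k))

# Hypothetical satisfaction scores (1-5 scale) for three bank types.
public = [4.1, 3.8, 3.9, 4.0]
private = [4.5, 4.6, 4.4, 4.7]
foreign = [3.2, 3.4, 3.1, 3.3]
print(round(anova_f([public, private, foreign]), 2))  # -> 101.6
```

A large F relative to the critical value would indicate that mean satisfaction differs between bank types, which is the kind of conclusion the study draws before segmenting customers with K-means.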

