vector weight
Recently Published Documents


TOTAL DOCUMENTS: 8 (FIVE YEARS: 2)

H-INDEX: 2 (FIVE YEARS: 1)

2021 ◽  
Vol 2021 ◽  
pp. 1-7
Author(s):  
Zhe Xu

3D lip synchronization is one of the prominent and difficult problems in computer graphics, and carrying it out effectively and accurately is an important research direction in multimedia. On this basis, this paper introduces a comprehensive weighted algorithm that organizes the rules and timing of lip pronunciation in animation multimedia, performs vector weight analysis on the texts in the animation multimedia, and synthesizes a matching evaluation model for 3D lip synchronization. At the same time, the goal of simultaneous evaluation is achieved by synthesizing the transitional mouth-shape sequence between consecutive mouth shapes. Simulation results indicate that the comprehensive weighted algorithm is effective and can support the evaluation and analysis of 3D lip synchronization in animation multimedia.
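The abstract describes the weighting and transition synthesis only at a high level. As a minimal, hypothetical sketch (assuming viseme keyframes stored as vertex vectors and per-phoneme weights obtained from the text analysis; all names below are illustrative, not the paper's), the blending and transitional mouth-shape sequence could look like this:

```python
import numpy as np

def blend_mouth_shapes(keyframes, weights):
    """Blend viseme keyframes into a single mouth shape.

    keyframes: (n, d) array, one row per viseme (e.g. flattened lip vertices).
    weights:   (n,) non-negative weights, e.g. from the text-based
               vector-weight analysis; normalised before blending.
    """
    k = np.asarray(keyframes, dtype=float)
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()
    return w @ k

def transition_sequence(shape_a, shape_b, frames):
    """One simple way to synthesize a transitional mouth-shape sequence
    between two consecutive mouth shapes: linear interpolation per frame."""
    return [(1.0 - t) * shape_a + t * shape_b
            for t in np.linspace(0.0, 1.0, frames)]
```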


Entropy ◽  
2020 ◽  
Vol 22 (5) ◽  
pp. 576 ◽  
Author(s):  
Hassan Yousif Ahmed ◽  
Medien Zeghid ◽  
Waqas A. Imtiaz ◽  
Teena Sharma ◽  
Abdellah Chehri ◽  
...  

In this paper, we present a new algorithm to generate a two-dimensional (2D) permutation vector (PV) code for an incoherent optical code division multiple access (OCDMA) system, in order to suppress multiple access interference (MAI) and reduce system complexity. The proposed code design approach is based on the wavelength-hopping time-spreading (WHTS) technique for code generation. All possible combinations of PV code sets were obtained by taking all permutations of the vectors, with each vector repeated weight (W) times. Further, the 2D-PV code set was constructed by combining two code sequences of the 1D-PV code. The transmitter-receiver architecture of the 2D-PV-code-based WHTS OCDMA system is presented. Results indicate that the 2D-PV code provides increased cardinality by eliminating phase-induced intensity noise (PIIN) effects, and that data from multiple users can be transmitted with minimum likelihood of interference. Simulation results validate the proposed system at an acceptable bit error rate (BER) of 10⁻⁹.
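The construction is described only in words. The sketch below shows one plausible reading of it: 1D PV codewords enumerated as distinct permutations of a weight-W pulse pattern, and a 2D wavelength-by-time codeword formed by combining two 1D codewords as an outer product. The function names and the outer-product combination are assumptions for illustration, not the paper's exact design.

```python
import numpy as np
from itertools import permutations

def one_d_pv_codes(length, weight):
    """All distinct 1-D codewords of the given length containing exactly
    `weight` ones, enumerated as permutations of a weight-W pulse pattern."""
    base = [1] * weight + [0] * (length - weight)
    return sorted(set(permutations(base)))

def two_d_pv_code(wavelength_code, time_code):
    """Combine two 1-D codewords into a 2-D (wavelength x time) matrix."""
    return np.outer(wavelength_code, time_code)

# Example: weight-2 codewords of length 4, combined into one 2-D codeword.
codes = one_d_pv_codes(length=4, weight=2)
matrix = two_d_pv_code(codes[0], codes[1])
```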


2011 ◽  
Vol 63-64 ◽  
pp. 846-849
Author(s):  
Jian Ni ◽  
Yu Duo Li

To achieve human face identification, this paper adopts geometric feature extraction together with image enlargement by interpolation, applied after face detection is complete. First, the input digital image is normalized to reduce its complexity, and the facial features are then extracted. From the extracted feature information, a feature vector is constructed and a different weight is assigned to each feature, the weights being obtained empirically from a large amount of training. Finally, to obtain the similarity between pictures, bilinear interpolation is adopted on the basis of nearest-neighbor interpolation, and the face identification result is determined from the similarity score. The development and implementation of a practical program demonstrates the feasibility of this method.
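Neither the weighting nor the similarity measure is spelled out in the abstract. The following is a minimal sketch assuming geometric features compared by weighted absolute difference and a standard bilinear enlargement; all names and the exact similarity formula are illustrative assumptions.

```python
import numpy as np

def weighted_similarity(features_a, features_b, weights):
    """Similarity of two geometric face-feature vectors (e.g. normalised
    eye spacing, nose width, ...), with empirically trained weights.
    Smaller weighted distance maps to a similarity score in (0, 1]."""
    a, b, w = (np.asarray(x, dtype=float)
               for x in (features_a, features_b, weights))
    return 1.0 / (1.0 + float(np.dot(w, np.abs(a - b))))

def bilinear_resize(img, new_h, new_w):
    """Enlarge a 2-D grayscale image with bilinear interpolation."""
    h, w = img.shape
    ys = np.linspace(0.0, h - 1, new_h)
    xs = np.linspace(0.0, w - 1, new_w)
    y0 = np.floor(ys).astype(int)
    y1 = np.minimum(y0 + 1, h - 1)
    x0 = np.floor(xs).astype(int)
    x1 = np.minimum(x0 + 1, w - 1)
    wy = (ys - y0)[:, None]
    wx = (xs - x0)[None, :]
    top = (1 - wx) * img[y0[:, None], x0] + wx * img[y0[:, None], x1]
    bot = (1 - wx) * img[y1[:, None], x0] + wx * img[y1[:, None], x1]
    return (1 - wy) * top + wy * bot
```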


Author(s):  
Walter E. Perry

In the world of private (not publicly traded) investment fund dealing, a very substantial portion of the data that should result from daily transactions is, and historically has always been, unavailable, misstated, or flat-out in error. Transactions are a tortuous path of one-at-a-time interactions between each of the entities and one other: the "high net worth" individuals who put up money for investment; the named investors who aggregate that money; the nominee banks where those monies are lodged (and aggregated); the marketers who solicit the named investors on behalf of particular investment Funds; the managers who allocate assets of those Funds to particular investments; the Prime Brokers who execute the transactions to realize those managers' asset allocations; the custodians who hold securities in the name of those Funds; and the administrators hired to oversee and account for the business done by Funds and managers. In each interaction, each of the two parties is, by definition of his role, transacting business at a different granularity than his counterparty, and in most cases with a materially different understanding of the substance of the transaction, as that substance might be formally defined with the basic semantic operators IsA and HasA. It is usual that managers (and, further up the chain of documentation and accounting, administrators) do not know accurately whose money has gone into, or come out of, a given transaction, nor, from the other point of view, in which particular transaction an investor's stake in a Fund was secured, and at what basis. Historically these problems have been considered intractable. However, investor skepticism in the wake of recent losses and scandals, and government insistence on regulation, will not allow these problems to remain unsolved, and the particulars of regulation are grounded in knowledge and transparency about whose money, and through what chain of provenance, is deployed in what exact amounts, in which transactions, for which investment assets, at what basis.
Databases and document stores depend on static, or at least general, definitions of structure or linkage: the database schema that defines a table as a particular structure of attribute columns, or the document type definition or other schematic representation of a document by the structure of its sub-entities. In either case, the structural definition is not itself the instance data stored in each record, nor, more exactly, an instance aggregation of instance linkages into a unique record. Yet by redefining the substance of the "data" record as just such an instance aggregation of linkages, we can ensure an instance record that is transactable across the gross differences of granularity separating the parties to a transaction, and across widely different understandings of the IsA and HasA semantics of the instance transaction. As a matter of implementation, the design of Google BigTable, and the API for Google App Engine applications built atop BigTable, are far more hospitable to this "linksbase" design than either a relational database or a document store model. An enthusiastic application of Ockham's Razor to linksbase implementation practices leaves me with six fundamental entity types for recording each unique linksbase instance: Identity, Provenance, Structure, Aptitude, Revision, and Events.
Any recorded linksbase instance may be understood as an extended arc, on the path of which may lie any number of instances of any of these entity types, each separately influencing the aggregate vector weight of the resultant arc. In other words, and in specific contrast to the database and document store models, the cardinality of any particular attribute on an instance record is unlimited, while the permissible, even assumed, uniqueness in the structure of any instance "data" record means that the presumed cardinality of any given record type is 1. As it turns out, this "backwards" thinking about record types and the attributes upon them is particularly well facilitated by the design of Google BigTable, and the implementation of an IPSA RE linksbase seems well suited to Google App Engine.
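The abstract names the six entity types but not how they would be stored. Here is a minimal, hypothetical sketch of a record as an aggregation of typed links, each contributing to the arc's aggregate vector weight; field names beyond the six type names are assumptions.

```python
from dataclasses import dataclass, field
from typing import List

# The six fundamental entity types named in the abstract.
LINK_KINDS = ("Identity", "Provenance", "Structure",
              "Aptitude", "Revision", "Event")

@dataclass
class Link:
    kind: str            # one of the six fundamental entity types above
    target: str          # identifier of the linked instance
    weight: float = 1.0  # contribution to the arc's aggregate vector weight

@dataclass
class LinksbaseInstance:
    """A record is an aggregation of instance linkages rather than a fixed
    row: any number of links of any kind may hang off one instance."""
    instance_id: str
    links: List[Link] = field(default_factory=list)

    def aggregate_weight(self) -> float:
        return sum(link.weight for link in self.links)
```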

