Performance Assessment of Iterative, Optimization and Non-Optimization Methods for Page Rank Aggregation

The problem of combining the ranked preferences of many experts is an old and surprisingly deep one that has gained renewed importance in many machine learning, data mining, and information retrieval applications. Effective rank aggregation becomes difficult in real-world situations where the rankings are noisy, incomplete, or even disjoint. We address these difficulties by extending several standard methods of rank aggregation to consider similarity between items in the various ranked lists, in addition to their rankings. The intuition is that similar items should receive similar rankings, given the right measure of similarity for the domain of interest.
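The abstract does not give the authors' exact algorithm, but the idea can be illustrated with a minimal sketch: compute a positional (Borda-style) score for each item from all lists, then blend each item's score with a similarity-weighted average of the others so that similar items end up with similar scores. The smoothing scheme and `alpha` parameter below are illustrative assumptions, not the paper's method.

```python
# Minimal sketch of similarity-aware rank aggregation (illustrative only):
# Borda scores per list, then smoothed so similar items get similar scores.

def borda_scores(ranked_lists, items):
    # Higher score = better rank; items absent from a list get nothing from it.
    scores = {item: 0.0 for item in items}
    for lst in ranked_lists:
        n = len(lst)
        for pos, item in enumerate(lst):
            scores[item] += n - pos
    return scores

def smooth(scores, similarity, alpha=0.5):
    # Blend each item's score with the similarity-weighted mean of the rest.
    smoothed = {}
    for a in scores:
        num = sum(similarity(a, b) * scores[b] for b in scores if b != a)
        den = sum(similarity(a, b) for b in scores if b != a)
        neighbor = num / den if den else scores[a]
        smoothed[a] = (1 - alpha) * scores[a] + alpha * neighbor
    return smoothed

def aggregate(ranked_lists, similarity, alpha=0.5):
    items = {x for lst in ranked_lists for x in lst}
    smoothed = smooth(borda_scores(ranked_lists, items), similarity, alpha)
    return sorted(smoothed, key=smoothed.get, reverse=True)
```

With a constant similarity function this reduces to shrinking all scores toward the mean; a domain-specific similarity (e.g. string or feature similarity) is what lets related items reinforce each other.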

Author(s):  
J.A. Eades ◽  
A. van Dun

The measurement of magnification in the electron microscope is always troublesome, especially when a goniometer stage is in use, since there can be wide variations from calibrated values. One elegant method (L. M. Brown, private communication) of avoiding the difficulties of standard methods would be to fit a device which displaces the specimen a small but known distance and to record the displacement by a double exposure. Such a device would obviate the need for changing the specimen and guarantee that the magnification was measured under precisely the conditions used. Such a small displacement could be produced by any suitable transducer mounted in one of the specimen translation mechanisms. In the present case a piezoelectric crystal was used. Modern synthetic piezoelectric ceramics readily give reproducible displacements in the right range for quite modest voltages (for example: Joyce and Wilson, 1969).
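The arithmetic behind the scheme is simple: the piezo shifts the specimen a known distance, the shift measured on the double-exposed micrograph divided by that distance is the magnification. The numbers and piezo constant below are assumed for illustration, not taken from the article.

```python
# Illustrative arithmetic for the double-exposure calibration scheme above.
# Assumed values, for illustration only.

def magnification(image_shift_mm, piezo_constant_nm_per_v, voltage_v):
    # True specimen displacement produced by the piezo, converted nm -> mm.
    specimen_shift_mm = piezo_constant_nm_per_v * voltage_v * 1e-6
    return image_shift_mm / specimen_shift_mm

# Example: a 500 nm/V ceramic driven at 2 V displaces the specimen 1000 nm;
# a 20 mm shift between the two exposures then implies 20,000x magnification.
```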


Author(s):  
Shashwat Gupta ◽  
Andrés D. Román-Ospino ◽  
Yukteshwar Baranwal ◽  
Douglas Hausner ◽  
Rohit Ramachandran ◽  
...  

Author(s):  
Cristian MARTONOS ◽  
Cristian DEZDROBITU ◽  
Florin STAN ◽  
Aurel DAMIAN ◽  
Alexandru GUDEA

For the present study, five female chinchilla carcasses were used. The animals had been slaughtered for commercial purposes (fur). The anatomical dissection started with the identification of the abdominal aorta (Aorta abdominalis). The next step was the intra-arterial injection of a colouring substance. The carcasses were fixed in formaldehyde solution and subsequently the renal arteries were dissected. The first renal artery was the right renal artery (Arteria renalis dextra) and, 0.5 cm caudally, the left renal artery (Arteria renalis sinistra) arose. The origins of these arteries were disposed on the lateral part of the abdominal aorta. The origin, course and distribution of the renal arteries in the studied species have a high degree of similarity with the literature data described for leporids.


2018 ◽  
Vol 7 (3.3) ◽  
pp. 119
Author(s):  
B Lokesh ◽  
Ravoori Charishma ◽  
Natuva Hiranmai

Farmers face a multitude of problems nowadays, such as lower crop production, tumultuous weather patterns, and crop infections. All of these issues can be solved if they have access to the right information. The current methods of information retrieval, such as search engine lookup and talking to an Agriculture Officer, have multiple defects. A more suitable solution, which we are proposing, is an Android application, available at all times, that can give succinct answers to any question a farmer may pose. The application will include an image recognition component that will be able to recognize a variety of crop diseases in case the farmer does not know what he is dealing with and is unable to describe it. Image recognition is the ability of a computer to recognize and distinguish between different objects, and is actually a much harder problem to solve than it seems. We are using TensorFlow, a tool that uses convolutional neural networks, to implement it.
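The abstract gives no implementation details, but the core operation of the convolutional networks it mentions can be shown in a few lines. The sketch below applies a single hand-written edge filter by 2D convolution; a trained TensorFlow model performs this same operation at scale with learned filters. All names and values here are illustrative.

```python
# Pure-Python sketch of the 2D convolution at the heart of convolutional
# neural networks (illustrative; the paper's actual model uses TensorFlow).

def conv2d(image, kernel):
    # 'Valid' convolution: slide the kernel over the image with no padding.
    kh, kw = len(kernel), len(kernel[0])
    ih, iw = len(image), len(image[0])
    out = []
    for i in range(ih - kh + 1):
        row = []
        for j in range(iw - kw + 1):
            acc = 0.0
            for di in range(kh):
                for dj in range(kw):
                    acc += image[i + di][j + dj] * kernel[di][dj]
            row.append(acc)
        out.append(row)
    return out

# A vertical-edge filter responds strongly where intensity changes left to
# right -- the kind of low-level feature a CNN's first layer learns to
# detect in, say, images of diseased leaves.
edge_kernel = [[1, 0, -1],
               [1, 0, -1],
               [1, 0, -1]]
```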


1981 ◽  
Vol 21 (05) ◽  
pp. 551-557 ◽  
Author(s):  
Ali H. Dogru ◽  
John H. Seinfeld

Abstract

The efficiency of automatic history matching algorithms depends on two factors: the computation time needed per iteration and the number of iterations needed for convergence. In most history matching algorithms, the most time-consuming aspect is the calculation of the sensitivity coefficients: the derivatives of the reservoir variables (pressure and saturation) with respect to the reservoir properties (permeabilities and porosity). This paper presents an analysis of two methods, the direct and the variational, for calculating sensitivity coefficients, with particular emphasis on the computational requirements of the methods. If the simulator consists of a set of N ordinary differential equations for the grid-block variables (e.g., pressures) and there are M parameters for which the sensitivity coefficients are desired, the ratio of the computational efforts of the direct to the variational method is

R = N(M + 1) / [N(N + 1) + M]

Thus, for M less than N the direct method is more economical, whereas as M increases, a point is reached at which the variational method is preferred.

Introduction

There has been considerable interest in the development of automatic history matching algorithms. Although automatic history matching can offer significant advantages over trial-and-error approaches, its adoption has been somewhat slower than might have been anticipated when the first significant papers on the subject appeared. One obvious reason for the persistence of the trial-and-error approach is that it does not require additional code development beyond that already involved in the basic simulator, whereas automatic routines require the appending of an iterative optimization routine to the basic simulator. Nevertheless, the investment of additional time in code development for the history matching algorithm may be returned many fold during the actual history matching exercise.

In spite of the inherent advantages of automatic history matching, however, the automatic adjustment of the number of reservoir parameters typically unknown even in a moderately sized simulation can require excessive amounts of computation time. Therefore, it is of utmost importance that an automatic history matching algorithm be as efficient as possible. Setting aside for the moment the issue of code complexity, the efficiency of an algorithm depends on two factors: the computation time needed per iteration and the number of iterations needed for convergence (where convergence is usually defined in terms of reaching a certain level of incremental change in either the parameters themselves or the objective function). For most iterative optimization methods, the speed of convergence increases with the complexity of the algorithm.
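The cost ratio stated in the abstract, R = N(M + 1) / [N(N + 1) + M], can be made concrete with a few lines of code: R < 1 means the direct method is cheaper, and the crossover value of M at which the variational method wins can be found directly. The function names are ours, for illustration.

```python
# The cost ratio from the abstract: direct-method cost over variational-method
# cost, for N state equations and M parameters. R < 1 favors the direct method.

def cost_ratio(n_states, n_params):
    return n_states * (n_params + 1) / (n_states * (n_states + 1) + n_params)

def crossover_params(n_states, max_params=10**6):
    # Smallest M at which the variational method becomes cheaper (R > 1).
    for m in range(1, max_params + 1):
        if cost_ratio(n_states, m) > 1:
            return m
    return None
```

For a 100-equation simulator, for example, the direct method stays cheaper until just past M = 100 unknown parameters, matching the abstract's "for M less than N" rule of thumb.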


Author(s):  
JANE HUFFMAN HAYES ◽  
ALEX DEKHTYAR

The building of traceability matrices by those other than the original developers is an arduous, error-prone, prolonged, and labor-intensive task. Thus, after-the-fact requirements tracing is a process where the right kind of automation can definitely assist an analyst. Recently, a number of researchers have studied the application of various methods, often based on information retrieval, to after-the-fact tracing. The studies are diverse enough to warrant a means for comparing them easily as well as for determining areas that require further investigation. To that end, we present here an experimental framework for evaluating requirements tracing and traceability studies. Common methods, metrics, and measures are described. Recent experimental requirements tracing journal and conference papers are catalogued using the framework. We compare these studies and identify areas for future research. Finally, we provide suggestions on how the field of tracing and traceability research may move to a more mature level.
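The measures such tracing studies most commonly report are recall and precision of a candidate traceability matrix against an analyst-verified answer set; the sketch below computes both, treating links as (requirement, artifact) pairs. The example data is invented for illustration and is not from the paper.

```python
# Recall and precision of a candidate trace matrix against an answer set.
# Links are modeled as (requirement, artifact) pairs; data is illustrative.

def recall_precision(candidate_links, true_links):
    hits = len(candidate_links & true_links)
    recall = hits / len(true_links) if true_links else 1.0
    precision = hits / len(candidate_links) if candidate_links else 1.0
    return recall, precision

true_links = {("R1", "code_a"), ("R1", "code_b"), ("R2", "code_c")}
candidate  = {("R1", "code_a"), ("R2", "code_c"), ("R2", "code_d")}
# Here 2 of 3 true links are recovered and 2 of 3 candidate links are correct.
```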


Mathematics ◽  
2021 ◽  
Vol 9 (24) ◽  
pp. 3299
Author(s):  
Dostonjon Barotov ◽  
Aleksey Osipov ◽  
Sergey Korchagin ◽  
Ekaterina Pleshakova ◽  
Dilshod Muzafarov ◽  
...  

In recent years, various methods and directions for solving systems of Boolean algebraic equations have been invented, and they are now being very actively investigated. One of these directions is the method of transforming a system of Boolean algebraic equations, given over a ring of Boolean polynomials, into a system of equations over the field of real numbers, to which various optimization methods can then be applied. In this paper, we propose a new transformation method for Solving Systems of Boolean Algebraic Equations (SBAE). The essence of the proposed method is that, first, an SBAE written with logical operations is transformed (approximated) into a system of harmonic-polynomial equations in the unit n-dimensional cube K^n with the usual operations of addition and multiplication of numbers. Second, the transformed (approximated) system in K^n is solved using an optimization method. We substantiate the correctness and validity of the proposed method with rigorous arguments. Based on this work, plans for further research to improve the proposed method are outlined.
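The general direction can be sketched with the standard multilinear arithmetization of Boolean operations (NOT x → 1 − x, x AND y → x·y, x XOR y → x + y − 2xy), which turns a Boolean system into real polynomial equations whose squared residuals can be minimized over the unit cube. Note this is an illustrative stand-in: the paper's own transformation, to harmonic-polynomial systems, is different, and the crude grid search below stands in for a real optimizer.

```python
# Illustrative sketch: arithmetize a Boolean system as real polynomials and
# minimize the squared residuals over the unit square K^2 = [0,1]^2.
# (The paper's harmonic-polynomial transformation differs; this uses the
# standard multilinear encoding to demonstrate the direction.)

def residual(x1, x2):
    # System: (x1 XOR x2 = 1) and (x1 AND x2 = 0), written over the reals.
    xor_eq = x1 + x2 - 2 * x1 * x2 - 1
    and_eq = x1 * x2
    return xor_eq ** 2 + and_eq ** 2

def grid_minimize(steps=100):
    # Crude stand-in for an optimization method: exhaustive grid search.
    best = None
    for i in range(steps + 1):
        for j in range(steps + 1):
            x1, x2 = i / steps, j / steps
            val = residual(x1, x2)
            if best is None or val < best[0]:
                best = (val, x1, x2)
    return best
```

The minimizers of the residual are exactly the Boolean solutions (0, 1) and (1, 0), which is what makes the real-valued reformulation usable.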


Author(s):  
William G. Banfield ◽  
Cecil W. Lee

Lamella-particle complexes are a distinct group of morphologic structures. They have been reported in a growing number of papers, often without detailed description. This could be accounted for by the paucity of their occurrence in the tissue or cells examined, the limitations inherent in preparing the tissue for electron microscopy, or because the photographs were not at a high enough magnification. We will take a closer look at these complexes, adding new information on their morphology, make correlations overlooked or not possible when the initial observations were published, and dispel some erroneous impressions as to the degree of similarity between complexes of different origin. Standard methods of fixation, dehydration and Epon embedding were used. Staining was with lead and uranyl acetate. The lamella-particle complex is well illustrated in a lymphoma cell of the northern pike (Fig. 1). In cross section its wall is made up of a striated lamella associated with ribosome-like particles.

