Keyword Search over Distributed Graphs with Compressed Signature

Keyword search over graphs has attracted much research interest, because the graph model can describe most structured and semi-structured data well, and keyword search can reveal meaningful information to users without requiring knowledge of the schema or of a structured query language. In practice, data graphs can be very large, for example Web-scale graphs containing billions of vertices. State-of-the-art approaches use centralized algorithms for keyword search over graphs and therefore cannot handle very large graphs, because of the limited computing power and storage space of a single server. To address this problem, we study keyword search over Web-scale graphs in a distributed setting. We first propose a straightforward algorithm that answers keyword queries correctly. However, this naive algorithm uses a flooding-style search strategy that incurs large time and network overhead. To remedy this weakness, we then propose a signature-based search algorithm. Specifically, we construct vertex signatures that encode the shortest-path distance from each vertex to any given keyword in the graph. As a result, we can find query answers by exploring far fewer paths, so that both the time and the communication costs are reduced. In addition, we reorganize the graph data across the cluster after partitioning, so that the signature-based technique becomes even more effective. Finally, our experimental results demonstrate the feasibility of the proposed approach in performing keyword search over Web-scale graph data.
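The abstract does not give the algorithm's details, so the following Python sketch only illustrates the general idea under my own assumptions: per-vertex signatures store the hop distance to the nearest occurrence of each keyword (computed by multi-source BFS), and a query is answered by ranking vertices that can reach all query keywords within a radius. The graph, the `build_signatures`/`keyword_query` helpers, and the distance-sum ranking are illustrative, not the paper's compressed-signature scheme or its distributed execution.

```python
# A minimal, single-machine sketch of signature-guided keyword search:
# each vertex stores, per keyword, the shortest-path distance to the
# nearest vertex containing that keyword; a query is answered by picking
# root vertices whose stored distances to all query keywords fall within
# a radius bound, avoiding a blind flooding search.
from collections import deque, defaultdict

def build_signatures(adj, vertex_keywords):
    """adj: {v: [neighbours]}, vertex_keywords: {v: set(keywords)}.
    Returns {v: {keyword: distance to nearest vertex holding keyword}}."""
    sig = defaultdict(dict)
    keywords = set().union(*vertex_keywords.values())
    for kw in keywords:
        # multi-source BFS from every vertex that contains kw
        frontier = deque(v for v, kws in vertex_keywords.items() if kw in kws)
        dist = {v: 0 for v in frontier}
        while frontier:
            u = frontier.popleft()
            for w in adj[u]:
                if w not in dist:
                    dist[w] = dist[u] + 1
                    frontier.append(w)
        for v, d in dist.items():
            sig[v][kw] = d
    return sig

def keyword_query(sig, query_keywords, radius):
    """Return candidate root vertices that can reach every query keyword
    within `radius` hops, ranked by total distance."""
    roots = [(sum(sig[v][kw] for kw in query_keywords), v)
             for v in sig
             if all(sig[v].get(kw, radius + 1) <= radius for kw in query_keywords)]
    return [v for _, v in sorted(roots)]

if __name__ == "__main__":
    adj = {1: [2], 2: [1, 3], 3: [2, 4], 4: [3]}
    vertex_keywords = {1: {"graph"}, 2: set(), 3: {"search"}, 4: {"signature"}}
    sig = build_signatures(adj, vertex_keywords)
    print(keyword_query(sig, {"graph", "search"}, radius=2))  # -> [1, 2, 3]
```

In a distributed setting the same pruning idea applies per partition: a worker only forwards a query along paths whose signatures say a missing keyword is still reachable within the remaining radius.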

2021 ◽  
Vol 42 (1) ◽  
pp. 38-46
Author(s):  
Umme Habiba ◽  
Shamima Yesmin ◽  
Rozifa Akhter

The study’s main purpose was to investigate faculty members’ information-searching behaviors while conducting research. The study used an online questionnaire and printed questionnaires for data collection. The data were analysed using descriptive statistics, such as frequencies and percentages, and non-parametric tests, i.e., Mann-Whitney and Kruskal-Wallis. The findings showed that faculty members were heavily dependent on search engines to access information, and they mainly used academic social media sites such as Google Scholar (n=139) and ResearchGate (n=133). Additionally, to keep up to date with new publications, they primarily relied on journal alerts (n=126). In applying search strategies, they used more than one keyword and sometimes a single keyword. Conversely, they did not apply proximity operators, discovery and federated tools, or Boolean operators in their search techniques. Furthermore, to refine their searches, they used several keywords and utilised search engines, databases, and advanced search techniques. Moreover, the Mann-Whitney test found no significant differences by gender in the types of e-resources used, while the Kruskal-Wallis tests found significant differences across faculty demographic characteristics in the use of indexed databases, search engines, academic social media sites (e.g., ResearchGate and the Zotero Network), current awareness services (i.e., journal alerts, web alerts, and discussion lists), and search techniques (i.e., Boolean operators and truncation).
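For readers unfamiliar with the two non-parametric tests named above, the short Python sketch below shows how they are typically run with SciPy on Likert-style survey responses. The groupings and numbers are invented for illustration and are not the study's data.

```python
# Illustrative only: applying the Mann-Whitney U test (two groups) and the
# Kruskal-Wallis H test (three or more groups) to made-up 1-5 survey ratings.
from scipy.stats import mannwhitneyu, kruskal

# hypothetical ratings of e-resource use, split by gender
male = [4, 5, 3, 4, 2, 5, 4]
female = [3, 4, 4, 5, 3, 4]
u_stat, u_p = mannwhitneyu(male, female, alternative="two-sided")
print(f"Mann-Whitney U={u_stat:.1f}, p={u_p:.3f}")  # no difference if p >= 0.05

# hypothetical ratings grouped by three faculty demographic groups
arts, science, business = [2, 3, 3, 4], [4, 5, 4, 5, 4], [3, 3, 2, 4]
h_stat, h_p = kruskal(arts, science, business)
print(f"Kruskal-Wallis H={h_stat:.2f}, p={h_p:.3f}")
```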


Author(s):  
Amira Sallam ◽  
Ahmed Moustafa ◽  
Ibrahim El-Henawy

2004 ◽  
Vol 13 (01) ◽  
pp. 27-44 ◽  
Author(s):  
ARASH RAKHSHAN ◽  
LAWRENCE B. HOLDER ◽  
DIANE J. COOK

We present a new approach to web search engines. The web creates new challenges for information retrieval. The vast improvement in information access is not the only advantage of keyword search: much potential also exists for analyzing interests and relationships within the structure of the web. The creation of a hyperlink by the author of a web page explicitly represents a relationship between the source and destination pages, and these links together form the hyperlink structure of the web. Our web search engine searches not only for keywords in web pages but also for the hyperlink structure between them. Comparing the results of structural web search with keyword-based search indicates an improved ability to access the desired information. We also discuss steps toward mining the queries input to the structural web search engine.
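The abstract does not spell out the matching procedure, so the sketch below is only a toy illustration of the idea of querying keywords together with hyperlink structure: pages as vertices, hyperlinks as directed edges, and a query that asks for a page containing one keyword that links to a page containing another. The `structural_search` helper and the sample pages are assumptions made for illustration, not the authors' system.

```python
# A toy structural query: find hyperlinks whose source page contains one
# keyword and whose destination page contains another, so the link itself
# (not just the keywords) is part of the query.
def structural_search(pages, links, kw_src, kw_dst):
    """pages: {url: page text}, links: set of (src_url, dst_url) hyperlinks."""
    hits = []
    for src, dst in links:
        if kw_src.lower() in pages.get(src, "").lower() and \
           kw_dst.lower() in pages.get(dst, "").lower():
            hits.append((src, dst))
    return hits

if __name__ == "__main__":
    pages = {
        "a.html": "graph mining research group",
        "b.html": "publications on web search",
    }
    links = {("a.html", "b.html")}
    print(structural_search(pages, links, "research", "web search"))
    # -> [('a.html', 'b.html')]
```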


2021 ◽  
Vol 16 (4) ◽  
pp. 30-35
Author(s):  
Prachi Gurav ◽  
Sanjeev Panandikar

As the world progresses towards automation, manual search over large databases also needs to keep pace. When the database contains health data, even minute aspects need careful scrutiny. Keyword search techniques help extract data from large databases, and they fall into two categories: exact and approximate. When users search through electronic health records (EHR), a short search time is expected. To this end, this work investigates the Metaphone (exact search) and Similar_Text (approximate search) techniques. We applied keyword search to data that include symptoms and names of medicines. Our results indicate that the search time for Similar_Text is better than for Metaphone.
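The implementation is not shown in the abstract, so the standard-library Python sketch below only contrasts the two styles of technique: an exact lookup on a phonetic-style key (a crude stand-in for Metaphone, not the real algorithm) versus an approximate match (difflib standing in for Similar_Text), with a rough timing of each. The key function, the term list, and the timings are illustrative assumptions, not the authors' code or data.

```python
# Exact phonetic-key lookup vs. approximate similarity matching over a tiny
# list of medical terms, with a rough per-query timing.
import difflib
import time

def crude_phonetic_key(word):
    """Very simplified phonetic key: uppercase, drop vowels/H/W/Y after the
    first letter, collapse repeats. NOT real Metaphone."""
    word = word.upper()
    key = word[0] + "".join(c for c in word[1:] if c not in "AEIOUYHW")
    collapsed = [key[0]]
    for c in key[1:]:
        if c != collapsed[-1]:
            collapsed.append(c)
    return "".join(collapsed)

terms = ["paracetamol", "ibuprofen", "amoxicillin", "headache", "fever"]
exact_index = {crude_phonetic_key(t): t for t in terms}

def exact_search(query):
    # exact: a single dictionary lookup on the phonetic key
    return exact_index.get(crude_phonetic_key(query))

def approximate_search(query, cutoff=0.7):
    # approximate: best fuzzy match above a similarity cutoff
    best = difflib.get_close_matches(query, terms, n=1, cutoff=cutoff)
    return best[0] if best else None

for search in (exact_search, approximate_search):
    t0 = time.perf_counter()
    result = search("parasetamol")  # misspelled on purpose
    print(search.__name__, result, f"{(time.perf_counter() - t0) * 1e6:.1f} µs")
```

With the misspelled query, the exact lookup misses while the approximate match still returns "paracetamol", which is the usual trade-off between the two families of techniques.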


Author(s):  
J. Frank ◽  
P.-Y. Sizaret ◽  
A. Verschoor ◽  
J. Lamy

The accuracy with which the attachment site of immunolabels bound to macromolecules can be localized in electron microscopic images may be considerably improved by using single particle averaging. The example studied in this work showed that the accuracy may be better than the resolution limit imposed by negative staining (∼2 nm).

The structure used for this demonstration was a half-molecule of Limulus polyphemus (LP) hemocyanin, consisting of 24 subunits grouped into four hexamers. The top view of this structure was previously studied by image averaging and correspondence analysis. It was found to vary according to the flip or flop position of the molecule, and to the stain imbalance between diagonally opposed hexamers (“rocking effect”). These findings have recently been incorporated into a model of the full 8 × 6 molecule.

LP hemocyanin contains eight different polypeptides, and antibodies specific for one of them, LP II, were used. Uranyl acetate was used as stain. A total of 58 molecule images (29 unlabelled, 29 labelled with anti-LP II Fab) showing the top view were digitized in the microdensitometer with a sampling distance of 50 μm, corresponding to 6.25 nm.
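As an aside for readers new to the method, the sketch below shows the bare idea of single-particle averaging in NumPy: translationally align each digitized particle window to a reference by cross-correlation and average the aligned windows, which suppresses the noise present in any single image. The toy data and the purely translational, integer-pixel alignment are my simplifications; the actual study also involved classification by correspondence analysis.

```python
# Align noisy particle windows to a reference via FFT cross-correlation,
# then average them to boost signal-to-noise.
import numpy as np

def align_and_average(images):
    """images: list of equally sized 2D numpy arrays (particle windows)."""
    ref = images[0]
    aligned = []
    for img in images:
        # cross-correlation via FFT; the peak gives the integer shift to apply
        cc = np.fft.ifft2(np.fft.fft2(ref) * np.conj(np.fft.fft2(img))).real
        dy, dx = np.unravel_index(np.argmax(cc), cc.shape)
        aligned.append(np.roll(np.roll(img, dy, axis=0), dx, axis=1))
    return np.mean(aligned, axis=0)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    base = np.zeros((32, 32))
    base[12:20, 12:20] = 1.0                                    # toy "particle"
    noisy = [np.roll(base, rng.integers(-3, 4), axis=0) + rng.normal(0, 0.5, base.shape)
             for _ in range(29)]                                # 29 windows, as in the study
    avg = align_and_average(noisy)
    print(avg[12:20, 12:20].mean(), avg[0:8, 0:8].mean())       # particle clearly above background
```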


Author(s):  
S.J.B. Reed

Characteristic fluorescence

The theory of characteristic fluorescence corrections was first developed by Castaing. The same approach, with an improved expression for the relative primary x-ray intensities of the exciting and excited elements, was used by Reed, who also introduced some simplifications, which may be summarized as follows (with reference to K-K fluorescence, i.e. K radiation of element ‘B’ exciting K radiation of ‘A’):

1. The exciting radiation is assumed to be monochromatic, consisting of the Kα line only (neglecting the Kβ line).
2. Various parameters are lumped together in a single tabulated function J(A), which is assumed to be independent of B.
3. For calculating the absorption of the emerging fluorescent radiation, the depth distribution of the primary radiation B is represented by a simple exponential.

These approximations may no longer be justifiable given the much greater computing power now available. For example, the contribution of the Kβ line can easily be calculated separately.
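Simplification 3 is convenient because an exponential depth distribution makes the depth-integrated absorption reduce to a simple closed form: if radiation is generated as exp(-σρz) with depth and attenuated as exp(-χρz) on the way out, the emergent fraction is σ/(σ+χ). The short numerical check below illustrates only that one approximation, with arbitrary parameter values; it is not Reed's full fluorescence correction.

```python
# Numerically verify that an exponential depth distribution exp(-sigma*rho_z),
# attenuated on exit by exp(-chi*rho_z), emerges with fraction sigma/(sigma+chi).
import numpy as np

sigma = 4.5e3   # assumed coefficient of the exponential depth model (cm^2/g)
chi = 1.2e3     # assumed mass absorption coefficient x cosec(psi) (cm^2/g)

rho_z = np.linspace(0, 12 / sigma, 20000)          # integrate well past the generation depth
generated = np.exp(-sigma * rho_z)                  # exponential depth distribution
emerging = generated * np.exp(-chi * rho_z)         # attenuated on the way out

numeric = np.trapz(emerging, rho_z) / np.trapz(generated, rho_z)
closed_form = sigma / (sigma + chi)
print(f"numeric={numeric:.4f}  closed form={closed_form:.4f}")  # the two agree closely
```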


Author(s):  
Stuart McKernan

For many years, quantitative diffraction contrast experiments consisted largely of determining dislocation Burgers vectors using the g·b = 0 criterion from several different two-beam images. Since the advent of the personal computer revolution, the computing power available for image-processing and image-simulation calculations has become enormous and ubiquitous. Several programs now exist to simulate diffraction contrast images using various approximations. The most common approximations are the use of only two beams or a single systematic row to calculate the image contrast, or the use of the column approximation. The growing body of literature comparing experimental and simulated images shows that very close agreement between the two can be obtained, provided that the choice of parameters and the assumptions made in the calculation are handled properly. The simulation of images of defects in materials has therefore, in many cases, become a tractable problem.
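As a concrete example of the simplest case mentioned above, the sketch below integrates the standard two-beam Howie-Whelan equations down a single column (the column approximation) for a perfect, absorption-free crystal and prints the bright- and dark-field intensities at the exit surface. The parameter values are arbitrary and the code does not correspond to any of the programs alluded to; real simulations add the defect displacement field, anomalous absorption, and one such integration per image pixel.

```python
# Two-beam Howie-Whelan integration down one column of a perfect crystal
# (no defect, no absorption), using a simple explicit Euler step.
import numpy as np

xi_g = 30.0          # extinction distance (nm), assumed
s = 0.02             # deviation parameter (1/nm), assumed
thickness = 120.0    # foil thickness (nm)
dz = 0.02            # integration step (nm)

phi_0, phi_g = 1.0 + 0j, 0.0 + 0j    # transmitted and diffracted amplitudes at the top surface
z = 0.0
while z < thickness:
    d0 = (1j * np.pi / xi_g) * phi_g * np.exp(2j * np.pi * s * z)
    dg = (1j * np.pi / xi_g) * phi_0 * np.exp(-2j * np.pi * s * z)
    phi_0 += d0 * dz
    phi_g += dg * dz
    z += dz

print(f"bright field |phi_0|^2 = {abs(phi_0)**2:.3f}")
print(f"dark field   |phi_g|^2 = {abs(phi_g)**2:.3f}")   # the two sum to approximately 1
```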


Author(s):  
Jose-Maria Carazo ◽  
I. Benavides ◽  
S. Marco ◽  
J.L. Carrascosa ◽  
E.L. Zapata

Obtaining the three-dimensional (3D) structure of negatively stained biological specimens at a resolution of, typically, 2-4 nm is becoming a relatively common practice in an increasing number of laboratories. A combination of new conceptual approaches, new software tools, and faster computers has made this situation possible. However, all these 3D reconstruction processes are quite computer intensive, and the medium-term outlook points to an even greater need for computing power. Up to now, all published 3D reconstructions in this field have been performed on conventional (sequential) computers, but new parallel computer architectures offer the potential of order-of-magnitude increases in computing power and should therefore be considered for the most computing-intensive tasks.

We have studied both shared-memory computer architectures, like the BBN Butterfly, and local-memory architectures, mainly hypercubes implemented on transputers, where we have used the algorithmic mapping method proposed by Zapata et al. In this work we have developed the basic software tools needed to obtain a 3D reconstruction from non-crystalline specimens (“single particles”) using the so-called Random Conical Tilt Series Method. We start from a pair of images presenting the same field, first tilted (by ≃55°) and then untilted. It is then assumed that we can supply the system with the image of the particle we are looking for (ideally, a 2D average from a previous study) and with a matrix describing the geometrical relationships between the tilted and untilted fields (this step is now accomplished by interactively marking a few pairs of corresponding features in the two fields). From here on, the 3D reconstruction process may be run automatically.
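The geometric step described above, determining a matrix from a few interactively marked pairs of corresponding features, can be illustrated with a small least-squares fit. The affine model, the point coordinates, and the 0.57 factor (roughly cos 55°, mimicking tilt-induced foreshortening) are my assumptions, not the authors' code.

```python
# Fit a 2D affine transform relating untilted and tilted field coordinates
# from a few marked point pairs, by linear least squares.
import numpy as np

def fit_affine(untilted_pts, tilted_pts):
    """Both arguments are (N, 2) arrays of matching coordinates, N >= 3.
    Returns a 2x3 matrix M such that tilted ~= [x, y, 1] @ M.T"""
    src = np.asarray(untilted_pts, dtype=float)
    dst = np.asarray(tilted_pts, dtype=float)
    ones = np.ones((src.shape[0], 1))
    A = np.hstack([src, ones])                     # homogeneous source coordinates
    M, *_ = np.linalg.lstsq(A, dst, rcond=None)    # solves A @ M ~= dst
    return M.T                                     # 2x3 affine matrix

if __name__ == "__main__":
    untilted = np.array([[10, 10], [40, 12], [25, 35], [8, 30]])
    # a synthetic "tilt": foreshortening along x (~cos 55 degrees) plus a small shift
    tilted = untilted @ np.array([[0.57, 0.0], [0.0, 1.0]]) + np.array([5.0, -2.0])
    M = fit_affine(untilted, tilted)
    print(np.round(M, 3))   # recovers [[0.57, 0, 5], [0, 1, -2]]
```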

