Fragment Finder 2.0: a computing server to identify structurally similar fragments

2012 ◽  
Vol 45 (2) ◽  
pp. 332-334 ◽  
Author(s):  
R. Nagarajan ◽  
S. Siva Balan ◽  
R. Sabarinathan ◽  
M. Kirti Vaishnavi ◽  
K. Sekar

Fragment Finder 2.0 is a web-based interactive computing server which can be used to retrieve structurally similar protein fragments from 25 and 90% nonredundant data sets. The computing server identifies structurally similar fragments using the protein backbone Cα angles. In addition, the identified fragments can be superimposed using either of the two structural superposition programs, STAMP and PROFIT, provided in the server. The freely available Java plug-in Jmol has been interfaced with the server for the visualization of the query and superposed fragments. The server is the updated version of a previously developed search engine and employs an in-house-developed fast pattern matching algorithm. This server can be accessed freely over the World Wide Web through the URL http://cluster.physics.iisc.ernet.in/ff/.
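
The abstract does not describe the server's actual pattern-matching algorithm, but the underlying idea of retrieving fragments by comparing backbone Cα angle sequences can be sketched as follows. The angle values, tolerance, and data layout below are invented for illustration:

```python
# Sketch: matching protein fragments by backbone C-alpha angle similarity.
# The database contents and tolerance are illustrative only; this is not
# the server's in-house algorithm.

def angle_diff(a, b):
    """Smallest absolute difference between two angles in degrees."""
    d = abs(a - b) % 360.0
    return min(d, 360.0 - d)

def find_similar_fragments(query, database, tol=15.0):
    """Return (protein_id, offset) pairs where a window of C-alpha angles
    matches the query within `tol` degrees at every position."""
    hits = []
    n = len(query)
    for pid, angles in database.items():
        for i in range(len(angles) - n + 1):
            window = angles[i:i + n]
            if all(angle_diff(q, w) <= tol for q, w in zip(query, window)):
                hits.append((pid, i))
    return hits

database = {
    "1ABC": [45.0, 50.0, 120.0, 130.0, 48.0, 52.0],
    "2XYZ": [90.0, 91.0, 92.0, 93.0],
}
query = [47.0, 51.0, 118.0]
print(find_similar_fragments(query, database))  # → [('1ABC', 0)]
```

A real server would replace the linear scan with an indexed fast pattern-matching structure, but the per-position angle comparison is the core similarity test.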

2009 ◽  
Vol 1 (4) ◽  
pp. 58-69 ◽  
Author(s):  
Chad M.S. Steel

While the supply of child pornography through the World Wide Web has frequently been speculated upon, the demand has not been adequately explored. Quantifying and qualifying the demand provides forensic examiners a behavioral basis for determining the sophistication of individuals seeking child pornography. Additionally, the research assists an examiner in searching for and presenting evidence of child pornography browsing. The overall search engine demand for child pornography is bounded between 0.19 and 0.49%, depending on the inclusion of ambiguous phrases, with the top search for child pornography being “lolita bbs”. Unlike on peer-to-peer networks, however, the top child pornography related query ranks only as the 198th most popular query overall. The queries on search engines appear to be decreasing as well, and the techniques employed are becoming less reliant on direct links to content.


2014 ◽  
Vol 53 ◽  
Author(s):  
Loek Cleophas ◽  
Derrick G. Kourie ◽  
Bruce W. Watson

In indexing of, and pattern matching on, DNA and text sequences, it is often important to represent all factors of a sequence. One efficient, compact representation is the factor oracle (FO). At the same time, any classical deterministic finite automaton (DFA) can be transformed to a so-called failure one (FDFA), which may use failure transitions to replace multiple symbol transitions, potentially yielding a more compact representation. We combine the two ideas and directly construct a failure factor oracle (FFO) from a given sequence, in contrast to ex post facto transformation to an FDFA. The algorithm is suitable for both short and long sequences. We empirically compared the resulting FFOs and FOs in terms of the number of transitions for many DNA sequences of lengths 4–512, showing gains of up to 10% in total number of transitions, with failure transitions also taking up less space than symbol transitions. The resulting FFOs can be used for indexing, as well as in a variant of the FO-using backward oracle matching algorithm. We discuss and classify this pattern matching algorithm in terms of the keyword pattern matching taxonomies of Watson, Cleophas and Zwaan. We also empirically compared the use of FOs and FFOs in such backward reading pattern matching algorithms, using both DNA and natural language (English) data sets. The results indicate that the decrease in pattern matching performance of an algorithm using an FFO instead of an FO may outweigh the gain in representation space by using an FFO instead of an FO.
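
The classical factor oracle on which this work builds can be constructed online in linear time (the oracle-online construction of Allauzen, Crochemore and Raffinot). A minimal sketch of that baseline construction, without the failure-transition extension the paper proposes:

```python
def build_factor_oracle(seq):
    """Oracle-online construction of the factor oracle.
    Returns (trans, supply): trans[state][symbol] -> next state,
    supply[state] -> supply-function target (-1 for the initial state)."""
    m = len(seq)
    trans = [dict() for _ in range(m + 1)]
    supply = [-1] * (m + 1)
    for i, c in enumerate(seq):
        trans[i][c] = i + 1                  # internal transition
        k = supply[i]
        while k > -1 and c not in trans[k]:  # add external transitions
            trans[k][c] = i + 1              # along the supply chain
            k = supply[k]
        supply[i + 1] = 0 if k == -1 else trans[k][c]
    return trans, supply

def accepts(trans, word):
    """Follow `word` from state 0. Every factor of seq is accepted
    (the oracle may also accept some non-factors)."""
    state = 0
    for c in word:
        if c not in trans[state]:
            return False
        state = trans[state][c]
    return True

trans, supply = build_factor_oracle("GATTACA")
print(accepts(trans, "TTAC"))              # → True (a factor of GATTACA)
print(sum(len(t) for t in trans))          # total number of transitions
```

The FFO construction of the paper would replace some of the external transitions added in the while-loop with shared failure transitions, which is where the reported space gains come from.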


Web Mining ◽  
2011 ◽  
pp. 69-98 ◽  
Author(s):  
Roberto Navigli

Domain ontologies are widely recognized as a key element for the so-called semantic Web, an improved, “semantic aware” version of the World Wide Web. Ontologies define concepts and interrelationships in order to provide a shared vision of a given application domain. Despite the significant amount of work in the field, ontologies are still scarcely used in Web-based applications. One of the main problems is the difficulty in identifying and defining relevant concepts within the domain. In this chapter, we provide an approach to the problem, defining a method and a tool, OntoLearn, aimed at the extraction of knowledge from Websites, and more generally from documents shared among the members of virtual organizations, to support the construction of a domain ontology. Exploiting the idea that a corpus of documents produced by a community is the most representative (although implicit) repository of concepts, the method extracts a terminology, provides a semantic interpretation of relevant terms and populates the domain ontology in an automatic manner. Finally, further manual corrections are required from domain experts in order to achieve a rich and usable knowledge resource.
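
As a rough illustration of the terminology-extraction step only (not OntoLearn's actual method, which also performs semantic interpretation of the extracted terms), candidate domain terms can be ranked by their frequency in the domain corpus relative to a general reference corpus; the tiny corpora below are invented:

```python
from collections import Counter
import re

def extract_terms(domain_docs, general_docs, top_n=5):
    """Rank candidate terms by relative frequency in the domain corpus
    versus a general corpus: a toy 'domain relevance' score."""
    def freqs(docs):
        words = re.findall(r"[a-z]+", " ".join(docs).lower())
        total = max(len(words), 1)
        return {w: c / total for w, c in Counter(words).items()}
    dom, gen = freqs(domain_docs), freqs(general_docs)
    # Words common in the domain but rare in general text score highest.
    scores = {w: f / (gen.get(w, 0.0) + 1e-9) for w, f in dom.items()}
    return sorted(scores, key=scores.get, reverse=True)[:top_n]

domain = ["the ontology defines concepts",
          "concepts and relations in the ontology"]
general = ["the cat sat on the mat", "the dog and the cat"]
print(extract_terms(domain, general))
```

Domain-specific words such as "ontology" outrank function words such as "the", which appear equally often in the general corpus; a real system would work on multi-word terms and much larger corpora.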


Author(s):  
Sathiyamoorthi V.

It is generally observed that over the last two decades, while the average speed of computers has doubled roughly every eighteen months, the average speed of the network has doubled in a span of just eight months. In order to improve performance, more and more researchers are focusing their research on computers and related technologies. The Internet is one such technology that plays a major role in simplifying information sharing and retrieval. The World Wide Web (WWW) is one such service provided by the Internet, acting as a medium for the sharing of information. As a result, millions of applications run on the Internet, causing increased network traffic and putting great demand on the available network infrastructure.


Author(s):  
Giorgos Laskaridis ◽  
Konstantinos Markellos ◽  
Penelope Markellou ◽  
Angeliki Panayiotaki ◽  
Athanasios Tsakalidis

The emergence of the semantic Web opens up boundless new opportunities for e-business. According to Tim Berners-Lee, Hendler, and Lassila (2001), “the semantic Web is an extension of the current Web in which information is given well-defined meaning, better enabling computers and people to work in cooperation”. A more formal definition by W3C (2001) states that “the semantic Web is the representation of data on the World Wide Web. It is a collaborative effort led by W3C with participation from a large number of researchers and industrial partners. It is based on the resource description framework (RDF), which integrates a variety of applications using eXtensible Markup Language (XML) for syntax and uniform resource identifiers (URIs) for naming”. The capability of the semantic Web to add meaning to information, stored in such a way that it can be searched and processed, as well as recent advances in semantic Web-based technologies, provides the mechanisms for semantic knowledge representation, exchange and collaboration of e-business processes and applications.
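
The RDF data model the W3C definition refers to can be illustrated in a few lines: statements are (subject, predicate, object) triples in which URIs name resources. The URIs and values below are invented for illustration:

```python
# Sketch of the RDF triple model: each statement is a
# (subject, predicate, object) tuple, with URIs naming resources.
# All URIs here are hypothetical examples.

triples = [
    ("http://example.org/product/42",
     "http://example.org/schema/name",
     "Wireless Mouse"),
    ("http://example.org/product/42",
     "http://example.org/schema/priceEUR",
     "19.90"),
]

def objects(triples, subject, predicate):
    """All objects asserted for a given subject and predicate."""
    return [o for s, p, o in triples if s == subject and p == predicate]

print(objects(triples, "http://example.org/product/42",
              "http://example.org/schema/name"))  # → ['Wireless Mouse']
```

Because every resource is named by a URI, triples produced by different e-business applications can be merged into one graph and queried uniformly, which is the interoperability the abstract alludes to.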


Author(s):  
Melissa B. Holler

The foundation for much of the technology being used in today’s classroom is the Microsoft Office suite. It is fast becoming the integrated software package of choice for many schools and school districts. Word, PowerPoint, Excel, and Access are the staples for many students and teachers. Complementing these capabilities, Internet Explorer and Netscape Communicator are the tools of choice for accessing the World Wide Web. Why not help teachers utilize these same tools to develop text, visual, and Web-based materials for the classroom, and leave the more complex and costly packages to multimedia designers and commercial artists? The success of this philosophy has been borne out by a blistering growth in applications from K-12 classroom teachers, technology coordinators, and corporate trainers.


Author(s):  
Deborah L. Lowther ◽  
Marshall G. Jones ◽  
Robert T. Plants

The potential impact of the World Wide Web (WWW) on our educational system is limitless. However, if our teachers do not possess the appropriate knowledge and skills to use the Web, the impact could be less than positive. It is evident, then, that our teachers need to be prepared to effectively use these powerful on-line resources to prepare our children to thrive in a digital society. The purpose of this chapter is to discuss the impact of Web-based education on teacher education programs by addressing the following questions:
• How is the World Wide Web impacting education?
• Are teacher education programs meeting the challenge of producing certified teachers who are capable of integrating meaningful use of technology into K-12 classrooms?
• What is expected of teacher education programs in regard to technology and Web-based education?
• What knowledge and skills do preservice teachers need to effectively use Web-based education?
• What instructional approaches should be used to prepare preservice teachers to use Web-based education?


Author(s):  
Man-Hua Wu ◽  
Herng-Yow Chen

With the rapid growth of the Internet and the increasing popularity of the World Wide Web, web-based learning systems have become more and more popular. However, in general Web-based learning systems, learners may often get lost in the enormous educational materials (Eirinaki & Vazirgiannis, 2003; Murray, 2002). This kind of situation is referred to as the disorientation problem. In addition to the disorientation problem, general Web-based learning systems provide every learner with uniform course content and presentation without considering the different knowledge levels of learners. Therefore, the course content may be insufficient or unnecessary for learners at different knowledge levels. This kind of situation was referred to as the cognitive-overhead problem by Murray (2002).


2004 ◽  
pp. 338-349
Author(s):  
Robert S. Owen ◽  
Bosede Aworuwa

Many of us are using the World Wide Web in ways that are similar to the teaching machines and automatic tutoring devices of the 1950s and 1960s, yet we are moving ahead without building upon a base of knowledge that already exists from that era. This chapter reviews the basic ideas of the original automatic teaching and tutoring machines of those two decades — a linear programmed learning model and a programmed branching model — and compares these to hypermedia methods that are now enabled via web technology. Some classic ideas in assessing the cognitive and affective learning outcomes of teaching — somewhat analogous to usability issues of utility and likability — are reviewed. Greater emphasis on considering the educational outcomes is advocated when we use new online teaching technologies in programmed instruction.

