Indexing of Free, Web-based Electronic Resources

2010 ◽  
Vol 10 (1) ◽  
pp. 28-33 ◽  
Author(s):  
Glenda Browne

Abstract: The internet provides access to a huge amount of information, and most people experience problems with information overload rather than scarcity. Glenda Browne explains how indexing provides a way of increasing retrieval of relevant information from the content available. Manual, book-style indexes can be created for websites and for individual web documents such as online books. Keyword metadata is a crucial behind-the-scenes aid to improved search engine functioning, and categorisation, social bookmarking and automated indexing also play a part.
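As an illustration of the kind of structure that sits behind both manual and automated indexing, the core data structure of most search aids is an inverted index mapping each term to the documents that contain it. The sketch below is generic (the document names and texts are hypothetical), not the workflow the article describes:

```python
import re
from collections import defaultdict

def build_index(documents):
    """Build a simple inverted index: term -> sorted list of document ids.

    `documents` maps a document id (e.g. a URL) to its text. This is an
    illustrative sketch, not the article's indexing method.
    """
    index = defaultdict(set)
    for doc_id, text in documents.items():
        # Lowercase and split into alphabetic tokens
        for term in re.findall(r"[a-z]+", text.lower()):
            index[term].add(doc_id)
    return {term: sorted(ids) for term, ids in index.items()}

# Hypothetical pages of a small website
docs = {
    "/intro.html": "Indexing helps retrieval of relevant information",
    "/meta.html": "Keyword metadata aids search engine retrieval",
}
index = build_index(docs)
print(index["retrieval"])  # both pages mention "retrieval"
```

Looking a term up in the index then returns every page that contains it, which is the basic operation a book-style web index or keyword-metadata search builds on.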

2012 ◽  
pp. 684-705 ◽  
Author(s):  
Luis Terán ◽  
Andreas Ladner ◽  
Jan Fivaz ◽  
Stefani Gerber

The use of the Internet now has a specific purpose: to find information. Unfortunately, the amount of data available on the Internet is growing exponentially, creating what can be considered a nearly infinite and ever-evolving network with no discernible structure. This rapid growth has raised the question of how to find the most relevant information. Many different techniques have been introduced to address information overload, including search engines, the Semantic Web, and recommender systems, among others. Recommender systems are computer-based techniques that are used to reduce information overload and recommend products likely to interest a user when given some information about the user’s profile. This technique is mainly used in e-Commerce to suggest items that fit a customer’s purchasing tendencies. The use of recommender systems for e-Government is a research topic that is intended to improve the interaction among public administrations, citizens, and the private sector by reducing information overload on e-Government services. More specifically, e-Democracy aims to increase citizens’ participation in democratic processes through the use of information and communication technologies. In this chapter, an architecture of a recommender system that uses fuzzy clustering methods for e-Elections is introduced. In addition, a comparison with the smartvote system, a Web-based Voting Advice Application (VAA) used to aid voters in finding the party or candidate that is most in line with their preferences, is presented.
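The fuzzy clustering the chapter builds on can be illustrated with a minimal fuzzy c-means loop. This is a generic sketch over made-up "voter profile" points in a two-dimensional issue space, not the chapter's actual architecture or the smartvote system:

```python
import numpy as np

def fuzzy_c_means(X, c, m=2.0, iters=50, seed=0):
    """Minimal fuzzy c-means sketch (illustrative only).

    Returns (centers, U) where U[i, k] is the degree to which point i
    belongs to cluster k; each row of U sums to 1, so points can belong
    partially to several clusters.
    """
    rng = np.random.default_rng(seed)
    n = len(X)
    U = rng.random((n, c))
    U /= U.sum(axis=1, keepdims=True)                 # random fuzzy memberships
    for _ in range(iters):
        W = U ** m
        centers = (W.T @ X) / W.sum(axis=0)[:, None]  # membership-weighted means
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
        d = np.fmax(d, 1e-10)                         # avoid division by zero
        inv = d ** (-2.0 / (m - 1.0))                 # standard FCM update
        U = inv / inv.sum(axis=1, keepdims=True)
    return centers, U

# Two obvious groups of hypothetical voter profiles on two issue dimensions
X = np.array([[0.0, 0.1], [0.1, 0.0], [0.9, 1.0], [1.0, 0.9]])
centers, U = fuzzy_c_means(X, c=2)
```

Unlike hard clustering, the membership matrix `U` lets a profile sit between groups, which is what makes fuzzy methods attractive for recommending candidates whose positions only partially match a voter's.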




Author(s):  
Jos van Iwaarden ◽  
Ton van der Wiele ◽  
Roger Williams ◽  
Steve Eldridge

The Internet has come of age as a global source of information about every topic imaginable. A company like Google has become a household name in Western countries, and making use of its internet search engine is so popular that “Googling” has even become a verb in many Western languages. Whether it is for business or private purposes, people worldwide rely on Google to present them with relevant information. Even the scientific community is increasingly employing Google’s search engine to find academic articles and other sources of information about the topics they are studying. Yet the vast amount of information that is available on the internet is gradually changing in nature. Initially, information would be uploaded by the administrators of a web site and would then be visible to all visitors of the site. This approach meant that web sites tended to be limited in the amount of content they provided, and that such content was strictly controlled by the administrators. Over time, web sites have granted their users the authority to add information to web pages, and sometimes even to alter existing information. Current examples of such web sites are eBay (auction), Wikipedia (encyclopedia), YouTube (video sharing), LinkedIn (social networking), Blogger (weblogs) and Delicious (social bookmarking).


Author(s):  
Novario Jaya Perdana

The accuracy of a search result depends on the keywords that are used. A lack of information in the keywords can reduce the accuracy of the search result, which makes searching for information on the internet hard work. In this research, software was built to create document keyword sequences. The software uses Google Latent Semantic Distance, which can extract relevant information from a document. The information is expressed in the form of specific word sequences that can be used as keyword recommendations in search engines. The results show that the implemented method for creating document keyword recommendations achieved high accuracy and could find the most relevant information in the top search results.
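The paper's Google Latent Semantic Distance method is not reproduced here, but the closely related Normalized Google Distance gives a feel for how web hit counts can measure semantic closeness between candidate keywords. All hit counts below are hypothetical:

```python
import math

def normalized_google_distance(fx, fy, fxy, N):
    """Normalized Google Distance (a related measure, shown for
    illustration; the paper's exact method may differ).

    fx, fy: hit counts for terms x and y alone; fxy: hit count for pages
    containing both; N: total number of indexed pages. Smaller values
    mean the terms co-occur more often, i.e. are semantically closer.
    """
    lx, ly, lxy = math.log(fx), math.log(fy), math.log(fxy)
    return (max(lx, ly) - lxy) / (math.log(N) - min(lx, ly))

# Hypothetical hit counts: a strongly co-occurring pair vs. a weak one
close = normalized_google_distance(9_000_000, 8_000_000, 6_000_000, 1_000_000_000)
far = normalized_google_distance(9_000_000, 8_000_000, 10_000, 1_000_000_000)
```

A keyword recommender built on such a distance would rank candidate terms for a document by how close they are to the terms the document already contains.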


2011 ◽  
pp. 153-161
Author(s):  
Raj Gaurang Tiwari ◽  
Mohd. Husain ◽  
Anil Agrawal

Web users face information overload due to the significant and rapid growth in both the amount of information and the number of users, so providing Web users with exactly the information they need has become a critical issue in web-based information retrieval and Web applications. In this work, we aspire to improve the performance of Web information retrieval and Web presentation by developing and employing Web data mining paradigms. Every search engine has a corresponding database that defines the set of documents that can be searched by that engine. Generally, an index for all documents in the database is created and stored in the search engine. Text data on the Internet can be partitioned naturally into several databases. Proficient retrieval of the desired data can be attained if we can accurately predict the usefulness of each database, because with such information we only need to retrieve potentially useful documents from useful databases. For a given query ‘q’, the usefulness of a text database is defined as the number of documents in the database that are sufficiently relevant to ‘q’. In this paper, we propose new approaches for database selection and document selection. We also implement these algorithms using the .NET Framework. Our experimental results indicate that these methods can yield substantial improvements over existing techniques.
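The usefulness definition above can be sketched directly: count the documents in each database that are sufficiently relevant to the query, then keep the most useful databases for retrieval. The relevance test (matching at least two query terms) and the sample databases are illustrative assumptions, not the paper's algorithm:

```python
def database_usefulness(database, query_terms, threshold=2):
    """Usefulness of a text database for a query: the number of its
    documents that are sufficiently relevant (here, hypothetically,
    matching at least `threshold` distinct query terms)."""
    terms = {t.lower() for t in query_terms}
    count = 0
    for doc in database:
        matches = sum(1 for t in terms if t in doc.lower())
        if matches >= threshold:
            count += 1
    return count

def select_databases(databases, query_terms, top_k=1):
    """Rank databases by usefulness and keep the top_k for retrieval."""
    ranked = sorted(databases.items(),
                    key=lambda kv: database_usefulness(kv[1], query_terms),
                    reverse=True)
    return [name for name, _ in ranked[:top_k]]

# Hypothetical databases, each a list of document texts
dbs = {
    "news": ["web mining for retrieval", "sports results today"],
    "cs":   ["web information retrieval systems",
             "web data mining and retrieval paradigms"],
}
print(select_databases(dbs, ["web", "retrieval"]))  # ['cs']
```

Only the selected databases then need to be searched, which is the saving the paper's database-selection step aims for.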


2019 ◽  
Vol 8 (2S11) ◽  
pp. 1083-1086

In recent years everything has become connected through the internet, and the Internet of Things (IoT) will change all aspects of our lives and our future. While things are connected to the internet, they generate a huge amount of information which has to be processed. The information gathered from various IoT devices has to be recognized and organized according to the environments of their type. To recognize and organize the data gathered from different things, an important task is passing the data through different Data Mining Techniques (DMT). In this article, we mainly focus on the analysis of various Data Mining Techniques over data generated by IoT devices connected to the internet, using the DBSCAN technique. We also review different Data Mining Techniques for data analysis.
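DBSCAN groups dense regions of readings into clusters and flags sparse points as noise, which suits sensor data with outliers. A minimal self-contained sketch over hypothetical two-dimensional sensor readings (the data and parameters are invented for illustration):

```python
from collections import deque

def dbscan(points, eps, min_pts):
    """Minimal DBSCAN sketch. Returns a label per point; -1 marks noise."""
    def neighbours(i):
        xi, yi = points[i]
        return [j for j, (xj, yj) in enumerate(points)
                if (xi - xj) ** 2 + (yi - yj) ** 2 <= eps ** 2]

    labels = [None] * len(points)
    cluster = -1
    for i in range(len(points)):
        if labels[i] is not None:
            continue
        nbrs = neighbours(i)
        if len(nbrs) < min_pts:
            labels[i] = -1            # provisionally noise
            continue
        cluster += 1                  # i is a core point: start a cluster
        labels[i] = cluster
        queue = deque(nbrs)
        while queue:
            j = queue.popleft()
            if labels[j] == -1:
                labels[j] = cluster   # noise reachable from a core point
            if labels[j] is not None:
                continue
            labels[j] = cluster
            jn = neighbours(j)
            if len(jn) >= min_pts:    # j is itself core: keep expanding
                queue.extend(jn)
    return labels

# Hypothetical sensor readings: two dense groups and one outlier
readings = [(1, 1), (1, 2), (2, 1), (8, 8), (8, 9), (9, 8), (50, 50)]
print(dbscan(readings, eps=2.0, min_pts=2))
```

Unlike k-means, the number of clusters is not fixed in advance; density alone decides it, and the outlier at (50, 50) is labelled noise rather than forced into a group.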


Author(s):  
Kristina Tihomirova ◽  
Linda Mezule

It has been observed that the huge amount of information received from teachers can create a feeling of overload for students. Selection of modern teaching methods does not always help to solve this issue. To identify the link between information overload and various study course organization models (regular, advanced and super-advanced), various lecturer types have been described. These include the apathetic, formal, teacher-centred egoist, student-centred chaotic and activist lecturer. The results demonstrated that course organization in engineering studies is closely linked to the personality of the lecturer. Successful course organization is based on good time management and the selection of an appropriate amount of information. In advanced and super-advanced courses, regular communication between lecturers and experts in practice is favoured. At the same time, selection of an adequate amount of study material based on the general knowledge level of the students is required. To achieve this goal, each lecturer should evaluate the level of information required and the overall interest of students in the course topic on a regular basis before the beginning of the course.


Author(s):  
Ming Wang

The enormous amount of commercial information available on the Internet overwhelms online shoppers and makes it difficult for them to find relevant information. The recent development of shopping agents (bots) has offered a practical solution for this information overload problem. From the customer’s point of view, a shopping agent reduces search complexity, increases search efficiency, and supports user mobility. It has been proposed that the availability of agent Web sites is one of the reasons why e-markets should be more efficient (Mougayar, 1998). Shopping bots are created with agent software that assists online shoppers by automatically gathering shopping information from the Internet. In this comparative shopping environment, shopping agents can provide the customer with comparative prices for a searched product, customer reviews of the product, and reviews of the corresponding merchants. The agent will first locate the merchants’ Web sites selling the searched product. Then, the agent will collect information about the prices of the product and its features from these merchants. Once a customer selects a product with a merchant, the individual merchant Web site will process the purchase order and the delivery details. The shopping agent receives a commission on each sale made by a visitor to its site from the merchant selling the product on the Internet. Some auction agent Web sites provide a negotiation service through intelligent agent functions. Agents will represent both buyers and sellers. Once a buyer identifies a seller, the agent can negotiate the transaction. The agents will negotiate a price and then execute the transaction for their respective owners. The buyer’s agent will use a credit card account number to pay for the product. The seller’s agent will accept the payment and transmit the proper instructions to deliver the item under the terms agreed upon by the agents.
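The price-comparison step the agent performs can be reduced to collecting offers and sorting them; the merchants, prices and ratings below are invented purely for illustration:

```python
def compare_offers(offers):
    """Tiny comparison-shopping sketch: sort hypothetical merchant
    offers for one product by price, cheapest first."""
    return sorted(offers, key=lambda o: o["price"])

# Hypothetical offers gathered by the agent from merchant sites
offers = [
    {"merchant": "ShopA", "price": 24.99, "rating": 4.1},
    {"merchant": "ShopB", "price": 19.95, "rating": 3.8},
    {"merchant": "ShopC", "price": 22.50, "rating": 4.6},
]
best = compare_offers(offers)[0]
print(best["merchant"])  # ShopB
```

A real agent would also weigh merchant reviews and product features, not price alone, before presenting the comparison to the shopper.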


Author(s):  
Santosh Kumar ◽  
Ravi Kumar

The internet is huge in size and increasing exponentially. Finding relevant information in such a huge information source is becoming very difficult: millions of web pages are returned in response to a user's ordinary query, and displaying these web pages without ranking makes it very challenging for the user to find the relevant results. This paper proposes a novel approach that utilizes web content, usage, and structure data to prioritize web documents. The proposed approach has applications in several major areas such as web personalization, adaptive website development, recommendation systems, search engine optimization, and business intelligence solutions. Further, the proposed approach has been compared experimentally with other approaches (WDPGA, WDPSA, and WDPII), and it has been observed that, with a small trade-off in time, it has an edge over these approaches.
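One simple way to combine the three evidence sources the paper names (content relevance, usage data, and link structure) is a weighted score. The weights, field names, and scores below are assumptions for illustration, not the paper's actual method:

```python
def combined_score(page, weights=(0.5, 0.3, 0.2)):
    """Hypothetical weighted combination of content, usage, and
    structure evidence; all scores assumed normalised to [0, 1]."""
    wc, wu, ws = weights
    return wc * page["content"] + wu * page["usage"] + ws * page["structure"]

# Hypothetical per-page scores from the three mining paradigms
pages = [
    {"url": "/a", "content": 0.9, "usage": 0.2, "structure": 0.4},
    {"url": "/b", "content": 0.6, "usage": 0.9, "structure": 0.8},
]
ranked = sorted(pages, key=combined_score, reverse=True)
print(ranked[0]["url"])  # /b: strong usage and structure outweigh /a's content
```

Here a page that users actually visit and that is well linked can outrank a page that merely matches the query text, which is the motivation for mixing the three data sources rather than relying on content alone.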


2010 ◽  
Vol 9 (2) ◽  
pp. 305-309
Author(s):  
Alison Dawson

The URL addresses listed here access websites holding an array of electronic resources relevant to the understanding of harm, abuse, agency and resilience across the lifespan. Many websites include links to additional reports, research papers, reviews and other sources of information. Due to the breadth of subject area and limitations on available space, the websites should be regarded as an indicative sample rather than an exhaustive list of relevant information currently available on the internet. Only English language sites have been included. All website addresses were available on 31 July 2009.

