A New Feature based Scoring Technique to Discover Sentiments Present in the Online Product

The increasing usage of the internet, online stores and social media has enabled users to express their opinions, attitudes and views on the World Wide Web without reluctance or fear. These opinions may relate to a product or service as well as to global issues. The colossal growth of web technology has made it possible for consumers to learn more about the products they intend to buy from existing customers' reviews. This paper focuses on analyzing opinions by separating positive from negative sentiment and then guiding the user about the ground truth regarding the performance and quality of the product. The central idea is to identify the important features of the product and provide a feature-wise computation, instead of roughly labelling a product as good or bad.
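To make the feature-wise idea concrete, the following is a minimal sketch of how per-feature sentiment counts could be tallied from review text. It is only an illustration of the general approach, not the paper's actual technique: the feature and opinion lexicons (FEATURE_KEYWORDS, POSITIVE_WORDS, NEGATIVE_WORDS) and the function score_reviews_by_feature are invented for this example.

```python
# Illustrative sketch of feature-wise sentiment scoring (not the paper's exact method).
from collections import defaultdict

# Hypothetical feature and opinion lexicons; a real system would learn or curate these.
FEATURE_KEYWORDS = {
    "battery": ["battery", "charge", "charging"],
    "display": ["screen", "display", "resolution"],
    "camera":  ["camera", "photo", "picture"],
}
POSITIVE_WORDS = {"good", "great", "excellent", "sharp", "long"}
NEGATIVE_WORDS = {"bad", "poor", "terrible", "blurry", "short"}

def score_reviews_by_feature(reviews):
    """Return per-feature counts of positive and negative opinion words."""
    scores = defaultdict(lambda: {"positive": 0, "negative": 0})
    for review in reviews:
        for sentence in review.lower().split("."):
            words = sentence.split()
            for feature, keywords in FEATURE_KEYWORDS.items():
                if any(k in words for k in keywords):
                    scores[feature]["positive"] += sum(w in POSITIVE_WORDS for w in words)
                    scores[feature]["negative"] += sum(w in NEGATIVE_WORDS for w in words)
    return dict(scores)

if __name__ == "__main__":
    sample = ["The battery life is great. The camera is blurry in low light."]
    print(score_reviews_by_feature(sample))
```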

2001, Vol 20 (4), pp. 11-18
Author(s): Cleborne D. Maddux

The Internet and the World Wide Web are growing at unprecedented rates, and more and more teachers are authoring school or classroom web pages. Such pages have particular potential for use in rural areas by special educators, children with special needs, and the parents of children with special needs. However, the quality of many of these pages leaves much to be desired. All web pages, especially those authored by special educators, should be accessible to people with disabilities. Many other problems complicate use of the web for all users, whether or not they have disabilities. By taking some simple steps, beginning webmasters can avoid these problems. This article discusses practical solutions to accessibility problems and other problems commonly seen on the web.
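One of the "simple steps" alluded to above is providing alternative text for images, which assistive technologies depend on. The following is a small sketch, not taken from the article, that flags img tags lacking alt text using only Python's standard html.parser; the class name and sample page are illustrative.

```python
# Minimal sketch: flag <img> tags that lack alt text, one common accessibility problem.
from html.parser import HTMLParser

class MissingAltChecker(HTMLParser):
    def __init__(self):
        super().__init__()
        self.missing = []

    def handle_starttag(self, tag, attrs):
        # attrs arrives as a list of (name, value) pairs.
        if tag == "img":
            attrs = dict(attrs)
            if not attrs.get("alt"):
                self.missing.append(attrs.get("src", "<unknown>"))

page = '<html><body><img src="logo.gif"><img src="map.png" alt="Campus map"></body></html>'
checker = MissingAltChecker()
checker.feed(page)
print("Images missing alt text:", checker.missing)
```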


Author(s): B. M. Subraya

For many years, the World Wide Web (Web) functioned quite well without any concern about the quality of performance. Neither the designers of web pages nor their users worried much about performance attributes. In its initial stages of development, the Web was primarily meant to be an information provider rather than the medium for transacting business that it has since become. Users' expectations were limited to seeking the information available on the Web. Thanks to the ever-growing population of Web surfers (now in the millions), the information found on the Web has undergone a dimensional change in nature, content, and depth.


Hand Surgery, 2003, Vol 08 (02), pp. 181-185
Author(s): J. A. Sproule, C. Tansey, B. Burns, G. Fenelon

Healthcare information contained on the World Wide Web is not screened or regulated and claims may be unsubstantiated and misleading. The objective of this study was to evaluate the nature and quality of information on the Web in relation to hand surgery. Three search engines were assessed for information on three hand operations: carpal tunnel decompression, Dupuytren's release and trigger finger release. Websites were classified and evaluated for completeness, accuracy, accountability and reference to a reliable source of information. A total of 172 websites were examined. Although 85% contained accurate information, in 65% this information was incomplete. Eighty-seven per cent of websites were accountable for the information presented, but only 24% made references to reliable sources. Until an organised approach to website control is established, it is important for hand surgeons to emphasise to their patients that not everything they read is complete or accurate. Publicising sites known to be of high quality will promote safe browsing of the Web.


Author(s): Aylin Akaltun, Patrick Maisch, Bernhard Thalheim

The rapid growth of information demands new technologies that make the right information available to the right user. The more unspecified content is published, the more the general usability of the World Wide Web is lost. The next generation's Web information services will have to be more adaptive to keep the web usable. Users demand knowledge that reflects their life case, their specific intentions, and therefore a particular quality of content, served in an understandable way. As a possible solution, the authors present a new technology that matches content against particular life cases, user models and contexts. In a first approach, the authors give a quick overview of knowledge and the way it is perceived, and an example application dealing with content matching and the different views of information required by different kinds of audiences.
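As a rough illustration of what matching content against a user model might look like, the sketch below scores content items by how well they cover a user's interest terms. This is an invented toy example, not the authors' system; the names match_score, user_profile and content_items are assumptions made purely for illustration.

```python
# Illustrative sketch: rank content items against a simple user model (not the authors' system).
def match_score(content_terms, user_profile):
    """Fraction of the user's interest terms covered by a content item."""
    content = set(content_terms)
    return sum(1 for term in user_profile if term in content) / len(user_profile)

# Hypothetical user model: life case, intention and context collapsed into interest terms.
user_profile = ["mortgage", "first-time", "fixed-rate", "family"]

content_items = {
    "Student loan basics": ["student", "loan", "repayment"],
    "Buying your first home": ["mortgage", "first-time", "fixed-rate", "deposit"],
}

# Rank content so the best-matching item for this user comes first.
ranked = sorted(content_items.items(),
                key=lambda item: match_score(item[1], user_profile),
                reverse=True)
for title, terms in ranked:
    print(f"{match_score(terms, user_profile):.2f}  {title}")
```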


1996, Vol 5 (2), pp. 16-18
Author(s): Alistair Inglis

A comparative study was made of the ways in which Australian universities are disseminating information about their courses over the World Wide Web. The study examined the quantity and quality of the information provided, the forms in which information is presented, and the means of access to the information. The results of the survey indicated that while the majority of universities are now publishing at least some information over the World Wide Web, both the quantity and quality of that information are variable. Implications for further development of institutional course information databases are discussed.


Author(s): Anthony D. Andre

This paper provides an overview of the various human factors and ergonomics (HF/E) resources on the World Wide Web (WWW). A list of the most popular and useful HF/E sites is provided, along with several critical guidelines relevant to using the WWW. The reader will gain a clear understanding of how to find HF/E information on the Web and how to use the Web successfully for various HF/E professional consulting activities. Finally, we consider the ergonomic implications of surfing the Web.


2016, Vol 28 (2), pp. 241-251
Author(s): Luciane Lena Pessanha Monteiro, Mark Douglas de Azevedo Jacyntho

The study addresses the use of the Semantic Web and the Linked Data principles proposed by the World Wide Web Consortium to develop a Web application for the semantic management of scanned documents. The main goal is to record scanned documents, describing them in a way that machines can understand and process, filtering content and assisting in searches for such documents when a decision-making process is under way. To this end, machine-understandable metadata, created using reference Linked Data ontologies, are associated with the documents, creating a knowledge base. To further enrich the process, a (semi)automatic mashup of these metadata with data from the Web of Linked Data is carried out, considerably increasing the scope of the knowledge base and making it possible to extract new data related to the content of the stored documents from the Web and combine them, without the user making any effort or perceiving the complexity of the whole process.
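A minimal sketch of the metadata step might look like the following: a scanned document is described with Dublin Core terms and linked to an external Linked Data resource, assuming the rdflib library is available. The document URI and property values are hypothetical; the study's actual ontologies and application design are not reproduced here.

```python
# Minimal sketch: describe a scanned document with Dublin Core terms (assumes rdflib is installed).
from rdflib import Graph, URIRef, Literal
from rdflib.namespace import DCTERMS, RDF, FOAF

g = Graph()
doc = URIRef("http://example.org/documents/scan-0001")  # hypothetical document URI

g.add((doc, RDF.type, FOAF.Document))
g.add((doc, DCTERMS.title, Literal("Meeting minutes, board of directors")))
g.add((doc, DCTERMS.created, Literal("2015-03-10")))
g.add((doc, DCTERMS.subject, Literal("budget approval")))
# Linking to an external Linked Data resource is what enriches the local knowledge base.
g.add((doc, DCTERMS.creator, URIRef("http://dbpedia.org/resource/Example_Organization")))

# Serialize the knowledge base as Turtle so it can be inspected or published.
print(g.serialize(format="turtle"))
```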


2018, Vol 31 (5), pp. 154-182
Author(s): Cadence Kinsey

This article analyses Camille Henrot’s 2013 film Grosse Fatigue in relation to the histories of hypermedia and modes of interaction with the World Wide Web. It considers the development of non-hierarchical systems for the organisation of information, and uses Grosse Fatigue to draw comparisons between the Web, the natural history museum and the archive. At stake in focusing on the way in which information is organised through hypermedia is the question of subjectivity, and this article argues that such systems are made ‘user-friendly’ by appearing to accommodate intuitive processes of information retrieval, reflecting the subject back to itself as autonomous. This produces an ideology of individualism which belies the forms of heteronomy that in fact shape and structure access to information online in significant ways. At the heart of this argument is an attention to the visual, and the significance of art as an immanent mode of analysis. Through the themes of transparency and opacity, and order and chaos, the article thus proposes a defining dynamic between autonomy and automation as a model for understanding the contemporary subject.


2017, Vol 4 (1), pp. 95-110
Author(s): Deepika Punj, Ashutosh Dixit

In order to manage the vast information available on the web, the crawler plays a significant role. The working of the crawler should be optimized to obtain the maximum amount of unique information from the World Wide Web. In this paper, an architecture for a migrating crawler is proposed, based on URL ordering, URL scheduling and a document redundancy elimination mechanism. The proposed ordering technique is based on URL structure, which plays a crucial role in utilizing the web efficiently. Scheduling ensures that each URL goes to the optimum agent for downloading; to achieve this, the characteristics of both agents and URLs are taken into consideration. Duplicate documents are removed to keep the database unique, and to reduce matching time, documents are matched on the basis of their meta information only. The agents of the proposed migrating crawler work more efficiently than a traditional single crawler by providing ordering and scheduling of URLs.
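The sketch below illustrates the three mechanisms the abstract names: URL ordering by structure, scheduling of URLs to agents, and duplicate elimination based on meta information only. It is an invented simplification rather than the proposed architecture; the ordering key, the host-based scheduling rule and the function names are assumptions.

```python
# Illustrative sketch of URL ordering, agent scheduling and meta-based deduplication.
import hashlib
from urllib.parse import urlparse

def order_urls(urls):
    """Order URLs by structure: shallower paths (fewer segments) are crawled first."""
    return sorted(urls, key=lambda u: len([p for p in urlparse(u).path.split("/") if p]))

def schedule(urls, agents):
    """Assign each URL to an agent, keeping all URLs from one host on the same agent."""
    assignment = {agent: [] for agent in agents}
    for url in urls:
        agent = agents[hash(urlparse(url).netloc) % len(agents)]
        assignment[agent].append(url)
    return assignment

def is_duplicate(meta, seen_hashes):
    """Detect duplicates from metadata (title + description) only, avoiding full-text comparison."""
    digest = hashlib.md5((meta.get("title", "") + meta.get("description", "")).encode()).hexdigest()
    if digest in seen_hashes:
        return True
    seen_hashes.add(digest)
    return False

urls = ["http://example.org/a/b/page3", "http://example.org/page1", "http://example.net/page2"]
print(order_urls(urls))
print(schedule(urls, ["agent-1", "agent-2"]))

seen = set()
print(is_duplicate({"title": "Page 1", "description": "intro"}, seen))  # False, first sighting
print(is_duplicate({"title": "Page 1", "description": "intro"}, seen))  # True, duplicate meta
```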

