Less users more confidence: How AOIs don’t affect scanpath trend analysis

2017 ◽  
Vol 10 (4) ◽  
Author(s):  
Sukru Eraslan ◽  
Yeliz Yesilada ◽  
Simon Harper

User studies are typically difficult: recruiting enough users is often problematic, and each experiment takes a considerable amount of time to complete. Eye tracking is increasingly used in these studies and often adds further time, so the fewer users such studies require, the more practical they become in terms of cost and time expended. The possibility of achieving almost the same results with fewer users has already been raised. Specifically, previous work on scanpath trend analysis, which discovers the most commonly followed path on a particular web page in terms of its visual elements or areas of interest (AOIs), observed 75% similarity to the results of 65 users with 27 users for searching tasks and 34 users for browsing tasks. Different approaches are available to segment or divide web pages into their visual elements or AOIs. In this paper, we investigate whether the possibility raised by the previous work is restricted to a particular page segmentation approach by replicating the experiments with two other segmentation approaches. The results are consistent, with ~5% difference for the searching tasks and ~10% difference for the browsing tasks.
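The abstract does not spell out how scanpath similarity is computed; as a minimal, hedged sketch of the general idea, the Python snippet below represents scanpaths as strings of AOI labels and compares them with a normalized string-edit (Levenshtein) similarity. The AOI labels and example sequences are hypothetical, not data from the study.

```python
# Minimal sketch: represent each scanpath as a string of AOI labels and
# measure how similar a small-sample trend is to the full-sample trend.
# AOI labels ('H' header, 'M' menu, 'C' content, 'F' footer) are hypothetical.

def edit_distance(a: str, b: str) -> int:
    """Classic Levenshtein distance via single-row dynamic programming."""
    dp = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        prev, dp[0] = dp[0], i
        for j, cb in enumerate(b, 1):
            prev, dp[j] = dp[j], min(dp[j] + 1,          # deletion
                                     dp[j - 1] + 1,      # insertion
                                     prev + (ca != cb))  # substitution
    return dp[-1]

def similarity(a: str, b: str) -> float:
    """Normalized string-edit similarity in [0, 1]."""
    if not a and not b:
        return 1.0
    return 1.0 - edit_distance(a, b) / max(len(a), len(b))

# Trend scanpath from all users vs. the trend from a smaller subsample
full_trend = "HMCCF"    # hypothetical AOI sequence
sample_trend = "HMCF"
print(f"similarity = {similarity(full_trend, sample_trend):.2f}")  # 0.80
```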

Author(s):  
Rebecca Grier ◽  
Philip Kortum ◽  
James Miller

This chapter presents the basic cognitive and perceptual attentional mechanisms that affect how users view web pages and the methods used to measure this attention. It describes the groundbreaking work of Faraday (2000), who proposed a visual scanning model of web pages based on salient visual elements, and summarizes data from eye tracking techniques that reveal the strengths and weaknesses of the Faraday model. The primary goal of the chapter is to help the reader gain an understanding of which visual elements on a web page draw a user’s attention, how that knowledge can be collected, and how it can be applied to the design of useful and usable web sites.


Nowadays mobile phones are a pervasive part of our lifestyle; we use them as a camera, a radio, a music player, and even as a web browser. Since most web pages are designed for desktop computers, navigating through them on a phone is highly fatiguing. Hence, there is great interest in computer science in adapting such content-rich pages to the small screens of mobile devices. Moreover, the different parts of a web page are not of equal importance to the end user. Consequently, the authors propose a mechanism that identifies the part of a web page most useful to a user with respect to his or her search query, while avoiding information loss. The challenge comes from the fact that long web content cannot easily be displayed in either vertical or horizontal form.
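The mechanism is described only at a high level; the sketch below shows one plausible reading in Python: score candidate page blocks by overlap with the user's query terms and surface the best-scoring block on the small screen. The block texts and the scoring rule are illustrative assumptions, not the authors' algorithm.

```python
# Hedged sketch: pick the page block most relevant to a search query.
# Blocks would come from a page segmenter; here they are hard-coded.

def score(block_text: str, query: str) -> float:
    """Fraction of query terms that occur in the block (toy relevance)."""
    block_terms = set(block_text.lower().split())
    query_terms = query.lower().split()
    if not query_terms:
        return 0.0
    return sum(t in block_terms for t in query_terms) / len(query_terms)

def best_block(blocks: list[str], query: str) -> str:
    """Return the block to display first on a small-screen device."""
    return max(blocks, key=lambda b: score(b, query))

blocks = [
    "site navigation home products contact",
    "cheap flights to tokyo compare airline prices and book online",
    "advertisement buy now limited offer",
]
print(best_block(blocks, "flights to tokyo"))
```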


2014 ◽  
Vol E97.D (2) ◽  
pp. 223-230 ◽  
Author(s):  
Jun ZENG ◽  
Brendan FLANAGAN ◽  
Sachio HIROKAWA ◽  
Eisuke ITO

2013 ◽  
Vol 347-350 ◽  
pp. 2479-2482
Author(s):  
Yao Hui Li ◽  
Li Xia Wang ◽  
Jian Xiong Wang ◽  
Jie Yue ◽  
Ming Zhan Zhao

The Web has become the largest information source, but noise content is an inevitable part of any web page. Noise content reduces the precision of search engines and increases the load on servers. Information extraction technology has been developed to address this, and it is mostly based on page segmentation. After analyzing the existing page segmentation methods, an approach to web page information extraction is proposed in which block nodes are identified by analyzing the attributes of HTML tags. The algorithm is easy to implement, and experiments demonstrate its good performance.
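The abstract names the key step, identifying block nodes from the attributes of HTML tags, without giving details. The sketch below is a hedged interpretation using Python's standard html.parser: common container tags are treated as block candidates, and those whose id or class hints at noise (ads, navigation) are filtered out. The tag set and hint list are assumptions, not the paper's rules.

```python
# Hedged sketch: flag likely content blocks from tag names and attributes.
from html.parser import HTMLParser

BLOCK_TAGS = {"div", "table", "td", "section", "article"}  # assumed set
NOISE_HINTS = ("ad", "banner", "nav", "footer")            # assumed hints

class BlockFinder(HTMLParser):
    def __init__(self):
        super().__init__()
        self.blocks = []  # (tag, id/class label) pairs kept as content blocks

    def handle_starttag(self, tag, attrs):
        if tag not in BLOCK_TAGS:
            return
        a = dict(attrs)
        label = ((a.get("id") or "") + " " + (a.get("class") or "")).strip()
        if not any(hint in label.lower() for hint in NOISE_HINTS):
            self.blocks.append((tag, label))

finder = BlockFinder()
finder.feed('<div id="main-content"><p>News text</p></div>'
            '<div class="ad-banner">Buy now!</div>')
print(finder.blocks)  # [('div', 'main-content')]
```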


2012 ◽  
Vol 204-208 ◽  
pp. 4928-4931
Author(s):  
Yang Xin Yu

A Web information retrieval algorithm based on page segments is designed. Its key idea is to segment each Web page into different topic areas, or segments, according to its HTML tags and contents, since Web pages are semi-structured. First, the algorithm builds an HTML tag tree; it then combines nodes in the tree under rules of content similarity and visual similarity. During retrieval and ranking, the algorithm makes full use of the segmentation information to rank the relevant pages. The experimental results show that this method significantly improves search precision, and it is also a good reference for the design of future search engines.
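As a hedged illustration of segment-aware ranking (not the paper's exact algorithm), the sketch below scores each page by its best-matching segment rather than by its whole text, using a simple cosine similarity over term frequencies. The pages, segments, and query are invented.

```python
# Hedged sketch: rank pages by their best-matching segment instead of
# by whole-page text, one plausible reading of segment-aware retrieval.
from collections import Counter
import math

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two term-frequency vectors."""
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def page_score(segments: list[str], query: str) -> float:
    """Score a page by its most relevant segment."""
    q = Counter(query.lower().split())
    return max(cosine(Counter(s.lower().split()), q) for s in segments)

pages = {
    "page_a": ["python tutorial for beginners", "footer links and copyright"],
    "page_b": ["latest celebrity news", "python snake care guide"],
}
query = "python tutorial"
for name, segs in sorted(pages.items(), key=lambda p: -page_score(p[1], query)):
    print(name, round(page_score(segs, query), 2))  # page_a 0.71, page_b 0.35
```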


2011 ◽  
Vol 4 (1) ◽  
Author(s):  
Noriyuki Matsuda ◽  
Haruhiko Takeuchi

Heat maps highlight cumulative, static importance in eye-tracking records, while network analysis helps to elucidate dynamic importance from transitional relations. The present study was designed to perform both analyses in the same conceptual framework, i.e., network representation. For this purpose, heat maps comprising 5 × 5 segments were overlaid with networks, both produced from the eye-tracking records of 20 subjects who read 10 top web pages classified into three layout types. The heat of the segments was graded on the basis of five percentile scores, whereas the core and peripheral nodes were identified by the agreement of centrality and ranking indices. The congruence between the two types of importance was generally good at the node and community levels. Additional findings included a) mixed patterns of sustained fixations (i.e., loops) within the total fixations, and b) an increase in reciprocity as the network scope was narrowed to communities and then to the core neighborhoods.
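As a rough, hypothetical sketch of the pipeline described here, the snippet below maps gaze points onto a 5 × 5 grid, builds a transition network, and computes per-segment heat, in-degree, and reciprocity. The gaze coordinates are invented, and the study's actual centrality and ranking indices are richer than this.

```python
# Hedged sketch: turn gaze points into a 5 x 5 segment transition network,
# then compute per-segment heat, in-degree, and overall reciprocity.
from collections import Counter

GRID = 5  # 5 x 5 segmentation, as in the study

def segment(x: float, y: float, w: float, h: float) -> int:
    """Map a gaze point to one of the 25 grid segments (0..24)."""
    col = min(int(x / w * GRID), GRID - 1)
    row = min(int(y / h * GRID), GRID - 1)
    return row * GRID + col

# Invented gaze samples (x, y) on a hypothetical 1000 x 800 page
gaze = [(120, 60), (480, 70), (500, 420), (130, 70), (480, 400)]
nodes = [segment(x, y, 1000, 800) for x, y in gaze]

heat = Counter(nodes)                    # static importance per segment
edges = Counter(zip(nodes, nodes[1:]))   # fixation transitions
indegree = Counter()
for (_src, dst), n in edges.items():
    indegree[dst] += n
recip = sum(1 for e in edges if (e[1], e[0]) in edges) / len(edges)

print("heat:", dict(heat))
print("in-degree:", dict(indegree))
print("reciprocity:", round(recip, 2))
```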


Phishing attacks play a major role in attacks on web-based applications. A great deal of work has been carried out over the years to find a solution to this problem, but none has fully solved it. Existing solutions suffer from drawbacks such as the potential to compromise user privacy, which is part of what makes phishing attacks on websites difficult to detect. In addition, website content changes dynamically, and detection confidence depends on features of specific portions of the data. To address these issues, a new direction for detecting phishing attacks in web pages is proposed here. The proposed system infers the limits of phishing pages from the constraints attackers face while building them. The implementation of this approach, Off-the-Hook, focuses on high precision, brand independence, and semantic independence. Off-the-Hook is built as a fully client-side browser add-on, which protects user privacy. Additionally, Off-the-Hook identifies the target website that a phishing page is attempting to imitate and includes this target in its warning. The proposed method and its genetic algorithm are evaluated in the user studies below.
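The abstract gives no implementation detail; as a loose illustration of client-side, feature-based phishing detection, the sketch below scores a URL against a few well-known heuristics (IP-address hosts, '@' in the URL, many subdomains, missing HTTPS). The features and threshold are illustrative assumptions and are not Off-the-Hook's actual model.

```python
# Hedged sketch: a few classic URL heuristics for phishing detection.
# These features are illustrative; they are not Off-the-Hook's model.
import re
from urllib.parse import urlparse

def phishing_signals(url: str) -> dict:
    """Compute simple boolean phishing indicators for a URL."""
    host = urlparse(url).hostname or ""
    return {
        "ip_host": bool(re.fullmatch(r"(\d{1,3}\.){3}\d{1,3}", host)),
        "at_in_url": "@" in url,
        "many_subdomains": host.count(".") >= 3,
        "no_https": not url.startswith("https://"),
    }

def is_suspicious(url: str, threshold: int = 2) -> bool:
    """Flag a URL when enough heuristics fire (threshold is an assumption)."""
    return sum(phishing_signals(url).values()) >= threshold

print(is_suspicious("http://192.168.10.5/paypal.login/verify"))  # True
print(is_suspicious("https://www.paypal.com/signin"))            # False
```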


2020 ◽  
Vol 14 ◽  
Author(s):  
Shefali Singhal ◽  
Poonam Tanwar

Abstract: Nowadays, when everything is being digitalized, the internet and the web play a vital role in everyone's life. Whenever one has a question or an online task to perform, one uses the internet to access the relevant web pages. These pages are mainly designed for large-screen terminals, but for reasons of mobility, convenience, and economy most people use small-screen terminals (SSTs) such as mobile phones, palmtops, pagers, and tablet computers. Reading a web page designed for a large screen on a small screen is time-consuming and cumbersome, because many irrelevant content parts and advertisements must be scrolled past. The main concern here is e-business users. To overcome these issues, the source code of a web page is organized in a tree data structure: each main heading becomes a root node, and all the content under that heading becomes its child nodes in the logical structure. Using this structure, a web page is regenerated automatically according to the SST's screen size. Background: The DOM and VIPS algorithms are the main background techniques supporting the current research. Objective: To restructure a web page into a more user-friendly, content-centred format. Method: Backtracking. Results: Web page heading queue generation. Conclusion: The concept of a logical structure supports every SST.
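As a minimal sketch of the heading-as-root idea (the heading tag set and the use of Python's standard html.parser are assumptions), the snippet below collects each heading together with the content that follows it, producing the kind of heading queue the paper describes, from which per-heading small-screen views could be regenerated.

```python
# Hedged sketch: collect (heading, content) pairs from a page so each
# heading's block can be re-rendered as one small-screen view.
from html.parser import HTMLParser

HEADINGS = {"h1", "h2", "h3"}  # assumed heading tags

class HeadingQueue(HTMLParser):
    def __init__(self):
        super().__init__()
        self.queue = []          # [(heading text, [content chunks])]
        self.in_heading = False

    def handle_starttag(self, tag, attrs):
        if tag in HEADINGS:
            self.in_heading = True

    def handle_endtag(self, tag):
        if tag in HEADINGS:
            self.in_heading = False

    def handle_data(self, data):
        text = data.strip()
        if not text:
            return
        if self.in_heading:
            self.queue.append((text, []))   # new root node
        elif self.queue:
            self.queue[-1][1].append(text)  # child of the current heading

parser = HeadingQueue()
parser.feed("<h1>News</h1><p>Story one.</p><h2>Sports</h2><p>Score.</p>")
for heading, content in parser.queue:
    print(heading, "->", content)
```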

