Interactive Search vs. Automatic Search

Author(s):  
Phuong-Anh Nguyen ◽  
Chong-Wah Ngo

This article conducts a user evaluation to study the performance difference between interactive and automatic search. In particular, the study aims to provide empirical insight into how the performance landscape of video search changes when tens of thousands of concept detectors are freely available to exploit for query formulation. We compare three types of search modes: free-to-play (i.e., search from scratch), non-free-to-play (i.e., search by inspecting results provided by automatic search), and automatic search, including both concept-free and concept-based retrieval paradigms. The study involves a total of 40 participants; each performs interactive search over 15 queries of various difficulty levels using two search modes on the IACC.3 dataset provided by the TRECVid organizers. The study suggests that the performance of automatic search still lags far behind interactive search. Furthermore, providing users with the results of automatic search for exploration shows no obvious advantage over asking users to search from scratch. The study also analyzes user behavior to reveal how users compose queries, browse results, and discover new query terms for search, which can serve as a guideline for future research on both interactive and automatic search.

1996 ◽  
Vol 13 (4) ◽  
pp. 347-356 ◽  
Author(s):  
Janice Causgrove Dunn ◽  
E. Jane Watkinson

This study investigated whether the TOMI (Stott, Moyes, & Henderson, 1984), a motor skills test recommended for the identification of children who are physically awkward (Sugden, 1985; Wall, Reid, & Paton, 1990), contains biased items. The findings of a study by Causgrove and Watkinson (1993) indicated that an unexpectedly high proportion of girls from Grades 3 to 6 were identified as physically awkward, and the authors suggested that the TOMI may be biased in favor of boys. In the present study, this suggestion was investigated by comparing the performance of boys and girls from Grades 1 to 6 on the TOMI subtest items. Chi-square analyses of each of the eight test items revealed significant performance differences between boys and girls on the two ball skills tasks of catching and throwing (p < .0001) at Age Bands 3 and 4; a significantly greater proportion of boys than girls aged 9 to 12 years passed the catching and throwing tasks. A significant performance difference was also found on the tracing task at Age Band 1, with more girls than boys passing tracing. Implications for future research requiring the identification of children who are physically awkward are discussed.
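The per-item chi-square comparison described above can be sketched for a single 2×2 pass/fail table; the counts below are hypothetical, not the study's data, and the shortcut formula is the standard one for 2×2 tables.

```python
def chi_square_2x2(a, b, c, d):
    """Pearson chi-square statistic for a 2x2 contingency table
    [[a, b], [c, d]] (e.g., rows = boys/girls, cols = pass/fail)."""
    n = a + b + c + d
    # Shortcut for 2x2 tables: N * (ad - bc)^2 / product of the four margins
    return n * (a * d - b * c) ** 2 / ((a + b) * (c + d) * (a + c) * (b + d))

# Hypothetical pass/fail counts for the catching task at one age band
boys_pass, boys_fail = 45, 5
girls_pass, girls_fail = 28, 22
chi2 = chi_square_2x2(boys_pass, boys_fail, girls_pass, girls_fail)
print(round(chi2, 2))  # compare against the critical value (3.84 at p = .05, df = 1)
```

With these illustrative counts the statistic far exceeds the df = 1 critical value, i.e., the pass rates would differ significantly by sex.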


2021 ◽  
Author(s):  
Anne Oeldorf-Hirsch ◽  
German Neubaum

The increasing presence of algorithms in our daily use of technologies comes with a growing field of empirical research trying to understand how aware and knowledgeable individuals are about algorithms. This field is marked by a certain diversity in how it theorizes and measures people's literacy when interacting with algorithms. We propose converging on the term algorithmic literacy, which covers the different dimensions used by previous research. This article summarizes the state of knowledge on algorithmic literacy by systematically presenting initial steps in theory building and measurement development. Drawing on this, we propose an agenda of five directions that future research could focus on: 1) theory building to understand algorithmic literacy, 2) addressing the algorithmic divide, 3) uncovering the relationship between algorithmic literacy and attitudes, 4) examining algorithmic literacy as a predictor of user behavior, and 5) exploring ways to increase algorithmic literacy.


10.28945/4176 ◽  
2019 ◽  
Vol 14 ◽  
pp. 027-044 ◽  
Author(s):  
Da Thon Nguyen ◽  
Hanh T Tan ◽  
Duy Hoang Pham

Aim/Purpose: In this article, we provide a better solution to webpage access prediction. In particular, our core approach increases accuracy and efficiency by reducing the sequence space through the integration of PageRank into CPT+. Background: The problem of predicting the next page on a website has become significant because of the non-stop growth of the Internet in terms of both the volume of content and the mass of users. Webpage prediction is complex because multiple kinds of information must be considered, such as the webpage name, the contents of the webpage, the user profile, the time between webpage visits, differences among users, and the time spent on a page or on each part of the page. Therefore, webpage access prediction draws substantial effort from the web mining research community in order to obtain valuable information and improve the user experience. Methodology: CPT+ is a complex prediction algorithm that offers dramatically more accurate predictions than other state-of-the-art models. Integrating the importance of each page on a website (i.e., its PageRank) with respect to its associations with other pages into the CPT+ model can improve the performance of the existing model. Contribution: In this paper, we propose an approach that reduces the prediction space while improving accuracy by combining the CPT+ and PageRank algorithms. Experimental results on several real datasets indicate that the space is reduced by 15% to 30%; as a result, the run-time is quicker and the prediction accuracy is improved. This makes it practical for researchers to continue using CPT+ to predict webpage access. Findings: Our experimental results indicate that the PageRank algorithm is a good way to improve CPT+ prediction: approximately 15% to 30% of redundant data is removed from the datasets while accuracy improves.
Recommendations for Practitioners: The results of the article could be used in developing relevant applications such as webpage and product recommendation systems. Recommendation for Researchers: The paper provides a prediction model that integrates the CPT+ and PageRank algorithms to tackle the problems of complexity and accuracy. The model has been evaluated against several real datasets to demonstrate its performance. Impact on Society: Given an improved model for predicting webpage access, usable in fields such as e-learning, product recommendation, link prediction, and user behavior prediction, society can enjoy a better experience and a more efficient environment while surfing the Web. Future Research: We intend to further improve the accuracy of webpage access prediction by combining CPT+ with other algorithms.
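A minimal sketch of the idea of using PageRank to shrink the sequence space before training a predictor such as CPT+; the link graph, the pruning ratio, and the function names are hypothetical, not the paper's implementation.

```python
def pagerank(links, d=0.85, iters=50):
    """Power-iteration PageRank over a link graph {page: [outlinks]}."""
    pages = set(links) | {p for outs in links.values() for p in outs}
    n = len(pages)
    rank = {p: 1.0 / n for p in pages}
    for _ in range(iters):
        new = {p: (1 - d) / n for p in pages}  # teleportation term
        for p, outs in links.items():
            if outs:
                share = d * rank[p] / len(outs)
                for q in outs:
                    new[q] += share
            else:  # dangling page: spread its rank uniformly
                for q in pages:
                    new[q] += d * rank[p] / n
        rank = new
    return rank

def prune_sequences(sequences, rank, keep_ratio=0.7):
    """Drop the lowest-ranked pages from each training sequence,
    shrinking the space a sequence predictor has to model."""
    by_rank = sorted(rank, key=rank.get, reverse=True)
    kept = set(by_rank[: max(1, int(len(by_rank) * keep_ratio))])
    return [[p for p in seq if p in kept] for seq in sequences]

links = {"home": ["a", "b"], "a": ["home", "b"], "b": ["home"], "c": ["home"]}
rank = pagerank(links)
seqs = prune_sequences([["home", "c", "a", "b"]], rank)
```

After pruning, only the high-importance pages remain in the training sequences, which is the space reduction the paper attributes to the PageRank integration.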


2020 ◽  
Vol 28 (3) ◽  
pp. 81-91
Author(s):  
Tetyana S. Dronova ◽  
Yana Y. Trygub

Purpose – to study the operation and content of a travel agency website, using the "Laspi" travel agency as an example, to identify its technical properties, and to offer methods to raise the web resource's position in the Yandex and Google search engines by performing an SEO analysis. Design/Method/Research approach. SEO analysis of Internet resources. Findings. Travel product promotion directly depends on the effectiveness of the advertising tools of travel market participants, mainly travel agents. One of the new technologies that increases advertising effectiveness, in particular via travel agencies' web resources, is SEO technology. The authors identified technical shortcomings in the site's operation, mainly related to search query statistics, visits to the site, the operation of the semantic core, site improvement, increasing the site's citation, and the number of persistent references to it in the network. It is shown that updating the site, changing its environment, and analyzing user behavior, namely via Open Graph (og) property micro-markup, updated HTML tags, the placement of analytics programs, the selection of iframe objects, and other activities, increase the uniqueness of the content. As a result, search engines rescanned the site, and it reached first place in the search results for the positions essential to the web resource. Originality/Value. Applying the proposed mechanism for raising a site's position and optimizing its operation allows search engines to bring it to the top of the most popular travel sites. Theoretical implications. To optimize the operation of the web resource, a three-step mechanism for improving its position is proposed: a general marketing characterization of the website, an SEO analysis, and the provision of recommendations. Practical implications. The research is of practical use in improving the site's technical operation and raising its position in the Yandex and Google search engines. Research limitations/Future research. Further research will analyze the site again after the proposed changes to its operation have been made. Paper type – empirical.


Energies ◽  
2020 ◽  
Vol 13 (21) ◽  
pp. 5617
Author(s):  
Michel Zade ◽  
Zhengjie You ◽  
Babu Kumaran Nalini ◽  
Peter Tzscheutschler ◽  
Ulrich Wagner

The adoption of electric vehicles is incentivized by governments around the world to decarbonize the mobility sector. Simultaneously, the continuously increasing number of renewable energy sources and electric devices such as heat pumps and electric vehicles leads to congested grids. To meet this challenge, several forms of flexibility markets are currently being researched. So far, no analysis has calculated the actual flexibility potential of electric vehicles under different operating strategies, electricity tariffs, and charging power levels while taking realistic user behavior into account. Therefore, this paper presents a detailed case study of the flexibility potential of electric vehicles for fixed and dynamic prices and three charging power levels, in consideration of Californian and German user behavior. The model developed uses vehicle and mobility data that are publicly available from field trials in the USA and Germany, cost-optimizes the charging process of the vehicles, and then calculates the flexibility of each electric vehicle in every 15-min interval. The results show that positive flexibility is mostly available during either the evening or the early morning hours. Negative flexibility follows the periodic vehicle availability at home if the user chooses to charge the vehicle as late as possible. Increased charging power levels lead to increased amounts of flexibility. Future research will focus on the integration of stochastic forecasts for vehicle availability and electricity tariffs.
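The cost-optimization and per-interval flexibility calculation described above can be sketched as follows; the tariff, availability window, and vehicle parameters are hypothetical, and the greedy cheapest-slot schedule is a simplification of the paper's optimization.

```python
def schedule_and_flexibility(prices, available, demand_kwh, p_max_kw, dt_h=0.25):
    """Fill the cheapest available 15-min slots until the energy demand is met,
    then report per-slot flexibility:
      positive = scheduled charging power that could be deferred,
      negative = unused headroom that could absorb extra energy."""
    order = sorted((i for i, a in enumerate(available) if a), key=lambda i: prices[i])
    plan = [0.0] * len(prices)
    remaining = demand_kwh
    for i in order:
        if remaining <= 0:
            break
        p = min(p_max_kw, remaining / dt_h)  # don't overshoot the demand
        plan[i] = p
        remaining -= p * dt_h
    pos = list(plan)
    neg = [(p_max_kw - plan[i]) if available[i] else 0.0 for i in range(len(prices))]
    return plan, pos, neg

# Hypothetical evening: 8 slots of 15 min, vehicle plugged in for the last 6
prices = [0.30, 0.28, 0.25, 0.20, 0.18, 0.18, 0.22, 0.26]  # EUR/kWh
available = [False, False, True, True, True, True, True, True]
plan, pos, neg = schedule_and_flexibility(prices, available,
                                          demand_kwh=11.0, p_max_kw=11.0)
```

In this toy setting the charger fills the four cheapest plugged-in slots, so positive flexibility clusters in the low-price intervals and negative flexibility appears wherever the vehicle is at home but not charging at full power.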


2009 ◽  
Vol 19 (11) ◽  
pp. 3023-3032 ◽  
Author(s):  
Yi-Qun LIU ◽  
Rong-Wei CEN ◽  
Min ZHANG ◽  
Li-Yun RU ◽  
Shao-Ping MA

Author(s):  
Enrique Alba ◽  
Javier Ferrer ◽  
Ignacio Villalobos

This work aims to give an updated vision of the successful combination of metaheuristics and software engineering (SE). Mostly during the 1990s, varied groups of researchers dealing with search, optimization, and learning (SOL) met SE researchers, all of them looking for a quantified manner of modeling and solving problems in the software field. This paper discusses the construction, assessment, and exploitation tasks that help make software programs a scientific object, subject to automatic study and control. We also want to show, with several case studies, how the quantification of software features and the automatic search for bugs can improve the software quality process, which eases compliance with ISO/IEEE standards. In short, we want to build intelligent automatic tools that will upgrade the quality of software products and services. Since we approach this new field as a cross-fertilization between two research domains, we need to talk not only about metaheuristics for SE (well known by now) but also about SE for metaheuristics (not so well known nowadays). In summary, we discuss three time horizons: the old times, before the term search-based SE (SBSE) was coined; the recent years of SBSE; and the many avenues for future research and development. A new body of knowledge in SOL and SE exists internationally, and it is producing a new class of researchers capable of building intelligent techniques for the benefit of software, that is, of modern societies.
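As a toy illustration of the "automatic search for bugs" mentioned above, the sketch below runs a fitness-guided local search for an input that reaches a faulty branch; the function, the fault, and the search parameters are invented for the example, not taken from the paper.

```python
def branch_distance(x, y):
    """Fitness for the search: distance to the condition x - y == 10
    that guards the faulty branch (0 means the branch is reached)."""
    return abs((x - y) - 10)

def faulty_mid(x, y):
    # Seeded toy fault: one narrow branch returns the wrong value
    if x - y == 10:
        return y          # bug: should be (x + y) // 2 like below
    return (x + y) // 2

def search_for_failure(x=37, y=-4, budget=1000):
    """Distance-guided local search: greedily step the input toward the
    faulty branch until the fitness hits zero (a failing test input)."""
    for _ in range(budget):
        if branch_distance(x, y) == 0:
            return x, y
        x += -1 if (x - y) > 10 else 1  # step that shrinks the distance
    return None

hit = search_for_failure()  # an input exposing the fault
```

Real SBSE tools use the same principle, branch-distance fitness functions driving a metaheuristic, but with genetic algorithms or similar search over whole test suites rather than this one-dimensional greedy walk.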


2007 ◽  
Vol 76 (11-12) ◽  
pp. 780-789 ◽  
Author(s):  
Gondy Leroy ◽  
Jennifer Xu ◽  
Wingyan Chung ◽  
Shauna Eggers ◽  
Hsinchun Chen

2009 ◽  
Vol 1 (1) ◽  
pp. 22-37 ◽  
Author(s):  
Antony Millner

Abstract: Understanding the economic value of weather and climate forecasts is of tremendous practical importance. Traditional models that have attempted to gauge forecast value have focused on a best-case scenario, in which forecast users are assumed to be statistically sophisticated, hyperrational decision makers with perfect knowledge and understanding of forecast performance. These models provide a normative benchmark for assessing forecast value, but say nothing about the value that actual forecast users realize. Real forecast users are subject to a variety of behavioral effects and informational constraints that violate the assumptions of normative models. In this paper, one of the normative assumptions about user behavior is relaxed—users are no longer assumed to be in possession of a perfect statistical understanding of forecast performance. In the case of a cost–loss decision, it is shown that a model of users’ forecast use choices based on the psychological theory of reinforcement learning leads to a behavioral adjustment factor that lowers the relative value score that the user achieves. The dependence of this factor on the user’s decision parameters (the ratio of costs to losses) and the forecast skill is deduced. Differences between the losses predicted by the behavioral and normative models are greatest for users with intermediate cost–loss ratios, and when forecasts have intermediate skill. The relevance of the model as a tool for directing user education initiatives is briefly discussed, and a direction for future research is proposed.
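The normative cost–loss value calculation can be sketched as below. The behavioral adjustment is represented here only by a probability of following the forecast, an illustrative stand-in for the paper's reinforcement-learning model; all numbers are hypothetical.

```python
def expected_expense(alpha, p, hit, false_alarm, p_follow=1.0):
    """Expected expense (in units of the loss L) for a cost-loss user.
    alpha = C/L, p = climatological event probability,
    hit / false_alarm = forecast hit and false-alarm rates,
    p_follow = probability the user actually acts on the forecast
    (1.0 recovers the normative, perfectly trusting user)."""
    # Following the forecast: pay the cost on every warning, the loss on a miss
    follow = alpha * (p * hit + (1 - p) * false_alarm) + p * (1 - hit)
    # Ignoring it: best fixed action, i.e., protect always or never
    ignore = min(alpha, p)
    return p_follow * follow + (1 - p_follow) * ignore

def value_score(alpha, p, hit, false_alarm, p_follow=1.0):
    """Relative value: 1 = perfect forecast use, 0 = no better than climatology."""
    e_clim = min(alpha, p)   # best expense with climatology only
    e_perfect = alpha * p    # protect exactly when the event occurs
    e_user = expected_expense(alpha, p, hit, false_alarm, p_follow)
    return (e_clim - e_user) / (e_clim - e_perfect)

v_norm = value_score(0.3, 0.4, hit=0.8, false_alarm=0.1)
v_behav = value_score(0.3, 0.4, hit=0.8, false_alarm=0.1, p_follow=0.7)
```

Any p_follow below 1 pulls the achieved score toward the climatological baseline, which is the direction of the behavioral adjustment factor the paper derives.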


Author(s):  
Hamid Afshari ◽  
Qingjin Peng

One of the major concerns for adaptable products is ensuring that the products meet customer preferences. As customers may update their preferences over the product lifetime, designers need methods to measure those preferences. A lack of knowledge (uncertainty) about customer preferences could endanger a product's success. If designers can update their views of customer requirements, a product can be designed to follow the user requirements. Huge amounts of data are generated continuously on product user behavior, product usage, manufacturing cost, etc., now called Big Data. Collecting, managing, and applying such huge sets of data in an innovative way can reduce uncertainties. In this paper, a method is discussed to minimize uncertainty effects on products to improve product adaptability. Uncertainty is considered as changes in customer preference. The proposed method uses Big Data (BD) in the analysis of uncertainty, and the effect of quantified uncertainties on product adaptability is investigated. The method concludes with the most affected parts and the functional requirements to be updated to meet changing requirements. The proposed method is compared to a previously developed agent-based modeling (ABM) method in a case study. Although there are some differences between the two methods in the evaluation of uncertainty effects, the BD method provides more confidence in the design solution. The paper also proposes some future research directions for the design of adaptable products using Big Data.

