TRENDS: How Internet Search Undermines the Validity of Political Knowledge Measures

2019, Vol 73 (1), pp. 141-155
Author(s): Brianna Smith, Scott Clifford, Jennifer Jerit

Political knowledge is central to understanding citizens’ engagement with politics. Yet, as surveys are increasingly conducted online, participants’ ability to search the web may undermine the validity of factual knowledge measures. Recent research shows this search behavior is common, even when respondents are instructed otherwise. However, we know little about how outside search affects the validity of political knowledge measures. Using a series of experimental and observational studies, we provide consistent evidence that outside search degrades the validity of political knowledge measures. Our findings imply that researchers conducting online surveys need to take steps to discourage and diagnose search engine use.
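As a rough illustration of the closing advice to "discourage and diagnose search engine use," the sketch below flags respondents whose correct answers coincide with long item latencies or tab-switching. The record fields, thresholds, and flagging rule are illustrative assumptions, not measures taken from the study.

# Hypothetical screening sketch: flag respondents whose answer pattern suggests
# outside search (correct answers paired with long latencies or off-page events).
# Field names and thresholds are illustrative assumptions, not from the article.
from dataclasses import dataclass
from typing import List

@dataclass
class ItemResponse:
    correct: bool          # did the respondent answer the knowledge item correctly?
    latency_sec: float     # time spent on the item
    blur_events: int       # times the survey tab lost focus (reported by the platform)

def flag_possible_search(responses: List[ItemResponse],
                         latency_cutoff: float = 20.0,
                         blur_cutoff: int = 1) -> bool:
    """Return True if correct-but-slow or off-page answers suggest lookup behavior."""
    suspicious = [
        r for r in responses
        if r.correct and (r.latency_sec > latency_cutoff or r.blur_events >= blur_cutoff)
    ]
    # Flag when a third or more of the correct answers look suspicious.
    n_correct = sum(r.correct for r in responses)
    return n_correct > 0 and len(suspicious) / n_correct >= 1 / 3

# Example: one fast correct answer, one slow correct answer with a tab switch.
print(flag_possible_search([ItemResponse(True, 6.2, 0), ItemResponse(True, 41.0, 2)]))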

2011, pp. 1390-1397
Author(s): Emma J. Stodel, Laura G. Farres, Colla J. MacDonald

The idea of providing mental training (MT) and sport psychology services online is becoming more prevalent as technology continues to shape education and the Web grows in popularity. In September 2000, an Internet search for “mental training” using the Google search engine identified 11,700 sites (Stodel & Farres, 2000a). An identical search in March 2004 revealed approximately 74,700 sites, roughly six times as many. Although a dynamic and fully interactive online MT environment does not yet appear to have been realised, it surely will not be long before this happens. In this chapter we highlight the importance of thoughtful design when developing such training and present a framework to guide the development of online MT.


Information, 2021, Vol 12 (7), pp. 259
Author(s): Ioannis Drivas, Dimitrios Kouis, Daphne Kyriaki-Manessi, Georgios Giannakopoulos

While the digitalization of cultural organizations is in full swing, it is widely recognized that websites can serve as a beacon for expanding awareness and consideration of their services on the Web. Nevertheless, recent research results point to the managerial difficulties of deploying strategies that expand the discoverability, visibility, and accessibility of these websites. In this paper, a three-stage, data-driven Search Engine Optimization (SEO) schema is proposed for assessing the performance of Libraries, Archives, and Museums (LAM) websites, thus helping administrators expand their discoverability, visibility, and accessibility within the Web realm. To do so, the authors examine the performance of 341 related websites from all over the world on three factors: Content Curation, Speed, and Security. The first stage presents a statistically reliable and consistent assessment schema that integrates more than 30 variables to evaluate the SEO performance of LAM websites. The second stage provides a descriptive data summarization that yields initial performance estimates for the examined websites on each factor. In the third stage, predictive regression models are developed to understand and compare the SEO performance of the three Content Management System approaches that LAM websites have adopted, namely Drupal, WordPress, and custom builds. The results of this study constitute a solid stepping-stone for practitioners and researchers alike to adopt and improve methods that focus on end users and to foster organizational structures and cultures that rely on data-driven approaches for expanding the visibility of LAM services.
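As an illustration of the third stage, the sketch below regresses a composite SEO score on CMS type with statsmodels. The data frame, the equal-weight composite, and the column names are hypothetical stand-ins; the paper's own variable set (30+ metrics across Content Curation, Speed, and Security) and model specification are not reproduced here.

# Minimal sketch of the third-stage idea: regress an overall SEO score on the
# CMS type (Drupal, WordPress, custom). The data and column names are
# hypothetical stand-ins for the paper's 341-website dataset.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.DataFrame({
    "cms":      ["drupal", "wordpress", "custom", "drupal", "wordpress", "custom"],
    "content":  [0.71, 0.64, 0.55, 0.68, 0.61, 0.50],   # Content Curation factor (0-1)
    "speed":    [0.62, 0.58, 0.44, 0.66, 0.57, 0.41],   # Speed factor (0-1)
    "security": [0.80, 0.69, 0.52, 0.77, 0.66, 0.49],   # Security factor (0-1)
})

# Equal-weight composite of the three factors (an illustrative choice, not the paper's).
df["seo_score"] = df[["content", "speed", "security"]].mean(axis=1)

# Compare CMS approaches with an ordinary least squares model.
model = smf.ols("seo_score ~ C(cms)", data=df).fit()
print(model.summary())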


2020, Vol 4 (2), pp. 5
Author(s): Ioannis C. Drivas, Damianos P. Sakas, Georgios A. Giannakopoulos, Daphne Kyriaki-Manessi

In the Big Data era, search engine optimization deals with the encapsulation of datasets related to website performance in terms of architecture, content curation, and user behavior, with the aim of converting them into actionable insights that improve visibility and findability on the Web. In this respect, big data analytics expands the opportunities for developing new methodological frameworks built on valid, reliable, and consistent analytics that support well-informed strategies for organic traffic optimization. In this paper, a novel methodology is implemented to increase organic search engine visits based on the impact of multiple SEO factors. To achieve this, the authors examined 171 cultural heritage websites together with the retrieved analytics describing their performance and the user experience within them. Cultural heritage organizations present massive Web-based collections through their websites. Users then interact with these collections, producing behavioral analytics of many different data types that arrive from multiple devices, at high velocity and in large volumes. Nevertheless, prior research indicates that these massive cultural collections are difficult to browse and exhibit low visibility and findability in the semantic Web era. Against this backdrop, this paper proposes the computational development of a search engine optimization (SEO) strategy that utilizes the generated big cultural data analytics to improve the visibility of cultural heritage websites. Going one step further, the statistical results of the study are integrated into a two-stage predictive model: first, a fuzzy cognitive mapping process serves as an aggregated, macro-level descriptive model; second, a micro-level, data-driven agent-based model follows. The purpose of the model is to predict the most effective combinations of factors for achieving enhanced visibility and organic traffic on cultural heritage organizations’ websites. To this end, the study expands the knowledge of researchers and practitioners in the big cultural analytics sector who aim to implement strategies for greater visibility and findability of cultural collections on the Web.
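The macro-level stage can be pictured with a toy fuzzy cognitive map in which SEO factors causally influence organic traffic and concept activations are iterated to a steady state. The concepts, weights, and update rule below follow generic FCM conventions chosen for illustration; they are not values estimated in the study.

# A minimal fuzzy cognitive map (FCM) sketch: concepts (SEO factors and organic
# traffic) are activated and iterated through a weighted causal matrix until the
# state stabilizes. Concept names and weights are illustrative assumptions.
import numpy as np

concepts = ["content_curation", "speed", "security", "organic_traffic"]

# W[i, j] = causal influence of concept i on concept j (rows -> columns).
W = np.array([
    [0.0, 0.0, 0.0, 0.6],   # content curation boosts organic traffic
    [0.0, 0.0, 0.0, 0.4],   # speed boosts organic traffic
    [0.0, 0.3, 0.0, 0.2],   # security mildly boosts speed and traffic
    [0.0, 0.0, 0.0, 0.0],   # organic traffic is an outcome concept
])

def run_fcm(state: np.ndarray, W: np.ndarray, steps: int = 20) -> np.ndarray:
    """Iterate A(t+1) = sigmoid(A(t) + A(t) @ W) until convergence or max steps."""
    for _ in range(steps):
        new_state = 1.0 / (1.0 + np.exp(-(state + state @ W)))  # sigmoid squashing
        if np.allclose(new_state, state, atol=1e-4):
            break
        state = new_state
    return state

initial = np.array([0.8, 0.5, 0.6, 0.3])   # hypothetical activation scenario
print(dict(zip(concepts, run_fcm(initial, W).round(3))))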


2001, Vol 1 (3), pp. 28-31
Author(s): Valerie Stevenson

Looking back to 1999, there were a number of search engines which performed equally well. I recommended defining the search strategy very carefully, using Boolean logic and field search techniques, and always running the search in more than one search engine. Numerous articles and Web columns comparing the performance of different search engines came to different conclusions on the ‘best’ search engines. Over the last year, however, all the speakers at conferences and seminars I have attended have recommended Google as their preferred tool for locating all kinds of information on the Web. I confess that I have now abandoned most of my carefully worked out search strategies and comparison tests, and use Google for most of my own Web searches.


2016, Vol 65 (1), pp. 61-80
Author(s): Nicholas Clark

While much is known about the micro-level predictors of political knowledge, there have been relatively few efforts to study the potential macro-level causes of knowledge. Seeking to improve our understanding of country-based variation in knowledge, this article demonstrates that individuals have an easier time finding and interpreting information in political environments that provide the public with greater opportunities to engage, observe, and learn about the political process. To investigate that possibility, the article analyzes how the procedural quality of the political process affects political knowledge. Using data from the Comparative Study of Electoral Systems and the Worldwide Governance Indicators Project, survey analyses show that the transparency and responsiveness of a political system indeed influence the public’s information about political parties and, to a lesser extent, the amount of factual knowledge retained by survey respondents. In other words, the quality of democratic governance affects how much individuals know about the political process.
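A hedged sketch of the general analytic setup described here, individual-level knowledge scores modeled against a country-level governance indicator, might look like the following; the variable names, data, and pooled OLS specification are illustrative and simpler than the article's actual survey analyses of the CSES and WGI data.

# Illustrative sketch only: individual knowledge scores (CSES-style) regressed on
# a country-level governance indicator (WGI-style) plus an individual control.
# All names and values here are hypothetical.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.DataFrame({
    "country":   ["A", "A", "A", "B", "B", "B", "C", "C", "C"],
    "knowledge": [2, 3, 1, 4, 3, 4, 1, 2, 2],                        # items answered correctly
    "voice":     [0.2, 0.2, 0.2, 1.1, 1.1, 1.1, -0.5, -0.5, -0.5],   # WGI-style governance score
    "education": [12, 16, 10, 14, 18, 12, 9, 11, 13],                # years of schooling
})

# Pooled OLS as a simplified stand-in for the article's cross-national models.
model = smf.ols("knowledge ~ voice + education", data=df).fit()
print(model.summary())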


2017, Vol 19 (1), pp. 46-49
Author(s): Mark England, Lura Joseph, Nem W. Schlect

Two locally created databases are made available to the world via the Web using an inexpensive but highly functional search engine created in-house. The technology consists of a microcomputer running UNIX to serve relational databases. CGI forms created using the programming language Perl offer flexible interface designs for database users and database maintainers.
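The original scripts were written in Perl; the sketch below restates the same CGI-plus-relational-database pattern in Python for illustration. The table, columns, and sample rows are hypothetical, not the locally created databases described in the article.

#!/usr/bin/env python3
# Minimal CGI-style sketch (not the authors' code): read a search term from the
# query string, query a relational database, and print an HTML hit list.
# The in-memory table stands in for the locally created databases.
import os
import sqlite3
from urllib.parse import parse_qs
from html import escape

def build_demo_db() -> sqlite3.Connection:
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE records (title TEXT, author TEXT)")
    conn.executemany("INSERT INTO records VALUES (?, ?)", [
        ("Sample record one", "Example, A."),
        ("Sample record two", "Example, B."),
    ])
    return conn

def main() -> None:
    params = parse_qs(os.environ.get("QUERY_STRING", ""))   # e.g. q=sample
    term = params.get("q", [""])[0]

    conn = build_demo_db()
    rows = conn.execute(
        "SELECT title, author FROM records WHERE title LIKE ?",
        (f"%{term}%",),                                      # parameterized query
    ).fetchall()

    print("Content-Type: text/html\n")                       # CGI header, then blank line
    print("<ul>")
    for title, author in rows:
        print(f"<li>{escape(title)} ({escape(author)})</li>")
    print("</ul>")

if __name__ == "__main__":
    main()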

