Search Engines, Personal Information and the Problem of Privacy in Public

2005 ◽  
Vol 3 ◽  
pp. 39-45
Author(s):  
Herman T Tavani

The purpose of this paper is to show how certain uses of search-engine technology raise concerns for personal privacy. In particular, we examine some privacy implications involving the use of search engines to acquire information about persons. We consider both a hypothetical scenario and an actual case in which one or more search engines are used to find information about an individual. In analyzing these two cases, we note that both illustrate an existing problem that has been exacerbated by the use of search engines and the Internet – viz., the problem of articulating key distinctions involving the public vs. private aspects of personal information. We then draw a distinction between “public personal information” (or PPI) and “nonpublic personal information” (or NPI) to see how this scheme can be applied to a problem of protecting some forms of personal information that are now easily manipulated by computers and search engines – a concern that, following Helen Nissenbaum (1998, 2004), we describe as the problem of privacy in public.

2015 ◽  
Vol 5 (1) ◽  
pp. 51-64
Author(s):  
Tinu Jain ◽  
Prashant Mishra

The internet is quickly becoming the public electronic marketplace. Although the internet has revolutionized retail and direct marketing, full-scale incorporation and acceptance of the internet marketplace into modern business remains limited. One major inhibition among internet buyers is a lack of confidence in the newly developed marketing machinery and technology, together with fear and distrust about the loss of personal privacy, since marketers gain easy access to personal information. Concern about personal information and privacy varies among consumers and, especially, across countries, and the importance attached to different privacy dimensions is also likely to vary. This research investigates these dimensions and their interrelationships to identify the factors affecting privacy concern among Net buyers in India.


Author(s):  
Novario Jaya Perdana

The accuracy of a search-engine result depends on the keywords used. Insufficient information in the keywords can reduce the accuracy of the results, which makes searching for information on the internet hard work. In this research, software was built to create document keyword sequences. The software uses Google Latent Semantic Distance, which can extract relevant information from a document. The information is expressed as specific word sequences that can be used as keyword recommendations in search engines. The results show that the implementation of this method for creating document keyword recommendations achieved high accuracy and could find the most relevant information in the top search results.
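The paper's Google Latent Semantic Distance is not specified here, but a closely related, publicly documented measure is the normalized Google distance of Cilibrasi and Vitányi, which scores the relatedness of two terms from search-engine hit counts. A minimal sketch under that assumption (the hit counts below are invented; related terms score near 0, unrelated terms near 1):

```python
import math

def normalized_google_distance(f_x, f_y, f_xy, n):
    """Normalized Google distance (Cilibrasi & Vitanyi) between two terms,
    given hit counts: f_x and f_y for each term alone, f_xy for the two
    terms together, and n as the total number of indexed pages."""
    lx, ly, lxy = math.log(f_x), math.log(f_y), math.log(f_xy)
    return (max(lx, ly) - lxy) / (math.log(n) - min(lx, ly))

# Illustrative, made-up counts for two terms that co-occur often:
print(normalized_google_distance(120000, 90000, 60000, 10**10))
```

Terms that always co-occur give a distance of 0; a keyword-recommendation tool can rank candidate terms by such a distance to a document's seed words.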


Author(s):  
Cecil Eng Huang Chua ◽  
Roger H. Chiang ◽  
Veda C. Storey

Search engines are ubiquitous tools for seeking information from the Internet and, as such, have become an integral part of our information society. New search engines that combine ideas from separate search engines generally outperform the search engines from which they took ideas. Designers, however, may not be aware of the work of other search engine developers or such work may not be available in modules that can be incorporated into another search engine. This research presents an interoperability architecture for building customized search engines. Existing search engines are analyzed and decomposed into self-contained components that are classified into six categories. A prototype, called the Automated Software Development Environment for Information Retrieval, was developed to implement the interoperability architecture, and an assessment of its feasibility was carried out. The prototype resolves conflicts between components of separate search engines and demonstrates how design features across search engines can be integrated.
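As a rough illustration of the idea of decomposing search engines into self-contained, recombinable components: components from different engines can be placed behind one uniform interface and chained into a pipeline. The interface and component names below are hypothetical, not the prototype's actual six categories:

```python
from abc import ABC, abstractmethod

class SearchComponent(ABC):
    """Uniform interface so components taken from different
    search engines can be recombined without modification."""
    @abstractmethod
    def process(self, data):
        ...

class Tokenizer(SearchComponent):
    """Splits a query string into lowercase terms."""
    def process(self, data):
        return data.lower().split()

class StopwordFilter(SearchComponent):
    """Drops uninformative terms from a token list."""
    def __init__(self, stopwords):
        self.stopwords = set(stopwords)
    def process(self, data):
        return [t for t in data if t not in self.stopwords]

def run_pipeline(components, query):
    """Feeds each component's output into the next one."""
    for c in components:
        query = c.process(query)
    return query

print(run_pipeline([Tokenizer(), StopwordFilter({"the", "of"})],
                   "The history of search engines"))
# prints ['history', 'search', 'engines']
```

The uniform `process` signature is what makes conflict resolution between components from separate engines tractable: each stage only has to agree with its neighbors on the data passed between them.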


Author(s):  
Suely Fragoso

This chapter proposes that search engines apply a verticalizing pressure on the WWW's many-to-many information distribution model, forcing it to revert to a distributive model similar to that of the mass media. The argument starts with a critical descriptive examination of the history of search mechanisms for the Internet, in parallel with a discussion of the increasing ties between search engines and the advertising market. The chapter then raises questions concerning the concentration of Web traffic around a small number of search engines, which are in the hands of an equally limited number of enterprises. This reality is accentuated by the confidence that users place in search engines and by the ongoing acquisition of collaborative systems and smaller players by the large search engines. This scenario demonstrates the verticalizing pressure that search engines apply to the majority of WWW users, bringing the Web back toward the mass distribution model.


Author(s):  
Michael Zimmer

Web search engines have emerged as a ubiquitous and vital tool for successful navigation of the growing online informational sphere. As Google puts it, the goal is to "organize the world's information and make it universally accessible and useful" and to create the "perfect search engine" that provides only intuitive, personalized, and relevant results. Meanwhile, the so-called Web 2.0 phenomenon has blossomed largely on faith in the power of the networked masses to capture, process, and mash up one's personal information flows in order to make them more useful, social, and meaningful. The (inevitable) combination of Google's suite of information-seeking products with Web 2.0 infrastructures -- what I call Search 2.0 -- intends to capture the best of both technical systems for the touted benefit of users. By capturing the information flowing across Web 2.0, search engines can better predict users' needs and wants and deliver more relevant and meaningful results. While intended to enhance mobility in the online sphere, this paper argues that the drive for Search 2.0 necessarily requires widespread monitoring and aggregation of users' online personal and intellectual activities, bringing with it particular externalities, such as threats to informational privacy while online.


AI Magazine ◽  
2015 ◽  
Vol 36 (4) ◽  
pp. 61-70 ◽  
Author(s):  
Daniel M. Russell

For the vast majority of queries (for example, navigation, simple fact lookup, and others), search engines do extremely well. Their ability to quickly provide answers to queries is a remarkable testament to the power of many of the fundamental methods of AI. They also highlight many of the issues that are common to sophisticated AI question-answering systems. It has become clear that people think of search programs in ways that are very different from traditional information sources. Rapid and ready-at-hand access, depth of processing, and the way they enable people to offload some ordinary memory tasks suggest that search engines have become more of a cognitive amplifier than a simple repository or front-end to the Internet. Like all sophisticated tools, people still need to learn how to use them. Although search engines are superb at finding and presenting information—up to and including extracting complex relations and making simple inferences—knowing how to frame questions and evaluate their results for accuracy and credibility remains an ongoing challenge. Some questions are still deep and complex, and still require knowledge on the part of the search user to work through to a successful answer. And the fact that the underlying information content, user interfaces, and capabilities are all in a continual state of change means that searchers need to continually update their knowledge of what these programs can (and cannot) do.


2013 ◽  
Vol 29 (3) ◽  
pp. 314-344 ◽  
Author(s):  
Jose M. Such ◽  
Agustín Espinosa ◽  
Ana García-Fornes

Privacy has been a concern for humans since long before the explosive growth of the Internet. Advances in information technologies have further increased these concerns, because the increasing power and sophistication of computer applications offer tremendous opportunities for individuals but also pose significant threats to personal privacy. Autonomous agents and multi-agent systems are examples of this level of sophistication. Autonomous agents usually encapsulate personal information describing their principals, and therefore they play a crucial role in preserving privacy. Moreover, autonomous agents themselves can be used to increase the privacy of computer applications by taking advantage of the intrinsic features they provide, such as artificial intelligence, pro-activeness, autonomy, and the like. This article introduces the problem of preserving privacy in computer applications and its relation to autonomous agents and multi-agent systems. It also surveys privacy-related studies in the field of multi-agent systems and identifies open challenges to be addressed by future research.


Author(s):  
Antonius Antonius ◽  
Bernard Renaldy Suteja

The internet, and the web in particular, has been growing rapidly. People use search engines to find the news or information they need on a website. One of the many indicators of a website's success is traffic, which depends on various factors, one of which is the website's rank on the Search Engine Result Page (SERP). Improving SERP rank requires SEO methods. This research applies SEO to a website, particularly to its images, and analyzes the site with tester tools such as SEOptimer, Pingdom Tools, and SEO Site Checkup. After the website has been optimized, it is tested again with the same tools. The results show whether image optimization can affect SERP rank.
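One concrete image-level factor such tools commonly flag is whether `img` tags carry descriptive `alt` text. A minimal sketch of such a check using Python's standard-library HTML parser (the sample markup is invented; real SEO audits check many more factors, such as file size and dimensions):

```python
from html.parser import HTMLParser

class ImageAltChecker(HTMLParser):
    """Collects the src of every <img> tag that lacks a
    non-empty alt attribute, a common on-page image SEO issue."""
    def __init__(self):
        super().__init__()
        self.missing_alt = []

    def handle_starttag(self, tag, attrs):
        if tag == "img":
            attrs = dict(attrs)
            if not attrs.get("alt"):
                self.missing_alt.append(attrs.get("src", "?"))

checker = ImageAltChecker()
checker.feed('<img src="a.png" alt="company logo"><img src="b.png">')
print(checker.missing_alt)  # prints ['b.png']
```

Running such a check before and after optimization gives a simple, reproducible way to verify that the image-level fixes the tester tools recommend were actually applied.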

