Aestheticizing Google critique: A 20-year retrospective

2018 ◽  
Vol 5 (1) ◽  
pp. 205395171876862 ◽  
Author(s):  
Richard Rogers

With Google marking its 20th year online, the piece provides a retrospective of cultural commentary and select works of Google art that have transformed the search engine into an object of critical interest. Taken up are artistic and cultural responses to Google not only by independent artists but also by cultural critics and technology writers, including the development of such evocative notions as the deep web, flickering man and filter bubble. Among the critiques that have taken shape in the works discussed here are objects and subjects brought into being by Google (such as ‘spammy neighbourhoods’), Googlization, Google’s information politics, its licensing (or what one is agreeing to when searching), as well as issues surrounding specific products such as Google Street View, as Google leaves the web, capturing more spaces to search.

2020 ◽  
Vol 11 (2020) ◽  
Author(s):  
Pauline Chasseray-Peraldi

Images of encounters between animals and drones or Google Street View cars are quite viral on the web. This article focuses on the different regimes of animacy and conflicts of affects in these images using an anthropo-semiotic approach. It investigates how otherness reveals something that exceeds us, from the materiality of the machine to systems of values. It suggests that the disturbance of animal presence in contemporary digital images helps us to read media technologies.


2021 ◽  
Author(s):  
Peiyuan Sun ◽  
Yu Sun

Due to the outbreak of the Covid-19 pandemic, college tours are no longer available, so many students have lost the opportunity to see their dream school’s campus. To solve this problem, we developed a product called “Virtourgo,” a university virtual tour website that uses Google Street View images gathered by a web scraper, allowing students to see what college campuses are like even when tours are unavailable during the pandemic. The project consists of four parts: the web scraper script, the GitHub server, the Google Domains DNS server, and the HTML files. Some challenges we met include scraping repeated pictures and making the HTML dropdown menu jump to the correct location. We solved these by implementing Python and JavaScript functions that specifically target such challenges. Finally, after testing all the functions of the web scraper and website, we confirmed that it works as expected and can scrape and deliver tours of any university campus or public building we want.
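The abstract describes a scraper that gathers Street View images and must skip repeated pictures. Below is a minimal Python sketch of that step, assuming the public Street View Static API and a hash-based duplicate check; the coordinates, file names, and dedup strategy are illustrative assumptions, not Virtourgo's actual code.

```python
# Hypothetical sketch: fetch Street View images along a campus route and
# skip duplicates. The endpoint is Google's real Street View Static API;
# everything else (key, points, layout) is an illustrative assumption.
import hashlib
import pathlib
import requests

API_KEY = "YOUR_API_KEY"  # assumption: caller supplies a valid key
ENDPOINT = "https://maps.googleapis.com/maps/api/streetview"

def scrape_campus(points, out_dir="tour_images"):
    """Download one 640x640 image per (lat, lng, heading), skipping repeats."""
    out = pathlib.Path(out_dir)
    out.mkdir(exist_ok=True)
    seen = set()  # hashes of images already saved
    for i, (lat, lng, heading) in enumerate(points):
        resp = requests.get(ENDPOINT, params={
            "size": "640x640",
            "location": f"{lat},{lng}",
            "heading": heading,
            "key": API_KEY,
        }, timeout=10)
        resp.raise_for_status()
        digest = hashlib.md5(resp.content).hexdigest()
        if digest in seen:  # nearby points often resolve to the same
            continue        # panorama, producing repeated pictures
        seen.add(digest)
        (out / f"view_{i:03d}.jpg").write_bytes(resp.content)

# Example: three viewpoints along a hypothetical campus walk
scrape_campus([(40.7486, -73.9857, 0),
               (40.7490, -73.9860, 90),
               (40.7494, -73.9863, 180)])
```

Hashing the raw bytes is a cheap way to catch exact repeats; a production scraper might instead compare panorama IDs returned by the API's metadata endpoint.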


2011 ◽  
Vol 50-51 ◽  
pp. 644-648
Author(s):  
Xiao Qing Zhou ◽  
Xiao Ping Tang

Traditional search engines cannot correctly search the massive information hidden in the Deep Web. Classifying Web databases is a key step in integrating Web database classification and retrieval. This article proposes a machine-learning-based approach to Web database classification. Experiments indicate that after training on only a few samples, the approach achieves good classification results; as the number of training samples increases, the classifier's performance remains stable, with accuracy and recall fluctuating only within a very small range.
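The abstract does not specify its learning algorithm, so the following Python sketch uses TF-IDF features with a multinomial Naive Bayes classifier from scikit-learn as a stand-in; the form texts and category labels are invented for illustration.

```python
# A minimal sketch of classifying a Deep Web database into a subject
# category from the text of its search interface, trained on only a few
# labelled samples. Model choice and data are assumptions, not the paper's.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

# Hypothetical training data: text extracted from database query forms
train_text = [
    "title author isbn publisher search books",
    "departure arrival date passengers flight search",
    "symptom drug dosage interaction medical search",
    "artist album track genre music search",
]
train_label = ["books", "travel", "health", "music"]

model = make_pipeline(TfidfVectorizer(), MultinomialNB())
model.fit(train_text, train_label)

# Classify an unseen query interface by its form text
print(model.predict(["author title hardcover paperback search"]))  # ['books']
```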


Author(s):  
Shoko Nishio ◽  
Fumiko Ito

In recent years, big data has come into use in various fields. Google Street View (hereinafter called “GSV”) can be regarded as open big data, and its images can be obtained using an API. Streets can be viewed 360° horizontally and 290° vertically from each point on the web. In addition, zooming is available, and the viewpoint can be moved approximately 10 m forward or backward from the current point. The views are generated from panoramic images associated with latitude and longitude information, captured along streets at intervals of about 10 m, and they exist as massive data on the web. We determine the area of the sky using these images from GSV. In this research, we calculate the sky view factor (hereinafter called “SVF”) over an extended area by identifying the sky area in each image and computing the SVF automatically.
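A simplified Python sketch of the sky-area step follows, assuming an already-downloaded GSV panorama and a naive blue-dominance threshold. Real SVF computation reprojects the panorama to a fisheye view and weights pixels by solid angle; this sketch omits both, so it is an illustration of the idea, not the authors' method.

```python
# Estimate a rough sky fraction from a Street View panorama by classifying
# bright, blue-dominant pixels in the upper half of the image. The threshold
# values and the file name are assumptions for illustration.
import numpy as np
from PIL import Image

def sky_view_factor(panorama_path):
    img = np.asarray(Image.open(panorama_path).convert("RGB"), dtype=float)
    upper = img[: img.shape[0] // 2]  # upper half ~ above the horizon
    r, g, b = upper[..., 0], upper[..., 1], upper[..., 2]
    # Sky heuristic: bright pixels where blue dominates red and green
    sky = (b > 140) & (b > r) & (b > g)
    return sky.mean()  # fraction of upper-half pixels classified as sky

print(f"approximate SVF: {sky_view_factor('gsv_panorama.jpg'):.2f}")
```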


2012 ◽  
Vol 220-223 ◽  
pp. 2920-2923
Author(s):  
Jia Qiang Dong

Classifying Web databases is a key step in integrating Web database classification and retrieval. Traditional search engines cannot correctly search the massive information hidden in the Deep Web. This article proposes a machine-learning-based approach to Web database classification. Experiments indicate that after training on only a few samples, the approach achieves good classification results; as the number of training samples increases, the classifier's performance remains stable, with accuracy and recall fluctuating only within a very small range.
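The stability claim (accuracy and recall fluctuating only slightly as training samples grow) can be illustrated with a small learning-curve sketch. Everything below, from the synthetic form texts to the TF-IDF-plus-Naive-Bayes model, is an assumption for illustration; the paper does not publish its code here.

```python
# Measure macro precision and recall of a toy web-database classifier as
# the per-class training size grows, expecting both to stabilise.
import random
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics import precision_score, recall_score
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

random.seed(0)
VOCAB = {  # hypothetical vocabularies of database query forms
    "books": ["title", "author", "isbn", "publisher", "edition"],
    "travel": ["departure", "arrival", "flight", "hotel", "passenger"],
    "health": ["symptom", "drug", "dosage", "clinic", "diagnosis"],
}

def synth(label, n):
    """Generate n noisy form-text samples for one category."""
    all_words = sum(VOCAB.values(), [])
    return [" ".join(random.choices(VOCAB[label], k=4) +
                     random.choices(all_words, k=2)) for _ in range(n)]

test_x, test_y = [], []
for lab in VOCAB:
    test_x += synth(lab, 30)
    test_y += [lab] * 30

for n_train in (2, 5, 10, 20):  # growing training-set sizes
    train_x, train_y = [], []
    for lab in VOCAB:
        train_x += synth(lab, n_train)
        train_y += [lab] * n_train
    model = make_pipeline(TfidfVectorizer(), MultinomialNB()).fit(train_x, train_y)
    pred = model.predict(test_x)
    print(n_train,
          round(precision_score(test_y, pred, average="macro"), 2),
          round(recall_score(test_y, pred, average="macro"), 2))
```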


2013 ◽  
Vol 397-400 ◽  
pp. 2367-2370
Author(s):  
Xiao Qing Zhou ◽  
Jia Xiu Sun ◽  
Shu Bin Wang

Traditional search engines cannot correctly search the massive information hidden in the Deep Web. Classifying Web databases is a key step in integrating Web database classification and retrieval. This article proposes a machine-learning-based approach to Web database classification. Experiments indicate that after training on only a few samples, the approach achieves good classification results; as the number of training samples increases, the classifier's performance remains stable, with accuracy and recall fluctuating only within a very small range.


Author(s):  
Hadrian Peter ◽  
Charles Greenidge

Traditionally a great deal of research has been devoted to data extraction on the web (Crescenzi, et al, 2001; Embley, et al, 2005; Laender, et al, 2002; Hammer, et al, 1997; Ribeiro-Neto, et al, 1999; Huck, et al, 1998; Wang & Lochovsky, 2002, 2003) from areas where data is easily indexed and extracted by a search engine, the so-called Surface Web. There are, however, other sites, greater in number and potentially more valuable, that contain information which cannot be readily indexed by standard search engines. These sites, which have been designed to require some level of direct human participation (for example, to issue queries rather than simply follow hyperlinks), cannot be handled using the simple link traversal techniques employed by many web crawlers (Rappaport, 2000; Cho & Garcia-Molina, 2000; Cho et al, 1998; Edwards et al, 2001). This area of the web, which has been operationally off-limits for crawlers using standard indexing procedures, is termed the Deep Web (Zillman, 2005; Bergman, 2000). Much work remains to be done, as Deep Web sites represent an area that has only recently begun to be explored, and their potential uses are still being identified.
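The distinction the passage draws can be made concrete with a short Python sketch: Surface Web crawlers harvest and follow hyperlinks, while Deep Web content appears only in response to a submitted query form. The URL and form field names below are hypothetical placeholders, not a real service.

```python
# Contrast of the two access modes: link traversal vs. form submission.
import requests
from html.parser import HTMLParser

class LinkCollector(HTMLParser):
    """Surface Web style: harvest hrefs a crawler could simply follow."""
    def __init__(self):
        super().__init__()
        self.links = []
    def handle_starttag(self, tag, attrs):
        if tag == "a":
            self.links.extend(v for k, v in attrs if k == "href" and v)

surface = requests.get("https://example.org/catalog", timeout=10)
collector = LinkCollector()
collector.feed(surface.text)
print(len(collector.links), "followable links")

# Deep Web style: the records exist only as responses to a query, so the
# crawler must fill in and POST the search form itself.
deep = requests.post(
    "https://example.org/search",  # hypothetical form action
    data={"query": "renewable energy", "max_results": 50},
    timeout=10,
)
print(deep.status_code, "result page generated on demand")
```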


