Speech Engines

Author(s):  
James Grimmelmann

98 Minnesota Law Review 868 (2014)

Academic and regulatory debates about Google are dominated by two opposing theories of what search engines are and how law should treat them. Some describe search engines as passive, neutral conduits for websites’ speech; others describe them as active, opinionated editors: speakers in their own right. The conduit and editor theories give dramatically different policy prescriptions in areas ranging from antitrust to copyright. But they both systematically discount search users’ agency, regarding users merely as passive audiences.

A better theory is that search engines are not primarily conduits or editors, but advisors. They help users achieve their diverse and individualized information goals by sorting through the unimaginable scale and chaos of the Internet. Search users are active listeners, affirmatively seeking out the speech they wish to receive. Search engine law can help them by ensuring two things: access to high-quality search engines, and loyalty from those search engines.

The advisor theory yields fresh insights into long-running disputes about Google. It suggests, for example, a new approach to deciding when Google should be liable for giving a website the “wrong” ranking. Users’ goals are too subjective for there to be an absolute standard of correct and incorrect rankings; different search engines necessarily assess relevance differently. But users are also entitled to complain when a search engine deliberately misleads them about its own relevance assessments. The result is a sensible, workable compromise between the conduit and editor theories.

Author(s):  
Jose Triny K. et al.

Web pages are increasingly used as the user interface of many software systems. The simplicity of interacting with web pages is a key benefit of using them. However, the user interface can also become more complicated when more complex web pages are used to build it. Understanding the complexity of web pages as perceived subjectively by users is therefore crucial to better design such user interfaces. Searching is one of the most common tasks performed on the Internet. Search engines are the essential tool of the web, through which related information can be collected according to the keyword given by the user. The amount of information on the Internet is growing dramatically, so users have to spend more and more time online to find the exact information they are interested in. Existing search engines such as Google do not take into account the specific needs of individual users and serve every user in the same way. For an ambiguous query, documents on many different topics are returned by the search engines, making it difficult for the user to get the required content and increasing the time needed to find relevant material. In this paper, we survey various algorithms for reducing complexity in web page navigation.


Author(s):  
Novario Jaya Perdana

The accuracy of search results from a search engine depends on the keywords used. A lack of information in the keywords can reduce the accuracy of the results, which makes searching for information on the Internet hard work. In this research, a software tool was built to create document keyword sequences. The software uses Google Latent Semantic Distance, which can extract relevant information from a document. The information is expressed in the form of specific word sequences that can be used as keyword recommendations in search engines. The results show that the implementation of the method for creating document keyword recommendations achieved high accuracy and could find the most relevant information in the top search results.
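The abstract does not spell out how the distance measure is computed; the best-known measure of this kind, the Normalized Google Distance of Cilibrasi and Vitányi, is sketched below purely as an illustration of how keyword relatedness can be estimated from search hit counts. The hit counts, page total, and term pairs in the example are hypothetical, not taken from the paper.

```python
import math

def normalized_google_distance(f_x, f_y, f_xy, n_pages):
    """Normalized Google Distance computed from hit counts.

    f_x, f_y -- number of pages containing term x (resp. y)
    f_xy     -- number of pages containing both terms
    n_pages  -- rough estimate of the total number of indexed pages
    Smaller values mean the terms co-occur more often, i.e. are closer.
    """
    lx, ly, lxy = math.log(f_x), math.log(f_y), math.log(f_xy)
    return (max(lx, ly) - lxy) / (math.log(n_pages) - min(lx, ly))

def rank_keyword_pairs(term_counts, pair_counts, n_pages):
    """Order candidate keyword pairs from most to least related."""
    scored = []
    for (x, y), f_xy in pair_counts.items():
        d = normalized_google_distance(term_counts[x], term_counts[y], f_xy, n_pages)
        scored.append(((x, y), d))
    return sorted(scored, key=lambda item: item[1])

# Hypothetical hit counts, purely for illustration.
terms = {"muscle": 4.2e8, "dystrophy": 1.6e7}
pairs = {("muscle", "dystrophy"): 9.5e6}
print(rank_keyword_pairs(terms, pairs, n_pages=5e10))
```

Pairs with the smallest distance would be the strongest candidates for a recommended keyword sequence.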


Author(s):  
Cecil Eng Huang Chua ◽  
Roger H. Chiang ◽  
Veda C. Storey

Search engines are ubiquitous tools for seeking information from the Internet and, as such, have become an integral part of our information society. New search engines that combine ideas from separate search engines generally outperform the search engines from which they took ideas. Designers, however, may not be aware of the work of other search engine developers or such work may not be available in modules that can be incorporated into another search engine. This research presents an interoperability architecture for building customized search engines. Existing search engines are analyzed and decomposed into self-contained components that are classified into six categories. A prototype, called the Automated Software Development Environment for Information Retrieval, was developed to implement the interoperability architecture, and an assessment of its feasibility was carried out. The prototype resolves conflicts between components of separate search engines and demonstrates how design features across search engines can be integrated.
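The six component categories are not named in the abstract, so the sketch below only illustrates the general idea behind such an interoperability architecture: components taken from different engines are wrapped behind a shared interface so they can be recombined. The class names and the tokenizer/ranker split are assumptions made for the example, not the paper's actual decomposition.

```python
from abc import ABC, abstractmethod
from typing import Dict, List

class SearchComponent(ABC):
    """Common contract every plug-in component exposes to the pipeline."""
    @abstractmethod
    def process(self, data):
        ...

class TokenizerA(SearchComponent):
    """Tokenizer borrowed (conceptually) from one engine: whitespace split."""
    def process(self, text: str) -> List[str]:
        return text.lower().split()

class RankerB(SearchComponent):
    """Ranker borrowed from another engine: plain term-frequency scoring."""
    def process(self, payload):
        tokens, index = payload
        scores: Dict[str, int] = {}
        for term in tokens:
            for doc in index.get(term, []):
                scores[doc] = scores.get(doc, 0) + 1
        return sorted(scores, key=scores.get, reverse=True)

# Mixing components from different "engines" behind one interface.
index = {"search": ["d1", "d2"], "engine": ["d2"]}
tokens = TokenizerA().process("Search Engine")
print(RankerB().process((tokens, index)))  # ['d2', 'd1']
```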


Author(s):  
Suely Fragoso

This chapter proposes that search engines apply a verticalizing pressure on the WWW many-to-many information distribution model, forcing it to revert to a distributive model similar to that of the mass media. The argument starts with a critical descriptive examination of the history of search mechanisms for the Internet, in parallel with a discussion of the increasing ties between search engines and the advertising market. The chapter then raises questions concerning the concentration of Web traffic around a small number of search engines, which are in the hands of an equally limited number of enterprises. This concentration is accentuated by the confidence that users place in search engines and by the ongoing acquisition of collaborative systems and smaller players by the large search engines. This scenario demonstrates the verticalizing pressure that search engines apply to the majority of WWW users, pulling the Web back toward the mass distribution model.


AI Magazine ◽  
2015 ◽  
Vol 36 (4) ◽  
pp. 61-70 ◽  
Author(s):  
Daniel M. Russell

For the vast majority of queries (for example, navigation, simple fact lookup, and others), search engines do extremely well. Their ability to quickly provide answers to queries is a remarkable testament to the power of many of the fundamental methods of AI. They also highlight many of the issues that are common to sophisticated AI question-answering systems. It has become clear that people think of search programs in ways that are very different from traditional information sources. Rapid and ready-at-hand access, depth of processing, and the way they enable people to offload some ordinary memory tasks suggest that search engines have become more of a cognitive amplifier than a simple repository or front-end to the Internet. Like all sophisticated tools, people still need to learn how to use them. Although search engines are superb at finding and presenting information—up to and including extracting complex relations and making simple inferences—knowing how to frame questions and evaluate their results for accuracy and credibility remains an ongoing challenge. Some questions are still deep and complex, and still require knowledge on the part of the search user to work through to a successful answer. And the fact that the underlying information content, user interfaces, and capabilities are all in a continual state of change means that searchers need to continually update their knowledge of what these programs can (and cannot) do.


Author(s):  
Antonius Antonius ◽  
Bernard Renaldy Suteja

The Internet has been growing rapidly, especially in the area of websites. People use search engines to find the news or information they need on a website. One of the main indicators of a website's success is traffic, which is affected by various factors, one of them being the website's rank on the Search Engine Results Page (SERP). Improving SERP rank requires SEO methods. This research applies SEO to a website, focusing on its images, and then analyzes the site using tester tools such as SEOptimer, Pingdom Tools, and SEO Site Checkup. After the website has been optimized, it is tested again with the same tools. From the results it can be seen whether image optimization affects SERP rank.
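As a rough illustration of the kind of image checks such tester tools report on, the sketch below (standard-library Python, not any of the tools named above) flags <img> tags that lack alt text or explicit dimensions; the sample HTML is hypothetical.

```python
from html.parser import HTMLParser

class ImageSEOAudit(HTMLParser):
    """Flags <img> tags missing attributes that image-SEO checkers
    commonly report on: alt text and explicit width/height."""
    def __init__(self):
        super().__init__()
        self.issues = []

    def handle_starttag(self, tag, attrs):
        if tag != "img":
            return
        attrs = dict(attrs)
        src = attrs.get("src", "<no src>")
        if not attrs.get("alt"):
            self.issues.append(f"{src}: missing or empty alt text")
        if "width" not in attrs or "height" not in attrs:
            self.issues.append(f"{src}: missing width/height (causes layout shift)")

# Hypothetical page fragment used only to exercise the audit.
page = '<img src="logo.png"><img src="team.jpg" alt="Our team" width="640" height="480">'
audit = ImageSEOAudit()
audit.feed(page)
print("\n".join(audit.issues) or "No image issues found")
```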


2010 ◽  
Vol 55 (2) ◽  
pp. 374-386
Author(s):  
Joan Miquel-Vergés ◽  
Elena Sánchez-Trigo

The use of the Internet as a source of health information is growing rapidly. However, identifying relevant and valid information can be problematic. This paper first analyses the efficiency of Internet search engines specialized in health, in order to determine the quality of online information related to a specific medical subdomain, that of neuromuscular diseases. Our aim is to present a model for the development and use of a bilingual electronic corpus (MYOCOR) related to neuromuscular diseases in order to: a) provide a quality health information tool for health professionals, patients and relatives, as well as for translators, writers of specialized texts and software developers; and b) use it as the basis for the implementation of a keyword- and semantics-based search engine, like the ASEM (Federación Española Contra las Enfermedades Neuromusculares) search engine for neuromuscular diseases.
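As a minimal illustration of the keyword side of such a corpus-based search engine, the sketch below builds an inverted index over a toy bilingual corpus and restricts lookups by language. The documents and language pair are invented for the example and do not come from MYOCOR.

```python
from collections import defaultdict

# Toy bilingual corpus; the real MYOCOR contents are not described here,
# so these entries are purely illustrative.
corpus = [
    {"id": "doc1", "lang": "en", "text": "Myasthenia gravis is a neuromuscular disease"},
    {"id": "doc2", "lang": "es", "text": "La miastenia gravis es una enfermedad neuromuscular"},
]

def build_index(docs):
    """Simple inverted index: keyword -> set of (doc id, language)."""
    index = defaultdict(set)
    for doc in docs:
        for token in doc["text"].lower().split():
            index[token].add((doc["id"], doc["lang"]))
    return index

def search(index, keyword, lang=None):
    """Keyword lookup, optionally restricted to one language of the corpus."""
    hits = index.get(keyword.lower(), set())
    return [h for h in hits if lang is None or h[1] == lang]

index = build_index(corpus)
print(search(index, "neuromuscular"))             # hits in both languages
print(search(index, "neuromuscular", lang="es"))  # Spanish subcorpus only
```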


Blood ◽  
2018 ◽  
Vol 132 (Supplement 1) ◽  
pp. 4719-4719
Author(s):  
Steffi Shilly ◽  
Jane Lindahl ◽  
Dava Szalda ◽  
Caren Steinway ◽  
Sophia Jan

Abstract

Introduction: As modern medicine has decreased mortality rates of children with Sickle Cell Disease (SCD), patients with SCD are living into adulthood and transitioning to adult care. However, transition has proven to be a vulnerable time for these patients. It is therefore important to prepare youth adequately for chronic care transition through expectations, knowledge, skills, efficacy, and support. The advancement of the Internet has given patients a primary source for searching and gathering health-related knowledge, and Internet usage is almost ubiquitous among American youth, with 92% accessing the Internet regularly. Previous studies have shown a wide spectrum in the quality of information available online. Yet, to the best of our knowledge, a systematic review of online health information regarding transition of patients with SCD has not been conducted.

Methods: Data were collected in December 2017 and January 2018 using the 5 search engines identified as most commonly utilized. Keywords were selected to represent phrases that people may use while searching the Internet for information on SCD transition. Combinations of the keywords were used in the searches, and the first 20 links for each search term were considered. An incognito window was used so that previous searches did not influence the results. Websites that met the inclusion/exclusion criteria were included and were divided into SCD-transition-specific and non-SCD-transition-specific websites based on whether they mentioned sickle cell disease. Websites were classified as academic/educational institution, health department, hospital/private clinician, professional body, or other (e.g., wikis, WebMD). The Flesch Reading Ease (FRE) score was used to evaluate website readability, and a novel 12-item transition-specific content tool was developed to evaluate website content. Website quality was evaluated by assessing for the presence or absence of HONcode certification and by using the EQIP tool; a high-quality website was defined as one with an EQIP score ≥ 75%. Website quality and content were scored by two research assistants employed in the General Pediatrics department at Northwell Health. Statistical analysis was performed using Excel and online tools, with p < 0.05 as the criterion for statistical significance.

Results: Using the selected keyword combinations, 9,522 websites were identified with the selected search engines. Of these, 157 eligible websites met the inclusion criteria and were analyzed: 92 were SCD-specific links and 65 were non-SCD-specific links. Twenty-seven websites had HONcode certification (26 SCD-specific websites and only 1 non-SCD website). The average EQIP score was 59.0 ± 3.0 (56.9 ± 5.2 for SCD-specific websites and 61.1 ± 5.0 for non-SCD-specific websites). Based on the cutoff of an EQIP score ≥ 75%, 6 SCD-specific websites and 13 non-SCD-specific websites were of high quality. The interrater reliability of the EQIP ratings was good (Pearson correlation: 0.660). The average FRE score was 49.0 ± 4.0 (51.9 ± 13.7 for SCD-specific websites and 46.1 ± 15.8 for non-SCD-specific websites). The average website content score was 28.6 ± 10.7 (21.0 ± 7.1 for SCD-specific websites and 36.1 ± 10.2 for non-SCD-specific websites). A two-tailed t-test indicated that differences in FRE scores between HONcode-certified and non-certified websites were significant among SCD websites as well as among the combined SCD and non-SCD websites (p < 0.05). All identified websites will also be reviewed by two physicians who specialize in caring for young adults with chronic illnesses; analyses from their review will be conducted prior to the conference.

Conclusion: Although seeking health care information online is very common, the overall quality of information about sickle cell disease transition on the Internet is poor. Steps should be taken to provide adequate online healthcare information regarding sickle cell disease transition so that youth going through transition are prepared with appropriate expectations, knowledge, skills, efficacy, and support available on the Internet.

Disclosures: No relevant conflicts of interest to declare.
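The Flesch Reading Ease score used above has a standard published formula; the sketch below implements it with a rough syllable estimate and is only an approximation of the readability tooling the study would have used. The sample text is invented for illustration.

```python
import re

def count_syllables(word):
    """Very rough English syllable estimate: runs of vowels, minus a silent 'e'."""
    word = word.lower()
    count = len(re.findall(r"[aeiouy]+", word))
    if word.endswith("e") and count > 1:
        count -= 1
    return max(count, 1)

def flesch_reading_ease(text):
    """FRE = 206.835 - 1.015*(words/sentences) - 84.6*(syllables/words).
    Higher scores mean easier text; roughly 60-70 is 'plain English'."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(count_syllables(w) for w in words)
    return 206.835 - 1.015 * (len(words) / len(sentences)) - 84.6 * (syllables / len(words))

sample = "Sickle cell disease changes the shape of red blood cells. Care changes as patients grow up."
print(round(flesch_reading_ease(sample), 1))
```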


2019 ◽  
Author(s):  
Muhammad Ilham Verardi Pradana

Thanks to search engines, information and data can easily be found on the Internet, and the search engine users rely on most is Google, which remains the most popular way to find information available online. However, the results Google provides do not always match what we want: Google simply displays results based on the keywords typed, so it sometimes returns negative content, such as pornography and porn sites, whose titles or other elements appear related to the keyword. In this paper, we implement "DNS SEHAT" to pass along clients' request queries so that the Google search engine on the client side provides more relevant search results without any negative content.
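As a minimal sketch of what such DNS-based filtering looks like from the client side (using the third-party dnspython package), the snippet below resolves www.google.com through a filtering resolver so the returned address points at the filtered front end. The resolver IP is a placeholder, not the actual DNS SEHAT address, and the paper's own setup is not reproduced here.

```python
import dns.resolver  # pip install dnspython

FILTERING_RESOLVER = "203.0.113.53"  # placeholder address (TEST-NET range), not DNS SEHAT

def resolve_via(nameserver, hostname="www.google.com"):
    """Resolve a hostname through a specific nameserver instead of the system default."""
    resolver = dns.resolver.Resolver(configure=False)
    resolver.nameservers = [nameserver]
    answer = resolver.resolve(hostname, "A")
    return [record.address for record in answer]

if __name__ == "__main__":
    # With a real filtering resolver configured, the addresses returned here
    # would direct the browser to the SafeSearch-enforcing front end.
    print("Filtered resolver:", resolve_via(FILTERING_RESOLVER))
```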

