The Impact of Search Engines on Virus Propagation

Author(s): Cai Fu, Zhaokang Ke, Yunhe Zhang, Xiwu Chen, Liqing Cao, ...

With the popularization of computers and the development of information engineering, search engines have made it possible to retrieve needed information from big data quickly and efficiently. In recent years, however, many new viruses have also been propagated through search engines. Most researchers focus on cutting off the source of virus propagation and ignore immunization strategies based on the search engine itself. In this paper, we analyze the impact of search engines on virus propagation. First, considering both immune effect and cost, we define two highly practical immune mechanisms based on the search engine. Second, we analyze these mechanisms theoretically using the iteration method and the dynamic method; the results show that the immunization strategy can slow down or even eliminate virus propagation to a certain extent. Third, we use three real social network data sets to simulate and analyze the immune mechanisms. We find that when the proportion of infected nodes and the proportion of infected nodes identified by the search engine satisfy a certain relationship, our immune mechanism can inhibit the spread of viruses, confirming our theoretical results.
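
As an illustration of the kind of simulation described in the third step, here is a minimal sketch of an SIS-style epidemic on a network in which, at each step, a fraction of infected nodes is identified (as if surfaced by a search engine) and immunized. The infection, recovery, and detection rates are hypothetical placeholders, not values from the paper.

```python
import random
import networkx as nx

# Hypothetical parameters; the paper's actual model and values differ.
BETA = 0.10   # infection probability per infected neighbor
GAMMA = 0.05  # spontaneous recovery probability
PHI = 0.30    # fraction of infected nodes the "search engine" identifies

def step(g, infected, immune):
    newly_infected = set()
    for u in infected:
        for v in g.neighbors(u):
            if v not in infected and v not in immune and random.random() < BETA:
                newly_infected.add(v)
    recovered = {u for u in infected if random.random() < GAMMA}
    infected = (infected | newly_infected) - recovered
    # Search-engine-based immunization: detect and permanently immunize
    # a share of the currently infected nodes.
    detected = {u for u in infected if random.random() < PHI}
    return infected - detected, immune | detected

g = nx.barabasi_albert_graph(1000, 3)
infected = set(random.sample(list(g.nodes), 10))
immune = set()
for t in range(50):
    infected, immune = step(g, infected, immune)
print(f"after 50 steps: {len(infected)} infected, {len(immune)} immunized")
```

Sweeping PHI against BETA in such a toy model exhibits the threshold behavior the abstract alludes to: once the detection rate is high enough relative to the infection rate, the epidemic dies out.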

2019, Vol 11 (9), pp. 202
Author(s): Rovira, Codina, Guerrero-Solé, Lopezosa

Search engine optimization (SEO) constitutes the set of methods designed to increase the visibility of, and the number of visits to, a web page by improving its ranking on search engine results pages. Recently, SEO has also been applied to academic databases and search engines, a trend in constant growth. This new approach, known as academic SEO (ASEO), has generated a field of study with considerable future growth potential due to the impact of open science. The study reported here forms part of this new field of analysis. The ranking of results is a key aspect of any information system, since it determines how results are presented to the user. The aim of this study is to analyze and compare the relevance ranking algorithms employed by various academic platforms in order to identify the weight of received citations in those algorithms. Specifically, we analyze two search engines and two bibliographic databases: Google Scholar and Microsoft Academic, on the one hand, and Web of Science and Scopus, on the other. A reverse engineering methodology is employed, based on the statistical analysis of Spearman's correlation coefficients. The results indicate that the ranking algorithms of Google Scholar and Microsoft Academic are the two most heavily influenced by citations received; indeed, citation counts are clearly the main SEO factor in these academic search engines. An unexpected finding is that, at certain points in time, Web of Science (WoS) used citations received as a key ranking factor, despite the fact that WoS support documents claim this factor does not intervene.
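
The reverse-engineering approach reduces, at its core, to correlating each platform's result ranking with the citation counts of the ranked documents. A minimal sketch of that computation (the positions and citation counts are illustrative, not the study's data):

```python
from scipy.stats import spearmanr

# Illustrative data: result positions 1..10 for one query on an academic
# search engine, and the citation counts of the documents at those positions.
positions = list(range(1, 11))
citations = [950, 720, 810, 400, 380, 150, 200, 90, 60, 30]

# A strongly negative rho means higher-ranked (lower-numbered) results have
# more citations, i.e., citations weigh heavily in the ranking algorithm.
rho, p_value = spearmanr(positions, citations)
print(f"Spearman rho = {rho:.2f}, p = {p_value:.4f}")
```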


2019, pp. 390-408
Author(s): Andrew Murray

This chapter examines brand identities, search engines, and secondary markets, and their operation in the information society. It considers jurisdiction and online trademark disputes, as well as search engine optimization, the role of Google, and the impact of its search engine services on brand profile and market presence. The chapter goes on to examine secondary markets and the liability of sellers of counterfeit products for the abuse of trademarks. It concludes with a summary of the changing nature of online branding, the diminishing role of domain names in cementing brand identity, and the growing influence of developments in web browser functionality on consumer behaviour.


2012, pp. 191-215
Author(s): Mona Sleem-Amer, Ivan Bigorgne, Stéphanie Brizard, Leeley Daio Pires Dos Santos, Yacine El Bouhairi, ...

In recent years, research and industry players have become increasingly interested in analyzing the opinions and sentiments expressed on the social web for product marketing and business intelligence. To meet this need, search engines must not only retrieve lists of documents but also directly access, analyze, and interpret topics and opinions. This article covers an intermediate phase of the ongoing industrial research project 'DoXa', which aims to develop a semantic opinion and sentiment mining search engine for the French language. The DoXa search engine enables topic-related opinion and sentiment extraction beyond positive and negative polarity using rich linguistic resources. Centering the work on two distinct business use cases, the authors analyze both unstructured Web 2.0 content (e.g., blogs and forums) and structured questionnaire data sets, with a focus on discovering hidden patterns in the data. To this end, the authors present work in progress on opinion-topic relation extraction and visual analytics, linguistic resource construction, and the combination of OLAP technology with semantic search.
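
As a toy illustration of topic-opinion extraction beyond plain positive/negative polarity, the sketch below matches a tiny hand-made French opinion lexicon with fine-grained categories against text. The lexicon, topic list, and prefix-matching rule are invented for this sketch and are far simpler than DoXa's linguistic resources.

```python
import re

# Invented miniature lexicon: French opinion words mapped to fine-grained
# affect categories rather than a binary polarity.
LEXICON = {
    "décevant": "disappointment",
    "rassurant": "reassurance",
    "inquiétant": "concern",
}
TOPICS = {"service", "prix", "livraison"}

def extract(text):
    """Return (topic, opinion category) pairs co-occurring in a sentence."""
    pairs = []
    for sentence in re.split(r"[.!?]", text.lower()):
        words = set(re.findall(r"\w+", sentence))
        for topic in TOPICS & words:
            for stem, category in LEXICON.items():
                # Crude prefix match standing in for proper lemmatization.
                if any(tok.startswith(stem) for tok in words):
                    pairs.append((topic, category))
    return pairs

print(extract("Le prix est décevant. La livraison est rassurante !"))
# [('prix', 'disappointment'), ('livraison', 'reassurance')]
```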


2020, Vol ahead-of-print (ahead-of-print)
Author(s): Sebastian Schultheiß, Dirk Lewandowski

Purpose: In commercial web search engine results rankings, four stakeholder groups are involved: search engine providers, users, content providers and search engine optimizers. Search engine optimization (SEO) is a multi-billion-dollar industry responsible for making content visible through search engines. Despite this importance, little is known about its role in the interaction of the stakeholder groups.

Design/methodology/approach: We conducted expert interviews with 15 German search engine optimizers and content providers, the latter represented by content managers and online journalists. The interviewees were asked about their perspectives on SEO and how they assess users' views of SEO.

Findings: SEO was considered necessary for content providers to ensure visibility, which is why dependencies between the two stakeholder groups have evolved. Despite its importance, SEO was seen as largely unknown to users. It is therefore assumed that users cannot realistically assess the impact SEO has, and that user opinions about SEO depend heavily on their knowledge of the topic.

Originality/value: This study investigated search engine optimization from the perspective of those involved in the optimization business: content providers, online journalists and search engine optimization professionals. The study therefore contributes to a more nuanced view of, and a deeper understanding of, the SEO domain.


2021, pp. 016555152110141
Author(s): Sebastian Schultheiß, Dirk Lewandowski

People have a high level of trust in search engines, especially Google, but only limited knowledge of them, as numerous studies have shown. This raises the question: to what extent is this trust justified, given users' lack of familiarity with how search engines work and the business models they are founded on? We assume that trust in Google, search engine preferences and knowledge of result types are interrelated. To examine this assumption, we conducted a representative online survey with n = 2012 German Internet users. We show that users with little search engine knowledge are more likely to trust and use Google than users with more knowledge. A contradiction reveals itself: users strongly trust Google, yet they are unable to adequately evaluate search results. For those users, this may be problematic, since it can potentially affect knowledge acquisition. Consequently, there is a need to promote user information literacy to create a more solid foundation for user trust in search engines. The impact of our study lies in emphasising the need for appropriate training formats to promote information literacy.
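
A minimal sketch of the kind of comparison behind the headline finding: split respondents by a knowledge score and test whether the low-knowledge group reports higher trust. The variables and data are invented; the survey's actual instruments and statistical tests may differ.

```python
import numpy as np
from scipy.stats import mannwhitneyu

rng = np.random.default_rng(0)

# Invented data: search engine knowledge score (0-10) and trust in
# Google on a 1-5 Likert scale for 500 simulated respondents.
knowledge = rng.integers(0, 11, size=500)
trust = np.clip(5 - knowledge // 3 + rng.integers(-1, 2, size=500), 1, 5)

low, high = trust[knowledge <= 3], trust[knowledge >= 7]
stat, p = mannwhitneyu(low, high, alternative="greater")
print(f"median trust: low-knowledge {np.median(low):.1f}, "
      f"high-knowledge {np.median(high):.1f}, p = {p:.4f}")
```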


2021
Author(s): Nicolas Guenon des Mesnards, David Scott Hunter, Zakaria el Hjouji, Tauhid Zaman

Bots Impact Opinions in Social Networks: Let's Measure How Much

Bots that try to manipulate opinions in social networks pose a serious threat. In "Assessing the Impact of Bots on Social Networks," Nicolas Guenon des Mesnards, David Scott Hunter, Zakaria el Hjouji, and Tauhid Zaman present a new set of operational capabilities to detect these bots and measure their impact. They developed an algorithm based on the Ising model from statistical physics to find coordinated gangs of bots in social networks. They then created an algorithm based on opinion dynamics models to quantify the impact that bots have on opinions in a social network. Applying their algorithms to a variety of real social network data sets, they found that for topics such as Brexit the bots had little impact, whereas for topics such as the U.S. presidential debate and the Gilets Jaunes protests in France the bots had a significant impact.
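
To illustrate how an opinion dynamics model can quantify bot impact, here is a minimal DeGroot-style sketch: opinions diffuse by neighbor averaging with and without the bot nodes, and the impact is the shift in the mean equilibrium opinion. This is a generic stand-in under simplified assumptions, not the authors' algorithm.

```python
import numpy as np

def equilibrium_opinions(adj, stubborn, n_steps=200):
    """Iterate neighbor averaging; stubborn nodes (the bots) never update."""
    n = len(adj)
    x = np.full(n, 0.5)          # everyone starts neutral
    x[list(stubborn)] = 1.0      # bots push a fixed extreme opinion
    frozen = np.isin(np.arange(n), list(stubborn))
    deg = np.maximum(adj.sum(axis=1), 1)
    for _ in range(n_steps):
        x = np.where(frozen, x, adj @ x / deg)
    return x

# Toy 5-node network; node 4 is a detected bot.
adj = np.array([
    [0, 1, 1, 0, 1],
    [1, 0, 1, 0, 0],
    [1, 1, 0, 1, 0],
    [0, 0, 1, 0, 1],
    [1, 0, 0, 1, 0],
], dtype=float)

with_bots = equilibrium_opinions(adj, stubborn={4})
without_bots = equilibrium_opinions(adj[:4, :4], stubborn=set())
print(f"shift in mean human opinion: "
      f"{with_bots[:4].mean() - without_bots.mean():+.3f}")
```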


2016, Vol 8 (3), pp. 163
Author(s): Ala’ A. Alkarablieh

This study aims to measure the impact of using the two most popular search engine networks, Google and Yahoo Bing, on attracting new customers to company websites and on the effectiveness of online advertisements. A quantitative research method with questionnaires was used to collect data. The researcher distributed 129 questionnaires to 33 companies that use E-marketing in Jordan; only 87 questionnaires were returned, of which 42 were usable and valid for statistical analysis. The analysis concluded that both the Google search engine and the Yahoo Bing network have a direct effect on attracting new customers and on the effectiveness of online advertisements at the (α ≤ 0.05) level for companies using E-marketing in Jordan.
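
A minimal sketch of the kind of test behind a "direct effect at (α ≤ 0.05)" claim: regress an outcome on a search engine usage measure and check the slope's p-value. The variable names and data are invented for illustration and do not come from the study.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

# Invented survey-style data for 42 usable responses: intensity of
# Google search engine use vs. new customers attracted (both 1-5 scales).
search_use = rng.uniform(1, 5, size=42)
new_customers = 2.0 + 0.8 * search_use + rng.normal(0, 0.7, size=42)

result = stats.linregress(search_use, new_customers)
alpha = 0.05
print(f"slope = {result.slope:.2f}, p = {result.pvalue:.4f}, "
      f"significant at alpha = {alpha}: {result.pvalue <= alpha}")
```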


2020, Vol 24 (4), pp. 9-20
Author(s): Ya. F. Zverev, A. Ya. Rykunova

The review is devoted to the most common drugs currently used in the treatment of primary nephrotic syndrome. The mechanisms of pharmacological activity of glucocorticosteroids, ACTH, the calcineurin inhibitors cyclosporine A and tacrolimus, the alkylating compounds cyclophosphamide and chlorambucil, mycophenolate mofetil, levamisole, abatacept, rituximab and a number of other recently created monoclonal antibodies are considered. An attempt is made to separate the immune and non-immune mechanisms of action of the most common drugs, covering both their impact on the immunogenetics of these diseases and their direct impact on the podocytes that maintain the permeability of the glomerular filtration barrier and whose injury underlies the development of proteinuria. It is shown that the immune mechanisms of corticosteroids are mediated by interaction with the glucocorticoid receptors of lymphocytes, while the non-immune mechanisms involve stimulation of the same receptors in podocytes. Activation of melanocortin receptors by adrenocorticotropic hormone was found to contribute to the drug's beneficial effect in nephrotic syndrome. The immune mechanism of calcineurin inhibitors is attributed to the suppression of tissue and humoral immunity, while the non-immune mechanism is largely due to the preservation of the activity of podocyte proteins such as synaptopodin and cofilin. Evidence is presented that the beneficial effect of rituximab in glomerulopathies is related to the interaction of the drug with the protein SMPDL-3b in lymphocytes and podocytes. The mechanisms of action of mycophenolate mofetil are considered: it inhibits the enzyme inosine-5′-monophosphate dehydrogenase, suppressing the synthesis of guanosine nucleotides in both lymphocytes and glomerular mesangium cells. It is emphasized that the effect of levamisole in nephrotic syndrome is probably associated with normalization of the ratio of cytokines produced by various T-helper cells, as well as with increased expression and activity of glucocorticoid receptors. Finally, the review considers the pharmacological activity of a number of monoclonal antibodies and of galactose, whose beneficial effect may derive from binding the putative permeability factor produced by lymphocytes.


2021, Vol 8 (1)
Author(s): Yahya Albalawi, Jim Buckley, Nikola S. Nikolov

This paper presents a comprehensive evaluation of data pre-processing and word embedding techniques for Arabic document classification in the domain of health-related communication on social media. We evaluate 26 text pre-processing techniques applied to Arabic tweets within the process of training a classifier to identify health-related tweets. For this task we use the traditional machine learning classifiers KNN, SVM, Multinomial NB and Logistic Regression. Furthermore, we report experimental results with the deep learning architectures BLSTM and CNN for the same text classification problem. Since word embeddings are more typically used as the input layer in deep networks, in the deep learning experiments we evaluate several state-of-the-art pre-trained word embeddings with the same text pre-processing applied. To achieve these goals, we use two data sets: one for both training and testing, and another for testing the generality of our models only. Our results indicate that only four of the 26 pre-processing techniques improve classification accuracy significantly. For the first data set of Arabic tweets, Mazajak CBOW pre-trained word embeddings as the input to a BLSTM deep network led to the most accurate classifier, with an F1 score of 89.7%. For the second data set, Mazajak Skip-Gram pre-trained word embeddings as the input to a BLSTM led to the most accurate model, with an accuracy of 90.7% and an F1 score of 75.2%; Mazajak CBOW with the same architecture achieved a higher F1 score of 90.8% but a lower accuracy of 70.89%. Our results also show that the best of the traditional classifiers we trained performs comparably to the deep learning methods on the first data set but significantly worse on the second.
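
A minimal sketch of the traditional-classifier side of such an experiment: one example pre-processing step (stripping diacritics and elongation), TF-IDF features, and a linear SVM evaluated by F1. The pre-processing shown is just one plausible instance of the 26 techniques, and the toy tweets and labels are invented.

```python
import re
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.svm import LinearSVC
from sklearn.pipeline import make_pipeline
from sklearn.metrics import f1_score

def normalize_arabic(text):
    """One example pre-processing: remove tashkeel (diacritics) and tatweel."""
    text = re.sub(r"[\u064B-\u0652]", "", text)
    return text.replace("\u0640", "")

# Invented toy data: 1 = health-related tweet, 0 = not health-related.
tweets = ["أشعر بألم في الرأس", "مباراة رائعة اليوم",
          "نصائح للوقاية من الانفلونزا", "الطقس جميل في عمان"]
labels = [1, 0, 1, 0]

model = make_pipeline(TfidfVectorizer(preprocessor=normalize_arabic),
                      LinearSVC())
model.fit(tweets, labels)
print("F1 on the toy training set:", f1_score(labels, model.predict(tweets)))
```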


2021, pp. 089443932110068
Author(s): Aleksandra Urman, Mykola Makhortykh, Roberto Ulloa

We examine how six search engines filter and rank information in response to queries on the U.S. 2020 presidential primary elections under default (that is, nonpersonalized) conditions. To do so, we employ an algorithmic auditing methodology that uses virtual agents to conduct a large-scale analysis of algorithmic information curation in a controlled environment. Specifically, we examine the text search results for the queries "us elections," "donald trump," "joe biden," and "bernie sanders" on Google, Baidu, Bing, DuckDuckGo, Yahoo and Yandex during the 2020 primaries. Our findings indicate substantial differences in the search results between search engines, as well as multiple discrepancies within the results generated for different agents using the same search engine. This highlights that whether users see certain information is partly decided by chance, owing to the inherent randomization of search results. We also find that some search engines prioritize different categories of information sources with respect to specific candidates. These observations demonstrate that algorithmic curation of political information can create information inequalities between search engine users even under nonpersonalized conditions. Such inequalities are particularly troubling given that, as previous research has demonstrated, search results are highly trusted by the public and can shift the opinions of undecided voters.
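
A minimal sketch of one way to quantify such discrepancies: the Jaccard overlap between the result sets two virtual agents receive for the same query on the same engine. The result URLs below are invented.

```python
def jaccard(a, b):
    """Overlap of two result sets: 1.0 = identical, 0.0 = disjoint."""
    a, b = set(a), set(b)
    return len(a & b) / len(a | b) if a | b else 1.0

# Invented top-5 results two agents saw for "joe biden" on one engine.
agent_1 = ["cnn.com/a", "nytimes.com/b", "joebiden.com", "wapo.com/c", "fox.com/d"]
agent_2 = ["cnn.com/a", "joebiden.com", "politico.com/e", "wapo.com/c", "abc.com/f"]

print(f"Jaccard overlap: {jaccard(agent_1, agent_2):.2f}")  # below 1.0 -> discrepancy
```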

