Web Content: Recently Published Documents

Arkadipta De ◽  
Dibyanayan Bandyopadhyay ◽  
Baban Gain ◽  
Asif Ekbal

Fake news classification is one of the most interesting problems to have attracted considerable attention from researchers in artificial intelligence, natural language processing, and machine learning (ML). Most current work on fake news detection is in English, which limits its usability, especially outside the English-literate population. Although multilingual web content has grown, fake news classification in low-resource languages remains a challenge due to the lack of annotated corpora and tools. This article proposes an effective neural model based on multilingual Bidirectional Encoder Representations from Transformers (BERT) for domain-agnostic multilingual fake news classification. A wide variety of experiments, including language-specific and domain-specific settings, are conducted. The proposed model achieves high accuracy in both domain-specific and domain-agnostic experiments and outperforms the current state-of-the-art models. We perform experiments in zero-shot settings to assess the effectiveness of language-agnostic feature transfer across different languages, with encouraging results. Cross-domain transfer experiments are also performed to assess the language-independent feature transfer of the model. We also offer a multilingual, multidomain fake news detection dataset covering five languages and seven domains that could be useful for research and development in resource-scarce scenarios.
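As a rough illustration of the classification stage only (not the authors' model), the sketch below trains a binary logistic head of the kind typically placed on top of a multilingual encoder. The pooled sentence embeddings are assumed to be precomputed (e.g. [CLS] vectors from a model such as `bert-base-multilingual-cased`); all names and dimensions here are illustrative.

```python
import numpy as np

def train_fake_news_head(X, y, lr=0.1, epochs=200):
    """Train a binary logistic-regression head over pooled encoder embeddings.

    X: (n_samples, hidden_dim) pooled sentence embeddings (assumed precomputed);
    y: (n_samples,) labels, 1 = fake, 0 = real.
    """
    n, d = X.shape
    w = np.zeros(d)
    b = 0.0
    for _ in range(epochs):
        z = X @ w + b
        p = 1.0 / (1.0 + np.exp(-z))       # sigmoid probability of "fake"
        grad_w = X.T @ (p - y) / n         # gradient of mean cross-entropy loss
        grad_b = np.mean(p - y)
        w -= lr * grad_w
        b -= lr * grad_b
    return w, b

def predict(X, w, b):
    """Hard labels from the trained head (threshold at 0.5)."""
    return (1.0 / (1.0 + np.exp(-(X @ w + b))) >= 0.5).astype(int)
```

In a zero-shot cross-lingual setting, such a head would be trained on embeddings from one language and evaluated on another, relying on the encoder's shared multilingual representation space.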

Sharra Mae B. Fernandez

This experimental study determined and compared the webpage browsing performance of proprietary and open source operating systems on wireless networks. It was intended to reveal significant differences in webpage browsing performance between proprietary and open source operating systems on wireless networks when classified by hardware specifications and type of web content. The researchers used the JavaScript Console of the Google Chrome web browser to determine the time taken for a webpage to fully load. The operating system was the independent variable. Hardware specifications, classified as old and new systems, and types of web content, classified as static and dynamic webpages, were the intervening variables. Webpage browsing performance was the dependent variable. The statistical tools used were the arithmetic mean and the t-test. The results revealed significant differences in webpage browsing performance between proprietary and open source operating systems on wireless networks when classified by hardware specification and web content: the two families of operating systems were statistically different when classified by hardware specifications and type of web content.
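The t-test comparison described above can be sketched in a few lines. The load times below are invented for illustration (real measurements would come from the browser console's timing output), and 2.145 is the two-tailed critical value of Student's t for df = 14 at alpha = 0.05.

```python
from math import sqrt
from statistics import mean, stdev

def students_t(a, b):
    """Two-sample Student's t statistic with pooled variance."""
    na, nb = len(a), len(b)
    pooled_var = ((na - 1) * stdev(a) ** 2 + (nb - 1) * stdev(b) ** 2) / (na + nb - 2)
    return (mean(a) - mean(b)) / sqrt(pooled_var * (1 / na + 1 / nb))

# Hypothetical full-page-load times in seconds, eight trials per OS
# on the same wireless network and hardware.
proprietary = [2.8, 3.1, 2.9, 3.3, 3.0, 2.7, 3.2, 3.1]
open_source = [2.2, 2.4, 2.1, 2.5, 2.3, 2.2, 2.6, 2.4]

t = students_t(proprietary, open_source)
# Compare |t| with the critical value 2.145 (df = 14, two-tailed, alpha = 0.05).
print(f"t = {t:.2f}, significant: {abs(t) > 2.145}")
```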

2022 ◽  
Vol ahead-of-print (ahead-of-print) ◽  
Hanan Alghamdi ◽  
Ali Selamat

Purpose – With the proliferation of terrorist/extremist websites on the World Wide Web, it has become progressively more crucial to detect and analyze the content of these websites. Accordingly, the volume of research focused on identifying the techniques and activities of terrorist/extremist groups, as revealed by their sites on the so-called dark web, has also grown.
Design/methodology/approach – This study presents a review of the techniques used to detect and process the content of terrorist/extremist sites on the dark web. Forty of the most relevant data sources were examined, and various techniques were identified among them.
Findings – Based on this review, it was found that methods of feature selection and feature extraction can be used for topic modeling, combined with content analysis and text clustering.
Originality/value – At the end of the review, we present the current state of the art and certain open issues associated with Arabic dark web content analysis.
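The feature-extraction and text-clustering combination surveyed here can be illustrated with a minimal scikit-learn sketch. The toy corpus is invented for illustration (it is not real dark-web data), and the pipeline is a generic example of the technique class, not any reviewed system.

```python
from sklearn.cluster import KMeans
from sklearn.feature_extraction.text import TfidfVectorizer

# Invented stand-in corpus with two clearly distinct topical groups.
docs = [
    "weapons training camp recruitment propaganda",
    "propaganda video recruitment weapons attack",
    "donation campaign charity funds transfer",
    "charity funds anonymous donation transfer",
]

# Feature extraction: TF-IDF turns each document into a weighted term vector.
X = TfidfVectorizer().fit_transform(docs)

# Text clustering: group documents by topical similarity of those vectors.
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)
print(labels)
```

With real data, the cluster assignments would then feed the content-analysis step, e.g. inspecting the top-weighted terms per cluster.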

2022 ◽  
Vol 22 (2) ◽  
pp. 251-288
Halise Şerefoğlu Henkoğlu

In the current information age, with the opportunities offered by developing information technologies, the vast majority of information resources are moving to electronic media, and libraries are transforming into digital libraries that serve their users on the web. In this context, library websites have become the primary point of contact for users accessing information and library services. This highlights the importance of designing these websites, which act as gateways to resources and services, in line with accessibility standards. This study aims to examine the websites of university libraries in Turkey against the Web Content Accessibility Guidelines (WCAG) and to establish their current accessibility status. Using a descriptive model, the library websites of the 183 universities actively conducting education and teaching in Turkey in the 2020-2021 academic year were examined with the WAVE automated evaluation tool. The findings show that every university library website contained at least one accessibility error (maximum = 123), that the most frequent accessibility error was the "empty link" error (n = 160), and that a large majority of the websites (93%) had a contrast error in at least one element. In addition, 97% of the websites contained elements that, while not counted as outright errors, could negatively affect accessibility. Errors related to mobile compatibility were identified as another factor negatively affecting the accessibility of the websites.
This study is expected to contribute to the plans and arrangements to be made for improving the accessibility of library websites in Turkey, and to serve as a model for future studies on the subject.
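The contrast errors counted by tools like WAVE are grounded in the WCAG 2.0 definitions of relative luminance and contrast ratio. The sketch below implements those two formulas directly (a minimal illustration, not the WAVE tool's code).

```python
def relative_luminance(rgb):
    """WCAG 2.0 relative luminance for an sRGB colour given as 0-255 integers."""
    def linearize(c):
        c = c / 255
        return c / 12.92 if c <= 0.03928 else ((c + 0.055) / 1.055) ** 2.4
    r, g, b = (linearize(c) for c in rgb)
    return 0.2126 * r + 0.7152 * g + 0.0722 * b

def contrast_ratio(fg, bg):
    """Contrast ratio (L1 + 0.05) / (L2 + 0.05); WCAG AA requires >= 4.5:1
    for normal body text."""
    l1, l2 = sorted((relative_luminance(fg), relative_luminance(bg)), reverse=True)
    return (l1 + 0.05) / (l2 + 0.05)
```

Black text on a white background yields the maximum ratio of 21:1, while a light grey such as rgb(170, 170, 170) on white falls well below the 4.5:1 AA threshold.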

2022 ◽  
pp. 135-168
Zehra Altuntaş ◽  
Pınar Onay Durdu

In this chapter, a unified web accessibility assessment (UWAA) framework and its software are proposed. The UWAA framework was developed in line with the Web Content Accessibility Guidelines (WCAG) 2.0 to evaluate the accessibility of websites by integrating more than one evaluation approach. The AChecker tool, as an automated evaluation approach, and the barrier walkthrough (BW), as an expert-based evaluation approach, were integrated into the UWAA framework. The framework also provides evaluators with suggestions for fixing the problems identified. The websites of three universities were evaluated to determine the framework's accuracy and consistency. The results obtained from the automated and expert-based evaluation methods proved consistent with and complementary to each other. Furthermore, it was demonstrated that problems which cannot be determined by an automated tool but can be detected by an expert are identified by the BW method.

2022 ◽  
pp. 394-414
Mohamed ElSayed ElAraby ◽  
Ahmed M. Anter

Web content is diverse and is regarded as the primary source of accessible information, reached through reference links. Web facial images are one type of web content that appears on important web pages and is considered valuable information about individuals. This chapter proposes a face-recognition-as-a-service architecture based on real-world images from the web. The proposed service is offered to third parties via cloud computing; its architecture is built in the cloud using virtual machines that can be scaled out on demand. Web crawlers crawl web pages and retrieve images into elastic cloud storage. The collected images are then used to extract human faces and prepare the face images for identification, finding the matched face in the set through successive phases. The chapter uses PCA for feature extraction and KNN for identification. Experiments show that increasing the number of crawler instances improves crawling speed, and that face recognition accuracy is best when the Euclidean metric is preferred over other distance metrics.
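The PCA-plus-KNN identification stage maps naturally onto a scikit-learn pipeline. The sketch below is an illustrative eigenfaces-style setup under assumed parameters (component count, k, flattened image vectors as input), not the chapter's implementation.

```python
from sklearn.decomposition import PCA
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline

def build_face_identifier(n_components=8):
    """Eigenfaces-style identifier: PCA projects flattened face images onto
    a low-dimensional feature space, then KNN with the Euclidean metric
    assigns each query face the identity of its nearest neighbours."""
    return make_pipeline(
        PCA(n_components=n_components),
        KNeighborsClassifier(n_neighbors=3, metric="euclidean"),
    )
```

Training would fit the pipeline on labelled face vectors gathered by the crawlers; `predict` then returns the matched identity for each new face.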

2021 ◽  
Vol 14 (4) ◽  
pp. 1-21
Eduard Cristobal-Fransi ◽  
José Ramón-Cardona ◽  
Natalia Daries ◽  
Antoni Serra-Cantallops

In terms of destination image, museums represent a tourism resource of the first magnitude. However, just as the information available online influences visitors' decision-making about destinations, the internet is also fundamental in promoting museums and attracting visitors to them. For that reason, we sought to analyse the online presence of museums in the seven most visited cities in Spain. To examine the museums' websites, we developed an integrative model based on web content analysis (WCA) and the extended model of internet commerce adoption (eMICA), which we applied to 77 publicly and privately run museums in Spain. Both WCA and eMICA indicated that, despite their great economic and touristic scope, museums in Spain's most visited cities tend to mismanage their online presence and communication. We thus tentatively modelled the online presence of museums in Spain with the type of museum management and several city-related parameters as explanatory variables. Multiple linear regressions of the variables revealed that publicly managed museums have a better online presence, and that their cities attract more tourists. These findings imply that museums still have a long way to go in facilitating effective communication and interaction with their target public, which we address in relation to the study's limitations and directions for future research.
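A multiple linear regression of the kind reported here can be sketched with ordinary least squares. The museum data below are invented for illustration, as is the coding of the explanatory variables (management type as a 0/1 dummy, city tourist arrivals in millions).

```python
import numpy as np

# Hypothetical data: web-presence score regressed on management type
# (1 = public, 0 = private) and annual city tourist arrivals (millions).
public = np.array([1, 1, 0, 0, 1, 0, 1, 0])
tourists = np.array([8.5, 6.0, 8.5, 6.0, 3.2, 3.2, 9.9, 9.9])
score = np.array([7.8, 7.1, 5.9, 5.2, 6.0, 4.3, 8.2, 6.4])

# Design matrix with an intercept column; solve ordinary least squares.
X = np.column_stack([np.ones_like(tourists), public, tourists])
coef, *_ = np.linalg.lstsq(X, score, rcond=None)
intercept, b_public, b_tourists = coef
print(f"public-management effect: {b_public:+.2f}, tourists effect: {b_tourists:+.2f}")
```

Positive coefficients on both regressors would correspond to the study's finding that public management and higher tourist volumes go with a better online presence.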

2021 ◽  
pp. 1-8
Elizabeth Keavney

BACKGROUND: College and university websites in the United States are legally required to meet accessibility standards to promote equal opportunity in education for blind and visually disabled students. The Web Content Accessibility Guidelines are the recognized standard for website accessibility. OBJECTIVE: To determine how satisfied blind and visually disabled college and university students are with college and university websites in California, and whether compliance with the Web Content Accessibility Guidelines is a good predictor of that satisfaction. METHODS: A random sample of websites from California colleges and universities was evaluated for accessibility compliance. A stratified sample of six websites was taken from the initial sample. Thirty blind or visually disabled students performed a prescribed series of tasks on each of the six websites, then answered a Likert-format survey about their satisfaction with each website. RESULTS: Sixty-three percent of websites did not meet the first-priority accessibility criteria. Participant responses showed that a majority were satisfied with the websites, both compliant and non-compliant, and revealed a strong correlation between satisfaction and accessibility compliance. CONCLUSIONS: Despite legal requirements, a majority or large minority of college and university websites in California do not meet accessibility guidelines, indicating a significant opportunity to improve the accessibility of those websites.
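For ordinal Likert responses, a rank correlation such as Spearman's is a natural way to relate satisfaction to a compliance measure. The sketch below (an illustration, not the study's analysis) implements it with tie-aware average ranks.

```python
from statistics import mean

def spearman(x, y):
    """Spearman rank correlation: Pearson correlation of average ranks,
    with ties assigned the mean rank of their group."""
    def ranks(v):
        order = sorted(range(len(v)), key=lambda i: v[i])
        r = [0.0] * len(v)
        i = 0
        while i < len(order):
            j = i
            while j + 1 < len(order) and v[order[j + 1]] == v[order[i]]:
                j += 1
            avg = (i + j) / 2 + 1          # average 1-based rank of the tie group
            for k in range(i, j + 1):
                r[order[k]] = avg
            i = j + 1
        return r

    rx, ry = ranks(x), ranks(y)
    mx, my = mean(rx), mean(ry)
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    sx = sum((a - mx) ** 2 for a in rx) ** 0.5
    sy = sum((b - my) ** 2 for b in ry) ** 0.5
    return cov / (sx * sy)
```

Applied to per-website pairs of (compliance score, mean satisfaction rating), a value near +1 would indicate the strong monotone association the study reports.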

Sergey Orekhov ◽  
Hennadiy Malyhon

This paper presents an approach to the mathematical description of an effectiveness criterion for a new object of research: virtual promotion. The emergence of this object is connected, on the one hand, with the classical theory of marketing and, on the other, with modern Internet technologies. Marketing is based on the 4P principle: product, price, place, and promotion. Promotion is one component of this principle, but under the influence of the Internet it is becoming a fully virtual instrument. The traditional promotion scheme works as follows: a message is created for a potential buyer, and the channel delivering this message changes over time. It follows the principle money – goods – money, whereas the new sales scheme is described as: attract a client, earn money on the client, spend money. In the new scheme, we deal with product knowledge in the form of a so-called semantic core of web content. This knowledge describes for a potential client how a given product can cover a particular need. Using logistic principles of goods transfer, the semantic core is loaded into specified Internet nodes. Virtual promotion is thus formed from two channels: logistics and marketing. The first performs three operations: concentration, formatting, and distribution of semantic cores on the Internet. The second manages this process by forming a virtual promotion map, which is a graph of Internet nodes. The task is to define a tree of Internet nodes such that virtual promotion has maximum efficiency. The paper analyzes modern metrics related to search engine optimization on the Internet. Unfortunately, these metrics provide only after-the-fact statistical evaluation of visits to a web resource or of the budget of the Internet site where the advertising message was placed.
Therefore, based on the conversion metric, a criterion for the effectiveness of virtual promotion is proposed that takes into account both the attractiveness of the semantic core and the attractiveness of the Internet site where the semantic core will be located. The criterion reflects the income received, depending on the attractiveness of the semantic core and the Internet site.

Ye. A. Kosova ◽  
A. S. Gapon ◽  
K. I. Redkokosh

The purpose of the article is to assess the accessibility of electronic educational resources (EERs) published in the university's Moodle Learning Management System (LMS). The analysis covered 22 EERs in mathematical and information technology disciplines hosted in the Moodle LMS of the V. I. Vernadsky Crimean Federal University. The examination algorithm included analysis with the Web Accessibility Evaluation Tool (WAVE) and expert analysis of web accessibility using visual, auditory, and manual methods based on 89 checklist attributes. As a result of the analysis, multiple accessibility errors were found both in the Moodle platform and in the EERs hosted on it. The most serious platform problems include a lack of compatibility with text browsers, errors in reproduction by screen readers, and errors in content rendering on mobile devices. Accessibility errors made by the authors of the EERs include incorrect design of hyperlinks (22.7% of the EERs); lack of subtitles (13.6%), transcripts (22.7%), and synopses of video lectures (27.3%); lack of alternative descriptions for figures (68.2%); time limits on tests (9.1%); and lack of special markup for mathematical notation (36.4%) and program code (13.6%). The results show a need to train EER authors in technologies for developing accessible educational web content. It is advisable to familiarize web developers deploying an LMS at a university with the basics of web accessibility and with LMS accessibility functions and modules, in order to select the most suitable platform and to determine and install the required set of accessibility tools. Before launch, all EERs should undergo mandatory examination for compliance with web accessibility guidelines.
