web contents
Recently Published Documents


TOTAL DOCUMENTS: 238 (five years: 41)

H-INDEX: 10 (five years: 2)

2022 ◽  
Vol 22 (1) ◽  
pp. 1-26
Author(s):  
Jingjing Wang ◽  
Wenjun Jiang ◽  
Kenli Li ◽  
Guojun Wang ◽  
Keqin Li

Predicting the popularity of web contents in online social networks is essential for many applications. However, existing works usually operate under non-incremental settings; in other words, they have to rebuild models from scratch when new data arrives, which is inefficient in big data environments. This leads to an urgent need for incremental prediction, which can update previous results with new data and conduct prediction incrementally. Moreover, the promising direction of group-level popularity prediction, which explores fine-grained information while keeping costs low, has not been well studied. To this end, we identify the problem of incremental group-level popularity prediction and propose a novel model, IGPP, to address it. We first predict group-level popularity incrementally by exploiting an incremental CANDECOMP/PARAFAC (CP) tensor decomposition algorithm. Then, to reduce the error that accumulates under incremental prediction, we propose three strategies for restarting the CP decomposition. To the best of our knowledge, this is the first work that identifies and solves the problem of incremental group-level popularity prediction. Extensive experimental results show significant improvements of IGPP over existing methods in both prediction accuracy and efficiency.
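
As an illustration of the building block named in the abstract, the sketch below fits a CP decomposition to a (groups × items × time) popularity tensor with the tensorly library and extrapolates the temporal factors one step ahead. The tensor shape, rank and the linear extrapolation are assumptions for illustration; this is not the authors' IGPP model or its incremental update scheme.

```python
# Minimal sketch of group-level popularity prediction via CP decomposition.
# Illustrative only; not the authors' IGPP model. Assumes a dense
# (groups x items x time) tensor of observed popularity counts.
import numpy as np
import tensorly as tl
from tensorly.decomposition import parafac

rng = np.random.default_rng(0)
popularity = rng.poisson(5.0, size=(20, 50, 30)).astype(float)  # groups x items x time

# Rank-5 CP decomposition: popularity ~= sum_r w_r * a_r (x) b_r (x) c_r
weights, factors = parafac(tl.tensor(popularity), rank=5, n_iter_max=200)
group_f, item_f, time_f = factors  # shapes (20, 5), (50, 5), (30, 5)

# Naive one-step-ahead forecast: extrapolate the temporal factors linearly
# and reconstruct the next time slice from the CP factors.
next_time = time_f[-1] + (time_f[-1] - time_f[-2])         # (5,)
pred_slice = (group_f * weights) @ (item_f * next_time).T  # (20, 50)
print(pred_slice.shape)
```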


Electronics ◽  
2021 ◽  
Vol 10 (22) ◽  
pp. 2744
Author(s):  
Abdul Razaque ◽  
Bakhytzhan Valiyev ◽  
Bandar Alotaibi ◽  
Munif Alotaibi ◽  
Saule Amanzholova ◽  
...  

The Dark Web is known as a venue for a variety of criminal activities. Anonymization techniques enable illegal operations, leading to the loss of confidential information and its further use as bait, a trade product or even a crime tool. Despite technical progress, there is still not enough awareness of the Dark Web and its secret activity. In this study, we introduce Dark Web Enhanced Analysis (DWEA) to analyze and gather information about the content accessed on the Dark Net based on data characteristics. The research was performed to identify how the Dark Web has been influenced by recent global events, such as the COVID-19 pandemic. The approach uses a crawler that scans the network and collects data for further analysis with machine learning. The results quantify the influence of the COVID-19 pandemic on the Dark Net.
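
A minimal sketch of the crawl-then-classify pipeline the abstract describes might look as follows; the Tor proxy port, the toy corpus and the classifier choice are all assumptions for illustration, not the DWEA implementation.

```python
# Sketch of a crawl-then-classify pipeline; illustrative, not DWEA itself.
# Assumes a local Tor client exposing a SOCKS proxy on port 9050 and
# requires the requests[socks] extra for SOCKS support.
import requests
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

TOR_PROXIES = {
    "http": "socks5h://127.0.0.1:9050",   # socks5h: resolve .onion inside Tor
    "https": "socks5h://127.0.0.1:9050",
}

def fetch(url: str) -> str:
    """Fetch a page through Tor and return its raw text."""
    return requests.get(url, proxies=TOR_PROXIES, timeout=60).text

# Toy labeled corpus standing in for previously categorized Dark Web pages.
train_texts = ["covid vaccine certificates for sale", "marketplace escrow bitcoin",
               "forum discussion privacy tools", "pandemic masks bulk order"]
train_labels = ["covid", "market", "forum", "covid"]

classifier = make_pipeline(TfidfVectorizer(), LogisticRegression(max_iter=1000))
classifier.fit(train_texts, train_labels)

page_text = "bulk sale of covid test kits"  # in practice: fetch("http://<onion address>")
print(classifier.predict([page_text])[0])
```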


2021 ◽  
Author(s):  
Edgardo Samuel Barraza Verdesoto ◽  
Richard de Jesus Gil Herrera ◽  
Marlly Yaneth Rojas Ortiz

Abstract This paper introduces an abstract system for converting texts into structured information. The proposed architecture incorporates several strategies based on scientific models of how the brain records and recovers memories, together with approaches that convert texts into structured data. The applications of this proposal are broad because information expressed as text, such as reports, emails and web contents, is considered unstructured, and SQL-based repositories cannot deal efficiently with this kind of data. The model on which this proposal is based divides a sentence into clusters of words, which in turn are transformed into members of a taxonomy of algebraic structures; these structures must satisfy the properties of Abelian groups. Methodologically, an incremental prototyping approach was applied to develop an architecture that can be adapted to any language; a special case dealing with the Spanish language is studied. The resulting abstract system is a framework for implementing applications that convert unstructured textual information into structured information, which can be useful in contexts such as Natural Language Generation, Data Mining and the dynamic generation of theories.
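
A minimal sketch of the sentence-to-clusters step for Spanish, assuming spaCy's dependency parse as the grouping mechanism; the field names and grouping rules are hypothetical, and the paper's Abelian-group taxonomy is not reproduced here.

```python
# Minimal sketch: split a Spanish sentence into word clusters and emit a
# structured record. Grouping rule and field names are illustrative only;
# this does not implement the paper's algebraic taxonomy.
# Requires: python -m spacy download es_core_news_sm
import spacy

nlp = spacy.load("es_core_news_sm")

def to_record(sentence: str) -> dict:
    doc = nlp(sentence)
    record = {"subject": [], "action": [], "object": []}
    for token in doc:
        if token.dep_ in ("nsubj", "nsubj:pass"):
            record["subject"].append(token.text)
        elif token.pos_ == "VERB":
            record["action"].append(token.lemma_)
        elif token.dep_ in ("obj", "obl"):
            record["object"].append(token.text)
    return record

print(to_record("El cliente envió un informe por correo."))
```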


2021 ◽  
Vol 33 (7) ◽  
pp. 295-317
Author(s):  
Maria Giovanna Confetto ◽  
Claudia Covucci

Purpose
For companies that intend to respond to the needs of the modern conscious consumer, a great competitive advantage lies in the ability to incorporate sustainability messages into marketing communications. The aim of this paper is to address this priority in the web context by building a semantic algorithm that allows content managers to evaluate the quality of sustainability web contents for search engines, taking into account current semantic web developments.

Design/methodology/approach
Following the Design Science (DS) methodological approach, the study develops the algorithm as an artefact capable of solving a practical problem and improving the content-management process.

Findings
The algorithm considers multiple evaluation factors, grouped into three parameters: completeness, clarity and consistency. An applicability test of the algorithm was conducted on a sample of web pages from the Google blog on sustainability to highlight the correspondence between the established evaluation factors and those actually used by Google.

Practical implications
Studying content marketing for sustainability communication constitutes a new field of research that offers exciting opportunities. Writing sustainability content effectively is a fundamental step in triggering stakeholder engagement mechanisms online. It could be a positive social engineering technique in the hands of marketers to help web users pursue sustainable development in their choices.

Originality/value
This is the first study to create a theoretical connection between digital content marketing and sustainability communication, focussing especially on search engine optimization (SEO). The "Sustainability-contents SEO" algorithm is the first operational software tool of a regulatory nature that analyses web contents, detects the terms of the sustainability language and measures compliance with SEO requirements.
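
A toy three-parameter scorer conveys the flavour of such an evaluation; the vocabulary, thresholds and weights below are invented for illustration and are not the paper's "Sustainability-contents SEO" algorithm.

```python
# Toy three-parameter content score (completeness, clarity, consistency).
# Vocabulary, thresholds and weights are invented for illustration.
import re

SUSTAINABILITY_TERMS = {"sustainability", "renewable", "emissions", "recycling",
                        "carbon", "circular", "biodiversity"}

def score_content(text: str) -> dict:
    words = re.findall(r"[a-z]+", text.lower())
    sentences = [s for s in re.split(r"[.!?]", text) if s.strip()]
    hits = [w for w in words if w in SUSTAINABILITY_TERMS]

    completeness = len(set(hits)) / len(SUSTAINABILITY_TERMS)  # topic coverage
    avg_len = len(words) / max(len(sentences), 1)
    clarity = 1.0 if avg_len <= 20 else 20.0 / avg_len          # shorter sentences read better
    density = len(hits) / max(len(words), 1)
    consistency = min(density / 0.02, 1.0)                      # terms used throughout

    return {"completeness": round(completeness, 2),
            "clarity": round(clarity, 2),
            "consistency": round(consistency, 2)}

print(score_content("Our renewable energy plan cuts carbon emissions. "
                    "Recycling and circular design guide every product."))
```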


2021 ◽  
Vol 1 (3) ◽  
pp. 35-43
Author(s):  
Mohammed H. Ramadhan

Abstract—Service composition is gaining popularity because a composite service can perform functions that an individual service cannot. Multiple web services are available on the web for different tasks. The semantic web is an advanced form of the current web in which all contents have well-defined meanings, allowing machines to process web contents automatically. A web service composition is a collection of web services that collaborate to achieve a common goal. In this study, we identify the established techniques for web service composition in both syntactic and semantic environments and classify these approaches according to how the service descriptions are processed, which can be syntactic- or semantic-based. We reviewed more than 14 articles in this domain and conclude with the merits of the methodologies applied for implementing web service composition.
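
Many of the surveyed techniques reduce to chaining services whose outputs satisfy another service's inputs. The sketch below shows that idea with a forward-chaining search over an invented service catalogue; it stands in for no particular method from the survey.

```python
# Sketch of input/output chaining for web service composition.
# The service catalogue is invented for illustration.
from collections import deque

# Each service maps a set of required inputs to a set of produced outputs.
SERVICES = {
    "geocode":   ({"address"}, {"coordinates"}),
    "weather":   ({"coordinates"}, {"forecast"}),
    "translate": ({"forecast"}, {"forecast_es"}),
}

def compose(available: set, goal: str) -> list:
    """Forward-chain services until the goal concept is producible."""
    queue = deque([(available, [])])
    seen = set()
    while queue:
        known, plan = queue.popleft()
        if goal in known:
            return plan
        for name, (inputs, outputs) in SERVICES.items():
            if name not in plan and inputs <= known:
                state = frozenset(known | outputs)
                if state not in seen:
                    seen.add(state)
                    queue.append((known | outputs, plan + [name]))
    return []

print(compose({"address"}, "forecast_es"))  # ['geocode', 'weather', 'translate']
```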


Author(s):  
Raed Alshaddadi

Electronic commerce has been reshaping business and social life over the years, made possible by constant innovation in information systems (e.g. websites, mobile applications) and the global computer network (i.e. the internet). A number of studies emphasize the benefits of adopting this strategy, and those benefits may well overshadow the issues. Nevertheless, adoption is not widespread among small and medium enterprises (SMEs), as opposed to large enterprises. Hence, this study aims to underline the value of e-commerce and provide a recommended guide for applying it in an SME. Saudi Perfumes & Cosmetics, a company located in the Kingdom of Saudi Arabia (KSA), was adopted as the case study. A quantitative research methodology using an online survey was the primary technique, alongside books, articles, journals and web contents as secondary data. It was found that the company faces various issues with its direct selling method (e.g. it is time consuming and difficult to understand), and the survey respondents believe that applying e-commerce would help resolve these issues. It was concluded that an off-the-shelf application provided by the Shopify service is the best option, given that the service provider supports both web and mobile applications in a single developed application, thereby saving cost and development time.


2021 ◽  
Vol 11 (9) ◽  
pp. 3978
Author(s):  
Alejandro Mañas-García ◽  
José Alberto Maldonado ◽  
Mar Marcos ◽  
Diego Boscá ◽  
Montserrat Robles

This work presents methods to incorporate data from the Semantic Web into existing EHRs, leading to an augmented EHR. An existing EHR extract is augmented by combining it with additional information from external sources, typically linked data sources. The starting point is a standardized EHR extract described by an archetype. The method combines specific data from the original EHR with contents from the external information source by building a semantic representation, which is used to query the external source. The results are converted into a standardized EHR extract according to an archetype. This work sets the foundations for transforming Semantic Web contents into normalized EHR extracts. Finally, to exemplify the approach, the work includes a practical use case in which a summarized EHR is augmented with drug–drug interactions and disease-related treatment information.
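
To make the query-and-merge step concrete, here is a hedged sketch that asks Wikidata for known drug interactions of a medication found in an EHR extract. The endpoint and property P769 ("significant drug interaction") are real Wikidata identifiers, but the extract layout and merge logic are illustrative assumptions rather than the paper's archetype-based pipeline.

```python
# Sketch: augment an EHR extract with drug-drug interactions from Wikidata.
# The extract layout and merge step are illustrative; the paper's method is
# archetype-driven and standards-based, which this toy code does not model.
from SPARQLWrapper import SPARQLWrapper, JSON

def drug_interactions(drug_qid: str) -> list:
    """Return labels of drugs Wikidata lists as interacting (property P769)."""
    sparql = SPARQLWrapper("https://query.wikidata.org/sparql")
    sparql.setQuery(f"""
        SELECT ?otherLabel WHERE {{
          wd:{drug_qid} wdt:P769 ?other .
          SERVICE wikibase:label {{ bd:serviceParam wikibase:language "en". }}
        }}""")
    sparql.setReturnFormat(JSON)
    rows = sparql.query().convert()["results"]["bindings"]
    return [r["otherLabel"]["value"] for r in rows]

# Toy EHR extract; Q18216 is acetylsalicylic acid (aspirin) on Wikidata.
ehr_extract = {"patient": "anon-001",
               "medications": [{"name": "aspirin", "wikidata": "Q18216"}]}
for med in ehr_extract["medications"]:
    med["interactions"] = drug_interactions(med["wikidata"])  # augmentation step
print(ehr_extract)
```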


Author(s):  
Yuji Sakurai ◽  
Takuya Watanabe ◽  
Tetsuya Okuda ◽  
Mitsuaki Akiyama ◽  
Tatsuya Mori

With the recent rise of HTTPS adoption on the Web, attackers have begun “HTTPSifying” phishing websites. HTTPSifying a phishing website makes it appear legitimate and evades conventional detection methods that leverage URLs or web contents in the network. At the same time, adopting HTTPS generates intrinsic footprints and gives defenders a great opportunity to monitor and detect websites, including phishing sites, because attackers must obtain a public-key certificate when preparing a website. The potential benefits of certificate-based detection include (1) comprehensive monitoring of all HTTPSified websites by using certificates immediately after their issuance, even if the attacker utilizes dynamic DNS (DDNS) or hosting services, which could be overlooked by conventional domain-registration-based approaches; and (2) detection of phishing websites before they are published on the Internet. Accordingly, we address the following research question: how can we make use of the footprints of TLS certificates to defend against phishing attacks? To this end, we collected a large set of TLS certificates corresponding to phishing websites from Certificate Transparency (CT) logs and analyzed them extensively. We demonstrate that templates of common names, which are equivalent to fully qualified domain names, obtained through clustering analysis of the certificates can be used for two promising applications: (1) the discovery of previously unknown phishing websites and (2) understanding the infrastructure used to generate phishing websites. Furthermore, we developed a real-time monitoring system using these analysis techniques and demonstrate its usefulness for practical security operations. We use our findings on the abuse of free certificate authorities (CAs) for operating HTTPSified phishing websites to discuss possible solutions against such abuse and provide a recommendation to the CAs.
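
The template idea can be approximated with simple normalization: mask the variable tokens of each common name and group identical skeletons. The masking rule and sample names below are invented for illustration and do not reproduce the authors' clustering analysis.

```python
# Sketch: derive common-name "templates" from certificate FQDNs by masking
# variable tokens and grouping identical skeletons. The masking rule and the
# sample names are invented; the paper clusters real CT-log certificates.
import re
from collections import defaultdict

def template(fqdn: str) -> str:
    """Mask long random-looking labels and digit runs."""
    out = []
    for label in fqdn.lower().split("."):
        if re.fullmatch(r"[a-z0-9]{16,}", label):
            out.append("{RAND}")
        else:
            out.append(re.sub(r"\d+", "{N}", label))
    return ".".join(out)

common_names = [
    "secure-login1.example-bank.com", "secure-login2.example-bank.com",
    "secure-login17.example-bank.com", "a8f3k2z9q7w1m5x0b4.phish.net",
    "c2d9e7f1a6b3h8j4k0.phish.net",
]

groups = defaultdict(list)
for name in common_names:
    groups[template(name)].append(name)

for tmpl, names in groups.items():
    print(f"{tmpl}  <- {len(names)} certs")
```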


Author(s):  
Florian Platzer ◽  
Marcel Schäfer ◽  
Martin Steinebach

Tor is a widely used anonymity network with more than two million daily users. A prominent feature of Tor is the hidden service architecture. Hidden services are a popular method for communicating or sharing web contents anonymously. For security reasons, all data packets sent over the Tor network are structured identically: they are encrypted using the TLS protocol, and their size is fixed at exactly 512 bytes. In this work, we describe a method to deanonymize any hidden service on Tor based on traffic analysis. The method allows an attacker with modest resources to deanonymize any hidden service in less than 12.5 days, which poses a threat to anonymity online.
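
Because every cell has the same fixed size, a flow's shape is carried entirely by its per-interval cell counts, which is what correlation-based traffic analysis exploits. The sketch below computes Pearson correlations between two synthetic cell-count series at candidate lags; it illustrates only the statistical principle, not the paper's attack.

```python
# Sketch of the statistical core of traffic-correlation analysis: compare
# per-interval Tor cell counts from two observation points. Synthetic data;
# this illustrates the principle only, not the paper's method.
import numpy as np

rng = np.random.default_rng(1)

# Cell counts per 100 ms window at the entry side of a circuit.
entry_counts = rng.poisson(8, size=600).astype(float)

# The hidden-service side sees a delayed, noisy copy of the same pattern.
delay = 3  # windows
service_counts = np.roll(entry_counts, delay) + rng.normal(0.0, 1.0, size=600)

# Slide the candidate trace and keep the best Pearson correlation.
best_lag, best_corr = 0, -1.0
for lag in range(10):
    n = len(entry_counts) - lag
    a = entry_counts[:n]
    b = np.roll(service_counts, -lag)[:n]
    c = np.corrcoef(a, b)[0, 1]
    if c > best_corr:
        best_lag, best_corr = lag, c
print(f"best lag = {best_lag} windows, correlation = {best_corr:.2f}")
```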

