“Sustainability-contents SEO”: a semantic algorithm to improve the quality rating of sustainability web contents

2021
Vol 33 (7)
pp. 295-317
Author(s):
Maria Giovanna Confetto
Claudia Covucci

Purpose For companies that intend to respond to the needs of modern conscious consumers, a great competitive advantage rests on the ability to incorporate sustainability messages in marketing communications. The aim of this paper is to address this priority in the web context by building a semantic algorithm that allows content managers to evaluate the quality of sustainability web contents for search engines, considering the current development of the semantic web. Design/methodology/approach Following the Design Science (DS) methodological approach, the study develops the algorithm as an artefact capable of solving a practical problem and improving the content management process. Findings The algorithm considers multiple evaluation factors, grouped into three parameters: completeness, clarity and consistency. An applicability test of the algorithm was conducted on a sample of web pages from the Google blog on sustainability to highlight the correspondence between the established evaluation factors and those actually used by Google. Practical implications Studying content marketing for sustainability communication constitutes a new field of research that offers exciting opportunities. Writing sustainability contents effectively is a fundamental step in triggering stakeholder engagement mechanisms online, and it can be a positive social engineering technique in the hands of marketers, enabling web users to pursue sustainable development in their choices. Originality/value This is the first study that creates a theoretical connection between digital content marketing and sustainability communication, focussing especially on aspects of search engine optimization (SEO). The “Sustainability-contents SEO” algorithm is the first operational software tool, with a regulatory nature, that is able to analyse web contents, detect the terms of the sustainability language and measure compliance with SEO requirements.
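To make the three parameters concrete, the following is a minimal, hypothetical sketch of how a page could be scored against a sustainability vocabulary; the term list, the readability proxy and the thresholds are illustrative assumptions, not the algorithm developed in the paper.

```python
# Illustrative sketch only: the terms, thresholds and weights below are
# assumptions, not the published "Sustainability-contents SEO" algorithm.
import re

SUSTAINABILITY_TERMS = {
    "sustainability", "renewable energy", "circular economy",
    "carbon footprint", "recycling", "sustainable development",
}

def evaluate_content(title: str, meta_description: str, body: str) -> dict:
    text = body.lower()
    found = {t for t in SUSTAINABILITY_TERMS if t in text}

    # Completeness: share of the sustainability vocabulary covered by the page.
    completeness = len(found) / len(SUSTAINABILITY_TERMS)

    # Clarity: a crude readability proxy based on average sentence length.
    sentences = [s for s in re.split(r"[.!?]+", body) if s.strip()]
    avg_len = sum(len(s.split()) for s in sentences) / max(len(sentences), 1)
    clarity = 1.0 if avg_len <= 20 else max(0.0, 1 - (avg_len - 20) / 20)

    # Consistency: do the title and meta description echo the same topic?
    on_topic = [any(t in field.lower() for t in SUSTAINABILITY_TERMS)
                for field in (title, meta_description)]
    consistency = sum(on_topic) / 2

    return {"completeness": completeness, "clarity": clarity,
            "consistency": consistency}
```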

Author(s):
Carmen Domínguez-Falcón
Domingo Verano-Tacoronte
Marta Suárez-Fuentes

Purpose The strong regulation of the Spanish pharmaceutical sector encourages pharmacies to modify their business model, giving the customer a more relevant role by integrating 2.0 tools. However, the study of the implementation of these tools is still quite limited, especially in terms of customer-oriented web page design. This paper aims to analyze the online presence of Spanish community pharmacies by studying the profile of their web pages in order to classify them by their degree of customer orientation. Design/methodology/approach In total, 710 community pharmacies were analyzed, of which 160 had web pages. Using items drawn from the literature, a content analysis was performed to evaluate the presence of these items on the web pages. Then, after analyzing the scores on the items, a cluster analysis was conducted to classify the pharmacies according to the degree of development of their online customer orientation strategy. Findings The number of pharmacies with a web page is quite low. The development of these websites is limited, and they play a more informational than relational role. The statistical analysis classifies the pharmacies into four groups according to their level of development. Practical implications Pharmacists should make increasing use of their websites to facilitate real two-way communication with customers and other stakeholders, maintaining a relationship with them by incorporating Web 2.0 and social media (SM) platforms. Originality/value This study analyses, from a marketing perspective, the degree of Web 2.0 adoption and the characteristics of the websites, in terms of aiding communication and interaction with customers, in the Spanish pharmaceutical sector.
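As a rough illustration of the classification step, the sketch below clusters pharmacies by their content-analysis scores; the item set, the example data and the use of k-means (rather than whatever clustering technique the authors applied) are assumptions for demonstration only.

```python
# Hedged sketch: grouping pharmacies into four clusters from website-item
# scores. Features and data are invented; k-means stands in for the cluster
# analysis described in the abstract.
import numpy as np
from sklearn.cluster import KMeans

# Rows = pharmacies, columns = presence scores for website items
# (e.g. product catalogue, contact form, social media links, online ordering).
scores = np.array([
    [1, 0, 0, 0],
    [1, 1, 0, 0],
    [1, 1, 1, 0],
    [1, 1, 1, 1],
    [0, 0, 0, 0],
    [1, 1, 1, 1],
])

kmeans = KMeans(n_clusters=4, n_init=10, random_state=0).fit(scores)
print(kmeans.labels_)  # cluster membership for each pharmacy
```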


2021
Vol ahead-of-print (ahead-of-print)
Author(s):
Juliana Parise Baldauf
Carlos Torres Formoso
Patricia Tzortzopoulos

Purpose This paper proposes a method for managing client requirements with the use of Building Information Modelling (BIM). The development of healthcare projects demands a large amount of requirements information in order to deal with a diversity of clients and frequent changes in healthcare services. The proposed method supports healthcare design by adopting a process-based approach to client requirements management, with the aim of improving value generation. Design/methodology/approach Design Science Research was the methodological approach adopted in this investigation. The main outcome of this study emerged from an empirical study carried out in a healthcare project in Brazil. Findings The proposed method involves three stages: (1) capturing and processing requirements; (2) product and requirements modelling, which involves the connection between requirements and the BIM 3-D model; and (3) supporting design solution refinement, through the communication of requirements and the assessment of the design against updated client requirements information. Originality/value This study explores client requirements management from a process perspective, proposing activities, their interdependences and possible sources of data, including healthcare services information. The main theoretical contributions relate to the understanding of the nature and complexity of the information involved in client requirements management, and how this information can be modelled.
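As a loose illustration of stage (2), the sketch below shows one possible way to link a client requirement to elements of the BIM model; the field names and the use of IFC GUIDs are assumptions, not the authors' data model.

```python
# Hedged sketch of a requirement-to-BIM-element link; fields and identifiers
# are illustrative assumptions only.
from dataclasses import dataclass, field
from typing import List

@dataclass
class Requirement:
    req_id: str
    description: str
    source: str                      # e.g. a healthcare service brief
    status: str = "open"             # updated as the design is refined
    linked_elements: List[str] = field(default_factory=list)  # BIM element GUIDs

reqs = [
    Requirement("R-001", "Isolation room requires negative air pressure",
                source="infection-control brief"),
]
# Stage 2: connect the requirement to the corresponding object in the BIM 3-D model.
reqs[0].linked_elements.append("2O2Fr$t4X7Zf8NOew3FLOH")  # illustrative IFC GUID
```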


Author(s):
Paulo Cardoso Lins-Filho
Thuanny Silva de Macêdo
Andressa Kelly Alves Ferreira
Maria Cecília Freire de Melo
Millena Mirella Silva de Araújo
...

Objective This study aimed to assess the quality, reliability and readability of internet-based information on COVID-19 available through Brazil's most used search engines. Methods A total of 68 websites were selected through Google, Bing and Yahoo. The websites' content quality and reliability were evaluated using the DISCERN questionnaire, the Journal of the American Medical Association (JAMA) benchmark criteria and the presence of the Health on the Net (HON) certification. Readability was assessed with the Flesch Reading Ease adapted to Brazilian Portuguese (FRE-BP). Results The web contents were considered of moderate to low quality according to the mean DISCERN and JAMA scores. Most of the sample presented very difficult reading levels and only 7.4% displayed HON certification. Websites authored by governmental and health-related organizations showed lower mean JAMA scores, and quality and readability measures did not correlate with the web pages' content type. Conclusion COVID-19-related contents available online were considered of low to moderate quality and difficult to read.
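For reference, a hedged sketch of the FRE-BP computation is shown below. The coefficients follow the Brazilian Portuguese adaptation commonly attributed to Martins et al. (constant shifted from 206.835 to 248.835), and the syllable counter is a crude vowel-group approximation; both should be checked against the instrument actually used in the study.

```python
# Hedged sketch of the Flesch Reading Ease adapted to Brazilian Portuguese
# (FRE-BP). Coefficients and the syllable approximation are assumptions to
# verify against the study's instrument.
import re

def fre_bp(text: str) -> float:
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"\w+", text)
    syllables = sum(max(1, len(re.findall(r"[aeiouáéíóúâêôãõà]+", w.lower())))
                    for w in words)
    asl = len(words) / max(len(sentences), 1)   # average sentence length
    asw = syllables / max(len(words), 1)        # average syllables per word
    return 248.835 - 1.015 * asl - 84.6 * asw
```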


Author(s):
Francisco Brazuelo Grund
Maria Luz Cacheiro González

We present a research study on the design of web pages for mobile phones in the educational field. In the theoretical framework, we examine the current situation of mobile telephony as an educational resource. We then establish a methodological framework based on Web 2.0 tools, finally arriving at the creation of the mobile website 'Diseño de Páginas Web en Contextos Educativos', developed as part of the MODELTIC doctoral programme at UNED.


Author(s):  
Ben Choi

Web mining aims at searching, organizing, and extracting information on the Web, whereas search engines focus on searching. The next stage of Web mining is the organization of Web contents, which in turn facilitates the extraction of useful information from the Web. This chapter focuses on organizing Web contents. Since the majority of Web contents are stored in the form of Web pages, the chapter concentrates on techniques for automatically organizing Web pages into categories. Various artificial intelligence techniques have been used for this task; the most successful are classification and clustering, and this chapter focuses on clustering. Clustering is well suited to Web mining because it automatically organizes Web pages into categories, each containing pages with similar contents. However, one problem in clustering is the lack of general methods for automatically determining the number of categories or clusters, and until now no such method has been suitable for Web page clustering. To address this problem, this chapter describes a method to discover a constant factor that characterizes the Web domain and proposes a new method for automatically determining the number of clusters in Web page datasets. The chapter also proposes a new bi-directional hierarchical clustering algorithm, which arranges individual Web pages into clusters, then arranges those clusters into larger clusters, and so on until the average inter-cluster similarity approaches the constant factor. Together, the constant factor and the algorithm provide a new clustering system suitable for mining the Web.
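The stopping criterion can be illustrated with a generic agglomerative loop that merges the most similar clusters until the average inter-cluster similarity reaches a constant factor C; this is only a sketch of the idea, not the chapter's bi-directional algorithm, and the cosine similarity measure and the value of C are assumptions.

```python
# Hedged sketch: merge clusters until the average inter-cluster similarity
# drops to a domain "constant factor" C. Similarity measure and C are assumed.
import numpy as np
from itertools import combinations

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

def cluster_until_constant(vectors, C=0.25):
    clusters = [[v] for v in vectors]              # start: one page per cluster

    def centroid(c):
        return np.mean(c, axis=0)

    while len(clusters) > 1:
        sims = {(i, j): cosine(centroid(clusters[i]), centroid(clusters[j]))
                for i, j in combinations(range(len(clusters)), 2)}
        if sum(sims.values()) / len(sims) <= C:    # average inter-cluster similarity
            break                                  # reached the constant factor
        i, j = max(sims, key=sims.get)             # merge the most similar pair
        clusters[i].extend(clusters[j])
        del clusters[j]
    return clusters
```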


Author(s):
ALI SELAMAT
ZHI SAM LEE
MOHD AIZAINI MAAROF
SITI MARIYAM SHAMSUDDIN

In this paper, an improved web page classification method (IWPCM) that uses neural networks to identify the illicit contents of web pages is proposed. The IWPCM approach is based on improved feature selection for web pages using class-based feature vectors (CPBF). The CPBF feature selection approach gives greater weight to important terms in illicit web documents and reduces the dependency on the weights of less important terms in normal web documents. The IWPCM approach has been examined using the modified term-weighting scheme and compared with several traditional term-weighting schemes on non-illicit and illicit web contents available from the web. Precision, recall, and F1 measures were used to evaluate the effectiveness of the proposed approach. The experimental results show that the proposed improved term-weighting scheme is able to identify the non-illicit and illicit web contents in the experimental datasets.
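A minimal sketch of a class-based term-weighting idea in the spirit of CPBF is given below; the ratio-based formula is an assumption used for illustration and is not the weighting scheme defined in the paper.

```python
# Hedged sketch: raise the weight of terms frequent in illicit pages and lower
# the weight of terms that mainly occur in normal pages. The formula is assumed.
from collections import Counter

def class_based_weights(illicit_docs, normal_docs):
    illicit_tf = Counter(t for doc in illicit_docs for t in doc)
    normal_tf = Counter(t for doc in normal_docs for t in doc)
    weights = {}
    for term, tf in illicit_tf.items():
        # Terms common in illicit pages but rare in normal pages get high weight.
        weights[term] = tf / (1 + normal_tf.get(term, 0))
    return weights

w = class_based_weights(
    illicit_docs=[["buy", "illegal", "substance"], ["illegal", "download"]],
    normal_docs=[["buy", "book", "online"], ["download", "manual"]],
)
```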


Author(s):
JUNXIA GUO
HAO HAN

The technology that integrates various types of Web contents to build new Web applications through end-user programming is widely used nowadays. However, Web contents do not have a uniform interface for accessing their data and computation, and until now most general Web users have accessed information on the Web through applications. Hence, designing a uniform and flexible programmatic interface for the integration of different Web contents is unavoidable. In this paper, we propose an approach that analyzes Web applications automatically and reuses their information through the programmatic interface we designed. Our approach supports the flexible integration of Web applications, Web services and Web feeds. In our experiments, we use a large number of Web pages from different types of Web applications and achieve integration through the proposed programmatic interfaces. The experimental results show that our approach provides end-users with a flexible and user-friendly programming environment.
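The notion of a uniform programmatic interface can be sketched, under assumptions, as a small abstraction that hides whether a source is a Web application wrapper, a Web service client or a feed reader; the class and method names are invented and do not reproduce the interface proposed in the paper.

```python
# Hedged sketch of a uniform interface over heterogeneous Web contents.
# Names and methods are illustrative assumptions only.
from abc import ABC, abstractmethod
from typing import Dict, List

class WebSource(ABC):
    @abstractmethod
    def query(self, keyword: str) -> List[Dict]:
        """Return records extracted from this source for the given keyword."""

class FeedSource(WebSource):
    def __init__(self, url: str):
        self.url = url

    def query(self, keyword: str) -> List[Dict]:
        # A real implementation would fetch and parse the feed, then filter
        # entries by keyword; here we return a placeholder record.
        return [{"source": self.url, "keyword": keyword}]

def integrate(sources: List[WebSource], keyword: str) -> List[Dict]:
    # End users combine sources without caring whether each one wraps a Web
    # application, a Web service or a feed.
    return [record for s in sources for record in s.query(keyword)]
```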


2015
Vol 49 (2)
pp. 205-223
Author(s):
B T Sampath Kumar
D Vinay Kumar
K.R. Prithviraj

Purpose – The purpose of this paper is to determine the rate of loss of online citations used as references in scholarly journals, to recover the vanished online citations using the Wayback Machine, and to calculate the half-life period of online citations. Design/methodology/approach – The study selected three journals published by Emerald. All 389 articles published in these three scholarly journals were selected. A total of 15,211 citations were extracted, of which 13,281 were print citations and only 1,930 were online citations. The extracted online citations were then tested to determine whether they were active or missing on the Web. The W3C Link Checker was used to check the existence of online citations. The online citations that returned an HTTP error message when tested for accessibility were then entered into the search box of the Wayback Machine to recover the vanished online citations. Findings – The study found that only 12.69 percent (1,930 out of 15,211) of the citations were online citations, and the percentage of online citations varied from a low of 9.41 in the year 2011 to a high of 17.52 in the year 2009. Another notable finding was that 30.98 percent of online citations were not accessible (vanished) and the remaining 69.02 percent were still accessible (active). The HTTP 404 error message – “page not found” – was the overwhelming message encountered and represented 62.98 percent of all HTTP error messages. It was found that the Wayback Machine had archived only 48.33 percent of the vanished web pages, leaving 51.67 percent still unavailable. The half-life of online citations increased from 5.40 years to 11.73 years after recovering the vanished online citations. Originality/value – This is a systematic and in-depth study on the recovery of vanished online citations cited in journal articles spanning a period of five years. The findings of the study will be helpful to researchers, authors, publishers and editorial staff seeking to recover vanished online citations using the Wayback Machine.
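The two checks described above can be approximated with a short script that tests whether a cited URL still responds and, if not, queries the Wayback Machine availability API for an archived copy; this is only a sketch, since the study itself relied on the W3C Link Checker and the Wayback Machine's own search box.

```python
# Hedged sketch: detect a vanished citation, then look for an archived copy
# via the Wayback Machine availability API. Error handling is minimal.
import requests

def check_citation(url: str) -> dict:
    try:
        status = requests.head(url, allow_redirects=True, timeout=10).status_code
    except requests.RequestException:
        status = None
    result = {"url": url, "status": status, "archived_copy": None}

    if status is None or status >= 400:            # vanished citation
        resp = requests.get("https://archive.org/wayback/available",
                            params={"url": url}, timeout=10).json()
        closest = resp.get("archived_snapshots", {}).get("closest")
        if closest and closest.get("available"):
            result["archived_copy"] = closest.get("url")
    return result
```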


2017
Vol 13 (4)
pp. 425-444
Author(s):
Ngurah Agus Sanjaya Er
Mouhamadou Lamine Ba
Talel Abdessalem
Stéphane Bressan

Purpose This paper aims to focus on the design of algorithms and techniques for effective set expansion. A tool that finds and extracts candidate sets of tuples from the World Wide Web was designed and implemented. For instance, when a user provides <Indonesia, Jakarta, Indonesian Rupiah>, <China, Beijing, Yuan Renminbi>, <Canada, Ottawa, Canadian Dollar> as seeds, the system returns tuples composed of countries with their corresponding capital cities and currency names, constructed from content extracted from the retrieved Web pages. Design/methodology/approach The seeds are used to query a search engine and to retrieve relevant Web pages. The seeds are also used to infer wrappers from the retrieved pages. The wrappers, in turn, are used to extract candidates. The Web pages, wrappers, seeds and candidates, as well as their relationships, form the vertices and edges of a heterogeneous graph. Several options for ranking candidates, from PageRank to truth finding algorithms, were evaluated and compared. Remarkably, all vertices are ranked, thus providing an integrated approach that not only answers direct set expansion questions but also finds the most relevant pages for expanding a given set of seeds. Findings The experimental results show that leveraging the truth finding algorithm can indeed improve the level of confidence in the extracted candidates and the sources. Originality/value Current approaches to set expansion mostly support the expansion of sets of atomic data. This idea can be extended to sets of tuples in order to extract relation instances from the Web given a handful of tuple seeds. A truth finding algorithm is also incorporated into the approach, and it is shown to improve the confidence level in the ranking of both candidates and sources in set-of-tuples expansion.
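As an example of the graph-based ranking option, the sketch below builds a tiny heterogeneous graph of seeds, pages, wrappers and candidates and ranks all vertices with PageRank; the node names and edges are invented, and the truth-finding variant is not shown.

```python
# Hedged sketch: rank all vertices of a (toy) heterogeneous graph with PageRank,
# one of the ranking options compared in the paper. Edges are illustrative.
import networkx as nx

G = nx.DiGraph()
# seed -> page that contains it, page -> wrapper inferred from it,
# wrapper -> candidate tuple it extracts
G.add_edge("seed:<Indonesia, Jakarta, Indonesian Rupiah>", "page:example.org/currencies")
G.add_edge("page:example.org/currencies", "wrapper:w1")
G.add_edge("wrapper:w1", "candidate:<Japan, Tokyo, Yen>")

scores = nx.pagerank(G, alpha=0.85)
candidates = sorted((n for n in scores if n.startswith("candidate:")),
                    key=scores.get, reverse=True)
print(candidates)
```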


2015
Vol 67 (6)
pp. 663-686
Author(s):
Saed ALQARALEH
Omar RAMADAN
Muhammed SALAMAH

Purpose – The purpose of this paper is to design a watcher-based crawler (WBC) that has the ability to crawl static and dynamic web sites and to download only the updated and newly added web pages. Design/methodology/approach – In the proposed WBC, a watcher file, which can be uploaded to the web sites' servers, prepares a report that contains the addresses of the updated and newly added web pages. In addition, the WBC is split into five units, each responsible for performing a specific crawling process. Findings – Several experiments have been conducted, and it has been observed that the proposed WBC increases the number of uniquely visited static and dynamic web sites compared with existing crawling techniques. In addition, the proposed watcher file not only allows the crawlers to visit the updated and newly added pages, but also solves the crawlers' overlapping and communication problems. Originality/value – The proposed WBC performs all crawling processes in the sense that it detects all updated and newly added pages automatically, without any explicit human intervention and without downloading the entire web sites.
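A hedged sketch of how a crawler might consume such a watcher report is shown below; the JSON layout of the report is an assumption for illustration, not the WBC file format.

```python
# Hedged sketch: fetch a watcher report listing updated/added pages and
# download only those pages. The report's JSON structure is assumed.
import json
import urllib.request

def crawl_from_watcher_report(report_url: str) -> dict:
    with urllib.request.urlopen(report_url, timeout=10) as resp:
        report = json.load(resp)      # e.g. {"updated": [...], "added": [...]}

    fetched = {}
    for page_url in report.get("updated", []) + report.get("added", []):
        with urllib.request.urlopen(page_url, timeout=10) as page:
            fetched[page_url] = page.read()   # download only changed content
    return fetched
```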

