COVIWD: COVID-19 Wikidata Dashboard

2021 ◽  
Vol 14 (1) ◽  
pp. 39-47
Author(s):  
Fariz Darari

COVID-19 (short for coronavirus disease 2019) is an emerging infectious disease that has had a tremendous impact on our daily lives. Globally, there have been over 95 million cases of COVID-19 and 2 million deaths across 191 countries and regions. The rapid spread and severity of COVID-19 call for a monitoring dashboard that can be developed quickly in an adaptable manner. Wikidata is a free, collaborative knowledge graph, collecting structured data about various themes, including that of COVID-19. We present COVIWD, a COVID-19 Wikidata dashboard, which provides a one-stop information/visualization service for topics related to COVID-19, ranging from symptoms and risk factors to comparisons of cases and deaths across countries. The dashboard is one of the first that leverages open knowledge graph technologies, namely RDF (for data modeling) and SPARQL (for querying), to give a live, concise snapshot of the COVID-19 pandemic. The use of both RDF and SPARQL enables rapid and flexible application development. COVIWD is available at http://coviwd.org.
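The abstract does not reproduce the dashboard's queries; the following is a minimal sketch, in Python with SPARQLWrapper, of the kind of SPARQL query such a dashboard could send to the public Wikidata endpoint. The item and property identifiers used here (Q3241045 "disease outbreak", Q81068910 "COVID-19 pandemic", P1603 "number of cases", P1120 "number of deaths") are assumptions about the current Wikidata model, not taken from the paper.

```python
# A minimal sketch (not the authors' code): query Wikidata's public SPARQL
# endpoint for per-country COVID-19 case and death counts. The class and
# property IDs below are assumptions about the current Wikidata model.
from SPARQLWrapper import SPARQLWrapper, JSON

ENDPOINT = "https://query.wikidata.org/sparql"

QUERY = """
SELECT ?countryLabel ?cases ?deaths WHERE {
  ?outbreak wdt:P31 wd:Q3241045 ;           # instance of: disease outbreak (assumed)
            wdt:P361 wd:Q81068910 ;         # part of: COVID-19 pandemic (assumed)
            wdt:P17 ?country ;              # country
            wdt:P1603 ?cases .              # number of cases (assumed)
  OPTIONAL { ?outbreak wdt:P1120 ?deaths }  # number of deaths (assumed)
  SERVICE wikibase:label { bd:serviceParam wikibase:language "en" . }
}
ORDER BY DESC(?cases)
LIMIT 10
"""

def top_outbreaks():
    sparql = SPARQLWrapper(ENDPOINT, agent="coviwd-sketch/0.1")
    sparql.setQuery(QUERY)
    sparql.setReturnFormat(JSON)
    results = sparql.query().convert()
    for row in results["results"]["bindings"]:
        print(row["countryLabel"]["value"],
              row["cases"]["value"],
              row.get("deaths", {}).get("value", "n/a"))

if __name__ == "__main__":
    top_outbreaks()
```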

Author(s):  
Charles Miller ◽  
Lucas Lecheler ◽  
Bradford Hosack ◽  
Aaron Doering ◽  
Simon Hooper

Information visualization involves the visual, and sometimes interactive, presentation and organization of complex data in a clear, compelling representation. Information visualization is an essential element of daily life, especially for people in data-driven professions such as online educators. Although information visualization research and methods are prevalent in fields as diverse as healthcare, statistics, economics, information technology, computer science, and politics, few examples of successful information visualization design or integration exist in online learning. The authors provide a background of information visualization in education, explore a set of potential roles for information visualization in the future design and integration of online learning environments, provide examples of contemporary interactive visualizations in education, and discuss opportunities to move forward with design and research in this emerging area.


Author(s):  
Lyubomir Penev ◽  
Teodor Georgiev ◽  
Viktor Senderov ◽  
Mariya Dimitrova ◽  
Pavel Stoev

As one of the first advocates of open access and open data in the field of biodiversity publishing, Pensoft has adopted a multiple data publishing model, resulting in the ARPHA-BioDiv toolbox (Penev et al. 2017). ARPHA-BioDiv consists of several data publishing workflows and tools described in the Strategies and Guidelines for Publishing of Biodiversity Data and elsewhere: (1) data underlying research results are deposited in an external repository and/or published as supplementary file(s) to the article and then linked/cited in the article text; supplementary files are published under their own DOIs and bear their own citation details; (2) data are deposited in trusted repositories and/or supplementary files and described in data papers; data papers may be submitted in text format or converted into manuscripts from Ecological Metadata Language (EML) metadata; (3) integrated narrative and data publishing is realised by the Biodiversity Data Journal, where structured data are imported into the article text from tables or via web services and downloaded/distributed from the published article; (4) data are published in structured, semantically enriched, full-text XMLs, so that several data elements can thereafter easily be harvested by machines; (5) Linked Open Data (LOD) are extracted from literature, converted into interoperable RDF triples in accordance with the OpenBiodiv-O ontology (Senderov et al. 2018) and stored in the OpenBiodiv Biodiversity Knowledge Graph.
The above-mentioned approaches are supported by a whole ecosystem of additional workflows and tools, for example: (1) pre-publication data auditing, involving both human and machine data quality checks (workflow 2); (2) web-service integration with data repositories and data centres, such as the Global Biodiversity Information Facility (GBIF), Barcode of Life Data Systems (BOLD), Integrated Digitized Biocollections (iDigBio), Data Observation Network for Earth (DataONE), Long Term Ecological Research (LTER), PlutoF, Dryad, and others (workflows 1, 2); (3) semantic markup of the article texts in the TaxPub format, facilitating further extraction, distribution and re-use of sub-article elements and data (workflows 3, 4); (4) server-to-server import of specimen data from GBIF, BOLD, iDigBio and PlutoF into manuscript text (workflow 3); (5) automated conversion of EML metadata into data paper manuscripts (workflow 2); (6) export of Darwin Core Archives and automated deposition in GBIF (workflow 3); (7) submission of individual images and supplementary data under their own DOIs to the Biodiversity Literature Repository, BLR (workflows 1-3); (8) conversion of key data elements from TaxPub articles and taxonomic treatments extracted by Plazi into RDF handled by OpenBiodiv (workflow 5). These approaches represent different aspects of the prospective scholarly publishing of biodiversity data, which, in combination with text and data mining (TDM) technologies for legacy literature (PDF) developed by Plazi, lay the ground for an entire data publishing ecosystem for biodiversity, supplying FAIR (Findable, Accessible, Interoperable and Reusable) data to several interoperable overarching infrastructures, such as GBIF, BLR, Plazi TreatmentBank and OpenBiodiv, and to various end users.
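Workflow (5) above produces RDF for the OpenBiodiv Biodiversity Knowledge Graph; below is a minimal, hypothetical sketch with rdflib of turning one extracted treatment statement into triples. The openbiodiv: namespace and the class and property names are illustrative placeholders, not verified terms of the published OpenBiodiv-O ontology; only the Darwin Core term dwc:scientificName is a real vocabulary term.

```python
# Minimal sketch (not Pensoft's tooling): express an extracted taxonomic
# treatment statement as RDF triples with rdflib, in the spirit of workflow 5.
# The openbiodiv: names below are placeholders, not verified OpenBiodiv-O terms.
from rdflib import Graph, Namespace, URIRef, Literal, RDF, RDFS

OPENBIODIV = Namespace("http://openbiodiv.net/")   # placeholder base IRI
DWC = Namespace("http://rs.tdwg.org/dwc/terms/")   # Darwin Core terms

g = Graph()
g.bind("openbiodiv", OPENBIODIV)
g.bind("dwc", DWC)

treatment = URIRef(OPENBIODIV["treatment/example-0001"])   # hypothetical ID
article = URIRef("https://doi.org/10.3897/BDJ.0.e00000")   # hypothetical DOI

g.add((treatment, RDF.type, OPENBIODIV.Treatment))         # placeholder class
g.add((treatment, OPENBIODIV.isPartOf, article))           # placeholder property
g.add((treatment, DWC.scientificName, Literal("Aus bus Smith, 2019")))
g.add((treatment, RDFS.label, Literal("Treatment of Aus bus")))

print(g.serialize(format="turtle"))
```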


Author(s):  
Navin Tatyaba Gopal ◽  
Anish Raj Khobragade

Knowledge graphs (KGs) capture structured data and relationships among a set of entities and items, and therefore constitute an attractive source of information that can enhance recommender systems. However, existing methods in this area depend on manual feature engineering and do not allow end-to-end training. This article proposes Knowledge Graph with Label Smoothness (KG-LS) to offer better suggestions for recommender systems. Our method computes user-specific entity embeddings by first applying a trainable function that identifies the KG relationships that matter for a specific user. In this way, we transform the KG into a user-specific weighted graph and then apply a graph neural network to compute personalized entity embeddings. To provide a better inductive bias, we introduce label smoothness, which assumes that adjacent items in the KG are likely to have similar user relevance labels/scores. Label smoothness provides regularization over the edge weights, and we show that it is equivalent to a label propagation scheme on the graph. We also develop an efficient implementation that exhibits strong scalability with respect to the size of the knowledge graph. Experiments on four datasets show that our method outperforms state-of-the-art baselines, and it also achieves strong performance in cold-start scenarios where user-item interactions are sparse.
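The abstract describes two ingredients: a user-specific re-weighting of KG edges followed by graph neural network aggregation, and a label-smoothness regularizer equivalent to label propagation. The toy NumPy sketch below is not the authors' code; it only illustrates those two ideas on a hand-made graph, with all sizes and the edge-scoring function chosen arbitrarily.

```python
# Toy sketch (not the authors' implementation) of the two ideas above:
# (1) re-weight KG edges with a user-specific relation score and aggregate
# neighbours as a single GNN layer; (2) a label-smoothness penalty that
# encourages connected items to carry similar user labels.
import numpy as np

rng = np.random.default_rng(0)
n_entities, n_relations, dim = 6, 3, 4

ent_emb = rng.normal(size=(n_entities, dim))     # entity embeddings
rel_emb = rng.normal(size=(n_relations, dim))    # relation embeddings
user_emb = rng.normal(size=dim)                  # one user's embedding

# KG as (head, relation, tail) triples
triples = [(0, 0, 1), (0, 1, 2), (1, 2, 3), (2, 0, 4), (3, 1, 5)]

def edge_weight(user, relation):
    """User-specific relevance of a relation: here a simple dot product."""
    return float(np.exp(user @ rel_emb[relation]))

def gnn_layer(user):
    """One aggregation step over the user-specific weighted graph."""
    agg = ent_emb.copy()
    for h, r, t in triples:
        w = edge_weight(user, r)
        agg[h] += w * ent_emb[t]      # propagate tail -> head
        agg[t] += w * ent_emb[h]      # and head -> tail (undirected view)
    norm = np.linalg.norm(agg, axis=1, keepdims=True)
    return agg / np.clip(norm, 1e-8, None)

def label_smoothness(user, labels):
    """Penalty: weighted squared label differences summed over edges."""
    return sum(edge_weight(user, r) * (labels[h] - labels[t]) ** 2
               for h, r, t in triples)

labels = np.array([1.0, 1.0, 0.0, 0.0, 1.0, 0.0])   # observed user feedback
user_entity_emb = gnn_layer(user_emb)
print("smoothness penalty:", round(label_smoothness(user_emb, labels), 3))
print("score for item 3:", float(user_entity_emb[3] @ user_emb))
```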


Author(s):  
Yanish Pradhananga ◽  
Pothuraju Rajarajeswari

The evolution of the Internet of Things (IoT) has brought several challenges for existing hardware, network, and application development, such as handling real-time streaming and batch big data, real-time event handling, dynamic cluster resource allocation for computation, and wired and wireless networks of things. To address these challenges, many new technologies and strategies are being developed. Tiarrah Computing integrates the concepts of Cloud Computing, Fog Computing, and Edge Computing. Its main objectives are to decouple application deployment and to achieve high performance, flexible application development, high availability, ease of development, and ease of maintenance. Tiarrah Computing focuses on using existing open-source technologies to overcome the challenges that evolve along with IoT. This paper gives an overview of these technologies, shows how to design applications with them, and elaborates on how to overcome most of the existing challenges.


Author(s):  
Yunqing Li ◽  
Shivakumar Raman ◽  
Paul Cohen ◽  
Binil Starly

Knowledge graph networks powering web search and chatbot agents have shown immense popularity. This paper discusses the first steps towards building a knowledge graph for manufacturing services discoverability. Due to the lack of a unified, widely adopted schema for structured data in the manufacturing services domain, as well as the limitations of existing relational database schemas in representing manufacturing service definitions, there is no unified schema that connects manufacturing resource and service descriptions with actual manufacturing service business entities. This gap severely limits the automated discoverability of manufacturing service business organizations. This paper designs a knowledge graph covering more than 8,000 manufacturers, the manufacturing services they provide, and the corresponding linkage with manufacturing service definitions available from Wikidata. In addition, this work proposes extensions to Schema.org to help small-business manufacturers embed search engine optimization (SEO) tags for search and discovery through web search engines. Such vocabulary extensions are critical to rapid identification and real-time capability assessment, particularly when the service providers themselves are responsible for updating the tags. A wider-scale enhancement of manufacturing-specific vocabulary extensions to Schema.org can tremendously benefit small and medium-scale manufacturers. The paper concludes with the additional work that must be done for a comprehensive manufacturing service graph that spans the entire manufacturing knowledge base.
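The paper's concrete vocabulary extensions are not reproduced here; as a rough illustration, the sketch below generates standard Schema.org JSON-LD (Organization, hasOfferCatalog, Offer, Service) that a small manufacturer could embed in its web page for crawler-based discovery. The business name and service list are hypothetical, and no proposed manufacturing-specific extension terms are included.

```python
# Minimal sketch (not the authors' proposed extension): emit Schema.org
# JSON-LD for a manufacturing service provider so that web crawlers can
# discover its capabilities. Only standard Schema.org terms are used;
# the business and service values below are hypothetical examples.
import json

def manufacturer_jsonld(name, url, services):
    return {
        "@context": "https://schema.org",
        "@type": "Organization",
        "name": name,
        "url": url,
        "hasOfferCatalog": {
            "@type": "OfferCatalog",
            "name": "Manufacturing services",
            "itemListElement": [
                {"@type": "Offer",
                 "itemOffered": {"@type": "Service", "name": s}}
                for s in services
            ],
        },
    }

doc = manufacturer_jsonld(
    "Example Precision Machining LLC",        # hypothetical business
    "https://example.com",
    ["CNC milling", "Sheet metal fabrication", "Injection molding"],
)

# The JSON-LD is meant to be embedded in the provider's page inside a
# <script type="application/ld+json"> tag.
print(json.dumps(doc, indent=2))
```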


IEEE Access ◽  
2020 ◽  
Vol 8 ◽  
pp. 42436-42446
Author(s):  
Jian Chen ◽  
Bing Li ◽  
Jian Wang ◽  
Yuqi Zhao ◽  
Li Yao ◽  
...  

Information ◽  
2020 ◽  
Vol 11 (4) ◽  
pp. 186 ◽  
Author(s):  
Shuang Liu ◽  
Hui Yang ◽  
Jiayi Li ◽  
Simon Kolmanič

With the continuous improvement of living standards, rapid economic growth, and the rapid advancement of information science and technology, the Chinese public has paid increasing attention to ancient Chinese history and culture. The use of information technology has been proven to promote the spread and development of historical culture, and it is becoming a necessary means of promoting traditional culture. This paper builds a knowledge graph of ancient Chinese history and culture so that the public can understand the relevant knowledge more quickly and accurately. The construction process is as follows. First, crawler technology is used to obtain text and table data related to ancient Chinese history and culture from Baidu Encyclopedia (similar to Wikipedia) and related pages: the crawler collects the semi-structured data in the Baidu Encyclopedia information box (InfoBox) to directly construct the triples required for the knowledge graph, and it also crawls the introductory text of Baidu Encyclopedia entries and specialized historical and cultural websites (History Chunqiu, On History) to extract unstructured entities and relationships. Second, entity recognition and relationship extraction are performed on the unstructured text: the entity recognition step uses the Bidirectional Long Short-Term Memory-Convolutional Neural Network-Conditional Random Field (BiLSTM-CNN-CRF) model for entity extraction, and the relationships between entities are extracted with the open-source tool DeepKE (an information extraction tool with language recognition ability developed by Zhejiang University). The resulting entities and relationships are then supplemented with the triples constructed from the existing knowledge base and the semi-structured data of the Baidu Encyclopedia information box. Finally, ontology construction and quality evaluation of the entire knowledge graph are performed to form the final knowledge graph of ancient Chinese history and culture.
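As a rough illustration of the first step (not the authors' pipeline), the sketch below converts crawled infobox key-value pairs into subject-predicate-object triples; the entry, field names, and predicate mapping are hypothetical examples.

```python
# Minimal sketch (not the authors' pipeline): convert semi-structured
# infobox key-value pairs, as crawled from an encyclopedia page, into
# (subject, predicate, object) triples. The entry and field names are
# hypothetical examples of the kind of data described above.
def infobox_to_triples(entry_name, infobox, predicate_map):
    triples = []
    for field, value in infobox.items():
        predicate = predicate_map.get(field)
        if predicate is None:
            continue                      # skip fields without a mapping
        # Some infobox values list several items separated by a delimiter.
        for obj in str(value).split("、"):
            triples.append((entry_name, predicate, obj.strip()))
    return triples

# Hypothetical crawled infobox for a dynasty (values translated to English).
infobox = {
    "Capital": "Chang'an",
    "Founder": "Liu Bang",
    "Preceding dynasty": "Qin dynasty",
}
predicate_map = {
    "Capital": "hasCapital",
    "Founder": "foundedBy",
    "Preceding dynasty": "precededBy",
}

for triple in infobox_to_triples("Han dynasty", infobox, predicate_map):
    print(triple)
```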


2020 ◽  
pp. 263-286
Author(s):  
Julia Valentin Laurindo Santos ◽  
João Vitor Prudente ◽  
Letícia Parente-Ribeiro ◽  
Flavia Lins-de-Barros

In 2020, the rapid spread of Covid-19, a disease caused by a highly contagious virus, led many governments to adopt measures of social distancing, including the suspension of activities considered non-essential and the closure of public spaces. In Brazil, a country distinguished by sun, sea and sand (3s) tourism, the effects were immediate in the months of March, April, May and June: closed beaches and the suspension of all economic activities linked to them. This article seeks to understand the effects of the Covid-19 pandemic on a traditional sector of the beach economy in Rio de Janeiro, the “tent business”. For that, we analyzed: 1) the organization of this sector in the pre-pandemic period; 2) the legal measures adopted to contain the spread of the new coronavirus and which affected the uses of beaches; 3) the effects of the pandemic on the daily lives of beach workers; and 4) the challenges for the resumption of activities in the post-pandemic period. The data used in this research are the result of surveys and fieldwork carried out in the period before the pandemic and of semi-structured interviews conducted during quarantine, via social networks, with owners and employees of tents on the beaches of the city’s waterfront. For this study, the normative measures that affected the beaches of the city of Rio de Janeiro during the pandemic were also analyzed. As main results, we highlight, first, the importance of the “tent business” in the economic circuits associated with Rio beaches, as well as the role that tents play as poles of concentration of bathers on the sand strip. Regarding governmental social distancing measures, we noticed that the beaches were among the areas affected for the longest time by the suspension of activities and that, until the full reopening in October, the activities associated with the solarium, such as the “tent business”, were those facing the most uncertain horizon of recovery. The impacts on the daily lives of the tent owners and their employees were enormous, with a steep decrease in their incomes and difficulty finding alternative occupations. These effects were partially offset by the adoption of assistance measures by governments and the creation of support networks involving beachgoers, both Brazilian and foreign, as a result of a relationship built over the years with stallholders and other beach workers. Finally, drawing on a comparison with other situations around the world, we highlight the challenges already being faced in adopting new ways of ordering the uses of beaches in the post-pandemic world.
Keywords: Coastal management, social distancing, beach workers, beachfront, solarium.

