Peer Review #2 of "Challenges as enablers for high quality Linked Data: insights from the Semantic Publishing Challenge (v0.1)"

2017 ◽  
Vol 3 ◽  
pp. e105 ◽  
Author(s):  
Anastasia Dimou ◽  
Sahar Vahdati ◽  
Angelo Di Iorio ◽  
Christoph Lange ◽  
Ruben Verborgh ◽  
...  

While most challenges organized so far in the Semantic Web domain focus on comparing tools with respect to different criteria, such as their features and competencies, or on exploiting semantically enriched data, the Semantic Web Evaluation Challenges series, co-located with the ESWC Semantic Web Conference, aims to compare tools based on their output, namely the produced dataset. The Semantic Publishing Challenge is one of these challenges. Its goal is to involve participants in extracting data from heterogeneous sources on scholarly publications and producing Linked Data that can be exploited by the community itself. This paper reviews lessons learned from both (i) the overall organization of the Semantic Publishing Challenge, regarding the definition of the tasks, the construction of the input dataset and the design of the evaluation, and (ii) the results produced by the participants, regarding the proposed approaches, the tools used, the preferred vocabularies and the results produced in the three editions of 2014, 2015 and 2016. We compare these lessons to those of other Semantic Web Evaluation Challenges. In this paper, we (i) distill best practices for organizing such challenges that could be applied to similar events, and (ii) report observations on Linked Data publishing derived from the submitted solutions. We conclude that higher quality may be achieved when Linked Data is produced as the result of a challenge, because the competition becomes an incentive, while solutions adhere better to Linked Data publishing best practices when they are evaluated against the rules of the challenge.


Logistics ◽  
2021 ◽  
Vol 5 (1) ◽  
pp. 6
Author(s):  

Peer review is the driving force of journal development, and reviewers are gatekeepers who ensure that Logistics maintains its standards for the high quality of its published papers [...]


Brain Sciences ◽  
2021 ◽  
Vol 11 (2) ◽  
pp. 138
Author(s):  

Peer review is the driving force of journal development, and reviewers are gatekeepers who ensure that Brain Sciences maintains its standards for the high quality of its published papers [...]


Dairy ◽  
2021 ◽  
Vol 2 (1) ◽  
pp. 71-72
Author(s):  

Peer review is the driving force of journal development, and reviewers are gatekeepers who ensure that Dairy maintains its standards for the high quality of its published papers [...]


Cosmetics ◽  
2021 ◽  
Vol 8 (1) ◽  
pp. 11
Author(s):  

Peer review is the driving force of journal development, and reviewers are gatekeepers who ensure that Cosmetics maintains its standards for the high quality of its published papers [...]


Geosciences ◽  
2021 ◽  
Vol 11 (2) ◽  
pp. 39
Author(s):  

Peer review is the driving force of journal development, and reviewers are gatekeepers who ensure that Geosciences maintains its standards for the high quality of its published papers [...]


Clean Technologies ◽  
2021 ◽  
Vol 3 (1) ◽  
pp. 79-80
Author(s):  

Peer review is the driving force of journal development, and reviewers are gatekeepers who ensure that Clean Technologies maintains its standards for the high quality of its published papers [...]

