Editorial note

2016 ◽  
Vol 23 (1) ◽  
pp. 1-2
Author(s):  
RUSLAN MITKOV

In one of my previous editorial notes I promised that the positive developments of the Journal of Natural Language Engineering (JNLE) would be a continuous and common practice. I am proud to report that I have been able to keep this promise. JNLE has enjoyed another very successful year. The impact factor of the journal increased for the second consecutive year, with the journal listed in both the Linguistics and Computer Science categories. From 2016 onwards, JNLE is offering six 160-page issues per year, which by far exceeds the four 96-page issues from less than 10 years ago!

2017 ◽  
Vol 24 (1) ◽  
pp. 1-1
Author(s):  
RUSLAN MITKOV

The Journal of Natural Language Engineering (JNLE) has enjoyed another very successful year. The impact factor of the journal has increased for the third consecutive year, with the journal being listed among both Linguistics and Computer Science categories. Against the background of a record number of submissions, JNLE has, since 2016, been offering six 160-page issues per year, which by far exceeds the four 96-page issues offered less than 10 years ago!


2014 ◽  
Vol 21 (1) ◽  
pp. 1-2
Author(s):  
Ruslan Mitkov

The Journal of Natural Language Engineering (JNLE) has enjoyed another very successful year. Two years after being accepted into the Thomson Reuters Citation Index and being indexed in many of their products (including both the Science and the Social Science editions of the Journal Citation Reports (JCR)), the journal further established itself as a leading forum for high-quality articles covering all aspects of Natural Language Processing research, including, but not limited to, the engineering of natural language methods and applications. I am delighted to report an increased number of submissions, reaching a total of 92 between January and September 2014.


PLoS ONE ◽  
2020 ◽  
Vol 15 (12) ◽  
pp. e0244179
Author(s):  
Onur Güngör ◽  
Tunga Güngör ◽  
Suzan Uskudarli

The state-of-the-art systems for most natural language engineering tasks employ machine learning methods. Despite the improved performances of these systems, there is a lack of established methods for assessing the quality of their predictions. This work introduces a method for explaining the predictions of any sequence-based natural language processing (NLP) task implemented with any model, neural or non-neural. Our method, named EXSEQREG, introduces the concept of a region, which links the prediction to the features that are potentially important for the model. A region is a list of positions in the input sentence associated with a single prediction. Many NLP tasks are compatible with the proposed explanation method, as regions can be formed according to the nature of the task. The method models the prediction probability differences that are induced by careful removal of features used by the model. The output of the method is a list of importance values, each of which signifies the impact of the corresponding feature on the prediction. The proposed method is demonstrated with a neural network based named entity recognition (NER) tagger using Turkish and Finnish datasets. A qualitative analysis of the explanations is presented, and the results are validated with a procedure based on the mutual information score of each feature. We show that this method produces reasonable explanations and may be used for (i) assessing the degree of the contribution of features regarding a specific prediction of the model, and (ii) exploring the features that played a significant role for a trained model when analyzed across the corpus.
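The core idea described in the abstract (measure the drop in prediction probability when a feature is removed) can be sketched in a few lines. This is a minimal illustration of the feature-removal principle only; the function names and the model interface are illustrative assumptions, not the authors' actual EXSEQREG implementation.

```python
# Sketch of importance-by-feature-removal, in the spirit of the
# abstract. The model interface here is a hypothetical assumption:
# predict_proba(features) returns the probability of the prediction
# for one region. (A real model would also use `region` and `label`
# to select which probability to read; the toy below ignores them.)

def feature_importances(predict_proba, features, region, label):
    """Return one importance value per feature: the drop in the
    region's prediction probability when that feature is masked."""
    base = predict_proba(features)
    importances = []
    for i in range(len(features)):
        # "Careful removal": mask a single feature and re-score.
        masked = features[:i] + [None] + features[i + 1:]
        importances.append(base - predict_proba(masked))
    return importances

# Toy stand-in model: probability is the fraction of unmasked features.
toy = lambda feats: sum(f is not None for f in feats) / len(feats)
print(feature_importances(toy, ["cap", "suffix", "emb"], [0], "B-PER"))
```

With the toy model each feature contributes equally, so all three importance values come out to 1/3; with a trained NER tagger the values would instead reflect how much each feature drives the tag at the region's positions.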


2018 ◽  
Vol 25 (1) ◽  
pp. 1-4 ◽  
Author(s):  
JOHN TAIT

Natural Language Engineering really came about from a meeting between Roberto Garigliano (then of Durham University) and myself in his office in late 1992 or early 1993. I had returned to academia the previous year after a spell doing a variety of jobs in industry, and had become aware of Roberto and the Natural Language Group at Durham (just about 15 miles from the University of Sunderland where I was working). Roberto and I discussed several possible avenues of cooperation, including sponsorship by Durham of students on existing Sunderland masters degrees, a joint Durham/Sunderland specialist Masters in Language Engineering (which came to nothing) and a new journal focused on practical, engineering work in the language domain. Incidentally, one of the sponsored master’s students was Siobhan Devlin, now Head of Computing at Sunderland.


2010 ◽  
Vol 16 (1) ◽  
pp. 1-2
Author(s):  
Ruslan Mitkov

Natural Language Engineering (NLE) enters the second decade of the twenty-first century having established itself as a leading forum for high-quality articles covering all aspects of applied Natural Language Processing research, including, but not limited to, the engineering of natural language methods and applications. It continues to promote first class original research and bridge the gap between traditional computational linguistics research and the implementation of practical applications with potential real-world use. The journal has responded in several ways to the ongoing interest in and growth of research in this area. In 2007 NLE increased its number of pages per issue, thus enabling the publication of more articles. As of January 2010, new publication types are also promoted. In addition to welcoming articles which report on original, unpublished research, the journal now invites surveys presenting the state of the art in important areas of Natural Language Engineering and Natural Language Processing (such as tasks, tools, resources or applications) as well as squibs discussing specific problems. Book reviews and reports on industrial applications will continue to have a prominent place in the Journal. Conference reports, comparative discussions of Natural Language Engineering products and policy-orientated papers examining, for example, funding programmes or market opportunities, are welcome too. Special issues will remain an important feature of the Journal. We envisage one special issue per year, on average. Special issues are selected on a competitive basis after regular calls for proposals.


2011 ◽  
Vol 18 (1) ◽  
pp. i-i
Author(s):  
Ruslan Mitkov

Natural Language Engineering (NLE) has enjoyed another year promoting research in applied Natural Language Processing and serving the research community in the field. We were particularly pleased to register an increasing number of submissions on a wide range of topics reflecting the growing importance of the field. We were also delighted to receive a number of submissions representing the new article types announced in 2010, such as surveys and squibs.


2020 ◽  
Vol 40 (01) ◽  
pp. 359-365
Author(s):  
Hilary I Okagbue ◽  
Shiela A. Bishop ◽  
Patience I. Adamu ◽  
Abiodun A. Opanuga ◽  
Emmanuela C.M. Obasi

The impact factor (Web of Science, Clarivate Analytics) and CiteScore (Scopus, Elsevier) are the two leading metrics for journal evaluation, assessment and ranking. This paper examines the relationship between the two using their respective percentiles for 105 journals in the Computer Science, Theory and Methods (CSTM) subject category. Previous studies did not consider the quartile comparison of journal percentiles across the two databases (Scopus and Science Citation Index Expanded). The mean impact factor and CiteScore are 2.08 and 2.67, respectively. The Pearson correlation coefficient between the impact factor and CiteScore is r = 0.919 (p = 0.000), and between their respective journal percentiles it is r = 0.804 (p = 0.000). Analysis of variance revealed that the means of the impact factor and CiteScore of the 105 CSTM journals are the same (F = 3.64, p = 0.058) but that the means of their respective percentiles differ (F = 38.94, p = 0.00). The median test contradicts the ANOVA, as the medians of the impact factor and CiteScore differ at the 0.05 level of significance. The median journal percentiles are the same for only 2 journal titles; the median journal percentile (SCIE) is greater than the median journal percentile (Scopus) for 5 journal titles and less for 98 journal titles. The same result was obtained when the percentiles were converted to quartiles, except that the median journal quartiles are then the same for 37 journal titles; the median journal quartile (SCIE) is greater than the median journal quartile (Scopus) for 67 journal titles and less for only one journal title. Only 37 (35%) of the journals fall in the same quartile under both metrics. Caution is recommended in journal evaluation, as conflicting results can be obtained using the same metric.
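The two computations the abstract leans on, a Pearson correlation between the metrics and a percentile-to-quartile conversion, are simple enough to sketch directly. The journal percentiles below are made-up illustrative values, not the study's data.

```python
# Sketch of the two calculations behind the comparison: Pearson's r
# between paired journal scores, and mapping a percentile (0-100) to
# a rank quartile, where Q1 is the top 25% of the category.
import math

def to_quartile(pct):
    """Map a journal percentile to its quartile (Q1 = top 25%)."""
    return "Q1" if pct >= 75 else "Q2" if pct >= 50 else "Q3" if pct >= 25 else "Q4"

def pearson(xs, ys):
    """Plain Pearson correlation coefficient for paired samples."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Made-up percentiles for five journals in the two databases.
scie = [92, 60, 45, 80, 30]
scopus = [88, 72, 40, 85, 35]
print(pearson(scie, scopus))
print([(to_quartile(a), to_quartile(b)) for a, b in zip(scie, scopus)])
```

Note how the quartile view is coarser than the percentile view: two percentiles can differ (60 vs. 72) yet land in the same quartile, which is why the paper finds more agreement at quartile level (37 titles) than at percentile level (2 titles).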


2007 ◽  
Vol 148 (4) ◽  
pp. 165-171
Author(s):  
Anna Berhidi ◽  
Edit Csajbók ◽  
Lívia Vasas

Nobody doubts the importance of evaluating scientific performance; at the same time, the manner of evaluation divides experts. The present study mostly deals with models of citation-analysis-based evaluation. The aim of the authors is to present the background of the best-known tool, the impact factor, since, in the authors' experience, many people use it without knowing it well. In addition to the "unofficial impact factor" and the Euro-Factor, the most promising index, the h-index, is presented. Finally, a new initiative, the Index Copernicus Master List, which is suitable for ranking journals, is described. Having studied the different indexes, the authors make a proposal to complete the long-standing method for the evaluation of scientific performance.
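Of the indexes the abstract mentions, the h-index has a precise definition that is easy to state in code: a body of work has index h if h of its papers each have at least h citations. A minimal sketch (the citation counts below are made up for illustration):

```python
def h_index(citations):
    """h-index: the largest h such that h papers each have >= h citations."""
    h = 0
    # Walk the counts from most- to least-cited; the rank i is a valid
    # h as long as the i-th paper still has at least i citations.
    for i, c in enumerate(sorted(citations, reverse=True), start=1):
        if c >= i:
            h = i
        else:
            break
    return h

# Five papers with 10, 8, 5, 4 and 3 citations: four of them have
# at least 4 citations each, so the h-index is 4.
print(h_index([10, 8, 5, 4, 3]))  # prints 4
```

Unlike the impact factor, which is a journal-level average, the h-index summarizes an individual's output and is insensitive to a single very highly cited paper, which is part of why the abstract singles it out as promising.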

