Characterizing Task-Relevant Information in Natural Language Software Artifacts

Author(s):  
Arthur Marques ◽  
Nick C. Bradley ◽  
Gail C. Murphy
2012 ◽  
pp. 1-12 ◽  
Author(s):  
Ricardo Colomo-Palacios ◽  
Marcos Ruano-Mayoral ◽  
Pedro Soto-Acosta ◽  
Ángel García-Crespo

In current organizations, the importance of knowledge and competence is unquestionable. In Information Technology (IT) companies, which are, by definition, knowledge intensive, this importance is critical. In such organizations, the models of knowledge exploitation include specific processes and elements that drive the production of knowledge aimed at satisfying organizational objectives. However, competence evidence collection is a highly intensive and time-consuming task, and it is the key point of this system. SeCEC-IT is a tool based on software artifacts that extracts relevant information using natural language processing techniques. It enables competence evidence detection by deducing competence facts from documents in an automated way. Among its technological components, SeCEC-IT includes semantic technologies, natural language processing, and human resource communication standards (HR-XML).

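At its simplest, deducing a competence fact from free text can be approximated by pattern-based extraction. The sketch below is hypothetical: the pattern, the sentence, and the field names are invented for illustration and are not SeCEC-IT's actual rules, which the abstract does not detail.

```python
import re

# Hypothetical sketch of deducing a structured competence fact from
# free text. The pattern, sentence, and field names are invented;
# they are not SeCEC-IT's actual extraction rules.
PATTERN = re.compile(
    r"(?P<person>[A-Z][a-z]+) (?:developed|implemented|designed) "
    r"(?:the )?(?P<artifact>[\w-]+)"
)

sentence = "Alice implemented the authentication-module in Java."
m = PATTERN.search(sentence)
fact = {"person": m.group("person"), "evidence": m.group("artifact")}
print(fact)  # {'person': 'Alice', 'evidence': 'authentication-module'}
```

A real system would replace the hand-written pattern with the NLP pipeline and semantic technologies the abstract names, but the output shape, a structured fact tied to a person, is the same idea.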
Author(s):  
Mario Jojoa Acosta ◽  
Gema Castillo-Sánchez ◽  
Begonya Garcia-Zapirain ◽  
Isabel de la Torre Díez ◽  
Manuel Franco-Martín

The use of artificial intelligence in health care has grown quickly. In this context, we present our work on applying Natural Language Processing (NLP) techniques to analyze the sentiment of users who answered two questions from the CSQ-8 questionnaire with raw Spanish free text. Their responses relate to mindfulness, a novel technique used to control stress and anxiety caused by different factors in daily life. We proposed an online course applying this method to improve the quality of life of health care professionals during the COVID-19 pandemic, and we evaluated the satisfaction level of the participants with a view to establishing strategies to improve future experiences. To perform this task automatically, we used NLP models such as Swivel embeddings, neural networks, and transfer learning to classify the inputs into three categories: negative, neutral, and positive. Due to the limited amount of data available (86 records for the first question and 68 for the second), transfer learning techniques were required. The length of the text was unrestricted from the user's standpoint, and our approach attained maximum accuracies of 93.02% and 90.53%, respectively, based on ground truth labeled by three experts. Finally, we proposed a complementary analysis, using a graphical text representation based on word frequency, to help researchers identify relevant information in the opinions with an objective approach to sentiment. The main conclusion drawn from this work is that applying NLP techniques with transfer learning to small amounts of data can achieve sufficient accuracy in the sentiment analysis and text classification stages.
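The classification stage described above (free text in, one of three sentiment labels out) can be sketched with a simple stand-in model. The abstract's pipeline uses Swivel embeddings and transfer learning; the example below substitutes a TF-IDF plus logistic-regression classifier, and all texts and labels are invented, so it only illustrates the task shape, not the paper's method.

```python
# Minimal sketch of three-class Spanish sentiment classification.
# Stand-in model: TF-IDF + logistic regression (NOT the paper's
# Swivel/transfer-learning pipeline). Texts and labels are invented.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

texts = [
    "El curso me ayudó muchísimo",        # positive
    "Estuvo bien, nada especial",         # neutral
    "No me sirvió para nada",             # negative
    "Muy útil para manejar el estrés",    # positive
    "Normal, esperaba más",               # neutral
    "Una pérdida de tiempo",              # negative
]
labels = ["positive", "neutral", "negative"] * 2

clf = make_pipeline(TfidfVectorizer(), LogisticRegression(max_iter=1000))
clf.fit(texts, labels)
pred = clf.predict(["Me encantó el curso"])[0]
print(pred)
```

With only a handful of labeled examples, as in the study's 86- and 68-record datasets, a pretrained embedding plus transfer learning is what makes this tractable in practice; the toy model here would not generalize.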


2018 ◽  
Vol 17 (03) ◽  
pp. 883-910 ◽  
Author(s):  
P. D. Mahendhiran ◽  
S. Kannimuthu

Contemporary research in Multimodal Sentiment Analysis (MSA) using deep learning is becoming popular in Natural Language Processing. Enormous amounts of data are obtainable every day from social media such as Facebook, WhatsApp, YouTube, Twitter, and microblogs. With such large multimodal data, it is difficult to identify the relevant information on social media websites; hence, there is a need for a more intelligent MSA. Here, deep learning is used to improve the understanding and performance of MSA. Deep learning delivers automatic feature extraction and helps achieve the best performance in a combined model that integrates linguistic, acoustic, and video information. This paper focuses on the various techniques used to classify a given portion of natural language text, audio, and video according to the thoughts, feelings, or opinions expressed in it, i.e., whether the general attitude is neutral, positive, or negative. From the results, it is observed that the deep learning classification algorithm gives better results than other machine learning classifiers such as KNN, Naive Bayes, Random Forest, Random Tree, and Neural Net models. The proposed deep learning MSA identifies sentiment in web videos; in preliminary proof-of-concept experiments on the ICT-YouTube dataset, the proposed multimodal system achieved an accuracy of 96.07%.
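One common way to combine linguistic, acoustic, and video information is decision-level (late) fusion: each modality's classifier produces a probability distribution over the three classes, and the combined model averages them. The sketch below is a hedged illustration of that idea only; the probabilities are invented and the paper's actual fusion architecture may differ.

```python
import numpy as np

# Hedged sketch of decision-level (late) fusion for multimodal
# sentiment analysis: each modality classifier yields a probability
# distribution over the three classes; the combined model takes a
# weighted average. Probabilities below are invented.
LABELS = ["negative", "neutral", "positive"]

def fuse(text_p, audio_p, video_p, weights=(1/3, 1/3, 1/3)):
    """Weighted average of per-modality class probabilities."""
    stacked = np.stack([text_p, audio_p, video_p])
    combined = np.average(stacked, axis=0, weights=weights)
    return LABELS[int(np.argmax(combined))], combined

label, probs = fuse(
    np.array([0.1, 0.2, 0.7]),  # linguistic classifier output
    np.array([0.2, 0.5, 0.3]),  # acoustic classifier output
    np.array([0.1, 0.3, 0.6]),  # visual classifier output
)
print(label)  # positive
```

The weights make it easy to let a more reliable modality (often the linguistic one) dominate; deep-learning systems typically learn the fusion instead of fixing it by hand.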


2004 ◽  
Vol 18 (4) ◽  
pp. 323-336 ◽  
Author(s):  
Paul C. Amrhein

A psycholinguistic account of motivational interviewing (MI) is proposed. Critical to this view is the assumption that therapists and clients are natural language users engaged in a constructive conversation that reveals and augments relevant information about the status of future change in a client’s substance abuse. The role of client speech acts—most notably, verbal commitments—during MI is highlighted. How commitments can be signaled in client speech or gestures is discussed. How these commitment signals can inform therapeutic process and subsequent behavioral outcome is then put forth. Using natural language as a measure, an MI process model is presented that not only posits a mediational role for client commitment in relating underlying factors of desire, ability (self-efficacy), need, and reasons to behavior, but also a pivotal role as a need-satisfying enabler of a social-cognitive mechanism for personal change.


2015 ◽  
Vol 3 (1) ◽  
Author(s):  
Janu Saptari ◽  
Purwono Purwono

This research aims to determine the retrieval effectiveness of the Online Union Catalog of the UGM Library and the index pattern of its database, to establish whether searching on the title entry or the subject entry is more effective, and to identify the causes of the difference in retrieval effectiveness between the two entries. With the growth and accumulation of information, the main problem has shifted from accessing information to selecting the information relevant to a need. Manual information retrieval is no longer feasible, because the body of information is very large and keeps growing; an information retrieval system is needed to help users find information. One such system is the online union catalog, with which users can easily search for book titles, research results, and other library documents from anywhere and at any time. Obtaining documents relevant to a need is the crux of searching, and the effectiveness of the online union catalog is the key to this. The effectiveness of a system is influenced by many interrelated components, such as the quality of the metadata input in the database, the index, the search strategy, the capability of the system application, and the choice of keywords. A system is considered effective if it can find a larger number of documents or items of information matching the request with high precision. The research was conducted by testing searches on the online union catalog using natural-language keywords, entered in both the title entry and the subject entry. The sample keywords were taken from one course title from each faculty at UGM; each topic was then formulated into four natural-language keywords. The data obtained were grouped by relevance level and analyzed with the nonparametric Mann-Whitney test.
From the data processing it was concluded that the retrieval ratio for title entries was 66.6% and for subject entries 58.3%. Title entries yielded a mean of 85.9 documents, of which 42.4% were very relevant, 24.2% less relevant, and 33.5% irrelevant. Subject entries yielded a mean of 62.5 documents, of which 31.9% were very relevant, 26.4% less relevant, and 30.6% irrelevant. Based on these results, it is suggested that the performance of the Online Union Catalog of the UGM Library be improved: searching techniques and the capability of the search system should be improved, the quality of input data maintained, metadata kept consistent, and catalog data from all member libraries regularly updated. Keywords: retrieval effectiveness, information retrieval system, online union catalog.
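The relevance percentages reported above are simple shares of the judged documents at each level. A minimal sketch of how such a breakdown is computed, with invented counts (these are not the study's raw data):

```python
# Sketch: turning raw relevance judgments into the kind of
# percentage breakdown reported above. Counts are invented for
# illustration; they are not the study's data.
def breakdown(counts):
    """Map each relevance level to its share (%) of all judged documents."""
    total = sum(counts.values())
    return {level: round(100 * n / total, 1) for level, n in counts.items()}

judged = {"very relevant": 42, "less relevant": 24, "irrelevant": 33}
print(breakdown(judged))
# {'very relevant': 42.4, 'less relevant': 24.2, 'irrelevant': 33.3}
```

Comparing two such breakdowns (title entries vs. subject entries) per query is exactly the paired data the study feeds to the Mann-Whitney test.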


2005 ◽  
Vol 6 (1-2) ◽  
pp. 86-93 ◽  
Author(s):  
Henk Harkema ◽  
Ian Roberts ◽  
Rob Gaizauskas ◽  
Mark Hepple

Recent years have seen a huge increase in the amount of biomedical information that is available in electronic format. Consequently, for biomedical researchers wishing to relate their experimental results to relevant data lurking somewhere within this expanding universe of on-line information, the ability to access and navigate biomedical information sources in an efficient manner has become increasingly important. Natural language and text processing techniques can facilitate this task by making the information contained in textual resources such as MEDLINE more readily accessible and amenable to computational processing. Names of biological entities such as genes and proteins provide critical links between different biomedical information sources and researchers' experimental data. Therefore, automatic identification and classification of these terms in text is an essential capability of any natural language processing system aimed at managing the wealth of biomedical information that is available electronically. To support term recognition in the biomedical domain, we have developed Termino, a large-scale terminological resource for text processing applications, which has two main components: first, a database into which very large numbers of terms can be loaded from resources such as UMLS, and stored together with various kinds of relevant information; second, a finite state recognizer, for fast and efficient identification and mark-up of terms within text. Since many biomedical applications require this functionality, we have made Termino available to the community as a web service, which allows for its integration into larger applications as a remotely located component, accessed through a standardized interface over the web.
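The core of a dictionary-based recognizer like Termino's finite-state component can be sketched as a trie over multi-word terms that is walked token by token, recording the longest match starting at each position. The sketch below is a simplified illustration, not Termino's implementation, and the terms are invented rather than drawn from UMLS.

```python
# Simplified sketch of dictionary-based term recognition in the
# style of a finite-state recognizer: a trie over multi-word terms,
# walked token by token. Records the longest match starting at each
# token, so matches may overlap. Terms are illustrative, not UMLS.
def build_trie(terms):
    root = {}
    for term in terms:
        node = root
        for tok in term.lower().split():
            node = node.setdefault(tok, {})
        node["$"] = term  # end-of-term marker
    return root

def recognize(text, trie):
    tokens = text.lower().split()
    hits = []
    for i in range(len(tokens)):
        node, match = trie, None
        for j in range(i, len(tokens)):
            if tokens[j] not in node:
                break
            node = node[tokens[j]]
            if "$" in node:
                match = (i, j + 1, node["$"])  # longest so far from i
        if match:
            hits.append(match)
    return hits

trie = build_trie(["tumor necrosis factor", "p53", "necrosis"])
print(recognize("Mutant p53 modulates tumor necrosis factor signaling", trie))
```

A production recognizer additionally handles tokenization variants, case and hyphenation normalization, and the very large term inventories loaded from resources such as UMLS, which is why Termino pairs the recognizer with a database component.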


2016 ◽  
Vol 78 (9-3) ◽  
Author(s):  
Rosmayati Mohemad ◽  
Abdul Razak Hamdan ◽  
Zulaiha Ali Othman ◽  
Noor Maizura Mohamad Noor

The enormous amount of unstructured data presents the biggest challenge to decision makers in eliciting meaningful information to support business decision-making. This study explores the potential use of ontologies in extracting and populating information from various combinations of unstructured and semi-structured data formats such as tabular, form-based, and natural language-based text. The main objective of this study is to propose an architecture of information extraction for ontology population. Contractor selection is chosen as the domain of interest. Thus, this research focuses on the extraction of contractor profiles from tender documents in order to enrich an ontological contractor profile by populating it with the relevant extracted information. The findings show good precision and recall: the performance measures reached 100% precision and recall for information extracted from both tabular and form-based formats. However, the precision for relevant information extracted from natural language text is moderate, at 42.86%, due to the limitations of the linguistic approach for processing Malay texts.
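Precision and recall for an extraction task like this are set comparisons between the extracted items and a gold standard. A minimal sketch, with invented items (not the study's data; note that 3 correct items out of 7 extracted gives exactly the 42.86% precision figure reported above):

```python
# Sketch: set-based precision and recall for evaluating extracted
# information against a gold standard. Items are invented for
# illustration; they are not the study's data.
def precision_recall(extracted, gold):
    extracted, gold = set(extracted), set(gold)
    tp = len(extracted & gold)  # true positives
    return tp / len(extracted), tp / len(gold)

extracted = {"a", "b", "c", "d", "e", "f", "g"}  # 7 items extracted
gold = {"a", "b", "c"}                           # 3 of them correct
p, r = precision_recall(extracted, gold)
print(round(p * 100, 2))  # 42.86
```

The asymmetry matters here: over-generating extractions from noisy natural-language text hurts precision while leaving recall intact, which matches the pattern the study reports for Malay free text versus tabular and form-based input.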

