THE RIGHT THRESHOLD VALUE: WHAT IS THE RIGHT THRESHOLD OF COSINE MEASURE WHEN USING LATENT SEMANTIC ANALYSIS FOR EVALUATING STUDENT ANSWERS?

2006 ◽  
Vol 15 (05) ◽  
pp. 767-777 ◽  
Author(s):  
PHANNI PENUMATSA ◽  
MATTHEW VENTURA ◽  
ARTHUR C. GRAESSER ◽  
MAX LOUWERSE ◽  
XIANGEN HU ◽  
...  

AutoTutor is an intelligent tutoring system that holds conversations with learners in natural language. AutoTutor uses Latent Semantic Analysis (LSA) to match student answers to a set of expected answers that would appear in a complete and correct response, or that reflect common but incorrect understandings of the material. The correctness of a student contribution is decided using a threshold value on the LSA cosine between the student answer and the expectations. In previous work, LSA has been shown to be effective in detecting good student answers. The results indicate that the best agreement between LSA matches and the evaluations of subject matter experts is obtained when the cosine threshold is allowed to be a function of the lengths of both the student answer and the expectation being considered. Based on some of our experiences with LSA and AutoTutor, we are developing a new mathematical model to improve the precision of AutoTutor's natural language understanding and its discriminative ability.
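The length-dependent decision rule described above can be sketched as follows. This is a minimal illustration, not the paper's actual model: the linear threshold function and its parameters (`base`, `slope`, `floor`) are assumptions chosen only to show the idea that longer answers and expectations can be granted a lower cosine bar.

```python
import numpy as np

def cosine(u, v):
    """Cosine similarity between two LSA document vectors."""
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

def length_adjusted_threshold(ans_len, exp_len, base=0.65, slope=0.01, floor=0.35):
    """Hypothetical linear rule: the threshold decreases with the length
    (in words) of the shorter of the two texts, down to a floor."""
    return max(floor, base - slope * min(ans_len, exp_len))

def covers_expectation(student_vec, expectation_vec, ans_len, exp_len):
    """Judge a student contribution as matching an expectation when its
    cosine clears the length-dependent threshold."""
    return cosine(student_vec, expectation_vec) >= length_adjusted_threshold(ans_len, exp_len)
```

With this rule, a 40-word answer is accepted at a lower cosine than a 5-word answer, which is the qualitative behavior the abstract reports.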

Author(s):  
Ann Neethu Mathew ◽  
Rohini V. ◽  
Joy Paulose

Computer-based knowledge and computation systems are becoming major sources of leverage for multiple industry segments. Hence, educational systems and learning processes across the world are on the cusp of a major digital transformation. This paper explores the concept of an artificial intelligence and natural language processing (NLP) based intelligent tutoring system (ITS) in the context of computer education in primary and secondary schools. One of the components of an ITS is a learning assistant, which enables students to seek assistance as and when they need it, wherever they are. As part of this research, a pilot prototype chatbot was developed to serve as a learning assistant for the subject Scratch (Scratch is a graphical utility used to teach school children the concepts of programming). Using an open-source natural language understanding (NLU) or NLP library and a Slack-based UI, student queries were input to the chatbot, which returned the sought explanation as the answer. Through a two-stage testing process, the chatbot's NLP extraction and information retrieval performance were evaluated. The testing results showed that the ontology modelling for such a learning assistant was done relatively accurately, and that the approach has the potential to be pursued as a cloud-based solution in the future.


Information ◽  
2018 ◽  
Vol 9 (7) ◽  
pp. 179
Author(s):  
Tamás Mészáros ◽  
Margit Kiss

Critical annotations are important knowledge sources when researching an author's oeuvre. They describe literary, historical, cultural, linguistic, and other kinds of information written in natural languages. Acquiring knowledge from these notes is a complex task due to the limited natural language understanding capability of computerized tools. The aim of the research was to extract knowledge from existing annotations and to develop new authoring methods to facilitate knowledge acquisition. After a structural and semantic analysis of critical annotations, the authors developed a software tool that transforms existing annotations into a structured form that encodes referral and factual knowledge. The authors also propose a new method for authoring annotations based on controlled natural languages. This method ensures that annotations are semantically processable by computer programs while the authoring process remains simple for non-technical users.


Electronics ◽  
2021 ◽  
Vol 10 (18) ◽  
pp. 2300
Author(s):  
Rade Matic ◽  
Milos Kabiljo ◽  
Miodrag Zivkovic ◽  
Milan Cabarkapa

In recent years, gradual improvements in communication and connectivity technologies have enabled new technical possibilities for the adoption of chatbots across diverse sectors such as customer service, trade, and marketing. A chatbot is a platform that uses natural language processing, a subset of artificial intelligence, to find the right answer to users' questions and solve their problems. An advanced chatbot architecture is proposed that is extensible and scalable, and that supports different services for natural language understanding (NLU) as well as communication channels for user interaction. The paper describes the overall chatbot architecture and provides corresponding metamodels, as well as rules for mapping between the proposed metamodel and two commonly used NLU metamodels. The proposed architecture can easily be extended with new NLU services and communication channels. Finally, two implementations of the proposed chatbot architecture are briefly demonstrated in the case studies of "ADA" and "COVID-19 Info Serbia".
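As an illustration of what such a metamodel mapping rule might look like, the sketch below converts a Rasa-style NLU parse result (a dictionary with `intent` and `entities` keys) into a small canonical intent model. The class shape and field names are assumptions for illustration only, not the metamodel actually proposed in the paper:

```python
from dataclasses import dataclass, field

@dataclass
class CanonicalIntent:
    """Hypothetical canonical intent in the chatbot's own metamodel."""
    name: str
    entities: dict = field(default_factory=dict)

def from_rasa_like(payload: dict) -> CanonicalIntent:
    """Mapping rule: flatten a Rasa-style parse result into the
    canonical intent, keeping one value per entity type."""
    return CanonicalIntent(
        name=payload["intent"]["name"],
        entities={e["entity"]: e["value"] for e in payload.get("entities", [])},
    )
```

A second mapping function of the same shape would cover the other NLU metamodel, so new NLU services plug in by supplying one such adapter.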


2021 ◽  
Author(s):  
Brandon Bennett

The Winograd Schema Challenge is a general test for Artificial Intelligence based on problems of pronoun reference resolution. I investigate the semantics and interpretation of Winograd Schemas, concentrating on the original and most famous example. This study suggests that a rich ontology, detailed commonsense knowledge, and special-purpose inference mechanisms are all required to resolve just this one example. The analysis supports the view that a key factor in the interpretation and disambiguation of natural language is the preference for coherence. This preference guides the resolution of co-reference in relation to both explicitly mentioned entities and implicit entities that are required to form an interpretation of what is being described. I suggest that the assumed identity of implicit entities arises from the expectation of coherence and provides a key mechanism underpinning natural language understanding. I also argue that conceptual ontologies can play a decisive role not only in directly determining pronoun references but also in identifying implicit entities and implied relationships that bind together the components of a sentence.


Reusing code with or without modification is a common process in building large codebases of system software such as Linux, gcc, and the JDK. This process is referred to as software cloning or forking. Developers often find it difficult to port bug fixes across a large codebase from one language to another during software porting. Many existing approaches identify software clones within the same language, which does not help developers involved in porting; hence there is a need for a cross-language clone detector. This paper uses a Natural Language Processing (NLP) approach based on latent semantic analysis, implemented via singular value decomposition (SVD), to find cross-language clones of neighboring languages for all four types of clones. It takes code (C, C++, or Java) as input and matches the neighboring code clones in a static repository in terms of the frequency of matched lines.
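A minimal sketch of the LSA-plus-SVD step: code snippets are treated as token documents, the term-document matrix is reduced with a truncated SVD, and pairwise cosine similarity in the latent space serves as the clone score. The whitespace tokenization and the rank `k` are simplifying assumptions; the paper's actual pipeline over C, C++, and Java sources is more elaborate.

```python
import numpy as np
from collections import Counter

def lsa_similarity(docs, k=2):
    """Build a term-document matrix from whitespace-tokenized code
    snippets, reduce it with truncated SVD, and return the matrix of
    pairwise cosine similarities in the k-dimensional latent space."""
    vocab = sorted({tok for d in docs for tok in d.split()})
    index = {t: i for i, t in enumerate(vocab)}
    A = np.zeros((len(vocab), len(docs)))
    for j, d in enumerate(docs):
        for tok, n in Counter(d.split()).items():
            A[index[tok], j] = n
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    k = min(k, len(s))
    D = (np.diag(s[:k]) @ Vt[:k]).T        # document vectors in latent space
    D = D / np.linalg.norm(D, axis=1, keepdims=True)
    return D @ D.T                         # cosine similarity matrix
```

Two loops that differ only in variable names score as near-duplicates, while unrelated code scores near zero, which is the behavior a clone detector needs across syntactically similar languages.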


2010 ◽  
Vol 11 (2) ◽  
pp. 444-449
Author(s):  
Hamid Darvish

The United Nations published the Universal Declaration of Human Rights in 1948. The most important part of the Declaration, for present purposes, is that everyone has the right to seek and receive information at any time. In this respect, libraries play a significant role in disseminating information (knowledge) to each individual. An exploratory approach is applied to selected discourses from organizations such as IFLA (International Federation of Library Associations and Institutions), ALA (American Library Association), and TLA (Turkish Librarians' Association) to find out whether there is a coherent relation among the texts, by applying the Latent Semantic Analysis (LSA) technique. The results show that a positive relation exists among the discourses.


Author(s):  
Annie Zaenen

Hearers and readers make inferences on the basis of what they hear or read. These inferences are partly determined by the linguistic form that the writer or speaker chooses to give to her utterance. The inferences can be about the state of the world that the speaker or writer wants the hearer or reader to conclude is pertinent, or they can be about the attitude of the speaker or writer vis-à-vis this state of affairs. The focus here is on inferences of the first type. Research in semantics and pragmatics has isolated a number of linguistic phenomena that make specific contributions to the process of inference. Broadly, entailments of asserted material, presuppositions (e.g., factive constructions), and invited inferences (especially scalar implicatures) can be distinguished. While we make these inferences all the time, they have been studied only piecemeal in theoretical linguistics. When attempts are made to build natural language understanding systems, the need for a more systematic and wholesale approach to the problem becomes apparent. Some of the approaches developed in Natural Language Processing are based on linguistic insights, whereas others use methods that do not require (full) semantic analysis. In this article, I give an overview of the main linguistic issues and of a variety of computational approaches, especially those stimulated by the RTE challenges first proposed in 2004.


Jezikoslovlje ◽  
2021 ◽  
Vol 22 (1) ◽  
pp. 77-98
Author(s):  
Joško Žanić

In this paper Gärdenfors's geometric approach to meaning in natural language is compared to Jackendoff's algebraic one, against the backdrop of formal semantics. Ultimately, the paper tries to show that Jackendoff's framework is to be preferred to all others. The paper proceeds as follows. In Section 2, the common theoretical commitments of Gärdenfors and Jackendoff are outlined, and it is briefly argued that they are on the right track. In Section 3, the basics of the two frameworks to be compared are laid out, and it is assessed how they deal with some central issues in semantic theory, namely reference and truth, lexical decomposition, and compositionality. In Section 4, we get into the nitty-gritty of how Gärdenfors and Jackendoff actually proceed in semantic analysis, using the example of a noun and a verb (embedded in a sentence). In Section 5, the merits of Gärdenfors's empiricism with respect to word learning and concept acquisition are assessed and compared to the moderate nativism of Jackendoff, and it is argued that Jackendoff's nativism is to be preferred. In Section 6, the semantic internalism common to both frameworks is commented on.

