Supplemental materials for preprint: Agent Behavior Control Using Semantics-Based Fuzzy Logic (Kontrol Perilaku Agen Menggunakan Fuzzy Logic Berbasis Semantik)

2021 ◽  
Author(s):  
Akuwan Saleh

Natural language processing is the construction of a computational model of language, so that humans and computers can interact through natural language. Such a computational model can serve scientific purposes, such as studying the properties of a particular natural language, as well as everyday applications. The fields of knowledge related to natural language processing include phonetics and phonology, morphology, syntax, semantics, pragmatics, discourse knowledge, and world knowledge. Semantics is defined as the mapping of syntactic structures, word by word, into a more basic form that does not depend on sentence structure. Semantics studies the meaning of words and how those word meanings combine to form the meaning of a complete sentence. Semantic analysis is used to recognize words that precede and relate to the words in the domain. This is done by connecting syntactic structures at the level of words, phrases, sentences, and paragraphs. In previous research on semantic mapping, the mapping was based on a character's physical appearance and then on the role of that model/character in a story. The essence of a game object, however, need not emerge from a character's physical appearance alone; it can also be linked to other important parameters, such as the clothing, tools, objects, and weapons carried by each character.
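To make the idea concrete, here is a minimal Python sketch, not the paper's implementation, of a semantic mapping in which a character's role is inferred from the items it carries rather than from appearance alone; the ontology, attribute names, and threat values are invented for illustration.

```python
# A minimal sketch (not the paper's implementation) of semantic mapping:
# a character's role is inferred from the items it carries, not from its
# physical appearance alone. Ontology and values are illustrative.

CHARACTER_ONTOLOGY = {
    "sword":  {"role": "warrior",  "threat": 0.8},
    "staff":  {"role": "mage",     "threat": 0.6},
    "basket": {"role": "villager", "threat": 0.1},
}

def semantic_map(appearance: str, items: list) -> dict:
    """Combine appearance with carried items to infer a semantic role."""
    candidates = [CHARACTER_ONTOLOGY[i] for i in items if i in CHARACTER_ONTOLOGY]
    if not candidates:
        return {"role": appearance, "threat": 0.0}
    # Take the strongest semantic cue among the carried items.
    return max(candidates, key=lambda c: c["threat"])

print(semantic_map("old man", ["staff"]))  # {'role': 'mage', 'threat': 0.6}
```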

Author(s):  
TIAN-SHUN YAO

With the word-based theory of natural language processing, a word-based Chinese language understanding system has been developed. In light of psychological language analysis and the features of the Chinese language, this theory of natural language processing is presented together with a description of the computer programs based on it. The heart of the system is the definition of a Total Information Dictionary and the World Knowledge Source used in the system. The purpose of this research is to develop a system which can understand not only individual Chinese sentences but also whole texts.
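As a rough illustration only, the following Python sketch shows what a single entry of such a Total Information Dictionary might look like; the field set and example values are assumptions, not the system's actual schema.

```python
# A hypothetical sketch of a "Total Information Dictionary" entry; the field
# set is an assumption for illustration, not the system's actual schema.
from dataclasses import dataclass, field

@dataclass
class WordEntry:
    word: str              # the Chinese word itself
    pos: str               # part of speech
    senses: list           # candidate meanings
    semantic_class: str    # category used during sense selection
    world_knowledge: dict = field(default_factory=dict)  # links into the knowledge source

dictionary = {
    "打": WordEntry("打", "verb", ["hit", "play", "make a call"], "action",
                    {"typical_objects": ["球", "电话"]}),
}
print(dictionary["打"].senses)
```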


Author(s):  
L.A. Zadeh

I feel honored by the dedication of the Special Issue of IJCCC to me. I should like to express my deep appreciation to the distinguished Co-Editors and my good friends, Professors Balas, Dzitac and Teodorescu, and to the distinguished contributors, for honoring me. The subjects addressed in the Special Issue are on the frontiers of fuzzy logic.

The Foreword gives me an opportunity to share with the readers of the Journal my recent thoughts regarding a subject which I have been pondering for many years - fuzzy logic and natural languages. The first step toward linking fuzzy logic and natural languages was my 1973 paper, "Outline of a New Approach to the Analysis of Complex Systems and Decision Processes." Two key concepts were introduced in that paper: first, the concept of a linguistic variable - a variable which takes words as values; and second, the concept of a fuzzy if-then rule - a rule in which the antecedent and consequent involve linguistic variables. Today, close to forty years later, these concepts are widely used in most applications of fuzzy logic.

The second step was my 1978 paper, "PRUF - a Meaning Representation Language for Natural Languages." This paper laid the foundation for a series of papers in the eighties in which a fairly complete theory of fuzzy-logic-based semantics of natural languages was developed. My theory did not attract many followers, either within the fuzzy logic community or within the linguistics and philosophy of languages communities. There is a reason. The fuzzy logic community is largely a community of engineers, computer scientists and mathematicians - a community which has always shied away from semantics of natural languages. Symmetrically, the linguistics and philosophy of languages communities have shied away from fuzzy logic.

In the early nineties, a thought that began to crystallize in my mind was that in most applications of fuzzy logic, linguistic concepts play an important, if not very visible, role. It is this thought that motivated the concept of Computing with Words (CW or CWW), introduced in my 1996 paper "Fuzzy Logic = Computing with Words." In essence, Computing with Words is a system of computation in which the objects of computation are words, phrases and propositions drawn from a natural language. The same can be said about Natural Language Processing (NLP). In fact, CW and NLP have little in common and have altogether different agendas.

In large measure, CW is concerned with the solution of computational problems which are stated in a natural language. A simple example. Given: Probably John is tall. What is the probability that John is short? What is the probability that John is very short? What is the probability that John is not very tall? A less simple example. Given: Usually Robert leaves the office at about 5 pm. Typically it takes Robert about an hour to get home from work. What is the probability that Robert is home at 6:15 pm? What should be noted is that CW is the only system of computation which has the capability to deal with problems of this kind. The problem-solving capability of CW rests on two key ideas: first, the employment of so-called restriction-based semantics (RS) for translation of a natural language into a mathematical language in which the concept of a restriction plays a pivotal role; and second, the employment of a calculus of restrictions - a calculus which is centered on the Extension Principle of fuzzy logic.

What is thought-provoking is that neither traditional mathematics nor standard probability theory has the capability to deal with computational problems which are stated in a natural language. Not having this capability, it is traditional to dismiss such problems as ill-posed. In this perspective, perhaps the most remarkable contribution of CW is that it opens the door to empowering mathematics with a fascinating capability - the capability to construct mathematical solutions of computational problems which are stated in a natural language. The basic importance of this capability derives from the fact that much of human knowledge, and especially world knowledge, is described in natural language.

In conclusion, only recently did I begin to realize that the formalism of CW suggests a new and challenging direction in mathematics - mathematical solution of computational problems which are stated in a natural language. For mathematics, this is an unexplored territory.
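One ingredient of such a computation can be made concrete. The following Python sketch evaluates Zadeh's probability of a fuzzy event, P(A) = ∫ μ_A(h) p(h) dh, for the "John is short" query; the membership functions and the height density are assumptions made for illustration, since a full CW solution would first derive the density from the fuzzy restriction "Probably John is tall".

```python
# Numerical sketch of Zadeh's probability of a fuzzy event:
#   P(A) = integral of mu_A(h) * p(h) dh over heights h.
# Membership functions and the height density are illustrative assumptions.
import numpy as np

h = np.linspace(140, 210, 1401)        # height grid in cm
dh = h[1] - h[0]

def mu_short(x):                       # illustrative fuzzy set "short"
    return np.clip((165 - x) / 15.0, 0.0, 1.0)

# Assumed density roughly consistent with "Probably John is tall".
p = np.exp(-0.5 * ((h - 180.0) / 6.0) ** 2)
p /= p.sum() * dh                      # normalize to a probability density

prob_short = float((mu_short(h) * p).sum() * dh)            # P(John is short)
prob_very_short = float((mu_short(h) ** 2 * p).sum() * dh)  # "very" as squaring
print(f"P(short) = {prob_short:.4f}, P(very short) = {prob_very_short:.4f}")
```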


Author(s):  
PASCUAL JULIÁN-IRANZO ◽  
FERNANDO SÁENZ-PÉREZ

Abstract This paper introduces techniques to integrate WordNet into a Fuzzy Logic Programming system. Since WordNet relates words but does not give graded information on the relation between them, we have implemented standard similarity measures and new directives allowing the proximity equations linking two words to be generated with an approximation degree. Proximity equations are the key syntactic structures which, in addition to a weak unification algorithm, make a flexible query-answering process possible in this kind of programming language. This addition widens the scope of Fuzzy Logic Programming, allowing certain forms of lexical reasoning, and reinforcing Natural Language Processing (NLP) applications.
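As a hedged sketch of the directive described above, the following Python fragment uses NLTK's WordNet interface and the standard Wu-Palmer measure to emit a proximity equation with an approximation degree; the "a ~ b = degree" output syntax is illustrative, not the system's actual notation.

```python
# Sketch: generate a graded proximity equation from WordNet with NLTK's
# Wu-Palmer similarity (requires: nltk.download('wordnet')). The output
# syntax "a ~ b = degree" is illustrative, not the system's notation.
from nltk.corpus import wordnet as wn

def proximity_equation(word1: str, word2: str) -> str:
    """Emit a proximity equation with an approximation degree in [0, 1]."""
    best = max((s1.wup_similarity(s2) or 0.0
                for s1 in wn.synsets(word1)
                for s2 in wn.synsets(word2)), default=0.0)
    return f"{word1} ~ {word2} = {best:.2f}"

print(proximity_equation("dog", "wolf"))  # e.g. "dog ~ wolf = 0.96"
```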


Triangle ◽  
2020 ◽  
pp. 73
Author(s):  
Adrià Torrens Urrutia

Defining natural language and its gradient phenomena forces us to look for formal tools that can represent the foundations of a grammar with degrees of grammaticality. Mathematical and formal models are often used in linguistics, and yet fuzzy logic has not received the attention it deserves as a tool for explaining natural language processing. Here, we present the theoretical bases that have led us to treat natural language (NL) inputs as gradient. The basics of fuzzy logic for NL are explained as a tool capable of defining non-discrete, and therefore gradual or fuzzy, values. A Property Grammar provides the rules of the fuzzy grammar.
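A minimal sketch of how such a fuzzy grammar might assign a degree of grammaticality: each property of the Property Grammar (linearity, requirement, exclusion, ...) is checked against the input, and the degree is the weighted proportion of satisfied properties. The property names and weights below are invented for illustration.

```python
# Sketch: a degree of grammaticality as the weighted proportion of satisfied
# Property Grammar properties. Property names and weights are invented.
def grammaticality(satisfied: list, violated: list, weights: dict) -> float:
    pos = sum(weights.get(p, 1.0) for p in satisfied)
    neg = sum(weights.get(p, 1.0) for p in violated)
    return pos / (pos + neg) if (pos + neg) else 1.0

weights = {"linearity": 2.0, "requirement": 2.0, "exclusion": 1.0}
# An input satisfying linearity and requirement but violating one exclusion:
print(grammaticality(["linearity", "requirement"], ["exclusion"], weights))  # 0.8
```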


2021 ◽  
Vol 23 (10) ◽  
pp. 81-92
Author(s):  
Dr. ASHISH KUMAR TAMRAKAR

Natural Language Processing (NLP) is the computational approach to analyzing text that depends on both a set of theories and a set of technologies. NLP is a subfield of artificial intelligence and linguistics that studies the problems of automated generation and comprehension of natural human languages. Natural language generation systems convert information from computer databases into normal-sounding human language, while natural language understanding systems convert samples of human language into more formal representations that are easier for computer programs to manipulate. The fuzzy-logic-based approach provides another alternative for effective natural language analysis. It is commonly recognized that many phenomena in natural language lend themselves to description by fuzzy mathematics, including fuzzy sets, fuzzy relations, and fuzzy logic. By defining a fuzzy logic system and acquiring proper rules, we hope that difficulties in the analysis of speech can be alleviated. The goal of NLP is to enable communication between people and computers without resorting to the memorization of complex commands and procedures.
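As a toy illustration of the kind of fuzzy description the abstract alludes to, the sketch below defines a triangular membership function for a linguistic term and evaluates one fuzzy rule; the terms, ranges, and the rule itself are assumptions.

```python
# Toy sketch: a triangular membership function for a linguistic term and one
# fuzzy rule evaluated with max (OR). Terms, ranges, and the rule are assumed.
def triangle(x, a, b, c):
    """Triangular membership function rising from a, peaking at b, falling to c."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

# Linguistic terms over speech rate (words per minute).
slow = lambda r: triangle(r, 0, 80, 140)
fast = lambda r: triangle(r, 120, 200, 280)

rate = 150
# Rule: IF rate is slow OR rate is fast THEN intelligibility is reduced.
firing = max(slow(rate), fast(rate))
print(f"degree of 'reduced intelligibility': {firing:.2f}")
```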


2020 ◽  
Vol 2020 ◽  
pp. 1-14
Author(s):  
Nicholas A. I. Omoregbe ◽  
Israel O. Ndaman ◽  
Sanjay Misra ◽  
Olusola O. Abayomi-Alli ◽  
Robertas Damaševičius

The use of natural language processing (NLP) methods and their application to developing conversational systems for health diagnosis increases patients’ access to medical knowledge. In this study, a chatbot service was developed for the Covenant University Doctor (CUDoctor) telehealth system based on fuzzy logic rules and fuzzy inference. The service focuses on assessing the symptoms of tropical diseases in Nigeria. The Telegram Bot Application Programming Interface (API) was used to create the interconnection between the chatbot and the system, while the Twilio API was used for interconnectivity between the system and a short messaging service (SMS) subscriber. The service uses a knowledge base consisting of known facts on diseases and symptoms acquired from medical ontologies. A fuzzy support vector machine (SVM) is used to effectively predict a disease based on the symptoms entered. The users’ inputs are recognized by NLP and forwarded to CUDoctor for decision support. Finally, a notification message indicating the end of the diagnosis process is sent to the user. The result is a medical diagnosis system which provides a personalized diagnosis utilizing self-reported input from users to effectively diagnose diseases. The usability of the developed system was evaluated using the System Usability Scale (SUS), yielding a mean SUS score of 80.4, which indicates an overall positive evaluation.
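A hedged sketch of the prediction step described above: self-reported symptom severities are fuzzified into [0, 1] membership degrees and fed to an SVM classifier (here a plain scikit-learn SVC standing in for the paper's fuzzy SVM). Disease labels, the symptom list, and the training data are placeholders, not the CUDoctor knowledge base.

```python
# Sketch: fuzzified symptom severities fed to an SVM classifier. A plain
# scikit-learn SVC stands in for the paper's fuzzy SVM; diseases, symptoms,
# and training data are placeholders, not the CUDoctor knowledge base.
import numpy as np
from sklearn.svm import SVC

SYMPTOMS = ["fever", "headache", "joint_pain", "rash"]

def fuzzify(severity_0_to_10: float) -> float:
    """Map a self-reported 0-10 severity onto a membership degree in [0, 1]."""
    return min(max(severity_0_to_10 / 10.0, 0.0), 1.0)

# Toy training set: fuzzified symptom vectors -> disease labels.
X = np.array([[0.9, 0.6, 0.8, 0.1],   # malaria-like profile
              [0.8, 0.4, 0.9, 0.7],   # dengue-like profile
              [0.2, 0.1, 0.1, 0.0]])  # healthy profile
y = ["malaria", "dengue", "healthy"]

model = SVC(kernel="rbf").fit(X, y)
user = [fuzzify(s) for s in (9, 5, 7, 1)]  # user-reported severities
print(model.predict([user])[0])
```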


2018 ◽  
Vol 132 ◽  
pp. 1375-1384 ◽  
Author(s):  
Charu Gupta ◽  
Amita Jain ◽  
Nisheeth Joshi

2015 ◽  
Vol 8 (2) ◽  
Author(s):  
Nisa Kurniasih Wangsanegara ◽  
Beki Subaeki

Ejaan yang Disempurnakan (EYD, the Indonesian "Perfected Spelling" standard) is an important aspect of writing a document. Spelling must follow the official rules issued by the Ministry of National Education. The most common errors occur in word spelling, punctuation, and capitalization. This application identifies and counts the number of capitalization/word and punctuation errors. The measure of correct EYD usage is built using the Tsukamoto fuzzy logic method, which proceeds in four steps: fuzzification, rule formation, inference engine, and defuzzification. The application currently contains 31,759 words, largely taken from the Kamus Besar Bahasa Indonesia (Great Dictionary of the Indonesian Language). It is built as a website using PHP with a MySQL database. In tests on 20 thesis abstracts, the identification results agreed with manual identification in 70% of cases. Keywords: EYD, writing, Fuzzy Logic Tsukamoto, PHP.
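The four Tsukamoto steps named in the abstract can be sketched compactly. The example below scores a text from its counts of capitalization and punctuation errors; the membership shapes, the two rules, and the consequent functions are illustrative, not the application's actual rule base.

```python
# A compact sketch of the Tsukamoto steps (fuzzification, rules, inference,
# defuzzification) on an invented example: scoring a text from its counts of
# capitalization and punctuation errors. Shapes and rules are illustrative.
def mu_many(errors, lo=0, hi=20):
    """Fuzzification: degree to which the error count is 'many'."""
    return min(max((errors - lo) / (hi - lo), 0.0), 1.0)

def tsukamoto(cap_errors: int, punct_errors: int) -> float:
    many_cap, many_punct = mu_many(cap_errors), mu_many(punct_errors)
    # Rules with monotonic consequents, each solved for a crisp output z:
    # R1: IF cap errors many AND punct errors many THEN score is low.
    a1 = min(many_cap, many_punct);          z1 = 100 * (1 - a1)  # low-score consequent
    # R2: IF cap errors few AND punct errors few THEN score is high.
    a2 = min(1 - many_cap, 1 - many_punct);  z2 = 100 * a2        # high-score consequent
    # Defuzzification: weighted average of the rule outputs.
    total = a1 + a2
    return (a1 * z1 + a2 * z2) / total if total else 50.0  # neutral fallback

print(f"EYD score: {tsukamoto(cap_errors=4, punct_errors=12):.1f}")
```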


2018 ◽  
Vol 19 (1) ◽  
pp. 61-79
Author(s):  
Yu-Yun Chang ◽  
Shu-Kai Hsieh

Abstract In Generative Lexicon Theory (glt) (Pustejovsky 1995), co-composition is one of the generative devices proposed to explain cases of verbal polysemous behavior where more than one function application is allowed. The English baking verbs were used as examples to illustrate how their arguments co-specify the verb through qualia unification. Several studies (Blutner 2002; Carston 2002; Falkum 2007) have argued that pragmatic information and world knowledge need to be considered as well. This study therefore examines whether glt can be put into practice in a real-world Natural Language Processing (nlp) application using collocations. We conducted a fine-grained logical polysemy disambiguation task, using the open-source Leiden Weibo Corpus as our resource and a Support Vector Machine (svm) classifier. Within the classifier, we take collocated verbs under glt as the main features; in addition, measure words and syntactic patterns are extracted as additional features for comparison. Our study investigates the logical polysemy of the Chinese verb kao ‘bake’. We find that glt helps in identifying logically polysemous cases, and that the additional features help the classifier achieve a higher performance.
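A hedged sketch of the classification setup the study describes: collocated verbs, with measure words as optional additional features, become one-hot features for an SVM. The feature values, sense labels, and training examples below are fabricated for illustration.

```python
# Sketch: collocation features for logical-polysemy disambiguation with an
# SVM. Feature values, sense labels, and examples are fabricated; they stand
# in for collocations of kao 'bake' drawn from the Leiden Weibo Corpus.
from sklearn.feature_extraction import DictVectorizer
from sklearn.svm import LinearSVC

train = [
    ({"colloc_verb": "吃", "measure": "个"}, "result"),   # eating the baked food
    ({"colloc_verb": "开始"}, "process"),                 # starting the baking event
    ({"colloc_verb": "买", "measure": "块"}, "result"),   # buying the baked artifact
]

vec = DictVectorizer()
X = vec.fit_transform([feats for feats, _ in train])
y = [label for _, label in train]

clf = LinearSVC().fit(X, y)
test = vec.transform([{"colloc_verb": "吃", "measure": "个"}])
print(clf.predict(test)[0])  # expected: "result"
```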

