A Study on the Pivoted Inverse Document Frequency Weighting Method

2003 ◽  
Vol 20 (4) ◽  
pp. 233-248 ◽  
Author(s):  
Yuchi Kanzawa ◽  

In this study, a maximizing model of Bezdek-like spherical fuzzy c-means clustering is proposed, based on regularizing the maximizing model of spherical hard c-means. Such a maximizing model had remained unclear for the Bezdek-like method, whereas other types of methods have been investigated well in both their minimizing and maximizing forms. The classification rule of the proposed method is derived through theoretical analysis and numerical experiments. Numerical examples show that the proposed method is valid for document clustering, because documents are represented as spherical data via term frequency-inverse document frequency weighting and normalization.
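To make the representation concrete, here is a minimal sketch (not the paper's implementation) of how documents become spherical data: TF-IDF weighting followed by L2 normalization places each document vector on the unit hypersphere, where cosine similarity reduces to a dot product. The toy corpus is invented.

```python
# TF-IDF weighting plus L2 normalization: every document vector ends up on
# the unit hypersphere, so the dot product below is the cosine similarity.
import math
from collections import Counter

docs = [["fuzzy", "clustering", "spherical"],
        ["document", "clustering", "tfidf"],
        ["spherical", "tfidf", "document"]]

n = len(docs)
df = Counter(t for d in docs for t in set(d))          # document frequency
vocab = sorted(df)

def tfidf_unit_vector(doc):
    tf = Counter(doc)
    v = [tf[t] * math.log(n / df[t]) for t in vocab]   # raw TF-IDF weights
    norm = math.sqrt(sum(w * w for w in v)) or 1.0
    return [w / norm for w in v]                       # project onto unit sphere

u, w = tfidf_unit_vector(docs[0]), tfidf_unit_vector(docs[1])
print(sum(a * b for a, b in zip(u, w)))                # cosine similarity
```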


2018 ◽  
Vol 4 (2) ◽  
pp. 56
Author(s):  
Moch. Zawaruddin Abdullah ◽  
Chastine Fatichah

News Feature Scoring (NeFS) is a sentence weighting method that is often used to weight sentences in document summarization based on news features. These features include word frequency, sentence position, Term Frequency-Inverse Document Frequency (TF-IDF), and the resemblance of a sentence to the title. The NeFS method selects important sentences by calculating word frequencies and measuring the word similarity between sentences and the title. However, NeFS weighting alone is not enough, because the method ignores the informative words in a sentence; the informative words a sentence contains can indicate that the sentence is important. This study aims to weight sentences in news multi-document summarization with a news feature and grammatical information approach (NeFGIS). Grammatical information carried by part-of-speech (POS) tagging can indicate the presence of informative content. Sentence weighting with the news feature and grammatical information approach is expected to select representative sentences better and improve the quality of the summary results. This study comprises four stages: news selection, text preprocessing, sentence scoring, and summary compilation. Recall-Oriented Understudy for Gisting Evaluation (ROUGE) is used to measure the summary results, with four variants: ROUGE-1, ROUGE-2, ROUGE-L, and ROUGE-SU4. Summaries produced by the proposed method (NeFGIS) are compared with summaries produced by a sentence weighting method based on news features and trending issues (NeFTIS). The NeFGIS method provides better results, with recall improvements on ROUGE-1, ROUGE-2, ROUGE-L, and ROUGE-SU4 of 20.37%, 33.33%, 1.85%, and 23.14%, respectively.
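For readers unfamiliar with the evaluation metric, the following sketch computes ROUGE-1 recall, the simplest of the four variants used above. Real evaluations use the full ROUGE toolkit; the sentences here are made up.

```python
# ROUGE-1 recall: fraction of reference unigrams (with multiplicity) that
# also appear in the candidate summary.
from collections import Counter

def rouge1_recall(candidate: str, reference: str) -> float:
    cand, ref = Counter(candidate.split()), Counter(reference.split())
    overlap = sum(min(c, ref[w]) for w, c in cand.items())
    return overlap / max(sum(ref.values()), 1)

print(rouge1_recall("the cat sat on the mat", "the cat lay on the mat"))  # ~0.83
```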


2020 ◽  
Vol 4 (5) ◽  
pp. 775-781
Author(s):  
Yoan Maria Vianny ◽  
Erwin Budi Setiawan

The existence of rumors on Twitter has caused a great deal of unrest among Indonesians, and information whose validity cannot be recognized confuses users. In this study, an Indonesian rumor detection system is built using the J48 algorithm combined with the Term Frequency-Inverse Document Frequency (TF-IDF) weighting method. The dataset contains 47,449 tweets that have been manually labeled. This study offers new features, namely the number of emoticons in the display name, the number of digits in the display name, and the number of digits in the username; these three features are used to maximize information about the information source. The highest accuracy obtained is 75.76%, using 90% training data and 1,000 TF-IDF features over 1-gram to 3-gram combinations.
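A minimal sketch of this kind of pipeline, under stated assumptions: TF-IDF over 1- to 3-gram combinations capped at 1,000 features, feeding a decision tree. J48 is Weka's C4.5 implementation; scikit-learn's CART-based DecisionTreeClassifier stands in for it here, and the tiny labeled tweets are invented.

```python
# TF-IDF n-gram features (1- to 3-grams, max 1,000) plus a decision tree,
# approximating the abstract's J48 + TF-IDF rumor detector.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.tree import DecisionTreeClassifier

tweets = ["breaking: dam burst floods city", "official denies dam rumor",
          "free money click this link now", "weather service issues warning"]
labels = [1, 0, 1, 0]  # 1 = rumor, 0 = not rumor (illustrative)

vec = TfidfVectorizer(ngram_range=(1, 3), max_features=1000)
X = vec.fit_transform(tweets)

clf = DecisionTreeClassifier().fit(X, labels)
print(clf.predict(vec.transform(["dam burst rumor spreads"])))
```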


Rekayasa ◽  
2020 ◽  
Vol 13 (2) ◽  
pp. 172-180
Author(s):  
Ana Tsalitsatun Ni'mah ◽  
Agus Zainal Arifin

Comparison of term weighting methods for text classification of Indonesian hadith
Hadith is the second source of reference for Islam after the Qur'an. Hadith texts are currently studied in the field of technology so that the values they contain can be captured computationally. In research on the Books of Hadith, retrieving information from the hadith requires representing the text as vectors to optimize automatic classification. Classification is needed to group the contents of the hadith into several categories. Some categories in a given Book of Hadith also appear in other Books, which shows that certain documents share topics across Books. Therefore, a term weighting method is needed that can decide which words should receive high or low weights in the Hadith Book space to optimize the classification results. This study compares several term weighting methods, namely Term Frequency Inverse Document Frequency (TF-IDF), Term Frequency Inverse Document Frequency Inverse Class Frequency (TF-IDF-ICF), Term Frequency Inverse Document Frequency Inverse Class Space Density Frequency (TF-IDF-ICSδF), and Term Frequency Inverse Document Frequency Inverse Class Space Density Frequency Inverse Hadith Space Density Frequency (TF-IDF-ICSδF-IHSδF). The term weighting results are compared on the Translation of the 9 Books of Hadith dataset using Naive Bayes and SVM classifiers. The 9 Books of Hadith used are Sahih Bukhari, Sahih Muslim, Abu Dawud, at-Turmudzi, an-Nasa'i, Ibnu Majah, Ahmad, Malik, and Darimi. The experimental results show that classification using the TF-IDF-ICSδF-IHSδF term weighting method outperforms the other term weightings, obtaining a Precision of 90%, Recall of 93%, F1-Score of 92%, and Accuracy of 83%.
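To illustrate the family of weights being compared, the sketch below computes a TF-IDF-ICF score: the usual TF-IDF weight multiplied by an inverse class frequency factor, so terms confined to few categories gain weight. The exact formulation in the paper may differ (log(1 + C/cf) is used here as one common variant), and the data are illustrative.

```python
# TF-IDF-ICF: tf * idf * icf, where icf rewards terms that occur in few
# classes. Documents and categories are toy stand-ins for hadith data.
import math
from collections import Counter

docs = [(["prayer", "fasting"], "worship"),
        (["trade", "contract"], "transactions"),
        (["prayer", "charity"], "worship")]

N = len(docs)                                   # number of documents
C = len({c for _, c in docs})                   # number of classes
df = Counter(t for toks, _ in docs for t in set(toks))
cf = {t: len({c for toks, c in docs if t in toks}) for t in df}

def tf_idf_icf(term, tokens):
    tf = tokens.count(term)
    return tf * math.log(N / df[term]) * math.log(1 + C / cf[term])

print(tf_idf_icf("prayer", docs[0][0]), tf_idf_icf("trade", docs[1][0]))
```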


Database ◽  
2019 ◽  
Vol 2019 ◽  
Author(s):  
Peter Brown ◽  
Aik-Choon Tan ◽  
Mohamed A El-Esawi ◽  
Thomas Liehr ◽  
Oliver Blanck ◽  
...  

Abstract Document recommendation systems for locating relevant literature have mostly relied on methods developed a decade ago. This is largely due to the lack of a large offline gold-standard benchmark of relevant documents that cover a variety of research fields such that newly developed literature search techniques can be compared, improved and translated into practice. To overcome this bottleneck, we have established the RElevant LIterature SearcH consortium consisting of more than 1500 scientists from 84 countries, who have collectively annotated the relevance of over 180 000 PubMed-listed articles with regard to their respective seed (input) article/s. The majority of annotations were contributed by highly experienced, original authors of the seed articles. The collected data cover 76% of all unique PubMed Medical Subject Headings descriptors. No systematic biases were observed across different experience levels, research fields or time spent on annotations. More importantly, annotations of the same document pairs contributed by different scientists were highly concordant. We further show that the three representative baseline methods used to generate recommended articles for evaluation (Okapi Best Matching 25, Term Frequency–Inverse Document Frequency and PubMed Related Articles) had similar overall performances. Additionally, we found that these methods each tend to produce distinct collections of recommended articles, suggesting that a hybrid method may be required to completely capture all relevant articles. The established database server located at https://relishdb.ict.griffith.edu.au is freely available for the downloading of annotation data and the blind testing of new methods. We expect that this benchmark will be useful for stimulating the development of new powerful techniques for title and title/abstract-based search engines for relevant articles in biomedical research.
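As a reference point for one of the three baselines, here is a minimal sketch of Okapi BM25 scoring a query against a toy corpus; k1 and b are the usual free parameters, and the documents are illustrative.

```python
# Okapi BM25: IDF-weighted term frequency with saturation (k1) and
# document-length normalization (b).
import math
from collections import Counter

corpus = [d.split() for d in ("literature search benchmark",
                              "document recommendation for biomedical literature",
                              "gene expression analysis")]
N = len(corpus)
avgdl = sum(len(d) for d in corpus) / N
df = Counter(t for d in corpus for t in set(d))

def bm25(query, doc, k1=1.5, b=0.75):
    tf = Counter(doc)
    score = 0.0
    for t in query.split():
        if t not in tf:
            continue
        idf = math.log(1 + (N - df[t] + 0.5) / (df[t] + 0.5))
        score += idf * tf[t] * (k1 + 1) / (tf[t] + k1 * (1 - b + b * len(doc) / avgdl))
    return score

print([bm25("biomedical literature", d) for d in corpus])
```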


1995 ◽  
Vol 1 (2) ◽  
pp. 163-190 ◽  
Author(s):  
Kenneth W. Church ◽  
William A. Gale

Abstract Shannon (1948) showed that a wide range of practical problems can be reduced to the problem of estimating probability distributions of words and ngrams in text. It has become standard practice in text compression, speech recognition, information retrieval and many other applications of Shannon's theory to introduce a "bag-of-words" assumption. But obviously, word rates vary from genre to genre, author to author, topic to topic, document to document, section to section, and paragraph to paragraph. The proposed Poisson mixture captures much of this heterogeneous structure by allowing the Poisson parameter θ to vary over documents subject to a density function φ. φ is intended to capture dependencies on hidden variables such as genre, author, topic, etc. (The Negative Binomial is a well-known special case where φ is a Γ distribution.) Poisson mixtures fit the data better than standard Poissons, producing more accurate estimates of the variance over documents (σ²), entropy (H), inverse document frequency (IDF), and adaptation (Pr(x ≥ 2 | x ≥ 1)).
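The practical difference shows up in the adaptation statistic. The sketch below compares Pr(x ≥ 2 | x ≥ 1) under a plain Poisson and under a negative binomial (a Γ-mixed Poisson) matched to the same mean, with illustrative parameters; the burstier mixture yields the higher adaptation.

```python
# Adaptation Pr(X >= 2 | X >= 1) for a plain Poisson vs. a Gamma-mixed
# Poisson (negative binomial) with the same mean word rate.
from scipy.stats import poisson, nbinom

mean = 0.5
r = 0.2                      # NB shape parameter; smaller r = burstier
p = r / (r + mean)           # scipy parameterization: mean = r * (1 - p) / p

def adaptation(dist):
    p0, p1 = dist.pmf(0), dist.pmf(1)
    return (1 - p0 - p1) / (1 - p0)

print(adaptation(poisson(mean)))   # plain Poisson: ~0.23
print(adaptation(nbinom(r, p)))    # Gamma-mixed Poisson: ~0.50
```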


Author(s):  
Saud Altaf ◽  
Sofia Iqbal ◽  
Muhammad Waseem Soomro

This paper focuses on capturing the meaning of Natural Language Understanding (NLU) text features to detect duplicate features without supervision. The NLU features are compared with lexical approaches to determine the most suitable classification technique. A transfer-learning approach is utilized to train feature extraction on the Semantic Textual Similarity (STS) task. All features are evaluated on two datasets, drawn from Bosch bug reports and Wikipedia articles. This study aims to structure recent research efforts by comparing NLU concepts for representing the semantics of text and applying them to IR. The main contribution of this paper is a comparative study of semantic similarity measurements. The experimental results demonstrate the Term Frequency-Inverse Document Frequency (TF-IDF) feature results on both datasets with a reasonable vocabulary size, and indicate that a Bidirectional Long Short-Term Memory (BiLSTM) network can learn the structure of a sentence to improve the classification.
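As an illustration of the model credited with learning sentence structure, here is a minimal BiLSTM classifier sketch in PyTorch; the vocabulary size, dimensions, and two-class head are assumptions, since the paper's actual architecture and training setup are not specified here.

```python
# A bidirectional LSTM reads each sentence in both directions; the two
# directions' hidden states are concatenated, mean-pooled, and classified.
import torch
import torch.nn as nn

class BiLSTMClassifier(nn.Module):
    def __init__(self, vocab=1000, emb=64, hidden=32, classes=2):
        super().__init__()
        self.embed = nn.Embedding(vocab, emb)
        self.lstm = nn.LSTM(emb, hidden, batch_first=True, bidirectional=True)
        self.head = nn.Linear(2 * hidden, classes)  # concat of both directions

    def forward(self, token_ids):
        out, _ = self.lstm(self.embed(token_ids))
        return self.head(out.mean(dim=1))           # mean-pool over time steps

model = BiLSTMClassifier()
logits = model(torch.randint(0, 1000, (4, 12)))     # batch of 4 sentences
print(logits.shape)                                 # torch.Size([4, 2])
```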


Author(s):  
Mariani Widia Putri ◽  
Achmad Muchayan ◽  
Made Kamisutara

Recommender systems are currently a trend, as people now rely more on online transactions for various personal reasons. A recommender system offers an easier and faster way for users to find the items they want without spending too much time. Competition among businesses has also changed, forcing them to adapt their approach to reach potential customers, so a system is needed to support this. In this study, the authors build a product recommendation system using Content-Based Filtering and Term Frequency Inverse Document Frequency (TF-IDF) from the Information Retrieval (IR) model, to obtain efficient results that meet the need to improve Customer Relationship Management (CRM). The recommender system is built and deployed as a solution to raise customers' brand awareness and minimize failed transactions caused by a lack of information that can be conveyed directly or offline. The data consist of 258 product codes, each with eight categories and 33 descriptive keywords that follow the company's product knowledge. The TF-IDF calculation yields a weight of 13.854 when displaying the first-best product recommendation, and achieves an accuracy of 96.5% in recommending pens.
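A minimal sketch of the content-based filtering idea, assuming products are described by keyword strings as in the company's product knowledge: recommendations are the items whose TF-IDF vectors are most cosine-similar to a query product. The product codes and descriptions are invented.

```python
# Content-based filtering: rank products by cosine similarity of their
# TF-IDF keyword vectors to a query product.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

products = {"P001": "gel pen black fine tip office",
            "P002": "gel pen blue fine tip office",
            "P003": "stapler metal heavy duty office"}
codes = list(products)

X = TfidfVectorizer().fit_transform(products.values())
sims = cosine_similarity(X[0], X)[0]          # similarity of P001 to all items

ranked = sorted(zip(codes, sims), key=lambda x: -x[1])[1:]  # skip the query itself
print(ranked)                                 # most similar products first
```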


Author(s):  
Kranti Vithal Ghag ◽  
Ketan Shah

Bag-of-words approaches are popularly used for sentiment analysis. They map the terms in reviews to term-document vectors and thus disrupt the syntactic structure of the sentences in the reviews; associations among terms and the semantic structure of sentences are not preserved either. This research work focuses on classifying sentiments while considering the syntactic and semantic structure of the sentences in a review. To improve accuracy, sentiment classifiers based on relative frequency, average frequency, and term frequency-inverse document frequency were proposed. Preprocessing techniques were extended to handle terms with apostrophes, and subjectivity extraction was performed at the phrase level to focus on opinionated content. Experiments were performed on the Pang & Lee, Kaggle, and UCI datasets; classifiers were also evaluated on the UCI Product and Restaurant datasets. Sentiment classification accuracy improved from 67.9% for a comparable term weighting technique, Delta TF-IDF, up to 77.2% for the proposed classifiers. The introduction of the proposed concept-based approach, subjectivity extraction, and the extended preprocessing techniques improved the accuracy to 93.9%.
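For context on the baseline, the sketch below computes Delta TF-IDF weights: a term's frequency in a review is scaled by the difference between its IDF in the negative and positive training corpora, so polarity-skewed terms get weights far from zero. The sign convention, +1 smoothing, and toy corpora are illustrative choices.

```python
# Delta TF-IDF: tf * (IDF in negative corpus - IDF in positive corpus).
# Terms leaning positive get positive weight, negative-leaning terms get
# negative weight, and polarity-neutral terms land near zero.
import math

pos = [["great", "acting", "plot"], ["great", "fun"]]
neg = [["boring", "plot"], ["boring", "dull", "acting"]]

def df(term, docs):
    return sum(term in d for d in docs) + 1   # +1 smoothing avoids log of zero

def delta_tfidf(term, review):
    tf = review.count(term)
    return tf * (math.log2((len(neg) + 1) / df(term, neg))
                 - math.log2((len(pos) + 1) / df(term, pos)))

review = ["great", "plot", "boring"]
print({t: round(delta_tfidf(t, review), 3) for t in set(review)})
```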

