exact match
Recently Published Documents

TOTAL DOCUMENTS: 123 (five years: 48)
H-INDEX: 15 (five years: 2)

2022, Vol 12 (1)
Author(s): Jaydeep Kumar Basak, Debarshi Basu, Vinay Malvimat, Himanshu Parihar, Gautam Sengupta

We advance two alternative proposals for the island contributions to the entanglement negativity of various pure and mixed state configurations in quantum field theories coupled to semiclassical gravity. The first construction involves the extremization of an algebraic sum of the generalized Rényi entropies of order half. The second proposal involves the extremization of the sum of the effective entanglement negativity of quantum matter fields and the backreacted area of a cosmic brane spanning the entanglement wedge cross section, which also extremizes the generalized Rényi reflected entropy of order half. These proposals are utilized to obtain the island contributions to the entanglement negativity of various pure and mixed state configurations involving bath systems coupled to extremal and non-extremal black holes in JT gravity, demonstrating an exact match with each other. Furthermore, the results from both proposals match precisely with the island contribution to half the Rényi reflected entropy of order half, providing a strong consistency check. We then allude to a possible doubly holographic picture of our island proposals and provide a derivation of the first proposal by determining the corresponding replica wormhole contributions.
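For orientation, the two proposals can be written schematically; the expressions below are an illustrative sketch for a bipartite configuration of adjacent subsystems A and B, and the precise coefficients and configurations follow the paper rather than these formulas.

\[
\mathcal{E}(A:B) \;\sim\; \frac{1}{2}\Big[ S^{(1/2)}_{\mathrm{gen}}(A) + S^{(1/2)}_{\mathrm{gen}}(B) - S^{(1/2)}_{\mathrm{gen}}(A \cup B) \Big],
\]
where each generalized Rényi entropy of order half is extremized over its own island, and
\[
\mathcal{E}(A:B) \;=\; \operatorname{ext}_{Q}\left[ \frac{\mathcal{A}^{(1/2)}(Q)}{4 G_N} + \mathcal{E}^{\mathrm{eff}}\big(A \cup \mathrm{Is}(A) : B \cup \mathrm{Is}(B)\big) \right],
\]
where \(\mathcal{A}^{(1/2)}(Q)\) is the backreacted area of the cosmic brane Q spanning the entanglement wedge cross section and \(\mathcal{E}^{\mathrm{eff}}\) is the effective matter entanglement negativity including the islands.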


2021, Vol 11 (24), pp. 12116
Author(s): Shanza Abbas, Muhammad Umair Khan, Scott Uk-Jin Lee, Asad Abbas

Natural language interfaces to databases (NLIDB) have been a research topic for over a decade. Significant data collections are available in the form of databases, and to utilize them for research purposes, a system that can translate a natural language query into a structured one can make a huge difference. Efforts toward such systems have long been made with pipelining methods, in which natural language processing techniques are integrated with data science methods. With significant advancements in machine learning and natural language processing, NLIDB with deep learning has emerged as a new research trend in this area. Deep learning has shown potential for rapid growth and improvement in text-to-SQL tasks. In deep learning NLIDB, closing the semantic gap when predicting users' intended columns has arisen as one of the critical and fundamental problems in this research field. Contributions toward this issue have consisted of preprocessed feature inputs and of encoding schema elements before they reach the targeted model, where they are more impactful. Notwithstanding the various significant works contributed towards this problem, it remains one of the critical issues in developing NLIDB. Working towards closing the semantic gap between user intention and predicted columns, we present an approach for deep learning text-to-SQL tasks that includes previous columns' occurrence scores as an additional input feature. Overall exact match accuracy can also be improved by emphasizing column prediction accuracy, on which it depends significantly. For this purpose, we extract query fragments from previous queries and obtain the columns' occurrence and co-occurrence scores. These scores are processed as input features for the encoder-decoder-based text-to-SQL model; they factor in the probability that columns and tables have already been used together in the query history. We experimented with our approach on the currently popular text-to-SQL dataset Spider, a complex dataset containing multiple databases that includes query-question pairs along with schema information. We compared our exact match accuracy with a base model using the same test and training data splits. Our approach outperformed the base model's accuracy, and accuracy was further boosted in experiments with the pretrained language model BERT.
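As a rough sketch of the extra feature, occurrence and co-occurrence scores can be tallied from the columns used in previous queries and normalized before being fed to the encoder alongside the other inputs. The function and toy schema below are illustrative assumptions, not the authors' implementation.

from collections import Counter
from itertools import combinations

def column_scores(query_history):
    """Occurrence and co-occurrence scores for columns seen in past queries.
    `query_history` is a list of lists; each inner list holds the
    table.column names used by one previous query."""
    occ = Counter()
    co_occ = Counter()
    for cols in query_history:
        occ.update(set(cols))
        for a, b in combinations(sorted(set(cols)), 2):
            co_occ[(a, b)] += 1
    total = sum(occ.values()) or 1
    # Normalize to [0, 1] so the scores can be concatenated with other
    # encoder input features.
    occ_score = {c: n / total for c, n in occ.items()}
    return occ_score, co_occ

# Example: three previous queries over a toy schema.
history = [
    ["singer.name", "singer.age"],
    ["singer.name", "concert.year"],
    ["concert.year", "concert.stadium_id"],
]
occ_score, co_occ = column_scores(history)
print(occ_score["singer.name"])                   # 0.333...
print(co_occ[("concert.year", "singer.name")])    # 1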


2021, Vol 13 (1), pp. 63
Author(s): Muhammad Saipul Rohman

Recruiting new employees is important for a company. By selecting the right candidates as employees, the company's operations can run smoothly so that it can compete with other companies. The Human Resources department is a vital unit because it carries out the recruitment of new employees. The risk of human error, subjective assessment, and having to examine incoming application files one by one, which takes considerable time, are the main problems in the selection of new employees at Verint System. The AHP method is a decision support system method for calculating the weight of each criterion, and the SAW method is a decision support system method for ranking each alternative based on every criterion. The Exact Match method is used to check whether words correspond: if a word is the same, it is correct (Exact Match); if not, it is Not Match. A flag marks the result, with flag 1 (true) for an Exact Match and flag 0 for Not Match. The methods used in this research are the AHP method and the optimization of the AHP method with SAW, and the accuracy percentage of these methods is calculated using exact match. Based on the test results, which have been processed and analyzed, the optimization of the AHP method with SAW achieves an accuracy of 90%, better than the 10% accuracy of the AHP method alone, which shows that optimizing the AHP method with SAW makes the system better than using the AHP method on its own.
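A minimal sketch of the SAW ranking step and the exact-match accuracy check is given below, assuming the criterion weights have already been obtained from AHP pairwise comparisons; the criteria, weights, and candidate scores are illustrative placeholders rather than values from the study.

def saw_rank(candidates, weights, benefit):
    """candidates: {name: [score per criterion]}; weights sum to 1;
    benefit[i] is True for benefit criteria, False for cost criteria."""
    cols = list(zip(*candidates.values()))
    ranked = {}
    for name, scores in candidates.items():
        norm = []
        for i, s in enumerate(scores):
            # Benefit criteria are normalized by the column maximum,
            # cost criteria by the column minimum over the score.
            norm.append(s / max(cols[i]) if benefit[i] else min(cols[i]) / s)
        ranked[name] = sum(w * v for w, v in zip(weights, norm))
    return sorted(ranked.items(), key=lambda kv: kv[1], reverse=True)

def exact_match_accuracy(system_decisions, expert_decisions):
    # Flag 1 when the system decision matches the expert decision, else 0.
    flags = [1 if s == e else 0 for s, e in zip(system_decisions, expert_decisions)]
    return 100.0 * sum(flags) / len(flags)

candidates = {"A": [80, 3, 90], "B": [70, 2, 95], "C": [85, 4, 70]}
weights = [0.5, 0.2, 0.3]       # e.g. obtained from AHP pairwise comparisons
benefit = [True, False, True]   # second criterion treated as a cost
print(saw_rank(candidates, weights, benefit))           # highest weighted sum first
print(exact_match_accuracy(["accept", "reject", "accept"],
                           ["accept", "accept", "accept"]))  # 66.66...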


Symmetry, 2021, Vol 13 (12), pp. 2394
Author(s): Teo Poh Kuang, Hamidah Ibrahim, Fatimah Sidi, Nur Izura Udzir, Ali A. Alwan

Policy evaluation is a process to determine whether a request submitted by a user satisfies the access control policies defined by an organization. Naming heterogeneity between the attribute values of a request and a policy is common due to syntactic variations and terminological variations, particularly among organizations in a distributed environment. Existing policy evaluation engines employ a simple string equal matching function in evaluating the similarity between the attribute values of a request and a policy, which is inaccurate, since only an exact match is considered similar. This work proposes several matching functions, not limited to the string equal matching function, that aim to resolve various types of naming heterogeneity. Our proposed solution is also capable of supporting symmetrical architecture applications, in which the organization can negotiate with users for the release of their resources and properties that raise privacy concerns. The effectiveness of the proposed matching functions is evaluated on real XACML policies designed for universities, conference management, and the health care domain. The results show that the proposed solution achieves higher Recall and F-measure than the standard Sun's XACML implementation, with improvements of up to 70% and 57%, respectively.
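To illustrate the idea of going beyond string equality, the sketch below contrasts an exact match with two more tolerant checks: one for syntactic variation (normalization plus string similarity) and one for terminological variation (a synonym table). The threshold, synonym entries, and function names are assumptions for illustration; the paper's actual matching functions may differ.

import difflib

# Hypothetical synonym table; a real deployment might use WordNet or a
# domain ontology to handle terminological variation.
SYNONYMS = {"lecturer": {"instructor", "teacher"}, "student": {"pupil"}}

def exact_match(a, b):
    return a == b

def syntactic_match(a, b, threshold=0.85):
    """Tolerate syntactic variation (case, punctuation, small typos)."""
    norm = lambda s: "".join(ch for ch in s.lower() if ch.isalnum())
    return difflib.SequenceMatcher(None, norm(a), norm(b)).ratio() >= threshold

def terminological_match(a, b):
    """Tolerate terminological variation via the synonym dictionary."""
    a, b = a.lower(), b.lower()
    return b in SYNONYMS.get(a, set()) or a in SYNONYMS.get(b, set())

def attribute_values_match(request_value, policy_value):
    return (exact_match(request_value, policy_value)
            or syntactic_match(request_value, policy_value)
            or terminological_match(request_value, policy_value))

print(attribute_values_match("Lecturer", "lecturer"))    # True (syntactic)
print(attribute_values_match("lecturer", "instructor"))  # True (terminological)
print(attribute_values_match("lecturer", "dean"))        # False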


2021, Vol 2062 (1), pp. 012027
Author(s): Poonam Gupta, Ruchi Garg, Amandeep Kaur

The COVID-19 pandemic has devastated the entire world. This situation motivates researchers to resolve the queries raised by people around the world in an efficient manner. However, the limited number of resources available for gaining information and knowledge about COVID-19 creates a need to evaluate the existing Question Answering (QA) systems on COVID-19. In this paper, we compare the various QA systems available for answering the questions raised by people such as doctors and medical researchers about the coronavirus. QA systems process queries submitted in natural language to find the most relevant answer among all the candidate answers for COVID-19 related questions. These systems utilize text mining and information retrieval on the COVID-19 literature. This paper surveys the QA systems CovidQA, the CAiRE-COVID system (Center for Artificial Intelligence Research), the CO-Search semantic search engine, COVIDASK, and RECORD (Research Engine for COVID Open Research Dataset), all available for COVID-19. These QA systems are also compared in terms of their significant parameters, such as Precision at rank 1 (P@1), Recall at rank 3 (R@3), Mean Reciprocal Rank (MRR), F1-score, Exact Match (EM), Mean Average Precision, and the Score metric, on which the efficiency of these systems relies.
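For reference, the core answer-level metrics mentioned here (Exact Match, token-level F1, and Mean Reciprocal Rank) can be computed as in the sketch below; it follows the common SQuAD-style definitions, which the individual surveyed systems may refine, for example with additional answer normalization.

from collections import Counter

def exact_match(prediction, gold):
    return int(prediction.strip().lower() == gold.strip().lower())

def f1(prediction, gold):
    pred_tokens, gold_tokens = prediction.lower().split(), gold.lower().split()
    common = Counter(pred_tokens) & Counter(gold_tokens)
    overlap = sum(common.values())
    if overlap == 0:
        return 0.0
    precision = overlap / len(pred_tokens)
    recall = overlap / len(gold_tokens)
    return 2 * precision * recall / (precision + recall)

def mean_reciprocal_rank(ranked_lists, gold_answers):
    """ranked_lists[i] is the system's ranked candidates for question i."""
    rr = []
    for candidates, gold in zip(ranked_lists, gold_answers):
        rank = next((i + 1 for i, c in enumerate(candidates) if c == gold), None)
        rr.append(1.0 / rank if rank else 0.0)
    return sum(rr) / len(rr)

print(exact_match("fever and cough", "Fever and cough"))           # 1
print(round(f1("fever and dry cough", "fever and cough"), 2))      # 0.86
print(mean_reciprocal_rank([["a", "b"], ["x", "y"]], ["b", "x"]))  # 0.75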


2021, Vol 11 (21), pp. 10267
Author(s): Puri Phakmongkol, Peerapon Vateekul

Question Answering (QA) is a natural language processing task that enables a machine to understand a given context and answer a given question. There are several QA research efforts backed by the rich resources available for the English language. However, Thai is one of the languages with low availability of labeled corpora for QA studies. According to previous studies, while English QA models can achieve more than 90% in F1 score, Thai QA models obtained only 70% in our baseline. In this study, we aim to improve the performance of Thai QA models by generating more question-answer pairs with the Multilingual Text-to-Text Transfer Transformer (mT5) along with data preprocessing methods for Thai. With this method, more than 100 thousand question-answer pairs can be synthesized from the provided Thai Wikipedia articles. Utilizing our synthesized data, many fine-tuning strategies were investigated to achieve the highest model performance. Furthermore, we show that the syllable-level F1 is a more suitable evaluation measure than Exact Match (EM) and the word-level F1 for Thai QA corpora. The experiment was conducted on two Thai QA corpora: Thai Wiki QA and iApp Wiki QA. The results show that our augmented model is the winner on both datasets compared to other modern transformer models: RoBERTa and mT5.
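The point about evaluation granularity can be illustrated by computing the same token-level F1 over different units: an answer segmented differently from the gold answer scores 0 under Exact Match and may score 0 at the word level, while still receiving partial credit at a finer granularity. The tokenizers below are crude stand-ins; for Thai, PyThaiNLP's word and subword tokenizers would be one possible choice.

from collections import Counter

def unit_f1(prediction, gold, tokenize):
    """Token-level F1 where `tokenize` decides the unit (word, syllable, ...)."""
    pred, ref = tokenize(prediction), tokenize(gold)
    common = Counter(pred) & Counter(ref)
    overlap = sum(common.values())
    if overlap == 0:
        return 0.0
    p, r = overlap / len(pred), overlap / len(ref)
    return 2 * p * r / (p + r)

# Illustrative tokenizers only; real Thai evaluation would use a proper
# word/syllable segmenter instead of whitespace or character splitting.
word_tok = str.split
char_tok = list  # crude stand-in for syllable-level segmentation

pred, gold = "กรุงเทพมหานคร", "กรุงเทพ"   # full vs short form of "Bangkok"
print(pred == gold)                              # Exact Match: False
print(unit_f1(pred, gold, word_tok))             # word level: 0.0
print(round(unit_f1(pred, gold, char_tok), 2))   # finer granularity: ~0.7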


2021, Vol 12 (3)
Author(s): Vanessa Souza, Jeferson Nobre, Karin Becker

The use of social networks to expose personal difficulties has enabled research on the automatic identification of specific mental conditions, particularly depression. Depression is the most incapacitating disease worldwide, and it has an alarming comorbidity rate with anxiety. In this paper, we explore deep learning techniques to develop a stacking ensemble that automatically identifies depression, anxiety, and comorbidity, using data extracted from Reddit. The stacking is composed of specialized single-label binary classifiers that distinguish between specific disorders and control users. A meta-learner explores these base classifiers as a context for reaching a multi-label, multi-class decision. We developed extensive experiments using alternative architectures (LSTM, CNN, and their combination), word embeddings, and ensemble topologies. All base classifiers and ensembles outperformed the baselines. The CNN-based binary classifiers achieved the best performance, with f-measures of 0.79 for depression, 0.78 for anxiety, and 0.78 for comorbidity. The ensemble topology with the best performance (Hamming Loss of 0.29 and Exact Match Ratio of 0.47) combines base classifiers from the three architectures and does not include comorbidity classifiers. Using SHAP, we confirmed that the influential features are related to symptoms of these disorders.
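The two ensemble metrics reported here are standard multi-label measures; the sketch below computes them with scikit-learn on an illustrative set of predictions over the labels [depression, anxiety, comorbidity] (the numbers are made up and are not the paper's results).

import numpy as np
from sklearn.metrics import hamming_loss, accuracy_score

# Multi-label ground truth and predictions for four hypothetical users.
y_true = np.array([[1, 0, 0],
                   [0, 1, 0],
                   [1, 1, 1],
                   [0, 0, 0]])
y_pred = np.array([[1, 0, 0],
                   [0, 1, 1],
                   [1, 0, 1],
                   [0, 0, 0]])

# Hamming Loss: fraction of individual label assignments that are wrong.
print(hamming_loss(y_true, y_pred))    # 2 wrong labels / 12 = 0.1667
# Exact Match Ratio: fraction of users whose full label vector is correct
# (called subset accuracy in scikit-learn).
print(accuracy_score(y_true, y_pred))  # 2 of 4 rows fully correct = 0.5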


Author(s): Г.Ш. Григорян

During the First World War, massive migrations of refugees from the front-line areas deep into Russia significantly affected the ethnic structure of the Moscow population. The share of ethnic groups that had previously predominated in the empire's western provinces increased sharply in the city, which was uncharacteristic of the pre-war period. However, the lack of information in the published statistics makes it difficult for researchers to analyze how wartime migrants influenced the share of the most numerous ethnic groups in Moscow, such as Russians, Jews, Germans, Poles, Lithuanians, Latvians, Estonians, Belarusians, Ukrainians, Tatars, and Armenians. A further difficulty is the undercounting of wartime migrants, since a significant portion of them were never registered as refugees and thus do not appear in the statistics. Introducing into scholarly circulation the statements on the number of Moscow residents by religion for 1908–1916 partially fills this gap and allows the dynamics of the ethnic structure of the city's population to be assessed. In addition, an attempt is made to estimate the total number of wartime migrants present in Moscow before the revolution of 1917 and to single out Belarusian refugees from the total number of Orthodox and Catholic residents. The proposed method of equating confession with ethnicity does not give an exact match; however, together with additional sources, it makes it possible to identify trends in the ethnic structure of the city's population and to analyze the factors that influenced the size of particular ethnic minority groups.


2021, Vol 25 (Special), pp. 1-157-1-166
Author(s): Nabaa I. Abed, Ghanim A.AL Rubaye

The phenomenal increase in the usage of mobile devices and wireless networking tools in recent years has left the communication industry needing higher data rates and greater bandwidth for connections. As a result, multi-carrier modulation has been suggested as a reliable and effective method of transmitting data over difficult communication channels such as selective fading channels. Orthogonal Frequency Division Multiplexing (OFDM) is a highly effective multi-carrier technique that can meet users' high demands. Many studies have looked into this technique, mostly as a way to counteract fading and Additive White Gaussian Noise (AWGN). This article therefore examines the performance of a QAM-OFDM system in the presence of multi-path Rayleigh fading and Weibull noise. Furthermore, the bit error rate (BER) performance is computed from an optimal derivation for the real system contaminated by compound Gaussian and non-Gaussian (Weibull) noise distributions at the OFDM demodulator output. The derived result is an exact match to the simulated results over the various scenarios produced with the MATLAB software package.
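As a rough Monte-Carlo counterpart to such an analysis, the sketch below simulates a 4-QAM OFDM link in which each subcarrier sees a Rayleigh-distributed gain and additive noise whose amplitude follows a Weibull distribution. The FFT size, SNR, and Weibull shape are illustrative assumptions; this does not reproduce the article's analytical derivation or its MATLAB scenarios.

import numpy as np

# Illustrative sketch: 4-QAM OFDM over per-subcarrier Rayleigh fading with
# Weibull-amplitude noise. With a cyclic prefix, a multipath channel reduces
# to one complex gain per subcarrier, so the IFFT/FFT pair is omitted and the
# frequency-domain model y = h * x + n is simulated directly.
rng = np.random.default_rng(0)
n_fft, n_sym, snr_db, weibull_shape = 64, 2000, 15, 1.5   # assumed parameters

bits = rng.integers(0, 2, size=(n_sym, n_fft, 2))          # 2 bits per subcarrier
x = ((2 * bits[..., 0] - 1) + 1j * (2 * bits[..., 1] - 1)) / np.sqrt(2)  # unit power

# Rayleigh fading: complex Gaussian gain per subcarrier and OFDM symbol.
h = (rng.standard_normal(x.shape) + 1j * rng.standard_normal(x.shape)) / np.sqrt(2)

# Weibull-amplitude noise with uniform phase, scaled to the target SNR.
noise = rng.weibull(weibull_shape, size=x.shape) * np.exp(1j * rng.uniform(0, 2 * np.pi, size=x.shape))
noise *= np.sqrt(10 ** (-snr_db / 10) / np.mean(np.abs(noise) ** 2))

y = h * x + noise
x_hat = y / h                                               # zero-forcing equalization
rx_bits = np.stack([(x_hat.real > 0).astype(int), (x_hat.imag > 0).astype(int)], axis=-1)
print(f"Simulated BER at {snr_db} dB: {np.mean(rx_bits != bits):.4f}")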


2021
Author(s): Youngmok Jung, Dongsu Han

The growing use of next-generation sequencing and the resulting increase in sequencing throughput require efficient short-read alignment, in which seeding is one of the major performance bottlenecks. The key challenge in the seeding phase is searching for exact matches of substrings of short reads in the reference DNA sequence. Existing algorithms, however, are limited in performance by their frequent memory accesses. This paper presents BWA-MEME, the first full-fledged short-read alignment software that leverages learned indices to solve the exact match search problem for efficient seeding. BWA-MEME is a practical and efficient seeding algorithm based on a suffix array search algorithm that overcomes the challenges of utilizing learned indices for SMEM search, which is used extensively in the seeding phase. Our evaluation shows that BWA-MEME achieves up to 3.45x speedup in seeding throughput over BWA-MEM2 by reducing the number of instructions by 4.60x, memory accesses by 8.77x, and LLC misses by 2.21x, while ensuring SAM output identical to that of BWA-MEM2.
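To make the underlying search problem concrete, the toy sketch below finds all exact occurrences of a pattern by binary searching a suffix array of the reference; BWA-MEME's contribution, roughly, is to replace much of this pointer-chasing search with a learned index that predicts where in the suffix array to look (its actual SMEM algorithm and data structures are considerably more involved).

def build_suffix_array(text):
    # Naive O(n^2 log n) construction for illustration; production aligners
    # build and augment this structure far more efficiently.
    return sorted(range(len(text)), key=lambda i: text[i:])

def exact_match_positions(text, sa, pattern):
    """All start positions of `pattern` in `text` via binary search on the
    suffix array; each comparison touches a different part of the reference,
    which is the memory-access cost a learned index aims to avoid."""
    def lower_bound(target):
        lo, hi = 0, len(sa)
        while lo < hi:
            mid = (lo + hi) // 2
            if text[sa[mid]:sa[mid] + len(target)] < target:
                lo = mid + 1
            else:
                hi = mid
        return lo

    lo = lower_bound(pattern)
    hi = lower_bound(pattern + "\x7f")   # sentinel greater than A/C/G/T
    return sorted(sa[lo:hi])

ref = "ACGTACGTGACG"
sa = build_suffix_array(ref)
print(exact_match_positions(ref, sa, "ACG"))   # [0, 4, 9]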

