Human-Centered A.I. and Security Primitives

2020 ◽  
Vol 2 (4) ◽  
Author(s):  
Alex Mathew

The paper reviews how human-centered artificial intelligence and security primitives have influenced life in the modern world and how they will be useful in the future. Human-centered A.I. has enhanced our capabilities through intelligent, human-informed technology. It has produced technology that enables machines and computers to carry out their functions intelligently. Security primitives have enhanced the safety of data and increased its accessibility from anywhere, provided the password is known. This has improved personalized customer experiences and narrowed the gap between human and machine. This success rests on heuristics, which solve problems experimentally; support vector machines, which evaluate and group data; and natural language processing systems, which convert speech to text. These techniques enable image recognition, games, speech recognition, translation, and question answering. In conclusion, human-centered A.I. and security primitives are an advanced form of technology that uses statistical and mathematical models to provide tools for performing certain tasks. The results keep advancing and spreading over the years, and the technology will become common in our lives.

2017 ◽  
Vol 2 (1) ◽  
pp. 249-257
Author(s):  
Daniel Morariu ◽  
Lucian Vințan ◽  
Radu Crețulescu

Abstract In this paper, we present experiments that integrate the power of the Word Embedding representation into real document-classification problems. Word Embedding is a recent trend in the natural language processing domain that represents each word of a document as a vector. This representation embeds the semantic context in which the word most frequently occurs. We add this new representation to a classical VSM document representation and evaluate it using a learning algorithm based on the Support Vector Machine. The added information makes classification more demanding, as it increases the learning time and the memory required, and the results obtained are slightly weaker compared with the classical VSM document representation. By adding the WE representation to the classical VSM representation, we aim to improve the current educational paradigm for computer science students, which is generally limited to the VSM representation.
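The combined representation described above can be sketched as a document's classical VSM (bag-of-words) vector concatenated with the average of its words' embedding vectors. The tiny vocabulary and 3-dimensional embeddings below are hypothetical, purely for illustration; real embeddings such as word2vec have hundreds of dimensions.

```python
# Sketch: concatenating a bag-of-words (VSM) vector with an averaged
# word-embedding vector. Vocabulary and embeddings are hypothetical.

VOCAB = ["machine", "learning", "text", "vector"]

# Toy 3-dimensional embeddings standing in for real pretrained vectors.
EMBEDDINGS = {
    "machine": [0.2, 0.1, 0.7],
    "learning": [0.3, 0.2, 0.5],
    "text": [0.9, 0.1, 0.0],
    "vector": [0.8, 0.2, 0.1],
}

def vsm_vector(tokens):
    """Classical VSM representation: term counts over a fixed vocabulary."""
    return [tokens.count(term) for term in VOCAB]

def embedding_vector(tokens):
    """Average the embeddings of the known tokens in the document."""
    known = [EMBEDDINGS[t] for t in tokens if t in EMBEDDINGS]
    if not known:
        return [0.0] * 3
    return [sum(dims) / len(known) for dims in zip(*known)]

def combined_vector(tokens):
    """VSM counts concatenated with the averaged embedding (len = |VOCAB| + 3)."""
    return vsm_vector(tokens) + embedding_vector(tokens)

doc = ["machine", "learning", "machine"]
print(combined_vector(doc))  # 4 count features, then 3 averaged embedding features
```

The longer combined vector is what makes learning slower and more memory-hungry, as the abstract notes: every document now carries both representations.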


Author(s):  
Oksana Chulanova

The article discusses the capabilities of artificial intelligence technologies, including natural language processing, intelligent decision support, computer vision, speech recognition and synthesis, and other promising methods of artificial intelligence. The results of the author's study and analysis of artificial intelligence technologies and their capabilities for optimizing work with staff are presented. The study allowed the author to develop a concept for integrating artificial intelligence technologies into personnel work in the digital paradigm.


Author(s):  
Chaudhary Jashubhai Rameshbhai ◽  
Joy Paulose

<p>Opinion Mining, also known as Sentiment Analysis, is a technique that uses Natural Language Processing (NLP) to classify the sentiment expressed in text. Various NLP tools are available for processing text data, and considerable research has been done on opinion mining for online blogs, Twitter, Facebook, etc. This paper proposes a new opinion mining technique using a Support Vector Machine (SVM) and NLP tools on newspaper headlines. Relative words are generated using Stanford CoreNLP and passed to the SVM through a count vectorizer. On comparing three models using a confusion matrix, the results indicate that Tf-idf with a linear SVM provides better accuracy for the smaller dataset, while for the larger dataset the SGD and linear SVM model outperforms the other models.</p>
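As background for the Tf-idf weighting compared in the paper, a minimal term-frequency/inverse-document-frequency computation can be sketched as follows. The tokenized headlines are invented; a real pipeline would use a library such as scikit-learn's TfidfVectorizer.

```python
import math

def tf_idf(term, doc, corpus):
    """tf-idf = (term count in doc) * log(N / number of docs containing term)."""
    tf = doc.count(term)
    df = sum(1 for d in corpus if term in d)
    idf = math.log(len(corpus) / df) if df else 0.0
    return tf * idf

# Hypothetical tokenized headlines.
corpus = [
    ["markets", "rally", "after", "election"],
    ["storm", "warning", "issued"],
    ["markets", "slide", "on", "storm", "fears"],
]

# "rally" appears in 1 of 3 documents, so it scores higher (more
# distinctive) than "markets", which appears in 2 of 3.
print(tf_idf("rally", corpus[0], corpus))    # 1 * log(3/1)
print(tf_idf("markets", corpus[0], corpus))  # 1 * log(3/2)
```

Rare-but-present terms thus dominate the feature vector, which is why Tf-idf often helps a linear SVM separate short texts like headlines.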


Author(s):  
Jesús Fernández-Avelino ◽  
Giner Alor-Hernández ◽  
Mario Andrés Paredes-Valverde ◽  
Laura Nely Sánchez-Morales

A chatbot is a software agent that mimics human conversation using artificial intelligence technologies. Chatbots help to accomplish tasks ranging from answering questions and playing music to managing smart home devices. The adoption of this kind of agent is increasing as people discover its benefits, such as saving time and money, higher customer satisfaction, and customer base growth, among others. However, developing a chatbot is a challenging task that requires addressing several issues, such as pattern matching, natural language understanding, and natural language processing, as well as designing a knowledge base that encapsulates the intelligence of the system. This chapter describes the design and implementation of a text/speech chatbot for supporting health self-management; the chatbot currently supports Spanish. The main goal of this chapter is to clearly describe the main components and phases of the chatbot development process and the methods and tools used for this purpose, as well as to describe and discuss our findings from the practice side of things.


2021 ◽  
pp. 142-147
Author(s):  
M Muliyono ◽  
S Sumijan

A chatbot is software with artificial intelligence that can imitate human conversation through text or voice messages. The chatbot can convey information according to the knowledge it has previously been given, helping to offset the limitations of the academic office in answering questions posed by students. The data in this study came from a questionnaire distributed to students at the Muhammadiyah University of West Sumatra. Based on the analysis of the questionnaire, there are 40 questions frequently asked by students to the academic office. These were then processed using Natural Language Processing (NLP), a branch of artificial intelligence that studies communication between humans and computers through natural language. The processing stages are to identify the intent, process the input, and display the results according to the input. A test using a questionnaire given to 227 students produced a score of 3.55, rated very good. A further test using the 40 question-and-answer pairs yielded 37 correct answers and 3 incorrect ones, for an answer accuracy of 92.5 percent. These tests show that the chatbot can respond to the questions students ask and can make it easier for students to get information with a very good level of accuracy.
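The intent-identification stage described above can be illustrated with a minimal matcher that compares a student's question against stored FAQ entries using bag-of-words cosine similarity. The questions and answers below are invented examples, not the study's actual 40-item dataset.

```python
import math
from collections import Counter

# Hypothetical FAQ entries standing in for the 40 academic questions.
FAQ = {
    "how do i register for courses": "Register through the academic portal.",
    "when does the semester start": "The semester starts in September.",
    "how do i request a transcript": "Submit a transcript request form.",
}

def cosine(a, b):
    """Cosine similarity between two token Counters."""
    common = set(a) & set(b)
    dot = sum(a[t] * b[t] for t in common)
    norm = math.sqrt(sum(v * v for v in a.values())) * \
           math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def answer(user_question):
    """Return the answer whose stored question best matches the input."""
    query = Counter(user_question.lower().split())
    best = max(FAQ, key=lambda q: cosine(query, Counter(q.split())))
    return FAQ[best]

print(answer("When will the semester start?"))  # matches the semester entry
```

A production chatbot would add stemming and a similarity floor below which it replies "I don't know", which is roughly where the 3 incorrect answers in the study would surface.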


2021 ◽  
Vol 8 (6) ◽  
pp. 1265
Author(s):  
Muhammad Alkaff ◽  
Andreyan Rizky Baskara ◽  
Irham Maulani

<p>Lapor! is a service system for conveying public aspirations and complaints about Indonesian government services. The government has long used this system to address Indonesian citizens' bureaucratic problems. However, the growing volume of reports, which operators sort by reading every complaint that comes through the system, causes frequent errors in which operators forward reports to the wrong agencies. A solution is therefore needed that can automatically determine a report's context using Natural Language Processing techniques. This study aims to build an automatic classifier that routes reports to the authorized agencies based on their topics by combining Latent Dirichlet Allocation (LDA) and a Support Vector Machine (SVM). Topic modeling for each report is carried out using the LDA method, which extracts reports to find specific patterns in the documents and outputs topic distribution values. The classification step that determines a report's destination agency is then carried out using the SVM, based on the topic values extracted by LDA. The LDA-SVM model's performance is measured using a confusion matrix by calculating accuracy, precision, recall, and F1 score. Test results using a 70:30 train-test split show that the model performs well, with 79.85% accuracy, 79.98% precision, 72.37% recall, and a 74.67% F1 score.</p>
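The confusion-matrix evaluation reported above can be illustrated with a small sketch that computes accuracy and macro-averaged precision, recall, and F1 over a hypothetical three-agency confusion matrix (invented counts, not the study's data):

```python
# Rows = true agency, columns = predicted agency (hypothetical counts).
AGENCIES = ["health", "transport", "education"]
MATRIX = [
    [50, 5, 5],
    [4, 40, 6],
    [3, 7, 30],
]

def metrics(matrix):
    """Accuracy plus macro-averaged precision, recall, and F1."""
    n = len(matrix)
    total = sum(sum(row) for row in matrix)
    correct = sum(matrix[i][i] for i in range(n))
    precisions, recalls = [], []
    for i in range(n):
        predicted_i = sum(matrix[r][i] for r in range(n))  # column sum
        actual_i = sum(matrix[i])                          # row sum
        precisions.append(matrix[i][i] / predicted_i if predicted_i else 0.0)
        recalls.append(matrix[i][i] / actual_i if actual_i else 0.0)
    precision = sum(precisions) / n
    recall = sum(recalls) / n
    f1 = 2 * precision * recall / (precision + recall)
    return correct / total, precision, recall, f1

accuracy, precision, recall, f1 = metrics(MATRIX)
print(f"accuracy={accuracy:.4f} precision={precision:.4f} "
      f"recall={recall:.4f} f1={f1:.4f}")
```

As in the study, recall can lag precision when one agency's reports are frequently misrouted to another.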


The author of a sentence in a collective document can be detected by choosing a suitable set of features and implementing them with Natural Language Processing in machine learning. The basic idea is to train a machine to identify the author of a specific sentence. This is done using 8 different NLP steps, such as applying a stemming algorithm, finding stop-list words, and preprocessing the data, and then passing the result to a machine learning classifier, a Support Vector Machine (SVM), which partitions the dataset into classes corresponding to the authors and assigns an author name to each sentence with an accuracy of 82%. This paper helps readers who are interested in knowing the names of the authors who have written some specific words.
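Two of the preprocessing steps mentioned, stop-list filtering and stemming, can be sketched as below. The stop list is abbreviated and the stemmer is a naive suffix-stripper standing in for a real algorithm such as Porter's:

```python
# Abbreviated stop list; a real one (e.g. NLTK's) has ~180 English entries.
STOP_WORDS = {"the", "a", "an", "is", "are", "of", "and", "to", "in"}

def naive_stem(word):
    """Strip a few common suffixes; a stand-in for a real stemmer."""
    for suffix in ("ing", "ed", "es", "s"):
        if word.endswith(suffix) and len(word) > len(suffix) + 2:
            return word[: -len(suffix)]
    return word

def preprocess(sentence):
    """Lowercase, drop stop words, and stem the remaining tokens."""
    tokens = sentence.lower().split()
    return [naive_stem(t) for t in tokens if t not in STOP_WORDS]

print(preprocess("The authors are writing sentences in the documents"))
```

The surviving stems become the features handed to the SVM; stop words are dropped precisely because they carry little author-specific signal in a bag-of-words model.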


SAINTEKBU ◽  
2020 ◽  
Vol 12 (2) ◽  
pp. 40-44
Author(s):  
Iin Kurniasari

Facebook is one of the most frequently used social media platforms, especially during the current Covid-19 pandemic. A great deal of public sentiment circulates, particularly on Facebook, in the form of comments on available information about Covid-19, and analyzing it for various purposes is challenging. NLP (Natural Language Processing) techniques consisting of case folding, tokenizing, filtering, and stemming can be used in this case. This study focuses on developing sentiment analysis on Facebook using a Lexicon approach and a Support Vector Machine. The Lexicon-based results obtained have lower accuracy than those using the Support Vector Machine.
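The lexicon-based side of such an analysis can be sketched as follows: after case folding, tokenizing, and filtering, each token is scored against a polarity lexicon. The tiny lexicon and the example comments are invented for illustration; real Indonesian lexicons contain thousands of weighted entries.

```python
# Tiny hypothetical polarity lexicon (word -> polarity score).
LEXICON = {"good": 1, "helpful": 1, "safe": 1,
           "bad": -1, "scared": -1, "fake": -1}
STOP_WORDS = {"the", "is", "and", "this", "very", "i", "am", "of"}

def sentiment(comment):
    """Case-fold, tokenize, filter stop words, then sum lexicon scores."""
    tokens = [t for t in comment.lower().split() if t not in STOP_WORDS]
    score = sum(LEXICON.get(t, 0) for t in tokens)
    if score > 0:
        return "positive"
    if score < 0:
        return "negative"
    return "neutral"

print(sentiment("This vaccine information is very helpful and good"))  # positive
print(sentiment("I am scared of fake news"))                           # negative
```

Because the lexicon scores words in isolation, it misses context such as negation, which is one reason a trained SVM tends to beat it, as the study found.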


2020 ◽  
Vol 132 (4) ◽  
pp. 738-749 ◽  
Author(s):  
Michael L. Burns ◽  
Michael R. Mathis ◽  
John Vandervest ◽  
Xinyu Tan ◽  
Bo Lu ◽  
...  

Abstract Background Accurate anesthesiology procedure code data are essential to quality improvement, research, and reimbursement tasks within anesthesiology practices. Advanced data science techniques, including machine learning and natural language processing, offer opportunities to develop classification tools for Current Procedural Terminology codes across anesthesia procedures. Methods Models were created using a Train/Test dataset including 1,164,343 procedures from 16 academic and private hospitals. Five supervised machine learning models were created to classify anesthesiology Current Procedural Terminology codes, with accuracy defined as the first-choice classification matching the institution-assigned code in the perioperative database. The two best-performing models were further refined and tested on a Holdout dataset from a single institution distinct from Train/Test. A tunable confidence parameter was created to identify cases for which the models were highly accurate, with the goal of at least 95% accuracy, above the reported 2018 Centers for Medicare and Medicaid Services (Baltimore, Maryland) fee-for-service accuracy. Actual submitted claim data from billing specialists were used as a reference standard. Results The support vector machine and the neural network label-embedding attentive model were the best-performing models, demonstrating overall accuracies of 87.9% and 84.2% (single best code) and 96.8% and 94.0% (within top three), respectively. Classification accuracy was 96.4% in 47.0% of cases using the support vector machine and 94.4% in 62.2% of cases using the label-embedding attentive model within the Train/Test dataset. In the Holdout dataset, the respective classification accuracies were 93.1% in 58.0% of cases and 95.0% in 62.0% of cases. The most important feature in model training was the procedure text.
Conclusions Through application of machine learning and natural language processing techniques, highly accurate real-time models were created for anesthesiology Current Procedural Terminology code classification. The increased processing speed and a priori targeted accuracy of this classification approach may provide performance optimization and cost reduction for quality improvement, research, and reimbursement tasks reliant on anesthesiology procedure codes.
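The tunable confidence parameter described in the Methods can be illustrated as a simple threshold rule: a predicted code is auto-accepted only when the classifier's top probability clears the threshold; otherwise the case is routed to a billing specialist. The codes and probabilities below are invented, not the study's outputs.

```python
def route(predictions, threshold=0.9):
    """Split cases into auto-coded and manual-review by top-class confidence.

    predictions: list of (case_id, code, probability) from a classifier.
    Raising the threshold raises accuracy on the auto-coded subset but
    shrinks its share of cases, mirroring the accuracy/coverage trade-off
    reported in the Results.
    """
    auto, manual = [], []
    for case_id, code, prob in predictions:
        (auto if prob >= threshold else manual).append((case_id, code))
    return auto, manual

# Hypothetical classifier outputs: (case id, CPT code, top probability).
preds = [
    ("a1", "00840", 0.97),
    ("a2", "00790", 0.62),
    ("a3", "01402", 0.93),
]

auto, manual = route(preds, threshold=0.9)
print("auto-coded:", auto)       # a1 and a3 clear the threshold
print("manual review:", manual)  # a2 goes to a billing specialist
```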


2019 ◽  
Vol 8 (10) ◽  
pp. 1677 ◽  
Author(s):  
Franca Dipaola ◽  
Mauro Gatti ◽  
Veronica Pacetti ◽  
Anna Giulia Bottaccioli ◽  
Dana Shiffer ◽  
...  

Background: Enrollment of large cohorts of syncope patients from administrative data is crucial for proper risk stratification but is limited by the enormous amount of time required for manual revision of medical records. Aim: To develop a Natural Language Processing (NLP) algorithm to automatically identify syncope from Emergency Department (ED) electronic medical records (EMRs). Methods: De-identified EMRs of all consecutive patients evaluated at Humanitas Research Hospital ED from 1 December 2013 to 31 March 2014 and from 1 December 2015 to 31 March 2016 were manually annotated to identify syncope. Records were combined in a single dataset and classified. The performance of combined multiple NLP feature selectors and classifiers was tested. Primary Outcomes: NLP algorithms’ accuracy, sensitivity, specificity, positive predictive value, negative predictive value, and F3 score. Results: 15,098 and 15,222 records from the 2013 and 2015 datasets, respectively, were analyzed. Syncope was present in 571 records. The Normalized Gini Index feature selector combined with a Support Vector Machine classifier obtained the best F3 value (84.0%), with 92.2% sensitivity and 47.4% positive predictive value. A 96% reduction in analysis time was computed, compared with manual review of the EMRs. Conclusions: This artificial intelligence algorithm enabled the automatic identification of a large population of syncope patients using EMRs.
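The F3 score used as the primary outcome is the F-beta measure with beta = 3, which weights recall (sensitivity) nine times as heavily as precision, fitting a screening task where missing a syncope case is costlier than a false positive. Plugging in the reported rounded values approximately reproduces the reported figure:

```python
def f_beta(precision, recall, beta=3.0):
    """F_beta = (1 + beta^2) * P * R / (beta^2 * P + R)."""
    b2 = beta * beta
    return (1 + b2) * precision * recall / (b2 * precision + recall)

# Reported (rounded) values: 92.2% sensitivity, 47.4% positive predictive value.
print(round(f_beta(precision=0.474, recall=0.922), 2))  # ≈ 0.84, as reported
```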

