Py_ape: Text Data Acquiring, Extracting, Cleaning and Schema Matching in Python

Author(s):
Bich-Ngan T. Nguyen
Phuong N. H. Phạm
Vu Thanh Nguyen
Phan Quoc Viet
Le Dinh Tuan
...

1976
Vol 15 (01)
pp. 21-28
Author(s):
Carmen A. Scudiero
Ruth L. Wong

A free text data collection system has been developed at the University of Illinois utilizing single word, syntax free dictionary lookup to process data for retrieval. The source document for the system is the Surgical Pathology Request and Report form. To date 12,653 documents have been entered into the system. The free text data was used to create an IRS (Information Retrieval System) database. A program to interrogate this database has been developed to numerically code operative procedures. A total of 16,519 procedure records were generated. 1.9% of the procedures could not be fitted into any procedure category; 6.1% could not be specifically coded, while 92% were coded into specific categories. A system of PL/1 programs has been developed to facilitate manual editing of these records, which can be performed in a reasonable length of time (1 week). This manual check reveals that these 92% were coded with precision = 0.931 and recall = 0.924. Correction of the readily correctable errors could improve these figures to precision = 0.977 and recall = 0.987. Syntax errors were relatively unimportant in the overall coding process, but did introduce significant error in some categories, such as when right-left-bilateral distinction was attempted. The coded file that has been constructed will be used as an input file to a gynecological disease/PAP smear correlation system. The outputs of this system will include retrospective information on the natural history of selected diseases and a patient log providing information to the clinician on patient follow-up. Thus a free text data collection system can be utilized to produce numerically coded files of reasonable accuracy. Further, these files can be used as a source of useful information both for the clinician and for the medical researcher.
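The core mechanism described above, single-word, syntax-free dictionary lookup, can be sketched in a few lines. This is a minimal illustration, not the original PL/1 system: the keywords and numeric procedure codes below are hypothetical.

```python
import re

# Hypothetical keyword-to-code dictionary; the real system's vocabulary
# and numeric coding scheme are not given in the abstract.
PROCEDURE_DICT = {
    "appendectomy": 101,
    "biopsy": 102,
    "hysterectomy": 103,
    "excision": 104,
}

def code_report(text):
    """Syntax-free coding: look up each word independently, ignoring
    grammar and word order, and return the sorted set of matched codes."""
    words = re.findall(r"[a-z]+", text.lower())
    return sorted({PROCEDURE_DICT[w] for w in words if w in PROCEDURE_DICT})

print(code_report("Appendectomy performed; cervical biopsy taken."))  # → [101, 102]
```

Because each word is looked up in isolation, a phrase like "right ovary, left ovary" codes identically to "left ovary, right ovary", which is exactly why the abstract notes errors when the right-left-bilateral distinction was attempted.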


2016
Vol 15
pp. 163-171
Author(s):
M. G. Shcherbakovskiy

The article discusses the reasons for an expert to participate in legal proceedings. The gnoseological reason consists of the bad quality of the materials subject to examination, which renders the examination either completely impossible or compromises an objective, reasoned and reliable assessment of the findings. The procedural reason consists of a proscription for an expert to collect evidence himself or herself. The author investigates the ways in which an expert can participate in legal proceedings. If the defense invites an expert to participate in the proceedings, it is recommended that his or her involvement take place in the presence of attesting witnesses and be recorded in the protocol. In the course of legal proceedings an expert has the following tasks: adding initial data, acquiring new initial data, understanding the situation of the incident, and acquiring new objects to be studied, including samples for examination. An expert's participation in legal proceedings differs from the participation of a specialist or an examination at the scene of the incident. The author describes the tasks that an expert solves in the course of legal proceedings, the peculiarities of an investigative experiment, and the selection of samples for examination, inspection, and interrogation.


Author(s):
I. G. Zakharova
Yu. V. Boganyuk
M. S. Vorobyova
E. A. Pavlova

The goal of the article is to demonstrate the possibilities of an approach to diagnosing the level of IT graduates' professional competence, based on the analysis of the student's digital footprint and the content of the corresponding educational program. We describe methods for extracting indicators of a student's professional level from digital footprint text data: course descriptions and graduation qualification works. We show methods of comparing these indicators with the formalized requirements of employers, reflected in the texts of vacancies in the field of information technology. The proposed approach was applied at the Institute of Mathematics and Computer Science of the University of Tyumen. We performed diagnostics using a data set that included texts of course descriptions for IT areas of undergraduate study, 542 graduation qualification works in these areas, 879 descriptions of job requirements, and information on graduate employment. The presented approach allows us to evaluate the relevance of the educational program as a whole and the level of professional competence of each student based on objective data. The results were used to update the content of some major courses and to include new elective courses in the curriculum.
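The comparison step, matching text indicators from a student's footprint against vacancy texts, can be illustrated with a simple bag-of-words cosine similarity. This is a hedged sketch only: the abstract does not specify the authors' actual indicator extraction or matching method, and the thesis and vacancy strings below are invented examples.

```python
import math
from collections import Counter

def cosine(a, b):
    """Cosine similarity between two texts over raw word counts."""
    ca, cb = Counter(a.lower().split()), Counter(b.lower().split())
    dot = sum(ca[t] * cb[t] for t in set(ca) & set(cb))
    na = math.sqrt(sum(v * v for v in ca.values()))
    nb = math.sqrt(sum(v * v for v in cb.values()))
    return dot / (na * nb) if na and nb else 0.0

# Toy data: a graduation-work title versus a job-requirement snippet.
thesis = "machine learning models for text classification in python"
vacancy = "python developer with machine learning experience"
print(round(cosine(thesis, vacancy), 2))  # → 0.43
```

A production pipeline would more likely use TF-IDF weighting and lemmatization (important for Russian-language vacancy texts), but the overlap signal is the same idea.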


Author(s):
Aleksey Klokov
Evgenii Slobodyuk
Michael Charnine

The object of the research is a corpus of text data, collected together with the scientific advisor, and the natural language processing algorithms used to analyze it. A stream of hypotheses has been tested against computer science scientific publications through a series of simulation experiments described in this dissertation. The subject of the research is the algorithms, and their results, aimed at predicting promising topics and terms that emerge over time in the scientific environment. The result of this work is a set of machine learning models with the help of which experiments were carried out to identify promising terms and semantic relationships in the text corpus. The resulting models can be used for semantic processing and analysis of other subject areas.
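One simple signal such models might capture is a term whose relative frequency rises across publication years. The sketch below is an assumption-laden toy, not the dissertation's method, and the per-year corpora are invented.

```python
from collections import Counter

# Toy corpora keyed by publication year (not the dissertation's data).
corpus_by_year = {
    2018: "neural networks neural embeddings graphs",
    2019: "transformers embeddings transformers graphs neural",
    2020: "transformers transformers transformers embeddings pretraining",
}

def growth(term):
    """Change in a term's relative frequency from first to last year;
    a positive value marks the term as rising, i.e. 'promising'."""
    years = sorted(corpus_by_year)
    freqs = []
    for y in years:
        words = corpus_by_year[y].split()
        freqs.append(Counter(words)[term] / len(words))
    return freqs[-1] - freqs[0]

candidates = {"neural", "transformers", "embeddings"}
print(max(candidates, key=growth))  # → transformers
```

Real systems would smooth over more than two endpoints and combine frequency trends with the semantic relationships the abstract mentions, e.g. embedding-space neighborhoods.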


2020
Author(s):
Pathikkumar Patel
Bhargav Lad
Jinan Fiaidhi

During the last few years, RNN models have been used extensively and have proven to be well suited to sequence and text data. RNNs have achieved state-of-the-art performance in several applications such as text classification, sequence-to-sequence modelling and time series forecasting. In this article we review different Machine Learning and Deep Learning based approaches for text data and look at the results obtained from these methods. This work also explores the use of transfer learning in NLP and how it affects the performance of models on a specific application of sentiment analysis.
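What makes RNNs suited to sequence data is the recurrence itself: the hidden state is updated from the previous state and each new input, so the final state summarizes the sequence in order. A minimal scalar sketch, with fixed toy weights rather than trained parameters:

```python
import math

def rnn_scalar(inputs, w_x=0.5, w_h=0.8, b=0.0):
    """One-unit RNN: fold the input sequence into a single hidden state."""
    h = 0.0
    for x in inputs:
        # New state mixes the current input with the remembered state.
        h = math.tanh(w_x * x + w_h * h + b)
    return h

# The final state depends on token order, which bag-of-words models
# cannot capture: the same inputs in a different order give a
# different result.
print(rnn_scalar([1.0, 0.0, 0.0]) != rnn_scalar([0.0, 0.0, 1.0]))  # → True
```

Practical models use vector states, learned weight matrices, and gated variants (LSTM/GRU) to mitigate vanishing gradients, but the update loop is the same shape.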


2009
Vol 20 (2)
pp. 234-245
Author(s):
Qiang LIU
Di ZHAO
Hua ZHONG
Tao HUANG