A technique for automatically creating a list of terms based on ready-made lesson articles

Author(s):  
A. V. Filipov

When designing a course in a discipline, a teacher periodically needs tools for analyzing and visualizing the structure of the future course: in particular, tools for identifying the system of concepts on which the course will be built and for systematizing and structuring that system. The article addresses the problem of automatically compiling a list of concepts (terms) for the subsequent analysis of educational material when designing courses. The choice of a system of concepts and of the methods for presenting it depends on the time frame of the course and on the cognitive abilities and prior knowledge of the students. The article describes a method for constructing a thesaurus from ready-made lesson abstracts using linguistic methods of natural language text analysis. Graphematic analysis of the lesson abstracts identifies the structural units of the course. To compile the thesaurus, structural units of the "sentence" type are syntactically analyzed and matched against a template for introducing the definition of a concept. To find relationships between concepts, terms from the thesaurus are expanded into all their morphological forms, which are then searched for in the definitions of other concepts. For the subsequent analysis of the course, the structural units, terms, and their relationships are represented as graph models.
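
Concretely, the pipeline could look like the sketch below. It is an illustration under assumptions, not the author's implementation: an English definition template ("<term> is a/an ...") stands in for the Russian one, matching on spaCy lemmas stands in for expanding terms into all morphological forms, and the sample text, pattern, and function names are invented for the example.

```python
import re

import networkx as nx
import spacy

nlp = spacy.load("en_core_web_sm")  # requires: python -m spacy download en_core_web_sm

# Hypothetical definition template: "<term> is a/an/the <definition>".
DEFINITION = re.compile(r"^(?P<term>.+?) is (?:a|an|the) (?P<body>.+)$", re.IGNORECASE)

def lemmas(text):
    """Lemmatize, dropping punctuation and articles (a crude stand-in for
    generating every morphological form of a term)."""
    return [t.lemma_.lower() for t in nlp(text) if not (t.is_punct or t.pos_ == "DET")]

def build_concept_graph(abstract):
    # Graphematic step: segment the text into sentence-level structural units.
    thesaurus = {}
    for sent in nlp(abstract).sents:
        m = DEFINITION.match(sent.text.strip().rstrip("."))
        if m:
            term = " ".join(lemmas(m.group("term")))
            thesaurus[term] = lemmas(m.group("body"))
    # Graph model: an edge B -> A means B's definition mentions concept A.
    graph = nx.DiGraph()
    graph.add_nodes_from(thesaurus)
    for a, a_lemmas in ((t, t.split()) for t in thesaurus):
        for b, body in thesaurus.items():
            if a != b and all(l in body for l in a_lemmas):
                graph.add_edge(b, a)
    return graph

toy = ("A graph is a set of vertices together with edges. "
       "A vertex is an element of a graph. "
       "An edge is a pair of vertices.")
print(sorted(build_concept_graph(toy).edges()))
# e.g. [('edge', 'vertex'), ('graph', 'edge'), ('graph', 'vertex'), ('vertex', 'graph')]
```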

2018, pp. 35-38
Author(s):  
O. Hyryn

The article deals with natural language processing, namely the processing of an English sentence. It describes the problems that may arise during this process, connected with graphic, semantic, and syntactic ambiguity. The article describes how these problems were solved before automatic syntactic analysis was applied, and how such analysis methods can help in developing new analysis algorithms. The analysis focuses on the issues that block the foundation of natural language processing: parsing, the analysis of sentences according to their structure, content, and meaning, which aims to determine the grammatical structure of a sentence, divide it into constituent components, and define the links between them.
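
For a concrete picture of the syntactic ambiguity at issue, the classic prepositional-phrase attachment example below receives two distinct parses from a toy context-free grammar; the NLTK-based setup and the grammar itself are illustrative assumptions, not the method of the article.

```python
import nltk

# Toy CFG in which "with the telescope" can attach to the NP or to the VP.
grammar = nltk.CFG.fromstring("""
S -> NP VP
PP -> P NP
NP -> Det N | Det N PP | 'I'
VP -> V NP | VP PP
Det -> 'the'
N -> 'man' | 'telescope'
V -> 'saw'
P -> 'with'
""")

parser = nltk.ChartParser(grammar)
for tree in parser.parse("I saw the man with the telescope".split()):
    print(tree)  # prints two distinct trees, one per attachment reading
```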


2021, Vol. 12 (5-2021), pp. 50-56
Author(s):  
Boris M. Pileckiy

This paper describes one possible implementation of the recognition of spatial data in natural language texts. The proposed approach is based on lexico-syntactic analysis of the texts, which requires special grammars and dictionaries. Spatial data are recognized for subsequent geocoding and visualization. The practical implementation of spatial data recognition uses a free, freely distributed software tool. The paper also considers some applications of spatial data and gives preliminary results of spatial data recognition.
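
As a hedged sketch of the recognition-then-geocoding step (not the paper's actual tool or grammars), the following uses spaCy's named-entity labels as a stand-in for lexico-syntactic toponym recognition, and the freely available Nominatim geocoder via geopy; the sample sentence and user-agent string are invented.

```python
import spacy
from geopy.geocoders import Nominatim

nlp = spacy.load("en_core_web_sm")
geocoder = Nominatim(user_agent="spatial-data-sketch")  # hypothetical app name

text = "The expedition left Saint Petersburg and wintered in Arkhangelsk."
for ent in nlp(text).ents:
    if ent.label_ in ("GPE", "LOC"):        # toponym candidates
        place = geocoder.geocode(ent.text)  # network call to the Nominatim service
        if place is not None:
            print(ent.text, (place.latitude, place.longitude))
```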


2021, Vol. 16 (7-8), pp. 106-109
Author(s):  
L.O. Malsteva ◽  
W.W. Nikonov ◽  
N.A. Kazimirova ◽  
A.A. Lopata

The review aims to present the chronological sequence of the development of the universal definitions of myocardial infarction (MI) and new ideas for improving the screening of post-infectious and sepsis-associated MI (casuistic masks of myocardial infarction). The stages in the development of a common, global definition of myocardial infarction are outlined: 1) by WHO working groups, based on the ECG, for epidemiological studies; 2) by the European Society of Cardiology and the American College of Cardiology, using clinical and biochemical approaches; 3) the Global Task Force consensus document on a universal definition, with the subsequent classification of MI into five subtypes (spontaneous; mismatch between oxygen delivery and consumption; lethal outcome before the rise of specific markers of myocardial damage; PCI-associated; CABG-associated); 4) the revision of the above document by the Joint Task Force, based on the inclusion of more sensitive markers, the troponins; 5) the identification of 17 non-ischemic forms of myocardial damage accompanied by an increase in the troponin level; 6) the characterization of the atrial natriuretic peptide in terms of its synthesis, storage, release, and diagnostic value as a biomarker of acute myocardial damage; 7) the clinical definition of myocardial infarction presented in the materials of the III Consensus on myocardial infarction, 2017. Diagnosing myocardial infarction by the criteria set out in this document requires the integration of clinical data, ECG patterns, laboratory data, imaging findings and, in some cases, pathological results, all considered in the context of the time frame of the suspected event. K. Thygesen et al. consider the additional use of: 1) cardiovascular magnetic resonance to determine the etiology of myocardial damage; 2) computed tomography coronary angiography in suspected myocardial infarction. Myocardial infarction is thus a combination of a rise in specific cardiac markers with at least one of the criteria listed above. Myocardial infarction can develop during or after an acute respiratory infection, and causal relationships between the two conditions have been established. It is strongly recommended that post-infectious myocardial infarction be distinguished as a separate diagnostic entity. In sepsis, global myocardial ischemia with ischemic myocardial damage arises from humoral and cellular factors; it is accompanied by a rise in troponins, a decrease in the left ventricular ejection fraction by 45 %, an increase in the end-diastolic size of the left ventricle, and the development of sepsis-associated multiple organ failure, an unfavourable prognostic factor.


2021
Author(s):  
Carolinne Roque e Faria ◽  
Cinthyan Renata Sachs Camerlengo de Barb

Technology is becoming notably popular among agribusiness producers and is advancing in all agricultural areas. One of the difficulties in this context is handling natural language data to solve problems in the field of agriculture. In order to build up dialogues and provide rich resources, the present work uses Natural Language Processing (NLP) techniques to develop an automatic and effective computer system that interacts with the user and assists in the identification of pests and diseases in soybean farming, stored in a database repository, providing accurate diagnoses to simplify the work of agricultural professionals and of those who deal with large amounts of information in this area. Information on 108 pests and 19 diseases that damage Brazilian soybean was collected from Brazilian bibliographic manuals, with the purpose of optimizing the data and improving production. The spaCy library was used for the syntactic analysis stage of NLP; it made it possible to preprocess the texts, recognize named entities, calculate the similarity between words, and verify dependency parsing, and it also supported the development requirements of the CAROLINA tool (Robotized Agronomic Conversation in Natural Language) using the language of the agricultural domain.
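
The spaCy steps the abstract enumerates (preprocessing, named-entity recognition, word similarity, dependency parsing) might look roughly like this; the Portuguese model choice, sample sentence, and terms are assumptions for illustration, not output of the CAROLINA tool.

```python
import spacy

# Portuguese model, since the source texts are Brazilian manuals.
# Requires: python -m spacy download pt_core_news_sm
nlp = spacy.load("pt_core_news_sm")

doc = nlp("A lagarta-da-soja ataca as folhas da soja no Paraná.")

print([(tok.lemma_, tok.pos_) for tok in doc])               # preprocessing: lemmas, POS tags
print([(ent.text, ent.label_) for ent in doc.ents])          # named entities, e.g. LOC
print([(tok.text, tok.dep_, tok.head.text) for tok in doc])  # dependency parse

# Similarity between two in-domain terms. Small models approximate vectors
# with context-sensitive tensors, so treat the score as indicative only.
praga, ferrugem = nlp("praga"), nlp("ferrugem")
print(praga.similarity(ferrugem))
```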


Architects, 2019, pp. 216-220
Author(s):  
Thomas Yarrow

Contracts specify what must happen but also when. Architects must coordinate things in time as well as in space, making sure buildings are constructed “as planned” and “on time.” Each project is made up of a series of phases. The completion of phases is linked to the payment of fees. Intervals are prescribed in advance, limited by a fee proposal through which costs are fixed. Projects anticipate a series of known future outcomes that are worked toward: the definition of a brief; the development of a design; the detailed development of plans for planning permission and then for tendering; and the construction of the building. The time of the project is linear and sequential: each phase follows the next, one after the other. Keeping things “on time” is an important but difficult accomplishment....


Author(s):  
Hugo Farne ◽  
Edward Norris-Cervetto ◽  
James Warbrick-Smith

The definition of weakness is important, because many patients who self-describe a ‘weak limb’ will actually have a clumsy limb (ataxia), a numb limb (reduced sensation), or a limb that is too painful to move. The time course of the onset of the symptoms in general reflects the time course of the underlying pathology:
• Sudden onset (seconds to minutes) usually implies either trauma (e.g. displaced vertebral fractures due to major trauma) or certain vascular insults (e.g. stroke, transient ischaemic attack (TIA)).
• Subacute onset (hours to days) suggests progressive demyelination (e.g. Guillain–Barré syndrome, multiple sclerosis) or a slowly expanding haematoma (e.g. subdural haematoma).
• Chronic onset (weeks to months) is consistent with pathologies such as a slow-growing tumour or motor neuron disease (progressive degeneration of motor neurons).
As only acute and subacute limb weakness will present acutely to generalists in hospital (chronic-onset cases will most likely be referred to neurology from primary care), we have limited the chapter to these cases. Limb movement requires an intact pathway from the cerebral cortex, down the corona radiata, internal capsule, and pons, along the corticospinal tract of the spinal cord, out along a nerve root, and down a peripheral nerve to the neuromuscular junction and the muscle itself. If a patient has limb weakness, there must be a lesion somewhere in this pathway. Figure 26.2 gives the differential diagnosis for limb weakness. Mr Walker has presented with rapid onset of left-sided arm weakness. Key clues in the history to elicit include:
• Exact time of onset? This is critical in suspected strokes because the window of time in which to confirm the diagnosis and administer thrombolysis (if appropriate) is only 4.5 hours from the onset of symptoms (after that, you risk doing more harm than good to the patient). If you suspect a stroke in a patient within that time frame, call the thrombolysis team immediately. In this case, all we can say is that the onset was at some point in the 7 hours between 11 p.m. (when he went to sleep) and 6 a.m. (when he woke up), so we cannot confidently say the onset was within 4.5 hours.


Author(s):  
Paula Estrella ◽  
Nikos Tsourakis

When it comes to the evaluation of natural language systems, it is well acknowledged that there is a lack of common evaluation methodologies, which makes fair comparison of such systems difficult. Many attempts to standardize this process have used a quality model based on the ISO/IEC 9126 standards. The authors have also used these standards to define a weighted quality model for the evaluation of a medical speech translator, showing the relative importance of the system's features depending on the potential user (patient, doctor, or developer). More recently, ISO/IEC 9126 has been replaced by a new series of standards, the 25000 or SQuaRE series, meaning that the model should be migrated to the new series in order to maintain compliance with current standards. This chapter demonstrates how to migrate from ISO/IEC 9126 to ISO 25000, using the authors' previous work as a use case.
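
A toy version of such a weighted quality model is sketched below; the 9126-to-25010 characteristic mapping, the weights, and the scores are invented for illustration and are not the chapter's actual model.

```python
from typing import Dict

# Hypothetical mapping of ISO/IEC 9126 characteristics onto their ISO/IEC
# 25010 (SQuaRE) counterparts, as a migration step might record it.
ISO9126_TO_25010 = {
    "functionality": "functional suitability",
    "usability": "usability",
    "efficiency": "performance efficiency",
    "reliability": "reliability",
}

def weighted_quality(scores: Dict[str, float], weights: Dict[str, float]) -> float:
    """Weighted mean of per-characteristic scores in [0, 1]; weights sum to 1."""
    return sum(scores[c] * weights[c] for c in scores)

# Example profile: a doctor might weight translation adequacy (functional
# suitability) over raw speed. All numbers are made up for illustration.
scores = {"functional suitability": 0.80, "usability": 0.70,
          "performance efficiency": 0.60, "reliability": 0.90}
doctor = {"functional suitability": 0.40, "usability": 0.30,
          "performance efficiency": 0.10, "reliability": 0.20}
print(weighted_quality(scores, doctor))  # 0.77
```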


Author(s):  
John Carroll

This article introduces the concepts and techniques of natural language (NL) parsing, which means using a grammar to assign a syntactic analysis to a string of words or to a lattice of word hypotheses output by a speech recognizer or similar component. The level of detail required depends on the language processing task being performed and on the particular approach to the task being pursued. The article describes approaches that produce ‘shallow’ analyses, and outlines approaches to parsing that analyse the input in terms of labelled dependencies between words. Producing hierarchical phrase structure requires grammars with at least context-free (CF) power. The CF algorithms widely used in NL parsing are described in this article. To support detailed semantic interpretation, more powerful grammar formalisms are required, but these are usually parsed using extensions of CF parsing algorithms. The article also describes unification-based parsing. Finally, it discusses three important issues that have to be tackled in real-world applications of parsing: evaluation of parser accuracy, parser efficiency, and measurement of grammar/parser coverage.
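
As one concrete instance of the widely used CF algorithms mentioned, here is a compact CKY recognizer for a grammar in Chomsky normal form; the toy grammar and sentence are assumptions for illustration.

```python
from itertools import product

# CNF rules: binary rules A -> B C and lexical rules A -> 'w'.
BINARY = {("NP", "VP"): {"S"}, ("Det", "N"): {"NP"}, ("V", "NP"): {"VP"}}
LEXICAL = {"the": {"Det"}, "dog": {"N"}, "cat": {"N"}, "chased": {"V"}}

def cky(words):
    n = len(words)
    # chart[i][j] holds the nonterminals that derive words[i:j]
    chart = [[set() for _ in range(n + 1)] for _ in range(n + 1)]
    for i, w in enumerate(words):
        chart[i][i + 1] = set(LEXICAL.get(w, set()))
    for span in range(2, n + 1):          # widen spans bottom-up
        for i in range(n - span + 1):
            j = i + span
            for k in range(i + 1, j):     # try every split point
                for b, c in product(chart[i][k], chart[k][j]):
                    chart[i][j] |= BINARY.get((b, c), set())
    return "S" in chart[0][n]

print(cky("the dog chased the cat".split()))  # True
```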


Author(s):  
Jens Steffek ◽  
Marcus Müller ◽  
Hartmut Behr

Abstract The disciplinary history of international relations (IR) is usually told as a succession of theories or “isms” that are connected to academic schools. Echoing the increasing criticism of this narrative, we present in this article a new perspective on the discipline. We introduce concepts from linguistics and its method of digital discourse analysis (DDA) to explore discursive shifts and terminological entrepreneurship in IR. DDA directs attention away from schools of thought and “heroic figures” who allegedly invented new theories. As we show with the example of the rise of “regime theory,” there were entire generations of IR scholars who (more or less consciously) developed new vocabularies to frame and address their common concerns. The terminological history of “international regime” starts in nineteenth-century international law, in which French authors already used “régime” to describe transnational forms of governance that were more than a treaty but less than an international organization. Only in the 1980s, however, was an explicit definition of “international regime” forged in American IR, combining textual elements already in use. We submit that such observations can change the way in which we understand, narrate, and teach the discipline of IR. DDA decenters IR theory from its traditional focus on schools and individuals and suggests unlearning established taxonomies of “isms.” The introduction of corpus linguistic methods to the study of academic IR can thus provide new epistemological directions for the field.
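
In the same spirit, the corpus-linguistic move behind DDA can be reduced to a minimal sketch: counting occurrences of a term such as “international regime” per decade in a dated corpus. The corpus entries below are placeholders, not the article's data, and real DDA involves far richer collocation and context analysis.

```python
from collections import Counter

corpus = [  # (year, text) pairs; contents are invented placeholders
    (1887, "le régime des fleuves internationaux ..."),
    (1982, "an international regime is defined as principles, norms ..."),
    (1984, "critics of international regime analysis note ..."),
]

# Count texts per decade that use either the French or the English term.
hits_per_decade = Counter(
    (year // 10) * 10
    for year, text in corpus
    if "régime" in text.lower() or "international regime" in text.lower()
)
print(sorted(hits_per_decade.items()))  # [(1880, 1), (1980, 2)]
```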

