Gramática de usuario final extendida para procesos de negocios (End User Grammar Extended for Business Processes)

2020 ◽  
Author(s):  
◽  
Enrique Eduardo Aramayo

The concept of a business process is closely tied to the way an organization manages its operations. Knowing and understanding an organization's operations is a key point that must be kept in mind during the software development process. In turn, the model-driven development approach called MockupDD captures requirements using user interface prototypes called Mockups. End users can easily understand these prototypes and write annotations on them. The approach builds on this central characteristic and, starting from it, generates valuable conceptual models that can then be exploited by every member of a software development team. Using natural language to annotate Mockups is a key aspect that can be leveraged. On this last point, a branch of artificial intelligence called Natural Language Processing (NLP) has been making important contributions related to the use and exploitation of people's natural language. This thesis proposes a new technique called End User Grammar Extended for Business Processes (EUGEBP). It consists of a set of writing rules designed to be applied to Mockup annotations, together with a series of steps for processing those annotations in order to derive business processes from them. This is achieved by identifying the elements that make up the business processes and the relationships that exist between them. In essence, this work proposes using the end-user annotations written on Mockups in natural language to derive business processes from them. The end-user annotations are intended to help describe the user interfaces, but they can also help identify the business processes that the system must support. While an analyst gathers information for the development of an application, he or she is also implicitly describing the organization's business processes.
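The EUGEBP writing rules and derivation steps are defined in the thesis itself; purely as a hedged illustration of the general idea, the Python sketch below (using spaCy and the invented rule "grammatical subject = actor, verb plus object = activity") shows how a single natural-language Mockup annotation might be parsed into candidate business-process elements.

```python
# Minimal sketch, not the EUGEBP implementation: it only illustrates how a
# natural-language Mockup annotation could be parsed into candidate
# business-process elements (actor, activity) with spaCy.
import spacy

nlp = spacy.load("es_core_news_sm")  # small Spanish pipeline (assumed installed)

def derive_process_elements(annotation: str) -> dict:
    """Extract a candidate actor and activity from one annotation."""
    doc = nlp(annotation)
    actor, action, obj = None, None, None
    for token in doc:
        if token.dep_ == "nsubj":            # grammatical subject -> candidate actor
            actor = token.text
        elif token.pos_ == "VERB":           # main verb -> candidate activity name
            action = token.lemma_
        elif token.dep_ in ("obj", "dobj"):  # direct object -> activity target
            obj = token.text
    activity = f"{action} {obj}" if action and obj else action
    return {"actor": actor, "activity": activity}

# Example: an annotation a user might write next to a Mockup widget.
print(derive_process_elements("El vendedor registra el pedido del cliente"))
```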

Author(s):  
Iraj Mantegh ◽  
Nazanin S. Darbandi

Robotic alternatives to many manual operations fall short in practice because of the difficulty of capturing the manual skill of an expert operator. One of the main problems to be solved if robots are to become flexible enough for varied manufacturing needs is end-user programming: an end user with little or no technical expertise in robotics needs to be able to communicate a manufacturing task to the robot efficiently. This paper proposes a new method for robot task planning using concepts from Artificial Intelligence. Our method is based on a hierarchical knowledge representation and propositional logic, which allows an expert user to incrementally integrate process and geometric parameters with the robot commands. The objective is to provide an intelligent, programmable agent such as a robot with a knowledge base about the attributes of human behaviors in order to facilitate the commanding process. The focus of this work is robot programming for manufacturing applications, where industrial manipulators are typically programmed with low-level languages. This work presents a new method based on Natural Language Processing (NLP) that allows a user to generate robot programs using a natural language lexicon and task information. This enables a manufacturing operator (for example, a painter) who may be unfamiliar with robot programming to easily employ the agent for manufacturing tasks.
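The paper's planner rests on a hierarchical knowledge representation and propositional logic; the toy sketch below is only a hedged illustration of how a natural-language lexicon could map an operator's instruction onto parameterized robot commands. The lexicon entries and command names are hypothetical, not the paper's actual knowledge base.

```python
# Toy sketch of lexicon-driven task planning; command names and lexicon
# entries are hypothetical, not the paper's actual knowledge base.
import re

# Lexicon: natural-language verbs mapped to low-level command templates.
LEXICON = {
    "paint": ["MOVE_TO({target})", "SPRAY_ON()", "SWEEP_SURFACE({target})", "SPRAY_OFF()"],
    "inspect": ["MOVE_TO({target})", "CAPTURE_IMAGE({target})"],
}

def plan(instruction: str) -> list:
    """Translate a simple instruction like 'paint the door panel' into commands."""
    words = instruction.lower().split()
    for verb, templates in LEXICON.items():
        if verb in words:
            # Everything after the verb is treated as the target phrase.
            target = " ".join(words[words.index(verb) + 1:]) or "workpiece"
            target = re.sub(r"^(the|a|an)\s+", "", target)
            return [t.format(target=target) for t in templates]
    raise ValueError(f"No lexicon entry matches: {instruction!r}")

print(plan("Paint the door panel"))
# ['MOVE_TO(door panel)', 'SPRAY_ON()', 'SWEEP_SURFACE(door panel)', 'SPRAY_OFF()']
```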


2017 ◽  
Vol 2017 ◽  
pp. 1-11 ◽  
Author(s):  
Isis Truck ◽  
Mohammed-Amine Abchir

In the geolocation field, where high-level programs and low-level devices coexist, it is often difficult to find a friendly user interface for configuring all the parameters. The challenge addressed in this paper is to propose intuitive and simple, hence natural language, interfaces for interacting with low-level devices. Such interfaces combine natural language processing (NLP) with fuzzy representations of words, which facilitate the elicitation of business-level objectives in our context. A complete methodology is proposed, from lexicon construction to a dialogue software agent that includes a fuzzy linguistic representation based on synonymy.
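As a minimal sketch of the fuzzy side of such an interface, the code below defines membership functions for the linguistic distance terms "near" and "far"; the term set and breakpoints are invented for illustration and are not taken from the paper.

```python
# Minimal sketch of fuzzy membership for linguistic distance terms.
# The term set and breakpoints are illustrative assumptions, not the paper's.

def near(d_m: float) -> float:
    """Membership of a distance in 'near': 1 at 0 m, falling to 0 beyond 200 m."""
    return max(0.0, 1.0 - d_m / 200.0)

def far(d_m: float) -> float:
    """Membership of a distance in 'far': 0 below 100 m, rising to 1 beyond 500 m."""
    return min(1.0, max(0.0, (d_m - 100.0) / 400.0))

# A spoken constraint such as "alert me when the truck is near the depot"
# would be grounded through membership functions like these.
for d in (50.0, 150.0, 600.0):
    print(d, {"near": round(near(d), 2), "far": round(far(d), 2)})
```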


2019 ◽  
Vol 35 (21) ◽  
pp. 4372-4380 ◽  
Author(s):  
Jin-Dong Kim ◽  
Yue Wang ◽  
Toyofumi Fujiwara ◽  
Shujiro Okuda ◽  
Tiffany J Callahan ◽  
...  

Abstract Motivation Most currently available text mining tools share two characteristics that make them less than optimal for use by biomedical researchers: they require extensive specialist skills in natural language processing and they were built on the assumption that they should optimize global performance metrics on representative datasets. This is a problem because most end-users are not natural language processing specialists and because biomedical researchers often care less about global metrics like F-measure or representative datasets than they do about more granular metrics such as precision and recall on their own specialized datasets. Thus, there are fundamental mismatches between the assumptions of much text mining work and the preferences of potential end-users. Results This article introduces the concept of Agile text mining, and presents the PubAnnotation ecosystem as an example implementation. The system approaches the problems from two perspectives: it allows the reformulation of text mining by biomedical researchers from the task of assembling a complete system to the task of retrieving warehoused annotations, and it makes it possible to do very targeted customization of the pre-existing system to address specific end-user requirements. Two use cases are presented: assisted curation of the GlycoEpitope database, and assessing coverage in the literature of pre-eclampsia-associated genes. Availability and implementation The three tools that make up the ecosystem, PubAnnotation, PubDictionaries and TextAE are publicly available as web services, and also as open source projects. The dictionaries and the annotation datasets associated with the use cases are all publicly available through PubDictionaries and PubAnnotation, respectively.
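PubAnnotation serves its warehoused annotations over a REST interface; the sketch below shows how annotations for one PubMed document might be retrieved with Python's requests library. The exact URL pattern and JSON fields are assumptions based on the service's documented style and should be checked against the PubAnnotation docs.

```python
# Hedged sketch: fetch warehoused annotations for one PubMed abstract from
# PubAnnotation. The URL pattern and response fields are assumptions about
# the public REST interface and may differ from the current API.
import requests

PMID = "25314077"  # any PubMed ID of interest
url = f"https://pubannotation.org/docs/sourcedb/PubMed/sourceid/{PMID}/annotations.json"

resp = requests.get(url, timeout=30)
resp.raise_for_status()
data = resp.json()

# A PubAnnotation document typically carries its text plus denotation spans.
text = data.get("text", "")
for denotation in data.get("denotations", []):
    span = denotation.get("span", {})
    begin, end = span.get("begin", 0), span.get("end", 0)
    print(denotation.get("obj"), "->", text[begin:end])
```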


2020 ◽  
pp. 3-17
Author(s):  
Peter Nabende

Natural Language Processing for under-resourced languages is now a mainstream research area. However, there are limited studies on Natural Language Processing applications for many indigenous East African languages. As a contribution to covering the current gap of knowledge, this paper focuses on evaluating the application of well-established machine translation methods for one heavily under-resourced indigenous East African language called Lumasaaba. Specifically, we review the most common machine translation methods in the context of Lumasaaba, including both rule-based and data-driven methods. We then apply a state-of-the-art data-driven machine translation method to learn models for automating translation between Lumasaaba and English using a very limited data set of parallel sentences. Automatic evaluation results show that a transformer-based Neural Machine Translation model architecture leads to consistently better BLEU scores than the recurrent neural network-based models. Moreover, the automatically generated translations can be comprehended to a reasonable extent and generally correspond to the source-language input.
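As a small illustrative sketch of the automatic evaluation step, the code below computes corpus-level BLEU with the sacrebleu library; the hypothesis and reference sentences are placeholders, not drawn from the Lumasaaba-English data.

```python
# Hedged sketch of corpus-level BLEU scoring with sacrebleu; the sentences
# are placeholders, not drawn from the Lumasaaba-English data set.
import sacrebleu

hypotheses = [
    "the children are going to school",
    "she sold vegetables at the market",
]
references = [[
    "the children are going to school",
    "she sold vegetables in the market",
]]

bleu = sacrebleu.corpus_bleu(hypotheses, references)
print(f"BLEU = {bleu.score:.2f}")
```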


Diabetes ◽  
2019 ◽  
Vol 68 (Supplement 1) ◽  
pp. 1243-P
Author(s):  
JIANMIN WU ◽  
FRITHA J. MORRISON ◽  
ZHENXIANG ZHAO ◽  
XUANYAO HE ◽  
MARIA SHUBINA ◽  
...  

Author(s):  
Pamela Rogalski ◽  
Eric Mikulin ◽  
Deborah Tihanyi

In 2018, we overheard many CEEA-ACEG members stating that they have "found their people"; this led us to wonder what makes this evolving community unique. Using cultural historical activity theory to view the proceedings of CEEA-ACEG 2004-2018 in comparison with those of the geographically and intellectually adjacent ASEE, we used both machine-driven (Natural Language Processing, NLP) and human-driven (literature review of the proceedings) methods. Here, we hoped to build on surveys (most recently by Nelson and Brennan, 2018) to understand, beyond what members say about themselves, what makes the CEEA-ACEG community distinct, where it has come from, and where it is going. Engaging in the two methods of data collection quickly diverted our focus from an analysis of the data themselves to the characteristics of the data in terms of cultural historical activity theory. Our preliminary findings point to some unique characteristics of machine- and human-driven results, with the former, as might be expected, focusing on the micro-level (words and language patterns) and the latter on the macro-level (ideas and concepts). NLP generated data within the realms of "community" and "division of labour" while the review of proceedings centred on "subject" and "object"; both found "instruments," although NLP with greater granularity. With this new understanding of the relative strengths of each method, we have a revised framework for addressing our original question.
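As a rough, hypothetical illustration of the machine-driven, micro-level analysis described above, the sketch below contrasts the most frequent terms in two tiny placeholder corpora using scikit-learn; the example documents stand in for CEEA-ACEG and ASEE proceedings text and are not the study's data.

```python
# Illustrative sketch only: contrast frequent terms in two tiny placeholder
# corpora, standing in for CEEA-ACEG and ASEE proceedings text.
from sklearn.feature_extraction.text import CountVectorizer

ceea_docs = [
    "building a community of practice in Canadian engineering education",
    "instructors share the division of labour across design courses",
]
asee_docs = [
    "assessment instruments for first-year engineering design courses",
    "large-scale survey of engineering student outcomes",
]

vectorizer = CountVectorizer(stop_words="english")
counts = vectorizer.fit_transform(ceea_docs + asee_docs)
terms = vectorizer.get_feature_names_out()

# Sum counts separately for each corpus and show the top terms of each.
ceea_totals = counts[: len(ceea_docs)].sum(axis=0).A1
asee_totals = counts[len(ceea_docs):].sum(axis=0).A1
top = lambda totals: [terms[i] for i in totals.argsort()[::-1][:5]]
print("CEEA-ACEG top terms:", top(ceea_totals))
print("ASEE top terms:", top(asee_totals))
```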


2020 ◽  
Author(s):  
Vadim V. Korolev ◽  
Artem Mitrofanov ◽  
Kirill Karpov ◽  
Valery Tkachenko

The main advantage of modern natural language processing methods is the possibility of turning an amorphous human-readable task into a strict mathematical form. This makes it possible to extract chemical data and insights from articles and to find new semantic relations. We propose a universal engine for processing chemical and biological texts. We successfully tested it on various use cases and applied it to the case of searching for a therapeutic agent for the COVID-19 disease by analyzing the PubMed archive.
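The engine's implementation is not detailed in the abstract; as a hedged sketch of the kind of pipeline it describes, the code below queries the PubMed archive through NCBI's public E-utilities and performs a naive dictionary lookup for candidate therapeutic agents. The drug list and matching rule are toy assumptions, not the authors' method.

```python
# Hedged sketch: search PubMed via NCBI E-utilities and count mentions of
# entries from a toy drug dictionary. This is not the authors' engine; the
# drug list and matching rule are illustrative assumptions.
import requests

ESEARCH = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi"
EFETCH = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/efetch.fcgi"
DRUG_DICTIONARY = {"remdesivir", "dexamethasone", "tocilizumab"}  # toy list

# 1) Find recent PubMed IDs for a COVID-19 treatment query.
ids = requests.get(ESEARCH, params={
    "db": "pubmed", "term": "COVID-19 treatment", "retmax": 20, "retmode": "json",
}, timeout=30).json()["esearchresult"]["idlist"]

# 2) Fetch the abstracts as plain text and scan for dictionary hits.
abstracts = requests.get(EFETCH, params={
    "db": "pubmed", "id": ",".join(ids), "rettype": "abstract", "retmode": "text",
}, timeout=60).text

for drug in DRUG_DICTIONARY:
    hits = abstracts.lower().count(drug)
    print(f"{drug}: {hits} mention(s) in the retrieved abstracts")
```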

