A Survey about various Generations of Lexical Analyzer

2019 · Vol 8 (2) · pp. 50
Author(s): Zakiya Ali Nayef

Lexical analysis supports the interactivity and visualization needed for active learning, which can make difficult concepts in automata easier to grasp. This study gives a view of the different lexical analyzer generators that have been implemented for different purposes over finite automata. It also gives a general idea of the lexical analysis process, covering the automata model used across the various reviews. Concepts described include the finite automata model, regular expressions, and other related components. The advantages and disadvantages of lexical analyzers are also discussed.

2019 · Vol 8 (4) · pp. 415
Author(s): Nisreen L. Abdulnabi, Hawar B. Ahmad

Lexical analysis supports the interactivity and visualization needed for active learning, which can make difficult concepts in automata easier to grasp. This study gives an implementation of two frequently used models: an NFA for the combination of the Real and Integer data types and a DFA for the Double data type, built in JFLAP (a Java-based tool). The models are also tested in JFLAP against at least five (5) inputs that should be accepted and five (5) inputs that should be rejected. These two models are examples of the different lexical analyzer generators that have been implemented for different purposes in finite automata.
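
As a rough illustration of what such a JFLAP model encodes (the authors' actual automata are not reproduced here), the following hand-coded C++ DFA accepts integer literals and simple real literals, and is exercised in the paper's five-accept / five-reject test style:

```cpp
// Hedged sketch, not the authors' JFLAP file: a DFA accepting integer
// literals (digit+) and real literals (digit+ '.' digit+). States:
// 0 = start, 1 = integer (accepting), 2 = seen '.', 3 = real (accepting).
#include <cctype>
#include <iostream>
#include <string>

bool acceptsNumber(const std::string& s) {
    int state = 0;
    for (char c : s) {
        bool d = std::isdigit((unsigned char)c);
        switch (state) {
            case 0: state = d ? 1 : -1;                     break;
            case 1: state = d ? 1 : (c == '.' ? 2 : -1);    break;
            case 2: state = d ? 3 : -1;                     break;
            case 3: state = d ? 3 : -1;                     break;
        }
        if (state == -1) return false;   // dead state: reject early
    }
    return state == 1 || state == 3;     // accepting states
}

int main() {
    const char* accepted[] = {"0", "42", "3.14", "100.001", "7.0"};
    const char* rejected[] = {"", ".5", "5.", "1.2.3", "a1"};
    std::cout << std::boolalpha;
    for (auto s : accepted) std::cout << s << " -> " << acceptsNumber(s) << '\n';
    for (auto s : rejected) std::cout << s << " -> " << acceptsNumber(s) << '\n';
}
```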


2019 · Vol 8 (2) · pp. 119-128
Author(s): Takudzwa Fadziso

In cognitive science, understanding language by humans starts with recognition. Without this phase, understanding language becomes a very cumbersome task. The task of the lexical analyzer is to read the input characters, group them into lexemes, and produce as output a sequence of tokens. But before discussing lexical analysis further, we should have an overview of this research. Lexical analysis is best described as tokenization: converting a sequence of characters (a program) into tokens with identifiable meanings. This study aims to look at the various terms and words related to lexical structure, their purpose, and how they are applied to get the required result. Lexical analysis offers researchers an idea of the structural aspect of a computer language and its semantic content. The work also discusses the advantages and disadvantages of lexical analysis.
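
For illustration only (not from the study), a minimal C++ tokenizer showing the loop described above: characters are read, grouped into lexemes, and emitted as (token kind, lexeme) pairs. The token kinds and input are invented:

```cpp
// Minimal sketch of lexeme grouping: scan characters, extend the current
// lexeme while its character class continues, then emit a token.
#include <cctype>
#include <iostream>
#include <string>
#include <vector>

struct Token { std::string kind, lexeme; };

std::vector<Token> lex(const std::string& src) {
    std::vector<Token> out;
    std::size_t i = 0;
    while (i < src.size()) {
        char c = src[i];
        if (std::isspace((unsigned char)c)) { ++i; continue; }  // skip blanks
        std::size_t start = i;
        if (std::isalpha((unsigned char)c)) {          // identifier lexeme
            while (i < src.size() && std::isalnum((unsigned char)src[i])) ++i;
            out.push_back({"IDENT", src.substr(start, i - start)});
        } else if (std::isdigit((unsigned char)c)) {   // number lexeme
            while (i < src.size() && std::isdigit((unsigned char)src[i])) ++i;
            out.push_back({"NUMBER", src.substr(start, i - start)});
        } else {                                       // single-char symbol
            out.push_back({"SYMBOL", std::string(1, c)});
            ++i;
        }
    }
    return out;
}

int main() {
    for (const Token& t : lex("sum = sum + 10;"))
        std::cout << t.kind << " '" << t.lexeme << "'\n";
}
```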


Author(s): Daniela Glavaničová

Abstract Role realism is a promising realist theory of fictional names. Different versions of this theory have been suggested by Gregory Currie, Peter Lamarque, Stein Haugom Olsen, and Nicholas Wolterstorff. The general idea behind the approach is that fictional characters are to be analysed in terms of roles, which in turn can be understood as sets of properties (or alternatively as kinds or functions from possible worlds to individuals). I will discuss several advantages and disadvantages of this approach. I will then propose a novel hyperintensional version of role realism (which I will call impossibilism), according to which fictional names are analysed in terms of individual concepts that cannot be matched by a reference (a full-blooded individual). I will argue that this account avoids the main disadvantages of standard role realism.


2009 · Vol 2009 · pp. 1-10
Author(s): Yi-Hua E. Yang, Viktor K. Prasanna

We present a software toolchain for constructing large-scale regular expression matching (REM) on FPGA. The software automates the conversion of regular expressions into compact and high-performance nondeterministic finite automata (RE-NFA). Each RE-NFA is described as an RTL regular expression matching engine (REME) in VHDL for FPGA implementation. Assuming a fixed number of fan-out transitions per state, an n-state, m-bytes-per-cycle RE-NFA can be constructed in O(n×m) time and O(n×m) memory by our software. A large number of RE-NFAs are placed onto a two-dimensional staged pipeline, allowing scalability to thousands of RE-NFAs with linear area increase and little clock rate penalty due to scaling. On a PC with a 2 GHz Athlon64 processor and 2 GB memory, our prototype software constructs hundreds of RE-NFAs used by Snort in less than 10 seconds. We also designed a benchmark generator which can produce RE-NFAs with configurable pattern complexity parameters, including state count, state fan-in, loop-back and feed-forward distances. Several regular expressions with various complexities are used to test the performance of our RE-NFA construction software.
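
A hedged software analogue of the one-flip-flop-per-state hardware NFAs (not the authors' toolchain) is the classic shift-and algorithm: one bit per NFA state, with every state updated in parallel for each input byte. The pattern here is a plain string, i.e., a trivial regular expression:

```cpp
// Shift-and: bit-parallel NFA simulation, one bit per state, all states
// stepped at once per input byte -- the software counterpart of updating
// every state flip-flop per cycle in the FPGA REMEs. Assumes a non-empty
// pattern of at most 64 characters.
#include <cstdint>
#include <iostream>
#include <string>

bool shiftAndMatch(const std::string& pattern, const std::string& text) {
    uint64_t mask[256] = {0};                  // per-byte transition masks
    for (std::size_t i = 0; i < pattern.size(); ++i)
        mask[(unsigned char)pattern[i]] |= 1ULL << i;
    const uint64_t accept = 1ULL << (pattern.size() - 1);
    uint64_t states = 0;                       // active NFA state bits
    for (unsigned char c : text) {
        states = ((states << 1) | 1ULL) & mask[c];  // all states step at once
        if (states & accept) return true;      // accepting state reached
    }
    return false;
}

int main() {
    std::cout << shiftAndMatch("overflow", "buffer overflow attempt") << '\n'; // 1
    std::cout << shiftAndMatch("overflow", "benign traffic") << '\n';          // 0
}
```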


Author(s): A S Fedorenko, A T Burbello, M V Pokladova, M A Ivanova

The article presents possible approaches to assessing the financial costs of medicines. It describes the results of the ABC/VEN and ATC/DDD analyses recommended by the Ministry of Health of the Russian Federation and the World Health Organization (WHO) for assessing the financial costs of medicines in a large multidisciplinary hospital. An evaluation of the ABC/VEN and ATC/DDD analyses and of their advantages and disadvantages is given. It is shown that ABC/VEN analysis gives only a general idea for planning financial expenditures, while ATC/DDD analysis reflects real drug consumption in the treatment of one patient. The financial costs of treating one patient vary significantly and depend on many factors: disease nosology, severity, division profile, etc. The article determines which factors should be taken into account both in estimating the cost of medicines and in planning financial expenditures for the next year. (For citation: Fedorenko AS, Burbello AT, Pokladova MV, Ivanova MA. What factors need to be considered when assessing the financial costs of medicines. Herald of North-Western State Medical University named after I.I. Mechnikov. 2018;10(2):64-72. doi: 10.17816/mechnikov201810264-72).
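
As a sketch of the ABC step of such an analysis (the drug names, costs, and the 80%/15%/5% cut-offs below are the common textbook convention, not figures from the article):

```cpp
// ABC analysis sketch: sort items by annual cost, then assign class A to
// items within the top ~80% of cumulative spending, B up to ~95%, C beyond.
#include <algorithm>
#include <iostream>
#include <string>
#include <vector>

struct Drug { std::string name; double annualCost; };

int main() {
    std::vector<Drug> drugs = {           // invented example data
        {"drug A", 50000}, {"drug B", 30000}, {"drug C", 12000},
        {"drug D", 5000},  {"drug E", 3000},
    };
    std::sort(drugs.begin(), drugs.end(),
              [](const Drug& a, const Drug& b) { return a.annualCost > b.annualCost; });
    double total = 0;
    for (const Drug& d : drugs) total += d.annualCost;
    double running = 0;
    for (const Drug& d : drugs) {
        running += d.annualCost;
        double share = running / total;   // cumulative cost share
        char cls = share <= 0.80 ? 'A' : (share <= 0.95 ? 'B' : 'C');
        std::cout << d.name << ": class " << cls << '\n';
    }
}
```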


Pattern matching plays a key role in various packet payload inspection applications, such as intrusion detection, which is used to identify malware content in network systems. Various algorithms and tools have been developed to improve the time and space complexities of matching regex rules and thereby enable deep packet inspection at line rate. In this paper, a novel acceleration scheme is presented to resolve the speed and space inefficiencies of conventional automata: a DFA variant called multi-stride finite automata that processes more than one byte per transition, improving the overall performance of not only pattern matching but also string matching.
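
A minimal sketch of the multi-stride idea (not the paper's construction): given a one-byte-per-step DFA, a two-stride transition table can be precomputed by composing transitions, so each step consumes two input symbols at once:

```cpp
// Two-stride DFA sketch: delta2[q][(a,b)] = delta1[delta1[q][a]][b], so
// one table lookup advances the automaton by two symbols.
#include <iostream>
#include <vector>

int main() {
    const int S = 3, A = 2;          // toy DFA: 3 states, alphabet {0,1}
    // delta1[state][symbol]: accepts inputs containing the substring "11";
    // state 2 is accepting and absorbing.
    int delta1[S][A] = {{0, 1}, {0, 2}, {2, 2}};
    // Compose transitions into the 2-stride table.
    std::vector<std::vector<int>> delta2(S, std::vector<int>(A * A));
    for (int q = 0; q < S; ++q)
        for (int a = 0; a < A; ++a)
            for (int b = 0; b < A; ++b)
                delta2[q][a * A + b] = delta1[delta1[q][a]][b];
    // Run two symbols per step: input "0111" -> pairs (0,1), (1,1).
    int input[] = {0, 1, 1, 1}, q = 0;
    for (int i = 0; i + 1 < 4; i += 2)
        q = delta2[q][input[i] * A + input[i + 1]];
    std::cout << "final state " << q << " (accepting: " << (q == 2) << ")\n";
}
```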


2018 · Vol 8 (1) · pp. 68-82
Author(s): Swagat Kumar Jena, Satyabrata Das, Satya Prakash Sahoo

The future of computing is rapidly moving towards massively multi-core architectures because of their power and cost advantages. Multi-core processors are now used almost everywhere, and the number of cores per chip keeps increasing. To exploit the full potential offered by multi-core architectures, system software such as compilers should be designed for parallelized execution. In the past, significant work has been done to change the design of the traditional compiler to take advantage of future multi-core platforms. This paper focuses on adapting parallelism in the lexical analysis phase of the compilation process. The main objective of our proposal is to perform lexical analysis, i.e., finding the tokens in an input stream, in parallel. We use the parallel constructs available in OpenMP to achieve parallelism in the lexical analysis process on multi-core machines; a minimal sketch of this style of approach follows below. Our experimental results show a significant performance improvement of the parallel lexical analysis phase over the sequential version in terms of execution time.
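
The sketch below assumes details not given in the abstract: the input is split at whitespace so no lexeme straddles a chunk boundary, the chunks are lexed concurrently with an OpenMP parallel for, and the per-chunk token streams are concatenated in chunk order:

```cpp
// Hedged sketch of chunk-parallel lexing with OpenMP (compile with
// g++ -fopenmp). The lexer is deliberately trivial: split on spaces.
#include <algorithm>
#include <cctype>
#include <iostream>
#include <string>
#include <vector>

std::vector<std::string> lexChunk(const std::string& s) {
    std::vector<std::string> toks;
    std::string cur;
    for (char c : s) {
        if (std::isspace((unsigned char)c)) {
            if (!cur.empty()) { toks.push_back(cur); cur.clear(); }
        } else cur += c;
    }
    if (!cur.empty()) toks.push_back(cur);
    return toks;
}

int main() {
    std::string src = "int x = 1 ; int y = x + 2 ; print ( y ) ;";
    const int nChunks = 4;
    std::vector<std::string> chunks(nChunks);
    std::size_t begin = 0;
    for (int i = 0; i < nChunks; ++i) {       // cut only at whitespace
        std::size_t end = src.size();
        if (i < nChunks - 1) {
            std::size_t cut = std::max(begin, (i + 1) * src.size() / nChunks);
            end = src.find(' ', cut);
            if (end == std::string::npos) end = src.size();
        }
        chunks[i] = src.substr(begin, end - begin);
        begin = end;
    }
    std::vector<std::vector<std::string>> results(nChunks);
    #pragma omp parallel for                   // lex all chunks concurrently
    for (int i = 0; i < nChunks; ++i)
        results[i] = lexChunk(chunks[i]);
    for (const auto& r : results)              // merge in original order
        for (const auto& t : r) std::cout << t << '\n';
}
```

Splitting at whitespace sidesteps the hard part of parallel lexing (a lexeme spanning two chunks); a real implementation would need a boundary-repair step or speculative re-scan.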


Author(s): Irina Marshakova-Shaikevich

This chapter is devoted to directions in algorithmic classificatory procedures: co-citation analysis, as an example of citation-network analysis, and lexical analysis of keywords in titles. The chapter gives the results of a bibliometric analysis of the international scientific collaboration of EU countries. The three approaches are based on the same general idea: normalizing the deviation of the observed data from its mathematical expectation. Applying the same formula leads to the discovery of statistically significant links between objects (publications, journals, keywords, etc.), which are reflected in the maps. Material for this analysis is drawn from databases of ISI Thomson Reuters (at present Clarivate Analytics).
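
As a hedged illustration of that shared idea (the chapter's exact formula is not reproduced here), one common normalization of this kind scores an observed co-occurrence count against its expectation under independence:

```cpp
// Sketch: z-like normalization of an observed keyword co-occurrence count
// against its mathematical expectation. All counts are invented.
#include <cmath>
#include <iostream>

int main() {
    // Two keywords appear in nA and nB titles out of N total titles;
    // 'observed' is how often they appear together.
    double N = 10000, nA = 120, nB = 300, observed = 25;
    double expected = nA * nB / N;                    // independence baseline
    double z = (observed - expected) / std::sqrt(expected);
    std::cout << "expected " << expected << ", z " << z << '\n';
    if (z > 2.0) std::cout << "link is statistically significant\n";
}
```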

