English Myanmar Dictionary System by Using Linear and Binary Search Techniques

Author(s):  
Myo Ma Ma ◽  
Yin Myo KKhine Thaw ◽  
Lai Lai Yee

This paper aims to develop a searching method based on binary search and linear search, as well as to compare the behaviour of the two search methods. The system searches for a desired word in English-to-English and English-to-Myanmar modes, and may help English-language users find the English and Myanmar meanings of a desired word. The system outputs the searched word's English meaning, Myanmar meaning, part of speech, search time, and number of steps. It also resolves cross references and handles the user's unknown words using the binary search and linear search algorithms. The system is implemented on the ASP.NET platform.
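As a rough sketch of the two lookup strategies the system compares, the following Python returns a dictionary entry together with the step count each method reports. The tiny word list and field names are illustrative assumptions, not the authors' actual data or code:

```python
def linear_search(entries, word):
    """Scan entries in order; return (entry, steps) or (None, steps)."""
    for steps, entry in enumerate(entries, start=1):
        if entry["word"] == word:
            return entry, steps
    return None, len(entries)

def binary_search(entries, word):
    """Entries must be sorted by 'word'; return (entry, steps)."""
    lo, hi, steps = 0, len(entries) - 1, 0
    while lo <= hi:
        steps += 1
        mid = (lo + hi) // 2
        if entries[mid]["word"] == word:
            return entries[mid], steps
        if entries[mid]["word"] < word:
            lo = mid + 1
        else:
            hi = mid - 1
    return None, steps

# Toy English-to-English entries; a real system would also carry the
# Myanmar meaning in each record.
entries = sorted(
    [
        {"word": "cat", "pos": "noun", "meaning": "a small domestic animal"},
        {"word": "run", "pos": "verb", "meaning": "move quickly on foot"},
        {"word": "book", "pos": "noun", "meaning": "a set of printed pages"},
        {"word": "go", "pos": "verb", "meaning": "move from one place to another"},
    ],
    key=lambda e: e["word"],
)

hit, steps = binary_search(entries, "go")
```

On a sorted list of n entries, binary search reports at most about log₂ n steps while linear search can report up to n, which is presumably what the system's "searching time and step" output makes visible to the user.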

2018 ◽  
Vol 28 (7) ◽  
pp. 2245-2249
Author(s):  
Suzana Ejupi ◽  
Lindita Skenderi

Working with English learners for many years gives you the opportunity to encounter the linguistic obstacles they face while learning English as a foreign language. Additionally, teaching for 13 years and observing the learning process enables you to recognize students' needs and, at the same time, detect the linguistic mistakes they make while practicing the target language. During my experience as a teacher, in terms of teaching and learning verbs in general and their grammatical categories in particular, I have noticed that Albanian learners find the correct use of verbs in context relatively difficult, and the equivalent use of verbs in Albanian even more confusing. Since verbs are an important part of speech, this study aims to investigate several differences and similarities between the grammatical categories of verbs in English and Albanian. As a result, Albanian learners of English will be able to identify some of the major differences and similarities between the grammatical categories of verbs in English and Albanian, overcome common mistakes, gain the necessary knowledge regarding verbs, and use them properly in English and Albanian.


Electronics ◽  
2020 ◽  
Vol 9 (10) ◽  
pp. 1686 ◽  
Author(s):  
Nancy Agarwal ◽  
Mudasir Ahmad Wani ◽  
Patrick Bours

This work focuses on designing a grammar detection system that understands both the structural and contextual information of sentences in order to validate whether English sentences are grammatically correct. Most existing systems model a grammar detector by translating sentences into sequences of either the words appearing in the sentences or syntactic tags holding the grammatical knowledge of the sentences. In this paper, we show that both sequencing approaches have limitations. The former model is over-specific, whereas the latter is over-generalized, which in turn affects the performance of the grammar classifier. Therefore, the paper proposes a new sequencing approach that contains both the linguistic and the syntactic information of a sentence. We call this sequence a Lex-Pos sequence. The main objective of the paper is to demonstrate that the proposed Lex-Pos sequence can capture both the specific nature of the linguistic words (i.e., lexicals) and the generic structural characteristics of a sentence via Part-Of-Speech (POS) tags, and so can lead to a significant improvement in detecting grammar errors. Furthermore, the paper proposes a new vector representation technique, Word Embedding One-Hot Encoding (WEOE), to transform the Lex-Pos sequence into mathematical values. The paper also introduces a new error induction technique to artificially generate POS-tag-specific incorrect sentences for training. The classifier is trained using two corpora of incorrect sentences, one with general errors and another with POS-tag-specific errors. A Long Short-Term Memory (LSTM) neural network architecture has been employed to build the grammar classifier. The study conducts nine experiments to validate the strength of the Lex-Pos sequences. The Lex-Pos-based models are observed to be superior in two ways: (1) they give more accurate predictions; and (2) they are more stable, as smaller accuracy drops have been recorded from training to testing. To further prove the potential of the proposed Lex-Pos-based model, we compare it with some well-known existing studies.


The software development procedure begins with requirements analysis. The process starts with analysing the requirements and proceeds to sketching the design of the program, which is critical work for programmers and software engineers. Moreover, many errors made during the requirements analysis phase propagate to later stages, which makes the process far more costly than initially specified. The reason is that software requirement specifications are written in natural language. To minimize these errors, the software requirements can be transferred into a computerized form as UML diagrams. To this end, a tool has been designed that provides semi-automated aid for designers, producing a UML class model from software specifications using Natural Language Processing techniques. The proposed technique outlines the class diagram in a well-known configuration and also points out the relationships between classes. In this research, we propose to enhance the procedure of producing UML diagrams from natural language, which helps software developers analyze software requirements with fewer errors and in a more efficient way. The proposed approach uses a parser and a Part of Speech (POS) tagger to analyze the user requirements entered in English, and then extracts the verbs, phrases, etc. from the user text. The obtained results show that the proposed method performs better than other methods published in the literature: it gives a better analysis of the given requirements and a better diagram presentation, which can help software engineers. Key words: Part of Speech, UML
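The POS-driven extraction step described above can be illustrated as follows: tag a requirement sentence and treat nouns as candidate classes and verbs as candidate relationships. The tiny lexicon-based tagger here stands in for a real parser/POS tagger, and all names are hypothetical:

```python
# Minimal POS lexicon; a real system would use a trained tagger instead.
LEXICON = {"customer": "NOUN", "places": "VERB", "order": "NOUN",
           "contains": "VERB", "product": "NOUN", "a": "DET",
           "an": "DET", "the": "DET"}

def extract_candidates(requirement):
    """Return (candidate class names, candidate relationship verbs)."""
    tags = [(w, LEXICON.get(w, "OTHER"))
            for w in requirement.lower().rstrip(".").split()]
    classes = [w.capitalize() for w, t in tags if t == "NOUN"]
    relations = [w for w, t in tags if t == "VERB"]
    return classes, relations

classes, relations = extract_candidates("A customer places an order.")
```

From "A customer places an order." this yields Customer and Order as candidate classes linked by "places", which is the kind of raw material a class-diagram generator would then refine.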


2009 ◽  
pp. 1595-1607
Author(s):  
Guohong Fu ◽  
Kang-Kwong Luke

This article presents a lexicalized HMM-based approach to Chinese part-of-speech (POS) disambiguation and unknown word guessing (UWG). In order to explore word-internal morphological features for Chinese POS tagging, four types of pattern tags are defined to indicate the way lexicon words are used in a segmented sentence. Such patterns are further combined with POS tags. Thus, Chinese POS disambiguation and UWG can be unified as a single task of assigning each word in the input a proper hybrid tag. Furthermore, a uniformly lexicalized HMM-based tagger is developed to perform this task, which can incorporate both internal word-formation patterns and surrounding contextual information for Chinese POS tagging under the framework of HMMs. Experiments on the Peking University Corpus indicate that tagging precision can be improved efficiently by the proposed approach.
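The hybrid-tag idea can be sketched with a toy HMM: each state is a hybrid tag combining a word-internal pattern with a POS tag, and Viterbi decoding picks the best tag sequence. The pattern/POS labels and all probabilities below are made up for illustration, not the paper's actual model or parameters:

```python
import math

# Hybrid tags: word-formation pattern + POS (here "W" = whole word,
# "n" = noun, "v" = verb). All numbers are illustrative.
STATES = ["W-n", "W-v"]
START = {"W-n": 0.6, "W-v": 0.4}
TRANS = {"W-n": {"W-n": 0.3, "W-v": 0.7},
         "W-v": {"W-n": 0.8, "W-v": 0.2}}
EMIT = {"W-n": {"book": 0.7, "run": 0.3},
        "W-v": {"book": 0.2, "run": 0.8}}

def viterbi(words):
    """Return the most likely hybrid-tag sequence for the words."""
    v = [{s: math.log(START[s] * EMIT[s][words[0]]) for s in STATES}]
    back = []
    for w in words[1:]:
        scores, ptr = {}, {}
        for s in STATES:
            best = max(STATES, key=lambda p: v[-1][p] + math.log(TRANS[p][s]))
            scores[s] = v[-1][best] + math.log(TRANS[best][s] * EMIT[s][w])
            ptr[s] = best
        v.append(scores)
        back.append(ptr)
    last = max(STATES, key=lambda s: v[-1][s])
    path = [last]
    for ptr in reversed(back):
        path.append(ptr[path[-1]])
    return list(reversed(path))

tags = viterbi(["run", "book"])
```

In the lexicalized setting described by the authors, the emission and transition distributions would additionally condition on the lexical words themselves, not only on the hybrid tags.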


2020 ◽  
Vol 26 (1) ◽  
pp. 66-72
Author(s):  
DAN GEORGE PRUTICA ◽  
GHEORGHE BRABIE ◽  
BOGDAN CHIRITA

Optimization of cutting parameters in machining operations is a complex task requiring extensive knowledge and experience to reach the maximum potential for cost reductions in manufacturing. Through the work presented in this paper, a cutting parameter optimization algorithm for roughing and finishing turning has been realized. The proposed optimization algorithm is based on a combination of linear search and binary search methods, having as its objective criterion the minimization of the machining time. Examples for roughing and finishing turning are presented to illustrate the application of the proposed algorithm and analyse the results.
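A common pattern behind such algorithms is that machining time falls as a parameter (e.g., feed rate) rises, but the parameter is capped by a machine constraint, so a binary search can find the largest feasible value. The power model, limits, and names below are illustrative assumptions, not the authors' actual formulation:

```python
def power_required(feed_mm_rev, depth_mm=2.0, speed_m_min=150.0, kc=2000.0):
    """Toy cutting-power model (kW): P = kc * ap * f * vc / 60000."""
    return kc * depth_mm * feed_mm_rev * speed_m_min / 60000.0

def max_feasible_feed(p_max_kw, lo=0.05, hi=0.6, tol=1e-4):
    """Binary-search the largest feed whose power stays under p_max_kw.

    Since power grows monotonically with feed, feasibility is monotone
    and bisection converges to the constraint boundary.
    """
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if power_required(mid) <= p_max_kw:
            lo = mid          # feasible: try a higher feed
        else:
            hi = mid          # infeasible: back off
    return lo

feed = max_feasible_feed(p_max_kw=2.5)
```

Because machining time is inversely related to feed, the largest feasible feed minimizes time under this single constraint; a full algorithm would repeat this over several parameters and constraints, combining linear and binary search as the paper describes.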


Author(s):  
M. Vanyagina

The article deals with the efficiency of applying mnemonics to studying the English language. Mnemonics is one of the active training technologies which rely on lexical and semantic connections and associative thinking. Since ancient times scientists have studied the properties of memory and offered ways of simplifying the process of storing information by means of various techniques. Modern psychologists and teachers agree that coding information by means of images and associations accelerates the process of storing it. The article considers various methods of mnemonics that help to perceive and reproduce necessary educational information, including foreign words and phrases: associations, mnemo-rhymes, the chain method, and mnemo-cards. Examples of applying mnemonics in the author's practice of teaching a foreign language to cadets and post-graduate military students are given. A pedagogical experiment on applying mnemo-cards in teaching English to cadets of higher military educational organizations in order to intensify the educational process is described. A mnemo-card is a structured schematic visualization of the basic elements of information on paper or via an electronic medium. The word, the name of its part of speech, a transcription, variants of translation, a picture associated with the word, and a mnemo-rhyme are placed on it. Mnemo-cards are repeatedly shown and pronounced during lessons and self-study. As a result of the pedagogical experiment, the groups of cadets using mnemo-cards showed a higher percentage of retention of English-language lexis, which allows us to draw a conclusion about the expediency and efficiency of the proposed technique.


Author(s):  
Boichuk M.I.

The article outlines the concept of “conversion”, which is defined as an affixless, derivational way of word formation in which a new word formed from another part of speech does not acquire any external word-forming rearrangement. The concept of “word formation” has also been analyzed, and the phonetic component of compounds of religious vocabulary has been characterized. A structural classification has been distinguished taking into account the structure of compounds. It has been found that within the layer of religious vocabulary, derivational connections of conversion occur between two, three, or more words, and the main directions of this process have been identified. Five main models of conversion of lexical units of the religious sphere have been determined: Noun – Verb, which is further divided into three categories, Verb – Noun, Adjective – Noun, Noun – Adjective, and Adjective – Verb. The process of substantivization of religious vocabulary as a variant of conversion has also been analyzed. By substantivization we understand the process of changing the paradigm of the base word and its part of speech. Analysis of religious vocabulary shows that the transition is from adjectives to nouns: the former acquires the characteristic features of the latter. The article presents an analysis of religious vocabulary based on O. O. Azarov's “Comprehensive English-Russian Dictionary of Religious Terminology”, which allows us to identify the following productive models of word formation of religious vocabulary in English: Noun + Noun, Noun + Participle, Adjective + Noun, Noun + Preposition + Noun, Participle + Noun, Pronoun + Noun, Adjective + Participle. These models are most actively involved in the creation of religious vocabulary in English, as they have the largest number of words in their structure.
Compounds of religious lexis are divided into root compounds and compound derivatives, whose structural integrity allows them to be distinguished from phrases. Considering the components of compound words, the main element can be either the first or the second part. According to the relationship between the components, compounds are divided into endocentric and exocentric types. The former is expressed by a compound word whose meaning is derived from the sum of the meanings of the compound's components; the latter includes complex words whose meaning is not determined by any of their constituent elements. Within the layer of religious vocabulary of the English language we distinguish the following endocentric models: Adj + N = N, V + N = N, Part I + N = N, Ger + N = N, N + N = N, and the following exocentric models: Participle + N = Adj, N + Pro. = Adj, V + Prep. = N, Adv + Participle = Adj. Key words: compounding, endocentric and exocentric compound words, substantivization, conversion.


2019 ◽  
Vol 8 (4) ◽  
pp. 2684-2686

This paper is based on an approach to implementing binary search in a linked list. Binary search is a divide-and-conquer approach for searching for an element in a sorted list. Binary search can be done in a linked list, but it has time complexity O(n), the same as linear search, which makes binary search inefficient to use in a linked list. Binary search takes O(n) time in a linked list because a linked list does not support indexing, so reaching an element requires an O(n) traversal. In this paper a method is implemented through which binary search can be done with time complexity O(log₂n). This is done with the help of an auxiliary array. The auxiliary array provides indexing of the linked list, through which one can reach a node in O(1), hence reducing the complexity of binary search to O(log₂n) and increasing the efficiency of binary search in linked lists.
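The auxiliary-array idea can be sketched as follows: one O(n) pass collects node references into an array, giving O(1) access to the i-th node so that the subsequent binary search runs in O(log n). Class and function names are illustrative, not the paper's actual code:

```python
class Node:
    """Singly linked list node."""
    def __init__(self, value, nxt=None):
        self.value = value
        self.next = nxt

def build_index(head):
    """Single O(n) traversal collecting node references for O(1) indexing."""
    index = []
    while head is not None:
        index.append(head)
        head = head.next
    return index

def binary_search_list(index, target):
    """Standard binary search over the auxiliary array of nodes."""
    lo, hi = 0, len(index) - 1
    while lo <= hi:
        mid = (lo + hi) // 2
        if index[mid].value == target:
            return mid                 # position of the matching node
        if index[mid].value < target:
            lo = mid + 1
        else:
            hi = mid - 1
    return -1

# Sorted linked list 1 -> 3 -> 5 -> 7 -> 9, built by prepending.
head = None
for v in (9, 7, 5, 3, 1):
    head = Node(v, head)

idx = build_index(head)
pos = binary_search_list(idx, 7)
```

Note the trade-off: building the index costs O(n) time and O(n) extra space, so the approach pays off when many searches are performed against the same list.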


2019 ◽  
Vol 41 (1) ◽  
pp. 109-120 ◽  
Author(s):  
Sorayya Kheirouri ◽  
Mohammad Alizadeh

Abstract Nutrition and diet have been suggested to enhance or inhibit cognitive performance and to affect the risk of several neurodegenerative diseases. We conducted a systematic review to elucidate the relationship between the inflammatory capacity of a person's diet and the risk of incident neurodegenerative diseases. We searched major medical databases for articles published through June 30, 2018. Original, full-text, English-language articles on studies with human participants that investigated the link between dietary inflammatory potential and the risk of developing neurodegenerative diseases were included. Duplicate and irrelevant studies were removed, and data were compiled through critical analysis. Initially, 457 articles were collected via the search method, of which 196 studies remained after removal of duplicates. Fourteen articles were screened and found to be relevant to the scope of the review. After critical analysis, 10 were included in the final review. In all studies but one, a higher dietary inflammatory index (DII) was related to a higher risk of developing neurodegenerative disease symptoms, including memory and cognition decline and multiple sclerosis. Of 3 studies that assessed the association of DII with levels of circulating inflammation markers, 2 indicated that DII was positively correlated with inflammatory marker levels. Low literacy, an unhealthy lifestyle, and individual nutritional status were the factors involved in a diet with inflammatory potential. These findings enhance confidence that the DII is an appropriate tool for measuring dietary inflammatory potential and validate the role of diets with inflammatory potential in the pathophysiology of neurodegenerative diseases. DII may be correlated with levels of circulating inflammatory markers.

