language question
Recently Published Documents


TOTAL DOCUMENTS

278
(FIVE YEARS 68)

H-INDEX

14
(FIVE YEARS 3)

2021 ◽  
Vol 12 (1) ◽  
pp. 369
Author(s):  
Da Ma ◽  
Xingyu Chen ◽  
Ruisheng Cao ◽  
Zhi Chen ◽  
Lu Chen ◽  
...  

Generating natural language descriptions for a structured representation (e.g., a graph) is an important yet challenging task. In this work, we focus on SQL-to-text, the task of mapping a SQL query into the corresponding natural language question. Previous work represents SQL as a sparse graph and utilizes a graph-to-sequence model to generate questions, where each node can only communicate with its k-hop neighbors. Such a model degenerates when adapted to more complex SQL queries, due to its inability to capture long-term dependencies and its lack of SQL-specific relations. To tackle this problem, we propose a relation-aware graph transformer (RGT) that considers both the SQL structure and various relations simultaneously. Specifically, an abstract syntax tree is constructed for each SQL query to provide the underlying relations. We also customize self-attention and cross-attention strategies to encode the relations in the SQL tree. Experiments on the WikiSQL and Spider benchmarks demonstrate that our approach yields improvements over strong baselines.
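As a rough illustration (not the authors' code), relation-aware self-attention of the kind the abstract describes can be sketched as follows; the tensor shapes and the exact placement of the relation embedding in the score and output are assumptions:

```python
import numpy as np

def relation_aware_attention(X, R, Wq, Wk, Wv):
    """One relation-aware self-attention head (sketch).

    X  : (n, d)    node embeddings for the SQL syntax-tree nodes
    R  : (n, n, d) learned embedding of the relation between nodes i and j
    Wq, Wk, Wv : (d, d) projection matrices

    The relation embedding is added to the key (and value) before the dot
    product, so attention scores depend on the pairwise relation between
    nodes, not only on node content.
    """
    n, d = X.shape
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    # scores[i, j] = q_i . (k_j + r_ij) / sqrt(d)
    scores = np.einsum("id,ijd->ij", Q, K[None, :, :] + R) / np.sqrt(d)
    # row-wise softmax over j
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    # output mixes values, also biased by the relation embedding
    return np.einsum("ij,ijd->id", weights, V[None, :, :] + R)
```

With R set to all zeros this reduces to ordinary scaled dot-product self-attention, which is one way to see what the relation terms add.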


2021 ◽  
Author(s):  
Wicharn Rueangkhajorn ◽  
Jonathan H. Chan

Nowadays, question answering is one of the most challenging applications in the natural language processing domain. Plenty of English-language question answering models are distributed on model-sharing websites such as the Hugging Face hub; for the Thai language, in contrast, only a few such models are available. We therefore decided to fine-tune a multilingual question answering model to a specific language, namely Thai. The dataset we use for training is a Thai Wikipedia dataset from iApp Technology. We fine-tuned two multilingual models, and we also created another dataset to evaluate the adaptability of the models. The results were satisfactory: both fine-tuned models outperform their base models on the evaluation score. We have published the question answering models to the Hugging Face hub so that others can use them in later applications.
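At inference time, extractive question answering models of the kind described here predict a start logit and an end logit for every token of the context; the answer is the span maximizing their sum. A minimal sketch of that decoding step (not any specific model's code; the example logits are made up):

```python
import numpy as np

def best_answer_span(start_logits, end_logits, max_len=30):
    """Pick the (start, end) token span maximizing
    start_logits[start] + end_logits[end], subject to start <= end and a
    maximum answer length. This is the standard decoding step for
    extractive QA models."""
    best, best_score = (0, 0), -np.inf
    for s, s_logit in enumerate(start_logits):
        for e in range(s, min(s + max_len, len(end_logits))):
            score = s_logit + end_logits[e]
            if score > best_score:
                best_score, best = score, (s, e)
    return best

start = np.array([0.1, 2.0, 0.3, 0.0])
end   = np.array([0.0, 0.2, 1.5, 0.4])
print(best_answer_span(start, end))  # (1, 2): tokens 1..2 form the answer
```

The `max_len` constraint prevents the decoder from selecting implausibly long answers when a late token happens to have a high end logit.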


2021 ◽  
Author(s):  
Patrick Studer ◽  
Aisha Siddiqa

This chapter reviews the current discourses surrounding English in higher education, focusing on the impact Englishization has had on education and language policy-planning in Switzerland. While English is in direct competition with national languages at the obligatory school levels, and the debate about the status of English is evident in national language policymaking, higher education institutes (henceforth HEIs) have taken a pragmatic approach, broadening their educational offerings to include English-medium courses and programmes at all levels. Taking legal, strategy and policy documents as its basis, this chapter discusses themes that impact thinking about language in higher education in a small multilingual nation and reviews how the language question has been addressed by policymakers at the national and institutional levels.


2021 ◽  
Vol 2021 ◽  
pp. 1-9
Author(s):  
Xu Zhang ◽  
DeZhi Han ◽  
Chin-Chen Chang

Visual question answering (VQA) is natural language question answering over visual images. A VQA model must produce an answer to a specific question based on its understanding of an image, and the most important requirement is understanding the relationship between images and language. Therefore, this paper proposes a new model, the Representation of Dense Multimodality Fusion Encoder based on Transformer (RDMMFET for short), which can learn the related knowledge between vision and language. The RDMMFET model consists of three parts: a dense language encoder, an image encoder, and a multimodality fusion encoder. In addition, we designed three types of pretraining tasks: a masked language model, a masked image model, and a multimodality fusion task. These pretraining tasks help the model learn the fine-grained alignment between text and image regions. Simulation results on the VQA v2.0 dataset show that the RDMMFET model outperforms previous models. Finally, we conducted detailed ablation studies on the RDMMFET model and provide attention-visualization results, which show that the RDMMFET model can significantly improve the effect of VQA.
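The masked-language-model pretraining task mentioned above is, in the standard BERT-style formulation, a corruption-and-recovery scheme. A hedged sketch (the toy vocabulary and probabilities follow the common 80/10/10 convention, which this paper may or may not use exactly):

```python
import random

MASK = "[MASK]"
VOCAB = ["cat", "dog", "sat", "mat", "the"]  # toy vocabulary for the example

def mask_tokens(tokens, mask_prob=0.15, rng=None):
    """BERT-style masking for masked-language-model pretraining (sketch).

    Each token is selected with probability mask_prob; a selected token is
    replaced by [MASK] 80% of the time, by a random vocabulary token 10% of
    the time, and kept unchanged 10% of the time. The model is trained to
    recover the original token at every selected position; unselected
    positions get a None label and are ignored by the loss."""
    rng = rng or random.Random(0)
    inputs, labels = [], []
    for tok in tokens:
        if rng.random() < mask_prob:
            labels.append(tok)           # this position contributes to the loss
            r = rng.random()
            if r < 0.8:
                inputs.append(MASK)
            elif r < 0.9:
                inputs.append(rng.choice(VOCAB))
            else:
                inputs.append(tok)
        else:
            labels.append(None)          # ignored by the loss
            inputs.append(tok)
    return inputs, labels
```

The masked image model task is analogous, with image-region features corrupted and reconstructed instead of tokens.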


Author(s):  
Xinmeng Li ◽  
Mamoun Alazab ◽  
Qian Li ◽  
Keping Yu ◽  
Quanjun Yin

Knowledge graph question answering is an important technology in intelligent human–robot interaction; it aims to automatically answer a human's natural language question over a given knowledge graph. For multi-relation questions, which show greater variety and complexity, the tokens of the question have different priorities for triple selection at each reasoning step. Most existing models take the question as a whole and ignore this priority information. To solve this problem, we propose a question-aware memory network for multi-hop question answering, named QA2MN, which updates the attention over the question at each step of the reasoning process. In addition, we incorporate graph context information into a knowledge graph embedding model to increase its ability to represent entities and relations; we use it to initialize the QA2MN model and fine-tune it during training. We evaluate QA2MN on PathQuestion and WorldCup2014, two representative datasets for complex multi-hop question answering. The results demonstrate that QA2MN achieves state-of-the-art Hits@1 accuracy on both datasets, which validates the effectiveness of our model.


Author(s):  
Stessi Athini

Marinos Papadopoulos Vretos (Corfu, 1828–Paris, 1871) represents a remarkable case of a conscious cultural mediator between Greece and France, during a critical time (1850–1870). Through a variety of print media (Greek, French or bilingual), he sought to inform the French-language public about the cultural identity of modern Greeks and to confute the distorted image provided by travel literature. Thanks to his excellent education in French, he managed to penetrate the French press, writing about Greek issues. He mobilised around him a network of French philhellenes, Hellenists and journalists who rebroadcasted his positions. Through his Greek-language Εθνικόν Ημερολόγιον [National Almanac], he ‘coordinated’ an important discussion on the language question, preparing the road for the foundation of the Association pour l’encouragement des études grecques.


2021 ◽  
pp. 255-266
Author(s):  
Michael Llewellyn-Smith

Another important debate in the constitutional revision concerned the Greek language: Venizelos's aim was to find, in time, a single national language fit for all purposes. By 1911 the disputed status of katharevousa (the purist form of Greek used in education, the public services, legislation, etc.) and dimotiki (demotic Greek, the language of poetry and much literature) had become an acute issue. It was brought into the constitutional debate by the so-called language defenders, who opposed the so-called 'hairy ones', the proponents of extreme forms of demotic, and wished to entrench katharevousa as the official language of the state. The debate spread as much heat as light. Venizelos was sympathetic to demotic Greek but used katharevousa in official contexts. His speech set out the issues well. He accepted that the language of Holy Writ should be protected by the constitution. He was forced to disappoint some of the demoticists, his natural allies, by accepting a clause in the new constitution stating that the official language of the state was the language of the constitution itself and of legislation. The passion aroused in these debates derived from the integral connection of the national language with Greek national identity.

