High-Level Language
Recently Published Documents


TOTAL DOCUMENTS: 418 (five years: 19)

H-INDEX: 26 (five years: 1)

2021 · Author(s): Md Rafid Muttaki, Roshanak Mohammadivojdan, Mark Tehranipoor, Farimah Farahmandi

2021 · Vol 5 (OOPSLA) · pp. 1-30 · Author(s): Haoran Xu, Fredrik Kjolstad

Fast compilation is important when compilation occurs at runtime, such as query compilers in modern database systems and WebAssembly virtual machines in modern browsers. We present copy-and-patch, an extremely fast compilation technique that also produces good quality code. It is capable of lowering both high-level languages and low-level bytecode programs to binary code, by stitching together code from a large library of binary implementation variants. We call these binary implementations stencils because they have holes where missing values must be inserted during code generation. We show how to construct a stencil library and describe the copy-and-patch algorithm that generates optimized binary code. We demonstrate two use cases of copy-and-patch: a compiler for a high-level C-like language intended for metaprogramming and a compiler for WebAssembly. Our high-level language compiler has negligible compilation cost: it produces code from an AST in less time than it takes to construct the AST. We have implemented an SQL database query compiler on top of this metaprogramming system and show that on TPC-H database benchmarks, copy-and-patch generates code two orders of magnitude faster than LLVM -O0 and three orders of magnitude faster than higher optimization levels. The generated code runs an order of magnitude faster than interpretation and 14% faster than LLVM -O0. Our WebAssembly compiler generates code 4.9X-6.5X faster than Liftoff, the WebAssembly baseline compiler in Google Chrome. The generated code also outperforms Liftoff's by 39%-63% on the Coremark and PolyBenchC WebAssembly benchmarks.
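Conceptually, a copy-and-patch stencil is a pre-compiled fragment of machine code containing placeholder bytes (holes) that the code generator overwrites with concrete values or addresses before execution. The sketch below is a minimal Python illustration of that idea for x86-64 Linux, using a single hand-encoded stencil (mov eax, imm32; ret); it is not the authors' system, which draws its stencils from a large pre-built library and locates the holes via linker relocations.

    import ctypes
    import mmap
    import struct

    # Hand-written x86-64 stencil: mov eax, <imm32>; ret
    # The four zero bytes are the hole to be patched at code-generation time.
    STENCIL = bytes([0xB8, 0x00, 0x00, 0x00, 0x00, 0xC3])
    HOLE_OFFSET = 1  # the imm32 operand of mov starts one byte in

    def copy_and_patch(value: int):
        """Copy the stencil into executable memory and patch its hole."""
        code = bytearray(STENCIL)
        code[HOLE_OFFSET:HOLE_OFFSET + 4] = struct.pack("<i", value)

        # Anonymous mapping that is both writable and executable
        # (requires an OS configuration that permits W+X pages).
        buf = mmap.mmap(-1, len(code),
                        prot=mmap.PROT_READ | mmap.PROT_WRITE | mmap.PROT_EXEC)
        buf.write(bytes(code))

        addr = ctypes.addressof(ctypes.c_char.from_buffer(buf))
        fn = ctypes.CFUNCTYPE(ctypes.c_int)(addr)
        return fn, buf  # keep buf referenced so the mapping stays alive

    if __name__ == "__main__":
        fn, _mapping = copy_and_patch(42)
        print(fn())  # prints 42; no compiler ran at code-generation time

In the full technique, many such stencils, generated ahead of time for expression and statement forms, are stitched together, so code generation reduces to memory copies plus a handful of patches, which is what makes it so fast.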


2021 · Author(s): Leonardo Kaplan, Roberto Ierusalimschy

2021 · Author(s): Tamar I Regev, Josef Affourtit, Xuanyi Chen, Abigail E Schipper, Leon Bergen, ...

A network of left frontal and temporal brain regions supports 'high-level' language processing (including the processing of word meanings as well as word-combinatorial processing) across presentation modalities. This 'core' language network has been argued to store our knowledge of words and constructions as well as constraints on how those combine to form sentences. However, our linguistic knowledge additionally includes information about sounds (phonemes) and how they combine to form clusters, syllables, and words. Is this knowledge of phoneme combinatorics also represented in these language regions? Across five fMRI experiments, we investigated the sensitivity of high-level language processing brain regions to sub-lexical linguistic sound patterns by examining responses to diverse nonwords: sequences of sounds/letters that do not constitute real words (e.g., punes, silory, flope). We establish robust responses in the language network to visually (Experiment 1a, n=605) and auditorily (Experiments 1b, n=12, and 1c, n=13) presented nonwords relative to baseline. In Experiment 2 (n=16), we find stronger responses to nonwords that obey the phoneme-combinatorial constraints of English. Finally, in Experiment 3 (n=14) and a post-hoc analysis of Experiment 2, we provide suggestive evidence that the responses in Experiments 1 and 2 are not due to the activation of real words that share some phonology with the nonwords. The results suggest that knowledge of phoneme combinatorics and representations of sub-lexical linguistic sound patterns are stored within the same fronto-temporal network that stores higher-level linguistic knowledge and supports word and sentence comprehension.


Queue · 2021 · Vol 19 (2) · pp. 21-28 · Author(s): George V. Neville-Neil

When you're starting out, you want to be able to hold the entire program in your head if at all possible. Once you're conversant with your first, simple assembly language and the machine architecture you're working with, it will be completely possible to look at a page or two of your assembly and know not only what it is supposed to do but also what the machine will do for you, step by step. When you look at a high-level language, you should be able to understand what you mean it to do, but often you have no idea just how your intent will be translated into action. Assembly and machine code are where the action is.
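A reader can approximate the same exercise one level up with Python's built-in dis module, which prints the bytecode a high-level function is translated into; it is not native assembly, but it makes the same point that the translation from intent to executed steps is something you can inspect instruction by instruction. A minimal example:

    import dis

    def average(a, b):
        return (a + b) / 2

    # Show the interpreter's bytecode for the function, step by step.
    dis.dis(average)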


2021 · Vol 30 (1) · pp. 24-32 · Author(s): Isai Amutan Krishnan, Selvajothi Ramalingam, Narentheren Kaliappen, Sathiswaran Uthamaputhran, Puspalata C Suppiah, ...

The purpose of this qualitative study was to investigate the words and phrases used by student graduates in job interviews. Twenty-seven Malaysian graduates participated in the study. "How to face challenges" was the focal theme chosen for analysis of the data. The findings indicated that successful interviewees covered six out of seven important employability skills, while interviewees on the reserve list covered only four of the employability skills, and the unsuccessful interviewees covered only three of the seven skills. Successful interviewees were deemed able to demonstrate high-level proficiency by using the most salient words and phrases to express their employability skills in the interviews. It is expected that this study will encourage current undergraduates to develop high-level language proficiency relevant to their employability, and will encourage educational institutions to foster training in this area for the benefit of their students.


Author(s): Symphorien Monsia, Sami Faiz

In recent years, big data has become a major concern for many organizations. An essential component of big data is the spatio-temporal data dimension known as geospatial big data, which designates the application of big data issues to geographic data. One of the major aspects of (geospatial) big data systems is the data query language (i.e., a high-level language) that allows non-technical users to interact easily with these systems. In this chapter, the researchers explore high-level languages, focusing in particular on the spatial extensions of Hadoop for geospatial big data queries. Their main objective is to examine three open-source and popular implementations of SQL on Hadoop intended for querying geospatial big data: (1) Pigeon of SpatialHadoop, (2) QLSP of Hadoop-GIS, and (3) ESRI Hive of GIS Tools for Hadoop. Along the same lines, the authors present their current research work toward the analysis of geospatial big data.
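As a rough illustration of the kind of high-level interaction the chapter surveys, the sketch below submits a spatial HiveQL query from Python through the PyHive client. The host, database, table, column names, and polygon coordinates are invented for the example, and it assumes the ESRI GIS Tools for Hadoop UDFs (ST_Contains, ST_Polygon, ST_Point) have already been registered in Hive; none of these details are taken from the chapter.

    # pip install "pyhive[hive]"
    from pyhive import hive

    # Hypothetical HiveServer2 endpoint and table layout.
    conn = hive.connect(host="hive.example.org", port=10000, database="geodata")
    cursor = conn.cursor()

    # Count pickup points that fall inside a query polygon, using the
    # ESRI Hive spatial UDFs registered from the GIS Tools for Hadoop jars.
    cursor.execute("""
        SELECT COUNT(*)
        FROM taxi_pickups
        WHERE ST_Contains(
            ST_Polygon(1, 1, 1, 4, 4, 4, 4, 1),
            ST_Point(longitude, latitude)
        )
    """)
    print(cursor.fetchone()[0])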


2020 · Vol 13 (12) · pp. 6265-6284 · Author(s): Emmanuel Wyser, Yury Alkhimenkov, Michel Jaboyedoff, Yury Y. Podladchikov

Abstract. We present an efficient MATLAB-based implementation of the material point method (MPM) and its most recent variants. MPM has gained popularity over the last decade, especially for problems in solid mechanics in which large deformations are involved, such as cantilever beam problems, granular collapses and even large-scale snow avalanches. Although its numerical accuracy is lower than that of the widely accepted finite element method (FEM), MPM has proven useful for overcoming some of the limitations of FEM, such as excessive mesh distortions. We demonstrate that MATLAB is an efficient high-level language for MPM implementations that solve elasto-dynamic and elasto-plastic problems. We accelerate the MATLAB-based implementation of the MPM method by using the numerical techniques recently developed for FEM optimization in MATLAB. These techniques include vectorization, the use of native MATLAB functions and the maintenance of optimal RAM-to-cache communication, among others. We validate our in-house code with classical MPM benchmarks including (i) the elastic collapse of a column under its own weight; (ii) the elastic cantilever beam problem; and (iii) existing experimental and numerical results, i.e., granular collapses and slumping mechanics, respectively. We report an improvement in performance by a factor of 28 for a vectorized code compared with a classical iterative version. The computational performance of the solver is at least 2.8 times greater than those of previously reported MPM implementations in Julia under a similar computational architecture.
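The vectorization idea behind that speed-up carries over directly to other array languages. Below is a small NumPy sketch (not the authors' MATLAB code) of one MPM building block, the particle-to-node mass scatter, written first as an explicit loop and then in vectorized form; the connectivity and weights are synthetic, and np.add.at is used because several particles contribute to the same node.

    import numpy as np

    rng = np.random.default_rng(0)
    n_particles, n_nodes, nodes_per_particle = 10_000, 1_024, 4

    # Synthetic connectivity and shape-function weights for the sketch.
    node_ids = rng.integers(0, n_nodes, size=(n_particles, nodes_per_particle))
    weights = rng.random((n_particles, nodes_per_particle))
    weights /= weights.sum(axis=1, keepdims=True)   # partition of unity
    mass_p = rng.random(n_particles)

    # Iterative version: loop over particles and their supporting nodes.
    mass_loop = np.zeros(n_nodes)
    for p in range(n_particles):
        for k in range(nodes_per_particle):
            mass_loop[node_ids[p, k]] += weights[p, k] * mass_p[p]

    # Vectorized version: one unbuffered scatter-add over all contributions.
    mass_vec = np.zeros(n_nodes)
    np.add.at(mass_vec, node_ids.ravel(), (weights * mass_p[:, None]).ravel())

    assert np.allclose(mass_loop, mass_vec)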


2020 · pp. 1-20 · Author(s): Kevin McManus, Yingying Liu

Abstract. We closely replicated Wu and Ortega (2013), who found that an elicited imitation test (EIT) reliably distinguished low-level from high-level language abilities among instructed second language (L2) learners of Mandarin Chinese. The original study sampled learners (1) from second-level courses to represent low-level language abilities and (2) from third-, fourth- and graduate-level courses to represent high-level language abilities. Results showed high-level learners outperformed low-level learners on the Mandarin EIT. Our close replication used Wu and Ortega's (2013) materials and procedures in order to understand (1) the extent to which this EIT can additionally distinguish between finer-grained language abilities and (2) the ways in which the broad grouping of language abilities in the high group may have contributed to the findings. Sixty-five instructed L2 learners from four instructional levels were assigned to one of three groups: Beginner (first-level courses), Low (second-level courses), High (third- and fourth-level courses). Consistent with the original study, our results showed clear between-group differences, indicating that the EIT can distinguish between both broad (beginner vs high) and finer-grained (beginner vs low, low vs high) language abilities. These results are discussed in light of the original study's findings with implications for proficiency assessment in second language acquisition (SLA) research.

