compilation process
Recently Published Documents


TOTAL DOCUMENTS: 71 (five years: 27)
H-INDEX: 6 (five years: 2)

Author(s): K Pooja, Dr Shailaja S

Cloud services have many applications in logical programming and in the IT industry. Complex computations on local machines can demand large amounts of system resources and thereby delay data-processing operations; cloud computing techniques are one way to speed up processing. Extensive use of cloud services is desirable for the scientific computation of user data and applications, and it requires a platform designed to meet the specific requirements of individual users so that their data and applications can move easily across devices. This paper presents symbolic-numeric computation on a cloud service platform. In this approach, user tasks are expressed as symbolic expressions built with languages and APIs such as Java and C/C++. The proposed work uses Python to carry out the compilation process.
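To illustrate the general idea of symbolic-numeric computation in Python (a minimal sketch only, not the paper's platform; the task format and function names are assumptions), a user task could arrive as a symbolic expression in text form, be compiled to a numeric callable, and then be evaluated on the server:

```python
# Minimal sketch (not the paper's implementation): a symbolic task is parsed,
# compiled to a fast numeric function, and evaluated. Names are illustrative.
import sympy as sp

def compile_task(expr_text, variable_names):
    """Parse a symbolic expression and compile it to a numeric function."""
    symbols = sp.symbols(variable_names)
    expr = sp.sympify(expr_text)          # symbolic form, e.g. x**2 + sin(y)
    return sp.lambdify(symbols, expr)     # compiled numeric callable

if __name__ == "__main__":
    f = compile_task("x**2 + sin(y)", ["x", "y"])
    print(f(2.0, 0.5))                    # numeric evaluation of the symbolic task
```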


2021, Vol 70 (3), pp. 267-280
Author(s): Márton Pál, Gáspár Albert

Researchers in the earth sciences frequently use thematic cartography when publishing. When creating a map, they intend to communicate important spatial information that enhances, supplements or replaces textual content. Map-makers face not only visual but also content-related requirements. Cartographic visualisation has several well-established rules that must be taken into account during compilation, yet not all researchers apply them correctly. The present study aims to identify the factors that determine the quality of geoscientific maps and what needs to be improved during the map compilation process. To identify the tendencies, we investigated maps in selected journals – one Hungarian and one international per earth-science branch: geography, cartography, geology, geophysics, and meteorology. A system of criteria was set up to evaluate the maps objectively; basic rules of cartography, quality of visual representation, and copyright compliance were investigated. The results show that better map quality is associated with journals that have strict editorial rules and higher impact factors. The assessment method is suitable for analysing any kind of spatial visual representation, and individual map-composing authors can use it to evaluate their maps before submission and publication.


2021, Vol 5 (3), pp. 1-20
Author(s): Hamza Bourbouh, Pierre-Loïc Garoche, Christophe Garion, Xavier Thirioux

Model-based design is now unavoidable when building embedded systems and, more specifically, controllers. Among the available modeling languages, the synchronous dataflow paradigm, as implemented in languages such as MATLAB Simulink or ANSYS SCADE, has become predominant in critical embedded system industries. Both frameworks are used to design the controller itself but also provide code generation facilities, enabling faster deployment to the target and easier V&V activities performed earlier in the design process, at the model level. Synchronous models also ease the definition of formal specifications through synchronous observers, which attach requirements to the model in the very same language, one mastered by engineers and equipped with simulation and code generation tooling. However, few works address the automatic synthesis of MATLAB Simulink annotations from lower-level models or code. This article presents a compilation process from Lustre models to genuine MATLAB Simulink, without the need to rely on external C functions or MATLAB functions. The translation is based on the modular compilation of Lustre to imperative code and preserves the hierarchy of the input Lustre model within the generated Simulink one. We implemented the approach and used it to validate a compilation toolchain that maps Simulink to Lustre and then to C, through equivalence testing and checking. This backward compilation from Lustre to Simulink also makes it possible to automatically produce Simulink components that model specifications, proof arguments, or test-case coverage criteria.
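For intuition about the modular compilation of synchronous dataflow code mentioned above (a generic illustration, not the authors' toolchain), a Lustre-like counter node can be compiled to an imperative step function that is called once per clock tick; the Python sketch below and its node name are assumptions.

```python
# Illustrative sketch only: a Lustre-like node
#   node counter(reset: bool) returns (n: int);
#   let n = 0 -> if reset then 0 else pre(n) + 1; tel
# compiled, in the modular style, to a state object with an imperative step().
class Counter:
    def __init__(self):
        self.init = True     # "->" operator: first-cycle flag
        self.pre_n = 0       # memory cell for pre(n)

    def step(self, reset):
        if self.init or reset:
            n = 0
        else:
            n = self.pre_n + 1
        self.init = False
        self.pre_n = n       # update the memory for the next cycle
        return n

if __name__ == "__main__":
    c = Counter()
    print([c.step(r) for r in [False, False, True, False, False]])  # [0, 1, 0, 1, 2]
```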


Publications, 2021, Vol 9 (3), pp. 27
Author(s): Yaniasih Yaniasih, Indra Budi

Classifying citations according to function benefits information retrieval tasks, scholarly communication studies, and the development of ranking metrics. Many citation function classification schemes have been proposed, but most of them have not been systematically designed through an extensive literature-based compilation process. Many schemes were also not evaluated properly before being used for classification experiments on large datasets. This paper aimed to build and evaluate new citation function categories based upon sufficient scientific evidence. A total of 2153 citation sentences were collected from Indonesian journal articles for our dataset. To identify the new categories, a literature survey was conducted, analyses and groupings of category meanings were carried out, and categories were then selected based on the dataset's characteristics and the purpose of the classification. The evaluation used five criteria: coherence, ease, utility, balance, and coverage. Fleiss' kappa and automatic classification metrics using machine learning and deep learning algorithms were used to assess the criteria. These methods resulted in five citation function categories. The scheme's coherence and ease of use were quite good, as indicated by an inter-annotator agreement value of 0.659 and a Long Short-Term Memory (LSTM) F1-score of 0.93. According to the balance and coverage criteria, the scheme still needs to be improved. The research data were limited to food-science journals published in Indonesia. Future research will involve classifying the citation function using a massive dataset collected from various scientific fields and representative countries, as well as applying improved annotation schemes and deep learning methods.
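As a reference point for the inter-annotator agreement figure cited above (not tied to the authors' dataset or tooling), Fleiss' kappa over annotated citation sentences can be computed as in the following Python sketch; the tiny count matrix is made up for illustration.

```python
# Minimal sketch of Fleiss' kappa for inter-annotator agreement.
# Rows = items (citation sentences), columns = categories; each cell holds the
# number of annotators who assigned that item to that category.
import numpy as np

def fleiss_kappa(counts):
    n_items, _ = counts.shape
    n_raters = counts[0].sum()                           # raters per item (assumed constant)
    p_j = counts.sum(axis=0) / (n_items * n_raters)      # per-category proportions
    p_i = ((counts ** 2).sum(axis=1) - n_raters) / (n_raters * (n_raters - 1))
    p_bar, p_e = p_i.mean(), (p_j ** 2).sum()            # observed vs. chance agreement
    return (p_bar - p_e) / (1.0 - p_e)

if __name__ == "__main__":
    # 4 items, 3 annotators, 3 hypothetical citation-function categories.
    counts = np.array([[3, 0, 0],
                       [2, 1, 0],
                       [0, 3, 0],
                       [1, 1, 1]])
    print(round(fleiss_kappa(counts), 3))
```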


Author(s):  
Süleyman Yıldız

The process of compiling the Qur'ānic verses (jamʿ al-Qur'ān) between two covers into a single musḥaf, and of subsequently reproducing copies of it, constitutes one of the most important stages in both the history and the recitation (qirāʾāh) of the Qur'ān. From the earliest periods, individual works have been written on the subject, as well as works produced within different fields of expertise such as hadīth, tafseer, qirāʾāt and history. The issue, which has long attracted the attention of orientalists and has been used as a tool of manipulation against the Qur'ān, remains contemporary. Answering the question of the authenticity of the Qur'ān in the light of current questions and problems is therefore both a necessity and a continuing task. Zurqānī, one of the modern-period authors on ‘ulūm al-Qur'ān, expressed opinions on this issue. His work is noteworthy in that it treats the compilation process of the Qur'ān with a contemporary edge in order to prevent and overcome current doubts. This study examines Zurqānī's approaches to the compilation process of the Qur'ān in the context of jamʿ al-Qur'ān. It also touches upon the allegation of distortion of the musḥaf, since this is related to the subject.


2021, Vol 55 (1), pp. 21-37
Author(s): Daniel Mawhirter, Sam Reinehr, Connor Holmes, Tongping Liu, Bo Wu

Subgraph matching, which identifies all the embeddings of a query pattern in an input graph, is a fundamental task in many applications. Compilation-based subgraph matching systems generate specialized implementations for the provided patterns and often substantially outperform other systems. However, the generated code causes significant computation redundancy, and the compilation process incurs too much overhead to be used online, both due to the inherent symmetry in the structure of the query pattern. In this paper, we propose an optimizing query compiler, named GraphZero, to completely address these limitations through symmetry breaking based on group theory. GraphZero implements three novel techniques. First, its schedule explorer efficiently prunes the schedule space without missing any high-performance schedule. Second, it automatically generates and enforces a set of restrictions to eliminate computation redundancy. Third, it generalizes orientation, a surprisingly effective optimization that was previously used only for clique patterns, to arbitrary patterns. Evaluation on multiple query patterns shows that GraphZero outperforms two state-of-the-art compilation- and non-compilation-based systems by up to 40X and 2654X, respectively.
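To give a concrete feel for the symmetry-breaking restrictions described above (a generic illustration, not GraphZero's generated code), a triangle pattern has automorphisms that make every embedding appear six times; enforcing an ordering restriction on the matched vertices keeps exactly one canonical embedding per triangle. The Python sketch below, including its toy graph, is an assumption-level example.

```python
# Illustrative sketch: matching a triangle pattern with and without a
# symmetry-breaking restriction. The triangle's automorphism group has order 6,
# so unrestricted enumeration finds each embedding 6 times; requiring
# v0 < v1 < v2 keeps one canonical embedding per triangle.
edges = {(0, 1), (1, 2), (0, 2), (2, 3), (1, 3)}
adj = {}
for u, v in edges:
    adj.setdefault(u, set()).add(v)
    adj.setdefault(v, set()).add(u)

def triangles(restrict):
    found = []
    for v0 in adj:
        for v1 in adj[v0]:
            for v2 in adj[v1]:
                if v2 in adj[v0]:                       # closes the triangle
                    if not restrict or (v0 < v1 < v2):  # symmetry-breaking restriction
                        found.append((v0, v1, v2))
    return found

print(len(triangles(restrict=False)))  # 12 = 2 triangles x 6 automorphisms
print(triangles(restrict=True))        # [(0, 1, 2), (1, 2, 3)]
```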


Author(s): Ahmad Tarmizi, Nurfitriana Nurfitriana, Moris Adidi Yogia, Teuku Afrizal, Ari Subowo

This research focuses on the formulation of Regional Regulation (Perda) Number 3 of 2017 concerning the Riau Malay Customary Institution of Dumai City. The study aims to examine the process by which the Dumai City DPRD and related institutions formulated the Perda policy. It used a descriptive qualitative approach, with informants selected by purposive sampling from various related agencies. The results show that the process of determining the Perda policy followed the prevailing laws, but the compilation process was not carried out holistically and lacked depth, especially in the agenda-setting phase. In this context, the identification of the problem did not engage deeply with the particular cultural, customary and historical conditions of the local community. Beyond the Perda itself, there is also no clear framework for protecting, defending and fighting for the interests of indigenous peoples and local communities. Furthermore, the drafting phase did not mobilize public participation, including the provision of public space and public discussions, which reflects the limited openness of the Perda's formulation phase. It is also notable that the Perda's concept, in broad terms, relies too heavily on the Perda of the Riau Provincial Malay Customary Institution; it is therefore advisable to clarify the concept and framework so that it is effective in achieving the mission and goals of the Riau Malay Customary Institution. The study concludes that the Perda is oriented more towards micro interests, namely those of internal organizations; beyond providing a legal basis, it can also be used as a basis for obtaining grant allocations from the Dumai City APBD.


This chapter summarizes the main research results on the functioning of human memory and how cognitive instructional models integrate these findings into their proposals for optimizing learning. It also covers some of the main cognitive theories of instruction, among which we highlight the cognitive theory of multimedia learning and cognitive load theory. These theories appeared alongside the emerging framework of the 1950s known as the "cognitive revolution". In this framework, human cognition can be compared to a biological computer that represents and processes information coming from the outside world through various sensory systems. This information must be recorded in memory and then retrieved so that any biological or digital system can perform the activities expected of it in various situations. Learning, in this framework, consists of forming new mental schemas in long-term memory, integrating simple, already-formed schemas into more complex ones, and automating some schemas through a compilation process. The cognitive theories of instruction take the way human memory works very seriously.


2020 ◽  
Vol 3 (5) ◽  
pp. 1280-1297
Author(s):  
L. I. Golovacheva

The article examines the views of the outstanding British Sinologist James Legge (1815–1897) on the textual history of the Lun Yu. Applying the methodological approach of textual-historical studies, the author, Lidia Golovacheva, examines Legge's views on the following: 1. The Qing views on the destruction of books and the killing of scholars under the Qin dynasty, and on the targeted collection of ancient books in the Han era. 2. The compilation of the Lun Yu text by Han-dynasty scholars. 3. When and by whom the Lun Yu was written. 4. Who commented on the Lun Yu. 5. Variant readings in the Lun Yu. Legge's views on the Lun Yu were to a significant extent influenced by those of traditional Chinese scholars, and they reflect the general level reached by Lun Yu textology in the 19th century. This provides a solid basis for future research on the development of Lun Yu studies in Sinology in China, Western Europe and worldwide.

