The semantics of shared memory in Intel CPU/FPGA systems

2021 ◽  
Vol 5 (OOPSLA) ◽  
pp. 1-28
Author(s):  
Dan Iorga ◽  
Alastair F. Donaldson ◽  
Tyler Sorensen ◽  
John Wickerson

Heterogeneous CPU/FPGA devices, in which a CPU and an FPGA can execute together while sharing memory, are becoming popular in several computing sectors. In this paper, we study the shared-memory semantics of these devices, with a view to providing a firm foundation for reasoning about the programs that run on them. Our focus is on Intel platforms that combine an Intel FPGA with a multicore Xeon CPU. We describe the weak-memory behaviours that are allowed (and observable) on these devices when CPU threads and an FPGA thread access common memory locations in a fine-grained manner through multiple channels. Some of these behaviours are familiar from well-studied CPU and GPU concurrency; others are weaker still. We encode these behaviours in two formal memory models: one operational, one axiomatic. We develop executable implementations of both models, using the CBMC bounded model-checking tool for our operational model and the Alloy modelling language for our axiomatic model. Using these, we cross-check our models against each other via a translator that converts Alloy-generated executions into queries for the CBMC model. We also validate our models against actual hardware by translating 583 Alloy-generated executions into litmus tests that we run on CPU/FPGA devices; when doing this, we avoid the prohibitive cost of synthesising a hardware design per litmus test by creating our own 'litmus-test processor' in hardware. We expect that our models will be useful for low-level programmers, compiler writers, and designers of analysis tools. Indeed, as a demonstration of the utility of our work, we use our operational model to reason about a producer/consumer buffer implemented across the CPU and the FPGA. When the buffer uses insufficient synchronisation -- a situation that our model is able to detect -- we observe that its performance improves at the cost of occasional data corruption.
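The litmus tests mentioned above have a familiar shape. As a rough illustration (a sketch, not the authors' actual test harness or hardware 'litmus-test processor'), the following enumerates every sequentially consistent interleaving of the classic message-passing litmus test and confirms that the weak outcome r1=1, r2=0 never arises under sequential consistency; observing that outcome on real hardware therefore witnesses a weak-memory behaviour.

```python
from itertools import permutations

# Classic message-passing (MP) litmus test:
#   Thread T0: x = 1; y = 1       (producer)
#   Thread T1: r1 = y; r2 = x     (consumer)
# The "weak" outcome r1 == 1 and r2 == 0 is forbidden under
# sequential consistency but can appear on weaker hardware.

def run(schedule):
    """Execute one interleaving of T0 and T1; return (r1, r2)."""
    mem = {"x": 0, "y": 0}
    regs = {}
    ops = {
        ("T0", 0): lambda: mem.__setitem__("x", 1),
        ("T0", 1): lambda: mem.__setitem__("y", 1),
        ("T1", 0): lambda: regs.__setitem__("r1", mem["y"]),
        ("T1", 1): lambda: regs.__setitem__("r2", mem["x"]),
    }
    pc = {"T0": 0, "T1": 0}   # per-thread program counters
    for tid in schedule:
        ops[(tid, pc[tid])]()
        pc[tid] += 1
    return regs["r1"], regs["r2"]

def sc_outcomes():
    """All outcomes reachable under sequential consistency."""
    outcomes = set()
    # Interleavings = distinct orderings of T0,T0,T1,T1
    # (program order within each thread is preserved).
    for sched in set(permutations(["T0", "T0", "T1", "T1"])):
        outcomes.add(run(list(sched)))
    return outcomes

print(sorted(sc_outcomes()))   # (1, 0) is absent under SC
```

A model such as the paper's axiomatic one answers the same question declaratively, by ruling executions in or out, rather than by enumerating interleavings.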

Author(s):  
Irfan Uddin

The microthreaded many-core architecture comprises multiple clusters of fine-grained multi-threaded cores. Concurrency management is supported in the cores' instruction set architecture, and the computational work of an application is asynchronously delegated to different clusters of cores, where clusters are allocated dynamically. Computer architects are interested in analyzing the complex interactions amongst these dynamically allocated resources. Generally, a detailed cycle-accurate simulation of the execution time is used. However, the cycle-accurate simulator for the microthreaded architecture executes at a rate of 100,000 instructions per second, divided over the number of simulated cores, which makes the evaluation of a complex application executing on a contemporary multi-core machine very slow. To enable efficient design-space exploration, we present a co-simulation environment in which the detailed execution of instructions in the pipelines of microthreaded cores and the interactions amongst the hardware components are abstracted. We evaluate the high-level simulation framework against the cycle-accurate simulation framework. The results show that the high-level simulator is faster and less complicated than the cycle-accurate simulator, at the cost of some accuracy.
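To see why that simulation rate is prohibitive, a back-of-the-envelope calculation helps (the core count and workload size below are illustrative assumptions, not figures from the paper):

```python
# The cycle-accurate simulator retires ~100,000 instructions/sec in total,
# shared across all simulated cores.
SIM_RATE = 100_000                       # simulated instructions per second
cores = 128                              # assumed cluster size (illustrative)
per_core_rate = SIM_RATE / cores         # instructions/sec per simulated core

# Assume an example workload of 10 billion instructions per core.
workload_per_core = 10_000_000_000
seconds = workload_per_core / per_core_rate
print(f"{seconds / 86_400:.0f} days")    # prints "148 days"
```

Months of wall-clock time per design point is exactly the regime where an abstracted high-level co-simulation pays off.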


2021 ◽  
Vol 2 (3) ◽  
pp. 16-21
Author(s):  
Saeed Abbassi

Noise pollution caused by vehicle traffic is one of the major problems of urban areas with expanding road networks. Given the rising cost of constructing and installing sound walls to combat noise pollution, methods that impose no additional construction and operating costs should be sought. Improving the pavement texture is one of the most effective ways to reduce tire-pavement noise and lower the sound emitted from the asphalt surface. To evaluate the skid resistance of the asphalt, the British pendulum test according to the ASTM E303-74 standard was performed on wet sections of the asphalt; this device is used to examine the fine texture of the pavement. The pavement friction number, with an estimated coefficient of -0.1469, is inversely related to the sound level: as the friction number increases, the sound level created decreases. On the other hand, the depth of the pavement texture, determined by the size of the pavement materials, has a coefficient of 0.2810 and is directly related to the amount of noise pollution; the finer the materials used, the higher the sound level. From the coefficients estimated in the equation, it can be concluded that preparing pavements with adequate friction can reduce the noise pollution emitted by moving vehicles, especially in urban and acoustically sensitive areas. It is therefore recommended that, in acoustically sensitive areas, pavements be prepared with coarser materials while maintaining proper pavement resistance. To this end, this article examines the role of pavement texture in the noise created by the tire-pavement interaction.


Author(s):  
Liqiong Chen ◽  
Shilong Song ◽  
Can Wang

Just-in-time software defect prediction (JIT-SDP) is a fine-grained software defect prediction technology that aims to identify defective code changes in software systems. Effort-aware software defect prediction takes the cost of code inspection into consideration, allowing more defective code changes to be found within limited testing resources. Traditional effort-aware defect prediction models mainly measure effort by the number of lines of code (LOC) and rarely consider additional factors. This paper proposes a novel effort measure method called Multi-Metric Joint Calculation (MMJC). When measuring effort, MMJC takes into account not only LOC but also the distribution of modified code across different files (Entropy), the number of developers that changed the files (NDEV) and the developers' experience (EXP). In the simulation experiment, MMJC is combined with Linear Regression, Decision Tree, Random Forest, LightGBM, Support Vector Machine and Neural Network, respectively, to build software defect prediction models. Several comparative experiments are conducted between the MMJC-based models and baseline models. The results show that the ACC and [Formula: see text] indicators of the MMJC-based models improve by 35.3% and 15.9% on average, respectively, across the three verification scenarios, compared with the baseline models.
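The abstract does not specify how MMJC combines the four metrics, so the following is a purely hypothetical sketch of an effort-aware ranking in the same spirit: the `effort` function, its weighted-sum form, and all weights are illustrative assumptions, not the paper's formula.

```python
# Hypothetical multi-metric effort measure in the spirit of MMJC.
# The combination form and weights are illustrative only.

def effort(loc, entropy, ndev, exp, w=(0.4, 0.2, 0.2, 0.2)):
    """Combine four change metrics into one inspection-effort score.
    Higher author experience (exp) is assumed to lower review effort,
    so it enters inversely."""
    w_loc, w_ent, w_ndev, w_exp = w
    return (w_loc * loc
            + w_ent * entropy * loc        # scatter scales with change size
            + w_ndev * ndev                # more authors, more coordination
            + w_exp * loc / (1 + exp))     # experienced authors cost less

# Effort-aware prediction inspects high-yield changes first: rank by
# predicted defect probability per unit of inspection effort.
changes = [
    {"id": "c1", "p_defect": 0.9, "loc": 200, "entropy": 0.5, "ndev": 3, "exp": 10},
    {"id": "c2", "p_defect": 0.6, "loc": 20,  "entropy": 0.1, "ndev": 1, "exp": 2},
]
ranked = sorted(
    changes,
    key=lambda c: c["p_defect"] / effort(c["loc"], c["entropy"],
                                         c["ndev"], c["exp"]),
    reverse=True)
print([c["id"] for c in ranked])   # the small cheap change wins: ['c2', 'c1']
```

The point of the sketch is the ranking criterion, not the weights: under any such measure, a large tangled change must promise proportionally more defects to be inspected before a small focused one.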


Author(s):  
Vladimir Vlassov ◽  
Oscar Sierra Merino ◽  
Csaba Andras Moritz ◽  
Konstantin Popov

Sensors ◽  
2019 ◽  
Vol 19 (4) ◽  
pp. 847 ◽  
Author(s):  
Xiayan Fan ◽  
Taiping Cui ◽  
Chunyan Cao ◽  
Qianbin Chen ◽  
Kyung Sup Kwak

In this paper, we study the offloading decision for collaborative task execution between a platoon and a Mobile Edge Computing (MEC) server. The mobile application is represented by a series of fine-grained tasks that form a linear topology, each of which is either executed on the local vehicle, offloaded to other members of the platoon, or offloaded to the MEC server. The design objective is to minimize the cost of task offloading while meeting the task-execution deadline. The cost-minimizing task-offloading decision problem is transformed into a deadline-constrained shortest-path problem on a directed acyclic graph. The classical Lagrangian Relaxation-based Aggregated Cost (LARAC) algorithm is adopted to solve the problem approximately. Numerical analysis shows that the proposed task-decision scheduling method applies well to the platoon scenario, executing tasks in cooperation with the MEC server. In addition, compared with purely local execution, platoon execution, and MEC-server execution, the optimal offloading decision for collaborative task execution significantly reduces the cost of task execution while meeting deadlines.
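The reduction described above can be sketched concretely. Below is a minimal LARAC implementation over a tiny illustrative graph (the nodes, edge costs and delays are assumptions for demonstration, not the paper's model): LARAC repeatedly runs an ordinary shortest-path search under the aggregated weight cost + λ·delay and adjusts λ until it converges on a cheap path that meets the deadline.

```python
import heapq

def dijkstra(graph, src, dst, weight):
    """Shortest path from src to dst under the given edge-weight function."""
    dist, prev, pq = {src: 0.0}, {}, [(0.0, src)]
    while pq:
        d, u = heapq.heappop(pq)
        if u == dst:
            break
        if d > dist.get(u, float("inf")):
            continue
        for v, attrs in graph.get(u, {}).items():
            nd = d + weight(attrs)
            if nd < dist.get(v, float("inf")):
                dist[v], prev[v] = nd, u
                heapq.heappush(pq, (nd, v))
    path, node = [dst], dst
    while node != src:
        node = prev[node]
        path.append(node)
    return path[::-1]

def path_sums(graph, path):
    """Total (cost, delay) along a path."""
    cost = sum(graph[u][v]["cost"] for u, v in zip(path, path[1:]))
    delay = sum(graph[u][v]["delay"] for u, v in zip(path, path[1:]))
    return cost, delay

def larac(graph, src, dst, deadline):
    """LARAC: minimise path cost subject to total delay <= deadline."""
    pc = dijkstra(graph, src, dst, lambda a: a["cost"])
    if path_sums(graph, pc)[1] <= deadline:
        return pc                      # cheapest path is already feasible
    pd = dijkstra(graph, src, dst, lambda a: a["delay"])
    if path_sums(graph, pd)[1] > deadline:
        return None                    # even the fastest path misses it
    while True:
        (c_pc, d_pc), (c_pd, d_pd) = path_sums(graph, pc), path_sums(graph, pd)
        lam = (c_pc - c_pd) / (d_pd - d_pc)
        r = dijkstra(graph, src, dst, lambda a: a["cost"] + lam * a["delay"])
        c_r, d_r = path_sums(graph, r)
        # Converged when the aggregated weight stops improving
        # (exact comparison suffices here; use a tolerance in general).
        if c_r + lam * d_r == c_pc + lam * d_pc:
            return pd                  # best feasible path found
        if d_r <= deadline:
            pd = r                     # tighten the feasible side
        else:
            pc = r                     # tighten the cheap-but-late side

# Illustrative DAG: "s" = start, "t" = all tasks done; one route is
# cheap but slow (e.g. local execution), the other dear but fast (MEC).
g = {
    "s": {"a": {"cost": 1, "delay": 5}, "b": {"cost": 4, "delay": 1}},
    "a": {"t": {"cost": 1, "delay": 5}},
    "b": {"t": {"cost": 4, "delay": 1}},
}
print(larac(g, "s", "t", deadline=4))   # the cheap route misses the deadline
```

With a deadline of 4, the cheap route through "a" (delay 10) is rejected and LARAC returns the fast route through "b"; with a loose deadline it returns the cheap route, matching the cost-versus-deadline trade-off the abstract describes.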


2019 ◽  
Vol 973 ◽  
pp. 41-48
Author(s):  
Igor L. Gonik ◽  
Lyubov V. Palatkina ◽  
Dmitriy N. Gurulev ◽  
Dmitriy M. Shilikhin

The paper shows that, under conditions of a deficit of high-quality metal charge in steelmaking production, the effective use of abrasive-grinding waste in the composition of the metal charge is a promising method for reducing the cost of steel. The most rational solution for the disposal of such metallurgical waste may be the agglomeration of fine-grained and finely dispersed materials using briquetting technology, which is widely used in many countries around the world to produce multi-purpose briquettes.


Author(s):  
Bin Guo ◽  
Zhen Wang

An increasing number of new venture firms are internationalising their business operations early in their lifecycles to achieve superior performance. Taking the perspective of dynamic capability theory, our study sheds light on the effect that heterogeneity in experiential learning has on international new venture (INV) growth in terms of a curvilinear relationship. Specifically, we introduce the concept of internationalisation path heterogeneity to capture the path-specific features of INV experiential learning and capability building, and explore the relationship between internationalisation path heterogeneity and INV firm growth. We also argue that this relationship is moderated by environmental munificence, because the costs and benefits of path heterogeneity are bounded. We test the hypotheses through empirical analysis of a longitudinal dataset of 1054 INVs from 58 countries. Overall, this study provides a dynamic and fine-grained view of the role played by internationalisation path heterogeneity in driving the growth of INVs.


2013 ◽  
Vol 824 ◽  
pp. 44-50
Author(s):  
B.I.O. Dahunsi ◽  
N.A. Sulymon

This paper discusses the perceptions of gravel suppliers in the six states of South-western Nigeria, reporting findings from research on gravel supply in the study area. Major gravel pits in the states, together with their perceived technical characteristics, were identified through a structured questionnaire designed to solicit responses from truck drivers and gravel suppliers' associations. On this basis, factors affecting gravel supply and usage were measured by random variables devised for the purpose, and the observed outcomes of these variables from the survey constituted the research data. The collated data were analyzed quantitatively using the simple-percentage method. The paper posits that the geological setting of Lagos State accounts for the absence of any gravel pit in the state, and hence for the prohibitive cost of gravel there compared with the other states in the zone. In all the states, more than 99% of gravel suppliers identified transport as a major factor affecting the cost of gravel. The technical characteristics of gravels from South-western Nigeria are also perceived to be good for construction, though these assertions need to be empirically verified.
