Efficient Implementation of Application-Aware Spinlock Control in MPSoCs

Author(s): Diandian Zhang, Li Lu, Jeronimo Castrillon, Torsten Kempf, Gerd Ascheid, ...

Spinlocks are a common technique in Multi-Processor Systems-on-Chip (MPSoCs) to protect shared resources and prevent data corruption. Without a priori application knowledge, the control of spinlocks is often effectively random, which can degrade system performance significantly. To improve this, this paper proposes a centralized control mechanism for spinlocks that utilizes application-specific information during spinlock control. The complete control flow is presented, from the integration of high-level user-defined information down to the low-level realization of the control. An Application-Specific Instruction-set Processor (ASIP) called OSIP, originally designed for task scheduling and mapping, is extended to support this mechanism. The case studies demonstrate the high efficiency of the proposed approach and at the same time highlight the efficiency and flexibility advantages of using an ASIP as the system controller in MPSoCs.
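To make the mechanism concrete, here is a minimal Python sketch of centralized, priority-aware lock arbitration: a single controller grants a contended lock to the most urgent waiter rather than letting spinning requesters win at random. The class, its data structures, and the smaller-is-more-urgent priority convention are illustrative assumptions, not the paper's OSIP implementation.

```python
import heapq

class CentralLockController:
    """Grants a contended lock to the most urgent waiter instead of a
    random spinner. Smaller priority value = more urgent (an assumption)."""

    def __init__(self):
        self.held = {}       # lock_id -> current owner
        self.waiting = {}    # lock_id -> heap of (priority, seq, requester)
        self.seq = 0         # tie-breaker keeps equal priorities FIFO

    def acquire(self, lock_id, requester, priority):
        """Return True if granted immediately; otherwise enqueue."""
        if lock_id not in self.held:
            self.held[lock_id] = requester
            return True
        self.seq += 1
        heapq.heappush(self.waiting.setdefault(lock_id, []),
                       (priority, self.seq, requester))
        return False

    def release(self, lock_id):
        """Pass the lock to the most urgent waiter; return the new owner."""
        waiters = self.waiting.get(lock_id)
        if waiters:
            _, _, nxt = heapq.heappop(waiters)
            self.held[lock_id] = nxt
            return nxt
        del self.held[lock_id]
        return None

ctrl = CentralLockController()
ctrl.acquire("frame_buf", "core0", priority=5)   # granted at once
ctrl.acquire("frame_buf", "core1", priority=1)   # queued, most urgent
ctrl.acquire("frame_buf", "core2", priority=3)   # queued
print(ctrl.release("frame_buf"))                 # core1 is served next
```

Because the controller, rather than the spinning cores, decides who proceeds, application knowledge (here reduced to a single priority number) directly shapes the lock hand-off order.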

Author(s): Nanxin Wang, Jie Cheng

Abstract: More and more applications in today's automotive industry call for the integration of existing product design/analysis programs into packages that perform higher-level system functions, such as total engine analysis, model-based engine mapping, and powertrain system or vehicle optimization. The functional and procedural specifications for these integrations are often referred to as engineering methodologies. To enable rapid prototyping of these methodologies, a generic software integration framework, EMAT (Engineering Methodology Application Tool), has been developed. EMAT consists of a high-level language environment, MDL (Methodology Description Language), and a program for process execution scheduling and monitoring based on an artificial intelligence technique called Blackboard. Under the EMAT framework, a user can easily specify the control flow and data flow for any methodology in a declarative manner. Such a specification only needs to contain the logical order in which individual component programs will be executed (such as sequence, branching, or looping) and the input/output connections between the programs. EMAT then dynamically interprets this specification into procedures that actually carry out the execution. In contrast to conventional integration practices, such as developing application-specific scripts, EMAT provides a generic and high-level means for integration, which improves not only the efficiency of programming but also the modularity, maintainability, and reusability of software. EMAT is currently being applied to the integration of multiple engine simulation programs to prototype complicated engineering methodologies for a wide range of applications within Ford.
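As a rough illustration of the declarative idea behind MDL, the Python sketch below treats a methodology as data: steps, their input/output wiring, and control flow (sequence, branching, looping) are all described in a spec that a generic engine interprets. The spec format, step names, and toy programs are invented for illustration; this is not EMAT's actual language.

```python
# A generic engine that interprets a declarative methodology spec.
def run_methodology(spec, programs, data=None):
    """Interpret a spec: each step names a program, where its inputs
    come from, and either a fixed next step or a branch on its output."""
    data = dict(data or {})
    step = spec["start"]
    while step is not None:
        node = spec["steps"][step]
        args = {k: data[v] for k, v in node.get("inputs", {}).items()}
        data[node["output"]] = programs[node["program"]](**args)
        branch = node.get("branch")
        if branch:                      # conditional control flow
            step = branch["if_true"] if data[branch["on"]] else branch["if_false"]
        else:
            step = node.get("next")     # plain sequence; None terminates
    return data

# Usage: two toy component "programs" wired into a convergence loop.
programs = {"simulate": lambda x: x * 0.9, "converged": lambda x: x < 1.0}
spec = {"start": "sim",
        "steps": {"sim": {"program": "simulate", "inputs": {"x": "x"},
                          "output": "x", "next": "chk"},
                  "chk": {"program": "converged", "inputs": {"x": "x"},
                          "output": "done",
                          "branch": {"on": "done", "if_true": None,
                                     "if_false": "sim"}}}}
print(run_methodology(spec, programs, {"x": 5.0})["x"])
```

The point of the design is that adding or reordering component programs only changes the spec, not the engine, which is what yields the modularity and reusability the abstract describes.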


2014, Vol 2014, pp. 1-14
Author(s): Sharad Sinha, Thambipillai Srikanthan

Multiplication is a common operation in many applications, and various types of multiplication operations exist. Current high-level synthesis (HLS) flows generally treat all multiplication operations as identical and indistinguishable from one another, leading to inefficient mapping to resources. This paper proposes algorithms for automatically identifying the different types of multiplication operations and investigates mapping the ensemble of these different types together. This distinguishes it from previous works, where mapping strategies for an individual type of multiplication operation were investigated and the type of multiplication operation was assumed to be known a priori. A new cost model, independent of device and synthesis tools, for establishing priority among different types of multiplication operations for mapping to on-chip DSP blocks is also proposed. This cost model is used by a proposed analysis- and priority-ordering-based mapping strategy aimed at making efficient use of hard DSP blocks on FPGAs while maximizing the operating frequency of designs. Results show that the proposed methodology can produce designs at least 2× faster than those generated by the commercial HLS tool Vivado HLS.
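The Python sketch below illustrates the general shape of such a priority-ordering strategy: classify each multiplication, score it with a device-independent cost model, and hand the highest-priority operations to the limited DSP budget. The type categories, weights, and scoring function are illustrative assumptions, not the paper's actual cost model.

```python
def classify(op):
    """Crude type tags: constant multiplies can fold into LUT logic,
    so full variable-by-variable multiplies benefit most from a DSP."""
    if op["const_operand"] is not None:
        return "constant"
    return "wide" if max(op["widths"]) > 18 else "variable"

def dsp_priority(op):
    # Higher score = map to a DSP first. Wide multiplies exceed one
    # DSP's native width and are costly in LUTs, so they rank highest.
    weight = {"wide": 3.0, "variable": 2.0, "constant": 0.5}[classify(op)]
    return weight * op["widths"][0] * op["widths"][1]

def map_to_dsps(ops, dsp_budget):
    """Rank all multiplies, then give DSPs to the top of the list."""
    ranked = sorted(ops, key=dsp_priority, reverse=True)
    return {op["name"]: ("DSP" if i < dsp_budget else "LUT")
            for i, op in enumerate(ranked)}

ops = [{"name": "m0", "widths": (32, 32), "const_operand": None},
       {"name": "m1", "widths": (8, 8),   "const_operand": 3},
       {"name": "m2", "widths": (16, 16), "const_operand": None}]
print(map_to_dsps(ops, dsp_budget=1))   # m0 -> DSP; m1, m2 -> LUT
```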


2021, Vol 5 (OOPSLA), pp. 1-30
Author(s): Yann Herklotz, James D. Pollard, Nadesh Ramanathan, John Wickerson

High-level synthesis (HLS), which refers to the automatic compilation of software into hardware, is rapidly gaining popularity. In a world increasingly reliant on application-specific hardware accelerators, HLS promises hardware designs of comparable performance and energy efficiency to those coded by hand in a hardware description language such as Verilog, while maintaining the convenience and the rich ecosystem of software development. However, current HLS tools cannot always guarantee that the hardware designs they produce are equivalent to the software they were given, thus undermining any reasoning conducted at the software level. Furthermore, there is mounting evidence that existing HLS tools are quite unreliable, sometimes generating wrong hardware or crashing when given valid inputs. To address this problem, we present the first HLS tool that is mechanically verified to preserve the behaviour of its input software. Our tool, called Vericert, extends the CompCert verified C compiler with a new hardware-oriented intermediate language and a Verilog back end, and has been proven correct in Coq. Vericert supports most C constructs, including all integer operations, function calls, local arrays, structs, unions, and general control-flow statements. An evaluation on the PolyBench/C benchmark suite indicates that Vericert generates hardware that is around an order of magnitude slower (only around 2× slower in the absence of division) and about the same size as hardware generated by an existing, optimising (but unverified) HLS tool.


PLoS ONE, 2021, Vol 16 (10), pp. e0258869
Author(s): Manja D. Jensen, Kasper M. Hansen, Volkert Siersma, John Brodersen

Balancing the benefits and harms of mammography screening is difficult and involves a value judgement. Screening is both a medical and a social intervention; therefore, public opinion could be considered when deciding whether mammography screening programmes should be implemented and continued. Opinion polls have revealed high levels of public enthusiasm for cancer screening; however, the public tends to overestimate the benefits and underestimate the harms. In the search for better public decisions on mammography screening, this study investigated the quality of public opinion arising from a Deliberative Poll, in which a representative group of people is brought together to deliberate with each other and with experts on the basis of specific information, with participants' opinions assessed before, during, and after the process. In our Deliberative Poll, a representative sample of the Danish population aged between 18 and 70 participated; they studied an online video and took part in five hours of intense online deliberation. We used survey data from four timepoints, from recruitment to one month after the poll, to estimate the quality of decisions by the following outcomes: 1) knowledge; 2) ability to form opinions; 3) opinion stability; and 4) opinion consistency. The proportion of participants with a high level of knowledge increased from 1% at recruitment to 56% after receiving the video information. As a result of the information and deliberation process, more people formed an opinion regarding the effectiveness of the screening programme (12%), the economics of the programme (27%), and the ethical dilemmas of screening (10%). For 11 out of 14 opinion items, the within-item correlations between the first two inquiry timepoints were smaller than the correlations between later timepoints, indicating increased opinion stability. The correlations between three pairs of opinion items deemed theoretically related a priori all increased, indicating increased opinion consistency. Overall, the combined process of online information and deliberation increased the quality of opinion about mammography screening by increasing knowledge and the ability to form stable and consistent opinions.
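For readers interested in the stability measure, the following Python sketch shows one way to compute within-item correlations across consecutive survey waves; the data is fabricated purely to demonstrate the computation and does not reproduce the study's results.

```python
import numpy as np

def within_item_stability(waves):
    """waves: array of shape (n_timepoints, n_participants) for one
    opinion item. Returns Pearson r between consecutive timepoints;
    rising r between later waves suggests opinions settling."""
    return [float(np.corrcoef(waves[t], waves[t + 1])[0, 1])
            for t in range(len(waves) - 1)]

rng = np.random.default_rng(0)
t1 = rng.integers(1, 6, size=200).astype(float)  # 5-point scale, wave 1
t2 = t1 + rng.normal(0, 1.5, 200)                # noisy: opinions shifting
t3 = t2 + rng.normal(0, 0.5, 200)                # less noise: stabilising
print(within_item_stability(np.array([t1, t2, t3])))  # r(t1,t2) < r(t2,t3)
```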


2021, Vol 13 (8), pp. 4113
Author(s): Valeria Superti, Cynthia Houmani, Ralph Hansmann, Ivo Baur, Claudia R. Binder

With increasing urbanisation, new approaches such as the Circular Economy (CE) are needed to reduce resource consumption. In Switzerland, Construction & Demolition (C&D) waste accounts for the largest portion of waste (84%). Beyond limiting the depletion of primary resources, implementing recycling strategies for C&D waste, such as using recycled aggregates to produce recycled concrete (RC), can also decrease the amount of landfilled C&D waste. The use of RC still faces adoption barriers. In this research, we examined the factors driving the adoption of recycled products for a CE in the C&D sector, focusing on RC for structural applications. We developed a behavioural framework to understand the determinants of architects' decisions to recommend RC, and collected and analysed survey data from 727 respondents. The analyses focused on architects' a priori beliefs about RC, behavioural factors affecting their recommendations of RC, and project-specific contextual factors that might play a role in the recommendation of RC. Our results show that the factors that most facilitate architects' recommendation of RC are a senior position, a high level of knowledge about RC and about the Minergie label, beliefs about the reduced environmental impact of RC, and favourable prescriptive social norms expressed by clients and other architects. We emphasise the importance of a holistic theoretical framework for approaching decision-making processes related to the adoption of innovations, and of the agency of each involved actor for a transition towards a circular construction sector.


2002, Vol 70 (9), pp. 4880-4891
Author(s): Julia Eitel, Petra Dersch

Abstract: The YadA protein is a major adhesin of Yersinia pseudotuberculosis that promotes tight adhesion to mammalian cells by binding to extracellular matrix proteins. In this study, we first addressed the possibility of competitive interference between YadA and the major invasion factor invasin and found that expression of YadA in the presence of invasin affected neither the export nor the function of invasin in the outer membrane. Furthermore, expression of YadA promoted both bacterial adhesion and high-efficiency invasion entirely independently of invasin. Antibodies against fibronectin and β1 integrins blocked invasion, indicating that invasion occurs via extracellular-matrix-dependent bridging between YadA and the host cell β1 integrin receptors. Inhibitor studies also demonstrated that tyrosine and Ser/Thr kinases, as well as phosphatidylinositol 3-kinase, are involved in the uptake process. Further expression studies revealed that yadA is regulated in response to several environmental parameters, including temperature, ion and nutrient concentrations, and the bacterial growth phase. In complex medium, YadA production was generally repressed but could be induced by the addition of Mg2+. Maximal expression of yadA was obtained in exponential-phase cells grown in minimal medium at 37°C, conditions under which the invasin gene is repressed. These results suggest that YadA of Y. pseudotuberculosis constitutes another independent high-level uptake pathway that might complement other cell entry mechanisms (e.g., invasin) at certain sites or stages during the infection process.


2021, Vol 15 (2), pp. 1-25
Author(s): Amal Alhosban, Zaki Malik, Khayyam Hashmi, Brahim Medjahed, Hassan Al-Ababneh

Service-Oriented Architectures (SOA) enable the automatic creation of business applications from independently developed and deployed Web services. Because the Web services involved are inherently unknown a priori, delivering reliable Web service compositions is a significant and challenging problem. Services involved in an SOA often do not operate under a single processing environment and need to communicate using different protocols over a network. Under such conditions, designing a fault management system that is both efficient and extensible is a challenging task. In this article, we propose SFSS, a self-healing framework for SOA fault management that predicts, identifies, and resolves faults in SOAs. In SFSS, we identified a set of high-level exception handling strategies based on the QoS performance of the component services and the preferences articulated by the service consumers. Multiple recovery plans are generated and evaluated according to the performance of the selected component services, and the best recovery plan is then executed. We assess the overall user dependence (i.e., whether the service is independent of other services) using the generated plan and the available invocation information of the component services. The experimental results show that the proposed technique enhances service selection quality by choosing the highest-scoring services and improves overall system performance. They also indicate the applicability of SFSS and show improved performance in comparison to similar approaches.
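A minimal sketch of the plan-selection step might look like the following Python code, which scores each candidate recovery plan from the QoS of its component services, weighted by consumer preferences, and picks the best one. The QoS fields, the weights, and the aggregation rule are illustrative assumptions, not SFSS's actual model.

```python
def plan_score(plan, weights):
    """A plan is a list of substitute services with QoS normalised to
    [0, 1]. Reliability and availability multiply across the plan (every
    service must hold up); latency is averaged and inverted so that
    lower latency scores higher."""
    reliability, latency = 1.0, 0.0
    for svc in plan:
        reliability *= svc["reliability"] * svc["availability"]
        latency += svc["latency"]
    latency /= len(plan)
    return (weights["reliability"] * reliability
            + weights["speed"] * (1.0 - latency))

def best_recovery_plan(plans, weights):
    return max(plans, key=lambda p: plan_score(p, weights))

plans = [
    [{"reliability": 0.99, "availability": 0.95, "latency": 0.40}],
    [{"reliability": 0.90, "availability": 0.99, "latency": 0.10},
     {"reliability": 0.97, "availability": 0.98, "latency": 0.15}],
]
weights = {"reliability": 0.6, "speed": 0.4}  # consumer preferences
print(best_recovery_plan(plans, weights))     # the faster two-service plan
```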


Author(s): Umar Ibrahim Minhas, Roger Woods, Georgios Karakonstantis

Abstract: Whilst FPGAs have been used in cloud ecosystems, it remains extremely challenging to achieve high compute density when mapping heterogeneous multi-tasks onto shared resources at runtime. This work addresses this by treating the FPGA resource as a service and employing multi-task processing at the high level, design space exploration, and static off-line partitioning to allow more efficient mapping of heterogeneous tasks onto the FPGA. In addition, a new, comprehensive runtime functional simulator is used to evaluate the effect of various spatial and temporal constraints on both the existing and new approaches when varying system design parameters. A comprehensive suite of real high-performance computing tasks was implemented on a Nallatech 385 FPGA card; the results show that our approach provides on average 2.9× and 2.3× higher system throughput for compute-intensive and mixed-intensity tasks, while throughput is 0.2× lower for memory-intensive tasks due to external memory access latency and bandwidth limitations. The work has been extended by introducing a novel scheduling scheme to enhance the temporal utilization of resources under the proposed approach. Additional results for large queues of mixed-intensity tasks (compute and memory) show that the proposed partitioning and scheduling approach can provide more than 3× system speedup over previous schemes.
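As an illustration of the static-partitioning-plus-scheduling idea, the Python sketch below splits the FPGA off-line into fixed slots and greedily places each queued task into the earliest-available slot that fits. The slot sizes and task parameters are invented and do not correspond to the Nallatech 385 measurements.

```python
import heapq

def schedule(tasks, slots):
    """tasks: list of (area, duration); slots: list of slot areas fixed
    by off-line partitioning. Returns the makespan of a greedy
    earliest-available-fit schedule."""
    free = [(0.0, area) for area in slots]    # (time slot frees up, area)
    heapq.heapify(free)
    makespan, parked = 0.0, []
    for area, duration in tasks:
        # Pop slots until one is big enough; park the too-small ones.
        while free and free[0][1] < area:
            parked.append(heapq.heappop(free))
        if not free:
            raise ValueError(f"no slot fits task of area {area}")
        t, slot_area = heapq.heappop(free)
        heapq.heappush(free, (t + duration, slot_area))   # occupy the slot
        makespan = max(makespan, t + duration)
        for p in parked:                                   # restore the rest
            heapq.heappush(free, p)
        parked.clear()
    return makespan

tasks = [(30, 5.0), (60, 8.0), (30, 2.0), (90, 4.0)]   # (area, duration)
print(schedule(tasks, slots=[100, 50, 50]))             # makespan: 12.0
```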


Author(s): L. S. Pioro, I. L. Pioro

It is well known that high-level radioactive wastes (HLRAW) are usually vitrified inside electric furnaces. The disadvantages of electric furnaces are their low melting capacity and restrictions on charge preparation. Therefore, a new concept for a high-efficiency combined aggregate, a submerged combustion melter (SCM) coupled with an electric furnace, was developed for the vitrification of HLRAW. The main idea of this concept is to use the SCM as the primary high-capacity melting unit with direct melt drainage into an electric furnace. The SCM employs a single-stage method for the vitrification of HLRAW: concentration (evaporation), calcination, and vitrification take place in a single-stage process inside the melting chamber of the SCM. Specific to the melting process is the use of a gas-air or gas-oxygen-air mixture with direct combustion inside the melt. Inside the melt are high-temperature zones with increased reactivity of the gas phase, a well-developed interface surface, and intensive mixing, which together intensify the charge melting and vitrification process. The electric furnace clarifies the molten glass, preparing a high-quality melt for subsequent pouring into containers for final storage.


2016, Vol 2016, pp. 1-17
Author(s): Erkhembayar Jadamba, Miyoung Shin

Drug repositioning offers new clinical indications for old drugs. Recently, many computational approaches have been developed to repurpose marketed drugs for human diseases by mining various types of biological data, including disease expression profiles, pathways, drug phenotype expression profiles, and chemical structure data. However, despite encouraging results, a comprehensive and efficient computational drug repositioning approach is needed that provides high-level integration of the available resources. In this study, we propose a systematic framework employing experimental genomic knowledge and pharmaceutical knowledge to reposition drugs for a specific disease. Specifically, we first obtain experimental genomic knowledge from disease gene expression profiles and pharmaceutical knowledge from drug phenotype expression profiles, and construct a pathway-drug network representing a priori known associations between drugs and pathways. To discover promising candidates for drug repositioning, we initialize node labels for the pathway-drug network using identified disease pathways and known drugs associated with the phenotype of interest, and perform network propagation in a semisupervised manner. To evaluate our method, we conducted experiments repositioning 1309 drugs based on four different breast cancer datasets and verified the promising candidate drugs for breast cancer with a two-step validation procedure. Our experimental results showed that the proposed framework is a useful approach for discovering promising candidates for breast cancer treatment.
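The propagation step can be pictured with a small Python sketch of semisupervised label propagation (in the style of Zhou et al.) over a symmetrically normalised adjacency matrix: seed nodes, i.e., the identified disease pathways and known drugs, start with label 1, and scores diffuse until convergence. The tiny graph is fabricated for illustration; it is not the paper's pathway-drug network.

```python
import numpy as np

def propagate(adj, seeds, alpha=0.8, tol=1e-6, max_iter=1000):
    """Label propagation: f <- alpha * S f + (1 - alpha) * y, where
    S is the symmetrically normalised adjacency and y marks the seeds."""
    deg = adj.sum(axis=1)
    d_inv_sqrt = np.where(deg > 0, 1.0 / np.sqrt(deg), 0.0)
    s = adj * d_inv_sqrt[:, None] * d_inv_sqrt[None, :]
    y = np.zeros(len(adj))
    y[seeds] = 1.0
    f = y.copy()
    for _ in range(max_iter):
        f_new = alpha * s @ f + (1 - alpha) * y
        if np.abs(f_new - f).max() < tol:
            break
        f = f_new
    return f

# Nodes 0-2: pathways; nodes 3-5: drugs; edges = known associations.
adj = np.array([[0, 1, 0, 1, 1, 0],
                [1, 0, 1, 0, 1, 0],
                [0, 1, 0, 0, 0, 1],
                [1, 0, 0, 0, 0, 0],
                [1, 1, 0, 0, 0, 0],
                [0, 0, 1, 0, 0, 0]], dtype=float)
scores = propagate(adj, seeds=[0])   # seed: one disease pathway
print(scores[3:])  # drug scores: repositioning candidates, ranked by value
```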

