An Optimization Framework for Codes Classification and Performance Evaluation of RISC Microprocessors

Symmetry ◽  
2019 ◽  
Vol 11 (7) ◽  
pp. 938
Author(s):  
Syed Rameez Naqvi ◽  
Ali Roman ◽  
Tallha Akram ◽  
Majed M. Alhaisoni ◽  
Muhammad Naeem ◽  
...  

Pipelines in Reduced Instruction Set Computer (RISC) microprocessors are expected to provide increased throughput in most cases. However, there are a few instructions, and therefore entire assembly language codes, that execute faster and hazard-free without pipelines. Compilers usually generate code from a high-level description that suits the underlying hardware, so as to maintain symmetry with respect to performance; this, however, is not always guaranteed. Therefore, instead of trying to optimize the description to suit the processor design, we try to determine, at compile time, the processor variant that is more suitable for the given code, and dynamically reconfigure the system accordingly. In doing so, however, we first need to classify each code according to its suitability to a different processor variant. The latter, in turn, gives us confidence in performance symmetry across various types of codes: this is the primary contribution of the proposed work. We first develop mathematical performance models of three conventional microprocessor designs, and propose a symmetry-improving nonlinear optimization method to achieve code-to-design mapping. Our analysis is based on four different architectures and 324,000 different assembly language codes, each with between 10 and 1000 instructions and with different percentages of commonly seen instruction types. Our results suggest that in the sub-micron era, where the execution time of each instruction is merely a few nanoseconds, codes containing as few as 5% (or more) hazard-causing instructions execute more swiftly on processors without pipelines.
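To make the trade-off concrete, the sketch below implements a generic first-order timing model of the kind such a code-to-design comparison relies on; the stage count, latch overhead, stall penalty and the break-even calculation are illustrative assumptions, not the models or parameters from the paper.

```c
#include <stdio.h>

/* Hypothetical first-order model, for illustration only: the paper's own
 * performance models and parameters are more detailed. */

/* Non-pipelined: each instruction occupies the whole datapath. */
static double t_unpipelined(double n, double t_dp) {
    return n * t_dp;
}

/* Pipelined: stages overlap, every cycle pays a latch overhead, and each
 * hazard-causing instruction inserts a fixed number of stall cycles. */
static double t_pipelined(double n, double t_dp, int k,
                          double t_latch, double hazard_frac, int stalls) {
    double cycle = t_dp / k + t_latch;
    return (n + (k - 1) + hazard_frac * n * stalls) * cycle;
}

/* Hazard fraction at which both variants need the same time. */
static double break_even_hazard(double n, double t_dp, int k,
                                double t_latch, int stalls) {
    double cycle = t_dp / k + t_latch;
    return (n * t_dp / cycle - n - (k - 1)) / (n * stalls);
}

int main(void) {
    double n = 500.0, t_dp = 5.0, t_latch = 1.2;   /* assumed values, in ns */
    int k = 5, stalls = 3;
    printf("non-pipelined:            %.0f ns\n", t_unpipelined(n, t_dp));
    printf("pipelined, no hazards:    %.0f ns\n",
           t_pipelined(n, t_dp, k, t_latch, 0.0, stalls));
    printf("break-even hazard share:  %.2f\n",
           break_even_hazard(n, t_dp, k, t_latch, stalls));
    return 0;
}
```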

2019 ◽  
Vol 8 (2S8) ◽  
pp. 1463-1468

Software program optimization for improved execution speed can be achieved by modifying the program. Programs are usually written in high-level languages and then translated into low-level assembly language. More thorough optimization and performance analysis can be performed on the low-level code than on the high-level language. Optimization improvement is measured as the difference in program execution performance. The several methods available for measuring program performance are classified into static and dynamic approaches. This paper presents an alternative method of measuring code performance statically that is more accurate than the commonly used code analysis metrics. The proposed new metrics are designed to expose the effectiveness of optimizations performed on code, specifically loop unrolling. Loop unrolling is used to demonstrate the increased accuracy of the proposed metrics. The results of the study show that measuring Instructions Performed and Instruction Latency is a more accurate static metric than Instruction Count and, consequently, the metrics based on it.
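The following C fragment is a hypothetical illustration (not taken from the paper) of why static Instruction Count can mislead: unrolling a loop by four enlarges the static code, yet fewer instructions are actually performed at run time because most of the increment/compare/branch overhead disappears.

```c
#include <stddef.h>

/* A metric based only on static Instruction Count rates the unrolled version
 * worse, since it contains more instructions; counting Instructions Performed
 * (weighted by latency) rates it better, since three out of every four loop
 * increment/compare/branch sequences are never executed. */

void scale(float *a, const float *b, float k, size_t n) {
    for (size_t i = 0; i < n; i++)      /* one mul/store plus loop overhead per element */
        a[i] = k * b[i];
}

void scale_unrolled4(float *a, const float *b, float k, size_t n) {
    size_t i = 0;
    for (; i + 4 <= n; i += 4) {        /* loop overhead amortized over 4 elements */
        a[i]     = k * b[i];
        a[i + 1] = k * b[i + 1];
        a[i + 2] = k * b[i + 2];
        a[i + 3] = k * b[i + 3];
    }
    for (; i < n; i++)                  /* remainder elements */
        a[i] = k * b[i];
}
```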


2020 ◽  
Vol 12 (2) ◽  
pp. 19-50 ◽  
Author(s):  
Muhammad Siddique ◽  
Shandana Shoaib ◽  
Zahoor Jan

A key aspect of work processes in service sector firms is the interconnection between tasks and performance. Relational coordination can play an important role in coordinating organizational activities, given the high level of interdependence complexity in service sector firms. Research has primarily supported the view that well-devised high performance work systems (HPWS) can intensify organizational performance. There is a growing debate, however, about the mechanism linking HPWS and performance outcomes. Using relational coordination theory, this study examines a model of the effects of subsets of HPWS, such as motivation-, skill- and opportunity-enhancing HR practices, on relational coordination among employees working in reciprocally interdependent job settings. Data were gathered from multiple sources, including managers and employees at the individual, functional and unit levels, to capture their understanding of HPWS and relational coordination (RC) in 218 bank branches in Pakistan. Data analysis via structural equation modelling suggests that HPWS predicted RC among officers at the unit level. The findings of the study contribute to both theory and practice.


Author(s):  
Richard Stone ◽  
Minglu Wang ◽  
Thomas Schnieders ◽  
Esraa Abdelall

Human-robotic interaction systems are increasingly being integrated into industrial, commercial and emergency service agencies. It is critical that human operators understand and trust automation when these systems support, and even make, important decisions. The following study focused on a human-in-the-loop telerobotic system performing a reconnaissance operation. Twenty-four subjects were divided into groups based on level of automation: Low-Level Automation (LLA) and High-Level Automation (HLA). Results indicated a significant difference in hit rate between the low and high levels of control when a permanent error occurred. In the LLA group, the type of error had a significant effect on the hit rate. In general, the high level of automation performed better than the low level of automation, especially when it was more reliable, suggesting that subjects in the HLA group could rely on the automated implementation to perform the task more effectively and more accurately.


Author(s):  
Mark O Sullivan ◽  
Carl T Woods ◽  
James Vaughan ◽  
Keith Davids

As it is appreciated that learning is a non-linear process – implying that coaching methodologies in sport should be accommodative – it is reasonable to suggest that player development pathways should also account for this non-linearity. A constraints-led approach (CLA), predicated on the theory of ecological dynamics, has been suggested as a viable framework for capturing the non-linearity of learning, development and performance in sport. The CLA articulates how skills emerge through the interaction of different constraints (task-environment-performer). However, despite its well-established theoretical roots, there are challenges to implementing it in practice. Accordingly, to help practitioners navigate such challenges, this paper proposes a user-friendly framework that demonstrates the benefits of a CLA. Specifically, to conceptualize the non-linear and individualized nature of learning, and how it can inform player development, we apply Adolph’s notion of learning IN development to explain the fundamental ideas of a CLA. We then exemplify a learning IN development framework, based on a CLA, brought to life in a high-level youth football organization. We contend that this framework can provide a novel approach for presenting the key ideas of a CLA and its powerful pedagogic concepts to practitioners at all levels, informing coach education programs, player development frameworks and learning environment designs in sport.


Author(s):  
Kersten Schuster ◽  
Philip Trettner ◽  
Leif Kobbelt

We present a numerical optimization method to find highly efficient (sparse) approximations for convolutional image filters. Using a modified parallel tempering approach, we solve a constrained optimization that maximizes approximation quality while strictly staying within a user-prescribed performance budget. The results are multi-pass filters where each pass computes a weighted sum of bilinearly interpolated sparse image samples, exploiting hardware acceleration on the GPU. We systematically decompose the target filter into a series of sparse convolutions, trying to find good trade-offs between approximation quality and performance. Since our sparse filters are linear and translation-invariant, they do not exhibit the aliasing and temporal coherence issues that often appear in filters working on image pyramids. We show several applications, ranging from simple Gaussian or box blurs to the emulation of sophisticated Bokeh effects with user-provided masks. Our filters achieve high performance as well as high quality, often providing significant speed-up at acceptable quality even for separable filters. The optimized filters can be baked into shaders and used as a drop-in replacement for filtering tasks in image processing or rendering pipelines.
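As a rough illustration of what one such pass computes, the CPU sketch below forms each output pixel as a weighted sum of a few bilinearly interpolated source samples; the tap offsets and weights are placeholders rather than optimized values, and the actual filters run on the GPU, where the bilinear reads map to hardware texture fetches.

```c
#include <stddef.h>

/* One pass of a sparse filter: each output pixel is a weighted sum of a few
 * bilinearly interpolated source samples at fractional offsets. */

typedef struct { float dx, dy, w; } Tap;   /* placeholder tap description */

static float bilinear(const float *img, int width, int height, float x, float y) {
    if (x < 0) x = 0;
    if (y < 0) y = 0;
    if (x > width - 1)  x = (float)(width - 1);
    if (y > height - 1) y = (float)(height - 1);
    int x0 = (int)x, y0 = (int)y;
    int x1 = x0 + 1 < width  ? x0 + 1 : x0;
    int y1 = y0 + 1 < height ? y0 + 1 : y0;
    float fx = x - x0, fy = y - y0;
    float top = img[y0 * width + x0] * (1 - fx) + img[y0 * width + x1] * fx;
    float bot = img[y1 * width + x0] * (1 - fx) + img[y1 * width + x1] * fx;
    return top * (1 - fy) + bot * fy;
}

void sparse_pass(const float *src, float *dst, int width, int height,
                 const Tap *taps, int ntaps) {
    for (int y = 0; y < height; y++)
        for (int x = 0; x < width; x++) {
            float acc = 0.0f;
            for (int t = 0; t < ntaps; t++)
                acc += taps[t].w * bilinear(src, width, height,
                                            x + taps[t].dx, y + taps[t].dy);
            dst[y * width + x] = acc;
        }
}
```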


2021 ◽  
Vol 13 (12) ◽  
pp. 2342
Author(s):  
Jin-Bong Sung ◽  
Sung-Yong Hong

A new method to design in-orbit synthetic aperture radar operational parameters has been implemented for the Korean Multi-purpose Satellite 6 mission. The method optimizes the pulse repetition frequency when the satellite altitude changes from its nominal value, so that the synthetic aperture radar performance can still satisfy the requirements during in-orbit operation. Other commanding parameters have been designed by trading them off against one another. This paper presents the new optimization method for maintaining synthetic aperture radar performance even in the case of an altitude variation. Design methodologies for determining the operational parameters at the nominal altitude and in orbit, respectively, are presented. In addition, a numerical simulation is presented to validate the proposed optimization and the design methodologies.
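A heavily simplified sketch of the underlying timing problem is given below: as the altitude drifts, the round-trip delay of the swath echo changes, and the pulse repetition frequency must be re-chosen so that the echo window avoids the transmit pulses while still exceeding the Doppler bandwidth. The look angle, velocity, antenna length and pulse durations are assumed values for illustration, not mission parameters, and this is not the paper's algorithm.

```c
#include <math.h>
#include <stdio.h>

#define LIGHT_SPEED 299792458.0

/* The echo from slant range r must arrive after the transmit pulse of some
 * pulse repetition interval (PRI) and finish before the next one. */
static int echo_clear_of_tx(double prf, double r, double tau_tx, double tau_echo) {
    double pri = 1.0 / prf;
    double t = fmod(2.0 * r / LIGHT_SPEED, pri);   /* echo start within a PRI */
    return (t > tau_tx) && (t + tau_echo < pri);
}

int main(void) {
    const double deg = 3.14159265358979 / 180.0;
    double look = 35.0 * deg;                /* assumed look angle */
    double v = 7600.0, ant_len = 5.0;        /* assumed velocity and antenna length */
    double prf_min = 2.0 * v / ant_len;      /* azimuth-ambiguity (Doppler) lower bound */
    double tau_tx = 40e-6, tau_echo = 60e-6; /* assumed pulse and echo durations */

    for (double alt = 500e3; alt <= 520e3; alt += 5e3) {
        double r = alt / cos(look);          /* crude flat-Earth slant range */
        for (double prf = prf_min; prf < 6000.0; prf += 1.0)
            if (echo_clear_of_tx(prf, r, tau_tx, tau_echo)) {
                printf("altitude %6.0f m -> lowest usable PRF %7.1f Hz\n", alt, prf);
                break;
            }
    }
    return 0;
}
```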


2021 ◽  
Vol 22 (12) ◽  
pp. 6592
Author(s):  
Artur Seweryn ◽  
Tomasz Wasilewski ◽  
Anita Bocho-Janiszewska

The article shows that the type and concentration of an inorganic salt translate into the structure of the bulk phase and the performance properties of ecological all-purpose cleaners (APC). A base APC formulation was developed. Thereafter, two types of salt (sodium chloride and magnesium chloride) were added at various concentrations to obtain different structures in the bulk phase. The salt addition resulted in the formation of spherical micelles and, upon addition of more electrolyte, of aggregates having a lamellar structure. The formulations had constant viscosities (approx. 500 mPa·s), comparable to those of commercial products. Essential physical-chemical and performance properties of the four formulations varying in salt type and concentration were evaluated. It was found that the addition of the magnesium salt resulted in more favorable characteristics owing to the surface activity of the formulations, which translated into adequately high wettability of the investigated hydrophobic surfaces and the ability to emulsify fat. A decreasing relationship was observed in foaming properties: higher salt concentrations led to worse foaming properties and foam stability of the solutions. For the magnesium chloride composition, the effect was significantly more pronounced compared with the sodium chloride-based formulations. As far as safety of use is concerned, the formulations in which the magnesium salt was used caused much less irritation compared with the other investigated formulations. The zein value was observed to decrease with increasing concentration of the given type of salt in the composition.


2003 ◽  
Vol 12 (03) ◽  
pp. 333-351 ◽  
Author(s):  
B. Mesman ◽  
Q. Zhao ◽  
N. Busa ◽  
K. Leijten-Nowak

In current System-on-Chip (SoC) design, the main engineering trade-off concerns hardware efficiency and design effort. Hardware efficiency traditionally means cost versus performance (in high-volume electronics), but recently energy consumption has emerged as a dominant criterion, even in products without batteries. The most effective way to increase HW efficiency is to exploit application characteristics in the HW. The traditional view of HW design, however, tends to consider it a time-consuming and tedious task. Given the current shortage of HW designers and the pressure of time-to-market, there is clearly a desire to fine-balance the merits and effort of tuning the HW to the application. This paper discusses methods and tool support for HW application-tuning at different levels of granularity. Furthermore, we treat several ways of applying reconfigurable HW to allow both silicon reuse and the ability to tune the HW to the application after fabrication. Our main focus is a methodology for application-tuning the architecture of DSP datapaths. Our primary contribution is reusing and generalizing this methodology to application-tune DSP instruction sets, and providing tool support for efficient compilation for these instruction sets. Furthermore, we propose an architecture for a reconfigurable instruction decoder, enabling application-tuning of the instruction set after fabrication.
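As a purely illustrative sketch of the reconfigurable-decoder idea (not the architecture proposed in the paper), the fragment below models the decoder as a writable opcode-to-control-word table, so the effective instruction set can be re-tuned after fabrication by loading a new table.

```c
#include <stdint.h>
#include <string.h>

/* Hypothetical control word driving a simple datapath. */
#define OPCODES 16

typedef struct {
    uint8_t alu_op;      /* which datapath operation to issue */
    uint8_t src_sel;     /* operand routing */
    uint8_t writeback;   /* write result to register file? */
} ControlWord;

static ControlWord decode_table[OPCODES];

/* "Reconfiguration": load a new decode table, e.g. from configuration memory,
 * to remap opcodes to different datapath behavior for a given application. */
void load_decoder(const ControlWord *table) {
    memcpy(decode_table, table, sizeof(decode_table));
}

/* Decode stage: a plain table lookup instead of fixed decode logic. */
ControlWord decode(uint8_t opcode) {
    return decode_table[opcode & (OPCODES - 1)];
}
```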


1970 ◽  
Vol 116 (530) ◽  
pp. 39-43 ◽  
Author(s):  
F. M. McPherson ◽  
Valerie Barden ◽  
A. Joan Hay ◽  
D. W. Johnstone ◽  
A. W. Kushner

Affective flattening is a disorder of emotional expression, of which a good definition is 'a gross lack of emotional response to the given situation' (Fish, 1962). It is a clinical sign whose assessment depends upon the clinician's interpretation of the patient's facial expression, tone of voice and content of talk (Harris & Metcalfe, 1956). Although these are subtle cues, it has been shown that experienced clinicians can assess the severity of affective flattening with a high level of inter-rater agreement (Miller et al., 1953; Harris & Metcalfe, 1956; Wing, 1961; Dixon, 1968). The disorder is usually associated with a diagnosis of schizophrenia, although it may occur in other conditions, such as the organic psychoses (Bullock et al., 1951).

