Library Faculty and Instructional Assessment: Creating a Culture of Assessment through the High Performance Programming Model of Organizational Transformation

2013
Vol 5 (3)
pp. 177-188
Author(s):  
Meredith Farkas
Lisa Hinchliffe

In an environment in which libraries increasingly need to demonstrate their value to faculty and administrators, providing evidence of the library’s contribution to student learning through its instruction program is critical. However, building a culture of assessment can be a challenge, even if librarians recognize its importance. In order to lead change, coordinators of library instruction at institutions where librarians are also tenure-track faculty must build trust and collaboration, lead through influence, and garner support from administration for assessment initiatives. The purpose of this paper is to explore what it takes to build a culture of assessment in academic libraries where librarians are faculty through the High Performance Programming model of organizational change. The guidelines for building a culture of assessment will be exemplified by case studies at the authors’ libraries where instruction coordinators are using collaboration to build a culture of assessment with their colleagues.

Author(s):  
Venkat N Gudivada
Jagadeesh Nandigam
Jordan Paris

The availability of multiprocessor and multi-core chips and GPU accelerators at commodity prices is making personal supercomputers a reality. High-performance programming models help apply this computational power to analyzing and visualizing massive datasets. Problems which until recently required multi-million-dollar supercomputers can now be solved on personal supercomputers. However, specialized programming techniques are needed to harness this power. This chapter provides an overview of approaches to programming high-performance computers (HPC). The programming paradigms illustrated include OpenMP, OpenACC, CUDA, OpenCL, the shared-memory-based concurrent programming model of Haskell, MPI, MapReduce, and the message-based distributed computing model of Erlang. The goal is to provide enough detail on the various paradigms to help the reader understand their fundamental differences and similarities. Example programs are chosen to illustrate the salient concepts that define these paradigms. The chapter concludes by outlining research directions and future trends in programming high-performance computers.
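The paradigms listed above are easiest to grasp through a small program. As a minimal, hypothetical sketch (not one of the chapter's own examples), the message-passing model can be illustrated with mpi4py, the Python binding for MPI: each process sums its own slice of the data and a collective reduction combines the partial sums on rank 0.

```python
# Hypothetical illustration of the message-passing paradigm via mpi4py.
# Each rank sums a slice of the index range; a collective reduction then
# combines the partial sums on rank 0.
from mpi4py import MPI
import numpy as np

comm = MPI.COMM_WORLD
rank = comm.Get_rank()
size = comm.Get_size()

n = 1_000_000
chunk = n // size
# Each process works only on its own slice of the data.
local = np.arange(rank * chunk, (rank + 1) * chunk, dtype=np.float64)
local_sum = local.sum()

# Collective reduction: partial sums are combined on the root rank.
total = comm.reduce(local_sum, op=MPI.SUM, root=0)
if rank == 0:
    print("global sum:", total)
```

A typical launch would be something like `mpiexec -n 4 python partial_sums.py` (the filename is illustrative).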


Author(s):  
Breno A. de Melo Menezes
Nina Herrmann
Herbert Kuchen
Fernando Buarque de Lima Neto

Parallel implementations of swarm intelligence algorithms such as ant colony optimization (ACO) have been widely used to shorten the execution time when solving complex optimization problems. When targeting a GPU environment, developing efficient parallel versions of such algorithms in CUDA can be a difficult and error-prone task even for experienced programmers. To overcome this issue, the parallel programming model of algorithmic skeletons simplifies parallel programs by abstracting away low-level features. This is realized by defining common programming patterns (e.g. map, fold and zip) that are later converted to efficient parallel code. In this paper, we show how algorithmic skeletons formulated in the domain-specific language Musket can cope with the development of a parallel implementation of ACO and how that compares to a low-level implementation. Our experimental results show that Musket suits the development of ACO. Besides making it easier for the programmer to deal with the parallelization aspects, Musket generates high-performance code with execution times similar to those of low-level implementations.
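As a rough illustration of the skeleton idea (a plain Python sketch, not Musket's actual syntax or its generated CUDA), the map, zip and fold patterns below are the kind of building blocks a skeleton framework can later translate to efficient parallel code; the toy use case computes ACO-style selection probabilities from pheromone and heuristic values.

```python
# Hypothetical sketch of the algorithmic-skeleton idea. The user composes
# map/zip/fold patterns; a skeleton framework is free to execute each
# pattern in parallel (e.g. on a GPU) without the user writing CUDA.
from functools import reduce

def sk_map(f, xs):          # "map" skeleton: apply f to every element
    return [f(x) for x in xs]

def sk_zip(f, xs, ys):      # "zip" skeleton: combine two collections element-wise
    return [f(x, y) for x, y in zip(xs, ys)]

def sk_fold(f, init, xs):   # "fold" skeleton: reduce a collection to one value
    return reduce(f, xs, init)

# ACO-style example: turn pheromone and heuristic values into selection
# probabilities for the candidate moves of one ant (values are illustrative).
pheromone = [0.8, 0.1, 0.4]
heuristic = [1.0, 2.5, 0.7]
alpha, beta = 1.0, 2.0

attractiveness = sk_zip(lambda t, e: (t ** alpha) * (e ** beta), pheromone, heuristic)
total = sk_fold(lambda a, b: a + b, 0.0, attractiveness)
probabilities = sk_map(lambda a: a / total, attractiveness)
print(probabilities)
```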


2015
Vol 20 (4)
pp. 424-442
Author(s):  
Mariella Miraglia
Guido Alessandri
Laura Borgogni

Purpose – Previous literature has recognized the variability of job performance, calling attention to inter-individual differences in performance change. Building on Murphy’s (1989) theoretical model of performance, the purpose of this paper is to verify the existence of two distinct classes of performance, reflecting stable and increasing trends, and to investigate which personal conditions prompt the inclusion of individuals in one class rather than the other. Design/methodology/approach – Overall job performance was obtained from supervisory ratings over four consecutive years for 410 professionals of a large Italian company undergoing significant reorganization. These objective data were merged with employees’ organizational tenure and self-efficacy. Growth Mixture Modeling was used. Findings – Two main groups were identified: the first started at higher levels of performance and showed a stable trajectory over time (stable class); the second started at lower levels and reported an increasing trajectory (increasing class). Employees with stronger efficacy beliefs and lower tenure were more likely to belong to the stable class. Originality/value – Through a powerful longitudinal database, the nature, structure and inter-individual differences of job performance over time are clarified. The study extends Murphy’s (1989) model, showing how transition stages in job performance may also occur as a result of organizational transformation. Moreover, it demonstrates the essential role of self-efficacy in maintaining high performance levels over time.


2018
Vol 6 (9)
pp. 116-122
Author(s):  
Godwin Nosakhare

Today’s organizations are confronted with the deepest downturn since the Great Depression and the Second World War. These disruptions have made it necessary for organizations to consider organizational change. The objective of this study is to examine the impact of strategic change on the organizational transformation process in Nigerian organizations, using selected companies from the Nigerian telecommunications industry as a case study. Data were collected through questionnaires, the sample size was determined using the Taro Yamane technique, and the formulated hypothesis was tested using the Z-test statistic. The results of the hypothesis test showed that strategic change in the Nigerian telecommunications industry leads to successful organizational transformation. On this basis, the researcher concluded that organizational change can usher in a host of unwelcome yet unavoidable side effects and can create the need to improve productivity, raise morale or redefine the culture of the organization. Against this backdrop, the researcher recommended that employees be allowed to take an active part in the meetings and workshops where the envisaged changes are first discussed, and that management solicit constant feedback from staff throughout the process and take their views into account when restructuring.
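For readers unfamiliar with the sampling technique named above, the Taro Yamane formula determines the sample size n from the population size N and a tolerated margin of error e as n = N / (1 + N e^2). The sketch below uses hypothetical numbers, not the study's actual population.

```python
# Minimal sketch of the Taro Yamane sample-size formula the study cites:
# n = N / (1 + N * e^2). The population figure below is hypothetical and
# not taken from the paper.
import math

def yamane_sample_size(population: int, margin_of_error: float = 0.05) -> int:
    return math.ceil(population / (1 + population * margin_of_error ** 2))

print(yamane_sample_size(1200))  # -> 300 respondents at a 5% margin of error
```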


Author(s):  
Oscar D. Marcenaro-Gutierrez
Sandra Gonzalez-Gallardo
Mariano Luque

In this article, we carry out a combined econometric and multiobjective analysis using data from a representative sample of Andalusian schools. In particular, four econometric models are estimated in which the students’ academic performance (scores in math and reading, and the percentage of students reaching a certain threshold in both subjects, respectively) is regressed against the satisfaction of students with different aspects of the teaching-learning process. From these estimates, four objective functions are defined and simultaneously maximized, subject to a set of constraints obtained by analyzing dependencies between the explanatory variables. This multiobjective programming model is intended to optimize the students’ academic performance as a function of the students’ satisfaction. To solve this problem we use a decomposition-based evolutionary multiobjective algorithm called Global WASF-GA with different scalarizing functions, which allows us to generate an approximation of the Pareto optimal front. In general, the results show the importance of promoting respect and closer interaction between students and teachers as a way to increase both the average performance of the students and the proportion of high-performing students.
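As a hypothetical sketch of the scalarization step (one common form of achievement scalarizing function; the exact formulation and weight-vector handling used by Global WASF-GA in the paper may differ), each weight vector turns several objectives into a single value measuring distance to a reference point, and minimizing that value for many weight vectors yields an approximation of the Pareto optimal front.

```python
# Hypothetical sketch of an augmented achievement scalarizing function of the
# kind decomposition-based algorithms rely on. Maximization objectives are
# negated so that smaller scalarized values are better.
def achievement_scalarizing(objectives, reference, weights, rho=1e-4):
    """Weighted Chebyshev distance to a reference point plus a small
    augmentation term that discourages weakly Pareto-optimal solutions."""
    terms = [w * (f - q) for f, q, w in zip(objectives, reference, weights)]
    return max(terms) + rho * sum(terms)

# Toy example: two candidate solutions scored against a reference point.
# Values are illustrative, not the four school-performance objectives.
f_a = [-0.70, -0.55]          # negated maximization objectives
f_b = [-0.60, -0.65]
reference = [-0.80, -0.80]    # aspiration levels, also negated
weights = [0.5, 0.5]

print(achievement_scalarizing(f_a, reference, weights))
print(achievement_scalarizing(f_b, reference, weights))
```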


Author(s):  
Javier Conejero
Sandra Corella
Rosa M Badia
Jesus Labarta

Task-based programming has proven to be a suitable model for high-performance computing (HPC) applications. Different implementations have been good demonstrators of this fact and have promoted the acceptance of task-based programming in the OpenMP standard. Furthermore, in recent years, Apache Spark has gained wide popularity in business and research environments as a programming model for addressing emerging big data problems. COMP Superscalar (COMPSs) is a task-based environment that tackles distributed computing (including Clouds) and is a good alternative as a task-based programming model for big data applications. This article describes why we consider task-based programming models a good approach for big data applications. The article includes a comparison of Spark and COMPSs in terms of architecture, programming model, and performance. It focuses on the differences that both frameworks have in structural terms, on their programmability interfaces, and on their efficiency by means of three widely known benchmarking kernels: Wordcount, Kmeans, and Terasort. These kernels enable the evaluation of the most important functionalities of both programming models and exercise different workflows and conditions. The main results of this comparison are (1) COMPSs is able to extract the inherent parallelism from the user code with minimal coding effort, as opposed to Spark, which requires existing algorithms to be adapted and rewritten by explicitly using its predefined functions, (2) COMPSs achieves better performance than Spark, and (3) COMPSs has been shown to scale better than Spark in most cases. Finally, we discuss the advantages and disadvantages of both frameworks, highlighting the differences that make them unique, thereby helping to choose the right framework for each particular objective.
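To make the programmability contrast concrete, the snippet below is a hypothetical PySpark version of the Wordcount kernel (not the benchmark code from the article): the algorithm has to be expressed through Spark's predefined operations such as flatMap, map and reduceByKey, whereas COMPSs annotates ordinary sequential tasks and extracts the parallelism automatically.

```python
# Hypothetical PySpark Wordcount, included only to illustrate how Spark
# requires the algorithm to be rewritten in terms of its predefined
# operations. This is not the benchmark code used in the article.
from operator import add
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("wordcount-sketch").getOrCreate()
sc = spark.sparkContext

counts = (
    sc.textFile("hdfs:///data/input.txt")       # input path is illustrative
      .flatMap(lambda line: line.split())       # split lines into words
      .map(lambda word: (word, 1))              # emit (word, 1) pairs
      .reduceByKey(add)                         # sum the counts per word
)
counts.saveAsTextFile("hdfs:///data/wordcount-out")
spark.stop()
```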


Author(s):  
Olfa Hamdi-Larbi
Ichrak Mehrez
Thomas Dufaud

Many applications in scientific computing process very large sparse matrices on parallel architectures. The work presented in this paper is part of a project whose general aim is to develop an auto-tuning system for selecting the best matrix compression format in the context of high-performance computing. The target smart system can automatically select the best compression format for a given sparse matrix, the numerical method processing this matrix, a parallel programming model and a target architecture. Hence, this paper describes the design and implementation of the proposed concept. We consider a case study consisting of a numerical method reduced to the sparse matrix-vector product (SpMV), a set of compression formats, the data-parallel programming model, and a distributed multi-core platform as the target architecture. This study allows us to extract a set of important novel metrics and parameters that are specific to the considered programming model. These metrics are used as input to a machine-learning algorithm that predicts the best matrix compression format. An experimental study targeting a distributed multi-core platform and processing random and real-world matrices shows that our system improves the accuracy of the machine-learning prediction by up to 7% on average.
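A minimal sketch of the decision the auto-tuner faces, using SciPy and two standard compression formats (CSR and CSC here; the formats studied in the paper may differ): the same sparse matrix supports the SpMV kernel y = Ax under either layout, and which one is fastest depends on the matrix structure, the kernel and the platform, which is what the machine-learning model is trained to predict.

```python
# Hypothetical illustration of storing one sparse matrix in two compression
# formats and running the SpMV kernel on each. The sparsity level and sizes
# are illustrative, not taken from the paper's matrix collection.
import numpy as np
import scipy.sparse as sp

rng = np.random.default_rng(0)
A_dense = rng.random((1000, 1000))
A_dense[A_dense < 0.99] = 0.0          # keep roughly 1% nonzeros

A_csr = sp.csr_matrix(A_dense)         # Compressed Sparse Row layout
A_csc = sp.csc_matrix(A_dense)         # Compressed Sparse Column layout
x = rng.random(1000)

y_csr = A_csr @ x                      # SpMV with the CSR layout
y_csc = A_csc @ x                      # SpMV with the CSC layout
assert np.allclose(y_csr, y_csc)       # same result, different storage/traversal
```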

