State-of-the-Art Instructional Computing Systems that Afford Instruction and Bootstrap Research

1992 ◽  
pp. 349-380 ◽  
Author(s):  
Philip H. Winne


2018 ◽
Vol 12 (02) ◽  
pp. 191-213
Author(s):  
Nan Zhu ◽  
Yangdi Lu ◽  
Wenbo He ◽  
Hua Yu ◽  
Jike Ge

The sheer volume of content generated by today’s Internet services is stored in the cloud. Effective indexing is essential for delivering this content to users on demand. Indexing methods that associate user-generated metadata with the content are vulnerable to inaccuracies caused by low-quality metadata. While content-based indexing does not depend on error-prone metadata, state-of-the-art research focuses on developing descriptive features and overlooks the system-oriented considerations of incorporating these features into practical cloud computing systems. We propose an Update-Efficient and Parallel-Friendly content-based indexing system, called Partitioned Hash Forest (PHF). PHF incorporates state-of-the-art content-based indexing models and multiple system-oriented optimizations. It contains an approximate content-based index and leverages the hierarchical memory system to support a high volume of updates. Additionally, content-aware data partitioning and a lock-free concurrency management module enable parallel processing of concurrent user requests. We evaluate PHF in terms of indexing accuracy and system efficiency by comparing it with the state-of-the-art content-based indexing algorithm and its variants. We achieve significantly better accuracy with less resource consumption, around 37% faster update processing, and up to 2.5× throughput speedup on a multi-core platform compared to other parallel-friendly designs.
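
As an aside, the core idea of content-addressed, partition-local indexing can be illustrated with a toy sketch. The Python snippet below is only a schematic illustration under assumed names (PartitionedHashIndex, insert, lookup); it is not the Partitioned Hash Forest design itself, which additionally relies on approximate indexing, a hierarchical memory layout, and lock-free concurrency.

    import hashlib
    from collections import defaultdict

    class PartitionedHashIndex:
        """Toy content-based index split into hash partitions.

        Illustrative sketch of content-addressed partitioning only,
        not the Partitioned Hash Forest described in the abstract.
        """
        def __init__(self, num_partitions=16):
            self.partitions = [defaultdict(list) for _ in range(num_partitions)]

        def _key(self, feature: bytes) -> int:
            # The hash of the content feature decides the partition, so updates
            # for unrelated content land in independent structures.
            return int.from_bytes(hashlib.sha1(feature).digest()[:4], "big")

        def insert(self, feature: bytes, object_id: str) -> None:
            h = self._key(feature)
            self.partitions[h % len(self.partitions)][h].append(object_id)

        def lookup(self, feature: bytes) -> list:
            h = self._key(feature)
            return self.partitions[h % len(self.partitions)].get(h, [])

    idx = PartitionedHashIndex()
    idx.insert(b"feature-vector-bytes", "video-42")
    print(idx.lookup(b"feature-vector-bytes"))   # -> ['video-42']

Because the partition is derived from the content itself, updates for unrelated content naturally spread across independent structures, which is the intuition behind the content-aware partitioning described above.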


Author(s):  
JOST BERTHOLD ◽  
HANS-WOLFGANG LOIDL ◽  
KEVIN HAMMOND

Abstract. Over time, several competing approaches to parallel Haskell programming have emerged. Different approaches support parallelism at various scales, ranging from small multicores to massively parallel high-performance computing systems. They also provide varying degrees of control, ranging from completely implicit approaches to ones providing full programmer control. Most current designs assume a shared memory model at the programmer, implementation, and hardware levels. This is, however, becoming increasingly divorced from the reality at the hardware level. It also imposes significant unwanted runtime overheads in the form of garbage-collection synchronisation, etc. What is needed is an easy way to abstract over the implementation and hardware levels, while presenting a simple parallelism model to the programmer. The PArallEl shAred Nothing runtime system design aims to provide a portable and high-level shared-nothing implementation platform for parallel Haskell dialects. It abstracts over major issues such as work distribution and data serialisation, consolidating existing, successful designs into a single framework. It also provides an optional virtual shared-memory programming abstraction for (possibly) shared-nothing parallel machines, such as modern multicore/manycore architectures or cluster/cloud computing systems. It builds on, unifies, and extends existing, well-developed support for shared-memory parallelism that is provided by the widely used GHC Haskell compiler. This paper summarises the state of the art in shared-nothing parallel Haskell implementations, introduces the PArallEl shAred Nothing abstractions, shows how they can be used to implement three distinct parallel Haskell dialects, and demonstrates that good scalability can be obtained on recent parallel machines.
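
For readers unfamiliar with the shared-nothing model the abstract refers to, the following sketch (in Python, unrelated to the PArallEl shAred Nothing runtime or to Haskell) illustrates the general idea: workers own their data and exchange only serialised messages, with no shared heap.

    from multiprocessing import Process, Queue

    def worker(inbox: Queue, outbox: Queue) -> None:
        # Each worker computes purely on data it receives; nothing is shared.
        for chunk in iter(inbox.get, None):        # stop on sentinel None
            outbox.put(sum(x * x for x in chunk))

    if __name__ == "__main__":
        inbox, outbox = Queue(), Queue()
        procs = [Process(target=worker, args=(inbox, outbox)) for _ in range(4)]
        for p in procs:
            p.start()
        chunks = [list(range(i, i + 1000)) for i in range(0, 8000, 1000)]
        for chunk in chunks:
            inbox.put(chunk)                       # work distribution via messages
        for _ in procs:
            inbox.put(None)                        # one sentinel per worker
        print(sum(outbox.get() for _ in chunks))
        for p in procs:
            p.join()

The point of the sketch is simply that work distribution and data serialisation become explicit once there is no shared memory, which is exactly what the runtime design described above aims to abstract away from the programmer.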


Acta Numerica ◽  
2012 ◽  
Vol 21 ◽  
pp. 379-474 ◽  
Author(s):  
J. J. Dongarra ◽  
A. J. van der Steen

This article describes the current state of the art of high-performance computing systems and attempts to shed light on near-future developments that might prolong the steady growth in speed of such systems, which has been one of their most remarkable characteristics. We review the different ways devised to speed up such systems, with regard to both their components and their architecture. In addition, we discuss the requirements for software that can take advantage of existing and future architectures.


2021 ◽  
Author(s):  
Paul F. Baumeister ◽  
Lars Hoffmann

Abstract. Remote sensing observations in the mid-infrared spectral region (4–15 μm) play a key role in monitoring the composition of the Earth's atmosphere. Mid-infrared spectral measurements from satellite, aircraft, balloon, and ground-based instruments provide information on pressure and temperature, trace gases, aerosols, and clouds. As state-of-the-art instruments deliver vast amounts of data on a global scale, however, their analysis may require advanced methods and high-performance computing capacities for data processing. A large amount of computing time is usually spent on evaluating the radiative transfer equation. Line-by-line calculations of infrared radiative transfer are considered the most accurate, but they are also the most time-consuming. Here, we discuss the emissivity growth approximation (EGA), which can accelerate infrared radiative transfer calculations by several orders of magnitude compared with line-by-line calculations. As future satellite missions will likely depend on exascale computing systems to process their observational data in due time, we think that the utilization of graphics processing units (GPUs) for radiative transfer calculations and satellite retrievals is a logical next step in further accelerating and improving the efficiency of data processing. Focusing on the EGA method, we first discuss the implementation of infrared radiative transfer calculations on GPU-based computing systems in detail. Second, we discuss distinct features of our implementation of the EGA method, in particular regarding memory needs, performance, and scalability on state-of-the-art GPU systems. As we found our implementation to be about an order of magnitude more energy-efficient on GPU-accelerated architectures than on CPUs, we conclude that our approach provides various future opportunities for this high-throughput problem.
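
As a rough illustration of the emissivity growth approximation mentioned here, the Python sketch below shows the general shape of a layer-by-layer recursion: the emissivity accumulated along the line of sight is mapped back to an effective absorber amount through a precomputed table, grown by the next layer's absorber, and the radiance is incremented by the Planck radiance weighted with the emissivity increment. The table, units, and absorption curve are placeholders for illustration only, not the implementation described in the paper.

    import numpy as np

    # Hypothetical precomputed emissivity table for a homogeneous path:
    # absorber amounts u_tab and corresponding band emissivities eps_tab.
    u_tab = np.logspace(-4, 2, 200)          # absorber amount [arbitrary units]
    eps_tab = 1.0 - np.exp(-0.5 * u_tab)     # stand-in for a tabulated curve

    def ega_radiance(planck, du):
        """Schematic emissivity-growth recursion along one line of sight.

        planck : per-layer Planck radiances B_i (observer -> far end)
        du     : per-layer absorber amounts
        """
        radiance = 0.0
        eps_prev = 0.0
        for B_i, du_i in zip(planck, du):
            # Effective absorber amount reproducing the accumulated emissivity
            u_eff = np.interp(eps_prev, eps_tab, u_tab)
            # Grow the emissivity by this layer's absorber amount
            eps_new = np.interp(u_eff + du_i, u_tab, eps_tab)
            # Layer contribution weighted by the emissivity increment
            radiance += B_i * (eps_new - eps_prev)
            eps_prev = eps_new
        return radiance

    # Toy usage: three layers seen from the observer outwards.
    print(ega_radiance(planck=[1.0, 0.8, 0.6], du=[0.1, 0.2, 0.4]))

Since each line of sight only needs table lookups and a short running sum, many lines of sight can be evaluated independently, which is what makes the method attractive for GPU execution.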


2012 ◽  
Vol 4 (1) ◽  
pp. 52-66 ◽  
Author(s):  
Junaid Arshad ◽  
Paul Townend ◽  
Jie Xu ◽  
Wei Jie

The evolution of modern computing systems has led to the emergence of Cloud computing. Cloud computing facilitates the on-demand establishment of dynamic, large-scale, flexible, and highly scalable computing infrastructures. However, as with any other emerging technology, security is critical to the widespread adoption of Cloud computing. This paper presents the state of the art of Cloud computing along with its different deployment models. The authors also describe various security challenges that can affect an organization’s decision to adopt Cloud computing. Finally, the authors list recommendations to mitigate these challenges. Such a review of the state of the art in Cloud computing security can serve as a useful barometer for an organization making an informed decision about Cloud computing adoption.


2021 ◽  
Vol 66 (2 supplement) ◽  
pp. 181-190
Author(s):  
Martina Properzi

" In this article I will address the issue of the embodiment of computing sys-tems from the point of view distinctive of the so-called Unconventional Computation, focusing on the paradigm known as Mor-phological Computation. As a first step, I will contextualize Morphological Computa-tion within the disciplinary field of Embod-ied Artificial Intelligence: broadly con-ceived, Embodied Artificial Intelligence may be characterized as embracing both conventional and unconventional ap-proaches to the artificial emulation of natu-ral intelligence. Morphological Computa-tion stands out from other paradigms of unconventional Embodied Artificial Intelli-gence in that it discloses a new, closer kind of connection between embodiment and computation. I will further my investigation by briefly reviewing the state-of-the-art in Morphological Computation: attention will be given to a very recent trend, whose core concept is that of “organic reconfigu-rability”. In this direction, as a final step, two advanced cases of study of organic or living morphological computers will be pre-sented and discussed. The prospect is to shed some light on our title question: what progress has been made in understanding the embodiment of computing systems? Keywords: Embodied Artificial Intelligence; Morphological Computation; Reservoir Compu-ting Systems; Organic Reconfigurability; 3D Bio-Printed Synthetic Corneas; Xenobots "


Author(s):  
Al Geist ◽  
Daniel A Reed

Commodity clusters revolutionized high-performance computing when they first appeared two decades ago. As scale and complexity have grown, new challenges have emerged in reliability and systemic resilience, energy efficiency and optimization, and software complexity, suggesting the need to re-evaluate current approaches. This paper reviews the state of the art and reflects on some of the challenges likely to be faced when building trans-petascale computing systems, using insights and perspectives drawn from operational experience and community debates.


2019 ◽  
Vol 10 ◽  
pp. 1
Author(s):  
Raquel O. Prates ◽  
Heloísa Candello

This special issue of JIS presents extended versions of the best full papers of the Brazilian Symposium on Human Factors in Computing Systems (IHC 2018). The authors of the seven papers selected as best papers from the 42 presented at the conference were invited to submit an extended version of their work. It is worth noting that the extended versions contain original content or new contributions compared to their original IHC 2018 versions, and they were submitted to a new and independent review process. With its theme “Interaction, Culture, and Creativity”, IHC 2018 highlighted the influence and importance of cultural issues in the design of computer systems, as well as the need to be creative and innovative in designing new forms of interaction, design, and system evaluation. The papers presented in this issue tap into IHC 2018’s theme and represent the broad range of topics being investigated in HCI in Brazil, as well as bringing original and relevant contributions to the state of the art in each of their topics.


2021 ◽  
Author(s):  
Abdulqader Mahmoud ◽  
Florin Ciubotaru ◽  
Frederic Vanderveken ◽  
Andrii V. Chumak ◽  
Said Hamdioui ◽  
...  

This paper provides a tutorial overview of recent vigorous efforts to develop computing systems based on spin waves instead of charges and voltages. Spin-wave computing can be considered a subfield of spintronics, which uses magnetic excitations for computation and memory applications. The tutorial combines backgrounds in spin-wave and device physics as well as circuit engineering to create synergies between the physics and electrical engineering communities and to advance the field towards practical spin-wave circuits. After an introduction to magnetic interactions and spin-wave physics, all relevant basic aspects of spin-wave computing and individual spin-wave devices are reviewed. The focus is on spin-wave majority gates, as they are the most prominently pursued device concept. Subsequently, we discuss the current status and the challenges of combining spin-wave gates to obtain circuits and ultimately computing systems, considering essential aspects such as gate interconnection, logic-level restoration, input-output consistency, and fan-out achievement. We argue that spin-wave circuits need to be embedded in conventional CMOS circuits to obtain complete functional hybrid computing systems. The state of the art of benchmarking such hybrid spin-wave–CMOS systems is reviewed and the current challenges to realizing such systems are discussed. The benchmark indicates that hybrid spin-wave–CMOS systems promise ultralow-power operation and may ultimately outperform conventional CMOS circuits in terms of the power-delay-area product. Current challenges to achieving this goal include low-power signal restoration in spin-wave circuits as well as efficient spin-wave transducers.
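
For orientation, the logic function realised by a three-input majority gate is simple to state in code. The Python sketch below is independent of any spin-wave implementation; it only illustrates the majority function and why fixing one input recovers AND and OR, which is what makes majority logic a practical primitive.

    def majority(a: int, b: int, c: int) -> int:
        """Three-input majority over bits: 1 iff at least two inputs are 1."""
        return (a & b) | (b & c) | (a & c)

    for a in (0, 1):
        for b in (0, 1):
            # Fixing the third input turns the majority gate into AND or OR,
            # so majority plus inversion suffices to build arbitrary logic.
            assert majority(a, b, 0) == (a & b)
            assert majority(a, b, 1) == (a | b)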


2021 ◽  
Vol 20 (5s) ◽  
pp. 1-21
Author(s):  
Hui Chen ◽  
Zihao Zhang ◽  
Peng Chen ◽  
Xiangzhong Luo ◽  
Shiqing Li ◽  
...  

Heterogeneous computing systems (HCSs), which consist of various processing elements (PEs) that vary in their processing ability, usually rely on a network-on-chip (NoC) to interconnect their components. Emerging point-to-point NoCs, which support single-cycle multi-hop transmission, reduce or eliminate the dependence of latency on distance, addressing the scalability concern raised by high latency for long-distance transmission and enlarging the design space of the routing algorithm to include non-shortest paths. For such point-to-point NoC-based HCSs, resource management strategies managed by compilers, schedulers, or controllers, e.g., mapping and routing, are complicated for the following reasons: (i) Due to the heterogeneity, mapping and routing need to optimize computation and communication concurrently (for homogeneous computing systems, only communication). (ii) Conducting mapping and routing consecutively cannot minimize the schedule length in most cases, since the PEs with high processing ability may be located in crowded areas and suffer from high resource-contention overhead. (iii) Since changing the mapping of one task reconstructs the whole routing design space, exploring the mapping and routing design space is challenging. Therefore, in this work, we propose MARCO, a mapping and routing co-optimization framework, to decrease the schedule length of applications on point-to-point NoC-based HCSs. Specifically, we revise tabu search to explore the design space and evaluate the quality of mapping and routing. An advanced reinforcement learning (RL) algorithm, i.e., advantage actor-critic, is adopted to efficiently compute paths. We perform extensive experiments on various real applications, which demonstrate that MARCO achieves a remarkable performance improvement in terms of schedule length (+44.94% ∼ +50.18%) when compared with the state-of-the-art mapping and routing co-optimization algorithm for homogeneous computing systems. We also compare MARCO with different combinations of state-of-the-art mapping and routing approaches.
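
To make the search strategy more concrete, the snippet below sketches a generic tabu search over task-to-PE mappings in Python. The cost function, tenure, and neighbourhood are placeholders; MARCO's actual framework additionally evaluates routing with an advantage actor-critic agent, which is not reproduced here.

    import random

    def tabu_search_mapping(num_tasks, num_pes, cost, iters=200, tenure=10):
        """Minimal tabu-search sketch for task-to-PE mapping (illustrative only).

        cost(mapping) is assumed to return the schedule length of a mapping;
        in MARCO it would also account for the routing found for that mapping.
        """
        mapping = [random.randrange(num_pes) for _ in range(num_tasks)]
        best, best_cost = list(mapping), cost(mapping)
        tabu = {}  # (task, pe) -> iteration until which moving task to pe is forbidden

        for it in range(iters):
            candidates = []
            for task in range(num_tasks):
                for pe in range(num_pes):
                    if pe == mapping[task] or tabu.get((task, pe), -1) > it:
                        continue
                    neighbour = list(mapping)
                    neighbour[task] = pe
                    candidates.append((cost(neighbour), task, neighbour))
            if not candidates:
                continue
            new_cost, task, neighbour = min(candidates, key=lambda c: c[0])
            tabu[(task, mapping[task])] = it + tenure  # discourage undoing the move
            mapping = neighbour
            if new_cost < best_cost:
                best, best_cost = list(neighbour), new_cost
        return best, best_cost

    # Toy usage: balance 12 identical tasks over 4 PEs (cost = load of busiest PE).
    toy_cost = lambda m: max(m.count(pe) for pe in range(4))
    print(tabu_search_mapping(12, 4, toy_cost))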

