On clustering DAGs for task-hungry computing platforms

2011 ◽  
Vol 1 (1) ◽  
Author(s):  
Gennaro Cordasco ◽  
Arnold Rosenberg ◽  
Mark Sims

Abstract Many modern computing platforms are “task-hungry”: their performance is enhanced by always having as many tasks available for execution as possible. IC-scheduling, a master-worker framework for executing static computations that have intertask dependencies (modeled as dags), was developed with precisely the goal of rendering a computation-dag’s tasks eligible for execution at the maximum possible rate. The current paper addresses the problem of enhancing IC-scheduling so that it can accommodate the varying computational resources of different workers by clustering a computation-dag’s tasks, while still producing eligible (now, clustered) tasks at the maximum possible rate. The task-clustering strategies presented exploit the structure of the computation being performed, ranging from a strategy that works for any dag to ones that build increasingly on the explicit structure of the dag being scheduled.
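To make the notion of task eligibility concrete, the following minimal Python sketch (not the authors' IC-scheduling algorithm) treats a task as eligible once all of its parents in the dag have been executed; the example dag and the round-by-round loop are illustrative assumptions.

```python
# Minimal sketch (not the authors' IC-scheduling algorithm) of the eligibility
# notion the abstract relies on: a task becomes eligible once all of its
# parents in the dag have been executed. The dag below is a made-up example.

def eligible_tasks(dag, executed):
    """Return tasks whose parents have all been executed and that have not run yet.

    dag: mapping task -> list of parent tasks (intertask dependencies).
    executed: set of tasks already run by workers.
    """
    return {t for t, parents in dag.items()
            if t not in executed and all(p in executed for p in parents)}

# Example dag: keys map each task to its parents (a and b feed c, c feeds d).
dag = {'a': [], 'b': [], 'c': ['a', 'b'], 'd': ['c']}

executed = set()
while len(executed) < len(dag):
    ready = eligible_tasks(dag, executed)
    print('eligible now:', sorted(ready))
    executed |= ready  # pretend every eligible task finishes in this round
```

In this toy run, tasks a and b are eligible immediately, c becomes eligible once both have finished, and d last; a clustering strategy must keep such eligible work flowing at the maximum possible rate.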

Author(s):  
Venkat R Dasari ◽  
Mee Seong Im ◽  
Billy Geerhart

There is a subset of computational problems that existing algorithms may not complete due to a lack of adequate computational resources on tactical edge computing platforms. Although this subset is computable in polynomial time, many polynomial problems are not computable in mission time. Here, we define a subclass of deterministic polynomial time complexity called the mission class, wherein computations must complete in mission time. By focusing on this subclass of languages in the context of successful military applications, we discuss their computational and network constraints. We investigate feasible (non)linear models that will minimize energy and maximize memory, efficiency, and computational power, and also provide an approximate solution, obtained within a pre-determined length of computation time using limited resources, so that an optimal solution to a language can be determined.
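As a hedged illustration of the mission-time constraint, and not a construction from the paper, the sketch below runs an anytime-style refinement loop under a fixed wall-clock budget and returns whatever approximate solution it has when the budget expires; the improve function, the Newton-iteration example, and the 0.01 s budget are all hypothetical.

```python
# Hedged illustration (not from the paper) of the "mission time" constraint:
# an anytime-style loop that refines an approximate answer and stops when a
# pre-determined time budget is exhausted, returning the best solution so far.
import time

def solve_within_mission_time(improve, initial, mission_seconds):
    """Repeatedly improve a candidate solution until the mission-time budget runs out.

    improve: function mapping a candidate solution to an equal-or-better one.
    initial: starting candidate.
    mission_seconds: hard wall-clock budget for the computation.
    """
    deadline = time.monotonic() + mission_seconds
    best = initial
    while time.monotonic() < deadline:
        best = improve(best)
    return best  # approximate if the budget expired before convergence

# Toy example: approximate sqrt(2) by Newton iteration under a 0.01 s budget.
approx = solve_within_mission_time(lambda x: 0.5 * (x + 2.0 / x), 1.0, 0.01)
print(approx)
```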


2016 ◽  
Vol 9 (1) ◽  
pp. 90
Author(s):  
Sanjay P. Ahuja ◽  
Jesus Zambrano

<p class="zhengwen">The current proliferation of mobile systems, such as smart phones and tablets, has let to their adoption as the primary computing platforms for many users. This trend suggests that designers will continue to aim towards the convergence of functionality on a single mobile device (such as phone + mp3 player + camera + Web browser + GPS + mobile apps + sensors). However, this conjunction penalizes the mobile system both with respect to computational resources such as processor speed, memory consumption, disk capacity, and in weight, size, ergonomics and the component most important to users, battery life. Therefore, energy consumption and response time are major concerns when executing complex algorithms on mobile devices because they require significant resources to solve intricate problems.</p><p>Offloading mobile processing is an excellent solution to augment mobile capabilities by migrating computation to powerful infrastructures. Current cloud computing environments for performing complex and data intensive computation remotely are likely to be an excellent solution for offloading computation and data processing from mobile devices restricted by reduced resources. This research uses cloud computing as processing platform for intensive-computation workloads while measuring energy consumption and response times on a Samsung Galaxy S5 Android mobile phone running Android 4.1OS.</p>


Author(s):  
Franck Cappello ◽  
Gilles Fedak ◽  
Derrick Kondo ◽  
Paul Malecot ◽  
Ala Rezmerita

Desktop Grids, literally Grids made of Desktop Computers, are very popular in the context of “Volunteer Computing” for large-scale “Distributed Computing” projects like SETI@home and Folding@home. They are also very appealing as “Internet Computing” platforms for scientific projects seeking a huge amount of computational resources for massive high-throughput computing, like the EGEE project in Europe. Companies are also interested in using cheap computing solutions that do not add extra hardware or cost of ownership. A very recent argument for Desktop Grids is their ecological impact: by scavenging unused CPU cycles without excessively increasing power consumption, they reduce wasted electricity. This book chapter presents the background of Desktop Grids, their principles and essential mechanisms, the evolution of their architectures, their applications, and the research tools associated with this technology.


2020 ◽  
Vol 45 (1) ◽  
pp. 28-30
Author(s):  
Antonio Brogi ◽  
Antonio Bucchiarone ◽  
Rafael Capilla ◽  
Pooyan Jamshidi ◽  
Maurizio Leotta ◽  
...  

Author(s):  
А.С. Антонов ◽  
И.В. Афанасьев ◽  
Вл.В. Воеводин

This paper provides an overview of the current state of supercomputer technology. The review is made from several points of view, ranging from the design features of modern computing devices to the architecture of large supercomputer complexes. It includes descriptions of the most powerful supercomputers in the world and in Russia as of early 2021, as well as some less powerful systems that are interesting from other points of view. The paper also focuses on development trends in the supercomputer industry and describes the best-known projects for building future exascale supercomputers.


AI Magazine ◽  
2015 ◽  
Vol 36 (2) ◽  
pp. 22-32
Author(s):  
Christopher W. Geib ◽  
Christopher E. Swetenham

Modern multicore computers provide an opportunity to parallelize plan recognition algorithms to decrease runtime. Viewing plan recognition as parsing based on a complete breadth-first search makes ELEXIR (engine for lexicalized intent recognition) (Geib 2009; Geib and Goldman 2011) particularly well suited to parallelization. This article documents the extension of ELEXIR to utilize such modern computing platforms. We will discuss multiple possible algorithms for distributing work between parallel threads and the associated performance wins. We will show that the best of these algorithms provides close to linear speedup (up to a maximum number of processors) and that features of the problem domain have an impact on the achieved speedup.
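The simplest of such work-distribution schemes can be sketched as follows. This is an illustrative Python example, not the ELEXIR implementation; score_explanation is a synthetic stand-in for scoring one candidate explanation, and the speedup measurement is only indicative.

```python
# Illustrative sketch (not the ELEXIR implementation) of the simplest work
# distribution scheme: partition independent explanation-scoring work items
# across worker processes and compare against the single-worker runtime.
import time
from concurrent.futures import ProcessPoolExecutor

def score_explanation(item):
    """Stand-in for scoring one candidate explanation; purely synthetic work."""
    total = 0
    for i in range(200_000):
        total += (item * i) % 7
    return total

def run(workers, items):
    """Run all work items on the given number of workers, returning elapsed time."""
    start = time.perf_counter()
    with ProcessPoolExecutor(max_workers=workers) as pool:
        results = list(pool.map(score_explanation, items))
    return time.perf_counter() - start, results

if __name__ == "__main__":
    items = list(range(64))
    t1, _ = run(1, items)
    t4, _ = run(4, items)
    print(f"1 worker: {t1:.2f} s, 4 workers: {t4:.2f} s, speedup {t1 / t4:.2f}x")
```

Because the work items here are uniform and independent, static partitioning already scales well; uneven or dependent items, as in real plan-recognition search, are what make the choice of distribution algorithm matter.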


BioScience ◽  
2021 ◽  
Author(s):  
Amanda T Stahl ◽  
Alexander K Fremier ◽  
Laura Heinse

Abstract Timely, policy-relevant monitoring data are essential for evaluating the effectiveness of environmental policies and conservation measures. Satellite and aerial imagery can fill data gaps at low cost but are often underused for ongoing environmental monitoring. Barriers include a lack of expertise or computational resources and the lag time between image acquisition and information delivery. Online image repositories and cloud computing platforms are increasingly used by researchers because they offer near-real-time, centralized access to local-to-global-scale data sets and analytics with minimal in-house computational requirements. We aim to broaden knowledge of these open access resources for biologists whose work routinely informs policy and management. To illustrate potential applications of cloud-based environmental monitoring (CBEM), we developed an adaptable approach to detect changes in natural vegetative cover in an agricultural watershed. The steps we describe can be applied to identify opportunities and caveats for applying CBEM in a wide variety of monitoring programs.
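As one hypothetical instance of CBEM, the sketch below uses the Google Earth Engine Python API (one such cloud platform; the article does not prescribe it) to flag NDVI decline between two growing seasons. The dataset, region, dates, cloud filter, and the -0.2 threshold are all illustrative assumptions.

```python
# Hypothetical sketch of cloud-based change detection in vegetative cover,
# using the Google Earth Engine Python API as one example platform.
# Assumes prior Earth Engine authentication; all values below are placeholders.
import ee

ee.Initialize()

# Area of interest: a placeholder rectangle standing in for a watershed.
aoi = ee.Geometry.Rectangle([-117.5, 46.5, -117.0, 47.0])

def median_ndvi(start, end):
    """Median NDVI composite over the AOI for a date range (Sentinel-2 SR)."""
    collection = (ee.ImageCollection('COPERNICUS/S2_SR')
                  .filterBounds(aoi)
                  .filterDate(start, end)
                  .filter(ee.Filter.lt('CLOUDY_PIXEL_PERCENTAGE', 20)))
    return collection.median().normalizedDifference(['B8', 'B4']).rename('NDVI')

before = median_ndvi('2019-06-01', '2019-09-01')
after = median_ndvi('2021-06-01', '2021-09-01')

# Pixels whose NDVI dropped by more than 0.2 are flagged as potential loss.
loss = after.subtract(before).lt(-0.2)

# Summarize the flagged area (square metres) server-side, without downloads.
stats = loss.multiply(ee.Image.pixelArea()).reduceRegion(
    reducer=ee.Reducer.sum(), geometry=aoi, scale=10, maxPixels=1e9)
print(stats.getInfo())
```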


Author(s):  
C.L. Woodcock

Despite the potential of the technique, electron tomography has yet to be widely used by biologists. This is in part related to the rather daunting list of equipment and expertise that are required. Thanks to continuing advances in theory and instrumentation, tomography is now more feasible for the non-specialist. One barrier that has essentially disappeared is the expense of computational resources. In view of this progress, it is time to give more attention to the practical issues that need to be considered when embarking on a tomographic project. The following recommendations and comments are derived from experience gained during two long-term collaborative projects.

Tomographic reconstruction results in a three-dimensional description of an individual EM specimen, most commonly a section, and is therefore applicable to problems in which ultrastructural details within the thickness of the specimen are obscured in single micrographs. Information that can be recovered using tomography includes the 3D shape of particles and the arrangement and disposition of overlapping fibrous and membranous structures.


2014 ◽  
Vol 21 (4) ◽  
pp. 173-181 ◽  
Author(s):  
Ryan Lee ◽  
Janna B. Oetting

Zero marking of the simple past is often listed as a common feature of child African American English (AAE). In the current paper, we review the literature and present new data to help clinicians better understand zero marking of the simple past in child AAE. Specifically, we provide information to support the following statements: (a) By six years of age, the simple past is infrequently zero marked by typically developing AAE-speaking children; (b) There are important differences between the simple past and participle morphemes that affect AAE-speaking children's marking options; and (c) In addition to a verb's grammatical function, its phonetic properties help determine whether an AAE-speaking child will produce a zero marked form.

