Multithread Affinity Scheduling Using a Decision Maker

Author(s):  
Shatha K. Jawad ◽  
Ronald P. Uhlig ◽  
Bhaskar Sinha ◽  
Mohammad N. Amin ◽  
Pradip Peter Dey

In a multiprocessor, multithreaded Operating System (OS), scheduling has two dimensions: the operating system has to decide which thread to run and which Central Processing Unit (CPU) to run it on. Assuming the threads are independent and each thread has a priority, the operating system selects the thread with the highest priority and assigns it to the first free CPU. Usually, each CPU has its own private cache. To increase the throughput of the system, it is preferable to use affinity scheduling, whose idea is to make an effort to run a thread on the same CPU it ran on the last time. Existing affinity scheduling is implemented using a two-level scheduling algorithm. In this paper, a new approach is designed to implement independent multithread scheduling on a multiprocessor system. The approach uses a decision maker to compute a new priority for each ready thread according to the thread's pre-priority and affinity. The results show that by using the new priority, the goal of affinity is satisfied while the pre-priority of the thread is also taken into consideration. The approach also reduces the scheduling time because it implements affinity scheduling and priority scheduling in a single one-level scheduling algorithm.
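The decision-maker idea above can be sketched minimally as follows: each ready thread's new priority combines its pre-assigned priority with an affinity bonus when the free CPU is the one the thread last ran on. The field names and the additive weighting are illustrative assumptions, not the paper's exact formula.

```python
def new_priority(pre_priority, last_cpu, free_cpu, affinity_bonus=1):
    """Higher value = scheduled sooner. Adds a bonus when the thread
    would run on the CPU whose cache may still hold its data.
    The additive bonus is an assumed, illustrative weighting."""
    bonus = affinity_bonus if last_cpu == free_cpu else 0
    return pre_priority + bonus

def pick_thread(ready_threads, free_cpu):
    # Single-level decision: one pass selects by the combined priority,
    # instead of a two-level (priority first, then affinity) algorithm.
    return max(ready_threads,
               key=lambda t: new_priority(t["pre_priority"], t["last_cpu"], free_cpu))

threads = [
    {"name": "A", "pre_priority": 5, "last_cpu": 0},
    {"name": "B", "pre_priority": 5, "last_cpu": 1},
]
print(pick_thread(threads, free_cpu=1)["name"])  # affinity breaks the tie: B
```

With equal pre-priorities, the affinity bonus decides; with unequal ones, the pre-priority still dominates, which matches the stated goal of honoring both.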

2019 ◽  
Vol 17 (1) ◽  
pp. 90-98 ◽  
Author(s):  
Uferah Shafi ◽  
Munam Shah ◽  
Abdul Wahid ◽  
Kamran Abbasi ◽  
Qaisar Javaid ◽  
...  

The Central Processing Unit (CPU) is the most significant resource, and its scheduling is one of the main functions of an operating system. In time-shared systems, Round Robin (RR) is the most widely used scheduling algorithm. The efficiency of the RR algorithm is influenced by the quantum time: if the quantum is small, there is the overhead of more context switches, and if the quantum time is large, the algorithm behaves like First Come First Served (FCFS), in which there is more risk of starvation. In this paper, a new CPU scheduling algorithm named Amended Dynamic Round Robin (ADRR), based on CPU burst time, is proposed. The primary goal of ADRR is to improve the conventional RR scheduling algorithm using the notion of an active quantum time: the quantum time is cyclically adjusted based on CPU burst time. We evaluate and compare the performance of the proposed ADRR algorithm using parameters such as waiting time and turnaround time. Our numerical analysis and simulation results in MATLAB reveal that ADRR outperforms other well-known algorithms such as conventional Round Robin, Improved Round Robin (IRR), Optimum Multilevel Dynamic Round Robin (OMDRR), and Priority Based Round Robin (PRR).
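A dynamic-quantum round-robin loop in the spirit of ADRR can be sketched as below. Recomputing the quantum each cycle as the median of the remaining burst times is an illustrative assumption for the sketch, not the exact ADRR adjustment rule.

```python
from statistics import median

def dynamic_rr(bursts):
    """bursts: {pid: burst_time}. Returns the order in which
    processes complete under a cyclically adjusted quantum."""
    remaining = dict(bursts)
    order = []
    while remaining:
        # Quantum adjusted each cycle from current CPU burst times
        # (median used here as an assumed, illustrative choice).
        quantum = median(remaining.values())
        for pid in list(remaining):
            remaining[pid] -= min(quantum, remaining[pid])
            if remaining[pid] == 0:
                remaining.pop(pid)
                order.append(pid)
    return order

print(dynamic_rr({"P1": 4, "P2": 8, "P3": 6}))
```

A burst-aware quantum lets short jobs finish within one round (reducing waiting time) while long jobs still make progress, which is the trade-off ADRR targets.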


2012 ◽  
Vol 1 (4) ◽  
pp. 88-131 ◽  
Author(s):  
Hamza Gharsellaoui ◽  
Mohamed Khalgui ◽  
Samir Ben Ahmed

Scheduling tasks is an essential requirement in most real-time and embedded systems, but it leads to unwanted central processing unit (CPU) overheads. The authors present a real-time schedulability algorithm for preemptable, asynchronous, and periodic reconfigurable task systems with arbitrary relative deadlines, scheduled on a uniprocessor by an optimal scheduling algorithm based on earliest deadline first (EDF) principles and on dynamic reconfiguration. A reconfiguration scenario is assumed to be a dynamic automatic operation allowing the addition, removal, or update of the operating system's (OS) functional asynchronous tasks. When such a scenario is applied to save the system at the occurrence of hardware-software faults, or to improve its performance, some real-time properties can be violated. The authors propose an intelligent agent-based architecture in which a software agent is used to satisfy the user requirements and to respect time constraints. When these constraints are not satisfied, the agent dynamically provides valuable technical solutions for users: removing tasks according to a predefined heuristic, or modifying the worst-case execution times (WCETs), periods, and deadlines of tasks in order to meet deadlines and to minimize their response time. They implement the agent to support these services, apply it to a BlackBerry Bold 9700 and to a Volvo system, and present and discuss the results of experiments.
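When the agent modifies WCETs or periods to restore feasibility, the classical EDF test is the natural check. For implicit deadlines (deadline equal to period), EDF schedules a task set on one processor iff total utilization is at most 1; the arbitrary-deadline case the paper addresses needs the more involved processor-demand analysis, which this minimal sketch omits.

```python
def edf_feasible(tasks):
    """tasks: list of (wcet, period) pairs, with deadline == period.
    Classical EDF utilization test: schedulable iff sum(C/T) <= 1."""
    return sum(c / t for c, t in tasks) <= 1.0

# Three tasks with total utilization 1/4 + 1/3 + 1/8 ~ 0.708: feasible.
assert edf_feasible([(1, 4), (2, 6), (1, 8)])
# Overloaded set (utilization > 1): the agent would have to remove a
# task or shrink a WCET/stretch a period until this test passes.
assert not edf_feasible([(3, 4), (2, 3)])
```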


Author(s):  
Sonia Zouaoui ◽  
Lotfi Boussaid ◽  
Abdellatif Mtibaa

This paper introduces a new approach to scheduling algorithms that aims to improve real-time operating system CPU performance. The new CPU scheduling algorithm is based on a combination of the round-robin (RR) and priority-based (PB) scheduling algorithms. It retains the advantage of the simple round-robin scheduling algorithm, namely reduced starvation, and integrates the advantage of priority scheduling. The proposed algorithm implements the concept of a time quantum and also assigns a priority index to the processes. The existing round-robin CPU scheduling algorithm cannot be dedicated to real-time operating systems because of its large waiting time, large response time, large turnaround time, and low throughput. The new algorithm addresses these drawbacks of the round-robin CPU scheduling algorithm. In addition, the paper presents an analysis comparing the proposed algorithm with the existing round-robin scheduling algorithm, focusing on average waiting time and average turnaround time.
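The combined idea can be illustrated with a short sketch: round-robin time slicing within each priority level, with higher-priority levels served first. The per-level queue layout and the fixed quantum are assumptions made for illustration, not the paper's exact design.

```python
from collections import deque

def priority_rr(processes, quantum=2):
    """processes: list of (pid, priority, burst); higher priority runs
    first, and processes at the same priority share the CPU round-robin.
    Returns the order in which processes complete."""
    queues = {}
    for pid, prio, burst in processes:
        queues.setdefault(prio, deque()).append([pid, burst])
    finished = []
    while queues:
        prio = max(queues)                 # serve highest priority level
        q = queues[prio]
        pid, burst = q.popleft()
        burst -= min(quantum, burst)       # run for one time quantum
        if burst == 0:
            finished.append(pid)
        else:
            q.append([pid, burst])         # back of its own level's queue
        if not q:
            del queues[prio]
    return finished

print(priority_rr([("A", 2, 3), ("B", 1, 2), ("C", 2, 2)]))
```

Within a level no process waits more than one full round (the RR anti-starvation property), while the priority index still decides which level runs, which is the combination the paper describes.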


SIMULATION ◽  
2019 ◽  
Vol 96 (3) ◽  
pp. 347-361
Author(s):  
Wenjie Tang ◽  
Wentong Cai ◽  
Yiping Yao ◽  
Xiao Song ◽  
Feng Zhu

In the past few years, the graphics processing unit (GPU) has been widely used to accelerate time-consuming models in simulations. Since both model computation and simulation management are main factors that affect the performance of large-scale simulations, accelerating only model computation limits the potential speedup. Moreover, the models that a GPU can accelerate well may be insufficient, especially for simulations with many lightweight models. Traditionally, the parallel discrete event simulation (PDES) method is used to solve this class of simulation, but most PDES simulators utilize only the central processing unit (CPU), even though the GPU is now commonly available. Hence, we propose an alternative approach for collaborative simulation execution on a CPU+GPU hybrid system. The GPU supports both simulation management and model computation, just as the CPU does. A concurrency-oriented scheduling algorithm is proposed to enable cooperation between the CPU and the GPU, so that multiple computation and communication resources can be utilized efficiently. In addition, the GPU functions have been carefully designed to fit the algorithm. The combination of these efforts allows the proposed approach to achieve significant speedup compared with traditional PDES on a CPU.
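For context, the simulation-management core that the paper distributes across CPU and GPU is, in any discrete event simulator, a timestamp-ordered event loop like the generic sketch below. This is a textbook illustration of the mechanism, not the authors' implementation.

```python
import heapq

def run(events, handler, end_time):
    """events: list of (timestamp, event) pairs. handler(ts, ev) may
    schedule new events by returning further (timestamp, event) pairs.
    Events are always processed in timestamp order."""
    heapq.heapify(events)
    processed = []
    while events and events[0][0] <= end_time:
        ts, ev = heapq.heappop(events)     # next event in time order
        processed.append((ts, ev))
        for new_event in handler(ts, ev):  # model computation step
            heapq.heappush(events, new_event)
    return processed

# Example: each event spawns one follow-up 5 time units later.
log = run([(0, "start")], lambda ts, ev: [(ts + 5, "tick")], end_time=12)
print(log)
```

The pop/handle/push cycle is the "simulation management" part; the paper's contribution is running many such cycles and their model computations concurrently on both devices.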


2019 ◽  
Vol 15 (8) ◽  
pp. 155014771986866
Author(s):  
Miloš Kotlar ◽  
Dragan Bojić ◽  
Marija Punt ◽  
Veljko Milutinović

This article overviews the emerging use of deep neural networks in data analytics and explores which type of underlying hardware and architectural approach is best used in various deployment locations when implementing deep neural networks. The locations which are discussed are in the cloud, fog, and dew computing (dew computing is performed by end devices). Covered architectural approaches include multicore processors (central processing unit), manycore processors (graphics processing unit), field programmable gate arrays, and application-specific integrated circuits. The proposed classification in this article divides the existing solutions into 12 different categories, organized in two dimensions. The proposed classification allows a comparison of existing architectures, which are predominantly cloud-based, and anticipated future architectures, which are expected to be hybrid cloud-fog-dew architectures for applications in Internet of Things and Wireless Sensor Networks. Researchers interested in studying trade-offs among data processing bandwidth, data processing latency, and processing power consumption would benefit from the classification made in this article.


1983 ◽  
Vol 62 (1) ◽  
pp. 191-206 ◽  
Author(s):  
M. W. Rolund ◽  
J. T. Beckett ◽  
D. A. Harms

Author(s):  
K Muralidhar ◽  
A Chatterjee ◽  
B V Nagabhushana Rao

The present work is concerned with the application of the domain decomposition technique to modelling transient flow and heat transfer problems. The solutions obtained within each subdomain are matched at the interfaces using Uzawa's algorithm. This algorithm was originally developed in the context of steady heat conduction; the objective of the present study is to test and extend it to a wider class of problems. Examples considered are non-linear heat conduction in one and two dimensions, simulation of oil recovery from porous formations using water injection, movement of a plane thermal front, and heat transfer from a cylinder placed in Darcian flow. The suitability of Uzawa's algorithm for interface treatment with up to nine subdomains has been studied. The method is found to converge to the full-domain solution in all cases considered. In addition, the results show further advantages, including the generation of small matrices and, in certain cases, a marginal reduction in CPU (central processing unit) time, even on sequential machines.
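The interface-matching idea can be sketched on the simplest possible case: 1-D steady conduction on [0, 1] with u(0)=0, u(1)=1, split into two subdomains at x=0.5. Each subdomain solve is then trivial (a linear profile), and an Uzawa-style iteration relaxes the interface value until the fluxes from the two sides match. The relaxation factor and the toy problem are illustrative assumptions, not the paper's actual test cases.

```python
def uzawa_interface(rho=0.1, tol=1e-10, max_iter=1000):
    """Relax the interface temperature g at x = 0.5 until the heat
    fluxes from the left and right subdomains agree."""
    g = 0.0                            # initial guess at the interface
    for _ in range(max_iter):
        flux_left = (g - 0.0) / 0.5    # gradient in left subdomain
        flux_right = (1.0 - g) / 0.5   # gradient in right subdomain
        mismatch = flux_right - flux_left
        if abs(mismatch) < tol:
            break
        g += rho * mismatch            # Uzawa-style multiplier update
    return g

print(uzawa_interface())  # converges to the full-domain value 0.5
```

Each subdomain only ever sees its own small system plus one interface unknown, which is the source of the "small matrices" advantage noted above.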


Author(s):  
Shaktiraj Kumar Chaganty ◽  
B. Lavan ◽  
Dr. S. Siva Prasad

A real-time microkernel is the near-minimum amount of software that can provide the mechanisms needed to implement a real-time operating system. Real-time systems are those systems whose response is deterministic in time. In our research, a 32-task real-time microkernel is designed with which multitasking can be performed on the target processor, the ARM7TDMI. Two sets of functions are developed in this research work: operating system functions and application functions. The operating system functions mainly carry out task creation, multitasking, scheduling, context switching, and inter-task communication. The process of scheduling and switching the CPU (Central Processing Unit) between several tasks is illustrated in this paper. The number of application functions can vary from 1 to 32. Each application function is created as a task by the microkernel and scheduled by the pre-emptive priority scheduler. Multitasking of these application tasks is demonstrated in this paper.
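The pre-emptive priority scheduling step can be sketched as follows: with at most 32 tasks, the set of ready tasks fits in a 32-bit ready mask, and the scheduler dispatches the highest-priority ready task (here, lowest bit index = highest priority). The bit-mask layout is a common microkernel idiom used for illustration, not necessarily this kernel's exact data structure.

```python
READY_MASK = 0  # one bit per task, task ids 0..31

def make_ready(task_id):
    """Mark a task runnable (e.g. after task creation or a message)."""
    global READY_MASK
    READY_MASK |= 1 << task_id

def make_blocked(task_id):
    """Mark a task not runnable (e.g. waiting on inter-task comms)."""
    global READY_MASK
    READY_MASK &= ~(1 << task_id)

def highest_priority_ready():
    """Return the lowest set bit index (highest priority), or None."""
    if READY_MASK == 0:
        return None
    return (READY_MASK & -READY_MASK).bit_length() - 1

make_ready(5)
make_ready(2)
print(highest_priority_ready())  # task 2 pre-empts task 5
```

Because the decision is a couple of bit operations, the scheduler runs in constant time regardless of how many of the 32 tasks exist, which suits the deterministic-response requirement stated above.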


2019 ◽  
Author(s):  
yuda fahrozi

Cisco routers are the main equipment widely used in Wide Area Networks (WANs). With a Cisco router, information can be forwarded to distant addresses located on different computer networks. In order to forward data packets from one LAN to another, a Cisco router uses routing tables and routing protocols to manage data traffic. Data packets arriving at the router are examined and forwarded to the destination address. For received data packets to reach their destination quickly, the router must process them very precisely. To this end, a Cisco router uses a Central Processing Unit (CPU), like the one used in a computer, to process data traffic quickly. Like a computer, a Cisco router also has several types of memory, namely ROM, RAM, NVRAM, and FLASH, which support the work of the CPU. In addition, it is equipped with a number of interfaces for communicating with the outside world and for moving data in and out. The operating system used by Cisco routers is the Internetwork Operating System (IOS). Keywords: Understanding Cisco

