Algorithm for Improving Processor Utilization in Multi-core Processor Environment by Python Language

Author(s): Feng He, Xuran Hu, Shuren Liu, Tao Li, Kaifeng Zhu, ...
2019, Vol. 139 (7), pp. 802-811
Author(s): Kenta Fujimoto, Shingo Oidate, Yuhei Yabuta, Atsuyuki Takahashi, Takuya Yamasaki, ...
2021, Vol. 11 (1), pp. 339-348
Author(s): Piotr Bojarczak, Piotr Lesiak

Abstract The article uses images from Unmanned Aerial Vehicles (UAVs) for rail diagnostics. The main advantage of such a solution, compared to traditional surveys performed with measuring vehicles, is that it eliminates the reduction in train traffic those surveys require. In this study, the authors limit themselves to the diagnosis of hazardous split defects in rails. An algorithm is proposed that detects them with an efficiency of about 81% for defects no smaller than 6.9% of the rail head width. It uses the FCN-8 deep-learning network, implemented in the TensorFlow environment, to extract the rail head by image segmentation. Using this type of network for segmentation makes the algorithm more robust to changes in the brightness of the recorded rail images, which is essential given the variable conditions under which UAVs record images. Defects in the rail head are then detected by an algorithm written in Python with the OpenCV library. To locate a defect, it uses the contour of the extracted rail head together with the rectangle circumscribed around it. The use of UAVs together with artificial intelligence to detect split defects is an important novel element of this work.
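
A minimal sketch of the rail-head localization step described above, assuming a binary segmentation mask (such as one produced by an FCN-8 style network) is already available; the function name, threshold-free logic, and file path are illustrative assumptions, not the authors' implementation.

# Hedged sketch: locating the rail head from a segmentation mask with OpenCV.
# Assumes OpenCV >= 4 and that `mask` is a binary image (rail-head pixels = 255)
# produced by a segmentation network such as FCN-8; names are illustrative.
import cv2
import numpy as np

def rail_head_roi(mask: np.ndarray):
    """Return the contour of the largest segmented region and the
    rectangle circumscribed around it as (x, y, w, h)."""
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return None, None
    head = max(contours, key=cv2.contourArea)  # assume the largest blob is the rail head
    x, y, w, h = cv2.boundingRect(head)        # circumscribed rectangle
    return head, (x, y, w, h)

# Example usage on a mask loaded from disk (path is hypothetical):
# mask = cv2.imread("rail_mask.png", cv2.IMREAD_GRAYSCALE)
# contour, rect = rail_head_roi(mask)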


2021
Author(s): Bashar Romanous, Skyler Windh, Ildar Absalyamov, Prerna Budhkar, Robert Halstead, ...

Abstract The join and group-by aggregation are two memory-intensive operators that strongly affect the performance of relational databases. Hashing is a common approach used to implement both operators. Recent paradigm shifts in multi-core processor architectures have reinvigorated research into how the join and group-by aggregation operators can leverage these advances. However, the poor spatial locality of the hashing approach has hindered performance on multi-core processor architectures, which rely on large cache hierarchies to mitigate latency. Multithreaded architectures can better cope with poor spatial locality by masking memory latency with many outstanding requests. Nevertheless, the number of parallel threads, even in the most advanced multithreaded processors such as UltraSPARC, is not enough to fully cover the main memory access latency. In this paper, we explore the hardware reconfigurability of FPGAs to enable deeper execution pipelines that maintain hundreds (instead of tens) of outstanding memory requests across four FPGAs, drastically increasing concurrency and throughput. We present two end-to-end in-memory accelerators for the join and group-by aggregation operators using FPGAs. Both accelerators use massive multithreading to mask the long memory delays of traversing linked-list data structures, while concurrently managing hundreds of thread states across four FPGAs locally. We explore how content-addressable memories (CAMs) can be intermixed within our multithreaded designs to act as a synchronizing cache, which enforces locks and merges jobs together before they are written to memory. Throughput results for our hash-join operator accelerator show a speedup between 2× and 3.4× over the best multi-core approaches with comparable memory bandwidths on uniform and skewed datasets. The accelerator for the hash-based group-by aggregation operator demonstrates that leveraging CAMs achieves an average speedup of 3.3×, with a best case of 9.4×, in throughput over CPU implementations across five types of data distributions.
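
For readers unfamiliar with the access pattern the accelerators target, the following plain-Python sketch shows a chained (linked-list style) hash join: the probe phase chases pointers through buckets with poor spatial locality, which is exactly the latency the paper masks with massive multithreading. This is only an illustrative software analogue under those assumptions, not the authors' FPGA design; all names and relations are hypothetical.

# Hedged sketch of a chained hash join in plain Python (illustrative only).
from collections import defaultdict

def hash_join(build_rows, probe_rows, build_key, probe_key):
    # Build phase: bucket build-side rows by key; each bucket is the
    # software analogue of a linked list of matching entries.
    table = defaultdict(list)
    for row in build_rows:
        table[row[build_key]].append(row)
    # Probe phase: each probe row traverses its bucket, a chain of
    # dependent memory accesses with poor spatial locality.
    for row in probe_rows:
        for match in table.get(row[probe_key], ()):
            yield {**match, **row}

# Example usage with hypothetical relations:
# customers = [{"cust_id": 1, "name": "a"}, {"cust_id": 2, "name": "b"}]
# orders = [{"cust_id": 1, "amount": 10}, {"cust_id": 2, "amount": 7}]
# joined = list(hash_join(customers, orders, "cust_id", "cust_id"))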


Sorting algorithms deal with the arrangement of alphanumeric data in a defined order and play an important role in the field of data science. Selection sort is one of the simplest sorting algorithms and can be applied to large numbers of elements. Given a list of unsorted data, the algorithm divides the list into two partitions: one section holds all of the sorted data and the other holds the remaining unsorted data. The algorithm repeats itself by finding the smallest element within the unsorted portion and swapping it with the leftmost unsorted element, eventually putting all of the data in order. This research presents implementations of selection sort in C/C++, Python, and Rust and measures their time complexity. After the experiments, we collected the results in terms of running time and analyzed the outcomes. It was observed that the Python implementation requires very few lines of code, and it also consumed less storage and had a faster running time than the other two languages.
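
A minimal Python sketch of the selection sort and running-time measurement described above; the C/C++ and Rust variants follow the same structure. The function name, input size, and timing approach are illustrative assumptions rather than the study's exact code.

# Hedged sketch: selection sort plus a simple running-time measurement
# (names and input size are illustrative, not the study's benchmark setup).
import random
import time

def selection_sort(data):
    """In-place selection sort: repeatedly swap the smallest remaining
    element into the leftmost unsorted position."""
    for i in range(len(data) - 1):
        smallest = i
        for j in range(i + 1, len(data)):
            if data[j] < data[smallest]:
                smallest = j
        data[i], data[smallest] = data[smallest], data[i]
    return data

if __name__ == "__main__":
    values = [random.randint(0, 10_000) for _ in range(5_000)]  # illustrative size
    start = time.perf_counter()
    selection_sort(values)
    print(f"sorted {len(values)} elements in {time.perf_counter() - start:.3f} s")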

