multiple granularity
Recently Published Documents


TOTAL DOCUMENTS

49
(FIVE YEARS 17)

H-INDEX

7
(FIVE YEARS 2)

2021 ◽  
Author(s):  
Qinglan Meng ◽  
Xiyu Pang ◽  
Gangwu Jiang ◽  
Yanli Zheng ◽  
Xin Tian

2021 ◽  
Vol 13 (18) ◽  
pp. 3601
Author(s):  
Jin Wu ◽  
Changqing Cao ◽  
Yuedong Zhou ◽  
Xiaodong Zeng ◽  
Zhejun Feng ◽  
...  

In remote sensing images, small target sizes and diverse backgrounds make it difficult to locate targets accurately and quickly. To address the limited accuracy and poor real-time performance of existing tracking algorithms, this study proposes a multi-object tracking (MOT) algorithm for ships based on deep learning. Because the feature extraction capability of the target detector determines the performance of an MOT algorithm, the you-only-look-once (YOLO)-v3 model, which offers better accuracy and speed than comparable detectors, was selected as the target detection framework. The high visual similarity of ship targets degrades tracking results, so the multiple granularity network (MGN) was used to extract richer target appearance information and improve generalization across similar images. Compared with other state-of-the-art multi-object tracking algorithms, the proposed method improves tracking accuracy by 2.23% while running at an average speed of close to 21 frames per second, meeting real-time tracking requirements.
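The core of appearance-based MOT as described above is associating new detections with existing tracks by comparing appearance embeddings (here, the kind of features an MGN-style re-identification network would produce). A minimal sketch of such an association step, assuming cosine similarity and a greedy one-to-one assignment (the paper does not specify its matching strategy, so this is illustrative only):

```python
def cosine_similarity(a, b):
    """Cosine similarity between two appearance embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = sum(x * x for x in a) ** 0.5
    nb = sum(y * y for y in b) ** 0.5
    return dot / (na * nb)

def associate(track_embeddings, detection_embeddings, threshold=0.5):
    """Greedily assign each detection to the most similar unused track.

    Returns a dict mapping detection index -> track index, or None when
    the best similarity falls below the threshold (a new target).
    """
    assignments = {}
    used_tracks = set()
    for d_idx, d_emb in enumerate(detection_embeddings):
        best_track, best_sim = None, threshold
        for t_idx, t_emb in enumerate(track_embeddings):
            if t_idx in used_tracks:
                continue
            sim = cosine_similarity(d_emb, t_emb)
            if sim > best_sim:
                best_track, best_sim = t_idx, sim
        if best_track is not None:
            used_tracks.add(best_track)
        assignments[d_idx] = best_track
    return assignments
```

Richer embeddings (as the MGN provides) make visually similar ships separable in this similarity space, which is precisely the generalization problem the abstract targets.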


2021 ◽  
Vol 2010 (1) ◽  
pp. 012122
Author(s):  
Xile Wang ◽  
Sihan Zhang ◽  
Junyu Song ◽  
Miaohui Zhang
Keyword(s):  

2021 ◽  
Vol 9 (1) ◽  
pp. 932-947
Author(s):  
Swati ◽  
Shalini Bhaskar Bajaj ◽  
Vivek Jaglan

We present an efficient locking scheme for hierarchical data structures. The existing multi-granularity locking (MGL) mechanism operates at two extremes: fine-grained locking, which maximizes concurrency, and coarse-grained locking, which minimizes locking cost. Between these extremes lie several Pareto-optimal options that trade off concurrency against locking overhead. In this work, we present a locking technique, Collaborative Granular Version Locking (CGVL), which selects an optimal locking combination to serve locking requests in a hierarchical structure. In CGVL, a series of versions is maintained at each granular level, allowing read and write operations to execute simultaneously on the same data item. Our study reveals that, to achieve optimal performance, the lock manager explores various locking options by converting certain non-supporting locking modes into supporting ones, thereby improving the existing compatibility matrix of the multiple granularity locking protocol. This claim is validated quantitatively in a Java Sun JDK environment, which shows that CGVL outperforms state-of-the-art MGL methods. In particular, CGVL attains a 20% reduction in execution time for locking operations, evaluated over the following parameters: (i) the number of threads, (ii) the number of locked objects, and (iii) the duration of the critical section (CPU cycles), which significantly enhances concurrency in terms of the number of concurrent read accesses.
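For context, the compatibility matrix that CGVL improves upon is the classic multiple granularity locking matrix over the modes IS (intention shared), IX (intention exclusive), S (shared), SIX (shared + intention exclusive), and X (exclusive). A sketch of the baseline matrix and a grant check follows; note this shows only the standard MGL protocol, not CGVL's version-based relaxation of the incompatible entries:

```python
# Classic MGL compatibility matrix: True means the requested mode can
# coexist with a mode already held on the same node by another
# transaction. CGVL, per the abstract, converts some of the False
# entries into compatible cases by maintaining versions.
COMPATIBLE = {
    "IS":  {"IS": True,  "IX": True,  "S": True,  "SIX": True,  "X": False},
    "IX":  {"IS": True,  "IX": True,  "S": False, "SIX": False, "X": False},
    "S":   {"IS": True,  "IX": False, "S": True,  "SIX": False, "X": False},
    "SIX": {"IS": True,  "IX": False, "S": False, "SIX": False, "X": False},
    "X":   {"IS": False, "IX": False, "S": False, "SIX": False, "X": False},
}

def can_grant(requested, held_modes):
    """A lock is grantable only if the requested mode is compatible
    with every lock currently held on the node."""
    return all(COMPATIBLE[requested][h] for h in held_modes)
```

For example, an S lock cannot be granted on a node already held in IX, which is exactly the kind of reader/writer conflict that CGVL's per-granule versioning aims to eliminate.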


Author(s):  
Gavindya Jayawardena ◽  
Sampath Jayarathna

Eye-tracking experiments use areas of interest (AOIs) for the analysis of eye gaze data. While tools exist to delineate AOIs and extract eye movement data, they may require users to manually draw AOI boundaries on eye-tracking stimuli or to use markers to define AOIs. This paper introduces two novel techniques to dynamically filter eye movement data from AOIs for the analysis of eye metrics at multiple levels of granularity. The authors incorporate pre-trained object detectors and object instance segmentation models for offline detection of dynamic AOIs in video streams. This research presents the implementation and evaluation of object detectors and object instance segmentation models to find the best model to integrate into a real-time eye movement analysis pipeline. The authors filter gaze data that falls within the polygonal boundaries of detected dynamic AOIs and apply an object detector to find bounding boxes in a public dataset. The results indicate that the dynamic AOIs generated by object detectors capture 60% of eye movements, while object instance segmentation models capture 30%.
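Filtering gaze samples against the polygonal boundary of a detected AOI reduces to a point-in-polygon test. A minimal sketch using the standard ray-casting algorithm (the paper does not disclose its implementation, so the function names here are illustrative):

```python
def point_in_polygon(x, y, polygon):
    """Ray-casting test: does gaze point (x, y) fall inside the AOI
    boundary given as a list of (x, y) vertices in order?"""
    inside = False
    n = len(polygon)
    for i in range(n):
        x1, y1 = polygon[i]
        x2, y2 = polygon[(i + 1) % n]
        # Count crossings of a horizontal ray cast to the right of the point.
        if (y1 > y) != (y2 > y):
            x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if x < x_cross:
                inside = not inside
    return inside

def filter_gaze(gaze_points, aoi_polygon):
    """Keep only the gaze samples that land inside the AOI polygon."""
    return [p for p in gaze_points
            if point_in_polygon(p[0], p[1], aoi_polygon)]
```

An instance segmentation model yields a tight polygon per object, while a plain object detector yields only an axis-aligned bounding box (a four-vertex special case of the same test), which is one reason the two approaches capture different fractions of eye movements.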


Author(s):  
Jun-Yan He ◽  
Shi-Hua Liang ◽  
Xiao Wu ◽  
Bo Zhao ◽  
Lei Zhang
