time requirement
Recently Published Documents

TOTAL DOCUMENTS: 129 (five years: 21)
H-INDEX: 15 (five years: 1)

Author(s): Jinlong Peng, Zhengkai Jiang, Yueyang Gu, Yang Wu, Yabiao Wang, ...

Recently, most Siamese-network-based trackers locate targets via object classification and bounding-box regression. Generally, they select the bounding box with the maximum classification confidence as the final prediction. This strategy may miss the right result due to the accuracy misalignment between classification and regression. In this paper, we propose a novel Siamese tracking algorithm called SiamRCR, which addresses this problem with a simple, light and effective solution. It builds reciprocal links between the classification and regression branches, which dynamically re-weight their losses for each positive sample. In addition, we add a localization branch to predict the localization accuracy, so that it can replace the regression assistance link during inference. This branch makes training and inference more consistent. Extensive experimental results demonstrate the effectiveness of SiamRCR and its superiority over state-of-the-art competitors on GOT-10k, LaSOT, TrackingNet, OTB-2015, VOT-2018 and VOT-2019. Moreover, SiamRCR runs at 65 FPS, far above the real-time requirement.
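The reciprocal re-weighting idea can be sketched in a few lines. The snippet below is a minimal NumPy illustration, not the paper's implementation: all variable names are hypothetical, and it assumes per-sample losses plus a classification confidence and a predicted-box IoU for each positive sample.

```python
import numpy as np

def reciprocal_losses(cls_conf, iou, cls_loss, reg_loss):
    """Re-weight per-sample losses reciprocally (illustrative sketch):
    the classification loss is scaled by localization quality (IoU),
    and the regression loss by classification confidence, so each
    branch is guided by the other's assessment of the same sample."""
    w_cls = iou / iou.sum()            # better-localized samples steer classification
    w_reg = cls_conf / cls_conf.sum()  # confident samples steer regression
    return (w_cls * cls_loss).sum(), (w_reg * reg_loss).sum()

cls_conf = np.array([0.9, 0.6, 0.3])   # classification confidence per positive sample
iou      = np.array([0.5, 0.8, 0.7])   # IoU of predicted box with ground truth
cls_loss = np.array([0.2, 0.5, 1.1])   # per-sample classification loss
reg_loss = np.array([0.7, 0.3, 0.4])   # per-sample box-regression loss

total_cls, total_reg = reciprocal_losses(cls_conf, iou, cls_loss, reg_loss)
```

A sample with a poor box but high confidence thus contributes less to the classification loss and more to the regression loss, nudging the two branches toward agreement.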


Author(s): Son Tung Ngo, Jafreezal B Jaafar, Izzatdin Abdul Aziz, Giang Hoang Nguyen, Anh Ngoc Bui

Examination timetabling is one of 3 critical timetabling jobs besides enrollment timetabling and teaching assignment. After a semester, scheduling examinations is not always an easy job in education management, especially for many data. The timetabling problem is an optimization and Np-hard problem. In this study, we build a multi-objective optimizer to create exam schedules for more than 2500 students. Our model aims to optimize the material costs while ensuring the dignity of the exam and students' convenience while considering the rooms' design, the time requirement of each exam, which involves rules and policy constraints. We propose a programmatic compromise to approach the maximum tar-get optimization model and solve it using the Genetic Algorithm. The results show the effectiveness of the introduced algorithm.
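A genetic algorithm for this kind of problem can be sketched on a toy instance. The code below is not the paper's model: the exams, slots and weights are invented, and the two objectives (conflict-freeness and number of slots used, as a stand-in for material cost) are collapsed into one weighted fitness for brevity.

```python
import random

# Toy instance (hypothetical data): exams that share students must not
# sit in the same timeslot; using fewer slots stands in for lower cost.
conflicts = {("math", "physics"), ("math", "chemistry")}
exams, slots = ["math", "physics", "chemistry"], [0, 1]

def fitness(assign):
    clashes = sum(1 for a, b in conflicts if assign[a] == assign[b])
    used = len(set(assign.values()))
    return clashes * 100 + used   # weighted sum: feasibility dominates cost

def evolve(generations=200, pop_size=20, seed=1):
    rng = random.Random(seed)
    pop = [{e: rng.choice(slots) for e in exams} for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness)
        parents = pop[: pop_size // 2]          # elitist selection
        children = []
        for _ in range(pop_size - len(parents)):
            p, q = rng.sample(parents, 2)
            child = {e: rng.choice((p[e], q[e])) for e in exams}  # uniform crossover
            if rng.random() < 0.2:                                # mutation
                child[rng.choice(exams)] = rng.choice(slots)
            children.append(child)
        pop = parents + children
    return min(pop, key=fitness)

best = evolve()
```

On this instance the optimum places "math" alone in one slot (no clashes, two slots used), which the GA finds quickly because the elitist step never discards a discovered optimum.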


2021, Vol 8 (1)
Author(s): Sujan Kumar Saha

Abstract. In this paper, we present a system for the automatic evaluation of the quality of a question paper. The question paper plays a major role in educational assessment, and its quality is crucial to fulfilling the purpose of the assessment. In many education sectors, question papers are prepared manually. A prior analysis of a question paper can help in finding errors and in better achieving the goals of the assessment. In this experiment, we focus on higher education in the technical domain. First, we conducted a student survey to identify the key factors that affect the quality of a question paper; the top factors identified are question relevance, question difficulty, and time requirement. We explored and implemented strategies to handle these factors, employing various concepts and techniques. The system finally assigns a numerical quality score against these factors. The system is evaluated using a set of question papers collected from various sources, and the experimental results show that the proposed system is quite promising.
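Combining the three factors into one numerical score could look like the sketch below. The weights and sub-scores are purely illustrative assumptions; the paper's actual scoring functions for relevance, difficulty and time are not specified here.

```python
# Hypothetical weights and 0-10 sub-scores; not the paper's calibration.
WEIGHTS = {"relevance": 0.4, "difficulty": 0.35, "time": 0.25}

def quality_score(subscores):
    """Combine per-factor scores into a single numerical quality score."""
    assert set(subscores) == set(WEIGHTS)
    return sum(WEIGHTS[f] * subscores[f] for f in WEIGHTS)

paper = {"relevance": 8.0, "difficulty": 6.0, "time": 9.0}
score = quality_score(paper)   # 0.4*8 + 0.35*6 + 0.25*9
```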


2021, Vol 6 (01), pp. 44-57
Author(s): Hannes Renzsch, Britton Ward

Abstract. In this paper, an approach to mimicking the influence of appendages on the pressure distribution on a boat's hull in RANS simulations is given. While the appendages could, of course, be modelled explicitly in the RANS simulation, this significantly increases the cell count and CPU-time requirements of the simulations, particularly for boats with multiple appendages. In this approach it is assumed that the pressure field generated by an appendage can be decomposed into two parts: one related to lift (asymmetric) and one related to the displaced volume (symmetric). For these parts, actuator-line momentum theory is utilized and doublet mass sources are described based on potential-flow theory. An initial assessment of the approach's capabilities and accuracy, based on the SYRF wide light series (Claughton, 2015), shows good promise. An application example with particular focus on the reduction of the CPU-time requirement is given for a boat fitted with a canting keel and a DSS foil.
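The symmetric, displaced-volume part of such a field is what a doublet source contributes in potential-flow theory. As a minimal, self-contained illustration (not the paper's 3-D formulation), the classic 2-D doublet superposed on a uniform stream reproduces the flow around a circular cylinder:

```python
import math

def doublet_plus_uniform(U, R, x, y):
    """Velocity of a 2-D doublet of strength kappa = 2*pi*U*R**2 superposed
    on a uniform stream U: the textbook potential-flow model of a cylinder
    of radius R, illustrating the symmetric 'displaced volume' field that
    doublet mass sources mimic."""
    r2 = x * x + y * y
    kappa = 2 * math.pi * U * R * R
    u = U + kappa / (2 * math.pi) * (y * y - x * x) / (r2 * r2)
    v = -kappa / (2 * math.pi) * (2 * x * y) / (r2 * r2)
    return u, v
```

At the front stagnation point (R, 0) the velocity vanishes, and at the shoulder (0, R) the tangential speed reaches 2U, the standard cylinder result, which makes the sketch easy to sanity-check.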


2021, Vol 14 (1), pp. 244-256
Author(s): Gokulapriya Raman, Ganesh Raj

Web usage behaviour mining is a substantial research problem, as it identifies different users' behaviour patterns by analysing web log files. However, the accuracy of existing methods in finding the usage behaviour of users on frequently accessed web patterns is limited, and they also require more time. A Mutual Information Pre-processing based Broken-Stick Linear Regression (MIP-BSLR) technique is proposed to refine the performance of web-user behaviour pattern mining with higher accuracy. Initially, web log files from the Apache web log dataset and the NASA dataset are taken as input. Then, a Mutual Information based Pre-processing (MI-P) method is applied to compute the mutual dependence between two web patterns. Based on the computed value, relevant web access patterns are retained for further processing and irrelevant patterns are removed. After that, Broken-Stick Linear Regression Analysis (BLRA) is performed in MIP-BSLR for web-user behaviour analysis. By applying BLRA, the frequently visited web patterns are identified. With these patterns identified, the MIP-BSLR technique accurately predicts the usage behaviour of web users and also increases the performance of web usage behaviour mining. The MIP-BSLR method is evaluated experimentally on factors such as pattern mining accuracy, false positives, time requirement and space requirement with respect to the number of web patterns. The outcomes show that, on the Apache web log dataset, the proposed technique improves pattern mining accuracy by 14% and reduces the false positive rate by 52%, the time requirement by 19% and the space complexity by 21% compared to conventional methods. Similarly, on the NASA dataset, pattern mining accuracy is increased by 16%, with reductions in the false positive rate of 47%, the time requirement of 20% and the space complexity of 22% compared to conventional methods.
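The pre-processing step hinges on empirical mutual information between pattern occurrences. The sketch below computes it from two per-session indicator sequences; it is a generic illustration under assumed binary inputs, not the paper's MI-P method itself.

```python
import math
from collections import Counter

def mutual_information(xs, ys):
    """Empirical mutual information (in bits) between two event sequences,
    e.g. per-session indicators that two web patterns were accessed."""
    n = len(xs)
    px, py = Counter(xs), Counter(ys)
    pxy = Counter(zip(xs, ys))
    mi = 0.0
    for (x, y), c in pxy.items():
        p_joint = c / n
        mi += p_joint * math.log2(p_joint / ((px[x] / n) * (py[y] / n)))
    return mi

# Two perfectly dependent binary patterns share 1 bit of information:
a = [0, 1, 0, 1, 0, 1, 0, 1]
mi_dep = mutual_information(a, a)
# A constant pattern shares none:
mi_ind = mutual_information(a, [1] * 8)
```

Patterns whose MI with the rest of the log falls below a threshold would then be dropped before the regression stage.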


2020
Author(s): Mischa Young, Steven Farber

We examine the wait times of Uber's wheelchair-accessible service (UberWAV) in Toronto to determine whether it meets the City's 11-minute average wait-time requirement. Using a 12-million-record dataset of every ride-hailing trip conducted in Toronto between September 2016 and March 2017, we show that wait times for UberWAV services were, on average, longer during rush-hour periods and for trips further away from downtown. Despite this, we find that UberWAV met the average wait-time requirement imposed by the City, and we believe that, by offering shorter wait times than previously available, this service significantly improves the mobility of people who require accessible transport services.
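The core check is a simple aggregation over trip records. The sketch below uses five invented trips to show the shape of the computation; the actual study operates on roughly 12 million records and on spatial distance from downtown as well.

```python
# Hypothetical trip records: (wait_minutes, is_rush_hour).
trips = [(7.5, False), (12.0, True), (9.0, False), (13.5, True), (8.0, False)]

def mean_wait(records):
    return sum(w for w, _ in records) / len(records)

overall = mean_wait(trips)
rush = mean_wait([t for t in trips if t[1]])
off_peak = mean_wait([t for t in trips if not t[1]])
meets_requirement = overall <= 11.0   # the City's average wait-time threshold
```

On this toy data the rush-hour average exceeds the off-peak one, yet the overall average still clears the 11-minute bar, mirroring the paper's finding.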


Author(s): Martin H. Maurer, Michael Brönnimann, Christophe Schroeder, Ehssan Ghadamgahi, Florian Streitparth, ...

Objective To estimate the human resources required for a retrospective quality review of different percentages of all routine diagnostic procedures in the Department of Radiology at Bern University Hospital, Switzerland.

Materials and Methods Three board-certified radiologists retrospectively evaluated the quality of the radiological reports of a total of 150 examinations (5 examination types: abdominal CT, chest CT, mammography, conventional X-ray images and abdominal MRI). Each report was assigned a RADPEER score of 1 to 3 (score 1: concur with previous interpretation; score 2: discrepancy in interpretation, not ordinarily expected to be made; score 3: discrepancy in interpretation, should be made most of the time). The time (in seconds) required for each review was documented and compared. A sensitivity analysis was conducted to calculate the total workload for reviewing different percentages of the clinic's total annual reporting volume.

Results Of the 450 reviews analyzed, 91.1 % (410/450) were assigned a score of 1 and 8.9 % (40/450) were assigned a score of 2 or 3. The average time required for a peer review was 60.4 s (min. 5 s, max. 245 s). The reviewer with the greatest clinical experience needed significantly less time to review the reports than the two reviewers with less clinical experience (p < 0.05). Average review times were longer for discrepant ratings with a score of 2 or 3 (p < 0.05). The total time calculated for reviewing all 5 types of examination for one year would be more than 1200 working hours.

Conclusion A retrospective peer review of radiological examination reports using the RADPEER system requires considerable human resources. However, to improve quality, it seems feasible to peer-review at least a portion of the total yearly reporting volume.
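The sensitivity analysis reduces to straightforward arithmetic. The sketch below uses the study's 60.4 s average per review; the annual report count is a hypothetical placeholder, not the clinic's actual figure.

```python
# Sensitivity-analysis sketch: reviewer hours needed to peer-review a given
# share of the annual reporting volume. 60.4 s/review is from the study;
# the annual volume below is an assumed placeholder.
SECONDS_PER_REVIEW = 60.4

def review_hours(annual_reports, share):
    """Hours of reviewer time to re-read `share` (0..1) of `annual_reports`."""
    return annual_reports * share * SECONDS_PER_REVIEW / 3600.0

annual = 80_000  # hypothetical yearly report count
workload = {pct: round(review_hours(annual, pct / 100), 1) for pct in (5, 10, 25, 100)}
```

Even reviewing only 10 % of such a volume costs over a hundred working hours per year, which is why the conclusion suggests reviewing a portion rather than the whole output.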


2020, Vol 9 (1), pp. 1279-1282

With the advancement of technology and the proliferation of computers in the country, the number of Afaan Oromo news documents produced has grown rapidly, making it difficult for news agencies to organize such a huge collection of documents manually. To solve this problem, this research applies unsupervised machine-learning tools in Python to Afaan Oromo news document clustering, at low cost and with good clustering quality. The work focuses on k-means cluster analysis, which produced better results than the other clustering methods considered, in terms of both time requirement and the quality of the clusters produced.
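The k-means procedure itself can be sketched in a few lines of NumPy. The snippet below is a generic Lloyd's-algorithm illustration on toy 2-D vectors; real document clustering would first vectorize the Afaan Oromo articles (e.g. as TF-IDF rows, not shown), and production implementations use randomized or k-means++ initialization rather than the deterministic first-k init used here for reproducibility.

```python
import numpy as np

def kmeans(X, k, iters=50):
    """Minimal k-means (Lloyd's algorithm) over row vectors in X."""
    centroids = X[:k].copy()   # deterministic init for this sketch
    for _ in range(iters):
        # assign each document to its nearest centroid (squared Euclidean)
        labels = np.argmin(((X[:, None] - centroids[None]) ** 2).sum(-1), axis=1)
        # move each centroid to the mean of its cluster
        centroids = np.array([X[labels == j].mean(0) for j in range(k)])
    return labels

# Two well-separated toy "document" groups in 2-D feature space:
X = np.array([[0.0, 0.0], [0.2, 0.0], [5.0, 5.0], [5.2, 5.0]])
labels = kmeans(X, 2)
```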

