A package auto-counting model based on tailored YOLO and DeepSort techniques

2022 ◽  
Vol 355 ◽  
pp. 02054
Author(s):  
Sijun Xie ◽  
Yipeng Zhou ◽  
Iker Zhong ◽  
Wenjing Yan ◽  
Qingchuan Zhang

In industrial settings, deep learning models deployed for object detection and tracking are usually too large, and appropriate trade-offs between speed and accuracy are required. In this paper, we present a compressed object identification model called Tailored-YOLO (T-YOLO) and build a lighter deep neural network based on T-YOLO and DeepSort. The model greatly reduces the number of parameters by tailoring the Conv and BottleneckCSP layers. We verify the construction by counting packages during the warehouse input-output process. Theoretical analysis and experimental results show that the mean average precision (mAP) is 99.50%, the recognition accuracy of the model is 95.88%, the counting accuracy is 99.80%, and the recall is 99.15%. Compared with the YOLOv5 combined with DeepSort model, the proposed optimization method maintains the accuracy of package recognition and counting while reducing the model parameters by 11 MB.
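As an illustration of the counting step only, the sketch below assigns per-frame detections to tracks with a toy IoU matcher and counts the distinct track IDs; it is a minimal stand-in, not the authors' T-YOLO or DeepSort code, and a real pipeline would use YOLO detections plus DeepSort's appearance features.

```python
# Minimal sketch (hypothetical, greatly simplified): count packages by assigning
# per-frame detection boxes to tracks via IoU matching and counting distinct tracks.

def iou(a, b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter + 1e-9)

def count_packages(frames, iou_threshold=0.3):
    """frames: list of per-frame detection lists, each box as (x1, y1, x2, y2)."""
    tracks = {}                      # track id -> last seen box
    next_id = 0
    for detections in frames:
        unmatched = dict(tracks)
        updated = {}
        for box in detections:
            # greedily match the detection to the best overlapping open track
            best_id, best_iou = None, iou_threshold
            for tid, prev in unmatched.items():
                overlap = iou(box, prev)
                if overlap > best_iou:
                    best_id, best_iou = tid, overlap
            if best_id is None:      # no match: start a new track
                best_id = next_id
                next_id += 1
            else:
                unmatched.pop(best_id)
            updated[best_id] = box
        tracks = updated
    return next_id                   # number of distinct tracks ever seen

# Two synthetic frames of one package moving right: counted once, not twice.
frames = [[(10, 10, 50, 50)], [(14, 10, 54, 50)]]
print(count_packages(frames))        # -> 1
```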

2012 ◽  
Vol 11 (3) ◽  
pp. 118-126 ◽  
Author(s):  
Olive Emil Wetter ◽  
Jürgen Wegge ◽  
Klaus Jonas ◽  
Klaus-Helmut Schmidt

In most work contexts, several performance goals coexist, and conflicts and trade-offs between them can occur. Our paper is the first to contrast a dual goal for speed and accuracy with a single goal for speed on the same task. The Sternberg paradigm (Experiment 1, n = 57) and the d2 test (Experiment 2, n = 19) were used as performance tasks. In both experiments, speed measures and errors revealed that dual as well as single goals increase performance by enhancing memory scanning. However, the single speed goal triggered a speed-accuracy trade-off, favoring speed over accuracy, whereas this was not the case with the dual goal. In difficult trials, dual goals slowed down scanning processes again so that errors could be prevented. This new finding is particularly relevant for security domains, where both aspects have to be managed simultaneously.


Diagnostics ◽  
2021 ◽  
Vol 11 (8) ◽  
pp. 1382
Author(s):  
Olga Martyna Koper-Lenkiewicz ◽  
Violetta Dymicka-Piekarska ◽  
Anna Justyna Milewska ◽  
Justyna Zińczuk ◽  
Joanna Kamińska

The aim of the study was to evaluate whether, in primary colorectal cancer (CRC) patients (n = 55), age, sex, TNM classification results, WHO grade, tumor location (proximal colon, distal colon, rectum), tumor size, platelet count (PLT), mean platelet volume (MPV), mean platelet component (MCP), levels of carcinoembryonic antigen (CEA) and cancer antigen CA 19-9, as well as soluble lectin adhesion molecules (L-, E-, and P-selectins), may influence circulating inflammatory biomarkers: IL-6, CRP, and sCD40L. We found that CRP concentration evaluation in routine clinical practice may have an advantage as a prognostic biomarker in CRC patients, as this protein most comprehensively reflects the clinicopathological features of the tumor. Univariate linear regression analysis revealed that in CRC patients: (1) with an increase in PLT by 10 × 103/μL, the mean concentration of CRP increases by 3.4%; (2) with an increase in CA 19-9 of 1 U/mL, the mean concentration of CRP increases by 0.7%; (3) with the WHO 2 grade, the mean CRP concentration increases 3.631 times relative to the WHO 1 grade group; (4) with the WHO 3 grade, the mean CRP concentration increases 4.916 times relative to the WHO 1 grade group; (5) with metastases (T1-4N+M+), the mean CRP concentration increases 4.183 times compared to non-metastatic patients (T1-4N0M0); (6) with a tumor located in the proximal colon, the mean concentration of CRP increases 2.175 times compared to a tumor located in the distal colon; (7) in patients with tumor size > 3 cm, the CRP concentration is about 2 times higher than in patients with tumor size ≤ 3 cm. In the multivariate linear regression model, the variables that influence the mean CRP value in CRC patients included WHO grade and tumor localization. R2 for the created model equals 0.50, which indicates that this model explains 50% of the variance in the dependent variable. In CRC subjects: (1) with the WHO 2 grade, the mean CRP concentration rises 3.924 times relative to the WHO 1 grade; (2) with the WHO 3 grade, the mean CRP concentration increases 4.721 times in relation to the WHO 1 grade; (3) with a tumor located in the rectum, the mean CRP concentration rises 2.139 times compared to a tumor located in the distal colon; (4) with a tumor located in the proximal colon, the mean concentration of CRP increases 1.998 times compared to a tumor located in the distal colon; all assuming the other model parameters are fixed.
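The percentage and fold-change statements above are consistent with a regression on log-transformed CRP; the short sketch below only illustrates how a log-scale coefficient maps onto such multiplicative statements (no patient data are reproduced, and the numbers are taken from the abstract purely as worked arithmetic).

```python
# Illustrative arithmetic only: if CRP is modeled on the log scale,
# log(CRP) = b0 + b1 * x + ..., a coefficient b corresponds to a multiplicative
# change exp(b) in mean CRP, which is how "increases 3.631 times" or
# "increases by 3.4%" style statements arise.
import math

# fold change reported for WHO grade 2 vs. grade 1 -> implied log-scale coefficient
fold_who2 = 3.631
beta_who2 = math.log(fold_who2)
print(f"implied coefficient for WHO 2: {beta_who2:.3f}")      # ~1.290

# a small coefficient maps approximately to a percentage change
beta_plt = math.log(1.034)        # +3.4% CRP per 10 x 10^3/uL platelets
print(f"exp(beta) - 1 = {math.exp(beta_plt) - 1:.3f}")        # ~0.034, i.e. 3.4%
```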


Author(s):  
Kersten Schuster ◽  
Philip Trettner ◽  
Leif Kobbelt

We present a numerical optimization method to find highly efficient (sparse) approximations for convolutional image filters. Using a modified parallel tempering approach, we solve a constrained optimization that maximizes approximation quality while strictly staying within a user-prescribed performance budget. The results are multi-pass filters where each pass computes a weighted sum of bilinearly interpolated sparse image samples, exploiting hardware acceleration on the GPU. We systematically decompose the target filter into a series of sparse convolutions, trying to find good trade-offs between approximation quality and performance. Since our sparse filters are linear and translation-invariant, they do not exhibit the aliasing and temporal coherence issues that often appear in filters working on image pyramids. We show several applications, ranging from simple Gaussian or box blurs to the emulation of sophisticated Bokeh effects with user-provided masks. Our filters achieve high performance as well as high quality, often providing significant speed-up at acceptable quality even for separable filters. The optimized filters can be baked into shaders and used as a drop-in replacement for filtering tasks in image processing or rendering pipelines.
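A minimal NumPy sketch of the multi-pass idea follows; it assumes hand-picked integer-offset taps rather than the bilinearly interpolated, optimizer-found sample positions and GPU shaders used in the paper.

```python
# Minimal sketch of a multi-pass sparse filter (not the authors' GPU code):
# each pass computes a weighted sum of a few shifted copies of the image, and the
# passes are chained. The paper additionally optimizes tap positions/weights with
# parallel tempering and samples with bilinear interpolation; here taps are fixed.
import numpy as np

def sparse_pass(img, taps):
    """taps: list of ((dy, dx), weight). Returns the weighted sum of shifted images."""
    out = np.zeros_like(img, dtype=np.float64)
    for (dy, dx), w in taps:
        out += w * np.roll(img, shift=(dy, dx), axis=(0, 1))
    return out

def apply_sparse_filter(img, passes):
    """Chain several sparse passes, emulating a multi-pass approximation."""
    for taps in passes:
        img = sparse_pass(img, taps)
    return img

# Example: a cheap box-blur-like filter from two 3-tap passes (horizontal, vertical).
passes = [
    [((0, -1), 1 / 3), ((0, 0), 1 / 3), ((0, 1), 1 / 3)],
    [((-1, 0), 1 / 3), ((0, 0), 1 / 3), ((1, 0), 1 / 3)],
]
image = np.random.rand(64, 64)
blurred = apply_sparse_filter(image, passes)
print(blurred.shape)
```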


2021 ◽  
pp. 875697282199994
Author(s):  
Joseph F. Hair ◽  
Marko Sarstedt

Most project management research focuses almost exclusively on explanatory analyses. Evaluation of the explanatory power of statistical models is generally based on F-type statistics and the R² metric, followed by an assessment of the model parameters (e.g., beta coefficients) in terms of their significance, size, and direction. However, these measures are not indicative of a model’s predictive power, which is central for deriving managerial recommendations. We recommend that project management researchers routinely use additional metrics, such as the mean absolute error or the root mean square error, to accurately quantify their statistical models’ predictive power.
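A minimal sketch of the recommended out-of-sample evaluation follows, using synthetic data and ordinary least squares; variable names and values are illustrative only.

```python
# Minimal sketch of out-of-sample predictive-power metrics (illustrative data only):
# fit a linear model on a training split, then report MAE and RMSE on a holdout set
# rather than relying on in-sample R^2 and significance tests alone.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))                        # hypothetical predictors
y = X @ np.array([0.8, -0.5, 0.3]) + rng.normal(scale=0.5, size=200)

X_train, X_test = X[:150], X[150:]
y_train, y_test = y[:150], y[150:]

# ordinary least squares via least-squares solve (intercept column added manually)
A_train = np.column_stack([np.ones(len(X_train)), X_train])
beta, *_ = np.linalg.lstsq(A_train, y_train, rcond=None)

A_test = np.column_stack([np.ones(len(X_test)), X_test])
pred = A_test @ beta

mae = np.mean(np.abs(y_test - pred))
rmse = np.sqrt(np.mean((y_test - pred) ** 2))
print(f"MAE = {mae:.3f}, RMSE = {rmse:.3f}")
```

The same metrics can also be averaged over k-fold cross-validation splits when a single holdout set is too small to be reliable.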


2021 ◽  
Vol 10 (s1) ◽  
Author(s):  
Chris Groendyke ◽  
Adam Combs

Objectives: Diseases such as SARS-CoV-2 have novel features that require modifications to the standard network-based stochastic SEIR model. In particular, we introduce modifications to this model to account for the potential changes in behavior patterns of individuals upon becoming symptomatic, as well as the tendency of a substantial proportion of those infected to remain asymptomatic. Methods: Using a generic network model where every potential contact exists with the same common probability, we conduct a simulation study in which we vary four key model parameters (transmission rate, probability of remaining asymptomatic, and the mean lengths of time spent in the exposed and infectious disease states) and examine the resulting impacts on various metrics of epidemic severity, including the effective reproduction number. We then consider the effects of a more complex network model. Results: We find that the mean length of time spent in the infectious state and the transmission rate are the most important model parameters, while the mean length of time spent in the exposed state and the probability of remaining asymptomatic are less important. We also find that the network structure has a significant impact on the dynamics of the disease spread. Conclusions: In this article, we present a modification to the network-based stochastic SEIR epidemic model which allows for modifications to the underlying contact network to account for the effects of quarantine. We also discuss the changes needed to the model to incorporate situations where some proportion of the individuals who are infected remain asymptomatic throughout the course of the disease.
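A minimal sketch of a discrete-time stochastic SEIR simulation on a generic network, where every potential contact exists with a common probability, is given below; all parameter values are illustrative, and the paper's asymptomatic and behavior-change modifications are omitted for brevity.

```python
# Minimal illustrative SEIR simulation on an Erdos-Renyi-style contact network
# (not the authors' calibrated model; parameters are assumed values).
import numpy as np

rng = np.random.default_rng(1)
N, contact_p = 500, 0.02
adj = rng.random((N, N)) < contact_p
adj = np.triu(adj, 1)
adj = adj | adj.T                          # symmetric contact network

beta = 0.05                                # per-contact, per-step transmission prob.
mean_exposed, mean_infectious = 3, 7       # mean lengths of E and I stages (steps)

# states: 0 = S, 1 = E, 2 = I, 3 = R
state = np.zeros(N, dtype=int)
state[rng.choice(N, 5, replace=False)] = 2          # seed a few infectious nodes
timer = np.full(N, mean_infectious)

for step in range(120):
    infectious = state == 2
    n_inf_neighbours = adj[:, infectious].sum(axis=1)
    p_infect = 1 - (1 - beta) ** n_inf_neighbours
    newly_exposed = (state == 0) & (rng.random(N) < p_infect)

    # progress existing E -> I and I -> R when their stage timers run out
    timer[(state == 1) | (state == 2)] -= 1
    to_infectious = (state == 1) & (timer <= 0)
    to_removed = (state == 2) & (timer <= 0)
    state[to_infectious] = 2
    timer[to_infectious] = rng.geometric(1 / mean_infectious, to_infectious.sum())
    state[to_removed] = 3

    # new exposures take effect at the end of the step
    state[newly_exposed] = 1
    timer[newly_exposed] = rng.geometric(1 / mean_exposed, newly_exposed.sum())

print("cumulative infections:", int(np.sum(state >= 1)))
```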


2011 ◽  
Vol 60 (2) ◽  
pp. 248-255 ◽  
Author(s):  
Sangmun Shin ◽  
Funda Samanlioglu ◽  
Byung Rae Cho ◽  
Margaret M. Wiecek

2016 ◽  
Vol 19 (2) ◽  
pp. 191-206 ◽  
Author(s):  
Emmanouil A. Varouchakis

Reliable temporal modelling of groundwater level is significant for efficient water resources management in hydrological basins and for the prevention of possible desertification effects. In this work we propose a stochastic method of temporal monitoring and prediction that can incorporate auxiliary information. More specifically, we model the temporal (mean annual and biannual) variation of groundwater level by means of a discrete time autoregressive exogenous variable (ARX) model. The ARX model parameters and its predictions are estimated by means of the Kalman filter adaptation algorithm (KFAA) which, to our knowledge, is applied for the first time in hydrology. KFAA is suitable for sparsely monitored basins that do not allow for an independent estimation of the ARX model parameters. We apply KFAA to time series of groundwater level values from the Mires basin on the island of Crete. In addition to precipitation measurements, we use pumping data as exogenous variables. We calibrate the ARX model based on the groundwater level for the years 1981 to 2006 and use it to predict the mean annual and biannual groundwater level for recent years (2007–2010). The predictions are validated with the available annual averages reported by the local authorities.
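A minimal sketch of an ARX(1) model with exogenous inputs whose parameters are updated by a Kalman-filter-style recursion is shown below; the data, dimensions, and noise settings are synthetic illustrations, not the Mires basin records or the paper's exact KFAA formulation.

```python
# Minimal sketch: ARX(1) groundwater-level model with exogenous inputs, parameters
# estimated by a Kalman-filter-style recursive update (synthetic, assumed data).
import numpy as np

rng = np.random.default_rng(2)
T = 60
rain = rng.gamma(2.0, 50.0, T)             # hypothetical annual precipitation
pumping = rng.normal(100.0, 10.0, T)       # hypothetical pumping volumes

# synthetic groundwater level generated from a "true" ARX(1) relation
true_theta = np.array([0.7, 0.01, -0.03])  # [AR coefficient, rain, pumping]
level = np.zeros(T)
level[0] = 30.0
for t in range(1, T):
    x = np.array([level[t - 1], rain[t], pumping[t]])
    level[t] = true_theta @ x + rng.normal(scale=0.3)

# Kalman-filter adaptation: the parameters are the state, levels are observations
theta = np.zeros(3)                  # parameter estimate
P = np.eye(3) * 10.0                 # parameter covariance
q, r = 1e-4, 0.3 ** 2                # assumed process and observation noise
one_step_pred = np.zeros(T)

for t in range(1, T):
    x = np.array([level[t - 1], rain[t], pumping[t]])
    one_step_pred[t] = theta @ x                    # predict before seeing level[t]
    P = P + q * np.eye(3)                           # random-walk parameter drift
    k = P @ x / (x @ P @ x + r)                     # Kalman gain
    theta = theta + k * (level[t] - theta @ x)      # update with the new observation
    P = P - np.outer(k, x) @ P

print("estimated parameters:", np.round(theta, 3))
```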


Author(s):  
Barbara J. Kelso

A legibility study was performed to investigate the effects of scale factors, graduation marks, orientation of scales, and reading conditions on the speed and accuracy of reading moving-tape instruments. Each of 150 Air Force officers made 150 self-paced readings from slides of hand-drawn tape instruments. Error was expressed as the magnitude of deviation of a subject's verbal response from the set scale value. An analysis of variance was performed on the mean error scores, standard deviations of error, mean reaction times, and standard deviations of reaction times. The results clearly favored the 1 7/8 inch scale factor over the 1 3/8 inch and 2 3/8 inch scale factors. The use of 9 graduation marks was superior to 0, 1, 3, or 4 graduation marks. Reading conditions had little effect on performance. Horizontal scales were read more rapidly but no more accurately than vertical scales.


2018 ◽  
Vol 51 (1) ◽  
pp. 40-60 ◽  
Author(s):  
Heinrich René Liesefeld ◽  
Markus Janczyk
