counting model
Recently Published Documents


TOTAL DOCUMENTS

73
(FIVE YEARS 25)

H-INDEX

11
(FIVE YEARS 1)

2022 ◽  
pp. 75-95
Author(s):  
Ranjit Barua ◽  
Sudipto Datta ◽  
Pallab Datta ◽  
Amit Roychowdhury

Additive manufacturing (AM) simplifies the fabrication of complex geometric structures. Its scope has rapidly expanded from producing pre-fabrication concept models to making finished functional parts, driving the need for better part-quality assurance in additively manufactured products. Machine learning (ML) is one of the promising methods that can be applied to achieve this aim. Recent studies in this area include the use of supervised and unsupervised ML algorithms for quality control and for predicting the mechanical properties of AM products. This chapter describes progress in applying ML to numerous aspects of the additive manufacturing chain, including model design and quality evaluation. Current challenges in applying ML to additive manufacturing, and possible solutions to these problems, are then outlined. Future trends are discussed in order to provide a general overview of this area of additive manufacturing.


2022 ◽  
Vol 355 ◽  
pp. 02054
Author(s):  
Sijun Xie ◽  
Yipeng Zhou ◽  
Iker Zhong ◽  
Wenjing Yan ◽  
Qingchuan Zhang

In industrial settings, deep learning models deployed for object detection and tracking are often too large, and appropriate trade-offs between speed and accuracy are required. In this paper, we present a compressed object identification model called Tailored-YOLO (T-YOLO) and build a lighter deep neural network architecture based on T-YOLO and DeepSort. The model greatly reduces the number of parameters by tailoring the Conv and BottleneckCSP layers. We verify the architecture by counting packages during the warehouse input-output process. The theoretical analysis and experimental results show that the mean average precision (mAP) is 99.50%, the recognition accuracy of the model is 95.88%, the counting accuracy is 99.80%, and the recall is 99.15%. Compared with the YOLOv5 model combined with DeepSort, the proposed optimization method preserves the accuracy of package recognition and counting and reduces the model parameters by 11 MB.
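The counting step described above, assigning each tracked object a serial identity and counting distinct IDs across frames, can be sketched as follows. The data layout and the `"package"` class name are illustrative assumptions, not the paper's actual interface.

```python
# Minimal sketch: counting distinct objects over a video by unique tracker IDs.
# Detections are assumed to arrive per frame as (track_id, class_name) pairs,
# e.g. from a tracker such as DeepSort.

def count_packages(frames):
    """Count distinct tracked objects labeled 'package' over a video sequence.

    frames: iterable of lists of (track_id, class_name) tuples.
    """
    seen = set()
    for detections in frames:
        for track_id, class_name in detections:
            if class_name == "package":
                seen.add(track_id)  # an ID persisting across frames counts once
    return len(seen)

frames = [
    [(1, "package"), (2, "package")],   # frame 1: two new packages
    [(1, "package"), (3, "package")],   # frame 2: ID 1 persists, ID 3 is new
    [(3, "package"), (4, "forklift")],  # frame 3: non-package IDs are ignored
]
```

Counting by track ID rather than per-frame detections is what prevents the same package from being counted once per frame as it moves through the camera's field of view.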


2021 ◽  
Author(s):  
Marcin Woźniak ◽  
Jakub Siłka ◽  
Michal Wieczorek

2021 ◽  
Vol 9 ◽  
Author(s):  
Wenyue Zhao ◽  
Jinming Yan ◽  
Ganggang Hou ◽  
Pengxiang Diwu ◽  
Tongjing Liu ◽  
...  

Polymer microspheres (PMs) are a kind of self-similar volume-expansion particle, and their fractal dimension varies with hydration swelling. However, there is no dedicated method for calculating their fractal dimension. This paper establishes a new model specifically for calculating the fractal dimension of PMs. We carried out hydration swelling experiments and scanning electron microscope (SEM) experiments to verify the new model. Both the new model and the box-counting model were used to calculate the fractal dimensions of PMs from the hydration experiment results, and a comparison of the two sets of results was used to verify the validity of the model. Finally, the fractal dimension characteristics of PMs were analyzed from the new model's results. The research indicates that the new model successfully correlates the cumulative probability of the dispersed PM system with the fractal dimension and makes the fractal dimension calculation of PMs more accurate and convenient. Based on the experimental results, the new model and the box-counting model both gave a fractal dimension of 2.638 in the initial hydration state, and 2.739 and 2.741, respectively, after one day of hydration, verifying the correctness of the new model. According to the hydration swelling experiments and the new model's results, the fractal dimension is linearly correlated with the average particle size of the PMs and with its standard deviation. This means the fractal dimension of PMs represents their space-occupancy ability and space-occupancy effectiveness.
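The box-counting model referenced above estimates a fractal dimension as the slope of log N(s) against log(1/s), where N(s) is the number of grid boxes of edge length s that the object occupies. A minimal sketch, assuming a point-cloud representation of the object (the paper's own PM-specific model is not reproduced here):

```python
import numpy as np

def box_counting_dimension(points, scales):
    """Estimate the box-counting (fractal) dimension of a point set.

    points: (N, d) array of coordinates in [0, 1)^d.
    scales: box edge lengths, e.g. 1/2, 1/4, 1/8, ...
    Returns the slope of log N(s) versus log(1/s).
    """
    counts = []
    for s in scales:
        # Assign each point to a grid cell of edge s, count occupied cells.
        cells = set(map(tuple, np.floor(points / s).astype(int)))
        counts.append(len(cells))
    logs = np.log(1.0 / np.asarray(scales))
    slope, _ = np.polyfit(logs, np.log(counts), 1)
    return slope

# A densely sampled filled unit square should yield a dimension near 2.
grid = np.stack(np.meshgrid(np.linspace(0, 0.999, 64),
                            np.linspace(0, 0.999, 64)), axis=-1).reshape(-1, 2)
dim = box_counting_dimension(grid, scales=[1/2, 1/4, 1/8, 1/16])  # ~2.0
```

The same routine applied to points along a line yields a dimension near 1, which is a quick sanity check on any box-counting implementation.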


arq.urb ◽  
2021 ◽  
pp. 114-124
Author(s):  
Pedro Oscar Pizzetti Mariano ◽  
Gabriela Pinho Mallmann

This article seeks to create and evaluate a parametric process that identifies the fractal dimension D of compositions through a box-counting tool. Using this method within a parametric process allows sequential tests that can confirm whether a composition is a fractal structure. A theoretical survey and the development of a parametric process in a visual algorithm were carried out for this research. The tool was tested on linear fractal compositions that are already known and were developed in earlier research. As a result, it was possible to compare the dimension D of different compositions made with fractal geometric patterns. In conclusion, the parametric tool made it possible to evaluate compositions and arrangements in an agile way, and a direct relationship was identified between the number of iterations used and the proportional increase in dimension D.


Plants ◽  
2021 ◽  
Vol 10 (8) ◽  
pp. 1625
Author(s):  
Hongmin Shao ◽  
Rong Tang ◽  
Yujie Lei ◽  
Jiong Mu ◽  
Yan Guan ◽  
...  

The real-time detection and counting of rice ears in the field is one of the most important methods for estimating rice yield. The traditional manual counting method has many disadvantages: it is time-consuming, inefficient, and subjective. Computer vision technology can therefore improve the accuracy and efficiency of rice ear counting in the field. The contributions of this article are as follows. (1) This paper establishes a dataset containing 3300 rice ear samples representing various complex situations, including variable light, complex backgrounds, overlapping rice, and overlapping leaves. The collected images were manually labeled, and data augmentation was used to increase the sample size. (2) This paper proposes a method that combines an LC-FCN (localization-based counting fully convolutional neural network) model based on transfer learning with the watershed algorithm for the recognition of dense rice images. The results show that the model is superior to traditional machine learning methods and to the single-shot multibox detector (SSD) algorithm for target detection, and it is currently an advanced rice ear counting model. The mean absolute error (MAE) of the model on the 300-image test set is 2.99. The model can be used to calculate the number of rice ears in the field; in addition, it provides reliable basic data for rice yield estimation and a rice dataset for research.
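The paper pairs the LC-FCN output with the watershed algorithm to separate touching ears before counting. As a much simplified stand-in for that final counting step, the sketch below counts 4-connected foreground regions in a binary mask; a real pipeline would first split overlapping ears with watershed markers, which plain connected-component counting cannot do.

```python
from collections import deque

def count_regions(mask):
    """Count 4-connected foreground regions in a binary mask (lists of 0/1)."""
    rows, cols = len(mask), len(mask[0])
    seen = [[False] * cols for _ in range(rows)]
    count = 0
    for r in range(rows):
        for c in range(cols):
            if mask[r][c] and not seen[r][c]:
                count += 1                      # found a new region
                queue = deque([(r, c)])         # flood-fill it so it is
                seen[r][c] = True               # not counted again
                while queue:
                    y, x = queue.popleft()
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if (0 <= ny < rows and 0 <= nx < cols
                                and mask[ny][nx] and not seen[ny][nx]):
                            seen[ny][nx] = True
                            queue.append((ny, nx))
    return count

mask = [
    [1, 1, 0, 0, 1],
    [0, 1, 0, 0, 1],
    [0, 0, 0, 0, 0],
    [1, 0, 0, 1, 1],
]
```

The mask here contains four separate blobs, so `count_regions(mask)` returns 4.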


2021 ◽  
Vol 52 (S2) ◽  
pp. 815-818
Author(s):  
Rui Tang ◽  
Tianyou Zhang ◽  
Fan Tian ◽  
Jing Xu ◽  
Jason Hwang
Keyword(s):  

Author(s):  
Wenjing Zhou ◽  
Xueyan Zhu ◽  
Mengmeng Gu ◽  
Fengjun Chen

To achieve rapid and accurate counting of seedlings on mobile terminals such as unmanned aerial vehicles (UAVs), we propose a lightweight spruce counting model. Given the difficulties of spruce adhesion and complex environmental interference, we adopt Mask R-CNN as the basic model, which performs instance-level segmentation of the target. To apply the basic model on mobile terminals, we make the Mask R-CNN model lightweight as follows: the feature extraction network is replaced with MobileNetV1, and NMS is replaced with Fast NMS. At the implementation level, we augment the 403 spruce images taken by UAV to 1612 images, of which 1440 are used as the training set and 172 as the test set. We evaluate the lightweight Mask R-CNN model. Experimental results indicate that the Mean Counting Accuracy (MCA) is 95%, the Mean Absolute Error (MAE) is 8.02, the Mean Square Error (MSE) is 181.55, the Average Counting Time (ACT) is 1.514 s, and the Model Size (MS) is 90 MB. We compare the lightweight Mask R-CNN model with the Mask R-CNN model, the SSD+MobileNetV1 counting model, the FCN+Hough circle counting model, and the FCN+Slice counting model. The ACT of the lightweight Mask R-CNN model is 0.876 s, 0.359 s, 1.691 s, and 2.443 s faster than these four models, respectively. In terms of MCA, the lightweight Mask R-CNN model is similar to the Mask R-CNN model and is 4.2%, 5.2%, and 9.3% higher than the SSD+MobileNetV1 counting model, the FCN+Slice counting model, and the FCN+Hough circle counting model, respectively. The experimental results demonstrate that the lightweight Mask R-CNN model achieves high accuracy in real time, a valuable step toward deploying automatic seedling counting on mobile terminals.
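The counting metrics quoted above (MCA, MAE, MSE) can be computed per image as sketched below. The MCA formula used here, the mean of 1 − |error|/truth across images, is one plausible reading; the abstract does not spell out its exact definition.

```python
def counting_metrics(predicted, actual):
    """Per-image counting metrics over paired lists of predicted/true counts.

    Returns (MCA, MAE, MSE). MCA as implemented here is an assumption:
    mean over images of 1 - |pred - true| / true.
    """
    n = len(actual)
    abs_err = [abs(p - a) for p, a in zip(predicted, actual)]
    mae = sum(abs_err) / n                                  # mean absolute error
    mse = sum(e * e for e in abs_err) / n                   # mean squared error
    mca = sum(1 - e / a for e, a in zip(abs_err, actual)) / n
    return mca, mae, mse

# Three hypothetical test images, each truly containing 100 seedlings.
mca, mae, mse = counting_metrics([95, 102, 110], [100, 100, 100])
```

Note that MAE and MSE penalize the same errors differently: MSE's squaring makes one image with a large miscount dominate, which is why the abstract's MSE (181.55) is far larger than the square of its MAE (8.02).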


Teknik ◽  
2021 ◽  
Vol 42 (2) ◽  
pp. 169-177
Author(s):  
Faqih Rofii ◽  
Gigih Priyandoko ◽  
Muhammad Ifan Fanani ◽  
Aji Suraji

Models for vehicle detection, classification, and counting based on computer vision and artificial intelligence are constantly evolving. In this study, we present a YOLOv4-based vehicle detection, classification, and counting model. The number of vehicles was computed by generating a serial identity number for each vehicle. Each object is detected and classified, marked by the display of bounding boxes, classes, and confidence scores. The system input is a video dataset that considers the camera position, light intensity, and vehicle traffic density. The method counts the numbers of cars, motorcycles, buses, and trucks. Model performance is evaluated using the accuracy, precision, and recall of the confusion matrix. The dataset tests and performance calculations yielded the best accuracy, precision, and recall (83%, 93%, and 94%) when the model was tested during the day, with the camera at a height of 6 m and a loss of 500. The lowest accuracy, precision, and recall (68%, 77%, and 78%) were obtained when the model was tested at night, with the camera at a height of 1.5 m and a loss of 900.
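Accuracy, precision, and recall derived from a confusion matrix, as used in the evaluation above, can be sketched as follows. Precision and recall are macro-averaged over classes here, which is one common convention; the class names and example data are illustrative.

```python
def confusion_metrics(pairs, classes):
    """Accuracy plus macro-averaged precision/recall from (true, pred) pairs."""
    tp = {c: 0 for c in classes}  # true positives per class
    fp = {c: 0 for c in classes}  # false positives per class
    fn = {c: 0 for c in classes}  # false negatives per class
    correct = 0
    for true, pred in pairs:
        if true == pred:
            correct += 1
            tp[true] += 1
        else:
            fp[pred] += 1   # pred's class gained a wrong detection
            fn[true] += 1   # true's class missed a detection
    accuracy = correct / len(pairs)
    # Macro averaging: each class weighs equally regardless of frequency.
    precision = sum(tp[c] / (tp[c] + fp[c])
                    for c in classes if tp[c] + fp[c]) / len(classes)
    recall = sum(tp[c] / (tp[c] + fn[c])
                 for c in classes if tp[c] + fn[c]) / len(classes)
    return accuracy, precision, recall

# Four hypothetical classifications: one car misread as a bus.
pairs = [("car", "car"), ("car", "bus"), ("bus", "bus"), ("bus", "bus")]
acc, prec, rec = confusion_metrics(pairs, ["car", "bus"])
```

With imbalanced traffic (many cars, few buses), macro averaging keeps rare classes from being drowned out, which matters when comparing day and night test conditions as the study does.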

