A Commodity Classification Framework Based on Machine Learning for Analysis of Trade Declaration

Symmetry ◽  
2021 ◽  
Vol 13 (6) ◽  
pp. 964
Author(s):  
Mingshu He ◽  
Xiaojuan Wang ◽  
Chundong Zou ◽  
Bingying Dai ◽  
Lei Jin

Text, voice, images, and videos can express intentions and facts in daily life, and by understanding such content, people can identify and analyze behaviors. This paper focuses on the commodity trade declaration process and identifies commodity categories based on the text of customs declarations. Although text recognition technology is mature in many application fields, there are few studies on the classification and recognition of goods on customs declarations. In this paper, we propose a classification framework based on machine learning (ML) models for commodity trade declaration that reaches a high rate of accuracy. We also propose a symmetrical decision fusion method for this task based on a convolutional neural network (CNN) and a transformer. The experimental results show that the fusion model compensates for the shortcomings of the two original models and yields measurable improvements: on the two datasets used in this paper, accuracy reaches 88% and 99%, respectively. To promote research on the customs declaration business and Chinese text recognition, we also release the proprietary datasets used in this study.
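The abstract does not specify how the symmetrical decision fusion combines the CNN and transformer outputs; a minimal sketch, assuming both models emit class-probability vectors that are averaged with equal weight, might look like this (all probabilities below are made up):

```python
# Hedged sketch: symmetrical decision fusion of two classifiers'
# class-probability outputs by weighted averaging. The CNN and
# transformer themselves are assumed; stand-in probability lists
# take their place here.

def fuse_probabilities(p_model_a, p_model_b, weight=0.5):
    """Symmetrically fuse two probability vectors; weight=0.5 gives
    both models an equal say."""
    return [weight * a + (1 - weight) * b
            for a, b in zip(p_model_a, p_model_b)]

def predict(p_fused, labels):
    """Return the label with the highest fused probability."""
    return max(zip(labels, p_fused), key=lambda t: t[1])[0]

# Example: the CNN is confident in class 0, the transformer in class 2.
cnn_probs = [0.6, 0.3, 0.1]
transformer_probs = [0.2, 0.3, 0.5]
fused = fuse_probabilities(cnn_probs, transformer_probs)
print(predict(fused, ["electronics", "textiles", "machinery"]))  # → electronics
```

A learned or validation-tuned `weight` would let one model dominate where it is stronger, which is one way a fusion model can "make up for the shortcomings" of its members.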

Author(s):  
Samir Bandyopadhyay Sr ◽  
Shawni Dutta

BACKGROUND In recent days, the Covid-19 coronavirus has had an immense impact on social and economic life around the world. The objective of this study is to determine whether machine learning methods can produce predictions close to the original data for Confirmed, Negative, Released, and Death cases of Covid-19. For this purpose, a verification method based on deep neural networks is proposed. In this framework, Long Short-Term Memory (LSTM) and Gated Recurrent Unit (GRU) layers are combined to train on the dataset, and the prediction results are tallied against the results predicted by clinical doctors. The predictions are validated against the original data using predefined metrics. The experimental results show that the proposed approach generates suitable results for this critical disease outbreak and helps doctors with further verification of the virus. The outbreak of the coronavirus grows exponentially, which makes it difficult to control with a limited number of clinical staff handling a huge number of patients within a reasonable time. It is therefore necessary to build an automated model, based on a machine learning approach, as a corrective measure after the decisions of clinical doctors; it could be a promising supplementary confirmation method for frontline clinicians. The proposed method has a high prediction rate and works fast enough for probable accurate identification of the disease. OBJECTIVE Validation of COVID-19 disease. METHODS Machine learning. RESULTS 90% accuracy. CONCLUSIONS The combined LSTM-GRU-based RNN model provides comparatively better results in predicting confirmed, released, negative, and death cases. This paper presents a novel method that can automatically recheck reported cases of COVID-19.
The data-driven RNN-based model provides an automated tool for confirming and estimating the current position of the pandemic, assessing its severity, and assisting government and health workers in making good policy decisions. It could be a promising supplementary rechecking method for frontline clinical doctors and is essential for improving the accuracy of the detection process.
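The abstract names LSTM and GRU layers but gives no architecture details. As a toy illustration of the gated recurrence such models rely on, here is a single scalar GRU cell stepped over a short sequence in plain Python; the weights are arbitrary placeholders, not trained values from the paper:

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def gru_step(x, h_prev, w):
    """One GRU cell step for scalar input and state; w is a dict of
    scalar weights (toy sizes, for illustration only)."""
    z = sigmoid(w["wz"] * x + w["uz"] * h_prev)            # update gate
    r = sigmoid(w["wr"] * x + w["ur"] * h_prev)            # reset gate
    h_tilde = math.tanh(w["wh"] * x + w["uh"] * (r * h_prev))  # candidate state
    return (1 - z) * h_prev + z * h_tilde                  # blend old and new

weights = {"wz": 0.5, "uz": 0.5, "wr": 0.5, "ur": 0.5, "wh": 1.0, "uh": 1.0}
h = 0.0
for x in [1.0, 2.0, 3.0]:   # e.g. scaled daily case counts
    h = gru_step(x, h, weights)
print(round(h, 3))
```

A real forecasting model would stack vector-valued LSTM/GRU layers in a framework such as Keras or PyTorch and fit the weights to the case-count time series.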


Sensors ◽  
2020 ◽  
Vol 21 (1) ◽  
pp. 46
Author(s):  
Gangqiang Zhang ◽  
Wei Zheng ◽  
Wenjie Yin ◽  
Weiwei Lei

The launch of the GRACE satellites has provided a new avenue for studying terrestrial water storage anomalies (TWSA) with unprecedented accuracy. However, the coarse spatial resolution greatly limits their application in hydrological research at local scales. To overcome this limitation, this study develops a machine learning-based fusion model to obtain high-resolution (0.25°) groundwater level anomalies (GWLA) by integrating GRACE observations in the North China Plain. The fusion model consists of three modules: a downscaling module, a data fusion module, and a prediction module. In the downscaling module, the GRACE-Noah model outperforms traditional data-driven models (multiple linear regression and gradient boosting decision tree (GBDT)), with correlation coefficient (CC) values from 0.24 to 0.78. In the data fusion module, groundwater levels from 12 monitoring wells are combined with climate variables (precipitation, runoff, and evapotranspiration) using the GBDT algorithm, achieving satisfactory performance (mean values: CC 0.97, RMSE 1.10 m, MAE 0.87 m). By merging the downscaled TWSA and the fused groundwater level with the GBDT algorithm, the prediction module predicts the water level in specified pixels. The predicted groundwater level is validated against six in-situ groundwater level data sets in the study area. Compared to the downscaling module, the CC metric improves significantly, on average from 0.43 to 0.71. This study provides a feasible and accurate fusion model for downscaling GRACE observations and predicting groundwater levels with improved accuracy.
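GBDT appears in two of the three modules. The study would use a full library implementation, but the core boosting idea can be sketched with depth-1 stumps on a single made-up feature; none of the data below comes from the paper:

```python
# Minimal sketch of gradient boosting with depth-1 regression stumps,
# illustrating the GBDT idea used in the fusion and prediction modules.
# Real work would use a library such as scikit-learn; the toy data is
# invented (anomaly as a step function of one climate variable).

def fit_stump(xs, residuals):
    """Find the threshold split minimizing squared error of the residuals."""
    best = None
    for t in sorted(set(xs)):
        left = [r for x, r in zip(xs, residuals) if x <= t]
        right = [r for x, r in zip(xs, residuals) if x > t]
        if not left or not right:
            continue
        lmean, rmean = sum(left) / len(left), sum(right) / len(right)
        err = (sum((r - lmean) ** 2 for r in left)
               + sum((r - rmean) ** 2 for r in right))
        if best is None or err < best[0]:
            best = (err, t, lmean, rmean)
    _, t, lmean, rmean = best
    return lambda x: lmean if x <= t else rmean

def gbdt_fit(xs, ys, n_rounds=20, lr=0.3):
    """Fit an additive model: mean prediction plus lr-scaled stumps,
    each trained on the current residuals."""
    base = sum(ys) / len(ys)
    pred = [base] * len(ys)
    stumps = []
    for _ in range(n_rounds):
        residuals = [y - p for y, p in zip(ys, pred)]
        stump = fit_stump(xs, residuals)
        stumps.append(stump)
        pred = [p + lr * stump(x) for p, x in zip(pred, xs)]
    return lambda x: base + lr * sum(s(x) for s in stumps)

xs = [0.0, 1.0, 2.0, 3.0, 4.0, 5.0]
ys = [1.0, 1.0, 1.0, 3.0, 3.0, 3.0]
model = gbdt_fit(xs, ys)
print(round(model(0.5), 2), round(model(4.5), 2))  # → 1.0 3.0
```

In the paper's setting the inputs would be multivariate (downscaled TWSA plus precipitation, runoff, and evapotranspiration) and the trees deeper than stumps.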


2021 ◽  
pp. 1-12
Author(s):  
Fei Long

The difficulty of English text recognition lies in classifying fuzzy image text and parts of speech, and traditional models have a high error rate on this task. To improve English text recognition, guided by machine learning ideas, this paper combines the ant colony algorithm and the genetic algorithm to construct an English text recognition model based on machine learning. Building on the optimization characteristics of the ant colony intelligent algorithm, a method of using the ant colony algorithm to solve for central nodes is proposed. The ant colony algorithm is used to obtain the characteristic points in the study area and determine a reasonable number of them; a uniform grid is then used to select some non-characteristic points as central nodes of the kernel function, and the reasonably distributed central nodes are finally used for modeling. This paper also designs experiments to verify the performance of the constructed model and uses mathematical statistics to visually display the experimental results in tables and graphs. The research results show that the performance of the constructed model is good.
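The central-node selection via the ant colony algorithm is only described at a high level. The toy sketch below illustrates the pheromone-feedback idea on an invented candidate-scoring problem; it is not the authors' actual formulation, and the scores and parameters are assumptions:

```python
import random

# Toy sketch of the ant colony idea behind center-node selection:
# candidate nodes accumulate pheromone in proportion to a (made-up)
# coverage score, so later ants concentrate on good candidates.

random.seed(0)

def ant_colony_select(scores, n_ants=200, evaporation=0.1):
    """scores: quality of each candidate node (higher is better).
    Returns the index the colony converges on."""
    pheromone = [1.0] * len(scores)
    for _ in range(n_ants):
        # Roulette-wheel choice proportional to pheromone.
        r = random.uniform(0, sum(pheromone))
        acc, choice = 0.0, 0
        for i, p in enumerate(pheromone):
            acc += p
            if r <= acc:
                choice = i
                break
        # Evaporate, then deposit pheromone proportional to node quality.
        pheromone = [(1 - evaporation) * p for p in pheromone]
        pheromone[choice] += scores[choice]
    return max(range(len(scores)), key=lambda i: pheromone[i])

# Node 2 has the best coverage score, so pheromone should pile up on it.
print(ant_colony_select([0.2, 0.5, 0.9, 0.4]))
```

In the paper, the "score" would instead reflect how well a candidate central node covers the characteristic points of the study area, and a genetic algorithm refines the result.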


2021 ◽  
Vol 2 (3) ◽  

Cold forging is a high-speed forming technique used to shape metals at near room temperature; it allows high-rate production of high-strength metal-based products in a consistent and cost-effective manner. However, cold forming processes are characterized by complex material deformation dynamics, which makes product quality control difficult to achieve, and there is no well-defined mathematical model governing the interactions between a cold forming process, material properties, and final product quality. The goal of this work is to review the state of research on using acoustic emission (AE) technology to monitor the cold forging process. The integration of AE with machine learning (ML) algorithms for quality monitoring is also reviewed and discussed. It is observed that this promising technology has not yet received the attention it deserves for implementation in cold forging and that more work is needed.


The hand gesture detection problem is one of the most prominent problems in machine learning and computer vision. Many machine learning techniques have been employed to solve hand gesture recognition, with applications in sign language recognition, virtual reality, human-machine interaction, autonomous vehicles, driver assistance systems, etc. In this paper, the goal is to design a system that correctly identifies hand gestures from a dataset of hundreds of hand gesture images. To this end, a decision fusion-based system using transfer learning architectures is proposed. Two pretrained models, 'MobileNet' and 'Inception V3', are used for this purpose. To find the region of interest (ROI) in the image, the YOLO (You Only Look Once) architecture is used, which also decides the type of model. Edge-map images and spatial images are trained using two separate versions of the MobileNet-based transfer learning architecture, and the final probabilities are combined to decide the hand sign in the image. Simulation results measured by classification accuracy indicate the superiority of this approach over previously researched approaches.
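The abstract says the final probabilities of the edge-map and spatial streams are combined but does not give the rule. A product (geometric-mean) rule is one common choice for combining independent classifier outputs; the sketch below assumes it, with invented probabilities standing in for the two MobileNet streams:

```python
import math

# Hedged sketch: combining the class probabilities of the edge-map
# stream and the spatial stream with a geometric-mean (product) rule.
# The actual combination rule and probabilities are assumptions.

def product_fuse(p_edge, p_spatial):
    """Geometric mean of two probability vectors, renormalized so the
    result is again a distribution."""
    raw = [math.sqrt(a * b) for a, b in zip(p_edge, p_spatial)]
    total = sum(raw)
    return [r / total for r in raw]

p_edge = [0.7, 0.2, 0.1]      # stream trained on edge-map images
p_spatial = [0.5, 0.4, 0.1]   # stream trained on raw spatial images
fused = product_fuse(p_edge, p_spatial)
print(max(range(3), key=lambda i: fused[i]))  # index of the predicted gesture → 0
```

The product rule penalizes classes that either stream considers unlikely, which is why it is often preferred over averaging when the two streams see genuinely different views of the same image.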


2021 ◽  
Vol 2021 ◽  
pp. 1-8
Author(s):  
Khalid Twarish Alhamazani ◽  
Jalawi Alshudukhi ◽  
Saud Aljaloud ◽  
Solomon Abebaw

Chronic kidney disease (CKD) is a global health issue with high rates of morbidity, mortality, and disease progression. Because there are no visible symptoms in the early stages of CKD, patients frequently go unnoticed. Early detection of CKD allows patients to receive timely treatment, slowing the disease's progression. Owing to their rapid recognition performance and accuracy, machine learning models can effectively assist physicians in achieving this goal. We propose a machine learning methodology for CKD diagnosis in this paper; the underlying patient information was completely anonymized. The CRISP-DM® model (Cross-Industry Standard Process for Data Mining) was used as a reference. The data were processed entirely in the cloud on the Azure platform, where the sample data were unbalanced. Exploration and analysis were then carried out, and the data were balanced using the SMOTE technique. After the data balancing was completed successfully, four machine learning algorithms were applied: logistic regression, decision forest, neural network, and decision jungle. The decision forest outperformed the other machine learning models with a score of 92%, indicating that the approach used in this study provides a good baseline for production solutions.
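A production pipeline would call imbalanced-learn's `SMOTE` for the balancing step, but the interpolation idea can be sketched with the standard library alone; the two-feature minority points below are made up for illustration:

```python
import random

# Minimal SMOTE-style oversampling sketch (stdlib only): synthesize
# minority-class points by interpolating between a sample and one of
# its k nearest minority neighbours.

random.seed(42)

def smote(minority, n_new, k=2):
    """minority: list of feature tuples from the minority class.
    Returns n_new synthetic points lying between real samples."""
    synthetic = []
    for _ in range(n_new):
        base = random.choice(minority)
        # k nearest neighbours of base within the minority class
        neighbours = sorted(
            (p for p in minority if p is not base),
            key=lambda p: sum((a - b) ** 2 for a, b in zip(base, p)),
        )[:k]
        nb = random.choice(neighbours)
        gap = random.random()  # random position along the segment
        synthetic.append(tuple(a + gap * (b - a) for a, b in zip(base, nb)))
    return synthetic

ckd_positive = [(1.0, 2.0), (1.2, 2.1), (0.9, 1.8)]  # toy minority class
new_points = smote(ckd_positive, n_new=3)
print(len(new_points))  # 3 synthetic samples
```

Because every synthetic point is a convex combination of two real minority samples, the oversampled class stays inside the region the real data occupies rather than duplicating exact records.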

