Attention Deeplabv3 model and its application into gear pitting measurement

2021 ◽  
pp. 1-14
Author(s):  
Dejun Xi ◽  
Yi Qin ◽  
Zhiwen Wang

An efficient visual detection method is explored in this study to address the low accuracy and efficiency of manual detection for irregular gear pitting. The results of gear pitting detection are enhanced by embedding two attention modules into Deeplabv3+ to obtain an improved segmentation model called attention Deeplabv3+. The attention mechanism endows the proposed model with an enhanced ability to represent the features of small and irregular objects and effectively improves the segmentation performance of Deeplabv3+. The segmentation ability of attention Deeplabv3+ is verified by comparing its performance with those of other typical segmentation networks on two public datasets, namely, Cityscapes and VOC2012. The proposed model is subsequently applied to segment gear pitting and tooth surfaces simultaneously, and the pitting area ratio is calculated. Experimental results show that attention Deeplabv3+ achieves higher segmentation performance and measurement accuracy than existing classical models at the same computing speed. Thus, the proposed model is suitable for measuring various types of gear pitting.
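The pitting area ratio mentioned above can be computed directly from the segmentation output. A minimal sketch, assuming the network emits per-pixel class labels; the class indices and the counting rule (pitting counted as part of the tooth surface) are illustrative assumptions, not details from the paper:

```python
import numpy as np

# Hypothetical class indices for the segmentation output.
TOOTH_SURFACE = 1
PITTING = 2

def pitting_area_ratio(pred: np.ndarray) -> float:
    """Ratio of pitted pixels to total tooth-surface pixels.

    `pred` is an (H, W) array of per-pixel class labels produced by
    the segmentation network; pitted pixels are counted as part of
    the tooth surface.
    """
    pitting_px = np.count_nonzero(pred == PITTING)
    surface_px = np.count_nonzero((pred == TOOTH_SURFACE) | (pred == PITTING))
    if surface_px == 0:
        return 0.0
    return pitting_px / surface_px

# Toy 3x3 prediction: 6 tooth-surface pixels, 2 of them pitted.
pred = np.array([[0, 1, 1],
                 [1, 2, 2],
                 [0, 1, 0]])
print(pitting_area_ratio(pred))  # 2/6 ≈ 0.333
```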

2017 ◽  
Vol 921 (3) ◽  
pp. 7-13 ◽  
Author(s):  
S.V. Grishko

This paper shows that the accuracy of relative satellite measurements depends not only on the baseline length, as stipulated by the accuracy rating formula for GNSS equipment, but also on the duration of observations. The rigorous adjustment of highly redundant satellite networks with varying observation durations yields covariance matrices of the baselines that most realistically reflect the actual errors of satellite observations. The relationship between these errors and the baseline length and observation duration is investigated. The significant influence of solar activity on the accuracy of satellite measurements generally makes otherwise similar series of measurements taken at different periods non-equivalent, for example, in monitoring work. A model with good qualitative characteristics is proposed for approximating the functional dependence of baseline accuracy on baseline length and observation duration. Based on the proposed model, we analyzed how measurement accuracy changes as observation time increases.


2021 ◽  
Vol 11 (9) ◽  
pp. 3974
Author(s):  
Laila Bashmal ◽  
Yakoub Bazi ◽  
Mohamad Mahmoud Al Rahhal ◽  
Haikel Alhichri ◽  
Naif Al Ajlan

In this paper, we present an approach for the multi-label classification of remote sensing images based on data-efficient transformers. During the training phase, we generated a second view of each image in the training set using data augmentation. Then, both the image and its augmented version were reshaped into a sequence of flattened patches and fed to the transformer encoder. The encoder extracts a compact feature representation from each image with the help of a self-attention mechanism, which can handle the global dependencies between different regions of a high-resolution aerial image. On top of the encoder, we mounted two classifiers, a token classifier and a distiller classifier. During training, we minimized a global loss consisting of two terms, one per classifier. In the test phase, we took the average of the two classifiers' outputs as the final class labels. Experiments on two datasets acquired over the cities of Trento and Civezzano with a ground resolution of two centimeters demonstrated the effectiveness of the proposed model.


2021 ◽  
pp. 108128652110258
Author(s):  
Yi-Ying Feng ◽  
Xiao-Jun Yang ◽  
Jian-Gen Liu ◽  
Zhan-Qing Chen

The general fractional operator shows great predominance in the construction of constitutive models owing to its flexibility in choosing the embedded parameters. A generalized fractional viscoelastic–plastic constitutive model in the sense of the k-Hilfer–Prabhakar (k-H-P) fractional operator, which recovers the known classical models as special cases, is established in this article. In order to describe the damage in the creep process, a time-varying elastic element [Formula: see text] is used in the proposed model to better represent the accelerated creep stage. According to the kinematics of deformation and the Laplace transform, the creep constitutive equation and the strain of the modified model are established and obtained. The validity and rationality of the proposed model are verified by fitting the experimental data. Finally, the influences of the fractional derivative order [Formula: see text] and the parameter k on the creep process are investigated through sensitivity analyses with two- and three-dimensional plots.
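For orientation, the kernel behind the Hilfer–Prabhakar operator is built from the three-parameter Mittag-Leffler (Prabhakar) function; the k-deformed variant used in the paper replaces the gamma functions with their k-analogues $\Gamma_k$, whose exact normalization is defined there:

```latex
E_{\alpha,\beta}^{\gamma}(z)
  = \sum_{n=0}^{\infty}
    \frac{\Gamma(\gamma+n)}{\Gamma(\gamma)\,n!}\,
    \frac{z^{n}}{\Gamma(\alpha n+\beta)},
  \qquad \operatorname{Re}(\alpha) > 0,
```

with the associated Prabhakar kernel $e_{\alpha,\beta}^{\gamma}(\lambda;t) = t^{\beta-1} E_{\alpha,\beta}^{\gamma}(\lambda t^{\alpha})$, whose convolution with the strain history yields the fractional integral entering the constitutive equation.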


2021 ◽  
Author(s):  
Masaki Uto

Performance assessment, in which human raters assess examinee performance in a practical task, often involves the use of a scoring rubric consisting of multiple evaluation items to increase the objectivity of evaluation. However, even when using a rubric, assigned scores are known to depend on characteristics of the rubric’s evaluation items and the raters, thus decreasing ability measurement accuracy. To resolve this problem, item response theory (IRT) models that can estimate examinee ability while considering the effects of these characteristics have been proposed. These IRT models assume unidimensionality, meaning that a rubric measures one latent ability. In practice, however, this assumption might not be satisfied because a rubric’s evaluation items are often designed to measure multiple sub-abilities that constitute a targeted ability. To address this issue, this study proposes a multidimensional IRT model for rubric-based performance assessment. Specifically, the proposed model is formulated as a multidimensional extension of a generalized many-facet Rasch model. Moreover, a No-U-Turn variant of the Hamiltonian Markov chain Monte Carlo algorithm is adopted as a parameter estimation method for the proposed model. The proposed model is useful not only for improving the ability measurement accuracy, but also for detailed analysis of rubric quality and rubric construct validity. The study demonstrates the effectiveness of the proposed model through simulation experiments and application to real data.
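For readers unfamiliar with the starting point, a common adjacent-category form of the many-facet Rasch model that the proposal extends can be written as follows; the paper's generalized parameterization (e.g., discrimination and rater-consistency parameters) differs in its details:

```latex
\log \frac{P_{nijk}}{P_{nij(k-1)}}
  = \theta_n - \beta_i - \rho_j - \tau_k,
```

where $\theta_n$ is the ability of examinee $n$, $\beta_i$ the difficulty of evaluation item $i$, $\rho_j$ the severity of rater $j$, and $\tau_k$ the threshold of score category $k$. The multidimensional extension replaces the scalar $\theta_n$ with a vector of sub-abilities weighted per evaluation item.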


2021 ◽  
Vol 2021 ◽  
pp. 1-12
Author(s):  
Mingying Xu ◽  
Junping Du ◽  
Feifei Kou ◽  
Meiyu Liang ◽  
Xin Xu ◽  
...  

Internet of Things (IoT) search has great potential applications with the rapid development of IoT technology. Combining IoT technology and academic search to build an IoT-based academic search framework is an effective way to realize search over massive academic resources. Academic big data is characterized by a large number of types and spans many fields, so traditional web search technology is no longer suitable for this search environment. Thus, this paper designs an academic search framework based on IoT technology. To relieve the pressure on the cloud server of processing massive academic big data, an edge server is introduced to clean the data and remove redundancy, producing clean data for further analysis and processing by the cloud server. The edge computing network effectively compensates for the deficiency of cloud computing under distributed, highly concurrent access, reduces long-distance data transmission, and improves the quality of the network user experience. For academic search, this paper proposes a novel weakly supervised academic search model based on knowledge-enhanced feature representation. The proposed model relieves the high cost of acquiring manually labeled data by generating a large amount of pseudolabeled data, and it considers both word-level interactive matching and sentence-level semantic matching for more accurate matching in the academic search process. Experimental results on academic datasets demonstrate that the proposed model performs much better than existing methods.


2020 ◽  
Vol 17 (3) ◽  
pp. 849-865
Author(s):  
Zhongqin Bi ◽  
Shuming Dou ◽  
Zhe Liu ◽  
Yongbin Li

Neural network methods can satisfactorily learn user/product representations from textual reviews. A representation can be considered a multiaspect attention weight vector. However, several existing methods assume that the user representation remains unchanged even when the user interacts with products having diverse characteristics, which leads to inaccurate recommendations. To overcome this limitation, this paper proposes a novel model that captures the varying attention of a user for different products by using a multilayer attention framework. First, two individual hierarchical attention networks are used to encode the users and products, learning user preferences and product characteristics from review texts. Then, we design an attention network that reflects the adaptive change in the user's preferences for each aspect of the targeted product in terms of the rating and review. The results of experiments performed on three public datasets demonstrate that the proposed model notably outperforms state-of-the-art baselines, validating the effectiveness of the proposed approach.


Author(s):  
Ming Hao ◽  
Weijing Wang ◽  
Fang Zhou

Short text classification is an important foundation for natural language processing (NLP) tasks. Although text classification based on deep language models (DLMs) has made significant headway, in practical applications some texts remain ambiguous and hard to classify, especially in multi-class classification of short texts whose context length is limited. Mainstream methods improve the distinction of ambiguous text by adding context information. However, these methods rely only on the text representation and ignore that the categories overlap and are not completely independent of each other. In this paper, we establish a new general method to solve the problem of ambiguous text classification by introducing label embeddings to represent each category, which makes the difference between categories measurable. Further, a new compositional loss function is proposed to train the model, which pulls the text representation closer to the ground-truth label and pushes it farther away from the others. Finally, a constraint is obtained by calculating the similarity between the text representation and the label embeddings. Errors caused by ambiguous text can be corrected by adding this constraint to the output layer of the model. We apply the method to three classical models and conduct experiments on six public datasets. The experiments show that our method effectively improves the classification accuracy of ambiguous texts. In addition, by combining our method with BERT, we obtain state-of-the-art results on the CNT dataset.
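The "closer to the ground-truth label, farther from the others" objective can be sketched with a simple margin-based loss. The abstract does not give the exact form of the compositional loss, so the Euclidean-distance formulation and the margin below are illustrative assumptions:

```python
import numpy as np

def label_embedding_loss(text_repr, label_embs, true_idx, margin=0.5):
    """Pull the text representation toward its ground-truth label
    embedding and push it away from the other label embeddings.

    A simple margin (hinge) formulation over Euclidean distances;
    the paper's compositional loss may differ in its exact form.
    """
    dists = np.linalg.norm(label_embs - text_repr, axis=1)
    pos = dists[true_idx]                  # distance to the true label
    neg = np.delete(dists, true_idx)       # distances to the other labels
    # Penalize any wrong label that is closer than (pos + margin).
    return pos + np.maximum(0.0, pos + margin - neg).sum()

# Toy example: 2-D text vector, three label embeddings.
text = np.array([1.0, 0.0])
labels = np.array([[1.0, 0.1],   # ground-truth label
                   [-1.0, 0.0],
                   [0.0, 1.0]])
print(label_embedding_loss(text, labels, true_idx=0))  # 0.1
```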


Author(s):  
Yingchun Guo ◽  
Yanhong Feng ◽  
Gang Yan ◽  
Shuo Shi

Salient region detection is a challenging problem in computer vision that is useful in image segmentation, region-based image retrieval, and so on. In this paper we present a multi-resolution salient region detection method in the frequency domain that can highlight salient regions with well-defined object boundaries. The original image is sub-sampled into three multi-resolution layers, and for each layer the luminance and color salient features are extracted in the frequency domain. Then, the saliency values are calculated using the Euclidean distance in Lab color space, and the normal distribution function is used to refine the saliency map of each layer in order to remove noise and enhance the correlation among neighboring pixels. The final saliency map is obtained by normalizing and merging the multi-resolution saliency maps. Experimental evaluation shows promising results, with the proposed model outperforming the state-of-the-art frequency-tuned model.
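The final normalize-and-merge step can be sketched as below. The abstract does not specify the merging rule, so the min-max normalization and plain averaging here are illustrative choices, and the coarser layers are assumed to have been upsampled to a common size beforehand:

```python
import numpy as np

def merge_saliency_maps(maps):
    """Normalize each per-resolution saliency map to [0, 1] and
    average them into the final saliency map.

    `maps` is a list of equally sized 2-D arrays, one per
    resolution layer.
    """
    merged = np.zeros_like(maps[0], dtype=float)
    for m in maps:
        m = m.astype(float)
        rng = m.max() - m.min()
        # A constant (flat) map contributes no saliency.
        merged += (m - m.min()) / rng if rng > 0 else np.zeros_like(m)
    return merged / len(maps)

# Two toy 2x2 layer maps with different value ranges.
layer1 = np.array([[0.0, 2.0], [4.0, 8.0]])
layer2 = np.array([[1.0, 1.0], [3.0, 5.0]])
print(merge_saliency_maps([layer1, layer2]))
```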


Sensors ◽  
2020 ◽  
Vol 20 (16) ◽  
pp. 4635
Author(s):  
Angel de la Torre ◽  
Santiago Medina-Rodríguez ◽  
Jose C. Segura ◽  
Jorge F. Fernández-Sánchez

In this work, we propose a new model describing the relationship between the analyte concentration and the instrument response in photoluminescence sensors excited with modulated light sources. The concentration is modeled as a polynomial function of the analytical signal corrected with an exponent, and therefore the model is referred to as a polynomial-exponent (PE) model. The proposed approach is motivated by the limitations of the classical models for describing the frequency response of the luminescence sensors excited with a modulated light source, and can be considered as an extension of the Stern–Volmer model. We compare the calibration provided by the proposed PE-model with that provided by the classical Stern–Volmer, Lehrer, and Demas models. Compared with the classical models, for a similar complexity (i.e., with the same number of parameters to be fitted), the PE-model improves the trade-off between the accuracy and the complexity. The utility of the proposed model is supported with experiments involving two oxygen-sensitive photoluminescence sensors in instruments based on sinusoidally modulated light sources, using four different analytical signals (phase-shift, amplitude, and the corresponding lifetimes estimated from them).
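The PE model's structure (a polynomial applied to an exponent-corrected analytical signal) can be sketched as follows. The parameter names and the Stern–Volmer reduction shown in the example are illustrative; the paper defines the model and its fitting procedure over signals such as phase-shift, amplitude, or lifetime:

```python
import numpy as np

def pe_model(signal, coeffs, gamma):
    """Polynomial-exponent (PE) model: the analytical signal is
    first corrected with an exponent `gamma`, then a polynomial
    with coefficients `coeffs` (lowest order first) maps it to
    concentration.
    """
    s = np.asarray(signal, dtype=float) ** gamma
    # np.polyval expects highest-order coefficient first.
    return np.polyval(coeffs[::-1], s)

# With gamma = 1 and coeffs = [-1/K, 1/K] applied to the ratio I0/I,
# the model reduces to the classical Stern-Volmer relation
# [Q] = (I0/I - 1) / K.
K = 2.0
ratio = np.array([1.0, 2.0, 3.0])          # I0/I
conc = pe_model(ratio, coeffs=[-1.0 / K, 1.0 / K], gamma=1.0)
print(conc)  # [0.  0.5 1. ]
```

Higher-order coefficients and a fitted `gamma` give the extra degrees of freedom that the classical Stern–Volmer, Lehrer, and Demas models lack.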

