An Automated Recognition of Work Activity in Industrial Manufacturing Using Convolutional Neural Networks

Electronics ◽  
2021 ◽  
Vol 10 (23) ◽  
pp. 2946
Author(s):  
Justyna Patalas-Maliszewska ◽  
Daniel Halikowski ◽  
Robertas Damaševičius

The automated assessment and analysis of employee activity in a manufacturing enterprise, operating in accordance with the concept of Industry 4.0, is essential for a quick and precise diagnosis of work quality, especially in the process of training a new employee. In the case of industrial solutions, many approaches involving the recognition and detection of work activity are based on Convolutional Neural Networks (CNNs). Despite the wide use of CNNs, it is difficult to find solutions supporting the automated checking of work activities performed by trained employees. We propose a novel framework for the automatic generation of workplace instructions and real-time recognition of worker activities. The proposed method integrates a CNN, a CNN combined with a Support Vector Machine (SVM), and a region-based CNN (YOLOv3 Tiny) for recognizing and checking the completed work tasks. First, video recordings of the work process are analyzed and reference video frames corresponding to work activity stages are determined. Next, work-related features and objects are determined using the CNN with SVM (achieving 94% accuracy) and the YOLOv3 Tiny network, based on the characteristics of the reference frames. Additionally, a matching matrix between the reference frames and the test frames was built, using the mean absolute error (MAE) as a measure of the error between paired observations. Finally, the practical usefulness of the proposed approach was demonstrated by applying the method to support the automatic training of new employees and to check the correctness of their work on solid fuel boiler equipment in a manufacturing company. The developed information system can be integrated with other Industry 4.0 technologies introduced within an enterprise.
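
The core matching step in the abstract above is the MAE matrix between reference and test frames. The short sketch below illustrates how such a matrix and the resulting stage assignment might be computed, assuming frame descriptors have already been extracted by the CNN; the function and variable names are illustrative, not the authors'.

```python
# Hedged sketch: MAE matching matrix between reference and test frame descriptors.
import numpy as np

def mae_matching_matrix(ref_features, test_features):
    """Return a (n_test, n_ref) matrix of mean absolute errors."""
    ref = np.asarray(ref_features, dtype=float)    # shape (n_ref, d)
    test = np.asarray(test_features, dtype=float)  # shape (n_test, d)
    # Broadcast pairwise absolute differences and average over the feature axis.
    return np.abs(test[:, None, :] - ref[None, :, :]).mean(axis=2)

def assign_stages(ref_features, test_features):
    """Match each test frame to the reference work-activity stage with the lowest MAE."""
    matrix = mae_matching_matrix(ref_features, test_features)
    return matrix.argmin(axis=1), matrix

# Example with random stand-in descriptors (5 reference stages, 20 test frames, d = 128).
rng = np.random.default_rng(0)
stages, mae = assign_stages(rng.normal(size=(5, 128)), rng.normal(size=(20, 128)))
```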

Symmetry ◽  
2019 ◽  
Vol 11 (9) ◽  
pp. 1151 ◽  
Author(s):  
Justyna Patalas-Maliszewska ◽  
Daniel Halikowski

(1) Background: Improving the management and effectiveness of employees’ learning processes within manufacturing companies has attracted a high level of attention in recent years, especially within the context of Industry 4.0. Convolutional Neural Networks with a Support Vector Machine (CNN-SVM) can be applied in this business field in order to generate workplace procedures. To overcome the problem of usefully acquiring and sharing specialist knowledge, we use CNN-SVM to examine features from video material concerning each work activity for further comparison with the instruction picture’s features. (2) Methods: This paper uses literature studies and a selected workplace procedure, namely repairing a solid fuel boiler, as the benchmark dataset, which contains 20 s of training video and a test video, in order to provide a reference model of features for a workplace procedure. In this model, the CNN-SVM method effectively determines features for the further comparison and detection of objects. (3) Results: The innovative model for generating a workplace procedure using the CNN-SVM architecture, once built, can be used to support the learning process of employees in manufacturing companies. The novelty of the proposed methodology is its architecture, which combines the acquisition of specialist knowledge with its formalisation and recording in a form useful for new employees in the company. Moreover, three new algorithms were created: an algorithm to match features, an algorithm to detect each activity in the workplace procedure, and an algorithm to generate an activity scenario. (4) Conclusions: The efficiency of the proposed methodology can be demonstrated on a dataset comprising a collection of workplace procedures, such as the repair of the solid fuel boiler. We also highlighted how difficult it is for managers of manufacturing companies to support learning processes, owing to a lack of resources for teaching new employees.
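
As a minimal illustration of the CNN-SVM idea described above, the sketch below extracts frame features with a pretrained backbone and trains an SVM on them; the choice of VGG16, the input size, and the toy labels are assumptions for illustration, not the authors' exact configuration.

```python
# Hedged sketch of a CNN-SVM pipeline: pretrained CNN features + SVM activity classifier.
import numpy as np
from tensorflow.keras.applications import VGG16
from tensorflow.keras.applications.vgg16 import preprocess_input
from sklearn.svm import SVC

backbone = VGG16(weights="imagenet", include_top=False, pooling="avg",
                 input_shape=(224, 224, 3))

def extract_features(frames):
    """frames: array of shape (n, 224, 224, 3) with pixel values 0-255."""
    return backbone.predict(preprocess_input(np.asarray(frames, dtype="float32")), verbose=0)

# Toy stand-in data: 8 video frames, 2 work-activity classes.
frames = np.random.randint(0, 256, size=(8, 224, 224, 3))
labels = np.array([0, 0, 1, 1, 0, 1, 0, 1])

clf = SVC(kernel="linear").fit(extract_features(frames), labels)
```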


2021 ◽  
Vol 5 (2) ◽  
Author(s):  
Alexander Knyshov ◽  
Samantha Hoang ◽  
Christiane Weirauch

Abstract Automated insect identification systems have been explored for more than two decades but have only recently started to take advantage of powerful and versatile convolutional neural networks (CNNs). While typical CNN applications still require large training image datasets with hundreds of images per taxon, pretrained CNNs recently have been shown to be highly accurate, while being trained on much smaller datasets. We here evaluate the performance of CNN-based machine learning approaches in identifying three curated species-level dorsal habitus datasets for Miridae, the plant bugs. Miridae are of economic importance, but species-level identifications are challenging and typically rely on information other than dorsal habitus (e.g., host plants, locality, genitalic structures). Each dataset contained 2–6 species and 126–246 images in total, with a mean of only 32 images per species for the most difficult dataset. We find that closely related species of plant bugs can be identified with 80–90% accuracy based on their dorsal habitus alone. The pretrained CNN performed 10–20% better than a taxon expert who had access to the same dorsal habitus images. We find that feature extraction protocols (selection and combination of blocks of CNN layers) impact identification accuracy much more than the classifying mechanism (support vector machine and deep neural network classifiers). While our network has much lower accuracy on photographs of live insects (62%), overall results confirm that a pretrained CNN can be straightforwardly adapted to collection-based images for a new taxonomic group and successfully extract relevant features to classify insect species.
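
A minimal sketch of the "feature extraction protocol" idea, assuming a pretrained VGG16 backbone and a single selected block (block4) pooled and fed to an SVM; the authors' exact backbone, block combinations, and classifiers may differ.

```python
# Hedged sketch: select one block of a pretrained CNN, pool its activations, classify with an SVM.
import numpy as np
import tensorflow as tf
from tensorflow.keras.applications import VGG16
from tensorflow.keras.applications.vgg16 import preprocess_input
from sklearn.svm import SVC

base = VGG16(weights="imagenet", include_top=False, input_shape=(224, 224, 3))
block_output = base.get_layer("block4_pool").output            # selected block
pooled = tf.keras.layers.GlobalAveragePooling2D()(block_output)
extractor = tf.keras.Model(inputs=base.input, outputs=pooled)

def block_features(images):
    return extractor.predict(preprocess_input(np.asarray(images, dtype="float32")), verbose=0)

# Toy dorsal habitus images for two species; real datasets had roughly 30-80 images per species.
images = np.random.randint(0, 256, size=(10, 224, 224, 3))
species = np.array([0] * 5 + [1] * 5)
svm = SVC(kernel="linear").fit(block_features(images), species)
```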


2020 ◽  
Vol 10 (14) ◽  
pp. 4916
Author(s):  
Syna Sreng ◽  
Noppadol Maneerat ◽  
Kazuhiko Hamamoto ◽  
Khin Yadanar Win

Glaucoma is a major global cause of blindness. Because the symptoms of glaucoma appear only when the disease reaches an advanced stage, proper screening for glaucoma in the early stages is challenging. Therefore, regular glaucoma screening is essential and recommended. However, eye screening is currently subjective, time-consuming, and labor-intensive, and there are too few eye specialists available. We present an automatic two-stage glaucoma screening system to reduce the workload of ophthalmologists. The system first segments the optic disc region using a DeepLabv3+ architecture in which the encoder module is substituted with multiple deep convolutional neural networks. For the classification stage, we used pretrained deep convolutional neural networks with three proposals: (1) transfer learning, (2) learning the feature descriptors using a support vector machine, and (3) building an ensemble of the methods in (1) and (2). We evaluated our methods on five available datasets containing 2787 retinal images and found that the best option for optic disc segmentation is a combination of DeepLabv3+ and MobileNet. For glaucoma classification, the ensemble of methods performed better than the conventional methods on the RIM-ONE, ORIGA, DRISHTI-GS1, and ACRIMA datasets, with accuracies of 97.37%, 90.00%, 86.84%, and 99.53% and Area Under the Curve (AUC) values of 100%, 92.06%, 91.67%, and 99.98%, respectively, and performed comparably with CUHKMED, the top team in the REFUGE challenge, on the REFUGE dataset, with an accuracy of 95.59% and an AUC of 95.10%.
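
The sketch below illustrates proposal (1), transfer learning, for the classification stage: a pretrained MobileNet backbone with a new binary head. The input size, frozen backbone, and training settings are assumptions, not the authors' reported configuration.

```python
# Hedged sketch of transfer learning for glaucoma vs. normal classification of optic disc crops.
import tensorflow as tf
from tensorflow.keras.applications import MobileNet

backbone = MobileNet(weights="imagenet", include_top=False, pooling="avg",
                     input_shape=(224, 224, 3))
backbone.trainable = False  # freeze pretrained weights for the first training phase

model = tf.keras.Sequential([
    backbone,
    tf.keras.layers.Dropout(0.3),
    tf.keras.layers.Dense(1, activation="sigmoid"),  # glaucoma vs. normal
])
model.compile(optimizer="adam", loss="binary_crossentropy",
              metrics=["accuracy", tf.keras.metrics.AUC()])
# model.fit(train_disc_crops, train_labels, validation_data=(val_crops, val_labels), epochs=20)
```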


2020 ◽  
Vol 143 ◽  
pp. 02015
Author(s):  
Li Zherui ◽  
Cai Huiwen

Sea ice classification is one of the important tasks of sea ice monitoring. Accurate extraction of sea ice types is of great significance for sea ice condition assessment, smooth navigation, and safe marine operations. Sentinel-2 is an optical satellite mission launched by the European Space Agency; its high spatial resolution and wide-coverage imaging provide powerful support for sea ice monitoring. However, traditional supervised classification methods struggle to achieve fine-grained results for features with small samples. In order to solve this problem, this paper proposes a sea ice extraction method based on deep learning, applied to Liaodong Bay in the Bohai Sea, China. A convolutional neural network was used to extract and classify features of the Sentinel-2 imagery. The results showed that the overall accuracy of the algorithm was 85.79%, a significant improvement compared with traditional algorithms such as the minimum distance method, the maximum likelihood method, the Mahalanobis distance method, and the support vector machine method. The method proposed in this paper, which combines convolutional neural networks and high-resolution multispectral data, provides a new approach to remote sensing monitoring of sea ice.
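
A minimal patch-based CNN for multispectral classification in the spirit of the approach above; the patch size, band count, class count, and layer sizes are assumptions, since the exact architecture is not given in the abstract.

```python
# Hedged sketch: small CNN classifying multispectral Sentinel-2 patches into sea ice types.
import tensorflow as tf

n_bands, n_classes = 10, 4  # assumed: 10 Sentinel-2 bands, 4 sea ice / water classes

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(32, 32, n_bands)),
    tf.keras.layers.Conv2D(32, 3, activation="relu", padding="same"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Conv2D(64, 3, activation="relu", padding="same"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(128, activation="relu"),
    tf.keras.layers.Dense(n_classes, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy", metrics=["accuracy"])
# model.fit(train_patches, train_labels, epochs=30, validation_split=0.2)
```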


2019 ◽  
Vol 8 (4) ◽  
pp. 160 ◽  
Author(s):  
Bingxin Liu ◽  
Ying Li ◽  
Guannan Li ◽  
Anling Liu

Spectral characteristics play an important role in the classification of oil films, but the presence of too many bands can lead to information redundancy and reduced classification accuracy. In this study, a classification model that combines spectral-indices-based band selection (SIs) and one-dimensional convolutional neural networks was proposed to realize automatic oil film classification using hyperspectral remote sensing images. Additionally, for comparison, the minimum Redundancy Maximum Relevance (mRMR) method was tested for reducing the number of bands. The support vector machine (SVM), random forest (RF), and Hu’s convolutional neural network (CNN) were trained and tested. The results show that the accuracy of the one-dimensional convolutional neural network (1D CNN) models surpassed that of the other machine learning algorithms such as SVM and RF. The SIs + 1D CNN model produced a more accurate oil film distribution map in less time than the other models.
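
A minimal sketch of an SIs + 1D CNN pipeline as described above: per-pixel spectra restricted to a selected band subset are fed to a small 1D convolutional classifier. The band indices, layer sizes, and class count are illustrative assumptions.

```python
# Hedged sketch: band selection followed by a 1D CNN over the retained spectral channels.
import numpy as np
import tensorflow as tf

selected_bands = [12, 27, 45, 60, 88, 103]  # assumed indices from spectral-index-based selection
n_classes = 5                               # assumed oil film classes

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(len(selected_bands), 1)),
    tf.keras.layers.Conv1D(32, 3, activation="relu", padding="same"),
    tf.keras.layers.MaxPooling1D(2),
    tf.keras.layers.Conv1D(64, 3, activation="relu", padding="same"),
    tf.keras.layers.GlobalAveragePooling1D(),
    tf.keras.layers.Dense(n_classes, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy", metrics=["accuracy"])

def prepare_pixels(cube):
    """cube: array of shape (n_pixels, n_bands); keep only the selected bands, add a channel axis."""
    return np.asarray(cube)[:, selected_bands, None]
```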


2020 ◽  
Vol 12 (3) ◽  
pp. 408
Author(s):  
Małgorzata Krówczyńska ◽  
Edwin Raczko ◽  
Natalia Staniszewska ◽  
Ewa Wilk

Due to the pathogenic nature of asbestos, a statutory ban on asbestos-containing products has been in place in Poland since 1997. In order to protect human health and the environment, it is crucial to estimate the quantity of asbestos–cement products in use. It has been estimated that about 90% of them are roof coverings. Different methods are used to estimate the amount of asbestos–cement products, such as the use of indicators, field inventories, remote sensing data, and multi- and hyperspectral images; the latter are used for relatively small areas. Other methods are sought for the reliable estimation of the quantity of asbestos-containing products, as well as their spatial distribution. The objective of this paper is to present the use of convolutional neural networks for the identification of asbestos–cement roofing on aerial photographs in natural color (RGB) and color infrared (CIR) compositions. The study was conducted for the Chęciny commune. Aerial photographs with a spatial resolution of 25 cm, in RGB and CIR compositions, were used, and field studies were conducted to verify the data and to develop a database for training the Convolutional Neural Networks (CNNs). Network training was carried out using the TensorFlow and R-Keras libraries in the R programming environment. The classification was carried out using a convolutional neural network consisting of two convolutional blocks, a spatial dropout layer, and two blocks of fully connected perceptrons. Asbestos–cement roofing products were classified with a producer’s accuracy of 89% and an overall accuracy of 87% or 89%, depending on the image composition used. Previous attempts at identifying asbestos–cement roofing have focused primarily on the use of hyperspectral data and multispectral imagery, usually employing the following classification algorithms: Spectral Angle Mapper, Support Vector Machine, object classification, Spectral Feature Fitting, and decision trees. These studies showed that low spectral resolution only allowed for a rough classification of roofing materials. The use of one coherent method would allow data comparison between regions. Determining the amount of asbestos–cement products in use is important for assessing environmental exposure to asbestos fibres, determining patterns of disease, and ultimately modelling potential solutions to counteract threats.
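
The abstract describes the network explicitly (two convolutional blocks, a spatial dropout layer, and two fully connected blocks); the sketch below mirrors that structure in Python/Keras rather than the R-Keras used in the study, with filter counts, tile size, and dropout rate assumed for illustration.

```python
# Hedged sketch mirroring the described architecture: two conv blocks, spatial dropout, two dense blocks.
import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(128, 128, 3)),                 # RGB or CIR tile (size assumed)
    # convolutional block 1
    tf.keras.layers.Conv2D(32, 3, activation="relu", padding="same"),
    tf.keras.layers.MaxPooling2D(),
    # convolutional block 2
    tf.keras.layers.Conv2D(64, 3, activation="relu", padding="same"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.SpatialDropout2D(0.3),
    tf.keras.layers.Flatten(),
    # two fully connected blocks
    tf.keras.layers.Dense(128, activation="relu"),
    tf.keras.layers.Dense(1, activation="sigmoid"),              # asbestos-cement roof vs. other
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
```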


2016 ◽  
Vol 21 (9) ◽  
pp. 998-1003 ◽  
Author(s):  
Oliver Dürr ◽  
Beate Sick

Deep learning methods are currently outperforming traditional state-of-the-art computer vision algorithms in diverse applications and recently even surpassed human performance in object recognition. Here we demonstrate the potential of deep learning methods for high-content screening-based phenotype classification. We trained a deep learning classifier in the form of a convolutional neural network with approximately 40,000 publicly available single-cell images from samples treated with compounds from four classes known to lead to different phenotypes. The input data consisted of multichannel images. The construction of appropriate feature definitions was part of the training and was carried out by the convolutional network, without the need for expert knowledge or handcrafted features. We compare our results against the recent state-of-the-art pipeline in which predefined features are extracted from each cell using specialized software and then fed into various machine learning algorithms (support vector machine, Fisher linear discriminant, random forest) for classification. The performance of all classification approaches is evaluated on an untouched test image set with known phenotype classes. Compared to the best reference machine learning algorithm, the misclassification rate is reduced from 8.9% to 6.6%.
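
For context, the reference pipeline being compared against (predefined per-cell features fed to classical classifiers) can be sketched as below; the feature values are random stand-ins, since the study extracted them with specialized software.

```python
# Hedged sketch of the reference pipeline: handcrafted per-cell features + classical classifiers.
import numpy as np
from sklearn.svm import SVC
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

X = np.random.rand(1000, 50)            # stand-in: 1000 cells x 50 handcrafted features
y = np.random.randint(0, 4, size=1000)  # four phenotype classes

for name, clf in [("SVM", SVC()),
                  ("Fisher LDA", LinearDiscriminantAnalysis()),
                  ("Random forest", RandomForestClassifier(n_estimators=200))]:
    acc = cross_val_score(clf, X, y, cv=5).mean()
    print(f"{name}: {acc:.3f} cross-validated accuracy")
```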


Author(s):  
Pham Van Hai ◽  
Samson Eloanyi Amaechi

Conventional methods used in brain tumor detection, diagnosis, and classification, such as magnetic resonance imaging and computed tomography scanning, leave gaps in their results. This paper presents a model that combines convolutional neural networks with fuzzy rules for the detection and classification of medical images, such as those of healthy brain cells and tumorous brain cells. The model contributes to the fully automatic classification and detection of conditions in medical imaging, such as brain tumors, heart diseases, breast cancers, HIV, and flu. The experimental results of the proposed model show an overall accuracy of 97.6%, which indicates that the proposed method achieves better performance than other current methods in the literature, such as classification of tumors in human brain MRI using wavelet and support vector machine (94.7%) and deep convolutional neural networks with transfer learning for automated brain image classification (95.0%), when used for detection, diagnosis, and classification in medical imaging decision support.
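
A minimal sketch of one way a CNN output could be coupled with fuzzy rules, as the abstract describes: the network's tumor probability is passed through hand-written triangular membership functions to produce a decision label. The breakpoints and rule set are illustrative assumptions, not the authors' fuzzy system.

```python
# Hedged sketch: mapping a CNN tumor probability to a label via triangular fuzzy memberships.
import numpy as np

def triangular(x, a, b, c):
    """Triangular membership function with peak at b and feet at a and c."""
    return np.maximum(np.minimum((x - a) / (b - a + 1e-9), (c - x) / (c - b + 1e-9)), 0.0)

def fuzzy_decision(cnn_probability):
    memberships = {
        "healthy":   triangular(cnn_probability, -0.1, 0.0, 0.4),
        "uncertain": triangular(cnn_probability, 0.3, 0.5, 0.7),
        "tumor":     triangular(cnn_probability, 0.6, 1.0, 1.1),
    }
    return max(memberships, key=memberships.get), memberships

label, degrees = fuzzy_decision(0.82)   # e.g. CNN sigmoid output for one MRI slice
```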

