Application of Flame Image Recognition-Based Information Fusion Technology to Roller Kiln Temperature Detection

2012 ◽  
Vol 239-240 ◽  
pp. 769-774
Author(s):  
Yong Hong Zhu ◽  
Yan Fang Liu

The roller kiln is an advanced fast-sintering kiln. In the roller kiln production process, material sintering in the burning zone is the key working procedure that directly affects product quality; temperature detection in the burning zone is therefore the key link in the roller kiln control system. This paper proposes a fusion method that combines point temperature detection with flame image recognition imitating 'artificial look-fire'. A flame-image-processing-based temperature detection scheme is also given. In this scheme, an expert system fuses the temperature data detected by a thermocouple with the burning-zone flame image data captured by a CCD camera to obtain the actual temperature of the burning zone. The proposed method greatly improves the precision of temperature detection under burning-zone working conditions. The experimental results show that the proposed method is feasible and effective.
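The fusion idea in this abstract can be sketched minimally as a weighted combination of the two sensor channels. The weights, the disagreement threshold, and the fallback rule below are illustrative assumptions; the paper's expert system derives the final value through rule-based reasoning that is not detailed here.

```python
def fuse_temperature(thermo_temp, image_temp, w_thermo=0.6, w_image=0.4):
    """Fuse a thermocouple reading with a flame-image temperature estimate.

    Weights and the 50-degree disagreement threshold are hypothetical,
    chosen only to illustrate the expert-system fusion step.
    Returns (fused_temperature, readings_consistent).
    """
    if abs(thermo_temp - image_temp) > 50.0:
        # Large disagreement between sensors: fall back to the point
        # sensor and flag the reading as inconsistent for review.
        return thermo_temp, False
    fused = w_thermo * thermo_temp + w_image * image_temp
    return fused, True
```

With consistent readings the fused value sits between the two sources; a large mismatch triggers the rule-based fallback instead of blind averaging.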

2014 ◽  
Vol 602-605 ◽  
pp. 1761-1767
Author(s):  
Yong Hong Zhu ◽  
Peng Li

In the firing process of ceramic products, the sintering conditions vary from one firing phase to another. Because the flame texture changes markedly across firing phases, it can be used as an important parameter for burning-zone identification in a ceramic roller kiln. In this paper, both flame image recognition simulating artificial look-fire and multi-point temperature detection technology are used to detect the burning-zone working conditions of a ceramic roller kiln, greatly improving detection accuracy. The key data fusion algorithm for PTCR-based point temperature detection and the flame-image-recognition-based detection method for burning-zone working conditions are proposed. A scheme for the temperature measurement experiment system of the ceramic roller kiln burning zone is also given. The system fuses key process data with flame image characteristics to build a comprehensive database for judging burning-zone working conditions and temperatures. Finally, a testing experiment was carried out. The experimental results show that the proposed method is feasible and effective.
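The abstract uses flame texture as an identification parameter but does not specify the descriptor. A common choice for this kind of task is a gray-level co-occurrence contrast measure; the sketch below computes a simplified horizontal-pair contrast and is an assumption, not the paper's actual feature.

```python
def texture_contrast(image):
    """Mean squared intensity difference between horizontally adjacent
    pixels, a simplified co-occurrence-style texture contrast.

    Flames in different firing phases would yield different contrast
    values; the paper's actual texture features are not specified.
    """
    contrast = 0.0
    pairs = 0
    for row in image:
        for a, b in zip(row, row[1:]):
            contrast += (a - b) ** 2
            pairs += 1
    return contrast / pairs if pairs else 0.0
```

A smooth, laminar flame region scores near zero; a turbulent, high-texture region scores high, making the value usable as one input to phase identification.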


2013 ◽  
Vol 380-384 ◽  
pp. 2673-2676
Author(s):  
Ze Yu Xiong

During DDoS attacks, normal traffic makes up a relatively low proportion of the flow reaching the boundary network. In this paper, we establish a DDoS attack detection method organized by defense stage and defensive position, and we design and implement collaborative detection of DDoS attacks. Simulation results show that our approach achieves better timeliness, accuracy, and scalability than single-point detection and route-based distributed detection schemes.
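The collaborative detection described above can be sketched as a quorum over per-position measurements of the normal-traffic ratio. The thresholds and voting rule below are illustrative assumptions, not the paper's algorithm.

```python
def collaborative_ddos_alert(normal_ratios, local_threshold=0.3, quorum=0.5):
    """Raise an alert when enough defensive positions observe a low
    normal-traffic ratio.

    normal_ratios: fraction of traffic judged normal at each position.
    local_threshold and quorum are hypothetical tuning parameters.
    """
    votes = [r < local_threshold for r in normal_ratios]
    return sum(votes) / len(normal_ratios) >= quorum
```

A single anomalous vantage point does not trigger the alert; agreement across positions does, which is the advantage over single-point detection the abstract claims.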


2018 ◽  
Vol 2018 ◽  
pp. 1-11 ◽  
Author(s):  
Michael D. Vasilakakis ◽  
Dimitris K. Iakovidis ◽  
Evaggelos Spyrou ◽  
Anastasios Koulaouzidis

Wireless Capsule Endoscopy (WCE) is a noninvasive diagnostic technique enabling the inspection of the whole gastrointestinal (GI) tract by capturing and wirelessly transmitting thousands of color images. Proprietary software "stitches" the images into videos for examination by accredited readers. However, the resulting videos are long, which makes the reading task harder and more prone to human error. Automating the WCE reading process could contribute both to reducing the examination time and to improving its diagnostic accuracy. In this paper, we present a novel feature extraction methodology for automated WCE image analysis. It aims at discriminating various kinds of abnormalities from the normal contents of WCE images in a machine-learning-based classification framework. The extraction of the proposed features involves an unsupervised color-based saliency detection scheme which, unlike current approaches, combines both point- and region-level saliency information with the estimation of local and global image color descriptors. The salient point detection process involves the estimation of DIstaNces On Selective Aggregation of chRomatic image Components (DINOSARC). The descriptors are extracted from superpixels by coevaluating both point- and region-level information. The main conclusions of the experiments performed on a publicly available dataset of WCE images are (a) the proposed salient point detection scheme results in significantly fewer and more relevant salient points; (b) the proposed descriptors are more discriminative than relevant state-of-the-art descriptors, promising a wider adoption of the proposed approach for computer-aided diagnosis in WCE.
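The chromatic-distance idea behind the salient point detection can be illustrated by ranking pixels by the distance of their color from the image mean. This is a strongly simplified stand-in: the paper's DINOSARC scheme uses selective aggregation of chromatic components and point-selection rules that are not reproduced here.

```python
def chromatic_saliency(pixels, k=2):
    """Return indices of the k pixels whose RGB color lies farthest
    from the image mean color.

    A toy analogue of distance-based chromatic saliency; the actual
    DINOSARC aggregation and selection differ.
    """
    n = len(pixels)
    mean = [sum(p[c] for p in pixels) / n for c in range(3)]

    def dist(i):
        return sum((pixels[i][c] - mean[c]) ** 2 for c in range(3)) ** 0.5

    return sorted(range(n), key=dist, reverse=True)[:k]
```

On a mostly uniform image with one strongly colored pixel, that pixel ranks first, mirroring how fewer but more relevant points can be kept.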


2016 ◽  
Vol 2016 ◽  
pp. 1-9 ◽  
Author(s):  
Rik Das ◽  
Sudeep Thepade ◽  
Subhajit Bhattacharya ◽  
Saurav Ghosh

Consumer behavior has been observed to be largely influenced by image data as smartphones and the World Wide Web have become increasingly familiar. The traditional technique of browsing through product varieties on the Internet with text keywords has gradually been replaced by easily accessible image data. The importance of image data for business applications has grown steadily with the advent of diverse image-capturing devices and social media. This paper describes a feature extraction methodology based on image binarization for enhancing the identification and retrieval of information using content-based image recognition. The proposed algorithm was tested on two public datasets, namely the Wang dataset and the Oliva and Torralba (OT-Scene) dataset, comprising 3688 images in total. It outperformed state-of-the-art techniques on the performance measures and showed statistical significance.
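A minimal sketch of binarization-based feature extraction: threshold the image at its mean intensity and describe each partition by its average value. The paper's multi-level binarization scheme is more elaborate; this two-value feature is only an illustrative assumption.

```python
def binarization_features(gray_image):
    """Mean-threshold binarization feature for a grayscale image.

    Splits pixels at the global mean and returns the mean intensity
    of the high and low partitions, a simplified stand-in for the
    paper's binarization-based descriptors.
    """
    flat = [v for row in gray_image for v in row]
    mean = sum(flat) / len(flat)
    hi = [v for v in flat if v >= mean]
    lo = [v for v in flat if v < mean]
    return (sum(hi) / len(hi) if hi else 0.0,
            sum(lo) / len(lo) if lo else 0.0)
```

The two values form a compact signature that can be compared between a query image and database images for content-based retrieval.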


Author(s):  
Lambodar Jena ◽  
Ramakrushna Swain ◽  
N.K. Kamila

Image mining is more than an extension of data mining to the image domain. Web image mining is a technique commonly used to extract knowledge directly from images on the WWW. Since the main targets of conventional Web mining are numerical and textual data, Web mining for image data is in demand. There are huge amounts of image data as well as text data on the Web. However, mining image data from the Web has received less attention than mining text data, since treating the semantics of images is much more difficult. This paper proposes a novel image recognition and image classification technique that uses a large number of images automatically gathered from the Web as learning images. For classification, the system uses image-feature-based search as exploited in content-based image retrieval (CBIR), which does not restrict the target images the way conventional image recognition methods do, together with a support vector machine (SVM), one of the most efficient and widely used statistical methods for generic image classification and well suited to the learning tasks. The experiments show that the proposed system outperforms some existing search systems.
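A typical CBIR feature that could feed such an SVM is a per-channel color histogram. The bin count and normalization below are assumptions for illustration; the paper does not specify its exact image features here.

```python
def color_histogram(pixels, bins=4):
    """Normalized per-channel RGB color histogram.

    pixels: iterable of (r, g, b) tuples with values in 0..255.
    Returns a flat feature vector of length 3 * bins, the kind of
    descriptor commonly passed to an SVM classifier in CBIR systems.
    """
    hist = [[0] * bins for _ in range(3)]
    for p in pixels:
        for c in range(3):
            b = min(p[c] * bins // 256, bins - 1)
            hist[c][b] += 1
    n = len(pixels)
    return [count / n for channel in hist for count in channel]
```

Because the vector is normalized by pixel count, images of different sizes map into the same feature space, which is what lets Web-gathered images of arbitrary dimensions serve as SVM training data.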


2021 ◽  
Vol 5 (6) ◽  
pp. 1153-1160
Author(s):  
Mayanda Mega Santoni ◽  
Nurul Chamidah ◽  
Desta Sandya Prasvita ◽  
Helena Nurramdhani Irmanda ◽  
Ria Astriratma ◽  
...  

One of the efforts of the Indonesian people to defend the country is to preserve and maintain the regional languages. The current era of modernity makes the image of regional languages old-fashioned, so most of them are no longer spoken. If this is ignored, a cultural identity crisis will arise that leaves regional languages vulnerable to extinction. Technological developments can be used as a way to preserve regional languages. Digital-image-based artificial intelligence technology using machine learning methods such as machine translation can be used to address this problem. This research uses a deep learning method, namely Convolutional Neural Networks (CNN). The data consisted of 1300 alphabetic images, 5000 text images, and 200 vocabulary items of the Minangkabau regional language. The alphabetic image data are used to build the CNN classification model, which is then used for text image recognition, the results of which are translated into the regional language. The accuracy of the CNN model is 98.97%, while the accuracy of text image recognition (OCR) is 50.72%. This low accuracy is due to segmentation failures on the letters i and j. However, the translation accuracy rises to 75.78% after applying the Levenshtein distance algorithm, which corrects text classification errors. This research has therefore succeeded in implementing the Convolutional Neural Networks (CNN) method to identify text in text images and the Levenshtein distance method to translate Indonesian text into regional language text.
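The Levenshtein-distance correction step can be sketched directly: compute the edit distance from the OCR output to each vocabulary entry and snap to the nearest one. The sample words below are placeholders, not items from the paper's 200-word Minangkabau vocabulary.

```python
def levenshtein(a, b):
    """Edit distance between strings a and b, via the classic
    single-row dynamic program."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                 # deletion
                           cur[j - 1] + 1,              # insertion
                           prev[j - 1] + (ca != cb)))   # substitution
        prev = cur
    return prev[-1]

def correct_ocr(word, vocabulary):
    """Replace an OCR output with the closest vocabulary entry,
    as in the paper's post-OCR correction step."""
    return min(vocabulary, key=lambda v: levenshtein(word, v))
```

An OCR misread such as "jalam" snaps to the dictionary word "jalan" at distance 1, which is how the correction lifts translation accuracy despite imperfect character segmentation.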
