Post-Processing of Schlieren Images

2021 ◽  
Vol 13 (3) ◽  
pp. 113-122
Author(s):  
Emilia PRISACARIU ◽  
Tudor PRISECARU ◽  
Valeriu VILAG ◽  
Cosmin SUCIU ◽  
Cristian DOBROMIRESCU ◽  
...  

In general, the Schlieren visualization method is used to qualitatively describe phenomena. However, recent studies have attempted to convert the classical Schlieren system into a quantitative method for describing certain flow parameters. This paper aims to analyse pictures from both a qualitative and a quantitative point of view. The post-processing of images for both cases is described on the basis of different applications. Real examples are used, and both the methodologies and the logical schemes are explained. The article focuses on image processing, not on the studied phenomena.
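As a rough illustration of the kind of post-processing such work involves, the following sketch subtracts a no-flow reference image from a flow image and rescales the result for display; the file names are placeholders, and this is not the calibration procedure described in the paper.

# Minimal sketch: reference subtraction for Schlieren images (illustrative only).
import cv2
import numpy as np

reference = cv2.imread("schlieren_noflow.png", cv2.IMREAD_GRAYSCALE).astype(np.float32)
flow      = cv2.imread("schlieren_flow.png", cv2.IMREAD_GRAYSCALE).astype(np.float32)

diff = flow - reference                      # signed intensity change per pixel
# Normalize to [0, 255] for a qualitative display of the disturbance.
display = cv2.normalize(diff, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)
cv2.imwrite("schlieren_difference.png", display)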

Author(s):  
Tomoya Masuyama ◽  
Takuya Ikeda ◽  
Satoshi Yoshiizumi ◽  
Katsumi Inoue

The detection of damage in the early stage of fatigue is important for a reliable evaluation of gear life and strength. From this point of view, the variation of the strain distribution in a tooth under cyclic load contains useful information, because the fatigue crack initiates as a result of the accumulation of plastic strain. Meanwhile, digital imaging equipment is widely used in everyday life and its performance keeps improving. We took digital pictures of a cyclically loaded tooth with a digital camera and compared them with a picture of the unloaded tooth to find the displacement. The strain distribution of the tooth is calculated from those pictures by the correlation method. The initiation of a micro crack is observed by this method. It is also confirmed by the detection of acoustic emission waves with higher energy. The variation of the stress-strain diagram during the fatigue process is presented, illustrating the increase of strain in the final stage of fatigue.
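A minimal sketch of the subset-based correlation idea (comparing a loaded and an unloaded picture to recover a displacement field) is given below, using OpenCV phase correlation; the file names, subset size, and grid spacing are illustrative, and this is not the authors' implementation.

# Minimal sketch of subset-based digital image correlation (DIC).
import cv2
import numpy as np

ref = cv2.imread("tooth_unloaded.png", cv2.IMREAD_GRAYSCALE).astype(np.float32)
cur = cv2.imread("tooth_loaded.png", cv2.IMREAD_GRAYSCALE).astype(np.float32)

subset, step = 64, 32                        # subset window and grid spacing (pixels)
h, w = ref.shape
displacements = []

for y in range(0, h - subset, step):
    for x in range(0, w - subset, step):
        a = ref[y:y + subset, x:x + subset]
        b = cur[y:y + subset, x:x + subset]
        # Phase correlation gives the sub-pixel shift of each subset.
        (dx, dy), _ = cv2.phaseCorrelate(a, b)
        displacements.append((x, y, dx, dy))

# Strains can then be estimated from the spatial gradients of the gridded
# (dx, dy) displacement field, e.g. with np.gradient.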


2019 ◽  
Author(s):  
Darian Jancowicz-Pitel

The presented paper aimed to explore the translation process: a translator or interpreter needs equipment or tools so that the objectives of a translation can be achieved. If an interpreter needs a pencil, paper, headphones, and a mic, then a translator needs even more tools. The tools required include both conventional and modern tools. Meanwhile, the approach needed in research on translation is qualitative or quantitative, depending on the research objectives. If you want to find a correlation between a translator's translation experience and the quality or type of translation errors, a quantitative method is needed. This method is also very appropriate for research in the scope of teaching translation, for example relating students' level of intelligence to the quality of their translations or to their translation errors. If, instead, the research concerns translation errors, procedures, and the like, it is more appropriate to use qualitative methods. Seeing this fact, these part-time translators can switch to the third type of translator, namely freelance translators. This is because there is an awareness that they can make a living from translation. These translators set up their own translation businesses involving multiple languages.


2014 ◽  
Vol 12 ◽  
pp. 41-47 ◽  
Author(s):  
Petr Jašek ◽  
Martin Štroner

Regarding terrestrial laser scanning accuracy, one of the main problems is the noise in the measured distance, which is necessary for determining the spatial coordinates. In this paper, the technique of using the wavelet transformation to reduce the noise in laser scanning data is described. This filtering is performed in post-processing, so no changes to the measuring procedure in the field are required. To apply image processing, the measurements must first be arranged in a regular matrix, which then forms the range image. The paper presents real and simulated efficiency tests of the wavelet transformation, a final summary, and the advantages and disadvantages of this method.
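For illustration, a minimal wavelet-thresholding sketch of the kind described here is shown below, using the PyWavelets package; the wavelet, decomposition level, and threshold rule are assumptions rather than the paper's actual choices.

# Minimal sketch of wavelet-based denoising of a range image (illustrative only).
import numpy as np
import pywt

def denoise_range_image(rng, wavelet="db4", level=3):
    coeffs = pywt.wavedec2(rng, wavelet, level=level)
    # Estimate the noise level from the finest diagonal detail band.
    sigma = np.median(np.abs(coeffs[-1][-1])) / 0.6745
    thr = sigma * np.sqrt(2 * np.log(rng.size))    # universal threshold
    new_coeffs = [coeffs[0]] + [
        tuple(pywt.threshold(c, thr, mode="soft") for c in detail)
        for detail in coeffs[1:]
    ]
    return pywt.waverec2(new_coeffs, wavelet)

# Example on a synthetic noisy range image.
rng = np.linspace(10.0, 12.0, 256 * 256).reshape(256, 256)
noisy = rng + np.random.normal(0, 0.01, rng.shape)
filtered = denoise_range_image(noisy)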


2018 ◽  
Vol 10 (4) ◽  
pp. 140-155 ◽  
Author(s):  
Lu Liu ◽  
Yao Zhao ◽  
Rongrong Ni ◽  
Qi Tian

This article describes how images can be forged using different techniques; the most common forgery is copy-move forgery, in which a part of an image is duplicated and placed elsewhere in the same image. This article describes a convolutional neural network (CNN)-based method, combined with color filter array (CFA) features, to accurately localize the tampered regions. The CFA interpolation algorithm introduces correlation and consistency among the pixels, which can easily be destroyed by most image processing operations. The proposed CNN method can effectively distinguish the traces caused by copy-move forgeries and some post-processing operations. Additionally, it can use the classification result to guide the feature extraction, which enhances the robustness of the learned features. The authors test the proposed method in several experiments. The results demonstrate the efficiency of the method on different forgeries and quantify its robustness and sensitivity.
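As a rough, generic illustration of patch-level CNN localization (not the authors' architecture, and omitting the CFA feature branch), a minimal PyTorch sketch might look like this:

# Minimal sketch of a patch-level CNN for tamper localization (illustrative only).
import torch
import torch.nn as nn

class PatchTamperNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(64 * 8 * 8, 128), nn.ReLU(),
            nn.Linear(128, 2),              # pristine vs. tampered patch
        )

    def forward(self, x):                   # x: (N, 3, 32, 32) image patches
        return self.classifier(self.features(x))

# Sliding a 32x32 window over the image and classifying each patch
# yields a coarse localization map of the tampered regions.
model = PatchTamperNet()
logits = model(torch.randn(4, 3, 32, 32))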


Author(s):  
Yesica Pamela Leandro Chacon ◽  
Omar Chamorro Atalaya

The present research aims to design an automatic fire detection and extinction system, developed with infrared multi-spectrum electro-optical technology with watchdog timer control, for an electrical transformer stepping down from 220 kV to 33 kV. From its development, it is concluded that the automatic detection and extinction system has a deluge system with sprayed water, which is activated by a detection system with flame sensors; this detection system uses infrared multi-spectrum electro-optical technology and is controlled through the watchdog timer, which automatically detects and reports any failure in the state-of-the-art microprocessor. By subjecting the detection and extinguishing system to operational and functional tests, an optimal response of the deluge sprinklers was obtained in terms of the pressure and flow parameters; a coefficient of determination R² of 0.991 was also obtained, which indicates that the design is optimal and demonstrates feasibility from the operational and functional point of view.
Keywords: Detection, Extinction, Automatic, Electro-Optical, Multispectral, Infrared, Watchdog Timer, Transformer
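A minimal software sketch of the watchdog-supervision pattern referred to here is shown below; it is purely illustrative, since the actual system relies on a hardware watchdog timer on the microprocessor and real flame-sensor inputs.

# Minimal sketch of watchdog supervision of a detection loop (illustrative only).
import threading
import time

WATCHDOG_TIMEOUT = 2.0                       # seconds without a kick => fault
last_kick = time.monotonic()

def kick():
    """Called by the detection loop on every healthy cycle."""
    global last_kick
    last_kick = time.monotonic()

def watchdog():
    """Independent supervisor: reports a fault if the loop stops kicking."""
    while True:
        if time.monotonic() - last_kick > WATCHDOG_TIMEOUT:
            print("Watchdog expired: detection loop fault, raise alarm")
        time.sleep(0.1)

def detection_loop():
    while True:
        flame = False                        # placeholder for flame-sensor readout
        if flame:
            print("Flame detected: open deluge valve")
        kick()                               # main loop must kick the watchdog each cycle
        time.sleep(0.5)

threading.Thread(target=watchdog, daemon=True).start()
detection_loop()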


2019 ◽  
Vol 1 (01) ◽  
pp. 31-38 ◽  
Author(s):  
Samuel Manoharan

This paper proposes a smart algorithm for image processing by means of text recognition, information extraction, and vocalization for the visually challenged. The system uses a LattePanda Alpha single-board computer that processes the scanned images. The image is converted into its equivalent alphanumeric characters following pre-processing, segmentation, feature extraction, and post-processing of the scanned or image-based information. Further, a text-to-speech synthesizer is used for vocalization of the processed content. In converting handwritten scripts, the system offers a conversion accuracy of 97%, which also depends on the legibility of the data. The time delay for the entire conversion process is also analysed and the efficiency of the system is estimated.
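A minimal sketch of an OCR-to-speech pipeline with the same general stages (pre-process, recognize, vocalize) is given below, using the off-the-shelf pytesseract and pyttsx3 libraries rather than the paper's own recognition algorithm; the file name is a placeholder.

# Minimal sketch of an OCR-to-speech pipeline (illustrative only).
import cv2
import pytesseract
import pyttsx3

# Pre-processing: grayscale conversion and Otsu thresholding of the scan.
img = cv2.imread("scanned_page.png", cv2.IMREAD_GRAYSCALE)
_, binary = cv2.threshold(img, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)

# Recognition: convert the binarized image to alphanumeric text.
text = pytesseract.image_to_string(binary)

# Vocalization: read the recognized text aloud.
engine = pyttsx3.init()
engine.say(text)
engine.runAndWait()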


2001 ◽  
Vol 73 (3) ◽  
pp. 303-317 ◽  
Author(s):  
CICERO MOTA ◽  
JONAS GOMES ◽  
MARIA I. A. CAVALCANTE

We study the perceptual problem related to image quantization from an optimization point of view, using different metrics on the color space. A consequence of the results presented is that quantization using histogram equalization provides optimal perceptual results. This fact is well known and widely used, but, to our knowledge, a proof has never appeared in the image processing literature.
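For reference, histogram equalization of a grayscale image, the quantization strategy whose perceptual optimality is discussed here, can be sketched as follows (illustrative only, not part of the paper's proof):

# Minimal sketch of histogram equalization for a grayscale image.
import cv2
import numpy as np

img = cv2.imread("input.png", cv2.IMREAD_GRAYSCALE)

# OpenCV one-liner:
equalized = cv2.equalizeHist(img)

# Equivalent explicit construction via the cumulative histogram:
hist = np.bincount(img.ravel(), minlength=256)
cdf = hist.cumsum()
cdf_min = cdf[cdf > 0].min()
lut = ((cdf - cdf_min) * 255 / (cdf[-1] - cdf_min)).astype(np.uint8)  # old -> new gray level
equalized_manual = lut[img]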


2015 ◽  
Vol 77 (22) ◽  
Author(s):  
Sayed Muchallil ◽  
Fitri Arnia ◽  
Khairul Munadi ◽  
Fardian Fardian

Image denoising plays an important role in image processing. It is also part of the pre-processing stage in a complete binarization procedure, which consists of pre-processing, thresholding, and post-processing. Our previous research confirmed that Discrete Cosine Transform (DCT)-based filtering, as a new pre-processing step, improved the binarization output in terms of recall and precision. This research compares three classical denoising methods, Gaussian, mean, and median filtering, with the DCT-based filtering. The noisy ancient document images are filtered using those classical filtering methods. The outputs of this process are used as input for the Otsu, Niblack, Sauvola, and NICK binarization methods. The resulting binary images of the three classical methods are then compared with those of the DCT-based filtering. The performance of all denoising algorithms is evaluated by calculating the recall and precision of the resulting binary images. The result of this research is that the DCT-based filtering yields the highest recall and precision compared to the other methods.
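A minimal sketch of the classical-baseline part of such a comparison is shown below: denoise with Gaussian, mean, and median filters, binarize with Otsu, and score recall and precision against a ground-truth binary image. The DCT-based filter itself and the Niblack/Sauvola/NICK binarizers are not reproduced; the file names and foreground convention are assumptions.

# Minimal sketch: classical denoising + Otsu binarization, scored by recall/precision.
import cv2
import numpy as np

noisy = cv2.imread("ancient_doc.png", cv2.IMREAD_GRAYSCALE)
truth = cv2.imread("ancient_doc_gt.png", cv2.IMREAD_GRAYSCALE) > 127   # True = text pixels (assumed)

filters = {
    "gaussian": cv2.GaussianBlur(noisy, (5, 5), 0),
    "mean":     cv2.blur(noisy, (5, 5)),
    "median":   cv2.medianBlur(noisy, 5),
}

for name, img in filters.items():
    # THRESH_BINARY_INV marks dark text as foreground (True) after Otsu.
    _, binary = cv2.threshold(img, 0, 255, cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)
    pred = binary > 0
    tp = np.logical_and(pred, truth).sum()
    recall = tp / max(truth.sum(), 1)
    precision = tp / max(pred.sum(), 1)
    print(f"{name}: recall={recall:.3f}, precision={precision:.3f}")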


2012 ◽  
Vol 23 (1) ◽  
pp. 67-90 ◽  
Author(s):  
Kimi Akita

This article presents empirical evidence of the high referential specificity of sound-symbolic words, based on a FrameNet-aided analysis of collocational data of Japanese mimetics. The definition of mimetics, particularly their semantic definition, has crosslinguistically been the most challenging problem in the literature, and different researchers have used different adjectives (most notably "vivid," since Doke 1935) to describe their semantic peculiarity. The present study approaches this longstanding issue from a frame-semantic point of view combined with a quantitative method. It was found that mimetic manner adverbials generally form a frame-semantically more restricted range of verbal/nominal collocations than non-mimetic ones. Each mimetic can thus be considered to evoke a highly specific frame, which elaborates the general frame evoked by its typical host predicate and contains a highly limited set of frame elements that correlate with and constrain one another. This conclusion serves as a unified account of previously reported phenomena concerning mimetics, including the lack of hyponymy, the one-mimetic-per-clause restriction, and unparaphrasability. This study can also be viewed as a methodological proposal for the measurement of frame specificity, which supplements bottom-up linguistic tests.


2012 ◽  
Vol 220-223 ◽  
pp. 1350-1355
Author(s):  
Long Zhang ◽  
Jian Jun Yang ◽  
Jun Zhang

Model ice-shape measurement is an essential part of icing tests in a wind tunnel. This paper presents an investigation of the principles of camera calibration and image processing technology based on the OpenCV library as applied to ice-shape measurement in the wind tunnel. Software with complete functionality and good reproducibility was successfully developed. An ice-shape measurement test was conducted in the wind tunnel, and the application of the OpenCV library in image post-processing proved practical. This program can also be used effectively in aero-optics research, model attitude measurement, and model deformation measurement in the wind tunnel.
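For context, the standard OpenCV chessboard calibration that such a tool typically builds on can be sketched as follows; the board size and file names are illustrative, not those used in the paper.

# Minimal sketch of OpenCV chessboard camera calibration (illustrative only).
import cv2
import numpy as np
import glob

pattern = (9, 6)                                       # inner corners of the chessboard
objp = np.zeros((pattern[0] * pattern[1], 3), np.float32)
objp[:, :2] = np.mgrid[0:pattern[0], 0:pattern[1]].T.reshape(-1, 2)

obj_points, img_points = [], []
for path in glob.glob("calib_*.png"):
    gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    found, corners = cv2.findChessboardCorners(gray, pattern)
    if found:
        obj_points.append(objp)
        img_points.append(corners)

# Camera matrix and distortion coefficients, used later to undistort the
# ice-shape images before measurement.
ret, mtx, dist, rvecs, tvecs = cv2.calibrateCamera(
    obj_points, img_points, gray.shape[::-1], None, None)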

