Evaluation of an Intelligent Computer Method for the Automatic Mosaic of Sequential Slub Yarn Images

2018 ◽  
Vol 26 (2(128)) ◽  
pp. 38-48 ◽  
Author(s):  
Zhongjian Li ◽  
Ning Zhang ◽  
Yang Wu ◽  
Jing’an Wang ◽  
Ruru Pan ◽  
...  

This paper is the second part of a series reporting the recent development of a computerised method for the automatic mosaicking of sequential yarn images. In our earlier work, an effective method for automatically stitching sequential slub yarn images was developed based on image processing and the normalised cross-correlation (NCC) method. 100 image pairs of two kinds of slub yarn were measured under specific conditions, such as the frame rate and the size of the stitching template, and the measurement results were evaluated against the manual method. In this paper, the effects of various influencing factors are numerically examined, including the stitching template size, threshold value, frame rate, and computing time of the mosaic algorithm. The feasibility and accuracy of the fully computerised method were evaluated further under the various influencing parameters. One hundred percent cotton ring-spun single slub yarns of 27.8, 15.6, and 9.7 tex were prepared and used for the evaluation. The measurement results obtained by the proposed method are analysed and compared with those measured manually in Adobe Photoshop. The experimental results show that the proposed method can accurately find the stitch position and has high consistency with the manual method when the matching template is 100 × N pixels, the threshold values are T1 ∈ [20, 40] and T2 ∈ [51, 80], and the frame rate is greater than 40 fps.
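The abstract does not spell out what the two thresholds gate; assuming they act as a double intensity threshold during image segmentation (a common preprocessing step, and purely an assumption here — the function and values below are illustrative, not the paper's), their effect can be sketched as:

```python
import numpy as np

def double_threshold(gray, t1=30, t2=60):
    """Label pixels as background (< t1), uncertain (t1..t2-1) or
    confident foreground (>= t2). t1 and t2 are illustrative values
    taken from the reported ranges T1 in [20, 40] and T2 in [51, 80];
    the paper's actual preprocessing pipeline is not reproduced here."""
    labels = np.zeros_like(gray, dtype=np.uint8)
    labels[gray >= t2] = 2                  # confident yarn pixels
    labels[(gray >= t1) & (gray < t2)] = 1  # uncertain band
    return labels

img = np.array([[10, 25, 55],
                [70, 90, 15]], dtype=np.uint8)
labels = double_threshold(img)  # 0 = background, 1 = uncertain, 2 = yarn
```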

2017 ◽  
Vol 16 (2) ◽  
Author(s):  
Afgan Suffan Aviv ◽  
Bambang Suhardi ◽  
Pringgo Widyo Laksono

Implementation of ergonomics generally takes the form of design or redesign, which may include the design of the physical work environment. Ergonomic work environment conditions provide comfort and security for workers. Noise level is one of the physical environmental factors that can affect worker comfort and safety. A good physical work environment increases work capability and labour productivity. Within a work environment, a workload assessment can also be carried out to measure worker conformity and comfort; here the workload assessment was carried out simultaneously with the measurement of the noise level.

The industry studied, Yessy's Collection, is located in Tawangsari RT 03 RW 34 Mojosongo, Jebres, Surakarta, and has a noise problem. Noise levels were measured to improve worker comfort and thereby increase productivity. The method used was measurement with a 4-in-1 Environment meter on its sound level meter function, with the results mapped using the Surfer 11 software.

The measured noise levels are below the specified threshold value except at one coordinate in the swabing station (stasiun penyesekan), which is above the threshold value. The proposed improvements for noise control are engineering controls, administrative controls, and the use of PPE.


2014 ◽  
Vol 6 (2) ◽  
pp. 129-133
Author(s):  
Evaldas Borcovas ◽  
Gintautas Daunys

Image processing, computer vision and other complicated optical information processing algorithms require large resources. It is often desired to execute such algorithms in real time, and it is hard to fulfill these requirements with a single CPU. The CUDA technology proposed by NVidia enables the programmer to use the GPU resources of the computer. The current research was made with an Intel Pentium Dual-Core T4500 2.3 GHz processor with 4 GB DDR3 RAM (CPU I), an NVidia GeForce GT320M CUDA-compatible graphics card (GPU I), an Intel Core i5-2500K 3.3 GHz processor with 4 GB DDR3 RAM (CPU II), and an NVidia GeForce GTX 560 CUDA-compatible graphics card (GPU II). The OpenCV 2.1 and CUDA-compatible OpenCV 2.4.0 libraries were used for the testing. The main tests were made with the standard MatchTemplate function from the OpenCV libraries. The algorithm uses a main image and a template, and the influence of these factors was tested: the main image and the template were resized, and the algorithm's computing time and performance in Gtpix/s were measured. According to the information obtained from the research, GPU computing using the hardware mentioned earlier is up to 24 times faster when processing a big amount of information. When the images are small, the performance of the CPU and the GPU is not significantly different. The choice of template size influences calculation on the CPU. The difference in computing time between the GPUs can be explained by the number of cores they have: in this study the faster GPU had 16 times as many cores and performed the calculations 16 times faster.
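The CPU side of the benchmark can be sketched with a naive normalized cross-correlation, analogous to OpenCV's MatchTemplate (the TM_CCOEFF_NORMED mode is an assumption — the abstract only names the function; OpenCV's optimized and CUDA implementations are far faster than this loop):

```python
import time
import numpy as np

def match_template_ncc(image, template):
    """Naive CPU reference: slide the template over the image and
    compute zero-mean normalized cross-correlation at every offset."""
    th, tw = template.shape
    t = template - template.mean()
    tn = np.sqrt((t * t).sum())
    oh, ow = image.shape[0] - th + 1, image.shape[1] - tw + 1
    scores = np.zeros((oh, ow))
    for y in range(oh):
        for x in range(ow):
            w = image[y:y + th, x:x + tw]
            wz = w - w.mean()
            denom = np.sqrt((wz * wz).sum()) * tn
            scores[y, x] = (wz * t).sum() / denom if denom > 0 else 0.0
    return scores

rng = np.random.default_rng(0)
img = rng.random((64, 64))
tpl = img[20:28, 30:38].copy()   # template cut from the image itself
start = time.perf_counter()
scores = match_template_ncc(img, tpl)
elapsed = time.perf_counter() - start  # resize img/tpl to study timing
peak = np.unravel_index(scores.argmax(), scores.shape)
# peak recovers the cut-out location (20, 30) with score 1.0
```

Resizing `img` and `tpl` and re-timing this loop mirrors the experiment's variables; on a GPU the per-offset scores are computed in parallel, which is where the reported speedup comes from.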


2020 ◽  
Vol 76 (2) ◽  
pp. 287-297
Author(s):  
Raphael Romano Bruno ◽  
Mara Schemmelmann ◽  
Jakob Wollborn ◽  
Malte Kelm ◽  
Christian Jung

OBJECTIVE: Diagnostics and risk stratification in intensive care and emergency medicine must be fast, accurate, and reliable. The assessment of sublingual microcirculation is a promising tool for this purpose. However, its value is limited because the measurement is time-consuming in unstable patients. This proof-of-concept validation study examines the non-inferiority of a reduced frame rate in image acquisition regarding quality, measurement results, and time. METHODS: This prospective observational study included healthy volunteers. Sublingual measurement of microcirculation was performed using a sidestream dark field camera (SDF, MicroVision Medical®). Video quality was evaluated with a modified MIQS (microcirculation image quality score). The AVA 4.3C software calculated the microcirculatory parameters. RESULTS: Thirty-one volunteers were included. The frame rate had no impact on the time the software algorithm needed to measure one video (4.5 ± 0.5 minutes) with AVA 4.3C. 86 frames per video provided non-inferior video quality (MIQS 1.8 ± 0.7 for 86 frames versus MIQS 2.2 ± 0.6 for 215 frames, p < 0.05) and equal results for all microcirculatory parameters, but did not result in an advantage in terms of speed. No complications occurred. CONCLUSION: Video captures with 86 frames offer equal video quality and results for consensus parameters compared with 215 frames. However, there was no advantage regarding the time needed for the overall measurement procedure.


Sensors ◽  
2021 ◽  
Vol 21 (22) ◽  
pp. 7444
Author(s):  
Piotr Kiedrowski ◽  
Beata Marciniak

The pass/fail form is one of the methods of presenting quality assessment results. The authors, as part of a research team, participated in the process of creating the PRIME interface analyzer. The PRIME interface is a standardized interface, considered as a communication technology for smart metering wired networks, which are a specific kind of sensor network. The frame error ratio (FER) assessment and its presentation in the pass/fail form was one of the problems that needed to be solved in the PRIME analyzer project. In this paper, the authors present their method of unified FER assessment, which was implemented in the PRIME analyzer as one of its many functionalities. The need for FER unification results from the use of different modulation types and an optional forward error correction mechanism in the PRIME interface. Having one unified FER and a threshold value makes it possible to present measurement results in the pass/fail form. For FER unification, the characteristics of FER versus signal-to-noise ratio for all modulations implemented in PRIME were used in the proposed algorithm (some are presented in this paper). In communication systems, the FER value is used to forecast the quality of a link or service, but with PLC technology forecasting is highly uncertain due to the mains noise. The presentation of the measurement results in the pass/fail form is important because it allows unskilled staff to make many laborious measurements in last-mile smart metering networks.
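The unification idea can be sketched under the assumption that each modulation has a known, monotonically decreasing FER-vs-SNR characteristic: invert the measured FER to an SNR, then read off the FER a reference modulation would show at that SNR. The curve tables and threshold below are hypothetical values for illustration only (DBPSK and D8PSK are real PRIME modulations, but these are not their measured curves):

```python
def interp(x, xs, ys):
    """Piecewise-linear interpolation over ascending xs, clamped at the ends."""
    if x <= xs[0]:
        return ys[0]
    if x >= xs[-1]:
        return ys[-1]
    for i in range(1, len(xs)):
        if x <= xs[i]:
            t = (x - xs[i - 1]) / (xs[i] - xs[i - 1])
            return ys[i - 1] + t * (ys[i] - ys[i - 1])

# Hypothetical FER-vs-SNR characteristics: (SNR points in dB, FER values).
CURVES = {
    "DBPSK": ([0, 2, 4, 6], [0.5, 0.2, 0.05, 0.01]),
    "D8PSK": ([6, 8, 10, 12], [0.5, 0.2, 0.05, 0.01]),
}
REFERENCE = "DBPSK"

def unified_fer(measured_fer, modulation):
    """Map a FER measured under any modulation to the equivalent FER of
    the reference modulation at the same SNR (curves are monotonic, so
    the FER -> SNR inversion is well defined)."""
    snr_axis, fer_axis = CURVES[modulation]
    snr = interp(measured_fer, list(reversed(fer_axis)), list(reversed(snr_axis)))
    ref_snr, ref_fer = CURVES[REFERENCE]
    return interp(snr, ref_snr, ref_fer)

def pass_fail(measured_fer, modulation, threshold=0.1):
    """One unified FER plus one threshold yields the pass/fail verdict."""
    return "pass" if unified_fer(measured_fer, modulation) <= threshold else "fail"
```

With one unified scale, a single threshold suffices regardless of which modulation or FEC setting the link happened to use during the measurement.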


Author(s):  
Zhaoping He ◽  
Laura Bolling ◽  
Dalal Tonb ◽  
Tracey Nadal ◽  
Devendra I. Mehta

Determination of disaccharidase and glucoamylase activities is important for the diagnosis of intestinal diseases. We adapted a widely accepted manual method to an automated system that uses the same reagents, reaction volumes, incubation times, and biopsy size as the manual method. A dye was added to the homogenates as an internal quality control to monitor the pipetting precision of the automated system. When the automated system was tested using human intestinal homogenates, the activities of all the routinely tested disaccharidases, including lactase, maltase, sucrase, and palatinase, as well as the activity of glucoamylase, showed perfect agreement with the manual method and were highly reproducible. The automated analyzer can perform the same routine assays of disaccharidases and glucoamylase with high consistency and accuracy, and reduce testing costs by processing a larger number of samples with the same number of staff. Additional developments, such as barcoding and built-in plate reading, would result in a completely automated system.


2017 ◽  
Vol 88 (24) ◽  
pp. 2854-2866 ◽  
Author(s):  
Zhongjian Li ◽  
Nian Xiong ◽  
Jingan Wang ◽  
Ruru Pan ◽  
Weidong Gao ◽  
...  

In order to analyze the parameters of slub yarn from sequential images accurately, an automatic image mosaic method is proposed in this paper. In this method, a series of overlapping yarn images, captured from a moving slub yarn, are stitched into a panorama automatically. Background subtraction, image segmentation and judgment-template traversal methods are applied to preprocess the sequential images to obtain a test image. Subsequently, certain rows at the bottom of the test image are used as a template image to match the next image. The matching coefficient matrix between the template image and the next image is calculated based on the normalized cross-correlation method. In this matrix, the coordinates of the peak value are taken as the optimal matching points. Two kinds of slub yarn images captured at 40 fps were stitched using the proposed method and the manual method, respectively. Finally, an objective method is formulated to evaluate the quality of the image mosaics produced by the proposed method. The experimental results show that the proposed method can find the match position accurately and is highly consistent with the manual method.
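The match-and-stitch step can be sketched in NumPy on synthetic data — a minimal version with a 10-row template and vertical shifts only (the paper uses 100 × N pixel templates on real yarn images and a full coefficient matrix):

```python
import numpy as np

def stitch_pair(prev_img, next_img, template_rows=10):
    """Stitch two vertically overlapping strips: the bottom rows of
    prev_img serve as the template, matched against every vertical
    offset of next_img by normalized cross-correlation; the peak
    score marks the overlap, and the non-overlapping remainder of
    next_img is appended."""
    tpl = prev_img[-template_rows:].astype(float)
    tz = tpl - tpl.mean()
    tn = np.sqrt((tz * tz).sum())
    scores = []
    for y in range(next_img.shape[0] - template_rows + 1):
        w = next_img[y:y + template_rows].astype(float)
        wz = w - w.mean()
        d = np.sqrt((wz * wz).sum()) * tn
        scores.append((wz * tz).sum() / d if d > 0 else 0.0)
    best = int(np.argmax(scores))          # peak of the coefficient vector
    return np.vstack([prev_img, next_img[best + template_rows:]])

rng = np.random.default_rng(1)
full = rng.integers(0, 256, size=(60, 16), dtype=np.uint8)
a, b = full[:40], full[25:]                # two views with 15 rows of overlap
stitched = stitch_pair(a, b)               # reproduces the full 60-row strip
```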


2019 ◽  
Vol 2 (2) ◽  
pp. 139-144
Author(s):  
Suhardiman Diman ◽  
Zahir Zainuddin ◽  
Salama Manjang

Edge detection is the basic operation used in most image processing applications to extract information from an image frame, as a first step toward extracting the features of the segmentation object to be detected. Nowadays, the many available edge detection methods create doubt about which method is right for given image conditions. Based on this problem, a study was conducted to compare the performance of the Canny, Sobel and Laplacian edge detection methods on images of rice fields. The program was created using the Python programming language with OpenCV. The test on one image showed that the Canny method produces thin and smooth edges and does not omit important information in the image, although it requires more computing time. Classification generally starts from the data acquisition process, then pre-processing and post-processing. Canny edge detection can detect actual edges with minimum error rates and produce optimal image edges. The threshold value obtained from the Canny method was the best and most optimal threshold value among the methods. A test comparing the three methods showed that the Canny edge detection method gives better results in determining rice field boundaries, at 90%, compared with Sobel at 87% and Laplacian at 89%.
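Of the three operators compared, Sobel is the simplest; a minimal NumPy version (kernel convolution only, no border handling or thresholding) shows the gradient magnitude it produces. Canny builds on such gradients with non-maximum suppression and hysteresis thresholding, which is where its thinner edges and higher computing time come from:

```python
import numpy as np

def sobel_magnitude(gray):
    """Gradient magnitude with the 3x3 Sobel kernels. No border
    handling: the output is 2 pixels smaller in each dimension."""
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
    ky = kx.T
    h, w = gray.shape
    gx = np.zeros((h - 2, w - 2))
    gy = np.zeros((h - 2, w - 2))
    for y in range(h - 2):
        for x in range(w - 2):
            win = gray[y:y + 3, x:x + 3].astype(float)
            gx[y, x] = (win * kx).sum()   # horizontal gradient
            gy[y, x] = (win * ky).sum()   # vertical gradient
    return np.hypot(gx, gy)

# Vertical step edge: left half dark, right half bright.
img = np.zeros((5, 6), dtype=np.uint8)
img[:, 3:] = 200
mag = sobel_magnitude(img)
# The strongest responses sit on the two columns straddling the step.
```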


2014 ◽  
Vol 971-973 ◽  
pp. 1760-1763
Author(s):  
Yong Zhi Min ◽  
Hong Xia Wang ◽  
Jian Wu Dang

Catenary faults are caused by heat, so infrared thermal imaging technology and image processing are combined for catenary thermal fault detection. In image processing, template matching can be used for target detection in infrared images. Mutual information serves as the similarity measure criterion in image template matching, but a weakness of the NP window method is its long computing time, so this paper employs the MNP method in place of the original NP window. Experimental results show that the MNP method has advantages in calculating the joint probability distribution and in matching time. When the template size is 5×5, the MNP method has the higher template matching accuracy.
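The similarity measure at the core of the method — mutual information estimated from a joint intensity histogram — can be sketched as follows. The NP-window and MNP estimators refine exactly this joint-distribution estimate; they are not reproduced here, this is the plain histogram version:

```python
import numpy as np

def mutual_information(a, b, bins=8):
    """Mutual information between two equally sized patches, estimated
    from their joint intensity histogram: I(A;B) = sum p(x,y) *
    log(p(x,y) / (p(x) p(y))) over the occupied histogram cells."""
    hist, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    pxy = hist / hist.sum()
    px = pxy.sum(axis=1, keepdims=True)   # marginal of a
    py = pxy.sum(axis=0, keepdims=True)   # marginal of b
    nz = pxy > 0                          # avoid log(0)
    return float((pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])).sum())

rng = np.random.default_rng(2)
patch = rng.integers(0, 256, size=(5, 5))
noise = rng.integers(0, 256, size=(5, 5))
# A patch shares far more information with itself than with noise,
# which is what makes MI usable as a matching score.
```

In matching, this score is evaluated at each candidate template position and the maximum is taken; the cost of re-estimating the joint distribution at every position is what the MNP method reduces.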


Author(s):  
Maria Gemel B. Palconit ◽  
Ronnie S. Concepcion II ◽  
Jonnel D. Alejandrino ◽  
Michael E. Pareja ◽  
Vincent Jan D. Almero ◽  
...  

Three-dimensional multiple fish tracking has gained significant research interest in quantifying fish behavior. However, most tracking techniques use a high frame rate, which is currently not viable for real-time tracking applications. This study discusses multiple fish-tracking techniques using low-frame-rate sampling of stereo video clips. The fish were tagged and tracked based on the absolute error of the predicted indices using past and present fish centroid locations and a deterministic frame index. In the predictor sub-system, linear regression and machine learning algorithms intended for nonlinear systems, such as the adaptive neuro-fuzzy inference system (ANFIS), symbolic regression, and Gaussian process regression (GPR), were investigated. The results showed that, in the context of tagging and tracking accuracy, the symbolic regression attained the best performance, followed by the GPR, that is, 74% to 100% and 81% to 91%, respectively. Considering the computation time, symbolic regression resulted in the highest computing lag of approximately 946 ms per iteration, whereas GPR achieved the lowest computing time of 39 ms.
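The linear-regression baseline of the predictor sub-system can be sketched as ordinary least squares over the past centroid positions — a minimal constant-velocity illustration (the tagging-by-absolute-error logic and the ANFIS, symbolic-regression and GPR predictors are not reproduced):

```python
def predict_next(points):
    """Predict the next centroid by fitting x(t) and y(t) with
    ordinary least squares over the past frames, then evaluating
    the fitted lines one frame ahead."""
    n = len(points)
    ts = list(range(n))

    def fit_and_extrapolate(vals):
        mt = sum(ts) / n
        mv = sum(vals) / n
        cov = sum((t - mt) * (v - mv) for t, v in zip(ts, vals))
        var = sum((t - mt) ** 2 for t in ts)
        slope = cov / var
        return mv + slope * (n - mt)      # evaluate at t = n

    return (fit_and_extrapolate([p[0] for p in points]),
            fit_and_extrapolate([p[1] for p in points]))

# A fish moving at constant velocity across four low-rate frames:
track = [(10.0, 5.0), (12.0, 6.0), (14.0, 7.0), (16.0, 8.0)]
nxt = predict_next(track)   # -> (18.0, 9.0)
```

The absolute error between such a prediction and each detected centroid in the next frame is the quantity the tagging step thresholds to keep identities consistent across frames.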


2021 ◽  
Vol 8 (1) ◽  
pp. 119-133
Author(s):  
Yuan Chang ◽  
Congyi Zhang ◽  
Yisong Chen ◽  
Guoping Wang

Image interpolation has a wide range of applications, such as frame-rate up-conversion and free-viewpoint TV. Despite significant progress, it remains an open challenge, especially for image pairs with large displacements. In this paper, we first propose a novel optimization algorithm for motion estimation, which combines the advantages of global optimization and a local parametric transformation model. We perform optimization over dynamic label sets, which are modified after each iteration using a prior of piecewise consistency to avoid local minima. We then apply it to an image interpolation framework that includes occlusion handling and intermediate image interpolation. We validate the performance of our algorithm experimentally, and show that our approach achieves state-of-the-art performance.
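For contrast with motion-compensated interpolation, the motion-unaware baseline is a plain cross-fade; its ghosting on large displacements is exactly the failure mode that the motion estimation above addresses (a toy illustration, not the paper's method):

```python
import numpy as np

def crossfade(img0, img1, t=0.5):
    """Motion-unaware baseline: per-pixel linear blend of the two
    input frames. With a large displacement the moving object shows
    up twice at half intensity (ghosting) instead of once at its
    intermediate position."""
    return (1.0 - t) * img0 + t * img1

img0 = np.zeros((1, 5)); img0[0, 0] = 100.0   # object at column 0
img1 = np.zeros((1, 5)); img1[0, 4] = 100.0   # object moved to column 4
mid = crossfade(img0, img1)
# Ghosting: two half-intensity copies, none at the true midpoint (column 2).
```

A motion-compensated interpolator would instead estimate the 4-column displacement and render a single full-intensity object at column 2.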

