New high-speed adaptive frame synchronisers incorporating postdetection processing techniques

1991 ◽  
Vol 138 (4) ◽  
pp. 269
Author(s):  
L.-K. Shark ◽  
T.J. Terrell ◽  
R.J. Simpson

2018 ◽  
Vol 2 (4) ◽  
pp. 72 ◽  
Author(s):  
German Terrazas ◽  
Giovanna Martínez-Arellano ◽  
Panorios Benardos ◽  
Svetan Ratchev

The new generation of ICT solutions applied to the monitoring, adaptation, simulation and optimisation of factories is a key enabling technology for a new level of manufacturing capability and adaptability in the context of Industry 4.0. Given the advances in sensor technologies, factories as well as machine tools can now be sensorised, and the vast amount of data generated can be exploited by intelligent information processing techniques such as machine learning. This paper presents an online tool wear classification system comprising a monitoring infrastructure, dedicated to performing dry milling on steel while capturing force signals, and a computing architecture, assembled for the assessment of flank wear based on deep learning. In particular, this approach demonstrates that a big data analytics method for classification, applied to large volumes of continuously acquired force signals generated at high speed during milling, responds sufficiently well when used as an indicator of the different stages of tool wear. This research presents the design, development and deployment of the system components and an overall evaluation involving machining experiments, data collection, training and validation, which, as a whole, has shown an accuracy of 78%.
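As a rough illustration of the kind of pipeline described (the paper does not publish its network architecture), the following Python sketch windows three-channel force signals and trains a small 1D convolutional classifier over assumed wear-stage labels; the window length, layer sizes and the three stages are placeholders, not the authors' design.

```python
# Hypothetical sketch: classifying tool-wear stage from milling force signals.
# Window length, channel count, layer sizes and the three wear stages
# (initial / steady-state / severe) are assumptions for illustration only.
import numpy as np
import tensorflow as tf

WINDOW = 1024        # samples per force-signal window (assumed)
CHANNELS = 3         # Fx, Fy, Fz force components (assumed)
N_CLASSES = 3        # initial, steady-state, severe wear (assumed)

def make_windows(signal, window=WINDOW):
    """Split a (n_samples, CHANNELS) force record into non-overlapping windows."""
    n = (len(signal) // window) * window
    return signal[:n].reshape(-1, window, signal.shape[1])

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(WINDOW, CHANNELS)),
    tf.keras.layers.Conv1D(16, kernel_size=9, activation="relu"),
    tf.keras.layers.MaxPooling1D(4),
    tf.keras.layers.Conv1D(32, kernel_size=9, activation="relu"),
    tf.keras.layers.GlobalAveragePooling1D(),
    tf.keras.layers.Dense(N_CLASSES, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# Example with synthetic data standing in for dynamometer measurements.
force = np.random.randn(100_000, CHANNELS).astype("float32")
X = make_windows(force)
y = np.random.randint(0, N_CLASSES, size=len(X))   # placeholder wear labels
model.fit(X, y, epochs=2, batch_size=32, verbose=0)
```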


2014 ◽  
Vol 70 (3) ◽  
Author(s):  
Nasarudin Ahmad ◽  
Ruzairi Abdul Rahim ◽  
Herlina Abdul Rahim ◽  
Mohd Hafiz Fazlul Rahiman

Although ultrasound techniques have reached maturity in terms of sensor development, their use in many areas remains open to exploration. Many ultrasonic sensors are still used in a conventional manner, especially in industrial measurement equipment. With advances in signal processing techniques, high-speed computing, and the latest image-formation-based non-destructive testing (NDT) methods, the use of ultrasound in concrete NDT has become very extensive, because the technique is simple and does not damage the concrete structure under investigation. Many parameters of concrete can be tested using ultrasound techniques, starting with the initial concrete-mixing process and continuing until the concrete has fully matured. A range of tests is available, from fully non-destructive tests, in which the concrete is not damaged at all, through tests in which the concrete surface is slightly damaged, to partially destructive tests such as core tests, insertion tests and pull-off tests, after which the surface must be repaired. The parameters that can be evaluated using non-destructive and partially destructive testing are rather numerous and include basic properties such as density, elastic modulus and strength, as well as surface hardness, surface absorption, and reinforcement location, size and distance from the surface. In some cases it is also possible to check the quality of workmanship and structural integrity through the ability to detect voids, cracks and delamination. This paper presents a review of ultrasonic NDT of concrete, highlighting the important aspects to consider in the application and development of ultrasound testing on concrete, namely the capture, processing and presentation of ultrasound signals.
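As a concrete example of the signal capture and processing aspect the review highlights, the sketch below computes the classic ultrasonic pulse velocity from transducer spacing and transit time; the quality bands are indicative values commonly quoted in the literature, not figures from this paper.

```python
# Minimal sketch of the ultrasonic pulse velocity (UPV) test, the most common
# ultrasound-based NDT method for concrete. The quality bands below are
# indicative values often quoted in the literature; exact thresholds vary
# between standards and are an assumption here.

def pulse_velocity(path_length_m: float, transit_time_us: float) -> float:
    """UPV in km/s from the transducer spacing and the measured transit time."""
    return (path_length_m / (transit_time_us * 1e-6)) / 1000.0

def quality_band(v_km_s: float) -> str:
    if v_km_s > 4.5:
        return "excellent"
    if v_km_s > 3.5:
        return "good"
    if v_km_s > 3.0:
        return "medium"
    return "doubtful"

# Example: 300 mm path, 68 microseconds transit time -> ~4.4 km/s ("good").
v = pulse_velocity(0.300, 68.0)
print(f"UPV = {v:.2f} km/s, quality: {quality_band(v)}")
```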


Author(s):  
О.В. Трапезникова

The problem of improving the accuracy of quality control in the production of printed products remains relevant, since both the manufacturing technologies of the products themselves and the measuring equipment and control methods continue to develop. Because the ink image, which carries the informational function, is applied to a printed product at a rather high speed, the widely used method of visual inspection is of low efficiency, can only be applied selectively, and cannot provide the required accuracy; the consequence is rejects or degraded quality of the printed images. It is shown that the current control methods regulated by standards do not meet the requirements for producing products that are competitive on the modern market. The paper analyses standard and patented methods for controlling the quality indicators of an image applied to a printed product by printing. Directions for their modernisation and for the development of new objective control methods are identified; this can be achieved only by using modern systems with software developed for a specific object of control, which support the integration of measurement and information-analysis procedures aimed at stabilising the printing process. A method for controlling ink trapping, based on mathematical and harmonic (Fourier) analysis, is proposed. Since no standard exists, an algorithm for controlling the covering power indicator has been developed; in this algorithm, the evaluation and control of covering power are based on methods of mathematical statistics. Ink trapping and covering power are determined and controlled with the developed computer programs, which not only substantially reduces inspection time but also improves its accuracy.
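The abstract does not publish the covering-power algorithm itself; purely to illustrate the kind of statistics-based check it describes, the sketch below applies a Shewhart-style 3-sigma test to contrast-ratio opacity measurements. All function names, values and thresholds here are assumptions.

```python
# Hedged sketch: a statistics-based check of the kind the abstract describes for
# covering-power control. The actual algorithm is not given in the abstract; a
# simple Shewhart-style 3-sigma test on measured opacity values of printed
# patches is used here purely as an illustration.
import statistics

def covering_power(reflectance_over_black: float, reflectance_over_white: float) -> float:
    """Contrast-ratio style opacity: patch over black vs. over white substrate."""
    return reflectance_over_black / reflectance_over_white

def in_control(samples: list[float], new_value: float, k: float = 3.0) -> bool:
    """Flag a new measurement falling outside mean +/- k*sigma of past samples."""
    mu = statistics.fmean(samples)
    sigma = statistics.stdev(samples)
    return abs(new_value - mu) <= k * sigma

history = [0.92, 0.93, 0.91, 0.94, 0.92, 0.93]     # previous lot measurements
latest = covering_power(0.81, 0.95)                # ~0.85, noticeably lower
print("within control limits" if in_control(history, latest) else "out of control")
```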


Geophysics ◽  
1964 ◽  
Vol 29 (1) ◽  
pp. 38-53 ◽  
Author(s):  
M. B. Dobrin ◽  
W. G. Rimmer

Many geological features associated with oil accumulation show up on seismic maps as interruptions of regional trends rather than as true structural closures. Among such features are reefs, which are often best detected by draping of overlying formations; erosional escarpments which truncate porous limestone beds on their updip sides; and buried ridges which cause productive stratigraphic buildups in overlying beds. In the presence of regional tilting, seismic indications from such features can be so obscured that special data-processing techniques are required to make them readily recognizable on seismic maps. The problem here is very similar to that of separating gravity and magnetic effects of features having economic interest from regional background. Residual techniques developed for accomplishing this type of separation can be applied advantageously to seismic data where regional structure obscures significant anomalies. Either contour-smoothing or grid methods can be used, depending on the nature of the problem and the preference of the interpreter. As with gravity or magnetics, the grid methods are particularly adaptable for high-speed electronic computation. Some examples are shown where regional effects are removed from seismic maps over known reefs and productive erosional escarpments by techniques using electronic computation. A somewhat different approach is necessary when it is desired to remove the effect of velocity variation from time maps by treating the velocity function as a regional effect. Here the regional is multiplicative rather than additive and cross-product terms must be taken into account. By relating the time maps and velocity maps using this approach, the principal hazards of using time maps for interpretation can be avoided.
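To illustrate the grid residual idea described above, the sketch below estimates the regional at each grid point as a ring average of surrounding values and subtracts it from the observed time map; the ring radius and the simple 8-point operator are assumptions for illustration, not the operators used in the paper.

```python
# Illustrative sketch of a grid residual method: the regional is estimated by
# averaging grid values on a ring about each point, and the residual is the
# observed value minus that average. Ring radius and 8-point average are
# assumed, not taken from the paper.
import numpy as np

def ring_average(grid: np.ndarray, i: int, j: int, r: int) -> float:
    """Average of the 8 grid points at distance r (in grid units) around (i, j)."""
    offsets = [(-r, -r), (-r, 0), (-r, r), (0, -r), (0, r), (r, -r), (r, 0), (r, r)]
    vals = [grid[i + di, j + dj] for di, dj in offsets]
    return float(np.mean(vals))

def grid_residual(time_map: np.ndarray, r: int = 2) -> np.ndarray:
    """Residual map: observed minus ring-averaged regional (interior points only)."""
    res = np.zeros_like(time_map)
    for i in range(r, time_map.shape[0] - r):
        for j in range(r, time_map.shape[1] - r):
            res[i, j] = time_map[i, j] - ring_average(time_map, i, j, r)
    return res

# Synthetic example: a planar regional dip plus a small reef-like anomaly.
x, y = np.meshgrid(np.arange(40), np.arange(40), indexing="ij")
regional = 0.002 * x + 0.001 * y                      # regional tilt (seconds)
anomaly = 0.01 * np.exp(-((x - 20) ** 2 + (y - 20) ** 2) / 20.0)
residual = grid_residual(regional + anomaly)
print(residual[20, 20])   # the anomaly stands out once the regional is removed
```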


2011 ◽  
Vol 4 (7) ◽  
pp. 1361-1381 ◽  
Author(s):  
R. P. Lawson

Abstract. Recently, considerable attention has been focused on the issue of large ice particles shattering on the inlets and tips of cloud particle probes, which produces copious fragments that can be mistakenly measured as real ice particles. Currently two approaches are being used to mitigate the problem: (1) based on recent high-speed video in icing tunnels, probe tips have been designed that reduce the number of shattered particles that reach the probe sample volume, and (2) post-processing techniques are applied, such as image processing and use of the arrival time of each individual particle. This paper focuses on exposing suspected errors in measurements of ice particle size distributions due to shattering, and on evaluation of the two techniques used to reduce the errors. Data from 2D-S probes constitute the primary source of the investigation; however, when available, comparisons with 2D-C and CIP measurements are also included. Korolev et al. (2010b) report results from a recent field campaign (AIIE) and conclude that modified probe tips are more effective than an arrival time algorithm when applied to 2D-C and CIP measurements. Analysis of 2D-S data from the AIIE and SPARTICUS field campaigns shows that modified probe tips significantly reduce the number of shattered particles, but that a particle arrival time algorithm is more effective than the probe tips designed to reduce shattering. A large dataset of 2D-S measurements with and without modified probe tips was not available from the AIIE and SPARTICUS field campaigns. Instead, measurements in regions with large ice particles are presented to show that the 2D-S with modified probe tips still records large quantities of small particles that are likely produced by shattering. Also, when an arrival time algorithm is applied to the 2D-S data, the results show that it is more effective than the modified probe tips in reducing the number of small (shattered) particles. Recent results from SPARTICUS and MACPEX show that 2D-S ice particle concentration measurements are more consistent with physical arguments and numerical simulations than measurements with older cloud probes from previous field campaigns. The analysis techniques in this paper can also be used to estimate an upper bound for the effects of shattering. For example, the additional spurious concentration of small ice particles can be measured as a function of the mass concentration of large ice particles. The analysis provides estimates of upper bounds on the concentration of natural ice, and on the remaining concentration of shattered ice particles after application of the post-processing techniques. However, a comprehensive investigation of shattering is required to quantify effects that arise from the multiple degrees of freedom associated with this process, including different cloud environments, probe geometries, airspeed, angle of attack, particle size and type.
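To make the arrival-time idea concrete, the sketch below rejects particles whose inter-arrival times to either neighbour fall below a threshold, the signature of a shattering burst. The fixed threshold is an assumption; in practice it is chosen from the bimodal inter-arrival-time distribution of each data segment, and this is not the authors' published algorithm.

```python
# Hedged sketch of a particle arrival-time algorithm of the kind evaluated above.
# Shattered fragments from one ice particle reach the sample volume in a tight
# burst, so their inter-arrival times are anomalously short. The threshold is an
# assumed value, not one taken from the paper.
import numpy as np

def reject_shattered(arrival_times_s: np.ndarray, threshold_s: float = 1e-4) -> np.ndarray:
    """Boolean mask keeping particles whose spacing to both neighbours exceeds
    the inter-arrival threshold (short gaps flag shattering events)."""
    t = np.asarray(arrival_times_s, dtype=float)
    gaps = np.diff(t)                        # gap[i] is between particle i and i+1
    short_before = np.concatenate(([False], gaps < threshold_s))
    short_after = np.concatenate((gaps < threshold_s, [False]))
    return ~(short_before | short_after)

# Example: three well-separated particles plus a shattering burst of four fragments.
times = np.array([0.010, 0.052, 0.090000, 0.090020, 0.090035, 0.090050, 0.130])
mask = reject_shattered(times)
print(times[mask])   # the burst near t = 0.09 s is removed, isolated ones kept
```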


2013 ◽  
Vol 385-386 ◽  
pp. 1500-1504
Author(s):  
Jiang Tao Huang ◽  
Jian Peng ◽  
Feng Bo Li

With the high-speed development of digital image processing technology, computer image processing is used more and more widely in scientific research, and it also has a wide range of applications in fields such as minerals, resources and the environment. This paper applies digital image processing techniques to extract the characteristic features of ore and to recognise ore automatically, bringing pattern recognition methods into the field of mineral recognition and testing in order to achieve the goal of automated mineral appraisal.
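As a minimal sketch of such a pipeline (the paper does not specify which features or classifier it uses), the example below extracts simple global image features and trains a random-forest classifier on synthetic stand-in data; all choices here are illustrative assumptions.

```python
# Hedged sketch of a feature-extraction-plus-pattern-recognition pipeline for
# ore images. The features (mean intensity, contrast, grey-level histogram) and
# the random-forest classifier are illustrative choices, not the paper's method.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def ore_features(image_rgb: np.ndarray) -> np.ndarray:
    """Simple global features from an RGB ore image (H, W, 3) with values in [0, 255]."""
    gray = image_rgb.mean(axis=2)
    hist, _ = np.histogram(gray, bins=16, range=(0, 255), density=True)
    return np.concatenate(([gray.mean(), gray.std()], hist))

# Synthetic stand-in data: two "ore types" with different brightness statistics.
rng = np.random.default_rng(0)
dark = [rng.integers(0, 120, (64, 64, 3)) for _ in range(20)]
bright = [rng.integers(120, 255, (64, 64, 3)) for _ in range(20)]
X = np.array([ore_features(im) for im in dark + bright])
y = np.array([0] * 20 + [1] * 20)

clf = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)
print(clf.predict([ore_features(rng.integers(0, 120, (64, 64, 3)))]))  # -> [0]
```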


2017 ◽  
Vol 888 ◽  
pp. 222-227 ◽  
Author(s):  
Wee Chun Wong ◽  
Pei Leng Teh ◽  
Azlin Fazlina Osman ◽  
Cheow Keat Yeoh

In this work, two instruments were used to disperse graphene nanofillers into epoxy matrices: a high-speed mechanical stirrer and a bath sonicator. A two-stage experiment was conducted in order to achieve better dispersion of the graphene fillers. Flexural, fracture toughness and density tests were conducted on neat epoxy and on epoxy nanocomposites incorporating 0.2 vol%, 0.4 vol%, 0.6 vol%, 0.8 vol% and 1 vol% graphene to observe the effect of graphene loading on the mechanical properties. The flexural results showed an improvement in the flexural strength of the graphene-incorporated epoxy nanocomposites over neat epoxy. However, this enhancement was observed only up to 0.2 vol% filler loading, beyond which the properties were seen to decrease; re-agglomeration of the graphene nanofillers may explain this behaviour. Flexural modulus increased continuously with increasing filler concentration. The fracture toughness results revealed that nanocomposites fabricated by bath sonication showed an increasing trend with filler concentration up to 1.0 vol%, without yet reaching an optimum value, whereas nanocomposites fabricated with the high-speed mechanical stirrer reached their optimum fracture toughness at 0.6 vol% loading. Further addition of graphene nanofillers promoted poor filler dispersion, which resulted in decreased fracture toughness of the nanocomposites. In addition, the density of the nanocomposites increased as greater amounts of graphene nanofillers were added, regardless of the processing technique used. These results indicate that both processing techniques are suitable for dispersing fillers only at low loadings; however, the bath sonication method was able to fabricate epoxy/graphene nanocomposites with more homogeneous filler dispersion than high-speed mechanical mixing.
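The observed rise in density with filler content can be illustrated with a simple rule-of-mixtures estimate; the constituent densities used below are typical literature values, not measurements from this work.

```python
# Hedged illustration of why composite density rises with filler content: a
# rule-of-mixtures estimate. The densities assumed here (epoxy ~1.2 g/cm^3,
# graphene ~2.2 g/cm^3) are typical literature values, not figures from the paper.

RHO_EPOXY = 1.2      # g/cm^3, assumed matrix density
RHO_GRAPHENE = 2.2   # g/cm^3, assumed filler density

def composite_density(filler_vol_fraction: float) -> float:
    """Rule of mixtures: rho_c = Vf * rho_f + (1 - Vf) * rho_m."""
    vf = filler_vol_fraction
    return vf * RHO_GRAPHENE + (1.0 - vf) * RHO_EPOXY

for vol_pct in (0.0, 0.2, 0.4, 0.6, 0.8, 1.0):
    print(f"{vol_pct:.1f} vol%: {composite_density(vol_pct / 100):.4f} g/cm^3")
```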

