A prototype of on-line extraction and three-dimensional characterisation of wear particle features from video sequence

Wear, 2016, Vol 368-369, pp. 314-325
Author(s): Hongkun Wu, Ngai Ming Kwok, Sanchi Liu, Tonghai Wu, Zhongxiao Peng
2013, Vol 330, pp. 338-345
Author(s): Chun Hui Wang, Wei Yuan, Guang Neng Dong, Jun Hong Mao

The on-line visual ferrograph (OLVF) is an efficient, real-time condition monitoring device. Starting from the principle of flow conservation and using the particle coverage area data collected by the OLVF, this paper derives two models of the wear loss of tribo-pairs during the wear process: a general mathematical (GM) model that includes a distribution impact factor for the wear particles, and a simplified GM (SGM) model that omits this factor. The key factor affecting the accuracy of both models is the three-dimensional information of the wear particles, namely particle area and thickness. The models were experimentally validated on a pin-on-disc testing machine, using a disc and ball made of GCr15 steel, with the OLVF acquiring the coverage area of the wear particles, which reflects the wear loss. The results show that, in some cases, an approximate wear loss can be obtained on-line in a convenient manner. Compared with values obtained from other wear measurement methods, such as mass weighing and surface profilometry, the SGM model can continuously reflect the trend of the wear loss of the tribo-pairs. Deviations in the wear loss predicted by the model are discussed. Moreover, compared with traditional means of computing wear loss, the SGM model can be employed both for off-line analysis and in on-line condition monitoring programs.
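The abstract does not give the SGM model's exact form, but its core idea, estimating wear volume from coverage area plus an assumed particle thickness, can be sketched. Everything here (function name, thickness value, readings) is hypothetical and only illustrates the shape of such a calculation:

```python
# Hedged sketch, not the paper's model: approximate cumulative wear volume
# as total particle coverage area times an assumed mean particle thickness.
# In the full GM model, a particle-distribution impact factor would replace
# the single fixed thickness used here.

def sgm_wear_volume(coverage_areas_mm2, mean_thickness_mm=0.005):
    """Estimate cumulative wear volume (mm^3) from OLVF coverage-area samples.

    coverage_areas_mm2: per-sample wear-particle coverage areas (mm^2).
    mean_thickness_mm: assumed mean particle thickness (mm), hypothetical.
    """
    return sum(coverage_areas_mm2) * mean_thickness_mm

# Example: five hypothetical OLVF samples with growing particle coverage.
readings = [0.10, 0.15, 0.22, 0.30, 0.41]  # mm^2
volume = sgm_wear_volume(readings)
print(f"estimated wear volume: {volume:.4f} mm^3")
```

The simplification is exactly the one the abstract names: dropping the particle-distribution factor trades accuracy for an estimate that can be computed on-line from coverage area alone.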


Author(s): Neil Rowlands, Jeff Price, Michael Kersker, Seichi Suzuki, Steve Young, ...

Three-dimensional (3D) microstructure visualization in the electron microscope requires that the sample be tilted to different positions to collect a series of projections. This tilting should be performed rapidly for on-line stereo viewing and precisely for off-line tomographic reconstruction. Usually, a projection series is collected using mechanical stage tilt alone. The stereo pairs must then be viewed off-line, and the 60 to 120 tomographic projections must be aligned with fiduciary markers or digital correlation methods. The delay in viewing stereo pairs and the alignment problems in tomographic reconstruction could be eliminated or reduced by tilting the beam, if such tilt could be accomplished without image translation. A microscope capable of beam tilt with simultaneous image shift to eliminate tilt-induced translation has been investigated for 3D imaging of thick (1 μm) biological specimens. By tilting the beam above and through the specimen and bringing it back below the specimen, a brightfield image with a projection angle corresponding to the beam tilt angle can be recorded (Fig. 1a).
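Once two projections at known tilt angles are available, feature height can be recovered from parallax by the standard stereo-microscopy relation h = p / (2 sin α), where p is the measured shift of a feature between the two images and α is the tilt half-angle. This is the textbook formula, not necessarily the paper's own processing; the numbers below are hypothetical:

```python
import math

# Hedged sketch: classical stereo height estimation from a tilt pair.
# h = p / (2 * sin(alpha)), with p the parallax between the two images
# and alpha the beam-tilt half-angle.

def height_from_parallax(parallax_nm, tilt_half_angle_deg):
    """Feature height (nm) from stereo parallax and tilt half-angle."""
    return parallax_nm / (2.0 * math.sin(math.radians(tilt_half_angle_deg)))

# Example: a feature shifting 20 nm between images taken at +/-10 degrees.
h = height_from_parallax(20.0, 10.0)
print(f"feature height: {h:.1f} nm")
```

The formula makes clear why precise, translation-free tilt matters: any tilt-induced image shift adds directly to the measured parallax and corrupts the recovered height.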


2019, Vol 63 (5), pp. 50401-1-50401-7
Author(s): Jing Chen, Jie Liao, Huanqiang Zeng, Canhui Cai, Kai-Kuang Ma

Abstract For robust three-dimensional video transmission over error-prone channels, an efficient multiple description coding scheme for multi-view video based on the correlation of spatial polyphase transformed subsequences (CSPT_MDC_MVC) is proposed in this article. The input multi-view video sequence is first separated into four subsequences by a spatial polyphase transform and then grouped into two descriptions. Because macroblocks at corresponding positions in the subsequences are correlated, the subsequences need not be coded in completely the same way. In each description, one subsequence is coded directly by the Joint Multi-view Video Coding (JMVC) encoder, while the other subsequence is classified into four sets. According to this classification, the indirectly coded subsequence selectively reuses the prediction mode and prediction vector of its directly coded counterpart, which reduces the bitrate consumption and the coding complexity of multiple description coding for multi-view video. On the decoder side, gradient-based directional interpolation is employed to improve the side-reconstruction quality. The effectiveness and robustness of the proposed algorithm are verified by experiments on the JMVC coding platform.


2005, Vol 13 (12), pp. 4492
Author(s): Bahram Javidi, Inkyu Moon, Seokwon Yeom, Edward Carapezza

2002, Vol 205 (3), pp. 371-378
Author(s): L. Christoffer Johansson, Björn S. Wetterholm Aldrin

SUMMARY To examine the propulsion mechanism of diving Atlantic puffins (Fratercula arctica), their three-dimensional kinematics was investigated by digital analysis of sequential video images of dorsal and lateral views. During the dives of this wing-propelled bird, the wings are partly folded, with the handwings directed backwards. The wings go through an oscillating motion in which the joint between the radius-ulna and the hand bones leads the motion, with the wing tip following. There is a large rotary motion of the wings during the stroke, with the wings being pronated at the beginning of the downstroke and supinated at the end of the downstroke/beginning of the upstroke. Calculated instantaneous velocities and accelerations of the bodies of the birds show that, during the downstroke, the birds accelerate upwards and forwards. During the upstroke, the birds accelerate downwards and, in some sequences analysed, also forwards, but in most cases the birds decelerate. In all the upstrokes analysed, the forward/backward acceleration shows the same pattern, with a reduced deceleration or even a forward acceleration during ‘mid’ upstroke indicating the production of a forward force, thrust. Our results show that the Atlantic puffin can use an active upstroke during diving, in contradiction to previous data. Furthermore, we suggest that the partly folded wings of diving puffins might act as efficient aft-swept wingtips, reducing the induced drag and increasing the lift-to-drag ratio. A movie is available on-line.
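The instantaneous velocities and accelerations reported here are typically obtained by numerically differentiating digitized body positions across video frames. A common approach is central differencing, sketched below with hypothetical values (the paper's frame rate and smoothing are not stated in the abstract):

```python
# Hedged sketch: central-difference velocity and acceleration estimates
# from a digitized 1D position track, as one might extract from sequential
# video frames. Frame rate and positions are hypothetical.

def central_diff(samples, dt):
    """Central-difference derivative of a uniformly sampled series."""
    return [(samples[i + 1] - samples[i - 1]) / (2 * dt)
            for i in range(1, len(samples) - 1)]

dt = 0.02                            # 50 frames per second (assumed)
x = [0.00, 0.05, 0.11, 0.18, 0.26]   # forward position (m), hypothetical
vx = central_diff(x, dt)             # velocities at interior frames
ax = central_diff(vx, dt)            # accelerations at interior frames
print(vx, ax)
```

Central differences halve the usable frame range at each differentiation step, which is why acceleration estimates from short video sequences cover fewer frames than the position track itself.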

