FORCE AND VISUAL INFORMATION ACQUISITION IN AFM BASED ROBOTIC MWCNT MANIPULATION

2007 ◽  
Vol 04 (02) ◽  
pp. 107-115
Author(s):  
XIAO-JUN TIAN ◽  
YUE-CHAO WANG ◽  
NING XI ◽  
ZAI-LI DONG ◽  
STEVE TUNG

Real-time force and visual information during MWCNT manipulation is required for online control of MWCNT assembly based on the atomic force microscope (AFM). Here, real-time three-dimensional (3D) interactive forces between the probe and the sample are obtained from the PSD signals using the proposed force model, and the MWCNT manipulation process is displayed online on the visual interface, according to the probe's position and applied force, based on the proposed MWCNT motion model. With real-time acquisition and feedback of force and visual information, the operator can control the MWCNT manipulation process online by adjusting the probe's 3D motion and applied forces. MWCNT pushing and assembly experiments verify the effectiveness of the method, which will be used in assembling MWCNT-based nanodevices.
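The paper's exact force model is not given in the abstract, but the general idea of recovering normal and lateral tip-sample forces from a quadrant position-sensitive detector (PSD) can be sketched as follows. The quadrant names, signal conventions, and calibration constants below are illustrative assumptions, not values from the paper.

```python
# Minimal sketch: quadrant PSD voltages -> normal and lateral forces.
# a, b are the upper quadrants and c, d the lower ones (assumed layout);
# k_normal and k_lateral are lumped calibration factors (nN per unit of
# normalized signal) combining cantilever stiffness and detector
# sensitivity -- hypothetical values for illustration only.

def psd_to_forces(a, b, c, d, k_normal=0.1, k_lateral=0.05):
    """Convert four PSD quadrant voltages into (normal, lateral) forces."""
    total = a + b + c + d
    if total == 0:
        raise ValueError("no light on detector")
    # Vertical deflection signal -> normal (z) force on the sample
    vertical = ((a + b) - (c + d)) / total
    # Torsional (left/right) signal -> lateral force during pushing
    lateral = ((a + c) - (b + d)) / total
    return k_normal * vertical, k_lateral * lateral

fn, fl = psd_to_forces(1.2, 1.1, 1.0, 1.0)  # slight upward + leftward deflection
```

In a real AFM the calibration factors would be obtained from the cantilever's normal and torsional spring constants and the measured detector sensitivity.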

2012 ◽  
Vol 198-199 ◽  
pp. 1250-1255
Author(s):  
Xia Yang ◽  
Kai Yin ◽  
Hai Ying Liu ◽  
Hong Da Li

In this paper, based on a characteristic analysis of driver training operations, a new approach to information acquisition for driver training is presented: extracting acceleration information. The acquisition device, built around a MEMS three-axis acceleration transducer (ADXL335) and an MCU, achieves real-time, accurate measurement of a vehicle's three-dimensional acceleration. By comparing the curve of the measured acceleration with reference data from good drivers, existing problems can be identified, so that driver training operations can be assessed in a timely way.
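The ADXL335 outputs an analog voltage per axis, so the MCU-side processing reduces to scaling ADC counts into g and comparing against a reference envelope. A minimal sketch of that pipeline is below; the ADC resolution, reference voltage, and the harsh-manoeuvre threshold are assumptions, while the zero-g bias and sensitivity are typical ADXL335 datasheet figures at a 3.3 V supply.

```python
# Sketch: raw ADC readings from an ADXL335 -> acceleration in g,
# then flagging samples that leave a reference envelope.

V_REF = 3.3          # ADC reference voltage (V), assumed
ADC_MAX = 1023       # 10-bit ADC, assumed
ZERO_G_V = 1.65      # typical ADXL335 zero-g output at a 3.3 V supply
SENS_V_PER_G = 0.30  # typical ADXL335 sensitivity (300 mV/g)

def counts_to_g(counts):
    """Convert a single-axis ADC count to acceleration in g."""
    volts = counts * V_REF / ADC_MAX
    return (volts - ZERO_G_V) / SENS_V_PER_G

def harsh_events(samples, limit_g=0.4):
    """Return indices of (x, y, z) count triples exceeding the limit.

    The z axis carries 1 g of gravity at rest, so it is compared
    against 1.0 g rather than 0. The 0.4 g limit is an assumed
    stand-in for the 'good driver' reference data.
    """
    events = []
    for i, (x, y, z) in enumerate(samples):
        gx, gy, gz = counts_to_g(x), counts_to_g(y), counts_to_g(z)
        if max(abs(gx), abs(gy), abs(gz - 1.0)) > limit_g:
            events.append(i)
    return events
```

In the paper's setting the fixed threshold would be replaced by the recorded acceleration curves of experienced drivers.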


2012 ◽  
Vol 256-259 ◽  
pp. 2431-2434
Author(s):  
Xu Liu ◽  
Bo Cui ◽  
Da Wei Tong

Because a high earth dam involves a large volume, high construction intensity, a tight schedule, numerous construction machines, complex technology, and many unexpected risk factors, construction transportation deteriorates if the arrangement of transportation to the dam is unreasonable, which may result in schedule delays. This paper discusses a system for real-time 3D visualization in a network environment based on the Unity3D engine, through which queries of real-time three-dimensional visual information on transportation to the dam are made available over the network. Users can view the dump trucks in real time during transportation and optimize the arrangements of the construction organization. The results of this research have great practical significance.
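The abstract does not detail the server side of the networked query, but the core of such a system is a latest-state store that each truck updates and the 3D client polls. A minimal sketch, with hypothetical field names and JSON as an assumed transport, is:

```python
# Sketch of a server-side state store a networked 3D visualization
# could poll: trucks report (id, position, load state) and the client
# queries the latest snapshot of all trucks.

import json
import time

class TruckTracker:
    def __init__(self):
        self._latest = {}

    def report(self, truck_id, x, y, z, loaded):
        """Ingest one telemetry update from a dump truck."""
        self._latest[truck_id] = {
            "pos": [x, y, z], "loaded": loaded, "t": time.time()}

    def snapshot(self):
        """Serialize the latest state of every truck for the 3D client."""
        return json.dumps(self._latest)

tracker = TruckTracker()
tracker.report("T-07", 120.5, 88.0, 14.2, True)
state = json.loads(tracker.snapshot())
```

In the Unity3D client the deserialized positions would drive the truck models in the 3D scene each frame.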


Sensors ◽  
2021 ◽  
Vol 21 (17) ◽  
pp. 5909
Author(s):  
Qingyu Jia ◽  
Liang Chang ◽  
Baohua Qiang ◽  
Shihao Zhang ◽  
Wu Xie ◽  
...  

Real-time 3D reconstruction is one of the current popular research directions in computer vision, and it has become a core technology in virtual reality, industrial automation systems, and mobile robot path planning. Currently, there are three main problems in the real-time 3D reconstruction field. First, it is expensive: it requires multiple, varied sensors, so it is less convenient. Second, the reconstruction speed is slow, and the 3D model cannot be established accurately in real time. Third, the reconstruction error is large and cannot meet the accuracy requirements of many scenes. For these reasons, we propose a real-time 3D reconstruction method based on monocular vision. First, a single RGB-D camera is used to collect visual information in real time, and the YOLACT++ network is used to identify and segment this information, extracting the important parts. Second, combining the three stages of depth recovery, depth optimization, and depth fusion, we propose a deep-learning-based three-dimensional position estimation method with joint coding of the visual information; it reduces the depth error introduced by the depth measurement process, and accurate 3D point values of the segmented image can be obtained directly. Finally, we propose a method based on limited outlier adjustment of the cluster-center distance to optimize the three-dimensional point values obtained above; it improves the real-time reconstruction accuracy and yields a three-dimensional model of the object in real time. Experimental results show that this method needs only a single RGB-D camera, so it is not only low-cost and convenient to use but also significantly improves the speed and accuracy of 3D reconstruction.
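The abstract does not specify the exact form of the limited outlier adjustment, but the underlying idea of bounding 3D points by their distance to the cluster center can be sketched as follows. The bound rule (mean + k·std of the distances) and the clamping strategy are assumptions for illustration.

```python
# Sketch of limited outlier adjustment by cluster-centre distance:
# points whose distance to the centroid exceeds a bound are treated as
# depth outliers and pulled back onto the bounding sphere.

import numpy as np

def limit_outliers(points, max_dist=None, k=2.0):
    """Clamp points whose distance to the centroid exceeds a bound.

    points: (N, 3) array of reconstructed 3D points for one segment.
    max_dist: explicit bound; if None, use mean + k*std of distances
              (an assumed rule, not necessarily the paper's).
    """
    pts = np.asarray(points, dtype=float)
    center = pts.mean(axis=0)
    d = np.linalg.norm(pts - center, axis=1)
    if max_dist is None:
        max_dist = d.mean() + k * d.std()
    # Scale each outlier back so its distance equals max_dist
    scale = np.where(d > max_dist, max_dist / np.maximum(d, 1e-12), 1.0)
    return center + (pts - center) * scale[:, None]

pts = np.array([[0, 0, 0], [1, 0, 0], [0, 1, 0], [40, 0, 0]], float)
adjusted = limit_outliers(pts, max_dist=5.0)
```

A production version would likely compute the centroid per segmented object mask rather than over all points at once.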


Author(s):  
Zhihua Wang ◽  
Stefano Rosa ◽  
Bo Yang ◽  
Sen Wang ◽  
Niki Trigoni ◽  
...  

The ability to interact with and understand the environment is a fundamental prerequisite for a wide range of applications, from robotics to augmented reality. In particular, predicting how deformable objects will react to applied forces in real time is a significant challenge. This is further compounded by the fact that shape information about objects encountered in the real world is often impaired by occlusions, noise, and missing regions; e.g., a robot manipulating an object will only be able to observe a partial view of the entire solid. In this work we present a framework, 3D-PhysNet, which can predict how a three-dimensional solid will deform under an applied force using intuitive physics modelling. In particular, we propose a new method to encode the physical properties of the material and the applied force, enabling generalisation over materials. The key is to combine deep variational autoencoders with adversarial training, conditioned on the applied force and the material properties. We further propose a cascaded architecture that takes a single 2.5D depth view of the object and predicts its deformation. Training data is provided by a physics simulator. The network is fast enough to be used in real-time applications from partial views. Experimental results show the viability and the generalisation properties of the proposed architecture.
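The conditioning idea can be illustrated structurally: the decoder receives the shape latent concatenated with the applied force and material parameters, so the same latent can be decoded under different physical conditions. The sketch below is an untrained, numpy-only stand-in; the layer sizes, random weights, and single linear layer are placeholders and do not reproduce the 3D-PhysNet architecture.

```python
# Structural sketch of force/material conditioning by concatenation.
# A trained network would use convolutional encoders/decoders and
# adversarial training; here one random linear layer stands in.

import numpy as np

rng = np.random.default_rng(0)
LATENT, COND, OUT = 8, 4, 16   # latent dim, force+material dims, output dim

W_dec = rng.standard_normal((LATENT + COND, OUT)) * 0.1  # placeholder weights

def decode(z, force, material):
    """Decode a shape latent z under a physical condition vector."""
    cond = np.concatenate([np.atleast_1d(force), np.atleast_1d(material)])
    h = np.concatenate([z, cond])   # conditioning by concatenation
    return np.tanh(h @ W_dec)       # stand-in for the decoder network

z = rng.standard_normal(LATENT)
soft = decode(z, force=[1.0], material=[0.1, 0.1, 0.1])
stiff = decode(z, force=[1.0], material=[5.0, 5.0, 5.0])
```

The point of the construction is visible even untrained: changing only the material vector changes the decoded output for a fixed shape latent.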


2021 ◽  
Vol 15 ◽  
Author(s):  
Dongyue Sun ◽  
Xian Wang ◽  
Yonghong Lin ◽  
Tianlong Yang ◽  
Shixu Wu

Common visual features used in target tracking, such as colour and grayscale, are prone to failure against a confusingly similar-looking background. As the technology for three-dimensional visual information acquisition has gradually matured in recent years, the conditions for the wide use of depth information in target tracking are now in place. This study discusses possible ways of introducing depth information into generative target tracking methods based on kernel density estimation, as well as the performance of the different methods of introduction, thereby providing a reference for the use of depth information in practical target tracking systems. First, the mean-shift technical framework, a typical algorithm for generative target tracking, is analysed, and four methods of introducing depth information are proposed: thresholding of the data source, thresholding of the density distribution of the applied dataset, weighting of the data source, and weighting of the density distribution of the dataset. An experimental study conducted to evaluate the validity, characteristics, and advantages of each method is then described. The experimental results show that all four methods improve the validity of the basic method to a certain extent and meet the requirements of real-time target tracking against a confusingly similar background. Weighting the density distribution of the dataset into which depth information is introduced is the prime choice in engineering practice because it delivers excellent comprehensive performance and the highest accuracy, whereas thresholding of the data source and of the density distribution of the dataset is less time-consuming. A comparison with a state-of-the-art tracker further verifies the practicality of the proposed approach. Finally, the research results also provide a reference for introducing depth information into other target tracking methods.
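One of the four introduction methods, weighting the data source by depth, can be sketched as a modified mean-shift update in which each candidate pixel's spatial kernel weight is multiplied by a depth-similarity weight. The Epanechnikov-style spatial profile and the Gaussian depth weight below are illustrative assumptions, not the paper's exact formulation.

```python
# Sketch: one depth-weighted mean-shift update of a target centre.
# Pixels at depths far from the target's depth are down-weighted, so a
# similar-looking background at a different depth attracts the centre less.

import numpy as np

def mean_shift_step(pixels, depths, center, target_depth,
                    bandwidth=10.0, depth_sigma=0.3):
    """Return the updated target centre after one mean-shift iteration.

    pixels: (N, 2) candidate pixel coordinates.
    depths: (N,) depth value at each pixel (metres).
    center: current (2,) estimate of the target centre.
    """
    pix = np.asarray(pixels, float)
    d2 = np.sum((pix - center) ** 2, axis=1) / bandwidth ** 2
    spatial_w = np.maximum(1.0 - d2, 0.0)   # Epanechnikov profile
    depth_w = np.exp(-0.5 * ((np.asarray(depths) - target_depth)
                             / depth_sigma) ** 2)
    w = spatial_w * depth_w                 # depth-weighted data source
    if w.sum() == 0:
        return np.asarray(center, float)
    return (w[:, None] * pix).sum(axis=0) / w.sum()

near = [(0, 0), (0, 1), (1, 0)]             # target-depth cluster (~1 m)
far = [(5, 5), (5, 4), (4, 5)]              # distractor cluster (~3 m)
depths = np.array([1.0, 1.0, 1.0, 3.0, 3.0, 3.0])
new_center = mean_shift_step(near + far, depths,
                             center=(2.5, 2.5), target_depth=1.0)
```

Starting midway between the two clusters, the update moves toward the cluster at the target's depth, which a purely colour-based weight could not distinguish.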


Author(s):  
Kathleen M. Marr ◽  
Mary K. Lyon

Photosystem II (PSII) differs from all other reaction centers in that it splits water to evolve oxygen and hydrogen ions. This unique ability to evolve oxygen is partly due to three oxygen-evolving polypeptides (OEPs) associated with the PSII complex. Freeze-etching of grana-derived inside-out membranes revealed that the OEPs contribute to the observed tetrameric nature of the PSII particle; when the OEPs are removed, a distinct dimer emerges. Thus, the surface of the PSII complex changes dramatically upon removal of these polypeptides. The atomic force microscope (AFM) is ideal for examining surface topography. The instrument provides a topographical view of individual PSII complexes, giving relatively high-resolution three-dimensional information without image-averaging techniques. In addition, the use of a fluid cell allows a biologically active sample to be maintained under fully hydrated and physiologically buffered conditions. The OEPs associated with PSII may be sequentially removed, thereby changing the surface of the complex one polypeptide at a time.

