A New Readout System for LC Resonant Sensors

2014 ◽  
Vol 609-610 ◽  
pp. 957-963
Author(s):  
Bing Er Ge ◽  
Ting Liang ◽  
Ying Ping Hong ◽  
Chen Li ◽  
Wei Wang ◽  
...  

A new readout system based on an LC resonant sensor is presented. The readout system consists of a reader coil inductively coupled to the LC resonant sensor, a measurement unit, and a PC post-processing unit. The measurement unit generates an output voltage representing the sensor resonance, converts the output voltage to numerical form, and stores the converted digital data. The PC post-processing unit processes the digital data and calculates the sensor's resonance frequency. The readout system enables wireless interrogation, and its accuracy is demonstrated with an experimental system that detects the sensor's resonant frequency automatically and effectively. Experimental results are presented for different sensor resonance frequencies obtained with various sensor capacitance values, and they show good agreement with the theoretical results. The entire design is simple, easy to use, and widely applicable where the coupling distance between the sensor and the reader coil is variable.
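The relationship between sensor capacitance and resonance frequency that the readout system exploits follows the standard LC-tank formula. A minimal sketch, assuming an ideal tank and an illustrative 10 µH coil inductance (not stated in the abstract):

```python
import math

def lc_resonant_frequency(L, C):
    """Resonant frequency f0 = 1 / (2*pi*sqrt(L*C)) of an ideal LC tank."""
    return 1.0 / (2.0 * math.pi * math.sqrt(L * C))

# Sweep hypothetical sensor capacitances at a fixed, assumed coil inductance.
L_coil = 10e-6  # 10 uH (illustrative value)
for C in (50e-12, 100e-12, 200e-12):
    f0 = lc_resonant_frequency(L_coil, C)
    print(f"C = {C * 1e12:.0f} pF -> f0 = {f0 / 1e6:.2f} MHz")
```

Doubling the capacitance lowers the resonance by a factor of √2, which is why a capacitance sweep maps directly onto a frequency sweep in the experiments.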

10.29007/m1cq ◽  
2018 ◽  
Author(s):  
Sanghyun Joung ◽  
Hyunwoo Lee ◽  
Chul-Woo Park ◽  
Chang-Wug Oh ◽  
Il-Hyung Park

We have developed a laser projection system that can project a laser onto the position corresponding to a surgical plan drawn on a fluoroscopic image, without an optical tracking system. In this paper, we introduce a spatial calibration method between a laser module and a fluoroscope for the laser projection and evaluate its accuracy with a mimic experimental system. The experimental system consists of a laser module, a distance measurement unit, and a CCD camera. The laser module can project an arbitrary line on a surface by reflecting a point-source laser with two galvanometers. We designed a calibration phantom by combining a collimator for accurate laser pattern positioning with stainless-steel ball arrays for calculating the extrinsic parameters of a C-arm fluoroscope. We set a projection plane with a ruler at a distance of 400 mm from the CCD camera and defined 54 points on the screen. The laser module projected points at the set positions, and the distance error between set points and projected points and the angular error were calculated. The distance error was 1.5 ± 1.9 mm (average ± standard deviation), with a maximum of 7.5 mm. The angular error was smaller than 2 degrees. The laser projection system and its calibration method show clinically acceptable accuracy, and clinical application is the next step.
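The reported accuracy figures (mean ± standard deviation and maximum) can be reproduced from a list of per-point distance errors with simple summary statistics. A sketch with illustrative error values, not the study's actual data:

```python
import statistics

def projection_error_stats(errors_mm):
    """Summarize laser-projection distance errors as
    (mean, population standard deviation, maximum), all in mm."""
    return (statistics.mean(errors_mm),
            statistics.pstdev(errors_mm),
            max(errors_mm))

# Illustrative errors for a handful of target points (hypothetical data).
mean_e, std_e, max_e = projection_error_stats([1.0, 2.0, 3.0])
print(f"{mean_e:.1f} +/- {std_e:.1f} mm, max {max_e:.1f} mm")
```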


Author(s):  
Atsushi Okamura ◽  
Tadashi Mikoshiba ◽  
Wataru Yosizaki ◽  
Hideki Nagai ◽  
Atsushi Mogi ◽  
...  

In this paper, we propose a remote 3-D displacement measurement method using radio waves for indoor shaking-table facilities and show its effectiveness through radio experiments on a shaking table. The proposed measurement system consists of simple transmitters attached to each reference point and several receiving antennas at different places connected to a digital signal processing unit. The locations of the transmitters are estimated simultaneously by observing the phase differences between the signals at the receiving antennas. The prototype radio experimental system consists of three 2.45 GHz transmitters, four receiving antennas, and a microcomputer. We demonstrated the reconstruction of 0.1 Hz and 2 Hz sinusoidal excitations at three reference points of the test structure on the shaking table. The accuracy of the estimated displacement ranged from several centimeters to several millimeters.
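The core measurement is the conversion of an observed carrier-phase difference between two antennas into a path-length difference. A minimal sketch of that conversion, ignoring the integer-wavelength ambiguity and the multi-antenna solve that the full system performs:

```python
import math

C_LIGHT = 299_792_458.0  # speed of light, m/s

def path_difference(delta_phase_rad, freq_hz):
    """Path-length difference (m) implied by a carrier phase difference
    between two receiving antennas; valid only within one wavelength."""
    wavelength = C_LIGHT / freq_hz
    return (delta_phase_rad / (2.0 * math.pi)) * wavelength

# At 2.45 GHz the wavelength is ~12.2 cm, so a half-cycle phase shift
# corresponds to roughly 6 cm of path difference.
print(path_difference(math.pi, 2.45e9))
```

The short wavelength at 2.45 GHz is what makes millimeter-to-centimeter displacement resolution plausible from phase observations.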


Sensors ◽  
2019 ◽  
Vol 19 (7) ◽  
pp. 1538 ◽  
Author(s):  
Chi Wang ◽  
Jianmei Sun ◽  
Chenye Yang ◽  
Bin Kuang ◽  
Dong Fang ◽  
...  

A novel Fabry–Perot (F–P) interferometer model based on an ultra-small gradient-index (GRIN) fiber probe is investigated. The signal arm of the F–P interferometer is combined with the ultra-small GRIN fiber probe to establish the theoretical model of the novel F–P interferometer. An experimental interferometer system for vibration measurements was built to evaluate the performance of the novel F–P interferometer system. The experimental results show that, under the given conditions, the output voltage of the novel interferometer is 3.9 V at a working distance of 0.506 mm, significantly higher than the 0.48 V output of a single-mode fiber (SMF) F–P interferometer at the same position. Over the 0.1–2 mm cavity-length range, the novel interferometer has a higher output voltage than an SMF F–P interferometer. Therefore, the novel F–P interferometer is a promising candidate for precise measurement of micro-vibrations and displacements in narrow spaces.
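The cavity-length dependence of an F–P interferometer's output follows the two-beam interference law. A sketch of the fringe model, with illustrative beam intensities and a 1.55 µm wavelength (both assumptions, not values from the abstract):

```python
import math

def fp_intensity(cavity_len_m, wavelength_m, I1=1.0, I2=0.2):
    """Two-beam F-P fringe intensity for cavity length L:
    I = I1 + I2 + 2*sqrt(I1*I2)*cos(4*pi*L/lambda).
    I1, I2 are the (assumed) reflected beam intensities."""
    phase = 4.0 * math.pi * cavity_len_m / wavelength_m
    return I1 + I2 + 2.0 * math.sqrt(I1 * I2) * math.cos(phase)

# Intensity peaks whenever the round-trip phase is a multiple of 2*pi,
# i.e. the cavity length is a half-integer multiple of the wavelength.
print(fp_intensity(0.775e-6, 1.55e-6))
```

The GRIN probe's advantage in the paper is essentially a larger effective I2 (better light collection) over a long working distance, which raises the fringe contrast and hence the output voltage.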


2019 ◽  
Vol 20 (20) ◽  
pp. 5158 ◽  
Author(s):  
Meng Liang ◽  
Yuhang Fu ◽  
Ruibo Gao ◽  
Qiaoqiao Wang ◽  
Junlan Nie

Molecular visualization is often challenged by rendering large molecular structures in real time. The key to level-of-detail (LOD), a classical technique, lies in designing a series of hierarchical abstractions of a protein. In this paper, we improve the smoothness of the transitions between these abstractions by constructing a complete binary tree of the protein. To reduce the degree of expansion of the geometric model at the higher levels of abstraction, we introduce minimum ellipsoidal enveloping and some post-processing techniques. At the same time, a simple ellipsoid-drawing method based on the graphics processing unit (GPU) is used, which guarantees that the drawing speed is not lower than that of the existing sphere-drawing method. Finally, we evaluate the rendering performance and visual quality on a series of molecules at different scales. The post-processing techniques applied, diffuse shading and contours, further conceal the expansion problem and highlight surface details.
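The hierarchical abstraction can be pictured as a bottom-up tree in which each parent node bounds its two children. A sketch of that construction using bounding spheres (the paper uses minimum enveloping ellipsoids, which tighten the bound; spheres keep the example short):

```python
import math

def enclose(s1, s2):
    """Smallest sphere enclosing two spheres given as (cx, cy, cz, r)."""
    d = math.dist(s1[:3], s2[:3])
    if d + s2[3] <= s1[3]:   # s1 already contains s2
        return s1
    if d + s1[3] <= s2[3]:   # s2 already contains s1
        return s2
    R = (d + s1[3] + s2[3]) / 2.0
    t = (R - s1[3]) / d      # interpolation factor along c1 -> c2
    c = tuple(a + t * (b - a) for a, b in zip(s1[:3], s2[:3]))
    return (*c, R)

def build_lod_levels(spheres):
    """Pair neighbours level by level until one root remains, yielding one
    abstraction level per iteration (a complete binary tree when the leaf
    count is a power of two)."""
    levels = [spheres]
    while len(spheres) > 1:
        spheres = [enclose(spheres[i], spheres[i + 1])
                   for i in range(0, len(spheres) - 1, 2)] + \
                  (spheres[-1:] if len(spheres) % 2 else [])
        levels.append(spheres)
    return levels
```

Rendering then picks one level per frame based on distance; the complete-tree structure is what lets consecutive levels differ only by one pairwise merge, smoothing the transition.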


Author(s):  
Xiuhua Liu ◽  
Zhihao Zhou ◽  
Qining Wang

Sit-to-stand and stand-to-sit transitions (STS), among the most demanding functional tasks in daily life, are affected by aging, stroke, and other neurological injuries. Lower-limb exoskeletons can provide extra assistance to affected limbs for recovering functional activities [1]. Several studies have presented locomotion-mode recognition of sitting, standing, and STS, or only STS, or static modes [2–6]. They are based on fusing information from mechanical sensors worn on the human body, e.g., inertial measurement units (IMUs) [2–4], plantar pressure force [5], barometric pressure [2], and EMG [6]. However, most of them placed sensors on the human body and did not show experiments integrated with exoskeletons. Because of the physical interaction between the exoskeleton and the human body, the recognition method might differ when wearing a real exoskeleton. To deal with these problems, in this study we propose an STS recognition method based on multi-sensor fusion of the interior sensors of a lightweight bionic knee exoskeleton (BioKEX). A simple classifier based on a Support Vector Machine (SVM) is used, considering the computational cost of the processing unit in the exoskeleton.
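Classifiers of this kind are typically fed windowed summary statistics of the raw sensor streams rather than raw samples. A sketch of that feature-extraction step; the window length and the mean/deviation feature pair are illustrative choices, not the paper's actual feature set:

```python
import statistics

def window_features(signal, win):
    """Split a 1-D sensor stream into non-overlapping windows of length
    `win` and emit (mean, population stdev) per window -- the kind of
    low-cost features an embedded SVM classifier can consume."""
    feats = []
    for i in range(0, len(signal) - win + 1, win):
        w = signal[i:i + win]
        feats.append((statistics.mean(w), statistics.pstdev(w)))
    return feats

# A flat segment followed by a shifted segment yields two distinct
# feature vectors, which is what the classifier separates.
print(window_features([1, 1, 1, 2, 2, 2], 3))
```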


2019 ◽  
Vol 11 (1) ◽  
pp. 1-8
Author(s):  
Putut Son Maria ◽  
Elva Susianti

Digital recording of the geometry of three-dimensional objects requires a 3D scanner, most of which use an imaging sensor. However, the color information such sensors capture is rarely used in the scanned result, and an imaging sensor requires a high-specification processing unit, as capable as a personal computer, for data acquisition and processing. This research aims to build a 3D surface scanner using a time-of-flight laser-ranging sensor and to extend its simple function into a more valuable device. Using a point-to-point displacement method, the sensor measures the distance between the outermost point of the object and the sensor surface perpendicularly; after each measurement, the object is rotated by the rotary table. The prototype was built using the VL53L0X sensor and an ATmega8535 microcontroller as the motor controller for the rotary table and the vertical axis. Scanned data are sent from the microcontroller to the computer for real-time visualization. The results show that the VL53L0X sensor is suitable for scanning convex objects but cannot handle objects with multiple cavities.
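Each (distance, table angle, vertical height) triple from such a scanner maps to one Cartesian point of the surface. A sketch of that conversion; the sensor-to-rotation-axis offset is a hypothetical calibration constant, not a value from the paper:

```python
import math

def scan_point(distance_mm, sensor_offset_mm, angle_deg, height_mm):
    """Convert one time-of-flight reading on a rotary-table scanner to a
    Cartesian point. The surface radius is the sensor-to-axis offset minus
    the measured distance (simple geometry; offset is an assumed constant)."""
    r = sensor_offset_mm - distance_mm
    a = math.radians(angle_deg)
    return (r * math.cos(a), r * math.sin(a), height_mm)

# One full turn of the table at a fixed height traces one contour ring;
# stepping the vertical axis stacks the rings into a surface.
print(scan_point(100.0, 150.0, 0.0, 10.0))
```

This geometry also explains the cavity limitation: the ray only ever sees the outermost point along each radial direction, so concave features behind it are invisible.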


2022 ◽  
Vol 22 (1) ◽  
pp. 1-20
Author(s):  
Di Zhang ◽  
Feng Xu ◽  
Chi-Man Pun ◽  
Yang Yang ◽  
Rushi Lan ◽  
...  

Artificial intelligence, including deep learning and 3D reconstruction methods, is changing people's daily lives. Unmanned aerial vehicles, which can move freely in the air and avoid harsh ground conditions, are now commonly adopted as tools for 3D reconstruction. A traditional drone-based 3D reconstruction mission usually consists of two steps, image collection and offline post-processing, which raise two problems: uncertainty about whether all parts of the target object are covered, and tedious post-processing time. Inspired by modern deep learning methods, we build a telexistence drone system with an onboard deep-learning computation module and a wireless data-transmission module that performs incremental real-time dense reconstruction of urban scenes by itself. Two technical contributions are proposed to solve the preceding issues. First, we combine the popular depth-fusion surface reconstruction framework with a visual-inertial odometry estimator that integrates the inertial measurement unit, allowing robust camera tracking as well as high-accuracy online 3D scanning. Second, the capability of real-time 3D reconstruction enables a new rendering technique that visualizes the reconstructed geometry of the target as navigation guidance in the HMD. This turns the traditional path-planning-based modeling process into an interactive one, leading to a higher level of scan completeness. Experiments in the simulation system and on our real prototype demonstrate the improved quality of 3D models produced by our artificial-intelligence-leveraged drone system.
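Depth-fusion frameworks of the kind referenced here typically maintain a truncated signed distance field (TSDF) and fold each new depth sample into it as a running weighted average. A minimal per-voxel sketch of that update rule; the truncation band and weight cap are illustrative defaults, not the paper's parameters:

```python
def fuse_tsdf(tsdf, weight, sdf_sample, trunc=0.05, max_weight=64):
    """One weighted-average TSDF update for a single voxel: clamp the new
    signed-distance sample to [-trunc, +trunc], then blend it into the
    stored value. Returns (new_tsdf, new_weight)."""
    s = max(-trunc, min(trunc, sdf_sample))
    new_tsdf = (tsdf * weight + s) / (weight + 1)
    new_weight = min(weight + 1, max_weight)
    return new_tsdf, new_weight

# First observation sets the value; later ones average noise away.
v, w = fuse_tsdf(0.0, 0, 0.02)
v, w = fuse_tsdf(v, w, 0.04)
print(v, w)
```

The incremental nature of this update is what makes online, on-board reconstruction feasible: each frame touches only the voxels its depth map observes.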


2020 ◽  
pp. 16-23
Author(s):  
Nikolay N. Vasilyuk

When constructing inertial/GNSS navigation systems, it is necessary to determine the coordinates of the GNSS antenna relative to the inertial measurement unit. It is proposed to solve this problem by integrating the inertial unit and the GNSS antenna element into a common structure called an integrated antenna. This approach makes it possible to determine the required coordinates under factory conditions, during manufacture of the integrated antenna. The operating principles of the integrated antenna's design modules and ways of using the antenna in inertial/GNSS navigation systems are described. Design features of the half-duplex digital data exchange between the antenna and a data processor are indicated, and approaches to using this exchange to solve service tasks of the navigation system are proposed. It is noted that the integrated antenna has its own measurement basis. Methods of accounting for the attitude of this basis in practical applications of integrated antennas in single- and multi-antenna inertial/GNSS navigation systems are described.
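The reason the antenna-to-IMU coordinates matter is the standard lever-arm correction: the GNSS measurement refers to the antenna phase center, while the inertial solution refers to the IMU. A sketch of the correction, assuming a row-major 3×3 attitude matrix; all values are illustrative:

```python
def antenna_position(imu_pos, R_body_to_nav, lever_arm):
    """GNSS antenna position in the navigation frame: IMU position plus
    the attitude-rotated antenna-to-IMU lever arm (metres). R_body_to_nav
    is a row-major 3x3 rotation; lever_arm is in the body frame."""
    return tuple(
        p + sum(R_body_to_nav[i][j] * lever_arm[j] for j in range(3))
        for i, p in enumerate(imu_pos)
    )

# With identity attitude the lever arm passes straight through.
I3 = ((1.0, 0.0, 0.0), (0.0, 1.0, 0.0), (0.0, 0.0, 1.0))
print(antenna_position((0.0, 0.0, 0.0), I3, (0.1, 0.0, 0.0)))
```

Fixing the lever arm at manufacture, as the integrated antenna does, removes the need to survey it in the field for every installation.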


2019 ◽  
Vol 38 (10-11) ◽  
pp. 1286-1306 ◽  
Author(s):  
Adrian Battiston ◽  
Inna Sharf ◽  
Meyer Nahon

An extensive evaluation of attitude estimation algorithms in simulation and experiments is performed to determine their suitability for a collision recovery pipeline of a quadcopter unmanned aerial vehicle. A multiplicative extended Kalman filter (MEKF), unscented Kalman filter (UKF), complementary filter, [Formula: see text] filter, and novel adaptive varieties of the selected filters are compared. The experimental quadcopter uses a PixHawk flight controller, and the algorithms are implemented using data from only the PixHawk inertial measurement unit (IMU). Performance of the aforementioned filters is first evaluated in a simulation environment using modified sensor models to capture the effects of collision on inertial measurements. Simulation results help define the efficacy and use cases of the conventional and novel algorithms in a quadcopter collision scenario. An analogous evaluation is then conducted by post-processing logged sensor data from collision flight tests, to gain new insights into algorithms’ performance in the transition from simulated to real data. The post-processing evaluation compares each algorithm’s attitude estimate, including the stock attitude estimator of the PixHawk controller, to data collected by an offboard infrared motion capture system. Based on this evaluation, two promising algorithms, the MEKF and an adaptive [Formula: see text] filter, are selected for implementation on the physical quadcopter in the control loop of the collision recovery pipeline. Experimental results show an improvement in the metric used to evaluate experimental performance, the time taken to recover from the collision, when compared with the stock attitude estimator on the PixHawk (PX4) software.
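Among the filters compared, the complementary filter is the simplest to state: it high-passes the integrated gyro rate and low-passes the accelerometer tilt angle. A one-axis sketch with an illustrative blending gain (the evaluated implementations are more elaborate):

```python
def complementary_filter(angle_prev, gyro_rate, accel_angle, dt, alpha=0.98):
    """One step of a 1-D complementary attitude filter: trust the gyro
    over short horizons (alpha) and the accelerometer for drift correction
    (1 - alpha). Angles in radians, gyro_rate in rad/s."""
    return alpha * (angle_prev + gyro_rate * dt) + (1.0 - alpha) * accel_angle

# With a stationary gyro the estimate converges to the accelerometer angle.
angle = 0.0
for _ in range(400):
    angle = complementary_filter(angle, 0.0, 1.0, 0.01)
print(angle)
```

During a collision the accelerometer reading stops reflecting gravity, which is exactly the regime where the adaptive variants studied in the paper adjust their trust in each sensor.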


2020 ◽  
Vol 12 (4) ◽  
pp. 657 ◽  
Author(s):  
Hao Zhang ◽  
Bing Zhang ◽  
Zhiqi Wei ◽  
Chenze Wang ◽  
Qiao Huang

The rapid development of unmanned aerial vehicles (UAVs), miniature hyperspectral imagers, and related instruments has brought UAV-borne hyperspectral imaging systems from concept to reality. Weighing the merits and demerits of existing UAV hyperspectral systems, we present a lightweight, integrated hyperspectral imaging solution that includes a data acquisition and processing unit. A pushbroom hyperspectral imager was selected owing to its superior radiometric performance and combined with a stabilizing gimbal and a global-positioning system/inertial measurement unit (GPS/IMU) to form the image acquisition system. The post-processing software includes radiance transformation, surface-reflectance computation, geometric referencing, and mosaicking functions. Geometric distortion of the imagery is further reduced by a post-geometric-referencing software unit, which uses an improved method suited to UAV pushbroom images and shows more robust performance than current methods. Two typical experiments, one in which the stabilizing gimbal failed to function, demonstrated the stable performance of the acquisition and data processing systems. The results show that the relative georectification accuracy between adjacent flight lines was on the order of 0.7–1.5 m and 2.7–13.1 m for spatial resolutions of 5.5 cm and 32.4 cm, respectively.
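The surface-reflectance step in such a processing chain commonly uses the standard conversion from at-sensor radiance to reflectance. A sketch of that formula; the input values are illustrative, and real pipelines add atmospheric terms beyond this idealized form:

```python
import math

def reflectance_from_radiance(radiance, irradiance, sun_zenith_deg, d_au=1.0):
    """Idealized reflectance from at-sensor radiance:
    rho = pi * L * d^2 / (E * cos(theta_s)),
    where L is radiance, E solar irradiance in the band, theta_s the solar
    zenith angle, and d the Earth-Sun distance in astronomical units."""
    return (math.pi * radiance * d_au ** 2) / \
           (irradiance * math.cos(math.radians(sun_zenith_deg)))

# A surface returning the full incident irradiance has reflectance 1.
print(reflectance_from_radiance(100.0, math.pi * 100.0, 0.0))
```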

