Calib-Net: Calibrating the Low-Cost IMU via Deep Convolutional Neural Network

2022 ◽  
Vol 8 ◽  
Author(s):  
Ruihao Li ◽  
Chunlian Fu ◽  
Wei Yi ◽  
Xiaodong Yi

The low-cost Inertial Measurement Unit (IMU) can provide orientation information and is widely used in daily life. However, a poorly calibrated IMU provides inaccurate angular velocity, causing the integrated orientation to drift rapidly within a short time. In this paper, we present Calib-Net, which achieves accurate calibration of a low-cost IMU via a simple deep convolutional neural network. Following a carefully designed mathematical calibration model, Calib-Net dynamically outputs compensation components for the gyroscope measurements. Dilated convolution is adopted in Calib-Net for spatio-temporal feature extraction from the IMU measurements. We evaluate the proposed system on public datasets quantitatively and qualitatively. The experimental results demonstrate that Calib-Net achieves better calibration performance than other methods; moreover, the orientation estimated with Calib-Net is even comparable with results from visual-inertial odometry (VIO) systems.
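As an illustration of the kind of architecture the abstract describes, the PyTorch sketch below shows a 1D dilated convolutional network that maps a window of raw IMU readings (3-axis gyroscope plus 3-axis accelerometer) to a per-sample gyroscope compensation term. The layer sizes, window length, and variable names are illustrative assumptions, not the authors' exact Calib-Net configuration.

```python
# Minimal sketch (assumed architecture, not the authors' exact Calib-Net):
# a 1D dilated CNN that predicts a gyroscope compensation term from a
# window of raw IMU samples (3-axis gyro + 3-axis accel).
import torch
import torch.nn as nn

class CalibNetSketch(nn.Module):
    def __init__(self, in_channels=6, hidden=32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(in_channels, hidden, kernel_size=3, dilation=1, padding=1),
            nn.ReLU(),
            nn.Conv1d(hidden, hidden, kernel_size=3, dilation=2, padding=2),
            nn.ReLU(),
            nn.Conv1d(hidden, hidden, kernel_size=3, dilation=4, padding=4),
            nn.ReLU(),
            nn.Conv1d(hidden, 3, kernel_size=1),  # 3-axis gyro compensation
        )

    def forward(self, imu_window):
        # imu_window: (batch, 6, T) raw gyro + accel samples
        return self.net(imu_window)  # (batch, 3, T) compensation terms

# Usage: corrected gyro = raw gyro + predicted compensation
model = CalibNetSketch()
raw = torch.randn(1, 6, 200)             # 200 samples of 6-channel IMU data
corrected_gyro = raw[:, :3, :] + model(raw)
```

Stacking convolutions with increasing dilation is one standard way to widen the temporal receptive field without pooling, which matches the abstract's emphasis on spatio-temporal feature extraction from IMU sequences.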

2021 ◽  
Vol 2021 ◽  
pp. 1-12
Author(s):  
Rong Zou ◽  
Yu Zhang ◽  
Junlan Gu ◽  
Jin Chen

Detecting the distance between the surfaces of transparent materials with large area and thickness has long been a difficult problem in industry. In this paper, a method based on low-cost TOF continuous-wave modulation and deep convolutional neural network technology is proposed. The distance detection between transparent material surfaces is converted into the problem of finding the intersections of the optical path with the front and rear surfaces of the transparent material. On this basis, Gray-code encoding and decoding operations are combined to achieve distance detection between surfaces. The holes and loss of detail in depth maps produced by low-resolution TOF depth sensors are also effectively addressed. The entire system is simple and can perform thickness detection over the full surface area. Moreover, it can measure large transparent materials with a thickness of over 30 mm, far exceeding existing optical thickness detection systems for transparent materials.
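The abstract mentions Gray-code encoding and decoding as one building block of the measurement pipeline. As a small, hedged illustration of that block only (the TOF optics and CNN stages are not reproduced here), the Python sketch below shows the standard conversions between binary and Gray code used when decoding coded patterns.

```python
# Minimal sketch of standard Gray-code encoding/decoding (illustrative only;
# the paper's full TOF + CNN pipeline is not reproduced here).

def binary_to_gray(n: int) -> int:
    """Encode an integer as its Gray-code equivalent."""
    return n ^ (n >> 1)

def gray_to_binary(g: int) -> int:
    """Decode a Gray-code value back to a plain binary integer."""
    n = 0
    while g:
        n ^= g
        g >>= 1
    return n

# Usage: round-trip check over a small range of code values
for value in range(16):
    assert gray_to_binary(binary_to_gray(value)) == value
```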


2021 ◽  
Vol 2021 ◽  
pp. 1-14
Author(s):  
Zhiwen Huang ◽  
Jianmin Zhu ◽  
Jingtao Lei ◽  
Xiaoru Li ◽  
Fengqing Tian

Tool wear monitoring is essential in precision manufacturing to improve surface quality, increase machining efficiency, and reduce manufacturing cost. Although tool wear can be reflected by measurable signals in automatic machining operations, as the volume of collected data grows, manually extracting and optimizing features lowers monitoring efficiency and increases prediction error. To address these problems, this paper proposes a tool wear monitoring method for milling operations that uses vibration signals, based on the short-time Fourier transform (STFT) and a deep convolutional neural network (DCNN). First, an image representation of the acquired vibration signals is obtained via the STFT; then a DCNN model is designed to establish the relationship between the resulting time-frequency maps and tool wear, performing adaptive feature extraction and automatic tool wear prediction. The method is demonstrated on three tool wear experimental datasets collected from a three-flute ball-nose tungsten carbide cutter on a high-speed CNC machine under dry milling. Finally, the experimental results show that the proposed method is more accurate and more reliable than the compared methods.
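To make the STFT-to-DCNN pipeline concrete, the sketch below turns a vibration window into a magnitude spectrogram and passes it through a small CNN that regresses a wear value. The sampling rate, window parameters, and layer sizes are assumptions for illustration, not the authors' exact model.

```python
# Minimal sketch of the STFT-to-DCNN idea described above (assumed shapes and
# layer sizes; not the authors' exact model). A vibration window is turned
# into a time-frequency map, which a small CNN maps to a wear value.
import numpy as np
import torch
import torch.nn as nn
from scipy.signal import stft

fs = 25600                       # assumed sampling rate (Hz)
vibration = np.random.randn(fs)  # one second of vibration signal (placeholder)

# Time-frequency image via STFT (magnitude spectrogram)
_, _, Z = stft(vibration, fs=fs, nperseg=256, noverlap=128)
tf_map = torch.tensor(np.abs(Z), dtype=torch.float32)[None, None]  # (1, 1, F, T)

class WearCNN(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(32, 1)   # regress a scalar tool-wear value

    def forward(self, x):
        return self.head(self.features(x).flatten(1))

wear_estimate = WearCNN()(tf_map)   # (1, 1) predicted wear for this window
```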


Author(s):  
Md. Al-Amin ◽  
Ruwen Qin ◽  
Wenjin Tao ◽  
David Doell ◽  
Ravon Lingard ◽  
...  

Assembly carries paramount importance in manufacturing. Being able to support workers in real time to maximize their positive contributions to assembly is of tremendous interest to manufacturers. Human action recognition has been a way to automatically analyze and understand worker actions, supporting real-time assistance for workers and facilitating worker–machine collaboration. Assembly actions are distinct from the activities that have been well studied in the action recognition literature: actions taken by assembly workers are intricate, variable, and may involve very fine motions. Therefore, recognizing assembly actions remains a challenging task. This paper proposes using only two wearable devices, which respectively capture the inertial measurement unit (IMU) data of each of a worker's hands. Two convolutional neural network models with an identical architecture are then trained independently on the two sources of IMU data to recognize the right-hand and left-hand actions of an assembly worker, respectively. Because the two hands often collaborate in assembly operations, the classification results of the two models are fused to yield the final action recognition result. Transfer learning is implemented to adapt the action recognition models to subjects whose data were not included in the training dataset. One operation in assembling a Bukito three-dimensional printer, composed of seven actions, is used to demonstrate the implementation and assessment of the proposed method. The results demonstrate that the proposed approach effectively improves prediction accuracy at both the action level and the subject level. This work lays a foundation for building advanced action recognition systems such as multimodal sensor-based action recognition.
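The two-model, per-hand design can be illustrated with a short late-fusion sketch: identical 1D CNNs process the left-hand and right-hand IMU windows, and their class probabilities are combined before the final decision. The network shape and the averaging fusion rule are illustrative assumptions; the abstract does not specify the exact architecture or fusion scheme.

```python
# Minimal sketch of the two-model late-fusion idea (the averaging rule and
# layer sizes are illustrative; the paper's exact fusion scheme is not
# specified in the abstract).
import torch
import torch.nn as nn

NUM_ACTIONS = 7  # the assembly operation in the paper comprises seven actions

class HandIMUNet(nn.Module):
    """Identical architecture used for the left-hand and right-hand IMU streams."""
    def __init__(self, channels=6, num_classes=NUM_ACTIONS):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv1d(channels, 32, 5, padding=2), nn.ReLU(),
            nn.Conv1d(32, 64, 5, padding=2), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),
        )
        self.classifier = nn.Linear(64, num_classes)

    def forward(self, x):                        # x: (batch, 6, T)
        return self.classifier(self.backbone(x).flatten(1))

left_net, right_net = HandIMUNet(), HandIMUNet()
left_imu = torch.randn(1, 6, 128)                # left-hand accel + gyro window
right_imu = torch.randn(1, 6, 128)               # right-hand accel + gyro window

# Late fusion: average the per-hand class probabilities, then pick the action
probs = (left_net(left_imu).softmax(-1) + right_net(right_imu).softmax(-1)) / 2
predicted_action = probs.argmax(-1)
```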

