Invited talk: A short-time implementation for a high-performance binarized deep convolutional neural network on an FPGA

Author(s):
Hiroki Nakahara


2021
Vol 2021
pp. 1-14
Author(s):
Zhiwen Huang
Jianmin Zhu
Jingtao Lei
Xiaoru Li
Fengqing Tian

Tool wear monitoring is essential in precision manufacturing to improve surface quality, increase machining efficiency, and reduce manufacturing cost. Although tool wear is reflected in measurable signals during automatic machining operations, features have traditionally been extracted and optimized manually, which lowers monitoring efficiency and increases prediction error as the volume of collected data grows. To address these problems, this paper proposes a tool wear monitoring method for milling operations that applies the short-time Fourier transform (STFT) and a deep convolutional neural network (DCNN) to vibration signals. First, an image representation of the acquired vibration signals is obtained with the STFT; a DCNN model is then designed to relate the resulting time-frequency maps to tool wear, performing adaptive feature extraction and automatic tool wear prediction. The method is demonstrated on three tool wear datasets collected from a three-flute ball-nose tungsten carbide cutter on a high-speed CNC machine under dry milling. The experimental results show that the proposed method is more accurate and more reliable than the compared methods.
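As a rough illustration of the pipeline described above, a minimal Python sketch follows: a vibration record is converted into an STFT time-frequency image and passed to a small convolutional regressor. This is not the authors' architecture; the sampling rate, window length, and layer sizes are assumed values for demonstration only.

# Illustrative sketch (not the paper's exact pipeline): vibration signal ->
# log-magnitude STFT image -> small DCNN regressor for tool wear.
import numpy as np
import torch
import torch.nn as nn
from scipy.signal import stft

def vibration_to_tf_image(signal, fs=25600, nperseg=256):
    """Return a log-magnitude STFT map shaped (1, freq_bins, time_bins)."""
    _, _, Z = stft(signal, fs=fs, nperseg=nperseg)
    img = np.log1p(np.abs(Z)).astype(np.float32)
    img = (img - img.mean()) / (img.std() + 1e-8)   # normalize per image
    return torch.from_numpy(img).unsqueeze(0)

class WearDCNN(nn.Module):
    """Minimal convolutional regressor: time-frequency map -> wear value."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.AdaptiveAvgPool2d(1))
        self.head = nn.Linear(32, 1)

    def forward(self, x):
        return self.head(self.features(x).flatten(1))

signal = np.random.randn(25600)                  # stand-in vibration record
x = vibration_to_tf_image(signal).unsqueeze(0)   # add batch dimension
wear = WearDCNN()(x)                             # predicted wear, shape (1, 1)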


Aerospace
2020
Vol 7 (9)
pp. 126
Author(s):
Thaweerath Phisannupawong
Patcharin Kamsing
Peerapong Torteeka
Sittiporn Channumsin
Utane Sawangwit
...  

The capture of a target spacecraft by a chaser is an on-orbit docking operation that requires an accurate, reliable, and robust object recognition algorithm. Vision-based guidance of spacecraft relative motion during close-proximity maneuvers has typically been applied together with dynamic modeling in spacecraft on-orbit service systems. This research constructs a vision-based pose estimation model that performs image processing via a deep convolutional neural network. The model was built by repurposing a modified pretrained GoogLeNet with an available Unreal Engine 4 rendered dataset of the Soyuz spacecraft. In this implementation, the convolutional neural network learns from the data samples to correlate the images with the spacecraft's six degrees-of-freedom parameters. The experiment compared an exponential-based loss function with a weighted Euclidean-based loss function. Using the weighted Euclidean-based loss function, the pose estimation model achieved moderately high performance, with a position accuracy of 92.53 percent and a position error of 1.2 m. The attitude prediction accuracy reaches 87.93 percent, and the errors in the three Euler angles do not exceed 7.6 degrees. This research can contribute to spacecraft detection and tracking problems. Although the resulting vision-based model is specific to the environment of the synthetic dataset, it could be trained further to address actual docking operations in the future.
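The abstract does not spell out the weighted Euclidean loss, but a common formulation for pose regression (PoseNet-style) combines a position term with a weighted unit-quaternion term. The sketch below assumes that form; the weight beta and the quaternion parameterization are illustrative assumptions, not confirmed details of the paper.

# A plausible form of a weighted Euclidean pose loss: position error plus a
# weighted orientation error. beta and the quaternion encoding are assumptions.
import torch

def weighted_euclidean_pose_loss(pos_pred, pos_true, q_pred, q_true, beta=500.0):
    """L = ||t - t*||_2 + beta * || q/||q|| - q* ||_2 (PoseNet-style weighting)."""
    pos_term = torch.norm(pos_pred - pos_true, dim=-1)
    q_pred = q_pred / torch.norm(q_pred, dim=-1, keepdim=True)  # unit quaternion
    ori_term = torch.norm(q_pred - q_true, dim=-1)
    return (pos_term + beta * ori_term).mean()

pos_pred, pos_true = torch.randn(8, 3), torch.randn(8, 3)   # toy batch of 8
q_pred, q_true = torch.randn(8, 4), torch.randn(8, 4)
loss = weighted_euclidean_pose_loss(pos_pred, pos_true, q_pred, q_true)

The weight beta balances the very different numeric scales of meters and quaternion components, which is why such losses are usually reported as "weighted" rather than plain Euclidean.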


2022
Vol 8
Author(s):
Ruihao Li
Chunlian Fu
Wei Yi
Xiaodong Yi

Low-cost Inertial Measurement Units (IMUs) provide orientation information and are widely used in daily life. However, poorly calibrated IMUs deliver inaccurate angular velocities, so the integrated orientation drifts rapidly within a short time. In this paper we present Calib-Net, which achieves accurate calibration of low-cost IMUs via a simple deep convolutional neural network. Following a carefully designed mathematical calibration model, Calib-Net dynamically outputs compensation components for the gyroscope measurements. Dilated convolution is adopted in Calib-Net for spatio-temporal feature extraction from the IMU measurements. We evaluate the proposed system quantitatively and qualitatively on public datasets. The experimental results demonstrate that Calib-Net achieves better calibration performance than other methods; moreover, the orientation estimated with Calib-Net is comparable with the results of visual inertial odometry (VIO) systems.
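A minimal sketch of the dilated-convolution idea is given below: a stack of 1-D convolutions with doubling dilation rates maps a window of raw 6-axis IMU readings to a per-sample gyroscope compensation term. The channel counts, depth, and dilation schedule are assumptions for illustration, not the published Calib-Net configuration.

# Sketch of dilated 1-D convolutions for IMU calibration. Input is a window
# of raw readings (6 channels: 3-axis gyro + 3-axis accel); output is a
# 3-axis gyro correction per time step. Sizes here are assumed values.
import torch
import torch.nn as nn

class DilatedIMUCalib(nn.Module):
    def __init__(self, in_ch=6, hidden=32):
        super().__init__()
        layers, dilation = [], 1
        for _ in range(4):                         # receptive field doubles per layer
            layers += [nn.Conv1d(in_ch, hidden, 3, padding=dilation,
                                 dilation=dilation), nn.ReLU()]
            in_ch, dilation = hidden, dilation * 2
        self.backbone = nn.Sequential(*layers)
        self.head = nn.Conv1d(hidden, 3, 1)        # 3-axis gyro compensation

    def forward(self, imu_window):                 # (batch, 6, T) -> (batch, 3, T)
        return self.head(self.backbone(imu_window))

raw = torch.randn(1, 6, 200)                            # 200 IMU samples
gyro_corrected = raw[:, :3] + DilatedIMUCalib()(raw)    # compensated angular rate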


Author(s):
Souad Khellat-Kihel
Zhenan Sun
Massimo Tistarelli

Recent research on face analysis has demonstrated the richness of the information embedded in feature vectors extracted from a deep convolutional neural network. Even though deep learning achieves very high performance on several challenging visual tasks, such as determining identity, age, gender, and race, it still lacks a well-grounded theory that would allow the processes taking place inside the network layers to be properly understood. Most of the underlying processes are therefore unknown and not easy to control. The human visual system, on the other hand, follows a well-understood process in analyzing a scene or an object, such as a face: the eye gaze is repeatedly directed, through purposively planned saccadic movements, toward salient regions to capture several details. In this paper we propose to capitalize on the knowledge of saccadic human visual processes to design a facial-attribute prediction system built on a biologically inspired network architecture, the HMAX. The architecture is tailored to predict attributes carrying different textural information and different semantic meaning, such as attributes related and unrelated to the subject's identity. Salient points on the face are extracted from the outputs of the S2 layer of the HMAX architecture and fed to a local texture characterization module based on the Local Binary Pattern (LBP). The resulting feature vector is used to perform binary classification on a set of pre-defined visual attributes. The devised system distills a very informative, yet robust, representation of the imaged faces, achieving high performance with a much simpler architecture than a deep convolutional neural network. Several experiments performed on publicly available, challenging, large datasets demonstrate the validity of the proposed approach.
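As a hedged sketch of the texture-characterization stage only, the snippet below computes uniform-LBP histograms on patches around salient points and feeds the concatenated descriptor to a binary attribute classifier. The HMAX S2 detection step is assumed to have already produced the keypoints, and the patch size, LBP settings, and classifier choice are illustrative, not the paper's exact configuration.

# LBP histograms around pre-detected salient points -> binary attribute
# classifier. Keypoints, patch size, and LBP parameters are assumptions.
import numpy as np
from skimage.feature import local_binary_pattern
from sklearn.svm import LinearSVC

def lbp_descriptor(gray, points, patch=16, P=8, R=1):
    feats = []
    for (y, x) in points:
        p = gray[y:y + patch, x:x + patch]          # patch around salient point
        codes = local_binary_pattern(p, P, R, method="uniform")
        hist, _ = np.histogram(codes, bins=P + 2, range=(0, P + 2), density=True)
        feats.append(hist)
    return np.concatenate(feats)                    # one vector per face

# Toy usage: two grayscale "faces", one binary attribute label each.
rng = np.random.default_rng(0)
faces = rng.integers(0, 256, size=(2, 128, 128), dtype=np.uint8)
points = [(32, 32), (32, 80), (80, 56)]             # stand-in salient points
X = np.stack([lbp_descriptor(f, points) for f in faces])
clf = LinearSVC().fit(X, [0, 1])                    # e.g. "smiling" yes/no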


2018
Vol 38 (6)
Author(s):
Binbin Wang
Li Xiao
Yang Liu
Jing Wang
Beihong Liu
...  

There is a disparity between the increasing application of digital retinal imaging to neonatal ocular screening and the slowly growing number of pediatric ophthalmologists; assistant tools that can automatically detect ocular disorders may therefore be needed. In the present study, we develop a deep convolutional neural network (DCNN) for automated classification and grading of retinal hemorrhage. We used 48,996 digital fundus images from 3770 newborns with retinal hemorrhage of different severity (grades 1, 2, and 3) and from normal controls, drawn from a large cross-sectional investigation in China. The DCNN was trained for automated grading of retinal hemorrhage (a multiclass classification problem: hemorrhage-free and grades 1, 2, and 3) and then validated for its performance level. The DCNN yielded an accuracy of 97.85 to 99.96%, and the area under the receiver operating characteristic curve was 0.989-1.000, in the binary classification of neonatal retinal hemorrhage (i.e., one class vs. the others). The overall accuracy on the multiclass classification problem was 97.44%. This is the first study to show that a DCNN can detect and grade neonatal retinal hemorrhage at high performance levels. Artificial intelligence will play an increasingly positive role in the ocular healthcare of newborns and children.
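To make the reported figures concrete, the sketch below shows how per-grade binary accuracy and ROC AUC (one class vs. the others) can be derived from multiclass softmax outputs with scikit-learn. The probabilities here are synthetic stand-ins, not the study's data.

# One-vs-rest evaluation of a 4-class grader (hemorrhage-free, grades 1-3).
import numpy as np
from sklearn.metrics import accuracy_score, roc_auc_score

rng = np.random.default_rng(0)
y_true = rng.integers(0, 4, size=500)              # 0 = hemorrhage-free, 1-3 = grades
probs = rng.dirichlet(np.ones(4), size=500)        # softmax-like class probabilities
y_pred = probs.argmax(axis=1)

print("overall multiclass accuracy:", accuracy_score(y_true, y_pred))
for grade in range(4):                             # one class vs. the others
    auc = roc_auc_score((y_true == grade).astype(int), probs[:, grade])
    print(f"grade {grade}: binary AUC = {auc:.3f}")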


2020
Vol 2020 (4)
pp. 4-14
Author(s):
Vladimir Budak
Ekaterina Ilyina

The article proposes a classification of lenses with different symmetrical beam angles and offers a scale as a spotlight palette. A collection of spotlight images was created and classified according to the proposed scale. An analysis of 788 existing lenses and reflectors with different LEDs and COBs was carried out, and the dependence of the axial light intensity on the beam angle was obtained. Using this collection, a new deep convolutional neural network (CNN) based on the pre-trained GoogLeNet was trained by transfer learning. Grad-CAM analysis showed that the trained network correctly identifies the features of the objects. This approach allows arbitrary spotlights to be classified with an accuracy of about 80%. A lighting designer can thus determine the class of a spotlight, and the corresponding type of lens with its technical parameters, using this new CNN-based model.
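A minimal sketch of the transfer-training step described above: load a pretrained GoogLeNet from torchvision, freeze the feature extractor, and replace the classifier head for the spotlight classes. The number of classes (8 here) is an assumption, since the article does not state it.

# Transfer learning from pretrained GoogLeNet; only the new head is trained.
import torch
import torch.nn as nn
from torchvision import models

model = models.googlenet(weights=models.GoogLeNet_Weights.IMAGENET1K_V1)
for p in model.parameters():                       # freeze pretrained features
    p.requires_grad = False
model.fc = nn.Linear(model.fc.in_features, 8)      # new head (assumed 8 classes)

optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
model.eval()                                       # eval mode returns plain logits
logits = model(torch.randn(1, 3, 224, 224))        # class scores, shape (1, 8)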

