depth image
Recently Published Documents





2022 ◽  
Vol 2022 ◽  
pp. 1-8
Xin Liu ◽  
Hua Pan

The purpose is to provide a more reliable human-computer interaction (HCI) guarantee for animation works under virtual reality (VR) technology. Drawing on artificial intelligence (AI) technology and the convolutional neural network—support vector machine (CNN-SVM) hybrid, the differences between animation works under VR technology and traditional animation works are analyzed through a comprehensive examination of VR technology. A CNN-SVM gesture recognition algorithm with an error-correction strategy is designed for HCI recognition. To improve recognition performance, the advantages of depth images and color images are combined, and the collected data are preprocessed; the relation between the number of training iterations and test-set accuracy is examined for the different methods. Experiments show that the maximum accuracy on preprocessed images reaches 0.86, demonstrating the necessity of image preprocessing. The recognition accuracy of the optimized CNN-SVM is then compared with other algorithm models: its accuracy rises relative to the original CNN-SVM and reaches 0.97. This shows that the designed algorithm can provide sound technical support for VR animation, allowing VR animation works to interact well with the audience. It is of significance for the development of VR animation and the improvement of people's artistic life quality.
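The CNN-SVM hybrid described above can be sketched in miniature: a CNN maps each preprocessed gesture image to a feature vector, and an SVM replaces the network's softmax layer as the final classifier. The sketch below substitutes synthetic feature vectors for the (unavailable) CNN outputs and gesture dataset; all names and parameters here are illustrative assumptions, not the paper's implementation.

```python
import numpy as np
from sklearn.svm import SVC

# Hypothetical stand-in for CNN feature extraction: in the paper's pipeline a
# CNN would map each preprocessed depth/color gesture image to a feature vector.
rng = np.random.default_rng(0)
n_per_class, n_features = 50, 16

# Two synthetic "gesture" classes, well separated in feature space.
feats_a = rng.normal(loc=0.0, scale=0.5, size=(n_per_class, n_features))
feats_b = rng.normal(loc=3.0, scale=0.5, size=(n_per_class, n_features))
X = np.vstack([feats_a, feats_b])
y = np.array([0] * n_per_class + [1] * n_per_class)

# The SVM acts as the final classification stage of the CNN-SVM hybrid.
clf = SVC(kernel="rbf", C=1.0).fit(X, y)
accuracy = clf.score(X, y)
print(round(accuracy, 2))
```

In practice the feature vectors would come from the penultimate layer of a trained CNN, and the reported error-correction strategy would post-process the SVM's decisions.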

2022 ◽  
Vol 88 (1) ◽  
pp. 91-101
Haruki NAKAYAMA ◽  
Takuya KAMITANI ◽  
Masashi NISHIYAMA ◽  
Yoshio IWAI ◽  

2022 ◽  
Vol 12 (1) ◽  
pp. 471
Sang Kuy Han ◽  
Keonwoo Kim ◽  
Yejoon Rim ◽  
Manhyung Han ◽  
Youngjeon Lee ◽  

By virtue of their upright locomotion, similar to that of humans, motion analysis of non-human primates has been widely used to better understand musculoskeletal biomechanics and neuroscience problems. Given the difficulty of applying a marker-based infrared optical tracking system to the behavior analysis of primates, two-dimensional (2-D) video analysis has typically been used. Unlike a conventional marker-based optical tracking system, a depth image sensor provides 3-D movement information without any skin markers. The specific aim of this study was to develop a novel algorithm to analyze the behavioral patterns of non-human primates in a home cage using a depth image sensor. The behavioral patterns of nine monkeys in their home cages, including sitting, standing, and pacing, were captured with a depth image sensor and then analyzed both by observers' manual assessment and by the newly written automated program. We confirmed that the measurements from the observers' manual assessments and from the automated program with depth image analysis were statistically identical.
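One way such an automated program could derive a posture label from a depth frame is to segment the subject as the pixels closer than the cage background and use its vertical pixel extent. The sketch below is a minimal illustration of that idea with entirely hypothetical thresholds; it is not the study's algorithm.

```python
import numpy as np

def classify_posture(depth_frame, floor_mm=2500, stand_thresh=120, sit_thresh=60):
    """Crude posture label from a single depth frame (values in mm).
    Pixels closer than floor_mm are treated as the subject; the subject's
    vertical pixel extent separates standing / sitting / lying.
    All thresholds are illustrative assumptions."""
    subject = depth_frame < floor_mm            # foreground mask
    rows = np.where(subject.any(axis=1))[0]     # image rows containing the subject
    if rows.size == 0:
        return "empty"
    extent = rows[-1] - rows[0]                 # vertical extent in pixels
    if extent >= stand_thresh:
        return "standing"
    if extent >= sit_thresh:
        return "sitting"
    return "lying"

# Synthetic 240x320 frame: background at 3000 mm, one tall "standing" blob.
frame = np.full((240, 320), 3000)
frame[40:200, 140:180] = 1500                   # vertical extent = 159 pixels
print(classify_posture(frame))                  # -> standing
```

A real system would also need temporal smoothing across frames, e.g. to recognize pacing as sustained horizontal motion of the foreground blob.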

Wenzhi Wang ◽  
Yuan Zhang ◽  
Jie He ◽  
Zhanqi Chen ◽  
Dan Li ◽  

To address the labor-intensive and time-consuming process of measuring yak body size and weight in the yak breeding industry of Qinghai Province, a non-contact measurement method was proposed in this study, and key technologies based on semantic segmentation, binocular ranging, and neural network algorithms were investigated to support the industry's development. The main conclusions are: (1) A yak foreground extraction model based on the U-net algorithm was implemented; on 2263 yak images, its extraction accuracy exceeded 97%. (2) An algorithm for estimating yak body size from binocular vision was developed, combining the extraction of body-size measurement points with the depth image. The final test shows an average estimation error of 2.6% for body height and body oblique length, and 5.94% for chest depth. (3) A yak weight prediction model was studied, using the body height, body oblique length, and chest depth obtained from binocular vision; two algorithms were used to build the model, with average weight estimation errors of 10.78% and 13.01%, respectively.
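The binocular ranging step rests on stereo triangulation: depth is recovered from disparity as Z = f·B/d, and measurement points are then back-projected to metric coordinates. The sketch below illustrates this with hypothetical camera parameters and measurement points (the focal length, baseline, and pixel coordinates are assumptions, not the study's calibration).

```python
import numpy as np

def depth_from_disparity(disparity_px, focal_px=1200.0, baseline_m=0.12):
    """Stereo triangulation: Z = f * B / d. Parameters are illustrative."""
    return focal_px * baseline_m / disparity_px

def pixel_to_metric(u, v, z, focal_px=1200.0, cx=640.0, cy=360.0):
    """Back-project pixel (u, v) at depth z to metric camera coordinates."""
    x = (u - cx) * z / focal_px
    y = (v - cy) * z / focal_px
    return np.array([x, y, z])

# Hypothetical measurement points: top of the shoulder and the hoof,
# both at the same disparity (i.e. the same distance from the camera).
z = depth_from_disparity(48.0)          # 1200 * 0.12 / 48 = 3.0 m
withers = pixel_to_metric(620, 150, z)
hoof = pixel_to_metric(620, 610, z)
body_height_m = float(np.linalg.norm(withers - hoof))
print(round(body_height_m, 3))          # -> 1.15
```

In the study's pipeline, the U-net foreground mask would supply the measurement-point pixels, and the estimated body height, oblique length, and chest depth would then feed the weight prediction model.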

2021 ◽  
Vol 15 (24) ◽  
pp. 167-175
Md Shahriar Tasjid ◽  
Ahmed Al Marouf

Walking is one of the most common modes of terrestrial locomotion for humans and is essential for most kinds of daily activities. When a person walks, their movement follows a pattern, known as gait. Gait analysis is used in sports and healthcare. Gait can be analyzed in different ways: from video captured by surveillance cameras or depth image cameras in a lab environment, or with wearable sensors such as accelerometers, gyroscopes, flexible goniometers, magnetoresistive sensors, electromagnetic tracking systems, force sensors, and electromyography (EMG). Analysis with these sensors requires lab conditions or users wearing dedicated hardware, and detecting abnormality in a person's gait requires incorporating the sensors separately. Detecting abnormal gait can reveal information about a person's health, so distinguishing regular from abnormal gait with smart wearable technologies may give insight into the subject's condition. Therefore, in this paper we propose a way to analyze abnormal human gait through smartphone sensors, since smart devices such as smartphones and smartwatches are used by most people nowadays and their sensors can track gait. In this study, twenty-three (N=23) people recorded their walking activities; fourteen had normal gait, and nine had difficulty walking due to illness. To classify the subjects' gait, we adopted five machine learning algorithms in addition to a deep learning algorithm, and the traditional classifiers were analyzed and compared among themselves. After rigorous performance analysis, the support vector machine (SVM) showed 96% accuracy, the highest among the traditional classifiers; the logistic regression, Naïve Bayes, and k-Nearest Neighbor (kNN) classifiers obtained 70%, 84%, and 95% accuracy, respectively. As deep learning classifiers have been shown to outperform traditional classifiers in similar binary classification problems, we also applied a 2-D convolutional neural network (2D-CNN) classification algorithm, which outperformed the other algorithms with 98% accuracy. The model can be optimized and integrated with other sensors for use in mobile wearable devices.
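A typical pipeline for this kind of smartphone-sensor gait classification windows the accelerometer signal, extracts per-window statistics, and feeds them to a classifier. The sketch below uses synthetic signals in place of the 23 subjects' recordings and a kNN classifier as one of the traditional methods compared; the window length, features, and signal model are illustrative assumptions.

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

def window_features(signal, win=128):
    """Per-window statistics commonly used for gait: mean, std, range."""
    windows = signal[: len(signal) // win * win].reshape(-1, win)
    return np.column_stack([windows.mean(axis=1),
                            windows.std(axis=1),
                            np.ptp(windows, axis=1)])

rng = np.random.default_rng(1)
t = np.arange(128 * 20)
# Synthetic vertical acceleration: regular gait as a steady stride rhythm,
# abnormal gait as a weaker, noisier rhythm (a toy model, not real data).
normal = np.sin(2 * np.pi * t / 64) + rng.normal(0, 0.1, t.size)
abnormal = 0.3 * np.sin(2 * np.pi * t / 64) + rng.normal(0, 0.4, t.size)

X = np.vstack([window_features(normal), window_features(abnormal)])
y = np.array([0] * 20 + [1] * 20)
clf = KNeighborsClassifier(n_neighbors=3).fit(X, y)
print(clf.score(X, y))
```

The paper's 2D-CNN would instead consume the raw windowed signal directly (e.g. as a time-by-axis image), learning its own features rather than relying on hand-crafted statistics.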

2021 ◽  
Dae-Hyun Jung ◽  
Cheoul Young Kim ◽  
Taek Sung Lee ◽  
Soo Hyun Park

Abstract

Background: The truss on a tomato plant is a group or cluster of smaller stems where flowers and fruit develop, and the growing truss is the most extended part of the stem. Because the state of the growing truss reacts sensitively to the surrounding environment, it is essential to control its growth in the early stages. With the recent development of IT and artificial intelligence technology in agriculture, a previous study developed a robot-based method for real-time image acquisition and evaluation. Building on this, we used image processing to locate the growing truss and flower clusters and to extract growth information such as flower cluster height. Among the available vision algorithms, the CycleGAN algorithm was used to generate and transform unpaired images through generative learning. In this study, we developed a robot-based system that simultaneously acquires RGB and depth images of the tomato growing truss and flower cluster groups.

Results: Segmentation performance on approximately 35 samples was compared using the false negative (FN) and false positive (FP) indicators. For the depth camera image, FN was 17.55 ± 3.01% and FP was 17.76 ± 3.55%; for CycleGAN, FN was approximately 19.24 ± 1.45% and FP was 18.24 ± 1.54%. Image processing through the depth image yielded an IoU of 63.56 ± 8.44%, while segmentation through CycleGAN yielded an IoU of 69.25 ± 4.42%, indicating that CycleGAN is advantageous for extracting the desired growing truss.

Conclusions: Scannability was confirmed when the image scanning robot drove in a straight line through the plantation in the tomato greenhouse, confirming the on-site feasibility of the image extraction technique using CycleGAN. In the future, the proposed approach is expected to be used in vision technology to scan tomato growth indicators in greenhouses using an unmanned robot platform.
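The IoU, FN, and FP figures reported above are standard binary-mask metrics and can be computed directly from a predicted segmentation and its ground truth. The sketch below shows one conventional formulation on toy 4×4 masks; the rate definitions (FN over truth pixels, FP over background pixels) are a common convention and an assumption about the paper's exact definitions.

```python
import numpy as np

def mask_metrics(pred, truth):
    """IoU, false-negative rate, and false-positive rate for binary masks."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    inter = np.logical_and(pred, truth).sum()
    union = np.logical_or(pred, truth).sum()
    fn = np.logical_and(~pred, truth).sum() / truth.sum()      # missed truss pixels
    fp = np.logical_and(pred, ~truth).sum() / (~truth).sum()   # background marked as truss
    return inter / union, fn, fp

# Toy 4x4 masks standing in for truss segmentations.
truth = np.array([[0, 1, 1, 0], [0, 1, 1, 0], [0, 1, 1, 0], [0, 0, 0, 0]])
pred  = np.array([[0, 1, 1, 1], [0, 1, 1, 0], [0, 0, 1, 0], [0, 0, 0, 0]])
iou, fn, fp = mask_metrics(pred, truth)
print(round(iou, 3), round(fn, 3), round(fp, 3))   # -> 0.714 0.167 0.1
```

Averaging these per-image metrics over the ~35 samples, with their standard deviations, yields summary figures of the form reported in the Results.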

2021 ◽  
Ao Yang ◽  
Jie Cao ◽  
Zhijun Li ◽  
Yang Cheng ◽  
Qun Hao

2021 ◽  
Peng Shan ◽  
Wenzhi Wang ◽  
Wenxuan Zhao ◽  
Zishen Yang
