depth sensor
Recently Published Documents


TOTAL DOCUMENTS: 491 (five years: 177)

H-INDEX: 21 (five years: 6)

2022 ◽  
Author(s):  
Stella Gerdemann ◽  
Ronja Büchner ◽  
Robert Hepach

Children sometimes show positive emotions in response to seeing others being helped, yet it remains poorly understood whether such emotional expressions have a strategic value. Here, we investigated the influence of seeing a peer receive deserving help or not on children’s emotions, which were assessed while the peer was either present or absent. To measure children’s emotional expression, we used a motion depth sensor imaging camera, which recorded children’s body posture. Five-year-old children (N = 122) worked on a task that yielded greater rewards for them than for their peer, leaving the peer in greater need of help. An adult, who was unaware of the different levels of neediness, then either helped the child who had a lesser need for help (less deserving outcome) or helped the needier peer (deserving outcome). Overall, children showed both a lowered body posture (a more negative emotional expression) after not being helped and an elevated body posture (a more positive emotional expression) after being helped. Seeing their peer (less deservedly) not receive help, and to a lesser extent being observed, blunted children’s otherwise positive emotions in response to receiving help. These results are discussed in the broader theoretical context of how children’s emotions sometimes reflect their commitment to cooperative relationships with peers.


Author(s):  
David Singer ◽  
Dorian Rohner ◽  
Dominik Henrich

Abstract: A complete object database containing a model (representing geometric and texture information) of every possible workpiece is a common necessity, e.g., for object recognition or task planning approaches. Generating these models is often a tedious process. In this paper we present a fully automated approach that tackles this problem by generating complete workpiece models using a robotic manipulator. A workpiece is recorded by a depth sensor from multiple views on one side, then turned, and captured from the other side. The resulting point clouds are merged into one complete model. Additionally, we represent the information provided by the object’s texture using keypoints. We present a proof of concept and evaluate the precision of the final models. We conclude that the approach is useful, with the resulting models showing a precision of around 1 mm.
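As a rough illustration of the multi-view merging step described in this abstract, the following sketch uses Open3D to register two partial point clouds with ICP and fuse them into a single model. The file names, voxel sizes, and distance threshold are placeholders, not values from the paper.

```python
# Sketch: merge two partial depth-sensor scans into one model with Open3D.
# "side_a.ply" and "side_b.ply" are assumed partial scans captured from
# opposite sides of the workpiece; names and thresholds are illustrative.
import open3d as o3d

def load_and_preprocess(path, voxel=0.002):
    pcd = o3d.io.read_point_cloud(path)
    pcd = pcd.voxel_down_sample(voxel)   # reduce noise and point count
    pcd.estimate_normals()               # needed for point-to-plane ICP
    return pcd

source = load_and_preprocess("side_a.ply")
target = load_and_preprocess("side_b.ply")

# Refine the alignment with point-to-plane ICP; in practice an initial guess
# from the known turning motion of the manipulator would seed this step.
result = o3d.pipelines.registration.registration_icp(
    source, target, max_correspondence_distance=0.01,
    estimation_method=o3d.pipelines.registration.TransformationEstimationPointToPlane())

source.transform(result.transformation)   # bring the first scan into the second's frame
merged = source + target                  # fuse into one cloud
merged = merged.voxel_down_sample(0.001)  # thin out duplicate points
o3d.io.write_point_cloud("workpiece_model.ply", merged)
```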


2021 ◽  
Vol 33 (6) ◽  
pp. 1408-1422
Author(s):  
Alireza Bilesan ◽  
Shunsuke Komizunai ◽  
Teppei Tsujita ◽  
Atsushi Konno ◽  
◽  
...  

Kinect has been utilized as a cost-effective, easy-to-use motion capture sensor using the Kinect skeleton algorithm. However, a limited number of landmarks and inaccuracies in tracking the landmarks’ positions restrict Kinect’s capability. In order to increase the accuracy of motion capture using Kinect, joint use of the Kinect skeleton algorithm and Kinect-based marker tracking was applied to track the 3D coordinates of multiple landmarks on the human body. The motion’s kinematic parameters were calculated from the landmarks’ positions by applying joint constraints and inverse kinematics techniques. The accuracy of the proposed method and of OptiTrack (NaturalPoint, Inc., USA) was evaluated in capturing the joint angles of a humanoid robot (as ground truth) in a walking test. In order to evaluate the accuracy of the proposed method in capturing the kinematic parameters of a human, lower-body joint angles of five healthy subjects were extracted using a Kinect, and the results were compared to Perception Neuron (Noitom Ltd., China) and OptiTrack data during ten gait trials. The absolute agreement and consistency between each optical system and the robot data in the robot test, and between each motion capture system and OptiTrack data in the human gait test, were determined using intraclass correlation coefficients (ICC3). The reproducibility between systems was evaluated using Lin’s concordance correlation coefficient (CCC). The correlation coefficients with 95% confidence intervals (95% CI) were interpreted as substantial for both OptiTrack and the proposed method (ICC > 0.75 and CCC > 0.95) in the humanoid test. The results of the human gait experiments demonstrated the advantage of the proposed method (ICC > 0.75 and RMSE = 1.1460°) over the Kinect skeleton model (ICC < 0.4 and RMSE = 6.5843°).
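For readers unfamiliar with the agreement statistics used in this abstract, the sketch below computes ICC(3,1) and Lin's concordance correlation coefficient for two aligned joint-angle series with NumPy. It follows the standard textbook formulas; the synthetic traces are placeholders, not data from the paper.

```python
# Agreement statistics for two aligned measurement series (e.g., joint angles
# from a Kinect-based method vs. OptiTrack). Standard formulas only.
import numpy as np

def icc3_1(x, y):
    """ICC(3,1): two-way mixed effects, consistency, single measurement."""
    data = np.column_stack([x, y])        # n targets x k=2 raters
    n, k = data.shape
    grand_mean = data.mean()
    row_means = data.mean(axis=1)
    col_means = data.mean(axis=0)
    ss_rows = k * np.sum((row_means - grand_mean) ** 2)
    ss_cols = n * np.sum((col_means - grand_mean) ** 2)
    ss_total = np.sum((data - grand_mean) ** 2)
    ss_error = ss_total - ss_rows - ss_cols
    ms_rows = ss_rows / (n - 1)
    ms_error = ss_error / ((n - 1) * (k - 1))
    return (ms_rows - ms_error) / (ms_rows + (k - 1) * ms_error)

def lins_ccc(x, y):
    """Lin's concordance correlation coefficient."""
    mx, my = np.mean(x), np.mean(y)
    vx, vy = np.var(x), np.var(y)         # population variances
    cov = np.mean((x - mx) * (y - my))
    return 2 * cov / (vx + vy + (mx - my) ** 2)

# Example with synthetic, closely agreeing angle traces:
t = np.linspace(0, 2 * np.pi, 200)
optitrack = 30 * np.sin(t)
proposed = optitrack + np.random.normal(0, 1.0, t.size)
print(icc3_1(proposed, optitrack), lins_ccc(proposed, optitrack))
```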


Author(s):  
N. V. Gowtham Deekshithulu ◽  
Joyita Mali ◽  
V. Vamsee Krishna ◽  
D. Surekha

In the present study, canal depth, velocity, and weather monitoring sensors were designed and implemented at the field irrigation laboratory of Aditya Engineering College, Surampalem, Andhra Pradesh, India. The depth sensor used in this project is the HC-SR04 ultrasonic sensor, and the velocity sensor is the YF-S403 flow sensor. A method of data acquisition and transmission based on the ThingSpeak IoT platform is proposed. To record weather data (i.e., temperature, humidity, rainfall depth, and wind speed), a DHT11 sensor, an ultrasonic sensor, and IR sensors are used. The purpose of this project is to evaluate the performance of real-time canal and weather monitoring devices. A real-time monitoring structure based on these sensors and ThingSpeak IoT was designed so that the sensors operate independently and transmit data wirelessly, which helps minimize errors in data collection. An Arduino UNO is connected to the canal depth and velocity sensors to generate the output; similarly, a NodeMCU is connected to the weather monitoring device. The results revealed that the observed sensor data agreed well with the existing conventional measurement system when compared and calibrated against it. To save time and obtain accurate values, it is recommended to use these sensors appropriately and to access the weather data through the platform. The developed device worked satisfactorily with minimal or no errors.
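As a minimal sketch of the data-transmission path described in this abstract, the following Python script pushes canal depth and velocity readings to a ThingSpeak channel via its REST update endpoint. The paper uses Arduino UNO / NodeMCU firmware rather than Python; the write key, field numbering, and sensor-reading stubs are placeholders for an assumed channel layout.

```python
# Sketch: posting canal depth and flow readings to a ThingSpeak channel.
import time
import requests

WRITE_API_KEY = "XXXXXXXXXXXXXXXX"   # hypothetical channel write key
UPDATE_URL = "https://api.thingspeak.com/update"

def read_depth_cm():
    # Placeholder for an HC-SR04 time-of-flight measurement.
    return 42.0

def read_velocity_lpm():
    # Placeholder for a YF-S403 pulse-count flow measurement.
    return 12.5

while True:
    payload = {
        "api_key": WRITE_API_KEY,
        "field1": read_depth_cm(),      # assumed: field1 = canal depth (cm)
        "field2": read_velocity_lpm(),  # assumed: field2 = flow rate (L/min)
    }
    response = requests.post(UPDATE_URL, data=payload, timeout=10)
    print("ThingSpeak entry id:", response.text)
    time.sleep(20)   # free ThingSpeak channels accept roughly one update per 15 s
```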


Mathematics ◽  
2021 ◽  
Vol 9 (21) ◽  
pp. 2815
Author(s):  
Shih-Hung Yang ◽  
Yao-Mao Cheng ◽  
Jyun-We Huang ◽  
Yon-Ping Chen

Automatic fingerspelling recognition tackles the communication barrier between deaf and hearing individuals. However, the accuracy of fingerspelling recognition is reduced by high intra-class variability and low inter-class variability. In existing methods, regular convolutional kernels, which have limited receptive fields (RFs) and often cannot detect subtle discriminative details, are applied to learn features. In this study, we propose a receptive field-aware network with finger attention (RFaNet) that highlights the finger regions and builds inter-finger relations. To highlight the discriminative details of the fingers, RFaNet reweights the low-level features of the hand depth image with those of the non-forearm image and improves finger localization, even when the wrist is occluded. RFaNet captures neighboring and inter-region dependencies between fingers in high-level features. An atrous convolution procedure enlarges the RFs at multiple scales, and a non-local operation computes the interactions between multi-scale feature maps, thereby facilitating the building of inter-finger relations. Thus, the representation of a sign is invariant to viewpoint changes, which are primarily responsible for intra-class variability. On an American Sign Language fingerspelling dataset, RFaNet achieved 1.77% higher classification accuracy than state-of-the-art methods. RFaNet also achieved effective transfer learning when the number of labeled depth images was insufficient. The fingerspelling representation of a depth image can be effectively transferred from large-scale to small-scale datasets by highlighting the finger regions and building inter-finger relations, thereby reducing the requirement for expensive fingerspelling annotations.
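The two building blocks named in this abstract, multi-rate atrous convolution and a non-local operation, can be sketched generically in PyTorch as below. This is not the RFaNet architecture; channel sizes, dilation rates, and the embedded-Gaussian formulation of the non-local block are illustrative assumptions.

```python
# Multi-rate atrous convolutions enlarge receptive fields; a simple non-local
# (embedded-Gaussian) block then relates positions across the fused feature map.
import torch
import torch.nn as nn
import torch.nn.functional as F

class AtrousBranch(nn.Module):
    def __init__(self, in_ch, out_ch, rates=(1, 2, 4)):
        super().__init__()
        self.branches = nn.ModuleList(
            nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=r, dilation=r)
            for r in rates)
        self.fuse = nn.Conv2d(out_ch * len(rates), out_ch, kernel_size=1)

    def forward(self, x):
        feats = [F.relu(b(x)) for b in self.branches]   # multi-scale receptive fields
        return self.fuse(torch.cat(feats, dim=1))       # fuse the scales

class NonLocalBlock(nn.Module):
    def __init__(self, ch):
        super().__init__()
        self.theta = nn.Conv2d(ch, ch // 2, 1)
        self.phi = nn.Conv2d(ch, ch // 2, 1)
        self.g = nn.Conv2d(ch, ch // 2, 1)
        self.out = nn.Conv2d(ch // 2, ch, 1)

    def forward(self, x):
        b, c, h, w = x.shape
        q = self.theta(x).flatten(2).transpose(1, 2)    # B x HW x C/2
        k = self.phi(x).flatten(2)                      # B x C/2 x HW
        v = self.g(x).flatten(2).transpose(1, 2)        # B x HW x C/2
        attn = F.softmax(q @ k, dim=-1)                 # pairwise position interactions
        y = (attn @ v).transpose(1, 2).reshape(b, c // 2, h, w)
        return x + self.out(y)                          # residual connection

# Example: depth-image features passed through both blocks.
feat = torch.randn(1, 64, 32, 32)
feat = AtrousBranch(64, 64)(feat)
feat = NonLocalBlock(64)(feat)
```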


2021 ◽  
Vol 3 (4) ◽  
pp. 840-852
Author(s):  
Duke M. Bulanon ◽  
Colton Burr ◽  
Marina DeVlieg ◽  
Trevor Braddock ◽  
Brice Allen

One of the challenges in the future of food production, amidst increasing population and decreasing resources, is developing a sustainable food production system. It is anticipated that robotics will play a significant role in maintaining the food production system, specifically in labor-intensive operations. Therefore, the main goal of this project is to develop a robotic fruit harvesting system, initially focused on the harvesting of apples. The robotic harvesting system is composed of a six-degrees-of-freedom (DOF) robotic manipulator, a two-fingered gripper, a color camera, a depth sensor, and a personal computer. This paper details the development and performance of a visual servo system that can be used for fruit harvesting. Initial test evaluations were conducted in an indoor laboratory using plastic fruit and artificial trees. Subsequently, the system was tested outdoors in a commercial fruit orchard. Evaluation parameters included fruit detection performance, response time of the visual servo, and physical time to harvest a fruit. Results of the evaluation showed that the developed visual servo system has the potential to guide the robot for fruit harvesting.
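One way a position-based visual servo loop of the kind described in this abstract could be structured is sketched below: back-project the detected fruit pixel and its depth reading into a 3D target and take a proportional step toward it. The camera intrinsics, gain, and tolerance are placeholder values, and the detection and manipulator interfaces are assumed to exist outside this snippet.

```python
# Sketch of one proportional step of a position-based visual servo loop.
import numpy as np

# Assumed pinhole intrinsics of the color/depth pair (placeholder values).
FX, FY, CX, CY = 615.0, 615.0, 320.0, 240.0
STEP_GAIN = 0.3          # proportional gain on each servo iteration
REACH_TOLERANCE = 0.01   # stop when within 1 cm of the fruit

def pixel_to_camera(u, v, depth_m):
    """Back-project a pixel with known depth into camera coordinates."""
    x = (u - CX) * depth_m / FX
    y = (v - CY) * depth_m / FY
    return np.array([x, y, depth_m])

def visual_servo_step(end_effector_pos, u, v, depth_m):
    """Move the end effector a fraction of the way toward the detected fruit."""
    target = pixel_to_camera(u, v, depth_m)
    error = target - end_effector_pos
    if np.linalg.norm(error) < REACH_TOLERANCE:
        return end_effector_pos, True            # close enough to grasp
    return end_effector_pos + STEP_GAIN * error, False

# Example: fruit detected at pixel (400, 250) with a 0.8 m depth reading.
pos = np.array([0.0, 0.0, 0.2])
pos, done = visual_servo_step(pos, 400, 250, 0.8)
```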


Author(s):  
Shogo Sekiguchi ◽  
Liang Li ◽  
Nak Yong Ko ◽  
Woong Choi

2021 ◽  
Vol 22 (1) ◽  
pp. 110-140 ◽  
Author(s):  
Yumiko Tamura ◽  
Masahiro Shiomi ◽  
Mitsuhiko Kimoto ◽  
Takamasa Iio ◽  
Katsunori Shimohara ◽  
...  

Abstract This paper investigates the effects of group interaction in a storytelling situation for children using two robots: a reader robot and a listener robot as a side-participant. We developed a storytelling system that consists of a reader robot, a listener robot, a display, a gaze model, a depth sensor, and a human operator who responds and provides easily understandable answers to the children’s questions. We experimentally investigated the effects of using a listener robot and either one or two children during a storytelling situation on the children’s preferences and their speech activities. Our experimental results showed that the children preferred storytelling with the listener robot. Although two children obviously produced more speech than one child, the listener robot discouraged the children’s speech regardless of whether one or two were listening.


Author(s):  
Hong Jia ◽  
Jiawei Hu ◽  
Wen Hu

Sports analytics in the wild (i.e., ubiquitously) is a thriving industry. Swing tracking is a key feature in sports analytics, and centimeter-level tracking resolution is therefore required. Recent research has explored deep neural networks for sensor fusion to produce consistent swing-tracking performance by combining the advantages of two sensor modalities (IMUs and depth sensors) for golf swing tracking. IMUs are not affected by occlusion and support high sampling rates, while depth sensors produce significantly more accurate motion measurements than IMUs. Nevertheless, this method can be further improved in terms of accuracy, and it lacks information about different domains (e.g., subjects, sports, and devices). Unfortunately, designing a deep neural network with good performance is time consuming and labor intensive, which is challenging when a network model is deployed in new settings. To this end, we propose a network based on Neural Architecture Search (NAS), called SwingNet, a regression-based deep neural network generated automatically via stochastic neural architecture search. The proposed network aims to automatically learn swing-tracking features for better prediction. Furthermore, SwingNet features a domain discriminator that uses unsupervised learning and adversarial learning to ensure that it can adapt to unobserved domains. We implemented SwingNet prototypes with a smart wristband (IMU) and a smartphone (depth sensor), which are ubiquitously available, enabling accurate sports analytics (e.g., coaching, tracking, analysis, and assessment) in the wild. Our comprehensive experiments show that SwingNet achieves swing-tracking errors of less than 10 cm with a subject-independent model covering multiple sports (e.g., golf and tennis) and depth sensor hardware, outperforming state-of-the-art approaches.
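A common way to realize the adversarial domain discriminator mentioned in this abstract is a DANN-style gradient reversal layer, sketched below. This is a generic illustration, not SwingNet's searched architecture; the feature extractor, layer sizes, and loss weighting are assumptions.

```python
# DANN-style domain discriminator with a gradient reversal layer.
import torch
import torch.nn as nn

class GradReverse(torch.autograd.Function):
    @staticmethod
    def forward(ctx, x, lambd):
        ctx.lambd = lambd
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        return -ctx.lambd * grad_output, None   # reverse gradients for the extractor

class SwingModel(nn.Module):
    def __init__(self, in_dim=64, feat_dim=128, n_domains=3):
        super().__init__()
        self.extractor = nn.Sequential(nn.Linear(in_dim, feat_dim), nn.ReLU())
        self.regressor = nn.Linear(feat_dim, 3)          # swing position (x, y, z)
        self.domain_head = nn.Sequential(
            nn.Linear(feat_dim, 64), nn.ReLU(), nn.Linear(64, n_domains))

    def forward(self, x, lambd=1.0):
        feat = self.extractor(x)
        pos = self.regressor(feat)                        # tracking output
        dom = self.domain_head(GradReverse.apply(feat, lambd))
        return pos, dom

# Training step: regression loss on labeled data plus adversarial domain loss.
model = SwingModel()
x = torch.randn(8, 64)
target_pos = torch.randn(8, 3)
domain_label = torch.randint(0, 3, (8,))
pos, dom = model(x, lambd=0.5)
loss = nn.functional.mse_loss(pos, target_pos) + nn.functional.cross_entropy(dom, domain_label)
loss.backward()
```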

