depth sensors
Recently Published Documents


TOTAL DOCUMENTS: 273 (FIVE YEARS: 102)
H-INDEX: 21 (FIVE YEARS: 5)

2021 ◽  
Vol 5 (Supplement_1) ◽  
pp. 95-96
Author(s):  
Erin Robinson ◽  
Wenlong Wu ◽  
Geunhye Park ◽  
Gashaye M Tefera ◽  
Kari Lane ◽  
...  

Abstract Older adults have experienced greater isolation and mental health concerns during the COVID-19 pandemic. Residents of long-term care (LTC) settings have been particularly affected by strict lockdown policies, yet little is known about how these policies have impacted older adults. This study leveraged existing research with embedded sensors installed in LTC settings and analyzed sensor data from residents (N=30) for the two months before and after the onset of the U.S. COVID-19 pandemic (1/13/20 to 3/13/20 and 3/14/20 to 5/13/20). Data from three sensor types (bed sensors, depth sensors, and motion sensors) were analyzed for each resident using paired t-tests, yielding information on pulse, respiration, sleep, gait, and motion when entering/exiting the front door, living room, bedroom, and bathroom. A 14.4% decrease in front door motion was observed in the two months post-onset, along with a 2.4% increase in average nighttime respiration and a 7.6% increase in nighttime bed restlessness. Over half of the sample (68%) showed significant differences (p<0.05) in restlessness. These results highlight the potential impact of the COVID-19 pandemic and social distancing policies on older adults living in LTC. While significant differences in front door motion are unsurprising, the bed sensor data can shed light on how sleep was affected during this time: as older adults experienced additional mental health concerns, their normal sleep patterns may have been disrupted. Implications could help inform LTC staff, healthcare providers, and self-management of health approaches among older adults.
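The pre/post comparison described above rests on the paired t-test. As a minimal sketch (with made-up readings, not the study's data), the test statistic for paired samples can be computed directly from the per-resident differences:

```python
import math

def paired_t(pre, post):
    """Paired t-test statistic and degrees of freedom for matched
    pre/post readings (e.g., a resident's nightly respiration rate).
    Returns (t, df); the p-value would come from Student's t CDF."""
    assert len(pre) == len(post)
    d = [b - a for a, b in zip(pre, post)]   # per-pair differences
    n = len(d)
    mean = sum(d) / n
    var = sum((x - mean) ** 2 for x in d) / (n - 1)  # sample variance
    se = math.sqrt(var / n)                  # standard error of the mean
    return mean / se, n - 1
```

In practice one would use `scipy.stats.ttest_rel`, which also returns the p-value; the sketch only shows where the statistic comes from.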


Sensors ◽  
2021 ◽  
Vol 21 (23) ◽  
pp. 7950
Author(s):  
Radhakrishnan Gopalapillai ◽  
Deepa Gupta ◽  
Mohammed Zakariah ◽  
Yousef Ajami Alotaibi

Classification of indoor environments is a challenging problem. The availability of low-cost depth sensors has opened up a new research area of using depth information in addition to color image (RGB) data for scene understanding. Transfer learning of deep convolutional networks with pairs of RGB and depth (RGB-D) images must integrate these two modalities. Single-channel depth images are often converted to three-channel images by encoding horizontal disparity, height above ground, and the angle of the pixel’s local surface normal (HHA), so that transfer learning can be applied using networks trained on the Places365 dataset. The high computational cost of HHA encoding is a major disadvantage for real-time scene prediction, although this matters less during training. We propose a new, computationally efficient encoding method that can be integrated with any convolutional neural network. We show that our encoding approach performs equally well or better in a multimodal transfer learning setup for scene classification. Our encoding is implemented in a customized and pretrained VGG16 Net. We address the class imbalance seen in the image dataset with a feature-level method based on the synthetic minority oversampling technique (SMOTE). With appropriate image augmentation and fine-tuning, our network achieves scene classification accuracy comparable to that of other state-of-the-art architectures.
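The SMOTE idea used above for class balancing interpolates synthetic minority samples between a real sample and a nearby minority neighbor. A minimal feature-level sketch (illustrative only; the paper applies it to CNN feature vectors, and libraries like imbalanced-learn provide production implementations):

```python
import math
import random

def smote(minority, n_new, seed=0):
    """Generate n_new synthetic feature vectors from a list of minority-
    class vectors: pick a random sample, find its nearest minority
    neighbor (k=1 for brevity), and interpolate at a random gap."""
    rng = random.Random(seed)
    out = []
    for _ in range(n_new):
        x = rng.choice(minority)
        nb = min((m for m in minority if m is not x),
                 key=lambda m: math.dist(x, m))  # nearest other sample
        gap = rng.random()                        # position on the segment
        out.append([a + gap * (b - a) for a, b in zip(x, nb)])
    return out
```

Every synthetic point lies on a segment between two real minority samples, so the oversampled class stays inside its own convex hull.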


Water ◽  
2021 ◽  
Vol 13 (21) ◽  
pp. 3099
Author(s):  
Daniel A. Segovia-Cardozo ◽  
Leonor Rodríguez-Sinobas ◽  
Freddy Canales-Ide ◽  
Sergio Zubelzu

Hydrologic processes acting on catchments are complex and variable, especially in mountain basins, due to their topography and specific characteristics, so runoff simulation models and water management are also complex. Nevertheless, model parameters are usually estimated from user-manual guidelines and the literature, because the parameters are rarely monitored given the high cost of conventional monitoring systems. Within this framework, a new and promising generation of low-cost sensors for hydrologic monitoring, logging, and transmission has been developed. We aimed to design a low-cost, open-hardware platform, based on a Raspberry Pi and software written in Python 3, for measuring, recording, and wirelessly transmitting data in hydrological monitoring contexts. Moreover, the data are linked to a runoff model in real time for flood prevention. The work also emphasizes the laboratory calibration and validation of the soil moisture, rain gauge, and water depth sensors. The platform was installed in a small mountain basin. The results showed mean absolute errors of ±2.2% in soil moisture, ±1 mm in rainfall, and ±0.51 cm in water depth measurements; they highlight the potential of this platform for hydrological monitoring and flood risk management.
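The calibration figures reported above (±2.2% soil moisture, ±1 mm rainfall, ±0.51 cm water depth) are mean absolute errors of the low-cost sensors against reference instruments. The metric itself is simple; a sketch with hypothetical readings:

```python
def mean_absolute_error(reference, measured):
    """MAE between reference-instrument readings and low-cost sensor
    readings, the accuracy metric used in sensor calibration reports."""
    assert len(reference) == len(measured)
    return sum(abs(r - m) for r, m in zip(reference, measured)) / len(reference)
```

During laboratory validation one would log paired readings at known set points (e.g., fixed water depths in a column) and report this value per sensor type.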


2021 ◽  
Vol 2021 ◽  
pp. 1-12
Author(s):  
Rong Zou ◽  
Yu Zhang ◽  
Junlan Gu ◽  
Jin Chen

Detecting the distance between the surfaces of transparent materials with large area and thickness has long been a difficult industrial problem. In this paper, a method based on low-cost TOF continuous-wave modulation and deep convolutional neural network technology is proposed. Distance detection between transparent material surfaces is converted into the problem of solving the intersection of the optical path with the material’s front and rear surfaces. On this basis, Gray code encoding and decoding operations are combined to achieve distance detection between surfaces. The problems of holes and detail loss in depth maps generated by low-resolution TOF depth sensors are also effectively solved. The entire system is simple and achieves thickness detection over the full surface area. Moreover, it can measure large transparent materials with a thickness of over 30 mm, far exceeding existing optical thickness detection systems for transparent materials.
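The Gray code encoding/decoding mentioned above is the standard binary-reflected Gray code, whose key property is that consecutive codewords differ in exactly one bit, making structured-light patterns robust to single-bit decoding errors. A minimal sketch of the two operations:

```python
def gray_encode(n):
    """Binary-reflected Gray code of a non-negative integer."""
    return n ^ (n >> 1)

def gray_decode(g):
    """Invert the Gray code by cascading XOR over all right shifts."""
    n = 0
    while g:
        n ^= g
        g >>= 1
    return n
```

For example, 5 (binary 101) encodes to 7 (binary 111), and decoding 7 recovers 5; adjacent values always produce codes one bit apart.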


2021 ◽  
Author(s):  
Konrad P Cop ◽  
Arne Peters ◽  
Bare L Zagar ◽  
Daniel Hettegger ◽  
Alois C Knoll

Sensors ◽  
2021 ◽  
Vol 21 (18) ◽  
pp. 6095
Author(s):  
Xiaojing Sun ◽  
Bin Wang ◽  
Longxiang Huang ◽  
Qian Zhang ◽  
Sulei Zhu ◽  
...  

Despite recent successes in hand pose estimation from RGB images or depth maps, inherent challenges remain. RGB-based methods suffer from heavy self-occlusion and depth ambiguity, while depth sensors depend heavily on distance and are generally restricted to indoor use, limiting the practical application of depth-based methods. These challenges inspired us to combine the two modalities so that each offsets the shortcomings of the other. In this paper, we propose CrossFuNet, a novel RGB and depth information fusion network that improves the accuracy of 3D hand pose estimation. Specifically, the RGB image and the paired depth map are fed into two separate subnetworks, and their feature maps are combined in a fusion module implementing a completely new approach to integrating the two modalities. The 3D keypoints are then regressed from heatmaps in the standard way. We validate our model on two public datasets, and the results show that it outperforms state-of-the-art methods.
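The "standard way" of regressing keypoints from heatmaps is to take each keypoint's predicted 2D heatmap and read off its peak, either as a hard argmax or as a differentiable soft argmax that gives sub-pixel coordinates. A toy sketch on plain nested lists (real pipelines operate on network output tensors):

```python
def heatmap_argmax(heatmap):
    """Hard argmax: (row, col) of the peak response in a 2D heatmap."""
    return max(
        ((r, c) for r in range(len(heatmap)) for c in range(len(heatmap[0]))),
        key=lambda rc: heatmap[rc[0]][rc[1]],
    )

def heatmap_soft_argmax(heatmap):
    """Soft argmax: response-weighted mean location, giving sub-pixel
    coordinates (assumes non-negative, e.g. softmaxed, responses)."""
    total = sum(v for row in heatmap for v in row)
    r = sum(i * v for i, row in enumerate(heatmap) for v in row)
    c = sum(j * v for row in heatmap for j, v in enumerate(row))
    return r / total, c / total
```

The soft variant is preferred when the keypoint location must stay differentiable for end-to-end training.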


Author(s):  
Hong Jia ◽  
Jiawei Hu ◽  
Wen Hu

Sports analytics in the wild (i.e., ubiquitously) is a thriving industry, and swing tracking is a key feature that demands centimeter-level tracking resolution. Recent research has explored deep neural networks for sensor fusion to produce consistent swing-tracking performance by combining the advantages of two sensor modalities (IMUs and depth sensors) for golf swing tracking: IMUs are unaffected by occlusion and support high sampling rates, while depth sensors produce significantly more accurate motion measurements than IMUs. Nevertheless, this approach can be improved in terms of accuracy and its ability to generalize across domains (e.g., subjects, sports, and devices). Unfortunately, designing a deep neural network with good performance is time-consuming and labor-intensive, which is challenging when a network model must be deployed in new settings. To this end, we propose SwingNet, a regression-based deep neural network generated automatically via stochastic neural architecture search (NAS). The network aims to learn swing-tracking features automatically for better prediction. Furthermore, SwingNet features a domain discriminator, trained with unsupervised and adversarial learning, to ensure that it adapts to unobserved domains. We implemented SwingNet prototypes with a smart wristband (IMU) and smartphone (depth sensor), which are ubiquitously available and enable accurate sports analytics (e.g., coaching, tracking, analysis, and assessment) in the wild. Our comprehensive experiments show that SwingNet achieves swing-tracking errors of less than 10 cm with a subject-independent model covering multiple sports (e.g., golf and tennis) and depth sensor hardware, outperforming state-of-the-art approaches.
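The complementary strengths described above (drift-free but occlusion-prone depth measurements vs. high-rate but drifting IMU integration) are classically exploited by a complementary filter. This is not the paper's learned fusion network, only a sketch of the underlying idea on a single 1D coordinate:

```python
def complementary_fuse(imu_deltas, depth_positions, alpha=0.9, x0=0.0):
    """Classical complementary filter: integrate high-rate IMU motion
    deltas, continuously pulling the estimate toward absolute depth-
    sensor positions to cancel drift. alpha weights the IMU path."""
    x = x0
    out = []
    for d, z in zip(imu_deltas, depth_positions):
        x = alpha * (x + d) + (1 - alpha) * z
        out.append(x)
    return out
```

A learned fusion model such as the one described above effectively replaces the fixed weight alpha with features learned from both modalities.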


Author(s):  
S. Harbola ◽  
V. Coors

Abstract. Environmental monitoring systems and sensors, installed on a day-to-day basis to monitor cities’ environmental and pollution conditions, are increasingly in demand. Advances in sensor networking, together with the growing quality and quantity of environmental data, have given rise to techniques and methodologies supporting interactive visualisation and analysis of spatiotemporal data. Moreover, Visualisation (Vis) and Visual Analytics (VA) of spatiotemporal data have become essential for researchers, policymakers, and industry to improve energy efficiency, environmental management, and cities’ air pollution planning. A platform covering Vis and VA of spatiotemporal data collected from a city helps demonstrate the potential of such techniques for uncovering crucial environmental insights, and such a platform is still needed. Therefore, this work presents a Vis and VA interface for spatiotemporal data described by location, time, and several measured attributes, such as Particulate Matter (PM) PM2.5 and PM10, along with humidity and wind (speed and direction), to assess the detailed temporal patterns of these parameters in Stuttgart, Germany. The time series are analysed using unsupervised HDBSCAN clustering on the parameters above. Furthermore, building on this in-depth understanding of the sensors’ behaviour and trends, a Machine Learning (ML) approach, a Transformer network predictor model, is integrated; it takes successive time values of the parameters and the sensors’ locations as input and predicts the future dominant (highly measured) values and their locations in time as output. Variations of the selected parameters are compared and analysed in a spatiotemporal frame to provide detailed estimates of how average conditions would change in a region over time. This work offers better insight into the urban system and supports the sustainable development of cities by improving human interaction with spatiotemporal data. Hence, the growing environmental problems of large industrial cities could be flagged and reduced in the future with the proposed work.
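Feeding "successive time values of parameters" into a sequence predictor such as a Transformer requires slicing the series into fixed-length input windows with a forecast target. A minimal sketch of that preprocessing step (the window and horizon names are illustrative, not from the paper):

```python
def make_windows(series, window, horizon=1):
    """Slice a time series into (input_window, target) pairs for a
    sequence predictor: each target lies `horizon` steps past the
    end of its window."""
    pairs = []
    for i in range(len(series) - window - horizon + 1):
        pairs.append((series[i:i + window], series[i + window + horizon - 1]))
    return pairs
```

The same slicing applies per sensor and per attribute (PM2.5, PM10, humidity, wind); multivariate inputs simply replace each scalar with a feature vector.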

