Corrigendum to “Multivariate event detection methods for non-intrusive load monitoring in smart homes and residential buildings” [Energy and Buildings 208 (2020) 109624]

2020 ◽  
Vol 211 ◽  
pp. 109813
Author(s):  
S. Houidi ◽  
F. Auger ◽  
H. Ben Attia Sethom ◽  
D. Fourer ◽  
L. Miègeville

2014 ◽  
Vol 61 ◽  
pp. 1840-1843 ◽  
Author(s):  
Chuan Choong Yang ◽  
Chit Siang Soh ◽  
Vooi Voon Yap

Author(s):  
M. N. Favorskaya ◽  
L. C. Jain

Introduction: Saliency detection is a fundamental task in computer vision. Its ultimate aim is to localize the objects of interest that grab human visual attention with respect to the rest of the image. A great variety of saliency models based on different approaches has been developed since the 1990s. In recent years, saliency detection has become one of the most actively studied topics in the theory of Convolutional Neural Networks (CNNs). Many original solutions using CNNs have been proposed for salient object detection and even event detection. Purpose: A detailed survey of saliency detection methods in the deep learning era makes it possible to understand the current capabilities of the CNN approach for visual analysis conducted by human eye tracking and digital image processing. Results: The survey reflects the recent advances in saliency detection using CNNs. Different models available in the literature, such as static and dynamic 2D CNNs for salient object detection and 3D CNNs for salient event detection, are discussed in chronological order. It is worth noting that automatic salient event detection in long videos has become possible using the recently introduced 3D CNNs combined with 2D CNNs for salient audio detection. This article also presents a short description of public image and video datasets with annotated salient objects or events, as well as the metrics most often used to evaluate the results. Practical relevance: This survey is a contribution to the study of rapidly developing deep learning methods for saliency detection in images and videos.


2021 ◽  
Author(s):  
Hansi Hettiarachchi ◽  
Mariam Adedoyin-Olowe ◽  
Jagdev Bhogal ◽  
Mohamed Medhat Gaber

Abstract: Social media is becoming a primary medium for discussing what is happening around the world. The data generated by social media platforms therefore contain rich information describing ongoing events, and the timeliness of these data can provide immediate insights. However, given the dynamic nature and high volume of data production in social media streams, it is impractical to filter events manually, so automated event detection mechanisms are invaluable to the community. Apart from a few notable exceptions, most previous research on automated event detection has focused only on statistical and syntactic features of the data and has neglected the underlying semantics, which are important for effective information retrieval from text since they represent the connections between words and their meanings. In this paper, we propose a novel method, termed Embed2Detect, for event detection in social media that combines the characteristics of word embeddings with hierarchical agglomerative clustering. The adoption of word embeddings gives Embed2Detect the capability to incorporate powerful semantic features into event detection and to overcome a major limitation of previous approaches. We evaluated our method on two recent real social media data sets, representing the sports and political domains, and compared the results with several state-of-the-art methods. The results show that Embed2Detect is capable of effective and efficient event detection and outperforms recent event detection methods. For the sports data set, Embed2Detect achieved a 27% higher F-measure than the best-performing baseline, and for the political data set the increase was 29%.
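The core combination described above, grouping word embeddings with hierarchical agglomerative clustering to surface candidate event topics, can be sketched roughly as follows. This is a minimal illustration using SciPy's agglomerative clustering on toy vectors; the words, embeddings, cluster count, and distance settings are all hypothetical and do not reproduce Embed2Detect's actual pipeline.

```python
import numpy as np
from scipy.cluster.hierarchy import fcluster, linkage

# Toy "word embeddings": two semantic groups pointing in different
# directions (hypothetical stand-ins for learned embeddings).
rng = np.random.default_rng(0)
words = ["goal", "score", "match", "vote", "ballot", "poll"]
base = np.array([[1.0, 0.0, 0.0, 0.0]] * 3 + [[0.0, 1.0, 0.0, 0.0]] * 3)
emb = base + rng.normal(scale=0.05, size=base.shape)

# Hierarchical agglomerative clustering with cosine distance, then cut
# the dendrogram into two clusters (candidate event word groups).
Z = linkage(emb, method="average", metric="cosine")
labels = fcluster(Z, t=2, criterion="maxclust")

clusters = {}
for word, label in zip(words, labels):
    clusters.setdefault(label, []).append(word)
# Each cluster now groups semantically related words (sports terms
# vs. election terms here), from which event windows could be flagged.
```

In the method itself, clustering results from consecutive time windows would be compared to decide whether a new event has emerged; the sketch only shows the clustering step.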


Sensors ◽  
2020 ◽  
Vol 20 (18) ◽  
pp. 5272
Author(s):  
Nicole Zahradka ◽  
Khushboo Verma ◽  
Ahad Behboodi ◽  
Barry Bodt ◽  
Henry Wright ◽  
...  

Video- and sensor-based gait analysis systems are rapidly emerging for use in ‘real world’ scenarios outside of typical instrumented motion analysis laboratories. Unlike laboratory systems, such systems do not use kinetic data from force plates; instead, gait events such as initial contact (IC) and terminal contact (TC) are estimated from video and sensor signals. There are, however, detection errors inherent in kinematic gait event detection methods (GEDMs), and a comparative study between classic laboratory and video/sensor-based systems is warranted. For this study, three kinematic methods, the coordinate-based treadmill algorithm (CBTA), shank angular velocity (SK), and foot velocity algorithm (FVA), were compared to ‘gold standard’ force plate methods (GS) for determining IC and TC in adults (n = 6), typically developing children (n = 5), and children with cerebral palsy (n = 6). The root mean square error (RMSE) values for CBTA, SK, and FVA were 27.22, 47.33, and 78.41 ms, respectively. On average, gait events were detected earlier by CBTA and SK (CBTA: −9.54 ± 0.66 ms, SK: −33.41 ± 0.86 ms) and later by FVA (21.00 ± 1.96 ms). The statistical model demonstrated insensitivity to variations in group, side, and individuals. Of the three kinematic GEDMs, SK is best suited for sensor-based gait event detection.
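The shank angular velocity (SK) idea, reading gait events off peaks in the sagittal-plane angular velocity signal, can be sketched as follows. This is a toy illustration on a synthetic waveform: the signal shape, sampling rate, and thresholds are hypothetical, and published SK methods also locate terminal contact from the minimum preceding each swing peak, which this sketch omits.

```python
import numpy as np
from scipy.signal import find_peaks

fs = 100                      # hypothetical sampling rate (Hz)
t = np.arange(0, 4, 1 / fs)   # four 1-second gait cycles
# Crude synthetic shank angular velocity: one dominant mid-swing peak
# per cycle with a dip after it (illustrative, not measured data).
omega = np.sin(2 * np.pi * t) - 0.5 * np.sin(4 * np.pi * t)

# Mid-swing: the dominant positive peak of each gait cycle.
swing_peaks, _ = find_peaks(omega, height=0.5, distance=int(0.5 * fs))

# Initial contact (IC) candidates: the local minimum (negative peak)
# that follows each mid-swing peak within half a cycle.
minima, _ = find_peaks(-omega, distance=int(0.2 * fs))
ic = [m for m in minima
      if any(0 < m - p < int(0.5 * fs) for p in swing_peaks)]
```

Comparing such kinematic event times against force-plate events and taking the root mean square of the timing differences is what yields RMSE figures like those reported above; a negative mean difference corresponds to events being detected early.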

