A Controlled Benchmark of Video Violence Detection Techniques

Information ◽  
2020 ◽  
Vol 11 (6) ◽  
pp. 321
Author(s):  
Nicola Convertini ◽  
Vincenzo Dentamaro ◽  
Donato Impedovo ◽  
Giuseppe Pirlo ◽  
Lucia Sarcinella

This benchmarking study examines and discusses the current state-of-the-art techniques for in-video violence detection, and provides benchmarking results as an accuracy baseline for future violence detection systems. The authors review 11 techniques for in-video violence detection and re-implement five carefully chosen state-of-the-art techniques on three different, publicly available violence datasets, using several classifiers under identical conditions. The main contribution of this work is the comparison of feature-based violence detection techniques with modern deep learning techniques, such as Inception V3.
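
As a rough illustration of the deep-learning side of that comparison, the sketch below uses a pretrained Inception V3 as a fixed per-frame feature extractor whose pooled clip features feed a classical classifier. The frame loader, clip pooling and SVM choice are illustrative assumptions, not the authors' exact pipeline.

```python
# Minimal sketch: per-frame features from a pretrained Inception V3 fed to a
# classical classifier, as one point of comparison between feature-based and
# deep-learning violence detectors. Labels and frames are assumed to come from
# one of the public violence datasets (loader not shown).
import torch
import torchvision.models as models
import torchvision.transforms as T
from sklearn.svm import SVC

device = "cuda" if torch.cuda.is_available() else "cpu"

# Pretrained Inception V3 used as a fixed feature extractor (2048-d pooled features).
inception = models.inception_v3(weights=models.Inception_V3_Weights.DEFAULT)
inception.fc = torch.nn.Identity()          # drop the classification head
inception.eval().to(device)

preprocess = T.Compose([
    T.Resize((299, 299)),                   # Inception V3 expects 299x299 inputs
    T.ToTensor(),
    T.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

@torch.no_grad()
def clip_features(frames):                  # frames: list of PIL images from one clip
    batch = torch.stack([preprocess(f) for f in frames]).to(device)
    return inception(batch).mean(dim=0)     # average per-frame features over the clip

# X: one pooled feature vector per clip, y: violence / non-violence labels.
# clf = SVC(kernel="rbf").fit(X, y) would then play the role of the classical classifier.
```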

2021 ◽  
Vol 13 (19) ◽  
pp. 3836
Author(s):  
Clément Dechesne ◽  
Pierre Lassalle ◽  
Sébastien Lefèvre

In recent years, numerous deep learning techniques have been proposed to tackle the semantic segmentation of aerial and satellite images; they top the leaderboards of the main scientific contests and represent the current state of the art. Nevertheless, despite their promising results, these state-of-the-art techniques are still unable to provide results with the level of accuracy sought in real applications, i.e., in operational settings. It is therefore mandatory to qualify these segmentation results and to estimate the uncertainty introduced by a deep network. In this work, we address uncertainty estimation in semantic segmentation. To do so, we rely on a Bayesian deep learning method based on Monte Carlo Dropout, which allows us to derive uncertainty metrics along with the semantic segmentation. Built on the widely used U-Net architecture, our model achieves semantic segmentation with high accuracy on several state-of-the-art datasets. More importantly, uncertainty maps are also derived from our model. They allow a sounder qualitative evaluation of the segmentation results and also contain valuable information for improving the reference databases.
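
A minimal sketch of the Monte Carlo Dropout idea the abstract relies on: dropout layers are kept active at inference, several stochastic forward passes are averaged for the prediction, and their spread yields a per-pixel uncertainty map. The model argument stands for any U-Net-style segmentation network containing dropout layers; the number of passes and the entropy-based uncertainty measure are illustrative choices.

```python
# Minimal sketch of Monte Carlo Dropout for segmentation uncertainty:
# dropout stays active at test time, several stochastic forward passes are
# averaged for the prediction, and their spread gives a per-pixel uncertainty map.
import torch
import torch.nn as nn

def enable_mc_dropout(model: nn.Module) -> None:
    """Put only the Dropout layers in train mode so they keep sampling at test time."""
    model.eval()
    for m in model.modules():
        if isinstance(m, (nn.Dropout, nn.Dropout2d)):
            m.train()

@torch.no_grad()
def mc_dropout_predict(model: nn.Module, image: torch.Tensor, passes: int = 20):
    """image: [B, C, H, W]. Returns the segmentation map and a predictive-entropy map."""
    enable_mc_dropout(model)
    probs = torch.stack(
        [torch.softmax(model(image), dim=1) for _ in range(passes)], dim=0
    )                                        # [passes, B, classes, H, W]
    mean_probs = probs.mean(dim=0)           # Monte Carlo estimate of the prediction
    entropy = -(mean_probs * mean_probs.clamp_min(1e-8).log()).sum(dim=1)
    return mean_probs.argmax(dim=1), entropy # per-pixel labels and uncertainty
```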


2019 ◽  
Vol 11 (12) ◽  
pp. 1499 ◽  
Author(s):  
David Griffiths ◽  
Jan Boehm

Over the past decade, deep learning has driven progress in 2D image understanding. Despite these advancements, techniques for automatically understanding 3D sensed data, such as point clouds, are comparatively immature. However, with a range of important applications from indoor robotics navigation to national-scale remote sensing, there is a high demand for algorithms that can learn to automatically understand and classify 3D sensed data. In this paper, we review the current state-of-the-art deep learning architectures for processing unstructured Euclidean data. We begin by addressing the background concepts and traditional methodologies. We then review the current main approaches, including RGB-D, multi-view, volumetric and fully end-to-end architecture designs. Datasets for each category are documented and explained. Finally, we give a detailed discussion of the future of deep learning for 3D sensed data, using the literature to justify the areas where future research would be most valuable.
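
As a small illustration of the volumetric family of approaches covered by such reviews, the sketch below rasterises an unstructured point cloud into a fixed occupancy grid that a 3D CNN could consume; the resolution and normalisation are illustrative assumptions.

```python
# Minimal sketch of the volumetric approach: an unstructured point cloud is
# rasterised into a fixed occupancy grid so a standard 3D CNN can process it.
import numpy as np

def voxelize(points: np.ndarray, resolution: int = 32) -> np.ndarray:
    """points: [N, 3] array of XYZ coordinates -> [res, res, res] occupancy grid."""
    mins, maxs = points.min(axis=0), points.max(axis=0)
    scaled = (points - mins) / np.maximum(maxs - mins, 1e-8)   # normalise to [0, 1]
    idx = np.clip((scaled * (resolution - 1)).astype(int), 0, resolution - 1)
    grid = np.zeros((resolution, resolution, resolution), dtype=np.float32)
    grid[idx[:, 0], idx[:, 1], idx[:, 2]] = 1.0                # mark occupied voxels
    return grid

# A VoxNet-style 3D CNN would then take grid[None, None] (batch and channel dims) as input.
```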


2020 ◽  
Vol 13 (1) ◽  
pp. 48-60
Author(s):  
Razvan Rosu ◽  
Alexandru Stefan Stoica ◽  
Paul Stefan Popescu ◽  
Marian Cristian Mihaescu

Plagiarism detection is an application domain of NLP that has received little attention in the context of recently developed attention mechanisms and sentence transformers. In this paper, we present a plagiarism detection approach that uses state-of-the-art deep learning techniques to provide more accurate results than classical plagiarism detection techniques. Because it relies on attention mechanisms and aims at text encoding and contextualization, this approach goes beyond classical word searching and matching, which is time-consuming and easily cheated. To gain proper insight into the system, we investigate three approaches so that the results are relevant and well validated. The experimental results show that the system using the pre-trained BERT model offers the best results, outperforming GloVe and RoBERTa.
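
A minimal sketch of the encode-and-compare idea behind such transformer-based detectors: sentences are embedded with a pretrained BERT model and compared by cosine similarity rather than literal word matching. The model name, mean pooling and threshold are illustrative assumptions, not the authors' exact configuration.

```python
# Minimal sketch: contextual sentence embeddings from a pretrained BERT model,
# compared by cosine similarity instead of literal word matching.
import torch
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
encoder = AutoModel.from_pretrained("bert-base-uncased").eval()

@torch.no_grad()
def embed(sentence: str) -> torch.Tensor:
    tokens = tokenizer(sentence, return_tensors="pt", truncation=True)
    hidden = encoder(**tokens).last_hidden_state           # [1, seq_len, 768]
    mask = tokens["attention_mask"].unsqueeze(-1)           # ignore padding positions
    return (hidden * mask).sum(dim=1) / mask.sum(dim=1)     # mean-pooled sentence vector

def similarity(a: str, b: str) -> float:
    return torch.cosine_similarity(embed(a), embed(b)).item()

# A sentence pair scoring above some tuned threshold (e.g. 0.9) would be
# flagged as a plagiarism candidate.
```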


Sensors ◽  
2020 ◽  
Vol 20 (19) ◽  
pp. 5577
Author(s):  
Amos Azaria ◽  
Keren Nivasch

Intelligent agents that can interact with users using natural language are becoming increasingly common. Sometimes an intelligent agent may not correctly understand a user command or may not perform it properly. In such cases, the user might try a second time by giving the agent another, slightly different command. Giving an agent the ability to detect such user corrections might help it fix its own mistakes and avoid making them in the future. In this work, we consider the problem of automatically detecting user corrections using deep learning. We develop a multimodal architecture called SAIF, which detects such user corrections, taking as inputs the user’s voice commands as well as their transcripts. Voice inputs allow SAIF to take advantage of sound cues, such as tone, speed, and word emphasis. In addition to sound cues, our model uses transcripts to determine whether a command is a correction to the previous command. Our model also obtains internal input from the agent, indicating whether the previous command was executed successfully or not. Finally, we release a unique dataset in which users interacted with an intelligent agent assistant, by giving it commands. This dataset includes labels on pairs of consecutive commands, which indicate whether the latter command is in fact a correction of the former command. We show that SAIF outperforms current state-of-the-art methods on this dataset.
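
A minimal sketch of the kind of multimodal fusion the abstract describes: acoustic features of the command, an embedding of its transcript, and a flag indicating whether the previous command succeeded are concatenated and classified as correction versus non-correction. All dimensions and encoders here are assumptions, not the published SAIF architecture.

```python
# Minimal sketch of a multimodal correction detector: audio features, a
# transcript embedding, and an execution-success flag are fused and classified.
import torch
import torch.nn as nn

class CorrectionDetector(nn.Module):
    def __init__(self, audio_dim: int = 128, text_dim: int = 768):
        super().__init__()
        self.classifier = nn.Sequential(
            nn.Linear(audio_dim + text_dim + 1, 256),  # +1 for the execution-success flag
            nn.ReLU(),
            nn.Dropout(0.3),
            nn.Linear(256, 2),                          # correction / not a correction
        )

    def forward(self, audio_feats, text_embedding, prev_success):
        # audio_feats: [B, audio_dim], text_embedding: [B, text_dim], prev_success: [B, 1]
        fused = torch.cat([audio_feats, text_embedding, prev_success], dim=-1)
        return self.classifier(fused)
```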


Sensors ◽  
2020 ◽  
Vol 20 (15) ◽  
pp. 4161
Author(s):  
Aamir Khan ◽  
Weidong Jin ◽  
Muqeet Ahmad ◽  
Rizwan Ali Naqvi ◽  
Desheng Wang

Image-to-image conversion based on deep learning techniques is a topic of interest in the fields of robotics and computer vision. A series of typical tasks, such as applying semantic labels to building photos, turning edges into photos, and converting rainy images into de-rained ones, can be seen as paired image-to-image conversion problems. In such problems, the image generation network learns from the information contained in the input images. The input images and the corresponding target images must share the same basic structure for the network to perfectly generate target-oriented output images. However, the shared basic structure between paired images is not as ideal as assumed, which can significantly affect the output of the generating model. Therefore, we propose a novel Input-Perceptual and Reconstruction Adversarial Network (IP-RAN) as an all-purpose framework for imperfect paired image-to-image conversion problems. We demonstrate, through experimental results, that our IP-RAN method significantly outperforms the current state-of-the-art techniques.
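
For context, the sketch below shows a generic paired image-to-image adversarial training step of the kind IP-RAN builds on: a generator maps the input image to the target domain while a discriminator judges (input, output) pairs, combining adversarial and L1 reconstruction losses. The input-perceptual term specific to IP-RAN is not reproduced here.

```python
# Minimal sketch of a paired image-to-image adversarial training step:
# the discriminator sees (input, image) pairs; the generator is trained with
# an adversarial loss plus an L1 reconstruction loss against the paired target.
import torch
import torch.nn as nn

def training_step(G, D, opt_g, opt_d, x, y, lambda_rec: float = 100.0):
    """x: input images, y: paired target images, both [B, C, H, W]."""
    bce, l1 = nn.BCEWithLogitsLoss(), nn.L1Loss()
    fake = G(x)

    # Discriminator: real pairs vs generated pairs.
    d_real = D(torch.cat([x, y], dim=1))
    d_fake = D(torch.cat([x, fake.detach()], dim=1))
    loss_d = bce(d_real, torch.ones_like(d_real)) + bce(d_fake, torch.zeros_like(d_fake))
    opt_d.zero_grad(); loss_d.backward(); opt_d.step()

    # Generator: fool the discriminator and stay close to the paired target.
    d_fake = D(torch.cat([x, fake], dim=1))
    loss_g = bce(d_fake, torch.ones_like(d_fake)) + lambda_rec * l1(fake, y)
    opt_g.zero_grad(); loss_g.backward(); opt_g.step()
    return loss_d.item(), loss_g.item()
```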


2021 ◽  
Vol 8 (1) ◽  
pp. 60-70
Author(s):  
Usama Arshad

In the last decade, object detection has been one of the interesting topics that played an important role in revolutionizing the present era. Especially in computer vision, object detection is a challenging and fundamental problem. Researchers in the last decade have enhanced object detection and made many advanced discoveries by exploiting technological progress. When we talk about object detection, we must also talk about deep learning and its advancements over time. This research work describes the advancements in object detection over the last 10 years (2010-2020). Different papers published in the last 10 years related to object detection and its types are discussed with respect to their role in the advancement of object detection. This work also describes different types of object detection, including text detection and face detection, and it clearly describes the changes in object detection techniques over the last 10 years. Object detection is divided into two groups: general detection and task-based detection. General detection is discussed chronologically, along with its different variants, while task-based detection covers many state-of-the-art algorithms and techniques according to the task. We also provide a basic comparison of how some algorithms and techniques have been updated and have played a major role in the advancement of different fields related to object detection. We conclude that the most important advancements happened in the last decade and that the future promises much more advancement in object detection on the basis of the work done in this decade.


Sensors ◽  
2021 ◽  
Vol 21 (13) ◽  
pp. 4486
Author(s):  
Niall O’Mahony ◽  
Sean Campbell ◽  
Lenka Krpalkova ◽  
Anderson Carvalho ◽  
Joseph Walsh ◽  
...  

Fine-grained change detection in sensor data is very challenging for artificial intelligence though it is critically important in practice. It is the process of identifying differences in the state of an object or phenomenon where the differences are class-specific and are difficult to generalise. As a result, many recent technologies that leverage big data and deep learning struggle with this task. This review focuses on the state-of-the-art methods, applications, and challenges of representation learning for fine-grained change detection. Our research focuses on methods of harnessing the latent metric space of representation learning techniques as an interim output for hybrid human-machine intelligence. We review methods for transforming and projecting embedding space such that significant changes can be communicated more effectively and a more comprehensive interpretation of underlying relationships in sensor data is facilitated. We conduct this research in our work towards developing a method for aligning the axes of latent embedding space with meaningful real-world metrics so that the reasoning behind the detection of change in relation to past observations may be revealed and adjusted. This is an important topic in many fields concerned with producing more meaningful and explainable outputs from deep learning and also for providing means for knowledge injection and model calibration in order to maintain user confidence.
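
One possible, simplified way to harness the latent metric space described above: embeddings of successive sensor observations are projected onto a few axes and a fine-grained change is flagged when a new observation drifts away from its recent history. The PCA projection and threshold below are illustrative only and are not the authors' method.

```python
# Minimal sketch: project embeddings of past observations onto a few axes and
# flag change when the new observation drifts away from its recent history.
import numpy as np
from sklearn.decomposition import PCA

def detect_change(history: np.ndarray, new_embedding: np.ndarray,
                  n_axes: int = 3, threshold: float = 2.0) -> bool:
    """history: [T, D] past embeddings, new_embedding: [D] latest embedding."""
    pca = PCA(n_components=n_axes).fit(history)        # axes of the latent metric space
    past = pca.transform(history)
    new = pca.transform(new_embedding[None])[0]
    # Distance of the new observation from the centre of past behaviour,
    # scaled by the spread along each retained axis.
    z = np.abs(new - past.mean(axis=0)) / (past.std(axis=0) + 1e-8)
    return bool(z.max() > threshold)
```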


Agronomy ◽  
2021 ◽  
Vol 11 (4) ◽  
pp. 646
Author(s):  
Bini Darwin ◽  
Pamela Dharmaraj ◽  
Shajin Prince ◽  
Daniela Elena Popescu ◽  
Duraisamy Jude Hemanth

Precision agriculture is a crucial way to achieve greater yields by utilizing the natural deposits in a diverse environment. The yield of a crop may vary from year to year depending on variations in climate, soil parameters and the fertilizers used. Automation in the agricultural industry moderates the usage of resources and can increase the quality of food in the post-pandemic world. Agricultural robots have been developed for crop seeding, monitoring, weed control, pest management and harvesting. Physical counting of fruitlets, flowers or fruits at various phases of growth is a labour-intensive as well as expensive procedure for crop yield estimation. Remote sensing technologies offer accuracy and reliability in crop yield prediction and estimation. The automation of image analysis with computer vision and deep learning models provides precise field and yield maps. In this review, it has been observed that the application of deep learning techniques provides better accuracy for smart farming. The crops taken for the study are fruits such as grapes, apples, citrus and tomatoes, and crops such as sugarcane, corn, soybean, cucumber, maize and wheat. The research works covered in this paper are available as products for applications such as robotic harvesting, weed detection and pest infestation. The methods based on conventional deep learning techniques provide an average accuracy of 92.51%. This paper elucidates the diverse automation approaches for crop yield detection techniques with virtual analysis and classifier approaches. Technical limitations of the deep learning techniques and directions for future investigation are also surveyed. This work highlights the machine vision and deep learning models that need to be explored to improve automated precision farming, especially during this pandemic.


2020 ◽  
Vol 10 (1) ◽  
Author(s):  
André D. Gomes ◽  
Jens Kobelke ◽  
Jörg Bierlich ◽  
Jan Dellith ◽  
Manfred Rothhardt ◽  
...  

The optical Vernier effect consists of overlapping the responses of a sensing and a reference interferometer with slightly shifted interferometric frequencies. The beating modulation thus generated presents a highly magnified sensitivity and resolution compared to the sensing interferometer, provided the two interferometers are slightly out of tune with each other. However, the outcome of such a condition is a large beating modulation that is immeasurable by conventional detection systems due to practical limitations of the usable spectral range. We propose a method to surpass this limitation by using a few-mode sensing interferometer instead of a single-mode one. The overlapped responses of the different modes produce a measurable envelope, whilst preserving an extremely high magnification factor, an order of magnitude higher than current state-of-the-art performances. Furthermore, we demonstrate the application of this method in the development of a giant-sensitivity fibre refractometer with a sensitivity of around 500 µm/RIU (refractive index unit) and a magnification factor over 850.
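
For reference, the magnification factor of the optical Vernier effect is conventionally written in terms of the free spectral ranges (FSRs) of the sensing and reference interferometers; the expression below is a common textbook form, not necessarily the notation used by the authors.

```latex
% One common form of the Vernier magnification factor: it grows without bound as
% the two free spectral ranges approach each other, i.e. as the interferometers
% become only slightly detuned from one another.
M = \frac{\mathrm{FSR}_{\mathrm{ref}}}{\left|\mathrm{FSR}_{\mathrm{sens}} - \mathrm{FSR}_{\mathrm{ref}}\right|}
```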


Recently, DDoS attacks have become one of the most significant threats in network security. Both industry and academia are currently debating how to detect and protect against DDoS attacks. Many studies have been conducted to detect these types of attacks. Deep learning techniques are among the most suitable and efficient algorithms for categorizing normal and attack data. Hence, a deep neural network approach is proposed in this study to mitigate DDoS attacks effectively. We used a deep learning neural network to identify and classify traffic as benign or as one of four different DDoS attack types: Slowloris, Slowhttptest, DDoS Hulk, and GoldenEye. The rest of the paper is organized as follows: Section 2 reviews the related work, Section 3 presents the problem statement, Section 4 describes the proposed methodology, Section 5 illustrates the results of the proposed methodology and shows how it outperforms state-of-the-art work, and finally Section 6 concludes the paper.
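
A minimal sketch of the kind of feed-forward classifier the study describes: scaled flow-level features in, a softmax over benign traffic plus the four DDoS classes. The feature count and layer sizes are assumptions, not the paper's exact configuration.

```python
# Minimal sketch: a feed-forward network classifying flow features as benign
# or one of the four DDoS attack types named in the abstract.
import torch
import torch.nn as nn

N_FEATURES = 78          # number of flow-level features (assumed, dataset-dependent)
CLASSES = ["benign", "slowloris", "slowhttptest", "hulk", "goldeneye"]

model = nn.Sequential(
    nn.Linear(N_FEATURES, 128), nn.ReLU(), nn.Dropout(0.2),
    nn.Linear(128, 64), nn.ReLU(),
    nn.Linear(64, len(CLASSES)),              # logits for the five traffic classes
)

criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

def train_step(x, y):
    """x: [B, N_FEATURES] scaled flow features, y: [B] class indices."""
    optimizer.zero_grad()
    loss = criterion(model(x), y)
    loss.backward()
    optimizer.step()
    return loss.item()
```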

