Detection of Target Persons Using Deep Learning and Training Data Generation for Tsukuba Challenge

2018 ◽  
Vol 30 (4) ◽  
pp. 513-522 ◽  
Author(s):  
Yuichi Konishi ◽  
Kosuke Shigematsu ◽  
Takashi Tsubouchi ◽  
Akihisa Ohya

The Tsukuba Challenge is an open experimental competition held annually since 2007, in which autonomous navigation robots developed by the participants must navigate through an urban setting where pedestrians and cyclists are present. One of the required tasks in the Tsukuba Challenge from 2013 to 2017 was to search for persons wearing designated clothes within the search area. This is a very difficult task, since it is necessary to seek out these persons in an environment that includes regular pedestrians and whose lighting changes easily with the weather. Moreover, the recognition system must have a low computational cost because of the limited performance of the computer mounted on the robot. In this study, we focused on a deep learning method for detecting the target persons in captured images. The developed detection system was expected to achieve high detection performance even when small-sized input images were used for deep learning. Experiments demonstrated that the proposed system achieved better performance than an existing object detection network. Because deep learning requires a vast amount of training data, a method of generating training data for the detection of target persons is also discussed in this paper.
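As a hypothetical illustration of how detection quality is commonly scored in tasks like this (the paper does not publish its evaluation code), the intersection-over-union (IoU) between a predicted and a ground-truth bounding box can be computed as:

```python
def iou(box_a, box_b):
    # Boxes given as (x1, y1, x2, y2) corner coordinates.
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)  # overlap area (0 if disjoint)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter)
```

A detection is then typically counted as correct when its IoU with a ground-truth box exceeds a threshold such as 0.5.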

2021 ◽  
Vol 7 (3) ◽  
pp. 59
Author(s):  
Yohanna Rodriguez-Ortega ◽  
Dora M. Ballesteros ◽  
Diego Renza

With the exponential growth of high-quality fake images in social networks and media, it is necessary to develop recognition algorithms for this type of content. One of the most common types of image and video editing consists of duplicating areas of the image, known as the copy-move technique. Traditional image processing approaches manually look for patterns related to the duplicated content, limiting their use in mass data classification. In contrast, approaches based on deep learning have shown better performance and promising results, but they present generalization problems, with a high dependence on training data and the need for appropriate selection of hyperparameters. To overcome this, we propose two approaches that use deep learning: a model with a custom architecture and a model based on transfer learning. In each case, the impact of the depth of the network is analyzed in terms of precision (P), recall (R), and F1 score. Additionally, the problem of generalization is addressed with images from eight different open access datasets. Finally, the models are compared in terms of evaluation metrics and of training and inference times. The transfer learning model based on VGG-16 achieves metrics about 10% higher than the custom-architecture model; however, it requires approximately twice as much inference time.
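The reported metrics follow directly from raw counts; a minimal sketch of precision, recall, and F1 computed from true positives, false positives, and false negatives (standard definitions, not code from the paper):

```python
def prf1(tp, fp, fn):
    # Precision: fraction of predicted positives that are correct.
    precision = tp / (tp + fp) if tp + fp else 0.0
    # Recall: fraction of actual positives that were found.
    recall = tp / (tp + fn) if tp + fn else 0.0
    # F1: harmonic mean of precision and recall.
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return precision, recall, f1
```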


Forecasting ◽  
2021 ◽  
Vol 3 (4) ◽  
pp. 741-762
Author(s):  
Panagiotis Stalidis ◽  
Theodoros Semertzidis ◽  
Petros Daras

In this paper, a detailed study on crime classification and prediction using deep learning architectures is presented. We examine the effectiveness of deep learning algorithms in this domain and provide recommendations for designing and training deep learning systems to predict crime areas, using open data from police reports. With time-series of crime types per location as training data, a comparative study of 10 state-of-the-art methods against 3 different deep learning configurations is conducted. In our experiments with 5 publicly available datasets, we demonstrate that the deep learning-based methods consistently outperform the existing best-performing methods. Moreover, we evaluate the effectiveness of different parameters in the deep learning architectures and give insights for configuring them to achieve improved performance in crime classification and, finally, crime prediction.
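A sketch of the kind of preprocessing this setup implies, turning a log of (location, week, crime type) events into per-location count time-series usable as training data (the record format and function name here are hypothetical; the paper's actual pipeline is not shown):

```python
from collections import defaultdict

def build_time_series(events, n_cells, n_weeks):
    # events: iterable of (cell_id, week_index, crime_type) records.
    # Returns one count-per-week series per spatial cell.
    series = defaultdict(lambda: [0] * n_weeks)
    for cell, week, _crime_type in events:
        series[cell][week] += 1
    return [series[c] for c in range(n_cells)]
```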


2020 ◽  
Vol 9 (05) ◽  
pp. 25052-25056
Author(s):  
Abhi Kadam ◽  
Anupama Mhatre ◽  
Sayali Redasani ◽  
Amit Nerurkar

Current lighting technologies extend the options for changing the appearance of rooms and closed spaces, creating ambiences with an affective meaning. Using intelligence, these ambiences may be instantly adapted to the needs of the room's occupant(s), possibly improving their well-being. In this paper, we actuate the lighting in the surroundings using mood detection. We analyze the mood of the person by facial emotion recognition using a deep learning model, namely a Convolutional Neural Network (CNN). On recognizing an emotion, the system actuates the surrounding lighting in accordance with the mood. Based on the implementation results, the system needs further development by adding more specific data classes and training data.
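A minimal sketch of the actuation step, mapping a recognized emotion label to an RGB lighting value. The palette and label names below are hypothetical; the paper does not specify them:

```python
# Hypothetical emotion-to-RGB palette; not taken from the paper.
MOOD_TO_LIGHT = {
    "happy": (255, 220, 150),    # warm bright light
    "sad": (80, 120, 255),       # cool blue
    "angry": (180, 255, 180),    # calming green
    "neutral": (255, 255, 255),  # plain white
}

def light_for(emotion):
    # Fall back to neutral lighting for unrecognized labels.
    return MOOD_TO_LIGHT.get(emotion, MOOD_TO_LIGHT["neutral"])
```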


Author(s):  
Steven Wandale ◽  
Koichi Ichige

This paper introduces an enhanced deep learning-based (DL) antenna selection approach for optimum sparse linear array selection in direction-of-arrival (DOA) estimation applications. Generally, the antenna selection problem yields a combination of subarrays as a solution. Previous DL-based methods designated these subarrays as classes so as to cast the problem as a classification problem, which a convolutional neural network (CNN) is then employed to solve. However, these methods sample the combination set randomly to reduce the computational cost of generating training data, which often leads to sub-optimal solutions due to ill-sampling issues. Hence, in this paper, we propose an improved DL-based method that constrains the combination set to retain only hole-free subarrays, enhancing both the method's performance and the sparsity of the rendered subarrays. Numerical examples show that the proposed method yields sparser subarrays with better beampattern properties and improved DOA estimation performance compared with conventional DL techniques.
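The hole-free constraint can be checked directly: a linear subarray is hole-free when its difference coarray contains every integer lag from zero up to the aperture. A minimal sketch (positions and function name are illustrative, not from the paper):

```python
def is_hole_free(subarray):
    # subarray: sensor positions on an integer grid, e.g. (0, 1, 4, 6).
    diffs = {abs(a - b) for a in subarray for b in subarray}
    aperture = max(subarray) - min(subarray)
    # Hole-free: every lag 0..aperture appears in the difference coarray.
    return diffs == set(range(aperture + 1))
```

For example, the 4-element array (0, 1, 4, 6) is hole-free, while (0, 1, 4) is not, since lag 2 is missing from its coarray.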


2020 ◽  
Vol 4 (2) ◽  
pp. 40-49
Author(s):  
Harianto Harianto ◽  
Andi Sunyoto ◽  
Sudarmawan Sudarmawan ◽  
...  

Securing a system or network against parties who do not have access rights is of primary importance. To realize a system, data store, or network that is safe from unauthorized users and other interference, a mechanism is needed to detect intrusions. An Intrusion Detection System (IDS) is a method that can be used to detect suspicious activity in a system or network. Classification algorithms from artificial intelligence can be applied to this problem; one of the many available is Naïve Bayes. This study aims to optimize Naïve Bayes using Univariate Selection on the UNSW-NB 15 dataset, taking only the 40 features with the best relevance. The dataset is then divided into test and training data at ratios of 10%:90%, 20%:80%, 30%:70%, 40%:60%, and 50%:50%. The experiments show that feature selection has a noticeable effect on the accuracy obtained. The highest accuracy is achieved at the 40%:60% split, both with and without feature selection: Naïve Bayes without feature selection obtained a top accuracy of 91.43%, while with feature selection it reached 91.62%, so feature selection increased accuracy by 0.19 percentage points.
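A minimal sketch of univariate feature selection, scoring each feature independently against the label and keeping the top k indices. Absolute Pearson correlation is used as the score here; the paper's exact univariate criterion is not stated, so that choice is an assumption:

```python
import numpy as np

def select_top_k(X, y, k):
    # Score each feature by |Pearson correlation| with the label
    # (one possible univariate criterion; assumed, not from the paper).
    Xc = X - X.mean(axis=0)
    yc = y - y.mean()
    denom = np.linalg.norm(Xc, axis=0) * np.linalg.norm(yc)
    # Guard against constant features (zero norm) to avoid division by zero.
    scores = np.abs(Xc.T @ yc) / np.where(denom == 0, 1, denom)
    return np.argsort(scores)[::-1][:k]  # indices of the k best features
```

The study would call this with k=40 on the UNSW-NB 15 feature matrix before training the classifier.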


Healthcare ◽  
2022 ◽  
Vol 10 (1) ◽  
pp. 166
Author(s):  
Mohamed Mouhafid ◽  
Mokhtar Salah ◽  
Chi Yue ◽  
Kewen Xia

Novel coronavirus (COVID-19) has been endangering human health and life since 2019. The timely quarantine, diagnosis, and treatment of infected people are the most necessary and important work. The most widely used method of detecting COVID-19 is real-time polymerase chain reaction (RT-PCR). Along with RT-PCR, computed tomography (CT) has become a vital technique in diagnosing and managing COVID-19 patients. COVID-19 reveals a number of radiological signatures that can be easily recognized through chest CT. These signatures must be analyzed by radiologists. It is, however, an error-prone and time-consuming process. Deep Learning-based methods can be used to perform automatic chest CT analysis, which may shorten the analysis time. The aim of this study is to design a robust and rapid medical recognition system to identify positive cases in chest CT images using three Ensemble Learning-based models. There are several techniques in Deep Learning for developing a detection system. In this paper, we employed Transfer Learning. With this technique, we can apply the knowledge obtained from a pre-trained Convolutional Neural Network (CNN) to a different but related task. In order to ensure the robustness of the proposed system for identifying positive cases in chest CT images, we used two Ensemble Learning methods namely Stacking and Weighted Average Ensemble (WAE) to combine the performances of three fine-tuned Base-Learners (VGG19, ResNet50, and DenseNet201). For Stacking, we explored 2-Levels and 3-Levels Stacking. The three generated Ensemble Learning-based models were trained on two chest CT datasets. A variety of common evaluation measures (accuracy, recall, precision, and F1-score) are used to perform a comparative analysis of each method. 
The experimental results show that the WAE method provides the most reliable performance, achieving a high recall value, which is a desirable outcome in medical applications since failing to identify a truly infected patient poses a greater risk.
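The Weighted Average Ensemble step itself is straightforward: the class-probability outputs of the base learners are combined with normalized weights. A minimal numpy sketch (the weights and array shapes here are illustrative, not the paper's fitted values):

```python
import numpy as np

def weighted_average_ensemble(prob_lists, weights):
    # prob_lists: per-model class-probability arrays of identical shape
    # (n_samples, n_classes); weights: one scalar per model.
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()                          # normalize weights to sum to 1
    stacked = np.stack(prob_lists)           # (n_models, n_samples, n_classes)
    return np.tensordot(w, stacked, axes=1)  # (n_samples, n_classes)
```

The final class for each sample is then the argmax of the combined probabilities.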


Khazanah ◽  
2020 ◽  
Vol 12 (2) ◽  
Author(s):  
Xosya Salassa ◽  
Wais Al Qarni ◽  
Trional Novanza ◽  
Fahmi Guntara Diasa ◽  
...  

Indonesia is an agrarian country whose people mostly work in agriculture, a sector that contributes the third-largest share of GDP. On the other hand, the main problem in agriculture is the spread of pests and diseases among crops. Some crops are attacked by diseases whose symptoms are not obvious to farmers; for example, citrus plants attacked by CVPD initially show few symptoms, making them difficult to distinguish from healthy plants. Given these problems, early detection and identification of plant diseases are the main factors in preventing and reducing their spread. This study uses a deep learning method with a Convolutional Neural Network (CNN) algorithm. The dataset comes from PlantVillage, with a total of 20,639 leaf image files classified into their respective classes. The model architecture follows DenseNet121, with parameters adjusted to improve accuracy. The input image size is 64×64 (training data shape (20639, 64, 64, 3)), and models were trained for 50, 100, and 150 epochs. The input uses 4 layers with shape (64, 64, 3), followed by DenseNet121 (output shape 1024), GlobalAveragePooling2D (1024), batch normalization (1024), dropout (1024), a dense layer (256), batch normalization (256), and a final dense output layer (15). Three epoch settings were tested to find the best accuracy. Training for 50, 100, and 150 epochs produces an average model accuracy of 99.38% with an average loss of 0.019, while the corresponding test results average 95.16% accuracy with an average loss of 0.20.
Based on the applied algorithm, the resulting training accuracy is 99.58% and the testing accuracy 96.41%, so the designed application can accurately detect diseases in plants from leaf imagery.
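The GlobalAveragePooling2D layer in the head described above reduces each feature map to its spatial mean, collapsing DenseNet121's spatial output into a single 1024-dimensional vector per image; a minimal numpy sketch of that operation:

```python
import numpy as np

def global_average_pooling_2d(feature_maps):
    # feature_maps: (batch, height, width, channels) -> (batch, channels);
    # each channel is replaced by the mean over its spatial positions.
    return feature_maps.mean(axis=(1, 2))
```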


2017 ◽  
Author(s):  
Pooya Mobadersany ◽  
Safoora Yousefi ◽  
Mohamed Amgad ◽  
David A Gutman ◽  
Jill S Barnholtz-Sloan ◽  
...  

Cancer histology reflects underlying molecular processes and disease progression, and contains rich phenotypic information that is predictive of patient outcomes. In this study, we demonstrate a computational approach for learning patient outcomes from digital pathology images using deep learning to combine the power of adaptive machine learning algorithms with traditional survival models. We illustrate how this approach can integrate information from both histology images and genomic biomarkers to predict time-to-event patient outcomes, and demonstrate performance surpassing the current clinical paradigm for predicting the survival of patients diagnosed with glioma. We also provide techniques to visualize the tissue patterns learned by these deep learning survival models, and establish a framework for addressing intratumoral heterogeneity and training data deficits.
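Combining a deep network with a traditional survival model usually means training the network's risk score with the Cox negative log partial likelihood; a minimal numpy sketch of that loss, assuming no tied event times (an assumption for simplicity, not the paper's exact formulation):

```python
import numpy as np

def cox_neg_log_partial_likelihood(risk, time, event):
    # risk: predicted log-risk per patient; time: survival time;
    # event: 1 if death observed, 0 if censored.
    order = np.argsort(-time)                   # sort by descending survival time
    risk, event = risk[order], event[order]
    # After sorting, the risk set of patient i is patients 0..i;
    # accumulate log(sum(exp(risk))) over those risk sets stably.
    log_cumsum = np.logaddexp.accumulate(risk)
    return -np.sum((risk - log_cumsum)[event == 1])
```

Minimizing this loss with respect to the network weights producing `risk` is what ties the deep model to the classical Cox framework.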


Author(s):  
He Xu ◽  
Leixian Shen ◽  
Qingyun Zhang ◽  
Guoxu Cao

Accidental fall detection for the elderly who live alone can minimize the risk of death and injuries. In this article, we present a new fall detection method based on deep learning and images, in which a human body recognition model, DeeperCut, is used. First, a camera is used to obtain the detection source data, and the video is then split into frames that can be input into the DeeperCut model. The human keypoint data in the output maps, together with the picture labels, are used as training data for the fall detection neural network. The trained model then judges whether subsequent pictures show a fall. In addition, the fall detection system is designed and implemented using Raspberry Pi hardware in a local network environment. The presented method obtains a 100% fall detection rate in the experimental environment. The false positive rate on the test set is around 1.95%, which is very low and can be tolerated because falls are confirmed via SMS, WeChat, and other SNS tools. Experimental results show that the proposed fall behavior recognition is effective and feasible to deploy in a home environment.
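A hypothetical sketch of the kind of geometric cue a keypoint-based fall judgment can exploit; the paper trains a neural network on the keypoint data rather than hand-coding a rule, so this heuristic is illustrative only:

```python
def looks_fallen(keypoints):
    # keypoints: dict of body part -> (x, y) in image coordinates,
    # with y growing downward. Hypothetical heuristic: a fall makes the
    # body's extent mostly horizontal rather than vertical.
    head_y = keypoints["head"][1]
    ankle_y = keypoints["ankle"][1]
    xs = [p[0] for p in keypoints.values()]
    vertical = abs(ankle_y - head_y)
    horizontal = max(xs) - min(xs)
    return vertical < horizontal
```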


Sensors ◽  
2020 ◽  
Vol 20 (20) ◽  
pp. 5731 ◽  
Author(s):  
Xiu-Zhi Chen ◽  
Chieh-Min Chang ◽  
Chao-Wei Yu ◽  
Yen-Lin Chen

Numerous vehicle detection methods have been proposed to obtain trustworthy traffic data for the development of intelligent traffic systems. Most of these methods perform sufficiently well under common scenarios, such as sunny or cloudy days; however, detection accuracy drastically decreases under various bad weather conditions, such as rainy days or days with glare, which typically occurs during sunset. This study proposes a vehicle detection system with a visibility complementation module that improves detection accuracy under various bad weather conditions. Furthermore, the proposed system can be implemented without retraining the deep learning models for object detection under different weather conditions. The visibility complementation is obtained through a dark channel prior and a convolutional encoder–decoder deep learning network with dual residual blocks to resolve the different effects of different bad weather conditions. We validated our system on multiple surveillance videos by detecting vehicles with the You Only Look Once (YOLOv3) deep learning model and demonstrated that the computational time of our system reaches 30 fps on average; moreover, the accuracy increased not only by nearly 5% under low-contrast scene conditions but also by 50% under rainy scene conditions. These demonstrations indicate that our approach can detect vehicles under various bad weather conditions without retraining a new model.
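The dark channel prior used in the visibility complementation assigns each pixel the minimum intensity over the color channels within a local patch (in haze-free outdoor images this value is close to zero, which is what makes it useful for dehazing); a minimal numpy sketch, with a naive loop rather than an optimized filter:

```python
import numpy as np

def dark_channel(img, patch=3):
    # img: (H, W, 3) float array in [0, 1].
    per_pixel = img.min(axis=2)          # min over the color channels
    h, w = per_pixel.shape
    pad = patch // 2
    padded = np.pad(per_pixel, pad, mode="edge")
    out = np.empty_like(per_pixel)
    for i in range(h):                   # min over each local patch
        for j in range(w):
            out[i, j] = padded[i:i + patch, j:j + patch].min()
    return out
```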

