Pedestrian Re-identity Based on ResNet Lightweight Network

2021 ◽  
Vol 2083 (3) ◽  
pp. 032087
Author(s):  
Xingxing Li ◽  
Chao Duan ◽  
Panpan Yin ◽  
Ningxing Wang

Abstract With the development of deep learning technology, pedestrian re-identification has been widely used in multi-target tracking and cross-camera tracking tasks. In this paper, the classical ResNet18 deep learning network is used for the pedestrian re-identification task. The advantage of this network is that it lends itself to lightweight deployment. In addition, a label-smoothing cross-entropy loss function and transfer learning are used when training the network, which allows it to reach an mAP of 67.8 on the Market-1501 dataset while remaining lightweight, easing the practical engineering deployment of pedestrian re-identification networks.
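The label-smoothing loss and transfer-learning setup described above are straightforward to reproduce. Below is a minimal PyTorch sketch; the paper's exact training configuration is not given, so the pretrained weights, the input crop size, and the use of Market-1501's 751 training identities are assumptions for illustration.

```python
import torch
import torch.nn as nn
from torchvision import models

# Label-smoothing cross-entropy: the true class keeps probability
# 1 - epsilon, and epsilon is spread uniformly over the other classes,
# discouraging over-confident predictions on the identity classes.
criterion = nn.CrossEntropyLoss(label_smoothing=0.1)

# ResNet18 backbone initialized from ImageNet weights (transfer learning),
# with its classifier replaced by an identity-classification head.
num_ids = 751  # number of training identities in Market-1501 (assumed split)
model = models.resnet18(weights=models.ResNet18_Weights.IMAGENET1K_V1)
model.fc = nn.Linear(model.fc.in_features, num_ids)

images = torch.randn(8, 3, 256, 128)           # dummy batch of pedestrian crops
labels = torch.randint(0, num_ids, (8,))       # dummy identity labels
loss = criterion(model(images), labels)
loss.backward()
```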

Author(s):  
Riya John ◽  
Akhilesh S ◽
Gayathri Geetha Nair ◽  
Jeen Raju ◽  
Krishnendhu B

Attendance management is an important procedure in educational institutions as well as in business organizations. Most of the available methods are time-consuming and open to manipulation. The traditional method of attendance management uses handwritten registers. Besides the manual method, there are biometric methods such as fingerprint and retinal scans, RFID tags, etc. All of these methods have disadvantages, so to avoid these difficulties we introduce a new method for attendance management using deep learning technology. Using deep learning we can easily train on a dataset. Real-time face recognition algorithms recognize the faces of students as they attend lectures. This system aims to be less time-consuming than the existing system of marking attendance. The program runs on an Anaconda Flask server. A real-time image is captured using a mobile phone camera; the faces of the persons in the image are then recognized and attendance is marked in an Excel file.
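The recognize-and-mark pipeline could look like the following sketch, assuming the face_recognition and openpyxl packages; the file names and the students dictionary are hypothetical placeholders, not details from the paper.

```python
import face_recognition
from openpyxl import Workbook

# One reference photo per enrolled student (hypothetical paths).
students = {"alice": "alice.jpg", "bob": "bob.jpg"}
known_names, known_encodings = [], []
for name, path in students.items():
    image = face_recognition.load_image_file(path)
    known_names.append(name)
    known_encodings.append(face_recognition.face_encodings(image)[0])

# Frame captured from the phone camera during the lecture.
frame = face_recognition.load_image_file("classroom.jpg")
present = set()
for encoding in face_recognition.face_encodings(frame):
    matches = face_recognition.compare_faces(known_encodings, encoding)
    for name, hit in zip(known_names, matches):
        if hit:
            present.add(name)

# Write the attendance sheet to an Excel file.
wb = Workbook()
ws = wb.active
ws.append(["Student", "Present"])
for name in known_names:
    ws.append([name, "yes" if name in present else "no"])
wb.save("attendance.xlsx")
```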


2021 ◽  
Vol 267 ◽  
pp. 01034
Author(s):  
Li Mingyang ◽  
Li Chengrong

Household waste increasingly threatens the urban environment as people's material consumption grows with accelerating urbanization. In this paper, a new waste sorting model is proposed to address the problems of waste sorting. Style transfer was used to augment the dataset so that under-represented objects could be sorted well. A rotational attention mechanism model was then used to increase the sorting accuracy for occluded objects. The representation-vector extraction module in the target tracking algorithm DeepSORT was replaced with a Siamese network to make the network more lightweight. As a result, this paper effectively addresses current waste sorting tasks.
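The paper does not spell out the Siamese replacement module, but the general pattern is a lightweight encoder shared between two inputs, with a distance head for appearance matching. A minimal Keras sketch, with illustrative shapes and layer sizes:

```python
import tensorflow as tf
from tensorflow.keras import layers, Model

# Lightweight encoder mapping an object crop to a 64-D embedding.
def make_encoder(input_shape=(64, 64, 3), dim=64):
    inp = layers.Input(shape=input_shape)
    x = layers.Conv2D(16, 3, activation="relu")(inp)
    x = layers.MaxPooling2D()(x)
    x = layers.Conv2D(32, 3, activation="relu")(x)
    x = layers.GlobalAveragePooling2D()(x)
    out = layers.Dense(dim)(x)
    return Model(inp, out)

encoder = make_encoder()
a = layers.Input(shape=(64, 64, 3))
b = layers.Input(shape=(64, 64, 3))
# Both branches share the same encoder weights, which is what makes
# the network "Siamese": similar crops map to nearby embeddings.
ea, eb = encoder(a), encoder(b)
distance = layers.Lambda(
    lambda t: tf.norm(t[0] - t[1], axis=1, keepdims=True))([ea, eb])
siamese = Model([a, b], distance)
```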


CONVERTER ◽  
2021 ◽  
pp. 598-605
Author(s):  
Zhao Jianchao

Behind the rapid development of the Internet industry, Internet security has become a hidden danger. In recent years, the outstanding performance of deep learning in classification and behavior prediction on massive data has prompted research into how to use deep learning technology. This paper therefore applies deep learning to intrusion detection, learning to classify network attacks. On the NSL-KDD dataset, the paper first applies traditional classification methods and several different deep learning algorithms. It analyzes in depth the relationships among the dataset, the characteristics of the algorithms, and the experimental classification results, identifying the deep learning algorithm best suited to the task. A normalized coding algorithm is then proposed. The experimental results show that the algorithm improves detection accuracy and reduces the false alarm rate.
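As a point of reference, a common baseline for encoding and normalizing NSL-KDD features (not the paper's specific normalized-coding algorithm) is to one-hot encode the symbolic fields and min-max scale the numeric ones. A small self-contained sketch with made-up rows:

```python
import pandas as pd
from sklearn.compose import ColumnTransformer
from sklearn.preprocessing import MinMaxScaler, OneHotEncoder

# Tiny illustrative rows with a few NSL-KDD-style fields.
df = pd.DataFrame({
    "duration": [0, 12, 3],
    "protocol_type": ["tcp", "udp", "tcp"],
    "service": ["http", "domain_u", "ftp"],
    "src_bytes": [181, 146, 0],
})

pre = ColumnTransformer(
    [("onehot", OneHotEncoder(handle_unknown="ignore"),
      ["protocol_type", "service"])],
    remainder=MinMaxScaler(),  # scale the remaining numeric columns into [0, 1]
)
X = pre.fit_transform(df)
print(X.shape)
```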


2019 ◽  
Vol 8 (4) ◽  
pp. 10253-10258

Among all monitoring methods, data-driven models have a higher success rate than any other approach. However, these methods operate on process measurements such as flow rate, pressure, and temperature. Here we use Keras to build a model in which neurons are grouped into pairs consisting of one unit from the visible layer and one from the hidden layer; arranged symmetrically in this way, they allow us to detect faults, and there must be no connections between the nodes within a single group. CNNs are regularized versions of multilayer perceptrons. A multilayer perceptron is a fully connected network: every neuron in one layer is linked to every neuron in the next layer. This "fully-connectedness" makes such networks prone to over-fitting the data. Classical regularization adds a penalty on the magnitude of the weights to the loss function. CNNs, by contrast, take a different route to regularization: they exploit the hierarchical structure present in the dataset and assemble increasingly complex patterns from smaller, simpler ones. Thus, on the scale of connectedness and complexity, CNNs sit at the lower extreme.
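The contrast between a fully connected network and a CNN can be made concrete in Keras; the shapes and layer sizes below are illustrative, not taken from the paper.

```python
from tensorflow.keras import layers, models

# Fully connected MLP: every unit in one layer connects to every unit
# in the next, which drives up the parameter count.
mlp = models.Sequential([
    layers.Flatten(input_shape=(28, 28, 1)),
    layers.Dense(128, activation="relu"),
    layers.Dense(10, activation="softmax"),
])

# CNN: small local filters are reused across the whole input, so
# complex patterns are assembled from simpler ones with far fewer
# connections per layer.
cnn = models.Sequential([
    layers.Conv2D(16, 3, activation="relu", input_shape=(28, 28, 1)),
    layers.MaxPooling2D(),
    layers.Conv2D(32, 3, activation="relu"),
    layers.Flatten(),
    layers.Dense(10, activation="softmax"),
])
print(mlp.count_params(), cnn.count_params())
```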


Author(s):  
Gabriel Zaid ◽  
Lilian Bossuet ◽  
François Dassance ◽  
Amaury Habrard ◽  
Alexandre Venelli

The side-channel community recently investigated a new approach, based on deep learning, to significantly improve profiled attacks against embedded systems. Compared to template attacks, deep learning techniques can deal with protected implementations, such as masking or desynchronization, without substantial preprocessing. However, important issues are still open. One challenging problem is to adapt the methods classically used in the machine learning field (e.g. loss function, performance metrics) to the specific side-channel context in order to obtain optimal results. We propose a new loss function derived from the learning to rank approach that helps prevent approximation and estimation errors, induced by the classical cross-entropy loss. We theoretically demonstrate that this new function, called Ranking Loss (RkL), maximizes the success rate by minimizing the ranking error of the secret key in comparison with all other hypotheses. The resulting model converges towards the optimal distinguisher when considering the mutual information between the secret and the leakage. Consequently, the approximation error is prevented. Furthermore, the estimation error, induced by the cross-entropy, is reduced by up to 23%. When the ranking loss is used, the convergence towards the best solution is up to 23% faster than a model using the cross-entropy loss function. We validate our theoretical propositions on public datasets.
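To make the learning-to-rank idea concrete, here is a schematic pairwise ranking-style loss over key hypotheses. This is only an illustration in the spirit of the approach, not the paper's exact RkL formulation; the hypothesis count of 256 assumes an AES key-byte recovery setting.

```python
import torch
import torch.nn.functional as F

def ranking_style_loss(scores, true_key):
    """scores: (batch, n_keys) model scores for each key hypothesis.
    Penalizes every wrong hypothesis whose score approaches or exceeds
    the score of the true key (logistic pairwise surrogate)."""
    s_true = scores.gather(1, true_key.unsqueeze(1))  # (batch, 1)
    margins = scores - s_true   # > 0 when a wrong key outranks the true one
    mask = torch.ones_like(scores, dtype=torch.bool)
    mask[torch.arange(scores.size(0)), true_key] = False  # drop the true key
    return F.softplus(margins[mask]).mean()

scores = torch.randn(4, 256, requires_grad=True)  # 256 key-byte hypotheses
true_key = torch.randint(0, 256, (4,))
loss = ranking_style_loss(scores, true_key)
loss.backward()
```

Minimizing this kind of surrogate pushes the true key's score above every other hypothesis, which is what drives the ranking error, and hence the key rank, down.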


2021 ◽  
Vol 36 (1) ◽  
pp. 698-703
Author(s):  
Krushitha Reddy ◽  
D. Jenila Rani

Aim: The aim of this research work is to determine the presence of hyperthyroidism using modern algorithms, comparing the accuracy rate of a deep learning algorithm against in vivo monitoring. Materials and methods: A dataset of ultrasound images from Kaggle was used in this research. Samples were taken as N=23 for the deep learning algorithm and N=23 for in vivo monitoring, in accordance with the total sample size calculated using clinical.com. The accuracy was calculated by using DPLA with a standard dataset. Results: The accuracy rates were compared with an independent samples t-test using SPSS software. The difference between the deep learning algorithm and in vivo monitoring is statistically insignificant. The deep learning algorithm (87.89%) showed better results than in vivo monitoring (83.32%). Conclusion: Deep learning algorithms appear to give better accuracy than in vivo monitoring for predicting hyperthyroidism.
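The same independent samples t-test the authors ran in SPSS can be reproduced with SciPy. The accuracy values below are made-up placeholders, not the study's data:

```python
from scipy import stats

# Hypothetical per-run accuracy samples for the two methods.
deep_learning = [87.1, 88.4, 87.9, 88.2, 87.5]
in_vivo = [83.0, 83.9, 82.8, 83.6, 83.3]

t, p = stats.ttest_ind(deep_learning, in_vivo)
print(f"t = {t:.3f}, p = {p:.4f}")  # p > 0.05 would indicate an insignificant difference
```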


India is an agricultural country, and rainfall is the main source of irrigation for agriculture, so rainfall prediction is crucial for farmers' decision-making. In this research paper, a prediction model has been developed through deep learning using 10 years of historical rainfall data. The deep learning approach uses the Keras API with an artificial neural network to predict daily rainfall. The prediction model has been assessed with four loss functions: MSE, MAE, hinge, and binary cross-entropy.
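A minimal Keras sketch of the kind of feed-forward network described above; the feature count, layer sizes, and random placeholder data are assumptions, since the paper's exact inputs are not listed here.

```python
import numpy as np
from tensorflow.keras import layers, models

X = np.random.rand(1000, 5)   # placeholder daily weather features
y = np.random.rand(1000, 1)   # placeholder daily rainfall amounts

model = models.Sequential([
    layers.Dense(32, activation="relu", input_shape=(5,)),
    layers.Dense(16, activation="relu"),
    layers.Dense(1),
])
# Any of the four losses assessed in the paper can be plugged in here:
# "mse", "mae", "hinge", or "binary_crossentropy".
model.compile(optimizer="adam", loss="mse", metrics=["mae"])
model.fit(X, y, epochs=5, batch_size=32, verbose=0)
```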


Nowadays deep learning has advanced greatly in our daily life. Since 2014 there has been massive growth in this technology, as a vast amount of data is available, and the results keep improving in whatever we do. In my work I used convolutional neural networks, as my project depends on image classification. I use two classes, one male and one female; I classify both classes and try to predict who is male and who is female. For this I use a Sequential model with Convolution2D, max-pooling, flattening, and finally dense layers, all connected in sequence. I add one extra stage of Convolution2D and max-pooling, connected as one layer, for better classification. In my model I use the Adam optimizer, since I have only two classes and in my experiments Adam proved to be a good optimizer, and binary cross-entropy as the loss function because there are only two classes; with more than two classes a categorical loss function can be used instead. The images used for prediction are converted into 64*64 matrix form. In the end, I get predictions of 1 for male and 0 for female.
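Assembled in Keras, the architecture described above looks roughly like this; the filter counts and dense width are assumptions, but the layer types, 64*64 input, Adam optimizer, and binary cross-entropy follow the description.

```python
from tensorflow.keras import layers, models

model = models.Sequential([
    layers.Conv2D(32, (3, 3), activation="relu", input_shape=(64, 64, 3)),
    layers.MaxPooling2D((2, 2)),
    layers.Conv2D(32, (3, 3), activation="relu"),  # the extra Conv2D + pooling stage
    layers.MaxPooling2D((2, 2)),
    layers.Flatten(),
    layers.Dense(128, activation="relu"),
    layers.Dense(1, activation="sigmoid"),  # prediction: 1 = male, 0 = female
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
```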

