Human vs. Deep Neural Network Performance at a Leader Identification Task

Author(s):  
Ankur Deka ◽  
Katia Sycara ◽  
Phillip Walker ◽  
Huao Li ◽  
Michael Lewis

Control of robotic swarms through control of a leader (or leaders) has become the dominant approach to supervisory control over these largely autonomous systems. Resilience in the face of attrition is one of the primary advantages attributed to swarms, yet the presence of leaders makes them vulnerable to decapitation. Algorithms that allow a swarm to hide its leader are a promising solution. We present a novel approach in which neural networks (NNs), trained within a graph neural network (GNN) framework, replace conventional controllers, making the swarm more amenable to training. Swarms and an adversary intent on finding the leader were trained and tested in 4 phases: 1) the swarm learns to follow the leader, 2) the adversary learns to recognize the leader, 3) the swarm learns to hide the leader from the adversary, and 4) the swarm and adversary compete to hide and recognize the leader. While the NN adversary was more successful in identifying leaders without deception, humans did better in conditions in which the swarm was trained to hide its leader from the NN adversary. The study illustrates difficulties likely to emerge in arms races between machine learners and the potential role humans may play in moderating them.

Author(s):  
Zhixian Chen ◽  
Jialin Tang ◽  
Xueyuan Gong ◽  
Qinglang Su

To improve the low accuracy of face recognition methods in e-health settings, this paper proposes a novel face recognition approach based on a convolutional neural network (CNN). In detail, by redesigning the convolutional kernels and applying the rectified linear unit (ReLU) activation function, dropout, and batch normalization, this approach reduces the number of parameters of the CNN model, improves its non-linearity, and alleviates overfitting. In these ways, the accuracy of face recognition is increased. In the experiments, the proposed approach is compared with principal component analysis (PCA) and support vector machines (SVM) on the ORL, Cohn-Kanade, and extended Yale-B face recognition datasets, and the results show that the approach is promising.
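The parameter-reduction claim above can be illustrated with a quick calculation (a sketch only; the paper's actual architecture and channel counts are not given here): stacking two 3x3 convolutions in place of a single 5x5 covers the same receptive field with fewer weights.

```python
def conv_params(in_ch, out_ch, k):
    """Weights + biases of a k x k convolutional layer."""
    return out_ch * (in_ch * k * k + 1)

# One 5x5 layer vs. two stacked 3x3 layers (same 5x5 receptive field),
# with an assumed 64 input and 64 output channels.
single_5x5 = conv_params(64, 64, 5)
stacked_3x3 = 2 * conv_params(64, 64, 3)

print(single_5x5)   # 102464
print(stacked_3x3)  # 73856
```

The stacked variant also inserts an extra non-linearity (e.g. ReLU) between the two layers, which matches the abstract's point about improving the model's non-linearity.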


Among biometric systems, face identification has become a centre of attention over the past few years, and substantial progress has been made in this area. However, the security of such systems is a crucial issue, since many studies have shown that face identification systems are susceptible to various attacks, among them spoofing attacks. Spoofing is the act of fooling a biometric system into accepting an unauthorised user as a genuine one by presenting a synthetic forgery of the original biometric trait to the sensor. To guard against face spoofing, several anti-spoofing methods perform liveness detection. Many spoofing-detection techniques use local binary patterns (LBP) to represent handcrafted texture features extracted from images, whereas recent research has shown that deep features are more robust. In this paper, a countermeasure against face spoofing attacks is built on a convolutional neural network (CNN). In this novel approach, deep texture features are extracted from images by integrating a modified version of the LBP descriptor (Gene LBP net) into a CNN. Experimental results obtained on the NUAA spoofing database show that this deep neural network surpasses most state-of-the-art techniques, yielding good outcomes in detecting spoofing attacks.
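The basic LBP operator mentioned above (the plain descriptor, not the paper's modified Gene LBP net, whose details are not given here) can be sketched as follows: each pixel is encoded by thresholding its 8 neighbours against the centre value and packing the results into one byte.

```python
def lbp_code(patch):
    """Basic 3x3 LBP: threshold the 8 neighbours against the centre pixel
    and pack the bits clockwise from the top-left corner."""
    c = patch[1][1]
    coords = [(0, 0), (0, 1), (0, 2), (1, 2),
              (2, 2), (2, 1), (2, 0), (1, 0)]
    code = 0
    for r, col in coords:
        code = (code << 1) | (1 if patch[r][col] >= c else 0)
    return code

patch = [[5, 4, 3],
         [6, 7, 2],
         [8, 9, 1]]
print(lbp_code(patch))  # 6  (binary 00000110: only 9 and 8 exceed centre 7)
```

A texture descriptor for a whole image is then typically a histogram of these codes over all pixels, which is what gets fed (in modified form) into the CNN.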


2020 ◽  
Author(s):  
Jinhua Tian ◽  
Hailun Xie ◽  
Siyuan Hu ◽  
Jia Liu

The increasingly popular application of AI runs the risk of amplifying social bias, such as classifying non-white faces as animals. Recent research has attributed the bias largely to the training data. However, the underlying mechanism is little understood, and therefore strategies to rectify the bias remain unresolved. Here we examined a typical deep convolutional neural network (DCNN), VGG-Face, which was trained on a face dataset containing more white faces than black and Asian faces. The transfer learning result showed significantly better performance in identifying white faces, resembling the well-known social bias in humans, the other-race effect (ORE). To test whether the effect resulted from the imbalance of face images, we retrained VGG-Face on a dataset containing more Asian faces, and found a reversed ORE: the newly trained VGG-Face favored Asian faces over white faces in identification accuracy. In addition, when the numbers of Asian and white faces in the dataset were matched, the DCNN did not show any bias. To further examine how imbalanced image input led to the ORE, we performed representational similarity analysis on VGG-Face's activations. We found that when the dataset contained more white faces, the representation of white faces was more distinct, indexed by smaller ingroup similarity and larger representational Euclidean distance. That is, white faces were scattered more sparsely in the representational face space of VGG-Face than the other faces. Importantly, the distinctiveness of faces was positively correlated with identification accuracy, which explains the ORE observed in VGG-Face. In sum, our study reveals the mechanism underlying the ORE in DCNNs, providing a novel approach to studying AI ethics. In addition, the face multidimensional representation theory developed for humans was found to apply to DCNNs as well, suggesting that future studies apply more cognitive theories to understand DCNNs' behavior.
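The distinctiveness measure described above, a larger mean pairwise representational distance within a group, can be sketched on toy embeddings (illustrative 2-D vectors, not actual VGG-Face activations):

```python
import math

def mean_pairwise_distance(vectors):
    """Mean Euclidean distance over all pairs of embedding vectors;
    larger values mean the group is scattered more sparsely (more distinct)."""
    dists = [math.dist(vectors[i], vectors[j])
             for i in range(len(vectors))
             for j in range(i + 1, len(vectors))]
    return sum(dists) / len(dists)

# A "majority" group spread widely vs. a "minority" group clustered tightly.
spread = [(0.0, 0.0), (3.0, 4.0), (6.0, 0.0)]
tight = [(0.0, 0.0), (0.5, 0.0), (0.0, 0.5)]
print(mean_pairwise_distance(spread) > mean_pairwise_distance(tight))  # True
```

On this measure, the group with larger mean pairwise distance corresponds to the over-represented faces in the study, whose distinctiveness correlated with higher identification accuracy.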


2021 ◽  
Vol 13 (3) ◽  
pp. 63
Author(s):  
Maghsoud Morshedi ◽  
Josef Noll

Video conferencing services based on the web real-time communication (WebRTC) protocol are growing in popularity among Internet users as multi-platform solutions enabling interactive communication from anywhere, especially during this pandemic era. Meanwhile, Internet service providers (ISPs) have deployed fiber links and customer premises equipment that operate according to recent 802.11ac/ax standards and promise users the ability to establish uninterrupted video conferencing calls with ultra-high-definition video and audio quality. However, the best-effort nature of 802.11 networks and the high variability of wireless medium conditions prevent users from experiencing uninterrupted, high-quality video conferencing. This paper presents a novel approach to estimating the perceived quality of service (PQoS) of video conferencing using only 802.11-specific network performance parameters collected from Wi-Fi access points (APs) on customer premises. This study produced datasets comprising 802.11-specific network performance parameters collected from off-the-shelf Wi-Fi APs operating at 802.11g/n/ac/ax standards on both 2.4 and 5 GHz frequency bands to train machine learning algorithms. In this way, we achieved classification accuracies of 92–98% in estimating the level of PQoS of video conferencing services on various Wi-Fi networks. To efficiently troubleshoot wireless issues, we further analyzed the machine learning model to correlate features in the model with the root cause of quality degradation. Thus, ISPs can utilize the approach presented in this study to provide predictable and measurable wireless quality by implementing a non-intrusive quality monitoring approach in the form of edge computing that preserves customers’ privacy while reducing the operational costs of monitoring and data analytics.
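As a minimal stand-in for the trained model (the paper's actual algorithm and feature set are not specified here), a nearest-centroid classifier over two assumed 802.11 features, retry rate and RSSI, shows the shape of the feature-to-PQoS mapping:

```python
def fit_centroids(samples, labels):
    """Average the feature vectors belonging to each PQoS class."""
    sums, counts = {}, {}
    for x, y in zip(samples, labels):
        s = sums.setdefault(y, [0.0] * len(x))
        for i, v in enumerate(x):
            s[i] += v
        counts[y] = counts.get(y, 0) + 1
    return {y: [v / counts[y] for v in s] for y, s in sums.items()}

def predict(centroids, x):
    """Label of the nearest class centroid (squared Euclidean distance)."""
    return min(centroids,
               key=lambda y: sum((a - b) ** 2 for a, b in zip(centroids[y], x)))

# Toy training data: [retry_rate_%, rssi_dBm] -> PQoS level (assumed values).
X = [[2, -45], [3, -50], [30, -80], [35, -85]]
y = ["good", "good", "poor", "poor"]
model = fit_centroids(X, y)
print(predict(model, [4, -48]))   # good
print(predict(model, [28, -82]))  # poor
```

Because the centroids are interpretable, a misclassified sample can be traced back to the feature driving it (e.g. a high retry rate), mirroring the paper's use of the model to find the root cause of quality degradation.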


2021 ◽  
Vol 18 (3) ◽  
pp. 172988142110195
Author(s):  
Sorin Grigorescu ◽  
Cosmin Ginerica ◽  
Mihai Zaha ◽  
Gigel Macesanu ◽  
Bogdan Trasnea

In this article, we introduce a learning-based vision dynamics approach to nonlinear model predictive control (NMPC) for autonomous vehicles, coined learning-based vision dynamics (LVD) NMPC. LVD-NMPC uses an a priori process model and a learned vision dynamics model to calculate the dynamics of the driving scene, the controlled system’s desired state trajectory, and the weighting gains of the quadratic cost function optimized by a constrained predictive controller. The vision system is defined as a deep neural network designed to estimate the dynamics of the image scene. Its input is based on historic sequences of sensory observations and vehicle states, integrated by an augmented memory component. Deep Q-learning is used to train the deep network, which, once trained, can also be used to calculate the desired trajectory of the vehicle. We evaluate LVD-NMPC against a baseline dynamic window approach (DWA) path planner executed using standard NMPC and against the PilotNet neural network. Performance is measured in our simulation environment GridSim, on a real-world 1:8 scaled model car, as well as on a full-size autonomous test vehicle and the nuScenes computer vision dataset.
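The quadratic cost mentioned above can be sketched over a short horizon, with scalar weighting gains standing in for the learned gain matrices (a simplification; LVD-NMPC's actual state and weight structure is richer):

```python
def quadratic_cost(states, refs, controls, q, r):
    """J = sum of q*(x - x_ref)^2 (tracking) + r*u^2 (control effort)
    over the prediction horizon; q and r play the role of the weighting
    gains that LVD-NMPC learns from the vision dynamics model."""
    tracking = sum(q * (x - xr) ** 2 for x, xr in zip(states, refs))
    effort = sum(r * u ** 2 for u in controls)
    return tracking + effort

# Two-step horizon: tracking errors of 1 and 2, one control input of 1.
print(quadratic_cost([1.0, 2.0], [0.0, 0.0], [1.0], q=2.0, r=1.0))  # 11.0
```

The predictive controller then minimizes this J over candidate control sequences subject to the process-model constraints; learning q and r per scene lets the controller trade tracking accuracy against control effort adaptively.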

