Speedy Image Crowd Counting by Light Weight Convolutional Neural Network

2021 ◽  
Vol 3 (3) ◽  
pp. 208-222
Author(s):  
B. Vivekanandam

In image/video analysis, crowds are an active research topic and their numbers are counted. Over the last two decades, many crowd counting algorithms have been developed for a wide range of applications in crisis management systems, large-scale events, workplace safety, and other areas. Neural networks achieve outstanding precision for point estimation in the computer vision domain; however, the degree of uncertainty in the estimate is rarely indicated, even though quantifying the uncertainty around a point estimate can improve the quality of decisions and predictions. The proposed framework integrates a lightweight CNN (LW-CNN) for crowd counting in any public place, delivering higher counting accuracy. Further, the proposed framework has been trained on various scene analyses covering both full and partial views of heads during counting. Based on the various scaling sets in the proposed neural network framework, it can readily categorize partially visible heads and counts them more accurately than other pre-trained neural network models. The proposed framework provides higher accuracy in estimating headcounts in public places during COVID-19 while consuming less time.
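To make the counting pipeline concrete, the sketch below shows a lightweight CNN that regresses a density map whose integral gives the head count. This is an illustrative PyTorch sketch only; the layer sizes and the density-map formulation are assumptions, not the paper's LW-CNN architecture.

```python
# Minimal sketch of a lightweight density-map counting CNN (illustrative only;
# layer sizes and the density-map formulation are assumptions, not the paper's
# exact LW-CNN design).
import torch
import torch.nn as nn

class LightweightCounter(nn.Module):
    def __init__(self):
        super().__init__()
        # Small backbone with a depthwise + pointwise block to keep the
        # parameter count low.
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(inplace=True),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(inplace=True),
            nn.MaxPool2d(2),
            nn.Conv2d(32, 32, 3, padding=1, groups=32), nn.ReLU(inplace=True),
            nn.Conv2d(32, 64, 1), nn.ReLU(inplace=True),
        )
        # 1x1 head regresses a single-channel density map.
        self.head = nn.Conv2d(64, 1, 1)

    def forward(self, x):
        density = torch.relu(self.head(self.features(x)))
        # The crowd count is the integral (sum) of the predicted density map.
        return density, density.sum(dim=(1, 2, 3))

if __name__ == "__main__":
    model = LightweightCounter()
    img = torch.randn(1, 3, 256, 256)   # dummy input frame
    density, count = model(img)
    print(density.shape, count.item())
```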

1997 ◽  
pp. 931-935 ◽  
Author(s):  
Anders Lansner ◽  
Örjan Ekeberg ◽  
Erik Fransén ◽  
Per Hammarlund ◽  
Tomas Wilhelmsson

2017 ◽  
Author(s):  
Charlie W. Zhao ◽  
Mark J. Daley ◽  
J. Andrew Pruszynski

First-order tactile neurons have spatially complex receptive fields. Here we use machine learning tools to show that such complexity arises for a wide range of training sets and network architectures, and benefits network performance, especially on more difficult tasks and in the presence of noise. Our work suggests that spatially complex receptive fields are normatively good given the biological constraints of the tactile periphery.
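A toy version of this kind of analysis can be sketched as follows: train a small network on a simple tactile-like discrimination task and then inspect its first-layer weights as learned receptive fields. The task, grid size, and architecture below are illustrative assumptions, not the authors' setup.

```python
# Train a tiny network on an edge-orientation task and read out its first-layer
# weights as "receptive fields" (toy illustration only).
import torch
import torch.nn as nn

torch.manual_seed(0)

def oriented_edge(angle, size=16):
    """Binary 'skin' patch containing an edge at the given orientation."""
    y, x = torch.meshgrid(torch.arange(size), torch.arange(size), indexing="ij")
    c = size / 2
    return ((torch.cos(angle) * (x - c) + torch.sin(angle) * (y - c)) > 0).float()

# Two-class task: horizontal-ish vs. vertical-ish edges, with additive noise.
angles = torch.rand(512) * torch.pi
X = torch.stack([oriented_edge(a).flatten() for a in angles])
X += 0.1 * torch.randn_like(X)
y = (torch.sin(angles).abs() > 0.5).long()

model = nn.Sequential(nn.Linear(256, 8), nn.ReLU(), nn.Linear(8, 2))
opt = torch.optim.Adam(model.parameters(), lr=1e-2)
loss_fn = nn.CrossEntropyLoss()

for _ in range(200):
    opt.zero_grad()
    loss = loss_fn(model(X), y)
    loss.backward()
    opt.step()

# Each row of the first-layer weight matrix is a learned 16x16 receptive field.
receptive_fields = model[0].weight.detach().reshape(-1, 16, 16)
print(receptive_fields.shape, float(loss))
```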


Author(s):  
Sacha J. van Albada ◽  
Jari Pronold ◽  
Alexander van Meegen ◽  
Markus Diesmann

We are entering an age of ‘big’ computational neuroscience, in which neural network models are increasing in size and in numbers of underlying data sets. Consolidating the zoo of models into large-scale models simultaneously consistent with a wide range of data is only possible through the effort of large teams, which can be spread across multiple research institutions. To ensure that computational neuroscientists can build on each other’s work, it is important to make models publicly available as well-documented code. This chapter describes such an open-source model, which relates the connectivity structure of all vision-related cortical areas of the macaque monkey with their resting-state dynamics. We give a brief overview of how to use the executable model specification, which employs NEST as its simulation engine, and show its runtime scaling. The solutions found serve as an example for organizing the workflow of future models from the raw experimental data to the visualization of the results, expose the challenges, and give guidance for the construction of an ICT infrastructure for neuroscience.
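For readers unfamiliar with NEST, the following minimal sketch (not the multi-area model itself) shows the kind of workflow the executable specification builds on: create populations, connect them, and simulate. The neuron model, population sizes, and parameters are illustrative assumptions; the syntax targets NEST 3.x.

```python
# Minimal NEST sketch: two populations, Poisson drive, recurrent connectivity,
# and a spike recorder. All numbers are illustrative placeholders.
import nest

nest.ResetKernel()

# Two small populations standing in for an excitatory and an inhibitory group.
exc = nest.Create("iaf_psc_exp", 80)
inh = nest.Create("iaf_psc_exp", 20)

# External drive and a spike recorder.
noise = nest.Create("poisson_generator", params={"rate": 8000.0})
recorder = nest.Create("spike_recorder")

# Random convergent connectivity with static synapses.
conn = {"rule": "fixed_indegree", "indegree": 10}
nest.Connect(exc, exc + inh, conn, {"weight": 20.0, "delay": 1.5})
nest.Connect(inh, exc + inh, conn, {"weight": -80.0, "delay": 1.5})
nest.Connect(noise, exc + inh, syn_spec={"weight": 20.0})
nest.Connect(exc, recorder)

nest.Simulate(1000.0)  # milliseconds
print("excitatory spikes recorded:", recorder.n_events)
```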


2018 ◽  
Vol 7 (3.15) ◽  
pp. 95 ◽  
Author(s):  
M Zabir ◽  
N Fazira ◽  
Zaidah Ibrahim ◽  
Nurbaity Sabri

This paper aims to evaluate the accuracy of pre-trained Convolutional Neural Network (CNN) models, namely AlexNet and GoogLeNet, alongside one custom CNN. AlexNet and GoogLeNet have proven their capabilities by entering the ImageNet Large Scale Visual Recognition Challenge (ILSVRC) and producing relatively good results. The evaluation in this research is based on the accuracy, loss, and time taken by the training and validation processes. The dataset used is Caltech101 from the California Institute of Technology (Caltech), which contains 101 object categories. The results reveal that the custom CNN architecture produces 91.05% accuracy, whereas AlexNet and GoogLeNet achieve a similar accuracy of 99.65%. GoogLeNet consistently converges at an early training stage and yields the minimum error function compared to the other two models.
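A typical transfer-learning setup of the kind being compared can be sketched as below. PyTorch/torchvision is used here purely for illustration; the paper does not specify this toolchain, and the hyperparameters are assumptions.

```python
# Sketch: fine-tune a pre-trained AlexNet on Caltech101 (illustrative only).
# Caltech101 images vary in size and some are grayscale, so we resize and
# convert to RGB before feeding the pre-trained backbone.
import torch
import torch.nn as nn
from torchvision import datasets, models, transforms

tfm = transforms.Compose([
    transforms.Lambda(lambda im: im.convert("RGB")),
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
])
data = datasets.Caltech101(root="data", transform=tfm, download=True)
loader = torch.utils.data.DataLoader(data, batch_size=32, shuffle=True)

# Pre-trained AlexNet with its final layer replaced for 101 categories.
model = models.alexnet(weights="IMAGENET1K_V1")
model.classifier[6] = nn.Linear(4096, 101)

opt = torch.optim.SGD(model.parameters(), lr=1e-3, momentum=0.9)
loss_fn = nn.CrossEntropyLoss()

model.train()
for images, labels in loader:              # one illustrative step; a real run
    opt.zero_grad()                        # would train for several epochs and
    loss = loss_fn(model(images), labels)  # hold out a validation split
    loss.backward()
    opt.step()
    break
print("one-batch loss:", loss.item())
```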


Author(s):  
Ratish Puduppully ◽  
Li Dong ◽  
Mirella Lapata

Recent advances in data-to-text generation have led to the use of large-scale datasets and neural network models that are trained end-to-end, without explicitly modeling what to say and in what order. In this work, we present a neural network architecture that incorporates content selection and planning without sacrificing end-to-end training. We decompose the generation task into two stages: given a corpus of data records (paired with descriptive documents), we first generate a content plan highlighting which information should be mentioned and in which order, and then generate the document while taking the content plan into account. Automatic and human evaluation experiments show that our model outperforms strong baselines, improving the state of the art on the recently released RotoWIRE dataset.
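The two-stage decomposition can be illustrated structurally as follows: a content planner scores and orders data records, and a text decoder then conditions on the selected plan. The model sizes, scoring rule, and toy vocabulary are assumptions for illustration; this is not the authors' architecture.

```python
# Structural sketch of content selection + planning followed by generation.
import torch
import torch.nn as nn

VOCAB = ["<bos>", "<eos>", "the", "team", "won", "by", "points"]
N_RECORD_TYPES, EMB = 8, 32

class ContentPlanner(nn.Module):
    """Scores records; the plan is the records kept, in score order."""
    def __init__(self):
        super().__init__()
        self.embed = nn.Embedding(N_RECORD_TYPES, EMB)
        self.score = nn.Linear(EMB, 1)

    def forward(self, record_types, k=3):
        emb = self.embed(record_types)                 # (n_records, EMB)
        scores = self.score(emb).squeeze(-1)           # (n_records,)
        order = torch.topk(scores, k=min(k, len(scores))).indices
        return emb[order]                              # ordered plan embeddings

class PlanConditionedDecoder(nn.Module):
    """GRU decoder whose initial state summarizes the content plan."""
    def __init__(self):
        super().__init__()
        self.embed = nn.Embedding(len(VOCAB), EMB)
        self.gru = nn.GRU(EMB, EMB, batch_first=True)
        self.out = nn.Linear(EMB, len(VOCAB))

    def forward(self, plan, tokens):
        h0 = plan.mean(dim=0, keepdim=True).unsqueeze(0)  # (1, 1, EMB)
        x = self.embed(tokens).unsqueeze(0)               # (1, T, EMB)
        hidden, _ = self.gru(x, h0)
        return self.out(hidden)                           # token logits

planner, decoder = ContentPlanner(), PlanConditionedDecoder()
records = torch.tensor([0, 3, 5, 1, 7])                   # record-type ids
plan = planner(records)                                   # stage 1: what, in what order
logits = decoder(plan, torch.tensor([0, 2, 3, 4]))        # stage 2: realize the text
print(plan.shape, logits.shape)
```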


2011 ◽  
Vol 2011 ◽  
pp. 1-16 ◽  
Author(s):  
Feng Li ◽  
Lei Nie ◽  
Gang Wu ◽  
Jianjun Qiao ◽  
Weiwen Zhang

Proteomic datasets are often incomplete due to identification range and sensitivity issues. It therefore becomes important to develop methodologies for estimating missing proteomic data, allowing better interpretation of proteomic datasets and of the metabolic mechanisms underlying complex biological systems. In this study, we applied an artificial neural network to approximate the relationships between cognate transcriptomic and proteomic datasets of Desulfovibrio vulgaris and to predict protein abundance for proteins not experimentally detected, based on several relevant predictors, such as mRNA abundance, cellular role, and triple codon counts. The results showed that the coefficients of determination for the trained neural network models ranged from 0.47 to 0.68, providing better modeling than several previous regression models. The validity of the trained neural network model was evaluated using biological information (i.e., operons). To understand the mechanisms causing missing proteomic data, we used a multivariate logistic regression analysis; the results suggested that key factors such as the protein instability index, aliphatic index, mRNA abundance, effective number of codons, and codon adaptation index (CAI) values may determine whether a given expressed protein can be detected. In addition, we demonstrated that biological interpretation can be improved by the use of imputed proteomic datasets.
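The imputation idea can be sketched as a small regression network that maps predictors such as mRNA abundance and codon-usage features to protein abundance and is scored by the coefficient of determination. The synthetic data and network size below are assumptions for illustration only.

```python
# Fit a small neural network regressor and report R^2 on held-out proteins.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import r2_score

rng = np.random.default_rng(0)
n_proteins, n_features = 500, 10          # e.g. mRNA level, CAI, codon counts...
X = rng.normal(size=(n_proteins, n_features))
protein_abundance = X @ rng.normal(size=n_features) + 0.5 * rng.normal(size=n_proteins)

X_train, X_test, y_train, y_test = train_test_split(X, protein_abundance, random_state=0)
model = MLPRegressor(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)
model.fit(X_train, y_train)

# The detected proteins play the role of training data; the fitted model can
# then impute abundances for proteins that were not experimentally detected.
print("R^2:", r2_score(y_test, model.predict(X_test)))
```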


Author(s):  
Manoj Kumar

In this chapter, an attempt is made to develop neural network models to predict the hardness distribution of the hardened zone in the plasma arc surface hardening process. The backpropagation method with the Levenberg-Marquardt algorithm was used to train the neural network models. Hardness distributions were collected with an experimental setup in the laboratory, and the associated data were used to train the neural network models. Furthermore, the predictions of the neural network models were compared with those obtained from statistical regression models. It is confirmed experimentally that the hardness distribution can be accurately predicted by the trained neural network models, and the accuracy of hardness distribution prediction using the neural network is superior to that of the statistical regression models.
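A hedged sketch of training a small feed-forward network with the Levenberg-Marquardt algorithm is shown below, using scipy.optimize.least_squares with method="lm" as a stand-in for the chapter's training routine; the hardness-versus-depth data are synthetic placeholders.

```python
# Levenberg-Marquardt fit of a one-hidden-layer network to a synthetic
# hardness-vs-depth profile (illustrative only).
import numpy as np
from scipy.optimize import least_squares

rng = np.random.default_rng(1)
depth = np.linspace(0.0, 2.0, 50)                          # mm from the surface
hardness = 60 - 25 / (1 + np.exp(-(depth - 1.2) / 0.15))   # synthetic HRC profile
hardness += rng.normal(scale=0.5, size=depth.size)

N_HIDDEN = 4

def unpack(p):
    w1, b1 = p[:N_HIDDEN], p[N_HIDDEN:2 * N_HIDDEN]
    w2, b2 = p[2 * N_HIDDEN:3 * N_HIDDEN], p[-1]
    return w1, b1, w2, b2

def predict(p, x):
    w1, b1, w2, b2 = unpack(p)
    hidden = np.tanh(np.outer(x, w1) + b1)                 # (n_samples, N_HIDDEN)
    return hidden @ w2 + b2

def residuals(p):
    return predict(p, depth) - hardness

p0 = rng.normal(scale=0.5, size=3 * N_HIDDEN + 1)
fit = least_squares(residuals, p0, method="lm")            # Levenberg-Marquardt
print("RMS error (HRC):", np.sqrt(np.mean(fit.fun ** 2)))
```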

