Sparse Output Coding for Large-Scale Visual Recognition

Author(s):
Bin Zhao
Eric P. Xing
2017
Vol 26 (4)
pp. 1923-1938
Author(s):
Jianping Fan
Tianyi Zhao
Zhenzhong Kuang
Yu Zheng
Ji Zhang
...

2011
Vol 25 (4)
pp. 645-651
Author(s):
Dionisio Andújar
Ángela Ribeiro
Cesar Fernández-Quintanilla
José Dorado

The feasibility of visual detection of weeds for map-based patch spraying systems needs to be assessed for use in large-scale cropping systems. The main objective of this research was to evaluate the reliability and profitability of using maps of Johnsongrass patches constructed at harvest to predict the spatial distribution of weeds during the next cropping season. Johnsongrass patches were visually assessed from the cabin of a combine harvester in three corn fields and were compared with maps obtained in the subsequent year prior to postemergence herbicide application. There was a good correlation (71% on average) between the position of Johnsongrass patches on the two maps (fall vs. spring). The highest correlation (82%) was obtained with relatively large infestations, whereas the lowest (58%) was obtained when the infested area was smaller. Although the relative positions of the patches remained almost unchanged from 1 yr to the next, the infested area increased in all fields during the 4-yr experimental period. According to our estimates, a strategy of applying full herbicide rates only to the patches recorded in the map generated in the previous fall resulted in higher net returns than spraying the whole field, either at full or half rate. This site-specific strategy resulted in an average 65% reduction in the volume of herbicide applied to control this weed.
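As a rough illustration of the kind of comparison described above, the sketch below is not taken from the study: the grid size, infestation rates, and the overlap measure are all hypothetical. It computes one possible agreement score between a fall patch map and the following spring's map, plus the herbicide-volume saving from spraying only the fall-mapped patches instead of the whole field.

```python
# Illustrative sketch (hypothetical data, not from the paper): agreement between a
# fall patch map and the next spring's map, and the saving from patch-only spraying.
import numpy as np

rng = np.random.default_rng(0)
fall_map = rng.random((50, 50)) < 0.2    # hypothetical binary grid: True = infested cell
drift = rng.random((50, 50)) < 0.05      # hypothetical year-to-year change in infestation
spring_map = fall_map ^ drift            # spring map differs slightly from the fall map

# One possible overlap measure: fraction of cells infested in either year
# that are infested in both (cf. the ~71% agreement reported in the study).
either_year = fall_map | spring_map
agreement = (fall_map & spring_map)[either_year].mean()

# Herbicide saving if only fall-mapped patches are sprayed instead of the whole field.
saving = 1.0 - fall_map.mean()
print(f"map agreement: {agreement:.0%}, herbicide volume saved: {saving:.0%}")
```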


2021
Vol 10 (9)
pp. 25394-25398
Author(s):
Chitra Desai

Deep learning models have demonstrated improved efficacy in image classification since the ImageNet Large Scale Visual Recognition Challenge (ILSVRC) began in 2010. Image classification in computer vision has been further advanced by the advent of transfer learning. Training a model on a huge dataset demands substantial computational resources and adds considerable cost to learning. Transfer learning reduces that cost and helps avoid reinventing the wheel. Several pretrained models, such as VGG16, VGG19, ResNet50, InceptionV3, and EfficientNet, are widely used. This paper demonstrates image classification using the pretrained deep neural network model VGG16, which is trained on images from the ImageNet dataset. After obtaining the convolutional base model, a new deep neural network is built on top of it for image classification, based on a fully connected network. This classifier uses features extracted from the convolutional base model.
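A minimal sketch of this transfer-learning setup, assuming Keras/TensorFlow (the paper does not name a framework) and a hypothetical number of target classes: the ImageNet-pretrained VGG16 convolutional base is frozen, and a new fully connected classifier is trained on the features it extracts.

```python
# Transfer-learning sketch: frozen VGG16 convolutional base + new dense classifier.
# Framework (Keras) and the class count are assumptions, not taken from the paper.
from tensorflow.keras import layers, models
from tensorflow.keras.applications import VGG16

NUM_CLASSES = 10  # hypothetical number of target classes

# Load the convolutional base pretrained on ImageNet, without its original classifier head.
conv_base = VGG16(weights="imagenet", include_top=False, input_shape=(224, 224, 3))
conv_base.trainable = False  # freeze the pretrained features

# Build a new fully connected classifier on top of the extracted features.
model = models.Sequential([
    conv_base,
    layers.Flatten(),
    layers.Dense(256, activation="relu"),
    layers.Dropout(0.5),
    layers.Dense(NUM_CLASSES, activation="softmax"),
])

model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.summary()
# model.fit(train_ds, validation_data=val_ds, epochs=5)  # train_ds/val_ds are placeholders
```

Freezing the base keeps the ImageNet features intact, so only the new dense layers are learned, which is what keeps the training cost low relative to training from scratch.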


2018
Vol 7 (3.15)
pp. 95
Author(s):
M Zabir
N Fazira
Zaidah Ibrahim
Nurbaity Sabri

This paper aims to evaluate the accuracy performance of two pre-trained Convolutional Neural Network (CNN) models, namely AlexNet and GoogLeNet, accompanied by one custom CNN. AlexNet and GoogLeNet have proven their capabilities, as both entered the ImageNet Large Scale Visual Recognition Challenge (ILSVRC) and produced relatively good results. The evaluation in this research is based on the accuracy, loss, and time taken during the training and validation processes. The dataset used is Caltech101 from the California Institute of Technology (Caltech), which contains 101 object categories. The results reveal that the custom CNN architecture produces 91.05% accuracy, whereas AlexNet and GoogLeNet achieve a similar accuracy of 99.65%. GoogLeNet reaches consistent performance at an earlier training stage and yields the lowest loss compared with the other two models.
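A sketch of how such a comparison could be run, assuming PyTorch/torchvision (the paper does not state its framework) with placeholder hyperparameters: each pretrained model gets a new 101-way classifier head and is trained for one pass over Caltech101 while accuracy, loss, and wall-clock time are recorded.

```python
# Sketch of comparing pretrained CNNs on Caltech101; framework (PyTorch/torchvision),
# hyperparameters, and data path are assumptions, not taken from the paper.
import time
import torch
import torch.nn as nn
from torchvision import datasets, models, transforms

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

# Standard ImageNet preprocessing so the pretrained weights see familiar inputs.
preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.Lambda(lambda img: img.convert("RGB")),  # some Caltech101 images are grayscale
    transforms.ToTensor(),
    transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]),
])
dataset = datasets.Caltech101(root="data", download=True, transform=preprocess)
loader = torch.utils.data.DataLoader(dataset, batch_size=32, shuffle=True)

def make_model(name, num_classes=101):
    """Load a pretrained model and replace its classifier head for 101 categories."""
    if name == "alexnet":
        net = models.alexnet(weights="IMAGENET1K_V1")
        net.classifier[6] = nn.Linear(4096, num_classes)
    else:  # googlenet
        net = models.googlenet(weights="IMAGENET1K_V1")
        net.fc = nn.Linear(1024, num_classes)
    return net.to(device)

for name in ["alexnet", "googlenet"]:
    model = make_model(name)
    criterion = nn.CrossEntropyLoss()
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
    start = time.time()
    model.train()
    correct, total, running_loss = 0, 0, 0.0
    for images, labels in loader:  # one epoch shown for brevity
        images, labels = images.to(device), labels.to(device)
        optimizer.zero_grad()
        outputs = model(images)
        loss = criterion(outputs, labels)
        loss.backward()
        optimizer.step()
        running_loss += loss.item() * labels.size(0)
        correct += (outputs.argmax(1) == labels).sum().item()
        total += labels.size(0)
    print(f"{name}: acc={correct/total:.4f}, loss={running_loss/total:.4f}, "
          f"time={time.time() - start:.1f}s")
```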


Author(s):
Subhabrata Bhattacharya
Rahul Sukthankar
Rong Jin
Mubarak Shah
