Flickr Images
Recently Published Documents


TOTAL DOCUMENTS: 25 (five years: 9)
H-INDEX: 6 (five years: 0)

2022, Vol 37, pp. 100479
Author(s): Maximilian C. Hartmann, Olga Koblet, Manuel F. Baer, Ross S. Purves

Author(s): Harshala Bhoir, Dr. K. Jayamalini

Visual sentiment analysis is the task of automatically recognizing positive and negative emotions in images, videos, graphics, and stickers. To estimate the polarity of the sentiment an image evokes, most state-of-the-art work exploits the text the user attaches to the social post. However, such textual data is typically noisy because of the user's subjectivity, and often includes text meant chiefly to maximize the diffusion of the post. This system extracts three views of Flickr images, a visual view, a subjective text view, and an objective text view, and outputs a positive, negative, or neutral sentiment polarity based on a hypothesis table. The subjective text view obtains its polarity from VADER (Valence Aware Dictionary and sEntiment Reasoner), while the objective text view obtains its polarity through three convolutional neural network models: VGG-16, Inception-V3, and ResNet-50, each pre-trained on the ImageNet dataset. The text extracted through these three networks is given to VADER as input to find its sentiment polarity. The visual view is implemented with a bag-of-visual-words model using the BRISK (Binary Robust Invariant Scalable Keypoints) descriptor. The system is trained on a dataset of 30,000 positive, negative, and neutral images. The sentiment polarities of the three views are then compared: the final polarity is positive if two or more views give positive polarity, negative if two or more views give negative polarity, and neutral if two or more views give neutral polarity. If all three views return different polarities, the polarity of the objective text view is output.
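A minimal sketch of the majority-vote fusion rule described above, assuming the three view polarities have already been computed. Only the VADER call reflects a real library (vaderSentiment); the function names and the ±0.05 compound-score thresholds are illustrative conventions, not the paper's code.

```python
# Sketch of the three-view fusion described in the abstract. Function
# names and thresholds are illustrative assumptions.
from collections import Counter

from vaderSentiment.vaderSentiment import SentimentIntensityAnalyzer

_analyzer = SentimentIntensityAnalyzer()

def vader_polarity(text: str) -> str:
    """Map VADER's compound score to a discrete polarity label."""
    compound = _analyzer.polarity_scores(text)["compound"]
    if compound >= 0.05:
        return "positive"
    if compound <= -0.05:
        return "negative"
    return "neutral"

def fuse_views(visual: str, subjective: str, objective: str) -> str:
    """Majority vote over the three views; when all three disagree,
    fall back to the objective text view, per the abstract."""
    label, count = Counter([visual, subjective, objective]).most_common(1)[0]
    return label if count >= 2 else objective

# Example: two views agree on "positive", so the fused label is "positive".
print(fuse_views("positive", vader_polarity("what a lovely sunset!"), "neutral"))
```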


Author(s): M. Lotfian, J. Ingensand

Abstract. Social media data are becoming a potential source of passive VGI (Volunteered Geographic Information) and citizen science, in particular for location-based environmental monitoring. Flickr, one of the largest photo-sharing platforms, has been used in environmental analyses ranging from natural disaster prediction to wildlife monitoring. In this article, we use bird photos from Flickr to illustrate the spatial distribution of bird species in Switzerland and, most importantly, to examine the correlation between the locations of bird species and land cover types. A chi-square test of independence was applied to assess the association between bird species and land cover classes, and the results showed a statistically significant association between the two variables. Furthermore, the species distributions derived from Flickr were compared to eBird data, and the results demonstrated that Flickr can be a complementary source to citizen science data.
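As an illustration of the statistical step, a chi-square test of independence on a species-by-land-cover contingency table can be run with scipy. The counts below are invented purely for illustration; they are not the paper's data.

```python
# Chi-square test of independence between bird species and land-cover
# class. The contingency table is fabricated for illustration only.
import numpy as np
from scipy.stats import chi2_contingency

# Rows: species; columns: land-cover classes (wetland, forest, urban).
observed = np.array([
    [120, 30, 10],
    [15, 90, 25],
    [40, 20, 110],
])

chi2, p, dof, expected = chi2_contingency(observed)
print(f"chi2 = {chi2:.1f}, dof = {dof}, p = {p:.3g}")
if p < 0.05:
    print("Reject independence: species and land cover are associated.")
```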


Recent advances in digital technology and the widespread use of social image sharing websites have led to huge databases of images. On these websites, images are associated with tags or keywords that describe their visual content and other information, and the sites use these tags to retrieve images. It is therefore important to assign appropriate tags to images. To recommend related tags, an appropriate classifier must be chosen, with respect to classification accuracy, for the automatic classification of images into semantic categories, an important step in image tag recommendation. In this paper, three supervised classification algorithms, SVM, kNN, and random forest, are implemented for image classification, and their performance is analyzed on Flickr images. For classification, features are extracted using color moments and wavelet packet descriptors.
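A hedged sketch of the comparison described above, using scikit-learn. The synthetic feature matrix is a stand-in for the color-moment and wavelet-packet features the paper extracts from Flickr images.

```python
# Compare SVM, kNN, and random forest on (stand-in) image features.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Synthetic stand-in for color-moment + wavelet-packet feature vectors.
X, y = make_classification(n_samples=600, n_features=40, n_informative=10,
                           n_classes=3, random_state=0)

classifiers = {
    "SVM": make_pipeline(StandardScaler(), SVC(kernel="rbf")),
    "kNN": make_pipeline(StandardScaler(), KNeighborsClassifier(n_neighbors=5)),
    "Random forest": RandomForestClassifier(n_estimators=200, random_state=0),
}

for name, clf in classifiers.items():
    scores = cross_val_score(clf, X, y, cv=5)  # 5-fold accuracy
    print(f"{name}: {scores.mean():.3f} +/- {scores.std():.3f}")
```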


2020
Author(s): Andrew Shepley, Greg Falzon, Paul Meek, Paul Kwan

Abstract
A time-consuming challenge faced by camera trap practitioners all over the world is the extraction of meaningful data from images to inform ecological management. The primary methods of image processing used by practitioners include manual analysis and citizen science. An increasingly popular alternative is automated image classification software. However, most automated solutions are not sufficiently robust to be deployed on a large scale. Key challenges include limited access to images for each species and a lack of location invariance when transferring models between sites. This prevents optimal use of ecological data and results in significant expenditure of time and resources to annotate and retrain deep learning models.

In this study, we aimed to (a) assess the value of publicly available non-iconic FlickR images in the training of deep learning models for camera trap object detection, (b) develop an out-of-the-box, location-invariant, automated camera trap image processing solution for ecologists using deep transfer learning, and (c) explore the use of small subsets of camera trap images to optimise a FlickR-trained deep learning model for high-precision ecological object detection.

We collected and annotated a dataset of images of “pigs” (Sus scrofa and Phacochoerus africanus) from the consumer image sharing website FlickR. These images were used for transfer learning with a RetinaNet model on the object detection task. We compared the performance of this model to that of models trained on combinations of camera trap images obtained from five different projects, each characterised by a different geographical region. Furthermore, we explored optimisation of the FlickR model by infusing small subsets of camera trap images to increase robustness on difficult images.

In most cases, the mean Average Precision (mAP) of the FlickR-trained model when tested on out-of-sample camera trap sites (67.21–91.92%) was significantly higher than the mAP achieved by models trained on only one geographical location (4.42–90.8%) and rivalled the mAP of models trained on mixed camera trap datasets (68.96–92.75%). Infusing camera trap images into the FlickR training further improved AP by 5.10–22.32%, to 83.60–97.02%.

Ecology researchers can use FlickR images to train automated deep learning solutions for camera trap image processing, significantly reducing time and resource expenditure by enabling the development of location-invariant, highly robust, out-of-the-box solutions. This would allow AI technologies to be deployed on a large scale in ecological applications.
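A minimal sketch of the transfer-learning setup the abstract describes, using torchvision's RetinaNet implementation. The class count, hyper-parameters, and the single dummy training step are assumptions for illustration, not the authors' code.

```python
# Fine-tune a RetinaNet detector for one foreground class ("pig").
# Hyper-parameters and the dummy batch are illustrative assumptions.
import torch
from torchvision.models import ResNet50_Weights
from torchvision.models.detection import retinanet_resnet50_fpn

# num_classes counts the background, so "pig" + background = 2; the
# ImageNet-pretrained backbone is what makes this transfer learning.
model = retinanet_resnet50_fpn(
    weights=None,
    weights_backbone=ResNet50_Weights.IMAGENET1K_V1,
    num_classes=2,
)
optimizer = torch.optim.SGD(model.parameters(), lr=0.005, momentum=0.9)

# One dummy training step; a real run would iterate over annotated
# FlickR images, then "infuse" small camera-trap subsets.
images = [torch.rand(3, 512, 512)]
targets = [{"boxes": torch.tensor([[100.0, 100.0, 300.0, 300.0]]),
            "labels": torch.tensor([1])}]

model.train()
loss_dict = model(images, targets)   # classification + box-regression losses
loss = sum(loss_dict.values())
optimizer.zero_grad()
loss.backward()
optimizer.step()
```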


Author(s): Ashwini Tonge, Cornelia Caragea

With millions of images shared online, privacy concerns are on the rise. In this paper, we propose an approach to image privacy prediction by dynamically identifying powerful features corresponding to objects, scene context, and image tags derived from Convolutional Neural Networks for each test image. Specifically, our approach identifies the set of most “competent” features on the fly, according to each test image whose privacy has to be predicted. Experimental results on thousands of Flickr images show that our approach predicts the sensitive (or private) content more accurately than the models trained on each individual feature set (object, scene, and tags alone) or their combination.
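One way to make the "competent features on the fly" idea concrete is sketched below. This is an illustrative reading, not the authors' exact method: one classifier is trained per feature set (object, scene, tags), and for each test image the most confident classifier is trusted.

```python
# Illustrative sketch: per-test-image selection of the most "competent"
# feature set. Feature matrices are assumed precomputed (e.g. CNN
# object/scene features and tag features); this is not the paper's code.
import numpy as np
from sklearn.linear_model import LogisticRegression

def predict_privacy(train_feats, y_train, test_feats):
    """train_feats / test_feats: dicts mapping a feature-set name
    ("object", "scene", "tags") to its feature matrix."""
    models = {name: LogisticRegression(max_iter=1000).fit(X, y_train)
              for name, X in train_feats.items()}
    names = list(train_feats)
    # P(private) for every test image under every feature-set model.
    probs = np.stack([models[n].predict_proba(test_feats[n])[:, 1]
                      for n in names], axis=1)
    # The "competent" model is the one farthest from the 0.5 boundary.
    competent = np.abs(probs - 0.5).argmax(axis=1)
    chosen = probs[np.arange(len(probs)), competent]
    return (chosen >= 0.5).astype(int)  # 1 = private, 0 = public
```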

