Food Images: Recently Published Documents

TOTAL DOCUMENTS: 211 (last five years: 95)
H-INDEX: 26 (last five years: 5)

Author(s): Tejaswini Oduru, Alexis Jordan, Albert Park

Obesity is a modern public health problem. Social media images can capture eating behavior and its potential implications for health, but research on identifying the healthiness of food images remains relatively under-explored. This study presents a deep learning architecture that transfers features from a 152-layer residual network (ResNet-152) to predict the healthiness of food images, using a dataset built from the Google Images search engine in 2020. Features learned by ResNet-152 were transferred to a second network trained on this dataset: a SoftMax layer was stacked on top of the transferred ResNet-152 layers to form the final model. We then evaluated the model on Twitter images to better understand how well the method generalizes. The results show that the model can assign images to their respective classes (Definitively Healthy, Healthy, Unhealthy, and Definitively Unhealthy) with an F1-score of 78.8%. These findings are promising for classifying social media images by healthiness, which could help individuals maintain a balanced diet and help researchers understand general food consumption trends among the public.
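
Below is a minimal PyTorch sketch of the transfer-learning setup this abstract describes: a frozen ResNet-152 backbone with a new four-class head trained with cross-entropy (which applies the softmax internally). The hyperparameters, pretrained-weight choice, and training step are illustrative assumptions, not the authors' released code.

# Transfer learning from ResNet-152 for 4-class food-healthiness prediction.
# Sketch only: dataset handling, hyperparameters, and class order are assumptions.
import torch
import torch.nn as nn
from torchvision import models

NUM_CLASSES = 4  # Definitively Healthy, Healthy, Unhealthy, Definitively Unhealthy

# Load ResNet-152 pretrained on ImageNet and freeze its feature layers.
backbone = models.resnet152(weights=models.ResNet152_Weights.IMAGENET1K_V1)
for param in backbone.parameters():
    param.requires_grad = False

# Replace the final fully connected layer with a new 4-class head (trainable).
backbone.fc = nn.Linear(backbone.fc.in_features, NUM_CLASSES)

criterion = nn.CrossEntropyLoss()  # applies log-softmax internally during training
optimizer = torch.optim.Adam(backbone.fc.parameters(), lr=1e-3)

def train_step(images, labels):
    """One optimization step on a batch of images and integer class labels."""
    optimizer.zero_grad()
    logits = backbone(images)
    loss = criterion(logits, labels)
    loss.backward()
    optimizer.step()
    return loss.item()

Training only the new head on frozen ResNet-152 features mirrors the stacked-SoftMax-on-transferred-layers setup described above; at inference, class probabilities can be obtained with torch.softmax over the logits.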


2021
Author(s): Dayna Mercer

New Zealand obesity rates have reached epidemic proportions. Excessive eating not only harms individual health, but also the NZ economy; health-related costs soar with rising obesity rates. The need to understand possible mechanisms driving excessive eating behaviour is now crucial. One cognitive mechanism thought to contribute to excessive eating is an attentional bias towards food stimuli. We propose this bias would be similar to the attentional bias that is consistently shown with emotional stimuli (e.g. erotic and mutilation images). In this thesis I examined attentional biases towards food stimuli and how they relate to both state (hunger) and trait (waist circumference) factors. In Experiment 1, I investigated the existence of a food-related attentional bias and whether this bias is stronger towards high-calorie food images, compared to low-calorie and non-food images (household objects). Participants were asked to fast for 2 hours (to promote self-reported hunger) before completing a distraction task that has repeatedly shown an attentional bias to high-arousal emotional images (erotic and mutilation scenes). On each trial, participants had to determine whether a target letter was a ‘K’ or an ‘N’ while ignoring centrally presented distractors (high-calorie, low-calorie and household-object images). Compared to scrambled images, all image types were similarly distracting. We found no support for the existence of an attentional bias towards food stimuli, nor did we find a significant association between the bias and either state or trait factors. Experiment 2 sought to conceptually replicate Cunningham and Egeth (2018), who found significant support for the existence of a food-related attentional bias. Participants completed a similar task; however, distractor relevance was manipulated by incorporating both central and peripheral distractors to increase ecological validity. Additionally, participants were asked to fast for longer (4 hours) to increase self-reported hunger. Despite a significant distraction effect (participants were more distracted on distractor-present vs. distractor-absent trials) and a distractor-location effect (participants were more distracted by central vs. peripheral distractors), participants did not exhibit an attentional bias towards food stimuli. Furthermore, no significant associations between the bias and either state or trait factors were found. Thus, food stimuli do not appear to rapidly capture attention the way that emotional stimuli do, at least not in this task. Future research is needed to clarify the role of cognitive mechanisms in excessive eating behaviour.


10.2196/27512
2021, Vol 5 (12), pp. e27512
Author(s): Katharine Harrington, Shannon N Zenk, Linda Van Horn, Lauren Giurini, Nithya Mahakala, ...

Background: As poor diet quality is a significant risk factor for multiple noncommunicable diseases prevalent in the United States, it is important to develop methods that accurately capture eating behavior data. There is growing interest in using ecological momentary assessments to collect data on health behaviors and their predictors on a micro timescale (at different points within or across days); however, documenting eating behaviors remains a challenge.
Objective: This pilot study (N=48) aims to examine the feasibility (usability and acceptability) of using smartphone-captured and crowdsource-labeled images to document eating behaviors in real time.
Methods: Participants completed the Block Fat/Sugar/Fruit/Vegetable Screener to provide a measure of their typical eating behavior, then took pictures of their meals and snacks and answered brief survey questions for 7 consecutive days using a commercially available smartphone app. Participant acceptability was determined through a questionnaire about their experiences, administered at the end of the study. The images of meals and snacks were uploaded to Amazon Mechanical Turk (MTurk), a crowdsourcing distributed human intelligence platform, where 2 Workers assigned a count of food categories (fruits, vegetables, salty snacks, and sweet snacks) to each image. Agreement among MTurk Workers was assessed, and weekly food counts were calculated and compared with the Screener responses.
Results: Participants reported little difficulty in uploading photographs and remembered to take photographs most of the time. Crowdsource-labeled images (n=1014) showed moderate agreement between MTurk Worker responses for vegetables (688/1014, 67.85%) and high agreement for all other food categories (871/1014, 85.89% for fruits; 847/1014, 83.53% for salty snacks; and 833/1014, 81.15% for sweet snacks). There were no significant differences in weekly food consumption between the food images and the Block Screener, suggesting that this approach may measure typical eating behaviors as accurately as traditional methods, with less burden on participants.
Conclusions: Our approach offers a potentially time-efficient and cost-effective strategy for capturing eating events in real time.
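
Below is a minimal Python sketch of the kind of per-category agreement check the Methods describe, with two MTurk Workers assigning food counts per image. The data layout, category keys, and function name are illustrative assumptions rather than the study's actual analysis code.

# Percent agreement between two MTurk Workers' food-category counts per image.
# Illustrative sketch: the input format is an assumption, not the study's data schema.
from typing import Dict, List

CATEGORIES = ["fruits", "vegetables", "salty_snacks", "sweet_snacks"]

def percent_agreement(worker_a: List[Dict[str, int]],
                      worker_b: List[Dict[str, int]]) -> Dict[str, float]:
    """For each category, the share of images where both Workers gave the same count."""
    n_images = len(worker_a)
    agreement = {}
    for cat in CATEGORIES:
        matches = sum(1 for a, b in zip(worker_a, worker_b) if a[cat] == b[cat])
        agreement[cat] = matches / n_images
    return agreement

# Toy example: two Workers labeling the same two images.
a = [{"fruits": 1, "vegetables": 2, "salty_snacks": 0, "sweet_snacks": 0},
     {"fruits": 0, "vegetables": 0, "salty_snacks": 1, "sweet_snacks": 1}]
b = [{"fruits": 1, "vegetables": 1, "salty_snacks": 0, "sweet_snacks": 0},
     {"fruits": 0, "vegetables": 0, "salty_snacks": 1, "sweet_snacks": 1}]
print(percent_agreement(a, b))  # fruits 1.0, vegetables 0.5, snacks 1.0 each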


2021, pp. 104502
Author(s): Kenichi Shibuya, Rina Kasuga, Naoto Sato, Risa Santa, Chihiro Homma, ...
Keyword(s):

2021, Vol 4, pp. 31-41
Author(s): Francesco Piluso

Once translated into images, food acquires a broader meaning. Food is no longer merely something to eat, but something to show, share and look at. The increasing number of images and pictures of dishes on our social networks, associated with hashtags such as #foodporn, expresses this renewed social, communicative and provocative function of food. However, the exhibition of these images is quite ambivalent when it comes to establishing determined patterns of visual and social relationships with and between users. The aim of this article is to analyze and attempt to mediate this ambivalence. The pornographic exposition of food images no longer presupposes a transitive form of consumption by the user, but becomes a pure and self-reflexive spectacle. The images are obscene (Baudrillard [1981] 1994) and characterized by an excess of transparency on their object, which abolishes any form of seduction (Baudrillard [1979] 1990). Barthes ([1980] 1981) defines this kind of image as unary. Pornographic images are an emblematic example. In their self-evident objectivity, these pictures lack any punctum, any piercing sign of a relationship with or openness to the observer (see Eco 1962; 1979). Nevertheless, behind their apparent transparency, the images are always products of specific perspective cuts, and are still able to convey mystery, meaning and involvement. The unary image of food is a further fragment in a series of multiple perspectives on the same object. This potential is actualized in our (social) media culture, in which the sharing and continuous remediation of images and pictures of food constitute a complex storytelling of the object. This, in turn, fosters further participation by users. The ambivalence between the indifference of the pornographic image and the involvement in the serialization of the detail is synthesized by the notion of fetishism (Baudrillard [1972] 2019). The social (and) media scenery seems to exemplify and radicalize a sort of commodity fetishism, in which social relationships between users are shaped and mediated by (social) media relationships between images of food.


Nutrients, 2021, Vol 13 (11), pp. 4132
Author(s): Xiang Chen, Evelyn Johnson, Aditya Kulkarni, Caiwen Ding, Natalie Ranelli, ...

Deep learning models can recognize the food items in an image and derive their nutrition information, including calories, macronutrients (carbohydrates, fats, and proteins), and micronutrients (vitamins and minerals). This technology has yet to be implemented for the nutrition assessment of restaurant food. In this paper, we crowdsource 15,908 food images of 470 restaurants in the Greater Hartford region from Tripadvisor and Google Place. These food images are loaded into a proprietary deep learning model (Calorie Mama) for nutrition assessment. We employ manual coding to validate the model's accuracy against the Food and Nutrient Database for Dietary Studies. The derived nutrition information is visualized at both the restaurant level and the census tract level. The deep learning model achieves 75.1% accuracy when compared with manual coding. It has more accurate labels for ethnic foods but cannot identify portion sizes, certain food items (e.g., specialty burgers and salads), or multiple food items in an image. A restaurant nutrition (RN) index is further proposed based on the derived nutrition information. By identifying the nutrition information of restaurant food through crowdsourced food images and a deep learning model, the study provides a pilot approach for large-scale nutrition assessment of the community food environment.
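
Below is a minimal Python sketch of the validation step the abstract describes: comparing the deep learning model's food labels against manual coding to obtain an overall accuracy (the study reports 75.1%) and a restaurant-level summary. The record structure and helper names are illustrative assumptions; the study's RN index formula is not reproduced here.

# Validate model food labels against manual coding and summarize per restaurant.
# Sketch under assumed data structures; not the study's actual pipeline or RN index.
from collections import defaultdict
from typing import List, Tuple

# Each record: (restaurant_id, model_label, manual_label) -- hypothetical examples.
records: List[Tuple[str, str, str]] = [
    ("rest_001", "cheeseburger", "cheeseburger"),
    ("rest_001", "garden salad", "cobb salad"),
    ("rest_002", "pad thai", "pad thai"),
]

def overall_accuracy(rows):
    """Share of images where the model label matches the manual coding."""
    correct = sum(1 for _, model, manual in rows if model == manual)
    return correct / len(rows)

def per_restaurant_accuracy(rows):
    """Accuracy aggregated at the restaurant level."""
    hits, totals = defaultdict(int), defaultdict(int)
    for rest_id, model, manual in rows:
        totals[rest_id] += 1
        hits[rest_id] += int(model == manual)
    return {rid: hits[rid] / totals[rid] for rid in totals}

print(overall_accuracy(records))        # 2 of 3 toy records correct, about 0.67
print(per_restaurant_accuracy(records)) # {'rest_001': 0.5, 'rest_002': 1.0}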


Author(s): Akshay Channam, Bavikati Ram Swarup, S Govinda Rao
Keyword(s):  

2021, Vol 154, pp. 105804
Author(s): Alina Springer, Friederike Ohlendorf, Jörg Schober, Leon Lange, Roman Osinsky
