In Whose Image? Representations of Technology and the 'Ends' of Humanity

Ecotheology ◽  
2006 ◽  
Vol 11 (2) ◽  
pp. 159-182 ◽  
Author(s):  
Elaine Graham

2020 ◽  
pp. 003329412094560
Author(s):  
Jennifer Murray ◽  
Brian Williams

If illness behaviour is to be fully understood, the social and behavioural sciences must work together to understand the wider forms in which illness is experienced and communicated by individuals and within society. The current paper synthesised literature across the social and behavioural sciences exploring illness experience and communication through physical and mental images. It argues that images may have the capacity to embody and influence beliefs, emotions, and health outcomes. Four commonalities facilitate understandings of illness behaviour across the fields: (i) recognition of the importance of the patient perspective; (ii) perception of the cause of, sense of identity with, consequences of, and level of control over the illness; (iii) health beliefs influencing illness experience, behaviours, and outcomes; and (iv) understanding illness beliefs and experiences through an almost exclusive focus on the written or spoken word. We focus on the fourth commonality, the role of images in illness behaviour, because of the proliferation of interventions using image-based approaches. While these novel approaches show merit, there is a scarcity of theoretical underpinning and of exploration into how such interventions are developed and into how people perceive and understand their own illnesses using image representations. The current paper identified that the use of images can elucidate patient and practitioner understandings of illness, facilitate communication, and potentially influence illness behaviours. It further identified commonalities across the social and behavioural sciences to facilitate theory-informed understandings of illness behaviour, which could be applied to visual intervention development to improve health outcomes.


Cancers ◽  
2021 ◽  
Vol 13 (13) ◽  
pp. 3106
Author(s):  
Yogesh Kalakoti ◽  
Shashank Yadav ◽  
Durai Sundar

The utility of multi-omics in personalized therapy and cancer survival analysis has been debated and demonstrated extensively in the recent past. Most current methods still suffer from data constraints such as high dimensionality, unexplained interdependence, and subpar integration methods. Here, we propose SurvCNN, an alternative approach that processes multi-omics data with robust computer vision architectures to predict cancer prognosis for lung adenocarcinoma patients. Numerical multi-omics data were transformed into image representations and fed into a convolutional neural network with a discrete-time model to predict survival probabilities. The framework also dichotomized patients into risk subgroups based on their survival probabilities over time. SurvCNN was evaluated on multiple performance metrics and outperformed existing methods with a high degree of confidence. Moreover, the relative performance of various combinations of omics datasets was probed comprehensively. Critical biological processes, pathways, and cell types identified from downstream processing of differentially expressed genes suggested that the framework could elucidate elements detrimental to a patient’s survival. Such integrative models with high predictive power would have a significant impact and utility in precision oncology.
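As a rough illustration of the general pipeline described above (not the authors' code), the sketch below reshapes a numerical multi-omics feature vector into a single-channel image and passes it through a small CNN whose outputs are per-interval hazards of a discrete-time survival model. The 32×32 grid, the network depth, and the number of time intervals are placeholder assumptions rather than the SurvCNN architecture.

```python
import numpy as np
import torch
import torch.nn as nn

def omics_to_image(features: np.ndarray, side: int = 32) -> torch.Tensor:
    """Pad/truncate a 1-D omics feature vector into a (1, side, side) tensor."""
    padded = np.zeros(side * side, dtype=np.float32)
    padded[: min(features.size, side * side)] = features[: side * side]
    return torch.from_numpy(padded).reshape(1, side, side)

class DiscreteTimeSurvCNN(nn.Module):
    """Toy CNN that outputs one conditional hazard per discrete time interval."""
    def __init__(self, n_intervals: int = 10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.head = nn.Sequential(
            nn.Flatten(), nn.Linear(32 * 8 * 8, 64), nn.ReLU(),
            nn.Linear(64, n_intervals), nn.Sigmoid(),  # hazard h_t in (0, 1) per interval
        )

    def forward(self, x):
        h = self.head(self.features(x))            # per-interval hazards
        survival = torch.cumprod(1.0 - h, dim=1)   # S(t) = prod_{k<=t} (1 - h_k)
        return h, survival

model = DiscreteTimeSurvCNN()
x = omics_to_image(np.random.rand(1000).astype(np.float32)).unsqueeze(0)  # fake omics vector
hazards, surv_prob = model(x)
```

The cumulative product of (1 − hazard) over the intervals yields a survival curve per patient, which is the kind of output that can then be thresholded to split patients into risk subgroups.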


2020 ◽  
Vol 34 (05) ◽  
pp. 9571-9578 ◽  
Author(s):  
Wei Zhang ◽  
Yue Ying ◽  
Pan Lu ◽  
Hongyuan Zha

Personalized image captioning, a natural extension of the standard image captioning task, requires generating brief image descriptions tailored to users' writing styles and traits, and is more practical for meeting users' real demands. Only a few recent studies shed light on this crucial task, and they learn static user representations to capture long-term literal-preference. However, this is insufficient for satisfactory performance, because users exhibit not only long-term literal-preference but also short-term literal-preference associated with their recent states. To bridge this gap, we develop a novel multimodal hierarchical transformer network (MHTN) for personalized image captioning. At the low level, a short-term user encoder learns short-term literal-preference from users' recent captions; at the high level, a multimodal encoder integrates target image representations with the short-term literal-preference as well as the long-term literal-preference learned from user IDs. Both encoders enjoy the advantages of powerful transformer networks. Extensive experiments on two real datasets show the effectiveness of considering the two types of user literal-preference simultaneously, with better performance than state-of-the-art models.
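A minimal sketch of the two-level encoding described above is given below: a short-term encoder over recent caption tokens, a user-ID embedding for long-term literal-preference, and a multimodal encoder that fuses both with image region features. All dimensions, the pooling of the short-term output, and the omitted caption decoder are illustrative assumptions, not the authors' MHTN implementation.

```python
import torch
import torch.nn as nn

class HierarchicalUserEncoder(nn.Module):
    def __init__(self, vocab_size=10000, n_users=5000, d_model=256):
        super().__init__()
        self.word_emb = nn.Embedding(vocab_size, d_model)
        self.user_emb = nn.Embedding(n_users, d_model)   # long-term preference from the user ID
        self.short_term = nn.TransformerEncoder(         # low level: over recent captions
            nn.TransformerEncoderLayer(d_model, nhead=4, batch_first=True), num_layers=2)
        self.multimodal = nn.TransformerEncoder(         # high level: fuses image + preferences
            nn.TransformerEncoderLayer(d_model, nhead=4, batch_first=True), num_layers=2)
        self.img_proj = nn.Linear(2048, d_model)         # e.g. region-level CNN features

    def forward(self, recent_caption_ids, user_id, image_feats):
        short = self.short_term(self.word_emb(recent_caption_ids)).mean(dim=1, keepdim=True)
        long = self.user_emb(user_id).unsqueeze(1)
        img = self.img_proj(image_feats)                 # (batch, regions, d_model)
        fused = self.multimodal(torch.cat([img, short, long], dim=1))
        return fused                                     # would feed a caption decoder (not shown)

enc = HierarchicalUserEncoder()
out = enc(torch.randint(0, 10000, (2, 30)),   # recent caption tokens
          torch.tensor([3, 7]),               # user IDs
          torch.randn(2, 36, 2048))           # region-level image features
```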


Author(s):  
Wei Zhao ◽  
Benyou Wang ◽  
Jianbo Ye ◽  
Min Yang ◽  
Zhou Zhao ◽  
...  

In this paper, we propose a Multi-task Learning Approach for Image Captioning (MLAIC), motivated by the fact that humans have no difficulty performing such a task because they possess capabilities spanning multiple domains. Specifically, MLAIC consists of three key components: (i) a multi-object classification model that learns rich category-aware image representations using a CNN image encoder; (ii) a syntax generation model that learns a better syntax-aware LSTM-based decoder; and (iii) an image captioning model that generates textual image descriptions, sharing its CNN encoder with the object classification task and its LSTM decoder with the syntax generation task. In particular, the image captioning model can benefit from the additional object-categorization and syntax knowledge. To verify the effectiveness of our approach, we conduct extensive experiments on the MS-COCO dataset. The experimental results demonstrate that our model achieves impressive results compared to other strong competitors.
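The parameter sharing described above can be sketched as follows: a shared CNN encoder feeds both an object-classification head and an LSTM decoder, whose hidden states drive both caption-word and syntax predictions. The toy backbone, the dimensions, and the way the syntax head attaches to the decoder are assumptions for illustration rather than the paper's implementation.

```python
import torch
import torch.nn as nn

class MLAICSketch(nn.Module):
    """Toy multi-task captioner: one shared CNN encoder, one shared LSTM decoder, three heads."""
    def __init__(self, vocab_size=10000, syntax_vocab_size=60, n_objects=80, d=512):
        super().__init__()
        # Shared CNN image encoder (a real system would use a much deeper backbone).
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.classifier = nn.Linear(64, n_objects)          # (i) multi-object classification head
        self.embed = nn.Embedding(vocab_size, d)
        self.decoder = nn.LSTM(d, d, batch_first=True)      # shared LSTM decoder
        self.word_head = nn.Linear(d, vocab_size)           # (iii) caption words
        self.syntax_head = nn.Linear(d, syntax_vocab_size)  # (ii) e.g. a POS-tag sequence
        self.init_h = nn.Linear(64, d)

    def forward(self, images, caption_ids):
        feats = self.encoder(images)                         # (batch, 64) pooled image features
        obj_logits = self.classifier(feats)                  # object-classification logits
        h0 = torch.tanh(self.init_h(feats)).unsqueeze(0)     # image features initialise the decoder
        out, _ = self.decoder(self.embed(caption_ids), (h0, torch.zeros_like(h0)))
        return obj_logits, self.word_head(out), self.syntax_head(out)

model = MLAICSketch()
obj, words, syntax = model(torch.randn(2, 3, 224, 224),
                           torch.randint(0, 10000, (2, 15)))
```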


2020 ◽  
Author(s):  
Marvin Chancán

Visual navigation tasks in real-world environments often require both self-motion and place recognition feedback. While deep reinforcement learning has shown success in solving these perception and decision-making problems in an end-to-end manner, such algorithms require large amounts of experience to learn navigation policies from high-dimensional data, which is generally impractical for real robots due to sample complexity. In this paper, we address these problems with two main contributions. First, we leverage place recognition and deep learning techniques, combined with goal destination feedback, to generate compact, bimodal image representations that can then be used to learn control policies effectively from a small amount of experience. Second, we present an interactive framework, CityLearn, that enables for the first time training and deployment of navigation algorithms across city-sized, realistic environments with extreme visual appearance changes. CityLearn features more than 10 benchmark datasets, often used in visual place recognition and autonomous driving research, including over 100 recorded traversals across 60 cities around the world. We evaluate our approach on two CityLearn environments, training our navigation policy on a single traversal. Results show our method can be over two orders of magnitude faster than using raw images, and can also generalize across extreme visual changes, including day-to-night and summer-to-winter transitions.
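As a minimal sketch of the first contribution (under assumed descriptor sizes, not the authors' code), a place-recognition descriptor of the current view can be concatenated with a goal descriptor to form a compact bimodal state for a small policy network:

```python
import torch
import torch.nn as nn

class BimodalPolicy(nn.Module):
    """Toy policy over a compact bimodal state: current-place descriptor + goal descriptor."""
    def __init__(self, desc_dim=512, n_actions=4):
        super().__init__()
        self.policy = nn.Sequential(
            nn.Linear(2 * desc_dim, 256), nn.ReLU(),
            nn.Linear(256, n_actions),
        )

    def forward(self, place_descriptor, goal_descriptor):
        # Bimodal representation: where the agent is + where it should go
        state = torch.cat([place_descriptor, goal_descriptor], dim=-1)
        return self.policy(state)           # action logits for the navigation agent

policy = BimodalPolicy()
logits = policy(torch.randn(1, 512), torch.randn(1, 512))
```

Because the policy consumes two short descriptors rather than raw frames, far fewer interactions are needed to learn a usable navigation policy, which is the sample-efficiency argument made above.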

