Bodies-in-Relation: Fine-Tuning Group-Directed Empathy

2021 ◽  
Vol 54 (1) ◽  
pp. 113-132
Author(s):  
Sarah Pawlett-Jackson

Abstract In this paper I analyze Alessandro Salice and Joona Taipale’s account of ‘group-directed empathy.’ I am highly sympathetic to Salice and Taipale’s account and intend this paper to be an endorsement of their project. However, I will argue that a more fine-grained account of group-directed empathy can be offered, and I seek to contribute to this discussion by outlining at least one way in which different types of group-directed empathy may be identified. I argue that while Salice and Taipale are right to claim that an account of group-directed empathy requires a corresponding account of ‘collective bodiliness,’ there is an important form of collective bodiliness that their account does not fully incorporate, namely embodied interaction between others. I argue that a closer look at the perceivability of interactions between others offers a richer and more complete account of how we can empathetically perceive shared emotions between groups of people.

2019 ◽  
Author(s):  
Timo Walter

In the 1980s, central banks around the world stumbled upon a new method for conducting their monetary policy: instead of the heavy-handed, "hydraulic" manipulation of monetary aggregates, they learned to "govern the future" by managing the expectations of market actors directly. New and better indicators and forecasts would provide the basis for a new communicative coordination of market expectations, permitting a more fine-grained and effective implementation of monetary policy, particularly in controlling inflation. Focusing on the US Federal Reserve's prototype development of inflation targeting, this paper puts this storyline to the test. Against the recent trend in sociology to conceive of expectations and futurity as modes of coordination that thrive under conditions of (fundamental) uncertainty that defy rational calculation, I argue that futurity and the formation of expectations inextricably depend on prior processes of formalization. Examining the transition to modern 'inflation targeting' monetary policy, I show how the effectiveness of coordination by expectations is achieved through extensive processes of proceduralization and standardization. While these processes increase the technical efficiency of fine-tuning expectations, the gains are only possible because of the procedural narrowing of the scope of communicative interaction, which may significantly affect the overall effectiveness of this mode of coordination. I conclude with a call to examine more closely how formal and informal modes of coordination are mutually interdependent, and how the nature of their entanglements affects their effectiveness.


2016 ◽  
Vol 12 (4) ◽  
Author(s):  
Jacek Dygut ◽  
Piotr Piwowar ◽  
Maria Gołda ◽  
Krzysztof Popławski ◽  
Robert Jakubas ◽  
...  

Abstract Nowadays, medical simulators and computer simulation programs are used to train various skills required in medicine. The development of medicine, including orthopedics and rehabilitation, means that resident physicians must acquire, within a much shorter period of time, the knowledge and skills that their older colleagues gained over years of learning while operating on patients. For this reason, simulation very often helps doctors and others engaged in health care practice necessary techniques before they start working in a clinical environment, giving them a chance to fine-tune certain skills in a nonclinical setting. On the other hand, simulation techniques are also used in medical scientific research to understand and explain the different biological processes that can be used for better patient treatment in the future. In this paper (Part I), the authors focus on presenting different types of simulators for the following purposes: testing (conducted under laboratory conditions), training (incorporated into school and university syllabi), and diagnosis and therapy (within hospitals, clinics, and private medical practice).


2020 ◽  
Vol 34 (08) ◽  
pp. 13267-13272
Author(s):  
Alex Foo ◽  
Wynne Hsu ◽  
Mong Li Lee ◽  
Gilbert Lim ◽  
Tien Yin Wong

Although deep learning for Diabetic Retinopathy (DR) screening has shown great success in achieving clinically acceptable accuracy for referable versus non-referable DR, there remains a need to provide more fine-grained grading of the DR severity level as well as automated segmentation of lesions (if any) in the retina images. We observe that the DR severity level of an image is dependent on the presence of different types of lesions and their prevalence. In this work, we adopt a multi-task learning approach to perform the DR grading and lesion segmentation tasks. In light of the lack of lesion segmentation mask ground-truths, we further propose a semi-supervised learning process to obtain the segmentation masks for the various datasets. Experimental results on publicly available datasets and a real-world dataset obtained from population screening demonstrate the effectiveness of the multi-task solution over state-of-the-art networks.
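As an illustration of the general multi-task setup described above (not the authors' specific architecture), the following Python sketch shows a shared encoder feeding both a severity-grading head and a lesion-segmentation head, trained with a joint loss. All layer sizes, the number of severity levels, and the number of lesion types are assumptions made for illustration.

```python
# Minimal multi-task sketch (not the paper's architecture): a shared CNN
# encoder feeds a DR-severity grading head and a lesion segmentation head.
import torch
import torch.nn as nn

class MultiTaskDRNet(nn.Module):
    def __init__(self, num_grades=5, num_lesion_types=4):  # assumed values
        super().__init__()
        # Shared encoder over 3-channel retina images.
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
        )
        # Grading head: global pooling + linear classifier over severity levels.
        self.grade_head = nn.Sequential(
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(64, num_grades)
        )
        # Segmentation head: upsampling decoder producing per-pixel lesion logits.
        self.seg_head = nn.Sequential(
            nn.ConvTranspose2d(64, 32, 2, stride=2), nn.ReLU(),
            nn.ConvTranspose2d(32, num_lesion_types, 2, stride=2),
        )

    def forward(self, x):
        feats = self.encoder(x)
        return self.grade_head(feats), self.seg_head(feats)

model = MultiTaskDRNet()
images = torch.randn(2, 3, 128, 128)                 # dummy batch
grade_logits, lesion_logits = model(images)
# Joint loss: cross-entropy for grading plus a per-pixel loss for segmentation;
# in the paper, segmentation targets come from a semi-supervised labelling step.
grade_loss = nn.CrossEntropyLoss()(grade_logits, torch.tensor([0, 3]))
seg_loss = nn.BCEWithLogitsLoss()(lesion_logits, torch.zeros_like(lesion_logits))
loss = grade_loss + seg_loss
```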


Author(s):  
Dennis Dijkzeul ◽  
Diana Griesinger

The term "humanitarian crisis" combines two words of controversial meaning and definitions that are often used in very different situations. For example, there is no official definition of "humanitarian crisis" in international humanitarian law. Although some academic disciplines have developed ways of collecting and analyzing data on (potential) crises, all of them have difficulties understanding, defining, and even identifying humanitarian crises. Following an overview of the use of the compound noun "humanitarian crisis," three perspectives, drawn respectively from International Humanitarian Law, Public Health, and Humanitarian Studies, are discussed in order to explore their different but partly overlapping approaches to (incompletely) defining, representing, and negotiating humanitarian crises. These disciplinary perspectives often paint an incomplete and technocratic picture of crises that is rarely contextualized and thus fails to reflect adequately the political causes of crises and the roles of local actors. They center more on defining humanitarian action than on humanitarian crises. They also reveal four different types of humanitarian action, namely radical, traditional Dunantist, multimandate, and resilience humanitarianism. These humanitarianisms have different strengths and weaknesses in different types of crisis, but none comprehensively and successfully defines humanitarian crises. Finally, it is argued that a multiperspective and power-sensitive definition of crises, together with a more fine-grained language for comprehending their diversity, would do more justice to the complexity and longevity of crises and to the persons who are surviving, or attempting to survive, them.


2020 ◽  
Vol 179 ◽  
pp. 02027
Author(s):  
Shuaipu Chen

[Purpose / Meaning] Rumors have been frequent during the COVID-19 epidemic crisis. In order to unite the rumor-refuting efforts of various media platforms and help break rumors in a timely and professional manner, this article designs a new fine-grained classification of rumors about COVID-19 based on the BERT model. [Method / Process] Using rumor data from several mainstream rumor-refuting platforms, the pre-trained BERT model was fine-tuned in the context of COVID-19 events to obtain sentence-level feature vector representations of the rumors and achieve fine-grained classification, and comparative experiments were conducted against the TextCNN and TextRNN models. [Result / Conclusion] The results show that the classification F1 value of the proposed model reaches 98.34%, 2% higher than the TextCNN and TextRNN models, indicating that the model has good classification ability for COVID-19 rumors and provides a useful reference for coordinating rumor refutation during public crises.
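The hedged Python sketch below illustrates the general approach described above, fine-tuning a pre-trained BERT checkpoint for multi-class rumor classification with the Hugging Face transformers library. The checkpoint name, number of categories, learning rate, and example sentence are assumptions for illustration, not the paper's actual configuration.

```python
# Sketch of fine-tuning BERT for fine-grained rumor classification.
# Checkpoint, label count, and hyperparameters are illustrative assumptions.
import torch
from transformers import BertTokenizer, BertForSequenceClassification

NUM_CLASSES = 6  # assumed number of fine-grained rumor categories
tokenizer = BertTokenizer.from_pretrained("bert-base-chinese")
model = BertForSequenceClassification.from_pretrained(
    "bert-base-chinese", num_labels=NUM_CLASSES
)

texts = ["Drinking hot water cures COVID-19."]   # hypothetical rumor sentence
labels = torch.tensor([2])                        # hypothetical category index

batch = tokenizer(texts, padding=True, truncation=True, return_tensors="pt")
optimizer = torch.optim.AdamW(model.parameters(), lr=2e-5)

model.train()
outputs = model(**batch, labels=labels)  # sentence-level encoding + classifier head
outputs.loss.backward()                  # one fine-tuning step
optimizer.step()
```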


2001 ◽  
Vol 23 ◽  
pp. 85-101
Author(s):  
Donka F. Farkas

This paper is concerned with semantic noun phrase typology, focusing on the question of how to draw fine-grained distinctions necessary for an accurate account of natural language phenomena. In the extensive literature on this topic, the most commonly encountered parameters of classification concern the semantic type of the denotation of the noun phrase, the familiarity or novelty of its referent, the quantificational/nonquantificational distinction (connected to the weak/strong dichotomy), as well as, more recently, the question of whether the noun phrase is choice-functional or not (see Reinhart 1997, Winter 1997, Kratzer 1998, Matthewson 1999). In the discussion that follows I will attempt to make the following general points: (i) phenomena involving the behavior of noun phrases both within and across languages point to the need to establish further distinctions that are too fine-grained to be caught in the net of these typologies; (ii) some of the relevant distinctions can be captured in terms of conditions on assignment functions; (iii) distribution and scopal peculiarities of noun phrases may result from constraints they impose on the way variables they introduce are to be assigned values. Section 2 reviews the typology of definite noun phrases introduced in Farkas 2000 and the way it provides support for the general points above. Section 3 examines some of the problems raised by recognizing the rich variety of 'indefinite' noun phrases found in natural language and by attempting to capture their distribution and interpretation. Common to the typologies discussed in the two sections is the issue of marking different types of variation in the interpretation of a noun phrase. In the light of this discussion, specificity turns out to be an epiphenomenon connected to a family of distinctions that are marked differently in different languages.


2019 ◽  
Author(s):  
Sophia Crüwell ◽  
Angelika Stefan ◽  
Nathan J. Evans

Recent discussions within the mathematical psychology community have focused on how Open Science practices may apply to cognitive modelling. Lee et al. (2019) sketched an initial approach for adapting Open Science practices that have been developed for experimental psychology research to the unique needs of cognitive modelling. While we welcome the general proposal of Lee et al. (2019), we believe a more fine-grained view is necessary to accommodate the adoption of Open Science practices in the diverse areas of cognitive modelling. Firstly, we suggest a categorisation for the diverse types of cognitive modelling, which we argue will allow researchers to more clearly adapt Open Science practices to different types of cognitive modelling. Secondly, we consider the feasibility and usefulness of preregistration and lab notebooks for each of these categories, and address potential objections to preregistration in cognitive modelling. Finally, we separate several cognitive modelling concepts that we believe Lee et al. (2019) conflated, which should allow for greater consistency and transparency in the modelling process. At a general level, we propose a framework that emphasises local consistency in approaches while allowing for global diversity in modelling practices.


Author(s):  
Chuhan Wu ◽  
Fangzhao Wu ◽  
Yongfeng Huang ◽  
Xing Xie

Accurate user modeling is critical for news recommendation. Existing news recommendation methods usually model users' interest from their behaviors via sequential or attentive models. However, they cannot model the rich relatedness between user behaviors, which can provide useful contexts of these behaviors for user interest modeling. In this paper, we propose a novel user modeling approach for news recommendation, which models each user as a personalized heterogeneous graph built from user behaviors to better capture the fine-grained behavior relatedness. In addition, in order to learn user interest embeddings from the personalized heterogeneous graph, we propose a novel heterogeneous graph pooling method, which can summarize both node features and graph topology, and is aware of the varied characteristics of different types of nodes. Experiments on a large-scale benchmark dataset show that the proposed methods can effectively improve the performance of user modeling for news recommendation.
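The Python sketch below shows one simplified way such type-aware pooling could look: node features of each type are attention-pooled separately, and the type-level summaries are then combined into a single user embedding. It omits the topology-aware component of the proposed method, and all dimensions and node-type names are assumptions for illustration.

```python
# Simplified sketch (not the authors' model) of type-aware pooling over a
# personalized heterogeneous graph built from user behaviors.
import torch
import torch.nn as nn

class TypeAwareGraphPooling(nn.Module):
    def __init__(self, dim=64, node_types=("news", "topic", "entity")):
        super().__init__()
        self.node_types = node_types
        # A separate attention scorer per node type, so pooling is aware of
        # the varied characteristics of different kinds of nodes.
        self.attn = nn.ModuleDict({t: nn.Linear(dim, 1) for t in node_types})
        self.combine = nn.Linear(dim * len(node_types), dim)

    def forward(self, nodes_by_type):
        summaries = []
        for t in self.node_types:
            feats = nodes_by_type[t]                      # (num_nodes_t, dim)
            weights = torch.softmax(self.attn[t](feats), dim=0)
            summaries.append((weights * feats).sum(dim=0))
        return self.combine(torch.cat(summaries))         # user embedding (dim,)

pool = TypeAwareGraphPooling()
user_graph = {t: torch.randn(n, 64)
              for t, n in [("news", 10), ("topic", 5), ("entity", 7)]}
user_embedding = pool(user_graph)  # fed to a downstream recommendation scorer
```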


Author(s):  
Stanley E. Porter

Rhetorical criticism has emerged since the mid-1970s as an important form of criticism of the New Testament. This chapter offers a critical summary and assessment of such research. There are several different types of rhetorical criticism, but the major form practiced in New Testament studies is based upon utilizing the categories of ancient rhetoric as an interpretive tool. The chapter criticizes this approach for failing to assess accurately the ancient context of the New Testament. Then a number of positive ways that rhetoric in various forms—analysis of style, the New Rhetoric, discourse analysis, text linguistics, and socio-rhetorical criticism—can be used in New Testament studies are proposed.


AI & Society ◽  
2020 ◽  
Author(s):  
Johan Rochel ◽  
Florian Evéquoz

Abstract Enacting an AI system typically requires three iterative phases where AI engineers are in command: selection and preparation of the data, selection and configuration of algorithmic tools, and fine-tuning of the different parameters on the basis of intermediate results. Our main hypothesis is that these phases involve practices with ethical questions. This paper maps these ethical questions and proposes a way to address them in light of a neo-republican understanding of freedom, defined as absence of domination. We thereby identify different types of responsibility held by AI engineers and link them to concrete suggestions on how to improve professional practices. This paper contributes to the literature on AI and ethics by focusing on the work necessary to configure AI systems, thereby offering an input to better practices and an input for societal debates.
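As a purely illustrative sketch of the three phases referred to above (not drawn from the paper), the Python snippet below walks through data selection and preparation, configuration of an algorithmic tool, and fine-tuning of parameters against intermediate results; the dataset and model choices are assumptions, and each step embeds the kinds of engineering judgments the paper examines.

```python
# Illustrative three-phase workflow: data preparation, tool configuration,
# parameter fine-tuning. Dataset and model are assumptions for illustration.
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import GridSearchCV, train_test_split
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import LogisticRegression

# Phase 1: select and prepare the data.
X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Phase 2: select and configure the algorithmic tool.
pipeline = Pipeline([
    ("scale", StandardScaler()),
    ("clf", LogisticRegression(max_iter=1000)),
])

# Phase 3: fine-tune parameters on the basis of intermediate results.
search = GridSearchCV(pipeline, {"clf__C": [0.1, 1.0, 10.0]}, cv=5)
search.fit(X_train, y_train)
print(search.best_params_, search.score(X_test, y_test))
```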

