unconscious communication
Recently Published Documents

TOTAL DOCUMENTS: 61 (five years: 21)
H-INDEX: 9 (five years: 1)

2022 ◽  
Vol 12 ◽  
Author(s):  
Andrea Jesser ◽  
Johanna Muckenhuber ◽  
Bernd Lunglmayr

The COVID-19 pandemic brought massive changes to the provision of psychotherapy. To contain the pandemic, many therapists switched from face-to-face sessions in personal contact to remote settings. This study focused on psychodynamic therapists practicing Guided Affective Imagery, Hypnosis, and Autogenous Relaxation, and on their subjective experiences with psychotherapy via telephone and videoconferencing during the first COVID-19-related lockdown in March 2020 in Austria. An online survey completed by 161 therapists produced both quantitative and qualitative data, the latter being subject to qualitative content analysis. Our research suggests that telephone and videoconferencing are considered valuable formats for delivering psychodynamic psychotherapy. However, therapists’ experiences with remote psychotherapy are multifaceted and ambiguous. In particular, the findings raise questions, still in need of further investigation, concerning the maintenance of the therapeutic alliance, the development of the analytic process, sensitivity to unconscious communication, and the indication for certain types of patients. Our research indicates that the long-standing reticence toward remote treatment formats among psychodynamic therapists is becoming more differentiated and partially dissolves as therapists gain experience in their use. Attitudes are becoming more open. At the same time, the way is being prepared for a closer look at the specific processes and dynamics of remote psychotherapy and for their critical examination in future studies.


BJPsych Open ◽  
2021 ◽  
Vol 7 (S1) ◽  
pp. S160-S161
Author(s):  
Alina Vaida ◽  
Masud Awal

Aims
Research suggests that seeing psychotherapy cases benefits psychiatric trainees’ professional development and clinical capabilities; however, there is a lack of such evidence for SAS psychiatrists, who require this experience for Certificate of Eligibility for Specialist Registration (CESR) applications. Having provided frequently requested psychotherapy training support to our Trust's CESR training programme in Birmingham, we aimed to study SAS psychiatrists’ psychotherapy case experience, professional benefits, and barriers to access nationwide.

Method
An online questionnaire was sent to SAS psychiatry doctors across the UK, with the support of the RCPsych Speciality Doctors and Associate Specialist Psychiatrists Committee (SASC), and was also promoted on social media. It asked about psychotherapy-related experience, barriers, and plans.

Result
122 doctors completed the questionnaire, estimated to constitute approximately 8% (or more if all vacancies are considered) of SAS psychiatry posts based on the RCPsych Census (2015), from across all UK nations and regions. 23% had gained experience in delivering psychotherapy (57% of whom confirmed CESR or training application plans), seeing cases mainly in CBT (52%) and psychodynamic psychotherapy (41%). Those who had delivered psychotherapy agreed or strongly agreed that it helped them become a better listener (82%), become more empathetic (75%), enjoy work more (71%), understand unconscious communication better (82%), be more confident about referring for psychotherapy (82%), and overall be a better psychiatrist (86%). 44% planned to start a psychotherapy case but had not yet done so, of whom only 22% had identified a supervisor and 15% a case. Only 11% felt confident they could get the psychotherapy training experiences they needed.
Barriers reported included psychotherapy not being part of their job plan (70%), time constraints (57%), difficulties in accessing psychotherapy supervision (61%), difficulties in identifying suitable cases (32%), and limited knowledge about psychotherapy (30%).

Conclusion
Doctors who delivered psychotherapy reported benefits on many levels, making a strong case that it develops their clinical capabilities, which may facilitate psychologically informed care. The results indicate that interest in psychotherapy training outstripped the available opportunities and support. Whilst some barriers mirrored those previously reported for trainees (difficulties accessing supervision and cases), others related particularly to SAS workload (psychotherapy not being part of the job plan and time constraints) and lack of support (with trainees prioritised). This may highlight a potential concern, given that the SAS Charter covers CESR-related support and advocates appropriate Supporting Professional Activities (SPA) time. Trusts need to consider more actively supporting SAS psychotherapy training and including it in job planning for those receiving, delivering, and supporting these valued experiences.


Proceedings ◽  
2020 ◽  
Vol 47 (1) ◽  
pp. 68
Author(s):  
Gianfranco Basti

In this contribution, I start from Levy’s valuable suggestion about the neuroethics of distinguishing between the “slow-conscious responsibility” of us as persons and the “fast-unconscious responsiveness” of the sub-personal brain mechanisms studied in the cognitive neurosciences. Both, however, are accountable for how they respond to environmental (physical, social, and ethical) constraints. I propose to extend Levy’s suggestion to the fundamental distinction between the “moral responsibility of conscious communication agents” and the “ethical responsiveness of unconscious communication agents”, such as our brains but also AI decision supports. Both, indeed, can be included in the category of the “sub-personal modules” of our moral agency as persons. I show the relevance of this distinction, also from the logical and computational standpoints, in both the neurosciences and the computer sciences for the current debate about an ethically accountable AI. Machine learning algorithms, when applied to automated supports for decision-making processes in several social, political, and economic spheres, are not at all “value-free” or “amoral”. They must satisfy an ethical responsiveness to avoid what has been called the unintended, but real, “algorithmic injustice”.
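The “ethical responsiveness” the abstract demands of decision supports is, in practice, often operationalised as a fairness audit. A minimal sketch of one such check, not taken from the paper: the demographic parity difference, a common quantitative proxy for algorithmic injustice. The decision data, group labels, and loan-approval framing below are hypothetical illustrations.

```python
# Illustrative sketch (not from the paper): auditing an automated decision
# support for group disparity via the demographic parity difference,
# one common operationalisation of "algorithmic injustice".

def demographic_parity_difference(decisions, groups):
    """Absolute gap in positive-decision rates between two groups.

    decisions: iterable of 0/1 outcomes (1 = favourable decision)
    groups:    iterable of group labels, exactly two distinct values
    """
    rates = {}
    for g in set(groups):
        outcomes = [d for d, grp in zip(decisions, groups) if grp == g]
        rates[g] = sum(outcomes) / len(outcomes)
    a, b = rates.values()
    return abs(a - b)

# Hypothetical loan-approval decisions (1 = approve) for applicants
# belonging to groups "A" and "B".
decisions = [1, 1, 0, 1, 0, 0, 1, 0]
groups    = ["A", "A", "A", "A", "B", "B", "B", "B"]

gap = demographic_parity_difference(decisions, groups)
print(round(gap, 2))  # 0.5: group A is approved 75% of the time, group B 25%
```

A gap near 0 indicates the system grants favourable decisions at similar rates across groups; a large gap flags exactly the kind of unintended disparity the abstract warns about, even when the algorithm itself is formally "value-free".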



