A Multi-Aspectual Requirements Analysis for Artificial Intelligence for Well-being

Author(s):
Mamello Thinyane
Lauri Goldkind


2019
Vol 10 (1)
Author(s):
J. Raymond Geis
Adrian Brady
Carol C. Wu
Jack Spencer
Erik Ranschaert
...

Abstract: This is a condensed summary of an international multisociety statement on the ethics of artificial intelligence (AI) in radiology produced by the ACR, European Society of Radiology, RSNA, Society for Imaging Informatics in Medicine, European Society of Medical Imaging Informatics, Canadian Association of Radiologists, and American Association of Physicists in Medicine. AI has great potential to increase efficiency and accuracy throughout radiology, but it also carries inherent pitfalls and biases. Widespread use of AI-based intelligent and autonomous systems in radiology can increase the risk of systemic errors with high consequence and highlights complex ethical and societal issues. Currently, there is little experience using AI for patient care in diverse clinical settings. Extensive research is needed to understand how best to deploy AI in clinical practice. This statement highlights our consensus that ethical use of AI in radiology should promote well-being, minimize harm, and ensure that the benefits and harms are distributed among stakeholders in a just manner. We believe AI should respect human rights and freedoms, including dignity and privacy. It should be designed for maximum transparency and dependability. Ultimate responsibility and accountability for AI remains with its human designers and operators for the foreseeable future. The radiology community should start now to develop codes of ethics and practice for AI that promote any use that helps patients and the common good, and should block the use of radiology data and algorithms for financial gain when those two attributes are absent.


Author(s):
Yoko E. Fukumura
Julie McLaughlin Gray
Gale M. Lucas
Burcin Becerik-Gerber
Shawn C. Roll

Workplace environments have a significant impact on worker performance, health, and well-being. With machine learning capabilities, artificial intelligence (AI) can be developed to automate individualized adjustments to work environments (e.g., lighting, temperature) and to facilitate healthier worker behaviors (e.g., posture). Worker perspectives on incorporating AI into office workspaces are largely unexplored. Thus, the purpose of this study was to explore office workers’ views on including AI in their office workspace. Six focus group interviews with a total of 45 participants were conducted. Interview questions were designed to generate discussion on benefits, challenges, and pragmatic considerations for incorporating AI into office settings. Sessions were audio-recorded, transcribed, and analyzed using an iterative approach. Two primary constructs emerged. First, participants shared perspectives related to preferences and concerns regarding communication and interactions with the technology. Second, numerous conversations highlighted the dualistic nature of a system that collects large amounts of data; that is, the potential benefits for behavior change to improve health and the pitfalls of trust and privacy. Across both constructs, there was an overarching discussion related to the intersections of AI with the complexity of work performance. Numerous thoughts were shared relative to future AI solutions that could enhance the office workplace. This study’s findings indicate that the acceptability of AI in the workplace is complex and dependent upon the benefits outweighing the potential detriments. Office worker needs are complex and diverse, and AI systems should aim to accommodate individual needs.


2020
Vol 11 (1)
pp. 228-232
Author(s):
Adamantios Koumpis
Thomas Gees

Abstract: In this article, we present our experiences from research into the healthy ageing and well-being of older people, and we report our personal opinions on robots that may help the elderly to have sex and to cope with isolation and loneliness. However, while there is a growing industry for sex robots and other sex toys and gadgets, there is also growing concern about the ethics of such an industry. As is the case with pornography, the concept of sex robots may be criticized, yet it has deep roots in human civilization, with erotic depictions that date back to the Palaeolithic and Mesolithic Ages. So the need for an artefact offering sexually relevant functionality is not new at all. What might be new, and worrying, is the potential for using artificial intelligence in sex robots in ways that might cause a repositioning of our entire value system. Such a threat is not related to the proliferation of sex robots per se but to the use of robots in general and in a variety of other fields of application.


2021
Author(s):
Christopher Marshall
Kate Lanyi
Rhiannon Green
Georgie Wilkins
Fiona Pearson
...

BACKGROUND: There is increasing need to explore the value of soft-intelligence, leveraged using the latest artificial intelligence (AI) and natural language processing (NLP) techniques, as a source of analysed evidence to support public health research activity and decision-making.

OBJECTIVE: The aim of this study was to further explore the value of soft-intelligence analysed using AI through a case study that examined a large collection of UK tweets relating to mental health during the COVID-19 pandemic.

METHODS: A search strategy comprising a list of terms related to mental health, COVID-19, and lockdown restrictions was developed to prospectively collate relevant tweets via Twitter’s advanced search application programming interface over a 24-week period. We deployed a specialist NLP platform to explore tweet frequency and sentiment across the UK and to identify key topics of discussion. A series of keyword filters were used to clean the initial data retrieved and were also set up to track specific mental health problems. Qualitative document analysis was carried out to further explore and expand upon the results generated by the NLP platform. All collated tweets were anonymised.

RESULTS: We identified and analysed 286,902 tweets posted from UK user accounts from 23 July 2020 to 6 January 2021. The average sentiment score was 50%, suggesting overall neutral sentiment across all tweets over the study period. Major fluctuations in volume and sentiment appeared to coincide with key changes to local and/or national social-distancing measures. Tweets around mental health were polarising, discussed with both positive and negative sentiment. Key topics of consistent discussion over the study period included the impact of the pandemic on people’s mental health (both positively and negatively), fear and anxiety over lockdowns, and anger and mistrust toward the government.

CONCLUSIONS: Through the primary use of an AI-based NLP platform, we were able to rapidly mine and analyse emerging health-related insights from UK tweets into how the pandemic may be impacting people’s mental health and well-being. This type of real-time analysed evidence could act as a useful intelligence source that agencies, local leaders, and health care decision makers can potentially draw from, particularly during a health crisis.
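For illustration only, the short Python sketch below approximates the kind of processing the study describes: filtering collated tweets with keyword lists and expressing sentiment on a 0-100% scale where 50% is neutral. It is not the authors' pipeline (they used a specialist NLP platform), and the keyword lists, function names, and choice of the VADER sentiment analyser are assumptions made for this example.

# Minimal sketch, not the study's actual pipeline: keyword filtering plus a
# 0-100% sentiment scale (50% = neutral), assuming the vaderSentiment package.
from vaderSentiment.vaderSentiment import SentimentIntensityAnalyzer

# Hypothetical keyword filters, loosely based on the study's search themes.
MENTAL_HEALTH_TERMS = ["mental health", "anxiety", "depression", "wellbeing"]
PANDEMIC_TERMS = ["covid", "lockdown", "pandemic", "restrictions"]

def is_relevant(text: str) -> bool:
    # Keep tweets that mention both a mental-health term and a pandemic term.
    t = text.lower()
    return (any(k in t for k in MENTAL_HEALTH_TERMS)
            and any(k in t for k in PANDEMIC_TERMS))

def sentiment_percent(text: str, analyzer: SentimentIntensityAnalyzer) -> float:
    # Map VADER's compound score from [-1, 1] onto a 0-100% scale.
    compound = analyzer.polarity_scores(text)["compound"]
    return (compound + 1.0) * 50.0

def mean_sentiment(tweets: list) -> float:
    # Average sentiment (%) over the relevant subset; 50.0 if nothing matches.
    analyzer = SentimentIntensityAnalyzer()
    scores = [sentiment_percent(t, analyzer) for t in tweets if is_relevant(t)]
    return sum(scores) / len(scores) if scores else 50.0

if __name__ == "__main__":
    sample = [
        "Lockdown is really hurting my mental health, feeling anxious every day.",
        "Daily walks during the pandemic have done wonders for my wellbeing.",
    ]
    print(f"Mean sentiment: {mean_sentiment(sample):.1f}%")

A real deployment would also need tweet collection, cleaning, and anonymisation steps, which are omitted here.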


2017
Vol 24 (2)
pp. 239-257
Author(s):
David Brougham
Jarrod Haar

Abstract: Futurists predict that a third of jobs that exist today could be taken by Smart Technology, Artificial Intelligence, Robotics, and Algorithms (STARA) by 2025. However, very little is known about how employees perceive these technological advancements with regard to their own jobs and careers, and how they are preparing for these potential changes. A new measure (STARA awareness) was created for this study that captures the extent to which employees feel their job could be replaced by these types of technology. Because career progression and technology knowledge are associated with age, we also tested age as a moderator of STARA awareness. Using a mixed-methods approach on 120 employees, we tested STARA awareness against a range of job and well-being outcomes. Greater STARA awareness was negatively related to organisational commitment and career satisfaction, and positively related to turnover intentions, cynicism, and depression.
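The abstract does not report the model specification, but a moderation test of this kind is commonly run as a regression with an interaction term. The Python sketch below illustrates that general approach only; the column names, mean-centring choice, and use of statsmodels are assumptions, not details taken from the study.

# Illustrative sketch of a generic moderation analysis (not the authors' model):
# regress an outcome on mean-centred STARA awareness, age, and their interaction.
# A significant interaction term suggests age moderates the relationship.
import pandas as pd
import statsmodels.formula.api as smf

def test_moderation(df: pd.DataFrame, outcome: str):
    data = df.copy()
    data["stara_c"] = data["stara"] - data["stara"].mean()  # mean-centre predictor
    data["age_c"] = data["age"] - data["age"].mean()         # mean-centre moderator
    # 'a * b' in a formula expands to a + b + a:b (main effects plus interaction).
    model = smf.ols(f"{outcome} ~ stara_c * age_c", data=data).fit()
    return model

# Hypothetical usage with assumed column names:
# results = test_moderation(survey_df, "turnover_intentions")
# print(results.summary())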


AI Magazine
2017
Vol 37 (4)
pp. 83-88
Author(s):
Christopher Amato
Ofra Amir
Joanna Bryson
Barbara Grosz
Bipin Indurkhya
...

The Association for the Advancement of Artificial Intelligence, in cooperation with Stanford University's Department of Computer Science, presented the 2016 Spring Symposium Series on Monday through Wednesday, March 21-23, 2016, at Stanford University. The titles of the seven symposia were (1) AI and the Mitigation of Human Error: Anomalies, Team Metrics and Thermodynamics; (2) Challenges and Opportunities in Multiagent Learning for the Real World; (3) Enabling Computing Research in Socially Intelligent Human-Robot Interaction: A Community-Driven Modular Research Platform; (4) Ethical and Moral Considerations in Non-Human Agents; (5) Intelligent Systems for Supporting Distributed Human Teamwork; (6) Observational Studies through Social Media and Other Human-Generated Content; and (7) Well-Being Computing: AI Meets Health and Happiness Science.


2021
Author(s):
J. Eric T. Taylor
Graham Taylor

Artificial intelligence powered by deep neural networks has reached a level of complexity where it can be difficult or impossible to express how a model makes its decisions. This black-box problem is especially concerning when the model makes decisions with consequences for human well-being. In response, an emerging field called explainable artificial intelligence (XAI) aims to increase the interpretability, fairness, and transparency of machine learning. In this paper, we describe how cognitive psychologists can make contributions to XAI. The human mind is also a black box, and cognitive psychologists have over one hundred and fifty years of experience modeling it through experimentation. We ought to translate the methods and rigour of cognitive psychology to the study of artificial black boxes in the service of explainability. We provide a review of XAI for psychologists, arguing that current methods possess a blind spot that can be complemented by the experimental cognitive tradition. We also provide a framework for research in XAI, highlight exemplary cases of experimentation within XAI inspired by psychological science, and provide a tutorial on experimenting with machines. We end by noting the advantages of an experimental approach and invite other psychologists to conduct research in this exciting new field.


Author(s):
John L. Culliney
David Jones

The last chapter highlights a sagely person whose work in international and intercultural education has exemplified the principles discussed in this book. The yin and yang of American culture and modern religious expression, however, represent uncertainties for constructive societal progress. Reflecting on future chances of success for humanity and human selves, the chapter points to the need for sagely leadership to promote biospheric conservation, environmental sustainability, and social justice. With urgency, the same prescription will help us to navigate the chaotic edge between future promise and existential risk in new fields such as genetic engineering and artificial intelligence. The chapter concludes with a view of the choice we face at the present moment: an exercise in free will, unique in the history of life. Each human individual has the potential to contribute something worthy and personally satisfying to the future. Our choice is this: will we take the cooperative side of our evolutionary past to a new level and embrace the kind of nurturing philosophical wisdom that confirms our shared humanity? Or will we choose to reject that ancestral path in favor of accelerating self-aggrandizement, aggressive religion, and destructive tribal integrity that threatens societal and planetary well-being?


2022
pp. 1-21
Author(s):
Ethel N. Abe
Isaac Idowu Abe
Olalekan Adisa

Capitalist corporations seek ever-new opportunities for trade and gain. As competition intensifies within markets, profit-seeking corporations innovate and diversify their products in an unceasing pursuit of new market niches. The incessant changes and unpredictable nature of capitalism often lead to insecurity regarding job loss. Job insecurity has been empirically shown to have negative effects on individuals and organisations; it is associated with reduced job satisfaction and decreased mental health. A longitudinal Swedish study showed an indirect effect of trust on the job satisfaction and mental health of employees. The advent of AI, humanoids, robotics, and digitization gives employees reason to worry about the future of their work. A recent study by the McKinsey Global Institute reports that by 2030, at least 14% of employees globally could need to change careers as a result of the rapid pace of digitization, robotics, and advances in artificial intelligence disrupting the world of work.

