Human factors, artificial intelligence and autonomous cars: Perspectives for a complex implementation

Author(s):  
Sandor B. Pereira ◽  
Róber D. Botelho

The centuries-old, near-inseparable relationship between humans and the automobile faces a revolution as artificial intelligence gradually creates new paradigms for personal urban mobility. Still, are we prepared to relinquish control of our vehicles to autonomous systems? The main objective of this work is to elucidate the principal elements of the complex relationship between human factors and artificial intelligence in the development and establishment of autonomous vehicles. To that end, the paper adopts a basic, qualitative methodology with an exploratory objective, drawing on documentary and bibliographic technical procedures. Autonomous systems perform plausibly in controlled environments; even so, in an environment with many variables and an almost infinite number of possible combinations, failures occur and the structuring of a mental model, based on human factors and applicable to artificial intelligence, is compromised. This helps explain the little importance given to human factors in the planning of human/autonomous-machine interactions.

2020 ◽  
Vol ahead-of-print (ahead-of-print) ◽  
Author(s):  
Shweta Banerjee

Purpose: There are ethical, legal, social and economic arguments surrounding the subject of autonomous vehicles. This paper aims to discuss some of these arguments to communicate one of the current issues in the rising field of artificial intelligence.
Design/methodology/approach: Drawing on widely available literature that the author has read and summarised, and presenting her viewpoints, the author shows that technology is progressing every day. Artificial intelligence and machine learning are at the forefront of technological advancement today. The manufacture and innovation of new machines have revolutionised our lives and resulted in a world where we are becoming increasingly dependent on artificial intelligence.
Findings: Technology might appear to be getting out of hand, but it can be used effectively to transform lives and improve convenience.
Research limitations/implications: From robotics to autonomous vehicles, countless technologies have made, and will continue to make, the lives of individuals much easier. But with these advancements also comes something called "future shock".
Practical implications: Future shock is the state of being unable to keep up with rapid social or technological change. As a result, the topic of artificial intelligence, and thus autonomous cars, is highly debated.
Social implications: The study will be of interest to researchers, academics and the public in general. It will encourage further thinking.
Originality/value: This is an original piece of writing informed by reading several current pieces. The study has not been submitted elsewhere.


2016 ◽  
Vol 7 (2) ◽  
pp. 295-296
Author(s):  
Thomas Burri ◽  
Isabelle Wildhaber

This special issue assembles five articles ensuing from the conference "The Man and the Machine: When Systems Take Decisions Autonomously", held on June 26 and 27, 2015, at the University of St. Gallen in Switzerland. The aim of the conference was to explore the broader implications of artificial intelligence, machine learning, and autonomous robots and vehicles. Alphabet's DeepMind is just one example about whom we know, at least a little, and who, we are told, will be good. Autonomous vehicles are also about to enter the market, and our phones have begun to speak to us. Private drones are being regulated by the US Federal Aviation Administration. The five papers in this special issue address some of the legal issues this broader development raises. The first article, "The Implications of Modern Business-Entity Law for the Regulation of Autonomous Systems", is written by Shawn Bayern.


2021 ◽  
pp. 0739456X2098452
Author(s):  
Eva Kassens-Noor ◽  
Mark Wilson ◽  
Zeenat Kotval-Karamchandani ◽  
Meng Cai ◽  
Travis Decaminada

The arrival of autonomous systems powered by artificial intelligence (AI) offers new possibilities for life, yet the focus tends to be more on the technology than on the people it serves. Planners should consider the likely reception awaiting emerging intelligent systems. Using an online survey of 3,249 faculty, staff, and students at a major research university, we tested perceptions of autonomy, including domotics and autonomous vehicles. Respondents embrace the new technology, with variations in attitude associated with age, gender, and familiarity with new technology; however, people's openness to AI-enabled devices holds only if the devices remain a tool to support work rather than replace human-centered interactions.


Author(s):  
Thilo von Pape

This chapter discusses how autonomous vehicles (AVs) may interact with our evolving mobility system and what they mean for mobile communication research. It juxtaposes a conceptualization of AVs as manifestations of automation and artificial intelligence with an analysis of our mobility system as a historically grown hybrid of communication and transportation technologies. Since the emergence of the railroad and the telegraph, this system has evolved on two layers: an underlying infrastructure to power and coordinate the movements of objects, people, and ideas at industrially scaled speeds, volumes, and complexity, and an interface to seamlessly access and control that infrastructure. AVs are poised to further enhance the seamlessness that mobile phones and cars have already lent to mobility. But in assuming increasingly sophisticated control tasks, AVs also disrupt an established shift toward individual control, demanding new interfaces that enable higher levels of individual and collective control over the mobility infrastructure.


2020 ◽  
Vol 31 (3) ◽  
pp. 347-363
Author(s):  
Peter Waring ◽  
Azad Bali ◽  
Chris Vas

The race to develop and implement autonomous systems and artificial intelligence has challenged the responsiveness of governments in many areas and none more so than in the domain of labour market policy. This article draws upon a large survey of Singaporean employees and managers (N = 332) conducted in 2019 to examine the extent and ways in which artificial intelligence and autonomous technologies have begun impacting workplaces in Singapore. Our conclusions reiterate the need for government intervention to facilitate broad-based participation in the productivity benefits of fourth industrial revolution technologies while also offering re-designed social safety nets and employment protections. JEL Codes: J88, K31, O38, M53


2019 ◽  
Vol 10 (1) ◽  
Author(s):  
J. Raymond Geis ◽  
Adrian Brady ◽  
Carol C. Wu ◽  
Jack Spencer ◽  
Erik Ranschaert ◽  
...  

Abstract This is a condensed summary of an international multisociety statement on ethics of artificial intelligence (AI) in radiology produced by the ACR, European Society of Radiology, RSNA, Society for Imaging Informatics in Medicine, European Society of Medical Imaging Informatics, Canadian Association of Radiologists, and American Association of Physicists in Medicine. AI has great potential to increase efficiency and accuracy throughout radiology, but also carries inherent pitfalls and biases. Widespread use of AI-based intelligent and autonomous systems in radiology can increase the risk of systemic errors with high consequence, and highlights complex ethical and societal issues. Currently, there is little experience using AI for patient care in diverse clinical settings. Extensive research is needed to understand how to best deploy AI in clinical practice. This statement highlights our consensus that ethical use of AI in radiology should promote well-being, minimize harm, and ensure that the benefits and harms are distributed among stakeholders in a just manner. We believe AI should respect human rights and freedoms, including dignity and privacy. It should be designed for maximum transparency and dependability. Ultimate responsibility and accountability for AI remains with its human designers and operators for the foreseeable future. The radiology community should start now to develop codes of ethics and practice for AI which promote any use that helps patients and the common good and should block use of radiology data and algorithms for financial gain without those two attributes.


Electronics ◽  
2021 ◽  
Vol 10 (10) ◽  
pp. 1220
Author(s):  
Chee Wei Lee ◽  
Stuart Madnick

Urban mobility is in the midst of a revolution, driven by the convergence of technologies such as artificial intelligence, on-demand ride services, and Internet-connected and self-driving vehicles. Technological advancements often lead to new hazards. Coupled with the increased levels of automation and connectivity in the new generation of autonomous vehicles, cybersecurity is emerging as a key threat affecting these vehicles. Traditional hazard analysis methods treat safety and security in isolation and are limited in their ability to account for interactions among organizational, sociotechnical, human, and technical components. In response to these challenges, the cybersafety method, based on System-Theoretic Process Analysis (STPA and STPA-Sec), was developed to meet the growing need to holistically analyze complex sociotechnical systems. We applied cybersafety to jointly analyze safety and security hazards and to identify mitigation requirements. The results were compared with another promising method known as Combined Harm Analysis of Safety and Security for Information Systems (CHASSIS). Both methods were applied to Mobility-as-a-Service (MaaS) and Internet of Vehicles (IoV) use cases, focusing on the over-the-air software update feature. Overall, cybersafety identified additional hazards and more effective requirements compared to CHASSIS. In particular, cybersafety demonstrated the ability to identify hazards arising from unsafe/unsecure interactions among sociotechnical components. This research also suggested using CHASSIS methods for information lifecycle analysis to complement cybersafety and generate additional considerations. Finally, results from both methods were backtested against a past cyberattack on a vehicular system, and we found that the recommendations from cybersafety would likely have mitigated the risks of the incident.
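To make the idea of hazards arising from unsafe control interactions more concrete, the sketch below (not taken from the paper) enumerates STPA-style unsafe control actions for a hypothetical over-the-air update controller; the controller name, guide words, and contexts are illustrative assumptions, not the authors' cybersafety templates.

```python
from dataclasses import dataclass
from typing import List

# Hypothetical, simplified illustration of one STPA-style analysis step:
# enumerating unsafe control actions (UCAs) for an over-the-air (OTA)
# update controller. All names below are assumptions for this sketch.

GUIDE_WORDS = [
    "not provided when required",
    "provided when unsafe",
    "provided too early or too late",
    "stopped too soon or applied too long",
]

@dataclass
class UnsafeControlAction:
    controller: str      # e.g., the OTA update server
    control_action: str  # e.g., "push firmware update"
    guide_word: str      # how the action becomes hazardous
    context: str         # system state that makes it hazardous

def enumerate_ucas(controller: str, action: str, contexts: List[str]) -> List[UnsafeControlAction]:
    """Cross each control action with guide words and hazardous contexts."""
    return [
        UnsafeControlAction(controller, action, gw, ctx)
        for gw in GUIDE_WORDS
        for ctx in contexts
    ]

if __name__ == "__main__":
    ucas = enumerate_ucas(
        controller="OTA update server",
        action="push firmware update",
        contexts=["vehicle in motion", "update package unsigned", "network link spoofed"],
    )
    for uca in ucas:
        print(f"{uca.controller}: '{uca.control_action}' {uca.guide_word} while {uca.context}")
```

Each generated combination would then be reviewed by analysts and either discarded or traced to mitigation requirements, which is where the sociotechnical judgment the paper emphasizes comes in.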


2020 ◽  
Vol 29 (4) ◽  
pp. 436-451
Author(s):  
Yilang Peng

Applications of artificial intelligence such as self-driving cars may profoundly transform our society, yet emerging technologies are frequently met with suspicion or even hostility. Meanwhile, public opinions about scientific issues are increasingly polarized along ideological lines. By analyzing a nationally representative panel in the United States, we reveal an emerging ideological divide in public reactions to self-driving cars. Compared with liberals and Democrats, conservatives and Republicans express more concern about autonomous vehicles and more support for restrictively regulating them. This ideological gap is largely driven by social conservatism. Moreover, both familiarity with driverless vehicles and scientific literacy reduce respondents' concerns about driverless vehicles and their support for regulation policies. Still, the effects of familiarity and scientific literacy are weaker among social conservatives, indicating that people may assimilate new information in a biased manner that reinforces their worldviews.
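To illustrate the kind of moderation the abstract describes, the following minimal sketch (not the authors' analysis) fits a regression with interaction terms on simulated data; the variable names and effect sizes are assumptions for illustration. A weaker concern-reducing effect of familiarity or literacy among social conservatives would show up as positive interaction coefficients.

```python
# Minimal sketch, assuming hypothetical variable names; not the study's model or data.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 1000
df = pd.DataFrame({
    "social_conservatism": rng.normal(size=n),
    "familiarity": rng.normal(size=n),
    "sci_literacy": rng.normal(size=n),
})

# Simulated outcome: familiarity and literacy lower concern, but less so
# for socially conservative respondents (positive interaction terms).
df["concern"] = (
    0.5 * df.social_conservatism
    - 0.4 * df.familiarity
    - 0.3 * df.sci_literacy
    + 0.2 * df.social_conservatism * df.familiarity
    + 0.15 * df.social_conservatism * df.sci_literacy
    + rng.normal(scale=1.0, size=n)
)

# OLS with interaction terms; the interaction coefficients capture how the
# effects of familiarity and literacy vary with social conservatism.
model = smf.ols(
    "concern ~ social_conservatism * familiarity + social_conservatism * sci_literacy",
    data=df,
).fit()
print(model.summary().tables[1])
```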


2021 ◽  
Vol 73 (01) ◽  
pp. 12-13
Author(s):  
Manas Pathak ◽  
Tonya Cosby ◽  
Robert K. Perrons

Artificial intelligence (AI) has captivated the imagination of science-fiction movie audiences for many years and has been used in the upstream oil and gas industry for more than a decade (Mohaghegh 2005, 2011). But few industries evolve more quickly than those from Silicon Valley, and it accordingly follows that the technology has grown and changed considerably since this discussion began. The oil and gas industry, therefore, is at a point where it would be prudent to take stock of what has been achieved with AI in the sector, to provide a sober assessment of what has delivered value and what has not among the myriad implementations made so far, and to figure out how best to leverage this technology in the future in light of these learnings.

When one looks at the long arc of AI in the oil and gas industry, a few important truths emerge. First among these is the fact that not all AI is the same. There is a spectrum of technological sophistication. Hollywood and the media have always been fascinated by the idea of artificial superintelligence and general intelligence systems capable of mimicking the actions and behaviors of real people. Those kinds of systems would have the ability to learn, perceive, understand, and function in human-like ways (Joshi 2019). As alluring as these types of AI are, however, they bear little resemblance to what actually has been delivered to the upstream industry. Instead, we mostly have seen much less ambitious "narrow AI" applications that very capably handle a specific task, such as quickly digesting thousands of pages of historical reports (Kimbleton and Matson 2018), detecting potential failures in progressive cavity pumps (Jacobs 2018), predicting oil and gas exports (Windarto et al. 2017), offering improvements for reservoir models (Mohaghegh 2011), or estimating oil-recovery factors (Mahmoud et al. 2019). But let's face it: As impressive and commendable as these applications have been, they fall far short of the ambitious vision of highly autonomous systems that are capable of thinking about things outside of the narrow range of tasks explicitly handed to them. What is more, many of these narrow AI applications have tended to be modified versions of fairly generic solutions that were originally designed for other industries and that were then usefully extended to the oil and gas industry with a modest amount of tailoring. In other words, relatively little AI has been occurring in a way that had the oil and gas sector in mind from the outset.

The second important truth is that human judgment still matters. What some technology vendors have referred to as "augmented intelligence" (Kimbleton and Matson 2018), whereby AI supplements human judgment rather than supplants it, is not merely an alternative way of approaching AI; rather, it is coming into focus that this is probably the most sensible way forward for this technology.

