Humor, Ethics, and Dignity: Being Human in the Age of Artificial Intelligence

2019 ◽  
Vol 33 (1) ◽  
pp. 3-12 ◽  
Author(s):  
Sean Kanuck

Abstract The growing adoption of artificial intelligence (AI) raises questions about what comparative advantage, if any, human beings will have over machines in the future. This essay explores what it means to be human and how those unique characteristics relate to the digital age. Humor and ethics both rely upon higher-level cognition that accounts for unstructured and unrelated data. That capability is also vital to decision-making processes—such as jurisprudence and voting systems. Since machine learning algorithms lack the ability to understand context or nuance, reliance on them could lead to undesired results for society. By way of example, two case studies are used to illustrate the legal and moral considerations regarding the software algorithms used by driverless cars and lethal autonomous weapons systems. Social values must be encoded or introduced into training data sets if AI applications are to be expected to produce results similar to a “human in the loop.” There is a choice to be made, then, about whether we impose limitations on these new technologies in favor of maintaining human control, or whether we seek to replicate ethical reasoning and lateral thinking in the systems we create. The answer will have profound effects not only on how we interact with AI but also on how we interact with one another and perceive ourselves.

Author(s):  
Sotiris Kotsiantis ◽  
Dimitris Kanellopoulos ◽  
Panayotis Pintelas

In classification learning, the learning scheme is presented with a set of classified examples from which it is expected to learn a way of classifying unseen examples (see Table 1). Formally, the problem can be stated as follows: given training data {(x1, y1), …, (xn, yn)}, produce a classifier h: X → Y that maps an object x ∈ X to its classification label y ∈ Y. A large number of classification techniques have been developed based on artificial intelligence (logic-based techniques, perceptron-based techniques) and statistics (Bayesian networks, instance-based techniques). No single learning algorithm can uniformly outperform other algorithms over all data sets. The concept of combining classifiers is proposed as a new direction for the improvement of the performance of individual machine learning algorithms. Numerous methods have been suggested for the creation of ensembles of classifiers (Dietterich, 2000). Although, or perhaps because, many methods of ensemble creation have been proposed, there is as yet no clear picture of which method is best.
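As a concrete illustration of combining classifiers, the following sketch trains three base learners from different families and compares their cross-validated accuracy with that of a simple voting ensemble; the data set and learner choices are illustrative assumptions, not ones prescribed by the text.

```python
# A minimal ensemble sketch: combine heterogeneous base classifiers by voting.
from sklearn.datasets import load_iris
from sklearn.ensemble import VotingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.naive_bayes import GaussianNB
from sklearn.tree import DecisionTreeClassifier

X, y = load_iris(return_X_y=True)  # training data {(x1, y1), ..., (xn, yn)}

# Base learners from different families: logic-based, statistical (Bayesian), linear.
base_learners = [
    ("tree", DecisionTreeClassifier(max_depth=3, random_state=0)),
    ("nb", GaussianNB()),
    ("logreg", LogisticRegression(max_iter=1000)),
]
ensemble = VotingClassifier(estimators=base_learners, voting="soft")

for name, clf in base_learners + [("ensemble", ensemble)]:
    scores = cross_val_score(clf, X, y, cv=5)
    print(f"{name}: mean accuracy {scores.mean():.3f}")
```

Whether the ensemble actually outperforms its members depends on the data set and on how the base learners' errors correlate, which is precisely why no single combination method is best in general.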


Diagnostics ◽  
2021 ◽  
Vol 11 (2) ◽  
pp. 354
Author(s):  
Octavian Sabin Tătaru ◽  
Mihai Dorin Vartolomei ◽  
Jens J. Rassweiler ◽  
Oșan Virgil ◽  
Giuseppe Lucarelli ◽  
...  

Artificial intelligence (AI) is the field of computer science that aims to build smart devices performing tasks that currently require human intelligence. Through machine learning (ML) and deep learning (DL), computers are taught to learn by example, something that human beings do naturally. AI is revolutionizing healthcare. Digital pathology is increasingly assisted by AI, helping researchers analyze larger data sets and provide faster and more accurate diagnoses of prostate cancer lesions. When applied to diagnostic imaging, AI has shown excellent accuracy in the detection of prostate lesions as well as in the prediction of patient outcomes in terms of survival and treatment response. The enormous quantity of data coming from the prostate tumor genome requires fast, reliable and accurate computing power provided by machine learning algorithms. Radiotherapy is an essential part of the treatment of prostate cancer, and it is often difficult to predict its toxicity for patients. Artificial intelligence could play a future role in predicting how a patient will react to therapy side effects. These technologies could provide doctors with better insights into how to plan radiotherapy treatment. Extending the capabilities of surgical robots toward more autonomous tasks will allow them to use information from the surgical field, recognize issues, and take the proper actions without the need for human intervention.


Filling a vacancy takes a lot of (costly) time. Automated preprocessing of applications using artificial intelligence technology can help to save time, e.g., by analyzing applications with machine learning algorithms. We investigate whether such systems are potentially biased in terms of gender, origin, and nobility. Using a corpus of common German reference letter sentences, we investigate two research questions. First, we test sentiment analysis systems offered by Amazon, Google, IBM and Microsoft. All tested services rate the sentiment of the same template sentences very inconsistently and show bias at least with regard to gender. Second, we examine the impact of (im-)balanced training data sets on classifiers that are trained to estimate the sentiment of sentences from our corpus. This experiment shows that imbalanced data can, on the one hand, lead to biased results but, on the other hand, can under certain conditions lead to fair results.
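A sketch of the kind of template probe described above is shown below; the tiny word-list scorer is only a stand-in for the commercial sentiment APIs (Amazon, Google, IBM, Microsoft) that the study actually queried, and the example sentences are invented for illustration.

```python
# Probe a sentiment scorer with gender-swapped variants of the same sentence.
POSITIVE = {"great", "excellent", "reliable"}
NEGATIVE = {"poor", "careless", "unreliable"}


def score_sentiment(sentence: str) -> float:
    """Stand-in scorer: replace with a call to the sentiment service under test."""
    words = [w.strip(".,").lower() for w in sentence.split()]
    return float(sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words))


TEMPLATE = "{name} completed the assigned tasks with great care."
VARIANTS = {"female": "Ms. Mueller", "male": "Mr. Mueller"}

scores = {g: score_sentiment(TEMPLATE.format(name=n)) for g, n in VARIANTS.items()}
# Sentences identical up to the gendered name should receive identical scores;
# a systematic gap across many templates indicates gender bias in the scorer.
print(scores, "gap:", scores["female"] - scores["male"])
```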


Author(s):  
Igor I. Kartashov ◽  
Ivan I. Kartashov

For millennia, mankind has dreamed of creating an artificial creature capable of thinking and acting “like human beings”. These dreams are gradually starting to come true. The trends in the development of modern society, given its increasing level of informatization, require the use of new technologies for information processing and assistance in decision-making. Expanding the boundaries of the use of artificial intelligence not only requires the establishment of ethical restrictions but also gives rise to the need to promptly resolve legal problems, including criminal and procedural ones. This is primarily due to the emergence and spread of legal expert systems that predict the decision in a particular case based on a variety of parameters. Based on a comprehensive study, we formulate a definition of artificial intelligence suitable for use in law. We propose to understand artificial intelligence as systems capable of interpreting received data and making optimal decisions on that basis using self-learning (adaptation). The main directions of using artificial intelligence in criminal proceedings are: search and generalization of judicial practice; legal advice; preparation of formalized documents or statistical reports; forecasting court decisions; and predictive jurisprudence. Despite the promise of artificial intelligence, there are a number of problems associated with a low level of reliability in predicting rare events, self-excitation of the system, opacity of the algorithms and architecture used, etc.


2021 ◽  
Author(s):  
Ying Hou ◽  
Yi-Hong Zhang ◽  
Jie Bao ◽  
Mei-Ling Bao ◽  
Guang Yang ◽  
...  

Abstract Purpose: Balancing the preservation of urinary continence against the achievement of negative surgical margins is clinically relevant but difficult to implement. Accurate preoperative detection of extracapsular extension (ECE) of prostate cancer (PCa) is thus crucial for determining appropriate treatment options. We aimed to develop and clinically validate an artificial intelligence (AI)-assisted tool for the detection of ECE in patients with PCa using multiparametric MRI. Methods: 849 patients with localized PCa who underwent multiparametric MRI before radical prostatectomy were retrospectively included from two medical centers. The AI tool was built on a ResNeXt network embedded with a spatial attention map of experts’ prior knowledge (PAGNet) and trained on 596 data sets. The tool was validated on 150 internal and 103 external data sets, and its clinical applicability was compared with expert-based interpretation and AI-expert interaction. Results: An index PAGNet model using a single-slice image yielded the highest areas under the receiver operating characteristic curve (AUC) of 0.857 (95% confidence interval [CI], 0.827-0.884), 0.807 (95% CI, 0.735-0.867) and 0.728 (95% CI, 0.631-0.811) in the training, internal test and external test cohorts, compared with conventional ResNeXt networks. For the experts, inter-reader agreement was observed in only 437/849 (51.5%) patients, with a Kappa value of 0.343. The performance of the two experts (AUC, 0.632 to 0.741 vs 0.715 to 0.857) was lower than that of the AI assessment (paired comparison, all p values < 0.05). When the experts’ interpretations were adjusted by the AI assessments, the performance of both experts improved. Conclusion: Our AI tool, showing improved accuracy, offers a promising alternative to human experts for imaging staging of PCa ECE using multiparametric MRI.
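The abstract does not describe the PAGNet architecture in detail; the PyTorch sketch below only illustrates the general idea of gating ResNeXt feature maps with an expert-supplied spatial prior, and should not be read as the authors' implementation.

```python
# Illustrative sketch: weight ResNeXt feature maps with an expert prior map.
import torch
import torch.nn as nn
from torchvision.models import resnext50_32x4d


class AttentionGatedResNeXt(nn.Module):
    def __init__(self, num_classes: int = 2):
        super().__init__()
        backbone = resnext50_32x4d()  # randomly initialised backbone
        # Keep the convolutional stages, drop the global pooling and fc head.
        self.features = nn.Sequential(*list(backbone.children())[:-2])
        self.pool = nn.AdaptiveAvgPool2d(1)
        self.fc = nn.Linear(backbone.fc.in_features, num_classes)

    def forward(self, image: torch.Tensor, prior_map: torch.Tensor) -> torch.Tensor:
        # image: (B, 3, H, W); prior_map: (B, 1, H, W), expert prior in [0, 1]
        feats = self.features(image)                      # (B, C, h, w)
        prior = nn.functional.interpolate(
            prior_map, size=feats.shape[-2:], mode="bilinear", align_corners=False
        )
        gated = feats * (1.0 + prior)                     # emphasise prior regions
        return self.fc(self.pool(gated).flatten(1))       # class logits (ECE vs no ECE)


model = AttentionGatedResNeXt()
logits = model(torch.randn(2, 3, 224, 224), torch.rand(2, 1, 224, 224))
print(logits.shape)  # torch.Size([2, 2])
```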


2019 ◽  
Author(s):  
Michael Rowe

About 200 years ago the invention of the steam engine triggered a wave of unprecedented development and growth in human social and economic systems, whereby human labour was either augmented or completely supplanted by machines. The recent emergence of artificially intelligent machines has seen human cognitive capacity enhanced by computational agents that are able to recognise previously hidden patterns within massive data sets. The characteristics of this technological advance are already influencing all aspects of society, creating the conditions for disruption to our social, economic, educational, health, legal and moral systems, a disruption that may have a more significant impact on human progress than the steam engine did. As this emerging technology becomes increasingly embedded within devices and systems, the fundamental nature of clinical practice will evolve, resulting in a healthcare system that may require concomitant changes to health professions education. Clinicians in the near future will find themselves working with information networks on a scale well beyond the capacity of human beings to grasp, thereby necessitating the use of artificial intelligence (AI) to analyse and interpret the complex interactions of data, patients and the newly-constituted care teams that will emerge. This paper describes some of the possible influences of AI-based technologies on physiotherapy practice, and the subsequent ways in which physiotherapy education will need to change in order to graduate professionals who are fit for practice in a 21st-century health system.


Author(s):  
Fernando Enrique Lopez Martinez ◽  
Edward Rolando Núñez-Valdez

IoT, big data, and artificial intelligence are currently three of the most relevant and trending pieces for innovation and predictive analysis in healthcare. Many healthcare organizations are already working on developing their own home-centric data collection networks and intelligent big data analytics systems based on machine-learning principles. The benefit of using IoT, big data, and artificial intelligence for community and population health is better health outcomes for those populations and communities. The new generation of machine-learning algorithms can use large standardized data sets generated in healthcare to improve the effectiveness of public health interventions. Much of these data come from sensors, devices, electronic health records (EHR), data generated by public health nurses, mobile data, social media, and the internet. This chapter presents a high-level implementation of a complete IoT, big data, and machine learning solution deployed in the city of Cartagena, Colombia, for hypertensive patients, using an eHealth sensor and Amazon Web Services components.
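As a rough illustration of the ingestion step in such an architecture, a sensor reading might be pushed to an AWS Kinesis stream with boto3 as sketched below; the stream name, field names and region are assumptions made for the example, not details taken from the chapter.

```python
# Push one eHealth sensor reading to a Kinesis stream for downstream analytics.
import json
import time

import boto3

kinesis = boto3.client("kinesis", region_name="us-east-1")  # assumed region

reading = {
    "patient_id": "patient-042",         # illustrative identifier
    "systolic_mmHg": 148,
    "diastolic_mmHg": 95,
    "timestamp": time.time(),
}

kinesis.put_record(
    StreamName="hypertension-readings",  # assumed stream name
    Data=json.dumps(reading).encode("utf-8"),
    PartitionKey=reading["patient_id"],
)
```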


Author(s):  
Ronald M. Baecker

There have been several challenges to our view of our position and purpose as human beings. The scientist Charles Darwin’s research demonstrated evolutionary links between man and other animals. Psychoanalysis founder Sigmund Freud illuminated the power of the subconscious. Recent advances in artificial intelligence (AI) have challenged our identity as the species with the greatest ability to think. Whether machines can now ‘think’ is no longer interesting. What is important is to critically consider the degree to which they are called upon to make decisions and act in significant and often life-critical situations. We have already discussed the increasing roles of AI in intelligent tutoring, medicine, news stories and fake news, autonomous weapons, smart cars, and automation. Chapter 11 focuses on other ways in which our lives are changing because of advances in AI, and the accompanying opportunities and risks. AI has seen a paradigm shift since the year 2000. Prior to this, the focus was on knowledge representation and the modelling of human expertise in particular domains, in order to develop expert systems that could solve problems and carry out rudimentary tasks. Now, the focus is on neural networks capable of machine learning (ML). The most successful approach is deep learning, whereby complex hierarchical assemblies of processing elements ‘learn’ using millions of samples of training data. They can then often make correct decisions in new situations. We shall also present a radical, and for most of us a scary, concept of AI with no limits—the technological singularity or superintelligence. Even though superintelligence is for now science fiction, humanity is asking if there is any limit to machine intelligence. We shall therefore discuss the social and ethical consequences of widespread use of ML algorithms. It is helpful in this analysis to better understand what intelligence is, so we present two insightful formulations of the concept developed by renowned psychologists.


2020 ◽  
Vol 53 (8) ◽  
pp. 5747-5788
Author(s):  
Julian Hatwell ◽  
Mohamed Medhat Gaber ◽  
R. Muhammad Atif Azad

Abstract Modern machine learning methods typically produce “black box” models that are opaque to interpretation. Yet, demand for them has been increasing in human-in-the-loop processes, that is, processes that require a human agent to verify, approve or reason about automated decisions before they can be applied. To facilitate this interpretation, we propose Collection of High Importance Random Path Snippets (CHIRPS), a novel algorithm for explaining random forest classification per data instance. CHIRPS extracts a decision path from each tree in the forest that contributes to the majority classification, and then uses frequent pattern mining to identify the most commonly occurring split conditions. A simple, conjunctive-form rule is then constructed, whose antecedent terms are derived from the attributes that had the most influence on the classification. This rule is returned alongside estimates of its precision and coverage on the training data, along with counterfactual details. An experimental study involving nine data sets shows that classification rules returned by CHIRPS have a precision at least as high as the state of the art when evaluated on unseen data (0.91–0.99) and offer a much greater coverage (0.04–0.54). Furthermore, CHIRPS uniquely controls against under- and over-fitting solutions by maximising novel objective functions that are better suited to the local (per instance) explanation setting.
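To make the per-instance idea concrete, the sketch below collects, for a single instance, the split conditions used by the trees that vote for the majority class and counts how often each occurs; it is a simplified scikit-learn approximation (plain counting instead of frequent pattern mining), not the authors' CHIRPS code.

```python
# Count the split conditions used by majority-voting trees for one instance.
from collections import Counter

from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier

X, y = load_breast_cancer(return_X_y=True)
forest = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

instance = X[:1]                                   # explain a single data instance
majority_idx = list(forest.classes_).index(forest.predict(instance)[0])

condition_counts = Counter()
for tree in forest.estimators_:
    # Keep only trees that contribute to the majority classification.
    if tree.predict_proba(instance)[0].argmax() != majority_idx:
        continue
    t = tree.tree_
    for node in tree.decision_path(instance).indices:
        if t.children_left[node] == -1:            # leaf node: no split condition
            continue
        feature, threshold = t.feature[node], float(t.threshold[node])
        op = "<=" if instance[0, feature] <= threshold else ">"
        condition_counts[(feature, op, round(threshold, 3))] += 1

# The most frequently occurring conditions are candidate antecedent terms
# for a simple conjunctive explanation rule.
for (feature, op, threshold), count in condition_counts.most_common(5):
    print(f"feature_{feature} {op} {threshold}: appears in {count} trees")
```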


Author(s):  
Jie Yuan ◽  
Zhenlong Wu ◽  
Shumin Fei ◽  
YangQuan Chen

Abstract As driverless vehicles become more and more popular with the development of artificial intelligence, human beings will gradually be freed from driving. However, unexpected oscillations may occur when humans want to drive themselves, owing to unfamiliarity with the vehicle configuration, even if the vehicle itself is stable. These driver-induced oscillations are similar to the pilot-induced oscillations (PIO) that are generally related to actuator rate limits in aircraft systems. Thus, this study briefly reviews the PIO issue and provides guidance for solving the potential human-in-the-loop unmanned driving challenge associated with the rate limit effect.
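To illustrate the rate-limit mechanism behind such oscillations, the simulation sketch below compares a rate-limited and an effectively unlimited actuator in a simple proportional steering loop; the plant, gain and limit values are illustrative only, not a model taken from the paper.

```python
# Simulate a proportional "driver" steering an integrating heading model
# through a rate-limited actuator.
import numpy as np


def simulate(rate_limit: float) -> np.ndarray:
    dt, steps = 0.01, 3000
    kp = 2.0                   # aggressive driver gain
    y, u, r = 0.0, 0.0, 1.0    # heading, actuator position, commanded heading
    trace = []
    for _ in range(steps):
        command = kp * (r - y)
        # The actuator can slew only a bounded amount per step (the rate limit).
        u += np.clip(command - u, -rate_limit * dt, rate_limit * dt)
        y += dt * u            # simple heading model: d(heading)/dt = actuator position
        trace.append(y)
    return np.array(trace)


unlimited = simulate(rate_limit=1e6)   # effectively no rate limit
limited = simulate(rate_limit=0.5)     # severe rate limit

# The rate limit adds phase lag, so the limited response overshoots and
# oscillates before settling, while the unlimited response is smooth.
print(f"peak heading without limit: {unlimited.max():.2f}, with limit: {limited.max():.2f}")
```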

