Epilogue

2018 ◽  
pp. 261-264
Author(s):  
Ingmar Weber

Changes in the global digital landscape over the past decade or so have transformed many aspects of society, including how people communicate, socialize, and organize. These transformations have also reconfigured how companies conduct their businesses and altered how states think about security and interact with their citizens. Looking into the future, there is good reason to believe that nascent technologies such as augmented reality will continue to change how people connect, blurring the lines between our online and offline worlds. Recent breakthroughs in the field of artificial intelligence will also have a profound impact on many aspects of our lives, ranging from the mundane (chatbots as convenient, always-available customer support) to the disruptive (replacing medical doctors with automated diagnosis tools)…

Author(s):  
Mahesh K. Joshi ◽  
J.R. Klein

The world of work has been impacted by technology. Work is different from what it was in the past due to digital innovation. Labor market opportunities are becoming polarized between high-end and low-end skilled jobs. Migration and its effects on employment have become a sensitive political issue. From Buffalo to Beijing, public debates are raging about the future of work. Developments like artificial intelligence and machine intelligence are contributing to productivity, efficiency, safety, and convenience, but they are also having an impact on jobs, skills, wages, and the nature of work. The “undiscovered country” of the workplace today is the combination of the changing landscape of work itself and the ill-fitting tools, platforms, and knowledge available for training in the requirements, skills, and structure of this new age.


Author(s):  
Zeenat S. AlKassim ◽  
Nader Mohamed

In this chapter, the authors discuss a unique technology known as Sixth Sense Technology, highlighting the future opportunities it offers for integrating the digital world with the real world. Challenges in implementing such technologies are also discussed, along with a review of the different possible implementation approaches. This review is performed by exploring inventions in areas related to Sixth Sense Technology, namely augmented reality (AR), computer vision, image processing, gesture recognition, and artificial intelligence, and then categorizing and comparing them. Lastly, recommendations are offered for improving a technology that has the potential to create a new trend in human-computer interaction (HCI) in the coming years.


Author(s):  
Heather Dyke

Perhaps the most important dispute in the metaphysics of time is over the passage of time. There are two basic metaphysical theories of time in this dispute. There is the A-theory of time, according to which the common sense distinction between the past, present and future reflects a real ontological distinction, and time is dynamic: what was future is now present and will be past. Then there is the B-theory of time, according to which there is no ontological distinction between past, present and future. The fact that we draw this distinction in ordinary life is a reflection of our perspective on temporal reality, rather than a reflection of the nature of time itself. A corollary of denying that there is a distinction between past, present and future is that time is not dynamic in the way just described. The A-theory is also variously referred to as the tensed theory, or the dynamic theory of time. The B-theory is also referred to as the tenseless theory, or the static, or block universe theory of time.

The A-theory comes in various forms, which take differing positions on the ontological status granted to the past, present and future. According to some versions, events in the past, present and future are all real, but what distinguishes them is their possession of the property of pastness, presentness or futurity. A variation of this view is that events are less real the more distantly past or future they are. Others hold that only the past and present are real; the future has yet to come into existence. Still others, presentists, hold that only the present is real. Events in the past did exist, but exist no longer, and events in the future will exist, but do not yet exist. According to the B-theory, all events, no matter when they occur, are equally real. The temporal location of an event has no effect on its ontological status, just as the spatial location of an event has no effect on its ontological status, although this analogy is controversial.

The A-theory has a greater claim to being the theory that reflects the common sense view about time. Consequently, the burden of proof is often thought to be on the B-theorist. If we are to give up the theory of time most closely aligned with common sense, it is argued, there must be overwhelming reasons for doing so. However, the A-theory is not without its problems. McTaggart put forward an argument that an objective passage of time would be incoherent, so any theory that requires one cannot be true. The A-theory also appears to be, prima facie, inconsistent with the special theory of relativity, a well-confirmed scientific theory. Although the B-theory is less in line with common sense than the A-theory, it is more in line with scientific thinking about time. According to the special theory of relativity, time is but one dimension of a four-dimensional entity called spacetime. The B-theory sees time as very similar to space, so it naturally lends itself to this view. However, it faces the problem of reconciling itself with our ordinary experience of time.

Because the two theories about time are mutually exclusive, and are also thought to exhaust the possible range of metaphysical theories of time, arguments in favour of one theory often take the form of arguments against the other theory. If there is a good reason for thinking that the A-theory of time is false, then that is equally a good reason for thinking that the B-theory of time is true, and vice versa.


2021 ◽  
Vol 8 (4) ◽  
pp. 1
Author(s):  
Saman Tauqir

Since the birth of science, the most fascinating structure of the human body has been the human brain. Over the past centuries, researchers have been developing technologies to imitate and explore how the human brain functions, yet building a machine that thinks like a human brain remains a dream. Aristotle’s early efforts to formalize logical thinking through his syllogisms (a three-part form of deductive reasoning) were a source of inspiration for modern computers and technologies [1]. During the Second World War, Alan Turing designed machines to decode encrypted messages, an early breakthrough in computing; in 1950 he proposed the “Turing Test” to assess whether a computer could exhibit intelligence, better known today as “artificial intelligence” (AI) [2]. AI is “a field of science and engineering concerned with the computational understanding of what is commonly called intelligent behavior, and with the creation of artifacts that exhibit such behaviour” [3]. Since 1980, AI has come a long way. Virtual reality is now used in dental education to create real-life situations and to move clinical training onto simulators, eliminating the risk factors associated with training on live patients. Recently, artificial intelligence has been integrated with tutoring systems such as the Unified Medical Language System (UMLS), which has improved the quality of the feedback that preclinical virtual patients provide to students [4,5]. This interactive phase helps students evaluate their clinical skills and compare them with a defined standard, creating an ideal, high-quality training environment. Studies of the efficacy of AI systems have indicated that preclinical students build higher competencies with them than with traditional simulator units [6-8]. AI-powered virtual dental assistants are already on the market; they can execute various chairside tasks with greater accuracy and less manpower, ensuring minimal error during procedures. In implantology and maxillofacial surgery, AI helps plan and prepare operations down to the smallest details before the actual surgery. Some exceptional uses of AI include robotic surgery in the field of maxillofacial surgery and bioprinting, in which tissues and organs can be reconstructed in thin layers [9]. The field of AI has flourished to a great extent in the past decade, and AI systems are an aid to dentistry and dental education. This narrative attempts to outline possible future AI-based applications in dental diagnosis, treatment planning, image analysis, and record keeping. AI-based technologies streamline routine tasks and reduce the laborious workload they impose, make dental procedures possible at lower cost, and ultimately enable predictive, preventive, and participatory dentistry. The safe use of AI in dental procedures must be ensured, and its application should be paired with human oversight and evidence-based dentistry. Dental education needs to be introduced to clinical AI solutions by promoting digital literacy in the future dental liveware.


Author(s):  
James A. Anderson

Hand axes, language, and computers are tools that increase our ability to deal with the world. Computing is a cognitive tool and comes in several kinds: digital, analog, and brain-like. An analog telephone connects two telephones with a wire; talking causes a current to flow on the wire. In a digital telephone, the voltage is converted into groups of ones and zeros and sent at high speed from one telephone to the other. An analog telephone call requires one simple step; a digital call requires several million discrete steps per second. Digital telephones work because the hardware has become much faster. Yet brains, constructed of slow devices and using only a few watts of power, are competitive for many cognitive tasks. The important question is not why machines are becoming so smart but why humans are still so good. Artificial intelligence is missing something important, probably because of these hardware differences.
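The analog-versus-digital contrast above comes down to sampling and quantization: the continuously varying voltage on the wire is measured thousands of times per second, and each measurement is stored as a short binary code. The sketch below is illustrative only and is not taken from the chapter; it assumes the conventional telephony parameters of an 8 kHz sample rate and 8-bit samples, and it shows how one second of "voice" becomes 64,000 bits that must then be encoded, transmitted, and decoded, which is where the millions of discrete steps per second come from. The analog line, by contrast, simply carries the current.

    import math

    # Minimal sketch of analog-to-digital conversion for a voice channel.
    # Assumed parameters (standard telephony, not taken from the chapter):
    SAMPLE_RATE_HZ = 8_000   # samples taken per second
    BITS_PER_SAMPLE = 8      # each sample stored as one byte
    TONE_HZ = 440.0          # a stand-in for the voice signal: a pure tone

    def analog_signal(t: float) -> float:
        """The 'voltage' on the line at time t, in the range [-1.0, 1.0]."""
        return math.sin(2 * math.pi * TONE_HZ * t)

    def quantize(x: float, bits: int = BITS_PER_SAMPLE) -> int:
        """Map a voltage in [-1, 1] to an unsigned integer code of `bits` bits."""
        levels = 2 ** bits
        code = int((x + 1.0) / 2.0 * (levels - 1))
        return max(0, min(levels - 1, code))

    # One second of speech becomes 8,000 discrete codes (64,000 bits).
    samples = [quantize(analog_signal(n / SAMPLE_RATE_HZ))
               for n in range(SAMPLE_RATE_HZ)]

    print(f"{len(samples)} samples/s -> {len(samples) * BITS_PER_SAMPLE} bits/s")
    print("first five codes:", samples[:5])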


2019 ◽  
Author(s):  
Lu Liu ◽  
Ahmed Elazab ◽  
Baiying Lei ◽  
Tianfu Wang

BACKGROUND: Echocardiography has a pivotal role in the diagnosis and management of cardiovascular diseases since it is real-time, cost-effective, and non-invasive. The development of artificial intelligence (AI) techniques has led to more intelligent and automatic computer-aided diagnosis (CAD) systems in echocardiography over the past few years. Automatic CAD mainly includes classification, detection of anatomical structures, tissue segmentation, and disease diagnosis, which are carried out by machine learning techniques and the recently developed deep learning techniques.
OBJECTIVE: This review aims to provide a guide for researchers and clinicians on relevant aspects of AI, machine learning, and deep learning. In addition, we review recent applications of these methods in echocardiography and identify how echocardiography could incorporate AI in the future.
METHODS: This paper first summarizes machine learning and deep learning. Second, it reviews the current use of AI in echocardiography by searching the literature of the past 10 years in the main databases, and finally it discusses potential limitations and challenges for the future.
RESULTS: AI has shown promising improvements in the analysis and interpretation of echocardiography, bringing it to a new stage in standard-view detection, automated analysis of chamber size and function, and assessment of cardiovascular diseases.
CONCLUSIONS: Compared with machine learning, deep learning methods have achieved state-of-the-art performance across different applications in echocardiography. Although there are challenges, such as the need for large datasets, AI can provide satisfactory results through various strategies. We believe AI has the potential to improve diagnostic accuracy, reduce time consumption, and decrease the workload of cardiologists.
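To make the kind of system the review surveys concrete, here is a minimal, hypothetical sketch of a deep-learning classifier for standard-view detection, written with PyTorch. The architecture, the 128 x 128 grayscale input size, and the five view classes are illustrative assumptions, not details taken from the paper.

    import torch
    import torch.nn as nn

    # Hypothetical sketch of a standard-view classifier for echocardiography,
    # of the kind the review surveys. Input size (1 x 128 x 128 grayscale
    # frames) and the five view classes are illustrative assumptions only.
    NUM_VIEWS = 5  # e.g. A4C, A2C, PLAX, PSAX, subcostal (assumed labels)

    class ViewClassifier(nn.Module):
        def __init__(self, num_views: int = NUM_VIEWS):
            super().__init__()
            self.features = nn.Sequential(
                nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
                nn.MaxPool2d(2),                      # 128 -> 64
                nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
                nn.MaxPool2d(2),                      # 64 -> 32
                nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
                nn.AdaptiveAvgPool2d(1),              # -> 64 x 1 x 1
            )
            self.classifier = nn.Linear(64, num_views)

        def forward(self, x: torch.Tensor) -> torch.Tensor:
            feats = self.features(x).flatten(1)       # (batch, 64)
            return self.classifier(feats)             # unnormalized view scores

    # A single forward pass on a dummy batch of four frames.
    model = ViewClassifier()
    dummy_frames = torch.randn(4, 1, 128, 128)
    logits = model(dummy_frames)
    print(logits.shape)  # torch.Size([4, 5])

In practice such a network would be trained with a cross-entropy loss on large sets of expert-labelled frames, which is exactly the data requirement the review flags as a challenge.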


2021 ◽  
Vol 31 (4) ◽  
pp. 219-246

The author takes on two interrelated tasks. The first is to justify the philosophy of history as an intellectual enterprise for the modern era and one which is dedicated to finding a positive meaning in the changes that occur within humanity as it moves from the past toward the future. The viability of that enterprise has been called into question by the catastrophes of the twentieth century. The second task is to propose a new concept of historical temporality instead of the “processual” one that was discredited in the previous century. Simon maintains that we are now living in a period similar to the “saddle time” (from 1750 to 1850) described by Reinhart Koselleck. The difference between that period and the current one lies in the replacement of the “processual” temporality that was established in that earlier time by an “evental” temporality, whose structure this article is intended to explain. The future plays a key role in the structure of evental temporality. The future no longer denotes the perspective that maps out the direction of historical changes but is instead synonymous with changes as such — changes so radical that the continued existence of mankind within its former ecological, biological and physiological boundaries is at stake. The author illustrates these changes with references to bioengineering, artificial intelligence, anthropogenic climate change, etc. Expectations about these changes are utopian and dystopian at the same time and can feed one’s wildest hopes and fantasies as well as inspire the darkest fears and dreads. In any case, these changes themselves are in no way determined by the previous course of history. The future they point to undermines the continuity of human experience because it is completely independent of the past.


2004 ◽  
Vol 19 (1) ◽  
pp. 21-27 ◽  
Author(s):  
Igor Aleksander

Is artificial intelligence (AI) just something that is done in laboratories, disconnected from the pragmatic computing that constitutes current information technology, or does it contribute to progress in computing and information technology? It has even been suggested that advances in AI are merely a re-branding exercise for promises that are rarely kept. This paper is a personal view of the forces that have driven the development of AI in the past and of what might be a serious paradigm shift in the future. The latter points to what appears to be the most abstruse corner of the subject: the modelling of the human brain and the possibility of designing systems with the brain's ability to create conscious thought. There have been accusations that AI is always ahead on promise and behind on delivery. This is an inaccurate view. In broad terms, the argument presented here suggests that as AI developed, progress was achieved by overcoming unforeseen difficulties in the pursuit of very ambitious targets, not just by re-branding promises. This process not only advanced AI but also fed into the mainstream of computing that underpins the information technology of the present time. While the outcome of the paradigm shift towards conscious machines, which is examined at the end of this paper, is still unclear, it is possible to speculate how information technology might be affected in the future.


2022 ◽  
pp. 91-114
Author(s):  
Ambar Yoganingrum ◽  
Rulina Rachmawati ◽  
Koharudin Koharudin

In the past, human imagination about intelligent machines was found only in the science fiction of storybooks and films. Today, artificial intelligence (AI) can be found in people's daily lives. Various professions should prepare to face the coming era of automation. Libraries may be among the slowest institutions to develop AI, but they are gradually adopting it for their services. Many papers focus on AI development in libraries, but the opportunities and challenges librarians face in the era of automation are also essential to discuss. This chapter provides insights into the roles that librarians can take on. First, it provides information on the history and development of AI in library services. Then, based on a bibliometric analysis, it discusses AI trends in library services. Next, it presents a systematic review of the types of AI developed over time for library services. Finally, it discusses the types of jobs, expertise, and skills that librarians can develop in the coming robotics era.


Author(s):  
Stuart O. Schweitzer ◽  
Z. John Lu

Recognizing that the past often does not predict the future well, this chapter nevertheless offers prescience for the pharmaceutical industry in the next five to ten years. Using the standard economics paradigm of supply, demand, and market equilibrium, it considers the future of the industry in the following aspects: industrial organization, the nascent biosimilar sector, the promise of personalized medicine and digital healthcare information, artificial intelligence, the prospects for outpatient bundled payment programs, the setting of pharmaceutical prices, and the role of the FDA. The most important among them will be the scope and nature of health care reform in the United States and the jurisdiction of the FDA in the coming years.

