The sovereignty of Artificial Intelligence over Human Ethics and Heedfulness

2021
Vol 23 (08)
pp. 657-665
Author(s):
Sunil Varma Mudundi
Tejaswi Pasumathy
Dr. Raul Villamarin Roudriguez
...

Artificial Intelligence is growing at an extreme pace and can be seen in almost every field of work today. It is being introduced into crucial roles such as recruiting, law enforcement, and the military; to be entrusted with such roles, it requires a great deal of trust and scientific evaluation. With the evolution of artificial intelligence, automated machines have advanced rapidly this decade. Equipping a machine or robot with a set of tools and programs can technically solve some challenges, but the problem arises when we depend on robots and machines completely. This fast-growing technology is very helpful when we use it for primary needs such as face detection, sensor control, or bill counting, but we face real challenges when it becomes involved in decision making, critical thinking, and similar tasks. In the near future, automated machines are going to replace humans in many positions. Firms from small to large are opting for autonomous systems to make their work simpler and more efficient, since a machine gives more accurate results and outputs in less time. As technology develops quickly, it should be developed in accordance with societal rules and conditions. Scientists and analysts predict that singularity in AI could be achieved by 2047; Ray Kurzweil of Google has made this prediction. We have all seen DRDO's work on autonomous fighting drones, which operate without any human assistance: they evaluate a target's type and features and eliminate it using edge-detection techniques from computer vision. AI has also entered recruiting: some companies have started using AI recruiters to evaluate large pools of applications and select efficient candidates, which is made possible by computer vision and machine learning algorithms. In recent times, AI has also been used as a suggestion tool for judgement.
Apart from all these advancements, some malicious scenarios may affect humankind: when AI is used in the wrong way, many lives may be put in danger. Is it possible, by collecting all the good and evil of past experience, to feed a machine so that it can work autonomously? Many philosophers and scholars have left society sets of ethical guidelines, but is it practically possible to follow them once AI achieves singularity, and once machines approach the neural networking of humans, with good decision-making skills, critical thinking, and so on? We will briefly discuss ethics and AI robots and machines that involve consciousness and cognitive abilities. In this rapidly upgrading technological world, AI already governs a great number of operations, so we will discuss how ethics can be followed and how we can balance ethics and technology. We will deep dive into some of these interesting areas in this article.

2021
Author(s):
Jon Gustav Vabø
Evan Thomas Delaney
Tom Savel
Norbert Dolle

Abstract This paper describes the transformational application of Artificial Intelligence (AI) in Equinor's annual well planning and maturation process. Well planning is a complex decision-making process, like many other processes in the industry: there are thousands of choices, conflicting business drivers, large uncertainties, and hidden bias. These complexities all add up, which makes good decision making very hard. In this application, AI has been used for automated and unbiased evaluation of the full solution space, with the objective of optimizing the selection of drilling campaigns while taking into account complex issues such as anti-collision with existing wells, drilling hazards, and trade-offs between cost, value and risk. Designing drillable well trajectories involves a sequence of decisions, which makes the process very suitable for AI algorithms. Different solver architectures, or algorithms, can be used to play this game, similar to how companies such as Google-owned DeepMind develop customized solvers for games such as Go and StarCraft. The chosen method is a Tree Search algorithm with an evolutionary layer on top, providing a good balance between performance (i.e., speed) and exploration capability (i.e., it looks "wide" in the option space). The algorithm has been deployed in a full-stack web-based application that allows users to follow an end-to-end workflow: from defining well trajectory design rules and constraints, to running the AI engine and evaluating results, to optimizing multi-well drilling campaigns based on risk, value and cost objectives. The full-size paper describes different Norwegian Continental Shelf (NCS) use cases of this AI-assisted well trajectory planning. Results to date indicate significant CAPEX savings potential and step-change improvements in decision speed (months to days) compared to routine manual workflows.
There are very few truly transformative examples of Artificial Intelligence in multi-disciplinary workflows. This paper therefore gives a unique insight into how a combination of data science, domain expertise and end-user feedback can lead to powerful and transformative AI solutions, implemented at scale within an existing organization.
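The solver architecture described above, a tree search with an evolutionary layer on top, can be illustrated with a deliberately toy sketch. Everything below (the discrete choice set, the scoring weights, the plan length) is an invented stand-in for illustration, not Equinor's actual model:

```python
import random

# Toy model: a "well plan" is a sequence of discrete design choices, and
# score() trades off value against cost and a risk penalty. All names and
# weights are illustrative assumptions.
CHOICES = [0, 1, 2]   # e.g. conservative / moderate / aggressive design options
PLAN_LEN = 4

def score(plan):
    value = sum(plan)                                   # stand-in for well value
    cost = 0.5 * sum(1 for c in plan if c == 2)         # aggressive choices cost more
    risk = 2.0 if plan.count(2) > 2 else 0.0            # anti-collision-style penalty
    return value - cost - risk

def tree_search(prefix=()):
    """Expand the full decision tree and return the best complete plan."""
    if len(prefix) == PLAN_LEN:
        return prefix, score(prefix)
    best = (None, float("-inf"))
    for c in CHOICES:
        plan, s = tree_search(prefix + (c,))
        if s > best[1]:
            best = (plan, s)
    return best

def evolve(plan, generations=20, seed=0):
    """Evolutionary layer: mutate one decision at a time, keep improvements."""
    rng = random.Random(seed)
    best, best_s = plan, score(plan)
    for _ in range(generations):
        cand = list(best)
        cand[rng.randrange(PLAN_LEN)] = rng.choice(CHOICES)
        cand = tuple(cand)
        if score(cand) > best_s:
            best, best_s = cand, score(cand)
    return best, best_s

plan, s = evolve(tree_search()[0])
print(plan, s)
```

In a realistic setting the tree is far too large to enumerate, which is why the paper balances search depth against exploration width; the evolutionary layer then refines promising candidates without re-searching the whole space.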


Author(s):
Viktor Elliot
Mari Paananen
Miroslaw Staron

We propose an exercise with the purpose of providing a basic understanding of key concepts within AI and extending the understanding of AI beyond mathematics. The exercise allows participants to carry out analyses of accounting data using visualization tools, as well as to develop their own machine learning algorithms that can mimic their decisions. Finally, we also problematize the use of AI in decision making, considering aspects such as biases in data and ethical concerns.
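A minimal sketch of the decision-mimicking part of such an exercise, assuming invented toy data (a single "debt ratio" feature and approve/reject labels standing in for a participant's own decisions):

```python
# Fit a one-rule "decision stump" that reproduces the human approve/reject
# decisions. The records, the feature, and the threshold search are
# illustrative assumptions, not the authors' actual teaching material.
records = [  # (debt_ratio, decision made by the participant)
    (0.2, "approve"), (0.3, "approve"), (0.4, "approve"),
    (0.6, "reject"), (0.7, "reject"), (0.9, "reject"),
]

def fit_stump(data):
    """Find the debt-ratio threshold that best mimics the human labels."""
    best_t, best_acc = None, -1.0
    for t in sorted(x for x, _ in data):
        acc = sum(("approve" if x < t else "reject") == y
                  for x, y in data) / len(data)
        if acc > best_acc:
            best_t, best_acc = t, acc
    return best_t, best_acc

threshold, accuracy = fit_stump(records)
print(threshold, accuracy)
```

A stump like this is also a natural opening for the ethics discussion the authors mention: if the labels encode a biased decision pattern, the fitted model reproduces that bias faithfully.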


2021
Vol 10 (22)
pp. 5330
Author(s):
Francesco Paolo Lo Muzio
Giacomo Rozzi
Stefano Rossi
Giovanni Battista Luciani
Ruben Foresti
...

The human right ventricle is barely monitored during open-chest surgery due to the absence of intraoperative imaging techniques capable of elaborating its complex function. Accordingly, artificial intelligence could not be adopted for this specific task. We recently proposed a video-based approach for the real-time evaluation of the epicardial kinematics to support medical decisions. Here, we employed two supervised machine learning algorithms based on our technique to predict the patients’ outcomes before chest closure. Videos of the beating hearts were acquired before and after pulmonary valve replacement in twelve Tetralogy of Fallot patients and recordings were properly labeled as the “unhealthy” and “healthy” classes. We extracted frequency-domain-related features to train different supervised machine learning models and selected their best characteristics via 10-fold cross-validation and optimization processes. Decision surfaces were built to classify two additional patients having good and unfavorable clinical outcomes. The k-nearest neighbors and support vector machine showed the highest prediction accuracy; the patients’ class was identified with a true positive rate ≥95% and the decision surfaces correctly classified the additional patients in the “healthy” (good outcome) or “unhealthy” (unfavorable outcome) classes. We demonstrated that classifiers employed with our video-based technique may aid cardiac surgeons in decision making before chest closure.
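A hedged sketch of the pipeline shape the abstract describes: a frequency-domain feature extracted from each signal, followed by a k-nearest-neighbours classifier. The signals, labels, and the single dominant-frequency feature below are synthetic stand-ins, not the patients' data or the authors' exact feature set:

```python
import math

def dominant_freq(signal):
    """Index of the strongest non-DC component of a naive DFT."""
    n = len(signal)
    mags = []
    for k in range(1, n // 2):
        re = sum(signal[t] * math.cos(2 * math.pi * k * t / n) for t in range(n))
        im = sum(signal[t] * math.sin(2 * math.pi * k * t / n) for t in range(n))
        mags.append((math.hypot(re, im), k))
    return max(mags)[1]

def make(freq, n=64):
    """Synthetic stand-in for an epicardial kinematic signal."""
    return [math.sin(2 * math.pi * freq * t / n) for t in range(n)]

# Labeled training examples: slow oscillation -> "healthy", fast -> "unhealthy".
train = [(dominant_freq(make(2)), "healthy"), (dominant_freq(make(3)), "healthy"),
         (dominant_freq(make(9)), "unhealthy"), (dominant_freq(make(10)), "unhealthy")]

def knn_predict(x, k=3):
    nearest = sorted(train, key=lambda fx: abs(fx[0] - x))[:k]
    labels = [lbl for _, lbl in nearest]
    return max(set(labels), key=labels.count)   # majority vote

print(knn_predict(dominant_freq(make(8))))
```

The paper's actual models were selected via 10-fold cross-validation over richer frequency-domain features; this sketch only shows how a frequency feature plus k-NN yields a class decision.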


2021
Vol 29 (Supplement_1)
pp. i18-i18
Author(s):
N Hassan
R Slight
D Weiand
A Vellinga
G Morgan
...

Abstract Introduction Sepsis is a life-threatening condition associated with increased mortality. Artificial intelligence tools can inform clinical decision making by flagging patients who may be at risk of developing infection and subsequent sepsis, and can assist clinicians with their care management. Aim To identify the optimal set of predictors used to train machine learning algorithms to predict the likelihood of an infection and subsequent sepsis and inform clinical decision making. Methods This systematic review was registered in the PROSPERO database (CRD42020158685). We searched three large databases: Medline, the Cumulative Index of Nursing and Allied Health Literature, and Embase, using appropriate search terms. We included quantitative primary research studies that focused on sepsis prediction associated with bacterial infection in the adult population (>18 years) in all care settings and that included data on the predictors used to develop machine learning algorithms. The search covered 1 January 2000 to 25 November 2019. Data extraction was performed using a data extraction sheet, and a narrative synthesis of eligible studies was undertaken. Narrative analysis was used to arrange the data into key areas and to compare and contrast the content of the included studies. Quality assessment was performed using the Newcastle-Ottawa Quality Assessment scale, which evaluates the quality of non-randomized studies. Bias was not assessed due to the non-randomised nature of the included studies. Results Fifteen articles met our inclusion criteria (Figure 1). We identified 194 predictors that were used to train machine learning algorithms to predict infection and subsequent sepsis, with 13 predictors used on average across all included studies.
The most significant predictors included age, gender, smoking, alcohol intake, heart rate, blood pressure, lactate level, cardiovascular disease, endocrine disease, cancer, chronic kidney disease (eGFR < 60 ml/min), white blood cell count, liver dysfunction, surgical approach (open or minimally invasive), and pre-operative haematocrit < 30%. These predictors were used in the development of all the algorithms in the fifteen articles. All included studies used artificial intelligence techniques to predict the likelihood of sepsis, with an average sensitivity of 77.5 ± 19.27 and an average specificity of 69.45 ± 21.25. Conclusion The type of predictors used was found to influence the predictive power and predictive timeframe of the developed machine learning algorithms. Two strengths of our review are that we included studies published since the first definition of sepsis was published in 2001, and that we identified factors that can improve the predictive ability of algorithms. However, we note that the included studies had some limitations: three studies did not validate the models they developed, and many tools were limited by reduced specificity, reduced sensitivity, or both. This work has important implications for practice, as predicting the likelihood of sepsis can help inform the management of patients and concentrate finite resources on those patients who are most at risk. Producing a set of predictors can also guide future studies in developing more sensitive and specific algorithms with an increased predictive time window that allows for preventive clinical measures.
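Since the review reports model performance as average sensitivity and specificity, it may help to recall how the two metrics fall out of a confusion matrix. The labels below are invented purely for illustration:

```python
def sensitivity_specificity(y_true, y_pred):
    """Sensitivity = TP/(TP+FN); specificity = TN/(TN+FP)."""
    tp = sum(t == 1 and p == 1 for t, p in zip(y_true, y_pred))
    tn = sum(t == 0 and p == 0 for t, p in zip(y_true, y_pred))
    fn = sum(t == 1 and p == 0 for t, p in zip(y_true, y_pred))
    fp = sum(t == 0 and p == 1 for t, p in zip(y_true, y_pred))
    return tp / (tp + fn), tn / (tn + fp)

y_true = [1, 1, 1, 1, 0, 0, 0, 0, 0, 0]   # 1 = patient developed sepsis (made up)
y_pred = [1, 1, 1, 0, 0, 0, 0, 0, 1, 1]   # model flags (made up)
sens, spec = sensitivity_specificity(y_true, y_pred)
print(sens, spec)
```

The trade-off the review notes (tools limited by reduced sensitivity, reduced specificity, or both) is exactly the tension between the FN and FP counts above: tightening a flagging threshold typically improves one at the expense of the other.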


2021
Author(s):
Jorge Crespo Alvarez
Bryan Ferreira Hernández
Sandra Sumalla Cano

This work, developed under the NUTRIX Project, has the objective of developing artificial intelligence algorithms, based on the open-source platform Knime, that characterize and predict the adherence of individuals to a diet before starting treatment. The machine learning algorithms developed under this project have significantly increased the confidence that a patient will leave the treatment (diet) before starting: from the a priori probability of 17.6% up to 96.5%, which can be used as valuable guidance during the decision-making process of professionals in the area of dietetics and nutrition.
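A small sketch of what raising confidence from a base rate to a feature-conditioned probability means in practice. The toy cohort and the "missed first appointment" feature are invented, and the numbers do not reproduce the project's 17.6% / 96.5% figures:

```python
# Each record: (missed_first_appointment, dropped_out_of_diet). Invented data.
cohort = [
    (True, True), (True, True), (True, False),
    (False, False), (False, False), (False, False),
    (False, True), (False, False), (False, False), (False, False),
]

# A priori probability of dropout: the cohort-wide base rate.
base_rate = sum(dropped for _, dropped in cohort) / len(cohort)

# Probability conditioned on the predictive feature: among flagged patients only.
flagged = [dropped for missed, dropped in cohort if missed]
conditional = sum(flagged) / len(flagged)

print(base_rate, conditional)
```

A trained model combines many such features at once, but the principle is the same: conditioning on informative predictors moves the estimate away from the base rate, which is what makes the prediction useful before treatment starts.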


Intelligent technology has touched and improved almost every aspect of the employee life cycle, and human resources is one of the areas that has benefited most. The transformation of work questions the way we work, where we work, how we work, and, above all, the environment and surroundings in which we work. The main goal is to support organizations in breaking out of their traditional ways of working and moving towards an environment that is more pleasant, flexible, empowering, and communicative. Machine learning, algorithms, and artificial intelligence are the latest technologies on HR professionals' minds. Artificial intelligence is designed to take decisions based on the data fed into its programs. The key difference between rhythm and balance is that of choice versus adjustment: choice is made easier only with the help of prioritization, quick decision making, time, and communication, and digitalization plays a vital role in maintaining this. In this paper, we suggest that artificial assistants focus on improving the rhythm of the individual.


Author(s):
Amit Kumar Tyagi
Poonam Chahal

With the recent development of technologies and the integration of millions of Internet of Things devices, a large amount of data is generated every day (known as Big Data). This data is needed to support the growth of many organizations and of applications such as e-healthcare. We are also entering the era of a smart world, in which robotics will take a place in most applications (to solve the world's problems). Implementing robotics in applications such as medicine and automobiles is a goal of computer vision. Computer vision (CV) is fulfilled by several components, namely artificial intelligence (AI), machine learning (ML), and deep learning (DL). Machine learning and deep learning techniques and algorithms are used to analyze Big Data; today, organizations such as Google and Facebook use ML techniques to search for particular data or to recommend posts. Hence, the requirement of computer vision is fulfilled through these three terms: AI, ML, and DL.


Author(s):
Deeksha Kaul
Harika Raju
B. K. Tripathy

In this chapter, the authors discuss the use of quantum computing concepts to optimize the decision-making capability of classical machine learning algorithms. Machine learning, a subfield of artificial intelligence, implements various techniques to train a computer to learn and adapt to various real-time tasks. With the volume of data exponentially increasing, solving the same problems using classical algorithms becomes more tedious and time consuming. Quantum computing has varied applications in many areas of computer science. One such area which has been transformed a lot through the introduction of quantum computing is machine learning. Quantum computing, with its ability to perform tasks in logarithmic time, aids in overcoming the limitations of classical machine learning algorithms.


2021
pp. 18-50
Author(s):
Ahmed A. Elngar
...

Computer vision is one of the fields of computer science, and one of the most powerful and persuasive types of artificial intelligence. It is similar to the human vision system, as it enables computers to recognize and process objects in pictures and videos in the same way that humans do. Computer vision technology has rapidly evolved in many fields and contributed to solving many problems. It has contributed to self-driving cars, enabling them to understand their surroundings: cameras record video from different angles around the car, a computer vision system extracts images from the video, and the images are then processed in real time to find road edges, detect other cars, and read traffic lights, pedestrians, and objects. Computer vision has also contributed to facial recognition, a technology that enables computers to match images of people's faces to their identities: algorithms detect facial features in images and then compare them with databases. Computer vision also plays an important role in healthcare, where algorithms can help automate tasks such as detecting breast cancer, finding symptoms in X-rays, spotting cancerous moles in skin images, and analyzing MRI scans. It has likewise contributed to many fields, including image classification, object detection, motion recognition, subject tracking, and medicine. The rapid development of artificial intelligence is making machine learning ever more important in this field of research: algorithms work through every bit of data and predict the outcome, which has become an important key to unlocking the door to AI. Looking at the concept of deep learning, we find that deep learning is a subset of machine learning whose algorithms, inspired by the structure and function of the human brain and called artificial neural networks, learn from large amounts of data. A deep learning algorithm performs a task repeatedly, tweaking it a little each time to improve the outcome.
The development of computer vision thus owes much to deep learning. We now take a tour of convolutional neural networks (abbreviated CNN or ConvNet), which are among the most powerful supervised deep learning models. The name "convolutional" is taken from a mathematical linear operation between matrices called convolution. The CNN structure can be used on a variety of real-world problems, including computer vision, image recognition, natural language processing (NLP), anomaly detection, video analysis, drug discovery, recommender systems, health risk assessment, and time-series forecasting. CNNs are similar to ordinary neural networks; the main difference is that CNNs are used chiefly for pattern recognition within images. This allows us to encode image features into the network structure, making the network more suitable for image-focused tasks while reducing the parameters required to set up the model. One advantage of CNNs is their excellent performance on machine learning problems, so we will use a CNN as the classifier for image classification. The objective of this paper, then, is to discuss image classification in detail in the following sections.
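Before moving to image classification, the convolution that gives CNNs their name can be written out directly. The sketch below is a minimal pure-Python forward pass through one convolutional layer (valid padding, stride 1), a ReLU activation, and 2×2 max pooling; the image and the edge-detecting kernel are illustrative, and real frameworks implement the same operations in optimized, learnable form:

```python
def conv2d(image, kernel):
    """Valid 2D convolution (cross-correlation form), stride 1."""
    kh, kw = len(kernel), len(kernel[0])
    return [[sum(image[i + a][j + b] * kernel[a][b]
                 for a in range(kh) for b in range(kw))
             for j in range(len(image[0]) - kw + 1)]
            for i in range(len(image) - kh + 1)]

def relu(fm):
    """Elementwise non-linearity applied to the feature map."""
    return [[max(0, v) for v in row] for row in fm]

def maxpool2(fm):
    """2x2 max pooling: keep the strongest response in each patch."""
    return [[max(fm[i][j], fm[i][j + 1], fm[i + 1][j], fm[i + 1][j + 1])
             for j in range(0, len(fm[0]) - 1, 2)]
            for i in range(0, len(fm) - 1, 2)]

image = [[0, 0, 1, 1],     # a tiny image with a vertical edge in the middle
         [0, 0, 1, 1],
         [0, 0, 1, 1],
         [0, 0, 1, 1]]
edge_kernel = [[-1, 1]]    # responds where intensity jumps left-to-right

fm = maxpool2(relu(conv2d(image, edge_kernel)))
print(fm)
```

The pooled map is strongest exactly where the edge sits, which is the sense in which a CNN "encodes the features of an image into the structure": stacked layers of such filters, with learned kernels, build up from edges to object parts to whole-image classes.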


Author(s):  
Aryan Karn

Computer vision is an area of research concerned with assisting computers in seeing. At the most abstract level, computer vision problems aim to infer something about the world from observed image data. It is a multidisciplinary subject that may be loosely classified as a branch of artificial intelligence and machine learning, both of which may involve specialized techniques as well as general-purpose learning methods. As an interdisciplinary field of research, it may seem disorganized, with methods borrowed and reused from various engineering and computer science disciplines. While one particular vision problem may be readily solved with a hand-crafted statistical technique, another may require a vast and sophisticated ensemble of generic machine learning algorithms. Computer vision as a discipline is at the cutting edge of science. As with any frontier, it is thrilling and chaotic, often with no trustworthy authority to turn to. Many beneficial concepts lack a theoretical foundation, some theories prove ineffective in practice, and the developed regions are widely dispersed, with one often seeming totally unreachable from another.

