A Cutting-Edge Survey of Tribological Behavior Evaluation Using Artificial and Computational Intelligence Models

2021, Vol 2021, pp. 1-17
Author(s): Senthil Kumaran Selvaraj, Aditya Raj, Mohit Dharnidharka, Utkarsh Chadha, Isha Sachdeva, ...

The condition of a metal surface is critical in applications such as machining, welding, aerospace, and aerodynamics. Metals used widely in machines and appliances experience considerable wear, and the gradual loss of the upper metal layers is inevitable over a machine's or component's lifetime. As technological progress has been made in this field, artificial intelligence implementations and computational models have been studied as ways to evaluate the tribological behavior of different metals. Different neural networks have been used for different metals; this paper classifies them, describes their benefits and drawbacks, and provides an overview of the different types of wear and their occurrence. Artificial intelligence is still a relatively new tool in mechanical engineering, and little work has systematically examined the various metal wear cases or compared the accuracy of AI and computational models in predicting wear behavior.
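
As a rough illustration of the kind of model the survey covers, the sketch below fits a small feed-forward neural network to synthetic pin-on-disc data. The feature set (load, sliding speed, hardness) and the Archard-like toy relation are assumptions made for demonstration only, not taken from any of the surveyed studies.

```python
# Minimal sketch (not from the surveyed papers): a feed-forward neural network
# regressing a wear quantity from typical pin-on-disc inputs. The features and
# the synthetic data-generating relation are illustrative assumptions.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.neural_network import MLPRegressor
from sklearn.metrics import r2_score

rng = np.random.default_rng(0)
n = 500
load = rng.uniform(10, 100, n)        # applied load (N), assumed range
speed = rng.uniform(0.5, 3.0, n)      # sliding speed (m/s), assumed range
hardness = rng.uniform(150, 600, n)   # material hardness (HV), assumed range
# Archard-like toy relation with noise, purely for demonstration
wear = load * speed / hardness + rng.normal(0, 0.02, n)

X = np.column_stack([load, speed, hardness])
X_train, X_test, y_train, y_test = train_test_split(X, wear, random_state=0)

scaler = StandardScaler().fit(X_train)
model = MLPRegressor(hidden_layer_sizes=(16, 8), max_iter=5000, random_state=0)
model.fit(scaler.transform(X_train), y_train)

print("R^2 on held-out data:", r2_score(y_test, model.predict(scaler.transform(X_test))))
```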

Author(s): Joshua Bensemann, Qiming Bao, Gaël Gendron, Tim Hartill, Michael Witbrock

Processes occurring in brains, a.k.a. biological neural networks, can be and have been modeled within artificial neural network architectures. Due to this, we have conducted a review of research on the phenomenon of blindsight in an attempt to generate ideas for artificial intelligence models. Blindsight can be considered a diminished form of visual experience. If we assume that artificial networks have no form of visual experience, then deficits caused by blindsight give us insights into the processes occurring within visual experience that we can incorporate into artificial neural networks. This paper is structured in three parts. Section 2 reviews blindsight research, looking specifically at the errors occurring during this condition compared to normal vision. Section 3 identifies overall patterns from Section 2 to generate insights for computational models of vision. Section 4 demonstrates the utility of examining biological research to inform artificial intelligence research by examining computational models of visual attention relevant to one of the insights generated in Section 3. The research covered in Section 4 shows that incorporating one of our insights into computational vision does benefit those models. Future research will be required to determine whether our other insights are as valuable.


1978, Vol 1 (1), pp. 91-99
Author(s): Zenon W. Pylyshyn

It is argued that the traditional distinction between artificial intelligence and cognitive simulation amounts to little more than a difference in style of research - a different ordering in goal priorities and different methodological allegiances. Both enterprises are constrained by empirical considerations and both are directed at understanding classes of tasks that are defined by essentially psychological criteria. Because of the different ordering of priorities, however, they occasionally take somewhat different stands on such issues as the power/generality trade-off and on the relevance of the sort of data collected in experimental psychology laboratories. Computational systems are more than a tool for checking the consistency and completeness of theoretical ideas. They are ways of empirically exploring the adequacy of methods and of discovering task demands. For psychologists, computational systems should be viewed as functional models quite independent of (and likely not reducible to) neurophysiological systems, and cast at a level of abstraction appropriate for capturing cognitive generalizations. As model objects, however, they do present a serious problem of interpretation and communication since the task of extracting the relevant theoretical principles from a large complex program may be formidable. Methodologies for validating computer programs as cognitive models are briefly described. These may be classified as intermediate state, relative complexity, and component analysis methods. Compared with the constraints imposed by criteria such as sufficiency, breadth, and extendability, these experimentally based methods are relatively weak and may be most useful after some top-down progress is made in the understanding of methods sufficient for relevant tasks - such as may be forthcoming from artificial intelligence research.


Electronics, 2021, Vol 10 (23), pp. 2901
Author(s): Lilia Muñoz, Vladimir Villarreal, Mel Nielsen, Yen Caballero, Inés Sittón-Candanedo, ...

The rapid spread of SARS-CoV-2 and the consequent global COVID-19 pandemic have prompted the public administrations of different countries to establish health procedures and protocols based on information generated through predictive techniques and models, which, in turn, rely on technologies such as artificial intelligence (AI) and machine learning (ML). This article presents some of the AI tools and computational models used to support the control and detection of COVID-19 cases. In addition, the main features of the Epidempredict project regarding COVID-19 in Panama are presented. This initiative consists of the planning and design of a cloud-based digital platform to manage the ingestion, analysis, visualization and export of data on the evolution of COVID-19 in Panama. The design of the predictive algorithms is based on a hybrid model that combines the population dynamics of an SIR model of differential equations with extrapolation by recurrent neural networks. The resulting technological solution allows adjustments to be made to the rules implemented in the expert processes under consideration. Furthermore, the resulting information is displayed and explored through user-friendly dashboards, contributing to more meaningful decision-making processes.
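
As a rough sketch of the SIR component of the hybrid model described above, the code below integrates the standard SIR differential equations with SciPy. The parameter values and initial conditions are illustrative assumptions, not the Epidempredict project's fitted values, and the recurrent-neural-network extrapolation step is omitted.

```python
# Minimal sketch of an SIR model of differential equations. Parameters (beta,
# gamma, population, initial infections) are assumed for illustration only.
import numpy as np
from scipy.integrate import solve_ivp

def sir(t, y, beta, gamma, n_pop):
    s, i, r = y
    ds = -beta * s * i / n_pop              # new infections leave S
    di = beta * s * i / n_pop - gamma * i   # infections enter I, recoveries leave
    dr = gamma * i                          # recoveries enter R
    return [ds, di, dr]

n_pop = 4_300_000           # rough population figure, assumed for illustration
beta, gamma = 0.30, 0.10    # illustrative transmission and recovery rates
y0 = [n_pop - 100, 100, 0]  # initial susceptible, infected, recovered

sol = solve_ivp(sir, (0, 180), y0, args=(beta, gamma, n_pop),
                t_eval=np.arange(0, 181, 1))
peak_day = sol.t[np.argmax(sol.y[1])]
print(f"Peak infections around day {peak_day:.0f}")
```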


2020, Vol 07 (01), pp. 51-62
Author(s): David Gamez

This paper explores some of the potential connections between natural and artificial intelligence and natural and artificial consciousness. In humans, we use batteries of tests to indirectly measure intelligence. This approach breaks down when we try to apply it to radically different animals and to the many varieties of artificial intelligence. To address this issue, people are starting to develop algorithms that can measure intelligence in any type of system. Progress is also being made in the scientific study of consciousness: we can neutralize the philosophical problems, we have data about the neural correlates, and we have some idea about how we can develop mathematical theories that can map between physical and conscious states. While intelligence is a purely functional property of a system, there are good reasons for thinking that consciousness is linked to particular spatiotemporal patterns in specific physical materials. This paper outlines some of the weak inferences that can be made about the relationships between intelligence and consciousness in natural and artificial systems. To make real scientific progress we need to develop practical universal measures of intelligence and mathematical theories of consciousness that can reliably map between physical and conscious states.


Author(s): William B. Rouse

This book discusses the use of models and interactive visualizations to explore designs of systems and policies in determining whether such designs would be effective. Executives and senior managers are very interested in what “data analytics” can do for them and, quite recently, what the prospects are for artificial intelligence and machine learning. They want to understand and then invest wisely. They are reasonably skeptical, having experienced overselling and under-delivery. They ask about reasonable and realistic expectations. Their concern is with the futurity of decisions they are currently entertaining. They cannot fully address this concern empirically. Thus, they need some way to make predictions. The problem is that one rarely can predict exactly what will happen, only what might happen. To overcome this limitation, executives can be provided predictions of possible futures and the conditions under which each scenario is likely to emerge. Models can help them to understand these possible futures. Most executives find such candor refreshing, perhaps even liberating. Their job becomes one of imagining and designing a portfolio of possible futures, assisted by interactive computational models. Understanding and managing uncertainty is central to their job. Indeed, doing this better than competitors is a hallmark of success. This book is intended to help them understand what fundamentally needs to be done, why it needs to be done, and how to do it. The hope is that readers will discuss this book and develop a “shared mental model” of computational modeling in the process, which will greatly enhance their chances of success.


Author(s): Tan Yigitcanlar, Juan M. Corchado, Rashid Mehmood, Rita Yi Man Li, Karen Mossberger, ...

The urbanization problems we face may be alleviated using innovative digital technology. However, employing these technologies entails the risk of creating new urban problems and/or intensifying the old ones instead of alleviating them. Hence, in a world with immense technological opportunities and at the same time enormous urbanization challenges, it is critical to adopt the principles of responsible urban innovation. These principles assure the delivery of the desired urban outcomes and futures. We contribute to the existing responsible urban innovation discourse by focusing on local government artificial intelligence (AI) systems, providing an overview of the literature and practice, and proposing a conceptual framework. In this perspective paper, we advocate balancing the costs, benefits, risks and impacts of developing, adopting, deploying and managing local government AI systems in order to achieve responsible urban innovation. The statements made in this perspective paper are based on a thorough review of the literature, research, developments, trends and applications carefully selected and analyzed by an expert team of investigators. This study provides new insights, develops a conceptual framework and identifies prospective research questions by placing local government AI systems under the microscope through the lens of responsible urban innovation. The presented overview and framework, along with the identified issues and research agenda, offer scholars prospective lines of research and development; the outcomes of these future studies will help urban policymakers, managers and planners to better understand the crucial role played by local government AI systems in ensuring the achievement of responsible outcomes.


2021, Vol 4 (1)
Author(s): Albert T. Young, Kristen Fernandez, Jacob Pfau, Rasika Reddy, Nhat Anh Cao, ...

Artificial intelligence models match or exceed dermatologists in melanoma image classification. Less is known about their robustness against real-world variations, and clinicians may incorrectly assume that a model with an acceptable area under the receiver operating characteristic curve or related performance metric is ready for clinical use. Here, we systematically assessed the performance of dermatologist-level convolutional neural networks (CNNs) on real-world non-curated images by applying computational “stress tests”. Our goal was to create a proxy environment in which to comprehensively test the generalizability of off-the-shelf CNNs developed without training or evaluation protocols specific to individual clinics. We found inconsistent predictions on images captured repeatedly in the same setting or subjected to simple transformations (e.g., rotation). Such transformations resulted in false positive or negative predictions for 6.5–22% of skin lesions across test datasets. Our findings indicate that models meeting conventionally reported metrics need further validation with computational stress tests to assess clinic readiness.
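
The sketch below illustrates the idea of a rotation stress test in the spirit of this study: apply simple transformations to each image and measure how often the model's prediction flips. The classify function is a placeholder standing in for a trained melanoma CNN, and the synthetic images are assumptions for demonstration only; neither is taken from the paper.

```python
# Minimal sketch of a rotation "stress test". `classify` is a placeholder for a
# trained melanoma CNN; the random images stand in for lesion photographs.
import numpy as np
from scipy.ndimage import rotate

def classify(image: np.ndarray) -> int:
    """Placeholder binary classifier; real use would call a trained CNN."""
    return int(image.mean() > 0.5)

def rotation_flip_rate(image: np.ndarray, angles=(90, 180, 270)) -> float:
    """Fraction of rotated copies whose prediction differs from the original."""
    baseline = classify(image)
    preds = [classify(rotate(image, a, reshape=False, mode="nearest")) for a in angles]
    return float(np.mean([p != baseline for p in preds]))

rng = np.random.default_rng(0)
lesions = [rng.random((64, 64)) for _ in range(20)]   # stand-in lesion images
flip_rates = [rotation_flip_rate(img) for img in lesions]
print(f"Mean prediction flip rate under rotation: {np.mean(flip_rates):.2%}")
```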


Urban Studies, 2021, pp. 004209802110140
Author(s): Sarah Barns

This commentary interrogates what it means for routine urban behaviours to now be replicating themselves computationally. The emergence of autonomous or artificial intelligence points to the powerful role of big data in the city, as increasingly powerful computational models are now capable of replicating and reproducing existing spatial patterns and activities. I discuss these emergent urban systems of learned or trained intelligence as being at once radical and routine. Just as the material and behavioural conditions that give rise to urban big data demand attention, so do the generative design principles of data-driven models of urban behaviour, as they are increasingly put to use in the production of replicable, autonomous urban futures.

