Interaction between Model Based Signal and Image Processing, Machine Learning and Artificial Intelligence

Proceedings ◽  
2019 ◽  
Vol 33 (1) ◽  
pp. 16
Author(s):  
Ali Mohammad-Djafari

Signal and image processing have always been among the main tools in many areas, and in particular in medical and biomedical applications. Nowadays, there are a great number of toolboxes, both general purpose and very specialized, in which classical techniques are implemented and can be used: all the transformation based methods (Fourier, Wavelets, ...) as well as model based and iterative regularization methods. Statistical methods have also shown their success in some areas when parametric models are available. Bayesian inference based methods have had great success, in particular when the data are noisy, uncertain, incomplete (missing values) or contaminated by outliers, and where there is a need to quantify uncertainties. In some applications, nowadays, we have more and more data. To use these “Big Data” to extract more knowledge, Machine Learning and Artificial Intelligence tools have shown success and become mandatory. However, even if these methods have shown success in many domains of Machine Learning such as classification and clustering, their use in real scientific problems is limited. The main reasons are twofold: first, the users of these tools cannot explain why they are successful and when they are not; second, in general, these tools cannot quantify the remaining uncertainties. Model based and Bayesian inference approaches have been very successful in linear inverse problems. However, adjusting the hyperparameters is complex and the cost of the computation is high. Convolutional Neural Networks (CNN) and Deep Learning (DL) tools can be useful for pushing these limits further. On the other side, model based methods can be helpful for selecting the structure of CNN and DL models, which is crucial to ML success. In this work, I first provide an overview and then a survey of the aforementioned methods and explore the possible interactions between them.
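As a minimal sketch of the model based approach discussed above: the MAP estimate for a linear inverse problem with Gaussian noise and a Gaussian prior reduces to Tikhonov regularization with a closed-form solution. The toy operator, noise level, and hyperparameter `lam` below are illustrative assumptions, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative linear forward model y = A x + noise (A is an assumed toy operator).
n_obs, n_unknown = 50, 10
A = rng.normal(size=(n_obs, n_unknown))
x_true = rng.normal(size=n_unknown)
y = A @ x_true + 0.01 * rng.normal(size=n_obs)

# MAP estimate under a Gaussian prior: minimize ||y - A x||^2 + lam * ||x||^2.
# Closed form: x_map = (A^T A + lam I)^(-1) A^T y.
lam = 0.1  # hyperparameter; as the abstract notes, tuning such values is the hard part
x_map = np.linalg.solve(A.T @ A + lam * np.eye(n_unknown), A.T @ y)

print(np.linalg.norm(x_map - x_true) / np.linalg.norm(x_true))
```

The need to choose `lam` (and to invert a potentially large system) is exactly the hyperparameter and computation cost issue the abstract raises.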

Information ◽  
2020 ◽  
Vol 11 (4) ◽  
pp. 193 ◽  
Author(s):  
Sebastian Raschka ◽  
Joshua Patterson ◽  
Corey Nolet

Smarter applications are making better use of the insights gleaned from data, having an impact on every industry and research discipline. At the core of this revolution lie the tools and methods that are driving it, from processing the massive piles of data generated each day to learning from them and taking useful action. Deep neural networks, along with advancements in classical machine learning and scalable general-purpose graphics processing unit (GPU) computing, have become critical components of artificial intelligence, enabling many of these astounding breakthroughs and lowering the barrier to adoption. Python continues to be the most preferred language for scientific computing, data science, and machine learning, boosting both performance and productivity by enabling the use of low-level libraries and clean high-level APIs. This survey offers insight into the field of machine learning with Python, taking a tour through important topics to identify some of the core hardware and software paradigms that have enabled it. We cover widely used libraries and concepts, collected together for holistic comparison, with the goal of educating the reader and driving the field of Python machine learning forward.
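As an illustration of the array-based computing style these libraries build on, here is a NumPy-only sketch of logistic regression trained by gradient descent; the synthetic data and learning rate are assumptions, and a real workflow would typically use a higher-level library such as scikit-learn.

```python
import numpy as np

rng = np.random.default_rng(1)

# Two toy Gaussian clusters (assumed data, a stand-in for a real dataset).
X = np.vstack([rng.normal(-1, 1, (50, 2)), rng.normal(1, 1, (50, 2))])
y = np.array([0] * 50 + [1] * 50)

# Logistic regression by batch gradient descent, written directly on arrays;
# the vectorized expressions below run in compiled code under the hood.
w, b = np.zeros(2), 0.0
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))   # sigmoid predictions
    grad_w = X.T @ (p - y) / len(y)           # gradient of the log loss
    grad_b = np.mean(p - y)
    w -= 0.5 * grad_w
    b -= 0.5 * grad_b

accuracy = np.mean((p > 0.5) == y)
print(accuracy)
```

The clean high-level API and the compiled low-level kernels behind it are precisely the performance/productivity combination the survey highlights.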


2020 ◽  
pp. 1-14
Author(s):  
Zhen Huang ◽  
Qiang Li ◽  
Ju Lu ◽  
Junlin Feng ◽  
Jiajia Hu ◽  
...  

<b><i>Background:</i></b> The application and development of artificial intelligence technology have had a profound impact on the field of medical imaging, helping medical personnel to make earlier and more accurate diagnoses. Recently, the deep convolutional neural network has emerged as a principal machine learning method in computer vision and has received significant attention in medical imaging. <b><i>Key Message:</i></b> In this paper, we review recent advances in artificial intelligence, machine learning, and deep convolutional neural networks, focusing on their applications in medical image processing. To illustrate with a concrete example, we discuss in detail the architecture of a convolutional neural network through visualization, to help understand its internal working mechanism. <b><i>Summary:</i></b> This review discusses several open questions, current trends, and critical challenges faced by medical image processing and artificial intelligence technology.
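The core operation of the convolution layer discussed above can be sketched in a few lines of NumPy; the toy image and the Sobel-style edge kernel are illustrative assumptions, not the paper's visualized network.

```python
import numpy as np

def conv2d(image, kernel):
    """Valid-mode 2-D sliding-window product (cross-correlation, as in CNN layers)."""
    kh, kw = kernel.shape
    oh = image.shape[0] - kh + 1
    ow = image.shape[1] - kw + 1
    out = np.empty((oh, ow))
    for i in range(oh):
        for j in range(ow):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

# A vertical-edge detector applied to a toy image: left half dark, right half bright.
image = np.zeros((6, 6))
image[:, 3:] = 1.0
sobel_x = np.array([[1, 0, -1], [2, 0, -2], [1, 0, -1]], float)
response = conv2d(image, sobel_x)
print(response)
```

The response is nonzero only in the columns spanning the brightness edge, which is the kind of learned feature map a CNN visualization exposes.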


2020 ◽  
Vol 18 (2) ◽  
Author(s):  
Nedeljko Šikanjić ◽  
Zoran Ž. Avramović ◽  
Esad Jakupović

In today’s world, devices with the ability to communicate are emerging and multiplying daily. This advanced technology is generating ideas about how to use these devices in order to gain financial benefits for enterprises, business, and the economy in general. The purpose of the research in this paper is to discover the trends in connecting these devices, called the internet of things (IoT), the financial aspects of implementing IoT solutions, and how leaders in the areas of cloud computing and IoT are implementing additional advanced technologies, such as machine learning and artificial intelligence, to improve processes and increase revenue while bringing automation to end users. The development of the information society not only brings innovation to everyday life but also affects the economy. This effect is reflected in various business platforms, companies, and organizations, increasing the quality of the end product or service being provided.


2021 ◽  
Vol 74 (2) ◽  
pp. 25-31
Author(s):  
D.R. Rakhimova ◽  
К. А. Zhakypbayeva

Machine learning is one of the main branches of artificial intelligence. Its main idea is not only to execute an algorithm written for a computer, but also to have the computer learn how to solve a problem on its own. Recently, in the field of translation, the question of using machine learning and integrating it with human post-editing has become very relevant. This new direction in professional English translation is called post-edited machine translation (PEMT) or machine translation post-editing (MTPE). Since the collaborative work of human and machine has given good results, this in turn sparked interest in post-editing and in the development of automated post-editing systems. The article analyzes the advantages and disadvantages of the currently widely used online systems for translation from English into Kazakh. The implementation of machine learning requires large corpora in English and Kazakh. The article contains code and results that allow such corpora to be collected.
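The corpus-collection step mentioned above might, in a minimal sketch, pair English and Kazakh sentences line by line and filter obvious misalignments; the example sentences and the length-ratio threshold are illustrative assumptions, not the article's actual code.

```python
# Pair English and Kazakh sentences line by line and drop pairs whose
# length ratio suggests a misalignment (the threshold is an assumption).
english = [
    "Machine learning is a branch of artificial intelligence.",
    "Hello!",
    "This line has no reliable translation and is far too long to match.",
]
kazakh = [
    "Машиналық оқыту жасанды интеллекттің бір саласы.",
    "Сәлем!",
    "Жоқ.",
]

def build_corpus(src, tgt, max_ratio=3.0):
    pairs = []
    for s, t in zip(src, tgt):
        s, t = s.strip(), t.strip()
        if not s or not t:
            continue
        ratio = max(len(s), len(t)) / min(len(s), len(t))
        if ratio <= max_ratio:        # crude length-ratio alignment filter
            pairs.append((s, t))
    return pairs

corpus = build_corpus(english, kazakh)
print(len(corpus))
```

Real pipelines add deduplication and language identification on top of this kind of filter, but the length-ratio check alone already removes the grossly misaligned third pair.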


2021 ◽  
Vol 36 ◽  
Author(s):  
Alexandros Vassiliades ◽  
Nick Bassiliades ◽  
Theodore Patkos

Argumentation and eXplainable Artificial Intelligence (XAI) are closely related, as in recent years Argumentation has been used to provide Explainability to AI. Argumentation can show step by step how an AI system reaches a decision; it can provide reasoning over uncertainty and can find solutions when conflicting information is faced. In this survey, we elaborate on the topics of Argumentation and XAI combined, by reviewing all the important methods and studies, as well as implementations that use Argumentation to provide Explainability in AI. More specifically, we show how Argumentation can enable Explainability for solving various types of problems in decision-making, justification of an opinion, and dialogues. Subsequently, we elaborate on how Argumentation can help in constructing explainable systems in various application domains, such as Medical Informatics, Law, the Semantic Web, Security, Robotics, and some general purpose systems. Finally, we present approaches that combine Machine Learning and Argumentation Theory, toward more interpretable predictive models.
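The step-by-step reasoning attributed to Argumentation above can be illustrated with Dung-style abstract argumentation: the grounded extension is computed by repeatedly accepting arguments whose attackers have all been defeated. The tiny framework below is an assumed example, not one from the survey.

```python
def grounded_extension(arguments, attacks):
    """Iteratively accept arguments defended against all their attackers
    (the least fixed point of the characteristic function in Dung's framework)."""
    accepted, defeated = set(), set()
    changed = True
    while changed:
        changed = False
        for a in arguments:
            if a in accepted or a in defeated:
                continue
            attackers = {x for (x, y) in attacks if y == a}
            if attackers <= defeated:          # every attacker is already defeated
                accepted.add(a)
                defeated |= {y for (x, y) in attacks if x == a}
                changed = True
    return accepted

# a attacks b, b attacks c: a is unattacked, so a is in; b is out; c is defended by a.
args = {"a", "b", "c"}
atts = {("a", "b"), ("b", "c")}
print(sorted(grounded_extension(args, atts)))
```

Each iteration of the loop is itself an explanation step ("a is unattacked, therefore b falls, therefore c is defended"), which is exactly the explainability benefit the survey describes.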


2021 ◽  
Vol 8 (2) ◽  
pp. 1-2
Author(s):  
Julkar Nine

Vision based systems have become an integral part of autonomous driving. The autonomous driving industry has made large progress in the perception of the environment as a result of improvements to vision based systems. As the industry moves up the ladder of automation, safety features are coming more and more into focus. Different safety measures have to be taken into consideration in different driving situations. One of the major requirements of the highest level of autonomy is the ability to understand both internal and external situations. Most of the research on vision based systems is focused on image processing and artificial intelligence techniques such as machine learning and deep learning. Because the current generation of technology is the generation of the “Connected World”, there is no lack of data any more. As a result of the introduction of the internet of things, most of these connected devices are able to share and transfer data. Vision based techniques depend heavily on such data.


Author(s):  
Yingce Xia ◽  
Jiang Bian ◽  
Tao Qin ◽  
Nenghai Yu ◽  
Tie-Yan Liu

Recent years have witnessed the rapid development of machine learning in solving artificial intelligence (AI) tasks in many domains, including translation, speech, image, etc. Within these domains, AI tasks are usually not independent. As a specific type of relationship, structural duality exists between many pairs of AI tasks, such as translation from one language to another vs. its opposite direction, speech recognition vs. speech synthesis, image classification vs. image generation, etc. The importance of such duality has been magnified by some recent studies, which revealed that it can boost the learning of the two tasks in the dual form. However, there has been little investigation of how to leverage this invaluable relationship in the inference stage of AI tasks. In this paper, we propose a general framework of dual inference which can take advantage of both existing models from two dual tasks, without re-training, to conduct inference for one individual task. Empirical studies on three pairs of specific dual tasks, including machine translation, sentiment analysis, and image processing, have illustrated that dual inference can significantly improve the performance of each individual task.
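A minimal sketch of the dual inference idea, with toy probability tables standing in for trained models: candidates from the forward task are reranked by combining forward and dual log-probabilities. The numbers and the equal weighting are illustrative assumptions, not the authors' models.

```python
import math

# Toy probability tables standing in for two pre-trained dual models
# (sentiment classification and its dual, label-conditioned text generation).
primary = {          # p(label | text): the forward model's scores
    "pos": 0.48,
    "neg": 0.52,
}
dual = {             # p(text | label): the dual model's scores for the same text
    "pos": 0.30,
    "neg": 0.05,
}

def dual_inference(primary, dual, alpha=0.5):
    """Rerank labels by a weighted sum of forward and dual log-probabilities."""
    scores = {
        y: alpha * math.log(primary[y]) + (1 - alpha) * math.log(dual[y])
        for y in primary
    }
    return max(scores, key=scores.get)

# The forward model alone slightly prefers "neg", but the dual model's
# evidence flips the combined decision to "pos" -- with no re-training.
print(dual_inference(primary, dual))
```

The key property matches the paper's setting: both models are used as-is at inference time, and only the combination rule is new.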


2021 ◽  
Author(s):  
◽  
M. F. Bouzon

Artificial Neural Networks (ANNs) are a popular machine learning and artificial intelligence technique, proposed since the 1950s. Among their greatest challenges is the training of parameters such as weights, the parameters of the activation functions, and constants, as well as hyperparameters such as the network architecture and the density of neurons per layer. Among the best known algorithms for parametric optimization of networks are Adam and backpropagation (BP), applied mainly in popular architectures such as MLP, RNN, LSTM, Feed-forward Neural Network (FNN), RBFNN, among many others. Recently, the great success of deep neural networks (Deep Learning), as well as of fully connected networks, has run into problems of training time and the need for specialized hardware. These challenges gave new impetus to the use of optimization algorithms for the training of these networks, and more recently to algorithms inspired by nature, also called nature-inspired (NI) algorithms. This strategy, although not a recent technique, has not yet received much attention from researchers, and today requires a greater number of experimental tests and evaluations, mainly due to the recent appearance of a much larger range of NI algorithms. Some of the elements that need attention, especially for the most recent NI algorithms, are mainly related to convergence time and to studies on the use of different cost functions. Thus, the present master’s dissertation aims to perform tests, comparisons, and studies of NI algorithms applied to the training of neural networks. Both traditional and recent NI algorithms were tested from many perspectives, including convergence time and cost functions, elements that until now have received little attention from researchers in previous tests.
The results showed that using NI algorithms for the training of traditional ANNs produced results with good classification quality, similar to popular algorithms such as Adam and BPMA, while surpassing these algorithms in convergence time by 20% to 70%, depending on the network and the parameters involved. This indicates that the strategy of using NI algorithms, especially the most recent ones, for training neural networks is a promising method that can impact the time and quality of the results of current and future machine learning and artificial intelligence applications.
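One classic member of the NI family evaluated in such studies, particle swarm optimization (PSO), can be sketched training a tiny network on XOR without any gradients; the network size, swarm constants, and iteration budget below are illustrative assumptions, not the dissertation's experimental setup.

```python
import numpy as np

rng = np.random.default_rng(42)

# XOR data and a tiny 2-2-1 network whose 9 weights we optimize with PSO.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], float)
y = np.array([0.0, 1.0, 1.0, 0.0])

def loss(theta):
    """Mean squared error of the 2-2-1 tanh/sigmoid net encoded in theta."""
    W1 = theta[:4].reshape(2, 2)
    b1 = theta[4:6]
    w2 = theta[6:8]
    b2 = theta[8]
    h = np.tanh(X @ W1 + b1)
    out = 1.0 / (1.0 + np.exp(-(h @ w2 + b2)))
    return np.mean((out - y) ** 2)

# Standard PSO: particles track personal bests; the swarm tracks a global best.
n_particles, n_dim, iters = 30, 9, 200
pos = rng.normal(0, 1, (n_particles, n_dim))
vel = np.zeros((n_particles, n_dim))
pbest = pos.copy()
pbest_val = np.array([loss(p) for p in pos])
g = pbest[pbest_val.argmin()].copy()
initial_best = pbest_val.min()

w, c1, c2 = 0.7, 1.5, 1.5          # inertia and acceleration constants (common defaults)
for _ in range(iters):
    r1, r2 = rng.random((2, n_particles, n_dim))
    vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (g - pos)
    pos = pos + vel
    vals = np.array([loss(p) for p in pos])
    improved = vals < pbest_val
    pbest[improved], pbest_val[improved] = pos[improved], vals[improved]
    g = pbest[pbest_val.argmin()].copy()

print(initial_best, pbest_val.min())
```

Because PSO only ever evaluates the loss, it needs no backpropagation at all, which is the property that makes NI methods attractive alternatives to Adam and BP for the training problem studied here.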

