AI Communications
Latest Publications


TOTAL DOCUMENTS

760
(FIVE YEARS 54)

H-INDEX

22
(FIVE YEARS 2)

Published by IOS Press

ISSN: 0921-7126 (print), 1875-8452 (online)

2021 ◽  
pp. 1-21
Author(s):  
Chu-Min Li ◽  
Zhenxing Xu ◽  
Jordi Coll ◽  
Felip Manyà ◽  
Djamal Habet ◽  
...  

The Maximum Satisfiability Problem, or MaxSAT, offers a suitable problem-solving formalism for combinatorial optimization problems. Nevertheless, MaxSAT solvers implementing the Branch-and-Bound (BnB) scheme have not succeeded in solving challenging real-world optimization problems. It is widely believed that BnB MaxSAT solvers are only superior on random and some specific crafted instances, while SAT-based MaxSAT solvers perform particularly well on real-world instances. To overcome this shortcoming of BnB MaxSAT solvers, this paper proposes a new BnB MaxSAT solver called MaxCDCL. The main feature of MaxCDCL is the combination of clause learning of soft conflicts with an efficient bounding procedure. Moreover, the paper reports on an experimental investigation showing that MaxCDCL is competitive with the best-performing solvers of the 2020 MaxSAT Evaluation. MaxCDCL performs very well on real-world instances and solves a number of instances that other solvers cannot solve. Furthermore, when combined with the best-performing MaxSAT solvers, MaxCDCL solves the highest number of instances of a collection gathered from all the MaxSAT Evaluations held so far.
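As an illustration of the BnB scheme the abstract refers to, here is a minimal sketch: it minimizes the number of falsified clauses, using only the count of clauses already falsified by the partial assignment as a lower bound. This is far weaker than MaxCDCL's learnt soft-conflict bounding; the function name and toy instance are illustrative.

```python
def maxsat_bnb(clauses, n_vars):
    """Minimise the number of falsified clauses by branch and bound.

    `clauses` is a list of tuples of non-zero ints (DIMACS-style literals).
    The lower bound here is just the count of clauses already falsified
    by the partial assignment -- enough to show the BnB scheme only.
    """
    best = len(clauses)          # trivial upper bound: everything falsified

    def falsified(assign):
        out = 0
        for cl in clauses:
            vals = [assign.get(abs(l)) for l in cl]
            if any(v is not None and v == (l > 0) for l, v in zip(cl, vals)):
                continue         # clause already satisfied
            if all(v is not None for v in vals):
                out += 1         # fully assigned and unsatisfied
        return out

    def branch(var, assign):
        nonlocal best
        lb = falsified(assign)
        if lb >= best:
            return               # prune: cannot improve the incumbent
        if var > n_vars:
            best = lb            # complete assignment, new incumbent
            return
        for value in (True, False):
            assign[var] = value
            branch(var + 1, assign)
            del assign[var]

    branch(1, {})
    return best

# (x1 v x2), (!x1), (!x2), (x1 v !x2): at least one clause must fail.
print(maxsat_bnb([(1, 2), (-1,), (-2,), (1, -2)], 2))  # -> 1
```

Real BnB MaxSAT solvers replace the weak bound above with much stronger ones (e.g. detecting disjoint inconsistent subsets, or, in MaxCDCL, learning clauses from soft conflicts), which is what makes pruning effective on hard instances.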


2021 ◽  
pp. 1-21
Author(s):  
Andrei C. Apostol ◽  
Maarten C. Stol ◽  
Patrick Forré

We propose a novel pruning method that uses the oscillations around 0 (i.e., sign flips) that a weight undergoes during training to determine its saliency. Our method can perform pruning before the network has converged, requires little tuning effort thanks to good default values for its hyperparameters, and can directly target the level of sparsity desired by the user. Our experiments, performed on a variety of object classification architectures, show that it is competitive with existing methods and achieves state-of-the-art performance at sparsity levels of 99.6% and above for two of the three architectures tested. Moreover, we demonstrate that our method is compatible with quantization, another model compression technique. For reproducibility, we release our code at https://github.com/AndreiXYZ/flipout.
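A minimal sketch of pruning by sign-flip counts, on simulated weight trajectories. The rule that the most frequently flipping weights are pruned first is an assumption for illustration, not necessarily the paper's exact saliency criterion, and all names and constants are made up.

```python
import numpy as np

rng = np.random.default_rng(0)
n_weights, n_steps = 1000, 50

# Simulated weight trajectories (stand-ins for real training updates).
w = rng.normal(size=n_weights)
flips = np.zeros(n_weights, dtype=int)
for _ in range(n_steps):
    w_new = w + 0.5 * rng.normal(size=n_weights)  # mock gradient step
    flips += np.sign(w_new) != np.sign(w)         # count sign flips
    w = w_new

# Illustrative saliency rule: the weights that oscillated around 0 the
# most are assumed least important and are pruned first.
target_sparsity = 0.9
k = int(target_sparsity * n_weights)
prune_idx = np.argsort(-flips)[:k]    # the k most-flipping weights
mask = np.ones(n_weights, dtype=bool)
mask[prune_idx] = False
w_pruned = np.where(mask, w, 0.0)

print(f"sparsity: {(w_pruned == 0.0).mean():.2f}")
```

Because the flip counter is accumulated online during training, a criterion of this shape can prune before convergence, which is the property the abstract highlights.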


2021 ◽  
pp. 1-19
Author(s):  
Marcella Cornia ◽  
Lorenzo Baraldi ◽  
Rita Cucchiara

Image captioning is the task of translating an input image into a textual description. As such, it connects vision and language in a generative fashion, with applications ranging from multi-modal search engines to tools that help visually impaired people. Although recent years have witnessed an increase in the accuracy of such models, this has also brought increasing complexity and challenges in interpretability and visualization. In this work, we focus on Transformer-based image captioning models and provide qualitative and quantitative tools to increase interpretability and to assess the grounding and temporal alignment capabilities of such models. First, we employ attribution methods to visualize what the model concentrates on in the input image at each step of the generation. Further, we propose metrics to evaluate the temporal alignment between model predictions and attribution scores, which allows measuring the grounding capabilities of the model and spotting hallucination flaws. Experiments are conducted on three different Transformer-based architectures, employing both traditional and Vision Transformer-based visual features.
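One simple way to quantify grounding along these lines (not the paper's actual metric) is to measure how much attribution mass falls on the image regions annotated for each generated word; a word whose attribution lands elsewhere is a hallucination candidate. The data and names below are purely illustrative.

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy stand-ins: at each generation step the model yields an attribution
# map over n_regions image regions (random here; in practice obtained
# from an attribution method applied to the captioning model).
n_steps, n_regions = 5, 49
attributions = rng.random((n_steps, n_regions))
attributions /= attributions.sum(axis=1, keepdims=True)  # normalise per step

# Hypothetical ground-truth regions for the word emitted at each step.
gt_regions = [set(rng.choice(n_regions, 5, replace=False))
              for _ in range(n_steps)]

def grounding_score(attr, regions):
    """Fraction of attribution mass falling on the annotated regions."""
    return sum(attr[r] for r in regions)

scores = [grounding_score(attributions[t], gt_regions[t])
          for t in range(n_steps)]
print(np.round(np.mean(scores), 3))  # chance level here is 5/49, about 0.10
```

With random attributions the score hovers around chance; a well-grounded captioner should score well above it on visually grounded words.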


2021 ◽  
pp. 1-18
Author(s):  
Henri Prade ◽  
Gilles Richard

Analogical proportions are statements of the form “a is to b as c is to d”, denoted a:b::c:d, that may apply to any type of items a, b, c, d. Analogical proportions, as a building block for analogical reasoning, are thus a tool of interest in artificial intelligence. Viewed as a relation between pairs (a,b) and (c,d), these proportions are supposed to obey three postulates: reflexivity, symmetry, and central permutation (i.e., b and c can be exchanged). The logical modeling of analogical proportions expresses that a and b differ in the same way as c and d, when the four items are represented by vectors encoding Boolean properties. When items are real numbers, numerical proportions – arithmetic and geometric proportions – can be considered prototypical examples of analogical proportions. Taking inspiration from an old practice where numerical proportions were handled in a vectorial way and where sequences of numerical proportions of the form x1:x2:⋯:xn::y1:y2:⋯:yn were in use, we emphasize a vectorial treatment of Boolean analogical proportions and propose a Boolean logic counterpart to such sequences. This provides a linear algebra calculus of analogical inference and acknowledges the fact that analogical proportions should not be considered in isolation. Moreover, this also leads us to reconsider the postulates underlying analogical proportions (since central permutation makes no sense when n ⩾ 3), and then to formalize a weak form of analogical proportion which no longer obeys the central permutation postulate inherited from numerical proportions. These weak proportions may still be combined into multiple weak analogical proportions.
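The standard Boolean logical model can be checked directly: on 0/1 components, a:b::c:d holds exactly when a differs from b as c differs from d, i.e. the componentwise differences a−b and c−d coincide. A minimal sketch (function name illustrative):

```python
def analogical_proportion(a, b, c, d):
    """Check the Boolean analogical proportion a:b::c:d componentwise.

    Standard logical model: a differs from b exactly as c differs from d,
    which on 0/1 values amounts to a_i - b_i == c_i - d_i everywhere.
    """
    return all(ai - bi == ci - di for ai, bi, ci, di in zip(a, b, c, d))

# "a is to b as c is to d" on Boolean vectors: the change 0 -> 1 in the
# second component of (a, b) is mirrored in (c, d).
a, b = (1, 0, 1, 0), (1, 1, 1, 0)
c, d = (0, 0, 1, 1), (0, 1, 1, 1)
print(analogical_proportion(a, b, c, d))

# The three postulates from the abstract:
assert analogical_proportion(a, b, a, b)   # reflexivity
assert analogical_proportion(c, d, a, b)   # symmetry
assert analogical_proportion(a, c, b, d)   # central permutation
```

The weak proportions introduced in the paper drop the third assertion: for sequences with n ⩾ 3 terms, exchanging inner terms no longer makes sense, so only reflexivity and symmetry survive.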


2021 ◽  
pp. 1-18
Author(s):  
Angeliki Koutsimpela ◽  
Konstantinos D. Koutroumbas

Several well-known clustering algorithms have online counterparts that deal effectively with the big data issue, as well as with the case where the data become available in a streaming fashion. However, very few of them follow the stochastic gradient descent philosophy, despite the fact that the latter enjoys certain practical advantages (such as the possibility of (a) running faster than batch-processing counterparts and (b) escaping from local minima of the associated cost function), while, in addition, strong theoretical convergence results have been established for it. In this paper a novel stochastic gradient descent possibilistic clustering algorithm, called O-PCM2, is introduced. The algorithm is presented in detail, and it is rigorously proved that the gradient of the associated cost function tends to zero in the L2 sense, based on general convergence results established for the family of stochastic gradient descent algorithms. Furthermore, an additional discussion is provided on the nature of the points where the algorithm may converge. Finally, the performance of the proposed algorithm is tested against other related algorithms on both synthetic and real data sets.
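A rough sketch of a stochastic-gradient possibilistic clustering step (not the paper's exact O-PCM2 update; hyperparameters, initialization and data are illustrative): each streamed point pulls every prototype with a typicality degree of the possibilistic c-means form, and the prototype moves by one gradient step of the cost's prototype term.

```python
import numpy as np

rng = np.random.default_rng(1)

# Two well-separated Gaussian blobs, streamed one point at a time.
stream = np.concatenate([rng.normal(-3.0, 0.3, (300, 2)),
                         rng.normal(+3.0, 0.3, (300, 2))])
rng.shuffle(stream)

theta = np.array([[-2.0, -2.0], [2.0, 2.0]])  # initial prototypes
eta, m, lr = 2.0, 2.0, 0.05                   # assumed hyperparameters

for x in stream:
    d2 = ((x - theta) ** 2).sum(axis=1)
    # Possibilistic (typicality) degree of x for each prototype.
    u = 1.0 / (1.0 + (d2 / eta) ** (1.0 / (m - 1.0)))
    # Stochastic gradient step on the prototype part of the PCM cost.
    theta += lr * (u ** m)[:, None] * (x - theta)

print(np.round(np.sort(theta[:, 0]), 2))  # prototypes drift toward the blobs
```

Unlike fuzzy memberships, the typicality degrees are not forced to sum to one across prototypes, which is what makes the scheme possibilistic; each prototype reacts to the stream independently.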


2021 ◽  
pp. 1-16
Author(s):  
Pegah Alizadeh ◽  
Emiliano Traversi ◽  
Aomar Osmani

Markov decision processes (MDPs) are a powerful tool for planning tasks and sequential decision-making problems. In this work we deal with MDPs with imprecise rewards, often used in situations where the data are uncertain. In this context, we provide algorithms for finding the policy that minimizes the maximum regret. To the best of our knowledge, all the regret-based methods proposed in the literature focus on providing an optimal stochastic policy. We introduce for the first time a method to calculate an optimal deterministic policy using optimization approaches. Deterministic policies are easily interpretable for users because they provide a unique choice for each state. To better motivate the use of an exact procedure for finding a deterministic policy, we show some (theoretical and experimental) cases where the intuitive idea of “determinizing” the optimal stochastic policy leads to a policy far from the exact deterministic one.
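For intuition, minimax regret over a finite set of reward scenarios can be computed by brute force on a tiny MDP. The scenario set, transitions and enumeration below are illustrative simplifications: the paper treats imprecise rewards with optimization approaches, not enumeration, which only scales to toy instances.

```python
import itertools
import numpy as np

# Tiny MDP: 2 states, 2 actions, known transitions, imprecise rewards
# represented (as a simplification) by a finite set of reward scenarios.
n_s, n_a, gamma = 2, 2, 0.9
P = np.array([[[0.8, 0.2], [0.2, 0.8]],   # P[s, a, s']
              [[0.5, 0.5], [0.9, 0.1]]])
reward_scenarios = [
    np.array([[1.0, 0.0], [0.0, 1.0]]),   # R[s, a] under scenario 0
    np.array([[0.0, 1.0], [1.0, 0.0]]),   # scenario 1
]

def policy_value(pi, R):
    """Exact value of a deterministic policy pi (state -> action)."""
    P_pi = np.array([P[s, pi[s]] for s in range(n_s)])
    r_pi = np.array([R[s, pi[s]] for s in range(n_s)])
    return np.linalg.solve(np.eye(n_s) - gamma * P_pi, r_pi)

def optimal_value(R):
    """Value iteration for the optimal value under reward matrix R."""
    V = np.zeros(n_s)
    for _ in range(2000):
        V = (R + gamma * P @ V).max(axis=1)
    return V

def max_regret(pi):
    """Worst-case gap to the scenario-optimal value over all scenarios."""
    return max((optimal_value(R) - policy_value(pi, R)).max()
               for R in reward_scenarios)

# Enumerate all deterministic policies and keep the minimax-regret one.
best = min(itertools.product(range(n_a), repeat=n_s), key=max_regret)
print("minimax-regret deterministic policy:", best)
```

The enumeration makes the paper's point concrete: the minimax-regret deterministic policy is defined by a min-max over the whole deterministic policy space, so rounding ("determinizing") the optimal stochastic policy need not land on it.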


2021 ◽  
pp. 1-13
Author(s):  
Md Kamruzzaman Sarker ◽  
Lu Zhou ◽  
Aaron Eberhart ◽  
Pascal Hitzler

Neuro-Symbolic Artificial Intelligence – the combination of symbolic methods with methods that are based on artificial neural networks – has a long-standing history. In this article, we provide a structured overview of current trends, by means of categorizing recent publications from key conferences. The article is meant to serve as a convenient starting point for research on the general topic.


2021 ◽  
pp. 1-14
Author(s):  
Xueyu Liu ◽  
Ming Li ◽  
Yongfei Wu ◽  
Yilin Chen ◽  
Fang Hao ◽  
...  

In the diagnosis of chronic kidney disease, the glomerulus, as the blood filter, provides important information for an accurate diagnosis. Automatic localization of glomeruli is therefore the necessary groundwork for future auxiliary kidney disease diagnosis, such as glomerular classification and area measurement. In this paper, we propose an efficient glomerular object locator for kidney whole slide images (WSIs) based on a proposal-free network and a dynamic scale evaluation method. In the training phase, we construct an intensive proposal-free network which can efficiently learn the fine-grained features of the glomerulus. In the evaluation phase, a dynamic scale evaluation method helps the well-trained model find the most appropriate evaluation scale for each high-resolution WSI. We collected and digitalized 1204 renal biopsy microscope slides containing more than 41,000 annotated glomeruli, which, to the best of our knowledge, is the largest such dataset. We validate each component of the proposed locator via an ablation study. Experimental results confirm that the proposed locator outperforms recently proposed approaches and pathologists, in terms of F1 and run time, in localizing glomeruli from WSIs at a resolution of 0.25 μm/pixel, and thus achieves state-of-the-art performance. In particular, the proposed locator can be embedded into a renal intelligent auxiliary diagnosis system for clinical use by effectively localizing glomeruli in high-resolution WSIs.


2021 ◽  
pp. 1-26
Author(s):  
Gauthier Chassang ◽  
Mogens Thomsen ◽  
Pierre Rumeau ◽  
Florence Sèdes ◽  
Alejandra Delfin

We propose a comprehensive analysis of existing concepts of AI coming from different disciplines: psychology and engineering tackle the notion of intelligence, while ethics and law intend to regulate AI innovations. The aim is to identify shared notions or discrepancies to consider when qualifying AI systems. Relevant concepts are integrated into a matrix intended to help define more precisely when and how computing tools (programs or devices) may be qualified as AI, while highlighting critical features to serve a specific technical, ethical and legal assessment of challenges in AI development. Some adaptations of existing notions of AI characteristics are proposed. The matrix is a risk-based conceptual model designed to allow an empirical, flexible and scalable qualification of AI technologies from the perspective of benefit-risk assessment practices, technological monitoring and regulatory compliance: it offers a structured reflection tool for stakeholders in AI development who are engaged in responsible research and innovation.


2021 ◽  
pp. 1-16
Author(s):  
Kubilay Demir ◽  
Vedat Tümen

Detection and diagnosis of plant diseases at an early stage significantly minimizes yield losses. Image-based automated plant disease identification (APDI) tools have started to be widely used in pest management strategies. Current APDI systems rely on images captured in laboratory conditions, which hinders their use by smallholder farmers. In this study, we investigate whether smallholder farmers can exploit APDI systems using basic and cheap unmanned aerial vehicles (UAVs) with standard cameras. To create tomato images like those taken by UAVs, we build a new dataset from a public dataset using image processing tools. The dataset includes tomato leaf photographs separated into 10 classes (diseases or healthy). To detect the diseases, we develop a new hybrid detection model, called SpikingTomaNet, which merges a novel deep convolutional neural network model with a spiking neural network (SNN) model. This hybrid model provides both better accuracy for plant disease identification and more energy efficiency for battery-constrained UAVs, thanks to the SNN's event-driven architecture. In this hybrid model, the features extracted from the CNN model are used as the input layer for the SNN. To assess our approach's performance, we first compare the proposed CNN model inside the developed hybrid model with the well-known AlexNet, VggNet-5 and LeNet models. Second, we compare the developed hybrid model with three hybrid models composed of combinations of these well-known models with the SNN model. To train and test the proposed neural network, 32,022 images in the dataset are exploited. The results show that the SNN method significantly increases the success rate, especially on the augmented dataset. While the proposed hybrid model achieves 97.78% accuracy on the original images, its accuracy on the created datasets ranges between 59.97% and 82.98%. In addition, the proposed hybrid model provides better overall accuracy in classifying the diseases than the well-known models and their combinations with the SNN.
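The CNN-features-into-SNN coupling described above can be illustrated with a generic rate-coded leaky integrate-and-fire (LIF) layer. This is a common coupling pattern, not SpikingTomaNet's actual architecture; all sizes, constants and the random features are made up.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for CNN features in [0, 1] (the hybrid model feeds CNN
# features to the SNN as its input layer).
features = rng.uniform(0.0, 1.0, size=16)

# Rate-code features as Bernoulli spike trains, then run one layer of
# leaky integrate-and-fire neurons over T timesteps.
T, n_out = 100, 10
W = rng.normal(0.0, 0.3, size=(n_out, features.size))  # random weights
v = np.zeros(n_out)                  # membrane potentials
tau, v_th = 20.0, 1.0                # leak time constant, firing threshold
spike_counts = np.zeros(n_out)

for _ in range(T):
    # A feature of value p emits a spike with probability p per step.
    spikes_in = (rng.random(features.size) < features).astype(float)
    v += (-v / tau) + W @ spikes_in  # leak plus weighted input current
    fired = v >= v_th
    spike_counts += fired
    v[fired] = 0.0                   # reset membrane after a spike

pred = int(np.argmax(spike_counts))  # class = most active output neuron
print("predicted class:", pred)
```

Because neurons only communicate via sparse spike events, computation is triggered per spike rather than per dense activation, which is the source of the energy savings the abstract attributes to the SNN's event-driven architecture.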

