computational burden
Recently Published Documents

TOTAL DOCUMENTS: 297 (FIVE YEARS: 124)
H-INDEX: 19 (FIVE YEARS: 4)

2022 ◽  
Vol 31 (2) ◽  
pp. 1-32
Author(s):  
Luca Ardito ◽  
Andrea Bottino ◽  
Riccardo Coppola ◽  
Fabrizio Lamberti ◽  
Francesco Manigrasso ◽  
...  

In automated Visual GUI Testing (VGT) for Android devices, the available tools often suffer from low robustness to mobile fragmentation, leading to incorrect results when the same tests are run on different devices. To mitigate these issues, we evaluate two feature matching-based approaches for widget detection in VGT scripts, which use, respectively, the complete full-screen snapshot of the application (Fullscreen) and the cropped images of its widgets (Cropped) as visual locators to match on emulated devices. Our analysis includes validating the portability of different feature-based visual locators over various apps and devices and evaluating their robustness in terms of cross-device portability and correctly executed interactions. We assessed our results through a comparison with two state-of-the-art tools, EyeAutomate and Sikuli. Despite a limited increase in the computational burden, our Fullscreen approach outperformed the state-of-the-art tools in terms of correctly identified locators across a wide range of devices and led to a 30% increase in passing tests. Our work shows that the dependability of VGT tools can be improved by bridging the testing and computer vision communities. This connection enables the design of algorithms targeted to domain-specific needs, and thus inherently more usable and robust.
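
As a rough illustration of how such feature matching-based widget detection can work, the sketch below locates a cropped widget image inside a device screenshot with OpenCV's ORB features, a brute-force matcher and a RANSAC homography. The library calls are standard OpenCV, but the thresholds and the helper name are illustrative assumptions and this is not the authors' tool.

```python
# Illustrative sketch (not the authors' tool): locate a widget in a screenshot
# via ORB feature matching, roughly in the spirit of the "Cropped" locator.
import cv2
import numpy as np

def locate_widget(screenshot_path, widget_path, min_matches=10):
    screen = cv2.imread(screenshot_path, cv2.IMREAD_GRAYSCALE)
    widget = cv2.imread(widget_path, cv2.IMREAD_GRAYSCALE)

    orb = cv2.ORB_create(nfeatures=2000)
    kp_w, des_w = orb.detectAndCompute(widget, None)
    kp_s, des_s = orb.detectAndCompute(screen, None)
    if des_w is None or des_s is None:
        return None

    # Hamming distance is appropriate for ORB's binary descriptors.
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(des_w, des_s), key=lambda m: m.distance)
    if len(matches) < min_matches:
        return None

    # Estimate where the widget lies on the screen and return its centre,
    # i.e., the point a VGT script would tap.
    src = np.float32([kp_w[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    dst = np.float32([kp_s[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
    H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
    if H is None:
        return None
    h, w = widget.shape
    corners = cv2.perspectiveTransform(
        np.float32([[0, 0], [w, 0], [w, h], [0, h]]).reshape(-1, 1, 2), H)
    return tuple(corners.reshape(-1, 2).mean(axis=0))
```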


2022 ◽  
Vol 8 ◽  
Author(s):  
Hongyu Wang ◽  
Hong Gu ◽  
Pan Qin ◽  
Jia Wang

Deep learning has achieved considerable success in medical image segmentation. However, applying deep learning in clinical environments often involves two problems: (1) scarcity of annotated data, as data annotation is time-consuming, and (2) varying attributes across datasets due to domain shift. To address these problems, we propose an improved generative adversarial network (GAN) segmentation model, called U-shaped GAN, for chest radiograph datasets with limited annotations. The semi-supervised learning approach and the unsupervised domain adaptation (UDA) approach are modeled in a unified framework for effective segmentation. We improve the GAN by replacing the traditional discriminator with a U-shaped net, which predicts a label for each pixel. The proposed U-shaped net is designed for high-resolution radiographs (1,024 × 1,024) to achieve effective segmentation while keeping the computational burden in check. Pointwise convolution is applied in U-shaped GAN for dimensionality reduction, decreasing the number of feature maps while retaining their salient features. Moreover, we design the U-shaped net with a pretrained ResNet-50 as the encoder to avoid the computational burden of training the encoder from scratch. A semi-supervised learning approach is proposed that learns from limited annotated data while exploiting additional unannotated data through a pixel-level loss. U-shaped GAN is extended to UDA by treating the source and target domain data as the annotated and unannotated data, respectively, in the semi-supervised learning approach. Compared to previous models that deal with the aforementioned problems separately, U-shaped GAN accommodates the varying data distributions of multiple medical centers, with efficient training and optimized performance. U-shaped GAN can thus be generalized to chest radiograph segmentation for clinical deployment. We evaluate U-shaped GAN on two chest radiograph datasets, where it significantly outperforms state-of-the-art models.
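
The following PyTorch sketch illustrates two of the ingredients named above, a pretrained ResNet-50 encoder and pointwise (1 × 1) convolutions for channel reduction, wired into a per-pixel classifier. The layer sizes and overall layout are assumptions for illustration, not the published U-shaped GAN architecture.

```python
# Rough sketch (assumed layer sizes, not the published architecture):
# a per-pixel discriminator built from a pretrained ResNet-50 encoder,
# pointwise (1x1) convolutions for channel reduction, and upsampling
# back to the input resolution.
import torch
import torch.nn as nn
import torchvision

class PixelwiseDiscriminator(nn.Module):
    def __init__(self, num_classes=2):
        super().__init__()
        backbone = torchvision.models.resnet50(weights="IMAGENET1K_V1")
        # Keep everything up to the last residual stage (output stride 32).
        self.encoder = nn.Sequential(*list(backbone.children())[:-2])
        # Pointwise convolution: reduces 2048 feature maps without
        # touching spatial structure, keeping the model light.
        self.reduce = nn.Sequential(
            nn.Conv2d(2048, 256, kernel_size=1),
            nn.BatchNorm2d(256),
            nn.ReLU(inplace=True),
        )
        self.classify = nn.Conv2d(256, num_classes, kernel_size=1)

    def forward(self, x):
        h, w = x.shape[-2:]
        feats = self.encoder(x)            # (N, 2048, h/32, w/32)
        feats = self.reduce(feats)         # (N, 256,  h/32, w/32)
        logits = self.classify(feats)      # per-pixel class scores
        # Upsample so every input pixel receives a label.
        return nn.functional.interpolate(
            logits, size=(h, w), mode="bilinear", align_corners=False)
```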


2021 ◽  
Vol 13 (2) ◽  
pp. 56-61
Author(s):  
Iwan Setiawan ◽  
Akbari Indra Basuki ◽  
Didi Rosiyadi

High-performance computing (HPC) is often required for image processing, especially for images with a huge number of picture elements (pixels). To avoid dependence on HPC equipment, which is very expensive to provide, a software-based approach is taken in this work. Both the hardware and software routes pursue the same goal: making the computation time as short as possible. The discrete cosine transform (DCT) and singular value decomposition (SVD) are conventionally applied to the original image treated as a single matrix, which results in a heavy computational burden for images with many pixels. To overcome this problem, the original image is processed as second-order block matrices, which yields a hybrid DCT-SVD formula. Hybrid here means that the only parameter appearing in the formula is the intensity of the original pixels, because the DCT and SVD formulas have been merged during the derivation. Results show that, with Lena as the test image, computing the singular values with the hybrid formula is almost two seconds faster than the conventional approach. Rather than investing in expensive hardware, the computational problem caused by image size can thus be mitigated simply by using the proposed formula.
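
The merged DCT-SVD formula itself is not given in the abstract, so the sketch below only illustrates the block-wise strategy it rests on: computing DCT coefficients and singular values per small tile instead of once for the whole image. The choice of 2 × 2 tiles (reading "second-order" as 2 × 2) and the NumPy/SciPy routines are assumptions, not the authors' derivation.

```python
# Illustration of the block-wise idea only; the paper's merged DCT-SVD
# formula is not reproduced here.
import numpy as np
from scipy.fft import dctn

def blockwise_singular_values(image, block=2):
    """Singular values of DCT coefficients computed per block x block tile."""
    h, w = image.shape
    h, w = h - h % block, w - w % block      # trim to a multiple of the block size
    tiles = image[:h, :w].reshape(h // block, block, w // block, block)
    tiles = tiles.transpose(0, 2, 1, 3)      # (rows, cols, block, block)
    coeffs = dctn(tiles, axes=(-2, -1), norm="ortho")
    # One SVD per small tile is far cheaper than one SVD of the full image.
    return np.linalg.svd(coeffs, compute_uv=False)

def fullimage_singular_values(image):
    """Conventional route: DCT of the whole image, then one large SVD."""
    return np.linalg.svd(dctn(image, norm="ortho"), compute_uv=False)
```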


2021 ◽  
Author(s):  
Luiz Carlos Felix Ribeiro ◽  
Gustavo Henrique de Rosa ◽  
Douglas Rodrigues ◽  
João Paulo Papa

Convolutional Neural Networks have been widely employed in a diverse range of computer vision applications, including image classification, object recognition, and object segmentation. Nevertheless, one weakness of such models concerns the setting of their hyperparameters, which is highly specific to each particular problem. A common approach is to employ meta-heuristic optimization algorithms to find suitable sets of hyperparameters, at the expense of an increased computational burden that makes them unfeasible in real-time scenarios. In this paper, we address this problem by creating Convolutional Neural Network ensembles through Single-Iteration Optimization, a fast optimization composed of only a single iteration, which in practice amounts to little more than a random search. Essentially, the idea is to provide the same capability offered by long-term optimizations, but without their computational load. Results on four well-known literature datasets reveal that creating one-iteration optimized ensembles provides promising results while diminishing the time needed to achieve them.
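
A minimal sketch of the general idea follows: draw one population of hyperparameter configurations (effectively a single optimizer iteration), train every candidate, and average their predictions instead of iterating the search. The search space and the helper train_fn are hypothetical placeholders, not the paper's setup.

```python
# Sketch of the general idea (assumed search space and helper functions):
# sample a population of hyperparameter sets once, train each candidate,
# and ensemble their predictions instead of iterating the optimizer.
import random
import numpy as np

SEARCH_SPACE = {
    "learning_rate": (1e-4, 1e-1),
    "dropout": (0.0, 0.5),
    "filters": [16, 32, 64],
}

def sample_hyperparameters(rng):
    lr_lo, lr_hi = SEARCH_SPACE["learning_rate"]
    do_lo, do_hi = SEARCH_SPACE["dropout"]
    return {
        "learning_rate": 10 ** rng.uniform(np.log10(lr_lo), np.log10(lr_hi)),
        "dropout": rng.uniform(do_lo, do_hi),
        "filters": rng.choice(SEARCH_SPACE["filters"]),
    }

def build_ensemble(train_fn, population_size=5, seed=0):
    """train_fn(hparams) -> trained model; assumed to be provided by the user."""
    rng = random.Random(seed)
    # A single "iteration": evaluate one population, keep every member.
    return [train_fn(sample_hyperparameters(rng)) for _ in range(population_size)]

def ensemble_predict(models, x):
    # Average the class probabilities of all ensemble members.
    return np.mean([m.predict(x) for m in models], axis=0)
```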


Author(s):  
Pieter C. Schoonees ◽  
Patrick J. F. Groenen ◽  
Michel van de Velden

A least-squares bilinear clustering framework for modelling three-way data, where each observation consists of an ordinary two-way matrix, is introduced. The method combines bilinear decompositions of the two-way matrices with clustering over the observations. Different clusterings are defined for each part of the bilinear decomposition, which decomposes the matrix-valued observations into overall means, row margins, column margins and row–column interactions. Up to four different classifications are therefore defined jointly, one for each type of effect. The computational burden is greatly reduced by the orthogonality of the bilinear model, so that the joint clustering problem reduces to separate problems which can be handled independently. Three of these sub-problems are special cases of k-means clustering; a dedicated algorithm is formulated for the row–column interactions, which are displayed in clusterwise biplots. The method is illustrated via an empirical example, and interpretation of the interaction biplots is discussed. Supplemental materials, including the dedicated R package, are available online.
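
A minimal sketch of the decomposition and the resulting separable clustering problems is given below, assuming all observations are matrices of the same size. The k-means calls stand in for three of the sub-problems; the dedicated interaction algorithm and the clusterwise biplots are not reproduced.

```python
# Minimal sketch of the decomposition described in the abstract; the
# clusterwise biplots and the dedicated R package are not reproduced here.
import numpy as np
from sklearn.cluster import KMeans

def decompose(X):
    """Split one two-way matrix into grand mean, row margins, column margins, interactions."""
    grand = X.mean()
    rows = X.mean(axis=1) - grand
    cols = X.mean(axis=0) - grand
    interaction = X - grand - rows[:, None] - cols[None, :]
    return grand, rows, cols, interaction

def cluster_components(matrices, n_clusters=(2, 2, 2, 2), seed=0):
    """One independent k-means per effect, clustering over the observations."""
    parts = [decompose(X) for X in matrices]
    means = np.array([[p[0]] for p in parts])          # (n_obs, 1)
    rows = np.array([p[1] for p in parts])              # (n_obs, n_rows)
    cols = np.array([p[2] for p in parts])              # (n_obs, n_cols)
    inter = np.array([p[3].ravel() for p in parts])     # (n_obs, n_rows * n_cols)
    labels = []
    for data, k in zip((means, rows, cols, inter), n_clusters):
        labels.append(KMeans(n_clusters=k, n_init=10, random_state=seed).fit_predict(data))
    return labels  # four classifications, one per type of effect
```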


2021 ◽  
Author(s):  
Nguyen Hoai Nam

This paper provides a solution for a linear command governor (CG) that employs an invariant and constraint-admissible ellipsoid. The motivation is to replace the polyhedral set used in almost all CG schemes with an ellipsoidal one, which is much easier to construct. However, the price for this offline computational efficiency is that the resulting feasible set can be relatively small and the online computational burden is heavier than that of polyhedral-set-based CGs. The proposed solution overcomes these two weaknesses and offers a very attractive alternative to polyhedral-set-based CGs. Two numerical examples with comparisons to earlier solutions from the literature illustrate the effectiveness of the proposed algorithm.
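
As a loose numerical illustration of how an ellipsoidal, constraint-admissible invariant set can be used online, the sketch below moves the applied reference as far as possible toward the desired one while keeping the augmented state inside {z : zᵀPz ≤ 1}. The bisection scheme, the admissibility assumption on the previous reference, and the matrix P are illustrative assumptions, not the algorithm proposed in the paper.

```python
# Illustrative sketch only (assumed set and update rule); the paper's specific
# algorithm for mitigating the small-feasible-set and online-cost issues is
# not reproduced here.
import numpy as np

def inside_ellipsoid(z, P):
    return float(z @ P @ z) <= 1.0

def command_governor_step(x, v_prev, r, P, tol=1e-6):
    """Pick the admissible reference closest to r along the segment [v_prev, r].

    The augmented vector z = [x; v] must stay in the invariant,
    constraint-admissible ellipsoid {z : z' P z <= 1}. Assumes the previously
    applied reference v_prev is admissible for the current state x.
    """
    if inside_ellipsoid(np.concatenate([x, np.atleast_1d(r)]), P):
        return r                     # the desired reference is already admissible
    lo, hi = 0.0, 1.0                # bisection on the step length toward r
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        v = v_prev + mid * (r - v_prev)
        if inside_ellipsoid(np.concatenate([x, np.atleast_1d(v)]), P):
            lo = mid
        else:
            hi = mid
    return v_prev + lo * (r - v_prev)
```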


2021 ◽  
Vol 1 (1) ◽  
Author(s):  
Ambareesh Ravi ◽  
Fakhri Karray

Convolutional Recurrent architectures are currently preferred over 3D convolutional networks, which carry a huge computational burden, for spatio-temporal learning tasks in videos, and it is imperative to understand how different architectural configurations behave. However, most current work on visual learning, and on video anomaly detection in particular, predominantly employs ConvLSTM networks and pays little attention to other possible Convolutional Recurrent configurations for temporal learning, which warrants a study of the different variants so that informed, optimal design choices can be made according to the nature of the application at hand. We explore a variety of Convolutional Recurrent architectures and the influence of hyper-parameters on their performance for the task of anomaly detection. Through this work, we also quantify the efficiency of the architectures based on the trade-off between their performance and computational complexity. With comprehensive quantitative and visual evidence, we establish that ConvGRU-based configurations are the most effective and, in contrast to what the literature suggests, outperform the popular ConvLSTM configurations on video anomaly detection tasks.
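
For reference, a minimal ConvGRU cell with the standard gate layout is sketched below; the kernel size and channel counts are assumptions, not the exact configurations benchmarked in the paper. Compared with a ConvLSTM cell, it keeps a single hidden state and needs fewer gate convolutions, which is where its lower computational cost comes from.

```python
# Minimal ConvGRU cell (standard gate layout; not the exact configurations
# evaluated in the paper).
import torch
import torch.nn as nn

class ConvGRUCell(nn.Module):
    def __init__(self, in_channels, hidden_channels, kernel_size=3):
        super().__init__()
        padding = kernel_size // 2
        # Update and reset gates computed jointly from [input, hidden state].
        self.gates = nn.Conv2d(in_channels + hidden_channels,
                               2 * hidden_channels, kernel_size, padding=padding)
        # Candidate hidden state.
        self.candidate = nn.Conv2d(in_channels + hidden_channels,
                                   hidden_channels, kernel_size, padding=padding)

    def forward(self, x, h):
        z, r = torch.sigmoid(self.gates(torch.cat([x, h], dim=1))).chunk(2, dim=1)
        h_tilde = torch.tanh(self.candidate(torch.cat([x, r * h], dim=1)))
        return (1 - z) * h + z * h_tilde
```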


2021 ◽  
Vol 13 (18) ◽  
pp. 3592
Author(s):  
Yifei Zhao ◽  
Fengqin Yan

Hyperspectral image (HSI) classification is one of the major problems in the field of remote sensing. In particular, graph-based HSI classification is a promising topic that has received increasing attention in recent years. However, graphs with pixels as nodes become very large, increasing the computational burden, and satisfactory classification results are often not obtained unless spatial information is considered when constructing the graph. To address these issues, this study proposes an efficient and effective semi-supervised spectral-spatial HSI classification method based on a sparse superpixel graph (SSG). In the constructed sparse superpixel graph, each vertex represents a superpixel instead of a pixel, which greatly reduces the size of the graph. Meanwhile, both spectral information and spatial structure are considered through the use of superpixels, local spatial connections and global spectral connections. To verify the effectiveness of the proposed method, three real hyperspectral images, Indian Pines, Pavia University and Salinas, are chosen to test its performance. Experimental results show that the proposed method performs well on the three benchmarks. Compared with several competitive superpixel-based HSI classification approaches, it offers both high classification accuracy (>97.85%) and rapid execution (<10 s), which clearly favors its application in practice.
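
The sketch below builds a superpixel-level graph for an HSI cube in the same spirit: SLIC superpixels, mean spectra per superpixel, edges between spatially adjacent superpixels plus a few spectral nearest neighbours. The segmentation parameters and library calls (scikit-image ≥ 0.19, scikit-learn) are assumptions, not the exact SSG construction of the paper.

```python
# Sketch of a superpixel-level graph for an HSI cube (illustrative choices,
# not the paper's exact SSG construction).
import numpy as np
from skimage.segmentation import slic
from sklearn.neighbors import kneighbors_graph

def superpixel_graph(hsi, n_segments=500, k_spectral=5):
    """hsi: (H, W, B) cube. Returns superpixel labels, mean spectra, adjacency."""
    # SLIC segmentation, treating the spectral bands as channels.
    labels = slic(hsi, n_segments=n_segments, compactness=0.1,
                  channel_axis=-1, start_label=0)
    n = labels.max() + 1
    spectra = np.array([hsi[labels == i].mean(axis=0) for i in range(n)])

    # Local spatial connections: superpixels that touch in the image plane.
    adjacency = np.zeros((n, n), dtype=bool)
    horiz = np.stack([labels[:, :-1].ravel(), labels[:, 1:].ravel()])
    vert = np.stack([labels[:-1, :].ravel(), labels[1:, :].ravel()])
    for a, b in np.concatenate([horiz, vert], axis=1).T:
        if a != b:
            adjacency[a, b] = adjacency[b, a] = True

    # Global spectral connections: k nearest neighbours in spectral space.
    knn = kneighbors_graph(spectra, n_neighbors=k_spectral).toarray().astype(bool)
    return labels, spectra, adjacency | knn | knn.T
```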

