Computational Resources: Recently Published Documents





Jayati Mukherjee, Swapan K. Parui, Utpal Roy

Segmentation of text lines and words in an unconstrained handwritten or machine-printed degraded document is a challenging document analysis problem due to the heterogeneity of document structure. Documents often exhibit uneven skew between lines as well as broken words. The contribution of this article lies in segmenting a document page image into lines and words. We propose an unsupervised, robust, and simple statistical method to segment a document image that is either handwritten or machine-printed (degraded or otherwise). In our method, segmentation is treated as a two-class classification problem, where classification is performed on the distribution of gap sizes (between lines and between words) in a binary page image. The method is simple and easy to implement: other than binarization of the input image, no pre-processing is necessary, and no high-end computational resources are needed. It is unsupervised in the sense that no annotated document page images are required, so the issue of a training database does not arise; given a document page image, the parameters needed for segmenting text lines and words are learned in an unsupervised manner. We have applied the method to several popular, publicly available handwritten and machine-printed datasets (ISIDDI, IAM-Hist, IAM, PBOK) covering different Indian and other languages and containing different fonts. Several experimental results demonstrate the effectiveness and robustness of our method. On the ICDAR 2013 handwriting segmentation contest dataset, our method outperforms the winning method. In addition, we suggest a quantitative measure to compute the level of degradation of a document page image.
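The core idea, treating gap sizes as an unsupervised two-class problem, can be sketched as follows. This is a hypothetical illustration of the approach, not the authors' code; the two-means clustering and all names are assumptions.

```python
# Sketch: classify gaps in a binary page image into two classes
# (intra-word vs inter-word) without any training data.

def two_means_threshold(gaps):
    """Split 1-D gap sizes into two clusters with a simple two-means
    iteration; return the threshold separating the cluster centroids."""
    c1, c2 = float(min(gaps)), float(max(gaps))   # initial centroids
    for _ in range(100):
        g1 = [g for g in gaps if abs(g - c1) <= abs(g - c2)]
        g2 = [g for g in gaps if abs(g - c1) > abs(g - c2)]
        n1 = sum(g1) / len(g1) if g1 else c1
        n2 = sum(g2) / len(g2) if g2 else c2
        if n1 == c1 and n2 == c2:                 # converged
            break
        c1, c2 = n1, n2
    return (c1 + c2) / 2.0

def split_words(gap_positions, gaps):
    """Gaps larger than the learned threshold are word boundaries."""
    t = two_means_threshold(gaps)
    return [p for p, g in zip(gap_positions, gaps) if g > t]

# Small gaps separate characters, large gaps separate words.
gaps = [2, 3, 2, 14, 3, 2, 16, 2]
print(split_words(range(len(gaps)), gaps))   # -> [3, 6]
```

The same thresholding applied to vertical gaps between connected-component rows would give the line segmentation.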

2023, Vol. 55 (1), pp. 1-46
Rodolfo Meneguette, Robson De Grande, Jo Ueyama, Geraldo P. Rocha Filho, Edmundo Madeira

Vehicular Edge Computing (VEC), based on the motivation and fundamentals of Edge Computing, is a promising technology supporting Intelligent Transport Systems services, smart city applications, and urban computing. VEC can provide and manage computational resources closer to vehicles and end-users, providing access to services at lower latency and meeting the minimum execution requirements of each service type. This survey describes VEC's concepts and technologies and presents an overview of existing VEC architectures, discussing and exemplifying them through layered designs. We also describe the underlying vehicular communication that supports resource allocation mechanisms. To give an overview of the risks, breaches, and countermeasures in VEC, we review related security approaches and methods. Finally, we conclude with a study of VEC's main challenges. Unlike other surveys, which focus on content caching and data offloading, this work proposes a taxonomy based on architectures in which VEC serves as the central element. VEC supports such architectures in capturing and disseminating data and resources to offer smart-city services through their aggregation and secure allocation.

2022, Vol. 2022, pp. 1-10
Ruizhong Du, Jingze Wang, Shuang Li

Internet of Things (IoT) device identification is a key step in the management of IoT devices: the devices connected to a network must be controlled by its manager. For this purpose, many schemes have been proposed to identify IoT devices, especially schemes that run on the gateway; however, almost none of them pays close attention to cost. Considering the gateway's limited storage and computational resources, we propose a new lightweight IoT device identification scheme. First, deep/dynamic flow inspection (DFI) technology is used to efficiently extract flow-related statistical features based on in-depth studies. Then, combining symmetric uncertainty with the correlation coefficient, we propose a novel filter feature selection method based on NSGA-III to select effective features for IoT device identification. We evaluate the proposed method on a real smart-home IoT dataset with three different ML algorithms. The experimental results show that the method is lightweight and the feature selection algorithm is effective: using only six features, it achieves 99.5% accuracy with a 3-minute time interval.
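The symmetric-uncertainty score that drives the filter selection above has a standard definition, SU(X, Y) = 2·I(X; Y) / (H(X) + H(Y)). A minimal sketch of that score (the NSGA-III search itself is omitted, and all names here are illustrative assumptions, not the paper's code):

```python
import math
from collections import Counter

def entropy(xs):
    """Shannon entropy (bits) of a discrete sample."""
    n = len(xs)
    return -sum(c / n * math.log2(c / n) for c in Counter(xs).values())

def symmetric_uncertainty(x, y):
    """SU(X, Y) = 2 * I(X; Y) / (H(X) + H(Y)), in [0, 1]."""
    hx, hy = entropy(x), entropy(y)
    hxy = entropy(list(zip(x, y)))   # joint entropy H(X, Y)
    mi = hx + hy - hxy               # mutual information I(X; Y)
    return 2.0 * mi / (hx + hy) if hx + hy else 0.0

# A feature identical to the device label is maximally informative.
label  = [0, 0, 1, 1, 0, 1]
feat_a = [0, 0, 1, 1, 0, 1]
print(symmetric_uncertainty(feat_a, label))   # -> 1.0
```

A multi-objective search such as NSGA-III would then trade off aggregate SU relevance against inter-feature redundancy and feature-set size.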

2022, Vol. 12
Barnaby E. Walker, Allan Tucker, Nicky Nicolson

The mobilization of large-scale datasets of specimen images and metadata through herbarium digitization provides a rich environment for the application and development of machine learning techniques. However, limited access to computational resources and uneven progress in digitization, especially for small herbaria, still present barriers to the wide adoption of these new technologies. Using deep learning to extract representations of herbarium specimens that are useful for a wide variety of applications, so-called "representation learning," could help remove these barriers. Despite its recent popularity for camera trap and natural world images, representation learning is not yet as popular for herbarium specimen images. We investigated the potential of representation learning with specimen images by building three neural networks using a publicly available dataset of over 2 million specimen images spanning multiple continents and institutions. We compared the extracted representations and tested their performance in application tasks relevant to research carried out with herbarium specimens. We found that a triplet network, a type of neural network that learns distances between images, produced the representations that transferred best across all applications investigated. Our results demonstrate that it is possible to learn representations of specimen images that are useful in different applications, and we identify further steps that we believe are necessary for representation learning to harness the rich information held in the world's herbaria.
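A triplet network learns distances by pulling an anchor embedding toward a positive example and pushing it away from a negative one. A minimal sketch of the underlying triplet loss, in its generic form (not the paper's training code; names and the margin value are assumptions):

```python
import math

def euclidean(u, v):
    """Euclidean distance between two embedding vectors."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(u, v)))

def triplet_loss(anchor, positive, negative, margin=1.0):
    """L = max(d(anchor, positive) - d(anchor, negative) + margin, 0).
    Zero once the negative is at least `margin` farther than the positive."""
    return max(euclidean(anchor, positive)
               - euclidean(anchor, negative) + margin, 0.0)

# Anchor close to the positive, far from the negative -> no penalty.
print(triplet_loss([0.0, 0.0], [0.1, 0.0], [3.0, 0.0]))   # -> 0.0
```

Training a network to minimise this loss over many (anchor, positive, negative) image triplets yields embeddings whose distances reflect specimen similarity, which is what makes them transfer across downstream tasks.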

2022
Eliseu Morais de Oliveira, Rafael F. Reale, Joberto S. B. Martins

The extensive adoption of computer networks, especially the Internet, with services that require extensive data flows, has generated a growing demand for computational resources, mainly bandwidth. Bandwidth Allocation Models (BAMs) have proven to be a viable alternative for managing networks in which the bandwidth resource is shared to meet high demand. However, managing these networks has become an increasingly complex task, and solutions that allow nearly autonomous configuration with less intervention by the network manager are in high demand. Case-Based Reasoning (CBR) techniques have proven satisfactory for decision making and network management. This work presents a proposal for network reconfiguration based on the CBR cycle and an intelligence and cognitive module for MPLS (Multi-Protocol Label Switching) networks. The results show that CBR is a feasible solution for auto-configuration with autonomic characteristics in MPLS networks using BAMs, and the proposal improved overall network performance.
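The CBR cycle (retrieve, reuse, revise, retain) can be sketched for BAM selection as follows. This is a hedged illustration under assumed case fields, not the paper's module; MAM, RDM, and ATCS are existing BAM names used here only as example labels.

```python
# Toy case base: each case maps an observed network state
# (link-utilisation vector) to the BAM that worked well for it.
case_base = [
    {"state": (0.2, 0.1), "bam": "MAM"},    # low load
    {"state": (0.6, 0.5), "bam": "RDM"},    # medium load
    {"state": (0.9, 0.8), "bam": "ATCS"},   # high load
]

def retrieve(state):
    """Retrieve step: nearest-neighbour match on the state vector."""
    return min(case_base,
               key=lambda c: sum((a - b) ** 2
                                 for a, b in zip(c["state"], state)))

def reuse_and_retain(state):
    """Reuse the retrieved case's BAM and retain the new episode."""
    case = retrieve(state)
    case_base.append({"state": state, "bam": case["bam"]})
    return case["bam"]

print(reuse_and_retain((0.85, 0.9)))   # -> ATCS
```

In a real deployment, the revise step would adjust the retained case if the reused configuration underperforms, which is what gives the cycle its autonomic character.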

Damiano Perri, Marco Simonetti, Osvaldo Gervasi

This study analyses some of the leading technologies for building and configuring IT infrastructures that provide services to users. For modern applications, guaranteeing service continuity, even under very high computational load or network problems, is essential. Our configuration has among its main objectives being highly available (HA) and horizontally scalable, that is, able to increase the computational resources delivered when needed and reduce them when they are no longer necessary. Various architectural possibilities are analysed, and the central schemes used to tackle problems of this type are also described in terms of disaster recovery. The benefits offered by virtualisation technologies are highlighted and combined with modern techniques for managing Docker containers, which are used to build the back-end of a sample infrastructure for a use case we have developed. In addition, an in-depth analysis is reported on the central autoscaling policies that help manage high loads of requests to the services provided by the infrastructure. The results show an average response time of 21.7 milliseconds with a standard deviation of 76.3 milliseconds, indicating excellent responsiveness; some peaks are associated with high-stress events for the infrastructure, but even in these cases the response time does not exceed 2 seconds. The results of the use case, studied over nine months, are presented and discussed. During the study period, we improved the back-end configuration and defined the main metrics for deploying the web application efficiently.
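A typical autoscaling policy of the kind analysed above is a proportional rule: scale the replica count to keep observed load near a target. A minimal sketch, with the formula used by common horizontal autoscalers; the target value and bounds are assumptions for the example, not the study's settings.

```python
import math

def desired_replicas(current, avg_cpu, target_cpu=0.6, min_r=2, max_r=10):
    """Proportional autoscaling rule:
    replicas = ceil(current * observed / target), clamped to [min_r, max_r].
    min_r >= 2 keeps the service highly available during scale-in."""
    wanted = math.ceil(current * avg_cpu / target_cpu)
    return max(min_r, min(max_r, wanted))

# Load above target -> scale out; load far below target -> scale in
# but never below the HA floor.
print(desired_replicas(3, 0.9))   # -> 5
print(desired_replicas(4, 0.1))   # -> 2
```

The clamp at `min_r` is the design choice that reconciles horizontal scalability with the HA objective: cost falls when demand falls, but redundancy never drops below two replicas.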

Olivér Csernyava, Bálint Péter Horváth, Zsolt Badics, Sándor Bilicz

Purpose
The purpose of this paper is the development of an analytic computational model for electromagnetic (EM) wave scattering from spherical objects. The main application field is the modeling of electrically large objects, where standard numerical techniques require huge computational resources; an example is full-wave modeling of the human head in the millimeter-wave regime. Hence, an approximate model or analytical approach is used.

Design/methodology/approach
The Mie–Debye theorem is used for calculating the EM scattering from a layered dielectric sphere. The evaluation of the analytical expressions involved in the infinite sum suffers from several numerical instabilities, which makes precise calculation a challenge. The model is validated through an application example, comparing results to numerical (finite element method) calculations. The human head is approximated as a two-layer sphere, where the brain tissues and the cranial bones are represented by homogeneous materials.

Findings
A significant improvement is introduced for the stable calculation of the Mie coefficients of a core–shell stratified sphere illuminated by a linearly polarized EM plane wave. Using this technique, a semi-analytical expression is derived for the power loss in the sphere, resulting in quick and accurate calculations.

Originality/value
Two methods are introduced with the main objective of estimating the final precision of the results. This is an important aspect for potentially unstable calculations, and existing implementations have not included this feature so far.
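For reference, the Mie coefficients whose stable evaluation is at issue take, for a homogeneous sphere (the core–shell case generalizes these), the standard Bohren–Huffman form, where $x$ is the size parameter, $m$ the relative refractive index, and $\psi_n$, $\xi_n$ the Riccati–Bessel functions:

```latex
a_n = \frac{m\,\psi_n(mx)\,\psi_n'(x) - \psi_n(x)\,\psi_n'(mx)}
           {m\,\psi_n(mx)\,\xi_n'(x) - \xi_n(x)\,\psi_n'(mx)},
\qquad
b_n = \frac{\psi_n(mx)\,\psi_n'(x) - m\,\psi_n(x)\,\psi_n'(mx)}
           {\psi_n(mx)\,\xi_n'(x) - m\,\xi_n(x)\,\psi_n'(mx)}
```

The instability arises because $\psi_n(mx)$ grows or decays roughly exponentially with $n$ for complex $m$, so stable implementations typically compute ratios (e.g. logarithmic derivatives) of these functions rather than the functions themselves.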

Electronics, 2022, Vol. 11 (1), pp. 148
Mayuri Sharma, Keshab Nath, Rupam Kumar Sharma, Chandan Jyoti Kumar, Ankit Chaudhary

Computer vision-based automation has become popular for detecting and monitoring plants' nutrient deficiencies in recent times. The predictive models developed by various researchers were designed for use in embedded systems, keeping in mind the limited availability of computational resources. Nevertheless, the enormous popularity of smartphone technology has opened a door of opportunity for common farmers to access high computing resources. To facilitate smartphone users, this study proposes a framework that hosts high-end systems in the cloud, where processing is done and farmers can interact with the cloud-based system. With the availability of high computational power, many studies have focused on applying Convolutional Neural Network-based Deep Learning (CNN-based DL) architectures, including Transfer Learning (TL) models, to agricultural research. Ensembling various TL architectures has the potential to improve the performance of predictive models to a great extent. In this work, six TL architectures, viz. InceptionV3, ResNet152V2, Xception, DenseNet201, InceptionResNetV2, and VGG19, are considered, and their various ensemble models are used to carry out deficiency diagnosis in rice plants. Two publicly available datasets, from Mendeley and Kaggle, are used in this study. The ensemble-based architecture raised the highest classification accuracy from 99.17% to 100% on the Mendeley dataset and from 90% to 92% on the Kaggle dataset.
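A common way to ensemble such TL models is probability averaging: each model emits per-class probabilities and the ensemble averages them before taking the argmax. A minimal sketch of that combination step (the abstract does not specify the exact ensembling rule, so this is an assumption; the probability values are made up):

```python
def ensemble_predict(prob_lists):
    """Average per-class probabilities across models; return the argmax class."""
    n_models = len(prob_lists)
    n_classes = len(prob_lists[0])
    avg = [sum(p[i] for p in prob_lists) / n_models
           for i in range(n_classes)]
    return max(range(n_classes), key=avg.__getitem__)

# Three models voting over three deficiency classes (e.g. N, P, K):
preds = [
    [0.6, 0.3, 0.1],
    [0.2, 0.5, 0.3],
    [0.5, 0.4, 0.1],
]
print(ensemble_predict(preds))   # -> 0
```

Averaging tends to beat any single model when the members make partially uncorrelated errors, which is why combinations of six diverse TL backbones can lift accuracy beyond the best individual network.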
