high computational complexity
Recently Published Documents


TOTAL DOCUMENTS: 99 (FIVE YEARS: 53)

H-INDEX: 6 (FIVE YEARS: 2)

Author(s): Robabeh Eslami, Mohammad Khoveyni

To date, the models presented for measuring the efficiency score of multi-stage decision-making units (DMUs) are either nonlinear or require specifying weights for combining their divisional efficiencies. The nonlinearity leads to high computational complexity, especially for problems of large dimension, and assigning different weights to the divisional efficiencies yields different efficiency scores for the multi-stage network system. To tackle these problems, this study contributes to network DEA by introducing a novel enhanced Russell graph (ERG) efficiency measure for evaluating general two-stage series network structures. The proposed model is then extended to general multi-stage series network structures. The study also describes the managerial and economic implications of measuring the efficiency scores of multi-stage DMUs and provides numerical and empirical examples illustrating the use of the proposed model.
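For context, the enhanced Russell graph measure that the proposed model builds on can be written, in its standard single-stage (black-box) form, as the fractional program below; this is the textbook formulation rather than the two-stage extension introduced in the paper, with x_{ij} and y_{rj} the inputs and outputs of DMU j and DMU o the unit under evaluation:

\[
\mathrm{ERG}_o=\min_{\theta,\varphi,\lambda}\;
\frac{\tfrac{1}{m}\sum_{i=1}^{m}\theta_i}{\tfrac{1}{s}\sum_{r=1}^{s}\varphi_r}
\quad\text{s.t.}\quad
\sum_{j=1}^{n}\lambda_j x_{ij}\le\theta_i x_{io},\;\;
\sum_{j=1}^{n}\lambda_j y_{rj}\ge\varphi_r y_{ro},\;\;
\theta_i\le 1,\;\varphi_r\ge 1,\;\lambda_j\ge 0 .
\]

The fractional objective is what a Charnes-Cooper-type change of variables linearizes, which is how ERG-based formulations sidestep the nonlinearity mentioned above.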


2021, Vol 12 (3), pp. 484-489
Author(s): Francisca O Nwokoma, Juliet N Odii, Ikechukwu I Ayogu, James C Ogbonna

Camera-based scene text detection and recognition is a research area that has attracted considerable attention and has made noticeable progress alongside advances in deep learning, computer vision, and pattern recognition. Camera-based systems are well suited to capturing text in scene images (e.g., signboards), documents with complex, multipart backgrounds, thick books, and highly fragile documents. The technology supports real-time processing, since handheld cameras offer high processing speed and internal memory and are easier and more flexible to use than traditional scanners, whose usability is limited because they are not portable and cannot be applied to images already captured by cameras. However, characters captured by traditional scanners pose fewer computational difficulties than camera-captured images, which come with diverse challenges that lead to high computational complexity and recognition difficulties. This paper therefore reviews the factors that increase the computational difficulty of camera-based OCR and makes recommendations on best practices for camera-based OCR systems.
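To make the extra preprocessing burden of camera capture concrete, the sketch below shows a minimal, generic front end of the kind a camera-based OCR pipeline typically needs before recognition (denoising and adaptive binarization for uneven illumination); it is an illustrative example using OpenCV, not a pipeline taken from the paper.

```python
import cv2

def preprocess_camera_image(path):
    """Minimal, generic front end for camera-based OCR (illustrative sketch)."""
    img = cv2.imread(path)
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    # Mild denoising: camera sensors add noise that a flatbed scanner would not.
    gray = cv2.medianBlur(gray, 3)
    # Adaptive thresholding copes with the uneven illumination and shadows of
    # hand-held captures better than a single global threshold would.
    binary = cv2.adaptiveThreshold(gray, 255, cv2.ADAPTIVE_THRESH_GAUSSIAN_C,
                                   cv2.THRESH_BINARY, 31, 15)
    return binary
```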


2021, Vol 8
Author(s): Pravin Dangol, Eric Sihite, Alireza Ramezani

Fast constraint satisfaction, frontal dynamics stabilization, and fallover avoidance in dynamic bipedal walkers can be challenging. The challenges include underactuation, vulnerability to external perturbations, and the high computational complexity that arises when accounting for the system's full dynamics and environmental interactions. In this work, we study the potential roles of thrusters in addressing some of these locomotion challenges in bipedal robotics. We introduce a thruster-assisted bipedal robot called Harpy and capitalize on its unique design to propose an optimization-free approach to satisfying gait feasibility conditions. In this thruster-assisted legged locomotion, the reference trajectories can be manipulated to fulfill the constraints imposed by ground contact and those prescribed for states and inputs. Unintended changes to the trajectories, especially trajectories optimized to produce periodic orbits, can adversely affect gait stability and hybrid invariance. We show that, by employing the thrusters on Harpy, our approach can still guarantee stability and hybrid invariance of the gaits. We also show that the thrusters can be leveraged to robustify the gaits by dodging fallovers or by jumping over large obstacles.
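As a toy illustration of the general idea of using thrust to help satisfy ground-contact constraints (not Harpy's actual controller; the friction coefficient and the force decomposition are chosen purely for illustration), one can split a desired net support force between the ground reaction and the thrusters whenever the ground force alone would leave the friction cone:

```python
import numpy as np

MU = 0.6  # assumed friction coefficient (illustrative)

def split_contact_force(f_desired):
    """Split a desired 3D support force into a ground reaction inside the
    friction cone |f_t| <= MU * f_n and a residual assigned to the thrusters.
    Toy sketch only; Harpy's constraint-satisfaction scheme is more involved."""
    f_t = f_desired[:2]                        # tangential components
    f_n = max(f_desired[2], 0.0)               # unilateral normal component
    t_norm = np.linalg.norm(f_t)
    if f_desired[2] >= 0.0 and t_norm <= MU * f_n:
        return f_desired, np.zeros(3)          # ground contact alone suffices
    # Clip the tangential part to the cone boundary; thrusters supply the rest.
    scale = (MU * f_n / t_norm) if t_norm > 0.0 else 0.0
    f_ground = np.array([f_t[0] * scale, f_t[1] * scale, f_n])
    return f_ground, f_desired - f_ground
```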


2021, Vol 2021, pp. 1-10
Author(s): Xinlei Shi, Xiaofei Zhang

This work studies the direct position determination (DPD) of noncircular (NC) signals with multiple arrays. Existing DPD algorithms for NC sources ignore the impact of path propagation loss on performance. In practice, the signal-to-noise ratios (SNRs) at different observation stations are often different and unstable when the NC signal of the same radiating target arrives at different observation locations. In addition, exploiting the NC features of the target signals not only extends the virtual array manifold but also introduces a high-dimensional search. To address these problems, this study develops a DPD method for NC sources with multiple arrays that combines weighted subspace data fusion (SDF) with a reduced-dimension (RD) search. First, the NC features of the target signals are used to extend the virtual array manifold. Second, a weight is assigned to balance the errors across stations, yielding higher location accuracy and better robustness. Then, the RD method is used to eliminate the high computational complexity caused by the NC phase search dimension. Finally, the weighted fusion cost function is constructed from the eigenvalues of the received-signal covariance matrices. Simulations verify that the proposed algorithm improves location performance, achieves better robustness, and distinguishes more targets than two-step location techniques and plain SDF. In addition, without losing estimation performance, the proposed algorithm significantly reduces the complexity caused by the NC phase search dimension.
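For readers unfamiliar with subspace data fusion, the sketch below shows the bare structure of an SDF-style grid search without the NC extension: each station contributes a noise-subspace projection of its position-dependent steering vector, and the contributions are fused with per-station weights into one cost over candidate positions. The steering model, the weighting, and all names are illustrative assumptions; the paper's NC extension and reduced-dimension search are not reproduced here.

```python
import numpy as np

WAVELENGTH = 1.0  # assumed carrier wavelength (normalized for illustration)

def sdf_cost(candidate_pos, stations, weights, num_sources):
    """Weighted subspace-data-fusion style cost at one candidate 2D position.

    stations : list of (element_positions, sample_covariance) per observation
               site; element_positions is (M, 2), sample_covariance is (M, M)
    weights  : per-station weights (e.g. SNR-based), assumed given
    The fused cost is small when candidate_pos coincides with a true emitter.
    """
    cost = 0.0
    for (elements, R), w in zip(stations, weights):
        # Noise subspace from the sample covariance (eigenvalues ascending,
        # so the first M - num_sources eigenvectors span the noise subspace).
        _, vecs = np.linalg.eigh(R)
        noise = vecs[:, : elements.shape[0] - num_sources]
        # Position-dependent steering vector: phases set by element-to-candidate
        # ranges (a simple exact-phase model, used here purely for illustration).
        ranges = np.linalg.norm(candidate_pos - elements, axis=1)
        a = np.exp(-2j * np.pi * ranges / WAVELENGTH)
        # MUSIC-style projection onto the noise subspace, fused across stations.
        cost += w * np.linalg.norm(noise.conj().T @ a) ** 2 / len(a)
    return cost

# The position estimate is the grid point that minimizes the fused cost:
# p_hat = min(grid_points, key=lambda p: sdf_cost(p, stations, weights, K))
```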


2021, Vol 2 (1), pp. 44
Author(s): Alejandro Fernández-Fraga, Jorge González-Domínguez, Juan Touriño

Methylation is a chemical process that modifies DNA through the addition of a methyl group to one or several nucleotides. Discovering differentially methylated regions is an important research field in genomics, as it can help to anticipate the risk of certain diseases. RADMeth is one of the most accurate tools in this field, but it has high computational complexity. In this work, we present a hybrid MPI-OpenMP parallel implementation of RADMeth to accelerate its execution on distributed-memory systems, reaching speedups of up to 189 on 256 cores and enabling its application to large-scale datasets.
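The parallelization pattern described above can be sketched as follows (the actual RADMeth implementation is in C++ with MPI and OpenMP; this Python/mpi4py sketch only illustrates the two-level decomposition, and all names are assumptions). Each MPI process takes a share of the genomic regions and runs the per-region tests concurrently within the process:

```python
from concurrent.futures import ThreadPoolExecutor
from mpi4py import MPI

def test_region(region):
    """Placeholder for the per-region differential-methylation test."""
    raise NotImplementedError  # stand-in for the real statistical test

def run_parallel(all_regions, threads_per_process=8):
    comm = MPI.COMM_WORLD
    rank, size = comm.Get_rank(), comm.Get_size()
    # Process-level (MPI) layer: each rank takes an interleaved share of regions.
    my_regions = all_regions[rank::size]
    # Thread-level layer standing in for OpenMP; in Python this only pays off
    # if test_region releases the GIL (e.g. inside native extensions).
    with ThreadPoolExecutor(max_workers=threads_per_process) as pool:
        local_results = list(pool.map(test_region, my_regions))
    # Collect the partial results on rank 0 for the final merged output.
    return comm.gather(local_results, root=0)
```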


Author(s): Md. Aaqeel Hasan, Jaypal Medida, N. Laxmi Prasanna, ...

The Internet of Things (IoT) refers to the concept of connecting non-traditional computers and related resources through the internet, which includes incorporating basic computing and communication technologies for daily use into physical things. Security and confidentiality are two major challenges in IoT. In the security mechanisms currently available for IoT, the limited memory, energy, and CPU resources of IoT devices compromise critical security requirements. Centralized security architectures are also inappropriate for IoT because they present a single point of attack, and it is costly to defend against attacks targeting centralized infrastructure. It is therefore important to decentralize the IoT security architecture to meet these resource constraints. Blockchain is a decentralized, cryptographically secured ledger technology with a large number of uses. However, because of its high computational complexity and poor scalability, the traditional blockchain environment is not suitable for IoT applications. We therefore introduce a sliding-window protocol into the traditional blockchain so that it better suits IoT applications: the resulting sliding-window blockchain (SWBC) uses previous blocks in the proof of work to shape the hash of the next block. SWBC is evaluated on a data stream generated in real time from an IoT testbed (smart home). The results show that the proposed sliding-window protocol improves security, reduces memory overhead, and consumes fewer resources.
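The sliding-window idea can be illustrated with a toy proof-of-work loop in which the hash of a new block covers the previous W block hashes instead of only the single preceding hash. This is a heavily simplified sketch of the concept; the window size, difficulty, and block fields are arbitrary illustrative choices, not values from the paper.

```python
import hashlib
import json
import time

WINDOW = 4       # sliding-window size (arbitrary, for illustration)
DIFFICULTY = 4   # required number of leading hex zeros (arbitrary)

def mine_block(data, chain):
    """Mine one block whose proof of work covers the last WINDOW block hashes."""
    block = {
        "timestamp": time.time(),
        "data": data,
        "window": [blk["hash"] for blk in chain[-WINDOW:]],  # sliding window
        "nonce": 0,
    }
    while True:
        digest = hashlib.sha256(
            json.dumps(block, sort_keys=True).encode()).hexdigest()
        if digest.startswith("0" * DIFFICULTY):
            block["hash"] = digest
            return block
        block["nonce"] += 1

# Usage: start from a genesis block and append readings from the IoT testbed.
chain = [{"hash": hashlib.sha256(b"genesis").hexdigest()}]
chain.append(mine_block({"device": "thermostat", "temp_c": 21.5}, chain))
```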


Sensors, 2021, Vol 21 (18), pp. 6182
Author(s): Joongchol Shin, Joonki Paik

Physical model-based dehazing methods generally cannot avoid environmental variability and undesired artifacts such as non-collected illuminance, halos, and saturation, since it is difficult to accurately estimate the illuminance, light transmission, and airlight. Furthermore, the haze-model estimation process requires very high computational complexity. To solve this problem by directly estimating the radiance of hazy images, we present a novel dehazing and verifying network (DVNet). In the dehazing procedure, clean images are enhanced using a correction network (CNet), which uses the ground truth to guide the learning of the haze network. Hazy images are then restored through a haze network (HNet). Furthermore, a verifying method checks the errors of both CNet and HNet using self-supervised learning. Finally, the proposed complementary adversarial learning method produces more natural results. Note that the proposed discriminator and generators (HNet and CNet) can be trained on an unpaired dataset. Overall, the proposed DVNet generates better dehazed results than state-of-the-art approaches under various hazy conditions, and experimental results show that it outperforms state-of-the-art dehazing methods in most cases.
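For reference, the physical model these methods are built on is the standard atmospheric scattering model,

\[
I(x) = J(x)\,t(x) + A\bigl(1 - t(x)\bigr),
\]

where I is the observed hazy image, J the scene radiance, t the transmission map, and A the global airlight. Model-based dehazing must estimate t and A before inverting for J, which is exactly the error-prone and computationally heavy step that DVNet avoids by estimating the radiance directly.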


2021, Vol 10 (2), pp. 74-83
Author(s): Rudi Kurniawan, Zahrul Fuadi, Ramzi Adriman

Perception, localization, and navigation of the environment are essential for autonomous mobile robots and vehicles. For that reason, 2D laser rangefinder sensors are popular in mobile robot applications for measuring the distance from the robot to its surrounding objects. The measurement data generated by the sensor are transmitted to the controller, where they are processed by one or more suitable algorithms in several steps to extract the desired information. The Universal Hough Transform (UHT) is one of the appropriate and popular algorithms for extracting primitive geometry such as straight lines, which are then used in further data-processing steps. However, the UHT has high computational complexity and requires a so-called accumulator array, which makes it less suitable for real-time applications where high-speed, low-complexity computation is demanded. In this study, an Accumulator-free Hough Transform (AfHT) is proposed to reduce the computational complexity and eliminate the need for the accumulator array. The proposed algorithm is validated using measurement data from a 2D laser scanner and compared to the standard Hough Transform. The values extracted by AfHT agree well with those of the UHT, but with a significant reduction in computational complexity and memory requirements.
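To make the accumulator overhead concrete, the sketch below is a generic textbook (rho, theta) line Hough transform; its two-dimensional accumulator array and per-point, per-angle voting loop are the memory and computation costs that AfHT is designed to avoid. It is not the AfHT algorithm itself.

```python
import numpy as np

def hough_lines(points, n_theta=180, rho_res=1.0):
    """Classical line Hough transform over 2D scan points (textbook version).

    Every point votes for all (rho, theta) lines passing through it, so the
    accumulator needs O(n_rho * n_theta) memory and the voting loop costs
    O(n_points * n_theta) operations -- the overhead AfHT avoids.
    """
    pts = np.asarray(points, dtype=float)
    thetas = np.linspace(0.0, np.pi, n_theta, endpoint=False)
    rho_max = np.linalg.norm(pts, axis=1).max()
    rhos = np.arange(-rho_max, rho_max + rho_res, rho_res)
    acc = np.zeros((len(rhos), n_theta), dtype=np.int32)  # the accumulator array
    for x, y in pts:
        rho = x * np.cos(thetas) + y * np.sin(thetas)      # one vote per angle
        idx = np.round((rho + rho_max) / rho_res).astype(int)
        acc[idx, np.arange(n_theta)] += 1
    # The strongest line corresponds to the accumulator cell with the most votes.
    r_i, t_i = np.unravel_index(acc.argmax(), acc.shape)
    return rhos[r_i], thetas[t_i]
```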


Author(s): Sunghoon Lee, Jooyoun Park, Il-Min Kim, Jun Heo

In this research, we study soft-output decoding of polar codes. Two representative soft-output decoding algorithms are belief propagation (BP) and soft cancellation (SCAN). The BP algorithm has low latency but suffers from high computational complexity. The SCAN algorithm, proposed to reduce the complexity of soft-output decoding, achieves good decoding performance but suffers from long latency. These two algorithms are therefore suitable only for two extreme cases: systems that need very low latency (but tolerate high complexity) or very low complexity (but tolerate high latency). Many practical systems, however, need to operate between these extremes, with neither excessive latency nor excessive complexity. To adapt to the varied needs of such systems, we propose a very flexible soft-output decoding framework for polar codes. Depending on which system requirement is most crucial, the proposed scheme adapts by controlling the level of parallelism. Numerical results demonstrate that the proposed scheme can effectively adapt to various system requirements by changing the level of parallelism.
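For background, the successive-cancellation-style, SCAN, and BP decoders mentioned above are all built from the same two elementary log-likelihood-ratio updates on the polar code's butterfly factor graph; they are shown below in the common min-sum approximation purely as a reference point, not as the proposed framework.

```python
import numpy as np

def f_update(a, b):
    """Check-node (upper-branch) LLR combination, the min-sum approximation
    of 2 * atanh(tanh(a / 2) * tanh(b / 2))."""
    return np.sign(a) * np.sign(b) * np.minimum(np.abs(a), np.abs(b))

def g_update(a, b, u):
    """Bit-node (lower-branch) LLR combination given the partial-sum bit u."""
    return b + (1 - 2 * u) * a
```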


Author(s): Tarek Sallam, Ahmed M. Attiya

Achieving robust and fast two-dimensional adaptive beamforming of phased array antennas is a challenging problem because of its high computational complexity. To address this problem, a deep-learning-based beamforming method is presented in this paper. In particular, the optimum weight vector is computed by modeling the problem as a convolutional neural network (CNN), which is trained with input/output pairs obtained from the optimum Wiener solution. To exhibit the robustness of the new technique, it is applied to an 8 × 8 phased array antenna and compared with a shallow (non-deep) neural network, namely a radial basis function neural network. The results reveal that the CNN yields nearly optimal Wiener weights even in the presence of array imperfections.
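In adaptive beamforming, the optimum (Wiener) weight vector referred to above is commonly written in its minimum-variance form as

\[
\mathbf{w}_{\mathrm{opt}}
= \frac{\mathbf{R}^{-1}\,\mathbf{a}(\theta_0,\phi_0)}
       {\mathbf{a}^{H}(\theta_0,\phi_0)\,\mathbf{R}^{-1}\,\mathbf{a}(\theta_0,\phi_0)},
\]

where R is the interference-plus-noise covariance matrix and a(θ0, φ0) is the steering vector toward the desired direction (the paper's exact normalization may differ). Inverting R for the 64 elements of an 8 × 8 array at every update is the computational bottleneck that the trained CNN replaces with a single forward pass.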

