conventional computer
Recently Published Documents


TOTAL DOCUMENTS

92
(FIVE YEARS 32)

H-INDEX

12
(FIVE YEARS 2)

Electronics ◽  
2021 ◽  
Vol 10 (20) ◽  
pp. 2482
Author(s):  
Soronzonbold Otgonbaatar ◽  
Mihai Datcu

Satellite instruments monitor the Earth’s surface day and night, and, as a result, the volume of Earth observation (EO) data is increasing dramatically. Machine Learning (ML) techniques are routinely employed to analyze and process these big EO data; one well-known ML technique is the Support Vector Machine (SVM). Training an SVM poses a quadratic programming problem, and quantum computers, including quantum annealers (QA) as well as gate-based quantum computers, promise to solve an SVM more efficiently than a conventional computer; training the SVM on a quantum computer yields a quantum SVM (qSVM), while training it on a conventional computer yields a classical SVM (cSVM). However, quantum computers cannot tackle many practical EO problems with a qSVM due to their very low number of input qubits. Hence, we assembled a coreset (“core of a dataset”) of given EO data for training a weighted SVM on a small quantum computer, a D-Wave quantum annealer with around 5000 input qubits. The coreset is a small, representative weighted subset of an original dataset, and it can stand in for that dataset when training the proposed weighted SVM on a small quantum computer. As practical data, we use synthetic data, Iris data, a Hyperspectral Image (HSI) of Indian Pine, and a Polarimetric Synthetic Aperture Radar (PolSAR) image of San Francisco. We measured the closeness between each original dataset and its coreset with a Kullback–Leibler (KL) divergence test, and we trained a weighted SVM on our coreset data using both a D-Wave quantum annealer (D-Wave QA) and a conventional computer. Our findings show that the coreset approximates the original dataset with very small KL divergence (smaller is better), and the weighted qSVM even outperforms the weighted cSVM on the coresets in a few instances of our experiments. As a by-product, we also present our KL divergence findings demonstrating the closeness between our original data (i.e., our synthetic data, Iris data, hyperspectral image, and PolSAR image) and the assembled coresets.
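To make the classical side of this pipeline concrete, here is a hedged Python sketch (not the authors' code): the coreset is simplified to naive uniform subsampling with weights, KL divergence is computed between feature histograms, and a weighted cSVM stands in for the training step that the D-Wave annealer would perform:

```python
# Sketch only: naive "coreset", histogram KL divergence, weighted classical SVM.
import numpy as np
from scipy.stats import entropy          # entropy(p, q) gives KL(p || q)
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 2))                    # stand-in for EO features
y = (X[:, 0] + X[:, 1] > 0).astype(int)           # toy labels

idx = rng.choice(len(X), size=50, replace=False)  # naive coreset: 50 points
weights = np.full(50, len(X) / 50.0)              # each point represents 20

# KL divergence between feature histograms of the dataset and its coreset
bins = np.linspace(-4, 4, 30)
p, _ = np.histogram(X[:, 0], bins=bins, density=True)
q, _ = np.histogram(X[idx, 0], bins=bins, density=True)
print("KL(p || q):", entropy(p + 1e-9, q + 1e-9))  # epsilon avoids zero bins

# Weighted cSVM; the quantum annealer would replace this training step.
clf = SVC(kernel="rbf").fit(X[idx], y[idx], sample_weight=weights)
print("accuracy on full set:", clf.score(X, y))
```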


Author(s):  
Robert Kowalski ◽  
Akber Datoo

In this paper, we present an informal introduction to Logical English (LE) and illustrate its use to standardise the legal wording of the Automatic Early Termination (AET) clauses of International Swaps and Derivatives Association (ISDA) Agreements. LE can be viewed both as an alternative to conventional legal English for expressing legal documents, and as an alternative to conventional computer languages for automating them. LE is a controlled natural language (CNL), designed both to be computer-executable and to be readable by English speakers without special training. The basic form of LE is syntactic sugar for logic programs, in which all sentences have the same standard form: either rules of the form "conclusion if conditions" or unconditional sentences of the form "conclusion". However, LE extends normal logic programming by introducing features that are present in other computer languages and other logics, including typed variables signalled by common nouns, and existentially quantified variables in the conclusions of sentences signalled by indefinite articles. Although LE translates naturally into a logic programming language such as Prolog or ASP, it can also serve as a neutral standard that can be compiled into other lower-level computer languages.
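As a purely illustrative sketch of this syntax (hypothetical, not an example from the paper), an LE rule of the "conclusion if conditions" form might read: "a party is in default if the party fails to pay an amount and the amount is due", where the common nouns "party" and "amount" serve as the types of their variables, and the indefinite articles introduce those variables, as described above.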


Author(s):  
Maxim Kalinin ◽  
Dmitry Zegzhda ◽  
Vasiliy Krundyshev ◽  
Daria Lavrova ◽  
Dmitry Moskvin ◽  
...  

The functionality of any system can be represented as a set of commands that lead to a change in the state of the system. The intrusion detection problem for signature-based intrusion detection systems is then equivalent to matching the sequences of operational commands executed by the protected system against known attack signatures. Various mutations in attack vectors (including replacing commands with equivalent ones, rearranging commands and their blocks, and adding garbage and empty commands to the sequence) reduce the effectiveness and accuracy of intrusion detection. The article analyzes existing solutions in the field of bioinformatics and considers their applicability to the problem of identifying polymorphic attacks with signature-based intrusion detection systems. A new approach to the detection of polymorphic attacks is discussed, based on the suffix tree technology applied in the assembly and similarity verification of genomic sequences. The use of bioinformatics technology achieves intrusion detection accuracy at the level of modern intrusion detection systems (more than 0.90) while surpassing them in storage efficiency, speed, and resilience to changes in attack vectors. To improve the accuracy further, a number of modifications of the developed algorithm were carried out, as a result of which the attack detection accuracy increased to 0.95 with mutation levels in the sequence of up to 10%. The developed approach can be used for intrusion detection both in conventional computer networks and in modern reconfigurable network infrastructures with limited resources (Internet of Things, networks of cyber-physical objects, wireless sensor networks).
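As a rough illustration of the idea (not the authors' implementation), the sketch below normalizes a command trace, canonicalizing equivalent commands and dropping garbage, and then searches for a signature with a sorted suffix list, a simple stand-in for a suffix tree. The equivalence and no-op tables are hypothetical, and only contiguous matches are handled, not block rearrangement:

```python
# Sketch: suffix-based signature matching on normalized command sequences.
from bisect import bisect_left

CANONICAL = {"del": "rm", "erase": "rm"}   # hypothetical command-equivalence table
NOOPS = {"nop", ""}                        # hypothetical garbage/empty commands

def normalize(commands):
    """Map equivalent commands to one canonical form and drop no-ops."""
    return [CANONICAL.get(c, c) for c in commands if c not in NOOPS]

def contains_signature(trace, signature):
    """Check whether the signature occurs contiguously in the trace,
    using a sorted suffix list as a simple stand-in for a suffix tree."""
    trace, sig = normalize(trace), tuple(normalize(signature))
    suffixes = sorted(tuple(trace[i:]) for i in range(len(trace)))
    i = bisect_left(suffixes, sig)
    return i < len(suffixes) and suffixes[i][:len(sig)] == sig

print(contains_signature(["ls", "nop", "del", "file"], ["rm", "file"]))  # True
```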


2021 ◽  
Vol 10 (2) ◽  
pp. 170-175
Author(s):  
I Putu Agus Eka Pratama ◽  
Kevin Christopher Bakkara

Information technology and computer networks continue to develop alongside growing user needs in business, education, industry, and data security. Increasingly dense network traffic from communication and data exchange between users can become a problem for conventional computer network technology. This calls for a new technology to be implemented in computer networks, together with measurement of its Quality of Service (QoS). Software-Defined Networking (SDN) is a solution, separating the data plane from the control plane across network design, management, and implementation. In this research, SDN was implemented as a simulation using Mininet and OpenDaylight with a tree topology, and QoS measurements were then carried out. The results of testing and measuring QoS on the SDN simulation with a tree topology using Mininet and OpenDaylight showed a jitter value of 0.425 ms, a packet loss value of 0.266%, a bandwidth value of 9.3925 Mbps, a UDP throughput value of 2.348 bits/sec, and a TCP throughput value of 2.335 bits/sec.
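As a hedged sketch of how such a simulation can be scripted (the depth, fanout, and controller address below are assumptions, not the paper's setup), Mininet's Python API can build a tree topology and attach it to an external OpenDaylight controller:

```python
# Sketch: Mininet tree topology driven by an external OpenDaylight controller.
from mininet.net import Mininet
from mininet.node import RemoteController
from mininet.topolib import TreeTopo

topo = TreeTopo(depth=2, fanout=2)  # small tree: 3 switches, 4 hosts
net = Mininet(topo=topo,
              controller=lambda name: RemoteController(
                  name, ip="127.0.0.1", port=6633))  # assumed OpenFlow port
net.start()
net.pingAll()                 # reachability / packet-loss check
net.iperf()                   # TCP throughput between two hosts
net.iperf(l4Type="UDP")       # UDP throughput measurement
net.stop()
```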


2021 ◽  
Author(s):  
Shixiong Zhang ◽  
Wenmin Wang

Event-based vision is a novel bio-inspired vision paradigm that has attracted the interest of many researchers. As a neuromorphic vision sensor, the event camera differs from traditional frame-based cameras and offers advantages they cannot match, e.g., high temporal resolution, high dynamic range (HDR), sparse output, and minimal motion blur. Recently, many computer vision approaches have been proposed with demonstrated success, but general methods for broadening the application scope of event-based vision are still lacking. To effectively bridge the gap between conventional computer vision and event-based vision, in this paper we propose an adaptable framework for object detection in event-based vision.
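One common way to bridge the two domains, shown here as a minimal sketch rather than the framework proposed in the paper, is to accumulate the asynchronous event stream into a 2-D frame that a conventional frame-based detector can consume:

```python
# Sketch: accumulate (x, y, t, polarity) events into a signed count image.
import numpy as np

def events_to_frame(events, height, width):
    """Sum ON events as +1 and OFF events as -1 at each pixel location."""
    frame = np.zeros((height, width), dtype=np.int32)
    for x, y, _t, p in events:
        frame[y, x] += 1 if p > 0 else -1
    return frame

# Example: three events on a 4x4 sensor.
print(events_to_frame([(0, 0, 0.1, 1), (0, 0, 0.2, 1), (3, 3, 0.3, -1)], 4, 4))
```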




2021 ◽  
Author(s):  
Xue Hu ◽  
Ferdinando Rodriguez y Baena

An automatic markerless knee tracking and registration algorithm has been proposed in the literature to avoid the marker insertion required by conventional computer-assisted knee surgery, resulting in a shorter and less invasive surgical workflow. However, such an algorithm considers intact femur geometry only, whereas bone surface modification is inevitable due to intra-operative intervention. The resulting mismatched correspondences degrade the reliability of the registered target pose. To solve this problem, this work proposes a supervised deep neural network to automatically restore the surface of the processed bone. The network was trained on a synthetic dataset consisting of real depth captures of a model leg and simulated realistic femur cutting. According to the evaluation on both synthetic data and real-time captures, the registration quality can be effectively improved by surface reconstruction. The improvement in tracking accuracy is only evident on test data, indicating the need for future enhancement of the dataset and network.


Author(s):  
Jenny Stritzel ◽  
Dominik Wolff ◽  
Klaus-Hendrik Wolf ◽  
Tobias Weller ◽  
Thomas Lenarz ◽  
...  

Against the background of increasing numbers of indications for cochlear implants (CIs), there is a growing need for a CI outcome prediction tool to assist in deciding on the best possible treatment for each individual patient prior to intervention. The hearing outcome depends on several features of the cochlear structure, whose influence is not yet entirely known. In preparation for surgical planning, a preoperative CT scan is recorded. The overall goal is feature extraction and prediction of the hearing outcome based only on this conventional CT data. The aim of our research work for this paper is therefore the preprocessing of the conventional CT data and a subsequent segmentation of the human cochlea. The great challenge is the very small size of the cochlea combined with a fairly poor resolution. For a better distinction between the cochlea and the surrounding tissue, the data has to be rotated so that the typical cochlear shape is observable. Afterwards, a segmentation can be performed, which enables feature detection. We show the effectiveness of our method compared to results in the literature that were based on CT data with a much higher resolution. A further study with a much larger amount of data is planned.
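A minimal sketch of this rotate-then-segment preprocessing idea using scipy is shown below; the rotation angle, axes, and intensity threshold are placeholder assumptions, not values from the paper:

```python
# Sketch: reorient a CT volume, then extract a candidate structure by
# thresholding and keeping the largest connected component.
import numpy as np
from scipy import ndimage

def reorient(volume, angle_deg=30.0, axes=(0, 2)):
    """Rotate the CT volume so the typical cochlear shape becomes observable."""
    return ndimage.rotate(volume, angle_deg, axes=axes, reshape=False, order=1)

def segment(volume, hu_threshold=300.0):
    """Crude intensity segmentation; threshold is a placeholder assumption."""
    mask = volume > hu_threshold
    labels, n = ndimage.label(mask)
    sizes = ndimage.sum(mask, labels, range(1, n + 1))
    return labels == (np.argmax(sizes) + 1)  # largest connected component
```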


10.6036/10022 ◽  
2021 ◽  
Vol DYNA-ACELERADO (0) ◽  
pp. [6 pp.]
Author(s):  
Luis Alberto Flores Montaño ◽  
Juan Carlos Herrera Lozada ◽  
Jacobo Sandoval Gutierrez ◽  
Rodrigo Vazquez Lopez ◽  
Daniel Librado Martinez Vazquez

The Internet of Robotic Things (IoRT) is a technology that seeks to monitor, operate, and maintain the tasks of multiple robots through the cloud. However, using these robots in cyberspace carries risks and inherent cybersecurity problems. To analyze the implications of this technology, the objective was to design and operate an IoRT system with the default configuration and subject it to attack. The proposed methodology consisted of designing an IoRT architecture; implementing three robotic platforms linked to the cloud; applying sniffing and spoofing cyberattacks; assessing the impacts; and proposing solutions. The experiment used three prototypes: two servo motors, a 6-degree-of-freedom arm, and a workstation with a robot. The tools of the experiment were a conventional computer, a Raspberry Pi microcomputer, the Robot Operating System (ROS) middleware, the Kali Linux distribution, and the ThingSpeak cloud service. The work makes three contributions. First, it was shown that four types of links are sufficient to homologate and ensure the integrity, reliability, and availability of the operation of different types of robots. Second, it was possible to connect these robots, even though they are not designed to work on the internet, through a slave-robot node link. Finally, a realistic list of consequences was obtained, given the vulnerabilities and the attacks tested, together with some recommendations.
Keywords: Cybersecurity, IoRT, Industry 4.0, Common Vulnerabilities and Exposures, Cloud, ROS.
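As an illustrative sketch of the cloud link only (the write key and field mapping are hypothetical, and this is not the authors' code), a robot node can push telemetry to ThingSpeak over its public REST update endpoint; with a default, unencrypted configuration, such traffic is exactly what a sniffing attack can capture:

```python
# Sketch: one servo reading pushed to ThingSpeak's REST update endpoint.
import requests

THINGSPEAK_URL = "https://api.thingspeak.com/update"
WRITE_KEY = "XXXXXXXXXXXXXXXX"  # hypothetical channel write key

def publish_joint_angle(angle_deg: float) -> int:
    """Send one reading; ThingSpeak returns the new entry id (0 on failure)."""
    resp = requests.get(THINGSPEAK_URL,
                        params={"api_key": WRITE_KEY, "field1": angle_deg},
                        timeout=5)
    resp.raise_for_status()
    return int(resp.text)

print(publish_joint_angle(42.5))
```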


2021 ◽  
Vol 28 (1) ◽  
Author(s):  
Titu-Marius I. Băjenescu

The quantum computer is a "supercomputer" that relies on the phenomena of quantum mechanics to perform operations on data. Long the object of speculation, some of it far-fetched, quantum mechanics gave birth to the quantum computer, a machine capable of processing data tens of millions of times faster than a conventional computer. A quantum computer does not use the same memory as a conventional computer: rather than sequences of 0s and 1s, it works with qubits, or quantum bits. The quantum computer is a combination of two major scientific fields, quantum mechanics and computer science. Quantum mechanics, on which this computer is based, governs the behaviour of bodies in the atomic, molecular and corpuscular domains; it is a theory whose logic is totally contrary to intuition, and mathematics is essential to grasp it fully. Quantum computing is the sub-domain of computer science that deals with quantum computers, which exploit quantum mechanical phenomena rather than exclusively electrical ones as in so-called "classical" computing. The quantum phenomena used are quantum entanglement and superposition. The article examines aspects related to the development, operation, advantages and difficulties, applications, and future of the quantum computer.
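For reference, in standard notation (supplementary to the article, not drawn from it), superposition and entanglement can be written compactly:

```latex
% A single qubit is a superposition of the basis states, with normalized amplitudes:
|\psi\rangle = \alpha\,|0\rangle + \beta\,|1\rangle, \qquad |\alpha|^2 + |\beta|^2 = 1
% A maximally entangled two-qubit (Bell) state, in which neither qubit has a
% definite value on its own:
|\Phi^{+}\rangle = \frac{1}{\sqrt{2}}\left(|00\rangle + |11\rangle\right)
```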

