Performance Analysis of 77 GHz mmWave Radar Based Object Behavior

2021 ◽ pp. 576-582
Author(s): Arsyad R. Darlis, Nur Ibrahim, Benyamin Kusumoputro

In this paper, a performance analysis of object behavior using mmWave radar is proposed. The AWR1642 mmWave radar operating at 77 GHz, which has advantages over other sensors, particularly in penetrating materials and in accuracy, is utilized in this research. The paper presents an experimental analysis of object behavior in an indoor environment. The radar detects an object by transmitting a chirp signal and receiving it again after it is reflected. The mmWave radar's performance is evaluated in terms of distance, number of objects, radiation pattern, and velocity. The measurement results show that objects can be detected up to 3 m in the indoor environment with a high level of accuracy and stability. The radar can also detect multiple objects in the Line-of-Sight (LOS) condition, where the received power level is attenuated by about 10 dB after penetrating the first object. The results further show that the beamwidth of the radar is 140 degrees, with a directional radiation pattern from 20 degrees to 160 degrees. The radar is able to identify the velocity of the object accurately, although increasing the object's speed also increases the Central Processing Unit (CPU) usage on the radar. The proposed system shows excellent performance in object behavior analysis and can be utilized in Synthetic Aperture Radar (SAR) applications.
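As a rough illustration of the chirp-based ranging and Doppler processing described above (a sketch only, not the authors' code), the following computes target range from the beat frequency of an FMCW chirp and radial velocity from the chirp-to-chirp phase shift; the chirp slope and repetition period are assumed values, not the AWR1642 configuration used in the paper.

```python
# Minimal FMCW range/velocity sketch; all radar parameters below are assumptions.
import math

C = 3e8  # speed of light, m/s

def beat_to_range(f_beat_hz, slope_hz_per_s):
    """Range from beat frequency: R = c * f_beat / (2 * S)."""
    return C * f_beat_hz / (2.0 * slope_hz_per_s)

def phase_to_velocity(delta_phi_rad, wavelength_m, chirp_period_s):
    """Radial velocity from the phase change between consecutive chirps:
    v = lambda * delta_phi / (4 * pi * Tc)."""
    return wavelength_m * delta_phi_rad / (4.0 * math.pi * chirp_period_s)

if __name__ == "__main__":
    slope = 30e12              # assumed chirp slope, Hz/s (30 MHz/us)
    wavelength = C / 77e9      # ~3.9 mm at 77 GHz
    tc = 60e-6                 # assumed chirp repetition period, s
    print(f"range:    {beat_to_range(600e3, slope):.2f} m")        # ~3 m
    print(f"velocity: {phase_to_velocity(0.5, wavelength, tc):.2f} m/s")
```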

2016 ◽ Vol 6 (1) ◽ pp. 79-90
Author(s): Łukasz Syrocki, Grzegorz Pestka

Abstract A ready-to-use set of functions is provided to facilitate solving the generalized eigenvalue problem for symmetric matrices, in order to efficiently calculate eigenvalues and eigenvectors using Compute Unified Device Architecture (CUDA) technology from NVIDIA. An integral part of CUDA is a high-level programming environment that enables tracing code executed on both the Central Processing Unit and the Graphics Processing Unit. The presented matrix structures allow for an analysis of the advantages of using graphics processors in such calculations.
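For readers unfamiliar with the underlying problem, the snippet below solves the same symmetric generalized eigenvalue problem A x = λ B x on the CPU with SciPy; it is only a reference for the quantities the CUDA routines compute, not the GPU implementation described in the paper.

```python
# CPU reference for the symmetric generalized eigenvalue problem A x = lambda B x.
import numpy as np
from scipy.linalg import eigh

rng = np.random.default_rng(0)
n = 500
M = rng.standard_normal((n, n))
A = (M + M.T) / 2                  # symmetric A
B = M @ M.T + n * np.eye(n)        # symmetric positive-definite B

eigvals, eigvecs = eigh(A, B)      # generalized problem: A v = w B v

# Check the first eigenpair: A v should equal w * B v (up to round-off).
v, w = eigvecs[:, 0], eigvals[0]
print(np.allclose(A @ v, w * (B @ v)))
```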


2021 ◽ Vol 12
Author(s): Sergio Gálvez, Federico Agostini, Javier Caselli, Pilar Hernandez, Gabriel Dorado

New High-Performance Computing architectures have recently been developed for commercial central processing units (CPUs). Yet, that has not improved the execution time of widely used bioinformatics applications, like BLAST+. This is due to a lack of optimization between the bases of the existing algorithms and the internals of the hardware, which would allow taking full advantage of the available CPU cores. To exploit the new architectures, algorithms must be revised and redesigned; usually rewritten from scratch. BLVector adapts the high-level concepts of BLAST+ to x86 architectures with AVX-512, to harness their capabilities. A deep, comprehensive study has been carried out to optimize the approach, with a significant reduction in execution time. BLVector reduces the execution time of BLAST+ when aligning up to mid-size protein sequences (∼750 amino acids). The gain in real-scenario cases is 3.2-fold. When applied to longer proteins, BLVector consumes more time than BLAST+, but retrieves a much larger set of results. BLVector and BLAST+ are fine-tuned heuristics. Therefore, the relevant results returned by both are the same, although they behave differently, especially when performing alignments with low scores. Hence, they can be considered complementary bioinformatics tools.
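The AVX-512 kernels of BLVector are not reproduced in the abstract; purely as an illustration of the data-parallel idea (many residue comparisons per instruction), the toy sketch below scores a short query against a database sequence with vectorized NumPy operations, using an assumed +1/-1 match/mismatch scheme.

```python
# Toy data-parallel scoring: each window comparison operates on a whole vector
# of residues at once, in the spirit of SIMD lanes (not BLVector's actual kernel).
import numpy as np

MATCH, MISMATCH = 1, -1  # assumed toy scoring scheme

def ungapped_scores(query: str, db: str) -> np.ndarray:
    q = np.frombuffer(query.encode(), dtype=np.uint8)
    d = np.frombuffer(db.encode(), dtype=np.uint8)
    scores = np.empty(len(d) - len(q) + 1, dtype=np.int32)
    for offset in range(scores.size):
        window = d[offset:offset + len(q)]
        scores[offset] = np.where(q == window, MATCH, MISMATCH).sum()
    return scores

print(ungapped_scores("MKV", "AMKVLLMKT"))   # peak score at the exact match
```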


Electronics ◽ 2020 ◽ Vol 9 (8) ◽ pp. 1239
Author(s): Yung-Hao Tung, Hung-Chuan Wei, Yen-Wu Ti, Yao-Tung Tsou, Neetesh Saxena, ...

Software-defined networking (SDN) is a new networking architecture with a centralized control mechanism. SDN has proven to be successful in improving not only network performance, but also security. However, centralized control in the SDN architecture is associated with new security vulnerabilities. In particular, user-datagram-protocol (UDP) flooding attacks can be easily launched and cause serious packet-transmission delays, controller-performance loss, and even network shutdown. In response to applications in the Internet of Things (IoT) field, this study considers UDP flooding attacks in SDN and proposes two lightweight countermeasures. The first method sometimes sacrifices address-resolution-protocol (ARP) requests to achieve a high level of security. In the second method, although some packets must be sacrificed when an attack begins before the defense starts, detection of the network state can prevent normal packets from being sacrificed. When blocking a network attack, traffic from the affected port is blocked directly without affecting normal ports. The performance and security of the proposed methods were confirmed by means of extensive experiments. Compared with the situations where no defense or a similar defense method is implemented, our proposed method performed better after a simulated UDP flooding attack in terms of available bandwidth, central-processing-unit (CPU) consumption, and network delay time.
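The abstract does not spell out either countermeasure in code; the sketch below is only a generic per-port rate check of the kind such a defense could build on, with the window length and packet-rate threshold as made-up parameters rather than values from the paper.

```python
# Generic per-port UDP flood heuristic: flag a port whose UDP packet rate
# exceeds a threshold within a sliding window. WINDOW_S and THRESHOLD_PPS
# are illustrative assumptions, not values from the paper.
from collections import defaultdict, deque
import time

WINDOW_S = 1.0          # assumed observation window, seconds
THRESHOLD_PPS = 5000    # assumed packets-per-second cutoff

arrivals = defaultdict(deque)   # port -> timestamps of recent UDP packets

def record_udp_packet(port, now=None):
    """Record one UDP packet seen on `port`; return True if the port looks flooded."""
    now = time.monotonic() if now is None else now
    q = arrivals[port]
    q.append(now)
    while q and now - q[0] > WINDOW_S:
        q.popleft()
    return len(q) / WINDOW_S > THRESHOLD_PPS

# A flagged port would then have its traffic blocked (e.g. via a drop rule
# installed by the controller) without touching other, unaffected ports.
```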


2021 ◽ Vol 15 (1) ◽ pp. 20
Author(s): Robert Karamagi

Phishing has become the most convenient technique that hackers use nowadays to gain access to protected systems. This is because cybersecurity has evolved to the point where even low-cost systems with minimal security investment require quite advanced and sophisticated mechanisms to penetrate technically. Systems are currently equipped with at least some level of security, imposed by security firms with a very high level of expertise in managing common and well-known attacks. This decreases the possible technical attack surface. Nation-states or advanced persistent threats (APTs), organized crime, and black hats possess the finances and skills to penetrate many different systems. However, they are always in need of the most available computing resources, such as central processing unit (CPU) and random-access memory (RAM), so they normally hack computers and hook them into a botnet. This may allow them to perform dangerous distributed denial of service (DDoS) attacks and run brute-force cracking algorithms, which are highly CPU-intensive. They may also use the zombie or drone systems they have hacked to hide their location on the net and gain anonymity by bouncing their traffic through these systems many times a minute. Phishing allows them to extend their pool of compromised systems and increase their power. For an ordinary hacker without the money to invest in sophisticated techniques, exploiting the human factor, which is the weakest link in security, comes in handy. The possibility of successfully manipulating a human into releasing the security that they set up makes the hacker's life very easy, because they do not have to break into the system by force; rather, the owner simply opens the door for them. The objective of this research is to review factors that enhance phishing and improve the probability of its success. We discovered that hackers rely on triggering the emotional responses of their victims through their phishing attacks. We applied artificial intelligence to detect the emotion associated with a phrase or sentence. Our model had good accuracy, which could be improved with a larger dataset containing more emotional sentiments for various phrases and sentences. Our technique may be used to check for emotional manipulation in suspicious emails, improving the confidence interval of suspected phishing emails.
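The paper's corpus and model are not given in the abstract; as one hedged example of the general approach (detecting the emotion carried by a sentence), the sketch below trains a TF-IDF plus logistic-regression classifier on a tiny invented training set that merely stands in for a real emotion-labelled dataset.

```python
# Minimal sketch of sentence-level emotion detection for phishing triage.
# The labelled examples are invented for illustration only.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

texts = [
    "Your account will be suspended immediately unless you act now",
    "Congratulations, you have won a prize, claim it today",
    "Please find attached the minutes from yesterday's meeting",
    "The quarterly report is ready for your review",
]
labels = ["fear", "excitement", "neutral", "neutral"]

model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)),
                      LogisticRegression(max_iter=1000))
model.fit(texts, labels)

print(model.predict(["Act now or your mailbox will be deleted"]))
```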


2008 ◽ Vol 112 (1136) ◽ pp. 599-607
Author(s): K. Takeda, S. J. Newman, J. Kenny, M. Zyskowski

Abstract The development of commodity flight simulation, in the form of PC game technology, continues to advance at a rapid pace. Indeed, the software industry is now being driven primarily by the requirements of gaming, digital media, and other entertainment applications. This has largely been due to the commoditisation of computer hardware, which is apparent when considering recent trends in central processing unit and graphics processor development. The flight simulation industry has benefited from this trend of hardware commoditisation, and will continue to do so for the foreseeable future. It is, however, yet to fully realise the potential for leveraging commodity-off-the-shelf (COTS) software. In this paper the opportunities presenting themselves for the next 25 years of flight simulation are discussed, as the aviation and games software industries' requirements converge. A SWOT (strengths-weaknesses-opportunities-threats) analysis of the commodity flight simulation software industry is presented, including flight modelling, scenery generation, multiplayer technology, artificial intelligence, mission planning, and event handling. Issues such as data portability, economics, licensing, intellectual property, interoperability, developer extensibility, robustness, qualification, and maintainability are addressed. Microsoft Flight Simulator is used as a case study of how commodity flight simulation has been extended to include extensive programmatic access to its core engine. Examples are given of how the base platform of this application can be extended by third-party developers, and of the power this extensibility model provides to the industry. This paper is presented to highlight particular technology trends in the commodity flight simulation industry, the fidelity that commodity flight simulations can provide, and to provide a high-level overview of the strengths and weaknesses thereof.


Author(s): Christof Koch

Animals live in an ever-changing environment to which they must continuously adapt. Adaptation in the nervous system occurs at every level, from ion channels and synapses to single neurons and whole networks. It operates in many different forms and on many time scales. Retinal adaptation, for example, permits us to adjust within minutes to changes of over eight orders of magnitude of brightness, from the dark of a moonless night to high noon. High-level memory—the storage and recognition of a person's face, for example—can also be seen as a specialized form of adaptation (see Squire, 1987). The ubiquity of adaptation in the nervous system is a radical but often underappreciated difference between brains and computers. With few exceptions, all modern computers are patterned according to the architecture laid out by von Neumann (1956). Here the adaptive elements—the random access memory (RAM)—are both physically and conceptually distinct from the processing elements, the central processing unit (CPU). Even proposals to incorporate massive amounts of so-called intelligent RAM (IRAM) directly onto any future processor chip fall well short of the degree of intermixing present in nervous systems (Kozyrakis et al., 1997). It is only within the last few years that a few pioneers have begun to demonstrate the advantages of incorporating adaptive elements at all stages of the computation into electronic circuits (Mead, 1990; Koch and Mathur, 1996; Diorio et al., 1996). For over a century (Tanzi, 1893; Ramón y Cajal, 1909, 1991), the leading hypothesis among both theoreticians and experimentalists has been that synaptic plasticity underlies most long-term behavioral plasticity. It has nevertheless been extremely difficult to establish a direct link between behavioral plasticity and its biophysical substrate, in part because most biophysical research is conducted with in vitro preparations in which a slice of the brain is removed from the organism, while behavior is best studied in the intact animal. In mammalian systems the problem is particularly acute, but combined pharmacological, behavioral, and genetic approaches are yielding promising if as yet incomplete results (Saucier and Cain, 1995; Cain, 1997; Davis, Butcher, and Morris, 1992; Tonegawa, 1995; McHugh et al., 1996; Rogan, Stäubli, and LeDoux, 1997).


2021 ◽ Vol 23 (06) ◽ pp. 546-555
Author(s): Darshan Haragi L, Mahith S, Prof. Sahana B, ...

Kubernetes is a portable, extensible, open-source platform for managing containerized workloads and services that facilitates both declarative configuration and automation. Containers are similar to VMs, but have relaxed isolation properties so that the Operating System (OS) is shared among the applications. Like a VM, a container has its own file system, a share of the Central Processing Unit (CPU), memory, process space, and much more. A Kubernetes cluster is a set of node machines for running containerized applications; each cluster contains a control plane and at least one node. Infrastructure optimization is the process of analyzing and allocating the cloud resources that power applications and workloads in order to maximize performance and limit waste due to over-provisioning. In this paper, a "Movie Review System" web application is designed using GoLang for the backend components and HTML, CSS, and JS for the frontend components. Using AWS, an EC2 instance is created and the web application is deployed onto EC2 and hosted on the instance server. The web application is also deployed on Kubernetes locally using the MiniKube tool. A performance analysis is carried out for both deployments, considering common performance metrics for AWS EC2 / Virtual Machine (VM) and Kubernetes.
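The abstract does not list the exact metrics or tooling used in the comparison; as one possible example of such a measurement, the sketch below times HTTP requests against the two deployments using only the Python standard library. The endpoint URLs are placeholders, not the authors' actual EC2 or Minikube addresses.

```python
# Rough latency probe for comparing the EC2-hosted and the Minikube-hosted
# deployments of the web application. Both URLs below are placeholders.
import statistics
import time
import urllib.request

ENDPOINTS = {
    "aws-ec2":  "http://ec2-example.amazonaws.com/reviews",  # placeholder
    "minikube": "http://192.168.49.2:30080/reviews",         # placeholder NodePort
}

def probe(url, n=20):
    samples = []
    for _ in range(n):
        start = time.perf_counter()
        urllib.request.urlopen(url, timeout=5).read()
        samples.append(time.perf_counter() - start)
    return statistics.mean(samples), statistics.stdev(samples)

for name, url in ENDPOINTS.items():
    mean_s, stdev_s = probe(url)
    print(f"{name}: {mean_s * 1000:.1f} ms +/- {stdev_s * 1000:.1f} ms")
```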


Algorithms ◽ 2019 ◽ Vol 12 (8) ◽ pp. 149
Author(s): Thomas Faict, Erik H. D’Hollander, Bart Goossens

Intel recently introduced the Heterogeneous Architecture Research Platform, HARP. In this platform, the Central Processing Unit and a Field-Programmable Gate Array are connected through a high-bandwidth, low-latency interconnect and both share DRAM memory. For this platform, Open Computing Language (OpenCL), a High-Level Synthesis (HLS) language, is made available. By making use of HLS, a faster design cycle can be achieved compared to programming in a traditional hardware description language. This, however, comes at the cost of having less control over the hardware implementation. We will investigate how OpenCL can be applied to implement a real-time guided image filter on the HARP platform. In the first phase, the performance-critical parameters of the OpenCL programming model are defined using several specialized benchmarks. In a second phase, the guided image filter algorithm is implemented using the insights gained in the first phase. Both a floating-point and a fixed-point implementation were developed for this algorithm, based on a sliding window implementation. This resulted in a maximum floating-point performance of 135 GFLOPS, a maximum fixed-point performance of 430 GOPS and a throughput of HD color images at 74 frames per second.
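The OpenCL kernels themselves are not reproduced in the abstract; for reference, the sketch below is a plain NumPy/SciPy implementation of the grayscale guided image filter (He et al.) that the kernels compute, using a simple box window rather than the paper's streaming sliding-window design.

```python
# CPU reference of the grayscale guided image filter; the FPGA version in the
# paper streams a sliding window instead of filtering whole images at once.
import numpy as np
from scipy.ndimage import uniform_filter

def guided_filter(I, p, radius=4, eps=1e-3):
    """I: guide image, p: image to filter, both float arrays scaled to [0, 1]."""
    size = 2 * radius + 1
    box = lambda x: uniform_filter(x, size=size, mode="reflect")

    mean_I, mean_p = box(I), box(p)
    var_I  = box(I * I) - mean_I * mean_I
    cov_Ip = box(I * p) - mean_I * mean_p

    a = cov_Ip / (var_I + eps)      # per-window linear coefficients
    b = mean_p - a * mean_I
    return box(a) * I + box(b)      # q = mean_a * I + mean_b

# Edge-preserving smoothing with the image as its own guide.
img = np.random.rand(128, 128)
print(guided_filter(img, img).shape)
```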


2020
Author(s): Roudati jannah

Computer hardware is the part of a computer system that can be touched and seen physically and that acts to carry out instructions from software. Computer hardware is also referred to simply as hardware. Hardware contributes to the overall performance of a computer system. In principle, a computer system always comprises input hardware (input device system), processing hardware (processing/central processing unit), output hardware (output device system), optional supplementary devices (peripherals), and data storage (storage device system/external memory).

