A Review of Factors Affecting the Effectiveness of Phishing

2021 ◽  
Vol 15 (1) ◽  
pp. 20
Author(s):  
Robert Karamagi

Phishing has become the most convenient technique that hackers use nowadays to gain access to protected systems. This is because cybersecurity has matured: even low-cost systems with minimal security investment now require quite advanced and sophisticated mechanisms to penetrate by purely technical means. Systems are currently equipped with at least some level of security, imposed by security firms with a very high level of expertise in managing common and well-known attacks. This decreases the possible technical attack surface. Nation-states or advanced persistent threats (APTs), organized crime, and black hats possess the finances and skills to penetrate many different systems. However, they are always in need of computing resources, such as central processing unit (CPU) time and random-access memory (RAM), so they normally hack computers and hook them into a botnet. This may allow them to perform dangerous distributed denial of service (DDoS) attacks and run brute-force cracking algorithms, which are highly CPU intensive. They may also use the zombie or drone systems they have hacked to hide their location on the net and gain anonymity by bouncing their traffic through them many times a minute. Phishing allows them to grow their pool of compromised systems and thereby increase their power. For an ordinary hacker without the money to invest in sophisticated techniques, exploiting the human factor, the weakest link in security, comes in handy. The possibility of successfully manipulating a human into releasing the security that they set up makes the hacker's life very easy, because they do not have to break into the system by force; the owner simply opens the door for them. The objective of this research is to review factors that enhance phishing and improve the probability of its success. We have discovered that hackers rely on triggering the emotional responses of their victims through their phishing attacks. We have applied artificial intelligence to detect the emotion associated with a phrase or sentence. Our model achieved good accuracy, which could be improved with a larger dataset containing more emotional sentiments for various phrases and sentences. Our technique may be used to check for emotional manipulation in suspicious emails, increasing the confidence with which suspected phishing emails are flagged.
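As a rough illustration of the emotion-detection step described above, here is a minimal sketch using TF-IDF features and logistic regression; the abstract does not name the authors' model, so the classifier choice, the phrases, and the labels below are illustrative assumptions, not the actual pipeline.

```python
# Minimal sketch of emotion tagging for suspicious email text, assuming a
# tiny hand-labeled dataset; phrases and labels are hypothetical examples.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical training phrases with the emotional trigger they exploit.
phrases = [
    "Your account will be suspended within 24 hours",    # fear
    "Congratulations, you have won a free gift card",    # excitement
    "Final notice: unpaid invoice attached",             # urgency
    "We detected an unauthorized login to your account", # fear
    "Claim your exclusive reward before midnight",       # urgency
    "You have been selected for a special prize",        # excitement
]
labels = ["fear", "excitement", "urgency", "fear", "urgency", "excitement"]

# TF-IDF features plus logistic regression: a simple, trainable baseline.
model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)),
                      LogisticRegression(max_iter=1000))
model.fit(phrases, labels)

# Tag an incoming email line with its dominant emotional appeal.
print(model.predict(["Urgent: verify your password now or lose access"]))
```

In practice, a confident emotional label such as "fear" or "urgency" on an incoming message would be combined with other phishing indicators rather than used as a verdict on its own.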

Author(s):  
Christof Koch

Animals live in an ever-changing environment to which they must continuously adapt. Adaptation in the nervous system occurs at every level, from ion channels and synapses to single neurons and whole networks. It operates in many different forms and on many time scales. Retinal adaptation, for example, permits us to adjust within minutes to changes of over eight orders of magnitude of brightness, from the dark of a moonless night to high noon. High-level memory—the storage and recognition of a person's face, for example—can also be seen as a specialized form of adaptation (see Squire, 1987). The ubiquity of adaptation in the nervous system is a radical but often underappreciated difference between brains and computers. With few exceptions, all modern computers are patterned according to the architecture laid out by von Neumann (1956). Here the adaptive elements—the random access memory (RAM)—are both physically and conceptually distinct from the processing elements, the central processing unit (CPU). Even proposals to incorporate massive amounts of so-called intelligent RAM (IRAM) directly onto any future processor chip fall well short of the degree of intermixing present in nervous systems (Kozyrakis et al., 1997). It is only within the last few years that a few pioneers have begun to demonstrate the advantages of incorporating adaptive elements at all stages of the computation into electronic circuits (Mead, 1990; Koch and Mathur, 1996; Diorio et al., 1996). For over a century (Tanzi, 1893; Ramón y Cajal, 1909, 1991), the leading hypothesis among both theoreticians and experimentalists has been that synaptic plasticity underlies most long-term behavioral plasticity. It has nevertheless been extremely difficult to establish a direct link between behavioral plasticity and its biophysical substrate, in part because most biophysical research is conducted with in vitro preparations in which a slice of the brain is removed from the organism, while behavior is best studied in the intact animal. In mammalian systems the problem is particularly acute, but combined pharmacological, behavioral, and genetic approaches are yielding promising if as yet incomplete results (Saucier and Cain, 1995; Cain, 1997; Davis, Butcher, and Morris, 1992; Tonegawa, 1995; McHugh et al., 1996; Rogan, Stäubli, and LeDoux, 1997).


2016 ◽  
Vol 6 (1) ◽  
pp. 79-90
Author(s):  
Łukasz Syrocki ◽  
Grzegorz Pestka

Abstract: A ready-to-use set of functions is provided to facilitate solving a generalized eigenvalue problem for symmetric matrices, in order to efficiently calculate eigenvalues and eigenvectors using Compute Unified Device Architecture (CUDA) technology from NVIDIA. An integral part of CUDA is a high-level programming environment that enables tracking code executed both on the Central Processing Unit and on the Graphics Processing Unit. The presented matrix structures allow for an analysis of the advantages of using graphics processors in such calculations.
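To make the underlying computation concrete, below is a minimal CPU-side sketch of the symmetric generalized eigenproblem A v = λ B v solved by reduction to standard form via a Cholesky factorization; this is the textbook reduction that GPU implementations typically accelerate, not the authors' CUDA code.

```python
# Sketch: solve A v = lambda B v (A symmetric, B symmetric positive definite)
# by reducing to a standard symmetric eigenproblem via B = L L^T.
import numpy as np
from scipy.linalg import cholesky, solve_triangular, eigh

rng = np.random.default_rng(0)
n = 6
M = rng.standard_normal((n, n))
A = (M + M.T) / 2                 # symmetric A
B = M @ M.T + n * np.eye(n)       # symmetric positive definite B

L = cholesky(B, lower=True)       # B = L L^T
# C = L^{-1} A L^{-T}: two triangular solves instead of explicit inverses.
Y = solve_triangular(L, A, lower=True)
C = solve_triangular(L, Y.T, lower=True).T

w, Z = eigh(C)                    # standard symmetric eigenproblem C u = w u
V = solve_triangular(L.T, Z, lower=False)   # back-transform: v = L^{-T} u

# Residual of A V = B V diag(w) should be near machine precision.
print(np.max(np.abs(A @ V - B @ V @ np.diag(w))))
```

The same reduction maps naturally onto GPU primitives (triangular solves and a symmetric eigensolver), which is where speedups in studies of this kind typically come from.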


Informatics ◽  
2021 ◽  
Vol 18 (4) ◽  
pp. 17-25
Author(s):  
A. N. Markov ◽  
R. O. Ihnatovich ◽  
A. I. Paramonov

Objectives. The authors aimed to demonstrate the need to implement a video conferencing service in the learning process, to select a video conferencing service, and to conduct a computer experiment with the selected BigBlueButton video conferencing service.

Methods. The problem of choosing among available video conferencing services and software is considered. At the software selection stage, the features of its operation and its requirements for hardware and for integration into internal information systems are indicated. Load testing of the video conferencing service was carried out by the method of volume and stability testing.

Results. Load graphs for the hardware components of the virtual server over a long-term period are presented. The article describes the results of analyzing these graphs in order to identify the key features of the video conferencing service during test and trial operation.

Conclusion. Taking into account the cost of licensing, as well as integration into the e-learning system, a video conferencing service was chosen. A computer experiment was carried out with the selected BigBlueButton video conferencing service. The operational features of the hardware of the virtual server on which the BigBlueButton system is hosted have been determined. Load graphs for the central processing unit, random access memory, and local computer network are presented. Problems of service operation at the stage of increasing load are formulated.
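The abstract does not name the monitoring tooling behind those load graphs, so the following is only a sketch of the kind of host-side sampling that would produce them, using the psutil library as an assumed stand-in.

```python
# Sketch: periodically sample CPU, RAM, and network load on a server, the
# raw data behind load graphs like those described above. psutil is an
# assumption; the authors' actual tooling is not specified.
import time
import psutil

def sample_load(interval_s=5, samples=12):
    psutil.cpu_percent()              # prime the counter; first read is void
    net0 = psutil.net_io_counters()
    for _ in range(samples):
        time.sleep(interval_s)
        cpu = psutil.cpu_percent()            # CPU % since the last call
        ram = psutil.virtual_memory().percent # % of RAM in use
        net1 = psutil.net_io_counters()
        mbps = (net1.bytes_sent + net1.bytes_recv
                - net0.bytes_sent - net0.bytes_recv) * 8 / 1e6 / interval_s
        net0 = net1
        print(f"cpu={cpu:5.1f}%  ram={ram:5.1f}%  net={mbps:8.2f} Mbit/s")

sample_load()
```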


2021 ◽  
Vol 12 ◽  
Author(s):  
Sergio Gálvez ◽  
Federico Agostini ◽  
Javier Caselli ◽  
Pilar Hernandez ◽  
Gabriel Dorado

New High-Performance Computing architectures have recently been developed for commercial central processing units (CPUs). Yet, that has not improved the execution time of widely used bioinformatics applications, like BLAST+. This is due to a lack of optimization between the foundations of the existing algorithms and the internals of the hardware that would allow taking full advantage of the available CPU cores. To exploit the new architectures, algorithms must be revised and redesigned; usually rewritten from scratch. BLVector adapts the high-level concepts of BLAST+ to x86 architectures with AVX-512 to harness their capabilities. A deep, comprehensive study has been carried out to optimize the approach, with a significant reduction in execution time. BLVector reduces the execution time of BLAST+ when aligning up to mid-size protein sequences (∼750 amino acids). The gain in real-scenario cases is 3.2-fold. When applied to longer proteins, BLVector consumes more time than BLAST+, but retrieves a much larger set of results. BLVector and BLAST+ are fine-tuned heuristics. Therefore, the relevant results returned by both are the same, although they behave differently, especially when performing alignments with low scores. Hence, they can be considered complementary bioinformatics tools.
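As a loose illustration of the data-parallel idea behind BLVector, the NumPy sketch below scores a short query against every ungapped placement along a subject sequence at once; AVX-512 performs this kind of work lane-wise in 512-bit registers, and the substitution matrix and sequences here are synthetic placeholders.

```python
# Sketch: data-parallel ungapped scoring of a protein query against all
# placements along a subject, many residue pairs scored per step. NumPy
# stands in for the AVX-512 vector lanes; matrix and sequences are toys.
import numpy as np

ALPHABET = "ARNDCQEGHILKMFPSTWYV"
rng = np.random.default_rng(1)
# Hypothetical symmetric substitution matrix (BLOSUM-like, random here).
S = rng.integers(-4, 12, size=(20, 20))
S = (S + S.T) // 2

def encode(seq):
    return np.array([ALPHABET.index(c) for c in seq], dtype=np.int64)

query = encode("MKTAYIAKQR")
subject = encode("GMKTAYLAKQRNDCE" * 10)

# Rows = placements of the query along the subject, columns = query
# positions; one fancy-indexing pass scores everything at once.
k = len(query)
windows = np.lib.stride_tricks.sliding_window_view(subject, k)
scores = S[windows, query].sum(axis=1)
print("best placement:", scores.argmax(), "score:", scores.max())
```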


2021 ◽  
Vol 13 (11) ◽  
pp. 2107
Author(s):  
Shiyu Wu ◽  
Zhichao Xu ◽  
Feng Wang ◽  
Dongkai Yang ◽  
Gongjian Guo

Global Navigation Satellite System Reflectometry Bistatic Synthetic Aperture Radar (GNSS-R BSAR) is becoming more and more important in remote sensing because of its low power, low mass, low cost, and real-time global coverage capability. The Back Projection Algorithm (BPA) is usually selected as the GNSS-R BSAR imaging algorithm because it can process echo signals of complex geometric configurations. However, its huge computational cost is a challenge for application in GNSS-R BSAR. Graphics Processing Units (GPUs) provide an efficient computing platform for GNSS-R BSAR processing. In this paper, a solution accelerating the BPA of GNSS-R BSAR using a GPU is proposed to improve imaging efficiency, and a matching pre-processing program is proposed to synchronize direct and echo signals and improve imaging quality. To handle the hundreds of gigabytes of data collected over a long synthetic aperture time in fixed-station mode, a stream processing structure is used so that the large data volume fits within limited GPU memory. To improve imaging efficiency, the imaging task is divided into pre-processing and the BPA, performed on the Central Processing Unit (CPU) and the GPU, respectively, and a pixel-oriented parallel processing method in back projection is adopted to avoid memory access conflicts caused by the excessive data volume. The improved BPA with a long synthetic aperture time is verified through simulations and experiments with the GPS-L5 signal. The results show that the proposed accelerating solution takes approximately 128.04 s to produce a 600 m × 600 m image with an 1800 s synthetic aperture time, 156 times faster than the pure-CPU framework, while retaining the same imaging quality as the existing processing solution.
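To make the pixel-oriented mapping concrete, below is a minimal NumPy sketch of bistatic back projection under a heavily simplified geometry; the echo data, positions, and parameters are synthetic placeholders rather than the paper's GPS-L5 processing chain. The per-pixel independence of the inner computation is what a GPU implementation exploits.

```python
# Sketch: pixel-oriented bistatic back projection. Each pixel independently
# accumulates phase-compensated echo samples over all pulses; the synthetic
# echo and toy geometry below are illustrative placeholders only.
import numpy as np

C = 299_792_458.0                         # speed of light, m/s
fs, fc = 20e6, 1176.45e6                  # sample rate; GPS-L5 carrier, Hz
n_pulses, n_samples = 64, 4096
rng = np.random.default_rng(0)
# Stand-in for the range-compressed, synchronized echo signal.
echo = (rng.standard_normal((n_pulses, n_samples))
        + 1j * rng.standard_normal((n_pulses, n_samples)))

tx = np.array([0.0, -2.0e7, 2.0e7])       # GNSS transmitter (toy), m
rx = np.stack([np.linspace(-50.0, 50.0, n_pulses),
               np.zeros(n_pulses),
               np.full(n_pulses, 500.0)], axis=1)   # receiver track, m

# Imaging grid; every pixel is independent, hence the pixel-oriented mapping.
gx, gy = np.meshgrid(np.linspace(-300, 300, 128), np.linspace(-300, 300, 128))
pix = np.stack([gx.ravel(), gy.ravel(), np.zeros(gx.size)], axis=1)
img = np.zeros(pix.shape[0], dtype=complex)

for p in range(n_pulses):
    # Bistatic range transmitter -> pixel -> receiver, referenced to the
    # scene center so sample indices stay near the middle of the window.
    r = np.linalg.norm(pix - tx, axis=1) + np.linalg.norm(pix - rx[p], axis=1)
    r0 = np.linalg.norm(tx) + np.linalg.norm(rx[p])  # scene center at origin
    idx = np.clip(np.round((r - r0) / C * fs).astype(np.int64)
                  + n_samples // 2, 0, n_samples - 1)
    img += echo[p, idx] * np.exp(2j * np.pi * fc * r / C)  # phase compensation

img = np.abs(img).reshape(gx.shape)       # magnitude image, 128 x 128 pixels
```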


2020 ◽  
Vol 14 (3) ◽  
pp. 364-377
Author(s):  
Diego Didona ◽  
Nikolas Ioannou ◽  
Radu Stoica ◽  
Kornilios Kourtis

Solid-state drives (SSDs) are extensively used to deploy persistent data stores, as they provide low-latency random access, high write throughput, high data density, and low cost. Tree-based data structures are widely used to build persistent data stores, and indeed they form the backbone of many of the data management systems used in production and research today. We show that benchmarking a persistent tree-based data structure on an SSD is a complex process, which may easily incur subtle pitfalls that can lead to an inaccurate performance assessment. At a high level, these pitfalls stem from the interaction of complex software running on complex hardware. On the one hand, tree structures implement internal operations that have non-trivial effects on performance. On the other hand, SSDs employ firmware logic to deal with the idiosyncrasies of the underlying flash memory, which are well known to also lead to complex performance dynamics. We identify seven benchmarking pitfalls using RocksDB and WiredTiger, two widespread implementations of an LSM-Tree and a B+Tree, respectively. We show that such pitfalls can lead to incorrect measurements of key performance indicators, hinder the reproducibility and the representativeness of the results, and lead to suboptimal deployments in production environments. We also provide guidelines on how to avoid these pitfalls, to obtain more reliable performance measurements, and to perform more thorough and fair comparisons among different design points.
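One guideline that follows from such pitfalls is to report throughput per time window rather than as a single average, so that warm-up and steady-state phases become visible. The sketch below illustrates only that measurement pattern, with an in-memory dictionary standing in for RocksDB or WiredTiger, whose actual bindings and tuning are out of scope here.

```python
# Sketch: windowed throughput reporting for an insert-only workload, so
# warm-up vs. steady-state behavior is visible instead of being averaged
# away. The dict is a stand-in for a persistent tree-based store.
import os
import time

store = {}  # stand-in for RocksDB / WiredTiger

def bench(total_ops=200_000, window_s=0.5):
    """Insert random key-value pairs and print per-window throughput."""
    done, win_ops = 0, 0
    t0 = tw = time.perf_counter()
    while done < total_ops:
        store[os.urandom(16)] = os.urandom(100)   # 16 B key, 100 B value
        done += 1
        win_ops += 1
        now = time.perf_counter()
        if now - tw >= window_s:
            print(f"[{now - t0:6.2f} s] {win_ops / (now - tw):10.0f} ops/s")
            win_ops, tw = 0, now

bench()
```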


English Today ◽  
2001 ◽  
Vol 17 (3) ◽  
pp. 24-30
Author(s):  
Paul Bruthiaux

The rapid spread of Information Technology (IT) in recent years and the role it plays in many aspects of our lives have not left language use untouched. A manifestation of this role is the degree of linguistic creativity that has accompanied technological innovation. In English, this creativity is seen in the semantic relabeling of established terms such as web, bug, virus, firewall, etc. Another strategy favored by IT lexifiers is the use of lexical items clustered in heavy premodifying groups, as in random access memory, disk operating system, central processing unit, and countless others (White, 1999). In brief, IT – and in particular the World Wide Web – has made it possible for users to break free of many linguistic codes and conventions (Lemke, 1999). For the linguist, the happy outcome of the spread of IT is that it has created an opportunity to analyze the simultaneous development of technology and the language that encodes it, and the influence of one on the other (Stubbs, 1997). To linguists of a broadly functional disposition, this is a chance to confirm the observation that scientific language differs substantially from everyday language. More importantly, it is also a chance to verify the claim made chiefly by Halliday & Martin (1993) that this difference in the characteristics of each of these discourses stems from a radical difference between scientific and common-sense construals of the world around us.


Electronics ◽  
2020 ◽  
Vol 9 (8) ◽  
pp. 1239
Author(s):  
Yung-Hao Tung ◽  
Hung-Chuan Wei ◽  
Yen-Wu Ti ◽  
Yao-Tung Tsou ◽  
Neetesh Saxena ◽  
...  

Software-defined networking (SDN) is a new networking architecture with a centralized control mechanism. SDN has proven to be successful in improving not only network performance, but also security. However, centralized control in the SDN architecture is associated with new security vulnerabilities. In particular, user-datagram-protocol (UDP) flooding attacks can be easily launched and cause serious packet-transmission delays, controller-performance loss, and even network shutdown. In response to applications in the Internet of Things (IoT) field, this study considers UDP flooding attacks in SDN and proposes two lightweight countermeasures. The first method sometimes sacrifices address-resolution-protocol (ARP) requests to achieve a high level of security. In the second method, although some packets must be sacrificed during an attack before the defense starts, detecting the network state can prevent normal packets from being sacrificed. When blocking a network attack, traffic from the affected port is blocked directly without affecting normal ports. The performance and security of the proposed methods were confirmed by means of extensive experiments. In simulated UDP flooding attacks, the proposed method performed better than both an undefended setup and similar defense methods in terms of available bandwidth, central-processing-unit (CPU) consumption, and network delay time.
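As a rough sketch of the rate-based detection idea in the second countermeasure, the snippet below flags a switch port once its UDP packet rate exceeds a threshold and blocks only that port; the threshold value, the event hook, and the omitted controller integration (e.g., installing drop rules via OpenFlow) are all assumptions, not the paper's implementation.

```python
# Sketch: per-port UDP rate-threshold detection. A port whose packet rate
# exceeds THRESHOLD_PPS within a window is blocked, leaving other ports
# unaffected. Hook names and values are hypothetical placeholders.
import time
from collections import defaultdict

THRESHOLD_PPS = 5000        # hypothetical per-port packets-per-second limit
WINDOW_S = 1.0

counts = defaultdict(int)   # UDP packets seen per switch port this window
blocked = set()
window_start = time.monotonic()

def on_udp_packet(port):
    """Called once per UDP packet-in event for a given switch port."""
    global window_start
    if port in blocked:
        return "drop"
    counts[port] += 1
    now = time.monotonic()
    if now - window_start >= WINDOW_S:
        for p, n in counts.items():
            if n / (now - window_start) > THRESHOLD_PPS:
                blocked.add(p)          # block only the attacking port
        counts.clear()
        window_start = now
    return "drop" if port in blocked else "forward"
```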


2021 ◽  
Vol 3 ◽  
Author(s):  
Yudi Zhao ◽  
Ruiqi Chen ◽  
Peng Huang ◽  
Jinfeng Kang

Resistive switching random access memory (RRAM) has emerged for non-volatile memory applications with the features of simple structure, low cost, high density, high speed, low power, and CMOS compatibility. In recent years, RRAM technology has made significant progress in brain-inspired computing paradigms by exploiting its unique physical characteristics, in an attempt to eliminate the energy-intensive and time-consuming data transfer between the processing unit and the memory unit. The design of RRAM-based computing paradigms, however, requires a detailed description of the dominant physical effects correlated with the resistive switching processes to realize the interaction and optimization between devices and algorithms or architectures. This work provides an overview of current progress on device-level resistive switching behaviors, with detailed insights into the physical effects in the resistive switching layer and the multifunctional assistant layer. Circuit-level physics-based compact models are then reviewed for typical binary RRAM and the emerging analog synaptic RRAM, which act as an interface between device and circuit design. Finally, the interaction between device- and system-level performance is addressed by reviewing specific applications of brain-inspired computing systems, including neuromorphic computing, in-memory logic, and stochastic computing.
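To illustrate why RRAM crossbars suit in-memory computing, the idealized sketch below models a read as Ohm's law at each cell and Kirchhoff's current law down each column, which together yield a matrix-vector product in a single analog step; device nonidealities (wire resistance, variation, quantization) are deliberately ignored.

```python
# Sketch: idealized RRAM crossbar read as an analog matrix-vector product.
# Ohm's law per cell (I = G * V) and Kirchhoff's current law per column
# (currents sum down each bitline) compute G^T v in one step, with no
# data movement between memory and processor.
import numpy as np

rng = np.random.default_rng(42)
rows, cols = 8, 4
G = rng.uniform(1e-6, 1e-4, size=(rows, cols))  # cell conductances, siemens
v = rng.uniform(0.0, 0.2, size=rows)            # read voltages per row, volts

# Column currents: i_j = sum_k G[k, j] * v[k] (one analog step per column).
i = G.T @ v
print("column currents (A):", i)
```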


2020 ◽  
Author(s):  
Mark Crawford

Abstract: A positive pressure protective hood system was purposefully constructed only from materials commonly found worldwide, including bendable aluminum mesh, elastic head straps, Velcro tape, a plastic sheet, a furnace filter, and two computer central processing unit (CPU) cooling fans. The practical advantages of this system are the ready availability of the materials in the inventories of most electronics and hardware outlets, ease of assembly (particularly if choosing to employ 3D printing for the fan enclosure and/or making several units at once with a defined workflow), and the high probability of the materials being available in current or prospective personal protective equipment (PPE)-depleted regions. An experiment with identical fire detectors showed adequate inner isolation of the hood prototype from paper combustion particulates, which have a size range slightly smaller than putative coronavirus aerosols, for at least 90 seconds. The theoretical advantages of this system include a significant reduction in healthcare provider exposure to coronavirus-containing respiratory fomites, respiratory droplets, and aerosols (vs. traditional static masks and shields) during high-risk procedures such as endotracheal intubation or routine care of an upright and coughing patient. Additionally, the assembly eliminates contact exposure to coronavirus fomites due to whole-head coverage from a hood system.

