Choosing a video conferencing service and its adaptation for educational institution

Informatics ◽  
2021 ◽  
Vol 18 (4) ◽  
pp. 17-25
Author(s):  
A. N. Markov ◽  
R. O. Ihnatovich ◽  
A. I. Paramonov

Objectives. The authors aimed to demonstrate the need to introduce a video conferencing service into the learning process, to select a video conferencing service, and to conduct a computer experiment with the selected BigBlueButton video conferencing service. Methods. The problem of choosing a video conferencing service from the list of available video conferencing systems and software is considered. At the software selection stage, the features of its operation and the requirements for hardware and for integration into internal information systems are indicated. Load testing of the video conferencing service was carried out using volume and stability testing methods. Results. Load graphs for the hardware components of the virtual server over a long-term period are presented. The article describes the results of the graph analysis, aimed at identifying the key features of the video conferencing service during test and trial operation. Conclusion. Taking into account the cost of licensing, as well as integration into the e-learning system, a video conferencing service was chosen. A computer experiment was carried out with the selected BigBlueButton video conferencing service. The operating features of the virtual server hardware (on which the BigBlueButton system is hosted) have been determined. Load graphs for the central processing unit, random access memory, and local computer network are presented. Problems of service operation under increasing load are formulated.
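
The article itself does not include the monitoring code, but a long-term load profile of the kind described (CPU, RAM, and network utilisation of the virtual server) can be collected with a short sampling script such as the hedged sketch below; the psutil library, the sampling interval, and the output file name are assumptions, not details from the article.

```python
# Illustrative sketch: periodically sample CPU, RAM and network load of the
# server hosting the video conferencing service (assumes the psutil package).
import csv
import time

import psutil


def sample_load(duration_s: int = 3600, interval_s: float = 1.0,
                out_path: str = "load_profile.csv") -> None:
    """Write a CSV of CPU %, RAM % and cumulative network bytes over time."""
    with open(out_path, "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["t", "cpu_percent", "ram_percent",
                         "net_bytes_sent", "net_bytes_recv"])
        start = time.time()
        while time.time() - start < duration_s:
            cpu = psutil.cpu_percent(interval=interval_s)  # blocks for interval_s
            ram = psutil.virtual_memory().percent
            net = psutil.net_io_counters()
            writer.writerow([round(time.time() - start, 1), cpu, ram,
                             net.bytes_sent, net.bytes_recv])


if __name__ == "__main__":
    sample_load(duration_s=60)  # short demonstration run
```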

Author(s):  
Yu Wang

In this chapter we will focus on examining computer network traffic and data. A computer network combines a set of computers and physically and logically connects them together to exchange information. Network traffic acquired from a network system provides information on data communications within the network and between networks or individual computers. The most common data types are log data, such as Kerberos logs, transmission control protocol/Internet protocol (TCP/IP) logs, central processing unit (CPU) usage data, event logs, user command data, Internet visit data, operating system audit trail data, intrusion detection and prevention system (IDS/IPS) logs, NetFlow data, and simple network management protocol (SNMP) reporting data. Such information is unique and valuable for network security, specifically for intrusion detection and prevention. Although we have already presented some essential challenges in collecting such data in Chapter I, we will discuss traffic data, as well as other related data, in greater detail in this chapter. Specifically, we will describe system-specific and user-specific data types in Sections System-Specific Data and User-Specific Data, respectively, and provide detailed information on publicly available data in Section Publicly Available Data.
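
As a minimal illustration of one of the data types listed above, the sketch below models NetFlow-style flow records and a crude volume-based screening rule; the field names and the threshold are assumptions chosen for illustration and are not drawn from the chapter.

```python
# Illustrative sketch (not from the chapter): a minimal representation of
# NetFlow-style flow records, one of the traffic data types listed above.
from dataclasses import dataclass


@dataclass
class FlowRecord:
    src_ip: str
    dst_ip: str
    src_port: int
    dst_port: int
    protocol: str      # e.g. "TCP", "UDP"
    packets: int
    bytes: int
    duration_s: float


def high_volume_flows(flows, byte_threshold=10_000_000):
    """Very crude screening rule: flag flows moving more data than a threshold.
    Real intrusion detection combines many of the data sources listed above."""
    return [f for f in flows if f.bytes > byte_threshold]


flows = [
    FlowRecord("10.0.0.5", "10.0.0.9", 51512, 443, "TCP", 120, 85_000, 2.1),
    FlowRecord("10.0.0.7", "198.51.100.4", 40001, 22, "TCP", 90_000, 64_000_000, 300.0),
]
print(high_volume_flows(flows))  # -> the second, unusually large flow
```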


English Today ◽  
2001 ◽  
Vol 17 (3) ◽  
pp. 24-30
Author(s):  
Paul Bruthiaux

The rapid spread of Information Technology (IT) in recent years and the role it plays in many aspects of our lives has not left language use untouched. A manifestation of this role is the degree of linguistic creativity that has accompanied technological innovation. In English, this creativity is seen in the semantic relabeling of established terms such as web, bug, virus, firewall, etc. Another strategy favored by IT lexifiers is the use of lexical items clustered in heavy premodifying groups, as in random access memory, disk operating system, central processing unit, and countless others (White, 1999). In brief, IT – and in particular the World Wide Web – has made it possible for users to break free of many linguistic codes and conventions (Lemke, 1999). For the linguist, the happy outcome of the spread of IT is that it has created an opportunity to analyze the simultaneous development of technology and the language that encodes it, and the influence of one on the other (Stubbs, 1997). To linguists of a broadly functional disposition, this is a chance to confirm the observation that scientific language differs substantially from everyday language. More importantly, it is also a chance to verify the claim made chiefly by Halliday & Martin (1993) that this difference in the characteristics of each of these discourses stems from a radical difference between scientific and common-sense construals of the world around us.


2021 ◽  
Vol 15 (1) ◽  
pp. 20
Author(s):  
Robert Karamagi

Phishing has become the most convenient technique that hackers use nowadays to gain access to protected systems. This is because cybersecurity has evolved: even low-cost systems with minimal security investment now require quite advanced and sophisticated mechanisms to penetrate technically. Systems today are equipped with at least some level of security, provided by security firms with a high level of expertise in managing common and well-known attacks, and this shrinks the possible technical attack surface. Nation-states or advanced persistent threats (APTs), organized crime, and black hats possess the finances and skills to penetrate many different systems. However, they are always in need of computing resources, such as central processing unit (CPU) time and random-access memory (RAM), so they typically compromise computers and hook them into a botnet. This allows them to launch dangerous distributed denial of service (DDoS) attacks and run brute-force cracking algorithms, which are highly CPU intensive. They may also use the zombie or drone systems they have compromised to hide their location on the net, gaining anonymity by bouncing their traffic through them many times a minute. Phishing allows them to grow their pool of compromised systems and thereby increase their power. For an ordinary hacker without the money to invest in sophisticated techniques, exploiting the human factor, the weakest link in security, comes in handy. Successfully manipulating a person into relaxing the security they have set up makes the hacker's life very easy: they do not have to break into the system by force, because the owner simply opens the door for them. The objective of this research is to review the factors that enhance phishing and improve its probability of success. We find that hackers rely on triggering emotional responses in their victims through their phishing attacks. We apply artificial intelligence to detect the emotion associated with a phrase or sentence. Our model achieved good accuracy, which could be improved with a larger dataset covering more emotional sentiments for various phrases and sentences. Our technique may be used to check for emotional manipulation in suspicious emails and so improve the confidence with which emails are flagged as phishing.
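
The abstract does not name the model or dataset used, so the sketch below shows only one plausible way to detect the emotion behind a phrase with a TF-IDF bag-of-words classifier; the training phrases, labels, and model choice are invented for illustration and are not the authors' method.

```python
# Minimal sketch of emotion detection for phishing screening (assumption:
# a TF-IDF + logistic regression classifier and a tiny hand-labelled sample,
# since the abstract does not specify either).
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical training phrases labelled with the dominant emotion they trigger.
texts = [
    "Your account will be suspended in 24 hours unless you verify now",
    "Congratulations! You have won a prize, claim it immediately",
    "Please find attached the minutes from yesterday's meeting",
    "Urgent: unusual sign-in detected, confirm your password here",
    "Lunch is at noon on Friday, see you there",
    "Final notice: your invoice is overdue, pay now to avoid penalties",
]
labels = ["fear", "excitement", "neutral", "fear", "neutral", "fear"]

model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)),
                      LogisticRegression(max_iter=1000))
model.fit(texts, labels)

# Emails whose predicted emotion is a strong manipulation trigger (e.g. fear)
# can raise the confidence that a suspicious message is phishing.
print(model.predict(["Act now or your mailbox will be permanently deleted"]))
```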


Author(s):  
Christof Koch

Animals live in an ever-changing environment to which they must continuously adapt. Adaptation in the nervous system occurs at every level, from ion channels and synapses to single neurons and whole networks. It operates in many different forms and on many time scales. Retinal adaptation, for example, permits us to adjust within minutes to changes of over eight orders of magnitude of brightness, from the dark of a moonless night to high noon. High-level memory—the storage and recognition of a person's face, for example—can also be seen as a specialized form of adaptation (see Squire, 1987). The ubiquity of adaptation in the nervous system is a radical but often underappreciated difference between brains and computers. With few exceptions, all modern computers are patterned according to the architecture laid out by von Neumann (1956). Here the adaptive elements—the random access memory (RAM)—are both physically and conceptually distinct from the processing elements, the central processing unit (CPU). Even proposals to incorporate massive amounts of so-called intelligent RAM (IRAM) directly onto any future processor chip fall well short of the degree of intermixing present in nervous systems (Kozyrakis et al., 1997). It is only within the last few years that a few pioneers have begun to demonstrate the advantages of incorporating adaptive elements at all stages of the computation into electronic circuits (Mead, 1990; Koch and Mathur, 1996; Diorio et al., 1996). For over a century (Tanzi, 1893; Ramón y Cajal, 1909, 1991), the leading hypothesis among both theoreticians and experimentalists has been that synaptic plasticity underlies most long-term behavioral plasticity. It has nevertheless been extremely difficult to establish a direct link between behavioral plasticity and its biophysical substrate, in part because most biophysical research is conducted with in vitro preparations in which a slice of the brain is removed from the organism, while behavior is best studied in the intact animal. In mammalian systems the problem is particularly acute, but combined pharmacological, behavioral, and genetic approaches are yielding promising if as yet incomplete results (Saucier and Cain, 1995; Cain, 1997; Davis, Butcher, and Morris, 1992; Tonegawa, 1995; McHugh et al., 1996; Rogan, Stäubli, and LeDoux, 1997).


Author(s):  
Wesley Petersen ◽  
Peter Arbenz

Since first proposed by Gordon Moore (an Intel founder) in 1965, his law [107] that the number of transistors on microprocessors doubles roughly every one to two years has proven remarkably astute. Its corollary, that central processing unit (CPU) performance would also double every two years or so, has also remained prescient. Figure 1.1 shows Intel microprocessor data on the number of transistors beginning with the 4004 in 1972. Figure 1.2 indicates that when one includes multi-processor machines and algorithmic development, computer performance is actually better than Moore's 2-year performance doubling time estimate. Alas, however, in recent years there has developed a disagreeable mismatch between CPU and memory performance: CPUs now outperform memory systems by orders of magnitude according to some reckoning [71]. This is not completely accurate, of course: it is mostly a matter of cost. In the 1980s and 1990s, Cray Research Y-MP series machines had well-balanced CPU to memory performance. Likewise, NEC (Nippon Electric Corp.), using CMOS (see glossary, Appendix F) and direct memory access, has well-balanced CPU/memory performance. ECL (see glossary, Appendix F) and CMOS static random access memory (SRAM) systems were and remain expensive and, like their CPU counterparts, have to be kept carefully cool. Worse, because they have to be cooled, close packing is difficult and such systems tend to have small storage per volume. Almost any personal computer (PC) these days has a much larger memory than supercomputer memory systems of the 1980s or early 1990s. In consequence, nearly all memory systems these days are hierarchical, frequently with multiple levels of cache. Figure 1.3 shows the diverging trends between CPU and memory performance. Dynamic random access memory (DRAM) in some variety has become standard for bulk memory. There are many projects and ideas about how to close this performance gap, for example, the IRAM [78] and RDRAM projects [85]. We are confident that this disparity between CPU and memory access performance will eventually be tightened, but in the meantime, we must deal with the world as it is.
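
As a back-of-the-envelope illustration of the doubling rule quoted above, the sketch below projects transistor counts forward from the first data point in Figure 1.1; the 4004's transistor count (roughly 2,300) and the exact two-year doubling period are assumptions for the sake of the arithmetic, not the book's own figures.

```python
# Rough arithmetic sketch of Moore's law as stated above: transistor counts
# doubling roughly every two years, starting from the Intel 4004 data point.
def projected_transistors(year: int, base_year: int = 1972,
                          base_count: int = 2300,
                          doubling_years: float = 2.0) -> float:
    """Project a transistor count assuming exponential doubling."""
    return base_count * 2 ** ((year - base_year) / doubling_years)


for year in (1972, 1982, 1992, 2002):
    print(year, f"{projected_transistors(year):,.0f}")
```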


Materials ◽  
2020 ◽  
Vol 13 (16) ◽  
pp. 3532 ◽  
Author(s):  
Qiao-Feng Ou ◽  
Bang-Shu Xiong ◽  
Lei Yu ◽  
Jing Wen ◽  
Lei Wang ◽  
...  

Recent progress in the development of artificial intelligence technologies, aided by deep learning algorithms, has led to an unprecedented revolution in neuromorphic circuits, bringing us ever closer to brain-like computers. However, the vast majority of advanced algorithms still have to run on conventional computers. Thus, their capacities are limited by what is known as the von Neumann bottleneck, where the central processing unit for data computation and the main memory for data storage are separated. Emerging forms of non-volatile random access memory, such as ferroelectric random access memory, phase-change random access memory, magnetic random access memory, and resistive random access memory, are widely considered to offer the best prospect of circumventing the von Neumann bottleneck. This is due to their ability to merge storage and computational operations, such as Boolean logic. This paper reviews the most common kinds of non-volatile random access memory and their physical principles, together with their relative pros and cons when compared with conventional CMOS-based (complementary metal oxide semiconductor) circuits. Their potential application to Boolean logic computation is then considered in terms of their working mechanism, circuit design, and performance metrics. The paper concludes by envisaging the prospects offered by non-volatile devices for future brain-inspired and neuromorphic computation.
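
As a concrete, highly simplified illustration of merging storage and Boolean logic, the sketch below simulates stateful material-implication (IMPLY) logic, one scheme frequently discussed for resistive memories, where the result of an operation is written back into a memory cell itself; it is offered only as an illustrative assumption, not a summary of the review's circuits.

```python
# Illustrative sketch (not from the review): stateful IMPLY logic, where the
# output overwrites one of the stored operands, so computation happens in the
# memory itself rather than in a separate processing unit.
def imply(p: bool, q: bool) -> bool:
    """q <- p IMPLY q, i.e. (NOT p) OR q, computed 'in place' on cell q."""
    return (not p) or q


def nand(p: bool, q: bool) -> bool:
    """NAND built from two IMPLY steps and one cell reset to False,
    showing that IMPLY plus FALSE is functionally complete."""
    s = False              # auxiliary cell initialised to logic 0
    s = imply(p, s)        # s = NOT p
    return imply(q, s)     # (NOT q) OR (NOT p) = NAND(p, q)


for p in (False, True):
    for q in (False, True):
        print(p, q, nand(p, q))
```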


Author(s):  
Aparna Shashikant Joshi ◽  
Shayamala Devi Munisamy

In cloud computing, load balancing among resources is required to schedule tasks, which is a key challenge. This paper proposes a dynamic degree memory balanced allocation (D2MBA) algorithm, which allocates a virtual machine (VM) to the best suited host, based on the available random-access memory (RAM) and million instructions per second (MIPS) capacity of the host, and allocates a task to the best suited VM by considering the balanced condition of the VMs. The proposed D2MBA algorithm has been simulated using the CloudSim simulation tool by varying the number of tasks while keeping the number of VMs constant, and vice versa. The D2MBA algorithm is compared with other load balancing algorithms, namely Round Robin (RR) and dynamic degree balance central processing unit (CPU) based (D2B_CPU based), with respect to performance parameters such as execution cost, degree of imbalance, and makespan time. It is found that the D2MBA algorithm achieves a large reduction in execution cost, degree of imbalance, and makespan time as compared with the RR and D2B_CPU based algorithms.
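
The abstract does not give pseudocode for D2MBA, but the placement idea it describes can be sketched roughly as below; the scoring rules, class names, and example values are assumptions for illustration only, not the paper's actual algorithm.

```python
# Hedged sketch of the allocation idea described above: place a VM on the host
# with the most spare RAM and MIPS, then assign each task to the least loaded
# ("balanced") VM.
from dataclasses import dataclass, field


@dataclass
class Host:
    name: str
    free_ram_mb: int
    free_mips: int


@dataclass
class VM:
    name: str
    assigned_tasks: list = field(default_factory=list)


def place_vm(hosts):
    """Choose the host with the largest spare RAM, breaking ties by spare MIPS."""
    return max(hosts, key=lambda h: (h.free_ram_mb, h.free_mips))


def assign_task(vms, task):
    """Keep VMs balanced by giving the task to the least loaded VM."""
    vm = min(vms, key=lambda v: len(v.assigned_tasks))
    vm.assigned_tasks.append(task)
    return vm


hosts = [Host("h1", 2048, 1000), Host("h2", 4096, 2500)]
vms = [VM("vm1"), VM("vm2")]
print(place_vm(hosts).name)          # -> h2
print(assign_task(vms, "t1").name)   # -> vm1
```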


2020 ◽  
Vol 6 (2) ◽  
pp. 61-66
Author(s):  
Shumaya Resty Ramadhani

The rapid development of technology has changed the habits of technology users. Devices have evolved as well, from supercomputers to small smartphones with comparable performance. The large number of technology enthusiasts switching to these smart devices opens up considerable opportunities for application development. Mobile applications must still be able to run quickly and lightly even on older smartphones or devices with limited specifications, especially applications that rely on visualisation and animation to attract users' interest. The iOS operating system provides CoreFramework, which supports creating large numbers of objects and animations quickly and with little overhead. Therefore, a simple general-graph visualisation application was built with a CoreFramework implementation to test how much the framework affects application quality, especially on older device series. The test criteria used three basic variables: time, and the Central Processing Unit (CPU) and Random Access Memory (RAM) allocations used. The test results show that although CoreFramework uses the Graphics Processing Unit (GPU) for its processing, the application still needs at least 2 GB of RAM on the smartphone to remain responsive. This is because when RAM capacity is small, the application uses a significant amount of CPU in order to keep running well.


2020 ◽  
Author(s):  
Roudati jannah

Computer hardware is the part of a computer system that can be touched and seen physically, and that acts to carry out instructions from software. Hardware plays an overall role in the performance of a computer system. In principle, a computer system always has input hardware (input device system), processing hardware (central processing unit), output hardware (output device system), optional additional devices (peripherals), and data storage (storage device system/external memory).

