Employing Dynamic Logic in Cybersecurity

10.28945/3927 ◽  
2017 ◽  
Vol 6 ◽  
pp. 11
Author(s):  
Grandon Gill ◽  
Bernardo Rodrigues

A physicist who studies the human brain has adapted dynamic logic, a machine learning algorithm he developed, to run on a test database of network traffic. The algorithm has proven surprisingly adept at identifying malware traffic. Now he ponders how the project might move forward, given that cybersecurity is entirely outside his domain of expertise (and interest).

Dr. Leonid Perlovsky, distinguished physicist and cognitive scientist, pondered this question, which could have a significant impact on his research direction in the years to come. Over the past few decades, he had developed and refined algorithms for distinguishing objects in images, an approach that had found its way into various classified U.S. Department of Defense (DoD) applications. Now he was looking for new opportunities to see his research applied, allowing it to evolve further. One of the most interesting aspects of Perlovsky’s approach was that it was very similar to that taken by the human brain in processing sensory information. It began with a very vague model of what might or might not be present in the data being examined. Through successive iterations, analogous to the layers of processing used in human sensory systems, the patterns in the data corresponding to objects would grow more and more distinct until, finally, they became recognizable. Unlike most statistical techniques, this approach—termed “dynamic logic” by Perlovsky—did not require that a model be specified in advance. As such, it was well suited to contexts that required discovery.

One application of dynamic logic that particularly impressed him involved the detection of malware in network packet data. Using an externally provided database of this traffic, his algorithm had successfully identified the presence of malware with almost eerie precision, and with substantially less processing than competing techniques. This suggested that dynamic logic could well become a powerful tool in the arsenal of IT professionals seeking to protect their systems from hackers. What other cybersecurity-related opportunities might be well suited to this tool?

Identifying potential opportunities represented only part of the challenge of putting dynamic logic to work. After letting the project lie dormant for several years, he had recently been approached by an energetic Brazilian master’s student who had identified ways that dynamic logic (DL) could be used. The student had also established a DL open source project on his own initiative. If that project were to move forward, Perlovsky would need to provide some encouragement and guidance. But he had his own set of questions. Was the open source path the right way to proceed? Which potential application should be given highest priority? Should government or commercial funding be pursued? And the big question: Perlovsky readily acknowledged that he was no cybersecurity expert. Given that he was already actively pursuing grants from the DoD and the National Institutes of Health (NIH), would it really make sense to split his attention further and look toward tackling an entirely new class of problems?
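The "vague-to-crisp" iteration described above is closely related to fitting a set of candidate models whose vagueness (variance) is annealed from high to low across iterations. The sketch below illustrates that generic idea with a simplified Gaussian-mixture-style fit in Python; it is not Perlovsky's published dynamic logic formulation, and the function name, annealing schedule, and parameter values are illustrative assumptions only.

import numpy as np

def vague_to_crisp_fit(x, n_models=2, n_iter=50, start_sigma=5.0, end_sigma=0.3):
    """Fit 1-D Gaussian 'object models' to data x, annealing from vague to crisp."""
    rng = np.random.default_rng(0)
    means = rng.uniform(x.min(), x.max(), n_models)
    # Vagueness (sigma) follows a fixed high-to-low schedule instead of being free.
    for sigma in np.geomspace(start_sigma, end_sigma, n_iter):
        # Fuzzy association weights: how strongly each point "belongs" to each model.
        d = x[:, None] - means[None, :]                 # shape (n_points, n_models)
        logp = -0.5 * (d / sigma) ** 2
        w = np.exp(logp - logp.max(axis=1, keepdims=True))
        w /= w.sum(axis=1, keepdims=True)
        # Update each model from its fuzzy associations.
        means = (w * x[:, None]).sum(axis=0) / w.sum(axis=0)
    return means

# Example: two "objects" hidden in noisy 1-D measurements.
rng = np.random.default_rng(1)
data = np.concatenate([rng.normal(-2.0, 0.3, 200), rng.normal(3.0, 0.3, 200)])
print(vague_to_crisp_fit(data))

Starting maximally vague keeps the early iterations from committing to spurious structure, which is the property the case highlights as making this style of algorithm suitable for discovery-oriented problems.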



Author(s):  
J. Anthony VanDuzer

Summary: Recently, there has been a proliferation of international agreements imposing minimum standards on states in respect of their treatment of foreign investors and allowing investors to initiate dispute settlement proceedings where a state violates these standards. Of greatest significance to Canada is Chapter 11 of the North American Free Trade Agreement, which provides both standards for state behaviour and the right to initiate binding arbitration. Since 1996, four cases have been brought under Chapter 11. This note describes the Chapter 11 process and suggests some of the issues that may arise as it is increasingly resorted to by investors.



Author(s):  
Jonathan Hopkin

Recent elections in the advanced Western democracies have undermined the basic foundations of political systems that had previously beaten back all challenges—from both the Left and the Right. The election of Donald Trump to the US presidency, only months after the United Kingdom voted to leave the European Union, signaled a dramatic shift in the politics of the rich democracies. This book traces the evolution of this shift and argues that it is a long-term result of abandoning the postwar model of egalitarian capitalism in the 1970s. That shift entailed weakening the democratic process in favor of an opaque, technocratic form of governance that allows voters little opportunity to influence policy. With the financial crisis of the late 2000s, these arrangements became unsustainable, as incumbent politicians were unable to provide solutions to economic hardship. Electorates demanded change, and it had to come from outside the system. Using a comparative approach, the text explains why different kinds of anti-system politics emerge in different countries and how political and economic factors impact the degree of electoral instability that emerges. Finally, it discusses the implications of these changes, arguing that the only way for mainstream political forces to survive is for them to embrace a more activist role for government in protecting societies from economic turbulence.



Agronomy ◽  
2021 ◽  
Vol 11 (5) ◽  
pp. 952
Author(s):  
Lia Duarte ◽  
Ana Cláudia Teodoro ◽  
Joaquim J. Sousa ◽  
Luís Pádua

In a precision agriculture context, the amount of geospatial data available can be difficult to interpret in order to understand the crop variability within a given terrain parcel, raising the need for specific tools for data processing and analysis. This is the case for data acquired from Unmanned Aerial Vehicles (UAV), in which the high spatial resolution along with data from several spectral wavelengths makes data interpretation a complex process regarding vegetation monitoring. Vegetation Indices (VIs) are usually computed, helping in the vegetation monitoring process. However, a crop plot is generally composed of several non-crop elements, which can bias the data analysis and interpretation. By discarding non-crop data, it is possible to compute the vigour distribution for a specific crop within the area under analysis. This article presents QVigourMaps, a new open source application developed to generate useful outputs for precision agriculture purposes. The application was developed in the form of a QGIS plugin, allowing the creation of vigour maps, vegetation distribution maps and prescription maps based on the combination of different VIs and height information. Multi-temporal data from a vineyard plot and a maize field were used as case studies in order to demonstrate the potential and effectiveness of the QVigourMaps tool. The presented application can contribute to making the right management decisions by providing indicators of crop variability, and the outcomes can be used in the field to apply site-specific treatments according to the levels of vigour.
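As a rough illustration of the kind of processing the abstract describes (computing a vegetation index and discarding non-crop pixels before summarizing vigour), the following Python sketch derives NDVI from red and near-infrared reflectance arrays and masks low-NDVI pixels. The band inputs, the 0.4 crop/non-crop threshold, and the function name are assumptions made for illustration; they are not taken from the QVigourMaps implementation.

import numpy as np

def ndvi_vigour_map(red, nir, crop_threshold=0.4):
    """Compute NDVI and mask out likely non-crop pixels (soil, shadows, paths)."""
    red = red.astype(float)
    nir = nir.astype(float)
    ndvi = (nir - red) / np.maximum(nir + red, 1e-6)   # avoid divide-by-zero
    crop_mask = ndvi >= crop_threshold                  # assumed crop/non-crop cut-off
    vigour = np.where(crop_mask, ndvi, np.nan)          # keep crop pixels only
    return vigour, crop_mask

# Example with random reflectance rasters standing in for UAV imagery.
rng = np.random.default_rng(1)
red = rng.uniform(0.05, 0.3, (100, 100))
nir = rng.uniform(0.2, 0.6, (100, 100))
vigour, mask = ndvi_vigour_map(red, nir)
print("mean crop vigour:", np.nanmean(vigour))

In practice the masked index values (or their combination with canopy height) would then be binned into vigour classes to produce the distribution and prescription maps the plugin generates.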



2021 ◽  
Vol 11 (8) ◽  
pp. 960
Author(s):  
Mina Kheirkhah ◽  
Philipp Baumbach ◽  
Lutz Leistritz ◽  
Otto W. Witte ◽  
Martin Walter ◽  
...  

Studies investigating human brain response to emotional stimuli—particularly high-arousing versus neutral stimuli—have obtained inconsistent results. The present study was the first to combine magnetoencephalography (MEG) with the bootstrapping method to examine the whole brain and identify the cortical regions involved in this differential response. Seventeen healthy participants (11 females, aged 19 to 33 years; mean age, 26.9 years) were presented with high-arousing emotional (pleasant and unpleasant) and neutral pictures, and their brain responses were measured using MEG. When random resampling bootstrapping was performed for each participant, the greatest differences between high-arousing emotional and neutral stimuli during M300 (270–320 ms) were found to occur in the right temporo-parietal region. This finding was observed in response to both pleasant and unpleasant stimuli. The results, which may be more robust than previous studies because of bootstrapping and examination of the whole brain, reinforce the essential role of the right hemisphere in emotion processing.
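The abstract does not give the exact resampling procedure, so the following Python sketch shows only a generic trial-level bootstrap for comparing window-averaged responses between emotional and neutral conditions across sensors. The array shapes, number of resamples, and the percentile confidence interval are assumptions for illustration, not the authors' pipeline.

import numpy as np

def bootstrap_condition_difference(emotional, neutral, n_boot=2000, seed=0):
    """Bootstrap the difference in mean window-averaged amplitude between two
    sets of trials (arrays of shape [n_trials, n_sensors])."""
    rng = np.random.default_rng(seed)
    diffs = np.empty((n_boot, emotional.shape[1]))
    for b in range(n_boot):
        e = emotional[rng.integers(0, len(emotional), len(emotional))]
        n = neutral[rng.integers(0, len(neutral), len(neutral))]
        diffs[b] = e.mean(axis=0) - n.mean(axis=0)
    # Percentile confidence interval per sensor.
    lo, hi = np.percentile(diffs, [2.5, 97.5], axis=0)
    return diffs.mean(axis=0), lo, hi

# Example: 60 emotional and 60 neutral trials, 102 sensors, each value assumed to be
# the response already averaged over a 270-320 ms window.
rng = np.random.default_rng(1)
emotional = rng.normal(1.2, 1.0, (60, 102))
neutral = rng.normal(1.0, 1.0, (60, 102))
mean_diff, lo, hi = bootstrap_condition_difference(emotional, neutral)
print("sensors with CI excluding zero:", int(np.sum((lo > 0) | (hi < 0))))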



2019 ◽  
Vol 40 (Supplement_1) ◽  
Author(s):  
S Mehta ◽  
R Botelho ◽  
F Fernandez ◽  
C Villagran ◽  
A Frauenfelder ◽  
...  

Background: We have previously reported the use of Artificial Intelligence (AI)-guided EKG analysis for detection of ST-Elevation Myocardial Infarction (STEMI). To demonstrate the diagnostic value of our algorithm, we compared AI predictions with reports that were confirmed as STEMI. Purpose: To demonstrate the proficiency of AI for detecting STEMI in a standard 12-lead EKG. Methods: An observational, retrospective, case-control study. Sample: 5,087 EKG records, including 2,543 confirmed STEMI cases obtained via feedback from health centers following appropriate patient management (thrombolysis, primary Percutaneous Coronary Intervention (PCI), pharmacoinvasive therapy, or coronary artery bypass surgery). Records excluded patient and medical information. The sample was derived from the International Telemedical Systems (ITMS) database. The LUMENGT-AI algorithm was employed. Preprocessing: detection of QRS complexes by a wavelet system, and segmentation of each EKG into individual heartbeats (53,667 beats in total) with a fixed window of 0.4 s to the left and 0.9 s to the right of the main QRS. Classification: a 1-D convolutional neural network was implemented; “STEMI” and “Not-STEMI” classes were considered for each heartbeat, and the individual probabilities were aggregated to generate the final label for each record. Training and testing: 90% and 10% of the sample were used, respectively. Experiments: Intel i7-8750H processor at 2.21 GHz, 16 GB RAM, Windows 10, with an NVIDIA GTX 1070 GPU (8 GB). Results: The model yielded an accuracy of 97.2%, a sensitivity of 95.8%, and a specificity of 98.5%. Conclusion(s): Our AI-based algorithm can reliably diagnose STEMI and could preclude the need for a cardiologist for screening and diagnosis, especially in the pre-hospital setting.
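A minimal sketch of a per-beat 1-D convolutional classifier of the kind described, with per-beat probabilities aggregated into a record-level label, is shown below using tf.keras. The layer sizes, the assumed 250 Hz sampling rate (giving roughly 325 samples for the 0.4 s + 0.9 s window), the use of all 12 leads as input channels, and the mean-probability aggregation rule are illustrative assumptions, not the authors' exact architecture.

import numpy as np
import tensorflow as tf

WINDOW = 325   # assumed samples per beat (1.3 s at an assumed 250 Hz)
LEADS = 12     # assumed: all 12 leads used as input channels

def build_beat_classifier():
    """1-D CNN that outputs P(STEMI) for a single segmented heartbeat."""
    model = tf.keras.Sequential([
        tf.keras.Input(shape=(WINDOW, LEADS)),
        tf.keras.layers.Conv1D(32, kernel_size=7, activation="relu"),
        tf.keras.layers.MaxPooling1D(2),
        tf.keras.layers.Conv1D(64, kernel_size=5, activation="relu"),
        tf.keras.layers.GlobalAveragePooling1D(),
        tf.keras.layers.Dense(64, activation="relu"),
        tf.keras.layers.Dense(1, activation="sigmoid"),   # per-beat STEMI probability
    ])
    model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
    return model

def record_prediction(beat_probs, threshold=0.5):
    """Aggregate per-beat probabilities into one record-level label (assumed rule)."""
    return int(np.mean(beat_probs) >= threshold)

model = build_beat_classifier()
model.summary()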



1991 ◽  
Vol 1 (1) ◽  
pp. 19-24 ◽  
Author(s):  
Frank E. Musiek ◽  
Suzanne Lenz ◽  
Karen M. Gollegly

1. There appears to be a relationship among the improved overall behavior of this patient, anatomical changes in the brain, and enhanced performance on both psychophysical and electrophysiological central auditory tests.
2. The right-sided peripheral hearing loss was one of the primary indicators for further diagnostic workup, but it is probably unrelated to the lesion that was later discovered.
3. In demonstrating structural as well as functional improvement, this case illustrates the plasticity of the young human brain.



2010 ◽  
Vol 7 (2) ◽  
pp. 381-402
Author(s):  
Ingrid Monson ◽  
John Gennari ◽  
Travis A. Jackson

Do not miss Robin D. G. Kelley's Thelonious Monk: The Life and Times of an American Original, for it will stand as the definitive biography of the great American composer and pianist for many years to come. What distinguishes Kelley's treatment of Monk's complicated and enigmatic life is the sheer depth and breadth of primary research, including, for the first time, the active cooperation and involvement of Thelonious Monk's family. In his acknowledgments, Kelley describes a long process of convincing Thelonious Monk, III to grant permission culminating in a six-hour meeting in which his knowledge, credentials, and commitment were thoroughly tested and challenged. Once he had secured “Toot's” blessings, as well as that of his wife Gale and brother-in-law Peter Grain, Kelley was introduced to Nellie Monk, Thelonious Monk's wife, and a wide range of family and friends who shared their memories and personal archives of photos, recordings, and papers. This is not an authorized biography, however, since Thelonious Monk, Jr. never demanded the right to see drafts or dictate the content. Rather Kelley was admonished to “dig deep and tell the truth.”



Author(s):  
Ruben Brondeel ◽  
Yan Kestens ◽  
Javad Rahimipour Anaraki ◽  
Kevin Stanley ◽  
Benoit Thierry ◽  
...  

Background: Closed-source software for processing and analyzing accelerometer data provides little to no information about the algorithms used to transform acceleration data into physical activity indicators. Recently, an algorithm was developed in MATLAB that replicates the frequently used proprietary ActiLife activity counts. The aim of this software profile was (a) to translate the MATLAB algorithm into R and Python and (b) to test the accuracy of the algorithm on free-living data. Methods: As part of the INTErventions, Research, and Action in Cities Team, data were collected from 86 participants in Victoria (Canada). The participants were asked to wear an integrated global positioning system and accelerometer sensor (SenseDoc) for 10 days on the right hip. Raw accelerometer data were processed in ActiLife, MATLAB, R, and Python and compared using Pearson correlation, interclass correlation, and visual inspection. Results: Data were collected for a combined 749 valid days (>10 hr wear time). MATLAB, Python, and R counts per minute on the vertical axis had Pearson correlations with the ActiLife counts per minute of .998, .998, and .999, respectively. All three algorithms overestimated ActiLife counts per minute, some by up to 2.8%. Conclusions: A MATLAB algorithm for deriving ActiLife counts was implemented in R and Python. The different implementations provide results similar to the ActiLife counts produced in the closed-source software and can, for all practical purposes, be used interchangeably. This opens up possibilities for comparing studies that use similar accelerometers from different suppliers and for using free, open-source software.
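For readers unfamiliar with activity counts, the Python sketch below shows the general shape of such a pipeline for a single axis: band-pass filter, rectify, apply a deadband, and sum within epochs. The filter limits, deadband, epoch length, and scaling are placeholder values, not those used by ActiLife or by the published MATLAB replication.

import numpy as np
from scipy.signal import butter, filtfilt

def simple_counts(acc_g, fs=30, epoch_s=60, lowcut=0.25, highcut=2.5, deadband=0.05):
    """Very simplified activity-count pipeline for one accelerometer axis (units of g).
    All numeric parameters here are placeholder assumptions."""
    b, a = butter(4, [lowcut / (fs / 2), highcut / (fs / 2)], btype="band")
    filtered = filtfilt(b, a, acc_g)          # keep body-movement frequencies
    rectified = np.abs(filtered)
    rectified[rectified < deadband] = 0.0      # drop sub-threshold noise
    samples_per_epoch = fs * epoch_s
    n_epochs = len(rectified) // samples_per_epoch
    trimmed = rectified[: n_epochs * samples_per_epoch]
    return trimmed.reshape(n_epochs, samples_per_epoch).sum(axis=1)

# Example: 10 minutes of simulated 30 Hz vertical-axis data.
rng = np.random.default_rng(0)
t = np.arange(30 * 600) / 30
signal = 0.1 * np.sin(2 * np.pi * 1.5 * t) + rng.normal(0, 0.02, t.size)
print(simple_counts(signal)[:5])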



2003 ◽  
Vol 89 (1) ◽  
pp. 390-400 ◽  
Author(s):  
L. H. Zupan ◽  
D. M. Merfeld

Sensory systems often provide ambiguous information. For example, otolith organs measure gravito-inertial force (GIF), the sum of gravitational force and inertial force due to linear acceleration. However, according to Einstein's equivalence principle, a change in gravitational force due to tilt is indistinguishable from a change in inertial force due to translation. Therefore the central nervous system (CNS) must use other sensory cues to distinguish tilt from translation. For example, the CNS might use dynamic visual cues indicating rotation to help determine the orientation of gravity (tilt). This, in turn, might influence the neural processes that estimate linear acceleration, since the CNS might estimate gravity and linear acceleration such that the difference between these estimates matches the measured GIF. Depending on specific sensory information inflow, inaccurate estimates of gravity and linear acceleration can occur. Specifically, we predict that illusory tilt caused by roll optokinetic cues should lead to a horizontal vestibuloocular reflex compensatory for an interaural estimate of linear acceleration, even in the absence of actual linear acceleration. To investigate these predictions, we measured eye movements binocularly using infrared video methods in 17 subjects during and after optokinetic stimulation about the subject's nasooccipital (roll) axis (60°/s, clockwise or counterclockwise). The optokinetic stimulation was applied for 60 s followed by 30 s in darkness. We simultaneously measured subjective roll tilt using a somatosensory bar. Each subject was tested in three different orientations: upright, pitched forward 10°, and pitched backward 10°. Five subjects reported significant subjective roll tilt (>10°) in directions consistent with the direction of the optokinetic stimulation. In addition to torsional optokinetic nystagmus and afternystagmus, we measured a horizontal nystagmus to the right during and following clockwise (CW) stimulation and to the left during and following counterclockwise (CCW) stimulation. These measurements match predictions that subjective tilt in the absence of real tilt should induce a nonzero estimate of interaural linear acceleration and, therefore, a horizontal eye response. Furthermore, as predicted, the horizontal response in the dark was larger for Tilters (n = 5) than for Non-Tilters (n = 12).
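The ambiguity and the proposed resolution can be restated compactly in the notation often used for such models, with the inertial force written as minus the linear acceleration and hat symbols denoting central estimates; the exact symbols and sign convention are assumed for illustration, not quoted from the paper.

\[
  \mathbf{f} \;=\; \mathbf{g} - \mathbf{a}
  \qquad \text{(measured gravito-inertial force)}
\]
\[
  \hat{\mathbf{g}} - \hat{\mathbf{a}} \;\approx\; \mathbf{f}
  \qquad \text{(central estimates constrained to match the measurement)}
\]

Because only the difference is constrained, a visually induced error in the tilt estimate \(\hat{\mathbf{g}}\) forces a compensating nonzero \(\hat{\mathbf{a}}\), which is the predicted source of the horizontal eye response in the absence of real translation.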


