memory transfer
Recently Published Documents

TOTAL DOCUMENTS: 84 (FIVE YEARS: 17)
H-INDEX: 10 (FIVE YEARS: 0)

2021 ◽  
Vol 8 (1) ◽  
Author(s):  
Pisit Makpaisit ◽  
Chantana Chantrapornchai

Abstract: Resource Description Framework (RDF) is commonly used as a standard for data interchange on the web. A collection of RDF data sets can form a large graph that is time-consuming to query. It is known that modern Graphics Processing Units (GPUs) can be employed to execute parallel programs in order to speed up the running time. In this paper, we propose a novel RDF data representation along with a query processing algorithm suitable for GPU processing. The main challenges of the GPU architecture are the limited memory sizes, the memory transfer latency, and the vast number of GPU cores; our system is therefore designed to make full use of the GPU cores and reduce the effect of memory transfer. We propose a representation consisting of indices and column-based RDF ID data that reduces the GPU memory requirement. Indexing and pre-upload filtering techniques are then applied to reduce the data transfer between host and GPU memory. We add an index swapping process to facilitate sorting and joining data on a given variable, and a pre-upload step to reduce both the size of the result storage and the data transfer time. The experimental results show that our representation is about 35% smaller than the traditional NT format and 40% smaller than that of gStore. Query processing achieves speedups ranging from 1.95 to 397.03 compared with RDF-3X and gStore on the WatDiv test suite, and speedups of 578.57 and 62.97 on the LUBM benchmark compared to RDF-3X and gStore, respectively. The analysis shows which query cases benefit from our approach.
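
The abstract names two concrete techniques: a column-based, dictionary-encoded RDF ID representation and pre-upload filtering to shrink host-to-GPU transfers. Below is a minimal sketch of those two ideas in Python/NumPy; the function names (encode_triples, pre_upload_filter) are illustrative and not from the paper.

```python
import numpy as np

def encode_triples(triples):
    """Dictionary-encode (s, p, o) string triples into three int32 columns."""
    ids = {}
    def to_id(term):
        # Assign the next sequential integer ID on first sight of a term.
        return ids.setdefault(term, len(ids))
    cols = np.array([[to_id(s), to_id(p), to_id(o)] for s, p, o in triples],
                    dtype=np.int32)
    return cols[:, 0], cols[:, 1], cols[:, 2], ids

def pre_upload_filter(subj, pred, obj, pred_id):
    """Filter rows on the host so only the (typically much smaller)
    candidate set would be transferred to GPU memory."""
    mask = pred == pred_id
    return subj[mask], obj[mask]

triples = [("alice", "knows", "bob"),
           ("bob", "knows", "carol"),
           ("alice", "age", "30")]
s, p, o, ids = encode_triples(triples)
subj, obj = pre_upload_filter(s, p, o, ids["knows"])
print(subj, obj)  # only the 'knows' edges would be uploaded to the GPU
```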


2021 ◽  
Author(s):  
Kenneth Samuel ◽  
Easter S Suviseshamuthu ◽  
Maria E Fichera

Memory retention and transfer in organisms happen at either the neural or the genetic level. In humans, addictive behavior is known to pass from parents to offspring. In the flatworm planaria (Dugesia tigrina), memory transfer has been claimed to be horizontal, i.e., through cannibalism. Our study is a preliminary step toward understanding the mechanisms underlying the transfer of addictive behavior to offspring. Since the neural and neurochemical responses of planaria share similarities with those of humans, it is possible to induce addictions and obtain predictable behavioral responses. Addiction can be induced in planaria, and decapitation will reveal whether the addictive memories are stored solely in the brain. The primary objective was to test the hypothesis that addictive memory is also retained in the brainless posterior region of planaria. The surface preference of the planaria between smooth and rough surfaces was first determined. Through Pavlovian conditioning, the preferred surface was paired with water and the unpreferred surface with sucrose. After the planaria were trained and addicted, their surface preference shifted as a conditioned place preference (CPP) was established. When decapitated, the regenerated segment from the anterior part containing the brain retained the addiction, maintaining the shift in surface preference. Importantly, we observed that the posterior part preserved this CPP as well, suggesting that memory retention is not exclusively attributable to the brain but might also occur at the genetic level. As a secondary objective, the effect of neurotransmitter-blocking agents in preventing addiction was studied by administering a D1 dopamine antagonist to the planaria, which could provide pointers toward treating addictions in humans.
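
The conditioned place preference (CPP) described here is typically quantified as a shift in time spent on the drug-paired surface. A minimal sketch of that arithmetic follows; the values and names are entirely hypothetical and do not come from the study.

```python
# Hypothetical CPP quantification; values and names are illustrative only.
def cpp_shift(pre_s_on_paired, post_s_on_paired):
    """Change in time (seconds) spent on the sucrose-paired surface.
    A positive score indicates a preference shift toward that surface."""
    return post_s_on_paired - pre_s_on_paired

# E.g., 120 s of a 300 s trial on the initially unpreferred paired surface
# before conditioning, 240 s after: a shift of +120 s indicates CPP.
print(cpp_shift(120, 240))  # 120
```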


Author(s):  
Samson Immanuel J et al.

Deep learning, a field of artificial intelligence that has arisen from recent advances in digital technology and the availability of data, has demonstrated its ability and effectiveness in solving complex learning problems that were not previously tractable. Convolutional neural networks (CNNs) have demonstrated their effectiveness in emotion detection and recognition applications. However, CNNs demand intensive processor operations and high memory bandwidth, so general-purpose CPUs fail to achieve the desired levels of performance. Consequently, to increase the throughput of CNNs, hardware accelerators using Graphics Processing Units (GPUs), Field Programmable Gate Arrays (FPGAs), and Application-Specific Integrated Circuits (ASICs) have been used. We highlight the main features used by various acceleration techniques to improve efficiency, and we offer guidelines for optimizing the use of FPGAs to accelerate CNNs. We implement the proposed algorithm on an FPGA platform and show that an emotion in an utterance of 1.5 s duration is identified in 1.75 ms while utilizing 75% of the resources. This further demonstrates the suitability of our approach for real-time emotion recognition applications.
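
As a quick sanity check (not from the paper), the reported latency implies a large real-time margin: a 1.5 s utterance classified in 1.75 ms is roughly 857 times faster than real time.

```python
# Real-time margin implied by the reported figures (1.5 s utterance,
# 1.75 ms inference); purely arithmetic, not from the paper.
utterance_s = 1.5
latency_s = 1.75e-3
print(f"{utterance_s / latency_s:.0f}x faster than real time")  # ~857x
```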


PLoS ONE ◽  
2021 ◽  
Vol 16 (2) ◽  
pp. e0245849
Author(s):  
Rosemary J. Marsh ◽  
Martin J. Dorahy ◽  
Chandele Butler ◽  
Warwick Middleton ◽  
Peter J. de Jong ◽  
...  

Amnesia is a core diagnostic criterion for Dissociative Identity Disorder (DID); however, previous research has indicated memory transfer across identities. As DID has been conceptualised as a disorder of distinct identities, in this experiment behavioral tasks were used to assess the nature of amnesia across identities for episodic (1) self-referential and (2) autobiographical memories. Nineteen DID participants, 16 DID simulators, 21 partial-information comparison participants, and 20 full-information comparison participants from the general population were recruited. In the first study, participants were presented with two vignettes (DID and simulator participants received one in each of two identities) and asked to imagine themselves in the situations outlined. The second study used a similar methodology but with tasks assessing autobiographical experience. Subjectively, all DID participants reported amnesia for events that occurred in the other identity. On free recall and recognition tasks they presented a memory profile of amnesia similar to simulators instructed to feign amnesia and to partial-information comparisons. Yet, on tests of recognition, DID participants recognized significantly more of the event that occurred in another identity than simulator and partial-information comparisons did. As such, the results indicate that the DID performance profile was not accounted for by true or feigned amnesia, lending support to the idea that the reported amnesia may be more a perceived than an actual memory impairment.


Author(s):  
Loh Teng-Hern Tan ◽  
Hooi-Leng Ser ◽  
Yong Sze Ong ◽  
Kooi Yeong Khaw ◽  
Priyia Pusparajah ◽  
...  

Memory formation occurs within the central nervous system (CNS), specifically in the hippocampal region of the brain. The notion that memories are located only within the brain has been challenged by reports from some patients that they have "inherited memories" from their donor after organ transplantation; some even experienced personality changes and picked up hobbies or preferences similar to their donor's. Recently, a research team reignited the embers of this theory by using the scientific method to show that memory can be genetically transferred from one sea snail to another. Nevertheless, even as more and more scientific mysteries are being unravelled, memory remains an elusive entity shrouded in the haze of many unresolved hypotheses. To seek clarity on what is currently known, this write-up summarizes and consolidates records associated with the theory of "cellular memory" and experiments evaluating the possibility of memory transference by genetic materials such as RNA.


Author(s):  
Piotr Sowa ◽  
Jacek Izydorczyk

The article's goal is to give an overview of the challenges and problems on the way from state-of-the-art CUDA-accelerated neural network code to multi-GPU code. For this purpose, the authors describe the journey of porting the existing, fully featured CUDA-accelerated Darknet engine on GitHub to OpenCL. The article presents lessons learned and the techniques that were put in place to make this port happen. There are few other implementations on GitHub that leverage the OpenCL standard, and a few have tried to port Darknet as well. Darknet is a well-known convolutional neural network (CNN) framework. The authors of this article investigated all aspects of the porting and achieved a fully featured Darknet engine on OpenCL. The effort focused not only on classification with the YOLO1, YOLO2, and YOLO3 CNN models; it also covered other aspects, such as training neural networks, and benchmarks to look for the weak points in the implementation. The GPU computing code substantially improves Darknet computing time compared to the standard CPU version by using underused hardware in existing systems, and if the system is OpenCL-based, it is practically hardware independent. In this article, the authors report comparisons of computation and training performance against the existing CUDA-based Darknet engine on various computers, including single-board computers, and across different CNN use cases. The authors found that the OpenCL version can perform as fast as the CUDA version in the compute aspect, but it is slower in memory transfer between RAM (CPU memory) and VRAM (GPU memory); this depends only on the quality of the OpenCL implementation. Moreover, by loosening hardware requirements, the OpenCL Darknet can broaden the applications of DNNs, especially in energy-sensitive applications of Artificial Intelligence (AI) and Machine Learning (ML).
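
The RAM-to-VRAM transfer cost the authors single out can be measured directly with OpenCL event profiling. Below is a minimal sketch, assuming PyOpenCL and a working OpenCL runtime are installed; it illustrates the measurement idea and is not code from the port.

```python
# Timing a host -> device (RAM -> VRAM) copy via OpenCL event profiling.
# Minimal sketch; assumes PyOpenCL and an OpenCL runtime are available.
import numpy as np
import pyopencl as cl

ctx = cl.create_some_context()
queue = cl.CommandQueue(
    ctx, properties=cl.command_queue_properties.PROFILING_ENABLE)

host = np.random.rand(8 * 1024 * 1024)            # 64 MiB of float64
dev = cl.Buffer(ctx, cl.mem_flags.READ_WRITE, size=host.nbytes)

evt = cl.enqueue_copy(queue, dev, host)           # upload RAM -> VRAM
evt.wait()
elapsed_ns = evt.profile.end - evt.profile.start  # event times are in ns
print(f"upload bandwidth: {host.nbytes / elapsed_ns:.2f} GB/s")
```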

