primary memory
Recently Published Documents


TOTAL DOCUMENTS: 74 (FIVE YEARS 8)
H-INDEX: 20 (FIVE YEARS 0)

2022 · Vol 26 (6) · pp. 4-15
Author(s): A. A. Smirnova, L. N. Prakhova, A. G. Ilves, N. A. Seliverstova, T. N. Reznikova, ...

Abstract. Despite the high prevalence of mild cognitive impairment (MCI), there are currently no accepted algorithms for differentiating the syndrome and evaluating the prognosis of later cognitive decline. Objective. To identify biomarkers of poor prognosis in the various MCI types by optimizing neuropsychological examination in combination with MRI morphometry of brain structures. Patients and methods. We examined 45 patients (9 men, 36 women, mean age 72 ± 6.7 years) with MCI according to the modified Petersen criteria and the DSM-5 criteria. All patients underwent MMSE assessment and Detailed Neuropsychological Testing (DNT), which included a Ten Words Test (TWT), a “Double Test” (DT), and a visual acuity test, as well as high-field magnetic resonance imaging (MRI) of the brain with morphometry of cerebral structures (FreeSurfer, FSL). Results. According to the MMSE score, MCI was found in 26 (58%) patients. Based on the DNT and the state of memory, a non-amnestic type of MCI (na-MCI) was identified in 14 participants, an amnestic variant with impaired reproduction (ar-MCI) in 15, and an amnestic type with a primary memory defect (apm-MCI) in 16. Volume changes of the anterior corpus callosum segment (CCA) were significantly associated with Immediate Recall after the 4th reading and with Delayed Recall in the general MCI group (rho = 0.58; 0.58; p < 0.05) and in the apm-MCI group (rho = 0.6; 0.56; p < 0.05). The Kruskal–Wallis test showed significant group differences in the volumes of the CCA, right caudate nucleus, left cerebellar hemisphere cortex, posterior corpus callosum segment, and left thalamus. The first three structures were combined into a set of informative features for differentiating the type of MCI based on forward stepwise discriminant analysis, with a 77.3% correct classification rate (Wilks’s lambda = 0.35962; approx. F(6,78) = 8.678; p < 0.001). ROC analysis established threshold volumes of ≤ 0.05% for the CCA and ≤ 0.23% for the right caudate nucleus associated with a memory defect in persons with MCI (sensitivity 81.25% in both cases; specificity 62.1% and 60.7%; AUC 0.787 and 0.767; 95% CI 0.639–0.865 and 0.615–0.881; OR 7.1 and 6.7, 95% CI 1.6–30.6 and 1.6–29, respectively). When both cerebral structures were included in the logit model, 88.6% classification accuracy, 92.6% sensitivity, and 82.4% specificity were achieved. Conclusion. Classifying patients into the various types of MCI based on the memory-function data reflected by the DNT, supplemented with MRI morphometry of the brain areas, may serve as a sensitive and specific instrument for identifying patients at high risk of Alzheimer’s disease. A neuropsychological profile with a defect in primary memory, together with atrophic changes in the anterior segment of the corpus callosum and the right caudate nucleus, is proposed as a biomarker of poor prognosis. Further longitudinal studies are necessary to clarify the proposed biomarkers of poor prognosis and to detail the mechanisms of the neurodegenerative process.
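To make the final two-predictor logit model concrete, here is a minimal sketch of a logistic regression with ROC thresholding in the spirit of the analysis above. The data are synthetic placeholders, not the study's measurements, and all variable names are illustrative:

```python
# Sketch only: two volumetric predictors -> logistic model -> ROC threshold.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score, roc_curve

rng = np.random.default_rng(0)
n = 45
# Hypothetical normalized volumes (% of intracranial volume).
cca = rng.normal(0.055, 0.010, n)        # anterior corpus callosum segment
caudate = rng.normal(0.24, 0.03, n)      # right caudate nucleus
# Hypothetical labels: 1 = primary-memory defect (apm-MCI), 0 = other MCI,
# loosely tied to the thresholds reported above, plus noise.
y = ((cca + rng.normal(0, 0.005, n) < 0.05) |
     (caudate + rng.normal(0, 0.02, n) < 0.23)).astype(int)

X = np.column_stack([cca, caudate])
model = LogisticRegression().fit(X, y)
prob = model.predict_proba(X)[:, 1]

print("AUC:", round(roc_auc_score(y, prob), 3))
fpr, tpr, thr = roc_curve(y, prob)
best = np.argmax(tpr - fpr)              # Youden's J picks the cut-off
print(f"sensitivity={tpr[best]:.2f} specificity={1 - fpr[best]:.2f}")
```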


Author(s): Gajanan Digambar Gaikwad

Abstract: The operating system offers a service known as memory management, which manages primary memory and moves processes back and forth between disk and main memory during execution. Temporarily moving a process from primary memory to the hard disk so that the memory becomes available for other processes is known as swapping. Page replacement techniques are the methods by which the operating system decides which memory pages to swap out and write to disk whenever a page of main memory needs to be allocated. The policies governing how to select the page to be swapped out when a page fault occurs, to create space for a new page, are called page replacement algorithms. In this paper, a strategy for identifying the refresh rate of the ‘Aging’ page replacement algorithm is presented and evaluated. Keywords: Aging algorithm, page replacement algorithm, refresh rate, virtual memory management.
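As background for the abstract above, here is a minimal sketch of the classic Aging algorithm: each resident page carries a small counter that is periodically shifted right and topped up with the page's referenced bit, and the page with the lowest counter is evicted. The "refresh rate" the paper tunes corresponds to how often this shift runs relative to memory references; class and variable names here are illustrative, not the paper's:

```python
class AgingReplacer:
    def __init__(self, n_frames, counter_bits=8):
        self.n_frames = n_frames
        self.msb = 1 << (counter_bits - 1)
        self.counters = {}       # page -> age counter
        self.referenced = set()  # pages touched since the last refresh

    def age(self):
        """Periodic refresh: shift counters right, fold in referenced bits."""
        for page in self.counters:
            self.counters[page] >>= 1
            if page in self.referenced:
                self.counters[page] |= self.msb
        self.referenced.clear()

    def access(self, page):
        """Returns True on a page fault."""
        if page in self.counters:
            self.referenced.add(page)
            return False
        if len(self.counters) >= self.n_frames:
            victim = min(self.counters, key=self.counters.get)
            del self.counters[victim]   # evict the "oldest" page
        self.counters[page] = self.msb  # new page starts as recently used
        self.referenced.add(page)
        return True

# Usage: refresh once every 4 references (a hypothetical refresh rate).
refs = [1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5]
replacer, faults = AgingReplacer(n_frames=3), 0
for i, p in enumerate(refs, 1):
    faults += replacer.access(p)
    if i % 4 == 0:
        replacer.age()
print("page faults:", faults)
```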


2020 · Vol 75 (9-10) · pp. 549-561
Author(s): Christian Beyer, Vishnu Unnikrishnan, Robert Brüggemann, Vincent Toulouse, Hafez Kader Omar, ...

Abstract Many current and future applications plan to provide entity-specific predictions. These range from individualized healthcare applications to user-specific purchase recommendations. In our previous stream-based work on Amazon review data, we showed that error-weighted ensembles that combine entity-centric classifiers, which are trained only on reviews of one particular product (entity), and entity-ignorant classifiers, which are trained on all reviews irrespective of the product, can improve prediction quality. This came at the cost of storing multiple entity-centric models in primary memory, many of which would never be used again because their entities would not receive future instances in the stream. To overcome this drawback and make entity-centric learning viable in these scenarios, we investigated two different methods of reducing the primary memory requirement of our entity-centric approach. Our first method uses the lossy counting algorithm for data streams to identify entities whose instances make up a certain percentage of the total data stream within an error margin. We then store all models which do not fulfil this requirement in secondary memory, from which they can be retrieved in case future instances belonging to them arrive later in the stream. The second method replaces entity-centric models with a much more naive model which only stores the past labels and predicts the majority label seen so far. We applied our methods to the previously used Amazon data sets, which contain up to 1.4M reviews, and added two subsets of the Yelp data set, which contain up to 4.2M reviews. Both methods were successful in reducing the primary memory requirements while still outperforming an entity-ignorant model.
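For readers unfamiliar with it, here is a minimal sketch of the lossy counting algorithm (Manku and Motwani) mentioned above, used to decide which entities are frequent enough to keep their models in primary memory. The parameter names and toy stream are illustrative; this is not the paper's implementation:

```python
import math

def lossy_counting(stream, epsilon):
    """Approximate frequency counts over a data stream."""
    width = math.ceil(1 / epsilon)      # bucket width
    counts, deltas = {}, {}             # item -> count, max undercount
    bucket = 1
    for n, item in enumerate(stream, 1):
        if item in counts:
            counts[item] += 1
        else:
            counts[item] = 1
            deltas[item] = bucket - 1
        if n % width == 0:              # end of bucket: prune rare items
            for key in [k for k in counts if counts[k] + deltas[k] <= bucket]:
                del counts[key], deltas[key]
            bucket += 1
    return counts

# Entities making up at least fraction s of the stream (within epsilon)
# would keep their entity-centric models in primary memory.
stream = ["p1", "p2", "p1", "p3", "p1", "p2", "p1", "p4"] * 100
epsilon, s = 0.01, 0.2
counts = lossy_counting(stream, epsilon)
keep = {e for e, c in counts.items() if c >= (s - epsilon) * len(stream)}
print("models kept in primary memory:", keep)
```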


Virtual memory plays an important role in the memory management of an operating system. A process or a set of processes may require more memory space than the capacity of main memory. This situation is addressed by virtual memory, where a certain memory space in secondary memory is treated as primary memory, i.e., main memory is virtually extended into secondary memory. When a process requires a page, primary memory is scanned first. If the page is found, the process continues to execute; otherwise a page fault occurs, which is handled by a page replacement algorithm. Such an algorithm swaps a page out of main memory to secondary memory and replaces it with another page from secondary memory, and it should incur as few page faults as possible so that the considerable amount of I/O required for swapping pages in and out is reduced. Several page replacement algorithms have been formulated to increase the efficiency of this technique. In this paper, three page replacement algorithms are discussed: FIFO, Optimal, and LRU. Their behavioural patterns are analysed with a systematic approach, and a comparative analysis of the algorithms is recorded with appropriate diagrams.
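As a companion to the comparison described above, here is a minimal sketch that counts page faults for FIFO, LRU, and Optimal over the same reference string; the reference string and frame count are illustrative:

```python
from collections import deque

def fifo_faults(refs, frames):
    mem, queue, faults = set(), deque(), 0
    for p in refs:
        if p not in mem:
            faults += 1
            if len(mem) >= frames:
                mem.discard(queue.popleft())  # evict oldest-loaded page
            mem.add(p)
            queue.append(p)
    return faults

def lru_faults(refs, frames):
    mem, faults = [], 0          # ordered from least to most recent
    for p in refs:
        if p in mem:
            mem.remove(p)
        else:
            faults += 1
            if len(mem) >= frames:
                mem.pop(0)       # evict least recently used page
        mem.append(p)
    return faults

def optimal_faults(refs, frames):
    mem, faults = set(), 0
    for i, p in enumerate(refs):
        if p not in mem:
            faults += 1
            if len(mem) >= frames:
                # Evict the page whose next use is farthest in the future.
                def next_use(q):
                    return refs.index(q, i + 1) if q in refs[i + 1:] else float("inf")
                mem.discard(max(mem, key=next_use))
            mem.add(p)
    return faults

refs = [7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2, 1, 2, 0, 1, 7, 0, 1]
for name, fn in [("FIFO", fifo_faults), ("LRU", lru_faults), ("Optimal", optimal_faults)]:
    print(name, fn(refs, frames=3))
```

On reference strings like this one, Optimal yields the fewest faults and FIFO the most, with LRU in between, which is the behavioural pattern such comparisons typically record.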


Author(s): Pallab Banerjee, Biresh Kumar, Amarnath Singh, Shipra Sinha, Medha Sawan

Programming codes are of variable length. When the size of a code becomes greater than that of primary memory, the concept of virtual memory comes into play. As the name suggests, virtual memory allows the use of primary memory to be outstretched by using storage devices such as disks, and it can be implemented using the paging approach. The operating system allocates memory frames to every program while loading it into memory, and each program is segregated into pages matching the frame size; keeping pages and frames of equal size enhances the usability of memory. Since an executing process is provided with only a limited number of memory frames, a swap-out technique is necessary for the execution of its pages. This technique is termed page replacement. Many algorithms have been proposed to decide which page should be replaced in the frames when new pages arrive. In this paper, we propose a new page replacement technique based on counting how often pages are read from secondary storage. Whenever a page fault is detected, the needed page is fetched from secondary storage; accessing the disk in this way is slow compared with retrieving the required page from primary storage. In the proposed technique, the page with the lowest access count is replaced by the new page, and ties between pages with the same count are broken using the LRU page replacement algorithm. Because frequently used pages remain resident, the possibility of a page hit increases and, as the possibility of a page miss decreases, the execution time of processes decreases.
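Here is a minimal sketch of the policy as described, i.e., least-frequently-used eviction with an LRU tie-break for pages of equal count, assuming a simple in-memory simulation with illustrative names:

```python
class LFUWithLRUTieBreak:
    def __init__(self, n_frames):
        self.n_frames = n_frames
        self.count = {}      # page -> access count
        self.last_used = {}  # page -> logical time of last access
        self.clock = 0

    def access(self, page):
        """Returns True on a page fault."""
        self.clock += 1
        if page in self.count:
            self.count[page] += 1
            self.last_used[page] = self.clock
            return False
        if len(self.count) >= self.n_frames:
            # Lowest count first; among equal counts, least recently used.
            victim = min(self.count,
                         key=lambda p: (self.count[p], self.last_used[p]))
            del self.count[victim], self.last_used[victim]
        self.count[page] = 1
        self.last_used[page] = self.clock
        return True

refs = [1, 2, 3, 1, 4, 1, 2, 5, 2, 1]
cache, faults = LFUWithLRUTieBreak(n_frames=3), 0
for p in refs:
    faults += cache.access(p)
print("page faults:", faults)
```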


2020 · Vol 16 (2) · pp. r1-r7
Author(s): Marilou Poitras, Lucie Péléja, Gardy Lavertu, Anouck Langlois, Katia Boulerice, ...

MapReduce is a programming model used for processing Big Data, and there has been considerable research on improving its performance. This paper examines the performance of the MapReduce model using the K-Means algorithm on a Hadoop cluster. Different input sizes were run on various configurations to discover the impact of the number of CPU cores and the primary memory size. The results of this evaluation show that the number of cores has the greatest impact on the performance of the MapReduce model.
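To illustrate the workload being benchmarked, here is a minimal sketch of K-Means expressed in map/reduce style, simulated in plain Python; on a Hadoop cluster the same mapper/reducer pair would run distributed over the input splits. The data and names are illustrative:

```python
import math
from collections import defaultdict

def mapper(point, centroids):
    """Emit (nearest centroid index, point) for one input record."""
    dists = [math.dist(point, c) for c in centroids]
    return dists.index(min(dists)), point

def reducer(cluster_points):
    """Average all points assigned to one centroid."""
    n = len(cluster_points)
    return tuple(sum(x) / n for x in zip(*cluster_points))

points = [(1.0, 1.0), (1.5, 2.0), (8.0, 8.0), (9.0, 9.5), (1.2, 0.8)]
centroids = [(1.0, 1.0), (9.0, 9.0)]

for _ in range(5):  # a few iterations toward convergence
    groups = defaultdict(list)
    for p in points:
        k, v = mapper(p, centroids)   # map phase
        groups[k].append(v)           # shuffle: group by key
    centroids = [reducer(groups[k]) for k in sorted(groups)]  # reduce phase
print("final centroids:", centroids)
```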


2018 · Vol 7 (4.5) · pp. 32
Author(s): Govind Prasad Arya, Devendra Prasad, Sandeep Singh Rana

A computer programmer writes code of any length without keeping the available primary memory in mind. This is possible through the concept of virtual memory: as the name suggests, virtual memory allows a program of any size to execute even when primary memory is smaller than the program itself. Virtual memory can be implemented using paging. The operating system allocates a number of memory frames to each program while loading it into memory, and the program is divided into pages of the same size as the frames; pages and frames are kept equal in size for better utilization of memory. During execution, every process is allocated a limited number of memory frames, hence the need for page replacement. A number of page replacement techniques have been suggested by researchers. In this paper, we propose a modified page replacement technique based on the concept of reading pages from secondary storage in blocks. Disk access is very slow compared with access to primary memory, and whenever there is a page fault the required page must be retrieved from secondary storage, so numerous page faults increase the execution time of a process. In the proposed methodology, on each page fault a number of pages equal to the allotted memory frames is read, instead of a single page at a time. Fetching a block of pages from secondary storage increases the possibility of a page hit and, as a result, improves the hit ratio of the process.
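Below is a minimal sketch of the block-reading idea, under the assumption that a fault loads a block of consecutive pages equal in number to the allotted frames; a single-page FIFO baseline is included for contrast. Names and the reference string are illustrative, not the paper's:

```python
def block_fetch_faults(refs, n_frames):
    """Count faults when each fault loads a block of n_frames pages."""
    mem, faults = set(), 0
    for p in refs:
        if p not in mem:
            faults += 1
            # Fetch p plus its next consecutive pages as one block,
            # replacing the current frame contents entirely.
            mem = {p + i for i in range(n_frames)}
    return faults

def single_fetch_faults(refs, n_frames):
    """Baseline FIFO, loading one page per fault."""
    mem, order, faults = set(), [], 0
    for p in refs:
        if p not in mem:
            faults += 1
            if len(mem) >= n_frames:
                mem.discard(order.pop(0))
            mem.add(p)
            order.append(p)
    return faults

# A sequential-heavy reference string, where block reads help most.
refs = [0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 0, 1, 2, 3]
print("block fetch: ", block_fetch_faults(refs, n_frames=4))
print("single fetch:", single_fetch_faults(refs, n_frames=4))
```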


Author(s): S. V. Shevtsov

The creative character of reading is revealed through the elucidation of its temporal constitution and consideration of some practical aspects of this phenomenon. Reading is not merely an intellectual or aesthetic procedure but a co-being between a text and a reader. Reading is one of the ways in which specific metaphysical organs form and develop in a human being; through them arise the actual conditions of freedom, love, faith, virtue, responsibility, etc. Impression is shown as a point-wise part of time, oriented toward presence and changing with every new phase of reading a text. Impression is not a feeling but an invasion, one that involves intensity and completeness of action; that is why a text, in reading, should impress and invade the limits of the reader's being, catching and holding the reader's attention by its energy. Retention is examined as the primary memory of a text just read, holding information as it recedes from the point of impression. Possibilities of using certain techniques of reading are considered: reading Plato's dialogues aloud, reflexive reading, close reading, etc.

