Storage Management
Recently Published Documents

Total documents: 734 (last five years: 156)
H-index: 24 (last five years: 6)

2021 ◽  
Author(s):  
Fatimah Alsayoud

Big data ecosystems contain a mix of sophisticated hardware storage components to support heterogeneous workloads. Storage components and workloads interact and affect each other; therefore, their relationship has to be considered when modeling workloads or managing storage. Efficient workload modeling guides optimal storage management decisions, and the right decisions help guarantee that the workload's needs are met. The first part of this thesis focuses on workload modeling efficiency, and the second part focuses on cost-effective storage management.

Workload performance modeling is an essential step in management decisions. The standard modeling approach constructs the model from a historical dataset collected under one set of setups (a scenario), so the model must be reconstructed from scratch every time the setup changes. To address this issue, we propose a cross-scenario modeling approach that improves workload performance classification accuracy by up to 78% by adopting transfer learning (TL).

The storage system is the most crucial component of the big data ecosystem: a workload's execution starts by fetching data from storage and ends by writing data back to it, so workload performance is directly affected by storage capability. To provide high I/O performance, solid-state drives (SSDs) are utilized as a tier or as a cache in distributed big data ecosystems. SSDs have a short lifespan that is affected by data size and the number of write operations. Balancing performance requirements against SSD lifespan consumption is never easy, and it is even harder with huge amounts of data and heterogeneous I/O patterns. In this thesis, we analyze the impact of big data workloads' I/O patterns on SSD lifespan when the SSD is used as a tier or as a cache. We then design a Hidden Markov Model (HMM) based I/O pattern controller that manages workload placement and guarantees cost-effective storage, enhancing workload performance by up to 60% and improving SSD lifespan by up to 40%.

The designed transfer learning modeling approach and the storage management solutions improve workload modeling accuracy and the quality of the storage management policies as the testing setup changes.
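As a rough illustration of the machinery behind an HMM-based I/O pattern controller like the one this abstract names (this is not the thesis implementation; the states, observation symbols, and all probabilities below are invented for the example), the following Python sketch scores an I/O trace against an HMM with the forward algorithm, the kind of likelihood a placement controller could compare across candidate pattern models before routing a workload to an SSD cache or tier:

```python
# Minimal sketch: forward algorithm for an HMM over I/O operations.
# All parameters below are illustrative assumptions, not measured values.
import numpy as np

# Hidden states: 0 = sequential-dominant phase, 1 = random-dominant phase.
# Observations: 0 = read op, 1 = write op, 2 = mixed burst.
start_p = np.array([0.6, 0.4])        # initial state distribution
trans_p = np.array([[0.7, 0.3],       # state transition matrix
                    [0.4, 0.6]])
emit_p = np.array([[0.5, 0.4, 0.1],   # per-state emission probabilities
                   [0.2, 0.3, 0.5]])

def sequence_likelihood(obs):
    """Forward algorithm: P(observation sequence | model)."""
    alpha = start_p * emit_p[:, obs[0]]
    for o in obs[1:]:
        # alpha[j] = sum_i alpha[i] * trans_p[i, j], then weight by emission.
        alpha = (alpha @ trans_p) * emit_p[:, o]
    return alpha.sum()

# Toy I/O trace; a controller could evaluate it under a "cache-friendly"
# model and a "tier-friendly" model and place the workload accordingly.
trace = [0, 0, 1, 2, 1, 0]
print(f"P(trace | model) = {sequence_likelihood(trace):.6f}")
```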


Horticulturae ◽  
2021 ◽  
Vol 7 (12) ◽  
pp. 577
Author(s):  
Mi-Hyun Lee ◽  
Jin-Hyun Lim ◽  
Cho-Hee Park ◽  
Jun-Hyeok Kim ◽  
Chae-Sun Na

In this study, we determined the germination response of seeds of the rare plant Pseudolysimachion pusanensis (Y. N. Lee) Y. N. Lee to different temperatures. P. pusanensis seeds were collected from the Baekdudaegan National Arboretum, South Korea, in November 2019, and dried. Dry seeds were placed at constant and alternating temperatures (5 °C, 10 °C, 15 °C, 20 °C, 25 °C, 30 °C, and 35 °C) to determine their germination percentage (GP). The seeds were exposed to 59 temperature combinations ranging from 5 °C to 43 °C using a thermal gradient plate. The photoperiod was set at 12:12 h (light:dark), and germination assays were performed five times a week. Subsequently, the seed GP and the number of days required to reach 50% germination (T50) were determined. The highest final GP was 94.38%, with a T50 of 9.26 d at 15 °C. The mean germination time was 12.5 d at 15 °C, and linear regression using 1/T50 revealed that the base temperature ranged from 2.69 °C to 4.68 °C. These results for P. pusanensis seeds stored in a seed bank provide useful data for the native plant horticulture industry and can also be utilized for storage management.
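As a hedged illustration of the 1/T50 regression mentioned above (not the authors' analysis script; only the 9.26 d value at 15 °C comes from the abstract, and the other data points are hypothetical), the base temperature Tb can be estimated as the x-intercept of the germination rate (1/T50) fitted linearly against temperature, since the thermal-time model assumes 1/T50 = (T - Tb)/theta:

```python
# Minimal sketch: estimating base temperature from germination rates.
# Data points are hypothetical except the 9.26 d T50 at 15 deg C.
import numpy as np

temps = np.array([10.0, 15.0, 20.0, 25.0])  # constant temperatures (deg C)
t50 = np.array([18.2, 9.26, 6.1, 4.7])      # days to 50% germination
rate = 1.0 / t50                            # germination rate (1/day)

slope, intercept = np.polyfit(temps, rate, 1)  # least-squares line
tb = -intercept / slope    # x-intercept = base temperature Tb
theta = 1.0 / slope        # thermal time constant (deg C * day)
print(f"Tb = {tb:.2f} deg C, thermal time = {theta:.1f} deg C d")
```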


2021 ◽  
Vol 4 ◽  
pp. 1-6
Author(s):  
Gema Martín-Asín López ◽  
Lorenzo Camón Soteres ◽  
Gonzalo Moreno Vergara ◽  
Andrés Arístegui Cortijo

Abstract. The increasingly widespread implementation of databases with a geographical component, together with the spread of a geolocation culture, is driving a transformation in the storage, management, and exploitation of geospatial information. Real-world elements go from being modeled as mere geometric representations, with purely cartographic purposes, to being features with their own entity. Unique identifiers and lifecycle management are assigned to these features, allowing interactions between feature instances from different databases, that is, facilitating digital transformation and thereby increasing the exploitation possibilities exponentially.

In this regard, the National Geographic Institute of Spain (IGN, by its Spanish acronym) has implemented several processes in its National Topographic Database, such as the connection with cadastral information, in order to take advantage of its updates and give feedback to improve cadastral data, or the link with address information provided by different public administrations, which is processed to geolocate features in the topographic database. Likewise, work is being done to implement new processes that allow linking with other data sets.

These processes, in addition to reusing information produced by different public administrations, constitute an advance towards the objective of continuously updating geospatial information databases.
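As a minimal sketch of the identifier-based linking described above (the feature IDs, field names, and records are hypothetical, not IGN's actual schema), two databases can exchange attributes directly once features carry persistent unique identifiers, with no geometry matching required:

```python
# Minimal sketch: joining feature records across databases by a shared
# persistent identifier. All identifiers and fields are hypothetical.
topographic = {
    "ES-BTN-0001": {"type": "building", "geometry": "POLYGON(...)"},
    "ES-BTN-0002": {"type": "road", "geometry": "LINESTRING(...)"},
}
cadastre = {
    "ES-BTN-0001": {"cadastral_ref": "9872023VH5797S",
                    "last_update": "2021-03-14"},
}

# Enrich topographic features with cadastral attributes via the shared ID.
for fid, feature in topographic.items():
    cad = cadastre.get(fid)
    if cad is not None:
        feature["cadastral_ref"] = cad["cadastral_ref"]
        feature["cadastral_update"] = cad["last_update"]

print(topographic["ES-BTN-0001"])
```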


Author(s):  
Nur Syahela Hussien ◽  
Sarina Sulaiman ◽  
Abdulaziz Aborujilah ◽  
Merlinda Wibowo ◽  
Hussein Samma

Today, there is high demand for Mobile Cloud Storage (MCS) services, which must manage an increasing number of operations with stable performance. This situation challenges data management systems: as the number of operations grows, MCS must manage data wisely to avoid latency, since latency slows down data access and should be avoided when using MCS. Moreover, MCS should give users access to their data quickly and correctly. Hence, this research focuses on the scalability of mobile cloud data storage management, studying how the depth of the data folder structure scales as the number of operations increases.
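As a rough sketch of the kind of experiment this abstract suggests (the function, depths, and file size below are assumptions, not the authors' method), one can measure how file-access latency varies with directory depth:

```python
# Minimal sketch: timing file opens at increasing directory depths.
# Depths, repeat count, and payload size are illustrative assumptions.
import os
import tempfile
import time

def access_latency_at_depth(depth, repeats=100):
    """Create a file `depth` directories deep and time repeated opens."""
    root = tempfile.mkdtemp()
    path = os.path.join(root, *["d"] * depth)
    os.makedirs(path, exist_ok=True)
    fname = os.path.join(path, "payload.bin")
    with open(fname, "wb") as f:
        f.write(b"x" * 4096)
    start = time.perf_counter()
    for _ in range(repeats):
        with open(fname, "rb") as f:
            f.read()
    return (time.perf_counter() - start) / repeats

for depth in (1, 4, 16, 64):
    print(f"depth {depth:3d}: {access_latency_at_depth(depth) * 1e6:.1f} us/open")
```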


2021 ◽  
Author(s):  
Reika Kinoshita ◽  
Satoshi Imamura ◽  
Lukas Vogel ◽  
Satoshi Kazama ◽  
Eiji Yoshida

Author(s):  
Can Burak Şişman ◽  
Ayşen Köktaş Keskin

Nowadays, increasing agricultural production aims to obtain more product per unit area. However, in addition to increasing production, storing the harvested products properly until they are delivered for consumption is also important. The aim of storage is to preserve the properties and freshness of products. If suitable storage conditions are not provided, quality and quantity losses increase, depending on the product variety. Reducing these losses is possible by providing suitable storage conditions and proper storage management. In this study, the storage conditions of the open temporary stores used by the Turkish Grain Board in the Hayrabolu region, which has a significant share of wheat production, were examined, and their effects on losses during storage were determined.

