data item
Recently Published Documents

Total documents: 104 (last five years: 34)
H-index: 5 (last five years: 2)

2022 ◽  
Vol 16 (4) ◽  
pp. 1-22
Author(s):  
Mu Yuan ◽  
Lan Zhang ◽  
Xiang-Yang Li ◽  
Lin-Zhuo Yang ◽  
Hui Xiong

Labeling data (e.g., labeling the people, objects, actions, and scenes in images) comprehensively and efficiently is a widely needed but challenging task. Numerous models have been proposed to label various data, and many approaches have been designed to enhance or accelerate deep-learning models. Unfortunately, a single machine-learning model is not powerful enough to extract the full range of semantic information from data. Applications such as image-retrieval platforms and photo-album management apps therefore often need to execute a collection of models to obtain sufficient labels. Given a data stream, a collection of applicable resource-hungry deep-learning models, limited computing resources, and stringent delay requirements, we design a novel approach that adaptively schedules a subset of these models to execute on each data item, aiming to maximize the value of the model output (e.g., the number of high-confidence labels). Achieving this goal is nontrivial, since a model's output on any data item is content-dependent and unknown until we execute it. To tackle this, we propose an Adaptive Model Scheduling framework consisting of (1) a deep reinforcement learning-based approach that predicts the value of unexecuted models by mining the semantic relationships among diverse models, and (2) two heuristic algorithms that adaptively schedule the model execution order under a deadline constraint or joint deadline-memory constraints, respectively. The proposed framework requires no prior knowledge of the data and works as a powerful complement to existing model-optimization technologies. We conduct extensive evaluations on five diverse image datasets and 30 popular image-labeling models to demonstrate the effectiveness of our design: it saves around 53% of execution time without losing any valuable labels.
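A minimal Python sketch of the deadline-constrained greedy ordering this abstract describes, with `predict_value` standing in for the learned value estimator; the class names, signatures, and value-per-second heuristic are illustrative assumptions, not the authors' implementation.

```python
import time
from dataclasses import dataclass
from typing import Callable, List, Set


@dataclass
class LabelingModel:
    """Wrapper around one deep-learning labeler (hypothetical interface)."""
    name: str
    est_runtime: float                      # estimated seconds per data item
    run: Callable[[object], Set[str]]       # returns high-confidence labels


def schedule_models(item, models: List[LabelingModel],
                    predict_value: Callable[[LabelingModel, Set[str]], float],
                    deadline: float) -> Set[str]:
    """Greedy deadline-constrained scheduling sketch.

    `predict_value` stands in for the paper's learned value estimator: given a
    model and the labels obtained so far, it guesses how many new
    high-confidence labels that model would add for this data item.
    """
    labels: Set[str] = set()
    remaining = list(models)
    budget = deadline
    while remaining and budget > 0:
        # Rank candidates by predicted value per second of estimated runtime.
        best = max(remaining,
                   key=lambda m: predict_value(m, labels) / m.est_runtime)
        if best.est_runtime > budget or predict_value(best, labels) <= 0:
            break                           # nothing useful fits the deadline
        start = time.perf_counter()
        labels |= best.run(item)
        budget -= time.perf_counter() - start
        remaining.remove(best)
    return labels
```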


2022 ◽  
pp. 133-158
Keyword(s):  

This chapter shows the interrelationships between the processes of analyzing information, identifying assumptions, and applying concepts, and uses an example to show how Socrates Digital™ examines assumptions and the information they rest on simultaneously. If the user begins by analyzing information, Socrates Digital™ asks the user to provide a data item relevant to the analysis. Next, it asks the user to identify the underlying assumptions used to analyze the data. Once the assumptions are identified, Socrates Digital™ asks for evidence that each assumption holds for a larger dataset.
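A minimal sketch of that question sequence, assuming a hypothetical `ask` callback for user interaction; the prompt wording is illustrative and not taken from Socrates Digital™.

```python
def socratic_analysis_dialog(ask):
    """Walk through the data item -> assumptions -> evidence sequence.

    `ask(prompt)` is a hypothetical callback that poses a question to the
    user and returns the reply.
    """
    session = {}
    session["data_item"] = ask(
        "Which data item is relevant to the analysis you are doing?")
    session["assumptions"] = ask(
        "What assumptions underlie your analysis of that data item?")
    session["evidence"] = ask(
        "What evidence shows those assumptions hold for a larger dataset?")
    return session


# Example: drive the dialog with canned replies instead of live user input.
replies = iter(["quarterly sales figures",
                "sales are seasonal",
                "five years of historical data show the same pattern"])
print(socratic_analysis_dialog(lambda prompt: next(replies)))
```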


2021 ◽  
Vol 2021 ◽  
pp. 1-7
Author(s):  
Bochun Yin ◽  
Lei Fu

To address the poor data quality and low application rate of existing media corpora, this paper proposes the construction and application of a media corpus based on big data. Media corpus data are collected and divided into four categories. A heuristic data-item column-sorting algorithm is introduced to order the collection processes and to determine the minimum data-item collection rate; on this basis, the maximum number of items in the media corpus is determined, and data collection is realized through a sliding window. Then, a dynamic Bayesian network is used to determine the state characteristics and probability distribution of the feature data, as well as the relationship between the state variables and the dimensions of the corpus data, and the data state is processed component by component to complete preprocessing. Finally, storage and encryption of the designed database are implemented with big-data technology, and the storage-structure data and encryption key are designed to realize the construction and application of the media corpus. The experimental results show that the media corpus constructed by the proposed method has high data quality and an improved application rate.
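A rough Python sketch of the sliding-window collection step described above, with hypothetical category names, record fields, and thresholds; it illustrates the idea rather than reproducing the authors' algorithm.

```python
from collections import deque
from typing import Dict, Iterable, List

# Hypothetical category names; the paper divides the collected corpus data
# into four categories but does not name them here.
CATEGORIES = ("text", "audio", "image", "video")


def collect_with_sliding_window(stream: Iterable[dict],
                                window_size: int = 1000,
                                min_rate: float = 0.2) -> Dict[str, List[dict]]:
    """Collect corpus records into per-category sliding windows.

    Each record is assumed to carry a 'category' field and a 'rate' field
    (its data-item collection rate); records below `min_rate` are dropped,
    and each category keeps only the most recent `window_size` records.
    """
    windows = {c: deque(maxlen=window_size) for c in CATEGORIES}
    for record in stream:
        if record.get("rate", 0.0) < min_rate:
            continue                      # below the minimum collection rate
        category = record.get("category")
        if category in windows:
            windows[category].append(record)
    # Sort each window by rate, mirroring the heuristic column-sorting step.
    return {c: sorted(w, key=lambda r: r["rate"], reverse=True)
            for c, w in windows.items()}
```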


2021 ◽  
Author(s):  
A.K. Shah ◽  
et al.

Item S1: Listing and index of geologic maps used in images and statistical analyses with age correlations for different map unit definitions. Item S2: Visual heavy mineral sand and phosphate content for over 1000 auger samples collected during previous mapping efforts. Item S3: Heavy mineral sand weight percent and economic mineral grade and tonnage estimates by Force et al. (1982) with overlays of sample positions on the new data. Item S4: Radiometric eTh, eU, and K draped over lidar (three PDF files).


Author(s):  
Yoshioka Tsuyoshi

Market capitalization is one of the most important indicators for gauging the value of a company. Normally, improving financial data increases market capitalization. However, because financial data comprise numerous items, it is important to identify the high-priority items whose improvement can increase market capitalization. To achieve this, this study developed a method that applies a remodeled customer-satisfaction analysis model to the financial data of the companies that make up the Nikkei 225. A graph was created with the correlation coefficient on the horizontal axis and the deviation value of each financial data item on the vertical axis, and the financial data items plotted in the lower-right corner of the graph were extracted. Using this method, high-priority financial data items for increasing market capitalization can be derived from the numerous financial data items.
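A small Python sketch of the lower-right-corner extraction, assuming a hypothetical data layout (one value array per financial item across the Nikkei 225 companies) and illustrative cutoffs; the deviation value follows the usual mean-50, standard-deviation-10 convention.

```python
import numpy as np


def priority_items(data: dict, market_cap: np.ndarray, company_idx: int,
                   corr_cut: float = 0.5, dev_cut: float = 50.0):
    """Flag financial data items in the lower-right corner of the graph.

    `data` maps each financial data item name to a NumPy array of values
    across the index companies (hypothetical layout); `market_cap` holds
    market capitalizations in the same order.  An item is flagged when its
    correlation with market capitalization is high (right of the plot) while
    the target company's deviation value for it is low (bottom of the plot).
    The cutoffs are illustrative, not taken from the study.
    """
    flagged = []
    for name, values in data.items():
        corr = np.corrcoef(values, market_cap)[0, 1]
        # Deviation value: standardized score with mean 50 and std dev 10.
        dev = 50 + 10 * (values[company_idx] - values.mean()) / values.std()
        if corr >= corr_cut and dev <= dev_cut:
            flagged.append((name, corr, dev))
    # Highest-correlation items first: these are the highest-priority items.
    return sorted(flagged, key=lambda t: t[1], reverse=True)
```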


2021 ◽  
Author(s):  
Daphnee Tuzlak ◽  
Joel Pederson ◽  
et al.

Item 1: Surficial map of Alpine Canyon. Item 2: OSL data. Item 3: Bedrock strength.


2021 ◽  
Author(s):  
Srimanchari P ◽  
Anandharaj G

Caching is a well-established technique for improving the efficiency of data access. This paper introduces a Hybrid and Adaptive Caching (HAC) approach for mobile computing environments that caches data items according to their varying sizes and invalidates them based on Time-to-Live (TTL). Mobile nodes access data items through single-hop communication with the base station and ad hoc peer-to-peer communication with neighboring nodes. The proposed work adjusts the caching functionality level based on the size of the data item and stores cached data items in two different storage systems: the cache of each node is separated into a Temporary Buffer (TB) and a Permanent Buffer (PB) to improve data-access efficiency. This approach exploits the fact that smaller data items (e.g., stock quotes) have short TTLs, whereas larger data items (e.g., video) have long TTLs. The proposed work also suggests adaptive cache-replacement and cache-invalidation techniques to resolve issues of bandwidth utilization and data availability. In the replacement technique, cached data items are replaced based on their size and TTL value. A timestamp-based cache-invalidation strategy, in which cached data are validated against the update history of the data items, is also introduced. Because the threshold values strongly affect system performance, they are fine-tuned so that they do not degrade it. The proposed approaches significantly improve query latency and cache hit ratio and efficiently utilize the broadcast bandwidth. Simulation results show that the proposed work outperforms existing caching techniques.
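A compact Python sketch of the TB/PB split and TTL-based invalidation described above; the size threshold, buffer capacities, and eviction rule are illustrative assumptions, not the authors' parameters.

```python
import time


class HybridAdaptiveCache:
    """Size- and TTL-aware two-buffer cache sketch (not the authors' code).

    Data items at or below `size_threshold` go to the Temporary Buffer (TB),
    since small items such as stock quotes tend to have short TTLs; larger
    items such as video go to the Permanent Buffer (PB).
    """

    def __init__(self, size_threshold=1024, tb_capacity=64, pb_capacity=16):
        self.size_threshold = size_threshold
        self.buffers = {"TB": {}, "PB": {}}
        self.capacity = {"TB": tb_capacity, "PB": pb_capacity}

    def put(self, key, value, size, ttl):
        buf = "TB" if size <= self.size_threshold else "PB"
        store = self.buffers[buf]
        if len(store) >= self.capacity[buf]:
            # Replacement: evict the entry closest to expiry, largest first.
            victim = min(store,
                         key=lambda k: (store[k]["expires"], -store[k]["size"]))
            del store[victim]
        store[key] = {"value": value, "size": size,
                      "expires": time.time() + ttl}

    def get(self, key):
        for store in self.buffers.values():
            entry = store.get(key)
            if entry is None:
                continue
            if entry["expires"] < time.time():    # TTL-based invalidation
                del store[key]
                return None
            return entry["value"]
        return None
```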

