Evaluation of Gamut Compression Algorithms with Different Neutral Convergent Points in Different Colour Spaces

Author(s):  
Baiyue Zhao ◽  
Ming Ronnier Luo ◽  
Liuhao Xu
2011 ◽  
Vol 1 (4) ◽  
pp. 1-8
Author(s):  
G. Chenchu Krishnaiah ◽  
T. Jayachandraprasad ◽  
M.N. Giri Prasad

2018 ◽  
Vol 5 (4) ◽  
pp. 34
Author(s):  
R. Pandian ◽  
Kumari S. Lalitha ◽  
Kumar R. Raja ◽  
Ravikumar D. N. S. ◽  
...  

2021 ◽  
Author(s):  
Antonios Makris ◽  
Camila Leite da Silva ◽  
Vania Bogorny ◽  
Luis Otavio Alvares ◽  
Jose Antonio Macedo ◽  
...  

Abstract: During the last few years, the volumes of data that make up trajectories have grown to unprecedented quantities. This growth challenges traditional trajectory analysis approaches, and solutions are sought in other domains. In this work, we focus on data compression techniques with the intention of minimizing the size of trajectory data while, at the same time, minimizing the impact on trajectory analysis methods. To this end, we evaluate five lossy compression algorithms: Douglas-Peucker (DP), Time Ratio (TR), Speed Based (SP), Time Ratio Speed Based (TR_SP) and Speed Based Time Ratio (SP_TR). The comparison is performed on four distinct real-world datasets against six different dynamically assigned thresholds. The effectiveness of the compression is evaluated using classification techniques and similarity measures. The results show that there is a trade-off between the compression rate and the achieved quality. There is no “best algorithm” for every case, and choosing the proper compression algorithm is an application-dependent process.


Information ◽  
2021 ◽  
Vol 12 (7) ◽  
pp. 264
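Of the five algorithms the abstract above compares, Douglas-Peucker (DP) is the most widely known: it keeps a trajectory point only if it deviates from the chord between the segment endpoints by more than a tolerance. A minimal sketch of that idea (the function names and the 2-D tuple point representation are illustrative assumptions, not taken from the paper) might look like:

```python
import math

def _perp_dist(pt, start, end):
    # Perpendicular distance from pt to the line through start and end.
    (x, y), (x1, y1), (x2, y2) = pt, start, end
    dx, dy = x2 - x1, y2 - y1
    if dx == 0 and dy == 0:
        return math.hypot(x - x1, y - y1)
    return abs(dy * x - dx * y + x2 * y1 - y2 * x1) / math.hypot(dx, dy)

def douglas_peucker(points, epsilon):
    """Lossy polyline simplification: recursively keep the point that
    deviates most from the start-end chord, if it exceeds epsilon."""
    if len(points) < 3:
        return list(points)
    dists = [_perp_dist(p, points[0], points[-1]) for p in points[1:-1]]
    idx = max(range(len(dists)), key=dists.__getitem__) + 1
    if dists[idx - 1] > epsilon:
        left = douglas_peucker(points[: idx + 1], epsilon)
        right = douglas_peucker(points[idx:], epsilon)
        return left[:-1] + right  # drop the duplicated split point
    return [points[0], points[-1]]
```

For example, `douglas_peucker([(0, 0), (1, 0.1), (2, 3), (3, 0.1), (4, 0)], 1.0)` keeps only the endpoints and the sharp peak at `(2, 3)`. Note that DP ignores timestamps; the TR and SP variants evaluated in the paper add time- and speed-based criteria.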
Author(s):  
Jinghan Wang ◽  
Guangyue Li ◽  
Wenzhao Zhang

The powerful performance of deep learning is evident to all. As research has deepened, neural networks have become more complex and do not generalize easily to resource-constrained devices. The emergence of a series of model compression algorithms makes artificial intelligence on the edge possible. Among them, structured model pruning is widely used because of its versatility. Structured pruning prunes the neural network itself, discarding relatively unimportant structures to compress the model’s size. However, previous pruning work suffers from problems such as network evaluation errors, empirically determined pruning rates, and low retraining efficiency. We therefore propose an accurate, objective, and efficient pruning algorithm, Combine-Net, which introduces Adaptive BN to eliminate evaluation errors, the Kneedle algorithm to determine the pruning rate objectively, and knowledge distillation to improve retraining efficiency. Results show that, without precision loss, Combine-Net achieves 95% parameter compression and 83% computation compression with VGG16 on CIFAR10, and 71% parameter compression and 41% computation compression with ResNet50 on CIFAR100. Experiments on different datasets and models demonstrate that Combine-Net can efficiently compress a neural network’s parameters and computation.
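The Kneedle step mentioned above picks an operating point at the “knee” of a curve, such as quality retained versus pruning rate. A much-simplified sketch of that idea (the function name and sample values are hypothetical; the full Kneedle algorithm adds smoothing and sensitivity thresholds) could be:

```python
def find_knee(xs, ys):
    """Simplified Kneedle-style knee detection for a concave, increasing
    curve: normalize both axes to [0, 1] and return the x where the
    curve rises farthest above the diagonal y = x."""
    x0, x1 = min(xs), max(xs)
    y0, y1 = min(ys), max(ys)
    xn = [(x - x0) / (x1 - x0) for x in xs]
    yn = [(y - y0) / (y1 - y0) for y in ys]
    # The knee is where the normalized curve most exceeds the diagonal.
    diffs = [y - x for x, y in zip(xn, yn)]
    return xs[max(range(len(diffs)), key=diffs.__getitem__)]
```

On a curve that saturates early, e.g. `find_knee([0, 1, 2, 3, 4], [0, 0.8, 0.9, 0.95, 1.0])`, the knee lands at `x = 1`, where further increases yield diminishing returns; in a pruning context that point would serve as the objectively chosen pruning rate.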

