Several pretrained convolutional neural network (CNN) models are available today. Applying these networks to a new classification task requires retraining them on new data sets, and large networks cannot be deployed on small embedded devices. The authors study the use of pretrained models and their customization for accuracy and size on face recognition tasks. The results 1) report the accuracy and size of existing pretrained networks (AlexNet, GoogLeNet, CaffeNet, and SqueezeNet) and 2) demonstrate layer customization that trades model size against accuracy. Among the networks evaluated on different data sets, SqueezeNet achieves the same accuracy (0.99) as the others while being up to 25 times smaller. Two customizations based on layer skipping are then presented; the experiments show an example of customizing SqueezeNet layers that reduces the network size by 7% while preserving accuracy, at the cost of slower convergence. All experiments were conducted with Caffe 0.15.14.
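To illustrate the retraining step described above, the sketch below fine-tunes only a new classifier head on top of a frozen "pretrained" feature extractor. This is a minimal, framework-free sketch of the general transfer-learning idea, not the authors' Caffe pipeline: the backbone weights, the synthetic data set, and all names are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for a frozen pretrained backbone (hypothetical weights, for
# illustration only): in transfer learning the earlier layers are kept
# fixed and only the final classification layer is retrained.
W_backbone = rng.normal(size=(32, 16))

def extract_features(x):
    """Frozen 'pretrained' feature extractor: fixed projection + ReLU."""
    return np.maximum(x @ W_backbone, 0.0)

# Synthetic stand-in for the new two-class data set.
n = 200
x = rng.normal(size=(n, 32))
true_w = rng.normal(size=16)
y = (extract_features(x) @ true_w > 0).astype(float)

# Retrain only the new classifier head (logistic regression trained by
# gradient descent) on top of the frozen, standardized features.
feats = extract_features(x)
feats = (feats - feats.mean(axis=0)) / (feats.std(axis=0) + 1e-8)
w, b, lr = np.zeros(16), 0.0, 0.5
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-(feats @ w + b)))   # sigmoid predictions
    w -= lr * feats.T @ (p - y) / n              # cross-entropy gradient on w
    b -= lr * np.mean(p - y)                     # cross-entropy gradient on b

accuracy = np.mean(((feats @ w + b) > 0) == (y > 0.5))
print(f"head-only retraining accuracy: {accuracy:.2f}")
```

In a real Caffe workflow this corresponds to loading pretrained weights, setting the learning rate of the frozen layers to zero, and training only the replaced final layer on the new data.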