SIMPLEX: An Activation Function with Improved Loss Function Results in Validation
An activation function is a mathematical function used to squash values in artificial neural networks; its domain and range are two of the most important features for judging its potency. Overfitting of a neural network is an issue that has gained considerable importance. It occurs when a network learns complex relationships during the training phase that fail to appear during the testing phase; such relationships are not genuine, but are merely a consequence of sampling noise that arises during training and is absent during testing. This creates a significant gap in accuracy which, if minimized, could improve the overall performance of an ANN (Artificial Neural Network). The activation function proposed in this work is called SIMPLEX. Over a set of experiments on the MNIST dataset, selected as the experimental problem, it was observed to exhibit the least overfitting among the analyzed activation functions.