Simulation studies, in general, rely heavily on the internal state variables of the system or entity being studied. In a simulation study of Spiking Neural Networks (SNNs), the major internal system variables are the membrane potentials of the neurons and their respective synaptic inputs, which must be updated at sub-millisecond resolution. It is worth noting that this requires thousands of updates per neuron to simulate one second of activity, which makes a highly scalable model imperative for deriving inferences from the simulation. Conventionally, high-performance CPUs with a high degree of multi-threading were leveraged to conduct such simulations. With advances in hardware, the available degree of parallelism has also increased; GPUs in particular have opened a multitude of avenues for performing SNN simulations at scale. In our previous works [1, 2, 3], we demonstrated how GPUs can be leveraged to achieve scalability and performance through a hybrid CPU-GPU approach, which improved performance compared to multi-threading on high-performance CPUs. In this work, we focus on tuning key simulation hyperparameters, namely delay insensitivity, time step grouping, and active synapse grouping, to achieve greater simulation speed for scalable spiking neural networks.
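To make the update cost concrete, the following is a minimal sketch of the per-neuron workload described above, assuming a leaky integrate-and-fire (LIF) neuron with a 0.1 ms time step; the neuron model, parameter values, and input drive here are illustrative assumptions, not the model used in this work.

```python
# Minimal sketch: one second of activity at sub-millisecond
# resolution requires thousands of membrane-potential updates
# per neuron. All parameter values below are hypothetical.

DT = 0.1e-3            # time step: 0.1 ms (sub-millisecond resolution)
T = 1.0                # simulate one second of activity
STEPS = round(T / DT)  # 10,000 updates per neuron per simulated second

TAU = 20e-3            # membrane time constant in s (assumed)
V_REST = -65.0         # resting potential in mV (assumed)
V_THRESH = -50.0       # spike threshold in mV (assumed)
V_RESET = -70.0        # reset potential in mV (assumed)

def simulate_neuron(i_syn, v0=V_REST):
    """Integrate one LIF neuron for one second; return its spike count.

    i_syn is a constant effective synaptic drive in mV; in a real
    simulator this would itself be updated every time step.
    """
    v = v0
    spikes = 0
    for _ in range(STEPS):
        # Forward-Euler update of the membrane potential: decay
        # toward rest plus the synaptic drive.
        v += (DT / TAU) * (V_REST - v + i_syn)
        if v >= V_THRESH:
            spikes += 1
            v = V_RESET
    return spikes
```

Even this scalar loop performs 10,000 state updates for a single neuron-second; a network of millions of neurons multiplies that cost accordingly, which is what motivates GPU parallelism.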