An important part of training simple neural networks, for students and researchers alike, is choosing a suitable activation function. The choice of activation function serves several important purposes, including but not limited to: introducing the nonlinearity that lets networks learn complex patterns, reducing the risk of dead neurons, and mitigating the vanishing gradient problem. Currently, students are typically taught to choose activation functions for their neural networks by guess-and-check. Although literature and heuristics exist for choosing activation functions for specific tasks, they do not cover every setting. The purpose of this presentation is to report an algorithm that helps choose the best activation function by replacing manual checking with an automated search, improving time efficiency.
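The abstract does not specify the algorithm's details, so the following is only a minimal sketch of the general idea it describes: automate the guess-and-check loop by training a small network once per candidate activation and keeping the one with the lowest final loss. All names here (`train_xor`, `select_activation`, `CANDIDATES`), the toy XOR task, and the network size are illustrative assumptions, not taken from the source.

```python
import numpy as np

def train_xor(act, act_grad, epochs=2000, lr=0.5, seed=0):
    """Train a 2-4-1 network on XOR with the given hidden activation;
    return the final mean squared error. (Illustrative setup only.)"""
    rng = np.random.default_rng(seed)
    X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
    y = np.array([[0], [1], [1], [0]], dtype=float)
    W1 = rng.normal(0, 1, (2, 4)); b1 = np.zeros(4)
    W2 = rng.normal(0, 1, (4, 1)); b2 = np.zeros(1)
    for _ in range(epochs):
        # forward pass: candidate activation in the hidden layer,
        # sigmoid at the output since targets are 0/1
        z1 = X @ W1 + b1
        h = act(z1)
        out = 1.0 / (1.0 + np.exp(-(h @ W2 + b2)))
        err = out - y
        # backward pass (plain gradient descent on MSE)
        d2 = err * out * (1.0 - out)
        dW2 = h.T @ d2; db2 = d2.sum(0)
        d1 = (d2 @ W2.T) * act_grad(z1)
        dW1 = X.T @ d1; db1 = d1.sum(0)
        W2 -= lr * dW2; b2 -= lr * db2
        W1 -= lr * dW1; b1 -= lr * db1
    return float(np.mean((out - y) ** 2))

def _sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Candidate activations paired with their derivatives.
CANDIDATES = {
    "relu": (lambda z: np.maximum(0.0, z),
             lambda z: (z > 0).astype(float)),
    "tanh": (np.tanh,
             lambda z: 1.0 - np.tanh(z) ** 2),
    "sigmoid": (_sigmoid,
                lambda z: _sigmoid(z) * (1.0 - _sigmoid(z))),
}

def select_activation(candidates):
    """Automated guess-and-check: train once per candidate,
    return the name with the lowest final loss plus all losses."""
    losses = {name: train_xor(f, g) for name, (f, g) in candidates.items()}
    best = min(losses, key=losses.get)
    return best, losses
```

A single call such as `select_activation(CANDIDATES)` then replaces the manual trial-and-error the abstract describes; a practical version would add cross-validation and multiple random seeds so the choice is not an artifact of one initialization.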