We present a Boolean-algebra-based algorithm for extracting if-then classification rules from supervised feedforward neural networks, addressing the black-box nature of their decision process. The algorithm, called BAB-G (Boolean Algebra Based for General inputs), builds on the idea of discretizing continuous hidden-neuron activation values and can be applied to three-layer feedforward neural networks with discrete, continuous, or mixed inputs. The antecedents of the extracted if-then rules are slanting hyperplanes; for n distinct classes, the algorithm finds n-1 distinct hyperplanes. During rule extraction, redundant hidden neurons can be removed without affecting the functionality of the network. By representing each activation interval as a single bit, the training of a neural network can be interpreted as a computation over dynamic bits. Empirical results on data sets from the UCI Machine Learning Repository compare our rule extraction algorithm with the C5.0 decision tree algorithm. On these data sets, statistical hypothesis tests show that the extracted rules achieve the same classification accuracy as the neural networks; moreover, the rules outperform C5.0 decision trees in both comprehensibility and accuracy.
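To make the discretization idea concrete, the sketch below shows one plausible way to map continuous hidden-neuron activations into intervals and encode each interval as a single bit. The interval boundaries, function name, and one-hot encoding scheme here are illustrative assumptions, not details taken from the BAB-G algorithm itself.

```python
import numpy as np

def discretize_activations(activations, boundaries):
    """Hypothetical sketch: assign each continuous activation to the
    interval that contains it, then one-hot encode the interval index
    so that each interval is represented by a single bit."""
    # Index of the interval each activation falls into
    idx = np.searchsorted(boundaries, activations)
    n_intervals = len(boundaries) + 1
    bits = np.zeros((len(activations), n_intervals), dtype=int)
    bits[np.arange(len(activations)), idx] = 1  # one bit per interval
    return bits

# Example: three hidden activations in [0, 1], split into thirds.
acts = np.array([0.05, 0.5, 0.95])
bits = discretize_activations(acts, boundaries=np.array([1/3, 2/3]))
```

Under this toy encoding, each row of `bits` has exactly one bit set, so Boolean operations over rows stand in for reasoning over activation intervals.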