We propose a novel batch gradient descent algorithm for parameterized quantum circuits that significantly reduces the time complexity of training quantum neural networks with respect to batch size. The batch data, encoded in a quantum random access memory (qRAM) structure, is mapped onto a single circuit that estimates the average loss over the batch. Because the number of circuits required decreases, quantum amplitude estimation can be applied over a wider range, yielding a quadratic speedup in batch size.
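A minimal sketch of the claimed scaling, assuming the cost per gradient step is counted in circuit evaluations and writing $B$ for the batch size (notation introduced here for illustration, not fixed by the abstract):
\[
\underbrace{O(B)}_{\text{per-sample loss evaluation}} \;\longrightarrow\; \underbrace{O\!\left(\sqrt{B}\right)}_{\text{single qRAM-loaded circuit + amplitude estimation}}
\]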