The expansion of machine learning applications across different domains has given rise to a growing interest in tapping into the vast reserves of data generated by edge devices. To preserve data privacy, federated learning was developed as a collaborative, decentralized, privacy-preserving technology that overcomes the challenges of data silos and data sensitivity. This technology faces certain limitations due to the limited network connectivity of mobile devices and the presence of malicious attackers. In addition, data samples across devices are typically not independent and identically distributed (non-IID), which makes it harder to achieve convergence in fewer communication rounds. In this paper, we simulate Byzantine, label-flipping, and noisy-data attacks, in addition to non-IID data. We propose RFCaps (Robust Federated learning against poisoning attacks) to increase safety and accelerate convergence. RFCaps combines prediction-based clustering with a gradient quality evaluation method to exclude attackers from the aggregation phase by applying multiple filters, and to accelerate convergence by using the highest-quality gradients. Compared with the MKRUM, COMED, TMean, and FedAvg algorithms, RFCaps is highly robust in the presence of attackers and achieves up to 80% higher accuracy on both the MNIST and Fashion-MNIST datasets.
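The following is not the RFCaps algorithm itself, which is defined later in the paper; it is only a minimal, self-contained sketch of the general setting the abstract describes: some clients send Byzantine (arbitrary) updates, and the server scores the received gradients and aggregates only the highest-quality ones. The distance-to-median score, the `keep_ratio` parameter, and all function names here are illustrative assumptions, not the paper's filters.

```python
import numpy as np

rng = np.random.default_rng(0)


def byzantine_update(dim, scale=10.0):
    # Byzantine attack: the client sends an arbitrary large random update
    # instead of a gradient computed on its local data.
    return rng.normal(scale=scale, size=dim)


def filter_and_aggregate(updates, keep_ratio=0.6):
    # Illustrative quality filter (assumption, not RFCaps): score each client
    # update by its distance to the coordinate-wise median, keep the closest
    # fraction, and average the survivors (plain FedAvg over kept clients).
    stacked = np.stack(updates)
    median = np.median(stacked, axis=0)
    scores = np.linalg.norm(stacked - median, axis=1)
    keep = np.argsort(scores)[: max(1, int(keep_ratio * len(updates)))]
    return stacked[keep].mean(axis=0), keep


# Toy round: 8 honest clients near the true gradient, 2 Byzantine clients.
dim = 5
true_grad = np.ones(dim)
updates = [true_grad + rng.normal(scale=0.1, size=dim) for _ in range(8)]
updates += [byzantine_update(dim) for _ in range(2)]

aggregated, kept = filter_and_aggregate(updates)
print("kept client indices:", kept)
print("aggregated update:", np.round(aggregated, 3))
```

Running this prints an aggregated update close to the true gradient, because the two outlying Byzantine updates are filtered out before averaging; RFCaps pursues the same goal with its own prediction-based clustering and gradient quality evaluation.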