Cross-silo federated learning (FL) allows organizations to collaboratively train machine learning (ML) models by sending their local gradients to a server for aggregation, without having to disclose their data. This paper proposes SVFL, an efficient protocol for cross-silo FL that supports both secure gradient aggregation and verification of the aggregated result. We evaluate the performance of SVFL and show, through complexity analysis and experimental evaluation, that its computation and communication overheads remain low even on large datasets, with negligible accuracy loss (less than $1\%$). Furthermore, we experimentally compare SVFL with existing FL protocols and show that it achieves significant efficiency improvements in both computation and communication.
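To make the aggregation model concrete, the sketch below illustrates secure gradient aggregation via pairwise additive masking, a standard technique in which clients blind their gradients with masks that cancel in the server's sum. This is a minimal illustration of the general idea only, not SVFL's actual construction; all function names and parameters here are hypothetical.

```python
# Minimal sketch of secure gradient aggregation with pairwise additive
# masking. Illustrative only; NOT the SVFL protocol. All names hypothetical.
import numpy as np

def pairwise_masks(num_clients: int, dim: int, seed: int = 0):
    """Derive cancelling pairwise masks: masks[i][j] == -masks[j][i]."""
    rng = np.random.default_rng(seed)
    masks = [[np.zeros(dim) for _ in range(num_clients)]
             for _ in range(num_clients)]
    for i in range(num_clients):
        for j in range(i + 1, num_clients):
            m = rng.normal(size=dim)
            masks[i][j] = m    # client i adds m
            masks[j][i] = -m   # client j subtracts m, so the pair cancels
    return masks

def mask_gradient(grad, my_id, masks):
    """Each client uploads its gradient plus the sum of its pairwise masks."""
    return grad + sum(masks[my_id][j]
                      for j in range(len(masks)) if j != my_id)

# Usage: the server sees only masked uploads, yet their sum is exact.
dim, n = 4, 3
grads = [np.arange(dim, dtype=float) + i for i in range(n)]  # toy gradients
masks = pairwise_masks(n, dim)
uploads = [mask_gradient(grads[i], i, masks) for i in range(n)]
aggregate = np.sum(uploads, axis=0)  # pairwise masks cancel in the sum
assert np.allclose(aggregate, np.sum(grads, axis=0))
```

Because each mask appears once positively and once negatively across the uploads, the server recovers the exact gradient sum without seeing any individual gradient in the clear; verifying that the server aggregated honestly is the separate problem that a verifiable protocol such as SVFL additionally addresses.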