This paper investigates the occurrence of the double descent phenomenon in federated learning. It derives a closed-form expression for the mean excess risk of the coefficients of a linear regression model, and the theoretical findings are verified through simulations. The results confirm the presence of the double descent effect in federated learning. This effect is especially pronounced in a heterogeneous scenario where the local datasets of collaborating users differ in size and properties. Furthermore, the results unveil unexpected and non-trivial performance behaviors. Interestingly, in particular scenarios federated learning even outperforms centralized learning, achieving a lower expected excess risk. Finally, our analysis reveals that the random user selection typically used in federated learning performs asymptotically worse than excluding only a single carefully chosen user from the learning process.
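To make the double descent effect concrete, the following is a minimal illustrative sketch in a toy centralized linear-regression setting (not the paper's federated model or its closed-form risk expression): the test error of a minimum-norm least-squares fit is plotted against the number of features used, and it peaks near the interpolation threshold, where the number of features equals the number of training samples, before descending again. All parameter values (training-set size, total feature count, noise level) are illustrative choices.

```python
import numpy as np

def double_descent_curve(n_train=20, d=60, n_test=500, n_trials=40, seed=0):
    """Average test MSE of minimum-norm least squares vs. number of features p.

    Toy setup: the true coefficients live on all d features, but the
    learner only sees the first p of them. As p crosses n_train, the
    fit interpolates the training data and the error spikes (double
    descent peak), then decreases again for p > n_train.
    """
    rng = np.random.default_rng(seed)
    ps = list(range(2, d + 1, 2))
    errs = np.zeros(len(ps))
    for _ in range(n_trials):
        w = rng.standard_normal(d) / np.sqrt(d)          # true coefficients
        X_tr = rng.standard_normal((n_train, d))
        X_te = rng.standard_normal((n_test, d))
        y_tr = X_tr @ w + 0.1 * rng.standard_normal(n_train)
        y_te = X_te @ w
        for i, p in enumerate(ps):
            # minimum-norm least-squares fit on the first p features
            w_hat = np.linalg.pinv(X_tr[:, :p]) @ y_tr
            errs[i] += np.mean((X_te[:, :p] @ w_hat - y_te) ** 2)
    return ps, errs / n_trials
```

Averaging over trials smooths the curve; the peak at `p == n_train` comes from the near-singular square design matrix at the interpolation threshold.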