Forecasting resource usage in cloud computing environments is crucial for improving operational efficiency and for resource management strategies such as autoscaling and overcommitment. While machine learning (ML) models, particularly Long Short-Term Memory (LSTM) networks, achieve high predictive accuracy, this paper questions whether such complex models are necessary, based on a simple observation: LSTM predictions often closely resemble shifted versions of the input data. We examine simpler models that exploit the persistence properties of the data and show that they perform comparably to ML approaches. Extensive experiments across multiple cloud datasets confirm that resource usage exhibits strong temporal correlation, suggesting that basic shift-based predictions suffice in many scenarios. We therefore advocate a balanced approach that reserves ML models for cases where simpler solutions fall short, promoting cost-effective and practical resource forecasting strategies. The methodology centers on experimental analysis of real-world cloud datasets, highlighting the practicality of persistent forecasting techniques and their applicability in large-scale data centers. The paper also addresses the scalability and computational efficiency of lightweight forecasting methods compared to complex LSTM networks. By analyzing multiple performance metrics, including mean absolute error (MAE) and average relative delta (ARD), the study underscores the potential of non-ML techniques to reduce forecasting overhead without compromising prediction accuracy.
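To make the shift-based baseline concrete, the following minimal sketch (illustrative only, not an artifact of the paper) applies a one-step persistence forecast to a synthetic usage trace and scores it with MAE and an assumed definition of ARD (absolute error normalized by the observed value, averaged over the trace); the function names and the synthetic data are hypothetical.

```python
import numpy as np

def persistence_forecast(series: np.ndarray, lag: int = 1) -> np.ndarray:
    """Naive shift-based forecast: predict y[t] as the value observed `lag` steps earlier."""
    return series[:-lag]

def mae(actual: np.ndarray, predicted: np.ndarray) -> float:
    """Mean absolute error between observations and forecasts."""
    return float(np.mean(np.abs(actual - predicted)))

def ard(actual: np.ndarray, predicted: np.ndarray, eps: float = 1e-9) -> float:
    """Average relative delta, assumed here to be the absolute error
    divided by the observed value, averaged over all time steps."""
    return float(np.mean(np.abs(actual - predicted) / (np.abs(actual) + eps)))

if __name__ == "__main__":
    # Synthetic, strongly autocorrelated trace standing in for a CPU-usage series.
    rng = np.random.default_rng(0)
    usage = np.cumsum(rng.normal(0.0, 0.5, size=500)) + 50.0

    lag = 1
    preds = persistence_forecast(usage, lag=lag)   # forecast at t is the value at t-1
    actual = usage[lag:]                           # align observations with forecasts

    print(f"MAE: {mae(actual, preds):.4f}")
    print(f"ARD: {ard(actual, preds):.4f}")
```

Because the forecast is a pure array shift, its cost is negligible compared to training and serving an LSTM, which is the trade-off the abstract highlights.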