In recent years, advances in Artificial Intelligence (AI) have accelerated, edging us closer to achieving Artificial General Intelligence (AGI). Alongside these developments, however, there has been a surge in public concern regarding the potential unpredictability and adverse effects of AGI. We hypothesize that the laws of physics, particularly those of Nonequilibrium Thermodynamics, may impose unexplored constraints on AI. These constraints could be constructively leveraged to predict, monitor, and limit potential future paths in the development of AGI. The proposed physics-based approach to understanding AGI includes viewing AGI as part of a larger system (the real world), emphasizing the significance of computing costs, establishing links to biological phenomena, and applying mathematical tools for rigorous analysis. We hope that this framework will promote an innovative direction of research, adding another powerful method to the toolkit for understanding possible trajectories of AI, alleviating public apprehension, and shaping the future discourse of AI research.