Decision-Making is a fundamental problem in Autonomous Driving, where significant challenges arise from the variable behaviors of surrounding agents and the wide array of encountered scenarios. The primary aim of this work is to develop a hybrid Decision-Making architecture, validated on a real vehicle, that combines the reliability of classical techniques with the adaptability of Deep Reinforcement Learning approaches. To address the crucial transition from simulated training environments to real-world deployment, this research employs a Curriculum Learning approach, supported by Digital Twin and Parallel Intelligence technologies, which significantly narrows the Reality Gap and improves the transferability of learned behaviors. The viability of this approach is demonstrated through Parallel Execution, wherein simulated and real-world tests are conducted simultaneously. Specifically, our approach consistently exceeds the performance benchmarks established by existing Reinforcement Learning frameworks in SMARTS, achieving success rates greater than 91%. Furthermore, it completes various scenarios in CARLA up to 50% faster than the built-in Autopilot, while demonstrating improved comfort and safety.