Deep learning has revolutionized computer vision, yet its application to specialized computational imaging tasks, such as depth estimation and geometric surface analysis, remains constrained by the scarcity of high-quality annotated real-world data. This paper presents SynDepth-Net, a neural architecture designed to extract high-fidelity geometric features from limited consumer-grade imaging data. By training on a photorealistic synthetic dataset and applying a task-driven domain adaptation module, our method improves depth precision and feature detection across diverse lighting conditions. We demonstrate superior performance in recovering intrinsic surface detail, bridging the gap between synthetic training environments and real-world deployment. The framework integrates a multi-scale attention mechanism and an adaptive gain control module, enabling robust feature extraction under significant domain shift. A detailed mathematical formulation, together with experimental validation on both synthetic benchmarks and real-world captures, confirms the efficacy of our approach, which achieves state-of-the-art depth accuracy and feature localization.
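The abstract names a multi-scale attention mechanism but does not specify its form here. The following is a minimal, illustrative NumPy sketch of one common realization, pooling a feature map at several scales and fusing the scales with softmax attention weights. All function names, the choice of scales, and the softmax-over-scale-energy weighting are our assumptions for illustration, not the paper's actual design.

```python
import numpy as np

def avg_pool(x, k):
    """Average-pool a 2D feature map by an integer factor k (block mean)."""
    h2, w2 = x.shape[0] // k, x.shape[1] // k
    return x[:h2 * k, :w2 * k].reshape(h2, k, w2, k).mean(axis=(1, 3))

def upsample(x, shape):
    """Nearest-neighbour upsample x back to the target (H, W) shape."""
    h, w = shape
    ri = np.arange(h) * x.shape[0] // h
    ci = np.arange(w) * x.shape[1] // w
    return x[np.ix_(ri, ci)]

def multiscale_attention(feat, scales=(1, 2, 4)):
    """Fuse per-scale responses with softmax attention over scale energies.

    This is a hypothetical sketch: each scale's 'energy' is its mean
    activation, and a softmax over these energies weights the fusion.
    """
    maps = []
    for s in scales:
        m = feat if s == 1 else upsample(avg_pool(feat, s), feat.shape)
        maps.append(m)
    stack = np.stack(maps)                   # (num_scales, H, W)
    energy = stack.mean(axis=(1, 2))         # one scalar score per scale
    w = np.exp(energy - energy.max())
    w /= w.sum()                             # softmax over scales
    return np.tensordot(w, stack, axes=1)    # attention-weighted fusion

# Demo on a random 8x8 feature map.
feat = np.random.default_rng(0).random((8, 8))
fused = multiscale_attention(feat)
```

A learned variant would replace the fixed mean-activation energies with trained projections, but the fusion structure (per-scale maps, normalized weights, weighted sum) is the same.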