This paper introduces LuminaDepth, a unified deep learning framework for the joint recovery of dense three-dimensional geometry and intrinsic material properties from sparse, heterogeneous, and low-resolution sensor data. Conventional computer vision pipelines, which rely heavily on high-quality RGB or specialized depth sensors (e.g., LiDAR), fail in the adverse conditions typical of low-power consumer devices, thermal imagers, and medical sensors. LuminaDepth addresses this by formulating the coupled problem of signal enhancement and geometry estimation as an intrinsic decomposition task within a neural field representation. The core innovation is an attention-guided mechanism that modulates multi-scale features to preserve the high-frequency details critical for depth disambiguation, alongside a novel neural gain control layer for sensor-agnostic input normalization. We pre-train the model on a large-scale synthetic dataset to learn robust priors for shape and reflectance, enabling effective transfer to real-world sparse data without requiring ground-truth 3D scans. Extensive mathematical formulation and experimental validation across two distinct domains, medical surface topology reconstruction from clinical photos and object detection enhancement in autonomous thermal imaging, demonstrate state-of-the-art performance. LuminaDepth significantly outperforms monocular depth estimators and traditional image enhancement operators, offering a versatile solution for resource-constrained and non-standard imaging applications.
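The abstract does not specify how the neural gain control layer is realized. As a minimal sketch of one plausible interpretation, assuming the layer standardizes each channel by its own statistics and then applies a learned affine gain (so inputs from sensors with very different dynamic ranges, e.g. dim clinical photos versus high-intensity thermal frames, land in a comparable range), the idea could look like the following. The function name, parameter shapes, and statistics used are illustrative assumptions, not the paper's actual formulation.

```python
import numpy as np

def neural_gain_control(x, gamma, beta, eps=1e-5):
    """Hypothetical sensor-agnostic normalization (illustrative only):
    standardize each channel by its own mean/std, then apply a learned
    per-channel gain (gamma) and bias (beta).

    x: (C, H, W) feature map; gamma, beta: (C,) learned parameters.
    """
    mu = x.mean(axis=(1, 2), keepdims=True)       # per-channel mean
    sigma = x.std(axis=(1, 2), keepdims=True)     # per-channel std
    x_norm = (x - mu) / (sigma + eps)             # standardized features
    return gamma[:, None, None] * x_norm + beta[:, None, None]

# Two mock "sensors" with very different dynamic ranges map to
# similar statistics after gain control.
rng = np.random.default_rng(0)
low_contrast = rng.normal(0.0, 0.01, size=(3, 8, 8))   # dim, low-contrast input
thermal = rng.normal(100.0, 25.0, size=(3, 8, 8))      # high-intensity input
gamma, beta = np.ones(3), np.zeros(3)
a = neural_gain_control(low_contrast, gamma, beta)
b = neural_gain_control(thermal, gamma, beta)
```

With identity gains (gamma=1, beta=0) this reduces to per-channel instance normalization; in the learned setting the gains would presumably be conditioned on the sensor or input statistics, which is what would make the normalization sensor-agnostic.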