Computer vision systems deployed on consumer hardware often suffer significant performance degradation under non-ideal environmental conditions such as extreme lighting, low-resolution sensors, or high-dynamic-range (HDR) thermal inputs. Conventional pipelines treat image signal processing (ISP) and downstream feature extraction as disjoint stages, leading to suboptimal information retention during bit-depth reduction (e.g., 16-bit to 8-bit conversion). This paper proposes the Adaptive Signal-to-Feature Network (AS2F-Net), a unified, end-to-end hybrid neural architecture that jointly learns optimal signal tone-mapping and semantic feature extraction. By introducing a differentiable Neural ISP Block, the system adapts raw sensor inputs to maximize the accuracy of downstream tasks, specifically biometric authentication and hazard detection, on resource-constrained edge devices. We validate our approach through extensive experimentation, demonstrating that AS2F-Net outperforms static ISP baselines by substantial margins in low-light and high-contrast scenarios while reducing inference latency by approximately 15% relative to standard ResNet backbones. Furthermore, we provide a rigorous mathematical analysis of quantization noise suppression and establish a theoretical bound on gradient variance during mixed-precision training.
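The core idea of making the tone-mapping stage differentiable so it can be trained against a downstream task loss can be illustrated with a minimal sketch. This is not the paper's implementation: the single-parameter gamma curve, the proxy loss, and all variable names here are illustrative assumptions, standing in for the learnable Neural ISP Block and the task head.

```python
import numpy as np

# Hypothetical sketch (not AS2F-Net itself): a differentiable tone-mapping
# stage with a single learnable parameter (gamma), trained by gradient
# descent against a downstream proxy loss. This illustrates the joint
# signal-to-feature optimization idea: the ISP stage is tuned by the
# task objective rather than fixed a priori.

def tone_map(x16, gamma):
    """Map 16-bit sensor intensities to [0, 1] with a learnable gamma curve."""
    return (x16 / 65535.0) ** gamma

def dloss_dgamma(x16, gamma, target):
    """Analytic gradient of a mean-squared proxy loss w.r.t. gamma."""
    x = x16 / 65535.0
    y = x ** gamma
    # d/dgamma of x**gamma is x**gamma * ln(x); clip guards ln(0).
    dy = y * np.log(np.clip(x, 1e-8, None))
    return np.mean(2.0 * (y - target) * dy)

rng = np.random.default_rng(0)
x16 = rng.integers(1, 65536, size=256).astype(np.float64)

true_gamma = 0.45                    # curve the optimization should recover
target = tone_map(x16, true_gamma)   # stands in for task-optimal features

gamma, lr = 1.0, 0.5
for _ in range(200):
    gamma -= lr * dloss_dgamma(x16, gamma, target)

print(gamma)  # converges toward true_gamma (0.45)
```

In the full architecture this scalar parameter would be replaced by a learned network, and the proxy loss by the authentication or hazard-detection objective, but the mechanism of backpropagating task gradients through the signal-processing stage is the same.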