We propose a hydrology-specific Explainable AI (XAI) framework to extract the nonlinear and nonstationary Impulse Response Functions (IRFs) used by Long Short-Term Memory (LSTM) models to simulate streamflow. IRFs reveal how LSTMs emulate streamflow generation and celerity propagation processes in response to impulses of precipitation (P), temperature (T), and potential evapotranspiration (PET), enabling the evaluation of LSTMs' functional realism under short-term and long-term varying climate/weather conditions. Applying this framework to 672 catchments in the USA and Canada, we found that while LSTMs achieve exceptionally high predictive accuracy, their extracted functionality often contradicts established hydrologic principles. Unexpectedly, the isolated effects of short-term variations in T or PET on LSTM-simulated streamflow and celerity rates are often positive in direction. For example, in over 70% of rain-dominated catchments, particularly along the Pacific Coast, increased T within 1-14 days prior to streamflow events is associated with higher streamflow and celerity rates. Similarly, in the southeastern USA and California, LSTMs predict increased streamflow solely in response to rises in PET. In snow-dominated catchments, particularly in the Rockies, LSTMs misinterpret PET (representing the evaporation function) as a proxy for temperature, the primary driver of snowmelt. These behaviors, likely influenced by seasonality in dynamic inputs, data non-homogeneity, simplicity bias, and missing causal factors, undermine LSTMs' scientific reliability for streamflow forecasting and climate impact assessments. Our XAI framework integrates deep learning with hydrologic context through differentiable modeling, providing a screening tool for evaluating hydrologic models' functional realism in short- and long-term forecasting, climate change studies, and generalization to ungauged basins.