This poster was presented at the AGU Fall Meeting 2025.
Abstract ID: 1918944
Session: GP33B: Frontiers in Electromagnetic (EM) Geophysics II
Poster Date & Time: Wednesday, 17 December 2025, 14:15–17:45 CST
In-person Location: New Orleans Convention Center, Halls E-G (Poster Hall)

Deep learning methods show promise in geophysical inversion. We investigate the magnetotelluric (MT) inverse problem using a deep, unsupervised, physics-guided inversion algorithm, beginning with 1D MT inversion. A physics-guided neural network (PGNN) incorporates physical information, e.g., partial differential equations, into its loss function, thereby reducing data dependence and enforcing physical consistency. Goyes-Peñafiel (2025) introduced a multilayer perceptron (MLP) model with an auto-differentiable forward solver, which they applied to predict media resistivities ranging from 1 to 10³ Ωm. However, real-world geology (e.g., granite bedrock containing highly conductive thin graphite layers) exhibits a far wider range of resistivity, from milliohm-meters (e.g., 10⁻³ Ωm for graphite) to megaohm-meters (e.g., 10⁶ Ωm for dry granite). To handle strong resistivity contrasts in field data, we first explore various activation functions and loss functions; we then propose a multi-head architecture to predict the conductance, thickness, and depth of the thin conductive layer, given a known bedrock resistivity; next, we investigate the performance of various regularization methods (smoothness, total variation, minimum support, minimum gradient support). We analyze Monte Carlo dropout and deep ensembles for uncertainty quantification. We then apply the algorithm to two field datasets where highly conductive graphite layers are present. We also demonstrate how to use a non-differentiable forward solver with externally computed gradients, allowing us to leverage legacy modeling codes without extensive modification.
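To make the PGNN idea concrete, here is a minimal sketch of a physics-guided 1D MT inversion loop. It is not the authors' code: it assumes PyTorch, uses Wait's impedance recursion for a layered half-space as the differentiable forward operator, and the network shape, frequencies, and model (two layers over a half-space) are illustrative.

```python
# Sketch of a physics-guided 1D MT inversion (assumption: PyTorch; the
# architecture and hyperparameters are illustrative, not the poster's setup).
import torch

MU0 = 4e-7 * torch.pi  # vacuum permeability, H/m

def mt1d_forward(log10_rho, thickness, freqs):
    """Apparent resistivity via Wait's impedance recursion for a layered earth."""
    rho = 10.0 ** log10_rho                      # layer resistivities, Ohm*m
    omega = 2 * torch.pi * freqs
    iwm = 1j * omega * MU0                       # i*omega*mu0, one per frequency
    Z = torch.sqrt(iwm * rho[-1])                # basal half-space impedance
    for j in range(len(rho) - 2, -1, -1):
        k = torch.sqrt(iwm / rho[j])             # layer wavenumber
        Z0 = torch.sqrt(iwm * rho[j])            # intrinsic impedance of layer j
        t = torch.tanh(k * thickness[j])
        Z = Z0 * (Z + Z0 * t) / (Z0 + Z * t)     # recurse toward the surface
    return torch.abs(Z) ** 2 / (omega * MU0)     # apparent resistivity

freqs = torch.logspace(-2, 3, 30)
thickness = torch.tensor([500.0, 200.0])         # two layers over a half-space, m
rho_true = torch.tensor([2.0, -1.0, 3.0])        # log10 Ohm*m, strong contrast
d_obs = mt1d_forward(rho_true, thickness, freqs)

# MLP mapping a fixed latent code to layer log-resistivities; the physics
# (forward solver) sits inside the loss, so no labeled training data is needed.
net = torch.nn.Sequential(torch.nn.Linear(8, 32), torch.nn.Tanh(),
                          torch.nn.Linear(32, 3))
z = torch.randn(8)
opt = torch.optim.Adam(net.parameters(), lr=1e-2)
for _ in range(2000):
    opt.zero_grad()
    d_pred = mt1d_forward(net(z), thickness, freqs)
    loss = torch.mean((torch.log(d_pred) - torch.log(d_obs)) ** 2)
    loss.backward()                              # gradients flow through the physics
    opt.step()
```

The data misfit is taken in log apparent resistivity, a common choice when resistivities span many decades; the regularization terms listed in the abstract (total variation, minimum support, etc.) would be added to `loss` here.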
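The non-differentiable-solver workflow can be sketched by wrapping a black-box forward code in a custom autograd node that injects externally computed sensitivities. This is a hypothetical stand-in, not the poster's implementation: the "legacy" solver below is a toy NumPy function and its Jacobian comes from finite differences, standing in for an adjoint or Fortran sensitivity routine.

```python
# Sketch of wrapping a non-differentiable forward solver with externally
# computed gradients via torch.autograd.Function (names are illustrative).
import numpy as np
import torch

def legacy_forward(m: np.ndarray) -> np.ndarray:
    """Placeholder black-box modeling code (NumPy only, no autograd)."""
    return np.array([np.sum(m ** 2), np.sum(np.exp(-m))])

def legacy_jacobian(m: np.ndarray, eps: float = 1e-6) -> np.ndarray:
    """Externally computed sensitivities, e.g., adjoint or finite differences."""
    base = legacy_forward(m)
    J = np.empty((base.size, m.size))
    for i in range(m.size):
        mp = m.copy()
        mp[i] += eps
        J[:, i] = (legacy_forward(mp) - base) / eps
    return J

class LegacySolver(torch.autograd.Function):
    @staticmethod
    def forward(ctx, m):
        ctx.save_for_backward(m)
        return torch.from_numpy(legacy_forward(m.detach().numpy())).to(m.dtype)

    @staticmethod
    def backward(ctx, grad_out):
        (m,) = ctx.saved_tensors
        J = torch.from_numpy(legacy_jacobian(m.detach().numpy())).to(m.dtype)
        return J.T @ grad_out                    # vector-Jacobian product J^T v

m = torch.tensor([0.5, -1.0, 2.0], dtype=torch.float64, requires_grad=True)
loss = LegacySolver.apply(m).sum()
loss.backward()                                  # m.grad now holds J^T * ones
```

Because only forward evaluations and a vector-Jacobian product are required, a legacy modeling code can slot into the training loop without being rewritten for automatic differentiation.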
Additionally, we propose a convolutional neural network (CNN) model and compare its performance against the MLP model. In conclusion, PGNNs enable straightforward inversion of various parameter types when automatic differentiation is employed. However, the effectiveness of neural networks is inherently limited by the fidelity of the underlying physical model represented in the data. Moreover, hyperparameter selection must be tailored to the specific problem, and the quality of uncertainty quantification hinges largely on both the model architecture and the informational content of the input data.
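As an illustration of the Monte Carlo dropout approach to uncertainty quantification mentioned above, the following sketch (PyTorch assumed; the architecture, dropout rate, and sample count are illustrative) keeps dropout active at inference and summarizes the spread of repeated stochastic predictions.

```python
# Sketch of Monte Carlo dropout for uncertainty quantification
# (assumption: PyTorch; network and sample count are illustrative).
import torch

net = torch.nn.Sequential(
    torch.nn.Linear(8, 64), torch.nn.ReLU(),
    torch.nn.Dropout(p=0.2),                     # left active at inference
    torch.nn.Linear(64, 3),                      # e.g., three layer parameters
)

def mc_dropout_predict(model, x, n_samples=100):
    """Predictive mean and std with dropout enabled (model.train())."""
    model.train()                                # keeps dropout layers stochastic
    with torch.no_grad():
        samples = torch.stack([model(x) for _ in range(n_samples)])
    return samples.mean(dim=0), samples.std(dim=0)

x = torch.randn(8)
mean, std = mc_dropout_predict(net, x)
```

A deep ensemble follows the same pattern with independently trained networks in place of dropout samples; in both cases the spread reflects the model architecture and the information content of the data, as noted in the conclusion.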