This paper presents DA-VINCI, a dynamically configurable and precision-scalable activation function core for versatile iterative neuron implementation, addressing the growing demand for diverse activation functions in AI workloads. The core leverages the CORDIC methodology to support runtime reconfigurability among Swish, SoftMax, SELU, GELU, Sigmoid, Tanh, and ReLU, achieving a quality of results (QoR) of 98.5%. FPGA evaluations demonstrate reductions of up to 4.5× in LUT usage, 3.2× in FF usage, 1.3× in critical-path delay, and 5.4× in power consumption compared with state-of-the-art designs. Empirical ASIC analysis further underscores the core's efficiency, with reductions of up to 16.2× in area, 7.8× in delay, and 14.3× in power at 45nm CMOS, and 1.8× in area and 10× in delay at 28nm CMOS. These improvements position DA-VINCI as a fundamental building block for resource-efficient AI accelerators targeting DNNs, RNNs/LSTMs, and Transformers.
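To illustrate the underlying principle, the following is a minimal Python sketch of hyperbolic-mode CORDIC rotation, the iterative shift-and-add scheme the abstract refers to; it is not the paper's RTL or fixed-point design, and the iteration count and float arithmetic are assumptions for clarity. The same iterations yield sinh/cosh, from which Tanh follows as their ratio and Sigmoid via the identity sigmoid(v) = (1 + tanh(v/2))/2; this reuse of one datapath across functions is what makes CORDIC attractive for reconfigurable cores.

```python
import math

def cordic_tanh(z, n_iter=16):
    """tanh(z) via hyperbolic CORDIC in rotation mode.

    Converges for |z| <~ 1.118; larger arguments would need range
    reduction, omitted here for brevity.
    """
    # Build the shift sequence 1, 2, 3, 4, 4, 5, ...; indices 4, 13, 40, ...
    # are repeated, which hyperbolic CORDIC requires for convergence.
    shifts, i, rep = [], 1, 4
    while len(shifts) < n_iter:
        shifts.append(i)
        if i == rep:
            shifts.append(i)          # repeated iteration
            rep = 3 * rep + 1
        i += 1
    x, y = 1.0, 0.0                   # start vector on the x-axis
    for s in shifts[:n_iter]:
        d = 1.0 if z >= 0.0 else -1.0 # rotate to drive the residual angle z to 0
        x, y = x + d * y * 2.0**-s, y + d * x * 2.0**-s
        z -= d * math.atanh(2.0**-s)
    # x ~ K*cosh, y ~ K*sinh; the CORDIC gain K cancels in the ratio.
    return y / x

def cordic_sigmoid(v):
    # sigmoid(v) = (1 + tanh(v/2)) / 2 reuses the same hardware datapath.
    return 0.5 * (1.0 + cordic_tanh(v / 2.0))
```

In hardware, each iteration reduces to shifts, adds, and a small table of atanh constants, which is why a single CORDIC pipeline can serve several activation functions at low area and power.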