Large Language Models (LLMs) demonstrate impressive capabilities across diverse reasoning tasks, yet they lack the consistency and robustness required for complex mathematical reasoning, especially at the undergraduate level. Benchmarks such as UGMathBench reveal significant instability when LLMs face minor variable perturbations in semantically similar problems, resulting in low Effective Accuracy (EAcc) and a substantial Reasoning Gap. To overcome this limitation, we introduce the Robust Collaborative Mathematical Reasoning (RCMR-Math) framework, which integrates multimodal collaborative reasoning, multi-view verification using external tools, and an iterative self-correction mechanism. The framework comprises five interconnected modules designed to enforce consistency across problem variants. Evaluated in a zero-shot setting on the challenging UGMathBench dataset with GPT-4o as its backbone, RCMR-Math significantly outperforms state-of-the-art LLMs, achieving a substantial improvement in EAcc and a marked reduction in the Reasoning Gap. These results demonstrate that RCMR-Math effectively enhances the cross-version consistency and accuracy of LLM-based mathematical reasoning, setting a new benchmark for robust problem-solving in undergraduate mathematics.