Multi-hop question answering in high-stakes professional domains presents significant challenges, as Large Language Models (LLMs) often hallucinate and lack domain-specific knowledge. Existing Retrieval-Augmented Generation (RAG) frameworks ground LLM responses but often struggle with multi-hop reasoning due to static retrieval, inadequate contextual fusion, and the absence of self-correction. To address these limitations, we propose Dynamic Multi-Hop RAG (DMH-RAG), built upon the LLaMA model. DMH-RAG integrates iterative query refinement with dynamic retrieval, a Contextual Belief Graph (CBG) for knowledge structuring, and a belief-driven self-correction mechanism for factual consistency. Experiments on financial QA benchmarks demonstrate that DMH-RAG surpasses state-of-the-art baselines in both retrieval quality and answer generation. Ablation studies and human evaluations further confirm that each component contributes to improved factuality, coherence, and completeness. Despite increased computational overhead, DMH-RAG marks a significant step towards reliable and precise multi-hop QA in sensitive professional applications.