Automating literature review tasks remains challenging because of the breadth of academic disciplines, the multilingual nature of data sources, and wide variation in linguistic complexity. This research introduces a two-stage fine-tuning approach that optimizes Llama's performance for literature review generation: domain-specific instruction tuning followed by training on a multilingual corpus, which improves both accuracy and efficiency. The dual-stage process strengthens the model's ability to balance performance between high-resource and low-resource languages, supporting scalability across diverse linguistic contexts while keeping computational costs low. Extensive experiments demonstrate the model's superior performance in identifying key research elements and maintaining coherence across languages, offering a practical solution for multilingual literature synthesis. These outcomes underscore the importance of expanding access to advanced models, promoting inclusivity and broader use in global academic research environments.
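The staged procedure described above can be sketched generically: a single set of weights is first trained on one data distribution and then continues training on a second. The snippet below is a minimal numpy stand-in for that structure, not the paper's actual Llama training pipeline; the model, data, and stage names are all hypothetical illustrations.

```python
import numpy as np

def finetune_stage(w, X, y, lr=0.01, epochs=500):
    """One fine-tuning stage: continue gradient descent on weights w
    using the stage's data (X, y) with a mean-squared-error loss."""
    for _ in range(epochs):
        grad = 2 * X.T @ (X @ w - y) / len(y)
        w = w - lr * grad
    return w

rng = np.random.default_rng(0)
true_w = np.array([1.5, -2.0])  # hypothetical target parameters

# Stage 1 data: stand-in for domain-specific instruction examples.
X1 = rng.normal(size=(100, 2))
y1 = X1 @ true_w + rng.normal(scale=0.1, size=100)

# Stage 2 data: stand-in for a multilingual corpus with a shifted
# input distribution (nonzero mean), continuing from stage-1 weights.
X2 = rng.normal(loc=0.5, size=(100, 2))
y2 = X2 @ true_w + rng.normal(scale=0.1, size=100)

w = np.zeros(2)
w = finetune_stage(w, X1, y1)  # stage 1: domain instruction tuning analogue
w = finetune_stage(w, X2, y2)  # stage 2: multilingual corpus tuning analogue
```

The key design point the sketch captures is that stage 2 does not restart from scratch: it inherits the stage-1 weights, so capabilities learned on the first distribution are carried into, and refined by, the second.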