The development of a modular software platform using large language models (LLMs) for code generation presents unique opportunities and challenges. This paper reports on the experiences and difficulties encountered while building an AI-driven business intelligence platform with LLM assistance. The project used an iterative approach to generating modular code, aiming to accelerate development and automate routine tasks. In practice, however, challenges emerged: inconsistency in the generated code, hallucinations, lack of long-term memory across sessions, and integration complexity. These limitations necessitated manual intervention for code refinement, debugging, and integration to maintain project-wide consistency. The study discusses strategies to address these issues, including structured prompting, automated testing, and iterative refinement. The findings show that while LLMs significantly reduce development time and enable rapid prototyping, they are not a substitute for human expertise. The paper offers practical insights into optimizing the use of LLMs in software engineering, demonstrating both the potential benefits and the current limitations of AI-assisted code generation in modular software projects.
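The combination of structured prompting, automated testing, and iterative refinement mentioned above can be sketched as a generate-test-refine loop. The sketch below is illustrative only: `call_llm` is a hypothetical stand-in for a real model API, stubbed here so the example is self-contained, and the test cases and prompt template are assumptions, not the paper's actual setup.

```python
def call_llm(prompt: str) -> str:
    # Hypothetical model call, stubbed for illustration: returns a buggy
    # implementation first, then a corrected one once failure feedback
    # appears in the prompt (mimicking a model responding to refinement).
    if "Previous attempt failed" in prompt:
        return "def add(a, b):\n    return a + b\n"
    return "def add(a, b):\n    return a - b\n"  # deliberate bug

def run_tests(source: str) -> list[str]:
    # Automated testing: execute the generated code and collect failures.
    namespace: dict = {}
    exec(source, namespace)
    failures = []
    for a, b, expected in [(1, 2, 3), (0, 5, 5)]:
        if namespace["add"](a, b) != expected:
            failures.append(f"add({a}, {b}) != {expected}")
    return failures

def generate_with_refinement(task: str, max_rounds: int = 3) -> str:
    # Structured prompting: a fixed prompt template for every task.
    # Iterative refinement: feed failing tests back into the next prompt.
    prompt = f"Write a Python function for: {task}"
    for _ in range(max_rounds):
        code = call_llm(prompt)
        failures = run_tests(code)
        if not failures:
            return code
        prompt = f"{prompt}\nPrevious attempt failed: {'; '.join(failures)}"
    raise RuntimeError("no passing candidate within budget")

code = generate_with_refinement("add two integers")
```

Even in this toy form, the loop captures the workflow the paper describes: the automated tests act as the consistency check that would otherwise require manual review, and the failure feedback gives the model the short-term context it lacks between generations.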