Researchers and academics routinely face large volumes of literature and often need precise, targeted summaries tailored to specific informational needs. This work presents a prompt-based approach to automated academic summarization that steers content focus in a controlled manner. Through carefully designed prompts and fine-tuning, the method directs the model's attention toward particular sections of a document, such as the methodology or key findings, yielding more relevant and concise summaries. The fine-tuned model improves content relevance and adapts across diverse academic domains, producing high-quality, domain-specific summaries. Experimental results show that the generated summaries remain coherent, focused, and aligned with user prompts, suggesting the approach can streamline literature reviews and other academic tasks that involve processing extensive textual data. These findings demonstrate the effectiveness of prompt-based content steering and fine-tuning in extending the capabilities of LLMs for automated academic summarization.
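The abstract does not reproduce the paper's actual prompt templates. As a minimal, hypothetical sketch of what prompt-based content steering might look like, the helper below composes an instruction that directs a model to summarize only one section type of a document; the function name, template wording, and focus labels are illustrative assumptions, not the authors' method.

```python
# Hypothetical sketch of prompt-based content steering.
# The template text and focus labels are assumptions for illustration only.

FOCUS_LABELS = {"methodology", "key findings", "limitations", "related work"}

def build_steering_prompt(document: str, focus: str) -> str:
    """Compose a prompt that steers a summary toward one section type.

    `focus` must be one of FOCUS_LABELS; the document text is wrapped
    between delimiters so the instruction and content stay separate.
    """
    if focus not in FOCUS_LABELS:
        raise ValueError(f"unknown focus: {focus!r}")
    instruction = (
        f"Summarize the following paper, focusing only on its {focus}. "
        "Omit background material and details unrelated to that focus."
    )
    return f"{instruction}\n\n---\n{document.strip()}\n---\nFocused summary:"

if __name__ == "__main__":
    paper_text = "We propose a transformer-based summarizer fine-tuned on ..."
    prompt = build_steering_prompt(paper_text, "methodology")
    print(prompt)
```

The resulting string would then be passed to whichever LLM is being fine-tuned or queried; pairing such prompts with section-focused reference summaries is one plausible way to construct the fine-tuning data the abstract alludes to.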