Purpose: This study compares the performance of small, fine-tuned LLMs with that of larger general-purpose models on two radiation oncology clinical tasks: treatment modality selection and ICD-10 code prediction. Methods: We used data from 403 treatment modality selection cases and 377 ICD-10 code prediction cases. We fine-tuned LLaMA 2 7B and Mistral 7B, and compared them with the larger LLaMA 3.1 70B and 405B models. We evaluated the accuracy of all models on both tasks. Results: Both fine-tuned LLaMA 2 7B and Mistral 7B outperformed LLaMA 3.1 70B and 405B on the treatment modality selection task. On ICD-10 code prediction, the fine-tuned LLaMA 2 7B and Mistral 7B models achieved accuracy comparable to that of the larger models. Conclusions: Small models, fine-tuned with radiation oncology data, can achieve performance comparable to, or even surpassing, that of larger models.