Fine-tuning large language models (LLMs) on specialized text corpora has emerged as a crucial step in enhancing their performance on research tasks. This paper investigates fine-tuning strategies for LLMs applied to research text. We explore the impact of factors such as dataset size, model architecture, and optimization techniques.