NAACL 2025 Tutorial:
Adaptation of Large Language Models

Zixuan Ke¹, Yifei Ming¹, Shafiq Joty¹,²

¹Salesforce AI Research   ²Nanyang Technological University

Saturday May 3, 2:00-5:30pm @ Ballroom B, NAACL

About This Tutorial

This tutorial on the adaptation of Large Language Models (LLMs) addresses the growing demand for models that go beyond the static capabilities of generic LLMs by providing an overview of dynamic, domain-specific, and task-adaptive adaptation techniques. While general LLMs demonstrate strong generalization across a variety of tasks, they often struggle in specialized domains such as finance, healthcare, and code generation for underrepresented languages. Their static nature also limits their ability to evolve with a changing world, and their sheer size often makes them impractical and costly to deploy at scale. As a result, LLM adaptation has drawn much attention since the advent of LLMs and is of core importance, both for industry, which needs to serve its targeted users, and for academia, which can greatly benefit from small but powerful LLMs.

To address this gap, this tutorial provides an overview of LLM adaptation techniques. We begin with an introduction to LLM adaptation from two perspectives: the data perspective, widely accepted as one of the most important ingredients in LLM training, and the model perspective, which concerns the training strategies used to adapt LLMs. We then highlight how evaluation metrics and benchmarks for LLM adaptation differ from standard LLM evaluation. After establishing the problem setting in these sections, we explore the adaptation techniques themselves, which we categorize into two main families. The first is parametric knowledge adaptation, which updates the knowledge stored in LLM parameters through methods such as Continual Pre-Training (CPT), Instruction Tuning (IT), Supervised Preference Learning (SPL) via human or model feedback, and Reinforcement Learning (RL). The second is semi-parametric knowledge adaptation, where the goal is to update LLM parameters so the model interacts better with its external environment (shifting from a standalone LLM to an agentic system). As an example, we focus on how LLMs leverage external knowledge through techniques such as retrieval-augmented generation (RAG).
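To make the semi-parametric family concrete, below is a minimal sketch of the basic RAG loop: embed the query, retrieve the most similar passages from an external corpus, and prepend them to the prompt of a frozen LLM. This is an illustrative toy in Python, not the tutorial's implementation; the embed function is a throwaway bag-of-words placeholder rather than a real embedding model, and retrieve, rag_prompt, and the example corpus are hypothetical names introduced here only for illustration.

import numpy as np

def embed(text: str) -> np.ndarray:
    # Placeholder embedding: hashed bag-of-words, for illustration only.
    vec = np.zeros(64)
    for tok in text.lower().split():
        vec[hash(tok) % 64] += 1.0
    norm = np.linalg.norm(vec)
    return vec / norm if norm > 0 else vec

def retrieve(query: str, corpus: list[str], k: int = 2) -> list[str]:
    # Rank corpus passages by cosine similarity to the query embedding.
    q = embed(query)
    scores = [float(q @ embed(doc)) for doc in corpus]
    top = np.argsort(scores)[::-1][:k]
    return [corpus[i] for i in top]

def rag_prompt(query: str, corpus: list[str]) -> str:
    # Prepend retrieved passages so a (frozen) LLM can ground its answer
    # in external knowledge rather than only its parametric memory.
    context = "\n".join(f"- {doc}" for doc in retrieve(query, corpus))
    return f"Context:\n{context}\n\nQuestion: {query}\nAnswer:"

corpus = [
    "The company's Q3 revenue grew 12% year over year.",
    "The new compliance policy takes effect in January.",
    "The cafeteria menu changes every Monday.",
]
print(rag_prompt("How did revenue change in Q3?", corpus))

In a real system, embed would be a trained text encoder and the resulting prompt would be passed to the adapted LLM; the key point is that updating the external corpus updates the knowledge available to the model without retraining its parameters.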

Schedule

Our tutorial will be held on Saturday, May 3, 2:00-5:30pm (all times are US Mountain Time).


[ALL SLIDES]

Time Section Presenter
2:00pm - 2:40pm Section 1: Introduction and Motivation [Slides] Zixuan Ke
2:40pm - 3:00pm Section 2: Evaluation and Benchmark [Slides] Zixuan Ke
3:00pm - 4:30pm Section 3: Parametric Knowledge Adaptation [Slides] Zixuan Ke
4:30pm - 5:00pm Section 4: Semi-Parametric Knowledge Adaptation [Slides] Zixuan Ke
5:00pm - 5:30pm Section 5: Summary, Discussion, QA [Slides] Zixuan Ke

Reading List

Bold papers are the primary references for our tutorial.



Section 1: Introduction


Section 2: Evaluation and Benchmark



Section 3: Parametric Knowledge Adaptation


Section 4: Semi-Parametric Knowledge Adaptation


BibTeX

@misc{ke2025naacl2025tutorialadaptationlarge,
      title={NAACL2025 Tutorial: Adaptation of Large Language Models}, 
      author={Zixuan Ke and Yifei Ming and Shafiq Joty},
      year={2025},
      eprint={2504.03931},
      archivePrefix={arXiv},
      primaryClass={cs.CL},
      url={https://arxiv.org/abs/2504.03931}, 
}