arxiv:2309.09530

Adapting Large Language Models via Reading Comprehension

Published on Sep 18, 2023
Submitted by akhaliq on Sep 19, 2023
#2 Paper of the day

Abstract

We explore how continued pre-training on domain-specific corpora influences large language models, revealing that training on the raw corpora endows the model with domain knowledge, but drastically hurts its prompting ability for question answering. Taking inspiration from human learning via reading comprehension--practice after reading improves the ability to answer questions based on the learned knowledge--we propose a simple method for transforming raw corpora into reading comprehension texts. Each raw text is enriched with a series of tasks related to its content. Our method, highly scalable and applicable to any pre-training corpora, consistently enhances performance across various tasks in three different domains: biomedicine, finance, and law. Notably, our 7B language model achieves competitive performance with domain-specific models of much larger scales, such as BloombergGPT-50B. Furthermore, we demonstrate that domain-specific reading comprehension texts can improve the model's performance even on general benchmarks, showing the potential to develop a general model across even more domains. Our model, code, and data will be available at https://github.com/microsoft/LMOps.
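
The core idea is a data transformation: each raw domain document becomes a reading-comprehension-style training example by appending comprehension tasks derived from its own content. The sketch below is a minimal illustration of that idea, not the paper's actual task-mining pipeline (the released code at the GitHub link above implements the real patterns); the task templates and the helper function name here are hypothetical.

```python
# Minimal illustrative sketch: convert one raw domain document into a
# reading-comprehension-style training example by appending simple tasks
# derived from the text itself. The templates below are hypothetical
# stand-ins for the paper's mined tasks (e.g., summarization, text completion).

def to_reading_comprehension(raw_text: str, title: str) -> str:
    """Return the raw text followed by comprehension tasks about it."""
    first_sentence = raw_text.strip().split(".")[0].strip() + "."
    tasks = [
        # Summarization-style task: recover a title for the passage.
        f"Question: What is a suitable title for the above passage?\nAnswer: {title}",
        # Text-completion-style task: reproduce how the passage begins.
        f"Question: How does the passage begin?\nAnswer: {first_sentence}",
    ]
    return raw_text.strip() + "\n\n" + "\n\n".join(tasks)


if __name__ == "__main__":
    doc = ("Aspirin irreversibly inhibits cyclooxygenase, reducing platelet "
           "aggregation. It is widely used as an antiplatelet agent.")
    print(to_reading_comprehension(doc, title="Aspirin pharmacology"))
```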

Community

@librarian-bot recommend

Paper author

[2024/6/21] 🤗 We release the 2nd version of AdaptLLM at Instruction-Pretrain, effective for both pre-training from scratch and continual pre-training 🤗

**************************** Updates ****************************

  • 2024/6/22: Released the benchmarking code.
  • 2024/6/21: Released the 2nd version of AdaptLLM at Instruction-Pretrain.
  • 2024/4/2: Released the raw data splits (train and test) of all the evaluation datasets.
  • 2024/1/16: 🎉 Our research paper has been accepted by ICLR 2024 🎉
  • 2023/12/19: Released our 13B base models developed from LLaMA-1-13B.
  • 2023/12/8: Released our chat models developed from LLaMA-2-Chat-7B.
  • 2023/9/18: Released our paper, code, data, and base models developed from LLaMA-1-7B.
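
To try one of the released checkpoints, a minimal loading sketch with Hugging Face transformers is shown below; the repository id is an illustrative assumption (see https://github.com/microsoft/LMOps for the exact model names on the Hub).

```python
# Minimal usage sketch, assuming the released AdaptLLM checkpoints are hosted
# on the Hugging Face Hub; the repo id below is an illustrative assumption.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "AdaptLLM/law-LLM"  # hypothetical repo id for a domain-adapted 7B base model
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

prompt = "Question: What does the doctrine of consideration require?\nAnswer:"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```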

Models citing this paper: 83
Datasets citing this paper: 16
Spaces citing this paper: 29
Collections including this paper: 63