
From Base to Conversational: Japanese Instruction Dataset and Tuning Large Language Models

Masahiro SUZUKI, Masanori HIRANO, Hiroki SAKAJI

[Preprint] Sep. 7, 2023


Abstract

Instruction tuning is essential for large language models (LLMs) to become interactive. While many instruction tuning datasets exist in English, such datasets remain scarce in other languages, and their effectiveness has not been well verified for non-English languages. We construct a Japanese instruction dataset by expanding and filtering existing datasets and apply it to a Japanese pre-trained base model. Using our instruction dataset, we performed Low-Rank Adaptation (LoRA) tuning on both existing Japanese and English models and evaluated them from quantitative and qualitative perspectives. The results confirm the effectiveness of Japanese instruction datasets and indicate that, even with relatively small LLMs, performance on downstream tasks can be improved through instruction tuning. Our instruction dataset, tuned models, and implementation are publicly available online.
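
For readers unfamiliar with the approach, the following is a minimal sketch of LoRA instruction tuning using the Hugging Face transformers and peft libraries. It is not the authors' training code: the model identifier, LoRA hyperparameters, and prompt format below are illustrative placeholders, not the configuration used in the paper.

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, TaskType, get_peft_model

# Placeholder model id; the paper tunes existing Japanese and English base models.
base_id = "your-org/japanese-base-llm"

tokenizer = AutoTokenizer.from_pretrained(base_id)
model = AutoModelForCausalLM.from_pretrained(base_id, torch_dtype=torch.float16)

# LoRA freezes the pre-trained weights and injects small trainable
# low-rank matrices; r and lora_alpha here are illustrative values.
config = LoraConfig(
    task_type=TaskType.CAUSAL_LM,
    r=8,
    lora_alpha=16,
    lora_dropout=0.05,
)
model = get_peft_model(model, config)
model.print_trainable_parameters()  # only the adapter weights are trainable

# Each instruction example is rendered into a single prompt string before
# standard causal-LM fine-tuning (this format is a common convention, not
# necessarily the one used in the paper).
example = {"instruction": "日本の首都はどこですか。", "output": "東京です。"}
prompt = (
    f"### Instruction:\n{example['instruction']}\n\n"
    f"### Response:\n{example['output']}"
)

Because only the low-rank adapter matrices are updated, this setup keeps the memory footprint small enough to tune even moderately sized base models on a single GPU.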

Keywords

Large Language Model (LLM); Instruction Dataset; Instruction Tuning; Japanese


Paper

arXiv:2309.03412 (doi.org/10.48550/arXiv.2309.03412), ssrn.com/abstract=4564308 (doi.org/10.2139/ssrn.4564308)

doi

10.48550/arXiv.2309.03412


bibtex

@misc{Suzuki2023-pre-conv,
  title={{From Base to Conversational: Japanese Instruction Dataset and Tuning Large Language Models}},
  author={Masahiro SUZUKI and Masanori HIRANO and Hiroki SAKAJI},
  doi={10.48550/arXiv.2309.03412},
  archivePrefix={arXiv},
  eprint={2309.03412},
  year={2023}
}